CN110018880A - IO thread execution method and device - Google Patents

IO thread execution method and device

Info

Publication number
CN110018880A
CN110018880A (application CN201810018487.4A)
Authority
CN
China
Prior art keywords
thread
task
guardian
loading
delay loading
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810018487.4A
Other languages
Chinese (zh)
Other versions
CN110018880B (en)
Inventor
王晓龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201810018487.4A priority Critical patent/CN110018880B/en
Publication of CN110018880A publication Critical patent/CN110018880A/en
Application granted granted Critical
Publication of CN110018880B publication Critical patent/CN110018880B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 - Arrangements for executing specific programs
    • G06F9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 - Hypervisors; Virtual machine monitors
    • G06F9/45558 - Hypervisor-specific management and integration aspects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 - Task transfer initiation or dispatching
    • G06F9/4843 - Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 - Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 - Arrangements for executing specific programs
    • G06F9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 - Hypervisors; Virtual machine monitors
    • G06F9/45558 - Hypervisor-specific management and integration aspects
    • G06F2009/45579 - I/O management, e.g. providing access to device drivers or storage
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses an IO thread execution method, comprising: receiving an upload task or a delay loading task; selecting, by a consistent hashing algorithm, a thread guardian corresponding to the upload task or delay loading task; adding the upload or delay loading task, through a coroutine mechanism, on the input/output thread maintained by the thread guardian; and executing the upload or delay loading task on the input/output thread. In the IO thread execution method provided by the present application, a thread guardian corresponding to the upload task or delay loading task is selected by a consistent hashing algorithm, the upload or delay loading task is added through a coroutine mechanism to the IO thread maintained by that thread guardian, and the task is then executed on the IO thread. Time-sharing of the IO thread is thereby achieved, so tasks are not starved by waiting indefinitely to be processed and thread resources are not wasted.

Description

IO thread execution method and device
Technical field
This application relates to the field of public cloud mass storage, and in particular to an IO thread execution method. The application also relates to an IO thread execution device.
Background
Cloud mass storage is a virtual disk service, a new concept developed by extending the concept of cloud computing, and an emerging network storage technology. To support virtual disk backup and disaster recovery, the service must implement virtual disk snapshots: snapshot data of a local virtual disk is uploaded to a cloud server (upload), and a disk can be created from snapshot data stored on the cloud server, with the snapshot data downloaded in a delay loading manner (lazyload). For example, Alibaba Cloud's mass storage service OSS (Object Storage Service) is a very-high-performance virtual disk service that implements the virtual disk snapshot function. The OSS underlayer is a low-speed storage system built on SATA disks (Serial Advanced Technology Attachment), so the number of read/write (I/O) operations that can be handled per second is limited by the underlying storage; the upload and delay loading completion times are subject to SLA (Service-Level Agreement) requirements, and delay loading read operations have latency requirements. To meet these requirements, the virtual disk snapshot function uses a stand-alone snapshot server component that uniformly slices snapshot tasks. Therefore, which IO threading model the snapshot server should use to manage the execution of snapshot tasks is an urgent problem to be solved.
The existing IO threading model of the snapshot module works as follows: every IO thread periodically scans a task map to see whether there are tasks to be processed; if so, it runs the task to implement the upload or delay loading function. Meanwhile, a thread is not allowed to process other tasks before the current task has been completed.
This scheme has the following disadvantages. Because a thread is not allowed to process other tasks before the current task completes, other tasks may go unprocessed for a long time and be starved, which wastes thread resources. The global task map needs lock protection, which increases maintenance complexity. Delay loading read operations are completed through a client retry mechanism, which greatly increases read latency.
Summary of the invention
The present invention provides an IO thread execution method to solve the prior-art problems of task starvation and wasted thread resources caused by tasks that cannot be processed for a long time. The present invention also provides an IO thread execution device.
The present application provides an IO thread execution method, comprising:
receiving an upload task or a delay loading task;
selecting, by a consistent hashing algorithm, a thread guardian corresponding to the upload task or delay loading task;
adding the upload or delay loading task, through a coroutine mechanism, on the input/output thread maintained by the thread guardian;
executing the upload or delay loading task on the input/output thread.
Optionally, after the step of selecting, by a consistent hashing algorithm, the thread guardian corresponding to the upload task or delay loading task, the method further includes:
adding, on the input/output thread maintained by the thread guardian, the upload task or delay loading task to a task map corresponding to the thread guardian.
Optionally, a task delete operation is executed on the task map.
Optionally, before the step of selecting, by a consistent hashing algorithm, the thread guardian corresponding to the upload task or delay loading task, the method further includes:
initializing the thread guardian list.
Optionally, the step of initializing the thread guardian list is implemented such that:
each thread guardian corresponds to one input/output thread, and each thread guardian maintains one task map.
Optionally, the method further includes:
obtaining a delay loading read request from the input/output candidate list of a specific delay loading task;
during execution of the specific delay loading task, processing the delay loading read request with priority.
Optionally, before the step of obtaining the delay loading read request from the input/output candidate list of the delay loading task, the method further includes:
receiving the delay loading read request of the specific delay loading task;
selecting, by a consistent hashing algorithm, the thread guardian corresponding to the delay loading read request;
obtaining, through a coroutine mechanism, the corresponding specific delay loading task on the input/output thread maintained by the thread guardian;
adding the delay loading read request to the input/output candidate list of the specific delay loading task.
Optionally, after the step of selecting, by a consistent hashing algorithm, the thread guardian corresponding to the delay loading read request, the method further includes:
judging, from the data recorded in the thread guardian list, whether the content to be read by the delay loading read request is on the cloud server or local, and, if it is on the cloud server, proceeding to the next step of obtaining, through a coroutine mechanism, the corresponding specific delay loading task on the input/output thread maintained by the thread guardian.
Optionally, time-sharing multiplexing of tasks on the same input/output thread is realized through a coroutine mechanism.
In addition, the present application provides an IO thread execution device, comprising:
a task receiving module, configured to receive an upload task or a delay loading task;
a thread guardian selecting module, configured to select, by a consistent hashing algorithm, a thread guardian corresponding to the upload task or delay loading task;
a task adding module, configured to add the upload or delay loading task, through a coroutine mechanism, on the input/output thread maintained by the thread guardian;
a task execution module, configured to execute the upload or delay loading task on the input/output thread.
Optionally, the device further includes:
a task map adding module, configured to add, on the input/output thread maintained by the thread guardian, the upload task or delay loading task to a task map corresponding to the thread guardian.
Optionally, the device further includes:
an initialization module, configured to initialize the thread guardian list.
Optionally, the device further includes:
a delay loading read request obtaining module, configured to obtain a delay loading read request from the input/output candidate list of the delay loading task;
a delay loading read request execution module, configured to process the delay loading read request with priority during execution of the delay loading task.
Optionally, the device further includes:
a delay loading read request receiving module, configured to receive the delay loading read request of a specific delay loading task;
a corresponding thread guardian selecting module, configured to select, by a consistent hashing algorithm, the thread guardian corresponding to the delay loading read request;
a corresponding specific delay loading task obtaining module, configured to obtain, through a coroutine mechanism, the corresponding specific delay loading task on the input/output thread maintained by the thread guardian;
a delay loading read request adding module, configured to add the delay loading read request to the input/output candidate list of the delay loading task.
Optionally, the device further includes:
a judgment module, configured to judge, from the data recorded in the thread guardian list, whether the content to be read by the delay loading read request is on the cloud server or local, and, if it is on the cloud server, to proceed to the next step of obtaining, through a coroutine mechanism, the corresponding specific delay loading task on the input/output thread maintained by the thread guardian.
Compared with the prior art, the present invention has the following advantages:
In the IO thread execution method provided by the present application, a thread guardian corresponding to the upload task or delay loading task is selected by a consistent hashing algorithm, the upload or delay loading task is added through a coroutine mechanism on the input/output thread maintained by that thread guardian, and the upload or delay loading task is then executed on the input/output thread, so that the task is executed on the IO thread. Because the upload or delay loading task is added through the coroutine mechanism to the IO thread maintained by the thread guardian, time-sharing of the IO thread is achieved, and tasks are neither starved by waiting indefinitely to be processed nor are thread resources wasted.
Further, on the basis of the above IO thread execution method, the task map is kept thread-local and add and delete operations are executed within the owning thread, so the task map is handled without locking, which reduces maintenance complexity. In addition, during delay loading, delay loading read operations no longer rely on a client retry mechanism to complete; instead, delay loading read requests are processed with priority, which reduces repeated operations and greatly reduces request-processing latency.
Brief description of the drawings
Fig. 1 is a flowchart of the IO thread execution method provided by the first embodiment of the present application;
Fig. 2 is another flowchart of the IO thread execution method provided by the first embodiment of the present application;
Fig. 3 is a structural block diagram of the IO thread execution device provided by the second embodiment of the present application;
Fig. 4 is another structural block diagram of the IO thread execution device provided by the second embodiment of the present application.
Detailed description of embodiments
In the following description, numerous specific details are set forth to facilitate a full understanding of the present invention. However, the present invention can be implemented in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from the spirit of the present invention; the present invention is therefore not limited to the specific embodiments disclosed below.
The first embodiment of the present application provides an IO thread execution method. Please refer to Fig. 1, which is a flowchart of the first embodiment of the present application. The IO thread execution method provided by the first embodiment is described in detail below with reference to Fig. 1.
As shown in Fig. 1, the IO thread execution method provided by the first embodiment of the present application includes the following steps.
Step S101: receive an upload task or a delay loading task.
This step receives a task of uploading snapshot data of a local virtual disk to a cloud server, or a delay loading task of downloading snapshot data of the cloud server to the local machine.
A virtual disk is a virtual disk instance built on a back-end distributed storage system in the cloud server; it can be read and written just like an ordinary disk.
A virtual disk snapshot is a complete record of the storage content of a virtual disk at a certain moment and is mainly used for backup and data recovery.
The upload task (upload) uploads the snapshot data of the virtual disk to the cloud server, for example to Alibaba Cloud's Object Storage Service (OSS), a massive, secure, low-cost and highly reliable cloud storage service provided by Alibaba Cloud.
The delay loading task (lazyload) creates a local disk based on snapshot data kept in cloud storage; the disk can be used immediately, and the cloud-stored snapshot data is downloaded in a delay loading manner.
Cloud mass storage is a virtual disk service, a new concept developed by extending the concept of cloud computing, and an emerging network storage technology. To support virtual disk backup and disaster recovery, the virtual disk snapshot function must be implemented. A virtual disk snapshot differs from a traditional backup in that the data in a virtual disk snapshot must be downloaded before it can be used. The virtual disk snapshot function mainly uploads the snapshot data of the local virtual disk to the cloud server and creates disks from the snapshot data on the cloud server, with the snapshot data downloaded in a delay loading manner. When a virtual machine starts, the disk data is loaded slowly from a disk snapshot, and the virtual machine can run without waiting for the loading to complete. For example, Alibaba Cloud's mass storage service OSS is a very-high-performance virtual disk service that implements the virtual disk snapshot function. The OSS underlayer is a low-speed storage system built on SATA disks, so the number of read/write (I/O) operations that can be handled per second is limited by the underlying storage; the upload and delay loading completion times are subject to SLA requirements, and delay loading read operations have latency requirements. To meet these requirements, the virtual disk snapshot function uses a stand-alone snapshot server component that uniformly slices snapshot tasks.
The snapshot server receives the task of uploading the snapshot data of the local virtual disk to the cloud server, or the delay loading task of downloading the snapshot data of the cloud server to the local machine, for the subsequent optimization of the IO threading model for snapshot tasks and the execution of the related tasks.
Step S102: select, by a consistent hashing algorithm, the thread guardian corresponding to the upload task or delay loading task.
This step selects, by a consistent hashing algorithm, the thread guardian corresponding to the upload task or delay loading task.
A hash algorithm transforms an input of arbitrary length into an output of fixed length; this output is the hash value. A hash algorithm is not one specific algorithm but the general name of a class of algorithms. In general a hash algorithm satisfies the relationship f(data) = key: data of arbitrary length is input, and after being processed by the hash algorithm a fixed-length key is output. The process is irreversible: the data cannot be derived back from the key.
A consistent hashing algorithm uses a hash algorithm to map a large amount of data evenly onto different storage targets. While guaranteeing lookup accuracy, it also ensures that when one of the storage targets fails, the storage content it was responsible for is load-balanced onto the other storage targets.
A thread guardian (worker) is a multithreading mechanism that can be created independently of the page; it is used to maintain a thread and can also record and maintain data. The thread guardian records the association between local data and cloud server data, and the upload and download progress of local data and of the cloud server can be checked through it, so that the related operations are then performed through the thread guardian.
The thread guardian allows a developer to write background routines that run for a long time without being interrupted by the user, executing transactions or logic while keeping the page responsive to the user. Simply put, it allows multiple threads to be created, with the child threads fully controlled by the main thread, so relatively time-consuming computations can be handled by a thread guardian. In this application, one thread guardian maintains exactly one thread. Through the consistent hashing algorithm, a specific hash value is set, and the thread guardian corresponding to the upload task or delay loading task is located by matching the hash value. When a new task is added to a thread or a task is cancelled, the correspondence between the existing tasks on that thread and the thread guardian does not change because of the addition or cancellation, so the consistency of the thread guardian mapping is maintained.
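By way of illustration only, the following minimal sketch shows one way the consistent-hash selection of a thread guardian could be organized; the ring construction with virtual nodes, the use of MD5, and the use of a stable task key such as a snapshot identifier are assumptions made for the example rather than details fixed by the method.

```python
import bisect
import hashlib

class WorkerRing:
    """Minimal consistent-hash ring mapping a stable task key to a thread guardian (worker)."""

    def __init__(self, workers, vnodes=64):
        # Each guardian appears at several virtual positions on the ring to smooth the load.
        points = []
        for w in workers:
            for i in range(vnodes):
                points.append((self._hash(f"{w.name}#{i}"), w))
        points.sort(key=lambda p: p[0])
        self._hashes = [h for h, _ in points]
        self._workers = [w for _, w in points]

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def select(self, task_key: str):
        """Return the guardian whose ring position follows the hash of the task key."""
        idx = bisect.bisect(self._hashes, self._hash(task_key)) % len(self._hashes)
        return self._workers[idx]

# Usage (hypothetical, assuming guardian objects that expose a .name attribute):
#   ring = WorkerRing(guardians)
#   guardian = ring.select(snapshot_id)   # the same key always maps to the same guardian
```

Adding or cancelling a task does not move any guardian on the ring, so the correspondence between existing tasks and guardians stays fixed, which is the consistency property relied on above.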
Preferably, referring to Fig. 2, step S102 is followed by step S105: on the input/output thread maintained by the thread guardian, add the upload task or delay loading task to the task map corresponding to the thread guardian.
This step adds, on the input/output thread maintained by the thread guardian, the upload task or delay loading task to the task map corresponding to that thread guardian.
A process is a running activity of a program on a data set in a computer; it is the basic unit of resource allocation and scheduling in the system and the foundation of the operating system's structure. In early process-oriented computer architectures, the process was the basic execution entity of a program; in contemporary thread-oriented computer architectures, the process is the container of threads. A process is the running activity of a program with independent functionality on a data set; it can request and own system resources, and it is a dynamic concept and an active entity, a "program in execution".
A thread, sometimes called a lightweight process (LWP), is the smallest unit of a program's execution flow. A thread is an entity within a process and is the basic unit that is independently scheduled and dispatched by the system. A thread owns no system resources of its own apart from a few resources that are essential at run time, but it shares all the resources owned by its process with the other threads belonging to the same process. One thread can create and cancel another thread, and multiple threads running concurrently in a single program to perform different work are called multithreading. Because threads constrain one another, a thread runs discontinuously; a thread also has three basic states: ready, blocked and running. Every program has at least one thread; if a program has only one thread, that thread is the program itself.
An input/output thread, i.e. an IO thread, is a thread that the server creates to execute input/output requests when a client issues them.
A task map (taskmap) records and maintains the tasks that need to be executed. It can be a global task map or a local task map; modifying tasks in a global task map requires global lock protection.
The simplest way to ensure data integrity and state synchronization between multiple threads is locking. The task map records the list of tasks that need to be executed; by adding the upload task or delay loading task, on the input/output thread maintained by the thread guardian, to the task map corresponding to that thread guardian, the local task map can be handled without locking.
Preferably, the task delete operation is also executed on the task map. The task map records the list of tasks that need to be executed; tasks can not only be added to it but also deleted from it, and the task map is handled without locking.
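The per-guardian task map can be illustrated with the following sketch, in which each guardian owns one event loop (its IO thread) and one private task map that is only ever touched from that loop, so no lock is required; the class and method names are invented for the illustration.

```python
import asyncio

class ThreadGuardian:
    """One guardian owns one IO event loop (its IO thread) and one private task map."""

    def __init__(self, name: str):
        self.name = name
        self.task_map = {}                     # task_id -> task; only touched on this guardian's loop
        self.loop = asyncio.new_event_loop()   # the loop is run on a dedicated thread (see below)

    def submit(self, task_id, task):
        # Hop onto the guardian's own IO thread; the map is then mutated single-threaded,
        # so the task map needs no lock.
        self.loop.call_soon_threadsafe(self._add, task_id, task)

    def finish(self, task_id):
        # Deletion is likewise executed on the owning IO thread.
        self.loop.call_soon_threadsafe(self._remove, task_id)

    def _add(self, task_id, task):
        self.task_map[task_id] = task

    def _remove(self, task_id):
        self.task_map.pop(task_id, None)
```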
Preferably, referring to Fig. 2, step S102 is preceded by step S106: initialize the thread guardian list.
This step initializes the thread guardian list.
A thread guardian is a multithreading mechanism that can be created independently of the page; it can maintain a thread and record and maintain data. Before the thread guardian corresponding to the upload task or delay loading task is obtained, the thread guardian list should be initialized.
The thread guardian records the association between local data and cloud server data; the upload and download progress of local data and of the cloud server can be checked through the thread guardian, and the subsequent related operations are then performed. From the upload and download progress of local data and of the cloud server recorded in the thread guardian list, it is judged whether the content to be read by a delay loading read request is on the cloud server or local; if it is on the cloud server, the flow proceeds to the next step of obtaining, through the coroutine mechanism, the corresponding specific delay loading task on the input/output thread maintained by the thread guardian (see Fig. 2, step S108).
Preferably, the step of initializing the thread guardian list is implemented such that each thread guardian corresponds to one input/output thread and each thread guardian maintains one task map.
Each thread guardian maintains one input/output thread and one task map, so each input/output thread has one corresponding task map; by adding and deleting the upload task or delay loading task in that task map, the local task map is handled without locking.
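A minimal sketch of this initialization is shown below, building on the ThreadGuardian and WorkerRing sketches given earlier; the assumptions that the number of guardians is a configuration value and that each guardian's loop runs on a dedicated operating-system thread are made for the example only.

```python
import threading

def init_guardian_list(n_workers: int):
    """Create the guardian list: each guardian owns exactly one IO thread and one task map."""
    guardians = []
    for i in range(n_workers):
        g = ThreadGuardian(f"worker-{i}")             # sketch class from above
        threading.Thread(target=g.loop.run_forever,   # one dedicated IO thread per guardian
                         name=g.name, daemon=True).start()
        guardians.append(g)
    return guardians, WorkerRing(guardians)           # ring used for consistent-hash selection

# Usage (hypothetical): guardians, ring = init_guardian_list(8)
#                       ring.select("snapshot-42").submit("snapshot-42", some_task)
```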
Step S103: on the input/output thread maintained by the thread guardian, add the upload or delay loading task through a coroutine mechanism.
This step adds the upload or delay loading task, through a coroutine mechanism, on the input/output thread maintained by the thread guardian.
A coroutine is a lightweight user-space multitasking mechanism in which the programmer takes on the responsibility of scheduling. This can improve performance, but it also means that coroutines lose the ability of standard threads to use multiple CPUs. A coroutine retains the state of its last invocation (a particular combination of all of its local state); each re-entry is equivalent to returning to the state of the previous call, in other words to the position in the logic flow at which it last left off.
A coroutine is a flow-control mechanism with which cooperative multitasking can easily be realized. A program can contain multiple coroutines, comparable to a process containing multiple threads. Multiple threads are relatively independent, each with its own context, and their switching is controlled by the system; coroutines are also relatively independent and have their own context, but their switching is controlled by themselves: switching from the current coroutine to another coroutine is controlled by the current coroutine. On the input/output thread maintained by the thread guardian, the programmer adds the upload or delay loading task; when multiple tasks need to execute, the operating system no longer decides which task runs at which time (preemptive multitasking). Instead, the tasks themselves are given the power to decide: they voluntarily give up execution when they consider it appropriate and resume execution when they consider it appropriate (cooperative multitasking). The same thread is therefore allowed to process other tasks before the current task completes, so other tasks do not go unprocessed for a long time. Moreover, a coroutine retains the state of its last invocation, and each re-entry is equivalent to returning to that state, which improves the efficiency of task processing and makes better use of thread resources.
Preferably, time-sharing multiplexing of tasks on the same input/output thread is realized through the coroutine mechanism.
The same thread is allowed to process other tasks before the current task is completed, so other tasks do not go unprocessed for a long time. This solves the inherent problems of the previous model, in which tasks often suffered unpredictable starvation and thread resources were wasted, and achieves time-sharing of the IO thread.
For example, assume an operating system that is single-core and runs no other programs, with two threads A and B whose tasks are both arithmetic operations and which neither compete nor share data. If A and B run in parallel as threads, the operating system keeps switching between the two threads to achieve a pseudo-parallel effect. With coroutines, coroutine A can first run on the same thread and make way for coroutine B when it finishes, or B can make way for A when it finishes; switching to another coroutine is controlled by the current coroutine. The tasks thereby genuinely cooperate, time-sharing of the IO thread is achieved, and the state of the last invocation is retained, so each re-entry is equivalent to returning to the state of the previous call.
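The cooperative time-sharing described in this example can be sketched as follows: each task is written as a coroutine that yields voluntarily around every chunk of work, so an upload task and a delay loading task interleave on one IO thread without preemption. The chunked loop and the sleep placeholder stand in for the real network IO and are assumptions of the sketch.

```python
import asyncio

async def run_upload(task):
    # Upload the snapshot chunk by chunk; each await is a voluntary yield point,
    # so other coroutines sharing the same IO thread can run between chunks.
    for chunk in task["chunks"]:
        await asyncio.sleep(0)                   # placeholder for the real network write
        print(f"upload {task['id']}: chunk {chunk} done")

async def run_lazyload(task):
    # Download the snapshot chunk by chunk, yielding between chunks in the same way.
    for chunk in task["chunks"]:
        await asyncio.sleep(0)                   # placeholder for the real network read
        print(f"lazyload {task['id']}: chunk {chunk} done")

async def main():
    # An upload task and a delay loading task time-share one IO thread cooperatively;
    # neither blocks the other while it waits for IO.
    await asyncio.gather(
        run_upload({"id": "A", "chunks": range(3)}),
        run_lazyload({"id": "B", "chunks": range(3)}),
    )

asyncio.run(main())
```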
Preferably, referring to Fig. 2, the IO threading model optimization method further includes the following steps.
Step S112: obtain the delay loading read request from the input/output candidate list of the specific delay loading task.
This step obtains the delay loading read request from the input/output candidate list of the specific delay loading task.
A delay loading read request is a request for data that the client urgently needs to read while snapshot data is being delay loaded.
The specific delay loading task is the delay loading task corresponding to the delay loading read request, i.e. the delay loading task whose data the read request wants to read.
A delay loading task has a corresponding task-execution candidate list during its execution, and the delay loading task carries out its work in the order given by the candidate list. When the client needs to read related data urgently, the delay loading read request in the input/output candidate list of the specific delay loading task must first be obtained.
Preferably, referring to Fig. 2, before step S112 the method further includes:
Step S107: receive the delay loading read request of the specific delay loading task;
Step S108: select, by the consistent hashing algorithm, the thread guardian corresponding to the delay loading read request;
Step S110: on the input/output thread maintained by the thread guardian, obtain the corresponding specific delay loading task through the coroutine mechanism;
Step S111: add the delay loading read request to the input/output candidate list of the specific delay loading task.
The above steps receive the delay loading read request of the specific delay loading task and add the delay loading read request to the input/output candidate list of the specific delay loading task.
The delay loading read request of the specific delay loading task is received, and the thread guardian corresponding to the delay loading read request is selected by the consistent hashing algorithm; this thread guardian is the same as the thread guardian corresponding to the delay loading task in general, and the corresponding thread guardian does not change when tasks change. In the step of obtaining the corresponding specific delay loading task, through the coroutine mechanism, on the input/output thread maintained by the thread guardian, a task for obtaining the specific delay loading task needs to be created in advance, and the specific delay loading task is obtained by completing that task; the specific delay loading task that requires the priority read operation is thus obtained. The task of obtaining the specific delay loading task is executed in the same way as an ordinary task, following steps S101 to S104, and is not described again here.
Preferably, referring to Fig. 2, after step S107 the method further includes step S108:
From the data recorded in the thread guardian list, it is judged whether the content to be read by the delay loading read request is on the cloud server or local; if it is on the cloud server, the flow proceeds to the next step of obtaining, through the coroutine mechanism, the corresponding specific delay loading task on the input/output thread maintained by the thread guardian.
The thread guardian records and maintains data, including the association between local data and cloud server data; the upload and download progress of local data and of the cloud server can be checked through the thread guardian, and it is judged whether the content to be read by the delay loading read request is on the cloud server or local. If it is on the cloud server, the content to be read has not yet been downloaded locally and still needs to be delay loaded from the cloud server to the local machine; the flow then proceeds to the next step, S109, of obtaining the corresponding specific delay loading task, through the coroutine mechanism, on the input/output thread maintained by the thread guardian, so that the delay loading read request is added to the input/output candidate list of the specific delay loading task and is executed with priority when the specific delay loading task executes.
Step S113: during execution of the specific delay loading task, process the delay loading read request with priority.
This step executes the delay loading read request.
The specific delay loading task carries out its work in the order given by the candidate list. When the client urgently needs to read related data, the delay loading read request in the input/output candidate list of the specific delay loading task is obtained first, the delay loading read request in the candidate list is processed with priority, and the corresponding read operation is performed. The snapshot server therefore no longer relies on a client retry mechanism to complete the read; it processes the delay loading read request with priority, which reduces repeated operations and greatly reduces the latency of request processing.
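As a sketch of this candidate-list behaviour, the delay loading task below walks through its blocks in order but serves any urgent read request queued on its candidate list first on the next iteration; the queue type, the block granularity and the method names are assumptions made for the illustration.

```python
import asyncio
from collections import deque

class LazyLoadTask:
    """Delay loading task with an input/output candidate list for urgent read requests."""

    def __init__(self, task_id, blocks):
        self.task_id = task_id
        self.blocks = deque(blocks)              # blocks to download, in sequential order
        self.read_candidates = deque()           # urgent delay loading read requests

    def add_read_request(self, block):
        # Called on this task's IO thread when a client urgently needs a block.
        self.read_candidates.append(block)

    async def run(self):
        fetched = set()
        while self.blocks or self.read_candidates:
            if self.read_candidates:
                block = self.read_candidates.popleft()    # serve urgent read requests first
            else:
                block = self.blocks.popleft()             # otherwise continue sequentially
            if block in fetched:
                continue                                  # already downloaded locally, nothing to do
            await asyncio.sleep(0)                        # placeholder for the real cloud fetch
            fetched.add(block)
            print(f"task {self.task_id}: block {block} is now local")

# Usage (hypothetical): asyncio.run(LazyLoadTask("snap-1", ["b0", "b1", "b2"]).run())
```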
Step S104: execute the upload or delay loading task on the input/output thread.
This step executes the upload or delay loading task.
After the snapshot server receives the upload task or delay loading task, the upload or delay loading task is added, through the coroutine mechanism, on the input/output thread maintained by the thread guardian, and the upload or delay loading task is executed on the input/output thread in the order set by the coroutine mechanism. The same thread is allowed to process other tasks before the current task is completed, so other tasks do not go unprocessed for a long time, and time-sharing of the IO thread is achieved.
As shown in Fig. 3, the IO thread execution device provided by the second embodiment of the present application includes the following modules:
a task receiving module 301, configured to receive an upload task or a delay loading task;
a thread guardian selecting module 302, configured to select, by a consistent hashing algorithm, the thread guardian corresponding to the upload task or delay loading task;
a task adding module 303, configured to add the upload or delay loading task, through a coroutine mechanism, on the input/output thread maintained by the thread guardian;
a task execution module 304, configured to execute the upload or delay loading task on the input/output thread.
As shown in Fig. 4, another structural block diagram of the IO thread execution device provided by the second embodiment of the present application includes the following modules:
a task map adding module 305, configured to add, on the input/output thread maintained by the thread guardian, the upload task or delay loading task to the task map corresponding to the thread guardian;
an initialization module 306, configured to initialize the thread guardian list;
a delay loading read request obtaining module 312, configured to obtain the delay loading read request from the input/output candidate list of the delay loading task;
a delay loading read request execution module 313, configured to process the delay loading read request with priority during execution of the delay loading task;
a delay loading read request receiving module 307, configured to receive the delay loading read request of the specific delay loading task;
a corresponding thread guardian selecting module 309, configured to select, by a consistent hashing algorithm, the thread guardian corresponding to the delay loading read request;
a corresponding specific delay loading task obtaining module 310, configured to obtain, through a coroutine mechanism, the corresponding specific delay loading task on the input/output thread maintained by the thread guardian;
a delay loading read request adding module 311, configured to add the delay loading read request to the input/output candidate list of the delay loading task;
a judgment module 308, configured to judge, from the data recorded in the thread guardian list, whether the content to be read by the delay loading read request is on the cloud server or local, and, if it is on the cloud server, to proceed to the next step of obtaining, through a coroutine mechanism, the corresponding specific delay loading task on the input/output thread maintained by the thread guardian.
In the IO thread execution method provided by the present application, a thread guardian corresponding to the upload task or delay loading task is selected by a consistent hashing algorithm, the upload or delay loading task is added through a coroutine mechanism on the input/output thread maintained by that thread guardian, and the upload or delay loading task is then executed on the input/output thread, so that the task is executed on the IO thread. Because the upload or delay loading task is added through the coroutine mechanism to the IO thread maintained by the thread guardian, time-sharing of the IO thread is achieved, and tasks are neither starved by waiting indefinitely to be processed nor are thread resources wasted.
Further, on the basis of the above IO thread execution method, the task map is kept thread-local and add and delete operations are executed within the owning thread, so the task map is handled without locking, which reduces maintenance complexity. In addition, during delay loading, delay loading read operations no longer rely on a client retry mechanism to complete; instead, delay loading read requests are processed with priority, which reduces repeated operations and greatly reduces request-processing latency.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces and memory.
The memory may include computer-readable media in the form of volatile memory, random access memory (RAM) and/or non-volatile memory, for example read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
1. Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology. The information can be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
2. Those skilled in the art should understand that the embodiments of the present application can be provided as a method, a system or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical memory and the like) containing computer-usable program code.

Claims (15)

1. An IO thread execution method, characterized by comprising:
receiving an upload task or a delay loading task;
selecting, by a consistent hashing algorithm, a thread guardian corresponding to the upload task or delay loading task;
adding the upload or delay loading task, through a coroutine mechanism, on an input/output thread maintained by the thread guardian;
executing the upload or delay loading task on the input/output thread.
2. The IO thread execution method according to claim 1, characterized in that, after the step of selecting, by a consistent hashing algorithm, the thread guardian corresponding to the upload task or delay loading task, the method further comprises:
adding, on the input/output thread maintained by the thread guardian, the upload task or delay loading task to a task map corresponding to the thread guardian.
3. The IO thread execution method according to claim 2, characterized in that a task delete operation is executed on the task map.
4. The IO thread execution method according to claim 2, characterized in that, before the step of selecting, by a consistent hashing algorithm, the thread guardian corresponding to the upload task or delay loading task, the method further comprises:
initializing the thread guardian list.
5. The IO thread execution method according to claim 4, characterized in that the step of initializing the thread guardian list is implemented such that:
each thread guardian corresponds to one input/output thread, and each thread guardian maintains one task map.
6. The IO thread execution method according to claim 4, characterized in that the method further comprises:
obtaining a delay loading read request from an input/output candidate list of a specific delay loading task;
during execution of the specific delay loading task, processing the delay loading read request with priority.
7. The IO thread execution method according to claim 6, characterized in that, before the step of obtaining the delay loading read request from the input/output candidate list of the delay loading task, the method further comprises:
receiving the delay loading read request of the specific delay loading task;
selecting, by a consistent hashing algorithm, the thread guardian corresponding to the delay loading read request;
obtaining, through a coroutine mechanism, the corresponding specific delay loading task on the input/output thread maintained by the thread guardian;
adding the delay loading read request to the input/output candidate list of the specific delay loading task.
8. The IO thread execution method according to claim 7, characterized in that, after the step of selecting, by a consistent hashing algorithm, the thread guardian corresponding to the delay loading read request, the method further comprises:
judging, from data recorded in the thread guardian list, whether content to be read by the delay loading read request is on a cloud server or local, and, if it is on the cloud server, proceeding to the next step of obtaining, through the coroutine mechanism, the corresponding specific delay loading task on the input/output thread maintained by the thread guardian.
9. The IO thread execution method according to claim 1, characterized in that time-sharing multiplexing of tasks on the same input/output thread is realized through a coroutine mechanism.
10. An IO thread execution device, characterized by comprising:
a task receiving module, configured to receive an upload task or a delay loading task;
a thread guardian selecting module, configured to select, by a consistent hashing algorithm, a thread guardian corresponding to the upload task or delay loading task;
a task adding module, configured to add the upload or delay loading task, through a coroutine mechanism, on an input/output thread maintained by the thread guardian;
a task execution module, configured to execute the upload or delay loading task on the input/output thread.
11. The IO thread execution device according to claim 10, characterized in that the device further comprises:
a task map adding module, configured to add, on the input/output thread maintained by the thread guardian, the upload task or delay loading task to a task map corresponding to the thread guardian.
12. The IO thread execution device according to claim 10, characterized in that the device further comprises:
an initialization module, configured to initialize the thread guardian list.
13. The IO thread execution device according to claim 10, characterized in that the device further comprises:
a delay loading read request obtaining module, configured to obtain a delay loading read request from an input/output candidate list of the delay loading task;
a delay loading read request execution module, configured to process the delay loading read request with priority during execution of the delay loading task.
14. The IO thread execution device according to claim 10, characterized in that the device further comprises:
a delay loading read request receiving module, configured to receive a delay loading read request of a specific delay loading task;
a corresponding thread guardian selecting module, configured to select, by a consistent hashing algorithm, the thread guardian corresponding to the delay loading read request;
a corresponding specific delay loading task obtaining module, configured to obtain, through a coroutine mechanism, the corresponding specific delay loading task on the input/output thread maintained by the thread guardian;
a delay loading read request adding module, configured to add the delay loading read request to the input/output candidate list of the delay loading task.
15. The IO thread execution device according to claim 10, characterized in that the device further comprises:
a judgment module, configured to judge, from data recorded in the thread guardian list, whether content to be read by the delay loading read request is on a cloud server or local, and, if it is on the cloud server, to proceed to the next step of obtaining, through a coroutine mechanism, the corresponding specific delay loading task on the input/output thread maintained by the thread guardian.
CN201810018487.4A 2018-01-09 2018-01-09 IO thread execution method and device Active CN110018880B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810018487.4A CN110018880B (en) 2018-01-09 2018-01-09 IO thread execution method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810018487.4A CN110018880B (en) 2018-01-09 2018-01-09 IO thread execution method and device

Publications (2)

Publication Number Publication Date
CN110018880A true CN110018880A (en) 2019-07-16
CN110018880B CN110018880B (en) 2023-04-14

Family

ID=67187570

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810018487.4A Active CN110018880B (en) 2018-01-09 2018-01-09 IO thread execution method and device

Country Status (1)

Country Link
CN (1) CN110018880B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6983339B1 (en) * 2000-09-29 2006-01-03 Intel Corporation Method and apparatus for processing interrupts of a bus
CN1967487A (en) * 2005-11-18 2007-05-23 Sap股份公司 Cooperative scheduling using coroutines and threads
US20130332711A1 (en) * 2012-06-07 2013-12-12 Convey Computer Systems and methods for efficient scheduling of concurrent applications in multithreaded processors
CN107172174A (en) * 2017-06-01 2017-09-15 重庆高略启达智慧城市信息技术有限公司 The quick upload of file is realized between clients and servers
US20170310628A1 (en) * 2016-04-21 2017-10-26 Confluent, Inc. Distributed message queue stream verification
CN107368362A (en) * 2017-06-29 2017-11-21 上海阅文信息技术有限公司 A kind of multithreading/multi-process for disk read-write data is without lock processing method and system

Also Published As

Publication number Publication date
CN110018880B (en) 2023-04-14

Similar Documents

Publication Publication Date Title
CN107431696B (en) Method and cloud management node for application automation deployment
CN107239335B (en) Job scheduling system and method for distributed system
Zaharia et al. Job scheduling for multi-user mapreduce clusters
US20170180220A1 (en) Techniques to Generate Workload Performance Fingerprints for Cloud Infrastructure Elements
Levis: Experiences from a Decade of TinyOS Development
Zhang et al. Improving Hadoop service provisioning in a geographically distributed cloud
US10585785B2 (en) Preservation of modifications after overlay removal from a container
CN110233802B (en) Method for constructing block chain structure with one main chain and multiple side chains
Petrov et al. Adaptive performance model for dynamic scaling Apache Spark Streaming
Memishi et al. Fault tolerance in MapReduce: A survey
Liu et al. Optimizing shuffle in wide-area data analytics
CN113220431A (en) Cross-cloud distributed data task scheduling method, device and storage medium
CN110569113A (en) Method and system for scheduling distributed tasks and computer readable storage medium
WO2010091495A1 (en) System and method for parallel stream processing
Krevat et al. Applying performance models to understand data-intensive computing efficiency
US20240104031A1 (en) Forwarding incoming io to scm namespaces
US20140040191A1 (en) Inventorying and copying file system folders and files
US10338891B2 (en) Migration between model elements of different types in a modeling environment
CN110018880A (en) IO thread execution method and device
EP3396553B1 (en) Method and device for processing data after restart of node
Toasa et al. Performance evaluation of apache zookeeper services in distributed systems
Chaari et al. On the robot path planning using cloud computing for large grid maps
US7979393B2 (en) Multiphase topology-wide code modifications for peer-to-peer systems
US20230325234A1 (en) System and method for dynamic task scheduling
Höger Fault tolerance in parallel data processing systems

Legal Events

Code  Description
PB01  Publication
SE01  Entry into force of request for substantive examination
REG   Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40010943)
GR01  Patent grant