CN109426562A - Priority weighted round-robin scheduling device - Google Patents

Priority weighted round-robin scheduling device

Info

Publication number
CN109426562A
Authority
CN
China
Prior art keywords
event
round-robin
message
task
register
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710761856.4A
Other languages
Chinese (zh)
Other versions
CN109426562B (en)
Inventor
田冰
王树柯
路向峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Memblaze Technology Co Ltd
Original Assignee
Beijing Memblaze Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Memblaze Technology Co Ltd filed Critical Beijing Memblaze Technology Co Ltd
Priority to CN201710761856.4A priority Critical patent/CN109426562B/en
Publication of CN109426562A publication Critical patent/CN109426562A/en
Application granted granted Critical
Publication of CN109426562B publication Critical patent/CN109426562B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Executing Machine-Instructions (AREA)

Abstract

This application discloses a priority weighted round-robin scheduling device for scheduling the processing of multiple events. The disclosed scheduler comprises: a pending-event register, a round-robin event selector, and a current round-robin event enable register. The pending-event register indicates one or more events to be scheduled. The round-robin event selector is coupled to the bits of the pending-event register corresponding to one or more round-robin events, and is coupled to the current round-robin event enable register. The current round-robin event enable register indicates the one or more round-robin events that may be scheduled; and the round-robin event selector selects one of the round-robin events to be scheduled according to the indication of the current round-robin event enable register.

Description

Priority weighted round-robin scheduling device
Technical field
This application relates to schedulers for embedded CPUs and, in particular, to a scheduler that performs weighted round-robin scheduling of events that each have a priority in an embedded system.
Background art
Each core of an embedded multi-core CPU (Central Processing Unit) handles its own workload, and there is a large demand for communication and collaboration between the cores. Tasks have ordering constraints: a task may depend on the completion of one or more preceding tasks. Each CPU core handles a variety of events and learns the processing progress of preceding tasks from those events. For example, events include the appearance of an entry to be processed in a queue, the elapse of a specified length of time, an interrupt, or a custom event generated while processing a task.
Fig. 1 is a block diagram of a prior-art embedded multi-core CPU system. CPU 0 and CPU 1 are homogeneous or heterogeneous CPU cores coupled by a bus. Each CPU has a local memory that it can access with low latency. The CPUs are also coupled by the bus to an external memory, such as a DDR (Double Data Rate) memory. The external memory provides large storage capacity, but its access latency is higher. Therefore, when a CPU accesses the external memory, the high-latency commands are usually buffered in queues. A command may take the form of a message with a specified data format.
The entries of a queue are messages. A CPU receives messages from inbound queues and sends messages through outbound queues, and may have any number of queues. As an example, referring to Fig. 1, CPU 0 includes inbound (Inbound) queue 0, inbound queue 1, outbound (Outbound) queue 0, and outbound queue 1.
To access the external memory, CPU 0 adds a message to outbound queue 0. The message is forwarded over the bus to the external memory. The external memory outputs the result of the memory access, which is forwarded over the bus to inbound queue 0.
Messages are also exchanged between CPUs through queues. For example, CPU 0 adds a message to outbound queue 1. The message is forwarded over the bus to inbound queue 0 of CPU 1, and CPU 1 obtains the message sent by CPU 0 from its inbound queue 0.
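The queue-based message exchange of Fig. 1 can be sketched in software. Everything below is an illustrative simplification — the queue names and the one-shot `bus_forward` step are assumptions for the sketch, not a model of the real bus hardware:

```python
from collections import deque

def bus_forward(src: deque, dst: deque) -> None:
    """Model the bus forwarding every message from an outbound queue
    (on the sender) to an inbound queue (on the receiver)."""
    while src:
        dst.append(src.popleft())

# CPU 0 sends a message to CPU 1 through its outbound queue 1,
# which the bus forwards to CPU 1's inbound queue 0.
cpu0_outbound1: deque = deque()
cpu1_inbound0: deque = deque()

cpu0_outbound1.append({"src": "CPU0", "payload": "hello"})
bus_forward(cpu0_outbound1, cpu1_inbound0)
```

After the forwarding step, CPU 1 would pop the message from its inbound queue 0 and process it, exactly as described above.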
Summary of the invention
To access the external memory, the CPU must check the queue state before adding a message to an outbound queue, operating only when the outbound queue is not full; and a considerable time elapses between the CPU adding the external-memory access message to the outbound queue and the CPU receiving the access result from the inbound queue. While waiting for an inbound queue to become non-empty, or waiting for the result from the external memory, the CPU needs to schedule and process other tasks to raise CPU utilization. When the CPU simultaneously handles multiple identical or different tasks, uses multiple queues, and/or responds to different types of events, effectively scheduling the multiple tasks running on the CPU becomes complicated.
The CPU can poll the queue states so as to add a message when an outbound queue is not full, or take out and process a message when an inbound queue is non-empty. However, polling the queue states wastes CPU processing capacity. Alternatively, the CPU can respond to interrupts generated by queue state changes, identify the event type, and process it. But when queue events occur frequently, the overhead of a large amount of interrupt processing seriously increases the CPU's burden.
Desktop and server CPUs run an operating system, which schedules the multiple processes and/or threads running on the CPU; the user need not intervene much in switching between processes/threads, and the operating system selects an appropriate process/thread to schedule so as to make full use of the CPU's computing capability. In an embedded multi-core CPU, however, resources such as usable memory and CPU processing capacity are all limited, and it is difficult to bear the overhead introduced by process/thread management. Moreover, some embedded systems have strict requirements on performance, especially on task processing latency, and an operating system is also hard to apply in such scenarios.
When the CPU handles many kinds of tasks in large volume, task scheduling itself becomes a burden on the CPU and affects the performance of the task processing system. The task scheduling process is also complex, and programming the CPU to perform large-scale task scheduling has high complexity. It is desirable to offload the CPU, improve the efficiency of the task scheduling process, and reduce the difficulty of implementing it.
According to a first aspect of the present application, a first scheduler is provided, comprising: a pending-event register, a round-robin event selector, and a current round-robin event enable register. The pending-event register indicates one or more events to be scheduled. The round-robin event selector is coupled to the bits of the pending-event register corresponding to one or more round-robin events, and is coupled to the current round-robin event enable register. The current round-robin event enable register indicates the one or more round-robin events that may be scheduled; and the round-robin event selector selects one of the round-robin events to be scheduled according to the indication of the current round-robin event enable register.
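The structure of the first scheduler can be illustrated with a small software model. The Python sketch below is hypothetical — register widths, bit assignments, and the tie-breaking rule are assumptions, not taken from the patent. Pending events and the current round-robin enable register are modeled as bit masks, and the selector picks an event whose bit is set in both:

```python
from typing import Optional

def select_round_robin(pending: int, enable: int) -> Optional[int]:
    """Pick one schedulable round-robin event index, or None if there is none.

    `pending` models the pending-event register, `enable` the current
    round-robin event enable register; only events whose bits are set in
    both registers compete for selection.
    """
    candidates = pending & enable
    if candidates == 0:
        return None
    # Tie-break by choosing the lowest-numbered candidate (an assumption;
    # the patent leaves the concrete selection rule to the implementation).
    return (candidates & -candidates).bit_length() - 1
```

For example, with events 1 and 3 pending (`0b1010`) and events 1 and 2 enabled (`0b0110`), only event 1 is selectable, so `select_round_robin(0b1010, 0b0110)` yields 1.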
According to the first scheduler of the first aspect of the application, a second scheduler is provided, further comprising an event-handling-function invoking component; the event-handling-function invoking component calls a handler function to process the scheduled round-robin event according to the indication of the round-robin event selector.
According to the second scheduler of the first aspect of the application, a third scheduler is provided, wherein in response to a first round-robin event having been processed, the bit of the round-robin event enable register corresponding to the first round-robin event is cleared.
According to one of the first through third schedulers of the first aspect of the application, a fourth scheduler is provided, wherein the round-robin event selector selects, in turn, the round-robin events to be scheduled from among those indicated as schedulable by the current round-robin event enable register.
According to one of the first through fourth schedulers of the first aspect of the application, a fifth scheduler is provided, wherein in response to all of the schedulable round-robin events indicated by the current round-robin event enable register having been selected, the current round-robin event enable register is updated.
According to one of the first through fourth schedulers of the first aspect of the application, a sixth scheduler is provided, wherein in response to each of the schedulable round-robin events indicated by the current round-robin event enable register being selected in turn as the round-robin event to be scheduled, the current round-robin event enable register is updated.
According to the fifth or sixth scheduler of the first aspect of the application, a seventh scheduler is provided, further comprising a round-robin event weight table component; the round-robin event weight table component records multiple values in order; and, to update the current round-robin event enable register, a value is obtained from the round-robin event weight table in order and the current round-robin event enable register is updated with the obtained value.
According to the seventh scheduler of the first aspect of the application, an eighth scheduler is provided, wherein the values recorded by the round-robin event weight table component indicate whether one or more round-robin events may be scheduled.
According to the eighth scheduler of the first aspect of the application, a ninth scheduler is provided, wherein a first round-robin event is indicated as schedulable in a first number of the values recorded by the round-robin event weight table component; a second round-robin event is indicated as schedulable in a second number of those values; and the first number is greater than the second number.
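One way to realize the weight table of the ninth scheduler is as a cyclic list of enable masks in which an event of weight w appears in w entries. The sketch below is a hypothetical software model; the particular table encoding is an assumption, not the patent's:

```python
from typing import Dict, List

def build_weight_table(weights: Dict[int, int]) -> List[int]:
    """Build a cyclic round-robin event weight table from per-event weights.

    Each table entry is an enable mask; an event with weight w is marked
    schedulable in w entries, so over one full cycle of the table it is
    offered proportionally more scheduling opportunities.
    """
    rounds = max(weights.values(), default=0)
    table = []
    for r in range(rounds):
        mask = 0
        for event, weight in weights.items():
            if r < weight:          # event stays enabled for its first `weight` rounds
                mask |= 1 << event
        table.append(mask)
    return table
```

With weights {event 0: 3, event 1: 1}, event 0 is schedulable in three of the three entries and event 1 in only one, matching the ninth scheduler's relation that the first number is greater than the second.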
According to the fifth or sixth scheduler of the first aspect of the application, a tenth scheduler is provided, further comprising a round-robin event weight table component; the round-robin event weight table component records the weights of one or more round-robin events.
According to the tenth scheduler of the first aspect of the application, an eleventh scheduler is provided, further comprising a round-robin event enable setting component and one or more round-robin event weight registers; in response to a first round-robin event having been processed, the round-robin event enable setting component selects, according to the weight of the first round-robin event obtained from the round-robin event weight table component, one of the one or more round-robin event weight registers in which to record the enabling of the first round-robin event.
According to the eleventh scheduler of the first aspect of the application, a twelfth scheduler is provided, wherein the one or more round-robin event weight registers are ordered, and one of them is selected according to the rank of the weight of the first round-robin event, obtained from the round-robin event weight table component, within the set of weights of the round-robin events.
According to the twelfth scheduler of the first aspect of the application, a thirteenth scheduler is provided, wherein in response to all of the schedulable round-robin events indicated by the current round-robin event enable register having been selected, the current round-robin event enable register is updated with the value of the first-ranked one of the one or more round-robin event weight registers.
According to the thirteenth scheduler of the first aspect of the application, a fourteenth scheduler is provided, wherein in response to all of the schedulable round-robin events indicated by the current round-robin event enable register having been selected, a second round-robin event weight register is updated with the value of a later-ranked first round-robin event weight register, the second round-robin event weight register ranking immediately before the first round-robin event weight register among the one or more round-robin event weight registers.
According to the twelfth scheduler of the first aspect of the application, a fifteenth scheduler is provided, wherein in response to each of the schedulable round-robin events indicated by the current round-robin event enable register being selected in turn as the round-robin event to be scheduled, the current round-robin event enable register is updated with the value of the first-ranked one of the one or more round-robin event weight registers.
According to the fifteenth scheduler of the first aspect of the application, a sixteenth scheduler is provided, wherein in response to each of the schedulable round-robin events indicated by the current round-robin event enable register being selected in turn as the round-robin event to be scheduled, a second round-robin event weight register is updated with the value of a later-ranked first round-robin event weight register, the second round-robin event weight register ranking immediately before the first round-robin event weight register among the one or more round-robin event weight registers.
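The thirteenth through sixteenth schedulers describe an ordered chain of weight registers that refills the current enable register: the front register's value becomes the new enable mask, and each register takes the value of the register ranked just after it. The following is a hypothetical software model; the wrap-around of the last register (making the chain a rotation) is an assumed concrete policy that the claims leave open:

```python
from typing import List

class EnableRegisterChain:
    """Ordered round-robin event weight registers feeding the current
    round-robin event enable register (all names are illustrative)."""

    def __init__(self, masks: List[int]):
        self.regs = list(masks)   # ordered weight registers (front = rank 0)
        self.current = 0          # current round-robin event enable register

    def refill(self) -> int:
        """Called when every enabled event has been selected: the current
        enable register takes the front register's value, and each weight
        register is updated with the value of the register ranked just
        after it (modeled here as rotating the chain)."""
        self.current = self.regs[0]
        self.regs = self.regs[1:] + self.regs[:1]
        return self.current
```

With registers `[0b11, 0b01, 0b10]`, successive refills yield the enable masks 0b11, 0b01, 0b10, and then 0b11 again, reproducing a repeating weighted cycle.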
According to one of the first through sixteenth schedulers of the first aspect of the application, a seventeenth scheduler is provided, further comprising a priority event selector; the priority event selector is coupled to the bits of the pending-event register corresponding to one or more priority events; and the priority event selector selects the highest-priority event to be scheduled according to the priorities of the one or more priority events.
According to the seventeenth scheduler of the first aspect of the application, an eighteenth scheduler is provided, further comprising an event arbiter; the event arbiter selects, as the scheduled event, the event selected by one of the priority event selector and the round-robin event selector.
According to the eighteenth scheduler of the first aspect of the application, a nineteenth scheduler is provided, wherein if the priority event selector has selected an event, the event arbiter selects the event selected by the priority event selector; and if the priority event selector has not selected an event but the round-robin event selector has, the event arbiter selects the event selected by the round-robin event selector.
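The arbitration rule of the nineteenth scheduler is a strict-priority choice between the two selectors. A minimal sketch (the function name and the use of `None` for "no selection" are illustrative assumptions):

```python
from typing import Optional

def arbitrate(priority_pick: Optional[int],
              round_robin_pick: Optional[int]) -> Optional[int]:
    """Event arbiter: a selection by the priority event selector always
    wins; the round-robin selection is used only when the priority
    selector chose nothing."""
    if priority_pick is not None:
        return priority_pick
    return round_robin_pick
```

Under this rule, round-robin events are only ever scheduled in cycles where no priority event is pending, which is exactly the behavior the claim describes.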
According to one of the first through nineteenth schedulers of the first aspect of the application, a twentieth scheduler is provided, further comprising a handler function table component; the handler function table component stores events in association with the entry addresses of their handler functions.
According to the twentieth scheduler of the first aspect of the application, a twenty-first scheduler is provided, wherein the event-handling-function invoking component obtains the entry address of the handler function from the handler function table component according to the scheduled event.
According to one of the first through nineteenth schedulers of the first aspect of the application, a twenty-second scheduler is provided, wherein the event-handling-function invoking component obtains the entry address of the handler function of the scheduled event from the message corresponding to the scheduled event.
According to one of the first through twenty-second schedulers of the first aspect of the application, a twenty-third scheduler is provided, further comprising an enabled-event register and an event status register; each bit of the enabled-event register indicates whether the corresponding event may be scheduled; and each bit of the event status register indicates whether the corresponding event has occurred.
According to the twenty-third scheduler of the first aspect of the application, a twenty-fourth scheduler is provided, wherein each bit of the pending-event register is set according to the bitwise AND of the enabled-event register and the event status register.
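The twenty-fourth scheduler derives the pending-event register bitwise from the other two registers. A one-line sketch (register width is an assumption):

```python
def pending_register(enabled: int, status: int) -> int:
    """Each pending bit is the AND of 'may this event be scheduled?'
    (enabled-event register) and 'has this event occurred?' (event
    status register), so a masked event never becomes pending even if
    it has occurred."""
    return enabled & status
```

For example, if events 2 and 3 are enabled (`0b1100`) but only events 1 and 3 have occurred (`0b1010`), only event 3 becomes pending.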
According to a second aspect of the present application, a first method of sending a message is provided, comprising: registering a handler function for sending messages through a queue; indicating the message to be sent; and, in response to the handler function being called, the handler function sending the message through the queue.
According to the first message-sending method of the second aspect of the application, a second method of sending a message is provided, wherein the handler function is called as indicated by any scheduler according to the first aspect of the application.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a block diagram of an embedded multi-core CPU system;
Fig. 2A is an architecture diagram of a task processing system according to an embodiment of the present application;
Fig. 2B is a schematic diagram of scheduling the execution of task-processing code segments according to an embodiment of the present application;
Fig. 3 is a schematic diagram of task scheduling by the task scheduling layer program according to an embodiment of the present application;
Fig. 4 is a flowchart of sending a message according to an embodiment of the present application;
Fig. 5 is a flowchart of receiving a message according to an embodiment of the present application;
Fig. 6 is a flowchart of reading data from an external memory according to an embodiment of the present application;
Fig. 7 is a flowchart of writing data to an external memory according to an embodiment of the present application;
Fig. 8 is a flowchart of using a user-defined event according to an embodiment of the present application;
Fig. 9 is a flowchart of reading data from a nonvolatile external memory according to an embodiment of the present application;
Fig. 10 is a flowchart of writing data to a nonvolatile external memory according to an embodiment of the present application;
Fig. 11 is a flowchart of processing an I/O command accessing a storage device according to an embodiment of the present application;
Fig. 12A is a block diagram of a priority weighted round-robin scheduling device according to an embodiment of the present application;
Fig. 12B shows a round-robin event weight table according to an embodiment of the present application;
Fig. 12C shows another round-robin event weight table according to an embodiment of the present application; and
Fig. 13 is a block diagram of a priority weighted round-robin scheduling device according to another embodiment of the present application.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on these embodiments, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Embodiment one
Fig. 2A is an architecture diagram of a task processing system according to an embodiment of the present application.
As shown in Fig. 2A, the task processing system includes one or more CPUs and hardware resources (for example, queues). The CPU is coupled to multiple queues (inbound queue 0, inbound queue 1, outbound queue 0, outbound queue 1) for exchanging messages with external devices. The memory of the CPU runs the task scheduling layer program and the programs that process tasks, such as the code segments that process tasks.
A CPU can handle multiple tasks at the same time. In the example shown in Fig. 2A, the CPU carries a code segment that processes task 0 (indicated by "task 0"), a code segment that processes task 1 ("task 1"), a code segment that processes task 2 ("task 2"), a code segment that processes task 3 ("task 3"), and a code segment that processes task 4 ("task 4"). The multiple tasks are processed in a certain order and cooperate to realize the function of the task processing system (for example, processing an I/O command accessing a storage device). Each task can specify its successor task, or different successor tasks in response to different events. For example, one of the successor tasks of task 0 is task 2, one of the successor tasks of task 1 is task 0, and the successor task of task 4 is task 3. As an example, each task implements one stage of I/O command processing, and the task processing system can process multiple I/O commands at the same time. Correspondingly, a context resource (Context) is provided for each I/O command, and the code segments that process tasks use the context resources to distinguish the processing of different I/O commands.
When run by the CPU, the task scheduling layer program provides an API (Application Programming Interface) for the code segments that process tasks. A task-processing code segment calls the API to inform the task scheduling layer program of the hardware it wants to operate (for example, a queue); the task scheduling layer program checks the hardware state and, when the hardware resource is available, operates the hardware to complete the operation that the code segment requested through the API. Optionally, a task-processing code segment also registers, by calling the API, other code segments that handle events; for example, after a message is filled into outbound queue 1 and a response message is received from inbound queue 1, it is handled by the code segment that processes task 2. The task scheduling layer program, in response to a message appearing on inbound queue 1, calls the code segment that processes task 2.
According to an embodiment of the present application, the task scheduling layer program provides, through the API, a runtime environment for the code segments that process tasks, with the following advantages:
(1) The API provided by the task scheduling layer program is asynchronous: after a task-processing code segment calls the API, the call returns immediately and does not block the execution of the code segment;
(2) The task scheduling layer program handles the hardware operations, shielding the task-processing code segments from the details and differences of hardware operation, so that the code segments need not concern themselves with hardware resource availability and/or hardware processing latency;
(3) The task scheduling layer program schedules the appropriate code segments according to the states of the hardware resources, ordering the execution of the code segments so as to balance task processing latency against CPU execution efficiency.
Fig. 2B is a schematic diagram of scheduling the execution of task-processing code segments according to an embodiment of the present application. In Fig. 2B, the left-to-right direction is the direction of elapsing time. Solid arrows indicate the temporal order of task processing, and dashed arrows indicate the logical order of task processing.
A single CPU can only process one piece of code at any moment. Illustratively, as shown in Fig. 2B, among the multiple code segments to be processed, the code segment that processes task 0 is executed first, followed in turn by the code segments that process task 1, task 4, task 2, task 0 again, and finally task 3. The logical order of task processing is indicated in each task-processing code segment; for example, the logical order includes that task 2 is processed after task 0, task 0 after task 1, task 3 after task 4, and so on.
By using the task scheduling layer program, the code segment that performs subsequent processing is registered through the application programming interface provided by the task scheduling layer program, so that a task-processing code segment only needs to specify its successor code segment according to the logical order of the tasks; the task scheduling layer program then schedules the task-processing code segments on the CPU while satisfying the required logical order.
According to an embodiment of the present application, task-processing code segments need not check hardware resource availability and need not maintain the ordering among the multiple task-processing code segments, which improves CPU utilization and also reduces the complexity of the task-processing code segments.
Fig. 3 is a schematic diagram of task scheduling by the task scheduling layer program according to an embodiment of the present application.
The hardware resources available to the CPU include multiple queues, for example inbound queues and outbound queues (see Fig. 1). Hardware resources also include memories, timers, and so on. To assist the task-processing code segments in sending messages, the task scheduling layer program provides APIs such as a registration (Register) API and a send (SendMessage) API.
In one example, as shown in Fig. 3, when run by the CPU, the task scheduling layer program maintains a hardware resource state table, which records the state of each hardware resource — for example, whether each queue is available. For an inbound queue, "available" means a message to be processed has appeared in the queue. For an outbound queue, "available" means a message can be added to the queue to be sent to the queue's recipient. For a timer, "available" means the timer has expired or is not timing. For a memory, "available" means the memory controller is idle, has completed a data write to the memory, or has read data from the memory.
As shown in Fig. 3, when run by the CPU, the task scheduling layer program also maintains a handler function table, which records the registered handler functions corresponding to the hardware resources. In response to a hardware resource becoming available, the handler function corresponding to the available hardware resource is obtained from the handler function table and called.
Optionally, the hardware resource state table and the handler function table are combined into a single table. With the hardware resource as the index of a table entry, the state and handler function of the hardware resource are recorded in the entry. Still optionally, only available hardware resources are recorded in the single table: in response to a hardware resource becoming available, an entry corresponding to it is added to the table, and in response to a hardware resource becoming unavailable, the corresponding entry is deleted from the table, so that recording the state of the hardware resource can be omitted.
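The "single table that records only available resources" variant can be sketched as a dictionary keyed by resource, with an entry present only while the resource is available. All names below are illustrative assumptions:

```python
class ResourceTable:
    """Merged hardware-resource / handler table: an entry exists only
    while its resource is available, so no separate state flag is kept."""

    def __init__(self):
        self.entries = {}                 # resource id -> handler function

    def on_available(self, resource, handler):
        self.entries[resource] = handler  # add entry when the resource becomes available

    def on_unavailable(self, resource):
        self.entries.pop(resource, None)  # delete entry when it becomes unavailable

    def dispatch_all(self):
        """Call the handler of every currently available resource."""
        return [handler(resource) for resource, handler in self.entries.items()]
```

Presence in the table stands in for the availability bit, so the scheduling pass simply iterates over the entries without ever testing a state field.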
In a further example, as shown in Fig. 3, when run by the CPU, the task scheduling layer program also maintains a context table for recording buffered contexts (mContext).
Optionally, the handler function table records the handler function corresponding to each message to be processed, and the context table records the buffered context (mContext) corresponding to each message.
As shown in Fig. 3, as an example, during the CPU initialization phase a task-processing code segment registers, through the registration (Register) API, handler functions that send messages for one or more queues with the task scheduling layer program (for example, the handler function messageOutboundCallback registered for an outbound queue, or the handler function messageInboundCallback registered for an inbound queue) (301). In response, the task scheduling layer program records the queue and the associated handler function in the handler function table.
As another example, a task-processing code segment, through the registration API (for example, Register(qID, messageInboundCallback)), specifies an inbound queue (qID) and the handler function (messageInboundCallback) that operates the hardware (the inbound queue indicated by qID) to receive messages (301). In response, the task scheduling layer program records the queue indicated by qID and the associated handler function (messageInboundCallback) in the handler function table.
In step 302, the code segment of the processing task indicates, through the send (SendMessage) API, that a message (mContext) is to be sent through the queue indicated by qID. In response, the task scheduling layer program records the queue indicated by qID and its associated context in the context table.
The registration and send APIs are both asynchronous: the code segment of the processing task is not blocked after they are invoked. The registration and send APIs may be called multiple times, to register multiple processing functions or to send multiple messages.
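The asynchronous behavior described above can be sketched as follows. This is an illustrative Python rendering under assumed names (TaskSchedulingLayer, register, send_message); the patent does not give an implementation, only that both APIs record into the scheduler's tables and return at once without blocking the caller.

```python
# Hypothetical sketch of the asynchronous registration and send APIs of Fig. 3:
# both merely record into the scheduler's tables and return immediately.

class TaskSchedulingLayer:
    def __init__(self):
        self.handler_table = {}   # processing function table: qID -> callback
        self.context_table = {}   # context table: qID -> pending mContext list

    def register(self, qid, callback):
        """Registration API: bind a processing function to a queue (step 301)."""
        self.handler_table[qid] = callback

    def send_message(self, qid, mcontext):
        """Send API: record that mContext awaits sending on qID (step 302)."""
        self.context_table.setdefault(qid, []).append(mcontext)
        # returns immediately; actual transmission happens in a later pass

sched = TaskSchedulingLayer()
sent = []
sched.register("outbound0", lambda ctx: sent.append(ctx))
sched.send_message("outbound0", "msg-A")  # returns without blocking
```

Note that after send_message returns, the callback has not yet run; the message merely sits in the context table until the scheduler dispatches it.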
In response to there being a message to be processed in an inbound queue, the task scheduling layer program calls the processing function (messageInboundCallback) that operates the hardware to receive the message (303).
In response to there being a message to be sent in an outbound queue, the task scheduling layer program calls the processing function (messageOutboundCallback) that operates the hardware to send the message (304).
In Fig. 3, as an example, step 301 is related to step 303. Through step 301, the processing function (messageInboundCallback) is registered with the task scheduling layer program for the inbound queue indicated by qID; and when the inbound queue indicated by qID has a message to be handled (step 303), the task scheduling layer program calls the processing function (messageInboundCallback) registered in association with the inbound queue indicated by qID.
In step 302, no processing function registered in association with the outbound queue indicated by qID is specified. A separate call to the registration API is needed so that the task scheduling layer program records, in the processing function table, the processing function registered in association with the outbound queue indicated by qID. When the outbound queue indicated by qID becomes available, the task scheduling layer obtains the associated processing function through the processing function table, obtains through the context table the message to be sent on the outbound queue indicated by qID, and schedules the processing function to send the message.
The task scheduling layer program cyclically executes step 310, step 320 and step 330.
In step 310, the task scheduling layer program obtains an available hardware resource. Illustratively, the available hardware resource (for example, a queue) is obtained by accessing the hardware resource state table. In one example, when a queue becomes available, the hardware sets a corresponding status register to indicate that the queue is available, and the status register serves as an entry of the hardware resource state table. In another example, an interrupt is generated when a queue becomes available, and the interrupt handler sets the entry of the hardware resource state table according to the available queue.
Optionally, if the hardware resource state table indicates no available hardware resource, execution of step 320 and step 330 is skipped, so as to reduce CPU utilization. Still optionally, in response to the hardware resource state table indicating no available hardware resource, the task scheduling layer program temporarily suspends itself, or puts the CPU to sleep, to reduce power consumption. And in response to a hardware resource becoming available, execution of the task scheduling layer program is resumed and/or the CPU is woken up.
Next, in step 320, the task scheduling layer program obtains the processing function associated with the available hardware resource. And in step 330, the processing function is called. As an example, in response to learning in step 310 that inbound queue 0 is available, the processing function table is accessed in step 320 to obtain the processing function associated with inbound queue 0, and that processing function is called. Optionally, information about the available queue is also supplied to the processing function as a parameter.
Still optionally, the processing function associated with a queue also needs a context. Illustratively, the context associated with the queue (for example, the message to be issued through the queue) is obtained by accessing the context table, and the obtained context is also supplied as a parameter to the processing function associated with the queue.
When the processing function finishes executing, the call to the processing function in step 330 is complete; the flow returns to step 310, obtains an available hardware resource again, and performs processing based on the available hardware resource.
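The step 310/320/330 loop can be rendered as a short sketch. All table and variable names are assumptions for illustration; the point is the flow: fetch an available resource from the hardware resource state table, look up its handler in the processing function table, and invoke the handler with any context.

```python
# Illustrative sketch of the Fig. 3 scheduler loop (steps 310, 320, 330).

handled = []
handler_table = {"inbound0": lambda q, ctx: handled.append((q, ctx))}
context_table = {}   # the inbound handler here needs no context
resource_state = {"inbound0": True, "outbound0": False}

def scheduler_pass():
    for res, available in resource_state.items():   # step 310
        if not available:
            continue
        handler = handler_table.get(res)            # step 320
        if handler is None:
            continue                                # no handler: skip step 330
        handler(res, context_table.get(res))        # step 330
        resource_state[res] = False                 # resource consumed

scheduler_pass()
```

In this pass only inbound0 is dispatched; outbound0 is skipped because the state table marks it unavailable, matching the optional skip of steps 320/330 described above.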
As another example, in response to outbound queue 0 being available, the processing function associated with outbound queue 0 is obtained in step 320. Optionally, if no processing function associated with outbound queue 0 exists in the processing function table, step 320 completes, the processing of step 330 is skipped, and the flow returns to step 310 to obtain an available hardware resource again. Still optionally, if the processing function associated with outbound queue 0 needs a context but accessing the context table yields no corresponding context, step 320 likewise completes, the processing of step 330 is skipped, and the flow returns to step 310 to obtain an available hardware resource again.
In one example, the processing function uses the available hardware resource, making it unavailable. In another example, the processing function does not use the hardware resource, so after the processing function finishes executing, the available hardware resource that caused it to be called is still available. In yet another example, the task scheduling layer program uses the available hardware resource and supplies it to the processing function, so that the resource becomes unavailable regardless of whether the processing function uses it. In still another example, the available hardware resource has multiple units, and the task scheduling layer program or the processing function uses one unit of it.
The processing function belongs to the code segment of a processing task. Within a processing function, the registration API may be used to change the processing functions registered with the task scheduling layer program for one or more queues, or the send API may be used to indicate to the task scheduling layer program that a message is to be issued through a specified outbound queue (the outbound queue indicated by qID). The context of the message to be sent is also indicated in the send API; optionally, a processing function is also indicated in the send API. In response, the task scheduling layer program records in the context table the outbound queue and the context of the message to be sent, associated with each other, and records the outbound queue and its associated processing function in the processing function table.
Optionally, the hardware resource may also be another type of messaging device besides a queue. For example, messaging devices may include outbound messaging devices for sending messages and inbound messaging devices for receiving messages. A messaging device may include multiple slots; the slots transmit messages independently of one another, and there is no ordering constraint among the messages transmitted by the slots. Slot allocation is handled by the task scheduling layer; the code segment of a processing task may merely indicate the messaging device on which a message is to be sent.
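A slot-based messaging device as just described might look like the following minimal sketch. The class shape and method names are assumptions; what it illustrates is that slots carry messages independently, with no ordering constraint, and that allocation is done by the scheduling layer rather than by the processing task's code segment.

```python
# Hypothetical slot-based messaging device: independent slots, no ordering.

class MessagingDevice:
    def __init__(self, num_slots):
        self.slots = [None] * num_slots   # None marks a free slot

    def alloc_slot(self):
        """Slot allocation, performed by the task scheduling layer."""
        for i, s in enumerate(self.slots):
            if s is None:
                return i
        return None                       # no free slot: caller must wait

    def send(self, message):
        slot = self.alloc_slot()
        if slot is None:
            return None                   # device full
        self.slots[slot] = message
        return slot

dev = MessagingDevice(2)
s0 = dev.send("a")
s1 = dev.send("b")
s2 = dev.send("c")     # both slots occupied: no slot available
```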
In one example, the task scheduling layer program calls the processing function to send the message indicated in the context when a slot becomes available on the specified messaging device. In another example, the send API indicates the context of the message to be sent, the recipient of the message (rather than a messaging device), and the processing function; the task scheduling layer program then allocates a messaging device, and records the allocated messaging device and the processing function in the processing function table.
According to the embodiment of Fig. 3, neither the task scheduling layer program nor the code segment of the processing task needs to wait for the hardware to complete the message transfer, which helps improve CPU utilization.
Embodiment two
Fig. 4 is a flowchart of sending a message according to an embodiment of the present application. As an example, the method of sending a message shown in Fig. 4 is used to transmit messages to other CPUs.
To assist the code segment of a processing task in sending messages, the task scheduling layer program provides APIs such as a registration (register) API and a send (SendMessage) API. Through the registration API, a processing function for an event is registered with the task scheduling layer program (for example, a processing function that operates the hardware to send a message); when the specified event occurs (for example, a hardware resource becomes available), the task scheduling layer program calls the registered processing function to carry out the message-sending process. Thus, even to run on a different hardware platform, the differences among hardware platforms can be accommodated by modifying the processing function called for the event, without changing the structure of the code segment.
As shown in Fig. 4, to send a message, in step 410 the code segment of the processing task specifies, through the registration API (for example, Register(qID, messageOutboundCallback)), an outbound queue (qID) and the processing function (messageOutboundCallback) that operates the hardware to send messages. Correspondingly, in step 401, the task scheduling layer program records in the processing function table the mapping between the hardware resource (for example, the outbound queue indicated by qID) and the processing function (messageOutboundCallback). In step 420, when the code segment of the processing task indicates through the send API (SendMessage(qID, mContext)) that there is a message (mContext) to be sent on the outbound queue (qID), the task scheduling layer program calls the processing function (messageOutboundCallback) associated with that outbound queue (qID). The task scheduling layer program also decides the moment at which to call the processing function (messageOutboundCallback). In step 403, in response to the hardware resource being available (for example, the outbound queue (qID) being non-full), the registered processing function is called, so that the processing function (messageOutboundCallback) is executed (430) and the message can be added to the outbound queue (qID).
As an example, in step 410, the registration API (for example, Register(qID, messageOutboundCallback)) does not specify the message to be sent.
When a message needs to be sent, at step 420, the code segment of the processing task indicates to the task scheduling layer program, by calling the send API (SendMessage(qID, mContext)), that a message is to be issued through the specified outbound queue (the outbound queue indicated by qID), and also indicates the context (mContext) of the message to be sent (for example, the message to be sent itself, or the storage address and length of the message).
In response to the send API being called, the task scheduling layer program caches in the context table the specified outbound queue (qID) and the message context (mContext), and returns immediately (402), so that the code segment of the processing task that called the send API need not wait for the message to be actually sent by the hardware and can handle subsequent operations.
Step 410 and step 420 need not be executed consecutively. For example, the code segment of the processing task executes step 410 in the initialization phase, and executes step 420 when a message needs to be sent. Step 410 and step 420 are nevertheless related (indicated by the dashed arrow): the registration API specifies, for example, the processing function for the outbound queue (qID); the send API specifies the message to be sent on the outbound queue (qID); and the task scheduling layer schedules the processing function specified in the registration API to send, on the outbound queue (qID), the message specified by the send API.
The task scheduling layer program checks the state of the specified outbound queue (see also step 310 of Fig. 3). When the hardware resource is available on the specified outbound queue (qID) (for example, the outbound queue is non-full), the processing function (messageOutboundCallback) registered for the specified outbound queue is called (403, 430), and within the processing function messageOutboundCallback the hardware is operated directly, using the available hardware resource, to send the message (mContext).
In the above example, the send API (SendMessage(qID, mContext)) does not designate a processing function; the task scheduling layer program uses the processing function specified by the registration API (Register(qID, messageOutboundCallback)), calling the specified processing function (messageOutboundCallback) when the specified queue (qID) becomes available.
Step 420 and step 430 need not be executed consecutively, but step 420 indicates that step 430 is to be executed next. After step 420 executes, the task scheduling layer program determines the moment to execute step 430 according to the state of the hardware resource. Optionally, the processing function (messageOutboundCallback) is designated when calling the send API, so that a fixed mapping between the outbound queue and the processing function need not be established through the registration API. Moreover, registering the processing function through the send API establishes a mapping between each message and its processing function, enhancing the flexibility of the message-sending process. Understandably, although a single function name is used in the description above, a different processing function may be specified at each registration.
Still optionally, in response to the send API being called, the task scheduling layer program checks whether the hardware resource to be used (for example, the one indicated by qID) is available. If the hardware resource is available, the processing function may be called directly without caching the context; only when the hardware resource to be used is unavailable is the context cached and the call returned immediately.
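The optional fast path just described can be sketched as follows. Names (queue_available, pending, register, send_message) are assumptions; the sketch shows only the branch: invoke the registered callback directly when the queue is available, and cache the context for the scheduler loop only when it is not.

```python
# Sketch of the send API with an availability fast path.

queue_available = {"q0": True, "q1": False}   # q1 is full, for example
callbacks = {}
pending = {}        # context table, used only on the slow path
log = []

def register(qid, cb):
    callbacks[qid] = cb

def send_message(qid, mcontext):
    if queue_available.get(qid):
        callbacks[qid](mcontext)                      # fast path: no caching
    else:
        pending.setdefault(qid, []).append(mcontext)  # slow path: cache, return

register("q0", lambda m: log.append(("q0", m)))
register("q1", lambda m: log.append(("q1", m)))
send_message("q0", "hello")   # q0 non-full: callback runs at once
send_message("q1", "world")   # q1 full: context cached instead
```

The fast path saves one table write and one later table lookup per message when the hardware happens to be idle, at the cost of the availability check on every send.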
Fig. 5 is a flowchart of receiving a message according to an embodiment of the present application.
To assist the code segment of a processing task in receiving messages, the task scheduling layer program provides APIs such as a registration (register) API. Through the registration API, a processing function that operates the hardware to receive messages is registered with the task scheduling layer program; when the hardware indicates that there is a message in an inbound queue, the task scheduling layer program calls the registered processing function to carry out the message-receiving process. Thus, even to run on a different hardware platform, the differences among hardware platforms can be accommodated by modifying the processing function that operates the hardware to receive messages, without changing the structure of the code segment.
As shown in Fig. 5, to receive a message, in step 510 the code segment of the processing task specifies, through the registration API (for example, Register(qID, messageInboundCallback)), an inbound queue (qID) and the processing function (messageInboundCallback) that operates the hardware to receive messages. Correspondingly, in response, in step 501, the task scheduling layer program records in the processing function table the mapping between the hardware resource (the inbound queue (qID)) and the processing function (messageInboundCallback). In response to the hardware resource (the inbound queue (qID)) becoming available, in step 502 the task scheduling layer program calls the registered processing function (messageInboundCallback). The processing function (messageInboundCallback) belongs to the code segment of the processing task. In step 520, the processing function (messageInboundCallback) is executed.
Optionally, by executing the processing function (messageInboundCallback), the available hardware resource is used directly to receive the message by operating the hardware.
Optionally, the registration API (Register()) indicates to the task scheduling layer program one of several usage modes of the processing function. In one embodiment, in response to a message appearing in the inbound queue (qID), the task scheduling layer program calls the processing function (messageInboundCallback) and assumes that the processing function will necessarily handle a specified number (for example, 1) of the messages that have appeared in the inbound queue (qID). The task scheduling layer program then checks whether messages still remain in the inbound queue (qID) after the specified number of messages have been removed, and decides whether to call the processing function (messageInboundCallback) again. In another embodiment, in response to a message appearing in the inbound queue (qID), the task scheduling layer program calls the processing function (messageInboundCallback), and the processing function itself decides whether to take a message out of the inbound queue (qID). If the processing function (messageInboundCallback) does not take a message out of the inbound queue, the task scheduling layer program will call it again based on messages still remaining to be processed in the inbound queue (qID); if the processing function (messageInboundCallback) does take a message out of the inbound queue (qID), the task scheduling layer program checks whether messages still remain in the inbound queue after the message was removed, and decides whether to call the processing function (messageInboundCallback) again.
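The two usage modes above can be sketched side by side. This is an illustrative rendering under assumed names: in mode A the scheduler itself dequeues one message per callback invocation; in mode B the callback receives the queue and decides whether to dequeue, and the scheduler retries only while progress is being made.

```python
# Sketch of the two inbound-callback usage modes described above.
from collections import deque

def drain_mode_a(queue, callback):
    """Mode A: the scheduler pops exactly one message per callback call."""
    calls = 0
    while queue:
        callback(queue.popleft())
        calls += 1
    return calls

def drain_mode_b(queue, callback):
    """Mode B: the callback itself may or may not dequeue."""
    calls = 0
    while queue:
        before = len(queue)
        callback(queue)
        calls += 1
        if len(queue) == before:   # nothing consumed: stop, retry later
            break
    return calls

got = []
q_a = deque(["m1", "m2"])
drain_mode_a(q_a, got.append)

taken = []
q_b = deque(["x", "y"])
drain_mode_b(q_b, lambda qq: taken.append(qq.popleft()))
```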
To assist the code segment of a processing task in accessing external memory, the task scheduling layer program provides APIs for accessing external memory, including a read-memory (readMemory) API, a write-memory (writeMemory) API, a synchronous read-memory (readMemorySync) API, a synchronous write-memory (writeMemorySync) API, and a memory-copy (copyMemory) API.
Through the API for accessing external memory, a processing function for an event is registered with the task scheduling layer program (for example, a processing function (memoryCallback) that processes the data read from external memory); when the specified event occurs (for example, the external memory operation completes), the task scheduling layer program calls the registered processing function to perform subsequent operations.
Fig. 6 is a flowchart of reading data from external memory according to an embodiment of the present application.
As shown in Fig. 6, in step 610 the code segment of the processing task specifies, through the read-memory API (for example, readMemory(src, dest, memoryCallback)), a source address (src), a destination address (dest) and a processing function (memoryCallback), where the processing function responds to the completion of the external memory operation.
Correspondingly, in step 601, in response to the read-memory API (for example, readMemory(src, dest, memoryCallback)) being called, the task scheduling layer program caches the context of the read-memory operation (for example, the context includes the source address, the destination address, the data size, the specified processing function, etc.), and returns, so that the code segment of the processing task can continue executing other operations.
In step 602, in response to the hardware resource for accessing external memory being available (for example, the memory controller being idle, the access queue being idle, etc.), the task scheduling layer program obtains the cached context, reads the data from external memory, and writes the read data to the destination address. Next, the task scheduling layer program calls the specified processing function (memoryCallback) as the response to the event (read-memory completion). The processing function (memoryCallback) belongs to the code segment of the processing task. Correspondingly, in step 620, the processing function (memoryCallback) is executed.
Optionally, the read-memory API also specifies the size of the data to be read. Optionally, the source address is located in external memory, and the destination address is located in the local memory of, for example, the CPU.
The read-memory API is asynchronous: it returns immediately after being called, without blocking the operation of the CPU. The code of the processing task also designates, through the read-memory API, a processing function (memoryCallback) for performing subsequent processing after the read-memory operation completes (i.e., the data has been written to the destination address).
Optionally, the process of fetching data to the destination address is divided into two phases: issuing a read-memory request to the external memory, and receiving the read data from the external memory. There may be a long delay between the two phases. After performing the phase of issuing the read-memory request to the external memory, the task scheduling layer program caches the context again, and after the hardware resource becomes available (the external memory provides the read data), obtains the cached context, writes the data to the destination address, and calls the specified processing function, so as to shorten the delay.
Optionally, no processing function is designated when calling the read-memory API (for example, readMemory(src, dest)). In that case the task scheduling layer program never calls a processing function after the read-memory operation completes.
Data is read from memory in order to be used. According to an embodiment of the present application, after the task scheduling layer program has read data from memory, it calls the processing function (memoryCallback) to process the read data, so that the code segment of the processing task need not wait or repeatedly poll to learn whether the data has been read from memory. The extra overhead introduced by the asynchronous memory access mode is eliminated, and the programming complexity of the code segment of the processing task is reduced.
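The asynchronous readMemory flow of Fig. 6 can be condensed into a sketch. The names (read_memory, scheduler_pass, the dictionaries standing in for external and local memory) are assumptions; what it shows is that the API call only caches a context (601) and returns, and that a later scheduler pass performs the copy and fires memoryCallback (602/620).

```python
# Minimal sketch of asynchronous readMemory with a completion callback.

external_memory = {0x100: b"payload"}
local_memory = {}
pending_reads = []
events = []

def read_memory(src, dest, memory_callback):
    """Step 601: cache the operation's context and return at once."""
    pending_reads.append((src, dest, memory_callback))

def scheduler_pass():
    """Step 602: when the memory resource is free, perform the reads."""
    while pending_reads:
        src, dest, cb = pending_reads.pop(0)
        local_memory[dest] = external_memory[src]   # data to destination
        cb(dest)                                    # step 620: callback runs

read_memory(0x100, 0x10, lambda dest: events.append(("done", dest)))
assert events == []        # caller was not blocked; nothing has happened yet
scheduler_pass()
```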
Fig. 7 is a flowchart of writing data to external memory according to an embodiment of the present application.
As shown in Fig. 7, in step 710 the code segment of the processing task specifies, through the write-memory API (for example, writeMemory(src, dest, memoryCallback)), a source address (src) and a destination address (dest). The destination address is located in external memory, and the source address is located in the local memory of, for example, the CPU. Optionally, the write-memory API also specifies the size of the data to be written. The write-memory API is asynchronous: it returns immediately after being called, without blocking the operation of the CPU.
In response to the write-memory API (for example, writeMemory(src, dest, memoryCallback)) being called, the task scheduling layer program caches the context of the write-memory operation (for example, the context includes the source address, the destination address, the data size, etc.), and returns (701), so that the code of the processing task can continue executing other operations.
In response to the hardware resource for accessing external memory being available (for example, the memory controller being idle, the access queue being idle, etc.), the task scheduling layer program obtains the cached context and writes the data to external memory.
Optionally, the write-memory API also designates a processing function (memoryCallback) for performing subsequent processing after the write-memory operation completes (i.e., the data has been written to the destination address).
In response to the data having been written to memory, in step 702, the task scheduling layer program calls the specified processing function (memoryCallback) as the response to the event (write-memory completion). Correspondingly, the processing function (memoryCallback) is executed. As an example, within the processing function (memoryCallback), a message is sent to another processor to indicate that the other processor can access the data written to memory. Optionally, a function pointer is carried in the message, and the other processor accesses the data written to memory by calling the function indicated by the function pointer.
Optionally, the process of writing data to a destination address in external memory is divided into two phases: issuing a write-memory request to the external memory, and receiving from the external memory an indication that the write has completed. There may be a long delay between the two phases. After performing the phase of issuing the write-memory request to the external memory, the task scheduling layer program caches the context again, and after the hardware resource becomes available (the external memory indicates that the write has completed), obtains the cached context and calls the specified processing function, so as to shorten the delay.
Optionally, no processing function is designated when calling the write-memory API (for example, writeMemory(src, dest)). In that case the task scheduling layer program never calls a processing function after the write-memory operation completes.
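The write-completion example above, where memoryCallback notifies another processor with a message carrying a function pointer, can be sketched as follows. The mailbox, the message layout, and all names are illustrative assumptions; the point is that the peer learns of the written data only through the completion callback, and dereferences the carried pointer to access it.

```python
# Sketch of write-memory completion notifying a peer via a carried function.

external_memory = {}
mailbox_to_other_cpu = []

def read_shared(addr):
    """Accessor the other processor invokes via the carried pointer."""
    return external_memory[addr]

def memory_callback(dest):
    # step 702: notify the peer with the address and an accessor function
    mailbox_to_other_cpu.append({"addr": dest, "fn": read_shared})

def write_memory_complete(src_data, dest):
    external_memory[dest] = src_data      # precondition: data is written
    memory_callback(dest)                 # completion callback fires

write_memory_complete(b"shared", 0x200)
msg = mailbox_to_other_cpu[0]
value = msg["fn"](msg["addr"])            # the peer dereferences the pointer
```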
Similarly, the code segment of a processing task specifies, through the memory-copy (Copy(src, dest)) API, a source address (src) and a destination address (dest), to copy the data at the source address to the destination address. The source address and the destination address are both located in external memory. The memory-copy operation is handled by the task scheduling layer.
In addition, according to an embodiment of the present application, APIs such as a synchronous read-memory API and a synchronous write-memory API are also provided. After the code segment of the processing task calls the synchronous read-memory API or the synchronous write-memory API, control returns to the code segment of the processing task, which continues execution, only after the task scheduling layer program has completed the memory access operation. At that point, the code segment of the processing task may use the data obtained from memory.
Fig. 8 is a flowchart of using a user event according to an embodiment of the present application.
According to an embodiment of the present application, the code segment of a processing task may register with the task scheduling layer program, through an application programming interface, the processing function that responds to a user event and the trigger condition of the user event. For example, the trigger condition includes the time of triggering the event and/or the number of times the event is triggered. The task scheduling layer program calls the processing function according to the specified trigger condition. The task scheduling layer program provides a registration (register) API for registering events. As shown in Fig. 8, in step 810, the code of the processing task specifies, through the registration API (for example, Register(eventID, userEventCallback)), an identifier (eventID) for the event and the processing function (userEventCallback) that responds to the event. In response, in step 801, the task scheduling layer program records the mapping between the event (for example, by recording the event's identifier (eventID)) and its processing function (userEventCallback).
In one embodiment, the task scheduling layer program also provides a trigger API (for example, TriggerUserEvent(eventID)). In step 820, the code segment of the processing task generates the specified event (indicated by the event identifier (eventID)) by calling the trigger API provided by the task scheduling layer program. Correspondingly, in step 802, the task scheduling layer program caches the context of the specified event (its identifier (eventID), etc.), for example, recording that the event indicated by eventID is in the generated state, and returns, so that the code of the processing task continues to execute.
Through the cached context, the task scheduling layer program obtains the event in the generated state, and in turn obtains the registered processing function (userEventCallback) corresponding to the generated event. In step 803, the task scheduling layer program calls the registered processing function (userEventCallback); correspondingly, in step 830, the processing function (userEventCallback) associated with the user event is executed.
Optionally, the condition for triggering the event may also be specified in the trigger API. In another embodiment, the condition for triggering the event is expressed or implied in the registration API, and the trigger API need not be used. The task scheduling layer program identifies the condition for triggering the event from the cached context, and calls the registered processing function when the condition is met.
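The user-event mechanism of Fig. 8 can be sketched with a trigger condition fixed at registration time. Here the condition is a trigger count, chosen as one concrete example of the "number of times the event is triggered" condition mentioned above; all names are assumptions. The trigger API merely caches the generated state and returns; the callback fires only when a later scheduler pass finds the condition met.

```python
# Sketch of user events: register with a count condition, trigger, dispatch.

registered = {}     # eventID -> (callback, required trigger count)
trigger_counts = {} # eventID -> times triggered so far
fired = []

def register_event(event_id, callback, count=1):
    """Step 810/801: record the event-to-callback mapping and condition."""
    registered[event_id] = (callback, count)
    trigger_counts[event_id] = 0

def trigger_user_event(event_id):
    """Step 820/802: cache the generated state and return immediately."""
    trigger_counts[event_id] += 1

def scheduler_pass():
    """Step 803/830: call callbacks whose condition is met."""
    for event_id, (cb, need) in registered.items():
        if trigger_counts[event_id] >= need:
            trigger_counts[event_id] -= need
            cb(event_id)

register_event("evt7", fired.append, count=2)
trigger_user_event("evt7")
scheduler_pass()          # only one trigger so far: nothing fires
trigger_user_event("evt7")
scheduler_pass()          # condition met: callback runs
```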
Fig. 9 is a flowchart of reading data from external non-volatile memory (NVM, Non-Volatile Memory) according to an embodiment of the present application.
To assist the code segment of a processing task in accessing external NVM, the task scheduling layer program provides APIs for accessing NVM, including a read-NVM (readNVM) API, a write-NVM (writeNVM) API, a set-NVM (SetNVM) API, etc.
In step 910, the code segment of the processing task specifies, through the read-NVM API (for example, readNVM(dest, pba, NVMCallback)), a source address (pba) and a destination address (dest), to read the data indicated by the source address on the NVM and store it at the position indicated by the destination address (dest). Optionally, the read-NVM API also specifies the size of the data to be read. The destination address is located in external memory or in the local memory of the CPU. The read-NVM API (readNVM(dest, pba, NVMCallback)) is asynchronous: it returns immediately after being called, without blocking the operation of the CPU. The code of the processing task also designates, through the read-NVM API (readNVM(dest, pba, NVMCallback)), a processing function (NVMCallback) for performing subsequent processing after the read-NVM operation completes (i.e., the data has been read from the NVM).
In step 901, in response to the read-NVM API (readNVM(dest, pba, NVMCallback)) being called, the task scheduling layer program caches the context of the read-NVM operation (for example, the context includes the source address, the destination address, the data size, the specified processing function, etc.), and returns, so that the code segment of the processing task can continue executing other operations.
In response to the hardware resource for accessing the NVM being available (for example, the media interface controller being idle; a variety of media interface controllers are provided in Chinese patent applications CN201610009789.6, CN201510253428.1, CN201610861793.5, CN201611213755.5 and CN201611213754.0, and media interface controllers of the prior art for accessing NVM such as flash memory may also be used), the task scheduling layer program obtains the cached context and sends to the media interface controller the request to read data from the source address. There is a long delay between the media interface controller receiving the read request and the NVM outputting the data. After performing the phase of sending the read request to the media interface controller, the task scheduling layer program caches the context again, and after the hardware resource becomes available (the media interface controller provides the data read from the NVM) (902), obtains the cached context and calls the specified processing function (920), so as to shorten the delay.
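The two-phase readNVM pattern above, where the context is cached again while the media interface controller works, can be sketched as follows. The request-id bookkeeping and all names are assumptions; the sketch shows phase one (issue the request, re-cache the context) separated from phase two (on controller completion, retrieve the context and call NVMCallback).

```python
# Sketch of two-phase readNVM via a media interface controller.

nvm = {1000: b"flash-page"}     # pba -> stored data
in_flight = {}                  # request id -> re-cached context
completions = []                # controller's completion notifications
results = []

def read_nvm(dest, pba, nvm_callback):
    """Phase 1: issue the read request and cache the context again."""
    req_id = len(in_flight)
    in_flight[req_id] = (dest, pba, nvm_callback)
    completions.append((req_id, nvm[pba]))   # controller will answer later

def on_controller_completion():
    """Phase 2 (902/920): match completions to contexts, fire callbacks."""
    while completions:
        req_id, data = completions.pop(0)
        dest, pba, cb = in_flight.pop(req_id)
        cb(dest, data)

read_nvm(0x40, 1000, lambda dest, data: results.append((dest, data)))
assert results == []            # long controller delay: nothing yet
on_controller_completion()
```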
Optionally, the processing function is used to send the read data to the host through a DMA operation, or to perform data recovery or error handling when reading the data encounters an error.
Optionally, the read-NVM API may specify one or more segments of continuous or discontinuous source addresses and/or destination addresses, so as to obtain data from multiple positions of the NVM.
Fig. 10 is a flowchart of writing data to external non-volatile memory (NVM, Non-Volatile Memory) according to an embodiment of the present application.
In step 1010, the code segment of the processing task specifies, through the write-NVM API (for example, WriteNVM(src, pba, NVMCallback)), a source address (src) and a destination address (pba), to write the data at the source address to the position indicated by the destination address on the NVM. Optionally, the write-NVM API also specifies the size of the data to be written. The source address is located in external memory or in the local memory of the CPU. The write-NVM API is asynchronous: it returns immediately after being called, without blocking the operation of the CPU. The code of the processing task also designates, through the write-NVM API, a processing function (NVMCallback) for performing subsequent processing after the write-NVM operation completes (i.e., the data has been written to the NVM).
In step 1001, in response to the write-NVM API (for example, WriteNVM(src, pba, NVMCallback)) being called, the task scheduling layer program caches the context of the write-NVM operation (for example, the context includes the source address, the destination address, the data size, the specified processing function, etc.) and returns, so that the code segment of the processing task can continue to perform other operations.
In response to the hardware resource for accessing the NVM (for example, a media interface controller) becoming available, the task scheduling layer program obtains the cached context and sends a request to write data to the NVM to the media interface controller. There is a large delay between the media interface controller receiving the data to be written and the write operation completing. Having sent the write request to the media interface controller, the task scheduling layer program caches the context again, and after the hardware resource becomes available once more (the media interface controller indicates that writing the data to the NVM has completed) (1002), retrieves the cached context and calls the specified processing function (NVMCallback) (1020), thereby shortening the delay.
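The asynchronous write flow above can be sketched as follows. This is a minimal illustration under assumed names (TaskSchedulingLayer, write_nvm, on_hardware_available); the actual device caches the context twice (once at the API call and once after issuing the request to the media interface controller), which the sketch collapses into a single completion step.

```python
from collections import deque

class TaskSchedulingLayer:
    def __init__(self):
        # cached contexts of in-flight NVM operations, oldest first
        self.contexts = deque()

    def write_nvm(self, src, pba, callback, size=None):
        # Asynchronous: cache the context of the write-NVM operation
        # (source address, destination address, size, processing function)
        # and return immediately without blocking the caller.
        self.contexts.append({"src": src, "pba": pba,
                              "size": size, "callback": callback})

    def on_hardware_available(self, result):
        # Invoked when the media interface controller reports completion:
        # retrieve the cached context and call the specified function.
        ctx = self.contexts.popleft()
        ctx["callback"](result)

done = []
layer = TaskSchedulingLayer()
layer.write_nvm(src=0x1000, pba=42, callback=lambda r: done.append(r))
# The CPU is free to do other work here; later the controller signals:
layer.on_hardware_available("write-ok")
```

Because write_nvm only caches the context, the calling code segment continues executing immediately; the callback runs only when the completion event is delivered.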
Optionally, the processing function performs error handling when the write operation fails.
Optionally, the write-NVM API may specify one or more segments of contiguous or non-contiguous source addresses and/or destination addresses, so as to write data to multiple locations of the NVM.
Similarly, the code segment of the processing task configures a specified NVM through the set-NVM API (SetNVM) provided by the task scheduling layer program. The task scheduling layer program also provides other APIs for operating the NVM asynchronously.
Embodiment three
According to an embodiment of the present application, multiple tasks are processed in a certain order, cooperatively implementing a function of the task processing system (for example, processing an I/O command that accesses a storage device). To implement the function, messages need to be sent multiple times (for example, through outbound queues) and received multiple times (for example, through inbound queues).
The task scheduling layer program provides multiple types of processing functions to handle the operations of sending and receiving messages. By combining processing functions, a multi-task processing flow is implemented. Different types of processing functions are adapted to different stages of a function of the task processing system.
As an example, to implement the function of processing an I/O command, the process needs to be divided into multiple stages, and different types of functions are provided for each stage. Illustratively, as shown in Figure 11, the process is divided into the following stages: (1) receiving the I/O command (1115); (2) accessing the NVM chip according to the I/O command (1118); (3) obtaining the access result from the NVM chip (1155); (4) indicating that I/O command processing is complete (1158); and optionally (5) receiving a response (1185).
Stage (1) receives a message indicating an I/O command (1115), as the start of the I/O command processing flow. Stage (2) allocates resources for the I/O command (for example, identifying the I/O command and recording the I/O command processing state) and issues a message to access the storage device (1118). Stage (3) receives a message indicating the storage device access result (1155); the message also indicates the resources used by the I/O command (so that the received message is associated with the I/O command of stage (2) that the message indicates). Stage (4) issues a message indicating that I/O command processing is complete (1158). Optionally, stage (5) receives a response message (1185), which indicates that the completion message of stage (4) was correctly received.
Correspondingly, multiple classes of processing functions for receiving messages and multiple classes of processing functions for sending messages are provided. Illustratively, a first class of processing function for receiving messages (PortDriver) receives messages that do not include the function's context, and is used, for example, to receive messages in the initial stage of a function. A second class of processing function for receiving messages (GenericDriver) receives messages that include the function's context, and is used, for example, to receive messages in an intermediate stage of a function. A third class of processing function for receiving messages (MiniDriver) cannot be used alone, but is an extension of the second class of processing function for receiving messages; the received message indicates the third-class processing function for receiving messages to be used. A first class of processing function for sending messages (GenericDriver) handles the message sending process. A second class of processing function for sending messages (GenericDriver + MiniDriver) handles the message sending process and indicates in the message the third-class processing function for receiving messages.
Of course, there may also be other types of processing functions for sending or receiving messages. To implement a function of the task processing system, one or more of the processing functions are used in combination.
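The three receive-handler classes might be modeled as in the following sketch. The message layout (a dict carrying an optional pointer to a MiniDriver-style extension) and the function bodies are illustrative assumptions, not the patent's implementation; the point is that the second-class handler looks up the third-class handler named in the message itself.

```python
def port_driver(msg):
    # First class: the received message carries no function context,
    # suitable for the initial stage of a function.
    return ("start", msg["opcode"])

def generic_driver(msg):
    # Second class: the received message carries the function context.
    result = ("continue", msg["context"])
    # Third class: an extension designated inside the message, e.g. as a
    # pointer added when the message was issued through an outbound queue.
    mini = msg.get("mini_driver")
    if mini is not None:
        return mini(result)
    return result

def recover_data(result):
    # An example third-class (MiniDriver) extension: post-process the
    # second-class handler's result, e.g. recovering erroneous data.
    return ("recovered",) + result

assert port_driver({"opcode": "read"}) == ("start", "read")
assert generic_driver({"context": 7}) == ("continue", 7)
assert generic_driver({"context": 7,
                       "mini_driver": recover_data}) == ("recovered", "continue", 7)
```

Carrying the extension pointer in the message, rather than fixing it at registration time, is what gives per-message flexibility to the receive path.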
Figure 11 is a flow chart of processing an I/O command that accesses a storage device according to an embodiment of the present application. The code segments of the processing tasks and the task scheduling layer program run on one of the CPUs of the task processing system (denoted CPU 0).
In response to a message indicating an I/O command appearing in inbound queue 11A, in step 1110 the task scheduling layer program calls a first-class processing function for receiving messages. The first-class processing function for receiving messages, as one of the tasks or part of a task, identifies the content of the I/O command by obtaining the message. In one embodiment, the task scheduling layer program obtains the message from the inbound queue and passes it to the first-class processing function for receiving messages. In yet another embodiment, the called first-class processing function for receiving messages obtains the message from the inbound queue itself. The task scheduling layer program calls the first-class processing function for receiving messages only when a message appears in the inbound queue, so that the first-class processing function for receiving messages need not poll or wait for the inbound queue to become available.
In response to the I/O command having been received (1115), the code segment of the processing task allocates resources for the I/O command and accesses the storage device (1118). To access the storage device, a message is issued to outbound queue 11B to instruct another CPU or controller of the task processing system to perform a read/write operation on the NVM chip of the storage device. Optionally, the code segment of the processing task calls the send (sendMessage) API to issue the message to the outbound queue. In step 1120, in response to the send (sendMessage) API being called, the task scheduling layer program registers a second-class processing function for sending messages, which is used to send the message instructing the read/write operation on the NVM chip, so that in step 1130 the task scheduling layer program can schedule the second-class processing function for sending messages according to the outbound queue state; thus the second-class processing function for sending messages is executed only when the outbound queue is available, and need not poll or wait for the outbound queue to become available. In an optional embodiment, in response to the send (sendMessage) API being called, and based on the outbound queue being available, the task scheduling layer program directly calls the second-class processing function for sending messages, omitting the process of registering it. In the second-class processing function for sending messages, the outbound queue is operated so as to issue the message through the outbound queue; and optionally, the message also indicates a third-class processing function for receiving messages, for example by adding to the message a pointer to the third-class processing function for receiving messages.
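The two sendMessage strategies just described, registering a send handler for later scheduling versus calling it directly when the outbound queue already has room, can be sketched as follows (the class and method names are assumptions for illustration):

```python
class OutboundQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []            # messages currently in the queue
        self.pending_senders = []  # registered send handlers awaiting room

    def send_message(self, send_handler):
        if len(self.items) < self.capacity:
            # Queue available: call the send handler directly,
            # skipping the registration step.
            send_handler(self)
        else:
            # Queue full: register the second-class send handler so the
            # scheduling layer can run it once the queue has room.
            self.pending_senders.append(send_handler)

    def on_slot_freed(self):
        # A message was consumed; schedule one registered send handler.
        self.items.pop(0)
        if self.pending_senders:
            self.pending_senders.pop(0)(self)

q = OutboundQueue(capacity=1)
q.send_message(lambda oq: oq.items.append("m1"))  # sent immediately
q.send_message(lambda oq: oq.items.append("m2"))  # deferred: queue full
q.on_slot_freed()                                 # m1 consumed, m2 sent
```

Either way, the code segment that called send_message never blocks on the queue; only the send handler itself touches the queue, and only when space exists.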
Another CPU or controller of the task processing system reads/writes the NVM chip, and sends an indication of the read/write result, through an inbound queue, to the CPU (CPU 0) on which the code segments of the processing tasks of Figure 11 run. In response to a message indicating the result of reading/writing the NVM chip appearing in inbound queue 11C, in step 1140 the task scheduling layer program calls a second-class processing function for receiving messages. The second-class processing function for receiving messages, as one of the tasks or part of a task, identifies the result of reading/writing the NVM chip by obtaining the message. Optionally, in the embodiment of Figure 11, the second-class processing function for receiving messages or the task scheduling layer program, according to the indication in the message, also calls a third-class processing function for receiving messages in step 1150. The third-class processing function for receiving messages, as one of the tasks or part of a task, further processes the message, for example recovering data in which errors exist. By indicating in the message that the third-class processing function for receiving messages is to be called, a processing function can be designated for a message when the message is issued through the outbound queue, which improves the flexibility of message processing.
Optionally, in response to having received the message indicating the result of reading/writing the NVM chip (1155), the code segment of the processing task issues a message indicating that I/O command processing is complete (1158), informing, for example, the host that the I/O command processing is complete. Optionally, the code segment of the processing task calls the send (sendMessage) API; in step 1160, in response to the send (sendMessage) API being called, the task scheduling layer program registers a first-class processing function for sending messages, which is used to send the message indicating that I/O command processing is complete, so that in step 1170 the task scheduling layer program can schedule the first-class processing function for sending messages according to the outbound queue state. In the first-class processing function for sending messages, the outbound queue is operated so as to issue the message through outbound queue 11D. The issued message does not include an indication of a third-class processing function for receiving messages. And since I/O command processing is complete, the resources allocated to the I/O command are released.
Optionally, in response to the message indicating that I/O command processing is complete, the sender of the I/O command may also provide a response message, to confirm receipt of the message indicating that I/O command processing is complete. The response message is received on an inbound queue. In response to the response message appearing in inbound queue 11E, in step 1180 the task scheduling layer calls another first-class processing function for receiving messages to obtain the response message (1185).
Optionally, a function of the task processing system may include more or fewer stages. Each stage processes a task by issuing messages to, or receiving messages from, other CPUs/controllers. The implementation of a function starts with receiving a message. In the one or more intermediate stages of implementing the function, sending and receiving messages occur in pairs: when sending a message, the second-class processing function for receiving messages to be used when the message is received may be registered, and an indication of the third-class processing function for receiving messages to be used on reception may be added to the issued message, so that when the message is received the second-class processing function for receiving messages, and optionally the third-class processing function for receiving messages, are called. Optionally, second-class processing functions for receiving messages are registered for one or more inbound queues at initialization, and the indication of the third-class processing function for receiving messages to be used on reception is added to the issued message.
The task scheduling layer program, the various processing functions for receiving messages and the various processing functions for sending messages provide a running environment or framework for implementing the functions of the task processing system. The code segments of the multiple processing tasks that implement a function send messages by calling the send (sendMessage) API, and register processing functions for receiving messages with the receive queues.
Calling the send (sendMessage) API, the processing functions for sending messages and the processing functions for receiving messages never block the execution of the code segments of the processing tasks. The send (sendMessage) API is asynchronous and is used to register a processing function for sending messages. And the task scheduling layer program, according to the outbound/inbound queue state, schedules a processing function for sending messages or a processing function for receiving messages only when the resource is available, so that the execution of the code segments of the processing tasks is not blocked.
The processing functions for sending messages and the processing functions for receiving messages shield the code segments of the processing tasks from the operational details and differences of the hardware (such as outbound/inbound queues), so that the code segments of the processing tasks need not concern themselves with the availability of hardware resources and/or the delays of hardware processing. This simplifies the development of the task processing system, and makes the resulting task processing system easy to port to other hardware platforms.
Figure 12A is a block diagram of a priority weighted round-robin scheduling device according to an embodiment of the present application. A task processing system typically handles a variety of events, for example: a message to be processed appearing in an inbound queue, a message waiting to be issued through an outbound queue, a user-defined timer expiring, memory access, etc. The priority weighted round-robin scheduling device according to the embodiment of Figure 12A schedules these events, assisting or replacing the task scheduling layer program according to embodiments of the present application (see also Figures 2A to 11) in selecting events to be processed and calling the code segments of the processing tasks to handle them.
The scheduled events are divided into two classes: priority events and rotation events. As an example, events such as a message to be processed appearing in an inbound queue, or a message waiting to be issued through an outbound queue, belong to the priority events, while a user-defined timer expiring belongs to the rotation events. Priority events need to be processed before rotation events. Each priority event has a priority, and a priority event with a high priority is processed before a priority event with a low priority. Still as an example, when there is no pending priority event, each of the rotation events is processed in turn. Each rotation event has a weight. The weight indicates the frequency with which the rotation event is processed in turn. For example, an event with a high weight is processed less frequently than an event with a low weight.
Referring to Figure 12A, the priority weighted round-robin scheduling device includes an enable event register 1210, an event status register 1220, a pending event register 1230, a priority event selector 1240, a rotation event selector 1250, a current rotation event enable register 1260, a rotation event weights table 1270, an event arbiter 1280 and an event handling function calling module 1290. As an example, the priority weighted round-robin scheduling device is coupled to a CPU of the task processing system. The CPU may access the enable event register 1210, the event status register 1220, the pending event register 1230 and/or the current rotation event enable register 1260; configure the working strategies of the rotation event selector 1250, the current rotation event enable register 1260 and/or the event arbiter 1280; and update or read the rotation event weights table. The event handling function calling module 1290 of the priority weighted round-robin scheduling device instructs the CPU to call the code segment of a processing task, for example by setting the program counter of the CPU.
Each bit of the enable event register 1210 indicates whether the corresponding event is enabled. An event being enabled means that when the event occurs, it is allowed to be processed. If a bit of the enable event register 1210 indicates that the corresponding event is not enabled, then even if that event occurs, it is not processed. By setting the enable event register 1210, the CPU masks one or more events so that those events are not processed.
Each bit of the event status register 1220 indicates whether the corresponding event has occurred. For example, when a message to be processed appears in a queue, in response, the bit of the event status register 1220 corresponding to that queue is set. As another example, the CPU sets the event status register 1220 to indicate that one or more events have occurred.
Each bit of the pending event register 1230 indicates whether the corresponding event has occurred and is enabled. Thus, the priority weighted round-robin scheduling device processes only events that have occurred and are enabled, and does not process events that have not occurred, or that have occurred but are not enabled. For example, each bit of the pending event register 1230 indicates the result of ANDing the corresponding bits of the enable event register 1210 and the event status register 1220.
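The AND relation between the three registers can be illustrated with a small bit-level example (the 4-bit width here is for brevity only; the text's example uses 40 bits):

```python
# Each pending bit is the AND of the corresponding enable bit and
# status bit, so only events both enabled and occurred are pending.
ENABLE  = 0b1011   # events 0, 1 and 3 are enabled
STATUS  = 0b0110   # events 1 and 2 have occurred
PENDING = ENABLE & STATUS
# Event 1 occurred and is enabled -> pending.
# Event 2 occurred but is masked  -> not pending.
# Events 0 and 3 are enabled but did not occur -> not pending.
assert PENDING == 0b0010
```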
As an example, the task processing system processes 40 kinds of events; the enable event register 1210, the event status register 1220 and the pending event register 1230 each include 40 bits, with each bit corresponding one-to-one with the 40 kinds of events processed.
The priority event selector 1240 is coupled to the bits of the pending event register 1230 that indicate priority events, and outputs an indication of one of the priority events, thereby selecting that priority event. As an example, the priority event selector 1240 selects, according to the set bits of the pending event register 1230 that indicate priority events (priority events that have occurred and are enabled), the event with the highest priority, and supplies it to the event arbiter 1280.
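A possible selection rule for the priority event selector is sketched below. The convention that lower bit positions carry higher priority is an illustrative assumption; the patent does not fix a bit ordering.

```python
def select_priority_event(pending_bits):
    # Return the index of the highest-priority pending event, taking
    # the lowest set bit as highest priority (assumed convention).
    if pending_bits == 0:
        return None  # no priority event has occurred and is enabled
    # Isolate the lowest set bit, then convert it to a bit index.
    return (pending_bits & -pending_bits).bit_length() - 1

assert select_priority_event(0b0000) is None
assert select_priority_event(0b0110) == 1   # bit 1 outranks bit 2
```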
The rotation event selector 1250 is coupled to the bits of the pending event register 1230 that indicate rotation events, and outputs an indication of one of the rotation events, thereby selecting that rotation event. As an example, the rotation event selector 1250 processes the multiple rotation events in turn, or processes the multiple rotation events in turn at specified frequencies.
According to the embodiment of Figure 12A, the rotation event selector 1250, according to the indication of the current rotation event enable register 1260, selects one of the events that have occurred and are enabled by the current rotation event enable register 1260, and supplies it to the event arbiter 1280. The current rotation event enable register 1260 sets the bit corresponding to each rotation event according to the weight of the rotation event, to indicate that the rotation event is enabled. In one example, the rotation events have the same weight; after the priority weighted round-robin scheduler is initialized, the current rotation event enable register 1260 indicates that every rotation event is enabled. After the event arbiter 1280 selects a rotation event for processing, it informs the current rotation event enable register 1260. In response, the current rotation event enable register 1260 clears the bit corresponding to the processed rotation event, so that the processed rotation event temporarily cannot be processed again. Further, in response to all rotation events corresponding to the bits of the current rotation event enable register 1260 having been processed, all bits of the current rotation event enable register 1260 having been cleared, the current rotation event enable register 1260 is once again set to indicate that every rotation event is enabled.
The event arbiter 1280 selects, from the events indicated by the priority event selector 1240 and the rotation event selector 1250, one of them for processing. In one example, to process priority events first, when both the priority event selector 1240 and the rotation event selector 1250 indicate pending events, the event arbiter 1280 selects the event indicated by the priority event selector 1240; only when the priority event selector 1240 indicates no pending event and the rotation event selector 1250 indicates a pending event is the event indicated by the rotation event selector 1250 processed. Further, the event arbiter 1280 provides a priority inversion or anti-starvation mechanism: in response to the event indicated by the rotation event selector 1250 not being processed for a long time, the event indicated by the rotation event selector 1250 is temporarily processed in preference to the event indicated by the priority event selector 1240.
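The arbitration rule, including the temporary anti-starvation override, might look like this sketch. The starvation detection itself (how long a rotation event has waited) is not specified here and is represented by a flag:

```python
def arbitrate(priority_event, rotation_event, rotation_starved=False):
    # Anti-starvation: a long-waiting rotation event temporarily takes
    # precedence over the pending priority event.
    if rotation_event is not None and rotation_starved:
        return rotation_event
    # Normal rule: a pending priority event is processed first; the
    # rotation event is chosen only when no priority event is pending.
    if priority_event is not None:
        return priority_event
    return rotation_event

assert arbitrate("P", "R") == "P"
assert arbitrate(None, "R") == "R"
assert arbitrate("P", "R", rotation_starved=True) == "R"
```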
The event arbiter 1280 supplies the selected event to the event handling function calling module 1290, which, according to the event, calls the code segment of the corresponding processing task to handle the event. For example, the event handling function calling module 1290, according to the event index provided by the event arbiter 1280, accesses the processing function table (see Figure 3) to obtain and call the code segment of the processing task corresponding to the event. As another example, the code segment of the processing task is indicated in the message of the inbound queue; the event handling function calling module 1290, according to the event index provided by the event arbiter 1280, takes the designated field of the message to be processed of the indicated inbound queue as the entry address of the code segment of the processing task, and calls the code segment of that processing task. Still optionally, the message of the inbound queue indicates, through a designated field, the mechanism by which the entry address of the code segment of the processing task is provided; whether the entry address is provided by the processing function table or by the designated field of the message of the inbound queue is thereby selected, and the event handling function calling module 1290 sets the designated entry address into the program counter of the CPU. In still another example, the event handling function calling module 1290 obtains the entry address of the code segment of a processing task from the processing function table, and after the code segment of that processing task has been called, also obtains the entry address of the code segment of another processing task from the designated field of the message of the inbound queue and calls the code segment of that other processing task.
Optionally, the CPU sets the position of the entry address of the code segment of the processing task within the message of the inbound queue (for example, an offset relative to the start address of the message). Still optionally, the message sender describes, at a designated position of the message header of the inbound queue, whether the message carries the entry address of the code segment of the processing task and the position of that entry address within the message.
As another example, the code segment of the processing task is indicated in the message of the outbound queue; the event handling function calling module 1290, according to the event index provided by the event arbiter 1280, takes the designated field of the message to be processed of the indicated outbound queue as the entry address of the code segment of the processing task, and calls the code segment of that processing task. Optionally, the CPU sets the position of the entry address of the code segment of the processing task within the message of the outbound queue (for example, an offset relative to the start address of the message). Still optionally, the message sender describes, at a designated position of the message header of the outbound queue, whether the message carries the entry address of the code segment of the processing task and the position of that entry address within the message.
As another example, the rotation event weights table component 1270 provides the current rotation event enable register 1260 with the events that should be enabled. Figures 12B and 12C are rotation event weights tables stored by the rotation event weights table component 1270. Referring to Figure 12B, there are 3 rotation events to be processed (denoted E1, E2 and E3 respectively), where rotation event E1 has the lowest weight, rotation event E2 the second lowest, and rotation event E3 the highest. In Figure 12B, the letter "Y" indicates that the event of the corresponding column header is enabled, and the letter "N" indicates that the event of the corresponding column header is disabled. Events E1, E2 and E3 are processed round-robin. Event E1 is processed every round, event E2 is processed once every other round, and event E3 is processed once every 3 rounds. The rotation event weights table component 1270 supplies one row of Figure 12B at a time to the current rotation event enable register 1260; after one round has been processed, the arrow moves to the next row of Figure 12B. In each round of event processing, all 3 rotation events have the opportunity to be processed at most once; if, when a rotation event is about to be processed, the corresponding event has not occurred, it is not processed again in that round.
As an example, after initialization, the arrow points to row 1 of the rotation event weights table of Figure 12B. In that round, row 1 of Figure 12B is supplied to the current rotation event enable register 1260, indicating that events E1, E2 and E3 are all enabled. Still as an example, the pending event register 1230 indicates that event E1 has occurred, event E2 has not occurred and event E3 has occurred. The rotation event selector 1250 selects one of the events according to whether the events have occurred and the enabled-event state indicated by the current rotation event enable register 1260. For example, since event E1 has occurred and is enabled, the rotation event selector 1250 selects event E1 and supplies it to the event arbiter 1280. After the event arbiter 1280 selects event E1 for processing, it informs the current rotation event enable register 1260, which clears the enabling of event E1; thus even if event E1 occurs again in the current round, the rotation event selector 1250 does not select event E1 again. Still as an example, next, since event E2 has not occurred, the rotation event selector selects event E3 (which is enabled) and supplies it to the event arbiter 1280. After the event arbiter 1280 selects event E3 for processing, it informs the current rotation event enable register 1260, which clears the enabling of event E3. And in response to the rotation event selector 1250 having made one round of selection over all 3 kinds of events (events E1 and E3 were processed, and event E2 did not occur), the arrow of the rotation event weights table component 1270 moves to the next row (for example, row 2), and the content of row 2 of Figure 12B is supplied to the current rotation event enable register. In row 2 of Figure 12B, only event E1 is enabled, while events E2 and E3 are not enabled. Thus each row of Figure 12B embodies the weight of each event, so that in every round of rotation event processing event E1 is enabled, event E2 is enabled once every other round, and event E3 is enabled once every 3 rounds. An event that is enabled in the current round is processed if it occurs. An event that is enabled but does not occur loses its chance of being processed in the current round.
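The behavior of the Figure 12B weights table can be simulated as follows. The 6-row table is a reconstruction consistent with the stated weights (E1 enabled every round, E2 every other round, E3 every third round), not a copy of the patent's actual figure:

```python
# One row per round; each tuple holds the (E1, E2, E3) enable bits
# supplied to the current rotation event enable register for that round.
WEIGHTS_TABLE = [
    (True, True,  True),    # round 0: all three may be processed
    (True, False, False),   # round 1: only E1
    (True, True,  False),   # round 2: E1 and E2
    (True, False, True),    # round 3: E1 and E3
    (True, True,  False),   # round 4: E1 and E2
    (True, False, False),   # round 5: only E1
]

def enabled_rounds(event_index, rounds):
    """Rounds in which the given event is enabled; the row pointer (the
    'arrow') wraps around to row 0 after the last row."""
    n = len(WEIGHTS_TABLE)
    return [r for r in range(rounds) if WEIGHTS_TABLE[r % n][event_index]]

assert enabled_rounds(0, 6) == [0, 1, 2, 3, 4, 5]   # E1: every round
assert enabled_rounds(1, 6) == [0, 2, 4]            # E2: every other round
assert enabled_rounds(2, 6) == [0, 3]               # E3: every third round
```

The 6-row period is the least common multiple of the three intervals, so the pattern repeats cleanly after the arrow wraps around.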
Figure 12C is another example of a rotation event weights table. There are 4 rotation events to be processed (denoted E0, E1, E2 and E3 respectively). Event E0 has the lowest weight and has the opportunity to be processed every round. Event E1's weight is the second lowest, with the opportunity to be processed once every other round. Event E2's weight is the second highest, with the opportunity to be processed once every 3 rounds. Event E3 has the highest weight, with the opportunity to be processed once every 4 rounds. After initialization, row 1 of Figure 12C is supplied to the current rotation event enable register 1260, so that events E0 and E1 are enabled while events E2 and E3 are disabled. In one round, if events E0 and E2 occur, the rotation event selector 1250 supplies event E0 to the event arbiter 1280; event E2 is not processed because it is not enabled, and event E1, which did not occur, loses its chance of being processed in this round. In response to event E0 being processed, the bit of the current rotation event enable register 1260 corresponding to event E0 is cleared (indicating that event E0 is disabled). In response to all rotation events having been checked in the current round (although only event E0 was processed), the arrow of the rotation event weights table points to the next row (row 2), and the content of row 2 is supplied to the current rotation event enable register 1260.
If the arrow points to the last row of the rotation event weights table, then in the next round the arrow wraps around to row 1.
Optionally, the CPU sets the rotation event weights table of, for example, Figure 12B or Figure 12C.
Figure 13 is a block diagram of a priority weighted round-robin scheduling device according to another embodiment of the application. The priority weighted round-robin scheduling device according to the embodiment of Figure 13 schedules a variety of events, assisting or replacing the task scheduling layer program according to embodiments of the present application (see also Figures 2A to 11) in selecting events to be processed and calling the code segments of the processing tasks to handle them.
Similarly to the priority weighted round-robin scheduling device shown in Figure 12, the priority weighted round-robin scheduling device of Figure 13 includes an enable event register 1310, an event status register 1320, a pending event register 1330, a priority event selector 1340, a rotation event selector 1350, a current rotation event enable register 1360, a rotation event weights table 1370, an event arbiter 1380 and an event handling function calling module 1390.
The priority weighted round-robin scheduling device of Figure 13 further includes a processing function table component 1392 coupled to the event handling function calling module 1390. Optionally, the event handling function calling module 1390 is additionally coupled to the message storage component 1394 of the inbound queues. Optionally, the event handling function calling module 1390, according to the event indicated by the event arbiter, obtains the entry address of the code segment of the processing task from the processing function table, and calls the code segment of the corresponding processing task. Still optionally, the event handling function calling module 1390, according to the event indicated by the event arbiter (for example, the event indicates that an inbound queue has a message to be processed), accesses the message storage component 1394 of the inbound queue corresponding to the event, obtains the entry address of the code segment of the processing task from the message, and calls the code segment of the corresponding processing task. For an event indicating that a message is waiting to be issued through an outbound queue and that the outbound queue is not full, the event handling function calling module 1390 obtains the code segment of the processing task corresponding to the event and, optionally, also obtains the message to be issued and supplies it to the code segment of the processing task. Still optionally, the message to be issued is queried and/or obtained by the code segment of the processing task.
Optionally, after the code segment of the processing task is invoked, the event processing function invocation module 1390 is temporarily switched off by the switch-off component 1396, so that events are processed sequentially and the code segment of the processing task is not interrupted while processing an event. Before the code segment of the processing task finishes, the event processing function invocation module 1390 is switched on again, so as to allow the event processing function invocation module 1390 to invoke the code segments of other processing tasks.
As an example, the priority weighted round-robin scheduler is coupled to the CPU of the task processing system. The CPU may access the enabled event register 1310, the event status register 1320, the pending event register 1330 and/or the current rotation event enable register 1360, and configure the working policies of the rotation event selector 1350, the current rotation event enable register 1360, the rotation event weight table 1370 and/or the event arbiter 1380.
The rotation event selector 1350 is coupled to the pending event register 1330, and outputs an indication of one of the rotation events so as to select that rotation event. According to the indication of the current rotation event enable register 1360, the rotation event selector 1350 selects one event from the events that have occurred and are enabled by the current rotation event enable register 1360, and supplies it to the event arbiter 1380. After the event arbiter 1380 selects a rotation event for processing, it informs the current rotation event enable register 1360. In response, the current rotation event enable register 1360 clears the bit corresponding to the processed rotation event. Further, in response to the rotation events corresponding to all enabled bits of the current rotation event enable register 1360 having been processed (or the corresponding events not occurring), the current rotation event enable register 1360 is set with the value of the rotation event weight register 1374 by the rotation event enable setting component 1372.
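The selection step can be sketched with bit masks (an illustrative software model, not the hardware itself): the selector masks the pending-event bits with the current rotation event enable bits and picks one set bit, here the lowest-numbered one as a stand-in for whatever tie-breaking rule the selector applies.

```python
def select_rotation_event(pending: int, enable: int):
    """Pick one event that has occurred (its pending bit is set) and is
    enabled by the current rotation event enable register.
    Returns the event number, or None when no enabled event is pending.
    The lowest-numbered ready event is chosen as an illustrative tie-break."""
    ready = pending & enable
    if ready == 0:
        return None
    return (ready & -ready).bit_length() - 1  # index of the lowest set bit
```

For example, with pending events `0b0110` and enable mask `0b0100`, only event 2 is both pending and enabled, so it is selected.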
The rotation event weight table 1370 records the weight of each kind of rotation event. As an example, there are 3 rotation events (denoted E0, E1 and E2) to be processed. Event E0 has the lowest weight and has the opportunity to be processed in every round. Event E1 has the next lowest weight and has the opportunity to be processed every 1 round apart. Event E2 has the highest weight and has the opportunity to be processed every 2 rounds apart. The rotation event weight table 1370 records the weight value of each kind of rotation event; for example, corresponding to events E0, E1 and E2, the weight values are 0, 1 and 2, respectively. In response to, for example, event E0 being processed (or its processing being completed) as indicated by the event arbiter 1380, the rotation event enable setting component 1372 obtains the weight value (0) corresponding to event E0 from the rotation event weight table 1370 and, according to the weight value, sets event E0 in one of the rotation event weight registers 1374, 1376 and 1378, so as to indicate the next opportunity at which the event is enabled. In the example of Figure 13, the rotation events have 3 kinds of weight values, and the same number of rotation event weight registers as weight values is provided accordingly. Each rotation event weight register includes multiple bits, and each bit corresponds to one of the rotation events. As an example, in response to the weight of the processed event (E0) being 0, the bit corresponding to event E0 is set in the rotation event weight register 1374; in response to the weight of the processed event E1 being 1, the bit corresponding to event E1 is set in the rotation event weight register 1376; in response to the weight of the processed event E2 being 2, the bit corresponding to event E2 is set in the rotation event weight register 1378.
In response to the rotation event selector 1350 having carried out a round of selection over all 3 kinds of events (events that are enabled and have occurred are processed; events that are not enabled or have not occurred are not processed), the rotation event enable setting component 1372 sets the value of the rotation event weight register 1374 into the current rotation event enable register 1360, sets the value of the rotation event weight register 1376 into the rotation event weight register 1374, and sets the value of the rotation event weight register 1378 into the rotation event weight register 1376; the rotation event weight register 1378 is then cleared. Optionally, the rotation event weight registers 1374, 1376 and 1378 are implemented as shift registers, so that the value of the rotation event weight register 1376 is delivered to the rotation event weight register 1374 by shifting, and at the same time the value of the rotation event weight register 1378 is delivered to the rotation event weight register 1376 by shifting.
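The register chain 1374/1376/1378 can be modeled as a small shift chain in software (a sketch under the assumptions above; the class name and the list representation are illustrative): marking a processed event sets its bit in the register whose index equals the event's weight, and at the end of each round the chain shifts one position toward the current rotation event enable register.

```python
class WeightRegisterChain:
    """Software model of rotation event weight registers 1374/1376/1378
    implemented as a shift chain; regs[0] plays the role of register 1374."""
    def __init__(self, depth=3):
        self.regs = [0] * depth

    def mark_processed(self, event, weight):
        # Set the bit of the processed event in the register selected
        # by the event's weight from the rotation event weight table.
        self.regs[weight] |= 1 << event

    def end_of_round(self):
        # Shift the chain: the first register becomes the new value of the
        # current rotation event enable register, each register takes the
        # value of the one after it, and the last register is cleared.
        head = self.regs.pop(0)
        self.regs.append(0)
        return head
```

With weights 0, 1 and 2 for E0, E1 and E2, three successive end-of-round shifts re-enable E0 first, then E1, then E2, matching the example above.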
As another example, there are 4 rotation events (denoted E0, E1, E2 and E3, respectively) to be processed. Event E0 has the lowest weight and has the opportunity to be processed in every round. Event E1 has the next lowest weight and has the opportunity to be processed every 1 round apart. Event E2 has the next highest weight and has the opportunity to be processed every 2 rounds apart. Event E3 has the highest weight and has the opportunity to be processed every 3 rounds apart. Since the events have 4 kinds of weights, or the weights indicate 4 kinds of rotation intervals, 4 sequentially arranged rotation event weight registers (denoted R0, R1, R2 and R3, respectively) are provided. After each round of event processing, the rotation event enable setting component 1372 sets the rotation event weight register R0 into the current rotation event enable register 1360, and sets each earlier rotation event weight register with the value of the rotation event weight register arranged after it (for example, register R1 is set into register R0, and register R2 is set into register R1). During each round of event processing, in response to an event (for example, E1) being processed or its processing being completed, the bit corresponding to the event (E1) in the current rotation event enable register 1360 is cleared and, according to the weight of the event (E1) indicated by the rotation event weight table 1370, the bit indicating the event (E1) is set in the rotation event weight register (R1) of the same ordinal as the weight.
As yet another example, there are 4 rotation events (denoted E0, E1, E2 and E3, respectively) to be processed. Event E0 has the lowest weight and has the opportunity to be processed in every round. Event E1 has the next lowest weight and has the opportunity to be processed every 1 round apart. Event E2 has the next highest weight and has the opportunity to be processed every 2 rounds apart. Event E3 has the highest weight and has the opportunity to be processed every 5 rounds apart. Since the weights indicate 5 kinds of rotation intervals, 5 sequentially arranged rotation event weight registers (denoted R0, R1, R2, R3 and R4, respectively) are provided. During each round of event processing, in response to an event (for example, E3) being processed or its processing being completed, the bit corresponding to the event (E3) in the current rotation event enable register 1360 is cleared and, according to the weight of the event (E3) indicated by the rotation event weight table 1370, the bit indicating the event (E3) is set in the rotation event weight register (R4) of the same ordinal as the weight.
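The interval behavior described in these examples can be checked with a small simulation (a sketch; `simulate` is an illustrative name, and every event is assumed to be always pending): an event of weight w, once processed, is re-enabled only after w intervening rounds.

```python
def simulate(weights, rounds):
    """Simulate the weighted round-robin over the given number of rounds.
    `weights` maps event id -> weight; all events start enabled and are
    assumed to be pending in every round. Returns the list of events
    processed in each round."""
    depth = max(weights.values()) + 1
    regs = [0] * depth                      # rotation event weight register chain
    enable = sum(1 << e for e in weights)   # current rotation event enable register
    history = []
    for _ in range(rounds):
        processed = [e for e in weights if (enable >> e) & 1]
        for e in processed:                 # re-arm each processed event in the
            regs[weights[e]] |= 1 << e      # register selected by its weight
        history.append(processed)
        enable = regs.pop(0)                # end of round: shift the chain
        regs.append(0)
    return history
```

With weights {E0: 0, E1: 1, E2: 2}, E0 is processed in every round, E1 in every other round, and E2 in every third round, matching the first example above.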
In one embodiment according to the application, the appearance of a message to be processed in the inbound queue is defined as a priority event, while the events registered by the user belong to rotation events. Optionally, when the task processing system is initialized, the CPU sets the correspondence between the inbound queue and/or the user-registered events and the bits of the event status register 1220. Still further optionally, the CPU also sets the correspondence between each event and the code segment of the processing task and its entry address. For example, a processing function (also referred to as the code segment for processing a task) is registered for one or more events through a Register API. In response, the event and the associated processing function are recorded in the processing function table.
Still further optionally, the CPU sets the priority of the priority events and/or the weight of the rotation events.
During operation of the task processing system, in response to a message to be processed appearing in the inbound queue, the corresponding bit of the event status register 1220 (see Figure 12) is set. In response to a user-registered event being triggered (for example, a timer expiring), the corresponding bit of the event status register 1220 is set. The priority weighted round-robin scheduler schedules the multiple events, selecting one of the multiple events that have appeared, and the program counter of the CPU is set to the entry address of the code segment of the processing task corresponding to the selected event. After the selected event has been processed, the priority weighted round-robin scheduler selects another event for processing. Thus the CPU sets the scheduling policy for the multiple events without having to intervene in the scheduling process, which offloads the CPU.
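The arbitration between priority events and rotation events can be sketched as follows (illustrative names; the real device operates on registers 1310-1380 in hardware, and the lowest index stands in for the highest priority): a pending priority event always wins, otherwise an enabled rotation event is chosen.

```python
def arbitrate(pending, priority_mask, rotation_enable):
    """Return the event to hand to the CPU: a pending priority event if any
    (lowest index as a stand-in for highest priority), otherwise a pending
    rotation event enabled in the current round, otherwise None."""
    prio = pending & priority_mask
    if prio:
        return (prio & -prio).bit_length() - 1
    rot = pending & ~priority_mask & rotation_enable
    if rot:
        return (rot & -rot).bit_length() - 1
    return None
```

In this model, a pending inbound-queue message (a priority event) preempts any rotation event, while rotation events are further gated by the current rotation event enable bits.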
In another embodiment according to the application, the appearance of a message to be processed in the inbound queue is defined as a priority event, a message waiting to be issued through the outbound queue while the outbound queue is not full is defined as a priority event, and the events registered by the user belong to rotation events. Optionally, when the task processing system is initialized, the CPU sets the correspondence between the inbound queue, the outbound queue and/or the user-registered events and the bits of the enabled event register 1210 / event status register 1220. During operation of the task processing system, in response to a message to be processed appearing in the inbound queue, the corresponding bit of the event status register 1220 (see Figure 12A) is set; in response to the outbound queue being not full, the corresponding bit of the enabled event register 1210 is set; in response to a message waiting to be issued through the outbound queue, the corresponding bit of the event status register 1220 is set; and in response to a user-registered event being triggered (for example, a timer expiring or the user-registered event having been processed), the corresponding bit of the event status register 1220 is set. The priority weighted round-robin scheduler schedules the multiple events, selects one of the multiple events that have appeared, and the program counter of the CPU is set to the entry address of the code segment of the processing task corresponding to the selected event.
The above description is merely specific embodiments, but the protection scope of the present invention is not limited thereto. Any person familiar with the art can easily conceive of changes or replacements within the technical scope disclosed by the present invention, and these should all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention should be based on the protection scope of the claims.

Claims (10)

1. A scheduler, comprising: a pending event register, a rotation event selector and a current rotation event enable register;
the pending event register indicates one or more events to be scheduled;
the rotation event selector is coupled to the bits of the pending event register that correspond to one or more rotation events, and is coupled to the current rotation event enable register;
the current rotation event enable register indicates the one or more rotation events that can be scheduled; and
the rotation event selector selects one of the rotation events to be scheduled according to the indication of the current rotation event enable register.
2. The scheduler according to claim 1, further comprising an event processing function invocation component;
the event processing function invocation component invokes a processing function according to the indication of the rotation event selector to process the scheduled rotation event.
3. The scheduler according to claim 1 or 2, wherein
the current rotation event enable register is updated in response to the one or more schedulable rotation events indicated by the current rotation event enable register all having been selected.
4. The scheduler according to claim 3, further comprising a rotation event weight table component;
the rotation event weight table component sequentially records multiple values; and
to update the current rotation event enable register, a value is acquired from the rotation event weight table component in order, and the current rotation event enable register is updated with the acquired value.
5. The scheduler according to claim 3, further comprising a rotation event weight table component;
the rotation event weight table component records the weight of one or more rotation events.
6. The scheduler according to claim 5, further comprising a rotation event enable setting component and one or more rotation event weight registers;
in response to a first rotation event being processed, the rotation event enable setting component selects, according to the weight of the first rotation event retrieved from the rotation event weight table component, one of the one or more rotation event weight registers to record that the first rotation event is to be enabled.
7. The scheduler according to claim 6, wherein
the one or more rotation event weight registers are ordered, and one of the one or more rotation event weight registers is selected according to the ordinal, within the set of rotation event weights, of the weight of the first rotation event retrieved from the rotation event weight table component.
8. The scheduler according to claim 7, wherein
in response to the one or more schedulable rotation events indicated by the current rotation event enable register all having been selected, the current rotation event enable register is updated with the value of the first-ordered rotation event weight register among the one or more rotation event weight registers.
9. The scheduler according to one of claims 1-8, further comprising a priority event selector;
the priority event selector is coupled to the bits of the pending event register that correspond to one or more priority events; and
the priority event selector selects, according to the priorities of the one or more priority events, the event of the highest priority to be scheduled.
10. A method for sending a message, comprising:
registering a processing function for sending messages through a queue;
indicating the message to be sent;
invoking the processing function in response to the indication, the processing function sending the message through the queue.
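Claim 10 can be read as a register/indicate/invoke pattern. The sketch below is an illustrative software reading of the claim, not the patent's implementation; the class and function names are invented, and a `deque` stands in for the queue.

```python
from collections import deque

class MessageQueue:
    """Sketch of claim 10: register a processing function for sending
    messages through a queue, then indicate each message to be sent."""
    def __init__(self):
        self.queue = deque()
        self.processing_function = None

    def register(self, processing_function):
        # Step 1: register the processing function for this queue.
        self.processing_function = processing_function

    def indicate(self, message):
        # Steps 2-3: indicate the message to be sent; the registered
        # processing function is invoked and sends it through the queue.
        self.processing_function(message, self.queue)

def send(message, queue):
    """Illustrative processing function: issue the message through the queue."""
    queue.append(message)
```

Here `register` corresponds to the Register API mentioned in the description, and `indicate` triggers the invocation of the registered processing function.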
CN201710761856.4A 2017-08-30 2017-08-30 priority weighted round robin scheduler Active CN109426562B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710761856.4A CN109426562B (en) 2017-08-30 2017-08-30 priority weighted round robin scheduler


Publications (2)

Publication Number Publication Date
CN109426562A true CN109426562A (en) 2019-03-05
CN109426562B CN109426562B (en) 2023-10-13

Family

ID=65503997

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710761856.4A Active CN109426562B (en) 2017-08-30 2017-08-30 priority weighted round robin scheduler

Country Status (1)

Country Link
CN (1) CN109426562B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111695672A (en) * 2019-03-14 2020-09-22 百度(美国)有限责任公司 Method for improving AI engine MAC utilization rate
CN114326560A (en) * 2021-11-18 2022-04-12 北京华能新锐控制技术有限公司 Method and device for reducing CPU load of home-made PLC of wind turbine generator

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110225583A1 (en) * 2010-03-12 2011-09-15 Samsung Electronics Co., Ltd. Virtual machine monitor and scheduling method thereof
CN103136045A (en) * 2011-11-24 2013-06-05 中兴通讯股份有限公司 Dispatching method and device of virtualization operating system
CN104243274A (en) * 2013-06-14 2014-12-24 亿览在线网络技术(北京)有限公司 Message processing method and message center system
CN106844250A (en) * 2017-02-14 2017-06-13 山东师范大学 The bus arbiter and referee method of a kind of mixed scheduling


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ALOK SRIVASTAVA: "priority specific dispatching including round robin" *
耿登田: "Analysis of RTOS response capability in embedded systems" *
郝继锋: "Design and implementation of a multi-core hybrid partition scheduling algorithm" *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111695672A (en) * 2019-03-14 2020-09-22 百度(美国)有限责任公司 Method for improving AI engine MAC utilization rate
CN111695672B (en) * 2019-03-14 2023-09-08 百度(美国)有限责任公司 Method for improving MAC utilization rate of AI engine
CN114326560A (en) * 2021-11-18 2022-04-12 北京华能新锐控制技术有限公司 Method and device for reducing CPU load of home-made PLC of wind turbine generator
CN114326560B (en) * 2021-11-18 2024-02-09 北京华能新锐控制技术有限公司 Method and device for reducing CPU load of domestic PLC of wind turbine generator

Also Published As

Publication number Publication date
CN109426562B (en) 2023-10-13

Similar Documents

Publication Publication Date Title
US9098462B1 (en) Communications via shared memory
CN100499565C (en) Free list and ring data structure management
CN100367257C (en) SDRAM controller for parallel processor architecture
CN103257933B (en) The method, apparatus and system that transaction memory in out-of-order processors performs
CN101320360B (en) Message queuing system for parallel integrated circuit and related operation method
US5530897A (en) System for dynamic association of a variable number of device addresses with input/output devices to allow increased concurrent requests for access to the input/output devices
US7694310B2 (en) Method for implementing MPI-2 one sided communication
CN104025185B (en) Mechanism for preloading caching using GPU controllers
CN108475194A (en) Register communication in on-chip network structure
CN101243396B (en) Method and apparatus for supporting universal serial bus devices in a virtualized environment
CN103999051A (en) Policies for shader resource allocation in a shader core
JP5309703B2 (en) Shared memory control circuit, control method, and control program
CN102141905A (en) Processor system structure
CN104937564B (en) The data flushing of group form
CN108369562A (en) Intelligently encoding memory architecture with enhanced access scheduling device
CN101369224A (en) Providing quality of service via thread priority in a hyper-threaded microprocessor
CN107391400A (en) A kind of memory expanding method and system for supporting complicated access instruction
EP0374338B1 (en) Shared intelligent memory for the interconnection of distributed micro processors
US5448708A (en) System for asynchronously delivering enqueue and dequeue information in a pipe interface having distributed, shared memory
CN105389134B (en) A kind of flash interface control method and device
EP1760580A1 (en) Processing operation information transfer control system and method
CN109144749A (en) A method of it is communicated between realizing multiprocessor using processor
CN103019655A (en) Internal memory copying accelerating method and device facing multi-core microprocessor
CN108958903A (en) Embedded multi-core central processing unit method for scheduling task and device
CN109426562A (en) Priority weighted robin scheduling device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100192 room A302, building B-2, Dongsheng Science Park, Zhongguancun, 66 xixiaokou Road, Haidian District, Beijing

Applicant after: Beijing yihengchuangyuan Technology Co.,Ltd.

Address before: 100192 room A302, building B-2, Dongsheng Science Park, Zhongguancun, 66 xixiaokou Road, Haidian District, Beijing

Applicant before: BEIJING MEMBLAZE TECHNOLOGY Co.,Ltd.

CB03 Change of inventor or designer information

Inventor after: Tian Bing

Inventor after: Wang Shuke

Inventor after: Lu Xiangfeng

Inventor before: Tian Bing

Inventor before: Wang Shuke

Inventor before: Lu Xiangfeng

GR01 Patent grant