CN100345132C - Parallel processing method and system - Google Patents

Parallel processing method and system

Info

Publication number
CN100345132C
CN100345132C, CNB031440436A, CN03144043A
Authority
CN
China
Prior art keywords
task
processing unit
processing
buffer
waiting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB031440436A
Other languages
Chinese (zh)
Other versions
CN1577305A (en)
Inventor
黄伟才
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CNB031440436A priority Critical patent/CN100345132C/en
Publication of CN1577305A publication Critical patent/CN1577305A/en
Application granted granted Critical
Publication of CN100345132C publication Critical patent/CN100345132C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Multi Processors (AREA)

Abstract

The present invention discloses a parallel processing method comprising the following steps: A, a task to be processed is input into a task scheduling device; B, the task scheduling device analyzes the pending task and judges whether this task and the task most recently placed into any processing unit of a task processing device must keep their order in processing time; if so, the task is placed into that same processing unit; otherwise, the task is placed into the processing unit of the task processing device holding the fewest unprocessed tasks; C, the plural processing units in the task processing device each correspondingly process the pending tasks they obtain. The present invention also discloses a system for realizing the parallel processing method.

Description

Parallel processing method and system
Technical field
The present invention relates to a parallel processing method and system, and in particular to a parallel processing method and system for systems with multiple processing units.
Background art
In a multi-processing-unit system, when a plurality of tasks have no dependence on one another in processing time (that is, no ordering requirement), each processing unit can independently handle its assigned tasks at the same time, i.e. the assigned tasks are processed in parallel, which effectively improves the task processing capability of the whole multi-processing-unit system.
However, when a plurality of tasks are interrelated, that is, when their processing must follow a certain order, they cannot be processed simultaneously. For example, if two instructions in a series of computer instructions are dependent, the later instruction can be executed only after the earlier instruction has finished.
At present, a common multi-processing-unit parallel processing method is to assign one or several classes of tasks to each processing unit. For example, in a voice data processing system, some processing units are designated to handle only the speech data of a few channels, while other processing units handle the speech data of the remaining channels. Likewise, in the packet-switched data communication field, pending packets can be classified according to their source or destination addresses, and each processing unit is designated to handle only one or several classes of packets. This method in effect distributes the workload to a plurality of independent processing units according to the packets' source addresses, destination addresses, or similar routing characteristics, and its use rests on the following precondition: packets that differ in source address, destination address, or other routing characteristics have no association in processing order, whereas packets sharing the same source address, destination address, or other routing characteristics must be processed in temporal order.
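As a sketch, this prior-art scheme amounts to a fixed mapping from a packet's routing characteristics to a processing unit; the field names and the hash mapping below are illustrative assumptions, not taken from the patent.

```python
def static_assign(packet, n_units):
    """Prior-art style: choose the unit from routing characteristics alone.

    Packets of the same flow always land on the same unit, so their
    order is preserved, but the load cannot be rebalanced across units.
    """
    flow = (packet['src_ip'], packet['dst_ip'])  # assumed field names
    return hash(flow) % n_units

p1 = {'src_ip': '10.0.0.1', 'dst_ip': '10.0.0.9'}
p2 = {'src_ip': '10.0.0.2', 'dst_ip': '10.0.0.9'}
print(static_assign(p1, 4), static_assign(p2, 4))
```

Because the mapping ignores queue occupancy, a heavy flow can overload its unit while other units idle.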
However, this multi-processing-unit parallel processing method cannot achieve effective load sharing: when the traffic of the classes is uneven, the traffic of some routes easily exceeds the processing capability of the corresponding processing unit while other processing units sit idle. In addition, the applicability of the method is limited, because it distributes tasks to the independent processing units according to the source address, destination address, or other routing characteristics of packets, and such information is meaningful only in data communication and related fields.
Another common multi-processing-unit parallel processing method is the following: in a multi-processing-unit system, each processing unit reads pending tasks in turn; if the task a processing unit is about to process and a task currently being handled by another processing unit have an ordering requirement in processing time, the processing unit waits until the other processing unit has finished the related task.
With this method, however, when a plurality of tasks must be kept in order, several processing units may be waiting at the same time until other processing units finish the related tasks, so the parallelism of the processing units cannot be fully exploited.
Therefore, in order to strengthen the task processing capability of the whole multi-processing-unit system, it is necessary to improve the parallelism of the processing units.
Summary of the invention
In view of the deficiencies of the prior art, the object of the present invention is to provide a parallel processing method and system that can effectively improve the parallelism of processing units.
The parallel processing method of the present invention comprises the following steps: A, a pending task is input into a task scheduling device; B, the task scheduling device analyzes the current pending task and judges whether this task and the pending task most recently placed into any processing unit of a task processing device must keep their order in processing time; if so, the task is placed into that same processing unit; otherwise, the task is placed into the processing unit of the task processing device holding the fewest unprocessed tasks; C, the plural processing units in the task processing device each correspondingly process the pending tasks they obtain. The order in which the plural processing units of the task processing device read tasks is guaranteed by serial notification among them. If several processing units of the task processing device are tied for the fewest unprocessed tasks, the task scheduling device places the current pending task into the one among them that will obtain service earliest. In the present invention, step B may further comprise the steps: B1, the task scheduling device analyzes the current pending task and judges whether this task and the task most recently placed into some buffer queue of a buffer device must keep their order in processing time; if so, the task is placed into that same buffer queue; otherwise, the task is placed into the buffer queue of the buffer device holding the fewest unprocessed tasks; B2, the plural processing units in the task processing device each take pending tasks from their corresponding buffer queues in the buffer device. If several buffer queues of the buffer device are tied for the fewest unprocessed tasks, the task scheduling device places the current pending task into the one among them that will obtain service earliest. In the present invention, "must keep their order in processing time" means that the processing of the later task depends on the processing result of the earlier task.
The parallel processing system of the present invention comprises a task processing device responsible for reading and processing tasks and a task scheduling device responsible for scheduling pending tasks into the task processing device, the task processing device comprising one or more processing units. By means of serial notification, the processing units take pending tasks from their respective queues in turn, at most one task at a time. In the present invention, the parallel processing system may further comprise a buffer device, located between the task scheduling device and the task processing device, for buffering the pending tasks. The buffer device establishes a corresponding buffer queue for each processing unit; each buffer queue buffers the pending tasks of the corresponding processing unit, and within one buffer queue the processing unit handles the buffered tasks in first-in-first-out order. When scheduling a pending task, the task scheduling device analyzes the current pending task: if the task to be placed into the task processing device or the buffer device and the task most recently placed into any processing unit or buffer queue of that device must keep their order in processing time, the task is placed into that same processing unit or buffer queue; if the task has no ordering requirement in processing time with the task most recently placed into any processing unit or buffer queue, the task is placed into the processing unit or buffer queue holding the fewest unprocessed tasks; and if several processing units or buffer queues are tied for the fewest unprocessed tasks, the one among them that will obtain service earliest is selected.
With the parallel processing method and system of the present invention, first, the processing order of the tasks is guaranteed, and on that premise the task load shared by the plural processing units of the task processing device is kept as even as possible, so the parallelism of the processing units is improved and the processing capability of the whole system is maximized. Second, the invention is widely applicable: it can be used in all kinds of multi-processing-unit parallel systems, places few restrictions on the tasks handled, and does not require the processing units to have identical processing speeds. Finally, it is easy to realize: only a task scheduling device need be added to the system, and this device merely compares the current task to be scheduled with the task most recently placed into each buffer queue or processing unit to determine whether their order in processing time must be guaranteed; the scheduling principle is simple and feasible.
Description of drawings
Fig. 1 is a structural diagram of the first embodiment of the parallel processing system of the present invention;
Fig. 2 is a flow chart of the first embodiment of the parallel processing method of the present invention;
Fig. 3 is a structural diagram of the second embodiment of the parallel processing system of the present invention;
Fig. 4 is a flow chart of the second embodiment of the parallel processing method of the present invention.
Embodiment
The present invention is further described below with reference to the accompanying drawings.
As shown in Fig. 1, the first embodiment of the parallel processing system of the present invention comprises a buffer device, a task processing device, and a task scheduling device. The task processing device comprises one or more processing units.
The buffer device buffers the pending tasks. It establishes a corresponding buffer queue for each processing unit, and each buffer queue buffers the pending tasks of the corresponding processing unit. Within one buffer queue, the processing unit reads the buffered tasks in first-in-first-out (FIFO) order.
The task processing device is responsible for reading tasks from the buffer device and processing them. Suppose the task processing device has N processing units (N greater than 1; the processing units considered here only have a processing function and no buffering function). By means of serial notification, the processing units take pending tasks from their respective queues in turn, at most one task at a time. The order of notification is: processing unit 1 notifies processing unit 2, processing unit 2 notifies processing unit 3, ..., processing unit N notifies processing unit 1, and so on in a cycle.
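The serial-notification cycle just described can be sketched as a token passed around a ring of processing units; on its turn, each unit takes at most one task from its own queue and then notifies the next unit. This is an illustrative model only; the function and variable names are not from the patent.

```python
from collections import deque

def ring_schedule(queues, rounds):
    """Simulate N processing units that read tasks in a fixed ring order.

    Each unit, when it holds the notification token, takes at most one
    task from its own queue, then passes the token to the next unit.
    """
    n = len(queues)
    order = []                      # record of (unit, task) reads
    token = 0                       # unit 1 (index 0) starts
    for _ in range(rounds * n):
        if queues[token]:
            order.append((token, queues[token].popleft()))
        token = (token + 1) % n     # unit k notifies unit k+1; unit N notifies unit 1
    return order

queues = [deque(['a1', 'a2']), deque(['b1']), deque(['c1'])]
print(ring_schedule(queues, 2))
```

Because only the token holder reads, the reading order of the units is fully determined even though they process their tasks in parallel.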
The task scheduling device is responsible for placing tasks into the buffer device. When deciding which buffer queue of the buffer device a particular task should be placed into, it follows these principles:
(1) if the task to be placed into a buffer queue and the task most recently placed into any buffer queue of the buffer device must keep their order in processing time, the task is placed into that same buffer queue;
(2) if the task to be placed into a buffer queue has no ordering requirement in processing time with the task most recently placed into any buffer queue of the buffer device, the task is placed into the buffer queue holding the fewest unprocessed tasks.
In case (2), if several buffer queues of the buffer device are tied for the fewest unprocessed tasks, the buffer queue that will obtain service earliest is selected among them. "The buffer queue that obtains service earliest" means the following: among all the buffer queues holding the fewest unprocessed tasks, it is the one whose next task will, counting from the current moment, be read by its corresponding processing unit earlier than that of any other such queue.
Here, the "most recently placed" task is the last task placed into the corresponding buffer queue before the task currently to be placed, and "must keep their order in processing time" means that the processing of the later task depends on the processing result of the earlier task.
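Principles (1) and (2), together with the definitions just given, can be sketched as follows. The `depends` predicate stands in for whatever system-specific ordering criterion is used, and the tie-break scan in poll order is one assumed way of realizing the earliest-served rule; none of these names come from the patent.

```python
def schedule(task, queues, last_put, depends, next_served):
    """Place `task` into one of N buffer queues (sketch of the two rules).

    queues      : list of lists, pending tasks per queue
    last_put    : most recently enqueued task per queue (or None)
    depends     : depends(a, b) -> True if a must run after b (assumed predicate)
    next_served : index of the queue whose unit reads next (breaks ties)
    """
    # Rule (1): ordering dependence on a queue's most recently placed task
    for i, prev in enumerate(last_put):
        if prev is not None and depends(task, prev):
            queues[i].append(task)
            last_put[i] = task
            return i
    # Rule (2): fewest unprocessed tasks; tie -> earliest-served queue
    shortest = min(len(q) for q in queues)
    n = len(queues)
    for k in range(n):
        i = (next_served + k) % n   # scan in poll order from the next-served unit
        if len(queues[i]) == shortest:
            queues[i].append(task)
            last_put[i] = task
            return i

# demo: 'y' depends on 'x', so it joins the queue that last received 'x'
dep = lambda a, b: (a, b) == ('y', 'x')
qs, lp = [[], ['x'], []], [None, 'x', None]
print(schedule('y', qs, lp, dep, 0))   # -> 1
```

Dependent tasks thus land in the same FIFO queue (preserving their order), while independent tasks spread across the least-loaded queues.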
Described " obtain the earliest serve buffer queue " judges like this: because each processing unit with the serial mode poll from reading next pending task the formation separately, therefore in any specific moment, (1) have and only have a processing unit reading next pending task, perhaps (2) have determined just which processing unit present this takes turns to and read next pending task.So just can know, from this constantly, the precedence that task is read by the alignment processing unit of any two buffer queues.Illustrate with an object lesson below, suppose that we have N processing unit, N corresponding buffer queue then arranged.The poll order that the predetermined processing unit reads task is: processing unit 1, and processing unit 2, processing unit 3, processing unit 4 ..., processing unit N, processing unit 1, processing unit 2, processing unit 3 ..., so circulation is repeatedly.Suppose that again the task in processing unit 1 and processing unit 3 corresponding buffer queues (being respectively buffer queue 1 and 3) sometime is minimum, and
1) just taking turns to processing unit 4 or be about to read next task.Therefore, according to the order 4,5,6 of poll ..., N, 1,2,3,4,5 ..., obviously processing unit 1 will read next pending task prior to processing unit 3, and promptly buffer queue 1 is to obtain the formation of serving among the two the earliest;
2) if current just in time to be processing unit 2 read next pending task reading or take turns to processing unit 2, then according to the order 2,3,4 of poll ..., N, 1 ..., buffer queue 3 obtains service the earliest;
3) if reading next pending task when pretreatment unit 3, then according to the order 3,4,5 of poll ..., N, 1,2 ..., buffer queue 3 obtains service the earliest;
4) if taken turns to processing unit 3 but it does not also really read next pending task, so, according to the order 3,4,5 of poll ..., N, 1,2 ..., also be that buffer queue 3 obtains service the earliest.And the like.
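Under the cyclic polling just illustrated, the earliest-served queue among the tied candidates is simply the one reached first when counting forward from the unit whose turn is current. A minimal sketch with 0-based indices (names assumed):

```python
def earliest_served(candidates, current, n):
    """Among candidate queue indices, pick the one served earliest.

    Units are polled in the cyclic order current, current+1, ..., so the
    candidate with the smallest cyclic distance from `current` wins.
    """
    return min(candidates, key=lambda q: (q - current) % n)

# N = 8 units, tied queues 1 and 3 (0-based indices 0 and 2)
print(earliest_served([0, 2], 3, 8))   # unit 4's turn -> queue 1 (index 0)
print(earliest_served([0, 2], 1, 8))   # unit 2's turn -> queue 3 (index 2)
```

This reproduces the cases above: from unit 4, queue 1 wins (case 1); from unit 2 or unit 3, queue 3 wins (cases 2 to 4).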
With reference to Fig. 2, the first embodiment of the parallel processing method of the present invention comprises the steps: A, a pending task is input into the task scheduling device; B, the task scheduling device analyzes the current pending task; if this task and the task most recently placed into some buffer queue of the buffer device must keep their order in processing time, the task is placed into that same buffer queue; otherwise, the task is placed into the buffer queue of the buffer device holding the fewest tasks; C, the processing units in the task processing device take pending tasks from their corresponding buffer queues in the buffer device and process them.
In step B, if several buffer queues of the buffer device are tied for the fewest tasks, the task scheduling device places the current pending task into the one among them that will obtain service earliest. Moreover, the criterion for deciding whether a pending task and the task most recently placed into some buffer queue of the buffer device have an ordering requirement in processing time may vary. For example, in a packet processing system, the judgment can be based on whether two pending packets have the same IP address, the same virtual path identifier/virtual channel identifier (VPI/VCI), the same port number, or the like; in a computer instruction processing system, the judgment can be based on whether the execution of the later instruction depends on the execution result of the previous instruction, or on another criterion. In step C, at most one pending task is taken out at a time.
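Two of the ordering criteria mentioned above can be sketched as predicates; the dictionary keys (`src_ip`, `vpi_vci`, `reads`, `writes`, and so on) are illustrative assumptions, not fields defined by the patent.

```python
def packets_ordered(p, q):
    """Packet case: if two packets share a flow criterion, keep their order.

    The criteria named in the text are IP address, VPI/VCI, and port
    number; the dict keys used here are assumed for illustration.
    """
    keys = ('src_ip', 'dst_ip', 'vpi_vci', 'port')
    return any(k in p and k in q and p[k] == q[k] for k in keys)

def instructions_ordered(later, earlier):
    """Instruction case: the later instruction must wait if it reads a
    result the earlier one writes (a simple read-after-write check)."""
    return bool(set(later.get('reads', ())) & set(earlier.get('writes', ())))

print(packets_ordered({'vpi_vci': (1, 32)}, {'vpi_vci': (1, 32)}))   # True
print(instructions_ordered({'reads': ['r1']}, {'writes': ['r1']}))   # True
```

Any such predicate can serve as the `depends` test in the scheduling rule; the scheme itself is agnostic to the criterion.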
In the present invention, if the processing capability of the processing units in the task processing device is sufficient under all circumstances, the buffer device can be omitted from the parallel processing system. As shown in Fig. 3, the second embodiment of the parallel processing system of the present invention comprises only the task scheduling device and the task processing device. The task scheduling device is responsible for placing tasks into the task processing device, and when deciding which processing unit a particular task should be placed into, it follows these principles:
(1) if the task to be placed into a processing unit and the task most recently placed into any processing unit of the task processing device must keep their order in processing time, the task is placed into that same processing unit;
(2) if the task to be placed into a processing unit has no ordering requirement in processing time with the task most recently placed into any processing unit of the task processing device, the task is placed into the processing unit holding the fewest unprocessed tasks.
In case (2), if several processing units of the task processing device are tied for the fewest unprocessed tasks, the processing unit that will obtain service earliest is selected among them. Judging which processing unit obtains service earliest is realized through the serial notification among the plural processing units in the task processing device.
Correspondingly, when the parallel processing system of the present invention omits the buffer device, the parallel processing method of the present invention also need not buffer the tasks dispatched by the task scheduling device: the dispatched tasks can be handed directly to the processing units in the task processing device. With reference to Fig. 4, the second embodiment of the parallel processing method of the present invention comprises the steps: A, a pending task is input into the task scheduling device; B, the task scheduling device analyzes the current pending task; if this task and the task most recently placed into some processing unit of the task processing device must keep their order in processing time, the task is placed into that same processing unit; otherwise, the task is placed into the processing unit of the task processing device holding the fewest tasks; C, the processing units in the task processing device process the tasks they have obtained.
In step B, if several processing units of the task processing device are tied for the fewest tasks, the task scheduling device places the current pending task into the one among them that will obtain service earliest.
In the preceding embodiments of the present invention, the order in which the plural processing units of the task processing device read tasks is guaranteed by serial notification among them. However, if the plural processing units have identical processing speeds, the order in which they read tasks from the buffer queues can simply follow a polling scheme, and serial notification is not strictly necessary. Moreover, if the pending tasks all require the same processing time, the plural processing units can share a single buffer queue instead of each being given an independent one; in that case each processing unit simply takes tasks from the shared buffer queue in polling order. Besides the foregoing, the order in which the processing units read tasks can also be guaranteed by means such as shared memory or semaphores. With shared memory, the processing units can all access a memory cell at the same address and communicate by reading that cell. A semaphore is a means of mutually exclusive control of access to a critical shared resource, and can be used here to control the reading of the next pending task by each processing unit.
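For the shared-queue variant, a mutual-exclusion primitive around "read the next pending task" is the essential ingredient. A minimal threaded sketch (a lock here; a semaphore would serve equally), with all names assumed:

```python
import threading
from collections import deque

class SharedTaskQueue:
    """Equal-speed units sharing one buffer queue (illustrative sketch).

    The lock makes "read the next task" mutually exclusive, which is all
    the ordering guarantee needs when every task takes the same time.
    """
    def __init__(self, tasks):
        self._q = deque(tasks)
        self._lock = threading.Lock()

    def take(self):
        with self._lock:            # critical section: one reader at a time
            return self._q.popleft() if self._q else None

def worker(q, out, out_lock):
    while True:
        t = q.take()
        if t is None:               # queue drained
            return
        with out_lock:
            out.append(t)           # stand-in for "process the task"

q = SharedTaskQueue(range(10))
out, out_lock = [], threading.Lock()
threads = [threading.Thread(target=worker, args=(q, out, out_lock)) for _ in range(4)]
for th in threads: th.start()
for th in threads: th.join()
print(sorted(out))                  # every task processed exactly once
```

The lock guarantees each task is handed out once and only once; which worker gets which task is left to the thread scheduler, as the equal-speed assumption permits.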
What is disclosed above is merely a preferred embodiment of the present invention and certainly cannot be taken to limit its scope of rights; equivalent variations made according to the claims of the present application therefore still fall within the scope covered by the present invention.

Claims (12)

1. A parallel processing method, characterized by comprising the following steps:
A, a pending task is input into a task scheduling device;
B, the task scheduling device analyzes the current pending task and judges whether this task and the pending task most recently placed into any processing unit of a task processing device must keep their order in processing time; if so, the task is placed into that same processing unit; otherwise, the task is placed into the processing unit of the task processing device holding the fewest unprocessed tasks;
C, a plurality of processing units in the task processing device each correspondingly process the pending tasks they obtain.
2. The parallel processing method of claim 1, characterized in that: the order in which the plurality of processing units in the task processing device read tasks is guaranteed by serial notification among them; and if several processing units of the task processing device are tied for the fewest unprocessed tasks, the task scheduling device places the current pending task into the one among them that will obtain service earliest.
3. The parallel processing method of claim 1, characterized in that step B further comprises the steps:
B1, the task scheduling device analyzes the current pending task and judges whether this task and the task most recently placed into some buffer queue of a buffer device must keep their order in processing time; if so, the task is placed into that same buffer queue; otherwise, the task is placed into the buffer queue of the buffer device holding the fewest unprocessed tasks;
B2, the plurality of processing units in the task processing device each take pending tasks from their corresponding buffer queues in the buffer device.
4. The parallel processing method of claim 3, characterized in that: the order in which the plurality of processing units in the task processing device read tasks is guaranteed by serial notification among them; and if several buffer queues of the buffer device are tied for the fewest unprocessed tasks, the task scheduling device places the current pending task into the one among them that will obtain service earliest.
5. The parallel processing method of claim 1 or 3, characterized in that: if the plurality of processing units have identical processing speeds, the order in which the processing units read tasks may follow a polling scheme.
6. The parallel processing method of claim 5, characterized in that: when the pending tasks require identical processing times, the plurality of processing units may share one buffer queue.
7. A parallel processing system comprising a task processing device responsible for reading and processing tasks, the task processing device comprising one or more processing units, characterized by further comprising a task scheduling device responsible for scheduling pending tasks into the task processing device, the task scheduling device being used to analyze the current pending task and judge whether this task and the pending task most recently placed into any processing unit of the task processing device must keep their order in processing time; if so, the task is placed into that same processing unit; otherwise, the task is placed into the processing unit of the task processing device holding the fewest unprocessed tasks.
8. The parallel processing system of claim 7, characterized in that: the processing units of the task processing device read pending tasks in turn by means of serial notification, taking at most one task at a time.
9. The parallel processing system of claim 8, characterized in that: when the task scheduling device schedules a pending task, if several processing units of the task processing device are tied for the fewest unprocessed tasks, the processing unit that will obtain service earliest is selected among them.
10. The parallel processing system of claim 7, characterized in that: the parallel processing system further comprises a buffer device, located between the task scheduling device and the task processing device, for buffering the pending tasks.
11. The parallel processing system of claim 10, characterized in that: the buffer device establishes a corresponding buffer queue for each processing unit, each buffer queue buffering the pending tasks of the corresponding processing unit, and within one buffer queue the processing unit handles the buffered tasks in first-in-first-out order; by means of serial notification, the processing units take pending tasks from their respective queues in turn, at most one task at a time.
12. The parallel processing system of claim 11, characterized in that: when the task scheduling device schedules a pending task, it analyzes the current pending task; if the task to be placed into a buffer queue and the task most recently placed into any buffer queue of the buffer device must keep their order in processing time, the task is placed into that same buffer queue; if the task to be placed into a buffer queue has no ordering requirement in processing time with the task most recently placed into any buffer queue of the buffer device, the task is placed into the buffer queue of the buffer device holding the fewest unprocessed tasks; and if several buffer queues of the buffer device are tied for the fewest unprocessed tasks, the buffer queue that will obtain service earliest is selected among them.
CNB031440436A 2003-07-28 2003-07-28 Parallel processing method and system Expired - Fee Related CN100345132C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB031440436A CN100345132C (en) 2003-07-28 2003-07-28 Parallel processing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB031440436A CN100345132C (en) 2003-07-28 2003-07-28 Parallel processing method and system

Publications (2)

Publication Number Publication Date
CN1577305A CN1577305A (en) 2005-02-09
CN100345132C true CN100345132C (en) 2007-10-24

Family

ID=34579570

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB031440436A Expired - Fee Related CN100345132C (en) 2003-07-28 2003-07-28 Parallel processing method and system

Country Status (1)

Country Link
CN (1) CN100345132C (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111147603A (en) * 2019-09-30 2020-05-12 华为技术有限公司 Method and device for networking reasoning service

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100377084C (en) * 2006-03-10 2008-03-26 浙江大学 Multi-task parallel starting optimization of built-in operation system
CN100444121C (en) * 2006-09-11 2008-12-17 中国工商银行股份有限公司 Batch task scheduling engine and dispatching method
JP5324934B2 (en) * 2009-01-16 2013-10-23 株式会社ソニー・コンピュータエンタテインメント Information processing apparatus and information processing method
CN102486732A (en) * 2010-12-02 2012-06-06 上海可鲁***软件有限公司 Distributed type platform and control method for starting priorities of functional modules in platform
CN103780635B (en) * 2012-10-17 2017-08-18 百度在线网络技术(北京)有限公司 Distributed asynchronous task queue execution system and method in cloud environment
CN103218449B (en) * 2013-04-26 2016-04-13 中国农业银行股份有限公司 Form is operation exception disposal route and system in the daytime
CN108509220B (en) * 2018-04-02 2021-01-22 厦门海迈科技股份有限公司 Revit engineering calculation amount parallel processing method, device, terminal and medium
CN108491260A (en) * 2018-04-12 2018-09-04 迈普通信技术股份有限公司 Communication equipment multitask test method and device
CN110753341A (en) 2018-07-23 2020-02-04 华为技术有限公司 Resource allocation method and device
CN109886652A (en) * 2019-02-22 2019-06-14 中国农业银行股份有限公司 Formation gathering method and system
CN111861853A (en) * 2019-04-30 2020-10-30 百度时代网络技术(北京)有限公司 Method and apparatus for processing data
CN113360256A (en) * 2020-03-06 2021-09-07 烽火通信科技股份有限公司 Thread scheduling method and system based on control plane massive concurrent messages
CN112365520B (en) * 2020-06-16 2024-01-30 公安部第三研究所 Pedestrian target real-time tracking system and method based on video big data resource efficiency evaluation
CN111562948B (en) * 2020-06-29 2020-11-10 深兰人工智能芯片研究院(江苏)有限公司 System and method for realizing parallelization of serial tasks in real-time image processing system
CN112415862B (en) * 2020-11-20 2021-09-10 长江存储科技有限责任公司 Data processing system and method
CN114816652A (en) * 2021-01-29 2022-07-29 上海阵量智能科技有限公司 Command processing device and method, electronic device, and computer storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5848257A (en) * 1996-09-20 1998-12-08 Bay Networks, Inc. Method and apparatus for multitasking in a computer system
US5918243A (en) * 1996-01-30 1999-06-29 International Business Machines Corporation Computer mechanism for reducing DASD arm contention during parallel processing
CN1326567A (en) * 1998-11-16 2001-12-12 艾利森电话股份有限公司 Job-parallel processor
CN1409209A (en) * 2001-09-24 2003-04-09 深圳市中兴通讯股份有限公司上海第二研究所 Realizing method for multiple task real-time operation system



Also Published As

Publication number Publication date
CN1577305A (en) 2005-02-09

Similar Documents

Publication Publication Date Title
CN100345132C (en) Parallel processing method and system
CN1146192C (en) Ethernet exchange chip output queue management and dispatching method and device
CN1310135C (en) Multithreaded microprocessor with register allocation based on number of active threads
US7349399B1 (en) Method and apparatus for out-of-order processing of packets using linked lists
US20110149991A1 (en) Buffer processing method, a store and forward method and apparatus of hybrid service traffic
US7499470B2 (en) Sequence-preserving deep-packet processing in a multiprocessor system
US20060085554A1 (en) System and method for balancing TCP/IP/workload of multi-processor system based on hash buckets
US20080002681A1 (en) Network wireless/RFID switch architecture for multi-core hardware platforms using a multi-core abstraction layer (MCAL)
JP2005508550A (en) Method and apparatus for scheduling requests to a dynamic random access memory device
CN102779075A (en) Method, device and system for scheduling in multiprocessor nuclear system
US8463928B2 (en) Efficient multiple filter packet statistics generation
US20110258694A1 (en) High performance packet processing using a general purpose processor
CN113518130B (en) Packet burst load balancing method and system based on multi-core processor
EP2035928A2 (en) Systems and methods for processing data packets using a multi-core abstraction layer (mcal)
Jeż et al. Online scheduling of packets with agreeable deadlines
CA2719841A1 (en) Adaptive scheduler for communication systems apparatus, system and method
CN107483405B (en) scheduling method and scheduling system for supporting variable length cells
US8391305B2 (en) Assignment constraint matrix for assigning work from multiple sources to multiple sinks
CN114518940A (en) Task scheduling circuit, method, electronic device and computer-readable storage medium
Huang et al. AutoVNF: An Automatic Resource Sharing Schema for VNF Requests.
CN114020471B (en) Sketch-based lightweight elephant flow detection method and platform
US20030041073A1 (en) Method and apparatus for reordering received messages for improved processing performance
CN114518941A (en) Task scheduling circuit, method, electronic device and computer-readable storage medium
US7010673B2 (en) Apparatus and method for processing pipelined data
US7599361B2 (en) Wire-speed packet management in a multi-pipeline network processor

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20071024

Termination date: 20150728

EXPY Termination of patent right or utility model