CN105045658B - A method of realizing that dynamic task scheduling is distributed using multinuclear DSP embedded - Google Patents
- Publication number
- CN105045658B CN105045658B CN201510381740.9A CN201510381740A CN105045658B CN 105045658 B CN105045658 B CN 105045658B CN 201510381740 A CN201510381740 A CN 201510381740A CN 105045658 B CN105045658 B CN 105045658B
- Authority
- CN
- China
- Prior art keywords
- core
- event
- multinuclear
- openem
- distribution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Multi Processors (AREA)
Abstract
The invention discloses a method for dynamic task scheduling and distribution on an embedded multi-core DSP. On the KeyStone platform released by TI, a multi-core runtime library, OpenEM, is provided that performs dynamic task scheduling and distribution based on the Multicore Navigator and is independent of any operating system; this component dynamically schedules and distributes tasks and achieves load balance across the cores. The DSP cores of the KeyStone multi-core embedded processor are divided into a master core and slave cores: the master core performs the global initialization of the programming model, and every core performs its own local initialization. In the programming model, the master core produces events, the model is event-driven, OpenEM schedules and distributes the events, and the slave cores process them. The invention offers embedded software developers a unified OpenEM-based parallel programming model for embedded multi-core DSPs; its implementation is highly extensible, applies to most current KeyStone-based multi-core or many-core embedded processors, and satisfies the application demands of task scheduling, distribution, and dynamic load balancing in a multi-core environment.
Description
Technical field
The present invention relates to the field of parallel programming for multi-core embedded systems, and specifically to a method for dynamic task scheduling and distribution on an embedded multi-core DSP.
Background technology
With the rapid development of embedded technology, the demand for embedded processing is also growing rapidly. As integrated-circuit technology advances at full speed, multi-core technology is being applied ever more widely in embedded systems. Using multiple cooperating processors to complete tasks in an embedded system is of great significance for improving system performance and meeting real-time requirements. Multi-core embedded platforms have become a mainstream computing platform: desktop, mobile, server, and dedicated embedded applications are all adopting multi-core architectures.
The mainstreaming of multi-core technology has had an important influence on parallel computer architecture, parallel algorithms, parallel programming models, and the development of parallel applications. Traditional single-threaded programming methods clearly cannot fully exploit the computing capability of a multi-core CPU. On a multi-core embedded system, algorithms must be designed with a parallel mindset, combining the hardware advantages of the multi-core embedded platform with parallel programming methods to obtain higher system performance. How to use platform resources effectively and schedule the cores efficiently is therefore one of today's key research topics. The operating systems available in current multi-core environments can accomplish some multi-core scheduling, but they are unsuitable in certain situations: in particular, when a task is divided into very many small task blocks, the scheduling overhead of the operating system is too high. It is inappropriate to let a thread execute a task that takes only a few thousand clock cycles, and equally inappropriate to create hundreds or thousands of threads. In other words, finding an efficient task-scheduling method that does not depend on an operating system is highly necessary.
The invention patent with application number CN201510002039 discloses an optimal-locality task scheduling method based on MapReduce. It proposes a MapReduce task scheduling algorithm, belonging to the field of computer technology, that can operate in both homogeneous and heterogeneous cluster environments. The algorithm takes the processing performance of each compute node in the cluster into account, abstracts the compute nodes and computing tasks as a bipartite graph, and forms a final global task-scheduling scheme by suitably extending the bipartite graph and applying the weighted KM optimal-matching algorithm. However, that invention mainly targets task scheduling between the nodes of a cluster, not task scheduling between the cores of a multi-core processor, and it does not fully exploit the hardware resources inside an embedded multi-core processor.
The invention patent with application number CN201410610548 discloses a resource-allocation method for a reconfigurable multi-core processor that is aware of task count and performance. In that invention, at each operating-system scheduling interval the resource allocator first distributes logical cores evenly according to the number of tasks. After one scheduling period has run, the tasks are sorted by performance (which reflects each task's demand for resources); tasks with low resource demand are identified and the number of logical cores they occupy is reduced, and the physical cores freed from those tasks are assigned to tasks with high resource demand, increasing the number of logical cores those tasks occupy. When the current system load changes or a task enters a new execution phase, the resource allocator readjusts at the next operating-system scheduling interval so as to make full use of the chip's resources. However, that strategy is implemented at the scheduling intervals of the operating system; it is therefore scheduling based on the operating system, and it loses its basic scheduling capability once detached from the operating system.
Invention content
The purpose of the present invention is to provide a method for real-time task scheduling and distribution on an embedded multi-core DSP. The method makes full use of the various hardware resources of the embedded processor, realizes dynamic scheduling and distribution of tasks based on the Navigator component, and achieves load balance, thereby solving the problems noted in the background above.
To achieve the above object, the present invention provides the following technical solution:
A method for dynamic task scheduling and distribution on an embedded multi-core DSP, comprising the following:
1) On the KeyStone platform released by TI, a multi-core runtime library, OpenEM (Open Event Machine), is provided that performs dynamic task scheduling and distribution based on the Multicore Navigator and is independent of any operating system; OpenEM dynamically schedules and distributes tasks and achieves load balance across the cores.
The Multicore Navigator mainly comprises the QMSS, which manages the hardware queues; the PKTDMA, the communication medium that carries out the actual data transfers; and the PDSPs, which implement different functions depending on the firmware loaded onto them, a PDSP being a RISC processor. OpenEM, built on the Multicore Navigator, comprises a scheduler and a dispatcher. The scheduler runs on a PDSP and, according to a fixed scheduling policy, schedules events to the currently idle cores; the dispatcher runs on every core, polls for pending events, and calls the corresponding processing function. User events are carried by the QMSS: user data is wrapped in a QMSS descriptor, forming an OpenEM event that is placed into a hardware queue. Processing functions are encapsulated in EO (execution object) structures.
2) The DSP cores of the KeyStone multi-core embedded processor are divided into a master core and slave cores. The master core performs the global initialization of the programming model, which includes configuring the memory caches, the PDSPs, the QMSS, the PKTDMA and its hardware queues, encapsulating the user processing functions, and initializing each environment variable of the parallel programming model. The master core's global initialization is shared with the slave cores; every core then performs its own local initialization.
3) In the programming model, the master core produces events, the model is event-driven, OpenEM schedules and distributes the events, and the slave cores process them.
The master core, classifying by function, obtains an empty event from the event pool, fills it with user data, and places it into the designated hardware queue. The scheduler detects the arrival of the event by monitoring the hardware queues and, by means of an interrupt, schedules the event to a lightly loaded core; the PKTDMA transfers the event to the destination hardware queue of the destination core. Once a slave core receives an event, it extracts the corresponding user processing function from the EO structure associated with the hardware queue, calls that function to process the event, and then deletes the processed event and frees its space.
As a further solution of the present invention: in step 1), the QMSS contains 8192 hardware queues in total; dedicated transmit queues and queues with priority among them are used to complete inter-core synchronization and communication and to realize the distribution of OpenEM events.
As a further solution of the present invention: in step 1), the PKTDMA comprises Rx DMA channels, Tx DMA channels, and Rx flow channels; the transfer of a descriptor is completed by the Tx DMA channels and Rx DMA channels of the PKTDMA.
As a further solution of the present invention: in step 2), the multi-core embedded processor is a homogeneous or heterogeneous processor with at least 8 DSP cores, of which the master core is core 0 and the slave cores are all cores other than core 0.
As a further solution of the present invention: in step 3), the scheduler adopts a LAZY scheduling policy: a slave core sends a request to the master core to indicate that it is currently idle, and only after receiving the request does the master core distribute a pending event to the requesting core.
As a further solution of the present invention: in step 3), events are processed in run-to-completion mode.
Compared with the prior art, the beneficial effects of the invention are:
1. The invention offers embedded software developers a unified OpenEM-based parallel programming model for embedded multi-core DSPs.
2. The implementation of the proposed parallel programming model is highly extensible; it applies to most current KeyStone-based multi-core or many-core embedded processors and satisfies the application demands of task scheduling, distribution, and dynamic load balancing in a multi-core environment.
3. The invention makes full use of the hardware resources of the embedded processor to schedule and distribute tasks efficiently, achieving load balance between the cores.
Description of the drawings
Fig. 1 is the overall OpenEM framework diagram of the present invention;
Fig. 2 is the encapsulation flow of a user processing function in the present invention;
Fig. 3 is the event-state transition diagram in the present invention;
Fig. 4 is the serial/parallel event model diagram in the present invention;
Fig. 5 is the image-processing application flowchart in the present invention;
Fig. 6 is an image-processing test result of the present invention, showing the number of events handled by each core;
Fig. 7 is an image-processing test result of the present invention, showing the average processing cycle count of each core;
Fig. 8 is the flowchart of the present invention.
Specific implementation mode
The technical solutions in the embodiments of the present invention are described below clearly and completely in conjunction with the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
Referring to Fig. 8: in the present invention, OpenEM is a multi-core runtime library that realizes task scheduling and distribution. OpenEM does not depend on an operating system; it can run bare-metal on the chip or on top of an operating system. Beyond what an operating system offers, it targets the scheduling of large numbers of tasks at a granularity finer than processes and threads. It schedules and distributes tasks efficiently across the cores and, by distributing tasks to lightly loaded cores, achieves dynamic load balance on the multi-core processor.
The Multicore Navigator (Navigator for short) is a hardware mechanism that assists data movement and cooperation between cores. It mainly provides communication between cores, with the network, and with peripherals, including data and message exchange. After a message is sent, no further operations related to that message are processed; that is, once a message has been sent, the sender does not care whether it is received. In short, one only needs to load the data; Navigator takes care of all remaining operations without CPU intervention. Navigator mainly consists of the Queue Manager Subsystem (QMSS), multiple Packet DMAs (PKTDMA), and two PDSPs (Packed Data Structure Processors).
The Queue Manager is a hardware module responsible for hardware-accelerated queue management. It contains 8192 hardware queues, and different queues serve different purposes. Writing a 32-bit descriptor address to a specific memory-mapped location of the module enqueues a packet onto a hardware queue; conversely, reading from the corresponding location of that queue dequeues the packet. A descriptor is a message that moves between cores carrying information and data; the user's data payload can be attached to a descriptor and thereby encapsulated into an event. The PKTDMA is a DMA engine mainly responsible for moving data; it contains multiple Rx DMA channels, multiple Tx DMA channels, and multiple Rx flow channels.
A PDSP is a RISC processor that can be loaded with firmware to implement accumulation, monitoring, or quality-of-service functions. It can monitor hardware queues, accumulate the descriptors in them, and send interrupts to cores or peripherals. In OpenEM, the scheduler firmware is loaded onto a PDSP; by monitoring the hardware queues and applying a fixed scheduling policy, the PDSP schedules the descriptors in the hardware queues to lightly loaded cores. The two PDSPs are not fully identical and must be loaded with different firmware to realize the same function.
The multi-core embedded processor is generally a homogeneous or heterogeneous KeyStone processor with at least 8 DSP cores, of which core 0 serves as the master core and the other cores serve as slave cores. As master core, core 0 performs the global initialization of the OpenEM environment, including configuring the caches, initializing the event pool and the events, initializing each component of the Multicore Navigator, encapsulating the user processing functions, and initializing each environment variable of the parallel programming model. The global part of core 0's initialization is shared with the slave cores: a slave core only needs to copy the environment variables initialized by the master core for local use.
The master core first configures the caches; switching the cache functions on or off can improve operating efficiency. Then, according to the user's functional requirements, it establishes event pools for the different demands as the storage space of each function. Events are carved out of the event pool: when an event must be produced, an empty event is taken from the pool and filled. The master core must also initialize the QMSS, open the configured number of hardware queues, create and open the communication channels in the PKTDMA, and load the scheduler firmware onto the PDSPs. As a final step, each user processing function is encapsulated in an EO (execution object) structure and associated with an EM queue.
A slave core waits for the master core to complete the global initialization and then copies the environment variables initialized by the master core to local storage. Environment-variable initialization in the parallel programming model mainly initializes the various global variables used by the model. Variables needed by all task-executing cores are initialized by the master core and shared directly by the other cores, so the global variables are consistent across all cores; and to achieve zero-copy and improve execution efficiency, the global variables are stored in a memory region shareable by all cores.
The master core completes the encapsulation of the processing functions and the generation, encapsulation, and triggering of events. A slave core obtains a pending event and extracts the corresponding processing function to complete the processing of the event. The present invention is event-driven: an event passes through the stages of being created, scheduled, processed, and released. Event processing follows the run-to-completion model: the processing of an event cannot be suspended, and no other event can be handled until the current event completes.
Embodiment 1
In this embodiment of the present invention, an image-processing application is chosen to implement and test the model. The TMS320C6678 (C6678 for short) processor released by TI serves as the implementation platform: the C6678 is a high-performance KeyStone-based multi-core DSP embedding 8 C66x cores with combined fixed-point and floating-point capability, and it includes the Navigator component.
As shown in Fig. 1, the OpenEM scheduler runs on the RISC processor and the dispatcher runs on each core. A core sends an event request to the scheduler; the scheduler monitors the hardware queues, takes a pending event out of a hardware queue, and passes the scheduled event to the dispatcher, which hands it to the corresponding EO structure; the user processing function encapsulated there then processes the event. The EO structures are created uniformly by the master core and shared by all cores. One EO structure can encapsulate only one user processing function.
As shown in Fig. 2, the encapsulation of a user processing function proceeds as follows. First, an EO structure is created, and the user processing function is registered in it. Then a queue is created; this queue is neither a hardware queue in the QMSS nor a software queue — its main purpose is to map events to EO structures, so that an event sent to such a queue is processed by the user function of the EO structure the queue maps to. Therefore, the third step is to associate the created EO structure with the queue. Finally, the EO structure is started.
As shown in Fig. 3, the whole model is event-driven. Events originate as empty events initialized in the event pool, so an event starts in the FREE state. The user calls em_alloc to allocate a new event; the system takes the first empty event from the empty-event queue and returns it to the user, and the event is now in the PREPARING state. The user fills the new event with user data and then sends it to a queue with em_send; the event transitions to the READY state. If a core now sends a request to the scheduler, the scheduler delivers an event in the READY state to the requesting core, which calls the corresponding user processing function, and the event enters the RUNNING state. When the processing of the event completes, the user must release the event's space by calling em_free, which restores the event to the FREE state so that it can be reused for a future event.
As shown in Fig. 4, when the user's algorithm is large it may be divided into several processing functions, each corresponding to an event type. There may be ordering dependences between processing functions: functions with such dependences are defined as serial processing, while functions without dependences can run independently in parallel. The relationships between the user processing functions are passed to the programming model through the definitions in a user structure. The model first encapsulates all the processing functions; then, at trigger time, it places all the parallel events and the first event of every serial chain into hardware queues, so that these events are processed concurrently. When an event is about to complete, the model checks whether the event has a successor, i.e., whether a chained event requires serial processing. If not, the processing of the current event simply ends; if so, the next serial event is triggered within the current event. The system treats a serial event like any other new event and schedules and distributes it for processing. That is, the events of a serial chain are not bound to the same core and may be processed by different cores.
As shown in Fig. 5, the availability and performance of the model are tested by implementing an image-processing application. The master core reads the original image and initializes the user structure, then hands the user structure to the model to carry out the OpenEM model initialization; the slave cores perform the model initialization directly. During the initialization phase the master core encapsulates all the user processing functions, namely split, dehaze, and combine, in EO structures; according to the parameters in the user structure, it allocates events for the user, attaches the user data to the events, and then triggers the events. After all cores have completed initialization, the model enters its processing phase. Every core participating in the processing sends event requests to the scheduler; the scheduler fetches the pending events and distributes them to the cores. On receiving an event, a core calls the processing function in the EO structure to perform the split, dehaze, or combine work in turn. In this test case the three processing functions are serial: split is first called to partition the original image, then dehaze processes one image block, and finally combine stitches the processed blocks back together. Therefore only the split events need to be triggered initially; as a split nears completion it triggers a dehaze, and likewise a dehaze triggers a combine. All cores take part in the split, dehaze, and combine processing, but every event is scheduled and distributed by the scheduler to a lightly loaded core. That is, a dehaze triggered by the master core's split is not necessarily processed by the master core and may be handled by another slave core; equally, a combine triggered by core 7's dehaze is not necessarily processed by core 7 and may be handled by another core.
As shown in Figs. 6 and 7, the test applies the model to image processing: the original image is handed to the model, which performs split, dehaze, and combine. In the test, a 512×384 image is chosen and divided into 12 blocks for parallel processing. Fig. 6 shows the percentage of the total event count completed by each core, and Fig. 7 shows the average processing cycles of each core; the bar charts show that the per-core averages are roughly level, with little difference between cores, reflecting that the event scheduling and distribution achieves dynamic load balance across the cores.
Through the above implementation, an image-processing application can be realized with the parallel programming model and run on the embedded multi-core DSP platform; it not only improves performance by using the hardware resources, but the implementation method is also highly extensible.
It is obvious to a person skilled in the art that the invention is not limited to the details of the above exemplary embodiments, and that the present invention can be realized in other specific forms without departing from its spirit or essential attributes. The embodiments are therefore to be considered in every respect as illustrative and not restrictive; the scope of the present invention is defined by the appended claims rather than by the above description, and all changes that fall within the meaning and range of equivalency of the claims are intended to be embraced therein.
In addition, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only one independent technical solution. This manner of description is merely for the sake of clarity; those skilled in the art should take the specification as a whole, and the technical solutions in the various embodiments may be suitably combined to form other embodiments understandable to those skilled in the art.
Claims (4)
1. A method for dynamic task scheduling and distribution on an embedded multi-core DSP, characterized by comprising the following:
1) On the KeyStone platform released by TI, a multi-core runtime library, OpenEM, is provided that performs dynamic task scheduling and distribution based on the Multicore Navigator and is independent of any operating system; OpenEM dynamically schedules and distributes tasks and achieves load balance across the cores;
the Multicore Navigator mainly comprises the QMSS, which manages the hardware queues; the PKTDMA, the communication medium that carries out the actual data transfers; and the PDSPs, which implement different functions depending on the firmware loaded, a PDSP being a RISC processor; OpenEM, built on the Multicore Navigator, comprises a scheduler and a dispatcher; the scheduler runs on a PDSP and schedules events to the currently idle cores according to a fixed scheduling policy; the dispatcher runs on each core, polls for pending events, and calls the corresponding processing function; user events are carried by the QMSS, user data being wrapped in a QMSS descriptor to form an OpenEM event that is placed into a hardware queue; processing functions are encapsulated in EO structures;
2) the DSP cores of the KeyStone multi-core embedded processor are divided into a master core and slave cores; the master core performs the global initialization of the programming model, the initialization flow including configuring the memory caches, the PDSPs, the QMSS, the PKTDMA and its hardware queues, encapsulating the processing functions, and initializing each environment variable of the parallel programming model; the master core's global initialization is shared with the slave cores; all cores perform local initialization;
3) in the programming model the master core produces events, the model is event-driven, OpenEM schedules and distributes the events, and the slave cores process them; the master core, classifying by function, obtains an empty event from the event pool, fills it with user data, and places it into the designated hardware queue; the scheduler detects the arrival of the event by monitoring the hardware queues and, by means of an interrupt, schedules the event to a lightly loaded core, the PKTDMA transferring the event to the destination hardware queue of the destination core; once a slave core receives an event, it extracts the corresponding processing function from the EO structure associated with the hardware queue, calls the processing function to process the event, then deletes the processed event and frees its space;
in step 1), the QMSS contains 8192 hardware queues in total; dedicated transmit queues and queues with priority among them are used to complete inter-core synchronization and communication and to realize the distribution of OpenEM events;
in step 2), the multi-core embedded processor is a homogeneous or heterogeneous processor with at least 8 DSP cores, of which the master core is core 0 and the slave cores are all cores other than core 0.
2. The method for dynamic task scheduling and distribution on an embedded multi-core DSP according to claim 1, characterized in that in step 1) the PKTDMA comprises Rx DMA channels, Tx DMA channels, and Rx flow channels, and the transfer of a descriptor is completed by the Tx DMA channels and Rx DMA channels of the PKTDMA.
3. The method for dynamic task scheduling and distribution on an embedded multi-core DSP according to claim 1, characterized in that in step 3) the scheduler adopts a LAZY scheduling policy: a slave core sends a request to the master core to indicate that it is currently idle, and only after receiving the request does the master core distribute a pending event to the requesting core.
4. The method for dynamic task scheduling and distribution on an embedded multi-core DSP according to claim 1, characterized in that in step 3) events are processed in run-to-completion mode.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510381740.9A CN105045658B (en) | 2015-07-02 | 2015-07-02 | A method of realizing that dynamic task scheduling is distributed using multinuclear DSP embedded |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105045658A CN105045658A (en) | 2015-11-11 |
CN105045658B true CN105045658B (en) | 2018-10-23 |
Family
ID=54452222
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510381740.9A Active CN105045658B (en) | 2015-07-02 | 2015-07-02 | A method of realizing that dynamic task scheduling is distributed using multinuclear DSP embedded |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105045658B (en) |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106851296A (en) * | 2015-12-04 | 2017-06-13 | 宁波舜宇光电信息有限公司 | Image processing system and image processing method based on embedded platform |
CN108958904B (en) * | 2017-05-25 | 2024-04-05 | 北京忆恒创源科技股份有限公司 | Driver framework of lightweight operating system of embedded multi-core central processing unit |
CN108958905B (en) * | 2017-05-25 | 2024-04-05 | 北京忆恒创源科技股份有限公司 | Lightweight operating system of embedded multi-core central processing unit |
CN107302570B (en) * | 2017-06-09 | 2020-05-26 | 东华大学 | Equipment monitoring cloud component design method based on priority queue and Canvas technology |
CN107357666B (en) * | 2017-06-26 | 2020-04-21 | 西安微电子技术研究所 | Multi-core parallel system processing method based on hardware protection |
CN107608784B (en) * | 2017-06-28 | 2020-06-09 | 西安微电子技术研究所 | Multi-mode scheduling method for mass data stream under multi-core DSP |
CN107832129B (en) * | 2017-10-24 | 2020-05-19 | 华中科技大学 | Dynamic task scheduling optimization method for distributed stream computing system |
CN109905898B (en) * | 2017-12-07 | 2022-10-11 | 北京中科晶上科技股份有限公司 | Baseband processing resource allocation method |
CN109144691B (en) * | 2018-07-13 | 2021-08-20 | 哈尔滨工程大学 | Task scheduling and distributing method for multi-core processor |
CN109558226B (en) * | 2018-11-05 | 2021-03-30 | 上海无线通信研究中心 | DSP multi-core parallel computing scheduling method based on inter-core interruption |
CN109508231B (en) * | 2018-11-17 | 2020-09-18 | 中国人民解放军战略支援部队信息工程大学 | Synchronization method and device between equivalents of heterogeneous multimode processors |
CN110347504B (en) * | 2019-06-28 | 2020-11-13 | 中国科学院空间应用工程与技术中心 | Many-core computing resource scheduling method and device |
CN112243266B (en) * | 2019-07-18 | 2024-04-19 | 大唐联仪科技有限公司 | Data packet method and device |
CN112491426B (en) * | 2020-11-17 | 2022-05-10 | 中国人民解放军战略支援部队信息工程大学 | Service assembly communication architecture and task scheduling and data interaction method facing multi-core DSP |
CN112486681A (en) * | 2020-11-26 | 2021-03-12 | 迈普通信技术股份有限公司 | Communication method and network equipment |
WO2022141297A1 (en) * | 2020-12-30 | 2022-07-07 | 华为技术有限公司 | Event processing method and apparatus |
CN112859753B (en) * | 2021-01-19 | 2022-03-25 | 深圳市汇川技术股份有限公司 | Secondary development method, device and equipment for numerical control system and readable storage medium |
CN114741137B (en) * | 2022-05-09 | 2024-02-20 | 潍柴动力股份有限公司 | Software starting method, device, equipment and storage medium based on multi-core microcontroller |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010244332A (en) * | 2009-04-07 | 2010-10-28 | Nec Corp | Means of task assignment for multi-core system, method of the same, and program of the same |
CN104331331A (en) * | 2014-11-02 | 2015-02-04 | 中国科学技术大学 | Resource distribution method for reconfigurable chip multiprocessor with task number and performance sensing functions |
CN104572483A (en) * | 2015-01-04 | 2015-04-29 | 华为技术有限公司 | Device and method for management of dynamic memory |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9047121B2 (en) * | 2013-02-25 | 2015-06-02 | Texas Instruments Incorporated | System and method for scheduling jobs in a multi-core processor |
- 2015-07-02: CN application CN201510381740.9A filed; granted as patent CN105045658B (status: Active)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010244332A (en) * | 2009-04-07 | 2010-10-28 | Nec Corp | Means of task assignment for multi-core system, method of the same, and program of the same |
CN104331331A (en) * | 2014-11-02 | 2015-02-04 | 中国科学技术大学 | Resource distribution method for reconfigurable chip multiprocessor with task number and performance sensing functions |
CN104572483A (en) * | 2015-01-04 | 2015-04-29 | 华为技术有限公司 | Device and method for management of dynamic memory |
Non-Patent Citations (2)
Title |
---|
Open event machine: A multi-core run-time designed for performance; Filip Moerman; Proceedings of the 6th European Embedded Design in Education and Research; 2014-09-12; pp. 41-45 *
Research on Parallel Programming Models Based on Multi-core Embedded DSP; Zhou Meng; China Master's Theses Full-text Database, Information Science and Technology; 2015-01-15; I138-115 *
Also Published As
Publication number | Publication date |
---|---|
CN105045658A (en) | 2015-11-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105045658B (en) | A method of realizing that dynamic task scheduling is distributed using multinuclear DSP embedded | |
US20190377604A1 (en) | Scalable function as a service platform | |
CN110719206B (en) | Space-based FPGA (field programmable Gate array) virtualization computing service system, method and readable storage medium | |
CN101452404B (en) | Task scheduling apparatus and method for embedded operating system | |
Sengupta et al. | Scheduling multi-tenant cloud workloads on accelerator-based systems | |
CN103677990B (en) | Dispatching method, device and the virtual machine of virtual machine real-time task | |
CN104503832B (en) | A kind of scheduling virtual machine system and method for fair and efficiency balance | |
US10782999B2 (en) | Method, device, and single-tasking system for implementing multi-tasking in single-tasking system | |
CN111752971B (en) | Method, device, equipment and storage medium for processing data stream based on task parallel | |
CN111897654A (en) | Method and device for migrating application to cloud platform, electronic equipment and storage medium | |
US11042414B1 (en) | Hardware accelerated compute kernels | |
Duong et al. | A framework for dynamic resource provisioning and adaptation in iaas clouds | |
CN106250217A (en) | Synchronous dispatching method between a kind of many virtual processors and dispatching patcher thereof | |
Alvares de Oliveira Jr et al. | Synchronization of multiple autonomic control loops: Application to cloud computing | |
CN113535362A (en) | Distributed scheduling system architecture and micro-service workflow scheduling method | |
CN115686805A (en) | GPU resource sharing method and device, and GPU resource sharing scheduling method and device | |
CN109729113A (en) | Manage method, server system and the computer program product of dedicated processes resource | |
CN112395056B (en) | Embedded asymmetric real-time system and electric power secondary equipment | |
CN107528871A (en) | Data analysis in storage system | |
CN102508696A (en) | Asymmetrical resource scheduling method and device | |
CN117435324A (en) | Task scheduling method based on containerization | |
Park et al. | A scalable framework for parallel discrete event simulations on desktop grids | |
CN105874453B (en) | Consistent tenant experience is provided for more tenant databases | |
CN115658278A (en) | Micro task scheduling machine supporting high concurrency protocol interaction | |
CN113254143B (en) | Virtualized network function network element arrangement scheduling method, device and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 2021-01-07
Address after: Room B1-3-034, No. 198 Qidi Road, Economic and Technological Development Zone, Xiaoshan District, Hangzhou, Zhejiang 311200
Patentee after: Hangzhou purevision Technology Co.,Ltd.
Address before: No. 2, Taibai South Road, Yanta District, Xi'an, Shaanxi
Patentee before: XIDIAN University