CN107391247B - Breadth-first greedy mapping method for network-on-chip application - Google Patents


Info

Publication number
CN107391247B
CN107391247B (application CN201710599782.9A)
Authority
CN
China
Prior art keywords
task
mapping
node
tasks
network
Prior art date
Legal status
Expired - Fee Related
Application number
CN201710599782.9A
Other languages
Chinese (zh)
Other versions
CN107391247A (en)
Inventor
江建慧
陆曹波
张颖
Current Assignee
Tongji University
Original Assignee
Tongji University
Priority date
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN201710599782.9A priority Critical patent/CN107391247B/en
Publication of CN107391247A publication Critical patent/CN107391247A/en
Application granted granted Critical
Publication of CN107391247B publication Critical patent/CN107391247B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to a breadth-first greedy mapping method for network-on-chip applications, which quickly and efficiently generates an application mapping scheme by jointly considering the topology of the tasks in a network-on-chip application and the structure of the application mapping region, thereby reducing internal congestion of the network on chip. The method comprises the following steps: a first task mapping step, in which a node for mapping the first task is selected according to the number of successor tasks of the first task and the out-degree of the network-on-chip nodes, where the out-degree of a node is defined as its number of available neighbor nodes; and a subsequent task mapping step, in which nodes for mapping the successor tasks are selected sequentially in a breadth-first greedy manner. Compared with the prior art, the results of the method are close to those of an exhaustive algorithm, while its time complexity remains at a low polynomial level.

Description

Breadth-first greedy mapping method for network-on-chip application
Technical Field
The invention relates to a mapping method of network-on-chip application, in particular to a breadth-first greedy mapping method of network-on-chip application.
Background
A network on chip (NoC) is an on-chip communication system for multi-core processors with excellent spatial scalability and parallel communication capability. In a network on chip, each core divides its information into packets and transmits them through the channels and routers of the on-chip network. In this way, on-chip communication does not suffer from the large transmission delays and throughput limitations of a bus system. Even when many cores are connected to the network, the network on chip can concurrently process single-task applications or the tasks of multiple applications.
Application mapping, i.e. the assignment of the tasks of an application to individual network-on-chip nodes, is one of the most important problems in networks on chip. Specifically, each task of the application is mapped to a corresponding node. When the first task of the application completes, it transmits packets to its immediate successors; once the successor tasks receive these packets, they start running, and the process repeats until all tasks have executed. During this process, the total traffic (proportional to energy consumption) can be regarded as a function of the number of packets and of their transmission path lengths, the latter depending mainly on the mapping. An unsuitable mapping causes internal packets to block each other during transmission or makes transmission paths too long, resulting in serious internal congestion of the network on chip. It is also common for multiple applications to execute in parallel on a network on chip. In this case, an incorrect mapping may cause severe external congestion: some tasks may be mapped to nodes that are not connected to each other, fragmenting the mapping region, so that packet transmission must cross the regions occupied by other applications and different applications block each other. Fragmentation of an application mapping region is shown in Fig. 1 for the region of App3. Once internal or external congestion occurs, transmission paths lengthen or packets block each other; the execution time and total energy consumption of the application then increase, and the performance of the network on chip degrades greatly.
CoNA is a smart hill-climbing algorithm proposed by Fattah et al. in "Smart Hill Climbing for Agile Dynamic Mapping in Many-Core Systems" (M. Fattah et al., DAC 2013, Austin, TX, USA, pp. 39:1-39:6). In this algorithm, a node with the required number of neighbor nodes is first selected, and then a heuristic hill-climbing search is used to find a contiguous mapping region. This avoids fragmentation of the region and reduces external congestion; furthermore, the algorithm prefers square regions for mapping, since in that case the average traffic is very small, and it preferentially selects nodes at the border. CoNA maps App3, which contains six tasks, onto a network on chip as shown in Fig. 2. However, after obtaining a contiguous mapping region, this method cannot solve the problem of internal congestion. On the one hand, CoNA always maps the first task to the central node and then maps the successor tasks to the remaining nodes by Manhattan distance; in doing so it ignores the topology of the tasks and misses opportunities to further reduce internal congestion. On the other hand, when the mapping region has an irregular structure, as shown for App6 in Fig. 3, starting the mapping from the center ignores the structure of the mapping region and may aggravate internal congestion.
When the network on chip runs an application, the internal congestion of the application has a great influence on the overall performance of the network on chip. Therefore, a new mapping method for network-on-chip applications needs to be proposed.
Disclosure of Invention
The present invention aims to overcome the defects of the prior art and provide an efficient and fast breadth-first greedy mapping method for network-on-chip applications.
The purpose of the invention can be realized by the following technical scheme:
a breadth-first greedy mapping method for network-on-chip applications includes:
a first task mapping step, namely selecting a node for mapping the first task according to the number of successor tasks of the first task and the out-degree of the network-on-chip nodes, wherein the out-degree of a node is defined as the number of its available neighbor nodes;
and a subsequent task mapping step, namely sequentially selecting nodes for mapping the subsequent tasks in a breadth-first greedy manner.
In the first task mapping step, the following conditions are provided:
a) when the out-degree of a node is equal to the number of successor tasks of the first task, selecting that node as the node for mapping the first task;
b) when the out-degree of every node is smaller than the number of successor tasks of the first task, selecting the node with the largest out-degree as the node for mapping the first task;
c) when the out-degree of every node is larger than the number of successor tasks of the first task, selecting the node with the smallest out-degree as the node for mapping the first task.
When more than one node satisfies condition a), b) or c), a node at the boundary or at a corner is preferentially selected as the node for mapping the first task.
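A minimal sketch of the first-task selection rules, assuming an out-degree dictionary and a set of boundary/corner nodes (these data structures are illustrative assumptions, not the patent's own notation):

```python
def select_first_node(out_degree, successor_count, boundary_nodes):
    """Pick the node that will host the first task (sketch).

    out_degree: dict mapping node -> number of available neighbours
    successor_count: number of immediate successors of the first task
    boundary_nodes: set of nodes on the boundary or at a corner
    """
    degrees = list(out_degree.values())
    exact = [n for n, d in out_degree.items() if d == successor_count]
    if exact:
        candidates = exact                                  # condition a)
    elif max(degrees) < successor_count:
        best = max(degrees)                                 # condition b)
        candidates = [n for n, d in out_degree.items() if d == best]
    elif min(degrees) > successor_count:
        best = min(degrees)                                 # condition c)
        candidates = [n for n, d in out_degree.items() if d == best]
    else:
        # Mixed case not covered by a)-c): fall back to the closest fit
        # (this fallback is an assumption of the sketch).
        candidates = [min(out_degree,
                          key=lambda n: abs(out_degree[n] - successor_count))]
    # Tie-break: prefer a node at the boundary or at a corner.
    preferred = [n for n in candidates if n in boundary_nodes]
    return (preferred or candidates)[0]
```

The tie-break mirrors the rule above: among equally good candidates, a boundary or corner node is taken first to avoid fragmenting the mapping region.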
In the subsequent task mapping step, the tasks of each layer of the task topology are mapped in breadth-first order, and within each layer the mapping proceeds greedily: each time, the unmapped task with the largest incoming data volume is taken as the current task and is mapped to the node with the smallest Manhattan distance to the node holding its predecessor task, until all tasks are mapped.
The subsequent task mapping step specifically comprises the following steps:
101) for the tasks on the same layer, sorting the tasks by incoming data volume, as determined by a breadth-first search, to form a queue;
102) the queue outputs the tasks in descending order of incoming data volume;
103) for the output current task, judging whether it has only one predecessor task; if so, executing step 104), otherwise executing step 105);
104) selecting the node with the smallest Manhattan distance to the node holding the predecessor task for mapping, and executing step 106);
105) traversing the available neighbor nodes of the nodes holding all predecessor tasks, selecting the node with the smallest data transmission volume for mapping, and executing step 106);
106) updating the out-degrees of the neighbor nodes of the mapped node;
107) judging whether the queue is empty; if so, executing step 108), otherwise returning to step 102);
108) repeating steps 101)-107) until the tasks of all layers are mapped.
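The steps above can be sketched as a single loop; the parameter shapes (predecessor lists, an edge-weight dictionary, precomputed breadth-first layers) are illustrative assumptions, and the sketch tracks the set of free nodes directly rather than updating out-degrees, so step 106) is only implicit:

```python
from collections import deque

def map_successors(preds, weight, layers, md, available, placement):
    """Breadth-first greedy placement of successor tasks (sketch).

    preds: task -> list of predecessor tasks
    weight: (pred, task) -> number of packets on that edge
    layers: lists of tasks, one list per breadth-first layer
    md: (node_a, node_b) -> Manhattan distance
    available: set of still-free nodes
    placement: task -> node, pre-seeded with the first task
    """
    for layer in layers:
        # steps 101)-102): order the layer's tasks by incoming data volume
        in_flow = {t: sum(weight[(p, t)] for p in preds[t]) for t in layer}
        queue = deque(sorted(layer, key=in_flow.get, reverse=True))
        while queue:
            task = queue.popleft()
            if len(preds[task]) == 1:                    # step 104)
                anchor = placement[preds[task][0]]
                node = min(available, key=lambda n: md(anchor, n))
            else:                                        # step 105)
                node = min(available,
                           key=lambda n: sum(weight[(p, task)] *
                                             md(placement[p], n)
                                             for p in preds[task]))
            placement[task] = node
            available.discard(node)   # step 106) handled implicitly
    return placement
```

With a single predecessor the nearest free node is taken; with several, the candidate minimising total packets-times-hops over all predecessor edges wins, which is the greedy criterion of step 105).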
In step 104), when more than one node has the minimum Manhattan distance, the node whose out-degree is closest to the number of immediate successor tasks of the current task is selected for mapping.
The Manhattan distance is calculated as follows:
MD(n_i, n_j) = |j_x - i_x| + |j_y - i_y|
where MD(n_i, n_j) is the Manhattan distance from node i to node j, (i_x, i_y) are the X and Y coordinates of node i, and (j_x, j_y) are the X and Y coordinates of node j.
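The formula translates directly into code; representing node coordinates as (x, y) tuples is an assumption made for illustration:

```python
def manhattan_distance(node_i, node_j):
    """MD(n_i, n_j) = |j_x - i_x| + |j_y - i_y| for nodes given
    as (x, y) coordinate tuples."""
    ix, iy = node_i
    jx, jy = node_j
    return abs(jx - ix) + abs(jy - iy)
```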
The maximum order of the time complexity of the method is less than or equal to 2, i.e. the method runs in at most quadratic time.
Compared with the prior art, the invention has the following advantages:
1. The application mapping method considers both the topology of the tasks and the structure of the mapping region. Regardless of the task topology, and whether the mapping region of the network on chip is square or irregular, the method efficiently computes an application mapping scheme from this information and effectively reduces internal congestion during network-on-chip data transmission.
2. When the first task is mapped, different nodes are selected depending on the number of successor tasks, and nodes at the boundary or at a corner are preferred, which to some extent avoids fragmentation of the mapping region and the resulting increase in data transmission distance.
3. The successor tasks are mapped in a breadth-first greedy manner, layer by layer, so that tasks with large incoming data volumes on different layers do not interfere with one another.
4. The time complexity of the mapping method is low: its maximum polynomial order does not exceed 2, i.e. it is at a low polynomial level. By contrast, although an exhaustive algorithm can compute the optimal solution, its time complexity is O(n!), where n is the number of tasks of an application; when n is large enough, the time the exhaustive algorithm spends searching for the optimal solution can exceed the execution time of the application under the worst mapping scheme. The present mapping method takes far less time than the exhaustive algorithm, while the energy consumption and application execution time of its final mapping scheme are both close to those obtained by the exhaustive algorithm.
5. Compared with methods in prior academic papers, the mapping method reduces energy consumption by 22%-44% and application execution time by 16%-57%.
Drawings
FIG. 1 is a diagram of a mapping applied to a network on chip;
FIG. 2 is a diagram illustrating a second mapping applied to a network on chip;
FIG. 3 is a diagram of a third mapping applied to a network on chip;
FIG. 4 is a schematic of the topology of a 6 task application;
fig. 5 is a comparison graph after mapping the application of fig. 4 to a 2 × 3 region by the CoNA method and the exhaustive method;
FIG. 6 is a schematic flow chart of a breadth-first greedy mapping method of the present invention;
FIG. 7 is a detailed flowchart of the first task mapping according to the present invention;
FIG. 8 is a detailed flow chart of the subsequent task mapping according to the present invention;
FIG. 9 is a schematic diagram of an example implementation of the present mapping method;
FIG. 10 is a graph of AWMD test results over a square mapped area;
FIG. 11 is a diagram of LPWMD test results on a square mapped region;
FIG. 12 is a graph of AWMD test results over an irregular mapped area;
FIG. 13 is a diagram of LPWMD test results on an irregular mapping region;
FIG. 14 is a comparison of the time consumption of the present mapping method, exhaustive method, and CoNA method.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
First, relevant definition and relevant index
1. Correlation definition
Internal congestion and external congestion: in a network on chip, packets blocking each other during transmission or overly long transmission paths cause serious internal congestion. A network on chip often runs several applications in parallel; when some tasks of an application are mapped to nodes that are not connected to each other, the mapping region becomes fragmented, packet transmission must cross the regions occupied by other applications, and different applications block each other, causing external congestion of the network on chip.
Applications in the network on chip: an application containing several tasks can be represented by an edge-weighted directed graph AG<T, W>, where each vertex t_i ∈ T represents a task, a directed edge from one vertex to another represents the transmission of packets between two tasks, and the edge weight w_i,j ∈ W represents the number of packets sent from task i to task j. The topology of an application containing 6 tasks is shown in Fig. 4; when task 6 has received the packets from task 4 and task 5, the application ends.
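A minimal sketch of the AG<T, W> representation as an edge-weight dictionary; the specific weights below are placeholders for illustration, not the actual values of Fig. 4:

```python
# AG<T, W>: tasks are vertices, weight[(i, j)] is the number of
# packets task i sends to task j (placeholder values).
weight = {(1, 2): 12, (1, 3): 6, (2, 4): 10, (3, 5): 8,
          (4, 6): 9, (5, 6): 13}
tasks = {t for edge in weight for t in edge}

def successors(task):
    """Immediate successors of a task in AG<T, W>."""
    return [j for (i, j) in weight if i == task]
```

A sink task such as task 6, which receives packets but sends none, simply has no outgoing edges in the dictionary.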
Mapping region and Manhattan distance: the mapping region can be described as NG<N, λ>, where n_i ∈ N represents node i and λ_i ∈ λ represents the out-degree of node i, i.e. the number of neighbor nodes available to node i along the X or Y axis. Once a task is mapped onto a node, the out-degree of each of that node's neighbors is reduced by one. In NG<N, λ>, the Manhattan distance (MD) represents the communication distance and is computed as in formula (1): MD(n_i, n_j) is the minimum number of hops a packet needs to travel from node i to node j, equal to the sum of the absolute differences of the X and Y coordinates of the two nodes.
MD(n_i, n_j) = |j_x - i_x| + |j_y - i_y|   (1)
2. Correlation index
In order to measure the influence of the mapping method on the execution of the network-on-chip application, the evaluation is performed by the following two indexes:
Average Weighted Manhattan Distance (AWMD): the AWMD describes the transmission state of internal packets. It equals the total transmission volume divided by the number of all packets, where the transmission volume of each edge is the number of packets w_i,j multiplied by the corresponding Manhattan distance MD(n_i, n_j). The AWMD is computed as in formula (2):
AWMD = Σ_{i,j} w_i,j × MD(n_i, n_j) / Σ_{i,j} w_i,j   (2)
Taking the application in Fig. 4 as an example, Fig. 5 shows the difference between the CoNA method and the exhaustive method when the application is mapped to a 2 × 3 region. With the CoNA mapping the AWMD is 85/62 ≈ 1.37, while the optimal solution obtained by the exhaustive method has an AWMD of 1.
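Formula (2) can be sketched as follows, assuming an edge-weight dictionary, a task-to-node placement and a Manhattan-distance function (hypothetical names, not the patent's notation):

```python
def awmd(weight, placement, md):
    """Average Weighted Manhattan Distance: total transmission
    volume (packets x hops) divided by the total number of packets.

    weight: (i, j) -> packets sent from task i to task j
    placement: task -> node
    md: (node_a, node_b) -> Manhattan distance
    """
    total = sum(w * md(placement[i], placement[j])
                for (i, j), w in weight.items())
    packets = sum(weight.values())
    return total / packets
```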
Longest-Path Weighted Manhattan Distance (LPWMD): assuming that packets are transmitted hop by hop from one node to another, the LPWMD equals the sum of w_max(i, j) multiplied by the corresponding MD_max(n_i, n_j), where w_max refers to the number of packets on the critical path and MD_max to the corresponding transmission distance. To compute the LPWMD, the critical path of the task graph must be found after replacing each edge weight with the product of w and MD. The LPWMD is computed as in formula (3):
LPWMD = Σ w_max(i, j) × MD_max(n_i, n_j)   (3)
Taking the CoNA mapping in Fig. 3 as an example, the critical path in that scheme consists of the edges 1-2 (12 = 12 × 1), 2-4 (20 = 10 × 2) and 5-6 (26 = 13 × 2); its LPWMD is 58.
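Formula (3) can be sketched as follows, assuming the critical-path edges have already been found (finding them requires a longest-path search over the task graph with edge weights w × MD); all names are hypothetical:

```python
def lpwmd(critical_edges, weight, placement, md):
    """Longest-Path Weighted Manhattan Distance (formula (3)):
    sum of packets x hops over the edges of the critical path.

    critical_edges: list of (i, j) task pairs on the critical path
    weight: (i, j) -> packets; placement: task -> node
    md: (node_a, node_b) -> Manhattan distance
    """
    return sum(weight[(i, j)] * md(placement[i], placement[j])
               for (i, j) in critical_edges)
```

The worked example above (12 + 20 + 26 = 58) reproduces directly under this sketch when the placement yields hop counts of 1, 2 and 2 on the three critical edges.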
Secondly, the method of the invention
As shown in fig. 6, the breadth-first greedy mapping method for network-on-chip application of the present invention includes: a first task mapping step, namely selecting a node for mapping the first task according to the number of the first task and the subsequent tasks and the out-degree of the network-on-chip node, wherein the out-degree of the node is defined as the number of available neighbor nodes of the node; and a subsequent task mapping step, namely sequentially selecting nodes for mapping the subsequent tasks in a breadth-first greedy manner.
1. First task mapping
The first task plays an important role in the overall mapping process, since it determines the traffic of the successor tasks. The first task mapping process is shown in Fig. 7; this step selects the best node for mapping the first task, considering both the task topology and the structure of the mapping region. First, when the out-degree of some node equals the number of successor tasks of the first task and there is only one such node, that node is selected to map the first task. Second, if the out-degree of every node is smaller than the number of successor tasks, the node with the largest out-degree is selected. Third, if the smallest out-degree among all nodes is larger than the number of successor tasks, the node with the smallest out-degree is selected. Fourth, whenever several nodes satisfy one of these conditions, nodes at the boundary or at a corner are preferred, which to some extent avoids fragmentation of the mapping region and the resulting increase in data transmission distance.
2. Mapping of successor tasks
The successor tasks are mapped by a breadth-first greedy method. First, the task with the largest incoming data volume is mapped to the node with the smallest Manhattan distance to the node holding its predecessor task; the task with the second-largest incoming data volume is then mapped in the same way, and the process continues until all tasks are mapped. Because large data transfers thus occur between adjacent nodes, the overall traffic is reduced. Second, the whole process follows the task topology in breadth-first order: the tasks of one layer are processed at a time and mapped greedily, so that tasks with large data flows on different layers do not interfere with one another.
The specific process is shown in Fig. 8. First, the tasks on the same layer are sorted by incoming data volume using a breadth-first search; second, the sorted tasks are placed into a queue; third, the queue outputs the tasks in order of incoming data volume; fourth, once a task is output, a suitable node is sought for it. When the task has only one predecessor, the node with the smallest MD is selected; if more than one node has the smallest MD, the node whose out-degree is closest to the number of immediate successor tasks of the current task is selected. When a task has two or more predecessors, all available neighbor nodes are tried, their data transmission volumes are computed, and the node with the smallest transmission volume is selected as the locally optimal node. Fifth, when a task is mapped to a node, the out-degrees of the corresponding neighbor nodes are updated. Sixth, when the queue is empty, the mapping of the current layer is finished and the mapping of the next layer begins. Seventh, the whole process continues until the tasks of all layers are mapped.
The process of mapping the 6-task application instance of fig. 4 to a 2 x 3 square mapping region according to the present mapping method is shown in fig. 9.
Third, experimental results and analysis
The AWMD, LPWMD and time consumption of the present mapping algorithm were computed in different cases and compared with the CoNA algorithm and the exhaustive method.
This embodiment randomly selects applications with 5, 6, 9 and 12 tasks and generates different task topologies according to the document "Task Graph Generator (TGG)" ([Online]. Available at: http:// resource. It is assumed that the number of nodes equals the number of tasks and that the mapping region is contiguous; the mapping regions are divided into square regions and irregular regions, and regions of different shapes are randomly generated by an algorithm. The actual performance of the method can thus be measured under different combinations of task topology and mapping region.
1. AWMD and LPWMD over a square region
A square mapping region is the ideal case, so the performance of the mapping method is evaluated for this case first. As shown in Fig. 10, the blue, red and green bars give the average AWMD (i.e. average energy consumption) of the exhaustive method, the present mapping method and the CoNA method, respectively, under different task topologies and mapping regions. The CoNA bars are much higher than the others because CoNA ignores the task topology and does not attempt to find an optimal solution; its unsuitable mapping schemes lengthen data transmission paths and cause higher energy consumption. The bars of the present mapping method are close to those of the exhaustive algorithm, and in some cases identical. Compared with CoNA, the present method reduces the AWMD by 29%-44%, because it gives higher priority to tasks with large incoming data volumes according to the task topology and maps them to nodes close to the nodes holding their predecessors. All tasks are mapped greedily, so large data volumes travel over short paths and the overall energy consumption decreases.
Next, the LPWMD in Fig. 11 represents the execution time of the application. Here the bars of the present mapping method are again close to those of the exhaustive method, while the CoNA bars remain high, because CoNA does not search for a suitable mapping scheme. Once a suitable mapping scheme is chosen, internal congestion is greatly reduced, each predecessor task can quickly deliver its packets to its successors, and the execution of the whole application is accelerated.
2. AWMD and LPWMD over irregular areas
The present mapping method is a general mapping method for NoCs and is also applicable when the mapping region is irregular. First, the AWMD of the various methods over irregular mapping regions is shown in Fig. 12. Because CoNA does not consider the information of the mapping region, its AWMD is high: in Fig. 12 it exceeds that of the exhaustive method by 22%-32%. The present mapping method can find a suitable mapping scheme over an irregular region, so its AWMD stays close to that of the exhaustive method. Whether or not the mapping region is regular, the AWMD of the present method does not increase much, because the method takes the information of the mapping region into account: it preferentially selects boundary nodes for the first task and maps the tasks layer by layer, by increasing Manhattan distance between nodes, following the breadth-first principle. Next, Fig. 13 shows the execution time of the application, i.e. the LPWMD. The present mapping method reduces the internal congestion of data transmission over irregular regions and thus the execution time of the application; in Fig. 13 it reduces application execution time by about 16%-42% compared with CoNA.
3. Time consumption
The present mapping method is a well-suited application mapping algorithm because its time complexity is low: its maximum polynomial order does not exceed 2. Application mapping is an NP-hard problem; an exhaustive algorithm can always find the optimal solution, but its time complexity is O(n!), where n is the number of tasks of an application, and when n is large enough, the time the exhaustive method spends finding the optimal solution can exceed the execution time of the application under the worst mapping scheme. Finally, the time complexity of CoNA is the same as that of the present mapping method, but since CoNA considers neither the task topology nor the mapping region, it cannot further reduce the AWMD and LPWMD.
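The gap between a low-polynomial cost and the exhaustive search space can be illustrated numerically; the quadratic cost model below is an assumption standing in for "maximum order ≤ 2", not a measured figure:

```python
import math

def exhaustive_candidates(n):
    """Mappings an exhaustive search must score for an n-task
    application on n nodes: one per permutation of tasks over nodes."""
    return math.factorial(n)

def polynomial_cost(n):
    """Illustrative degree-2 cost model for the present method
    (the unit constant is an assumption)."""
    return n * n
```

At n = 12 the permutation space already holds 479,001,600 candidate mappings, while the quadratic model grows only to 144 units, which is why the exhaustive method becomes impractical long before the greedy method does.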
The time consumption of the three methods is shown in Fig. 14. First, CoNA consumes the least time because it considers neither the task topology nor the structure of the mapping region. Second, the time consumed by the exhaustive method is acceptable when the number of tasks is small, but grows rapidly as the number of tasks increases: for the 9-task application it already reaches 11000 ms, and with 12 tasks the exhaustive method can hardly compute the optimal solution within an acceptable time. Third, the present mapping method consumes slightly more time than CoNA, but as the number of tasks increases its time consumption grows quite smoothly. In conclusion, the present mapping method computes a mapping scheme efficiently and quickly and reduces internal congestion.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (6)

1. A breadth-first greedy mapping method for network-on-chip applications, comprising:
a first task mapping step, namely selecting a node for mapping the first task according to the number of successor tasks of the first task and the out-degree of the network-on-chip nodes, wherein the out-degree of a node is defined as the number of its available neighbor nodes;
a subsequent task mapping step, namely sequentially selecting nodes for mapping the subsequent tasks in a breadth-first greedy manner;
in the first task mapping step, the following conditions are provided:
a) when the out-degree of a node is equal to the number of successor tasks of the first task, selecting that node as the node for mapping the first task;
b) when the out-degree of every node is smaller than the number of successor tasks of the first task, selecting the node with the largest out-degree as the node for mapping the first task;
c) when the out-degree of every node is larger than the number of successor tasks of the first task, selecting the node with the smallest out-degree as the node for mapping the first task;
in the subsequent task mapping step, the tasks of each layer of the task topology are mapped in breadth-first order, and within each layer the mapping proceeds greedily: each time, the unmapped task with the largest incoming data volume is taken as the current task and is mapped to the node with the smallest Manhattan distance to the node holding its predecessor task, until all tasks are mapped.
2. The breadth-first greedy mapping method for network-on-chip applications according to claim 1, wherein when the number of nodes satisfying the conditions a), b) or c) is greater than 1, a node at a boundary or a corner is selected as a node for mapping a first task.
3. The breadth-first greedy mapping method for network-on-chip applications according to claim 1, wherein the subsequent task mapping step specifically comprises the steps of:
101) for the tasks on the same layer, obtained by a breadth-first search, sorting them by inflow data volume to form a queue;
102) the queue outputting the tasks in descending order of inflow data volume;
103) for each output current task, judging whether it has only one predecessor task; if so, executing step 104), otherwise executing step 105);
104) selecting the node with the smallest Manhattan distance to the node where the predecessor task is located for mapping, then executing step 106);
105) traversing the available neighbor nodes of the nodes where all predecessor tasks are located, selecting the node that minimizes the data transmission volume for mapping, then executing step 106);
106) updating the out-degrees of the neighbor nodes of the mapped node;
107) judging whether the queue is empty; if so, executing step 108), otherwise returning to step 102);
108) repeating steps 101)-107) for each layer to realize the mapping of all tasks.
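Steps 101)-107) can be sketched for one layer as below. Only the single-predecessor branch (step 104)) is shown; the multi-predecessor branch of step 105) and the out-degree bookkeeping of step 106) are simplified to removing the chosen node from the free set. The data structures `inflow`, `pred`, `placement`, and `free_nodes` are assumptions for illustration.

```python
from collections import deque

def manhattan(a, b):
    # Manhattan distance between two (x, y) mesh coordinates
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def map_layer(layer_tasks, inflow, pred, placement, free_nodes):
    # 101)-102): queue the layer's tasks by decreasing inflow data volume
    queue = deque(sorted(layer_tasks, key=lambda t: inflow[t], reverse=True))
    while queue:                                  # 107): loop until queue empty
        task = queue.popleft()                    # 103): take the current task
        p = placement[pred[task]]                 # node of its predecessor
        # 104): nearest free node (Manhattan distance) to the predecessor's node
        node = min(free_nodes, key=lambda n: manhattan(n, p))
        placement[task] = node
        free_nodes.remove(node)                   # 106): node no longer available
    return placement

# Toy 2x2 mesh: task 'a' is already mapped at (0, 0); 'b' and 'c' follow it.
placement = {'a': (0, 0)}
free = {(0, 1), (1, 0), (1, 1)}
map_layer(['b', 'c'], {'b': 10, 'c': 5}, {'b': 'a', 'c': 'a'}, placement, free)
```

In the toy run, 'b' (larger inflow) is mapped first, so both 'b' and 'c' land on the two nodes at distance 1 from (0, 0), leaving only the far corner free.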
4. The breadth-first greedy mapping method for network-on-chip applications according to claim 3, wherein in step 104), when more than one node has the smallest Manhattan distance, the node whose out-degree is closest to the number of direct successor tasks of the current task is selected for mapping.
5. The breadth-first greedy mapping method for network-on-chip applications according to claim 3 or 4, wherein the Manhattan distance is calculated by the following formula:
MD(ni, nj) = |jx - ix| + |jy - iy|
wherein MD(ni, nj) is the Manhattan distance from node i to node j, ix and iy are the X and Y coordinates of node i, and jx and jy are the X and Y coordinates of node j.
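A direct transcription of the claim's formula, with coordinates as (X, Y) pairs; the function name is illustrative:

```python
def md(i, j):
    # Manhattan distance per claim 5: |jx - ix| + |jy - iy|
    return abs(j[0] - i[0]) + abs(j[1] - i[1])

print(md((1, 2), (3, 0)))  # |3-1| + |0-2| = 4
```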
6. The breadth-first greedy mapping method for network-on-chip applications according to claim 1, wherein the time complexity of the method is a polynomial of degree at most 2.
CN201710599782.9A 2017-07-21 2017-07-21 Breadth-first greedy mapping method for network-on-chip application Expired - Fee Related CN107391247B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710599782.9A CN107391247B (en) 2017-07-21 2017-07-21 Breadth-first greedy mapping method for network-on-chip application


Publications (2)

Publication Number Publication Date
CN107391247A CN107391247A (en) 2017-11-24
CN107391247B true CN107391247B (en) 2020-06-26

Family

ID=60337428

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710599782.9A Expired - Fee Related CN107391247B (en) 2017-07-21 2017-07-21 Breadth-first greedy mapping method for network-on-chip application

Country Status (1)

Country Link
CN (1) CN107391247B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109144854A (en) * 2018-07-27 2019-01-04 同济大学 A kind of software self-test method using Bounded Model detection for gate level circuit
CN113360450B (en) * 2021-06-09 2022-09-20 中山大学 Construction heuristic mapping method based on network on chip
CN114500355B (en) * 2022-02-16 2023-06-16 上海壁仞智能科技有限公司 Routing method, network-on-chip, routing node and routing device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101625673A (en) * 2008-07-07 2010-01-13 中国科学院计算技术研究所 Method for mapping task of network on two-dimensional grid chip
CN101834780A (en) * 2010-01-28 2010-09-15 武汉理工大学 Method for optimizing topological structure and mapping of network on chip
CN103428804A (en) * 2013-07-31 2013-12-04 电子科技大学 Method for searching mapping scheme between tasks and nodes of network-on-chip (NoC) and network code position
CN103580890A (en) * 2012-07-26 2014-02-12 深圳市中兴微电子技术有限公司 Reconfigurable on-chip network structure and configuration method thereof
CN103761212A (en) * 2014-01-21 2014-04-30 电子科技大学 Method for designing mapping scheme and topological structure between task and node in on-chip network
CN104102532A (en) * 2013-04-15 2014-10-15 同济大学 Low-energy-consumption-based scientific workflow scheduling method in heterogeneous cluster
CN105049315A (en) * 2015-08-07 2015-11-11 浙江大学 Improved virtual network mapping method based on virtual network partition




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200626