GB2365258A - Switching devices - Google Patents

Switching devices

Info

Publication number
GB2365258A
GB2365258A
Authority
GB
United Kingdom
Prior art keywords
ingress
egress
multicast
bandwidth
forwarder
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB0024468A
Other versions
GB0024468D0 (en)
GB2365258B (en)
Inventor
Simon Paul Davis
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Roke Manor Research Ltd
Original Assignee
Roke Manor Research Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Roke Manor Research Ltd filed Critical Roke Manor Research Ltd
Publication of GB0024468D0 publication Critical patent/GB0024468D0/en
Priority to EP01202698A priority Critical patent/EP1176767B1/en
Priority to ES01202698T priority patent/ES2288905T3/en
Priority to DE60130292T priority patent/DE60130292T2/en
Priority to AT01202698T priority patent/ATE372629T1/en
Priority to CA002353170A priority patent/CA2353170C/en
Priority to US09/915,553 priority patent/US6956859B2/en
Priority to JP2001227380A priority patent/JP4618942B2/en
Publication of GB2365258A publication Critical patent/GB2365258A/en
Application granted granted Critical
Publication of GB2365258B publication Critical patent/GB2365258B/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/25Routing or path finding in a switch fabric
    • H04L49/253Routing or path finding in a switch fabric using establishment or release of connections between ports
    • H04L49/254Centralised controller, i.e. arbitration or scheduling
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/40Network security protocols
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/20Support for services
    • H04L49/201Multicast operation; Broadcast operation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/30Peripheral units, e.g. input or output ports
    • H04L49/3018Input queuing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Described herein is a method for making bandwidth allocation for data to be sent from a plurality of ingress forwarders or LICs (310, 312, 314, 316) to a plurality of egress forwarders or LICs (320, 322, 324, 326) across a routing device (330). The data may be unicast and/or multicast. The bandwidth allocation is calculated in accordance with ingress forwarder multicast queue occupancy for each ingress forwarder (312, 314, 316), the number of multicast cells received by egress forwarders (322, 324, 326) from the ingress forwarders in the last bandwidth allocation period, and the bandwidth allocated to non-real time multicast flows from ingress forwarders (310, 312, 314, 316) to egress forwarders (320, 322, 324, 326).

Description

IMPROVEMENTS IN OR RELATING TO SWITCHING DEVICES

The present invention relates to improvements in or relating to switching devices, and is more particularly concerned with a method of adjusting bandwidth in such devices.
Traffic volume in the Internet is growing exponentially, doubling every three to six months. The current capacity of Internet Protocol (IP) routers is insufficient to meet this demand, and hence products are required that can route IP traffic at extremely large aggregate bandwidths, of the order of 10 Gbit/s to several Tbit/s. Such routers are termed "Super Routers".
Additionally, there is a growing need to support multicast (one to many or many to many) communications within the internet or any other IP based network. To support such a service, an IP router must be able to replicate packets and send them to multiple outputs on a per packet basis. In a router where bandwidth allocations are strictly controlled (in order to support Quality of Service criteria), it is necessary to determine how much bandwidth to allocate to multicast traffic across the core switching fabric.
It is therefore an object of the present invention to provide a method which overcomes the problems mentioned above.
In accordance with one aspect of the present invention, there is provided a method of allocating bandwidth for multicast traffic in a switching device connected between a plurality of ingress means and a plurality of egress means, the method comprising the steps of:- a) determining ingress multicast queue occupancy for each ingress means; b) determining the number of multicast cells received by the egress means from the ingress means in the last bandwidth allocation period;
c) determining the bandwidth at each ingress means and egress means after real time bandwidth allocations have been taken into account; and d) calculating the bandwidth allocation for the next bandwidth allocation period in accordance with the values determined in steps a), b) and c).
For a better understanding of the present invention, reference will now be made, by way of example only, to the accompanying drawings in which:- Figure 1 illustrates an ingress forwarder scheduling function; Figure 2 illustrates centralised bandwidth allocation in accordance with the present invention; and Figure 3 illustrates effective queue lengths for non-real time multicast bandwidth allocation.
The present invention relates to a method of dynamically adjusting the bandwidth allocated to multicast traffic across an asynchronous transfer mode (ATM) switch or crossbar-like switching fabric that joins several IP packet forwarder functions to form a "Super Router" node.
In order to prevent head of line blocking, unicast traffic is queued in separate logical scheduling entities (called scheduler blocks) according to which egress forwarder it is destined. The scheduler block serves a set of queues (per class or per connection) via any mechanism desired (e.g. strict priority or Weighted Fair Queuing) provided that the real time IP traffic class is guaranteed a minimum bandwidth.
However, for multicast traffic, it is not practical to queue traffic on the basis of a unique combination of egress destinations. This is because the number of queues required becomes unmanageable even for a relatively
small number of egress ports. Hence, a separate multicast scheduler block is used in each ingress forwarder containing one real time multicast queue and one or more non-real time multicast queues as shown in Figure 1.
Figure 1 shows an ingress forwarder 100 which includes a unicast scheduler block 110 for scheduling unicast traffic and a multicast scheduler block 130 for scheduling multicast traffic. Although only one unicast scheduler block 110 and one multicast scheduler block 130 are shown, it will be appreciated that any number of such scheduler blocks may be provided in any combination according to a particular application.
The unicast scheduler block 110 comprises a plurality of queues 112, 114, 116 connected to a scheduler 118 which has an output 120 connected to a particular egress forwarder (not shown), for example, egress forwarder 1 as indicated. Although only three queues 112, 114, 116 are shown, it will readily be understood that any number of queues may be provided in accordance with a particular application.
The scheduler 118 has a scheduling rate which is determined by unicast bandwidth allocation and operates to transmit cells 122, 124, 126 at the head of respective queues 112, 114, 116 according to their priority, as indicated by arrow 128, to the output 120. Unicast bandwidth allocation is described in co-pending British patent application no. 9907313.2 (docket number F21558/98P4863), incorporated herein by reference.
The multicast scheduler block 130 comprises queues 132, 134 - a real time queue 132 and a non-real time queue 134. Both queues 132, 134 are connected to a scheduler 136 through which all multicast traffic passes. The scheduler 136 has an output 138.
It will readily be appreciated that although only one real time queue and one non-real time queue are shown, there may be any number of such queues depending on the particular application.
Cells 142, 144 at the head of respective ones of the queues 132, 134 are selected for passage through the scheduler 136 to output 138 on a priority basis as indicated by arrow 146.
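The head-of-line selection just described can be sketched as a strict-priority pick between the real time and non-real time queues. This is a minimal illustration only; the function name and cell labels are assumptions, and a deployed scheduler could equally use Weighted Fair Queuing as noted earlier.

```python
def select_cell(queues):
    """Serve the head of the highest-priority non-empty queue.

    `queues` is ordered from highest priority (e.g. the real time
    multicast queue 132) to lowest (the non-real time queues)."""
    for q in queues:
        if q:
            return q.pop(0)  # transmit the cell at the head of this queue
    return None  # nothing to send this cell period

# Real time cells are always served before non-real time ones.
rt = ["rt-cell-1"]
nrt = ["nrt-cell-1", "nrt-cell-2"]
print(select_cell([rt, nrt]))  # -> rt-cell-1
print(select_cell([rt, nrt]))  # -> nrt-cell-1
```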
Incoming IP traffic from the line is queued in the relevant queues associated with a scheduler block. The choice of scheduler block is governed by the destination egress forwarder and whether it is multicast or unicast traffic. The class of service determines the specific queue to be utilised.
The overall view of a centralised bandwidth allocation arrangement 200 is shown in Figure 2. The arrangement 200 comprises a plurality of ingress forwarders 210, a plurality of egress forwarders 220, a switching network 230 and a bandwidth allocation controller 240. Each ingress forwarder 212, 214, 216, 218 can be connected to one or more egress forwarders 222, 224, 226, 228 as required via the switching network 230 under the control of the bandwidth allocation controller 240.
Although only four ingress forwarders 212, 214, 216, 218 and four egress forwarders 222, 224, 226, 228 are shown, it will be appreciated that any number of ingress and egress forwarders can be provided in accordance with a particular application.
As shown in Figure 2, each ingress forwarder 212, 214, 216, 218 interfaces with the bandwidth allocation controller 240 via links 242 and 244 - only the links 242, 244 to ingress forwarder 212 being shown for clarity. Link 242 provides the bandwidth allocation controller 240 with information relating to buffer occupancy, arrival rate of packets or cells, etc., for each ingress forwarder 212, 214, 216, 218. Link 244 provides each ingress
forwarder 212, 214, 216, 218 with scheduling decisions made by the bandwidth allocation controller 240.
Similarly, each egress forwarder 222, 224, 226, 228 interfaces with the bandwidth allocation controller 240 via link 246 which provides the bandwidth allocation controller 240 with information relating to the multicast cells sent. Again, only the link 246 with egress forwarder 222 is shown for clarity.
For every fixed period, that is, the Bandwidth Allocation Period (BAP), each ingress forwarder 212, 214, 216, 218 sends buffer occupancy (and possibly other information) via link 242 to the central bandwidth allocation controller 240. In addition, each egress forwarder 222, 224, 226, 228 sends information via link 246 on how many multicast cells were received in the last BAP from each ingress forwarder 212, 214, 216, 218. The bandwidth allocation controller 240 works out the allocation of bandwidth between all ingress/egress forwarder pairs for the next BAP and uses this to provide scheduling information to the ingress forwarders 212, 214, 216, 218 via link 244, telling them which cells/packets to transmit in the next cell period.
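The per-BAP exchange described above can be pictured as the following control-loop sketch. All class and function names are illustrative assumptions, and the proportional allocator is a deliberately simplified stand-in for the algorithm developed below, not the patent's method.

```python
class Ingress:
    """Toy stand-in for an ingress forwarder (e.g. 212 in Figure 2)."""
    def __init__(self, id_, occ):
        self.id, self.occ, self.schedule = id_, occ, None
    def buffer_occupancy(self):
        return self.occ               # report sent on link 242
    def apply_schedule(self, s):
        self.schedule = s             # decision received on link 244

def run_bap(allocate, ingresses, egress_reports):
    """One Bandwidth Allocation Period at the central controller 240."""
    # 1. Gather buffer-occupancy reports from every ingress forwarder.
    occupancy = {i.id: i.buffer_occupancy() for i in ingresses}
    # 2. Compute next-BAP allocations (egress_reports arrive on link 246).
    schedule = allocate(occupancy, egress_reports)
    # 3. Push scheduling decisions back to the ingress forwarders.
    for i in ingresses:
        i.apply_schedule(schedule[i.id])
    return schedule

# Toy allocator: grant bandwidth proportional to reported occupancy.
def proportional(occupancy, _reports):
    total = sum(occupancy.values()) or 1
    return {k: 100 * v / total for k, v in occupancy.items()}

ins = [Ingress(0, 25), Ingress(1, 75)]
run_bap(proportional, ins, {})
print(ins[0].schedule, ins[1].schedule)  # -> 25.0 75.0
```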
However, in order to include multicast functionality in the bandwidth allocation process, some additions are required to the unicast algorithm defined in British patent application no. 9907313.2 mentioned above. The unicast bandwidth allocation algorithm essentially divides the available bandwidth at ingress and egress amongst competing forwarders using the ingress queue length as a weighting factor. The queue length of the unicast scheduler block for traffic on ingress forwarder i destined for egress forwarder j is denoted by q_ij. Thus, for example, the amount of bandwidth
allocated to unicast traffic from ingress forwarder i to egress forwarder j at the egress, be_ij, is given by the following equation:-

be_ij = ABWE_j x ( q_ij / SUM_k q_kj )   (1)

Here, ABWE_j is the available bandwidth after real time reservations have been accounted for at the egress forwarder j, and SUM_k q_kj is the sum of the buffer occupancies for data destined for egress forwarder j in every ingress forwarder. The term q_ij represents the buffer occupancy of the scheduler block in ingress forwarder i destined for egress forwarder j.
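Equation (1) amounts to a weighted fair share of the spare egress bandwidth. A minimal sketch of that calculation follows; the names `egress_unicast_share`, `abwe_j` and `q` are assumptions for illustration, not taken from the patent.

```python
def egress_unicast_share(abwe_j, q, j):
    """Equation (1): divide egress forwarder j's available bandwidth
    abwe_j among ingress forwarders in proportion to the occupancy
    q[i][j] of the scheduler block queued for egress j."""
    total = sum(q[i][j] for i in range(len(q)))  # SUM_k q_kj
    if total == 0:
        return [0.0] * len(q)  # no queued cells, nothing to allocate
    return [abwe_j * q[i][j] / total for i in range(len(q))]

# Example: egress j=0 has 100 units spare after real time reservations;
# three ingress forwarders hold 10, 30 and 60 queued cells for it.
q = [[10], [30], [60]]
print(egress_unicast_share(100.0, q, 0))  # -> [10.0, 30.0, 60.0]
```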
For real time multicast flows, the fan-out and bandwidth guarantees are known in advance and the sum of a11 ingress and egress requirements can be subtracted from the available bandwidth in the same way as for real time unicast traffic flows.
As the amount of egress bandwidth required for non-real time multicast flows is not known (compared with the case for real time multicast), it must be determined by the system. One way of determining the amount of egress bandwidth required is to collect statistics at the egress forwarders on the number of multicast cells received from each ingress forwarder in the last BAP. These statistics can then be included in the queue statistics message sent to the central scheduler every BAP.

Although Figures 1 and 2 have been described with reference to ingress and egress forwarders, it will be appreciated that the ingress and egress devices are not limited to such devices and may comprise any suitable device which enables packets of data to be transferred from one side of a switching network to another.
Figure 3 illustrates a system 300 for calculating non-real time multicast bandwidth allocation. The system 300 comprises a plurality of ingress forwarders or line interface cards (LICs) 310, a plurality of egress forwarders or line interface cards (LICs) 320, and a switching network 330. Each ingress LIC 312, 314, 316 has at least one queue 342, 344, 346 as shown. Each egress LIC 322, 324, 326 receives data from one or more ingress LICs 312, 314, 316 across the switching network 330 as shown. Only one queue 342, 344, 346 is shown in each ingress LIC 312, 314, 316 for clarity.
The ingress forwarder multicast queue occupancy is denoted as mcq_i for ingress forwarder i. The number of multicast cells received by egress forwarder j from ingress forwarder i in the last BAP is denoted by mcq_ij. The bandwidth allocated to non-real time multicast flows from ingress forwarder i to egress forwarder j is denoted by mcb_ij.
The value of mcq_i is used in the ingress bandwidth fair share in the same manner as q_ij is in the unicast centralised bandwidth allocation algorithm.
The values mcq_ij take part in the egress fair share allocation by providing a proportion with which to scale the ingress multicast queue occupancies. This means that the effective weight that the occupancy of the ingress non-real time (nrt) multicast queue (mcq_i) has on an egress forwarder j (called emcq_ij) is determined by the proportion of nrt cells received by egress forwarder j from ingress forwarder i compared to all those received at egress forwarder j in the last BAP. It is therefore governed by the following equation:-

emcq_ij = mcq_i x ( mcq_ij / SUM_k mcq_kj )   (2)
The value of emcq_ij will be used in the egress bandwidth allocation functions alongside the unicast equivalents q_ij.
Thus, the equivalent of equation (1) when including the multicast traffic is given in equation (3):-

bme_ij = ABWE_j x ( emcq_ij / SUM_k ( q_kj + emcq_kj ) )   (3)

where bme_ij is the nrt multicast bandwidth allocated between ingress forwarder i and egress forwarder j, and ABWE_j, emcq_ij and q_ij are the same as before.
Similar principles can be applied at the ingress for bandwidth allocation and any left over bandwidth can be distributed between unicast and multicast allocations by any fair mechanism required.
The ingress equation for nrt multicast bandwidth becomes:-

bmi_ij = ABWI_i x ( imcq_ij / SUM_k ( q_ik + imcq_ik ) )   (4)

ABWI_i is the available bandwidth at the ingress forwarder i after real time traffic reservations have been taken into account. The term imcq_ij is the ingress equivalent of emcq_ij and is the effective weight with which to scale the ingress multicast queue occupancy. It is calculated from equation (5):-

imcq_ij = mcq_i x ( mcq_ij / SUM_k mcq_ik )   (5)
The actual allocated multicast bandwidth between ingress forwarder i and egress forwarder j is the minimum of the ingress and egress bandwidth allocations as defined in equation (6):-

mcb_ij = min( bmi_ij, bme_ij )   (6)
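The non-real time multicast allocation just described, from the effective weights through to the min of the ingress and egress shares, can be sketched end to end as follows. This is a reconstruction from the textual definitions above; the function and variable names, and the toy example, are assumptions rather than the patent's notation.

```python
def nrt_multicast_allocation(mcq, mc_recv, abwe, abwi, q):
    """Sketch of the nrt multicast allocation (equations (2)-(6)).

    mcq[i]       : ingress forwarder i's nrt multicast queue occupancy
    mc_recv[i][j]: multicast cells egress j received from ingress i last BAP
    abwe[j]      : egress bandwidth left after real time reservations
    abwi[i]      : ingress bandwidth left after real time reservations
    q[i][j]      : unicast queue occupancy, ingress i -> egress j
    """
    n_in, n_out = len(mcq), len(abwe)

    # Effective egress weights: scale mcq[i] by the proportion of nrt
    # multicast cells egress j received from ingress i in the last BAP.
    emcq = [[0.0] * n_out for _ in range(n_in)]
    for j in range(n_out):
        col = sum(mc_recv[k][j] for k in range(n_in))
        for i in range(n_in):
            emcq[i][j] = mcq[i] * mc_recv[i][j] / col if col else 0.0

    # Egress share: divide abwe[j] among unicast and multicast weights.
    bme = [[0.0] * n_out for _ in range(n_in)]
    for j in range(n_out):
        tot = sum(q[k][j] + emcq[k][j] for k in range(n_in))
        for i in range(n_in):
            bme[i][j] = abwe[j] * emcq[i][j] / tot if tot else 0.0

    # Ingress share, using the ingress-side weights (cells from ingress i
    # to egress j as a proportion of all cells ingress i sent last BAP).
    bmi = [[0.0] * n_out for _ in range(n_in)]
    for i in range(n_in):
        row = sum(mc_recv[i][k] for k in range(n_out))
        imcq = [mcq[i] * mc_recv[i][k] / row if row else 0.0
                for k in range(n_out)]
        tot = sum(q[i][k] + imcq[k] for k in range(n_out))
        for j in range(n_out):
            bmi[i][j] = abwi[i] * imcq[j] / tot if tot else 0.0

    # The granted allocation is the minimum of the two shares.
    return [[min(bme[i][j], bmi[i][j]) for j in range(n_out)]
            for i in range(n_in)]

# One ingress with 10 queued multicast cells fanning out to two egresses:
# each egress would grant the full 100 units, but the ingress can only
# justify 50 per egress, so the min caps the allocation.
print(nrt_multicast_allocation([10], [[5, 5]], [100, 100], [100], [[0, 0]]))
# -> [[50.0, 50.0]]
```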
Any remaining bandwidth not allocated after this process has been carried out can be distributed between unicast and multicast allocations in any fair manner desired.
The advantages of the scheme described above include:-
a) a 100% efficient distribution of bandwidth;
b) a fair distribution of nrt bandwidth according to ingress queue occupancies; and
c) prevention of overload of the crossbar by restricting the total amount of bandwidth allocated at ingress and egress ports to the crossbar.

Claims (2)

CLAIMS:
1. A method of allocating bandwidth for multicast traffic in a switching device connected between a plurality of ingress means and a plurality of egress means, the method comprising the steps of:- a) determining ingress multicast queue occupancy for each ingress means; b) determining the number of multicast cells received by the egress means from the ingress means in the last bandwidth allocation period; c) determining the bandwidth at each ingress means and egress means; and d) calculating the bandwidth allocation for the next bandwidth allocation period in accordance with the values determined in steps a), b) and c).
2. A method of allocating bandwidth for multicast traffic substantially as hereinbefore described with reference to the accompanying drawings.
GB0024468A 2000-07-27 2000-10-06 Improvements in or relating to switching devices Expired - Fee Related GB2365258B (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
ES01202698T ES2288905T3 (en) 2000-07-27 2001-07-16 IMPROVEMENTS IN OR RELATING TO SWITCHING DEVICES.
EP01202698A EP1176767B1 (en) 2000-07-27 2001-07-16 Improvements in or relating to switching devices
DE60130292T DE60130292T2 (en) 2000-07-27 2001-07-16 Improvement in or relating to switching equipment
AT01202698T ATE372629T1 (en) 2000-07-27 2001-07-16 IMPROVEMENT IN OR RELATING TO BROKERAGE FACILITIES
CA002353170A CA2353170C (en) 2000-07-27 2001-07-17 Improvements in or relating to switching devices
US09/915,553 US6956859B2 (en) 2000-07-27 2001-07-27 Switching devices
JP2001227380A JP4618942B2 (en) 2000-07-27 2001-07-27 Multicast traffic bandwidth allocation method for switching equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GBGB0018328.5A GB0018328D0 (en) 2000-07-27 2000-07-27 Centralised bandwidth allocation for multicast traffic

Publications (3)

Publication Number Publication Date
GB0024468D0 GB0024468D0 (en) 2000-11-22
GB2365258A true GB2365258A (en) 2002-02-13
GB2365258B GB2365258B (en) 2003-10-29

Family

ID=9896374

Family Applications (2)

Application Number Title Priority Date Filing Date
GBGB0018328.5A Ceased GB0018328D0 (en) 2000-07-27 2000-07-27 Centralised bandwidth allocation for multicast traffic
GB0024468A Expired - Fee Related GB2365258B (en) 2000-07-27 2000-10-06 Improvements in or relating to switching devices

Family Applications Before (1)

Application Number Title Priority Date Filing Date
GBGB0018328.5A Ceased GB0018328D0 (en) 2000-07-27 2000-07-27 Centralised bandwidth allocation for multicast traffic

Country Status (1)

Country Link
GB (2) GB0018328D0 (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0678996A2 (en) * 1994-03-23 1995-10-25 Roke Manor Research Limited Apparatus and method of processing bandwidth requirements in an ATM switch
EP0680180A1 (en) * 1994-03-23 1995-11-02 Roke Manor Research Limited Improvements in or relating to asynchronous transfer mode (ATM) system
GB2318250A (en) * 1996-08-21 1998-04-15 4Links Ltd A multiple output routing switch for a packet network
WO1999000949A1 (en) * 1997-06-30 1999-01-07 Sun Microsystems, Inc. A system and method for a quality of service in a multi-layer network element


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2379589B (en) * 2000-06-20 2004-08-11 Nds Ltd Unicast/multicast architecture
US7631080B2 (en) 2000-06-20 2009-12-08 Nds Limited Unicast/multicast architecture
US7882233B2 (en) 2000-06-20 2011-02-01 Nds Limited Unicast/multicast architecture
US7945672B2 (en) 2000-06-20 2011-05-17 Nds Limited Unicast/multicast architecture

Also Published As

Publication number Publication date
GB0024468D0 (en) 2000-11-22
GB0018328D0 (en) 2000-09-13
GB2365258B (en) 2003-10-29

Similar Documents

Publication Publication Date Title
US7042883B2 (en) Pipeline scheduler with fairness and minimum bandwidth guarantee
JP2608003B2 (en) Congestion control method using multiple types of frames
US7006438B2 (en) Distributed control of data flow in a network switch
CA2365677C (en) Allocating buffers for data transmission in a network communication device
US6526060B1 (en) Dynamic rate-based, weighted fair scheduler with explicit rate feedback option
JP3606565B2 (en) Switching device and method
KR100328642B1 (en) Arrangement and method relating to packet flow control
Chrysos et al. Scheduling in Non-Blocking Buffered Three-Stage Switching Fabrics.
JPH0846590A (en) Data transmission system
GB2288096A (en) Apparatus and method of processing bandwidth requirements in an ATM switch transmitting unicast and multicast traffic
CA2406074A1 (en) Method and apparatus for distribution of bandwidth in a switch
CA2353170C (en) Improvements in or relating to switching devices
US6947418B2 (en) Logical multicast packet handling
US5978357A (en) Phantom flow control method and apparatus with improved stability
EP1133110A2 (en) Switching device and method
US20040202175A1 (en) Bandwidth control method, cell receiving apparatus, and traffic control system
EP1271856B1 (en) Flow and congestion control in a switching network
GB2365258A (en) Switching devices
EP0680180A1 (en) Improvements in or relating to asynchronous transfer mode (ATM) system
Ramamurthy et al. A congestion control framework for BISDN using distributed source control
Zukerman et al. A shared medium multi-service protocol
EP1381192A1 (en) Improved phantom flow control method and apparatus
KR100204492B1 (en) Method for ensuring the jitter in hrr queueing service of atm networks
Pelsser et al. Improvements to core stateless fair queueing
Krachodnok et al. Buffer management for TCP over GFR service in an ATM network

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20141006