EP1258114A1 - Method and device for data traffic shaping - Google Patents
Method and device for data traffic shaping
- Publication number
- EP1258114A1 (application EP01904830A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- data
- data packets
- packet
- priority
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L12/5602—Bandwidth control in ATM Networks, e.g. leaky bucket
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04Q—SELECTING
- H04Q11/00—Selecting arrangements for multiplex systems
- H04Q11/04—Selecting arrangements for multiplex systems for time-division multiplexing
- H04Q11/0428—Integrated services digital network, i.e. systems for transmission of different types of digitised signals, e.g. speech, data, telecentral, television signals
- H04Q11/0478—Provisions for broadband connections
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5629—Admission control
- H04L2012/5631—Resource management and allocation
- H04L2012/5636—Monitoring or policing, e.g. compliance with allocated rate, corrective actions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5678—Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
- H04L2012/5679—Arbitration or scheduling
Definitions
- the present invention relates to the field of high speed data packet processing for computer networking systems, and in particular to shaping streams of data packets to conform to varying data packet size and format requirements of these computer networking systems.
- Internet Service Providers (ISPs) and other network managers are recognizing the need to manage data traffic such that different users, as well as different types of data, get different treatment by the networks (i.e., different data transfer parameters).
- Some users require greater bandwidth transfer rates, and certain "priority" data requires more stringent transfer parameters (e.g., real-time traffic such as voice or video).
- because bandwidth is a scarce resource, network connections or interfaces must be managed to isolate users or groups of users that are using common connections or interfaces, in order to limit the amount of bandwidth available to each of these users or groups of users based on their subscription rates. Further, these network connections or interfaces must be able to recognize "priority" data traffic and provide data transfer based on the more stringent transfer requirements of this data.
- Switches and routers are known to "police" data on an incoming connection or interface to determine whether the transfer bandwidth is in compliance with a subscriber level. It is also known to shape data traffic in these switches and routers to control the outgoing connection or interface to ensure that the data being transferred is limited to the bandwidth assigned to the particular user or subscriber whose data packet or packets are in the traffic stream.
- traffic management or "data transfer management" is known.
- traffic management is currently limited to single-level data management (e.g., traffic shaping of individual users or traffic shaping of groups of users).
- a method and device for efficiently managing data traffic at multiple levels, and in particular, for multi-level data shaping of traffic on the Internet.
- a device and method capable of processing data traffic in parallel at two or more rates (i.e., a total rate and a sub-rate) and of providing multi-level traffic shaping to shape, for example, oversubscribed bandwidth provided to groups of users.
- traffic shaping is needed that can identify priority data and provide low latency to delay-sensitive connections.
- the present invention provides a method and device for traffic shaping at multiple levels or layers in parallel and that can provide priority shaping to certain data.
- the invention provides for shaping connections or groups of connections over a common channel or interface in order to identify and/or isolate users or groups of users from each other in order to control the data transfer rate of each user or group of users (i.e., limit bandwidth of each user or group of users to their subscription rate).
- the traffic shaper of the present invention provides parallel and efficient use of circular queues to confirm, without a search, when data packets are conformant with a provisioned bandwidth.
- the traffic shaper also schedules the provisioned bandwidth to a logical link and thereafter to an output port.
- the traffic shaper also provides for transmission of bursts of data packets.
- the invention includes the efficient use of linked lists to provide priorities to certain data on specific connections for low latency to delay sensitive connections. Shaping is also possible at different bandwidths (i.e., a total rate and a sub-rate) and at multiple levels or layers to enable the shaping of, for example, oversubscribed bandwidth provided to groups of users.
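As a sketch of the circular-queue idea described above: each user's next conformant departure time maps directly to a bin index, so advancing a current-bin pointer in real time yields the set of newly conformant users without any search. The class and method names below are illustrative assumptions, not taken from the patent.

```python
# Illustrative calendar-style circular queue: users whose next conformant
# departure time falls within a bin become eligible when the current bin
# pointer reaches that bin -- no search over all users is needed.
class CircularQueue:
    def __init__(self, n_bins, t_units):
        self.n = n_bins          # number of logical bins
        self.t = t_units         # time units covered by each bin
        self.bins = [[] for _ in range(n_bins)]
        self.current = 0         # current bin pointer (CBPTR)

    def schedule(self, user_id, departure_time):
        """Place a user in the bin covering its earliest departure time
        (the wrap-around comes from the modulo)."""
        index = (departure_time // self.t) % self.n
        self.bins[index].append(user_id)

    def advance(self):
        """Move the pointer one bin; everything in it is now conformant."""
        ready = self.bins[self.current]
        self.bins[self.current] = []
        self.current = (self.current + 1) % self.n
        return ready
```

The conformance check is O(1) per time step: no per-user timers fire and no sorted structure is maintained, which is what allows many users and logical links to be shaped in parallel.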
- the traffic shaping of the present invention is preferably software based, but uses hardware control for implementing overall data traffic transfer.
- the invention holds data packets in a buffer until user and protocol information is processed and the data packet is transmitted to its next network destination.
- the invention provides inputs to examine incoming traffic from various interfaces, for example asynchronous transfer mode (ATM) , Gigabit Ethernet or Packet over synchronous optical network (POS) .
- This allows for the extraction of relevant control information from data packets (e.g., characterizing information, such as user identification information), which is then forwarded to appropriate internet processing engines (IPEs) of the present invention.
- the IPEs provide protocol processing and management of users and tunnels. For example, data packet identification may be provided as described in the co-pending U.S. applications disclosed herein.
- Multiple protocol processing units (PPUs) may be implemented depending upon the amount of bandwidth managed.
- the invention shapes the flow of traffic on the egress side of the traffic shaper to ensure conformity to negotiated transfer parameters. The invention thereby gives a pre-defined shape to the data stream profile.
- the invention provides both a method and device for traffic shaping.
- the method of shaping data packets in a data stream is provided to control the rate of transfer of the data packets having characterizing information corresponding to users and a predefined data packet transfer rate for each of the packets.
- the method preferably comprises the steps of processing the characterizing information in parallel to thereby determine a plurality of data transfer requirements, and forwarding the data packets to a next destination at the predefined data packet transfer rate based on the determined data transfer requirements.
- the method may further comprise processing a plurality of levels of user information in parallel and wherein the processing includes processing in parallel a first level of user information comprising individual user information and processing another level of user information comprising group user information.
- processing the characterizing information in parallel may include determining a plurality of levels of transfer requirements.
- the method may further include forwarding the data packets at a plurality of predefined data packet transfer rates based on the plurality of levels of determined data transfer requirements and storing the data packets while the parallel processing is performed.
- the data packets being processed may be of variable length.
- the method may further provide forwarding the data packets at a higher data packet transfer rate than the predefined data packet transfer rate and storing the data packets by logically associating the data packets of each individual user as they are stored.
- the step of processing the characterizing information may further comprise determining whether a data packet is a priority data packet or a non-priority data packet and parallel processing further comprises processing in parallel the individual user and group user information for both the priority and non-priority data packets.
- the method may also include the step of separately scheduling a data transfer rate for each of the priority and non-priority data packets.
- the invention further provides a method of shaping a data stream to control the transfer rate of a plurality of data packets comprising the data stream with the method comprising the steps of storing the data packets while a data transfer rate is determined for each data packet, determining individual user and group user desired data transfer rates for each of the stored data packets, processing in parallel the individual user and group user desired data transfer rates to determine an allowable data packet transmission time for each of the stored data packets, and transmitting the stored data packets on or after the allowable data packet transmission times.
- the method may further comprise determining a plurality of levels of desired data transfer rates for each of the individual user and group user data transfer rates and logically associating the data packets of each individual user.
- the method may further include determining whether each data packet is a priority or non-priority data packet and parallel processing the desired data transfer rates for both the priority and non-priority data packets for the individual and group users.
- the method may provide separately scheduling the allowable departure time for each of the priority and non-priority data packets and processing the characterizing information for variable length data packets.
- the method may also provide for transmitting the data packets before the allowable data packet transmission times.
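The method steps above can be sketched as follows: a stored packet may be transmitted only once it is conformant with both the individual user's rate and the group (logical link) rate, i.e. at the later of the two allowable times. The function and field names are assumptions for illustration, not the patent's terms.

```python
# Illustrative sketch of the parallel two-level decision described above.
def allowable_transmission_time(user_time, link_time):
    """A packet conforms at both levels only at the later of the two
    allowable times determined in parallel."""
    return max(user_time, link_time)

def eligible_packets(stored, current_time):
    """Return the stored packets whose allowable transmission time has
    arrived; the rest remain buffered."""
    return [p for p in stored
            if allowable_transmission_time(p["user_time"], p["link_time"])
            <= current_time]
```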
- the device of the present invention is preferably a data stream shaper providing multiple level shaping of a data stream, with the data stream comprising data packets having characterizing information.
- the data stream shaper comprises a plurality of processors for multiple level parallel processing of the characterizing information to determine a plurality of allowable user data transfer rates.
- a buffer may be connected between an input and the processors for storing the data packets as the processors process the characterizing information and the processors may be configured to process the characterizing information for data packets in different processors.
- the data stream shaper may be provided wherein each of the plurality of processors is configured for processing each level in a separate one of the different processors.
- the data stream shaper may also provide logical association of related data packets in the buffer.
- the data stream shaper buffer may further comprise a priority storage area for storing priority data packets until the priority data packets are to be transmitted and a non-priority storage area for storing non-priority data packets until the non-priority packets are to be transmitted.
- the data stream may comprise a plurality of data streams emanating from a plurality of users, and wherein the multiple users are associated into groups, with the plurality of processors being configured to determine allowable data transfer rates based on characterizing information for the users and the groups.
- a plurality of line cards may also be provided with the buffer comprising a plurality of buffer elements mounted on the plurality of line cards.
- the plurality of processors may also be mounted on the line cards, and the data stream shaper may further comprise a packet identifier for determining characterizing information connected to the line cards through a switch fabric and a packet manager for processing the data packets into a preselected format, also connected to the line cards through said switch fabric.
- the device of the present invention may also be a controller for controlling the rate of transfer of data packets in a data stream, with the controller comprising at least one packet identifier for processing protocol and user information and at least one data stream shaper connected to said packet identifier.
- the data stream shaper comprises an input interface for receiving the data packets, an output for forwarding the data packets at a determined allowable transfer rate, and a plurality of processors for multiple level processing of the data packets in parallel to determine the determined allowable transfer rate for each of the data packets.
- Each of the packet identifiers may have a packet inspector for determining the protocol and user information for users with the controller further comprising a packet manager connected to the processors for formatting the data packets into one of a plurality of predetermined data protocols.
- the controller may include a buffer connected to the packet identifier for storing the data packets as the characterizing information is processed and the packet identifiers may determine whether each data packet is a priority or non-priority data packet.
- a device of the invention may also be provided for shaping a plurality of data streams, with each of the data streams comprising a plurality of data packets and the device including a plurality of processors for shaping the data streams, a plurality of packet managers for formatting the data packets comprising the data streams, and a switch fabric interconnecting the plurality of processors and the plurality of packet managers.
- a device of the invention may further be provided for shaping a plurality of data streams, with each of the data streams comprising a plurality of variable length data packets and the device including a plurality of line cards and a plurality of data processing cards.
- the cards shape the data streams and the switch fabric interconnects the plurality of line cards and plurality of data processing cards. Therefore, the present invention provides traffic shaping for individual users, as well as for groups of users. Although a data packet may consist of a plurality of cells, shaping is preferably performed at the data packet level and not at the data cell level. As an example, traffic shaping of the present invention is first performed on a user level, where data packets of individual users are shaped according to the individual user's profile. Next, shaping is performed on a "logical link" level, which may carry a plurality or group of users. Both levels of shaping are performed in parallel to increase processing speed. The invention also provides shaping based on priority and non-priority traffic, wherein priority traffic is preferably given strict priority over non-priority traffic (e.g., real-time traffic versus non-real-time traffic).
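The strict-priority behavior described above can be sketched as a simple two-queue selection; the queue names are illustrative:

```python
from collections import deque

def next_packet(priority_q: deque, non_priority_q: deque):
    """Strict priority: the priority (real-time) queue is always drained
    before any non-priority packet is considered."""
    if priority_q:
        return priority_q.popleft()
    if non_priority_q:
        return non_priority_q.popleft()
    return None  # both queues empty
```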
- the present invention controls the flow of data packets such that the characteristics of the flow, after being processed and shaped, are readily definable which allows users to negotiate these determined transmission parameters more easily.
- the invention also facilitates the use of network capacity with traffic management because of the ability to better predict the traffic stream characteristics. Monitoring the traffic on the network side (policing) is much easier and more reliable because the input data flow has known characteristics. While the principal advantages and features of the present invention have been explained above, a more complete understanding of the invention may be attained by referring to the description of the preferred embodiment which follows.
- Fig. 1 is a schematic block diagram of a system constructed according to the principles of one embodiment of the present invention for shaping data traffic;
- Fig. 2 is a schematic block diagram of a line card in the system of Fig. 1;
- Fig. 3 is another schematic block diagram of a line card in the system of Fig. 1;
- Fig. 4 is a schematic block diagram of an IPE card in the system of Fig. 1;
- Fig. 5 is another schematic block diagram of an IPE card in the system of Fig. 1;
- Fig. 6 is a chart of the user table of the present invention;
- Fig. 7 is a chart of the logical link table of the present invention;
- Fig. 8 is a schematic block diagram of a portion of the memory in a PPU of the system of Fig. 1;
- Fig. 9 is a block time line representation of processing functions of the present invention;
- Fig. 10 is a schematic block diagram of a user circular queue process of the present invention;
- Fig. 11 is a schematic block diagram of a logical link circular queue process of the present invention;
- Fig. 12 is a schematic block diagram of the circular queues of Figs. 8 and 9;
- Fig. 13 is a schematic block diagram of a portion of the memory of a PPU of the present invention;
- Fig. 14 is a flow chart of a priority data packet transmitting procedure of the present invention;
- Fig. 15 is a flow chart of a non-priority data packet transmitting procedure of the present invention;
- Fig. 16 is a flow chart of a non-priority data packet receiving procedure of the present invention;
- Fig. 17 is a flow chart of a priority data packet receiving procedure of the present invention;
- Fig. 18 is a block diagram of the process of data packet scheduling of the present invention;
- Fig. 19 is an illustration of "traffic management"; and Fig. 20 is a flow diagram of the "traffic management" process.
- A system in which the preferred traffic shaping of the present invention is implemented is shown in Fig. 1 and is indicated generally by reference character 100.
- the system may be provided as a mid-network router or hub, but may be any type of high-speed switch providing transmission of data.
- streams of data cells or packets 102 are provided at inputs to the system. The number of inputs may be varied depending upon bandwidth demands.
- the data packets are then processed, formatted and shaped (using the traffic shaper of the present invention) before being provided at the output of the system 100 as a shaped data stream 103 for transmission to their next destination.
- the data stream is shaped at the data packet level and not at the data cell level. Therefore, common connections or inputs provided with different data packets from different users and groups of users are shaped according to the bandwidth available to each user or group of users (i.e., subscribed transmission rate).
- the system 100 is preferably provided with a plurality of line cards 104, a plurality of Internet processing engine (IPE) cards 106 and a switch fabric 108 providing bi-directional communication between the line cards 104 and the IPE cards 106.
- the line cards 104 provide the physical interface to the transmission medium and examine data traffic from various interfaces, such as ATM sources, to extract relevant characterizing information from the data packets including, for example, protocol and user identification information.
- the relevant control information extracted from the data packets in the data streams are forwarded to appropriate IPE cards 106.
- the IPE cards 106 use this control information to provide protocol processing, and for managing users and tunnels.
- the line cards 104 and IPE cards 106 are provided with a plurality of general purpose processors, shown in Figs. 2-5 as protocol processing units (PPUs) 110.
- Each of the line cards 104 and IPE cards 106 is provided with a master processor, shown as a Master PPU (MPPU) 112 in the line card 104 of Figs. 2 and 3 and in the IPE card 106 of Figs. 4 and 5.
- the MPPUs 112 are provided mainly to implement functions relating to protocol processing, as well as to supervise and control the PPUs 110 on their respective cards.
- the MPPU 112 also provides bandwidth management and processing within a card, as well as aggregating the bandwidth needs of all the PPUs 110 on a given card.
- the PPUs 110 and MPPUs 112 may be any type of general purpose processors, depending upon the demand requirements of the system 100, and may be, for example, Pentium® processor chips or PowerPC processor chips.
- the line cards 104 together terminate the link protocol and distribute the received packets based on user, tunnel or logical link (i.e., group of users) information to a particular PPU 110 on a particular IPE card 106 through the switch fabric 108. It should be recognized that if more bandwidth is needed than a single PPU can handle, the data packets will be distributed and processed over cascaded multiple PPUs 110, as shown in Figs. 2-5.
- the line cards 104 perform both ingress functions and egress functions.
- the PPUs 110 of the line cards 104 perform load distribution to the various PPUs 110 on the IPE cards 106 of the system 100.
- Data packets processed through this system 100 are queued for their destined PPU or PPUs 110 based on the packet requirements and/or limitations of the data packet, with the data packets forwarded when they are eligible for service based on the distribution of switch fabric bandwidth.
- the traffic shaping of the present invention is performed on the egress side of the line cards 104. This traffic shaping controls the flow of data traffic (i.e., bandwidth) transmitted from the egress interfaces of the line cards 104.
- This traffic shaping ensures that the data packet transmission rate is within the negotiated parameters of the particular user or groups of users, so that the data packet will not be rejected by the network or user.
- the traffic shaping operation modifies the data traffic, giving the data packets a pre-defined shape to their profile.
- each of the line cards 104 includes a plurality of physical input interfaces (PHYs) 114 for receiving and transmitting data packets.
- the number of these PHYs may be modified and the system constructed according to its specific data traffic demands. For example, in a system providing a 10 Gigabits per second (Gbps) transmission rate, the input and output data stream transmission rate at each PHY is equal to the total transmission rate or bandwidth of the system divided by the number of PHYs 114.
- the preferred system is also provided with packet inspectors or identifiers 123 and PPUs 110, and packet managers 124, each of which includes a packet formatter.
- the packet identifiers 123 and packet formatters may be provided as described in the co-pending U.S. applications disclosed herein. However, any appropriate data packet processing system may be used which provides the required information.
- the preferred embodiment includes eight PPUs 110 in each line card 104 and sixteen PPUs in each IPE card 106. However, the number of PPUs 110 is easily increased or decreased depending upon the requirements of the particular system (i.e., switch or router).
- Specifically, as shown in Figs. 4 and 5, each IPE card 106 is preferably provided with one packet inspector 123 and one packet manager 124.
- the packet inspectors 123 provide for examining the data packets and extracting the relevant control information for providing to the PPUs 110 for processing, as well as receiving back processed information from the packet managers 124 for use in
- the packet inspectors 123 provide characterizing information from the data packets, such as user identification information to the packet managers 124 to enable traffic shaping of the data packets based on the stored user information (i.e., bandwidth, priority and burst limits) in user tables 128 of the line cards 104.
- the user tables 128 are preferably provided in the memory storage connected to each of the PPUs 110 on the egress side of the line cards 104.
- Each PPU 110 includes a central processing unit (CPU) or general processing unit and memory. Additionally, as shown in Figs. 2 and 3, two packet inspectors 123 and two packet managers 124 are provided on each line card 104, one on the ingress side and one on the egress side of the line card 104. In particular, the processing is performed in the PPUs 110 with the data maintained in the packet buffer (egress) 131 until the shaping is complete and the data packets are transmitted from the PHYs 114. Data is communicated between the packet inspector 123, the packet buffers and the PPUs 110 using the buffer access controllers (BACs) 133. This provides for "splicing" or dividing the data packets provided by the packet inspector 123. As shown in Figs. 4 and 5, the IPE cards 106 are also provided with BACs 133 for communicating with the packet buffer on those cards.
- packet buffers 130 are provided throughout the system 100 to ensure that data packets transmitted and processed through the system 100 using the switch fabric 108 are held until transfer of the data packets is possible. Specifically, a buffer 130 is provided on the ingress side and a buffer 131 on the egress side of the line cards 104, as shown in Figs. 2 and 3, as well as between the packet inspectors 123 and packet managers 124 on the card. A buffer 130 is likewise provided between the packet inspector 123 and packet manager 124 of the IPE cards 106, as shown in Figs. 4 and 5.
- the packet buffer (egress) 131 between the packet inspector 123 and packet manager 124 on the egress side of the line cards 104 holds the data packets while the PPUs 110 of the line cards 104 use the characterizing information extracted by the packet inspectors 123 to process the data packets based on the user information stored in the user tables 128.
- the traffic shaping of the present invention is performed in the PPUs 110 on the egress side of the line cards 104.
- these PPUs 110 use the characterizing information from the ingress side packet inspectors 123, as well as processed information from the packet managers 124, to identify a specific user or group of users within the user table 128 or logical link table 129, respectively.
- Three parameters, stored in the user table 128, describe each user's traffic shaping profile: Total Bandwidth (TB), Burst Limit (L) and Priority Bandwidth (PB).
- the TB parameter defines the user's average bandwidth that the user is provisioned based on that user's subscription rate.
- L defines the maximum allowed transfer of a burst of data for that user.
- the PB parameter defines the user's average bandwidth for priority traffic that the user is provisioned based on that user's subscription rate.
- the PB is part of TB. Therefore, for example, if a user is assigned a TB of 10 Mbps with a PB of 4 Mbps, the user is entitled to a total of 10 Mbps of bandwidth, out of which up to 4 Mbps can be for priority traffic (e.g., real-time traffic).
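A minimal numeric sketch of the TB/PB relationship, using the 10 Mbps / 4 Mbps example from the text; the function name is an assumption:

```python
TB = 10_000_000   # Total Bandwidth, bits per second (patent's example value)
PB = 4_000_000    # Priority Bandwidth, carved out of TB

def admissible(priority_bps, non_priority_bps):
    """True when a user's offered load conforms to both limits: priority
    traffic is capped at PB, and priority plus non-priority at TB."""
    return (priority_bps <= PB
            and priority_bps + non_priority_bps <= TB)
```

Because PB is part of TB rather than additional to it, a user sending no priority traffic may still use the full 10 Mbps for non-priority data.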
- the user-defined parameters include a user identification (UID) to determine the user associated with the particular data packet and a physical identification (PHYID) to determine which PHY 114 is associated with the relevant data packet.
- Each individual user is a member of a group of users defined preferably as the logical link.
- an individual user subscribes through an Internet Service Provider (ISP) for service and access to the Internet.
- Each ISP contracts for bandwidth and that bandwidth is defined by the parameters of the logical link.
- the logical link defined parameters are in the logical link table 129, as shown in Fig. 7, and include three parameters associated with each logical link to describe that particular logical link's (e.g., ISP's) traffic shaping profile. Specifically, these parameters include the following: Logical Link Total Bandwidth (bits per second) (LLTB), Logical Link Burst Limit (L) and Logical Link Priority Bandwidth (LLPB).
- the LLTB parameter defines the logical link's average bandwidth that the logical link is provisioned based on that logical link's subscription rate.
- the L parameter defines the maximum allowed transfer of a burst of data for that particular logical link.
- the LLPB parameter defines the logical link's average bandwidth for priority traffic that the logical link is provisioned based on that logical link's subscription rate. The LLPB is part of LLTB.
- if a logical link is assigned an LLTB of 10 Mbps with an LLPB of 4 Mbps, the logical link is entitled to a total of 10 Mbps of bandwidth, out of which up to 4 Mbps can be for priority traffic (e.g., real-time traffic).
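The two profile tables of Figs. 6 and 7 might be sketched as follows; the field names follow the text, but the concrete record layout and example values are assumptions:

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    uid: int      # user identification (UID)
    phyid: int    # physical interface (PHYID) carrying this user's packets
    tb: float     # Total Bandwidth, bits per second
    l: float      # Burst Limit
    pb: float     # Priority Bandwidth, part of tb

@dataclass
class LogicalLinkProfile:
    lltb: float   # Logical Link Total Bandwidth, bits per second
    l: float      # Logical Link Burst Limit
    llpb: float   # Logical Link Priority Bandwidth, part of lltb

# Hypothetical entries: user 7 on PHY 0, belonging to logical link 1.
user_table = {7: UserProfile(uid=7, phyid=0, tb=10e6, l=1500 * 8, pb=4e6)}
logical_link_table = {1: LogicalLinkProfile(lltb=100e6, l=9000 * 8, llpb=40e6)}
```

The lookup is keyed by the characterizing information extracted on ingress, so the egress PPUs can shape each packet against both its user profile and its logical link profile.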
- Data packets are maintained in the buffer 131 on the egress side of the line cards 104 until the user or logical link information described above is processed and the data packets are transmitted out of the PHYs 114 to their next destination. It should be noted that the data packets remain in this buffer while the PPUs 110 process the relevant information to determine the limits of the user or logical link associated with the data packets in the buffer 131.
- the data packets are logically organized on a per-user basis in the buffer 131 on the egress side of the line cards 104. As shown in that figure, the data packets are maintained in two separate queues, one for priority data packets and one for non-priority data packets. These queues are organized on a first-in-first-out basis.
- the PPUs 110 processing the user parameters in the tables determine the Earliest Theoretical Departure Time Total (ETDTT) for any user with data packets in the non-priority queue and the Earliest Theoretical Departure Time Priority (ETDTP) for any user with packets in the priority queue.
- This value is calculated and stored in the user table 128 the first time a user's data packet enters the buffer's queue, which may be the first time that particular user has ever transmitted data packets through the system 100, or may be after the particular user's queue is empty or clear and a first data packet is received again in the queue.
- the PPUs calculate the time at which data packets for a particular user become conformant with their defined parameters (i.e., TB and PB). This time, as shown in Fig. 9, is used by the PPUs 110 to determine the next time at which a particular user's data packet becomes conformant and is ready for processing using the logical link's parameters.
- a user who has both priority and non-priority packets in the queues of the buffer 131 will have an ETDTT and an ETDTP pointer.
- Each PPU 110 in the system 100 has a real time counter (RTC), which maintains real time for the operations of the system 100.
- RTC real time counter
- Each PPU 110 also maintains within its memory a software defined User Circular Queue (UCQ) 132 as shown in Fig. 10 and a software defined logical link Circular Queue (LLCQ) 134 as shown in Fig. 11.
- The UCQ has n logical bins 136 (0, 1, 2, ..., n-1), with each bin representing a time interval of T units.
- the Time unit T for each bin 136 can be defined in the software as required by the bandwidth and other transfer parameters of the system 100.
- The UCQ 132 and LLCQ 134 are provided with a wrap-around feature such that a particular user's and/or a particular logical link's ETDTT does not have to be processed in one cycle of the UCQ 132 or LLCQ 134 (i.e., n-1 bins or m-1 bins).
- the UCQ 132 preferably has an associated current bin pointer (CBPTR) which points to one of the bins 136 of the UCQ 132.
- CBPTR current bin pointer
- Every elapsed time interval T, the CBPTR is updated to point to the next consecutive bin 136.
- Thus, every nT time units the CBPTR will point to the same bin. Note that the time nT must be greater than the maximum increment (packet length / bandwidth) corresponding to the lowest rate user that the system supports.
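The bin-and-pointer mechanics above can be sketched as a simple time wheel. This is an illustrative sketch only: the class and method names are assumptions, not the patent's implementation, and offsets beyond one cycle simply wrap modulo n (a full implementation would also track the cycle count, per the wrap-around feature described in the text).

```python
class CircularQueue:
    """Time-wheel sketch of the UCQ/LLCQ: n bins, each T time units wide."""

    def __init__(self, n_bins, t_units):
        self.n = n_bins                    # number of logical bins (0 .. n-1)
        self.t = t_units                   # time interval T covered by each bin
        self.bins = [[] for _ in range(n_bins)]
        self.cbptr = 0                     # current bin pointer (CBPTR)

    def bin_for_time(self, departure_time, current_time):
        # Map a theoretical departure time to a bin relative to the CBPTR.
        # Departures more than n*T ahead wrap modulo n in this simple sketch.
        offset = int((departure_time - current_time) // self.t)
        return (self.cbptr + offset) % self.n

    def schedule(self, user_id, departure_time, current_time):
        self.bins[self.bin_for_time(departure_time, current_time)].append(user_id)

    def advance(self):
        # Every elapsed T units the CBPTR moves to the next consecutive bin;
        # entries in the bin it leaves are now conformant and ready to send.
        conformant = self.bins[self.cbptr]
        self.bins[self.cbptr] = []
        self.cbptr = (self.cbptr + 1) % self.n
        return conformant
```

For example, with 8 bins of 10 units each, a packet whose departure time is 25 units ahead lands in the third bin and is returned on the third call to `advance()`.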
- NPLL Non-Priority Linked List
- PLL Priority Linked List
- The NPLL is a doubly linked list and the PLL is a singly linked list, as shown in this figure. Therefore, for non-priority data packets, pointers point to both the previous and next user in the linked list; for priority data packets, pointers point only to the next user. This is because when priority data packets are processed, both the ETDTP and the ETDTT must be recalculated, so the user may have to be delinked from the middle of the NPLL, which requires a doubly linked list.
- The ETDTT and the ETDTP are calculated whenever a packet is transmitted on the physical interface. For example, if at current time (CT) a priority packet for user U1 is transmitted, then for that user:
- ETDTP = CT + (Packet Length / Priority Bandwidth) - L
- ETDTT = ETDTT + (Packet Length / Total Bandwidth)
- the NPLL 140 must be a doubly linked list.
- ETDTT = ETDTT + (Packet Length / Total Bandwidth)
- ETDTT = CT + (Packet Length / Total Bandwidth) - L
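The departure-time updates quoted above (and the non-priority variants given later in the text) might be sketched as follows. The meaning of the constant L and the condition selecting between the CT-based and incremental forms are not fully specified in this excerpt, so both are assumptions: L defaults to zero, and the CT-based form is applied when the stored time has fallen behind the current time.

```python
def update_on_priority_tx(user, ct, packet_length, L=0.0):
    # Priority transmit: ETDTP restarts from CT; ETDTT advances incrementally.
    user["etdtp"] = ct + packet_length / user["pb"] - L        # PB = priority bandwidth
    user["etdtt"] = user["etdtt"] + packet_length / user["tb"] # TB = total bandwidth

def update_on_non_priority_tx(user, ct, packet_length, L=0.0):
    # Non-priority transmit: assumed condition (elided in the source text)
    # chooses between restarting from CT and advancing incrementally.
    if user["etdtt"] < ct:
        user["etdtt"] = ct + packet_length / user["tb"] - L
    else:
        user["etdtt"] = user["etdtt"] + packet_length / user["tb"]
```

With PB = 10 and TB = 20 units of length per unit time, transmitting a 50-unit priority packet at CT = 100 yields ETDTP = 105 and advances ETDTT by 2.5.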
- Updates to the entries in a logical link queue are based on the logical link corresponding to the user for which a data packet was transmitted.
- Every user in the NPLL 140 is allowed to transmit the head-of-line or first-in-line packet in the user's non-priority buffer (indicated by the NHP pointer).
- Every user in the PLL 142 is allowed to transmit the head-of-line or first-in-line packet in the user's priority buffer (indicated by the PHP pointer).
- Each PPU 110 also maintains the LLCQ 134 with m logical bins 138, each representing a time interval of S time units. Therefore, as in the UCQ 132, a CBPTR will point to the same bin 138 every mS time interval. Note that the time mS must be greater than the maximum increment (packet length / bandwidth) corresponding to the lowest rate logical link that the system supports.
- the invention calculates and maintains an Earliest Theoretical Departure Time Total (ETDTT) for any logical link with any conformant users, and calculates and maintains an Earliest Theoretical Departure Time Priority (ETDTP) for a logical link with any conformant users having data packets in the priority queue.
- ETDTT Earliest Theoretical Departure Time Total
- ETDTP Earliest Theoretical Departure Time Priority
- the ETDTT and ETDTP for the LLCQ 134 are calculated based on the defined parameters of that particular Logical Link (i.e., LLTB and LLPB).
- a logical link that has both priority and non- priority conformant users will have an ETDTT and an ETDTP pointer.
- Corresponding to each bin 138 of the LLCQ 134 are also two linked lists of logical links, a Non-Priority Linked List (NPLL) 144 and Priority Linked List (PLL) 146.
- NPLL Non-Priority Linked List
- PLL Priority Linked List
- A CBPTR is also provided such that at a given time when the CBPTR points to a specific bin 138, all the logical links in the two linked lists are considered conformant.
- Each individual user (e.g., an individual subscriber that contracts with an ISP for access to the Internet) is associated with a logical link (e.g., an ISP), which is identified by a logical link identification (LID).
- The logical link may be assigned a certain amount of bandwidth on the particular PHYs 114 with which the user is associated. The amount of bandwidth is that amount for which the ISP contracts with a bandwidth reseller. It is possible, and in fact common, for the logical link to comprise a group of individual users whose total TB may exceed the LLTB of that logical link and/or whose total PB may exceed the LLPB of that logical link (i.e., oversubscribed). Referring now to Fig.
- each logical link is provided with two schedulers, a priority scheduler 148 for scheduling the transmission of priority data packets and a non-priority scheduler 150 for scheduling the transmission of non-priority data packets.
- The data packets of users that become conformant in the UCQ 132 based upon their user parameters are linked to the logical link to which they belong in the LLCQ 134, and when that logical link becomes conformant, the users are placed in one of the two schedulers of the logical link.
- The transmission of data packets of users in the schedulers is determined and scheduled by Deficit Round Robin (DRR) as disclosed in "Efficient Fair Queueing using Deficit Round Robin" by Madhavapeddi Shreedhar and George Varghese, Proceedings of SIGCOMM, August 1995.
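A minimal per-round sketch of Deficit Round Robin in the spirit of the cited Shreedhar and Varghese paper (not the patent's exact scheduler; the quantum value, flow representation, and function name are illustrative):

```python
from collections import deque

def drr_round(flows, deficits, quantum):
    """One DRR round. flows: list of deques of packet lengths;
    deficits: per-flow deficit counters carried between rounds.
    Returns the (flow_index, packet_length) pairs sent this round."""
    sent = []
    for i, q in enumerate(flows):
        if not q:
            deficits[i] = 0        # idle flows accumulate no credit
            continue
        deficits[i] += quantum     # grant this flow its quantum of credit
        # Send head-of-line packets while the deficit covers their length.
        while q and q[0] <= deficits[i]:
            pkt = q.popleft()
            deficits[i] -= pkt
            sent.append((i, pkt))
    return sent
```

With a quantum of 400, a flow holding two 300-byte packets sends one per round, while a flow holding a single 500-byte packet must accumulate credit over two rounds before it can send.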
- FIG. 14 illustrates the process used by the traffic shaper to determine if a particular priority data packet of a particular user has become conformant with the user's parameters as defined in the user table 128. The process shown also illustrates how the ETDTP pointer is updated.
- The ETDTT and the ETDTP pointers are calculated whenever a packet is first received in the buffer 131 and are updated when a user is linked to the proper linked list and delinked from the original linked list if necessary.
- ETDTP = ETDTP + (Packet Length / Priority Bandwidth)
- LLPS: continue to schedule the user (process based on DRR algorithm); else update the user's information for the LLPS (deficit register update)
- Figure 15 illustrates the process of transmitting non-priority packets using the traffic shaping of the present invention. As shown, if at current time CT a non-priority packet for a user U1 is transmitted by the LLNPS, then for that user the ETDTT is updated as follows:
- ETDTT = CT + (Packet Length / Total Bandwidth) - L; else
- ETDTT = ETDTT + (Packet Length / Total Bandwidth)
- Figure 16 illustrates in flow chart form the process executed by the traffic shaper of the present invention when a priority packet is received for shaping.
- the pseudo code for that process is as follows:
- Figure 17 illustrates the process executed by the traffic shaper of the present invention when non-priority packets are received.
- the pseudo code for that process is as follows:
- the traffic shaper first calculates and then updates the ETDTT and ETDTP pointers for use in the UCQ 132 and LLCQ 134 preferably based upon the above definitions.
- other procedures may be implemented to achieve the same or similar processing depending upon the application and requirements of the system that is shaping the data stream.
- the UCQ 132 is provided in each bin 136 with a priority pointer and non-priority pointer.
- the priority pointer includes the PNXTPTR pointing to the next user in the PLL 142. Only one pointer is required in this singly linked list.
- the non-priority pointers include a NXTPTR to point to the next user in the NPLL 140 and a PREVPTR to point to the previous user in the NPLL 140. Two pointers are required because as described herein, the NPLL 140 is doubly linked. Each of the users in the linked lists is associated with a logical link. Therefore, as shown in Fig.
- each user has a logical link association, such that when any of the users become conformant to that user's transfer parameters, it is associated with either the NPLL 144 or PLL 146 of the LLCQ 134 and added to that logical link's linked list, depending upon whether the user is a non-priority or priority user.
- a logical link that includes both priority and non-priority conformant users will have a corresponding ETDTT and ETDTP.
- the CBPTR pointer points to the next consecutive bin in the circular queue.
- When the CBPTR points to a bin 136 in the UCQ 132, all of the users, both priority and non-priority, are considered conformant.
- the NPLL 140 is allowed to transmit the NHP and the PLL 142 is allowed to transmit the PHP, assuming the logical link with which the user is associated can schedule the data packets of the users that are conformant.
- For group shaping, when the CBPTR points to a bin 138 in the LLCQ 134, both the priority and non-priority data packets belonging to the conformant users in the bins 138 are scheduled for transmission.
- the pointer is incremented in both the UCQ 132 and LLCQ 134. This provides the parallel multi-level or multi-layer shaping of the present invention.
- A specific example of the process of traffic shaping relating to receiving data packets is shown in Fig. 18.
- When a non-priority packet for a user is received, if the user already has non-priority packets in the buffer 131, the packet is simply appended to the non-priority packet queue. If the user has no non-priority packets, the user's ETDTT is initialized and the user is linked into the doubly linked NPLL 140 in the UCQ 132 or into the NPLL 144 in the LLCQ 134 corresponding to the user.
- When a priority packet for a user is received, if the user already has priority packets in the buffer 131, the packet is simply appended to the priority packet queue. If the user has no priority packets, the user's ETDTP is initialized and the user is linked into the PLL 142 in the UCQ 132 or into the LLCQ 134 corresponding to the user.
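The receive-side rule described in the two items above might be sketched as follows. The dictionary fields, the initialization formula (CT plus packet length over bandwidth), and the `link_user` callback are simplified stand-ins for the user table 128 and the linked-list insertion, not the patent's structures.

```python
def receive_packet(user, packet_length, ct, priority, link_user):
    """link_user(user_id, departure_time) stands in for linking the user
    into the bin's PLL/NPLL; its exact form is an assumption."""
    queue = user["pq"] if priority else user["npq"]
    if queue:
        queue.append(packet_length)   # user already linked: just enqueue
        return
    queue.append(packet_length)
    bw = user["pb"] if priority else user["tb"]   # PB or TB
    etdt = ct + packet_length / bw                # assumed initialization
    user["etdtp" if priority else "etdtt"] = etdt
    link_user(user["id"], etdt)                   # link into the linked list
```

Note that a second packet arriving while the first is still queued is only appended; the user is not linked a second time.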
- The packet is queued up in the packet buffer 131, which is organized on a per-user basis. For transmission, an entry is created in the corresponding bin. For example, if the current time is X and the ETDTT for a packet for user U4 is X5, an entry is created in the X5 bin of the scheduler. The entry is preferably just a pointer to a row in the user table 128. Similarly, if the next packet comes (U5) and its ETDTT is again calculated as X5, a new entry is not created for this user in the X5 bin of the scheduler.
- a bin preferably contains only a single pointer to an entry in the user table and if there are more users in a bin, the users are linked together in the order they are received.
- Data packets of a single bin are preferably rearranged according to the priority of the users (i.e., packets with high priority are scheduled for transmission prior to the packets of the lower priorities in a single bin) .
- The transmission sequence will be U4, U5, U7.
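The single-pointer-per-bin rule with priority-first ordering might be sketched like this; the chain representation is an assumption standing in for the linked users behind a bin's pointer, not the patent's tables.

```python
def add_to_bin(bins, bin_key, user_id, is_priority):
    """Chain a user behind a bin's single head pointer. Users arrive in
    order; priority users are kept ahead of non-priority users in the bin."""
    chain = bins.setdefault(bin_key, [])
    if is_priority:
        # insert after any priority users already present, before non-priority
        idx = sum(1 for _, p in chain if p)
        chain.insert(idx, (user_id, True))
    else:
        chain.append((user_id, False))
```

For instance, if U4 and U5 (non-priority) land in bin X5 and U7 (priority) arrives afterwards, U7 is scheduled ahead of them within the bin.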
- Fig. 19 is an illustration of the complementary functions of "traffic policing" and "traffic shaping."
- A system such as that in Fig. 1 takes input logical link groups comprised of individual users, provides "policing" of the data packets, and thereafter "shapes" the data packets to a pre-defined profile for transmission based on the individual user's transfer parameters and the logical link's transfer parameters.
- the overall process is shown in flow form in Fig. 20.
- the processing and shaping of the traffic shaper of the present invention occurs at high speed and at multiple levels in parallel due to the structure of the circular queues and the provision of the packet buffers to hold the data packets while the parallel processing is occurring.
- the traffic shaping of the present invention may be configured in alternate ways, and is not limited by the number of component parts and the specific code as described in the preferred embodiment.
- additional circular queues may be included for additional layers of parallel processing.
- the number of line cards 104 and IPE cards 106 may be scaled up or down depending upon the requirements for the "packet shaping" to be performed.
- the number of PPUs 110 on the line cards may be scaled depending upon the processing demands of the system.
- Although the traffic shaper of the present invention has been described in detail only in the context of shaping data packets through routers and switches, the traffic shaper may also be readily configured to shape data in other non-networking applications and anywhere shaping of a data stream is required.
- The various block representations as described herein represent hardware implementations of the invention, such as in chip architecture. Several of the chips or functions could be incorporated into a custom chip. Although not preferable, one or more of the functions of the traffic shaping performed in software could be implemented in hardware.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US511059 | 1983-07-05 | ||
US51105900A | 2000-02-23 | 2000-02-23 | |
PCT/US2001/000910 WO2001063860A1 (en) | 2000-02-23 | 2001-01-11 | Method and device for data traffic shaping |
Publications (1)
Publication Number | Publication Date |
---|---|
EP1258114A1 true EP1258114A1 (en) | 2002-11-20 |
Family
ID=24033284
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP01904830A Withdrawn EP1258114A1 (en) | 2000-02-23 | 2001-01-11 | Method and device for data traffic shaping |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP1258114A1 (en) |
AU (1) | AU2001232776A1 (en) |
WO (1) | WO2001063860A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10924415B2 (en) | 2016-08-24 | 2021-02-16 | Viasat, Inc. | Device shaping in a communications network |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7158788B2 (en) * | 2001-10-31 | 2007-01-02 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and apparatus for auto-configuration for optimum multimedia performance |
WO2004032429A1 (en) * | 2002-10-01 | 2004-04-15 | Telefonaktiebolaget Lm Ericsson (Publ) | Access link bandwidth management scheme |
CN101964740A (en) * | 2009-07-24 | 2011-02-02 | 中兴通讯股份有限公司 | Method and device for distributing service traffic |
CN105306384A (en) * | 2014-06-24 | 2016-02-03 | 中兴通讯股份有限公司 | Message processing method and device, and line card |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0817431B1 (en) * | 1996-06-27 | 2007-01-17 | Xerox Corporation | A packet switched communication system |
US6041059A (en) * | 1997-04-25 | 2000-03-21 | Mmc Networks, Inc. | Time-wheel ATM cell scheduling |
AU732962B2 (en) * | 1997-11-18 | 2001-05-03 | Enterasys Networks, Inc. | Hierarchical schedules for different ATM traffic |
-
2001
- 2001-01-11 WO PCT/US2001/000910 patent/WO2001063860A1/en not_active Application Discontinuation
- 2001-01-11 EP EP01904830A patent/EP1258114A1/en not_active Withdrawn
- 2001-01-11 AU AU2001232776A patent/AU2001232776A1/en not_active Abandoned
Non-Patent Citations (1)
Title |
---|
See references of WO0163860A1 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10924415B2 (en) | 2016-08-24 | 2021-02-16 | Viasat, Inc. | Device shaping in a communications network |
US11722414B2 (en) | 2016-08-24 | 2023-08-08 | Viasat, Inc. | Device shaping in a communications network |
Also Published As
Publication number | Publication date |
---|---|
WO2001063860A1 (en) | 2001-08-30 |
AU2001232776A1 (en) | 2001-09-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6490248B1 (en) | Packet transfer device and packet transfer method adaptive to a large number of input ports | |
US6122673A (en) | Port scheduler and method for scheduling service providing guarantees, hierarchical rate limiting with/without overbooking capability | |
US7649882B2 (en) | Multicast scheduling and replication in switches | |
EP1111858B1 (en) | A weighted round robin scheduling engine | |
EP0981228B1 (en) | Two-component bandwidth scheduler having application in multi-class digital communication systems | |
US5732087A (en) | ATM local area network switch with dual queues | |
US6058114A (en) | Unified network cell scheduler and flow controller | |
US6587437B1 (en) | ER information acceleration in ABR traffic | |
JP2000501260A (en) | Scheduler for information packet switch | |
WO1997034395A1 (en) | Event-driven cell scheduler and method for supporting multiple service categories in a communication network | |
JP3673025B2 (en) | Packet transfer device | |
WO2003103236A1 (en) | Buffer memory reservation | |
CA2462793C (en) | Distributed transmission of traffic streams in communication networks | |
US20020150047A1 (en) | System and method for scheduling transmission of asynchronous transfer mode cells | |
EP1111851B1 (en) | A scheduler system for scheduling the distribution of ATM cells | |
Chiussi et al. | Implementing fair queueing in atm switches: The discrete-rate approach | |
JP3906231B2 (en) | Packet transfer device | |
EP1258114A1 (en) | Method and device for data traffic shaping | |
US7079545B1 (en) | System and method for simultaneous deficit round robin prioritization | |
US6807171B1 (en) | Virtual path aggregation | |
WO2004062214A2 (en) | System and method for providing quality of service in asynchronous transfer mode cell transmission | |
US9363186B2 (en) | Hierarchical shaping of network traffic | |
EP1835672B1 (en) | Data-switching apparatus having a scheduling mechanism for arbitrating between requests for transfers of data sets, for a node of very high data rate communications network | |
Zhu et al. | A new scheduling scheme for resilient packet ring networks with single transit buffer | |
James et al. | A 40 Gb/s packet switching architecture with fine-grained priorities |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20020916 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR |
|
AX | Request for extension of the european patent |
Free format text: AL;LT;LV;MK;RO;SI |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: ZHAO, XINGGUO Owner name: CELOX NETWORKS, INC. Owner name: BORDES, JEAN PIERRE Owner name: HEGDE, MANJU Owner name: SCHMID, OTTO ANDREAS Owner name: MAHER, MONIER Owner name: DAVIS, CURTIS |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20050802 |