WO2004017650A2 - Scheduler for a shared channel - Google Patents


Info

Publication number
WO2004017650A2
WO2004017650A2 (PCT/CA2003/001256)
Authority
WO
WIPO (PCT)
Prior art keywords
subscriber
base station
subscriber stations
station
subscriber station
Prior art date
Application number
PCT/CA2003/001256
Other languages
French (fr)
Other versions
WO2004017650A3 (en)
Inventor
Faisal Shad
Original Assignee
Soma Networks, Inc.
Priority date
Filing date
Publication date
Application filed by Soma Networks, Inc. filed Critical Soma Networks, Inc.
Priority to JP2004528217A priority Critical patent/JP2005536129A/en
Priority to AU2003257341A priority patent/AU2003257341A1/en
Priority to MXPA05002000A priority patent/MXPA05002000A/en
Publication of WO2004017650A2 publication Critical patent/WO2004017650A2/en
Publication of WO2004017650A3 publication Critical patent/WO2004017650A3/en

Classifications

    • H04L 47/2416: Real-time traffic (under H04L 47/00 Traffic control in data switching networks; H04L 47/24 Traffic characterised by specific attributes, e.g. priority or QoS)
    • H04L 47/2433: Allocation of priorities to traffic types
    • H04W 28/02: Traffic management, e.g. flow control or congestion control
    • H04W 28/22: Negotiating communication rate
    • H04W 72/52: Allocation or scheduling criteria for wireless resources based on load
    • H04W 72/535: Allocation or scheduling criteria for wireless resources based on resource usage policies
    • H04W 72/543: Allocation or scheduling criteria for wireless resources based on requested quality, e.g. QoS
    • H04W 72/569: Allocation or scheduling criteria for wireless resources based on priority criteria of the traffic information
    • H04W 84/08: Trunked mobile radio systems

Definitions

  • the present invention relates to a system, method and apparatus for scheduling data in a network. More specifically, the present invention relates to a system, method and apparatus for scheduling data traffic over a shared channel.
  • In a network that uses a shared channel to deliver data traffic from a single transmitting station to multiple receiving stations, the transmitting station must determine how to allocate its downlink capacity among the receiving stations.
  • Examples of such a network include CATV-based data networks and wireless networks such as the AMOSPHERE™ system sold by the assignee of the present invention. In the latter system, a base station transceiver services a plurality of subscriber stations through an air interface that provides both shared and dedicated downlink (base station to subscriber station) channels. Because the transmission capacity of such systems is limited (typically by the available bandwidth), allocating the available capacity among the users to ensure efficient use of the transmission capacity while providing acceptable service levels can be difficult. Accordingly, such systems can benefit from appropriately scheduling transmissions over the shared link(s).
  • Round-robin scheduling provides each subscriber station with an equal amount of transmission time over the shared channel. While this can be an advantageous method in some circumstances, in many networks, such as those employing radio-based links, not all subscriber stations will have the same data reception rates, due to factors such as different signal-to-noise ratios (SNRs). Thus, round-robin sharing does not actually provide an equal delivery of data over the shared downlink to each subscriber station.
  • This inequality can result in dissatisfied subscribers, particularly for those subscribers using subscriber stations at the edges of the service area who have substantially lower average data rates than subscribers with subscriber stations located close to the base station. Furthermore, this inequality requires the service provider to be conservative when advertising the performance capabilities offered by the system.
  • Proportional fair sharing provides each subscriber station with an adjusted amount of channel capacity on the shared link, where each subscriber station's channel share is adjusted by its data reception rate, so that each subscriber station receives approximately the same average amount of data. While proportional fair sharing can provide a better degree of equality between subscriber stations, it can also lead to an overall drop in total system throughput, as the base station must devote a large amount of channel capacity to servicing a small minority of subscriber stations with poor SNRs.
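The trade-off above can be made concrete with a small sketch (the numbers and the helper name are ours, purely illustrative): giving each station a time share inversely proportional to its reception rate equalizes delivered data, but pulls total throughput down toward the slowest stations.

```python
def proportional_time_shares(rates):
    """Time share per station, inversely proportional to its reception rate,
    so that rate * share (data actually delivered) is equal for every station."""
    inverse = [1.0 / r for r in rates]
    total = sum(inverse)
    return [x / total for x in inverse]

# Two stations: a cell-edge station at 1000 bit/slot, a nearby one at 4000 bit/slot.
rates = [1000.0, 4000.0]
shares = proportional_time_shares(rates)            # [0.8, 0.2]
delivered = [r * s for r, s in zip(rates, shares)]  # equal for both stations
total_throughput = sum(delivered)                   # well below the 4000 achievable
                                                    # by serving only the best station
```

Note how the cell-edge station consumes 80% of the channel time, which is exactly the throughput penalty described above.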
  • Bender et al. demonstrate how tolerating unequal latency between subscriber stations with different data reception capabilities (i.e., different SNRs) can increase total throughput on the downlink of the network. By providing a greater portion of channel capacity on the shared link to subscriber stations with better instantaneous SNRs, the base station can transmit more traffic overall.
  • This method increases the total throughput of the system, potentially clearing backlogs and improving overall network performance, but also creates latency and a lower data rate for subscriber stations with poorer average SNRs. To ensure that all subscriber stations possess at least a tolerable individual data rate, the system limits the maximum permitted latency value to subscriber stations with poorer SNR ratios.
  • While opportunistic transmission scheduling can provide an increase in both total and individual throughput, it is not without its disadvantages.
  • One disadvantage is that opportunistic scheduling can increase latency for subscriber stations as the base station waits for the opportune time to transmit to them.
  • Another disadvantage is that this method assumes a constant level of data to be transmitted, such as a WAP session on a cellular phone.
  • Nor does the opportunistic method contemplate intentional differences in the treatment of subscribers, such as those provided by different qualities of service (QoS), or different treatments for different types of media data.
  • For example, a base station may service subscriber stations with different priorities, so that one subscriber using a latency-intolerant Voice-over-IP (VoIP) service receives a guaranteed service, while another subscriber who is web surfing will be provided only with a best-effort service by the base station.
  • Another known approach is the Modified Largest Weighted Delay First (M-LWDF) algorithm.
  • In M-LWDF, each time slot t serves the waiting packet queue j for which the function γj·Wj(t)·rj(t) is maximal, where:
  • Wj(t) is the current waiting time for the packets stored in queue j;
  • rj(t) is the channel capacity, or data rate, for data flow j; and
  • γj is an arbitrary value. If γj is the same for each packet queue, then all subscriber stations have the same QoS level. A packet queue with a higher value for γj thus has a higher level of service than a packet queue with a lower value for γj.
  • If the product γj·rj(t) is the same for each packet queue, then all subscriber stations have the same data rate.
  • While M-LWDF provides some advantage over the prior art, it also has its limitations.
  • A key disadvantage is that M-LWDF provides no means to implement a policy of fairness between subscriber stations with different channel qualities.
  • Another disadvantage is that M-LWDF can schedule traffic for only a single subscriber station per time slot. This creates a significant amount of latency (a number of timeslots) for all other subscribers waiting for packet delivery. Furthermore, since each timeslot bears data traffic for a single subscriber station, it is possible that some of the capacity of that timeslot is wasted due to internal fragmentation.
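The M-LWDF rule described above can be sketched in a few lines (an illustrative sketch, with our own variable names; each tuple stands for one backlogged queue). It also makes the single-station-per-slot limitation visible: exactly one index is returned per slot.

```python
def mlwdf_pick(queues):
    """queues: list of (gamma, waiting_time, rate) tuples, one per backlogged queue.
    Returns the index of the queue maximizing gamma * W(t) * r(t), i.e. the
    single queue M-LWDF serves in this time slot."""
    return max(range(len(queues)),
               key=lambda j: queues[j][0] * queues[j][1] * queues[j][2])

# Queue 1 wins this slot: its product 1.0 * 2 * 1000 = 2000 beats 1000 and 200;
# every other queue simply waits, accumulating latency.
winner = mlwdf_pick([(1.0, 10.0, 100.0), (1.0, 2.0, 1000.0), (2.0, 1.0, 100.0)])
```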
  • A method of scheduling data for transmission from a base station over a shared channel to a plurality of subscriber stations comprising: determining a fairness factor from a range of possible fairness factors, where a first end of said range indicates a policy of scheduling data with maximum fairness between said plurality of subscriber stations and a second end of said range indicates a policy of scheduling data for maximum data traffic over said shared channel; for each subscriber station in said plurality of subscriber stations that said base station has data to be delivered to, determining a quality of service priority value, said quality of service priority value indicating a priority of said subscriber station relative to other subscriber stations in said plurality of subscriber stations; determining a throughput value, said throughput value indicating the quantity of data to be moved to said subscriber station if data is scheduled to said subscriber station.
  • a system for transmitting data comprising: a plurality of subscriber stations each having a processor, a modem, a radio and an antenna, each subscriber station operable to transmit a request for a dedicated data channel from a base station; and a base station having a processor, a modem, a radio and an antenna, and operable to receive said requests for a dedicated data channel from said subscriber stations and to schedule data for transmission to said plurality of subscriber stations over a shared channel in accordance with a scheduling policy that varies priorities between scheduling for fairness between subscriber stations and for improved throughput to said plurality of subscriber stations.
  • the present invention provides a method, system and apparatus for scheduling data to a plurality of subscriber stations from a base station over a shared channel.
  • Data destined for each subscriber station is placed into individual queues at the base station.
  • the base station allocates a portion of the shared channel to transmit the data in each particular queue based upon a priority value it assigns to that queue.
  • the priority value for each queue is determined by a QoS value and a throughput value, where each of these two values is adjusted by a fairness factor.
  • the QoS value indicates whether a subscriber station has been receiving data from the base station according to an agreed-upon QoS level.
  • the throughput value indicates the data rate that can be achieved by the base station transmitting to that subscriber station.
  • the fairness factor represents a scheduling policy at the base station.
  • Scheduling policies can include scheduling data to emphasize fairness between subscriber stations at a particular QoS level, scheduling data to maximize throughput over the shared channel, and scheduling data to achieve a balance between fairness and maximum throughput.
  • Figure 1 is a schematic representation of a wireless network in accordance with an embodiment of the invention.
  • Figure 2 is a representation of a communications link as shown in Figure 1, comprised of multiple channels;
  • Figure 3 is a schematic representation of the base station shown in Figure 1;
  • Figure 4 is a schematic representation of one of the subscriber stations shown in Figure 1;
  • Figure 5 is a schematic representation of a scheduler for a shared channel running on the base station shown in Figure 3;
  • Figure 6 is a flowchart showing how the broadcast downlink channel scheduler shown in Figure 5 manages the scheduling of backlogged traffic flows to the shared channel.
  • Network 20 includes a radio base station 24 and a plurality of subscriber stations 28a, 28b ... 28n.
  • Radio base station 24 is connected to at least one data telecommunications network (not shown), such as a land line-based switched data network, a packet network, etc., by an appropriate gateway and one or more backhauls (not shown). A backhaul can be a T1, T3, E1, E3, OC3 or other suitable land line link, a satellite or other radio or microwave channel link, or any other link suitable for operation as a backhaul as will occur to those of skill in the art.
  • Base station 24 communicates with subscriber stations 28 which can be fixed, nomadic or mobile devices.
  • the number 'n' of subscriber stations serviced by a base station 24 can vary depending upon the amount of radio bandwidth available and/or the configuration and requirements of the subscriber stations 28.
  • The geographic distribution of subscriber stations 28 with respect to base station 24 need not be symmetric, nor will subscriber stations which are physically located close to one another necessarily experience the same or similar data reception rates. This is due to varying signal-to-noise ratios (SNRs) experienced at the subscriber stations 28, which in turn result from a variety of factors including the geographic environment (the presence or absence of buildings which can reflect or mask signals), the radio environment (the presence or absence of radio noise sources), etc.
  • Subscriber stations 28 can be geographically divided into different sectors 36, formed via directional antennas at base station 24, to increase the number of subscriber stations 28 that can be served from a single base station location.
  • each sector 36 essentially acts as a different base station and base station 24 can manage the network resources in each sector 36 independent of each other sector 36.
  • While Figure 1 shows only one base station 24, it will further be apparent to those of skill in the art that network 20 can contain multiple, geographically distributed base stations 24 with overlapping sector 36 coverage of subscriber stations 28, where each subscriber station 28 in an overlapping coverage area can select which base station 24 it will be serviced by.
  • a communication link 32 is established in each sector 36 between base station 24 and each subscriber station 28 in the sector 36 via radio.
  • Communication link 32a carries information to be transferred between base station 24 and subscriber station 28b
  • communication link 32b carries information to be transferred between base station 24 and subscriber stations 28c and 28d, etc.
  • Communication link 32 can be implemented using a variety of multiple access techniques, including TDMA, FDMA, CDMA or hybrid systems such as GSM, etc.
  • data transmitted over communication link 32 is transmitted using CDMA as a multiple access technology and the data is in the form of packets, transmitted within slotted time frames, the details of which will be discussed in greater detail below.
  • As used herein, "packaging" refers to the overall arrangement of the transmission of data for its reception at an intended destination receiver.
  • Packaging of data can include, without limitation, applying different levels of forward error correcting (FEC) codes (from no coding to high levels of coding and/or different coding methods), employing various levels of symbol repetition, employing different modulation schemes (4-QAM, 16-QAM, 64-QAM, etc.) and any other techniques or methods for arranging data transmission with a selection of the amount of radio (or other physical layer) resources required, the data rate and probability of transmission errors which are appropriate for the transmission.
  • For example, data can be packaged with rate-1/4 FEC coding (each data bit is transmitted in 4 bits of information) and 16-QAM modulation for transmission to a first intended receiver, and packaged with rate-1/2 FEC coding and 64-QAM modulation for transmission to a second intended receiver which has better reception quality than the first.
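As a rough illustration of how these packaging choices trade robustness for rate (a sketch with assumed parameters, not the patent's actual block tables): the information carried per channel symbol is log2(modulation order) scaled down by the FEC rate and any symbol repetition.

```python
import math

def info_bits_per_symbol(mod_order, fec_rate, repetition=1):
    """Information bits carried per channel symbol for a given packaging:
    log2(M) raw bits per symbol, reduced by the FEC rate and symbol repetition."""
    return math.log2(mod_order) * fec_rate / repetition

# The two packagings from the example above:
robust = info_bits_per_symbol(16, 1 / 4)  # 16-QAM, rate-1/4 FEC: 1.0 bit/symbol
fast = info_bits_per_symbol(64, 1 / 2)    # 64-QAM, rate-1/2 FEC: 3.0 bit/symbol
```

The second receiver, with its better reception quality, moves three times the information per symbol of the first.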
  • Communications link 32 operates in both an uplink (from a subscriber station 28 to base station 24) and a downlink direction (from base station 24 to subscriber stations 28).
  • The method of providing both uplink and downlink directions is not particularly limited; in the present embodiment, communications link 32 operates by frequency division duplexing (FDD), although time division duplexing (TDD) and hybrid schemes are also within the scope of the invention.
  • Communications link 32 is comprised of a plurality of channels which, in the present CDMA implementation, are achieved with orthogonal coding of link 32.
  • base station 24 uses a shared channel, referred to as the broadcast data channel (BDCH) 38 to carry variable-rate and bursty traffic (consisting primarily of signaling and internet traffic) across a sector 36.
  • BDCH 38 makes use of adaptive FEC and modulation to maximize downlink capacity and contains multiple packets or, more commonly, segments of packets of data for various subscriber stations 28, all time-multiplexed together into a single frame. In the present embodiment, BDCH 38 can be configured with spreading factor 4, wherein eight blocks of data can be sent within a ten millisecond frame; spreading factor 8, wherein four blocks of data can be sent within a frame; or spreading factor 16, wherein two blocks of data can be sent within a frame. Also in the present embodiment, base station 24 can support one or more BDCHs 38 per sector 36 at any one time. In the uplink direction, data traffic is carried from subscriber station 28 to base station 24 using a dedicated data channel (DDCH) 44.
  • a separate DDCH 44 is set-up between base station 24 and each subscriber station 28 with an active communications link 32.
  • Signaling traffic is carried from subscriber station 28 to base station 24, typically inband using DDCH 44.
  • Subscriber stations 28 measure their received SNR, or other metric of their ability to receive data from base station 24 and report this information back to base station 24 on a regular basis over their DDCH 44 using an upper layer signaling protocol.
  • Subscriber stations 28 with high SNRs require less channel coding and can use higher order modulation than subscriber stations 28 with lower SNRs; thus, each block transmitted on BDCH 38 can use a different block type (i.e., a different packaging of FEC type, FEC rate, modulation, etc.).
  • Figure 3 shows an example of a base station 24 in greater detail.
  • Base station 24 is shown as an example of a single-sector base station.
  • Base station 24 comprises an antenna 50, or antennas, for receiving and transmitting radio-communications over communications link 32.
  • antenna 50 is connected to a radio 52 and a modem 54.
  • Modem 54 is connected to a microprocessor-router assembly 56, such as an Intel Corporation Pentium processor-based system running a conventional operating system such as Linux.
  • Microprocessor-router assembly 56 is responsible for traffic scheduling of all subscriber stations 28 within its sector 36 and for radio resource management. It will be understood that assembly 56 can include multiple microprocessors, as desired and/or that the router can be provided as a separate unit, if desired.
  • the router within microprocessor-router assembly 56 is connected to a backhaul 58 in any suitable manner, which in turn connects base station 24 to a data network (not shown).
  • Subscriber station 28 comprises an antenna 60, or antennas, for receiving and transmitting radio-communications over communications link 32.
  • antenna 60 is connected to a radio 64 and a modem 68, which in turn is connected to a microprocessor-assembly 72.
  • Microprocessor-assembly 72 can include, for example, a StrongARM processor manufactured by Intel Corporation, that performs a variety of functions, including implementing A/D-D/A conversion, filters, encoders, decoders, data compressors, de-compressors and/or packet disassembly.
  • Microprocessor-assembly 72 also includes buffers 74 which store queued data traffic waiting for transport up communications link 32.
  • microprocessor-assembly 72 interconnects modem 68 and a data port 76, for connecting subscriber station 28 to a data client device (not shown), such as a personal computer, personal digital assistant or the like which is operable to use data received over communications link 32. Accordingly, microprocessor-assembly 72 is operable to process data between data port 76 and modem 68. Microprocessor-assembly 72 is also interconnected to at least one telephony port 80, for connecting subscriber station 28 to a telephony device (not shown) such as a telephone. In some cases, particularly in the case of a mobile subscriber station 28, the data client device can be integrated into the subscriber station 28.
  • Scheduler 100 is responsible for assigning queued packets of data intended to be transmitted from base station 24 to subscriber stations 28 into the bitstream of BDCH 38 while maintaining any agreed-upon QoS terms for each subscriber station 28 and implementing a scheduling policy based upon a fairness factor provided by a network operator (and discussed in greater detail below) in order to provide varying degrees of prioritization of fairness between subscriber stations 28 and overall throughput on BDCH 38.
  • a method for scheduler 100 to implement a scheduling policy and schedule queued packets is described further below with reference to Figure 6.
  • scheduler 100 is a software program running within base station 24 on microprocessor-assembly 56.
  • other implementations, such as a hardware or firmware implementation are also within the scope of the invention.
  • Traffic queues 104 for subscriber stations 28 are established for each subscriber station 28 that is known to, and is connected to, base station 24 over communication link 32.
  • The example in Figure 5 shows a scenario with four traffic queues 104, each servicing a flow to a corresponding subscriber station 28 (e.g., traffic queue 104a holds traffic bound for subscriber station 28a, etc.). In the example shown in the Figure, traffic queue 104a has five queued packets, traffic queue 104b has no queued packets, traffic queue 104c has three queued packets, and traffic queue 104d has four queued packets. Traffic queues 104a, 104c, and 104d therefore have backlogged (i.e., non-zero length) flows, while traffic queue 104b has no backlogged flow.
  • Besides holding traffic queues 104n, scheduler 100 stores a link quality parameter 108n, a negotiated service share parameter 112n, and a measured service share parameter 116n for each active subscriber station 28n with a traffic flow. Furthermore, scheduler 100 stores a fairness factor 120. As described in further detail below, there is at least one instance of fairness factor 120 per sector 36.
  • The link quality parameter 108n holds a suitable measurement of the reception quality experienced at subscriber station 28n. In a present embodiment, the value of link quality parameter 108 is the signal-to-noise ratio (SNR) estimate reported by subscriber station 28.
  • Each subscriber station 28n periodically updates its value for link quality parameter 108 by transmitting its received SNR over an uplink channel such as DDCH 44.
  • The negotiated service share parameter 112n stores the value of an agreed-upon quality of service level (φn,neg) for subscriber station 28n.
  • negotiated service share parameter represents a guaranteed data rate (bits/s); however, other definitions of a negotiated service share such as maximum delay before transmitting a waiting packet, or a combination of guaranteed data rate and maximum delay are within the scope of the invention.
  • Subscriber stations 28 with higher values for negotiated service share parameter 112 will receive better service than subscriber stations 28 with lower negotiated service share parameter 112.
  • negotiated service share parameter 112 is negotiated between base station 24 and each subscriber station 28 when the subscriber station 28 connects to base station 24.
  • Negotiated service share parameter 112 can be determined by the service provider based upon the media type about to be transmitted to subscriber station 28, a monthly subscription agreement for subscriber station 28, a fee for service, etc. Other methods of determining negotiated service share parameter 112 will occur to those of skill in the art.
  • Measured service share parameter 116n stores the value of the measured service share (φn,meas) for subscriber station 28n.
  • the measured service share is the portion of BDCH 38 that has carried packets destined for that particular subscriber station 28 n .
  • a larger value for the measured service share parameter 116 n indicates a higher average data rate delivered to that subscriber station 28 n .
  • Fairness factor 120 is an adjustable parameter (F) that represents the scheduling policy; this parameter controls the trade-off between individual flow fairness and overall throughput over communications link 32.
  • There can be one instance of fairness factor 120 per sector 36, set by a network operator, or one instance of fairness factor 120 per BDCH 38 (in cases where there is more than one BDCH 38 per sector 36).
  • F is normalized and ranges from zero to one.
  • A setting of zero indicates a policy that schedules data flows 102 to maximize throughput on the downlink by scheduling data to subscriber stations 28 with the best SNRs, without regard to fairness between subscriber stations 28.
  • A setting of one indicates a policy that schedules flows 102 to provide fairness between subscriber stations 28, so that all subscriber stations 28 at the same QoS level will receive the same data rate, regardless of their respective SNRs.
  • Once packets in a traffic queue 104 are scheduled by scheduler 100, they are moved into the blocks 128 of frames 124 of BDCH 38.
  • the spreading factor of a BDCH 38 is predetermined by a network operator and is fixed for every subscriber station 28 that is being serviced by a particular BDCH 38. In a current embodiment, a spreading factor of 4 is preferred (thus providing eight blocks 128 per frame 124).
  • Each block 128 can carry between 320 and 9,744 bits of information.
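From the figures above, the raw BDCH capacity for each configuration can be bounded with a back-of-the-envelope sketch (the block counts and ten millisecond frame come from the embodiment described earlier; the helper itself and its name are ours):

```python
# Spreading factor -> blocks per 10 ms frame, per the embodiment described above.
BLOCKS_PER_FRAME = {4: 8, 8: 4, 16: 2}

def bdch_rate_bps(spreading_factor, bits_per_block, frame_seconds=0.010):
    """Raw BDCH bit rate (bit/s) for one frame configuration."""
    return BLOCKS_PER_FRAME[spreading_factor] * bits_per_block / frame_seconds

peak = bdch_rate_bps(4, 9744)   # spreading factor 4, largest block payload
floor = bdch_rate_bps(16, 320)  # spreading factor 16, smallest block payload
```

With spreading factor 4 and the largest block payload this works out to roughly 7.8 Mbit/s; with spreading factor 16 and the smallest payload, 64 kbit/s.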
  • blocks 128 with smaller information payloads use lower order modulation and higher symbol repetition to provide greater redundancy for noisy or otherwise poor communication links 32 experienced by subscriber stations 28 with low SNRs.
  • Communication to subscriber stations 28 with better SNRs can employ a block structure that carries more bits of information.
  • blocks 128 with different block structures can be carried in the same frame 124.
  • Referring now to Figure 6, a flowchart for a method of scheduling data transmissions for a BDCH 38 is shown, beginning at step 200.
  • the method described below is run once per scheduled frame 124.
  • the frequency of the method is not particularly limited and it can be run more or less frequently than described here, if desired.
  • a scheduling policy is determined by setting a value for fairness factor 120 for sector 36 (between 0 and 1) in base station 24.
  • a value of 0 indicates a scheduling policy where scheduler 100 prioritizes sector throughput over fairness between subscriber stations 28 and a value of 1 indicates a scheduling policy where scheduler 100 transmits queued packets with maximum fairness to subscriber stations 28.
  • Typically, a fairness factor 120 set somewhere between the two bounds of this range (zero and one) will be preferable. For example, a setting of 0.5 would provide a policy giving a reasonable degree of fairness to most subscriber stations 28, while still taking advantage of SNR variances to maximize downlink throughput. It should be noted that even when fairness factor 120 is set to one (indicating maximum fairness), subscriber stations 28 with different QoS levels 112 will be scheduled differently according to their QoS levels, as fairness factor 120 determines fairness in scheduling only with regard to different SNRs. To provide totally equal service, QoS levels 112 should be the same for all subscriber stations 28.
  • scheduler 100 receives the link quality parameter 108 for each subscriber station 28 with backlogged traffic (i.e., any subscriber station 28 n whose corresponding traffic queue 104 n has at least one packet in it).
  • Link quality parameter 108 can be updated on each iteration through the method, or can be updated at appropriate longer intervals.
  • Next, scheduler 100 calculates a value for the QoS-based priority (qj) for each traffic queue 104 having at least one packet in the queue for this frame. qj is a positive number between 0 and 1, with higher values indicating a higher priority for that queue 104; a value for qj greater than 0.5 indicates that the queue has been receiving less than its required service share.
  • ⁇ fr eq represents the amount of the service share required for a negotiated service share parameter 112 to be met. For each frame that queue 104 is backlogged, the required service share increases.
  • φf,req is calculated as follows:

    φf,req(n) = ( φf × |{ n' : f ∈ B(n'), n − w < n' ≤ n }| ) / ( Σn'=n−w+1..n Σj∈B(n') φj )

  • the numerator is the product of the number of frames, over a sliding interval of w frames, during which flow 102 f is backlogged, and φf, the negotiated service share parameter 112 negotiated at setup. The absolute value notation indicates the number of elements in the set, and B(n) is the set of indices of the flows which are backlogged during frame n.
  • the denominator is the sum of the negotiated service shares of all backlogged queues 104 over the same sliding interval of frames.
  • the measured service share (φf,meas) is determined by taking all the bits transmitted over BDCH 38 for the selected queue 104 during a sliding window interval, and dividing that by all the bits transmitted over BDCH 38 during the same sliding window interval for every queue 104. In the current embodiment, φf,meas is calculated as follows:

    φf,meas(n) = ( Σn'=n−w+1..n bf(n') ) / ( Σn'=n−w+1..n Σj bj(n') )

    where bf(n') denotes the bits transmitted over BDCH 38 for queue 104 f during frame n'.
  • scheduler 100 computes the throughput priority (tf) for each backlogged queue 104 for this frame. tf is a normalized interpretation of the SNR for a particular flow 102 (f), yielding a value between 0 and 1, where 0 indicates the minimum SNR and 1 indicates the maximum possible SNR. Mathematically:

    tf = ( b(γf) − b(γmin) ) / ( b(γmax) − b(γmin) )

    where b(x) is the number of bits in block 128 at SNR x, as determined by the block structure, γf is the SNR reported for flow 102 f, and γmin and γmax are the minimum and maximum SNRs supported by the block structure.
  • scheduler 100 computes the total priority function (pf) for each backlogged queue 104 by adding the QoS priority (qf, as determined at step 208) to the throughput value (tf, as determined at step 212), where the QoS priority is multiplied by fairness factor 120 and the throughput value is multiplied by the complement of fairness factor 120.
  • pf is calculated as follows:

    pf = F·qf + tf·(1 − F),  f ∈ B

    where B is the set of all backlogged queues 104.
  • F is the fairness factor 120.
  • When fairness factor 120 (F) is zero, the product of F and qf is zero and the term tf·(1 − F), which is the throughput value, is maximal.
  • When fairness factor 120 is one, the product F·qf is maximal and the throughput value is multiplied by zero.
  • at step 220, all the backlogged queues 104 are sorted in descending order by the total priority function determined in step 216, yielding an ordered list p'(1) ≥ p'(2) ≥ … ≥ p'(|B|).
  • scheduler 100 computes the maximum number of blocks 128, referred to herein as m'(i), that will be allocated to this queue 104 by calculating a percentage of the blocks 128 available to it, based upon the priority of this queue 104 relative to the sum of the total priority values for all backlogged queues. More specifically, scheduler 100 multiplies the number of blocks 128 in frame 124 (M) by the priority function (p'(i)) for this queue 104 (as determined in step 216), then divides the result by the sum of the priority functions (also determined in step 216) for all backlogged queues.
  • the maximum number of blocks 128, m'(i), is determined as follows:

    m'(i) = round( M × p'(i) / Σj∈B p'(j) )

    where the round() operator rounds the number of scheduled blocks, m'(i), to the closest integer.
  • scheduler 100 allocates up to m'(1) blocks 128 to the queue 104 with the highest total priority, p'(1). Fewer blocks 128 are allocated if the number of bits awaiting scheduling in that queue 104 requires fewer than m'(1) blocks 128.
  • Step 220 is repeated for the remaining backlogged queues 104 (p'(2), p'(3), etc.), until either all backlogged queues 104 are cleared or all the blocks 128 in frame 124 are scheduled.
  • once step 220 is completed, the method returns to step 204 to schedule the next frame 124. The method continues as long as data is present to be scheduled for transmission.
  • scheduler 100 may run less frequently when the total amount of traffic in all queues 104 is below a certain threshold (representing a low volume of data traffic), so as to reduce the processing load on microprocessor-router assembly 56.
  • scheduler 100 may elect to suspend its scheduling method (as described with regard to Figure 6), and instead schedule traffic according to some other scheduling method such as FIFO, in order to reduce latency for each queue 104.
  • scheduler 100 can run more frequently than once per frame 124, where the scheduling interval is an integer multiple of the duration of a block 128. This permits the scheduler to perform well in situations where the coherence time of the channel is lower, such as when the carrier frequency is higher or when there is a greater amount of motion of pedestrians or other traffic near the subscriber station or the base station.
  • it is not intended that the present invention be limited to use with the particular radio-based system described above, nor to radio-based systems in general; it is believed that the present invention can be advantageously employed with any system for scheduling data for transmission from a single node to one or more of a plurality of other nodes over a shared communications link. Use with optical, wireline and other shared communication links is contemplated.
  • the above-described embodiments of the invention are intended to be examples of the present invention and alterations and modifications may be effected thereto, by those of skill in the art, without departing from the scope of the invention which is defined solely by the claims appended hereto.
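Taken together, the steps above amount to a compact per-frame loop. The following Python sketch is illustrative only: the dictionary keys are invented for the example, and the exact form of the QoS priority qf used here (φf,req / (φf,req + φf,meas)) is an assumption, since the patent text truncates that formula.

```python
def schedule_frame(queues, F, M):
    """Allocate the M blocks 128 of one frame 124 among backlogged queues.

    queues: list of dicts, one per traffic queue 104, with keys:
      'backlog_bits'   - bits waiting in the queue
      'share_req'      - required service share, phi_f,req
      'share_meas'     - measured service share, phi_f,meas
      'bits_per_block' - b(SNR_f): bits one block carries at this SNR
    F: fairness factor 120, in [0, 1].
    Returns a list of (queue, blocks) pairs in scheduling order.
    """
    backlogged = [q for q in queues if q['backlog_bits'] > 0]
    if not backlogged:
        return []

    b_min = min(q['bits_per_block'] for q in backlogged)
    b_max = max(q['bits_per_block'] for q in backlogged)

    for q in backlogged:
        # QoS priority q_f in (0, 1); > 0.5 when the flow is under-served.
        # (Assumed form: the patent text truncates this formula.)
        denom = q['share_req'] + q['share_meas']
        q['q_f'] = q['share_req'] / denom if denom else 0.5
        # Throughput priority t_f: SNR normalized via achievable block size.
        span = b_max - b_min
        q['t_f'] = (q['bits_per_block'] - b_min) / span if span else 1.0
        # Total priority: p_f = F * q_f + (1 - F) * t_f
        q['p_f'] = F * q['q_f'] + (1 - F) * q['t_f']

    # Sort descending: p'(1) >= p'(2) >= ...
    backlogged.sort(key=lambda q: q['p_f'], reverse=True)
    p_sum = sum(q['p_f'] for q in backlogged)

    allocation, blocks_left = [], M
    for q in backlogged:
        # m'(i) = round(M * p'(i) / sum of all p'), capped by actual need
        # and by the blocks still unscheduled in this frame.
        m_i = round(M * q['p_f'] / p_sum)
        need = -(-q['backlog_bits'] // q['bits_per_block'])  # ceil division
        m_i = min(m_i, need, blocks_left)
        if m_i > 0:
            allocation.append((q, m_i))
            blocks_left -= m_i
        if blocks_left == 0:
            break
    return allocation
```

With F = 0 the sort is driven purely by tf (favouring sector throughput); with F = 1 it is driven purely by qf (favouring fairness to the negotiated shares).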


Abstract

A method, system and apparatus for scheduling data to subscriber stations from a base station over a shared channel. Data destined for each subscriber station is placed into queues at the base station. The base station allocates a portion of the shared channel to each particular queue based upon that queue's priority value. The priority value for each queue is determined by an adjusted QoS value and an adjusted throughput value. The QoS value indicates whether a subscriber station has been receiving data according to an agreed-upon QoS level. The throughput value indicates the data rate that can be achieved by transmitting to that subscriber station. These two values are examined by a scheduling policy at the base station. Scheduling policies can include scheduling data to emphasize fairness between subscriber stations, scheduling data to improve overall throughput, and scheduling data to achieve a balance between fairness and throughput.

Description

Scheduler for a Shared Channel
FIELD OF THE INVENTION
The present invention relates to a system, method and apparatus for scheduling data in a network. More specifically, the present invention relates to a system, method and apparatus for scheduling data traffic over a shared channel.
BACKGROUND OF THE INVENTION
In a network including a shared channel to deliver data traffic to multiple receiving stations from a single transmitting station, the transmitting station must determine how to allocate its downlink capacity among the receiving stations. Examples of such a network include CATV-based data networks and wireless networks such as the AMOSPHERE™ system sold by the assignee of the present invention. In the latter system, a base station transceiver services a plurality of subscriber stations through an air interface that provides both shared and dedicated downlink (base station to subscriber station) channels. Because the transmission capacity of such systems is limited (typically by the available bandwidth), allocating the available capacity among the users to ensure efficient use of the transmission capacity while providing acceptable service levels can be difficult. Accordingly, such systems can benefit from appropriately scheduling transmissions over the shared link(s).
One of the simplest scheduling methods is round-robin scheduling. Round-robin scheduling provides each subscriber station with an equal amount of transmission time over the shared channel. While this can be an advantageous method in some circumstances, in many networks, such as those employing radio-based links, not all subscriber stations will have the same data reception rates, due to factors such as different signal-to-noise ratios (SNR). Thus, round-robin sharing does not actually provide an equal delivery of data over the shared downlink to each subscriber station. This inequality can result in dissatisfied subscribers, particularly for those subscribers using subscriber stations at the edges of the service area who have substantially lower average data rates than subscribers with subscriber stations located close to the base station. Furthermore, this inequality requires the service provider to be conservative when advertising the performance capabilities offered by the system. Another known scheduling method, proportionally fair sharing, provides each subscriber station with an adjusted amount of channel capacity on the shared link, where each subscriber station receives a channel share that is adjusted by their data reception rates, so that each subscriber station receives approximately the same average amount of data. While proportional fair sharing can provide a better degree of equality between subscriber stations, it can also lead to an overall drop in total system throughput, as the base station must devote a large amount of channel capacity to service a small minority of subscriber stations with poor SNRs.
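The difference between the two policies can be made concrete with a toy example (the rates below are invented for illustration): round-robin gives each station equal airtime, so delivered bits track each station's rate, while proportionally fair sharing weights airtime by the inverse of the rate so that delivered bits are roughly equalized.

```python
def round_robin_bits(rates, total_time):
    """Equal airtime per station: delivered bits differ with rate."""
    t = total_time / len(rates)
    return [r * t for r in rates]

def proportional_fair_bits(rates, total_time):
    """Airtime weighted by 1/rate: delivered bits are roughly equalized."""
    inv = [1.0 / r for r in rates]
    s = sum(inv)
    return [r * (total_time * w / s) for r, w in zip(rates, inv)]

rates = [4000.0, 1000.0]          # bits/s for a near and a far station
print(round_robin_bits(rates, 1.0))        # unequal: [2000.0, 500.0]
print(proportional_fair_bits(rates, 1.0))  # ~800 bits each
```

Note that the proportionally fair total (about 1600 bits) is lower than the round-robin total (2500 bits), which is exactly the drop in total system throughput described above.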
In contrast to the above two methods, which focus on providing equal service to subscriber stations, some other methods focus on achieving optimal throughput across the entire system at the expense of fairness between subscriber stations. In their article "CDMA/HDR: A Bandwidth-Efficient High-Speed Wireless Data Service for Nomadic Users" (IEEE Communications Magazine, July 2000, pp. 70-78), Bender et al. demonstrate how unequal latency between subscriber stations with different data reception capabilities (i.e., different SNRs) can increase total throughput on the downlink in the network. By providing a greater portion of channel capacity on the shared link to subscriber stations with better instantaneous SNRs, the base station can transmit more traffic overall. This method increases the total throughput of the system, potentially clearing backlogs and improving overall network performance, but also creates latency and a lower data rate for subscriber stations with poorer average SNRs. To ensure that all subscriber stations possess at least a tolerable individual data rate, the system limits the maximum latency permitted for subscriber stations with poorer SNRs.
Liu et al. in their article "Opportunistic Transmission Scheduling with Resource-Sharing Constraints in Wireless Networks" (IEEE Journal on Selected Areas in Communications, Vol. 19, No. 10, October 2001, pp. 2053-2064) discuss the potential of improving wireless resource efficiency by exploiting variances in channel conditions, while still maintaining a level of fairness between subscriber stations. In their model, each subscriber station is allocated a fraction of transmission time. As in the methods described above, data packets are stored in traffic queues for each subscriber station until their scheduled transmission time occurs. However, the base station continuously measures the quality of the link (which can vary over time) at each subscriber station, and transmits to the best possible subscriber station at that moment while still providing a required average data rate to each subscriber station. While opportunistic transmission scheduling can provide an increase in both total and individual throughput, it is not without its disadvantages. One disadvantage is that opportunistic scheduling can increase latency for subscriber stations as the base station waits for the opportune time to transmit to them.
Another disadvantage is that this method assumes a constant level of data to be transmitted, such as a WAP session on a cellular phone. Nor does the opportunistic method contemplate intentional differences in the treatment of subscribers, such as those provided by different qualities of service (QoS), or for treatments of different types of media data. For example, a base station may service subscriber stations with different priorities, so that one subscriber using a latency-intolerant Voice-over-IP (VoIP) service receives a guaranteed service, while another subscriber is web surfing and will be provided only with a best-effort service by the base station.
In their article "Providing Quality of Service over a Shared Wireless Link" (IEEE Communications Magazine, Feb 2001, pp. 50-54), Andrews et al. describe a new scheduling algorithm, Modified Largest Weighted Delay First (M-LWDF), which ameliorates some of the above disadvantages. M-LWDF is similar to opportunistic scheduling in that it schedules transmissions to take advantage of SNR fluctuations. However, particular traffic flows (traveling on the downlink to their respective subscriber stations) are given preferential treatment over other traffic flows by virtue of their QoS level. For each time slot (t), the base station calculates a value for each traffic queue, this value consisting of the product of the packet delay, the channel capacity for that subscriber station, and an arbitrary value. The traffic queue with the highest derived value is scheduled. Mathematically, each time slot (t) serves the waiting packet queue (j) for which the function γj·Wj(t)·rj(t) is maximal. Wj(t) is the current waiting time for the packets stored in queue j, rj(t) is the channel capacity, or data rate, for data flow j, and γj is an arbitrary value. If γ is the same for each packet queue, then all subscriber stations have the same QoS level. A packet queue with a higher value for γ thus has a higher level of service than a packet queue with a lower value for γ. If rj(t) is the same for each packet queue, then all subscriber stations have the same data rate. A packet queue with a higher value for rj(t) thus transmits at a higher average data rate than another packet queue which has the same γ value. While M-LWDF provides some advantage over the prior art, it also has its limitations. A key disadvantage is that M-LWDF provides no means to implement a policy of fairness between subscriber stations with different channel quality. Another disadvantage is that M-LWDF can schedule traffic for only a single subscriber station per time slot.
This creates a significant amount of latency (x number of timeslots) for all other subscribers waiting for packet delivery. Furthermore, since each timeslot bears data traffic for a single subscriber station, it is possible that some of the capacity of that timeslot is wasted due to internal fragmentation.
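The M-LWDF selection rule discussed above reduces to an argmax over the queues; a minimal sketch (variable names are ours, not from the article):

```python
def mlwdf_pick(queues):
    """Serve, each time slot, the queue maximizing gamma_j * W_j(t) * r_j(t).

    queues: list of (gamma, waiting_time, rate) tuples, where gamma is the
    arbitrary QoS weight, waiting_time is the current delay of the packets
    in the queue, and rate is the current channel capacity for that flow.
    Returns the index of the queue to serve in this time slot.
    """
    best, best_val = 0, float('-inf')
    for j, (gamma, w, r) in enumerate(queues):
        val = gamma * w * r
        if val > best_val:
            best, best_val = j, val
    return best
```

Because only one index is returned per slot, this sketch also makes the single-station-per-timeslot limitation visible.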
It is therefore desired to provide a data transmission scheduler that provides QoS-differentiated service to subscriber stations while offering service providers the ability to implement scheduling policies designed to provide improved sector throughput, fairness between subscribers, or some combination of the two. It is also desirable to implement a data transmission scheduler which is opportunistic and can take advantage of variance in the data reception characteristics of the subscriber stations served by the transmitter. Finally, the scheduling mechanism should be flexible enough to accommodate a wide variety of computational capabilities at the base station.
SUMMARY OF THE INVENTION
It is an object of the present invention to provide a novel data transmission scheduler which obviates or mitigates at least some of the above-identified disadvantages of the prior art. According to a first aspect of the present invention, there is provided a method of scheduling data for transmission from a base station over a shared channel to a plurality of subscriber stations, said method comprising: determining a fairness factor from a range of possible fairness factors, where a first end of said range indicates a policy of scheduling data with maximum fairness between said plurality of subscriber stations and a second end of said range indicates a policy of scheduling data for maximum data traffic over said shared channel; for each subscriber station in said plurality of subscriber stations that said base station has data to be delivered to: determining a quality of service priority value, said quality of service priority value indicating a priority of said subscriber station relative to other subscriber stations in said plurality of subscriber stations; determining a throughput value, said throughput value indicating the quantity of data to be moved to said subscriber station if data is scheduled to said subscriber station; determining a total priority value, said total priority value being the sum of said quality of service priority value adjusted according to said fairness factor and said throughput value conversely adjusted according to said fairness factor; and scheduling data on a portion of said shared channel to at least one of said plurality of subscriber stations, beginning with the subscriber station with the highest total priority value.
According to another aspect of the present invention, there is provided a system for transmitting data, comprising: a plurality of subscriber stations each having a processor, a modem, a radio and an antenna, each subscriber station operable to transmit a request for a dedicated data channel from a base station; and a base station having a processor, a modem, a radio and an antenna, and operable to receive said requests for a dedicated data channel from said subscriber stations and to schedule data for transmission to said plurality of subscriber stations over a shared channel in accordance with a scheduling policy that varies priorities between scheduling for fairness between subscriber stations and for improved throughput to said plurality of subscriber stations. The present invention provides a method, system and apparatus for scheduling data to a plurality of subscriber stations from a base station over a shared channel. Data destined for each subscriber station is placed into individual queues at the base station. The base station allocates a portion of the shared channel to transmit the data in each particular queue based upon a priority value it assigns to that queue. The priority value for each queue is determined by a QoS value and a throughput value, where each of these two values is adjusted by a fairness factor. The QoS value indicates whether a subscriber station has been receiving data from the base station according to an agreed-upon QoS level. The throughput value indicates the data rate that can be achieved by the base station transmitting to that subscriber station. The fairness factor represents a scheduling policy at the base station. Scheduling policies can include scheduling data to emphasize fairness between subscriber stations at a particular QoS level, scheduling data to maximize throughput over the shared channel, and scheduling data to achieve a balance between fairness and maximum throughput.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the present invention will now be described, by way of example only, with reference to the attached Figures, wherein:
Figure 1 is a schematic representation of a wireless network in accordance with an embodiment of the invention;
Figure 2 is a representation of a communications link as shown in Figure 1, comprised of multiple channels;
Figure 3 is a schematic representation of the base station shown in Figure 1;
Figure 4 is a schematic representation of one of the subscriber stations shown in Figure 1;
Figure 5 is a schematic representation of a scheduler for a shared channel running on the base station shown in Figure 3;
Figure 6 is a flowchart showing how the broadcast downlink channel scheduler shown in Figure 5 manages the scheduling of backlogged traffic flows to the shared channel.
DETAILED DESCRIPTION OF THE INVENTION
Referring now to Figure 1, a wireless network for transmitting data is indicated generally at 20. Network 20 includes a radio base station 24 and a plurality of subscriber stations 28a, 28b ... 28n. In a presently preferred embodiment, radio base station 24 is connected to at least one data telecommunications network (not shown), such as a land line-based switched data network, a packet network, etc., by an appropriate gateway and one or more backhauls (not shown), such as a T1, T3, E1, E3, OC3 or other suitable land line link, or a satellite or other radio or microwave channel link, or any other link suitable for operation as a backhaul as will occur to those of skill in the art. Base station 24 communicates with subscriber stations 28 which can be fixed, nomadic or mobile devices. The number 'n' of subscriber stations serviced by a base station 24 can vary depending upon the amount of radio bandwidth available and/or the configuration and requirements of the subscriber stations 28. As illustrated in Figure 1, the geographic distribution of subscriber stations 28 with respect to base station 24 need not be symmetric, nor will subscriber stations which are physically located close to one another necessarily experience the same or similar data reception rates, due to varying signal-to-noise ratios (SNRs) experienced at the subscriber stations 28 owing to a variety of factors including the geographic environment (the presence or absence of buildings which can reflect or mask signals), the radio environment (the presence or absence of radio noise sources), etc. Thus, in most circumstances subscriber stations 28 served by a base station 24 can have significantly different SNRs and these SNRs can change over time.
As known to those of skill in the art, subscriber stations 28 can be geographically divided into different sectors 36, formed via directional antennas at base station 24 to increase the number of subscriber stations 28 that can be served from a single base station location. In such a case, each sector 36 essentially acts as a different base station and base station 24 can manage the network resources in each sector 36 independently of each other sector 36.
While Figure 1 shows only one base station 24, it will further be apparent to those of skill in the art that network 20 can contain multiple, geographically distributed base stations 24, with overlapping sector 36 coverage of subscriber stations 28, and where each subscriber station 28 in an overlapping sector 36 coverage area can select which base station 24 it will be serviced by.
A communication link 32 is established in each sector 36 between base station 24 and each subscriber station 28 in the sector 36 via radio. Communication link 32a carries information to be transferred between base station 24 and subscriber station 28b, communication link 32b carries information to be transferred between base station 24 and subscriber stations 28c and 28d, etc. Communication link 32 can be implemented using a variety of multiple access techniques, including TDMA, FDMA, CDMA or hybrid systems such as GSM, etc. In a present embodiment, data transmitted over communication link 32 is transmitted using CDMA as a multiple access technology and the data is in the form of packets, transmitted within slotted time frames, the details of which will be discussed in greater detail below.
As used herein, the terms "package", "packaged" and "packaging" refer to the overall arrangement of the transmission of the packaged data for its reception at an intended destination receiver. Packaging of data can include, without limitation, applying different levels of forward error correcting (FEC) codes (from no coding to high levels of coding and/or different coding methods), employing various levels of symbol repetition, employing different modulation schemes (4-QAM, 16-QAM, 64-QAM, etc.) and any other techniques or methods for arranging data transmission with a selection of the amount of radio (or other physical layer) resources required, the data rate and probability of transmission errors which are appropriate for the transmission. For example, data can be packaged with rate 1/4 FEC coding (each 1 data bit is transmitted in 4 bits of information) and 16-QAM modulation for transmission to a first intended receiver and packaged with rate 1/2 FEC coding and 64-QAM modulation for transmission to a second intended receiver which has a better reception-quality than the first. Communications link 32 operates in both an uplink (from a subscriber station 28 to base station 24) and a downlink direction (from base station 24 to subscriber stations 28). The method of providing both uplink and downlink direction is not particularly limited, and in the present embodiment communications link 32 operates by frequency division duplexing (FDD). However, other methods of providing both an uplink and downlink direction, such as time division duplexing (TDD) and hybrid schemes are within the scope of the invention.
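The effect of a packaging choice on capacity can be illustrated with generic link arithmetic (this is standard modulation-and-coding bookkeeping, not a formula taken from the patent): information bits per modulation symbol = FEC code rate × log2(constellation size).

```python
import math

def info_bits_per_symbol(fec_rate, constellation_size):
    """Information bits carried per modulation symbol for a given packaging.

    fec_rate: FEC code rate (e.g. 0.25 for rate-1/4 coding).
    constellation_size: modulation order (e.g. 16 for 16-QAM, 64 for 64-QAM).
    """
    return fec_rate * math.log2(constellation_size)

# Rate-1/4 FEC with 16-QAM (poorer receiver): 0.25 * 4 = 1 bit/symbol.
# Rate-1/2 FEC with 64-QAM (better receiver): 0.5 * 6 = 3 bits/symbol.
print(info_bits_per_symbol(0.25, 16))  # 1.0
print(info_bits_per_symbol(0.5, 64))   # 3.0
```

This shows why the second receiver in the example above, with its better reception quality, is delivered three times the data per symbol with the same physical-layer resources.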
Referring now to Figure 2, in the current embodiment, communications link 32 is comprised of a plurality of channels which, in the present CDMA implementation, are achieved with orthogonal coding of link 32. In the downlink direction, base station 24 uses a shared channel, referred to as the broadcast data channel (BDCH) 38, to carry variable-rate and bursty traffic (consisting primarily of signaling and internet traffic) across a sector 36. BDCH 38 makes use of adaptive FEC and modulation to maximize downlink capacity and contains multiple packets or, more commonly, segments of packets of data for various subscriber stations 28, all time-multiplexed together into a single frame. In the present embodiment, BDCH 38 can be configured with spreading factor 4, wherein eight blocks of data can be sent within a ten millisecond frame, spreading factor 8, wherein four blocks of data can be sent within a frame, or spreading factor 16, wherein two blocks of data can be sent within a frame. Also in the present embodiment, base station 24 can support one or more BDCH 38 per sector 36 at any one time. In the uplink direction, data traffic is carried from subscriber station 28 to base station 24 using a dedicated data channel (DDCH) 44. A separate DDCH 44 is set up between base station 24 and each subscriber station 28 with an active communications link 32. Signaling traffic is carried from subscriber station 28 to base station 24, typically inband using DDCH 44. Subscriber stations 28 measure their received SNR, or other metric of their ability to receive data from base station 24, and report this information back to base station 24 on a regular basis over their DDCH 44 using an upper layer signaling protocol.
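The spreading-factor configurations just described follow a simple pattern (blocks per ten millisecond frame = 32 / spreading factor, per the figures given in the text), which can be captured as:

```python
def blocks_per_frame(spreading_factor):
    """Blocks of data per 10 ms BDCH frame for a given spreading factor.

    Matches the configurations in the text: SF 4 -> 8 blocks,
    SF 8 -> 4 blocks, SF 16 -> 2 blocks (i.e., 32 // SF).
    """
    if spreading_factor not in (4, 8, 16):
        raise ValueError("unsupported spreading factor")
    return 32 // spreading_factor
```

The lower the spreading factor, the more blocks (and hence more multiplexed subscriber stations or more data) a single frame can carry.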
Subscriber stations 28 with high SNRs require less channel coding and can use higher order modulation than subscriber stations 28 with lower SNRs and thus, each block transmitted on BDCH 38 can use a different block type (i.e., a different packaging of FEC type, FEC rate, modulation, etc.). Figure 3 shows an example of a base station 24 in greater detail. For the sake of clarity, base station 24 shows an example of a single sector base station. However, multi-sector base stations 24 are also within the scope of the invention. Base station 24 comprises an antenna 50, or antennas, for receiving and transmitting radio-communications over communications link 32. In turn, antenna 50 is connected to a radio 52 and a modem 54. Modem 54 is connected to a microprocessor-router assembly 56, such as an Intel Corporation Pentium processor based system using a conventional operating system such as Linux. Microprocessor-router assembly 56 is responsible for traffic scheduling of all subscriber stations 28 within its sector 36 and for radio resource management. It will be understood that assembly 56 can include multiple microprocessors, as desired, and/or that the router can be provided as a separate unit, if desired. The router within microprocessor-router assembly 56 is connected to a backhaul 58 in any suitable manner, which in turn connects base station 24 to a data network (not shown).
Referring now to Figure 4, an example of a subscriber station 28 is shown in greater detail. Subscriber station 28 comprises an antenna 60, or antennas, for receiving and transmitting radio-communications over communications link 32. In turn, antenna 60 is connected to a radio 64 and a modem 68, which in turn is connected to a microprocessor-assembly 72. Microprocessor-assembly 72 can include, for example, a StrongARM processor manufactured by Intel Corporation, that performs a variety of functions, including implementing A/D-D/A conversion, filters, encoders, decoders, data compressors, de-compressors and/or packet disassembly. Microprocessor-assembly 72 also includes buffers 74 which store queued data traffic waiting for transport up communications link 32.
As shown in Figure 4, microprocessor-assembly 72 interconnects modem 68 and a data port 76, for connecting subscriber station 28 to a data client device (not shown), such as a personal computer, personal digital assistant or the like which is operable to use data received over communications link 32. Accordingly, microprocessor-assembly 72 is operable to process data between data port 76 and modem 68. Microprocessor-assembly 72 is also interconnected to at least one telephony port 80, for connecting subscriber station 28 to a telephony device (not shown) such as a telephone. In some cases, particularly in the case of a mobile subscriber station 28, the data client device can be integrated into the subscriber station 28.
Referring now to Figure 5, the logical architecture of a scheduler for a shared channel, such as BDCH 38, is shown generally at 100. Scheduler 100 is responsible for assigning queued packets of data intended to be transmitted from base station 24 to subscriber stations 28 into the bitstream of BDCH 38, while maintaining any agreed-upon QoS terms for each subscriber station 28 and implementing a scheduling policy based upon a fairness factor provided by a network operator (discussed in greater detail below) in order to provide varying degrees of prioritization of fairness between subscriber stations 28 and overall throughput on BDCH 38. A method for scheduler 100 to implement a scheduling policy and schedule queued packets is described further below with reference to Figure 6. In the current embodiment, scheduler 100 is a software program running within base station 24 on microprocessor-router assembly 56. However, other implementations, such as a hardware or firmware implementation, are also within the scope of the invention.
Data 102 bound for each subscriber station 28 is queued in traffic queues 104 before being sent downstream on BDCH 38 to the various subscriber stations 28. Each flow of data 102 can contain a variety of different types of data, such as web pages, FTP data, streaming media, voice over IP data, or other data types as will occur to those of skill in the art. Traffic queues 104 are established for each subscriber station 28 that is known to, and is connected to, base station 24 over communication link 32. The example in Figure 5 shows a scenario with four traffic queues 104, each servicing a flow to a corresponding subscriber station 28 (e.g., traffic queue 104a holds traffic bound for subscriber station 28a, etc.). In the example shown in the Figure, traffic queue 104a has five queued packets, traffic queue 104b has no queued packets, traffic queue 104c has three queued packets, and traffic queue 104d has four queued packets. Traffic queues 104a, 104c, and 104d therefore have backlogged (i.e., non-zero length) flows while traffic queue 104b has no backlogged flow.
Besides holding traffic queues 104n, scheduler 100 stores a link quality parameter 108n, a negotiated service share parameter 112n, and a measured service share parameter 116n for each active subscriber station 28n with a traffic flow. Furthermore, scheduler 100 stores a fairness factor 120. As described in further detail below, there is at least one instance of fairness factor 120 per sector 36.
The link quality parameter 108n holds a suitable measurement of the reception quality experienced at the corresponding subscriber station 28n. In the present embodiment, the value of link quality parameter 108 is the signal-to-noise ratio (SNR) estimate, Ec/Nt, obtained by subscriber station 28 for at least one suitable channel, where Ec represents the per-chip BDCH channel signal energy at the antenna of subscriber station 28, and Nt represents the total noise received at antenna 60 of subscriber station 28, the total noise equaling the sum of the average noise density, interference from other interfering cells and sectors, plus multi-path interference. In the present embodiment, each subscriber station 28n periodically updates its value for link quality parameter 108 by transmitting its received SNR over an uplink channel such as DDCH 44.
The negotiated service share parameter 112n stores the value of an agreed-upon quality of service level (Φ_neg) for subscriber station 28n. In the present embodiment, the negotiated service share parameter represents a guaranteed data rate (bits/s); however, other definitions of a negotiated service share, such as a maximum delay before transmitting a waiting packet, or a combination of guaranteed data rate and maximum delay, are within the scope of the invention. Subscriber stations 28 with higher values for negotiated service share parameter 112 will receive better service than subscriber stations 28 with lower values. In the present embodiment, negotiated service share parameter 112 is negotiated between base station 24 and each subscriber station 28 when the subscriber station 28 connects to base station 24. However, the means of determining negotiated service share parameter 112 are not particularly limited. For example, negotiated service share parameter 112 can be determined by the service provider based upon the media type about to be transmitted to subscriber station 28, a monthly subscription agreement for subscriber station 28, a fee for service, etc. Other methods of determining negotiated service share parameter 112 will occur to those of skill in the art.

Measured service share parameter 116n stores the value of the measured service share (Φ_meas) for subscriber station 28n. The measured service share is the portion of BDCH 38 that has carried packets destined for that particular subscriber station 28n. Thus, a larger value for measured service share parameter 116n indicates a higher average data rate delivered to that subscriber station 28n. Measured service share parameter 116n can be equated to the average bit rate over a fixed interval of time to a particular subscriber station 28n.
Finally, fairness factor 120 is an adjustable parameter (F) that represents the scheduling policy; this parameter controls the trade-off between individual flow fairness and overall throughput over communications link 32. There can be either one instance of fairness factor 120 per sector 36, which is set by a network operator, or one instance of fairness factor 120 per BDCH 38 (in cases where there is more than one BDCH 38 per sector 36). In the current embodiment, F is normalized and ranges from zero to one. A setting of zero indicates a policy that schedules data flows 102 to maximize throughput on the downlink by scheduling data to subscriber stations 28 with the best SNRs, without regard to providing fairness between subscriber stations 28. A setting of one indicates a policy that schedules flows 102 to provide fairness between subscriber stations 28, so that all subscriber stations 28 at the same QoS level will receive the same data rate, regardless of their respective SNRs.
Once packets in a traffic queue 104 are scheduled by scheduler 100, they are moved into the blocks 128 of frames 124 of BDCH 38. Normally the spreading factor of a BDCH 38 is predetermined by a network operator and is fixed for every subscriber station 28 that is being serviced by a particular BDCH 38. In a current embodiment, a spreading factor of 4 is preferred (thus providing eight blocks 128 per frame 124).
As known to those of skill in the art, the structure of a block 128 can vary according to differences in modulation order, symbol repetition, etc. The number of bits of information carried in each block 128 can vary according to the block structure used. In a current embodiment, each block 128 can carry between 320 and 9,744 bits of information. Generally, blocks 128 with smaller information payloads use lower-order modulation and higher symbol repetition to provide greater redundancy for noisy or otherwise poor communication links 32 experienced by subscriber stations 28 with low SNRs. Communication to subscriber stations 28 with better SNRs can employ a block structure that carries more bits of information. In a current embodiment, blocks 128 with different block structures can be carried in the same frame 124.
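The mapping from SNR to block payload can be sketched as a lookup table. The table below is a minimal illustration, not the patent's actual block structures: only the endpoints (320 and 9,744 bits) come from the text, and the SNR thresholds and intermediate payload sizes are invented for the example.

```python
# Hypothetical block-structure table: (minimum usable SNR in dB, payload bits).
# Only the 320 and 9,744 bit endpoints are from the text; thresholds and the
# intermediate sizes are illustrative assumptions.
BLOCK_STRUCTURES = [
    (-2.0, 320),    # low SNR: low-order modulation, high symbol repetition
    (2.0, 1280),
    (6.0, 4096),
    (10.0, 9744),   # high SNR: high-order modulation, little repetition
]

def bits_per_block(snr_db):
    """Return b(x): the largest block payload receivable at SNR snr_db.
    Falls back to the most robust (smallest) block for very poor links."""
    best = BLOCK_STRUCTURES[0][1]
    for min_snr, bits in BLOCK_STRUCTURES:
        if snr_db >= min_snr:
            best = bits
    return best
```

This function plays the role of b(x) in the throughput-priority calculation described at step 212 below.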
Referring now to Figure 6, a flowchart for a method of scheduling data transmissions for a BDCH 38 is shown beginning at 200. In the current embodiment, the method described below is run once per scheduled frame 124. However, the frequency of the method is not particularly limited and it can be run more or less frequently than described here, if desired.
At step 200, a scheduling policy is determined by setting a value for fairness factor 120 for sector 36 (between 0 and 1) in base station 24. As described above, a value of 0 indicates a scheduling policy where scheduler 100 prioritizes sector throughput over fairness between subscriber stations 28 and a value of 1 indicates a scheduling policy where scheduler 100 transmits queued packets with maximum fairness to subscriber stations 28.
It is contemplated that, in the majority of deployments, a fairness factor 120 set somewhere between the two bounds of this range (zero and one) will be preferable. For example, a setting of 0.5 would provide a policy that gives a reasonable degree of fairness to most subscriber stations 28, while still taking advantage of SNR variances to maximize downlink throughput. It should be noted that even when fairness factor 120 is set to one (indicating maximum fairness), subscriber stations 28 with different QoS levels 112 will be scheduled differently according to their QoS levels, as fairness factor 120 determines fairness in scheduling only with regard to different SNRs. In order to provide totally equal service, QoS levels 112 should be the same for all subscriber stations 28.
At step 204, scheduler 100 receives the link quality parameter 108 for each subscriber station 28 with backlogged traffic (i.e., any subscriber station 28n whose corresponding traffic queue 104n has at least one packet in it). Link quality parameter 108 can be updated on each iteration through the method, or can be updated at appropriate longer intervals. At step 208, scheduler 100 calculates a value for the QoS-based priority (q_f) for each traffic queue 104 having at least one packet in the queue for this frame. q_f is a number between 0 and 1, with higher values indicating a higher priority for that queue 104. A value for q_f greater than 0.5 indicates that the data in queue 104 is lagging in service (i.e., the measured service share parameter 116 is less than the negotiated service share in negotiated service share parameter 112) and a q_f value less than 0.5 indicates that that particular queue 104 is leading in service (i.e., the measured service share parameter 116 is greater than the negotiated service share in negotiated service share parameter 112). A higher value of q_f implies that the data in queue 104 requires service sooner. The value for q_f is derived by subtracting the measured service share Φ_f,meas from the required service share Φ_f,req, dividing the difference by two, then adding 0.5:

q_f = (Φ_f,req - Φ_f,meas) / 2 + 0.5
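The QoS-based priority above is a one-line computation; the sketch below shows it directly, with the function name being an illustrative choice rather than anything named in the patent.

```python
def qos_priority(phi_req, phi_meas):
    """q_f = (Φ_f,req - Φ_f,meas)/2 + 0.5.

    Both shares lie in [0, 1], so q_f also lies in [0, 1].
    q_f > 0.5 means the flow is lagging its negotiated share;
    q_f < 0.5 means it is leading."""
    return (phi_req - phi_meas) / 2.0 + 0.5
```

For example, a flow that needs a 0.4 share but has only received a 0.2 share gets q_f = 0.6 and is prioritized over a flow running exactly on schedule (q_f = 0.5).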
Both Φ_f,req and Φ_f,meas (described further below) are defined as ranging between zero and one, so q_f is always a number between zero and one. The required service share (Φ_f,req) represents the amount of the service share required for negotiated service share parameter 112 to be met. For each frame that queue 104 is backlogged, the required service share increases. In the current embodiment, Φ_f,req is calculated as follows:

Φ_f,req = ( |w_f| × Φ_f ) / ( Σ_{n ∈ w} Σ_{j ∈ B(n)} Φ_j )

where the numerator is the product of the number of frames for which flow 102 was backlogged over a sliding interval of frames (|w_f|) and Φ_f, the negotiated service share parameter 112 negotiated at setup. The absolute value notation is used to indicate the length of a vector, and B(n) is the set of indices of the flows which are backlogged during frame n. The denominator is the sum of the negotiated service shares of all backlogged queues 104 over the same sliding interval of frames. The measured service share (Φ_f,meas) is determined by taking all the bits transmitted over BDCH 38 for the selected queue 104 during a sliding window interval, and dividing that by all the bits transmitted over BDCH 38 during the same sliding window interval for every queue 104. In the current embodiment, Φ_f,meas is calculated as follows:
Φ_f,meas = ( Σ_{n ∈ w} b_f(n) ) / ( Σ_{n ∈ w} Σ_{j} b_j(n) )

where b_f(n) is the number of bits transmitted from flow f during frame n.
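The two sliding-window shares can be sketched as below. This is an illustrative implementation under assumed data shapes (per-frame backlog sets and per-frame bit counts), not the patent's own code; the function and argument names are invented.

```python
def required_share(f, phi, window_backlogged):
    """Φ_f,req over a sliding window.

    phi: dict of negotiated service shares Φ_j per flow.
    window_backlogged: one set B(n) of backlogged flow ids per frame n
    in the window."""
    # Numerator: frames in which f was backlogged, times its negotiated share.
    numerator = sum(1 for B in window_backlogged if f in B) * phi[f]
    # Denominator: negotiated shares of all backlogged queues over the window.
    denominator = sum(phi[j] for B in window_backlogged for j in B)
    return numerator / denominator

def measured_share(f, window_bits):
    """Φ_f,meas: bits carried for flow f over the window, divided by all
    bits carried on the shared channel over the same window.

    window_bits: one dict {flow: b_j(n)} per frame n in the window."""
    sent = sum(frame.get(f, 0) for frame in window_bits)
    total = sum(sum(frame.values()) for frame in window_bits)
    return sent / total
```

With a two-frame window where flow "a" is always backlogged and flow "b" only in the first frame (equal negotiated shares of 0.5), required_share("a", ...) evaluates to 1.0/1.5, matching the formula above.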
At step 212, scheduler 100 computes the throughput priority (t_f) for each backlogged queue 104 for this frame. t_f is a normalized interpretation of the SNR for a particular flow 102 (f), yielding a value between 0 and 1, where 0 indicates the minimum SNR and 1 indicates the maximum possible SNR. Mathematically,

t_f = ( b(SNR_f) - b(SNR_min) ) / ( b(SNR_max) - b(SNR_min) )

where b(x) is the number of bits in a block 128 at SNR x, as determined by the block structure. At step 216, scheduler 100 computes the total priority function (p_f) for each backlogged queue 104 by adding the QoS priority (q_f, as determined at step 208) to the throughput priority (t_f, as determined at step 212), where each priority is multiplied by fairness factor 120 or the complement of fairness factor 120, respectively. In the current embodiment, p_f is calculated as follows:

p_f = F·q_f + (1 - F)·t_f,  f ∈ B

where B is the set of all backlogged queues 104 and F is the fairness factor 120. When fairness factor 120 is zero, the product of F and q_f is zero and the term t_f(1 - F), which is the throughput term, is maximal. When fairness factor 120 is one, the q_f term is maximal and the throughput term is multiplied by zero.
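Steps 212 and 216 translate directly into two small functions. This is a sketch under the definitions above; the names are illustrative, and b(SNR) values would come from a block-structure table.

```python
def throughput_priority(b_f, b_min, b_max):
    """t_f = (b(SNR_f) - b(SNR_min)) / (b(SNR_max) - b(SNR_min)).

    b_f, b_min, b_max: block payloads (bits) at this flow's SNR and at the
    minimum and maximum SNRs supported by the block structures."""
    return (b_f - b_min) / (b_max - b_min)

def total_priority(q_f, t_f, fairness):
    """p_f = F*q_f + (1 - F)*t_f for a backlogged flow.

    fairness is F in [0, 1]: 1 weighs only QoS fairness, 0 only throughput."""
    return fairness * q_f + (1.0 - fairness) * t_f
```

At F = 1 the result is just q_f; at F = 0 it is just t_f; intermediate settings blend the two, which is the policy trade-off fairness factor 120 controls.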
At step 220, all the backlogged queues 104 are sorted in descending order of the total priority function determined in step 216: p'(1) ≥ p'(2) ≥ ... ≥ p'(|B|).
At step 224, starting with the queue 104 with the highest total priority value p'(1), scheduler 100 computes the maximum number of blocks 128, referred to herein as m'(i), that will be allocated to each queue 104 by calculating the percentage of blocks 128 available to it, based upon the priority of that queue 104 relative to the sum of the total priority values of all backlogged queues. More specifically, scheduler 100 multiplies the number of blocks 128 in frame 124 (M) by the total priority function p'(i) for that queue 104 (as determined in step 216), then divides the result by the sum of the total priority functions (also determined in step 216) of all backlogged queues. In the current embodiment, the maximum number of blocks 128, m'(i), is determined as follows:

m'(i) = round( M × p'(i) / Σ_{j ∈ B} p'(j) )

where the round() operator rounds the number of scheduled blocks, m'(i), to the closest integer. At step 228, starting with the highest-priority queue 104 (p'(1)), scheduler 100 allocates up to m'(1) blocks 128. Fewer blocks 128 are allocated if the number of bits available for scheduling to queue 104 (p'(1)) requires fewer than m'(1) blocks 128. Steps 224 and 228 are repeated for all backlogged queues 104 (p'(2), p'(3), etc.) until either all backlogged queues 104 are cleared or all the blocks 128 in frame 124 are scheduled. Once this is completed, the method returns to step 204 to schedule the next frame 124. The method continues whenever data is present to be scheduled for transmission.

While the embodiments discussed herein are directed to specific implementations of the invention, it will be understood that combinations, sub-sets and variations of the embodiments are within the scope of the invention. For example, in the current embodiment, only a single queue is scheduled into each traffic block 128. However, multiple queues can be scheduled into a traffic block 128 and are within the scope of the invention. If multiple subscriber stations 28 were scheduled in the same block 128, the block structure must be suitable to satisfy the SNR requirements of all subscriber stations 28 so scheduled. One way to ensure this, for example, is to only add subscribers that have an SNR greater than that of the first subscriber allocated to the block.
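The sort-and-allocate loop of steps 220 through 228 can be sketched as a single function. This is an illustrative reading of those steps under assumed inputs (per-flow total priorities and per-flow demand expressed in blocks); the names are invented for the example.

```python
def allocate_blocks(priorities, demand_blocks, frame_blocks):
    """Sketch of steps 220-228: sort backlogged queues by total priority,
    give each up to m'(i) = round(M * p'(i) / sum of p') blocks, capped by
    its actual demand and by the blocks remaining in the frame.

    priorities: dict of total priority p_f per backlogged flow.
    demand_blocks: dict of blocks each flow could fill from its queue.
    frame_blocks: M, the number of blocks 128 in frame 124."""
    total = sum(priorities.values())
    allocation = {}
    remaining = frame_blocks
    # Step 220: descending order of total priority.
    for f, p in sorted(priorities.items(), key=lambda kv: kv[1], reverse=True):
        if remaining <= 0:
            break  # frame fully scheduled
        m = round(frame_blocks * p / total)          # step 224
        granted = min(m, demand_blocks[f], remaining)  # step 228 cap
        allocation[f] = granted
        remaining -= granted
    return allocation
```

With priorities 0.5/0.3/0.2 over an 8-block frame, the middle flow with only one block of queued data receives one block even though its proportional share rounds to two, freeing blocks for lower-priority flows.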
It is contemplated that scheduler 100 may run less frequently when the total amount of traffic in all queues 104 is below a certain threshold (representing a low volume of data traffic), so as to reduce the load on microprocessor-assembly 56. Alternatively, when the total amount of traffic is below a certain threshold, scheduler 100 may elect to suspend its scheduling method (as described with regard to Figure 6) and instead schedule traffic according to some other scheduling method, such as FIFO, in order to reduce latency for each queue 104.
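The fallback contemplated above amounts to a threshold test before each scheduling pass. The sketch below is purely illustrative: the threshold value and the function name are assumptions, and the patent does not specify how the threshold is chosen.

```python
LOW_TRAFFIC_THRESHOLD = 16  # total queued packets; illustrative value only

def pick_scheduler(queues):
    """Fall back to simple FIFO when the total backlog is small (cutting
    per-queue latency); otherwise use the priority scheduler of Figure 6."""
    total = sum(len(q) for q in queues.values())
    return "fifo" if total < LOW_TRAFFIC_THRESHOLD else "priority"
```

The same test could instead gate how often the Figure 6 method runs, which is the first variation described above.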
It is further contemplated that scheduler 100 can run more frequently than once per frame 124, where the scheduling interval is an integer multiple of the duration of block 128. This will permit the scheduler to perform well in situations where the coherence time of the channel is lower, such as when the carrier frequency is higher or when there is a greater amount of motion of pedestrians or other traffic near the subscriber station or the base station.
It is also not intended that the present invention be limited to use with the particular radio-based system described above, nor to radio-based systems in general, and it is believed that the present invention can be advantageously employed with any system for scheduling data for transmission from a single node to one or more of a plurality of other nodes over a shared communications link. Use with optical, wireline and other shared communication links is contemplated. The above-described embodiments of the invention are intended to be examples of the present invention, and alterations and modifications may be effected thereto, by those of skill in the art, without departing from the scope of the invention, which is defined solely by the claims appended hereto.

Claims

What is claimed is:
1. A method of scheduling data for transmission from a base station over a shared channel to a plurality of subscriber stations, said method comprising: determining a fairness factor from a range of possible fairness factors, where a first end of said range indicates a policy of scheduling data with maximum fairness between said plurality of subscriber stations and a second end of said range indicates a policy of scheduling data for maximum data traffic over said shared channel; for each subscriber station in said plurality of subscriber stations that said base station has data to be delivered to: determining a quality of service priority value, said quality of service priority value indicating a priority of said subscriber station relative to other subscriber stations in said plurality of subscriber stations; determining a throughput value, said throughput value indicating the quantity of data to be moved to said each subscriber station if data is scheduled to said subscriber station; determining a total priority value, said total priority value being the sum of said quality of service priority adjusted according to said fairness factor and said throughput value conversely adjusted according to said fairness factor; and scheduling data on a portion of said shared channel to at least one subscriber station of said plurality of subscriber stations, beginning with the subscriber station with the highest total priority value.
2. The method of claim 1, where said quality of service priority value is determined by comparing said negotiated service share to said measured service share, where said quality of service priority is higher when said negotiated service share is greater than said measured share and said quality of service priority value is lower when said measured share is greater than said negotiated service share.

3. The method of claim 1, where said throughput value is calculated by determining the largest block format receivable at said each subscriber station in said plurality of subscriber stations.
4. The method of claim 3, where said determining the largest block format receivable at said each subscriber station occurs by determining the signal to noise ratio received at said each subscriber station of said plurality of subscriber stations.
5. The method of claim 4, where said portion of said shared channel scheduled to said at least one subscriber station of said plurality of subscriber stations is proportional to said total priority value for said at least one subscriber station.
6. The method of claim 5, where said method occurs at least once per frame on said shared channel.
7. A system for transmitting data, comprising: a plurality of subscriber stations, each having a processor, a modem, a radio and an antenna, each subscriber station operable to receive data traffic from a base station; and a base station having a processor, a modem, a radio and an antenna, and operable to receive requests for a dedicated data channel from said subscriber stations and to schedule data for transmission to said plurality of subscriber stations over a shared channel in accordance with a scheduling policy that varies priorities between scheduling for fairness between subscriber stations and for improved throughput to said plurality of subscriber stations.
8. The system of claim 7, where each subscriber station of said plurality of subscriber stations is operable to negotiate with said at least one base station a negotiated service share of said shared channel for data traffic bound to said each subscriber station.
9. The system of claim 8, where said base station determines a quality of service priority value for each subscriber station in said plurality of subscriber stations by comparing a negotiated service share for said each subscriber station to a measured service share for said each subscriber station.
10. The system of claim 9, where said base station determines a throughput value for said each subscriber station in said plurality of subscriber stations by determining the largest block format receivable at said each subscriber station in said plurality of subscriber stations.
11. The system of claim 10, where said base station determines the largest block format receivable at said each subscriber station by determining the signal to noise ratio received at said each subscriber station of said plurality of subscriber stations.
12. The system of claim 11, where said each subscriber station of said plurality of subscriber stations transmits an indication of its signal to noise ratio to said base station.
13. A system for transmitting data, comprising: a plurality of subscriber stations, each having a processor, a modem, a radio and an antenna, each subscriber station operable to receive data traffic from a base station; and at least one base station, said at least one base station having a processor, a modem, a radio and an antenna, operable to schedule data traffic to each of said plurality of subscriber stations over a shared channel in accordance with the method described in claim 1.
14. The system of claim 13, where each subscriber station of said plurality of subscriber stations is operable to negotiate with said at least one base station a negotiated service share of said shared channel for data traffic bound to said each subscriber station.
15. The system of claim 14, where said base station determines a quality of service priority value for each subscriber station in said plurality of subscriber stations by comparing a negotiated service share for said each subscriber station to a measured service share for said each subscriber station.
16. The system of claim 15, where said base station determines a throughput value for said each subscriber station in said plurality of subscriber stations by determining the largest block format receivable at said each subscriber station in said plurality of subscriber stations.
17. The system of claim 16, where said base station determines the largest block format receivable at said each subscriber station by determining the signal to noise ratio received at said each subscriber station of said plurality of subscriber stations.
18. The system of claim 17, where said each subscriber station of said plurality of subscriber stations transmits an indication of its signal to noise ratio to said base station.
19. The system of claim 18, where said base station schedules data traffic to each of said plurality of subscriber stations over a shared channel in accordance with the method described in claim 1 whenever the total amount of data traffic reaches at least a predetermined threshold and schedules data traffic to each of said plurality of subscriber stations in accordance with another method whenever the total amount of data traffic is below said predetermined threshold.
20. A base station having a microprocessor, a modem, a radio and an antenna, and operable to schedule data traffic to a plurality of subscriber stations over a shared channel in accordance with the method described in claim 1.
21. The base station of claim 20, where said base station determines a quality of service priority value for each subscriber station in said plurality of subscriber stations by comparing a negotiated service share for said each subscriber station to a measured service share for each subscriber station.
22. The base station of claim 21 , where said base station determines a throughput value for said each subscriber station in said plurality of subscriber stations by determining the largest block format receivable at said each subscriber station in said plurality of subscriber stations.
23. The base station of claim 22, where said base station determines the largest block format receivable at said each subscriber station by determining the signal to noise ratio received at said each subscriber station of said plurality of subscriber stations.
24. The base station of claim 23, where said each subscriber station of said plurality of subscriber stations transmits an indication of its signal to noise ratio to said base station.
25. The base station of claim 24, where said base station schedules data traffic to each of said plurality of subscriber stations over a shared channel in accordance with the method described in claim 1 whenever the total amount of data traffic reaches at least a predetermined threshold and schedules data traffic to each of said plurality of subscriber stations in accordance with another method whenever the total amount of data traffic is below said predetermined threshold.
PCT/CA2003/001256 2002-08-19 2003-08-14 Scheduler for a shared channel WO2004017650A2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2004528217A JP2005536129A (en) 2002-08-19 2003-08-14 Shared communication channel scheduler
AU2003257341A AU2003257341A1 (en) 2002-08-19 2003-08-14 Scheduler for a shared channel
MXPA05002000A MXPA05002000A (en) 2002-08-19 2003-08-14 Scheduler for a shared channel.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CA2,398,755 2002-08-19
CA002398755A CA2398755A1 (en) 2002-08-19 2002-08-19 Scheduler for a shared channel

Publications (2)

Publication Number Publication Date
WO2004017650A2 true WO2004017650A2 (en) 2004-02-26
WO2004017650A3 WO2004017650A3 (en) 2004-05-06

Family

ID=31722361

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2003/001256 WO2004017650A2 (en) 2002-08-19 2003-08-14 Scheduler for a shared channel

Country Status (6)

Country Link
JP (1) JP2005536129A (en)
CN (1) CN1330153C (en)
AU (1) AU2003257341A1 (en)
CA (1) CA2398755A1 (en)
MX (1) MXPA05002000A (en)
WO (1) WO2004017650A2 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8228920B2 (en) * 2006-05-01 2012-07-24 Agere Systems Inc. High-throughput scheduler with guaranteed fairness for wireless networks and other applications
KR100911304B1 (en) * 2007-06-18 2009-08-11 엘지전자 주식회사 Method for transmitting data of radio bearer having priority in wirelss communication system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001078420A1 (en) * 2000-04-10 2001-10-18 Switchcore Ab Method and apparatus for distribution of bandwidth in a switch
US20020093976A1 (en) * 2000-11-30 2002-07-18 Leonid Razoumov Method and apparatus for scheduling packet data transmissions in a wireless communication system
WO2003051007A1 (en) * 2001-12-13 2003-06-19 Nortel Networks Limited Priority scheduler

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001063856A1 (en) * 2000-02-25 2001-08-30 Telefonaktiebolaget Lm Ericsson (Publ) Flow control between transmitter and receiver entities in a communications system
IT1319611B1 (en) * 2000-12-22 2003-10-20 Siemens Inf & Comm Networks PROCEDURE FOR SCHEDULING PERMITS OF DATA TRANSMISSION APACCHETTO ON RADIO CHANNELS SHARED BY MOBILE STATIONS IN SYSTEMS
CN1347253A (en) * 2001-11-23 2002-05-01 杨大成 Algorithm for scheduling and distributing packet data service resources in mobile environment


Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2005084066A1 (en) * 2004-02-27 2007-08-30 三菱電機株式会社 Scheduler, base station, and scheduling method
JP4537390B2 (en) * 2004-02-27 2010-09-01 三菱電機株式会社 Base station and scheduling method
US8472322B2 (en) 2004-05-05 2013-06-25 Qualcomm Incorporated Method and apparatus for adaptive delay management
JP2007536827A (en) * 2004-05-05 2007-12-13 クゥアルコム・インコーポレイテッド Method and apparatus for adaptive delay management in a wireless communication system
US8331377B2 (en) 2004-05-05 2012-12-11 Qualcomm Incorporated Distributed forward link schedulers for multi-carrier communication systems
US8750329B2 (en) 2005-01-28 2014-06-10 Rockstar Consortium Us Lp Optimized scheduling method for delay-sensitive traffic on high speed shared packet data channels
WO2006081570A1 (en) * 2005-01-28 2006-08-03 Nortel Networks Limited Optimized scheduling method for delay-sensitive traffic on high speed shared packet data channels
CN100401847C (en) * 2005-08-01 2008-07-09 华为技术有限公司 Different service realizing method
EP1804438A1 (en) * 2005-12-28 2007-07-04 Nokia Corporation Packet scheduler in a radio system
CN103188165A (en) * 2013-03-12 2013-07-03 神州数码网络(北京)有限公司 Intelligent router multipath output load balancing method and router
CN103188165B (en) * 2013-03-12 2016-06-01 神州数码网络(北京)有限公司 A kind of router multipath exports method and the router of intelligent load equilibrium
CN103152284A (en) * 2013-03-18 2013-06-12 神州数码网络(北京)有限公司 Method for balancing multipath output intelligent loads of router and router
CN103152284B (en) * 2013-03-18 2016-06-01 神州数码网络(北京)有限公司 A kind of router multipath exports method and the router of intelligent load equilibrium
CN103780532A (en) * 2014-01-16 2014-05-07 广东省电信规划设计院有限公司 Uplink OFDM system subcarrier and power distribution method and system
CN103780532B (en) * 2014-01-16 2018-01-05 广东省电信规划设计院有限公司 Upgoing O FDM system subcarriers and power distribution method and system
CN105939280A (en) * 2015-08-27 2016-09-14 杭州迪普科技有限公司 Flow scheduling method and device

Also Published As

Publication number Publication date
CA2398755A1 (en) 2004-02-19
MXPA05002000A (en) 2005-05-27
CN1330153C (en) 2007-08-01
CN1692663A (en) 2005-11-02
AU2003257341A1 (en) 2004-03-03
WO2004017650A3 (en) 2004-05-06
JP2005536129A (en) 2005-11-24

