EP2898626A1 - Method and system for supporting dynamic resource management in a backhaul network - Google Patents

Method and system for supporting dynamic resource management in a backhaul network

Info

Publication number
EP2898626A1
Authority
EP
European Patent Office
Prior art keywords
pipes
traffic
excess
offline
resources
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP13779151.3A
Other languages
German (de)
French (fr)
Inventor
Johannes LESSMANN
Stefan Schmid
Stella Spagna
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEC Laboratories Europe GmbH
Original Assignee
NEC Europe Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Europe Ltd filed Critical NEC Europe Ltd
Priority to EP13779151.3A priority Critical patent/EP2898626A1/en
Publication of EP2898626A1 publication Critical patent/EP2898626A1/en
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0896Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/12Shortest path evaluation
    • H04L45/125Shortest path evaluation based on throughput or bandwidth
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14Network analysis or design
    • H04L41/145Network analysis or design involving simulating, designing, planning or modelling of a network

Definitions

  • the present invention relates to a method and a system for supporting dynamic resource management in a backhaul network.
  • a backhaul network that may also be designated as mobile backhaul network.
  • Mobile backhaul networks connect the remote base stations and cell towers to the mobile operator's core networks and provide access to both the voice network and the internet.
  • the mobile backhaul network may consist of a wired part and a wireless part.
  • optical fiber is the predominant, but not necessarily only, technology in the middle mile and microwave radio is the predominant, but not necessarily only, technology in the last mile.
  • the first and middle mile typically employ ring or mesh topologies.
  • in the last mile, i.e. the final leg of the telecommunications network delivering communications connectivity to enterprise customers or base stations, most current deployments are still tree, star or chain topologies.
  • however, there is a clear trend towards ring and mesh topologies in the last mile as well, all the more so with the introduction of small cells.
  • TE traffic engineering
  • the aforementioned object is accomplished by a method comprising the features of claim 1.
  • a method for supporting dynamic resource management in a backhaul network wherein a resource management function is provided that includes an offline component and an online component for routing data traffic in the form of pipes, wherein said pipes include a path from an ingress switch to an egress switch of said backhaul network and an assigned capacity, wherein said offline component performs offline path computation based on expected traffic demands that are determined from one or more traffic matrices in order to compute offline computed paths for said pipes being represented in said one or more traffic matrices, wherein said expected traffic demands constitute claimable resources for said pipes, wherein said offline computed paths are installed in the backhaul network in order to configure said pipes, wherein initially only a fraction of the maximum allowable capacity that corresponds to the amount of said claimable resources is allocated as capacity for said pipes, wherein said resource management function allocates capacities to said pipes dependent on current data traffic, wherein in case excess traffic demands for one or more of said pipes - excess pipes - occur, because being beyond said expected traffic demands for said excess pipes, said excess traffic demands constitute opportunistic resources for said excess pipes, and wherein said online component performs online path computation in such a way that said online component uses unblocked claimable resources of one or more pipes different from said excess pipes in order to provide said opportunistic resources for said excess pipes.
  • a system for supporting dynamic resource management in a backhaul network includes a resource management function that includes an offline component and an online component for routing data traffic in the form of pipes, wherein said pipes include a path from an ingress switch to an egress switch of said backhaul network and an assigned capacity, wherein said offline component is configured to perform offline path computation based on expected traffic demands that are determined from one or more traffic matrices in order to compute offline computed paths for said pipes being represented in said one or more traffic matrices, wherein said expected traffic demands constitute claimable resources for said pipes, wherein said offline computed paths are installed in the backhaul network in order to configure said pipes, wherein initially only a fraction of the maximum allowable capacity that corresponds to the amount of said claimable resources is allocated as capacity for said pipes, wherein said resource management function is configured to allocate capacities to said pipes dependent on current data traffic, wherein in case excess traffic demands for one or more of said pipes - excess pipes - occur, because being beyond said expected traffic demands for said excess pipes, said excess traffic demands constitute opportunistic resources for said excess pipes, and wherein said online component is configured to perform online path computation in such a way that said online component uses unblocked claimable resources of one or more pipes different from said excess pipes in order to provide said opportunistic resources for said excess pipes.
  • a resource management function is implemented that includes an offline component and an online component for routing data traffic in the form of pipes.
  • a pipe represents a path from an ingress switch to an egress switch.
  • the notion of ingress and egress switches depends on the direction of traffic. For upstream traffic, the first switch after the base station is the ingress switch and the access gateway to the core network is the egress switch. The reverse is true for the downstream direction. For the sake of easier explanation with regard to the description of the present invention and its preferred embodiments, the upstream direction is assumed in the terminology used in the following.
  • a pipe is defined by a capacity that is assigned to the path or rather to the pipe. More specifically, a pipe may be a tunnel such as, e.g., an MPLS (Multi Protocol Label Switching) LSP (Label Switched Path) or an Ethernet EPL (Ethernet Private Line) between an ingress switch, e.g. the cell-site switch, and an egress switch, e.g. the Serving Gateway in an LTE network, with a certain capacity.
  • a pipe will generally bundle multiple application-level flows of the same QoS class.
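The pipe abstraction described above can be sketched as a small data structure; all names below are illustrative assumptions, not definitions from the patent:

```python
from dataclasses import dataclass

# Hypothetical sketch of the pipe abstraction: a path from an ingress
# switch to an egress switch plus an assigned capacity, bundling
# application-level flows of one QoS class.
@dataclass
class Pipe:
    ingress: str           # e.g. the cell-site switch
    egress: str            # e.g. the Serving Gateway in an LTE network
    path: list             # ordered switches from ingress to egress
    qos_class: str         # a pipe bundles flows of the same QoS class
    capacity_mbps: float   # currently allocated capacity
    claimable_mbps: float  # claimable resources (traffic-matrix value)

pipe = Pipe("cell-site-1", "sgw-1", ["cell-site-1", "agg-1", "sgw-1"],
            "voice", capacity_mbps=10.0, claimable_mbps=40.0)
# Initially, only a fraction of the claimable resources is allocated:
assert pipe.capacity_mbps <= pipe.claimable_mbps
```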
  • the offline component performs offline path computation based on expected traffic demands that are determined from one or more traffic matrices in order to compute offline computed paths for the pipes, wherein the pipes are represented in the one or more traffic matrices.
  • these expected traffic demands constitute claimable resources for the pipes represented in the one or more traffic matrices.
  • the claimable resources of a pipe that may be dependent on the value in the traffic matrix can be assigned or allocated to this pipe and may be primarily intended for being used by this pipe.
  • an enormous improvement for dynamic resource management in a backhaul network may be achieved by allocating and blocking the claimable resources only to the extent as needed. Consequently, the offline computed paths are installed in the backhaul network in order to configure the pipes and, initially, only a fraction of the maximum allowable capacity that corresponds to the amount of the claimable resources is allocated as capacity for, preferably each of, the pipes.
  • the resource management function dynamically allocates capacities to the pipes dependent on current/actual data traffic, wherein in case excess traffic demands occur for one or more of the pipes which may be designated as excess pipes, because the traffic demands are more than the expected traffic demands for the excess pipes, then the excess traffic demands constitute opportunistic resources for the excess pipes.
  • the online component then performs online path computation in such a way that the online component dynamically uses unblocked claimable resources of one or more pipes different from the excess pipes in order to provide the opportunistic resources for the excess pipes.
  • the present invention distinguishes between resource ownership and actual allocation and allows dynamic reuse of owned but unused resources.
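The distinction between resource ownership and actual allocation can be illustrated with a minimal bookkeeping sketch, assuming a simplified model in which the owned-but-unallocated ("unblocked") claimable resources of other pipes can be lent to an excess pipe; all function and field names are hypothetical:

```python
# Each pipe owns claimable resources but only allocates part of them;
# the unblocked remainder can serve as opportunistic resources elsewhere.
def unblocked_claimable(pipes):
    """Sum of owned-but-unallocated capacity across the given pipes."""
    return sum(p["claimable"] - p["allocated"] for p in pipes)

def grant_opportunistic(excess_pipe, demand, pipes):
    """Grant up to `demand` from resources owned but unused by other pipes."""
    others = [p for p in pipes if p is not excess_pipe]
    grant = min(demand, unblocked_claimable(others))
    excess_pipe["allocated"] += grant
    return grant

pipes = [{"claimable": 40, "allocated": 10},   # 30 Mbps unblocked
         {"claimable": 30, "allocated": 30},   # fully allocated
         {"claimable": 20, "allocated": 20}]   # excess pipe at its limit
granted = grant_opportunistic(pipes[2], demand=25, pipes=pipes)
assert granted == 25  # 30 Mbps unblocked elsewhere, 25 Mbps requested
```

This deliberately omits path feasibility: in the patent's scheme, the online component must also compute a path over which the borrowed capacity is available.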
  • the method and the system according to the invention improve the resource utilization of the backhaul links in a backhaul network and allow maximizing the amount of data traffic which the network can serve.
  • the method and the system provides a solution that is highly applicable and suitable for being deployed in actual converged mobile backhaul networks comprising wired and wireless backhaul links.
  • the offline computed paths and/or online computed paths may be configured by the resource management function by means of installing forwarding entries in intermediate switches without performing resource reservation.
  • the intermediate switches are switches located between an ingress switch and an egress switch.
  • the paths computed by the offline component may be initially only installed in terms of the required forwarding entries in the intermediate nodes.
  • the capacity corresponding to the traffic matrix values may not be fully allocated immediately. Rather, only a fraction of the full capacity may be allocated initially. Only if more capacity is needed over time is the size of the pipe increased, ultimately until it has reached the full capacity as stated in the traffic matrix.
  • the resources that are assigned to an ingress-egress pair by virtue of the traffic matrix and the offline path computation are the claimable resources. Allocating the claimable resources only as needed rather than directly blocking them allows the online component to dynamically reuse as yet unallocated resources for accommodating other pipes in case of excess traffic. To this extent, the online component may perform dynamic path computation for cases where the actual data traffic deviates from the traffic matrix, i.e. for excess traffic. It is noted that in a mobile backhaul network, path computation as well as installing and deinstalling pipes is usually done via the Network Management System (NMS), i.e. a centralized entity. Thus, the resource management function including the offline and online component may be part of the NMS or interface with it.
  • NMS Network Management System
  • offline computation may also be completely decoupled, requiring a human in the loop to setup the paths suggested by the offline component manually.
  • the online component may also be handled via distributed IGP (Interior Gateway Protocol) protocols such as OSPF (Open Shortest Path First).
  • IGP Interior Gateway Protocol
  • OSPF Open Shortest Path First
  • the resource management function may be a centralized entity.
  • traffic classification for assigning the data traffic to the pipes may be performed by the ingress switches and/or by base stations located in front of the ingress switches.
  • load monitoring is performed by the ingress switches and/or base stations located in front of the ingress switches, wherein results of the load monitoring are reported to the resource management function.
  • the functionality for traffic classification and load monitoring may be located in the ingress switches and/or in the base stations.
  • the ingress switches may have functionality for performing traffic shaping in order to enforce that only an admitted amount of data traffic is transmitted over the pipes.
  • allocating capacities to the pipes in the form of increasing or decreasing the pipe capacities is triggered by base stations, wherein the base stations translate bearer requests on the mobile network layer to corresponding pipe change requests or new pipe requests on the transport network layer.
  • QoS Quality of Service
  • allocating capacities to the pipes in the form of increasing or decreasing the pipe capacities is triggered based on thresholds that are in relation to monitored current data traffic of the pipes.
  • triggering pipe size changes could be done by the first switch after the base station, i.e. by the ingress switch.
  • the triggering is done based on monitoring the current data traffic/load, since switches are generally oblivious to bearer management messages of the mobile network layer, unless they performed deep packet inspection, which will often not be possible because packets are IPSec encrypted.
  • a link l has a capacity c_l
  • a pipe p has a capacity c_p with current traffic t_p
  • when the current traffic of a pipe exceeds its increase threshold, the switch will ask the resource management function for new resources. The resource management function then decides the amount of new resources, which might depend on parameters, e.g. the existing pipe size, the amount of open claimable resources, the overall network load, fairness considerations, the Quality of Service (QoS) class of p, etc., as well as the path. If the new resources are still within the claimable resources of that ingress-egress pair, no new path has to be computed; the one that was computed by the offline component can be used. Otherwise, the path returned by the resource management function can be the old one, in which case the pipe is just increased in capacity, or a new one, if the old path is too saturated or does not meet the availability requirements for the pipe's QoS class. Releasing resources based on r_p may be performed analogously.
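The threshold-based trigger can be sketched as follows, assuming that crossing the increase threshold i_p or falling below the release threshold r_p makes the ingress switch signal the resource management function (the exact semantics of the thresholds are not spelled out in the text, so this is a simplified illustration):

```python
def check_pipe(t_p, i_p, r_p):
    """Return the action an ingress switch would signal to the resource
    management function, given monitored traffic t_p of a pipe with
    increase threshold i_p and release threshold r_p (assumed semantics)."""
    if t_p > i_p:
        return "request-increase"
    if t_p < r_p:
        return "request-release"
    return "no-op"

assert check_pipe(t_p=9.5, i_p=8, r_p=2) == "request-increase"
assert check_pipe(t_p=1.0, i_p=8, r_p=2) == "request-release"
assert check_pipe(t_p=5.0, i_p=8, r_p=2) == "no-op"
```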
  • the increase threshold i_p is set identical to the capacity c_p.
  • traffic bursts will exceed not only the increase threshold but also the actual pipe capacity.
  • such pipes may be designated as soft-pipes.
  • a soft-pipe basically defines an upper bound in terms of resources that are assigned to the soft-pipe (c_p), but this upper bound is not strictly enforced. For example, in case of a short burst of high traffic, i.e. t_p > c_p, the traffic could still be admitted into the pipe.
  • the ingress switch may start to limit/shape the data traffic admitted into the soft-pipe. Admission control for soft-pipes is thus based on the pipe's capacity c_p and the traffic t_p over a preconfigured time period T_p.
  • long buffers may be included in the ingress switches to allow the ingress switch to perform traffic shaping in case of high traffic over a sustained time, which may allow the network to avoid unnecessary packet drops and thus stabilizes the Transmission Control Protocol (TCP) performance.
  • TCP Transmission Control Protocol
  • the proposed soft-pipe allows an interaction between ingress shaping and pipe size adaptation. The more aggressive the shaper, the more pipe adaptations need to take place to follow traffic patterns. On the other hand, the more optimistic the shaper, the fewer pipe adaptations are needed. Thus, combining over-admission, supported by the ingress shaper, with pipe adaptation represents a good trade-off between pipe management and signaling complexity on the one hand and packet losses/retransmissions on the other.
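Soft-pipe admission as described above can be sketched with a toy shaper: bursts above c_p are admitted, but excess sustained over the window T_p triggers shaping. The streak-counting model below is an assumption made for illustration, not the patent's mechanism:

```python
class SoftPipeShaper:
    """Hedged sketch of soft-pipe admission control: traffic above the
    pipe capacity c_p is tolerated for short bursts, but shaped once the
    excess persists over the preconfigured period T_p (here: a number of
    consecutive monitoring samples)."""
    def __init__(self, c_p, t_p_window):
        self.c_p = c_p
        self.window = t_p_window
        self.excess_streak = 0

    def admit(self, t_p):
        if t_p > self.c_p:
            self.excess_streak += 1
        else:
            self.excess_streak = 0
        # Admit bursts shorter than T_p; shape sustained excess.
        return self.excess_streak < self.window

s = SoftPipeShaper(c_p=10, t_p_window=3)
assert s.admit(12) is True   # burst admitted although t_p > c_p
assert s.admit(12) is True
assert s.admit(12) is False  # excess sustained over T_p: shaping starts
assert s.admit(8) is True    # traffic back below c_p
```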
  • capacities may be assigned to the pipes dependent on a modulation and coding scheme (MCS) in such a way that a minimum availability provided by the modulation and coding scheme (MCS) and required for a Quality of Service (QoS) class is ensured.
  • MCS modulation and coding scheme
  • time slots may be employed in order to consider recurring patterns of data traffic in the backhaul network, wherein preferably one traffic matrix is captured per time slot.
  • expected traffic demands determined from the traffic matrices or rather the values of the traffic matrices are closer to the actual and current data traffic demand.
  • multiple traffic matrices are employed.
  • Quality of Service (QoS) classes of data traffic may be considered in such a way that the number of the time slots is made dependent on the Quality of Service (QoS) class of data traffic transmitted in one or more of the pipes.
  • the number of routing configurations to be computed may be made dependent on the Quality of Service (QoS) class of data traffic transmitted in one or more of the pipes.
  • a single routing configuration for the pipes may be computed, wherein the single routing configuration takes into account multiple traffic matrices.
  • the pipes may be rerouted at boundaries of the time slots.
  • one or more or each of the pipes may bundle multiple data traffic flows of the same Quality of Service (QoS) class.
  • upon detecting too much data traffic in one or more of the pipes, the ingress switches request the resource management function to allocate further resources.
  • alternatively, the ingress switches merely inform the resource management function that the load in the pipe has critically increased. The resource management function then decides whether further resources are allocated or which decision is to be taken.
  • the offline path computation and/or the online path computation may take a splitting granularity into account, wherein the splitting granularity is derived from predetermined flow classification granularities.
  • the possible traffic splitting granularity is taken into consideration. This is particularly advantageous both for offline and online pipe placement.
  • during the online adaptation of pipe sizes, it will not be possible in all cases to simply increase existing pipes; rather, the added resource share must be routed along a different path. When doing this, it is important not to change pipe sizes in units smaller than the possible splitting granularity. The latter is defined by the granularity at which packet flows can be distinguished. If the base station is responsible for pipe management, splitting can be done at flow granularity.
  • IPSec Internet Protocol Security
  • VLAN Virtual Local Area Network
  • DSCP Differentiated services Code Point
  • packets would have to be mapped to the 3 Mbps pipe according to some flow-unaware scheduling strategy, i.e. quasi-randomly. This would lead to reordering problems at the egress switch, as packets from the same flow might take different paths with different latencies. For TCP flows, this would be highly detrimental.
  • the splitting granularity may be automatically estimated based on monitoring of the ingress switches and/or the base station.
  • discrete splitting granularities may be derived from the given flow classification granularities.
  • the discrete splitting granularities may be taken into account for pipe routing, e.g. on-demand splitting.
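The constraint that pipe size changes should be multiples of the splitting unit can be made concrete with a small sketch; the interpretation that a granularity of γ flow classes yields units of demand/γ is an assumption drawn from the fragment model described here:

```python
import math

def split_unit(demand_mbps, gamma):
    """With a splitting granularity of gamma distinguishable flow classes,
    the smallest share a demand can be split into is demand/gamma
    (illustrative interpretation of the fragment model)."""
    return demand_mbps / gamma

def round_up_to_unit(delta_mbps, unit):
    """Round a requested capacity change up to whole splitting units, so
    that no pipe change is smaller than the splitting granularity allows."""
    return math.ceil(delta_mbps / unit) * unit

unit = split_unit(demand_mbps=12.0, gamma=4)       # 3 Mbps units
assert unit == 3.0
assert round_up_to_unit(5.0, unit) == 6.0          # two 3 Mbps units
```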
  • the offline path computation may be based on traffic matrices. Hence, it needs to be executed only once the traffic patterns have deviated to a large enough extent from the matrices on which the current routing is based. For example, this may be in the order of days, weeks, or months. In any case, offline computation does not have to happen in real-time, which is why some time to compute can be afforded.
  • the offline component may take as an input the set of base stations, the backhaul topology, e.g. an adjacency matrix, with the possible link capacities - which can vary depending on the MCS in case of wireless links - and intermediate switches, the set of gateway nodes, and/or the traffic demands.
  • the offline component computes routes for all ingress-egress pairs such that the required traffic demand can be accommodated and the resource utilization of the backhaul network is maximized.
  • the latter could mean to maximize the minimum residual link capacity, to maximize the mean residual link capacity, etc.
  • the offline path computation may be implemented by using an Integer Linear Program that takes the splitting granularity and/or a capacity variation per link (m,n) into account, wherein the capacity variation is the difference between the maximum utilization and the minimum utilization of a predetermined link (m,n) across all time slots.
  • the offline path computation may be implemented as a Mixed Integer Linear Program (MILP) as follows:
  • MILP Mixed Integer Linear Program
  • Equations (1) to (5) are classical network flow constraints.
  • Variable r is a routing variable that is non-zero if the traffic demand between nodes i and j (as given by the traffic matrix) in time slot s is routed over link (m,n), i.e. in that direction. Otherwise, it is zero.
  • the number of fragments, i.e. variable k, represents the splitting granularity, since a fluid traffic model, i.e. infinite splitting granularity, is not to be assumed.
  • Equation (1) says that the sum of routing variables for the k-th fragment of a traffic demand between i and j in time slot s, leaving the source node i of that demand, must amount to 1, i.e. the full demand must leave the source node. Likewise, the full demand must enter the destination j (cf. Equation (3)). Equations (2) and (4) say that a demand may not enter its source or leave its destination. Equation (5) is the classical continuity constraint on intermediate nodes for a demand. Clearly, k is bound by the splitting granularity, i.e. the maximum number of distinct flow classes that the ingress flow classifier can distinguish (cf. Equation (14)).
  • Equation (1) is only specified if the traffic demand d_{i,j,k,s} > 0 and k does not exceed the splitting granularity. Similar omissions hold for the other equations.
  • Variable u_{m,n,s} in Equation (6) is the consumed capacity of link (m,n) in time slot s and thus the sum of all demands that are routed over that link in either direction. Equation (7) then defines v_{m,n} as the maximum consumed capacity of (m,n) in any time slot. This in turn is bound by the available physical link capacity C_{m,n} (Equation (8)).
  • Variable c max is the maximum link utilization in the network and will be minimized as part of the objective function.
  • the consumed capacity in slot s, u_{m,n,s}, is used in Equation (10) to define two further variables, namely the lower bound, or minimum consumed capacity across all time slots, and the upper bound, or maximum consumed capacity across all slots.
  • the difference between the two, as defined in Equation (11), represents the capacity variation per link (m,n).
  • the variable defined in Equation (12) is then the total capacity variation in the network.
  • the capacity variation of a link is the difference between the maximum utilization and the minimum utilization of a particular link across all time slots. A high value thus means that some load goes over a certain link in time slot t1, while much less load goes over that same link in another time slot t2.
  • Equation (16) is an important constraint which expresses that a demand should not be routed over links (m,n) and (n,m) at the same time (i.e. in both directions of the same physical link). If this constraint were not given, a solution would often have exactly that, because adding a demand "twice" to the consumed capacity u_{m,n,s} on a (not so congested) link can reduce the capacity variation on that link (without impacting the maximum link utilization value), thereby optimizing the objective function. Since r can be any decimal value, it cannot be used directly in Equation (16); rather, corresponding binary variables need to be introduced that constitute an upper bound of the routing variables r (cf. Equation (15)) but are at the same time either 0 or 1 (cf. Equation (17)). This indirection via binary variables b allows the linear program to be computed much faster (because of so-called integer relaxation).
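The semantics of the classical flow constraints, Equations (1) to (5), can be illustrated with a small feasibility check for a single demand fragment. This is a hedged sketch of the constraint meaning only, not the patent's MILP; the function and variable names are illustrative:

```python
def flow_ok(routing, src, dst, nodes):
    """Check the flow constraints (1)-(5) for one demand fragment.
    routing maps directed links (m, n) -> fraction of the fragment carried.
    The full fragment must leave src (Eq. 1) and enter dst (Eq. 3), must
    not enter its source or leave its destination (Eqs. 2, 4), and must be
    conserved at intermediate nodes (Eq. 5)."""
    out_ = lambda v: sum(f for (m, n), f in routing.items() if m == v)
    in_ = lambda v: sum(f for (m, n), f in routing.items() if n == v)
    if out_(src) != 1 or in_(src) != 0:        # Eqs. (1), (2)
        return False
    if in_(dst) != 1 or out_(dst) != 0:        # Eqs. (3), (4)
        return False
    return all(in_(v) == out_(v)               # Eq. (5): continuity
               for v in nodes if v not in (src, dst))

# A fragment split 50/50 over two paths from A to D satisfies all five:
r = {("A", "B"): 0.5, ("B", "D"): 0.5, ("A", "C"): 0.5, ("C", "D"): 0.5}
assert flow_ok(r, "A", "D", nodes={"A", "B", "C", "D"})
```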
  • the offline path computation is implemented by using a genetic algorithm that takes the splitting granularity into account.
  • the offline path computation may be achieved by using a high-level genetic algorithm with, e.g. unique heuristics for population creation, mutation, recombination operators and/or taking the dynamically determined splitting granularity into account.
  • the offline path computation may be implemented as a genetic algorithm as follows:
  • a solution in the genetic algorithm population may be initialized as follows:
  • the mutation operator for a solution in the genetic algorithm may be implemented as follows:
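The concrete initialization and mutation listings are not reproduced in this text, so the following is merely a hedged sketch of what such a mutation operator could look like: rerouting one randomly chosen demand onto an alternative candidate path. All names and the operator's design are assumptions, not the patent's heuristics:

```python
import random

def mutate(solution, all_paths, rng=random.Random(0)):
    """Toy mutation operator for a genetic routing algorithm: pick one
    demand and move it to a different precomputed candidate path.
    solution: demand -> chosen path; all_paths: demand -> candidate paths."""
    mutant = dict(solution)
    demand = rng.choice(sorted(mutant))
    alternatives = [p for p in all_paths[demand] if p != mutant[demand]]
    if alternatives:  # demands with a single candidate stay unchanged
        mutant[demand] = rng.choice(alternatives)
    return mutant

paths = {"d1": [("A", "B"), ("A", "C", "B")],
         "d2": [("B", "C")]}
sol = {"d1": ("A", "B"), "d2": ("B", "C")}
m = mutate(sol, paths)
assert m["d2"] == ("B", "C")       # only one candidate: unchanged
assert m["d1"] in paths["d1"]      # d1 still routed on a valid path
```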
  • the online path computation may be handled by any algorithm with fast reaction times, e.g. by a Constrained Shortest Path First approach with constraints reflecting the required capacity, QoS parameters etc.
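A Constrained Shortest Path First computation of the kind mentioned above can be sketched by pruning links whose residual capacity is below the pipe's requirement and running Dijkstra on the remainder; other QoS constraints are omitted for brevity, and the graph format is an assumption:

```python
import heapq

def cspf(adj, src, dst, required_mbps):
    """CSPF sketch: ignore links with residual capacity below the pipe's
    required capacity, then find the cheapest remaining path.
    adj: node -> list of (neighbor, cost, residual_mbps)."""
    dist, prev, heap = {src: 0}, {}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, cost, residual in adj.get(u, []):
            if residual < required_mbps:   # capacity constraint
                continue
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None                        # no feasible path
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

adj = {"in": [("a", 1, 50), ("b", 1, 5)],
       "a": [("out", 1, 50)],
       "b": [("out", 1, 50)]}
# The route via "b" lacks residual capacity, so CSPF picks "a":
assert cspf(adj, "in", "out", required_mbps=10) == ["in", "a", "out"]
```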
  • the Security Parameter Index may be employed for packet classification to increase the splitting granularity in such a way that new Security Parameter Indexes (SPIs) are dynamically employed in order to increase the splitting granularity, in particular once the resource management function discovers that the current splitting granularity is insufficient.
  • SPIs Security Parameter Indexes
  • the splitting granularity can be increased in the particularly challenging case of IPSec.
  • since the SPI is created by the sender of a packet and has only local significance, it could be used for the purpose of packet marking.
  • One option may be that ingress switch and base station interact, potentially via the resource management function such as to adjust the packet marking on demand, e.g. when a more fine-grained pipe allocation is needed. Upon such a request for increased packet marking, the base station could adjust the number of distinct SPIs.
  • Fig. 1 is a schematic view of an application scenario of an embodiment of a method or a system according to the present invention illustrating an availability-based path computation
  • Fig. 2 is a diagram illustrating traffic demand over time with regard to another embodiment of a method or system according to the present invention.
  • Fig. 1 schematically illustrates an application scenario of an embodiment of a method or a system according to the present invention, illustrating an availability-based path computation.
  • a capacity 6 of 40 Mbps is available when applying Quadrature Phase-Shift Keying (QPSK). If 16-QAM (Quadrature Amplitude Modulation) is applied, then a capacity of 80 Mbps would be available for an availability requirement of 99.995%.
  • QPSK Quadrature Phase-Shift Keying
  • Fig. 2 shows a diagram illustrating traffic demand over time with regard to an embodiment of a method or system according to the present invention according to which time slots are employed in order to balance between the resource waste problem for peak provisioning and the resource congestion problem with mean provisioning.
  • Data traffic in mobile backhaul networks typically has recurring patterns with fixed periodicity, e.g. a day, and n homogeneous sub-periods, e.g. morning, day, evening, night. If one traffic matrix per sub-period, or rather time slot, is assumed, mean-based path computation, i.e. n different path configurations, one per time slot, will be much closer to the traffic demand curve and less excess traffic must be handled.
  • Fig. 2 shows a data traffic demand curve 8 over time.
  • a day as recurring period is divided in two time slots.
  • One time slot between matrix boundaries t1 and t2 represents the time slot during the day with a traffic matrix 9.
  • Another time slot between the matrix boundaries t2 and t1 represents the time slot at night with another traffic matrix 10.
  • an average traffic matrix 11 over the whole period is depicted in Fig. 2.
  • traffic matrices for high-priority traffic should be based on peak values to avoid many online path computation processes for excess traffic.
  • mean-based traffic matrices with arbitrary granularity can be used.
  • the number of time slots may vary depending on the QoS class. In case an operator decides to use one routing configuration per time slot, the resulting reconfigurations at time slot boundaries should be performed as seamlessly as possible.
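Selecting the routing configuration per time slot, as in the two-slot day/night example of Fig. 2, could look as follows. The slot boundaries t1 = 6h and t2 = 22h and the matrix names are assumptions made for illustration:

```python
def active_matrix(hour, slots):
    """Pick the traffic matrix for the current time slot. slots is a list
    of (start_hour, end_hour, matrix_name); a slot may wrap past midnight,
    like the night slot between t2 and t1 in Fig. 2."""
    for start, end, matrix in slots:
        if start <= end:
            if start <= hour < end:
                return matrix
        elif hour >= start or hour < end:   # wrap-around slot (night)
            return matrix
    raise ValueError("hour not covered by any slot")

# Two slots per day: day matrix between t1 = 6h and t2 = 22h,
# night matrix in the wrap-around slot between t2 and t1.
slots = [(6, 22, "day-matrix"), (22, 6, "night-matrix")]
assert active_matrix(12, slots) == "day-matrix"
assert active_matrix(23, slots) == "night-matrix"
assert active_matrix(3, slots) == "night-matrix"
```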
  • resource management is a matter of optimized path computation, centralized capacity bookkeeping and careful admission control / shaping / policing at the edge devices, in particular the ingress and/or egress switches, of the backhaul network.
  • after a reconfiguration, the ingress shapers are configured with the new pipe capacities, burst parameters and thresholds.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A method for supporting dynamic resource management in a backhaul network, wherein a resource management function is provided that includes an offline component and an online component for routing data traffic in the form of pipes, wherein said pipes include a path from an ingress switch to an egress switch of said backhaul network and an assigned capacity, wherein said offline component performs offline path computation based on expected traffic demands that are determined from one or more traffic matrices in order to compute offline computed paths for said pipes being represented in said one or more traffic matrices, wherein said expected traffic demands constitute claimable resources for said pipes, wherein said offline computed paths are installed in the backhaul network in order to configure said pipes, wherein initially only a fraction of the maximum allowable capacity that corresponds to the amount of said claimable resources is allocated as capacity for said pipes, wherein said resource management function allocates capacities to said pipes dependent on current data traffic, wherein in case excess traffic demands for one or more of said pipes - excess pipes - occur, because being beyond said expected traffic demands for said excess pipes, said excess traffic demands constitute opportunistic resources for said excess pipes, and wherein said online component performs online path computation in such a way that said online component uses unblocked claimable resources of one or more pipes different from said excess pipes in order to provide said opportunistic resources for said excess pipes.

Description

METHOD AND SYSTEM FOR SUPPORTING DYNAMIC RESOURCE MANAGEMENT IN A BACKHAUL NETWORK
The present invention relates to a method and a system for supporting dynamic resource management in a backhaul network.
In the field of mobile networks, data traffic from base stations, like e.g. NodeBs or eNBs, to access gateways, like e.g. SGSN or SGW, is transported via a backhaul network that may also be designated as mobile backhaul network. Mobile backhaul networks connect the remote base stations and cell towers to the mobile operator's core networks and provide access to both the voice network and the internet. The mobile backhaul network may consist of a wired part and a wireless part. Generally, optical fiber is the predominant, but not necessarily only, technology in the middle mile and microwave radio is the predominant, but not necessarily only, technology in the last mile. Furthermore, there can be hybrid regions with a mix of wired and wireless backhaul links. The first and middle mile typically employ ring or mesh topologies. In the last mile, i.e. the final leg of the telecommunications network delivering communications connectivity to enterprise customers or base stations, most current deployments are still tree, star or chain topologies. However, in recent years there is a clear trend towards ring and mesh topologies in the last mile as well, all the more so with the introduction of small cells.
Traditionally, the connection between mobile users and base stations has been considered the main bottleneck of the end-to-end communication path between a host in the internet (or the operator core network) and a mobile user. However, with the introduction of HSPA, LTE, etc., and the introduction of small cells with much higher spatial reuse, the bottleneck for the exploding mobile data traffic has shifted more and more towards the backhaul network. Operators struggle considerably to keep the capacity offered by their backhaul networks on a par with the ever-increasing traffic demand of their mobile user base.
Consequently, the backhaul network increasingly becomes the primary capacity bottleneck in mobile networks. Existing approaches for tackling this problem apply traffic engineering (TE), wherein basically static pipes between ingress and egress nodes are computed based on traffic matrices.
Exemplarily it is referred to the following references concerning traffic engineering approaches:
• G. Conte, P. Iovanna, R. Sabella, M. Settembre, L. Valentini: "A traffic engineering solution for GMPLS Networks: A hybrid approach based on offline and on-line routing methods", ONDM 2003, Budapest, February 2003
• Mingui Zhang, Bin Liu, Beichuan Zhang: "Multi-Commodity Flow Traffic Engineering with Hybrid MPLS/OSPF Routing", IEEE GlobeCom 2009
• Srikanth Kandula, Dina Katabi, Shantanu Sinha, Arthur Berger: "Dynamic Load Balancing Without Packet Reordering", ACM SIGCOMM Computer Communication Review 2007
• F. Ricciato, S. Salsano, Angelo Belmonte, M. Listanti, "Off-line Configuration of a MPLS over WDM Network under Time-Varying Offered Traffic", IEEE InfoCom 2002
• R. Goguen, G. Swallow, "Dynamically Adjusting Multiprotocol Label Switching (MPLS) Traffic Engineering Tunnel Bandwidth", US 6,665,273 B1
• J.-P. Vasseur, J. Apcar, "Technique for Dynamically Splitting MPLS TE-LSPs", US 7,903,584 B2
• Y. Yang, C.-H. Lung, "The Role of Traffic Forecasting in QoS Routing - A Case Study of Time-Dependent Routing", IEEE International Conference on Communications (ICC), 2005
• J. L. Marzo, E. Calle, C. Scoglio, T. Anjali, "QoS Online Routing and MPLS Multilevel Protection: A Survey", IEEE Communications Magazine, 2003

However, while such approaches may work sufficiently well for wired networks with stable link capacities and fairly stable data traffic demands, the same is not true in mobile backhaul networks, where wireless links may have changing link capacities, where traffic fluctuates strongly both on a timescale of seconds and over longer periods, and where large imbalances between the traffic demands of different base stations are the norm.
It is therefore an object of the present invention to improve and further develop a method and a system of the initially described type for supporting dynamic resource management in a backhaul network in such a way that the resource utilization of the backhaul links is improved, in particular in terms of efficiency and preferably under consideration of hybrid regions with a mix of wired and wireless backhaul links. In accordance with the invention, the aforementioned object is accomplished by a method comprising the features of claim 1. According to this claim, a method for supporting dynamic resource management in a backhaul network is claimed, wherein a resource management function is provided that includes an offline component and an online component for routing data traffic in the form of pipes, wherein said pipes include a path from an ingress switch to an egress switch of said backhaul network and an assigned capacity, wherein said offline component performs offline path computation based on expected traffic demands that are determined from one or more traffic matrices in order to compute offline computed paths for said pipes being represented in said one or more traffic matrices, wherein said expected traffic demands constitute claimable resources for said pipes, wherein said offline computed paths are installed in the backhaul network in order to configure said pipes, wherein initially only a fraction of the maximum allowable capacity that corresponds to the amount of said claimable resources is allocated as capacity for said pipes, wherein said resource management function allocates capacities to said pipes dependent on current data traffic, wherein in case excess traffic demands for one or more of said pipes - excess pipes - occur, because being beyond said expected traffic demands for said excess pipes, said excess traffic demands constitute opportunistic resources for said excess pipes, and wherein said online component performs online path computation 
in such a way that said online component uses unblocked claimable resources of one or more pipes different from said excess pipes in order to provide said opportunistic resources for said excess pipes. Furthermore, the aforementioned object is accomplished by a system comprising the features of claim 22. According to this claim, a system for supporting dynamic resource management in a backhaul network is provided, wherein the system includes a resource management function that includes an offline component and an online component for routing data traffic in the form of pipes, wherein said pipes include a path from an ingress switch to an egress switch of said backhaul network and an assigned capacity, wherein said offline component is configured to perform offline path computation based on expected traffic demands that are determined from one or more traffic matrices in order to compute offline computed paths for said pipes being represented in said one or more traffic matrices, wherein said expected traffic demands constitute claimable resources for said pipes, wherein said offline computed paths are installed in the backhaul network in order to configure said pipes, wherein initially only a fraction of the maximum allowable capacity that corresponds to the amount of said claimable resources is allocated as capacity for said pipes, wherein said resource management function is configured to allocate capacities to said pipes dependent on current data traffic, wherein in case excess traffic demands for one or more of said pipes - excess pipes - occur, because being beyond said expected traffic demands for said excess pipes, said excess traffic demands constitute opportunistic resources for said excess pipes, and wherein said online component is configured to perform online path computation in such a way that said online component uses unblocked claimable resources of one or more pipes different from said excess pipes in order to provide said opportunistic resources 
for said excess pipes.
According to the invention it has first been recognized that the resource utilization of links in a backhaul network can be optimized by combining offline path computation and online path computation in an ingenious way. For this purpose a resource management function is implemented that includes an offline component and an online component for routing data traffic in the form of pipes. A pipe represents a path from an ingress switch to an egress switch. The notion of ingress and egress switches depends on the direction of traffic. For upstream traffic, the first switch after the base station is the ingress switch and the access gateway to the core network is the egress switch. The reverse is true for the downstream direction. For ease of explanation, the terminology used in the following description of the present invention and its preferred embodiments assumes the upstream direction. In addition to a path, a pipe is defined by a capacity that is assigned to the path or rather to the pipe. More specifically, a pipe may be a tunnel such as, e.g., an MPLS (Multi Protocol Label Switching) LSP (Label Switched Path) or an Ethernet EPL (Ethernet Private Line) between an ingress switch, e.g. the cell-site switch, and an egress switch, e.g. the Serving Gateway in an LTE network, with a certain capacity. A pipe will generally bundle multiple application-level flows of the same QoS class. The offline component performs offline path computation based on expected traffic demands that are determined from one or more traffic matrices in order to compute offline computed paths for the pipes, wherein the pipes are represented in the one or more traffic matrices.
According to the invention these expected traffic demands constitute claimable resources for the pipes represented in the one or more traffic matrices. To this extent, the claimable resources of a pipe that may be dependent on the value in the traffic matrix can be assigned or allocated to this pipe and may be primarily intended for being used by this pipe. However, it has been further recognized that an enormous improvement for dynamic resource management in a backhaul network may be achieved by allocating and blocking the claimable resources only to the extent as needed. Consequently, the offline computed paths are installed in the backhaul network in order to configure the pipes and, initially, only a fraction of the maximum allowable capacity that corresponds to the amount of the claimable resources is allocated as capacity for, preferably each of, the pipes. According to the invention the resource management function dynamically allocates capacities to the pipes dependent on current/actual data traffic, wherein in case excess traffic demands occur for one or more of the pipes which may be designated as excess pipes, because the traffic demands are more than the expected traffic demands for the excess pipes, then the excess traffic demands constitute opportunistic resources for the excess pipes. The online component then performs online path computation in such a way that the online component dynamically uses unblocked claimable resources of one or more pipes different from the excess pipes in order to provide the opportunistic resources for the excess pipes. Hence, the present invention distinguishes between resource ownership and actual allocation and allows dynamic reuse of owned but unused resources.
Thus, the method and the system according to the invention improve the resource utilization of the backhaul links in a backhaul network and allow maximizing the amount of data traffic which the network can serve. The method and the system provides a solution that is highly applicable and suitable for being deployed in actual converged mobile backhaul networks comprising wired and wireless backhaul links.
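The distinction described above between resource ownership (claimable resources) and actual allocation can be illustrated with a minimal sketch. All class and function names here are hypothetical and merely mirror the concepts from the text; they are not part of the invention's specification.

```python
class Pipe:
    """A pipe owns 'claimable' capacity (from the traffic matrix) but only
    blocks what it has actually allocated; the rest stays reusable."""

    def __init__(self, name, claimable, initial_fraction=0.25):
        self.name = name
        self.claimable = claimable                      # owned per the traffic matrix
        self.allocated = claimable * initial_fraction   # actually blocked so far

    @property
    def unblocked(self):
        # owned-but-unused share that the online component may lend out
        return self.claimable - self.allocated


def serve_excess(excess_demand, donor_pipes):
    """Greedy sketch: cover an excess demand opportunistically from other
    pipes' unblocked claimable resources; returns the granted amount."""
    granted = 0.0
    for p in donor_pipes:
        take = min(p.unblocked, excess_demand - granted)
        granted += take
        if granted >= excess_demand:
            break
    return granted  # opportunistic resources made available


a = Pipe("bs1->gw", claimable=100.0)   # allocated 25, unblocked 75
b = Pipe("bs2->gw", claimable=40.0)    # allocated 10, unblocked 30
print(serve_excess(60.0, [a, b]))      # 60.0: fully covered by pipe a
```

A real implementation would additionally update each donor's bookkeeping and reclaim lent-out resources when the owner pipe grows back towards its claimable share.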
According to a preferred embodiment the offline computed paths and/or online computed paths may be configured by the resource management function by means of installing forwarding entries in intermediate switches without performing resource reservation. The intermediate switches are switches located between an ingress switch and an egress switch. Thus, there is no involvement of intermediate backhaul switches in resource management and therefore new routings can be activated seamlessly. Hence, the paths computed by the offline component may initially only be installed in terms of the required forwarding entries in the intermediate nodes. Furthermore, the capacity corresponding to the traffic matrix values may not be fully allocated immediately. Rather, only a fraction of the full capacity may be allocated initially. Only if more capacity is needed over time is the size of the pipe increased, ultimately until it has reached the full capacity as stated in the traffic matrix. The resources that are assigned to an ingress-egress pair by virtue of the traffic matrix and the offline path computation are the claimable resources. Allocating the claimable resources only as needed, rather than directly blocking them, allows the online component to dynamically reuse as yet unallocated resources for accommodating other pipes in case of excess traffic. To this extent, the online component may perform dynamic path computation for cases where the actual data traffic deviates from the traffic matrix, i.e. for excess traffic. It is noted that in a mobile backhaul network, path computation as well as installing and deinstalling pipes is usually done via the Network Management System (NMS), i.e. a centralized entity. Thus, the resource management function including the offline and online components may be part of the NMS or interface to it.
In principle, offline computation may also be completely decoupled, requiring a human in the loop to set up the paths suggested by the offline component manually. The online component may also be handled via distributed IGP (Interior Gateway Protocol) protocols such as OSPF (Open Shortest Path First). However, such protocols typically do not ensure fairness of resource distribution and require an extensive signaling plane, while signaling in a non-distributed approach can be directed solely towards the central entity. The present invention does not impose a requirement on the exact deployment scenario. For example, the resource management function may be a centralized entity. According to a preferred embodiment traffic classification for assigning the data traffic to the pipes may be performed by the ingress switches and/or by base stations located in front of the ingress switches. Furthermore, it may be provided that load monitoring is performed by the ingress switches and/or base stations located in front of the ingress switches, wherein the results of the load monitoring are reported to the resource management function. Thus, the functionality for traffic classification and load monitoring may be located in the ingress switches and/or in the base stations.
According to a preferred embodiment the ingress switches may have functionality for performing traffic shaping in order to enforce that only an admitted amount of data traffic is transmitted over the pipes.
According to a preferred embodiment, it may be provided that allocating capacities to the pipes in the form of increasing or decreasing the pipe capacities is triggered by base stations, wherein the base stations translate bearer requests on the mobile network layer to corresponding pipe change requests or new pipe requests on the transport network layer. Thus, for enabling dynamic pipe size increases and decreases based on actual traffic demand, there needs to be a trigger for the pipe size changes. This may be provided by the base station, which is responsible for bearer setup and teardown anyway and is aware of the corresponding Quality of Service (QoS) parameters and resource requirements. Hence, the base station can translate bearer requests on the mobile network layer to corresponding pipe change requests or new pipe requests on the transport network layer.
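The bearer-to-pipe translation just described can be sketched as follows. The dictionary keys and the guaranteed-bit-rate field are illustrative assumptions, not part of any standardized interface:

```python
def bearer_to_pipe_request(bearer, pipes):
    """Hypothetical sketch of a base station translating a mobile-layer
    bearer request into a transport-layer pipe request.
    bearer: dict with 'qos_class' and 'gbr' (guaranteed bit rate, Mbps).
    pipes: dict mapping qos_class -> currently allocated pipe capacity."""
    qos = bearer["qos_class"]
    if qos in pipes:
        # enlarge the existing pipe serving this QoS class
        return {"type": "pipe_change", "qos_class": qos,
                "new_capacity": pipes[qos] + bearer["gbr"]}
    # no pipe for this class yet: ask the resource management function for one
    return {"type": "new_pipe", "qos_class": qos,
            "new_capacity": bearer["gbr"]}


req = bearer_to_pipe_request({"qos_class": "voice", "gbr": 2.0},
                             {"voice": 10.0})
print(req["type"], req["new_capacity"])  # pipe_change 12.0
```

Bearer teardown would analogously produce a decrease request, allowing the resource management function to release the freed claimable resources.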
According to a preferred embodiment it may be provided that allocating capacities to the pipes in the form of increasing or decreasing the pipe capacities is triggered based on thresholds that are in relation to the monitored current data traffic of the pipes. In this case, triggering pipe size changes can be done by the first switch after the base station, i.e. by the ingress switch. The triggering is then based on monitoring the current data traffic/load, since switches are generally oblivious to bearer management messages of the mobile network layer, unless they performed deep packet inspection, which will often not be possible because packets are IPSec encrypted. Thus, if monitoring is employed, pipe size changes are triggered based on thresholds.
According to a preferred embodiment, if a link has a capacity cl and a pipe p has a capacity cp with current traffic tp, there may be two thresholds: an increase threshold ip and a release threshold rp. Then generally one has 0 ≤ rp < tp < ip < cp ≤ cl. If tp > ip for a sustained time period, i.e. not just a short burst, the switch will ask the resource management function for new resources. The resource management function then decides on the amount of new resources, which may depend on parameters such as the existing pipe size, the amount of open claimable resources, the overall network load, fairness considerations, the Quality of Service (QoS) class of p, etc., as well as on the path. If the new resources are still within the claimable resources of that ingress-egress pair, no new path has to be computed and the one that was computed by the offline component can be used. Otherwise, the path returned by the resource management function can be the old one, in which case the pipe is just increased in capacity, or a new one, if the old path is too saturated or does not meet the availability requirements for the pipe's Quality of Service (QoS) class. Releasing resources based on rp may be performed analogously. With respect to enabling a more efficient resource usage at packet forwarding time, it may be provided that the increase threshold ip is set identical to the capacity cp. Thus, traffic may burst beyond not only the increase threshold but also the actual pipe capacity. This may be designated as soft-pipes. A soft-pipe basically defines an upper bound in terms of the resources that are assigned to it (cp), but this upper bound is not strictly enforced. For example, in case of a short burst of high traffic, i.e. tp > cp, the traffic can still be admitted into the pipe. Only if the traffic tp exceeds the soft-pipe capacity over a sustained time period does the ingress switch start to limit/shape the data traffic admitted into the soft-pipe.
Admission control for soft-pipes is thus based on the pipe's capacity cp and the traffic tp over a preconfigured time period Tp.
Thus, the soft-pipe concept neither unnecessarily restricts how much data traffic is admitted into the network, which would forfeit any possible gains from statistical multiplexing, nor freely admits all traffic into the network, which would lead to random packet drops at the bottleneck links and to a waste of resources in the backhaul network. Furthermore, for sustained bursts, an attempt may be made to allocate new resources, e.g. by requesting a new pipe or increasing the existing one, instead of the shaper starting to drop or delay packets.
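The soft-pipe trigger logic described above can be sketched with a sliding window of traffic samples. The window length standing in for the "sustained time period" Tp, and the decision names, are this sketch's own assumptions:

```python
from collections import deque

class SoftPipe:
    """Sketch of soft-pipe admission: short bursts beyond the pipe capacity
    are admitted; only sustained excess triggers a resource request, and
    sustained under-use triggers a release. Names are illustrative."""

    def __init__(self, capacity, release_thresh, window=3):
        self.cp = capacity          # soft upper bound (here i_p == c_p)
        self.rp = release_thresh    # release threshold r_p
        self.samples = deque(maxlen=window)  # sliding window of t_p samples

    def observe(self, tp):
        self.samples.append(tp)
        full = len(self.samples) == self.samples.maxlen
        if full and min(self.samples) > self.cp:
            return "request_increase"   # sustained excess, not a short burst
        if full and max(self.samples) < self.rp:
            return "release_resources"  # sustained under-use
        return "admit"                  # includes short bursts above c_p


p = SoftPipe(capacity=10.0, release_thresh=2.0)
print(p.observe(12.0))  # admit (single burst above c_p)
print(p.observe(11.0))  # admit
print(p.observe(13.0))  # request_increase (excess over the whole window)
```

Only after a "request_increase" that the resource management function denies would the ingress switch fall back to shaping the excess traffic.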
According to a further preferred embodiment long buffers may be included in the ingress switches to allow the ingress switch to perform traffic shaping in case of high traffic over a sustained time, which may allow the network to avoid unnecessary packet drops and thus stabilize the Transmission Control Protocol (TCP) performance. As a result, the proposed soft-pipe allows an interaction between ingress shaping and pipe size adaptation: the more aggressive the shaper, the more pipe adaptations need to take place to follow traffic patterns; the more optimistic the shaper, the fewer pipe adaptations. Thus, combining over-admission, supported by the ingress shaper, with pipe adaptation represents a good trade-off between pipe management and signaling complexity on the one hand and packet losses/retransmissions on the other. According to a preferred embodiment capacities may be assigned to the pipes dependent on a modulation and coding scheme (MCS) in such a way that a minimum availability provided by the modulation and coding scheme (MCS) and required for a Quality of Service (QoS) class is ensured. Thus, the re-routing of existing Quality of Service (QoS) sensitive flows/pipes, which may lead to disruptions, can be avoided. In the wireless part of the backhaul network, links can experience modulation and coding scheme (MCS) changes, and therefore capacity changes, in an unpredictable way. Specifically, to avoid re-routing QoS-sensitive pipes despite the chance of MCS changes, for each QoS class a minimum required availability is determined, which in turn determines the maximum possible MCS and thus the maximum possible capacity for that QoS class.
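The availability-driven MCS selection described above can be sketched as a table lookup. The MCS table values and QoS requirements below are invented for illustration; real figures depend on the radio link and operator policy:

```python
# (modulation, link availability, capacity in Mbps) -- higher-order MCS
# yields more capacity but lower availability under fading. Values invented.
MCS_TABLE = [
    ("QPSK",   0.99999, 100),
    ("16QAM",  0.9999,  200),
    ("64QAM",  0.999,   300),
    ("256QAM", 0.99,    400),
]

def max_capacity_for_qos(min_availability):
    """Highest-capacity MCS whose availability still meets the QoS class's
    minimum requirement; this caps the capacity assignable to pipes of that
    class, so MCS downshifts never force a re-route of those pipes."""
    eligible = [cap for _, avail, cap in MCS_TABLE if avail >= min_availability]
    return max(eligible) if eligible else 0

print(max_capacity_for_qos(0.9999))   # 200: 16QAM is the best eligible MCS
print(max_capacity_for_qos(0.99))     # 400: best-effort may use 256QAM
```

The design point is that QoS-sensitive pipes are dimensioned against the capacity of a conservative MCS, so capacity fluctuations only affect opportunistic traffic.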
According to a preferred embodiment time slots may be employed in order to consider recurring patterns of data traffic in the backhaul network, wherein preferably one traffic matrix is captured per time slot. Thus, the expected traffic demands determined from the traffic matrices, or rather the values of the traffic matrices, are closer to the actual and current data traffic demand. To this extent multiple traffic matrices are employed. According to a preferred embodiment Quality of Service (QoS) classes of data traffic may be considered in such a way that the number of time slots is made dependent on the Quality of Service (QoS) class of the data traffic transmitted in one or more of the pipes. According to a preferred embodiment Quality of Service (QoS) classes of data traffic may be considered in such a way that the one or more traffic matrices are captured per Quality of Service (QoS) class per time slot.
According to a preferred embodiment the number of routing configurations to be computed may be made dependent on the Quality of Service (QoS) class of data traffic transmitted in one or more of the pipes. According to a preferred embodiment a single routing configuration for the pipes may be computed, wherein the single routing configuration takes into account multiple traffic matrices. According to a preferred embodiment the pipes may be rerouted at boundaries of the time slots.
According to a preferred embodiment one or more or each of the pipes may bundle multiple data traffic flows of the same Quality of Service (QoS) class.
According to a preferred embodiment, it may be provided that upon detecting too much data traffic in one or more of the pipes the ingress switches request the resource management function to allocate further resources. To this extent, it is possible that the ingress switches merely inform the resource management function that the load in the pipe has critically increased. The resource management function then decides whether further resources are allocated or which other action is to be taken.
According to a preferred embodiment the offline path computation and/or the online path computation may take a splitting granularity into account, wherein the splitting granularity is derived from predetermined flow classification granularities. Thus, the possible traffic splitting granularity is taken into consideration. This is particularly advantageous both for offline and online pipe placement. With regard to the online adaptation of pipe sizes, it will not be possible in all cases to just increase existing pipes; rather, the added resource share must be routed along a different path. In doing so, it is important not to change pipe sizes in units that are smaller than the possible splitting granularity. The latter is defined by the granularity by which packet flows can be distinguished. If the base station is responsible for pipe management, splitting can be done at flow granularity. However, if monitoring in ingress switches is used, the granularity is often much coarser. Especially if Internet Protocol Security (IPSec) is used, the only packet header information that may be available for flow identification is VLAN (Virtual Local Area Network) tags if present, DSCP (Differentiated Services Code Point) values if present, and the IP addresses of the IPSec tunnel endpoints. It is clear that a higher granularity allows better load balancing. Generally, depending on the scenario and the available packet header information, the splitting granularity varies. The ingress switch should keep track of the possible granularity. There is no point in installing a new pipe with 3 Mbps when the smallest identifiable granularity is 10 Mbps. In this case, packets would have to be mapped to the 3 Mbps pipe according to some flow-unaware scheduling strategy, i.e. quasi-randomly. This would lead to reordering problems at the egress switch, as packets from the same flow might take different paths with different latencies. For TCP flows, this would be highly detrimental.
According to a preferred embodiment the splitting granularity may be automatically estimated based on monitoring of the ingress switches and/or the base station. Thus, discrete splitting granularities may be derived from the given flow classification granularities. The discrete splitting granularities may be taken into account for pipe routing, e.g. on-demand splitting.
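Respecting the splitting granularity when sizing pipes can be sketched as follows; the uniform-share estimate is a simplifying assumption of this sketch (real flow classes need not carry equal traffic):

```python
import math

def estimate_granularity(total_mbps, distinct_classes):
    """Coarse estimate of the splitting granularity: with n distinguishable
    flow classes (e.g. distinct DSCP values or IPSec tunnel endpoints),
    traffic can only be split in shares of roughly total/n Mbps."""
    return total_mbps / max(distinct_classes, 1)

def admissible_pipe_size(requested_mbps, granularity_mbps):
    """Round a requested pipe-size change up to a whole multiple of the
    splitting granularity, so that no pipe is created that could only be
    filled by flow-unaware (quasi-random) packet mapping."""
    if requested_mbps <= 0:
        return 0.0
    return math.ceil(requested_mbps / granularity_mbps) * granularity_mbps

print(estimate_granularity(100.0, 10))   # 10.0 Mbps per flow class
print(admissible_pipe_size(3.0, 10.0))   # 10.0: never below one flow class
print(admissible_pipe_size(23.0, 10.0))  # 30.0
```

This mirrors the 3 Mbps vs. 10 Mbps example in the text: the 3 Mbps request is rounded up to one whole flow-class share rather than installed as an underfillable pipe.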
According to a preferred embodiment the offline path computation may be based on traffic matrices. Hence, it needs to be executed only once the traffic patterns have deviated to a large enough extent from the matrices on which the current routing is based. For example, this may be in the order of days, weeks, or months. In any case, offline computation does not have to happen in real-time, which is why some time to compute can be afforded. The offline component may take as an input the set of base stations, the backhaul topology, e.g. an adjacency matrix, with the possible link capacities - which can vary depending on the MCS in case of wireless links - and intermediate switches, the set of gateway nodes, and/or the traffic demands. Based on this information, the offline component computes routes for all ingress-egress pairs such that the required traffic demand can be accommodated and the resource utilization of the backhaul network is maximized. The latter could mean to maximize the minimum residual link capacity, to maximize the mean residual link capacity, etc.
According to a preferred embodiment the offline path computation may be implemented by using an Integer Linear Program that takes the splitting granularity and/or a capacity variation um,n per link (m,n) into account, wherein the capacity variation is the difference between the maximum utilization and the minimum utilization of a predetermined link (m,n) across all time slots.
According to a preferred embodiment the offline path computation may be implemented as a Mixed Integer Linear Program (MILP) as follows:
Equations (1) to (5) are classical network flow constraints. Variable r is a routing variable that captures which share of the traffic demand between nodes i and j (as given by the traffic matrix) in time slot s is routed over link (m,n), i.e. in that direction; otherwise, it is zero. Note that the number of fragments, i.e. variable k, needs to be upper-bounded by a fixed number γ representing the splitting granularity, since a fluid traffic model, i.e. infinite splitting granularity, is not to be assumed. Hence, Equation (1), for instance, says that the sum of routing variables for the k-th fragment of a traffic demand between i and j in time slot s, leaving the source node i of that demand, must amount to 1, i.e. the full demand must leave the source node. Likewise, the full demand must enter the destination j (cf. Equation (3)). Equations (2) and (4) say that a demand may not enter its source or leave its destination. Equation (5) is the classical continuity constraint on intermediate nodes for a demand. Clearly, k is bounded by the splitting granularity γ, i.e. the maximum number of distinct flow classes that the ingress flow classifier can distinguish (cf. Equation (14)). For the sake of brevity, some constraints in the MILP are left out. Equation (1), for example, is only specified if the traffic demand di,j,k,s > 0 and k < γ. Similar omissions hold for other equations. Variable um,n,s in Equation (6) is the consumed capacity of link (m,n) in time slot s and thus the sum of all demands that are routed over that link in either direction. Equation (7) then defines vm,n as the maximum consumed capacity of (m,n) in any time slot. This in turn is bounded by the available physical link capacity Cm,n (Equation (8)). Variable cmax is the maximum link utilization in the network and will be minimized as part of the objective function.
The consumed capacity in slot s, um,n,s, is used in Equation (10) to define two other variables, namely the lower bound or minimum consumed capacity across all time slots and the upper bound or maximum consumed capacity across all slots. The difference between the two, defined in Equation (11), represents the capacity variation per link (m,n). A further variable (cf. Equation (12)) is then the total capacity variation in the network. Intuitively, the capacity variation of a link is the difference between the maximum utilization and the minimum utilization of that particular link across all time slots. A high value thus means that some load goes over a certain link in time slot t1, while much less load goes over that same link in another time slot t2. Clearly, the higher this capacity variation, the higher the capacity waste. This is because the operator has to invest into a link with sufficient capacity to accommodate the load in slot t1, while much of the time this capacity is not used at all and a smaller link would have been sufficient. The MILP above therefore minimizes not only the maximum link utilization, but also tries to minimize the capacity variation in the whole network.
Equation (16) is an important constraint which expresses that a demand should not be routed over the links (m,n) and (n,m) at the same time (i.e. in both directions of the same physical link). If this constraint were not given, a solution would often do exactly that, because adding a demand "twice" to the consumed capacity um,n,s on a (not so congested) link can reduce the capacity variation on that link (without impacting the maximum link utilization value), thereby optimizing the objective function. Since the term r can be any decimal value, r cannot be used directly in Equation (16); rather, corresponding binary variables need to be introduced that constitute an upper bound of the routing variables r (cf. Equation (15)) but are at the same time either 0 or 1 (cf. Equation (17)). This indirection via binary variables b allows the linear program to be computed much faster (because of so-called integer relaxation).
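The equations of the MILP are not reproduced in this text. As an illustration only, the constraints described above can be sketched in assumed notation (the sub- and superscript convention is this sketch's own; the actual formulation in the application may differ):

```latex
% Assumed notation: r^{m,n}_{i,j,k,s} is the (fractional) routing variable
% for fragment k of demand (i,j) in slot s on link (m,n); d_{i,j,k,s} the
% demand value; C_{m,n} the physical link capacity; b the binary indicators.
\begin{align}
\sum_{n} r^{i,n}_{i,j,k,s} = 1,\qquad \sum_{m} r^{m,j}_{i,j,k,s} = 1
  && \text{(1),(3): full demand leaves $i$ / enters $j$}\\
\sum_{m} r^{m,v}_{i,j,k,s} = \sum_{n} r^{v,n}_{i,j,k,s},\quad v \neq i,j
  && \text{(5): continuity at intermediate nodes}\\
u_{m,n,s} = \sum_{i,j,k} d_{i,j,k,s}\left(r^{m,n}_{i,j,k,s} + r^{n,m}_{i,j,k,s}\right)
  && \text{(6): consumed capacity of link $(m,n)$}\\
v_{m,n} = \max_{s} u_{m,n,s} \le C_{m,n}
  && \text{(7),(8): physical capacity bound}\\
r^{m,n}_{i,j,k,s} \le b^{m,n}_{i,j,k,s} \in \{0,1\},\qquad
b^{m,n}_{i,j,k,s} + b^{n,m}_{i,j,k,s} \le 1
  && \text{(15)--(17): no bidirectional routing}
\end{align}
```

The objective then minimizes the maximum link utilization cmax together with the total capacity variation derived from the per-link minima and maxima of um,n,s.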
With regard to improved scalability, it may be provided that the offline path computation is implemented by using a genetic algorithm that takes the splitting granularity into account. Specifically, the offline path computation may be achieved by using a high-level genetic algorithm with, e.g., dedicated heuristics for population creation, mutation and recombination operators, and/or by taking the dynamically determined splitting granularity into account.
According to a preferred embodiment the offline path computation may be implemented as a genetic algorithm as follows. A solution in the genetic algorithm population may be initialized as follows:
1 For each QoS class
2 For each time slot
3 For each ingress-egress pair i, j
4 Find path for pipe p with sufficient capacity and install p along path
5 If no path found,
6 Identify all bottleneck links lx on a desirable path r between i and j
7 For each lx,
8 Find pipe p' going over lx with sufficient capacity for p
9 Re-route p' on different path (meeting QoS requirements)
10 If successful for all lx, install p along r
11 If re-routing failed,
12 split p into two fragments if granularity constraints allow
13 execute from step 4 for both fragments independently
14 if this also fails, this solution could not be initialized

The recombination/crossover operator for two solutions s1 and s2 in the genetic algorithm may be implemented as follows:
1 Create new empty solution s
2 For each QoS class q
3 For each time slot t
4 C1 := maximum residual capacity in the network given solution s1
5 C2 := maximum residual capacity in the network given solution s2
6 If C1 > C2, route all pipes in s with QoS class q in slot t according to s1
7 Else route all pipes in s with QoS class q in slot t according to s2
The mutation operator for a solution in the genetic algorithm may be implemented as follows:
1 Do tournament selection to select a link (biased towards ones with less residual capacity)
2 Do tournament selection to select a pipe p (biased towards ones with higher capacity)
3 With probability r,
4 Split p according to granularity constraints and re-route fragments independently
5 With probability 1 - r,
6 Re-route pipe p as a whole (might cause temporary overload)
7 while overload is not resolved
8 identify bottleneck link l' on new route of p
9 Find pipe p' going over l', with sufficient capacity
10 Re-route p' such that a new bottleneck link l'' is produced, but with less overload
11 if no such candidate p' could be found, exit mutation
According to a preferred embodiment the online path computation may be handled by any algorithm with fast reaction times, e.g. by a Constrained Shortest Path First (CSPF) approach with constraints reflecting the required capacity, QoS parameters etc.
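The CSPF idea can be sketched as follows (illustrative only): links whose free capacity cannot carry the requested pipe are pruned, and a plain Dijkstra search runs on the remaining topology. Further QoS constraints, e.g. delay bounds, could be handled analogously.

```python
import heapq

def cspf(graph, src, dst, required_capacity):
    """Constrained Shortest Path First sketch.
    graph: {node: {neighbor: (cost, free_capacity)}}.
    Returns the cheapest feasible path as a node list, or None."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == dst:
            path = [u]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for v, (cost, cap) in graph.get(u, {}).items():
            if cap < required_capacity:  # constraint: prune infeasible links
                continue
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return None  # no feasible path: the excess traffic cannot be admitted

# toy topology: {node: {neighbor: (cost, free_capacity_mbps)}}
graph = {
    "A": {"B": (1, 30), "C": (2, 100)},
    "B": {"D": (1, 100)},
    "C": {"D": (2, 100)},
}
path = cspf(graph, "A", "D", required_capacity=40)  # link A-B pruned (30 < 40)
```

The nominally shorter route via B is rejected for lack of capacity, so the pipe is routed A, C, D.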
According to a preferred embodiment the Security Parameter Index (SPI) may be employed for packet classification to increase the splitting granularity, in such a way that new Security Parameter Indexes (SPIs) are dynamically employed in order to increase the splitting granularity, in particular once the resource management function discovers that the current splitting granularity is insufficient. Thus, it is achieved that the splitting granularity can be increased in the particularly challenging case of IPSec. Since the SPI is created by the sender of a packet and has only local significance, it could be used for the purpose of packet marking. One option may be that the ingress switch and the base station interact, potentially via the resource management function, so as to adjust the packet marking on demand, e.g. when a more fine-grained pipe allocation is needed. Upon such a request for increased packet marking, the base station could adjust the number of distinct SPIs.
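As a purely hypothetical illustration of this idea (the SPI values, function names and modulo-based classification below are assumptions of this sketch, not prescribed by the embodiment), the base station could spread flows over n distinct SPIs and the ingress switch could map each SPI to a pipe:

```python
def assign_spis(flows, n_pipes, base_spi=0x1000):
    """Base-station side: hand out n distinct (locally significant) SPIs
    round-robin over the IPsec flows; raising n_pipes on request from the
    resource management function refines the splitting granularity."""
    return {flow: base_spi + (i % n_pipes) for i, flow in enumerate(flows)}

def pipe_for(spi, n_pipes):
    """Ingress-switch side: classify a packet to a pipe by its SPI."""
    return spi % n_pipes

spis = assign_spis(["flow-a", "flow-b", "flow-c"], n_pipes=2)
```

With two pipes, the three flows are split across both pipes; requesting n_pipes=3 would separate them entirely.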
There are several ways how to design and further develop the teaching of the present invention in an advantageous way. To this end it is to be referred to the patent claims subordinate to patent claim 1 on the one hand and to the following explanation of preferred embodiments of the invention by way of example, illustrated by the figure on the other hand. In connection with the explanation of the preferred embodiments of the invention by the aid of the figure, generally preferred embodiments and further developments of the teaching will be explained. In the drawing
Fig. 1 is a schematic view of an application scenario of an embodiment of a method or a system according to the present invention illustrating an availability-based path computation, and
Fig. 2 is a diagram illustrating traffic demand over time with regard to another embodiment of a method or system according to the present invention.
Fig. 1 schematically illustrates an application scenario of an embodiment of a method or a system according to the present invention illustrating an availability- based path computation.
It is noted that in terms of Quality of Service (QoS) support an important aspect is to avoid re-routing of existing QoS-sensitive flows/pipes, since that may lead to disruptions. However, in a wireless part of the backhaul, links can experience modulation and coding scheme (MCS) changes and therefore capacity changes in an unpredictable way. To avoid re-routing QoS-sensitive pipes despite the chance of MCS changes, for each QoS class a minimum required availability is determined, which in turn determines the maximum possible MCS and thus the maximum possible capacity for that QoS class. Specifically, Fig. 1 shows switches 1, 2, 3 and 4 which are connected via links 5. In case an availability of 99.999% is required, a capacity 6 of 40 Mbps is available when applying Quadrature Phase-Shift Keying (QPSK). If 16-QAM (Quadrature Amplitude Modulation) is applied, a capacity of 80 Mbps would be available for an availability requirement of 99.995%. According to the application scenario of Fig. 1, for a predetermined QoS class with an availability requirement of 99.999% only Quadrature Phase-Shift Keying (QPSK) is available. Thus 40 Mbps are available as capacity out of the possible 155 Mbps of the complete link 5. Not going beyond this boundary of 40 Mbps ensures that traffic of the predetermined QoS class will never have to be re-routed as long as the MCS stays at QPSK or better.
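The availability-based capacity selection of Fig. 1 can be sketched as follows; the MCS ladder is a toy assumption matching the figures above:

```python
# hypothetical MCS ladder for one microwave link as in Fig. 1
MCS_TABLE = [
    ("QPSK",   0.99999, 40),   # (modulation, availability, capacity in Mbps)
    ("16-QAM", 0.99995, 80),
]

def claimable_capacity(required_availability):
    """Highest capacity whose MCS still meets the availability target of
    the QoS class; traffic kept below this boundary never needs re-routing
    as long as the link holds that MCS or better."""
    feasible = [cap for _, availability, cap in MCS_TABLE
                if availability >= required_availability]
    return max(feasible) if feasible else 0
```

For the 99.999% class this yields the 40 Mbps QPSK boundary of Fig. 1, while a 99.995% class could claim 80 Mbps.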
Furthermore, to ensure that all traffic of a given QoS class can indeed be accommodated within the availability fractions of the network links, including excess traffic that goes beyond the traffic matrix values, there are two options. Either excess traffic is dynamically routed using the online component of the resource management function, potentially via a path different from previous traffic of the same QoS class between a certain ingress-egress pair, or the traffic matrix values capture the peak traffic demand such that there will never be any excess traffic case.
It is evident that path computation based on the peak traffic demands may lead to a situation where not all traffic matrix pairs can be successfully routed, since the network is dimensioned such that a certain amount of statistical multiplexing gain is assumed, which is the norm in mobile backhaul networks today. In the best case, peak demand based computation leads to considerable resource waste, as most of the time the peak demand will not be reached. On the other hand, mean demand based computation can lead to a large amount of excess traffic during peak times. Since excess traffic is not part of the global offline path computation process, it might be impossible to accommodate it at the time it actually occurs. Therefore, according to an embodiment of the present invention, time slots are employed.
Fig. 2 shows a diagram illustrating traffic demand over time with regard to an embodiment of a method or system according to the present invention according to which time slots are employed in order to balance between the resource waste problem of peak provisioning and the resource congestion problem of mean provisioning. Data traffic in mobile backhaul networks typically has a recurring pattern with fixed periodicity, e.g. a day, and n homogeneous sub-periods, e.g. morning, day, evening, night. If one traffic matrix per sub-period or rather time slot is assumed, mean-based path computation, i.e. n different path configurations, one per time slot, will be much closer to the traffic demand curve and less excess traffic must be handled. Specifically, Fig. 2 shows a data traffic demand curve 8 over time. According to the diagram illustrated in Fig. 2, a day as recurring period is divided into two time slots. One time slot between matrix boundaries t1 and t2 represents the time slot during the day with a traffic matrix 9. Another time slot between the matrix boundaries t2 and t1 represents the time slot at night with another traffic matrix 10. Furthermore, an average traffic matrix 11 over the whole period is depicted in Fig. 2.
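The effect of Fig. 2 can be illustrated numerically as follows (the two-slot day/night demand profile below is a toy assumption): provisioning per-slot means tracks the demand curve, while a single all-period mean leaves excess traffic during the day slot.

```python
def per_slot_means(hourly_demand, t1=8, t2=20):
    """Mean demand per time slot: day = [t1, t2), night = the rest,
    mirroring the two traffic matrices 9 and 10 of Fig. 2."""
    day = [d for h, d in hourly_demand if t1 <= h < t2]
    night = [d for h, d in hourly_demand if not (t1 <= h < t2)]
    return sum(day) / len(day), sum(night) / len(night)

def excess(hourly_demand, provisioned):
    """Traffic above the provisioned capacity, i.e. what the online
    component would have to handle."""
    return sum(max(0, d - p) for (_, d), p in zip(hourly_demand, provisioned))

# toy demand curve: 100 Mbps during the day, 20 Mbps at night
demand = [(h, 100 if 8 <= h < 20 else 20) for h in range(24)]
day_mean, night_mean = per_slot_means(demand)
flat_mean = sum(d for _, d in demand) / 24            # average matrix 11
slot_prov = [day_mean if 8 <= h < 20 else night_mean for h in range(24)]
flat_prov = [flat_mean] * 24
```

In this toy profile, per-slot provisioning leaves no excess traffic at all, whereas the flat all-day mean of 60 Mbps leaves 40 Mbps of excess during each of the twelve day hours.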
In the normal case, pipes must be re-routed at the time slot boundaries t1, t2. However, this is not strictly necessary. It may be provided that a single routing configuration is computed taking into account multiple traffic matrices, e.g. for highly sensitive traffic classes.
Hence, traffic matrices for high-priority traffic should be based on peak values to avoid many online path computation processes for excess traffic. On the other hand, to avoid disruptions due to rerouting at time slot boundaries, there should be either few traffic matrices / time slots for high-priority traffic or there should be a single routing configuration which is optimized for the whole set of matrices at the same time. For less sensitive traffic, mean-based traffic matrices with arbitrary granularity can be used. Hence, the number of time slots may vary depending on the QoS class. In case an operator decides to use one routing configuration per time slot, the resulting transitions at time slot boundaries should be made as seamless as possible. This may be achieved as follows: ahead of the time slot boundary, the new paths are already set up, i.e. the forwarding entries in the intermediate switches are installed. It is proposed to not perform any actual resource reservations in intermediate switches as is done by protocols such as RSVP-TE (Resource Reservation Protocol - Traffic Engineering), for example. Thus, resource management according to embodiments of the present invention is a matter of optimized path computation, centralized capacity bookkeeping and careful admission control / shaping / policing at the edge devices, in particular the ingress and/or egress switches, of the backhaul network. After having installed the forwarding entries corresponding to a new routing, new ingress shapers are configured with the new pipe capacities, burst parameters and thresholds. Only when the new routing must be activated, new ingress packet filter rules are installed in the ingress switches that map incoming packets to the new instead of the old pipes. This process, i.e. no involvement of intermediate backhaul switches in resource management and a make-before-break approach, allows activating the new routing very seamlessly.
Only some, if any, in-flight packets on the old pipes might arrive out-of-order at their destination, but even this effect should be very minor.
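The make-before-break transition described above may be sketched as an ordered list of control actions; all function, action and switch names below are illustrative assumptions:

```python
def slot_transition(new_paths, new_shapers, new_filters):
    """Make-before-break activation of a new routing configuration:
    everything except the final ingress filter flip happens ahead of the
    time slot boundary, so no intermediate switch is touched at switchover."""
    actions = []
    # 1. ahead of the boundary: install forwarding entries of the new paths
    actions += [("install_forwarding", sw) for sw in sorted(new_paths)]
    # 2. ahead of the boundary: configure ingress shapers with the new pipe
    #    capacities, burst parameters and thresholds
    actions += [("configure_shaper", sw) for sw in sorted(new_shapers)]
    # 3. only at the boundary: remap packets to the new pipes at the ingress
    actions += [("install_filter", sw) for sw in sorted(new_filters)]
    return actions

actions = slot_transition({"sw1": "path-a", "sw2": "path-b"},
                          {"sw1": "shaper-cfg"}, {"sw1": "filter-rule"})
```

The filter flip is deliberately last: until it happens, traffic still flows over the old pipes even though the new ones are fully installed.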
Many modifications and other embodiments of the invention set forth herein will come to mind to one skilled in the art to which the invention pertains having the benefit of the teachings presented in the foregoing description and the associated drawings. Therefore, it is to be understood that the invention is not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.

Claims

1. Method for supporting dynamic resource management in a backhaul network,
wherein a resource management function is provided that includes an offline component and an online component for routing data traffic in the form of pipes,
wherein said pipes include a path from an ingress switch to an egress switch of said backhaul network and an assigned capacity,
wherein said offline component performs offline path computation based on expected traffic demands that are determined from one or more traffic matrices in order to compute offline computed paths for said pipes being represented in said one or more traffic matrices,
wherein said expected traffic demands constitute claimable resources for said pipes,
wherein said offline computed paths are installed in the backhaul network in order to configure said pipes,
wherein initially only a fraction of the maximum allowable capacity that corresponds to the amount of said claimable resources is allocated as capacity for said pipes,
wherein said resource management function allocates capacities to said pipes dependent on current data traffic,
wherein in case excess traffic demands for one or more of said pipes - excess pipes - occur, because being beyond said expected traffic demands for said excess pipes, said excess traffic demands constitute opportunistic resources for said excess pipes, and
wherein said online component performs online path computation in such a way that said online component uses unblocked claimable resources of one or more pipes different from said excess pipes in order to provide said opportunistic resources for said excess pipes.
2. Method according to claim 1, wherein said offline computed paths and/or online computed paths are set up by installing forwarding entries without performing resource reservation in intermediate switches that are located between the ingress switch and the egress switch.
3. Method according to claim 1 or 2, wherein traffic classification for assigning the data traffic to said pipes is performed by the ingress switches and/or by base stations located in front of the ingress switches.
4. Method according to any of claims 1 to 3, wherein load monitoring is performed by the ingress switches and/or base stations located in front of the ingress switches, wherein results of said load monitoring are reported to said resource management function.
5. Method according to any of claims 1 to 4, wherein the ingress switches perform traffic shaping in order to enforce that only an admitted amount of data traffic is transmitted over said pipes.
6. Method according to any of claims 1 to 5, wherein allocating capacities to said pipes in the form of increasing or decreasing the pipe capacities is triggered by base stations, wherein the base stations translate bearer requests on the mobile network layer to corresponding pipe change requests or new pipe requests on the transport network layer.
7. Method according to any of claims 1 to 6, wherein allocating capacities to said pipes in the form of increasing or decreasing the pipe capacities is triggered, preferably by the ingress switches, based on thresholds that are in relation to monitored current data traffic of said pipes.
8. Method according to any of claims 1 to 7, wherein capacities are assigned to said pipes dependent on a modulation and coding scheme (MCS) in such a way that a minimum availability provided by said modulation and coding scheme (MCS) and required for a Quality of Service (QoS) class is ensured.
9. Method according to any of claims 1 to 8, wherein time slots are employed in order to consider recurring patterns of data traffic in said backhaul network, wherein preferably one traffic matrix is captured per time slot.
10. Method according to claim 9, wherein Quality of Service (QoS) classes of data traffic are considered in such a way that the number of the time slots is made dependent on the Quality of Service (QoS) class of data traffic transmitted in one or more of said pipes.
11. Method according to claim 9 or 10, wherein Quality of Service (QoS) classes of data traffic are considered in such a way that said one or more traffic matrices are captured per Quality of Service (QoS) class per time slot.
12. Method according to any of claims 1 to 11, wherein the number of routing configurations to be computed is made dependent on the Quality of Service (QoS) class of data traffic transmitted in one or more of said pipes.
13. Method according to any of claims 1 to 12, wherein a single routing configuration for said pipes is computed, wherein said single routing configuration takes into account multiple traffic matrices.
14. Method according to any of claims 1 to 13, wherein said pipes are rerouted at boundaries of the time slots.
15. Method according to any of claims 1 to 14, wherein one or more or each of said pipes bundle multiple data traffic flows of the same Quality of Service (QoS) class.
16. Method according to any of claims 1 to 15, wherein upon detecting too much data traffic in one or more of said pipes, the ingress switches request said resource management function to allocate further resources.
17. Method according to any of claims 1 to 16, wherein said offline path computation and/or said online path computation takes a splitting granularity into account, wherein said splitting granularity is derived from predetermined flow classification granularities.
18. Method according to claim 17, wherein said splitting granularity is automatically estimated based on monitoring of the ingress switches and/or the base station.
19. Method according to claim 17 or 18, wherein said offline path computation is implemented by using an Integer Linear Program that takes said splitting granularity and/or a capacity variation u_mn per link (m,n) into account, wherein the capacity variation is the difference between the maximum utilization and the minimum utilization of a predetermined link (m,n) across all time slots.
20. Method according to claim 17 or 18, wherein said offline path computation is implemented by using a genetic algorithm that takes said splitting granularity and/or a capacity variation u_mn per link (m,n) into account, wherein the capacity variation is the difference between the maximum utilization and the minimum utilization of a predetermined link (m,n) across all time slots.
21. Method according to any of claims 17 to 20, wherein the Security Parameter Index (SPI) is employed to increase said splitting granularity in such a way that new Security Parameter Indexes (SPIs) are dynamically employed in order to increase said splitting granularity, in particular in case of the resource management function discovering that the current splitting granularity is insufficient.
22. System for supporting dynamic resource management in a backhaul network, in particular for executing a method according to any of claims 1 to 21, wherein the system provides a resource management function that includes an offline component and an online component for routing data traffic in the form of pipes,
wherein said pipes include a path from an ingress switch to an egress switch of said backhaul network and an assigned capacity, wherein said offline component is configured to perform offline path computation based on expected traffic demands that are determined from one or more traffic matrices in order to compute offline computed paths for said pipes being represented in said one or more traffic matrices,
wherein said expected traffic demands constitute claimable resources for said pipes,
wherein said offline computed paths are installed in the backhaul network in order to configure said pipes,
wherein initially only a fraction of the maximum allowable capacity that corresponds to the amount of said claimable resources is allocated as capacity for said pipes,
wherein said resource management function is configured to allocate capacities to said pipes dependent on current data traffic,
wherein in case excess traffic demands for one or more of said pipes - excess pipes - occur, because being beyond said expected traffic demands for said excess pipes, said excess traffic demands constitute opportunistic resources for said excess pipes, and
wherein said online component is configured to perform online path computation in such a way that said online component uses unblocked claimable resources of one or more pipes different from said excess pipes in order to provide said opportunistic resources for said excess pipes.
EP13779151.3A 2012-09-20 2013-09-20 Method and system for supporting dynamic resource management in a backhaul network Withdrawn EP2898626A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP13779151.3A EP2898626A1 (en) 2012-09-20 2013-09-20 Method and system for supporting dynamic resource management in a backhaul network

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
EP12185248 2012-09-20
EP13779151.3A EP2898626A1 (en) 2012-09-20 2013-09-20 Method and system for supporting dynamic resource management in a backhaul network
PCT/EP2013/069622 WO2014044821A1 (en) 2012-09-20 2013-09-20 Method and system for supporting dynamic resource management in a backhaul network

Publications (1)

Publication Number Publication Date
EP2898626A1 true EP2898626A1 (en) 2015-07-29

Family

ID=46875687

Family Applications (1)

Application Number Title Priority Date Filing Date
EP13779151.3A Withdrawn EP2898626A1 (en) 2012-09-20 2013-09-20 Method and system for supporting dynamic resource management in a backhaul network

Country Status (2)

Country Link
EP (1) EP2898626A1 (en)
WO (1) WO2014044821A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016155782A1 (en) * 2015-03-31 2016-10-06 Nec Europe Ltd. Method and system for working and protection paths determination in a wireless backhaul network
WO2016155974A1 (en) * 2015-03-31 2016-10-06 Nec Europe Ltd. Stability- and capacity-aware time-dependent routing in transport networks
EP4298770A1 (en) * 2021-02-28 2024-01-03 Microsoft Technology Licensing, LLC Traffic engineering for improved bandwidth allocations
US11831538B2 (en) 2021-02-28 2023-11-28 Microsoft Technology Licensing, Llc Traffic engineering for improved bandwidth allocations

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US7948962B2 (en) * 2007-08-31 2011-05-24 Wireless Technology Solutions Llc Cellular communication system, apparatus and method for management of backhaul resources
EP2378807A1 (en) * 2010-04-16 2011-10-19 Thomson Telecom Belgium Method at a gateway for reserving link capacity on a broadband network

Non-Patent Citations (1)

Title
IOVANNA P ET AL: "A traffic engineering system for multilayer networks based on the GMPLS paradigm", IEEE NETWORK, IEEE SERVICE CENTER, NEW YORK, NY, US, vol. 17, no. 2, 1 March 2003 (2003-03-01), pages 28 - 37, XP011095762, ISSN: 0890-8044, DOI: 10.1109/MNET.2003.1188284 *

Also Published As

Publication number Publication date
WO2014044821A1 (en) 2014-03-27


Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20150311

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20170927

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: NEC LABORATORIES EUROPE GMBH

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20180410