GB2578453A - Software defined networks


Info

Publication number
GB2578453A
GB2578453A (application GB1817504.2A / GB201817504A)
Authority
GB
United Kingdom
Prior art keywords
node
link
network
usage
nodes
Prior art date
Legal status
Granted
Application number
GB1817504.2A
Other versions
GB2578453B (en)
GB201817504D0 (en)
GB2578453B8 (en)
Inventor
Dagiuklas Anastasios
Iqbal Muddesar
Ghosh Saparshi
Current Assignee
South Bank University Enterprises Ltd
Original Assignee
South Bank University Enterprises Ltd
Priority date
Filing date
Publication date
Application filed by South Bank University Enterprises Ltd filed Critical South Bank University Enterprises Ltd
Priority to GB1817504.2A
Publication of GB201817504D0
Publication of GB2578453A
Publication of GB2578453B
Application granted
Publication of GB2578453B8
Legal status: Active


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 - Routing or path finding of packets in data switching networks
    • H04L45/12 - Shortest path evaluation
    • H04L45/123 - Evaluation of link metrics
    • H04L45/124 - Shortest path evaluation using a combination of metrics
    • H04L45/125 - Shortest path evaluation based on throughput or bandwidth
    • H04L45/127 - Shortest path evaluation based on intermediate node capabilities

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A method of routing packets through a network, which may be a software-defined network (SDN), is disclosed, the network comprising a plurality of nodes and links connecting the nodes, and a control unit for determining packet routing. The method comprises, at the control unit: receiving a link usage for each link from one or more of the nodes 501, and calculating a link cost for each link based thereon 503; receiving a node usage from each of the nodes 502 and calculating a node cost for each node based thereon 504; identifying a minimum cost route between first and second endpoints based on the link and node costs 505; and distributing a routing instruction based on the minimum cost route to at least one node 506. A corresponding method at a node comprising a switch or router comprises: monitoring a node usage for the node and a link usage for at least one link connected to the node; reporting the node and link usages to a network node, and receiving a routing instruction from the network control unit. The node may be an OpenVSwitch. The routing instructions may be converted into flows, each flow comprising source and destination IP addresses.

Description

Intellectual Property Office Application No. GB1817504.2 RTM Date: 18 April 2019. The following terms are registered trade marks and should be read as such wherever they occur in this document: OpenFlow, Cisco, Python, Quagga. Intellectual Property Office is an operating name of the Patent Office www.gov.uk/ipo
SOFTWARE DEFINED NETWORKS
Technical field
The technical field is networking.
Background
Software-defined networking (SDN) is an approach to networking that enables efficient network configuration in order to improve network performance and monitoring. Whereas the static architecture of traditional networks is decentralised, SDN centralises control, so that the network operating system allows the common representation, monitoring and allocation of network resources within an overall management framework. The essential feature of SDN is the separation of the control plane from the data plane.
Figure 1 is a schematic diagram of a typical architecture for SDN. An SDN architecture 100 comprises an application plane 101, a control plane 102 and a data plane 103. The application plane comprises SDN applications 104, which are programs comprising SDN application logic 105 and which communicate their requirements via a northbound interface, NBI, 106 to the SDN controller 107. SDN NBIs are interfaces between SDN applications and SDN controllers and typically provide abstract network views and enable direct expression of network behaviour and requirements. This may occur at any level of abstraction ("latitude") and across different sets of functionality ("longitude"). The SDN controller 107 is a logically centralized entity which translates requirements from the SDN application layer down to the SDN datapaths and may provide SDN applications with an abstract view of the network. The latter may include events and statistics. The control plane comprises one or more SDN controllers. An SDN controller typically comprises SDN control logic 108 and one or more NBI agents 109. The data plane comprises network elements 110, which comprise one or more SDN datapaths 111. A network element is an integrated physical combination of communications resources, managed as a unit. An SDN datapath is a logical network device, which exposes visibility and uncontended control over its advertised forwarding and data processing capabilities. An SDN datapath may also be defined across multiple physical network elements. This may encompass implementation details such as the logical-to-physical mapping, management of shared physical resources, virtualization or slicing of the SDN datapath, interoperability with non-SDN networking, and the data processing functionality. The logical representation may encompass all or a subset of the physical substrate resources. Typically, an SDN datapath comprises a Control to Data-Plane Interface, CDPI, agent 112, and a set of one or more traffic forwarding engines 113 and optionally one or more traffic processing functions. These engines and functions may include simple forwarding between the datapath's external interfaces or internal traffic processing or termination functions. An SDN CDPI is the interface defined between an SDN controller and an SDN datapath, which may provide programmatic control of forwarding operations, capabilities advertisement, statistics reporting, and event notification.
The Management plane 114 implements static tasks that are better handled outside the application, control and data planes. Examples include business relationship management between provider and client, assigning resources to clients, physical equipment setup, coordinating reachability and credentials among logical and physical entities, configuring bootstrapping. Each business entity has its own management entities. Contracts and service level agreements 115 require interaction between the management and application planes. The interaction between the management plane and the control plane typically comprises configuration policy and performance policy 116. The interaction between the management plane and the data plane typically comprises element set-up 117. Another inter-layer interaction is between the application layer and the data layer 118, which typically comprises the exchange of application explicit requirements, capability and fault discovery, and network events and statistics.
In such networks, the controller accumulates information from the network devices and builds a topology. All control decisions, such as routing and switching, are made in the controller and fed back to the network devices. However, traditional routing does not adequately serve the purposes of SDN networks due to the diversity of devices which act as network elements. Furthermore, routing of traffic and network slicing mean that control must pass across the boundaries between operator domains. Conventional techniques for routing and control are therefore inefficient for SDNs. Prior art solutions have sought to improve the efficiency of control in software-defined networks.
GB2550844 discloses a device for interfacing between an SDN controller and devices such as optical switches, digital SDN-enabled optical switches, all-optical systems, WDM systems and IoT systems. The device "translates" OpenFlow into vendor- and technology-specific control messages. This enables the controller to be agnostic about the devices as the interfaces interact with the protocol.
US8693374 discloses an SDN controller for an access network. The access network comprises a plurality of access nodes, aggregation nodes and edge routers. The controller comprises a topology module which receives topology from the aggregation network. This may be via the SDN protocol. The topology module may implement a routing protocol. Traffic engineering in the form of queueing information at nodes is used in the routing algorithm. This is real-time queueing information.
US2013/0163475 discloses a flexible SDN with a protocol-specific predefined flow configuration.
US9450817 discloses an SDN controller, which enables the automatic configuration of devices in the network. US2014/0098669 discloses a fast forwarding method for packets in an SDN, wherein the packets do not match any flows within a set of flow tables. It maintains a packet miss table (PMT) to keep track of the partial matches of the incoming packets.
US2013/0250770 discloses a network node and a method for managing control plane functions for a plurality of nodes in an SDN domain. A major purpose of the disclosure is to enable the integration of SDN with Application-Layer Traffic Optimization (ALTO) nodes. The method comprises collecting, filtering and aggregating traffic optimization information. Cost mapping of the topology is performed to enable the optimization.
There is, however, no disclosure of node costs being used in optimization algorithms.
Multiple domains are discussed, but the SDN controllers are for individual domains and there is no disclosure of a multi-domain controller.
None of these disclosures, however, fully addresses the problems of routing and control in SDNs. The present disclosure seeks to address the deficiencies of conventional techniques and to improve the efficiency of routing and control in software-defined networks.
Summary of invention
In a first aspect of the disclosure there is provided a method of routing packets through a network, the network comprising a plurality of nodes, a plurality of links connecting the nodes, and a control unit for determining routing of the packets. The method comprises, at the control unit, receiving a link usage for each of the links, the link usages being received from one or more of the nodes, receiving a node usage from each of the nodes, determining a link cost for each link based on the link usage, determining a node cost for each node based on the node usage, identifying a minimum cost route between a first endpoint and a second endpoint based on the costs of the nodes and links, and distributing at least one routing instruction based on the minimum cost route to at least one node.
In an embodiment, the method further comprises, at the control unit, for each link, receiving a capacity of the link, determining a normalised link usage based on the capacity of the link and the usage of the link, and, for each node, receiving a capacity of the node, determining a normalised node usage based on the capacity of the node and the usage of the node. The step of determining the link cost for each link is based on the respective normalised link usage and the step of determining the node cost for each node is based on the respective normalised node usage.
In an embodiment, the method further comprises, for each node, determining a link hop probability for each link connected to the node, and determining, based on the link hop probability, a weighting for each link connected to the node, and calculating a weighted node cost for each connected link based on the weighting and the node cost. For each link, the method further comprises identifying a first node at a first endpoint of the link and a second node at a second endpoint of the link, and calculating a combined link cost by summing the link cost for the link, a first weighted node cost associated with the first node and a second weighted node cost associated with the second node. The step of identifying a minimum cost route between a first endpoint and a second endpoint comprises running a routing algorithm based on the combined link costs for each of the links.
In an embodiment, the method further comprises receiving network topology information from one or more nodes or from one or more network controllers, determining a network topology and representing the topology as a graph. The graph comprises graph edges representing links and graph nodes representing network nodes. The method further comprises associating received node usages with respective graph nodes and associating received link usages with respective graph edges. The step of identifying a minimum cost route comprises implementing a routing algorithm based on the graph.
In an embodiment, a same metric is used to represent node and link costs, the metric comprising a queuing time or queue length, the queueing time or queue length being determined based on a queueing model and a normalized node or link usage.
In an embodiment, the network is a software defined network.
In an embodiment, at least one node comprises an OpenVSwitch.
In an embodiment, the network is a software defined network, and the method further comprises converting the routing instructions into flows. The method further comprises identifying a source IP address, identifying a destination IP address, identifying a port at a following node and creating the flow.
According to a second aspect, there is provided a method at a network node comprising a switch or router, in a network comprising a plurality of nodes, a plurality of links connecting the nodes, and a control unit for determining routing of the packets. The method comprises monitoring a node usage for the node, monitoring a link usage for at least one link connected to the node, reporting the node usage to a network node, reporting the link usage of the at least one link to the network node and receiving at least one routing instruction from the control unit.
In an embodiment, the reporting is performed by a push agent, when usage changes.
In an embodiment, the node comprises an OpenVSwitch.
According to a third aspect, there is provided a control unit for a network comprising a plurality of nodes and a plurality of links between the nodes, the control unit comprising a transmitter, a receiver, a processor and a memory. The control unit is configured to receive a link usage for each of the links, the link usages being received from one or more of the nodes, to receive a node usage from each of the nodes, to determine a link cost for each link based on the link usage, to determine a node cost for each node based on the node usage, to identify a minimum cost route between a first endpoint and a second endpoint based on the costs of the nodes and links, and to distribute at least one routing instruction based on the minimum cost route to at least one node.
In an embodiment the control unit is further configured to receive a capacity of each link, to determine a normalised link usage based on the capacity of the link and the usage of the link, to receive a capacity of each node, and to determine a normalised node usage based on the capacity of the node and the usage of the node. The control unit is further configured to determine the link cost for each link based on the respective normalised link usage and to determine the node cost for each node based on the respective normalised node usage.
In an embodiment the control unit is further configured to determine, for each node, a link hop probability for each link connected to the node, to determine, based on the link hop probability, a weighting for each link connected to the node and to calculate a weighted node cost for each connected link based on the weighting and the node cost.
The control unit is further configured to identify, for each link, a first node at a first endpoint of the link and a second node at a second endpoint of the link, and to calculate a combined link cost by summing the link cost for the link, a first weighted node cost associated with the first node and a second weighted node cost associated with the second node. The control unit is further configured to identify a minimum cost route between a first endpoint and a second endpoint by running a routing algorithm based on the combined link costs for each of the links.
In an embodiment the control unit is further configured to receive network topology information from one or more nodes or from one or more network controllers, to determine a network topology, and to represent the topology as a graph. The graph comprises graph edges representing links and graph nodes representing network nodes. The control unit is further configured to associate received node usages with respective graph nodes and to associate received link usages with respective graph edges. The control unit is further configured to identify a minimum cost route by implementing a routing algorithm based on the graph.
In an embodiment the control unit is further configured to use a same metric to represent node and link costs, the metric comprising a queuing time or queue length, the queueing time or queue length being determined based on a queueing model and a normalized node or link usage.
In an embodiment the control unit is configured for use in a software defined network.
In an embodiment the control unit is configured for use in a network wherein at least one node comprises an OpenVSwitch.
In an embodiment the control unit is configured for use in a software defined network, and is further configured to convert the routing instructions into flows by identifying a source IP address, identifying a destination IP address, identifying a port at a following node and creating the flow.
According to a fourth aspect there is provided a network node comprising a switch or router, for use in a network comprising a plurality of nodes, a plurality of links connecting the nodes, and a control unit for determining routing of the packets. The node is connected to one or more links. The node is further configured to monitor a node usage for the node, to monitor a link usage for at least one link connected to the node, to report the node usage to the control unit, to report the link usage of the at least one connected link to the control unit, and to receive at least one routing instruction from the control unit.
In an embodiment the node comprises a push agent, which is configured to report usages when usage changes.
In an embodiment the node comprises an OpenVSwitch.
According to a fifth aspect, there is provided a computer program, comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out a method according to any one of the first or the second aspect.
According to a sixth aspect, there is provided a computer program product comprising a computer program according to the fifth aspect.
According to a seventh aspect, there is provided a carrier containing the computer program product according to the sixth aspect, wherein the carrier optionally includes an electrical signal, an optical signal, a radio signal, a magnetic tape or disk, an optical disk or a memory stick.
Brief description of figures
The above and other aspects of the present disclosure will now be described by way of example only, with reference to the following figures:
Figure 1 is a schematic diagram of a typical Software Defined Network, SDN;
Figure 2 is a schematic diagram illustrating an example of the outcome of a traditional IP routing algorithm;
Figure 3 is a schematic diagram of a network set up for testing the effects of virtual node processing capabilities on network throughput;
Figure 4 is a graphical representation of testing the effects of virtual node processing load on network throughput for two dominant IP routing protocols (RIP, OSPF);
Figure 5 is a flow chart of a method according to an embodiment;
Figure 6 is a schematic diagram of a node according to an embodiment;
Figure 7 is a flow chart, which illustrates the steps of a method implemented at a network node according to an embodiment;
Figure 8 is a flow chart, which illustrates the steps of a method implemented at a control unit;
Figure 9a is a schematic diagram which illustrates a typical network;
Figure 9b is a diagram illustrating the network represented as a graph;
Figure 9c is a representation of the outcome of a routing algorithm according to an embodiment;
Figure 10 is a schematic representation of an example of a network to which a method according to an embodiment may be applied;
Figure 11 is a depiction of the network of Figure 10 in terms of a set of queues;
Figure 12 is a schematic representation of the network of Figure 11, with the queues normalized;
Figure 13 is a flowchart which illustrates the steps of a method for combining the node and link costs according to an embodiment;
Figure 14 is a flowchart which illustrates the steps of a method for determining topology and associating node and link usages with the topology according to an embodiment;
Figure 15 is a flowchart illustrating the steps of an algorithm according to an embodiment;
Figure 16 is a flow chart illustrating a method of collecting data according to an embodiment;
Figure 17 illustrates the flow chart of the algorithm according to an embodiment;
Figure 18 is a graph of results comparing the end to end delays achieved using conventional routing protocols;
Figure 19 is a schematic diagram illustrating a network using a method according to an embodiment;
Figure 20 is a schematic diagram of a control unit according to an embodiment;
Figure 21 is a schematic diagram of a network node according to an embodiment;
Figure 22 is a schematic diagram illustrating the logical units of a control unit according to an embodiment;
Figure 23 is a schematic diagram illustrating further logical units that may be provided in the control unit in an embodiment;
Figure 24 is a schematic diagram illustrating further logical units that may be provided in the control unit in an embodiment;
Figure 25 is a schematic diagram illustrating further logical units that may be provided in the control unit in an embodiment; and
Figure 26 is a schematic diagram illustrating the logical units that are present in a network node according to an embodiment.
Detailed description
The present disclosure will now be discussed with reference to several example embodiments. It should be understood that these embodiments are discussed only for the purpose of enabling those skilled persons in the art to better understand and thus implement the present disclosure, rather than suggesting any limitations on the scope
of the present disclosure.
Traditional IP routing protocols such as Routing Information Protocol, RIP, Open Shortest Path First, OSPF, and Enhanced Interior Gateway Routing Protocol, EIGRP, perform routing by calculating path costs from the bandwidths available on the links within the route. For networks with identical link speeds, the hop count is used to determine the route. EIGRP refines the process by keeping a second-best path in its registry, which is switched to when the primary route fails. In traditional networks, routers are standardized and hence have known performance with regard to processing times and the like. In contrast, in Software Defined Networks, SDN, any virtual machine may act as a router or switch. The result is a variation in the distribution of resources and performance not found in conventional networks. The conventional response to this problem is to keep the network capacity fixed and stress the virtual machines. This, however, causes a drop in overall performance, sometimes quite drastic.
Figure 2 is a schematic diagram of a network on which a conventional routing protocol is applied. The network comprises a plurality of nodes 201, 202, 203, 204, 205, 206 and a plurality of links 207. Each of the links has a cost 208 associated with it. A link cost minimizing path is found through the network. In this example, the routing is from node A 201 to node B 202, and the link cost minimizing path 209 is illustrated. No account has been taken of the processing times for each of the nodes when determining the minimum link cost route.
The total time for a packet to be transferred across a network from one node to another depends on a transmission delay and service delay. The transmission delay is introduced by the links. The parameters that determine the delay are the bandwidth of the links, the signal to noise/interference ratio, the bit error rate, the diversity number (for Multiple Input Multiple Output, MIMO, systems) and path loss.
The service delay is introduced by nodes during processing of a packet. In networks using conventional equipment, such as Cisco 2600 or 3500 series routers, processing of packets is implemented in dedicated hardware, using application-specific integrated circuits, ASICs. This allows for consistent and predictable performance. In contrast, in SDN, there is no consistency in resource specification. Network devices are typically arranged logically as Virtual Network Functions, VNF, and the processing load may eventually be assigned to the server's Central Processing Unit, CPU. Network elements in SDN are non-standardized. This may result in large variations in the VNFs' resource specifications, in terms of CPU, memory, Network Interface Card, NIC, etc. For example, a 100% load for an 8-core VM is double that of a 4-core VM. The architecture of the CPU, the memory type, auxiliary processing load and so on also affect the processing time.
The processing of a packet is a task submitted to the operating system. The performance of an operating system is proportional to the degree of multiprocessing until a critical point is reached, beyond which the performance drops drastically. This phenomenon is known as thrashing. Figure 3 is a schematic diagram of a network used for experimentation on the effect of processing load on the throughput of a network. An intermediary node C 303 is located between the two end nodes 301, 302 in the route 304 identified as having a minimal link cost. The amount of processing 305 on the CPU of node C is increased. Figure 4 is a graphical illustration of the experimental results. The graph 400 plots the bandwidth (throughput) 401 of the link between A and B, using RIP 402 and OSPF 403, against CPU allocation 404. The graph illustrates how the increase in CPU loading causes a reduction in throughput as a result of increasing processing time for packets at a node. The root cause of the problem is that algorithms used for conventional routing were not designed to include the time delays of the nodes.
In the virtual networking paradigm, any virtual machine, VM, can be either a router or a switch. This brings a non-uniform distribution of resources (CPU, memory, etc.) to the network. In a typical scenario, an SDN comprises a number of OpenVSwitches, each hosted on a VM running across a cluster of hypervisors as a Virtual Network Function, VNF. The hypervisors simultaneously execute other jobs, with the job load unevenly distributed across the VNFs. In addition, there is a variation in the resource specification of each of the VNFs. Consequently, the network is heterogeneous.
Initially, the individual performance increases as more tasks are assigned, but the load distribution may not be uniform for the overall system. Therefore the packet service time may be non-uniformly distributed across the network. This random non-uniformity of the processing time of the nodes, regardless of the link cost, makes the system stochastic in nature. The present disclosure seeks to address the problem of the variation in node processing time.
Figure 5 is a flowchart 500, which illustrates the steps of determining the routing of a packet through a network at a control unit according to an embodiment. The network comprises a plurality of nodes, a plurality of links connecting the nodes, and a control unit for determining routing of the packets. At a first step 501, the control unit receives a link usage for each of the links, the link usages being received from one or more of the nodes. The control unit receives 502 a node usage from each of the nodes. The link usage may be determined directly from an endpoint node of the link, or indirectly from nodes which are not directly connected to the link. In an embodiment, all nodes report link usages. In other embodiments, some nodes report link usages for multiple links and other nodes may not report at all. The person skilled in the art will appreciate that there are a large number of options for reporting node and link usage, and the invention is not limited to any one method. The control unit then determines 503 a link cost for each link based on the link usage, determines 504 a node cost for each node based on the node usage and identifies 505 a minimum cost route between a first endpoint and a second endpoint based on the costs of the nodes and links. Standard routing algorithms may be used for the step of identifying minimum cost paths. In an embodiment, the same metric may be used for node and link usage. However, the skilled person will appreciate that other methodologies could be used. The final step 506 of the method according to this embodiment is distributing at least one routing instruction based on the minimum cost route to at least one node.
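By way of illustration only, the control-unit steps 501 to 506 might be sketched in Python as follows. All names (route_packets, send_instruction, the placeholder cost functions) are hypothetical, and the equal next-hop weighting used to fold node costs into edge costs is an assumption made for this sketch; the disclosure itself derives the weightings from hop probabilities, as described later.

```python
import heapq

def compute_link_cost(usage):
    # Placeholder: in the disclosure the cost is derived from normalised link usage.
    return usage

def compute_node_cost(usage):
    # Placeholder: in the disclosure the cost is derived from normalised node usage.
    return usage

def dijkstra(adj, src, dst):
    """Minimum-cost path over an adjacency dict {node: {neighbour: cost}}."""
    dist, prev, seen = {src: 0.0}, {}, set()
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, cost in adj.get(u, {}).items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def route_packets(link_usages, node_usages, src, dst, send_instruction):
    """Steps 501-506: usages in, routing instructions out."""
    link_costs = {e: compute_link_cost(u) for e, u in link_usages.items()}   # 501/503
    node_costs = {n: compute_node_cost(u) for n, u in node_usages.items()}   # 502/504
    # Fold node costs into adjacent edge costs so that a standard
    # shortest-path algorithm can be applied (see Figures 11 and 12).
    adj = {}
    for (x, y), c in link_costs.items():
        combined = c + 0.5 * node_costs[x] + 0.5 * node_costs[y]  # equal weighting assumed
        adj.setdefault(x, {})[y] = combined
        adj.setdefault(y, {})[x] = combined
    route = dijkstra(adj, src, dst)                                          # 505
    for node in route:
        send_instruction(node, route)                                        # 506
    return route
```

For example, route_packets({("A", "B"): 0.2, ("B", "C"): 0.7, ("A", "C"): 0.4}, {"A": 0.1, "B": 0.9, "C": 0.2}, "A", "C", print) selects the direct path A-C, avoiding the heavily loaded node B.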
The control unit refers to any network location at which routing algorithms are implemented. The person skilled in the art will appreciate that many arrangements of such a device are possible. In an embodiment, the control unit is a separate device from other network controllers or nodes, such as SDN controllers. This can mean either physical or logical separation. In another embodiment, the control unit is collocated with a network controller. In yet another embodiment, the control unit is located at one of the network nodes. In an embodiment, the monitoring tools run independently from network controllers. In an embodiment, the monitoring tools run on top of OpenVSwitches. In an embodiment, the control unit is located at a remote server. In an embodiment, statistics are collected and stored at a remote database. The control unit may be referred to as a server or server agent. The invention is not limited to any one arrangement of the monitoring tools or the physical or logical location of the control unit. The terms node or network node are used to refer to devices comprising routers, switches or similar devices used in networks for terminating or routing communications traffic, i.e. the terms are used with their conventional meaning. Both terms, control unit and node, may also refer to a virtual device. In an embodiment, the network comprises a software defined network, SDN, and the network nodes comprise OpenVSwitches.
However, the invention is not limited to this type of network and the person skilled in the art will appreciate that the disclosure may relate to other types of networks that use switches, routers or equivalent devices. The term link is used in its conventional sense to mean any means of providing a communications connection between nodes in a network, and may include wireless, optical or electrical means.
Figure 6 is a schematic diagram of a node according to an embodiment. The node 601 comprises at least one receiver 602, a switch 602, a transmitter 603, node monitoring apparatus 604, link monitoring apparatus 605 and a push agent 607, for forwarding changes in utilization to a control unit.
Figure 7 is a flow chart 700, which illustrates the steps of a method implemented at a network node according to an embodiment. The method comprises the steps of monitoring a node usage for the node 701, monitoring a link usage for at least one link connected to the node 702, reporting 703 the node usage to a network node, reporting the link usage of the at least one link to the network node 704 and receiving 705 at least one routing instruction from the control unit.
In an embodiment, monitoring of local resource utilization is implemented at each network node. This information is fed back to a control unit. Typically, the system is heterogeneous, with nodes varying in their capacity. Parameters such as CPU capacity and memory may, therefore, be provided to the SDN from each node. In an embodiment, network usage is normalised. Instead of calculating just the CPU usage (U_c), a normalised load is calculated:

Z_cpu = (1 - U_c)(C_clock x Core)     Equation 1

where C_clock and Core are the clock rate and core count respectively of the CPU, and Z_cpu is the normalised CPU usage.
Similarly, instead of the memory usage (U_m), a normalised memory load is calculated:

Z_mem = (1 - U_m) x M_mem     Equation 2

where M_mem is the memory capacity and Z_mem is the normalised memory utilization.
This information is then combined and normalized to produce a single scalar called Z_n(t), which provides a representation of the current node usage. This is then sent to the control unit. The normalisation process eliminates the costly dimensionality reduction step at the server level, where several such Z_n values are deposited.
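A minimal sketch of this client-side normalisation, assuming Equations 1 and 2 above and an arbitrary equal weighting when combining the two figures into the single scalar; the parameter names and units are illustrative only:

```python
def normalised_node_usage(u_cpu, u_mem, clock_ghz, cores, mem_gb,
                          alpha=0.5, beta=0.5):
    """Combine CPU and memory headroom into a single scalar Z_n.

    u_cpu and u_mem are utilisation fractions in [0, 1]; clock_ghz, cores and
    mem_gb describe the node's capacity.  The alpha/beta weightings and the
    final combination are illustrative assumptions.
    """
    z_cpu = (1.0 - u_cpu) * (clock_ghz * cores)   # Equation 1: remaining CPU capacity
    z_mem = (1.0 - u_mem) * mem_gb                # Equation 2: remaining memory capacity
    return alpha * z_cpu + beta * z_mem           # single scalar reported to the control unit

# Example: a half-loaded 4-core 2.4 GHz node with 8 GB RAM at 25% memory use
print(normalised_node_usage(0.5, 0.25, 2.4, 4, 8))   # -> 0.5*4.8 + 0.5*6.0 = 5.4
```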
Figure 8 is a flow chart 800, which illustrates the steps of a method implemented at a control unit. The method comprises, for each link, receiving 801 a capacity of the link and determining 802 a normalised link usage based on the capacity of the link and the usage of the link. The method further comprises, for each node, receiving 803 a capacity of the node and determining 804 a normalised node usage based on the capacity of the node and the usage of the node. The step of determining 805 the link cost for each link is based on the respective normalised link usage and the step of determining the node cost for each node is based on the respective normalised node usage.
In an embodiment, a "push agent" is used to report link and node utilization to a control unit. Whenever a change occurs in the level of utilization, this is sent to the control unit or server agent, which receives the utilization factors from client agents and topology information from one or more network controllers. In another embodiment, the information is reported periodically. The invention is not limited to any one utilization reporting strategy. In an embodiment, the switches are OpenFlow Switches and the network controllers are SON controllers. The server agent receives the utilisation factors from client agents and topology information from the network controller. The server agent fuses the topology information with the utilization factors. It then mirrors the actual network in a graph structure and runs a shortest path algorithm. In an embodiment, the algorithm only finds the shortest path for those switch pair whose change in utilisation is eligible for rerouting, i.e. there is a benefit in terms of a reduced "cost" to the network. This selective nature reduces the complexity and makes the algorithm faster. In an embodiment, the shortest path is then translated into a sequence of flows and injected to the switches. Route calculation is implemented in a mirrored graph. This has the advantage that there is no control message overhead.
In addition to the utilization values, the topology of the network is required to implement a routing algorithm. In an embodiment, the network is represented as a graph, with nodes of the graph representing switches and edges of the graph representing links.
Figures 9a to 9c provide an overview of the process of implementing a routing protocol according to an embodiment. Figure 9a is a schematic diagram, which illustrates a typical network. The network 901 comprises nodes 904 and links 905. It would be clear to the person skilled in the art that the method may be applied to many different topologies, typically of far higher complexity than that illustrated, and that the illustrated network is only an example. In order to implement the method according to an embodiment, the actual network structure is represented as a graph structure. Figure 9b is a diagram illustrating the network represented as a graph 902, with the switches of the network as nodes 906 and the links as edges 907. Figure 9c is a representation of the outcome of a routing algorithm according to an embodiment, which provides the shortest path routes 908 between the nodes 904.
Figure 10 is a schematic representation of an example of a network 1000 to which a method according to an embodiment may be applied. The network comprises a plurality of nodes 1001, 1002, 1003, 1004, 1005, connected by a plurality of links 1006, 1007, 1008, 1009, 1010, 1011. Each of the nodes has associated with it a host 1012, 1013, 1014, 1015, 1016. In an embodiment, the nodes comprise OpenVSwitches, OVS. Associated with each of the links is a link utilisation value Z(t)_e{x,y}, wherein e{x,y} signifies the edge x-y, x and y representing the nodes at each end of the link.
Associated with each of the nodes is a node utilization value Z(t)_e{x}, where x signifies the node number.
In an embodiment, the utilization data is normalized with respect to the capacity of the nodes and the links, using the following approach. In an embodiment, the normalized cost of a node is given by:

Z_v = α(f_c x N_c)(1 - U_c) + β(f_m x N_m)(1 - U_m)     Equation 3

Where:
f_c: frequency of the CPU per core
f_m: frequency of the RAM
N_c: total number of cores
N_m: total volume of RAM
U_c: percentage of processor utilization
U_m: percentage of memory utilization
α and β are weighting coefficients which reflect the relative contributions of CPU and memory utilisation to the overall cost of a node.
In an embodiment, the normalized cost of a link is given by:

Z_e = α(C - B_a)/B_a + β(RSSI_max - RSSI)/RSSI + γN̄_a     Equation 4

Where:
C: link capacity
B_a: available bandwidth
RSSI_max: maximum signal strength (RSSI) value
RSSI: received signal strength value
N̄_a: average contention
α, β and γ are weighting coefficients.
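The two cost calculations might be expressed as below. The node cost follows Equation 3 directly; the exact normalisation inside the link cost of Equation 4 is an assumption of this sketch, as are the default weighting coefficients:

```python
def node_cost(f_c, n_c, u_c, f_m, n_m, u_m, alpha=0.5, beta=0.5):
    """Equation 3: weighted sum of remaining CPU and memory capacity."""
    return alpha * (f_c * n_c) * (1 - u_c) + beta * (f_m * n_m) * (1 - u_m)

def link_cost(capacity, available_bw, rssi_max, rssi, contention,
              alpha=1.0, beta=1.0, gamma=1.0):
    """Equation 4 (as reconstructed here): used capacity, signal degradation and
    contention, each weighted; the exact normalisation terms are assumptions."""
    return (alpha * (capacity - available_bw) / available_bw
            + beta * (rssi_max - rssi) / rssi
            + gamma * contention)
```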
In an embodiment, the algorithm used may be based on stochastic network calculus, SNC, in which the network is modeled as a series of network queues. In an embodiment, a simplified version of SNC is used for developing an algorithm. The algorithms in this disclosure use the following definitions:
* The arrival time T_0 is the time at which a packet arrives at a switch v_i.
* The service time (T_s = T_k - T_0), or queuing time, is the interval between the arrival time and the time T_k at which the packet leaves the switch.
The queuing time is proportional to the queue size, which is proportional to the load of the system. Similarly, the edges or links can also be treated as a queue.
In an embodiment, the costs of the switches (nodes) and the links (edges) are generalized and expressed as queuing times. In the description of the algorithms below, a node cost is given the symbol Z_v and an edge cost the symbol Z_e.
A path through the network comprises an alternating sequence of nodes and edges and may be represented as a sequence of queues. The total path cost may therefore be represented as the summation of the costs of the individual nodes and edges that constitute the path. Figure 11 depicts the network of Figure 10 in terms of a set of queues. Each node (switch) 1101 is represented as a queue 1102, and each edge (link) 1103 is likewise represented as a queue 1104. Each node is represented as a weighted self-loop 1105. The cost of each edge and node is represented as the length of the respective queue. In an embodiment, the costs of the links and the switches for the purpose of a routing algorithm are calculated as described above. Each queue has a point of entry and a point of exit, called the rear 1106 and the front 1107 respectively. For simplicity of depiction, it is assumed that the links are simplex.
In an embodiment, the queuing system is heterogeneous, i.e. each queue may run a different scheduling mechanism. The queue size is proportional to the processing load for the nodes and to the traffic load for the edges. The queue size is also proportional to the queuing time. The mean of the queueing time is referred to as the average waiting time, AWT. Hence choosing the least time-consuming path amounts to choosing the sequence of queues whose sum of AWTs is the lowest among the possible alternatives, which inherently selects nodes and edges that are comparatively under-loaded. The relationship between AWT and queue size is:
W = W_q + 1/μ = L_q/λ + 1/μ     Equation 5

Where:
W = AWT of the system
W_q = AWT of a queue
L_q = mean number of requests in the queue
λ = mean rate of arrival
μ = mean service rate

Packets arrive at nodes from many sources and the service time depends on the system load which, in turn, depends on several random causes; therefore the arrival time and the service distribution are, in an embodiment, assumed to be distribution agnostic. Each node is modelled as a single server with high capacity in terms of queue size. In an embodiment, the queueing model is assumed to be a G/G/1 queue.
The queue length is related to the utilisation (ρ = λ/μ). Since this is a G/G/1 queue, the queue length is proportional to ρ² = (λ/μ)², where λ is the arrival rate and μ is the service rate.
From the approximated value of L_q for G/G/1 queues:

L_q ≈ (ρ²/(2(1 - ρ)))(σ_a²λ² + σ_s²μ²)     Equation 6

Where:
ρ: utilization of the server
σ_s², σ_a²: variance of the service and inter-arrival times respectively

Hence:

W_node = O(L_q) = O(ρ²) = O((λ/μ)²)     Equation 7

However, the person skilled in the art will appreciate that other queueing models may be used to derive a queue length or queueing time for use as a metric for a routing algorithm, and the invention is not limited to any one queueing model.
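A small sketch of such a queue-based node cost, using the G/G/1 approximation reconstructed above (the exact form of Equation 6 is an assumption) together with Little's law from Equation 5:

```python
def gg1_node_cost(arrival_rate, service_rate, var_arrival, var_service):
    """Approximate mean queue length L_q and waiting time W for a G/G/1 queue,
    for use as a node cost; the exact approximation used here is an assumption."""
    rho = arrival_rate / service_rate                      # utilisation
    if rho >= 1.0:
        return float("inf")                                # saturated node
    l_q = (rho ** 2 / (2 * (1 - rho))) * (
        var_arrival * arrival_rate ** 2 + var_service * service_rate ** 2)
    w_q = l_q / arrival_rate                               # Little's law (Equation 5)
    return w_q + 1.0 / service_rate                        # total AWT of the node

# A node at 80% utilisation costs far more than one at 20%:
print(gg1_node_cost(8, 10, 0.01, 0.01), gg1_node_cost(2, 10, 0.01, 0.01))
```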
A problem arises in the application of routing algorithms to the above scenario in order to find the optimum path. Shortest path algorithms assume that the graph (the representation of the network in terms of nodes representing switches and edges representing links) is "simple", i.e. that it has no self-loops or parallel edges. The above model has self-loops, which are used to represent the delays caused by processing times at the nodes.
In an embodiment, this problem is overcome by distributing the costs of the nodes to the adjacent edges. A weighting is assigned to each of the adjacent edges. The weighting α is a number between 0 and 1, which represents the next hop probability.
For each node, the summation of the weightings is 1. Each link therefore has a cost:

Cost(edge) = |e_xy| + α_x|e_xx| + α_y|e_yy|     Equation 8

where |e_xy| is the cost of the edge x-y, α_x|e_xx| is the cost, |e_xx|, of node x weighted by the factor α_x, and α_y|e_yy| is the cost of node y weighted by α_y. Figure 12 is a schematic representation of the network of Figure 11, with the queues normalized. Shortest path algorithms may now be implemented on the network. The costs of the edges 1201 have been adapted as in Equation 8, and the costs of the nodes 1202 have been set to zero.
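A minimal sketch of Equation 8, with a uniform next-hop probability assumed purely for illustration:

```python
def combined_edge_cost(edge_cost, node_cost_x, node_cost_y, weight_x, weight_y):
    """Equation 8: fold the self-loop (node) costs into the adjoining edge.

    weight_x and weight_y are the next-hop probabilities of this edge as seen
    from nodes x and y; for each node the weights over its edges sum to 1."""
    return edge_cost + weight_x * node_cost_x + weight_y * node_cost_y

def uniform_next_hop_weight(node_degree):
    """A minimal assumption: a uniform next-hop probability of 1/degree."""
    return 1.0 / node_degree
```

With uniform weights, a node of degree d contributes 1/d of its cost to each of its edges, so the weights over that node's edges sum to 1 as required.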
In an embodiment, an algorithm with the following pseudocode is implemented:

Algorithm 1:
1. While (true) {
2.   Set route ← ∅, r_temp ← ∅
3.   Normalise G: G' = χ(G)
4.   For all vertex pairs (v_i, v_j) ∈ V(G') × V(G') {
5.     If (v_i, v_j) ∈ E' {
6.       If min(e'_ik) + min(e'_kj) = Min(E')
7.         route ← route ∪ {(v_i, v_j)}
8.       Δe'_ij : change in edge weight
9.       If Δe'_ij > min(e'_ik) + min(e'_kj)
10.        route ← route ∪ {(v_i, v_j)}
11.      Else r_temp ← r_temp ∪ {(v_i, v_j)} } }
12.  If route ≠ ∅
13.    route ← route ∪ r_temp
14.  For all (v_i, v_j) ∈ route
15.    R_i,j ← R_i,j ∪ dijkstra(v_i, v_j)
16.  For all r_i,j ∈ R_i,j call Algorithm 2
17.  Sleep (Timeout) }

The above algorithm takes a graph G(V,E), comprising a plurality of nodes and edges, representing respectively switches and links in a network. The graph is then normalized according to the process of normalisation described above. In an embodiment, a Dijkstra shortest path algorithm is then run to determine a route between a node v_i and a node v_j, which is denoted as r_i,j. Each route represents a sequence of nodes. In an embodiment, a function may be run on a node to determine the successor node.
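The graph normalisation and the selective rerouting step of Algorithm 1 might be sketched as follows, assuming the networkx library; the hop_weights structure, the margin threshold and the function names are illustrative assumptions rather than part of the disclosure:

```python
import networkx as nx

def build_normalised_graph(links, node_costs, hop_weights):
    """Mirror the network as a simple weighted graph: node (self-loop) costs are
    distributed onto the adjacent edges as in Equation 8, leaving no self-loops."""
    g = nx.Graph()
    for (x, y), edge_cost in links.items():
        w = (edge_cost
             + hop_weights[x][(x, y)] * node_costs[x]
             + hop_weights[y][(x, y)] * node_costs[y])
        g.add_edge(x, y, weight=w)
    return g

def reroute_if_beneficial(g, current_route, src, dst, margin=0.1):
    """Selective step of Algorithm 1: only adopt a new path if its cost improves
    on the current route by more than `margin` (an assumed threshold)."""
    new_path = nx.dijkstra_path(g, src, dst, weight="weight")
    new_cost = nx.path_weight(g, new_path, weight="weight")
    old_cost = nx.path_weight(g, current_route, weight="weight")
    return new_path if new_cost < old_cost - margin else current_route
```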
Figure 13 is a flowchart 1300 which illustrates the steps of a method for combining the node and link costs according to an embodiment. The method comprises, for each node, determining 1301 a link hop probability for each link connected to the node and determining 1302, based on the link hop probability, a weighting for each link connected to the node. The method further comprises calculating 1303 a weighted node cost for each connected link based on the weighting and the node cost. The method further comprises, for each link, identifying 1305 a first node at a first endpoint of the link and a second node at a second endpoint of the link and calculating 1306 a combined link cost by summing the link cost for the link, a first weighted node cost associated with the first node and a second weighted node cost associated with the second node. The step of identifying 1306 a minimum cost route between a first endpoint and a second endpoint comprises running a routing algorithm based on the combined link costs for each of the links.
Figure 14 is a flowchart 1400 which illustrates the steps of a method for determining topology and associating node and link usages with the topology according to an embodiment. The method comprises, at a control unit, receiving 1401 network topology information from one or more nodes or from one or more network controllers, determining 1402 a network topology, representing 1403 the topology as a graph, the graph comprising graph edges representing links and graph nodes representing network nodes, associating 1404 received node usages with respective graph nodes, and associating 1405 received link usages with respective graph edges. The step of identifying 1406 a minimum cost route comprises implementing a routing algorithm based on the graph.
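A short sketch of this mirroring step, again assuming the networkx library; attaching the reported usages as node and edge attributes keeps the topology and the utilisation data in a single structure:

```python
import networkx as nx

def mirror_topology(topology_links, node_usages, link_usages):
    """Figure 14: represent the discovered topology as a graph and attach the
    reported usages to the corresponding graph nodes and edges."""
    g = nx.Graph()
    g.add_edges_from(topology_links)                       # e.g. [("s1", "s2"), ...]
    for node, usage in node_usages.items():
        g.nodes[node]["usage"] = usage                     # associate node usage
    for (x, y), usage in link_usages.items():
        g.edges[x, y]["usage"] = usage                     # associate link usage
    return g
```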
In an embodiment, a second algorithm is implemented in order to translate a route into a set of flow entries F(t) = {f_k | k ∈ V}. This enables the configuration of the switches. In an embodiment, the switches are OpenVSwitches, OVS.
Algorithm 2:
1. For all v_k in r_i,j
2.   If succ(v_k) ≠ ∅ {
3.     OVS ← v_k
4.     sip ← source IP address
5.     dip ← destination IP address
6.     port ← p(succ(v_k))
7.     OVS.addflow(nw_src = sip, nw_dst = dip, action = output:port) }

Table 1 illustrates examples of flow entries for r_i,j. The example relates to OpenFlow switches.
Table 1

OVS    Match (Source IP)    Match (Destination IP)    Action
V1     H1                   H5                        Out: P(succ(V1))
V2     H1                   H5                        Out: P(succ(V2))

Figure 15 is a flowchart 1500 illustrating the steps of the second algorithm. For each switch in the route 1501 which is not an endpoint of the route, a source IP address 1502 and a destination IP address 1503 are identified. A port is then identified 1504 at the next switch in the route and a flow is created 1505.
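A sketch of how Algorithm 2 might be rendered in Python; the out_ports lookup, the bridge configuration and the exact match syntax that would be passed to ovs-ofctl are assumptions, and the generated strings are illustrative rather than a definitive flow format:

```python
def route_to_flows(route, src_ip, dst_ip, out_ports):
    """Algorithm 2 sketch: for every switch on the route that has a successor,
    emit an OpenFlow rule matching source/destination IP and forwarding out of
    the port towards the next switch.  out_ports[(v, succ)] is a hypothetical
    lookup of the port on switch v that faces its successor."""
    flows = []
    for v, succ in zip(route, route[1:]):
        port = out_ports[(v, succ)]
        # String form as it might be passed to `ovs-ofctl add-flow <bridge> ...`;
        # bridge names and priorities are deployment-specific assumptions.
        flows.append((v, f"ip,nw_src={src_ip},nw_dst={dst_ip},actions=output:{port}"))
    return flows

# Example for the route V1 -> V2 -> V3 carrying traffic from H1 to H5:
print(route_to_flows(["V1", "V2", "V3"], "10.0.0.1", "10.0.0.5",
                     {("V1", "V2"): 2, ("V2", "V3"): 3}))
```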
Figure 16 is a flow chart illustrating a method of collecting data according to an embodiment. The data is organised as a "dictionary", comprising data pairs, each pair being a node id and a node utilization 1601. An event can be an update of an existing node or a new node attachment 1602. For a new node attachment, a new node id is created 1603 and a utilization recorded 1604. For other events, the node id is obtained 1605 and the utilization updated 1606. The collected system information, such as CPU utilization, memory utilization, CPU and memory capacity, bandwidth utilization and battery life, is recorded 1607, normalized 1608 and stored 1609.
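A minimal sketch of this collection step; the record layout and the simple capacity-based normalisation are assumptions made for illustration:

```python
stats = {}   # node id -> latest normalised utilisation record

def handle_report(node_id, raw_usage, capacity):
    """Figure 16: create an entry for a newly attached node, or update an
    existing one, then normalise and store the record."""
    normalised = raw_usage / capacity if capacity else 0.0   # simple normalisation assumed
    if node_id not in stats:                                 # new node attachment
        stats[node_id] = {"history": []}
    stats[node_id]["current"] = normalised                   # update utilisation
    stats[node_id]["history"].append(normalised)             # store for later analytics
```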
Figure 17 is a schematic diagram of a network model used for testing the above methods. The model comprises obtaining the network topology 1701, obtaining utilization values 1702, updating the topology and utilization data 1703, implementing the routing algorithm 1704 and applying the routed topology 1705.
Figure 18 is a graph 1800 of results comparing the end to end delays achieved using conventional routing protocols, RIP 1801 and OSPF 1802, with the results 1803 using the above method. The end to end delay 1804 is plotted against CPU usage 1805. The improvement in the end to end delay is evident.
Figure 19 is a schematic diagram illustrating a network using a method according to an embodiment. In an embodiment, this is a software-defined network, but the person skilled in the art will appreciate that the methods herein disclosed may be applied to other types of network. The invention is not limited to any one type of network. The levels of network intelligence are illustrated, namely, the end user devices layer 1901, the data plane layer 1902, the control plane layer 1903, the application plane layer 1904 and the analytics plane layer 1905.
The end user devices 1906 communicate 1907 with network nodes 1908 comprising switches or routers, which, in an embodiment, are OpenVSwitches. The person skilled in the art will appreciate that other types of switch or router may be used, such as a conventional router. The invention is not limited to any one type of switch or router. The switches 1908 communicate 1909 with one or more control plane devices 1910. In an embodiment, this is implemented by OpenFlow. In an embodiment, the control plane devices comprise an SDN controller. The control plane devices provide topology information 1911 to the analytics engine 1912. In an embodiment, this is implemented by RESTconf. Node and link utilisation information 1913 is also provided to the analytics engine 1912.
In an embodiment, the End Devices Layer 1901 is responsible for interfacing with the various Internet of Things, IoT, end-devices 1906. In an embodiment, it runs a standard wireless network, with, typically, an IoT protocol running on top. The IoT protocol may be, for example, MQTT or AMQP. The person skilled in the art will appreciate that other protocols may be used and the invention is not limited to any particular protocol.
In an embodiment, the End Devices Layer 1901 features extended edge nodes, contextually termed Mobile Platforms (MPs), which act as a connecting fabric between the End Devices Layer and the layers above. In an embodiment, slice-specific services run inside a container hosted by the MPs.
In an embodiment, the Data Plane Layer 1902 uses the switches 1908 to communicate 1909 with one or more control plane devices 1910. At this layer, in an embodiment, the OpenVSwitch, OVS, is placed on top to connect to one or more control plane devices. A software defined controller connects several such OVSs. The controller uses the OpenFlow protocol to control the OVSs. In order to scale the network, multiple controllers can collaborate, each with its own control domain. These collaborating controllers use protocols to communicate among each other. A periodic resource utilization monitor runs alongside the container engine. This periodically collects and distributes local system states so as to orchestrate intelligent migration. This layer is responsible for data acquisition from IoT and resource querying agents. Push agents running inside the eastbound interface run on a static port and publish information to the control plane layer 1903.
In an embodiment, the Controller Plane Layer 1903 makes forwarding decisions. It is typically based on a protocol that manages flows, such as the OpenFlow protocol, which allows the parent node to tell the child node where to send packets. The person skilled in the art will appreciate that other protocols may be used and the invention is not limited to any one protocol. The Controller Plane Layer may provide functions based on the concept of Domains. A group sharing common networking policies, such as routing protocol and QoS/QoE rules, forms a Domain. Each domain is controlled by a Domain controller, which is typically designated with Network Functions, SDN being one of the key ones.
Communication can be either intra-domain or inter-domain, depending on the residence of the communicating parties. In an embodiment, the architecture uses OpenVSwitch (OVS). Typically, this uses the SDN southbound protocol OpenFlow to communicate with the SDN controller. In an embodiment, for inter-domain communication the "speaking nodes" (Border Gateway Routers) can be software defined using tools such as Quagga. OVS and SDN controllers can run on top of it, resulting in a Multi-Domain and Multi-Tier architecture. The skilled person would recognise that alternative protocols may be used and the invention is not limited to any one protocol or combination of protocols.
The Application Plane Layer 1904 is typically implemented in the cloud. This layer may sometimes be referred to as the Core. There are two main purposes of this layer: firstly to provide multi-tenancy and secondly to provide data analytics. In an embodiment, the control plane layer devices provide topology information 1911 at this layer, which is communicated to the analytics engine 1912, typically running at the Analytics Plane Layer. In an embodiment, this is implemented by RESTconf. Node and link utilisation information 1913 is also provided to the analytics engine 1912, which writes the OpenFlow rules after performing the analytics. Data analytics is a computationally intensive yet efficient way to observe the runtime data and predict anomalies. Due to the high demand for computational resources, the analytics are typically not run on the edge nodes. In an embodiment, the cloud computes the prediction and instructs potentially overloaded resources with identified countermeasures. This reduces the delay. The control plane devices provide topology information 1911 to this layer. The architecture is capable of using any network topology.
In an embodiment, shell scripts are used to fetch and calculate resource utilization from the layers below (for example CPU, memory, bandwidth and Rh). In an embodiment, Python scripts run these shell scripts to gather data and put it into a dictionary structure, to send data as <key, value> pairs, and to use Pickle for serialization and publication with a given port number. In an embodiment, this process runs as a REST API.
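One possible sketch of such a collector, assuming hypothetical shell script names, a placeholder port and a plain socket in place of a full REST framework:

```python
import pickle
import socket
import subprocess

PORT = 5005   # hypothetical static publication port

def fetch_metric(script):
    """Run one of the shell scripts that computes a utilisation figure and
    return its numeric output; the script names are placeholders."""
    out = subprocess.run(["sh", script], capture_output=True, text=True)
    return float(out.stdout.strip() or 0.0)

def publish_once(node_id):
    data = {
        "node": node_id,
        "cpu": fetch_metric("./cpu_usage.sh"),
        "mem": fetch_metric("./mem_usage.sh"),
        "bw": fetch_metric("./bandwidth.sh"),
    }
    payload = pickle.dumps(data)                    # serialise the <key, value> pairs
    with socket.create_connection(("127.0.0.1", PORT)) as s:
        s.sendall(payload)                          # publish to the collector
```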
In an embodiment, the Analytics Plane Layer comprises an analytics engine, which is used to perform analytics on data collected through the Application Plane Layer from the layers below. This layer typically feeds information to provide improvements in resource allocation through improved slice allocation and route optimization. This is achieved by the incorporation of node costs, in addition to conventional link costs, into the optimization algorithms.
Figure 20 is a schematic diagram of a control unit according to an embodiment. In an embodiment, there is a transmitter 2001 and a receiver 2002, for linking to the network.
The control unit further comprises a processor 2003 and a memory 2004. The person skilled in the art will appreciate that the control unit could comprise multiple transmitters and receivers, and also multiple processors and memory units. In an embodiment, the control unit is at a server. In another embodiment, the control unit may be a virtual device, distributed across different physical hardware.
Figure 21 is a schematic diagram of a network node according to an embodiment. In an embodiment, there is a transmitter 2101 and a receiver 2102, for linking to the network. The network node further comprises a processor 2103 and a memory 2104. The person skilled in the art will appreciate that the network node could comprise multiple transmitters and receivers, and also multiple processors and memory units. In an embodiment, the network node is at a server. In another embodiment, the network node may be a virtual device, distributed across different physical hardware.
Figure 22 is a schematic diagram illustrating the logical units of a control unit 2200 according to an embodiment. The control unit comprises a link usage determining unit 1901 for receiving a link usage for each of the links, a node usage determining unit 1902 for receiving a node usage for each of the nodes, a link cost determining unit 2203 for determining a link cost for each of the links based on the link usage, a node cost determining unit 2204 for determining a node cost for each of the nodes based on the node usage, an identifying unit 2205 for identifying a minimum cost route between a first endpoint and a second endpoint based on the costs of the nodes and links, and a distributing unit for distributing routing instructions.
Figure 23 is a schematic diagram illustrating further logical units that may be provided in the control unit in an embodiment. The control unit may further comprise a node capacity receiving unit 2301 for receiving a capacity of a node, a link capacity receiving unit 2302 for receiving a capacity of a link, a normalised link usage determining unit 2303 for determining a normalised link usage based on the capacity of the link and the usage of the link, and a normalised node usage determining unit 2304 for determining a normalised node usage based on the capacity of the node and the usage of the node.
Figure 24 is a schematic diagram illustrating further logical units that may be provided in the control unit in an embodiment. The control unit may further comprise a link hop probability determining unit 2401 for determining a link hop probability for each link connected to a node, a weighting determining unit 2402 for determining, based on the link hop probability, a weighting for each link connected to the node, a calculating unit 2403 for calculating a weighted node cost for each connected link based on the weighting and the node cost, an identifying unit 2404 for identifying a first node at a first endpoint of the link and a second node at a second endpoint of the link, and a combined link cost calculating unit 2405 for calculating a combined link cost by summing the link cost for the link, a first weighted node cost associated with the first node and a second weighted node cost associated with the second node.
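The combined link cost described for Figure 24 can be sketched as follows. As an assumption for the example, the hop probability of a link at a node is taken to be the reciprocal of the node's degree (a uniform next-hop assumption), so that each link receives an equal share of that node's cost; in practice the weighting may instead be derived from observed traffic.

```python
# Illustrative sketch of the Figure 24 units: hop probabilities weight how much
# of each endpoint node's cost is charged to a link. The hop probability is
# assumed uniform (1 / degree of the node) purely for this example.
from collections import defaultdict

def combined_link_costs(link_costs: dict, node_costs: dict) -> dict:
    """link_costs maps (a, b) -> link cost; node_costs maps node -> node cost."""
    degree = defaultdict(int)
    for a, b in link_costs:
        degree[a] += 1
        degree[b] += 1

    combined = {}
    for (a, b), link_cost in link_costs.items():
        hop_prob_a = 1.0 / degree[a]              # assumed uniform hop probability
        hop_prob_b = 1.0 / degree[b]
        weighted_a = hop_prob_a * node_costs[a]   # weighted node cost, first endpoint
        weighted_b = hop_prob_b * node_costs[b]   # weighted node cost, second endpoint
        combined[(a, b)] = link_cost + weighted_a + weighted_b
    return combined

if __name__ == "__main__":
    links = {("s1", "s2"): 1.0, ("s2", "s3"): 1.0, ("s1", "s3"): 2.5}
    nodes = {"s1": 0.5, "s2": 3.0, "s3": 0.5}
    print(combined_link_costs(links, nodes))
```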
Figure 25 is a schematic diagram illustrating further logical units that may be provided in the control unit in an embodiment. The control unit may further comprise a topology information receiving unit 2501 for receiving network topology information from one or more nodes or from one or more network controllers, a topology determining and representing unit 2502 for determining a network topology and representing the topology as a graph, the graph comprising graph edges representing links and graph nodes representing network nodes, a node associating unit 2503 for associating received node usages with respective graph nodes and a link associating unit 2504 for associating received link usages with respective graph edges.
Figure 26 is a schematic diagram illustrating the logical units that are present in a network node according to an embodiment. The node may comprise a node usage monitoring unit 2601, a link usage monitoring unit 2602, a node usage reporting unit 2603, a link usage reporting unit 2604 and a routing instruction receiving unit 2605 for receiving routing instructions.
The present disclosure has mainly been described above with reference to a few embodiments. However, as is readily appreciated by a person skilled in the art, other embodiments than the ones disclosed above are equally possible within the scope of the present disclosure, as defined by the appended claims.

Claims (25)

  1. 1) A method of routing packets through a network, the network comprising a plurality of nodes, a plurality of links connecting the nodes, and a control unit for determining routing of the packets, the method comprising, at the control unit: receiving a link usage for each of the links, the link usages being received from one or more of the nodes; receiving a node usage from each of the nodes; determining a link cost for each link based on the link usage; determining a node cost for each node based on the node usage; identifying a minimum cost route between a first endpoint and a second endpoint based on the costs of the nodes and links; and distributing at least one routing instruction based on the minimum cost route to at least one node.
  2. 2) A method according to claim 1, further comprising: at the control unit: for each link: receiving a capacity of the link; determining a normalised link usage based on the capacity of the link and the usage of the link; and for each node: receiving a capacity of the node; determining a normalised node usage based on the capacity of the node and the usage of the node, wherein the step of determining the link cost for each link is based on the respective normalised link usage, and the step of determining the node cost for each node is based on the respective normalised node usage.
  3. 3) A method according to claim 1 or claim 2, the method further comprising: for each node: determining a link hop probability for each link connected to the node; determining, based on the link hop probability, a weighting for each link connected to the node; and calculating a weighted node cost for each connected link based on the weighting and the node cost; for each link: identifying a first node at a first endpoint of the link and a second node at a second endpoint of the link; calculating a combined link cost by summing the link cost for the link, a first weighted node cost associated with the first node and a second weighted node cost associated with the second node, wherein the step of identifying a minimum cost route between a first endpoint and a second endpoint comprises running a routing algorithm based on the combined link costs for each of the links.
  4. 4) A method according to any preceding claim, further comprising: receiving network topology information from one or more nodes or from one or more network controllers; determining a network topology; representing the topology as a graph, the graph comprising graph edges representing links and graph nodes representing network nodes; associating received node usages with respective graph nodes; associating received link usages with respective graph edges; and wherein the step of identifying a minimum cost route comprises implementing a routing algorithm based on the graph.
  5. 5) A method according to any preceding claim, wherein a same metric is used to represent node and link costs, the metric comprising a queuing time or queue length, the queueing time or queue length being determined based on a queueing model and a normalized node or link usage.
  6. 6) A method according to any preceding claim, wherein the network is a software defined network.
  7. 7) A method according to any preceding claim, wherein at least one node comprises an OpenVSwitch.
  8. 8) A method according to any preceding claim, wherein the network is a software defined network, and wherein the method further comprises converting the routing instructions into flows, each flow comprising a source IP address, identifying a destination IP address, identifying a port at a following node, and creating the flow.
  9. 9) A method at a network node comprising a switch or router, in a network comprising a plurality of nodes, a plurality of links connecting the nodes, and a control unit for determining routing of the packets, the method comprising: monitoring a node usage for the node; monitoring a link usage for at least one link connected to the node; reporting the node usage to a network node; reporting the link usage of the at least one link to the network node; and receiving at least one routing instruction from the control unit.
  10. 10) A method according to Claim 9, wherein the reporting is performed by a push agent, when usage changes.
  11. 11) A method according to Claim 9 or Claim 10, wherein the node is an OpenVSwitch.
  12. 12) A control unit for a network comprising a plurality of nodes and a plurality of links between the nodes, the control unit comprising a transmitter, a receiver, a processor and a memory, the control unit being configured to: receive a link usage for each of the links, the link usages being received from one or more of the nodes; receive a node usage from each of the nodes; determine a link cost for each link based on the link usage; determine a node cost for each node based on the node usage; identify a minimum cost route between a first endpoint and a second endpoint based on the costs of the nodes and links; and distribute at least one routing instruction based on the minimum cost route to at least one node.
  13. 13) A control unit according to claim 12, further configured to: receive a capacity of each link; determine a normalised link usage based on the capacity of the link and the usage of the link; receive a capacity of each node; and determine a normalised node usage based on the capacity of the node and the usage of the node, wherein the control unit is further configured to determine the link cost for each link based on the respective normalised link usage and to determine the node cost for each node based on the respective normalised node usage.
  14. 14) A control unit according to claim 12 or claim 13, further configured to: determine, for each node, a link hop probability for each link connected to the node; determine, based on the link hop probability, a weighting for each link connected to the node; calculate a weighted node cost for each connected link based on the weighting and the node cost; identify, for each link, a first node at a first endpoint of the link and a second node at a second endpoint of the link; and calculate a combined link cost by summing the link cost for the link, a first weighted node cost associated with the first node and a second weighted node cost associated with the second node, wherein the step of identifying a minimum cost route between a first endpoint and a second endpoint comprises running a routing algorithm based on the combined link costs for each of the links.
  15. 15) A control unit according to any of claims 12 to 14, further configured to: receive network topology information from one or more nodes or from one or more network controllers; determine a network topology; represent the topology as a graph, the graph comprising graph edges representing links and graph nodes representing network nodes; associate received node usages with respective graph nodes; associate received link usages with respective graph edges; and wherein the control unit is further configured to identify a minimum cost route by implementing a routing algorithm based on the graph.
  16. 16) A control unit according to any of claims 12 to 15, further configured to use a same metric to represent node and link costs, the metric comprising a queuing time or queue length, the queueing time or queue length being determined based on a queueing model and a normalized node or link usage.
  17. 17) A control unit according to any of claims 12 to 16, configured for use in a software defined network.
  18. 18) A control unit according to any of claims 12 to 17, configured for use in a network wherein at least one node comprises an OpenVSwitch.
  19. 19) A control unit according to any of claims 12 to 18, configured for use in a software defined network, and further configured to convert the routing instructions into flows, each flow comprising a source IP address, identifying a destination IP address, identifying a port at a following node and creating the flow.
  20. 20) A network node comprising a switch or router, for use in a network comprising a plurality of nodes, a plurality of links connecting the nodes, and a control unit for determining routing of the packets, the node being connected to one or more links, the node being configured to: monitor a node usage for the node; monitor a link usage for at least one link connected to the node; report the node usage to the control unit; report the link usage of the at least one link connected to the network node; and receive at least one routing instruction from the control unit.
  21. 21) A network node according to Claim 20, wherein the reporting is performed by a push agent, when usage changes.
  22. 22) A network node according to Claim 20 or Claim 21, wherein the node comprises an OpenVSwitch.
  23. 23) A computer program, comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out a method according to any one of claims 1 to 11.
  24. 24) A computer program product comprising a computer program as claimed in claim 23.
  25. 25) A carrier containing the computer program product of claim 24, wherein the carrier optionally includes an electrical signal, an optical signal, a radio signal, a magnetic tape or disk, an optical disk or a memory stick.
GB1817504.2A 2018-10-27 2018-10-27 Software defined networks Active GB2578453B8 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1817504.2A GB2578453B8 (en) 2018-10-27 2018-10-27 Software defined networks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1817504.2A GB2578453B8 (en) 2018-10-27 2018-10-27 Software defined networks

Publications (4)

Publication Number Publication Date
GB201817504D0 GB201817504D0 (en) 2018-12-12
GB2578453A true GB2578453A (en) 2020-05-13
GB2578453B GB2578453B (en) 2021-07-07
GB2578453B8 GB2578453B8 (en) 2021-08-04

Family

ID=64560351

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1817504.2A Active GB2578453B8 (en) 2018-10-27 2018-10-27 Software defined networks

Country Status (1)

Country Link
GB (1) GB2578453B8 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112203300B (en) * 2020-08-07 2022-06-17 杭州电子科技大学 Routing performance evaluation method for wireless communication node of power internet of things

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2237495A1 (en) * 2009-03-31 2010-10-06 BRITISH TELECOMMUNICATIONS public limited company Path generation in a packet network
WO2016123040A1 (en) * 2015-01-30 2016-08-04 Hewlett-Packard Development Company, L.P. Adjusted spanning tree protocol path cost values in a software defined network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"2015 IEEE International Conference on Communications (ICC)", 8-12 June 2015, IEEE, A Craig et al, "Load balancing for multicast traffic in SDN using real-time link cost modification", pages 5789-5795 *
"2018 IEEE Global Communications Conference (GLOBECOM)", 9-13 December 2018, IEEE, S Ghosh et al, "Energy-aware IP routing over SDN". *
"IEEE INFOCOM 2016 - The 35th Annual IEEE International Conference on Computer Communications", 10-14 April 2016, IEEE, M Huang et al, "Dynamic routing for network throughput maximization in software-defined networks" *

Also Published As

Publication number Publication date
GB2578453B (en) 2021-07-07
GB201817504D0 (en) 2018-12-12
GB2578453B8 (en) 2021-08-04

Similar Documents

Publication Publication Date Title
Kaur et al. A comprehensive survey of service function chain provisioning approaches in SDN and NFV architecture
US11558293B2 (en) Network controller subclusters for distributed compute deployments
US9887777B2 (en) Affinity modeling in a data center network
US9461877B1 (en) Aggregating network resource allocation information and network resource configuration information
Ghaznavi et al. Service function chaining simplified
EP2774048B1 (en) Affinity modeling in a data center network
US11611517B2 (en) Tenant-driven dynamic resource allocation for virtual network functions
JP7288980B2 (en) Quality of Service in Virtual Service Networks
EP2748990A1 (en) Network virtualization apparatus and method with scheduling capabilities
CN110278139B (en) Method for forwarding packets in a computer network, network device and storage medium
US7944923B2 (en) Method and system for classifying network traffic
US10164845B2 (en) Network service aware routers, and applications thereof
Lin et al. Jointly optimized QoS-aware virtualization and routing in software defined networks
Chang et al. Hydra: Leveraging functional slicing for efficient distributed SDN controllers
Lee et al. Enhancing infiniband with openflow-style sdn capability
Muñoz et al. SDN orchestration and virtualization of heterogeneous multi-domain and multi-layer transport networks: The STRAUSS approach
GB2578453A (en) Software defined networks
US8964596B1 (en) Network service aware routers, and applications thereof
US20220121504A1 (en) Methods for event prioritization in network function virtualization using rule-based feedback
CN110300073A (en) Cascade target selecting method, polyplant and the storage medium of port
US11245630B2 (en) Network system and network band control management method
EP4075751B1 (en) Network controller horizontal scaling for network device configuration sessions management
Chang et al. Exploring functional slicing in the design of distributed SDN controllers