WO2018197924A1 - Method and system to detect virtual network function (VNF) congestion


Info

Publication number
WO2018197924A1
Authority
WO
WIPO (PCT)
Prior art keywords
vnf
packet
process performance
values
congestion state
Application number
PCT/IB2017/052348
Other languages
French (fr)
Inventor
Ashvin Lakshmikantha
Vinayak Joshi
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Application filed by Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/IB2017/052348
Publication of WO2018197924A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/64 Routing or path finding of packets in data switching networks using an overlay routing layer
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/16 Arrangements for maintenance, administration or management of data switching networks using machine learning or artificial intelligence
    • H04L 41/40 Arrangements for maintenance, administration or management of data switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L 43/20 Arrangements for monitoring or testing data switching networks, the monitoring system or the monitored elements being virtualised, abstracted or software-defined entities, e.g. SDN or NFV
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/11 Identifying congestion


Abstract

Methods for detecting a congestion state of a virtual network function (VNF) are disclosed. In one embodiment, the method is implemented in an electronic device, where it is determined that a VNF implemented in a network device is in a congestion state based on packet process performance of the VNF, the packet process performance corresponding to a service level agreement; from the network device, values of a plurality of VNF status parameters corresponding to measurements of the packet process performance are obtained; and a plurality of coefficients is derived to apply to the values of the plurality of the VNF status parameters to arrive at an indication of the congestion state. Then a subsequent congestion state of the VNF is determined based on subsequent values of the plurality of the VNF status parameters and the plurality of coefficients, and a notification is provided.

Description

METHOD AND SYSTEM TO DETECT VIRTUAL NETWORK FUNCTION (VNF)
CONGESTION
TECHNICAL FIELD
[0001] Embodiments of the invention relate to the field of networking; and more specifically, to a method and system for detecting virtual network function congestion.
BACKGROUND
[0002] The recent advances in software engineering and high-performance commodity servers facilitate virtualization of network functions (NFs). NFs traditionally delivered on proprietary and application-specific equipment now can be realized in software running on generic server hardware (e.g., commercial off-the-shelf (COTS) servers). The technology, using one or more virtual network functions (VNFs) and referred to as network function virtualization (NFV), is gaining popularity with network operators.
[0003] A VNF may be implemented at various parts of a network, such as at a serving gateway (S-GW), a packet data network gateway (P-GW), a serving GPRS (general packet radio service) support node (SGSN), a gateway GPRS support node (GGSN), a broadband remote access server (BRAS), and a provider edge (PE) router. VNFs can also be implemented to support various services (also called appliances, middleboxes, or service functions), such as content filter, deep packet inspection (DPI), logging/metering/charging/advanced charging, firewall (FW), virus scanning (VS), intrusion detection and prevention (IDP), and network address translation (NAT), etc. The flexibility offered by VNFs allows more dynamic deployments of traditional network functions, in various locations such as the operator's cloud or even central offices and points of presence (POPs) where a smaller scale data center may reside. For the purpose of load balancing or reducing latency, one type of VNF may be instantiated and hosted at multiple locations providing the same functions (i.e., multiple instances of the same network function). A VNF implemented in a network element utilizes resources of the network element, and the resource utilization of the VNF can be measured.
[0004] For a subscriber of a network utilizing VNFs, the subscriber often reaches a service level agreement (SLA) with an operator of the network. To satisfy the SLA, one may measure packet process performance of one or more VNFs. The measurement is often performed by an application coupled to a controller of a network. For example, in a software-defined networking (SDN) architecture, an SDN system includes one or more SDN controllers and a set of network elements managed by the SDN controllers. An application coupled to the SDN controllers may measure packet process performance of one or more VNFs implemented in one or more network elements, without requiring physical access to the hardware implementing the network elements. However, measuring the packet process performance of a VNF typically involves significantly more operations than measuring resource utilization of the VNF.
SUMMARY
[0005] Methods for detecting a congestion state of a virtual network function (VNF) are disclosed. In one embodiment, the method is implemented in an electronic device, where it is determined that a VNF implemented in a network device is in a congestion state based on packet process performance of the VNF, the packet process performance corresponding to a service level agreement; from the network device, values of a plurality of VNF status parameters corresponding to measurements of the packet process performance are obtained; and a plurality of coefficients is derived to apply to the values of the plurality of the VNF status parameters to arrive at an indication of the congestion state. Then a subsequent congestion state of the VNF is determined based on subsequent values of the plurality of the VNF status parameters and the plurality of coefficients, and a notification indicating the subsequent congestion state of the VNF is provided.
[0006] Electronic devices are disclosed to detect a congestion state of a VNF. In one embodiment, the electronic device comprises a non-transitory machine-readable storage medium to store instructions and a processor coupled to the non-transitory machine-readable storage medium to process the stored instructions to determine that a virtual network function (VNF) implemented in a network device is in a congestion state based on packet process performance of the VNF, the packet process performance corresponding to a service level agreement; to obtain, from the network device, values of a plurality of VNF status parameters corresponding to measurements of the packet process performance; derive a plurality of coefficients to apply to the values of the plurality of the VNF status parameters to arrive at an indication of the congestion state; determine a subsequent congestion state of the VNF based on subsequent values of the plurality of the VNF status parameters and the plurality of coefficients; and provide a notification indicating the subsequent congestion state of the VNF.
[0007] Non-transitory machine-readable storage media for detecting a congestion state of a VNF are disclosed. In one embodiment, a non-transitory machine-readable storage medium provides instructions which, when executed by a processor of an electronic device, cause said processor to perform operations comprising: determining that a virtual network function (VNF) implemented in a network device is in a congestion state based on packet process performance of the VNF, the packet process performance corresponding to a service level agreement; obtaining, from the network device, values of a plurality of VNF status parameters corresponding to measurements of the packet process performance; deriving a plurality of coefficients to apply to the values of the plurality of the VNF status parameters to arrive at an indication of the congestion state; determining a subsequent congestion state of the VNF based on subsequent values of the plurality of the VNF status parameters and the plurality of coefficients; and providing a notification indicating the subsequent congestion state of the VNF.
[0008] Embodiments of the disclosed techniques aim to provide an efficient way for an electronic device to detect a congestion state of a virtual network function (VNF) based on values of a plurality of VNF status parameters, where a plurality of coefficients is derived based on corresponding measurements of packet process performance and determined congestion states.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] The invention may best be understood by referring to the following description and accompanying drawings that are used to illustrate embodiments of the invention. In the drawings:
[0010] Figure 1 illustrates a software-defined networking (SDN) system according to one embodiment of the invention.
[0011] Figure 2A illustrates collection of indicators of VNF resource utilization according to one embodiment of the invention.
[0012] Figure 2B illustrates VNF congestion detection based on VNF status parameters and packet process performance measurements.
[0013] Figure 3 illustrates machine learning units for VNF congestion detection according to one embodiment of the invention.
[0014] Figure 4 illustrates a machine learning process for VNF congestion detection according to one embodiment of the invention.
[0015] Figure 5 illustrates the iterations of derivation of the plurality of coefficients to be applied to the VNF status parameters according to one embodiment of the invention.
[0016] Figure 6 is a flow diagram illustrating a machine learning process for VNF congestion detection according to one embodiment of the invention.
[0017] Figure 7 is a flow diagram illustrating a machine learning process for updating VNF congestion detection according to one embodiment of the invention.
[0018] Figure 8A illustrates connectivity between network devices (NDs) within an exemplary network, as well as three exemplary implementations of the NDs, according to some embodiments of the invention.
[0019] Figure 8B illustrates an exemplary way to implement a special-purpose network device according to some embodiments of the invention.
[0020] Figure 8C illustrates various exemplary ways in which virtual network elements (VNEs) may be coupled according to some embodiments of the invention.
[0021] Figure 8D illustrates a network with a single network element (NE) on each of the NDs, and within this straight forward approach contrasts a traditional distributed approach (commonly used by traditional routers) with a centralized approach for maintaining reachability and forwarding information (also called network control), according to some embodiments of the invention.
[0022] Figure 8E illustrates the simple case of where each of the NDs implements a single NE, but a centralized control plane has abstracted multiple of the NEs in different NDs into (to represent) a single NE in one of the virtual network(s), according to some embodiments of the invention.
[0023] Figure 8F illustrates a case where multiple VNEs are implemented on different NDs and are coupled to each other, and where a centralized control plane has abstracted these multiple VNEs such that they appear as a single VNE within one of the virtual networks, according to some embodiments of the invention.
[0024] Figure 9 illustrates a general-purpose control plane device with centralized control plane (CCP) software 950, according to some embodiments of the invention.
DETAILED DESCRIPTION
[0025] The following description describes methods and apparatus for detecting virtual network function (VNF) congestion. In the following description, numerous specific details such as logic implementations, opcodes, means to specify operands, resource partitioning/sharing/duplication implementations, types and interrelationships of system components, and logic partitioning/integration choices are set forth in order to provide a more thorough understanding of the present invention. It will be appreciated, however, by one skilled in the art that the invention may be practiced without such specific details. In other instances, control structures, gate level circuits and full software instruction sequences have not been shown in detail in order not to obscure the invention. Those of ordinary skill in the art, with the included descriptions, will be able to implement appropriate functionality without undue experimentation.
[0026] Terms
[0027] References in the specification to "one embodiment," "an embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
[0028] In figures, bracketed text and blocks with dashed borders (e.g., large dashes, small dashes, dot-dash, and dots) may be used herein to illustrate optional operations that add additional features to embodiments of the invention. However, such notation should not be taken to mean that these are the only options or optional operations, and/or that blocks with solid borders are not optional in certain embodiments of the invention. Also in the figures, reference numbers are used to refer to various elements or components; the same reference numbers in different figures indicate that the elements or components have the same or similar functionalities.
[0029] In the following description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. "Coupled" is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other. "Connected" is used to indicate the establishment of communication between two or more elements that are coupled with each other.
[0030] An electronic device stores and transmits (internally and/or with other electronic devices over a network) code (which is composed of software instructions and which is sometimes referred to as computer program code or a computer program) and/or data using machine-readable media (also called computer-readable media), such as machine-readable storage media (e.g., magnetic disks, optical disks, read only memory (ROM), flash memory devices, phase change memory) and machine-readable transmission media (also called a carrier) (e.g., electrical, optical, radio, acoustical or other form of propagated signals - such as carrier waves, infrared signals). Thus, an electronic device (e.g., a computer) includes hardware and software, such as a set of one or more processors coupled to one or more machine-readable storage media to store code for execution on the set of processors and/or to store data. For instance, an electronic device may include non-volatile memory containing the code since the non-volatile memory can persist code/data even when the electronic device is turned off (when power is removed), and while the electronic device is turned on that part of the code that is to be executed by the processor(s) of that electronic device is typically copied from the slower non-volatile memory into volatile memory (e.g., dynamic random access memory (DRAM), static random access memory (SRAM)) of that electronic device. Typical electronic devices also include a set of one or more physical network interface(s) to establish network connections (to transmit and/or receive code and/or data using propagating signals) with other electronic devices. One or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware.
[0031] A network device (ND) is an electronic device that communicatively interconnects other electronic devices on the network (e.g., other network devices, end-user devices). Some network devices are "multiple services network devices" that provide support for multiple networking functions (e.g., routing, bridging, switching, Layer 2 aggregation, session border control, Quality of Service, and/or subscriber management), and/or provide support for multiple application services (e.g., data, voice, and video). As explained in more detail herein below in relation to Figures 8-9, a network device may implement a set of network elements in some embodiments; and in alternative embodiments, a single network element may be implemented by a set of network devices.
[0032] Software-defined Networking System (SDN) and VNF Congestion Detection
[0033] In a software-defined networking (SDN) system, packets are forwarded through traffic flows (or simply referred to as flows), and a network element forwards the flows based on the network element's forwarding tables, which are managed by one or more network controllers (also referred to as SDN controllers; the two terms are used interchangeably in this specification). A flow may be defined as a set of packets whose headers match a given pattern of bits. A flow may be identified by a set of attributes embedded in one or more packets of the flow. An exemplary set of attributes includes a 5-tuple (source and destination IP addresses, a protocol type, and source and destination TCP/UDP ports). Service chaining in an SDN system is a way to stitch together multiple customer-specific services and to lead the traffic flow through the right path (a service chain) in the SDN system.
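For concreteness, the 5-tuple above might be represented as follows. This is a minimal illustrative sketch (Python), and all names in it are assumptions rather than anything defined in this document.

```python
# Minimal sketch of a 5-tuple flow key; field names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    src_ip: str    # source IP address
    dst_ip: str    # destination IP address
    protocol: int  # IP protocol number, e.g., 6 for TCP, 17 for UDP
    src_port: int  # source TCP/UDP port
    dst_port: int  # destination TCP/UDP port

# Packets whose headers match this key belong to the same flow.
flow = FlowKey("10.0.0.1", "10.0.0.2", 6, 51512, 443)
```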
[0034] Figure 1 illustrates a software-defined networking (SDN) system according to one embodiment of the invention. A network 100, implementing SDN architecture, includes a network controller 120 managing a plurality of network elements, including network elements 132 and 134. These network elements may be implemented as OpenFlow switches when they comply with OpenFlow standards such as "OpenFlow Switch Specification," the latest version 1.5.1 being dated March 2015.
[0035] The network elements 132 and 134 may communicate through a network cloud 190, which may contain traditional network elements such as routers/switches or other SDN network elements. The network elements 132 and 134, and the traditional network elements such as routers/switches or other SDN network elements in the network cloud 190, may host service functions such as services 142 and 144. The service functions process subscribers' traffic by providing services such as content filter, deep packet inspection (DPI), logging/metering/charging/advanced charging, firewall (FW), virus scanning (VS), intrusion detection and prevention (IDP), network address translation (NAT), etc. These services may be hosted in dedicated physical hardware, or in virtual machines (VMs) associated with network elements (e.g., residing in or coupled to the network elements) in the network cloud 190. Also, network elements 132 and 134 may host one or more of these or other service functions.
[0036] In service chaining (also referred to as service function chaining (SFC)), incoming packets may be classified based on the packet header fields (e.g., a flow identifier) by a network element, and the packets may be sent to one or more service functions associated with the network element. After the one or more service functions in a service chain process the packets, the network element forwards the packets to the next network element for subsequent service functions in the service chain until the packets are processed by all service functions of the service chain. The packet processing in a service function may include one or more of the following: 1. Packets are observed and sent out unmodified (e.g. DPI); 2. Packets headers are modified and sent out (e.g. NAT); 3. Packets are encapsulated inside some encapsulations and sent out (e.g. general packet radio service (GPRS) tunneling protocol (GTP) encapsulation); and 4. Packets are dropped (intrusion detection).
[0037] Each of the one or more service functions in a service chain may be implemented as one or more of virtual network functions (VNFs) associated with a network element, e.g., a VNF implemented within or coupled to the network element. The network element may instantiate a VNF with the goal of optimizing resource allocation in the network 100 for the service chain. Through the coordination of a network controller such as the network controller 120, a network such as the network 100 may allocate VNFs to various network elements such as SDN network elements and traditional network elements.
[0038] The VNF allocation may be static, based on known or estimated traffic patterns and/or traffic loads of traffic flows. Yet optimizing VNF allocation in a system is often an NP (non-deterministic polynomial-time) hard problem, and heuristics to approximate the optimization are computing intensive. Additionally, a static VNF allocation is optimal only for a fixed traffic load and/or pattern. Traffic load and/or pattern may change over time, rendering the static VNF allocation obsolete. Thus, static VNF allocation may not be ideal in many cases. Instead, it may be more practical to observe VNF resource utilization (which may be indicated by VNF status parameter(s)) of various VNFs in a system, and allocate/remove VNFs dynamically. For example, a network element may spawn more VMs to instantiate additional VNFs or increase resource allocation for the existing VMs hosting a VNF when the VNF encounters a traffic load surge, or the network controller may divert traffic load to a VNF in another network element where the VNF in the other network element is not over-utilized.
[0039] The dynamic VNF allocation may be tied to a service level agreement (SLA) that a subscriber of a network signs with the operator of the network. The subscriber (also referred to as a client or a tenant) may require a certain level of quality of service (QoS) for one or more services that the network provides to the subscriber. The QoS requirement may include one or more thresholds for packet delay, packet loss, packet jitter, and other packet process performance measures. That is, the packets in traffic flows of the subscriber are required to be processed within specified boundaries of specified packet process performance measures. For example, the subscriber may require the packet delay of a traffic flow to be within 50 milliseconds, or packet loss (also referred to as packet drop, or packet discard) of the traffic flow to be within 0.5% of total packets. When a traffic flow is processed through a service chain, each VNF of the service chain may contribute to the overall packet process performance of the traffic flow. The network may allocate the VNFs so that the required QoS for the traffic flow as required by the SLA of the subscriber is satisfied.
[0040] In dynamic VNF allocation, one may measure packet process performance of a VNF and determine whether one or more measurements are over the corresponding thresholds. Once one or more threshold-crossing events are detected, the corresponding VNF may be deemed to be in a congested state, and a remedial process such as load balancing may be performed so that the VNF is no longer congested and the SLA may be honored.
[0041] The VNF congestion detection may be performed remotely at a network controller. As illustrated, a VNF congestion detector 124 is coupled to the network controller 120. The VNF congestion detector 124 may be implemented as an application running on the network controller 120 (e.g., through a north bound interface of the network controller 120). The application may be referred to as an orchestrator of VNFs. In an alternative embodiment, the VNF congestion detector 124 may be implemented in the network controller 120 (e.g., within a centralized control plane as detailed herein below in relation to Figures 8-9).
[0042] The detection of whether a VNF is in a congestion state based on packet process performance such as packet delay is not trivial. For example, two-way active measurement protocol (TWAMP) is a commonly used path delay measurement protocol in traditional routers and switches. Yet TWAMP does not measure the delay experienced by the real traffic on a path; instead, TWAMP measures the delay experienced by a test stream along the same path. The assumption is that the measurement of the test stream is close to the delay experienced by the real traffic. However, for a service chain, the packet delay is a sum of not only the switching, processing, and transport delays incurred by both the real traffic and the test stream, but also the time spent in service functions. While the switching, processing, and transport delays are on the order of a few tens of microseconds, service function processing time could be on the order of several milliseconds or higher. Thus, in order to get a true estimate of the packet delay faced by a traffic flow (a type of real traffic), the test stream must not only pass through the network path, but also be processed with similar complexity in service functions as the traffic flow. This requirement makes generating the test stream hard. Additionally, service functions are often stateful and keep the state of packets processed by the service functions. If packets of the test stream are made to traverse the same service functions and to be processed similarly, the packets would pollute the states maintained by the service functions and may cause other networking issues. To resolve these and other issues, elaborate approaches have been proposed to measure packet delay through VNFs. For example, U.S. Patent Application 14/852,293, entitled "Method and System for Delay Measurement of a Traffic Flow in a Software-defined Network (SDN) System", filed on September 11, 2015, discloses approaches to measure packet delay through a VNF.
[0043] Similarly, determining whether a VNF is in a congestion state based on other packet process performance measures such as packet loss and packet jitter can be challenging too. For a VNF, an interface may contain a counter to count packet loss at the VNF. Yet an SLA is typically associated with a traffic flow of a particular subscriber; thus, the packet loss of all traffic flows processed by the VNF needs to be further categorized, and only the packet loss of the traffic flow of the subscriber is counted against one or more packet loss thresholds.
[0044] Because packet process performance measurements take significant computing resources to obtain, it is desirable not to perform such measurements at a high frequency (e.g., every few milliseconds). Rather, a network operator may prefer to perform packet process performance measurements of a VNF every few seconds or minutes so that no significant computing resource is taken to comply with an SLA.
[0045] In contrast to the packet process performance measurements of a VNF, resource utilization of the VNF, indicated by VNF status parameters, is much easier to obtain. For example, at a given moment for a VNF, one may obtain a processor utilization measurement (e.g., reading in percentage of central processor unit (CPU) usage), a memory utilization measurement (e.g., reading in percentage of random access memory (RAM) usage), a bandwidth utilization measurement (e.g., reading in bits per second of a link utilized by the VNF), and/or a storage utilization measurement (e.g., reading in megabytes (MB) utilized by the VNF). The resource utilization of a VNF may be measured at high frequency (e.g., every few milliseconds) as such measurements do not consume significant computing resources.
[0046] Figure 2A illustrates collection of indicators (e.g., measurements) of VNF resource utilization according to one embodiment of the invention. Figure 2A is similar to Figure 1, with certain aspects omitted to avoid obscuring other aspects of Figure 2A. In Figure 2A, the VNF congestion detector 124 is implemented within the network controller 120 (an embodiment alternative to Figure 1), more specifically, within the centralized control plane 122. The VNF congestion detector 124 determines whether VNFs of the network elements are congested.
[0047] Each network element, including the network elements 132 and 134 and other traditional network elements (not shown), may implement one or more VNFs. For example, the network elements 132 and 134 implement VNFs 152 and 162 respectively. The resource utilization of the VNFs may be measured and the measurements may be stored in a database. The database may be within a VNF or outside of the VNF. For example, a VNF status 154, stored in the database within the VNF 152, includes resource utilization measurements of the VNF 152, while a VNF status 164, stored in a database outside of the VNF 162, includes resource utilization measurements of the VNF 162. The VNF status of a VNF may include values of one or more VNF status parameters (which may also be referred to as VNF resource utilization parameters) such as a processor utilization measurement (e.g., reading in percentage of processor usage), a memory utilization measurement (e.g., reading in percentage of memory usage), a bandwidth utilization measurement (e.g., reading in bits per second of a link utilized by the VNF), and/or a storage utilization measurement (e.g., reading in megabytes (MB) utilized by the VNF). The processor may be a central processor unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), a microcontroller (MCU), or another processor unit, and the memory may be any type of volatile memory of a network device at which the VNF resides.
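As a minimal sketch of how such a VNF status record might look when collected, consider the following; the class and the database query API here are assumptions, not the interface described in this document.

```python
# Minimal sketch of a VNF status record holding the four example resource
# utilization parameters; names and the query API are assumptions.
from dataclasses import dataclass

@dataclass
class VnfStatus:
    cpu_pct: float     # processor utilization, percent of processor usage
    mem_pct: float     # memory utilization, percent of memory usage
    bw_bps: float      # bandwidth utilization, bits per second on the link
    storage_mb: float  # storage utilization, megabytes used by the VNF

def poll_vnf_status(database) -> VnfStatus:
    """Read the newest stored measurements from the database that holds
    the VNF status (which may reside within or outside the VNF)."""
    row = database.latest()  # hypothetical query for the newest record
    return VnfStatus(row["cpu"], row["mem"], row["bw"], row["storage"])
```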
[0048] By specifying thresholds for each VNF status parameter, the operator of the network may deem a VNF to be in a congested state when the values of one or more of the VNF status parameters are over the thresholds. The operator may specify an "OR" condition, declaring that the VNF is in a congested state when any of the values of the one or more of the VNF status parameters is over its corresponding threshold(s) (some VNF status parameters may have multiple thresholds corresponding to different severity levels of congestion). Alternatively, the operator may specify an "AND" condition, declaring that the VNF is in a congested state when all of the values of the one or more of the VNF status parameters are over their corresponding threshold(s). The "OR" and "AND" based congestion determinations appear to be at two extremes of the spectrum: the "OR" based determination may claim more congestion states than the ones affecting the SLA of a subscriber (causing false alarms), but the "AND" based determination may claim fewer congestion states than the ones affecting the SLA (causing under-reporting).
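To illustrate the two conditions, here is a sketch under assumed names (not the patent's implementation); note how the same readings trigger the "OR" determination but not the "AND" determination, which is exactly the false-alarm/under-reporting trade-off described above.

```python
# Sketch of the "OR" and "AND" threshold-based congestion determinations.
def congested_or(values: dict, thresholds: dict) -> bool:
    # "OR": congested when ANY parameter exceeds its threshold.
    return any(values[p] > thresholds[p] for p in thresholds)

def congested_and(values: dict, thresholds: dict) -> bool:
    # "AND": congested only when ALL parameters exceed their thresholds.
    return all(values[p] > thresholds[p] for p in thresholds)

values = {"cpu_pct": 85.0, "mem_pct": 40.0}
thresholds = {"cpu_pct": 80.0, "mem_pct": 75.0}
print(congested_or(values, thresholds))   # True: CPU alone trips the alarm
print(congested_and(values, thresholds))  # False: memory is still below
```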
[0049] Instead of the simplistic approaches of "AND" and "OR" of VNF resource utilization determinations, another approach is to assume a relationship between VNF status parameters, and once the relationship between the VNF status parameters meets a criterion, the VNF congestion state is determined. Figure 2B illustrates VNF congestion detection based on VNF status parameters and packet process performance measurements.
[0050] In Figure 2B, VNF congestion detection may be performed based on a linear relationship between processor utilization and memory utilization. In this example, the memory utilization 250 is measured in percentage of memory usage by a VNF in the range of 0% to 100%. Similarly, the processor utilization 252 is measured in percentage of processor usage by the VNF in the range of 0% to 100%. Based on the assumed relationship between the percentages of memory utilization and processor utilization, the triangle area at the bottom left of Figure 2B is the non-congestion state, as reference 290 indicates. That is, when the measurements of the percentages of memory utilization and processor utilization fall within the triangle area, the VNF is deemed not congested; otherwise, the VNF is deemed to be in a congestion state. Yet the assumed relationship between the percentages of memory utilization and processor utilization may not be true, and other factors such as bandwidth utilization may cause a VNF to be in a congestion state even though the measurements of the percentages of memory utilization and processor utilization of the VNF are within the triangle area. Thus, determining the VNF congestion state based on a simple relationship between VNF status parameters (e.g., a linear relationship between two VNF status parameters) may not be sufficient for satisfying a subscriber's SLA.
[0051] In contrast, VNF congestion detection based on packet process performance measurements can tie directly to the QoS requirement of the SLA. For example, if the QoS requirement of the SLA is that the packet delay is less than 50 milliseconds and the measured packet delay of a VNF is over 50 milliseconds, the operator can determine that the VNF is congested and a remedial process is needed to resolve the congestion. The packet process performance measurements, when tied directly to the SLA, may allow the corresponding determination of VNF congestion to be optimal. In Figure 2B, as indicated by reference 292, based on packet process performance measurements, the non-congestion state of the VNF may be in a partial oval shape. That is, the VNF congestion state determination based on packet process performance measurements may not be aligned well with the VNF congestion state determination based on VNF resource utilization.
[0052] It is preferable to predict a VNF congestion state, which can be measured directly using VNF packet process performance measurements (which consume more computing resources and are thus harder to perform frequently), from VNF resource utilization (which consumes fewer computing resources and is thus easier to measure more frequently). Such prediction may be achieved by training a machine using VNF packet process performance measurements to detect a VNF congestion state and the corresponding values of VNF status parameters, and then using the machine to predict a subsequent VNF congestion state using subsequent values of the VNF status parameters without the VNF packet process performance measurements. Such training and prediction of such a machine may be referred to as machine learning.
[0053] Machine Learning and VNF Congestion Detection
[0054] In dynamic VNF allocation, while VNF packet process performance measurements can be directly tied to an SLA of a subscriber and are thus a more accurate and preferred measure of VNF congestion, obtaining the VNF packet process performance measurements consumes more computing resources than obtaining VNF status. Yet the values of VNF status parameters may not follow an easily derived mathematical relationship with packet process performance measurements of the same VNF. For example, packets of a traffic flow may experience a long packet process delay at a VNF while the processor and memory utilization of the VNF is low, as the long packet process delay may be due to insufficient bandwidth allocated to the VNF. Thus, for dynamic VNF allocation and/or VNF congestion detection, one may use a machine learning based approach to correlate VNF packet process performance measurements and values of VNF status parameters.
[0055] In machine learning, supervised learning can identify a model through empirical learning to transform a set of inputs to a set of outputs. For example, a subset of inputs with corresponding outputs may be provided to the model to train the model. Once the model is identified, the model may be used to predict output for subsequent inputs. The model may include a plurality of coefficients, which when used in conjunction with system inputs, provide an output that is consistent with observed behaviors or status. The machine learning problems, such as correlating VNF packet process performance measurements with values of VNF status parameters, may be cast as a convex optimization problem and solved by standard optimization tools. Many classification problems and linear or non-linear estimation problems may be viewed as machine learning problems.
[0056] Figure 3 illustrates machine learning units for VNF congestion detection according to one embodiment of the invention. In one embodiment, the units are implemented within a VNF congestion detector such as the VNF congestion detector 124. The units may be parts of a network controller such as the network controller 120. The VNF congestion detector 124 includes a packet process performance measurement unit 302, a congestion determination unit 312, a VNF status parameter collection unit 304, and a logistic regression unit 306. Some or all of the units may be implemented in hardware (e.g., electric circuits such as processors, ASICs and/or FPGAs), software, or a combination thereof.
[0057] A VNF may experience congestion. The congestion may be detected through packet process performance measurements, which the packet process performance measurement unit 302 obtains from one or more of the network elements. A VNF may provide certain key performance indicators (KPIs, such as packet delay, packet loss, and packet jitter) regarding its packet processing performance in some embodiments. Through the KPIs and other data (e.g., performance data of the VM that hosts the VNF or time stamps of packets being processed by network elements), the VNF congestion detector 124 may obtain packet process performance measurements of a VNF at the packet process performance measurement unit 302. Based on the obtained packet process performance measurements, the congestion determination unit 312 determines whether the VNF is in a congestion state. If the VNF is in a congestion state, the congestion determination unit 312 may send out a notification 325 regarding such determination (e.g., an indication that the VNF is congested, based on which a remedial process may be performed to remove the congestion).
[0058] The obtained packet process performance measurements of a VNF may be one or more of packet delay, packet loss, packet jitter, or other measurements that are required by an SLA of a subscriber. It is to be noted that the SLA of the subscriber may specify the overall packet performance measurement thresholds, i.e., the thresholds for a service chain including multiple VNFs, and the operator may determine the allocation of a packet performance measurement threshold across several VNFs. For example, the SLA may dictate a packet delay threshold of 50 milliseconds, and the operator, through the network controller 120, sets a threshold of packet delay of a particular VNF (e.g., DPI) of the service chain to be 5 milliseconds. Thus, the congestion determination unit 312 determines that the particular VNF is in a congestion state when packet processing delay through the VNF is over the 5-millisecond delay threshold.
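A minimal sketch of this per-VNF check follows; the threshold apportioning and names are assumptions consistent with the example above.

```python
# Sketch: the operator assigns this VNF a 5 ms share of the 50 ms SLA budget.
VNF_DELAY_THRESHOLD_MS = 5.0

def is_vnf_congested(measured_delay_ms: float) -> bool:
    """Deem the VNF congested when its measured packet processing delay
    exceeds its assigned share of the end-to-end SLA threshold."""
    return measured_delay_ms > VNF_DELAY_THRESHOLD_MS

print(is_vnf_congested(6.2))  # True: over the 5 ms share
print(is_vnf_congested(3.1))  # False: within the 5 ms share
```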
[0059] While the packet process performance measurements are performed by the VNF congestion detector 124 through the packet process performance measurement unit 302 in one embodiment, the measurements may be performed by another application within or coupled to the network controller 120. The packet process performance measure for one VNF may be different from that of another VNF. For example, the packet process performance measure for one VNF may be packet delay, for another VNF packet loss, and for yet another VNF both packet delay and packet loss. The type(s) of packet process performance measures depends on the associated SLA of a subscriber in one embodiment. For example, if the subscriber requires packet delay of the subscriber's traffic flow to be less than 50 milliseconds, then packet process performance measurements for all the VNFs in the service chain of the traffic flow will include packet delay measurements.
[0060] It is to be noted that packet process performance measurements may be an average value over a period of time. For example, packet delay and packet jitter measurements of a VNF may be the average of packet delays and packet jitters of several packets of a traffic flow being processed through the VNF during the period of time (e.g., 1 millisecond).
[0061] The VNF status parameter collection unit 304 obtains values of VNF status parameters 354 from network elements. The values of VNF status parameters 354 may include processor utilization measurements (e.g., reading in percentage of processor usage by the VNF), memory utilization measurements (e.g., reading in percentage of memory usage by the VNF), bandwidth utilization measurements (e.g., reading in bits per second of a link utilized by the VNF), storage utilization measurements (e.g., reading in megabytes (MB) utilized by the VNF), and/or other parameters indicating a reading of VNF resource utilization. The values of VNF status parameters 354 may be values at the time of receiving the request to obtain the values in one embodiment. The values of VNF status parameters may be obtained through either poll (requested by the VNF congestion detector 124) or push (initiated by the network elements such as the network elements 132 and 134). At the network elements, the values of VNF status parameters may be stored in a database, such as the ones storing VNF statuses 154 and 164. The obtained values of VNF status parameters are provided to the logistic regression unit 306.
[0062] In one embodiment, the VNF status parameter collection unit 304 obtains the values of VNF status parameters that correspond to the packet process performance measurements. That is, the values of VNF status parameters of a VNF reflect the VNF's status at the time that the VNF has the packet process performance as measured by the packet process performance measurements. For example, the VNF status parameter collection unit 304 obtains the values of VNF status parameters at substantially the same time as the time the packet process performance measurements of the same VNF are obtained. That is, if the packet process performance measurements are the measurements of a VNF at one time (e.g., 9:00 AM), the values of the VNF status parameters are the values of the VNF status parameters at the same time (e.g., 9:00 AM) of the same VNF. Substantially the same time means that the time difference between when the packet process performance measurements and the values of the VNF status parameters are obtained is negligible (e.g., a few microseconds). In addition, when the packet process performance measurements are average values over a period of time (e.g., 5 milliseconds starting at 9:00 AM), the values of VNF status parameters are values over the same period of time (e.g., the same 5 milliseconds starting at 9:00 AM) in one embodiment.
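One way to pair each performance measurement with status values from substantially the same time is sketched below; the sample layout and the nearest-sample strategy are assumptions.

```python
# Sketch: pick the stored status sample closest in time to a packet process
# performance measurement taken at time t; data layout is an assumption.
import bisect

def status_at(samples, t):
    """`samples` is a list of (timestamp, status_values) sorted by
    timestamp; return the status_values nearest to time t."""
    times = [ts for ts, _ in samples]
    i = bisect.bisect_left(times, t)
    candidates = samples[max(0, i - 1):i + 1]
    return min(candidates, key=lambda s: abs(s[0] - t))[1]

samples = [(9.000, [0.30, 0.25]), (9.005, [0.50, 0.45])]
print(status_at(samples, 9.004))  # -> [0.50, 0.45], the closest sample
```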
[0063] The logistic regression unit 306 obtains the packet process performance measurements from the packet process performance measurement unit 302, values of VNF status parameters from the VNF status parameter collection unit 304, and congestion determinations from the congestion determination unit 312. These inputs are provided for machine learning at the logistic regression unit 306, which produces a plurality of coefficients 322 for a VNF. The plurality of coefficients 322 may be then applied to subsequent values of VNF status parameters of the VNF to predict whether the VNF is in a congestion state at the subsequent time.
[0064] The logistic regression unit 306 may perform classification through logistic regression (also referred to as logit regression or logit model). For VNF congestion detection, the classification may be binary: either congested or not. In some embodiments, the classification for VNF congestion detection may include a third state (e.g., near congestion, for example, when one or more thresholds are close to being crossed), which may lead to notification and/or remedial measures (e.g., reallocation of one or more resources to the VNF to prevent entering a congestion state). In these embodiments with a third state, the below equations could be altered accordingly. The congestion state may be determined based on packet process performance measurements. The congestion state determination and corresponding values of VNF status parameters may be used as training data to derive the correlation between the VNF status parameters and VNF congestion determination. The training data may be illustrated as a table such as the following:
    Processor Utilization | Memory Utilization | Bandwidth Utilization | Storage Utilization | VNF Congestion
    30% | 25% | 30% | 20% | 0
    50% | 45% | 10% | 10% | 1
    35% | 35% | 35% | 35% | 1

Table 1: Training Data for Logistic Regression
[0065] This set of training data may be used to solve a classification problem in machine learning. One may identify a set of coefficients that maps the VNF status to the VNF congestion state. For example, for the training data in Table 1, we aim at finding a plurality of coefficients, Θ = [θ0, θ1, θ2, θ3, θ4], such that the following is true:

f(θ0 + θ1 * 30% + θ2 * 25% + θ3 * 30% + θ4 * 20%) = 0
f(θ0 + θ1 * 50% + θ2 * 45% + θ3 * 10% + θ4 * 10%) = 1
f(θ0 + θ1 * 35% + θ2 * 35% + θ3 * 35% + θ4 * 35%) = 1    (1)
[0066] The function f() is a suitable function for the VNF status parameter based congestion detection. Once the plurality of coefficients is obtained, the plurality of coefficients may be applied to the subsequent VNF status parameters and used to determine whether the corresponding VNF is congested (VNF Congestion = 1) or not (VNF Congestion = 0). The plurality of coefficients may be refined by additional training data of values of VNF status parameters and corresponding values of packet process performance measurements of the VNF. In one embodiment, the refinement of the plurality of coefficients uses a sliding window where the most recent values of VNF status parameters and corresponding packet process performance measurements of the VNF remain, while the earlier data are removed from the logistic regression model: the earlier data are less relevant, and removing them makes the logistic regression model less computing intensive.
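Such a sliding window can be kept with a bounded buffer, as in this minimal sketch; the window size is an assumption.

```python
# Sketch of the sliding-window refinement: keep only the most recent
# (status values, congestion label) training sets for retraining.
from collections import deque

WINDOW = 1000  # number of most recent training sets to retain (assumed)
training_window = deque(maxlen=WINDOW)

def add_training_set(status_values, congested: bool):
    """Append the newest (X, Y) pair; once the window is full, the oldest
    pair is dropped automatically, so retraining stays focused on recent
    data and the training cost stays bounded."""
    training_window.append((list(status_values), 1 if congested else 0))
```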
[0067] The logistic regression unit 306 determines a decision boundary of the VNF being in a congestion state or not, and the determination may be viewed as an estimated probability, where the estimated probability that y = 1 (VNF being congested) on input x (values of VNF status parameters), parameterized by Θ, is the following:

hΘ(x) = P(y = 1 | x; Θ)    (2)
[0068] The decision boundary may use a logistic function (also referred to as a sigmoid function), which may be expressed as the following:

hΘ(x) = 1 / (1 + e^(-Θ^T x))    (3)
[0069] Within Equation (3), Θ^T is the transpose of the plurality of coefficients Θ, which may have a set of default values at the beginning of the training. The number of elements within the plurality of coefficients Θ is the number of VNF status parameters plus one. That is, if four VNF status parameters are used to make a VNF congestion determination, the plurality of coefficients Θ includes five elements, e.g., Θ = [θ0, θ1, θ2, θ3, θ4], where θ0 is not applied to any of the VNF status parameters, as illustrated in Equation (1), and is often referred to as the bias. x is a vector including values of VNF status parameters of a VNF.
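Equations (2) and (3) can be computed directly, as in this minimal sketch; the coefficient and status values are made up for illustration.

```python
# Sketch of the sigmoid hypothesis of Equations (2)-(3): theta[0] is the
# bias and is not applied to any status parameter.
import math

def h(theta, x):
    """Estimated probability that the VNF is congested (y = 1) given the
    status-parameter vector x."""
    z = theta[0] + sum(t * xi for t, xi in zip(theta[1:], x))
    return 1.0 / (1.0 + math.exp(-z))

theta = [-4.0, 3.0, 2.5, 1.0, 0.5]  # [bias, cpu, mem, bw, storage]
x = [0.50, 0.45, 0.10, 0.10]        # utilization fractions, e.g., 50% CPU
print(h(theta, x))                  # estimated probability of congestion
```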
[0070] Through training, the logistic regression unit 306 selects the optimized plurality of coefficients Θ. The training data may include the following training sets: {(X1, Y1), (X2, Y2), ..., (Xm, Ym)}, where Xm is a vector including values of VNF status parameters of a VNF at a time, and Ym is the corresponding known congestion state (determined by corresponding packet process performance measurements of the VNF at the same time), which may be expressed in binary.
[0071] In one embodiment, the optimization of the logistic regression unit 306 is to minimize the difference between the predicted VNF congestion (predicted by using the VNF status parameters of a VNF at one time and the plurality of coefficients) and the known VNF congestion (determined by corresponding packet process performance measurements of the VNF at the same time). In one embodiment, the optimization seeks the minimal value of the following cost function:
$J(\theta) = -\dfrac{1}{m}\sum_{i=1}^{m}\left[ y^{(i)} \log h_\theta(x^{(i)}) + \left(1 - y^{(i)}\right) \log\left(1 - h_\theta(x^{(i)})\right) \right] + \dfrac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$
(4)
[0072] Within Equation (4), m is the number of training sets, and n is the number of features, which is the number of VNF status parameters for VNF congestion detection. The parameter λ is a regularization parameter, the value of which may be selected to adjust the cost of having an additional number of features. By setting the parameter λ > 0, the optimization process reflects the desire of having a small set of features to determine the VNF congestion. That is, by putting a "cost" on using too many non-zero coefficients, the process is forced to choose values of Θ that have as few non-zero values as possible. In an environment where the number of features is fixed, the regularization parameter may be set to zero so that the number of features has no impact on the optimization.
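Continuing the sketch, the regularized cost of Equation (4) may be computed as follows; the small epsilon guard is an implementation detail added here to avoid log(0), not something the equation itself requires:

def cost(theta, X_b, y, lam):
    # Regularized logistic regression cost, Equation (4). The bias theta[0]
    # is excluded from the regularization sum (j runs from 1 to n).
    m = len(y)
    p = h(theta, X_b)
    eps = 1e-12  # numerical guard against log(0)
    nll = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    reg = (lam / (2 * m)) * np.sum(theta[1:] ** 2)
    return nll + reg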
[0073] J(θ) in Equation (4) may be a large number when the training starts. As training sets are provided to Equation (4), the plurality of coefficients Θ that results in a smaller value of J(θ) is retained, and the new plurality of coefficients Θ is applied to newer training sets, so that the plurality of coefficients Θ that results in an even smaller value of J(θ) is retained. Through numerous iterations of the plurality of coefficients Θ, the logistic regression unit 306 may determine that the value of the cost function is small enough, and the plurality of coefficients Θ is the result of the training. In one embodiment, the value of the cost function may be compared to a threshold to determine whether the value is small enough; once the value is lower than or equal to the threshold, the computation of J(θ) is considered to have converged, and the resulting plurality of coefficients may be referred to as the converged plurality of coefficients. In other embodiments, the computation of J(θ) is considered to have converged under a different criterion. For example, the computation of J(θ) may be considered to have converged (1) when the value of the cost function has not decreased over a pre-defined number of last iterations, or (2) when the decrease of the cost function over a pre-defined number of last iterations is lower than or equal to a threshold.
[0074] It is to be noted that a number of optimization algorithms may be used to derive the plurality of coefficients Θ that minimizes the cost function J(θ). The optimization algorithms include the gradient descent algorithm, the conjugate gradient algorithm, the BFGS (Broyden-Fletcher-Goldfarb-Shanno) algorithm, and the Limited-memory BFGS (L-BFGS) algorithm.
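As one concrete (and deliberately simple) choice among the optimization algorithms listed above, a gradient descent loop using the convergence criterion of paragraph [0073] might look as follows, continuing the sketch; the learning rate, tolerance, and iteration cap are illustrative assumptions:

def train(X_b, y, lam=0.1, lr=0.5, tol=1e-6, max_iter=10000):
    # Minimize J(theta) by gradient descent; conjugate gradient, BFGS, or
    # L-BFGS could be substituted without changing the interface.
    m = len(y)
    theta = np.zeros(X_b.shape[1])  # default starting values for Theta
    prev = cost(theta, X_b, y, lam)
    for _ in range(max_iter):
        grad = X_b.T @ (h(theta, X_b) - y) / m
        grad[1:] += (lam / m) * theta[1:]  # the bias is not regularized
        theta -= lr * grad
        cur = cost(theta, X_b, y, lam)
        if prev - cur <= tol:  # decrease of J(theta) at or below a threshold,
            break              # so the computation is considered converged
        prev = cur
    return theta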
[0075] The derived plurality of coefficients Θ is then used to predict subsequent congestion based on values of VNF status parameters. Through the VNF status parameter collection unit 304, the VNF congestion detector 124 may obtain the subsequent values of VNF status parameters, and apply the plurality of coefficients Θ using Equation (1). Thus, the VNF congestion detector 124 may determine subsequent VNF congestion status based on the values of VNF status parameters.
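Prediction then reduces to applying the derived coefficients to new values of the VNF status parameters; the 0.5 cut-off below is the conventional decision boundary, assumed for illustration rather than prescribed by the text, and the example input values are hypothetical:

def predict_congestion(theta, x_status):
    # Returns 1 (VNF congested) or 0 (not congested) for a vector of
    # subsequent VNF status parameter values.
    x_b = np.hstack([1.0, x_status])
    return int(h(theta, x_b) >= 0.5)

# Example: train on the Table 1 data, then classify a new observation.
theta = train(X_b, y)
state = predict_congestion(theta, np.array([0.45, 0.40, 0.15, 0.12]))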
[0076] Figure 4 illustrates a machine learning process for VNF congestion detection according to one embodiment of the invention. Figure 4 is similar to Figure 3, with certain aspects omitted to avoid obscuring other aspects of Figure 3. Task boxes 1 to 6 illustrate the order in which operations are performed according to one embodiment of the invention.
[0077] At task box 1, the packet process performance measurement unit 302 measures a set of parameters of packet process performance of a VNF. The set of parameters may be selected for the VNF based on a service level agreement (SLA) of a subscriber, the traffic flow of which is processed through the VNF. The set of parameters may include one or more parameters indicating a packet delay measure, a packet jitter measure, or a packet drop measure.
[0078] At task box 2, the VNF status parameter collection unit 304 obtains values of a plurality of VNF status parameters corresponding to the measurements of the packet process performance obtained from operations at task box 1. The plurality of the VNF status parameters includes one or more parameters indicating a processor utilization measure, a memory utilization measure, a bandwidth utilization measure, and a data storage utilization measure.
[0079] At task box 3, the congestion determination unit 312 determines that the VNF is in a congestion state based on the packet process performance measurements. Since the operations are for training the logistic regression unit 306, no notification or remedial measure is provided regarding the congestion in one embodiment. In an alternative embodiment, the congestion determination unit 312 may notify an operator of the network or another module of the network controller 120 to provide remedial measures when necessary.
[0080] At task box 4, the logistic regression unit 306 derives a plurality of coefficients to apply to the values of the plurality of the VNF status parameters to arrive at an indication of the congestion state. The derivation may be performed through operations described herein above in relation to Equations 1-4. As described, the derivation may be through multiple iterations. In one embodiment, the plurality of coefficients may be deemed to be good enough when the value of the cost function (see Equation 4) is below a threshold for the known training sets.
[0081] At task box 5, the congestion determination unit 312 determines a subsequent congestion state based on subsequent values of the plurality of the VNF status parameters and the derived plurality of coefficients. In one embodiment, the determination is performed by plugging the subsequent values of the plurality of the VNF status parameters and the derived plurality of coefficients into an equation similar to Equation (1). However, the number of elements in the plurality of coefficients Θ may be different from the one in Equation (1), as different embodiments may have different numbers of VNF status parameters.
[0082] At task box 6, the congestion determination unit 312 notifies an operator of the network or another module of the network controller 120 to provide remedial measures when a congestion state is determined based on the subsequent values of the plurality of the VNF status parameters and the derived plurality of coefficients.
[0083] Through the training phase (task boxes 1-4) and prediction phase (task boxes 5-6), the VNF congestion detector 124 may determine whether a VNF is in a congestion state based on VNF status parameters, the values of which are much easier to obtain and consume fewer computing resources than the packet process performance measurements.
[0084] The derivation of the plurality of coefficients may be an iterative process as discussed herein above. Figure 5 illustrates the iterations of derivation of the plurality of coefficients to be applied to the VNF status parameters according to one embodiment of the invention.
[0085] Figure 5 illustrates a state machine, where the state machine starts at reference 502 with training sets, such as {(X1, Y1), (X2, Y2), ..., (Xm, Ym)}, where Xm is a vector including values of VNF status parameters of a VNF at a time, and Ym is a known congestion state (determined by corresponding packet process performance measurements of the VNF at the same time), which may be expressed in binary.
[0086] The training sets are applied to logistic regression at reference 504, where a plurality of coefficients may be derived through operations described herein above in relation to Equations (1)-(4). The derived plurality of coefficients is then applied at reference 506 to subsequent values of the VNF status parameters at a later time to determine whether the VNF is in a congestion state.
[0087] The predicted congestion states using the derived plurality of coefficients are then compared at reference 508 to the congestion states determined based on corresponding packet process performance measurements. In one embodiment, if the prediction error exceeds a threshold, the prediction is no longer accurate. In that case the state machine goes to reference 510, which improves the plurality of coefficients using the subsequent data sets from reference 506 as the new training sets. The operations at reference 510 may be similar to the ones at reference 504 in that the operations at reference 510 derive an updated plurality of coefficients. The improved coefficients are then applied to newer values of the VNF status parameters.
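One pass of this compare-and-retrain step (references 506-508-510) can be sketched as follows, continuing the earlier Python sketch; the window of recent samples and the error threshold are illustrative assumptions:

def monitor(theta, window, err_threshold=0.1):
    # window: list of (x_status, y_known) pairs, where y_known is the
    # congestion state determined from packet process performance
    # measurements taken at the same time as x_status.
    preds = [predict_congestion(theta, x) for x, _ in window]
    err = np.mean([p != y_k for p, (_, y_k) in zip(preds, window)])
    if err > err_threshold:  # prediction no longer accurate (reference 508)
        # Re-derive the coefficients using the recent samples as the new
        # training sets (reference 510).
        X_new = np.array([x for x, _ in window])
        X_new_b = np.hstack([np.ones((len(window), 1)), X_new])
        y_new = np.array([y_k for _, y_k in window])
        theta = train(X_new_b, y_new)
    return theta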
[0088] The state machine may stay in the loop of 506 - 508 - 510 as long as updating the plurality of coefficients is deemed necessary to obtain accurate VNF congestion detection based on values of VNF status parameters.

[0089] Flow Diagrams
[0090] Figure 6 is a flow diagram illustrating a machine learning process for VNF congestion detection according to one embodiment of the invention. Method 600 may be implemented in a VNF congestion detector such as the VNF congestion detector 124 discussed herein above.
[0091] At reference 602, it is determined that a VNF implemented in a network device is in a congestion state based on packet process performance of the VNF, the packet process performance corresponding to a service level agreement. The network device implements a network element such as one of the network elements 132 and 134. In one embodiment, the packet process performance is quantified through packet process performance measurements, which include values of one or more parameters indicating a packet delay measure, a packet jitter measure, and a packet drop measure. The packet delay measure may be indicated using a measurement in time (e.g., microseconds/milliseconds/seconds), the packet jitter measure may be indicated using a measurement in the time domain (e.g., in microseconds/milliseconds/seconds) or in the frequency domain (e.g., in Hz, kHz, MHz), and the packet drop measure may be indicated using a measurement in bit rate or the percentage of packets/bits that are dropped. The measurements may be obtained as discussed herein above in relation to the packet process performance measurement unit 302.
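As one hypothetical way to turn such measurements into the binary congestion label, thresholds drawn from the SLA could be compared against each measure; the specific threshold names and values below are assumptions for illustration only:

SLA = {"delay_ms": 20.0, "jitter_ms": 5.0, "drop_pct": 0.5}  # hypothetical SLA limits

def congestion_state(delay_ms, jitter_ms, drop_pct, sla=SLA):
    # Label the VNF congested (1) when any packet process performance
    # measure violates its SLA threshold; otherwise 0.
    return int(delay_ms > sla["delay_ms"]
               or jitter_ms > sla["jitter_ms"]
               or drop_pct > sla["drop_pct"])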
[0092] At reference 604, values of a plurality of VNF status parameters corresponding to measurements of the packet process performance are obtained from the network device; the plurality of the VNF status parameters includes one or more parameters indicating a processor utilization measure (e.g., measured as a percentage of processor usage by the VNF), a memory utilization measure (e.g., measured as a percentage of memory usage by the VNF), a bandwidth utilization measure (e.g., measured in bits per second of a link utilized by the VNF), and a data storage utilization measure (e.g., measured in bytes utilized by the VNF). In one embodiment, the values of VNF status parameters of the VNF reflect the VNF's status at the time that the VNF has the packet process performance as measured by the packet process performance measurements, as discussed herein above in relation to Figure 3.
[0093] At reference 606, a plurality of coefficients is derived to apply to the values of the plurality of the VNF status parameters to arrive at an indication of the congestion state. In one embodiment, the derivation of the plurality of coefficients comprises applying logistic regression that searches for a minimum cost of the logistic regression in consideration of regularization of at least a portion of the plurality of coefficients, as discussed herein above in relation to Equations (2)-(4).

[0094] Optionally at reference 608, it is determined whether the derived plurality of coefficients has converged. In one embodiment, the determination is based on whether the value of a cost function is smaller than a threshold as discussed herein above in relation to Equation (4). If the derived plurality of coefficients has not converged yet, more training data is needed, thus the flow goes back to reference 602, so that operations 602-606 are repeated to derive the updated plurality of coefficients.
[0095] If the derived plurality of coefficients has converged, the flow goes to reference 610, and a subsequent congestion state of the VNF is determined based on subsequent values of the plurality of the VNF status parameters and the plurality of coefficients. In one embodiment, the determination is performed using Equation (1) as discussed herein above.
[0096] If a congestion state is determined, a notification is provided to indicate the congestion state at reference 612. The notification is sent to an operator of the network or another module of the network controller as discussed herein above in relation to Figure 4. Optionally, to mitigate the congestion, a traffic flow is moved from the VNF to another instance of the VNF (the other instance may be at the network device or another network device) or the traffic flow is moved from the VNF to another type of VNF at reference 614.
[0097] Method 600 may be used in dynamic VNF allocation. In dynamic VNF allocation, the VNF allocation may change over time based on various factors such as VNF packet process performance and VNF status; thus the plurality of coefficients used for predicting VNF congestion may be updated too. Figure 7 is a flow diagram illustrating a machine learning process for updating VNF congestion detection according to one embodiment of the invention. Method 700 is a continuation of method 600 in one embodiment (e.g., following operations of reference 612). In an alternative embodiment, method 700 updates a plurality of coefficients for predicting VNF congestion, and the plurality of coefficients is obtained through means different from method 600.
[0098] At reference 702, the measurements of the packet process performance are updated. The measurements may be obtained as discussed herein above in relation to the packet process performance measurement unit 302. At reference 704, values of the plurality of the VNF status parameters corresponding to the updated measurements of the packet process performance are obtained. The operations at reference 704 are similar to the operations at reference 604.
[0099] Optionally at reference 706, earlier measurements of the packet process performance and corresponding values of the plurality of the VNF status parameters are removed, so that a pre-defined number of most recent measurements remain. Since the packet process performance measurements are obtained periodically, the number of samples (including both the packet process performance and corresponding values of VNF status parameters) available to derive the plurality of coefficients keeps increasing. The increased number of samples increases the computational cost, and eventually additional samples lead to diminishing returns. Thus, in one embodiment, the total number of samples is defined, and a sliding window mechanism is implemented. The VNF congestion detector 124 runs the process to derive the plurality of coefficients periodically, and the measurements of the packet process performance and corresponding values of VNF status parameters are stored in a storage space (e.g., a ring buffer). As more samples are gathered, the storage space gets filled. Once the storage space is full, any new sample collected overwrites an existing sample, starting from the earliest collected samples. Consequently, the latest samples will be used for deriving the plurality of coefficients.
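The ring buffer described above could be realized, for example, with a bounded deque; the window size is an illustrative assumption:

from collections import deque

WINDOW = 256  # pre-defined number of most recent samples (illustrative)
samples = deque(maxlen=WINDOW)  # oldest samples are overwritten when full

def record(x_status, y_label):
    # Store one (VNF status parameters, congestion state) sample; once the
    # buffer is full, appending evicts the earliest collected sample.
    samples.append((x_status, y_label))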
[00100] At reference 710, it is determined whether the VNF is in the congestion state based on the updated measurements of the packet process performance. At reference 712, the plurality of coefficients to apply to the values of the plurality of the VNF status parameters is updated to arrive at an indication of the congestion state.
[00101] The operations in the flow diagrams are described with reference to the exemplary embodiments of the other figures. However, it should be understood that the operations of the flow diagrams can be performed by embodiments of the invention other than those discussed with reference to the other figures, and the embodiments of the invention discussed with reference to these other figures can perform operations different than those discussed with reference to the flow diagrams.
[00102] SDN and NFV Environment Utilizing Embodiments of the Invention
[00103] Figure 8A illustrates connectivity between network devices (NDs) within an exemplary network, as well as three exemplary implementations of the NDs, according to some embodiments of the invention. Figure 8A shows NDs 800A-H, and their connectivity by way of lines between 800A-800B, 800B-800C, 800C-800D, 800D-800E, 800E-800F, 800F-800G, and 800A-800G, as well as between 800H and each of 800A, 800C, 800D, and 800G. These NDs are physical devices, and the connectivity between these NDs can be wireless or wired (often referred to as a link). An additional line extending from NDs 800A, 800E, and 800F illustrates that these NDs act as ingress and egress points for the network (and thus, these NDs are sometimes referred to as edge NDs; while the other NDs may be called core NDs).
[00104] Two of the exemplary ND implementations in Figure 8A are: 1) a special-purpose network device 802 that uses custom application-specific integrated-circuits (ASICs) or a field- programmable gate array (FPGA), and a special-purpose operating system (OS); and 2) a general-purpose network device 804 that uses common off-the-shelf (COTS) processors and a standard OS. [00105] The special-purpose network device 802 includes networking hardware 810 comprising compute resource(s) 812 (which typically include a set of one or more processors), forwarding resource(s) 814 (which typically include one or more ASICs and/or network processors), and physical network interfaces (NIs) 816 (sometimes called physical ports), as well as non-transitory machine readable storage media 818 having stored therein networking software 820. A physical NI is hardware in a ND through which a network connection (e.g., wirelessly through a wireless network interface controller (WNIC) or through plugging in a cable to a physical port connected to a network interface controller (NIC)) is made, such as those shown by the connectivity between NDs 800A-H. During operation, the networking software 820 may be executed by the networking hardware 810 to instantiate a set of one or more networking software instance(s) 822. Each of the networking software instance(s) 822, and that part of the networking hardware 810 that executes that network software instance (be it hardware dedicated to that networking software instance and/or time slices of hardware temporally shared by that networking software instance with others of the networking software instance(s) 822), form a separate virtual network element 830A-R. Each of the virtual network element(s) (VNEs) 830A-R includes a control communication and configuration module 832A- R (sometimes referred to as a local control module or control communication module) and forwarding table(s) 834A-R, such that a given virtual network element (e.g., 830A) includes the control communication and configuration module (e.g., 832A), a set of one or more forwarding table(s) (e.g., 834A), and that portion of the networking hardware 810 that executes the virtual network element (e.g., 830A). The networking software 820 includes one or more VNFs such as VNF 152.
[00106] The special-purpose network device 802 is often physically and/or logically considered to include: 1) a ND control plane 824 (sometimes referred to as a control plane) comprising the compute resource(s) 812 that execute the control communication and configuration module(s) 832A-R; and 2) a ND forwarding plane 826 (sometimes referred to as a forwarding plane, a data plane, or a media plane) comprising the forwarding resource(s) 814 that utilize the forwarding table(s) 834A-R and the physical NIs 816. By way of example, where the ND is a router (or is implementing routing functionality), the ND control plane 824 (the compute resource(s) 812 executing the control communication and configuration module(s) 832A-R) is typically responsible for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) and storing that routing information in the forwarding table(s) 834A-R, and the ND forwarding plane 826 is responsible for receiving that data on the physical NIs 816 and forwarding that data out the appropriate ones of the physical NIs 816 based on the forwarding table(s) 834A-R. [00107] Figure 8B illustrates an exemplary way to implement the special-purpose network device 802 according to some embodiments of the invention. Figure 8B shows a special- purpose network device including cards 838 (typically hot pluggable). While in some embodiments the cards 838 are of two types (one or more that operate as the ND forwarding plane 826 (sometimes called line cards), and one or more that operate to implement the ND control plane 824 (sometimes called control cards)), alternative embodiments may combine functionality onto a single card and/or include additional card types (e.g., one additional type of card is called a service card, resource card, or multi-application card). A service card can provide specialized processing (e.g., Layer 4 to Layer 7 services (e.g., firewall, Internet Protocol Security (IPsec), Secure Sockets Layer (SSL) / Transport Layer Security (TLS), Intrusion Detection System (IDS), peer-to-peer (P2P), Voice over IP (VoIP) Session Border Controller, Mobile Wireless Gateways (Gateway General Packet Radio Service (GPRS) Support Node (GGSN), Evolved Packet Core (EPC) Gateway)). By way of example, a service card may be used to terminate IPsec tunnels and execute the attendant authentication and encryption algorithms. These cards are coupled together through one or more interconnect mechanisms illustrated as backplane 836 (e.g., a first full mesh coupling the line cards and a second full mesh coupling all of the cards).
[00108] Returning to Figure 8A, the general-purpose network device 804 includes hardware 840 comprising a set of one or more processor(s) 842 (which are often COTS processors) and network interface controller(s) 844 (NICs; also known as network interface cards) (which include physical NIs 846), as well as non-transitory machine readable storage media 848 having stored therein software 850. During operation, the processor(s) 842 execute the software 850 to instantiate one or more sets of one or more applications 864A-R. While one embodiment does not implement virtualization, alternative embodiments may use different forms of virtualization. For example, in one such alternative embodiment the virtualization layer 854 represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple instances 862A-R called software containers that may each be used to execute one (or more) of the sets of applications 864A-R; where the multiple software containers (also called virtualization engines, virtual private servers, or jails) are user spaces (typically a virtual memory space) that are separate from, each other and separate from the kernel space in which the operating system is run; and where the set of applications running in a given user space, unless explicitly allowed, cannot access the memory of the other processes. In another such alternative embodiment the virtualization layer 854 represents a hypervisor (sometimes referred to as a virtual machine monitor (VMM)) or a hypervisor executing on top of a host operating system, and each of the sets of applications 864A-R is run on top of a guest operating system within an instance 862A-R called a virtual machine (which may in some cases be considered a tightly isolated form of software container) that is run on top of the hypervisor - the guest operating system and application may not know they are running on a virtual machine as opposed to running on a "bare metal" host electronic device, or through para-virtualization the operating system and/or application may be aware of the presence of virtualization for optimization purposes. In yet other alternative embodiments, one, some or all of the applications are implemented as unikernel(s), which can be generated by compiling directly with an application only a limited set of libraries (e.g., from a library operating system (LibOS) including drivers/libraries of OS services) that provide the particular OS services needed by the application. As a unikernel can be implemented to run directly on hardware 840, directly on a hypervisor (in which case the unikernel is sometimes described as running within a LibOS virtual machine), or in a software container, embodiments can be implemented fully with unikernels running directly on a hypervisor represented by virtualization layer 854, unikernels running within software containers represented by instances 862A-R, or as a combination of unikernels and the above-described techniques (e.g. , unikernels and virtual machines both run directly on a hypervisor, unikernels and sets of applications that are run in different software containers). The networking software 850 includes one or more VNFs such as VNF 162.
[00109] The instantiation of the one or more sets of one or more applications 864A-R, as well as virtualization if implemented, are collectively referred to as software instance(s) 852. Each set of applications 864A-R, corresponding virtualization construct (e.g., instance 862A-R) if implemented, and that part of the hardware 840 that executes them (be it hardware dedicated to that execution and/or time slices of hardware temporally shared), forms a separate virtual network element(s) 860A-R.
[00110] The virtual network element(s) 860A-R perform similar functionality to the virtual network element(s) 830A-R - e.g., similar to the control communication and configuration module(s) 832A and forwarding table(s) 834A (this virtualization of the hardware 840 is sometimes referred to as network function virtualization (NFV)). Thus, NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which could be located in Data centers, NDs, and customer premise equipment (CPE). While embodiments of the invention are illustrated with each instance 862A-R corresponding to one VNE 860A-R, alternative embodiments may implement this correspondence at a finer level granularity (e.g., line card virtual machines virtualize line cards, control card virtual machine virtualize control cards, etc.); it should be understood that the techniques described herein with reference to a correspondence of instances 862A-R to VNEs also apply to embodiments where such a finer level of granularity and/or unikernels are used.
[00111] In certain embodiments, the virtualization layer 854 includes a virtual switch that provides similar forwarding services as a physical Ethernet switch. Specifically, this virtual switch forwards traffic between instances 862A-R and the NIC(s) 844, as well as optionally between the instances 862A-R; in addition, this virtual switch may enforce network isolation between the VNEs 860A-R that by policy are not permitted to communicate with each other (e.g., by honoring virtual local area networks (VLANs)).
[00112] The third exemplary ND implementation in Figure 8A is a hybrid network device 806, which includes both custom ASICs/special-purpose OS and COTS processors/standard OS in a single ND or a single card within an ND. In certain embodiments of such a hybrid network device, a platform VM (i.e., a VM that implements the functionality of the special-purpose network device 802) could provide for para-virtualization to the networking hardware present in the hybrid network device 806.
[00113] Regardless of the above exemplary implementations of an ND, when a single one of multiple VNEs implemented by an ND is being considered (e.g., only one of the VNEs is part of a given virtual network) or where only a single VNE is currently being implemented by an ND, the shortened term network element (NE) is sometimes used to refer to that VNE. Also in all of the above exemplary implementations, each of the VNEs (e.g., VNE(s) 830A-R, VNEs 860A-R, and those in the hybrid network device 806) receives data on the physical NIs (e.g., 816, 846) and forwards that data out the appropriate ones of the physical NIs (e.g., 816, 846). For example, a VNE implementing IP router functionality forwards IP packets on the basis of some of the IP header information in the IP packet; where IP header information includes source IP address, destination IP address, source port, destination port (where "source port" and
"destination port" refer herein to protocol ports, as opposed to physical ports of a ND), transport protocol (e.g., user datagram protocol (UDP), Transmission Control Protocol (TCP), and differentiated services code point (DSCP) values.
[00114] Figure 8C illustrates various exemplary ways in which VNEs may be coupled according to some embodiments of the invention. Figure 8C shows VNEs 870A.1-870A.P (and optionally VNEs 870A.Q-870A.R) implemented in ND 800A and VNE 870H.1 in ND 800H. In Figure 8C, VNEs 870A.1-P are separate from each other in the sense that they can receive packets from outside ND 800A and forward packets outside of ND 800A; VNE 870A.1 is coupled with VNE 870H.1, and thus they communicate packets between their respective NDs; VNEs 870A.2-870A.3 may optionally forward packets between themselves without forwarding them outside of the ND 800A; and VNE 870A.P may optionally be the first in a chain of VNEs that includes VNE 870A.Q followed by VNE 870A.R (this is sometimes referred to as dynamic service chaining, where each of the VNEs in the series of VNEs provides a different service - e.g., one or more layer 4-7 network services). While Figure 8C illustrates various exemplary relationships between the VNEs, alternative embodiments may support other relationships (e.g., more/fewer VNEs, more/fewer dynamic service chains, multiple different dynamic service chains with some common VNEs and some different VNEs).
[00115] The NDs of Figure 8A, for example, may form part of the Internet or a private network; and other electronic devices (not shown; such as end user devices including workstations, laptops, netbooks, tablets, palm tops, mobile phones, smartphones, phablets, multimedia phones, Voice Over Internet Protocol (VOIP) phones, terminals, portable media players, GPS units, wearable devices, gaming systems, set-top boxes, Internet enabled household appliances) may be coupled to the network (directly or through other networks such as access networks) to communicate over the network (e.g., the Internet or virtual private networks (VPNs) overlaid on (e.g., tunneled through) the Internet) with each other (directly or through servers) and/or access content and/or services. Such content and/or services are typically provided by one or more servers (not shown) belonging to a service/content provider or one or more end user devices (not shown) participating in a peer-to-peer (P2P) service, and may include, for example, public webpages (e.g., free content, store fronts, search services), private webpages (e.g., username/password accessed webpages providing email services), and/or corporate networks over VPNs. For instance, end user devices may be coupled (e.g., through customer premise equipment coupled to an access network (wired or wirelessly)) to edge NDs, which are coupled (e.g., through one or more core NDs) to other edge NDs, which are coupled to electronic devices acting as servers. However, through compute and storage virtualization, one or more of the electronic devices operating as the NDs in Figure 8A may also host one or more such servers (e.g., in the case of the general purpose network device 804, one or more of the software instances 862A-R may operate as servers; the same would be true for the hybrid network device 806; in the case of the special-purpose network device 802, one or more such servers could also be run on a virtualization layer executed by the compute resource(s) 812); in which case the servers are said to be co-located with the VNEs of that ND.
[00116] A virtual network is a logical abstraction of a physical network (such as that in Figure 8A) that provides network services (e.g., L2 and/or L3 services). A virtual network can be implemented as an overlay network (sometimes referred to as a network virtualization overlay) that provides network services (e.g., layer 2 (L2, data link layer) and/or layer 3 (L3, network layer) services) over an underlay network (e.g., an L3 network, such as an Internet Protocol (IP) network that uses tunnels (e.g., generic routing encapsulation (GRE), layer 2 tunneling protocol (L2TP), IPSec) to create the overlay network).
[00117] A network virtualization edge (NVE) sits at the edge of the underlay network and participates in implementing the network virtualization; the network-facing side of the NVE uses the underlay network to tunnel frames to and from other NVEs; the outward-facing side of the NVE sends and receives data to and from systems outside the network. A virtual network instance (VNI) is a specific instance of a virtual network on a NVE (e.g., a NE/VNE on an ND, a part of a NE/VNE on a ND where that NE/VNE is divided into multiple VNEs through emulation); one or more VNIs can be instantiated on an NVE (e.g., as different VNEs on an ND). A virtual access point (VAP) is a logical connection point on the NVE for connecting external systems to a virtual network; a VAP can be physical or virtual ports identified through logical interface identifiers (e.g., a VLAN ID).
[00118] Examples of network services include: 1) an Ethernet LAN emulation service (an Ethernet-based multipoint service similar to an Internet Engineering Task Force (IETF) Multiprotocol Label Switching (MPLS) or Ethernet VPN (EVPN) service) in which external systems are interconnected across the network by a LAN environment over the underlay network (e.g., an NVE provides separate L2 VNIs (virtual switching instances) for different such virtual networks, and L3 (e.g., IP/MPLS) tunneling encapsulation across the underlay network); and 2) a virtualized IP forwarding service (similar to IETF IP VPN (e.g., Border Gateway Protocol (BGP)/MPLS IP VPN) from a service definition perspective) in which external systems are interconnected across the network by an L3 environment over the underlay network (e.g., an NVE provides separate L3 VNIs (forwarding and routing instances) for different such virtual networks, and L3 (e.g., IP/MPLS) tunneling encapsulation across the underlay network)). Network services may also include quality of service capabilities (e.g., traffic classification marking, traffic conditioning and scheduling), security capabilities (e.g., filters to protect customer premises from network-originated attacks, to avoid malformed route announcements), and management capabilities (e.g., fault detection and processing).
[00119] Figure 8D illustrates a centralized approach 874 (also known as software defined networking (SDN)) that decouples the system that makes decisions about where traffic is sent from the underlying systems that forward traffic to the selected destination. The illustrated centralized approach 874 has the responsibility for the generation of reachability and forwarding information in a centralized control plane 876 (sometimes referred to as an SDN control module, controller, network controller, OpenFlow controller, SDN controller, control plane node, network virtualization authority, or management control entity), and thus the process of neighbor discovery and topology discovery is centralized. The centralized control plane 876 has a south bound interface 882 with a data plane 880 (sometimes referred to as the infrastructure layer, network forwarding plane, or forwarding plane (which should not be confused with a ND forwarding plane)) that includes the NEs 870A-H (sometimes referred to as switches, forwarding elements, data plane elements, or nodes). The centralized control plane 876 includes a network controller 878, which includes a centralized reachability and forwarding information module 879 that determines the reachability within the network and distributes the forwarding information to the NEs 870A-H of the data plane 880 over the south bound interface 882 (which may use the OpenFlow protocol). Thus, the network intelligence is centralized in the centralized control plane 876 executing on electronic devices that are typically separate from the NDs.
[00120] For example, where the special-purpose network device 802 is used in the data plane 880, each of the control communication and configuration module(s) 832A-R of the ND control plane 824 typically include a control agent that provides the VNE side of the south bound interface 882. In this case, the ND control plane 824 (the compute resource(s) 812 executing the control communication and configuration module(s) 832A-R) performs its responsibility for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) through the control agent communicating with the centralized control plane 876 to receive the forwarding information (and in some cases, the reachability information) from the centralized reachability and forwarding information module 879 (it should be understood that in some embodiments of the invention, the control communication and configuration module(s) 832A-R, in addition to communicating with the centralized control plane 876, may also play some role in determining reachability and/or calculating forwarding information - albeit less so than in the case of a distributed approach; such embodiments are generally considered to fall under the centralized approach 874, but may also be considered a hybrid approach).
[00121] While the above example uses the special-purpose network device 802, the same centralized approach 874 can be implemented with the general purpose network device 804 (e.g., each of the VNE 860A-R performs its responsibility for controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) by communicating with the centralized control plane 876 to receive the forwarding information (and in some cases, the reachability information) from the centralized reachability and forwarding information module 879; it should be understood that in some embodiments of the invention, the VNEs 860A-R, in addition to communicating with the centralized control plane 876, may also play some role in determining reachability and/or calculating forwarding information - albeit less so than in the case of a distributed approach) and the hybrid network device 806. In fact, the use of SDN techniques can enhance the NFV techniques typically used in the general-purpose network device 804 or hybrid network device 806 implementations as NFV is able to support SDN by providing an infrastructure upon which the SDN software can be run, and NFV and SDN both aim to make use of commodity server hardware and physical switches.
[00122] Figure 8D also shows that the centralized control plane 876 has a north bound interface 884 to an application layer 886, in which resides application(s) 888. The centralized control plane 876 has the ability to form virtual networks 892 (sometimes referred to as a logical forwarding plane, network services, or overlay networks (with the NEs 870A-H of the data plane 880 being the underlay network)) for the application(s) 888. Thus, the centralized control plane 876 maintains a global view of all NDs and configured NEs/VNEs, and it maps the virtual networks to the underlying NDs efficiently (including maintaining these mappings as the physical network changes either through hardware (ND, link, or ND component) failure, addition, or removal).
[00123] The VNF congestion detector 124 discussed herein above may be a part of the network controller 878 (which may perform functions similar to those of the network controller 120). Another VNF congestion detector 889 may be a part of the application(s) 888.
[00124] While Figure 8D illustrates the simple case where each of the NDs 800A-H
implements a single NE 870A-H, it should be understood that the network control approaches described with reference to Figure 8D also work for networks where one or more of the NDs 800A-H implement multiple VNEs (e.g., VNEs 830A-R, VNEs 860A-R, those in the hybrid network device 806). Alternatively or in addition, the network controller 878 may also emulate the implementation of multiple VNEs in a single ND. Specifically, instead of (or in addition to) implementing multiple VNEs in a single ND, the network controller 878 may present the implementation of a VNE/NE in a single ND as multiple VNEs in the virtual networks 892 (all in the same one of the virtual network(s) 892, each in different ones of the virtual network(s) 892, or some combination). For example, the network controller 878 may cause an ND to implement a single VNE (a NE) in the underlay network, and then logically divide up the resources of that NE within the centralized control plane 876 to present different VNEs in the virtual network(s) 892 (where these different VNEs in the overlay networks are sharing the resources of the single VNE/NE implementation on the ND in the underlay network).
[00125] On the other hand, Figures 8E and 8F respectively illustrate exemplary abstractions of NEs and VNEs that the network controller 878 may present as part of different ones of the virtual networks 892. Figure 8E illustrates the simple case where each of the NDs 800A-H implements a single NE 870A-H (see Figure 8D), but the centralized control plane 876 has abstracted multiple of the NEs in different NDs (the NEs 870A-C and G-H) into (to represent) a single NE 870I in one of the virtual network(s) 892 of Figure 8D, according to some embodiments of the invention. Figure 8E shows that in this virtual network, the NE 870I is coupled to NE 870D and 870F, which are both still coupled to NE 870E.
[00126] Figure 8F illustrates a case where multiple VNEs (VNE 870A.1 and VNE 870H.1) are implemented on different NDs (ND 800A and ND 800H) and are coupled to each other, and where the centralized control plane 876 has abstracted these multiple VNEs such that they appear as a single VNE 870T within one of the virtual networks 892 of Figure 8D, according to some embodiments of the invention. Thus, the abstraction of a NE or VNE can span multiple NDs.
[00127] While some embodiments of the invention implement the centralized control plane 876 as a single entity (e.g., a single instance of software running on a single electronic device), alternative embodiments may spread the functionality across multiple entities for redundancy and/or scalability purposes (e.g., multiple instances of software running on different electronic devices).
[00128] Similar to the network device implementations, the electronic device(s) running the centralized control plane 876, and thus the network controller 878 including the centralized reachability and forwarding information module 879, may be implemented in a variety of ways (e.g., a special purpose device, a general-purpose (e.g., COTS) device, or a hybrid device). These electronic device(s) would similarly include compute resource(s), a set of one or more physical NICs, and a non-transitory machine-readable storage medium having stored thereon the centralized control plane software. For instance, Figure 9 illustrates a general-purpose control plane device 904 including hardware 940 comprising a set of one or more processor(s) 942 (which are often COTS processors) and network interface controller(s) 944 (NICs; also known as network interface cards) (which include physical NIs 946), as well as non-transitory machine readable storage media 948 having stored therein centralized control plane (CCP) software 950.
[00129] In embodiments that use compute virtualization, the processor(s) 942 typically execute software to instantiate a virtualization layer 954 (e.g., in one embodiment the virtualization layer 954 represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple instances 962A-R called software containers (representing separate user spaces and also called virtualization engines, virtual private servers, or jails) that may each be used to execute a set of one or more applications; in another embodiment the virtualization layer 954 represents a hypervisor (sometimes referred to as a virtual machine monitor (VMM)) or a hypervisor executing on top of a host operating system, and an application is run on top of a guest operating system within an instance 962A-R called a virtual machine (which in some cases may be considered a tightly isolated form of software container) that is run by the hypervisor ; in another embodiment, an application is implemented as a unikernel, which can be generated by compiling directly with an application only a limited set of libraries (e.g., from a library operating system (LibOS) including drivers/libraries of OS services) that provide the particular OS services needed by the application, and the unikernel can run directly on hardware 940, directly on a hypervisor represented by virtualization layer 954 (in which case the unikernel is sometimes described as running within a LibOS virtual machine), or in a software container represented by one of instances 962A-R). Again, in embodiments where compute virtualization is used, during operation an instance of the CCP software 950 (illustrated as CCP instance 976A) is executed (e.g., within the instance 962A) on the virtualization layer 954. In embodiments where compute virtualization is not used, the CCP instance 976A is executed, as a unikernel or on top of a host operating system, on the "bare metal" general-purpose control plane device 904. The instantiation of the CCP instance 976A, as well as the virtualization layer 954 and instances 962A-R if implemented, are collectively referred to as software instance(s) 952.
[00130] In some embodiments, the CCP instance 976A includes a network controller instance 978. The network controller instance 978 includes a centralized reachability and forwarding information module instance 979 (which is a middleware layer providing the context of the network controller 878 to the operating system and communicating with the various NEs), and a CCP application layer 980 (sometimes referred to as an application layer) over the middleware layer (providing the intelligence required for various network operations such as protocols, network situational awareness, and user interfaces). At a more abstract level, this CCP application layer 980 within the centralized control plane 876 works with virtual network view(s) (logical view(s) of the network) and the middleware layer provides the conversion from the virtual networks to the physical view.
[00131] The centralized control plane 876 transmits relevant messages to the data plane 880 based on CCP application layer 980 calculations and middleware layer mapping for each flow. A flow may be defined as a set of packets whose headers match a given pattern of bits; in this sense, traditional IP forwarding is also flow-based forwarding where the flows are defined by the destination IP address for example; however, in other implementations, the given pattern of bits used for a flow definition may include more fields (e.g., 10 or more) in the packet headers. Different NDs/NEs/VNEs of the data plane 880 may receive different messages, and thus different forwarding information. The data plane 880 processes these messages and programs the appropriate flow information and corresponding actions in the forwarding tables (sometimes referred to as flow tables) of the appropriate NE/VNEs, and then the NEs/VNEs map incoming packets to flows represented in the forwarding tables and forward packets based on the matches in the forwarding tables.
[00132] Standards such as OpenFlow define the protocols used for the messages, as well as a model for processing the packets. The model for processing packets includes header parsing, packet classification, and making forwarding decisions. Header parsing describes how to interpret a packet based upon a well-known set of protocols. Some protocol fields are used to build a match structure (or key) that will be used in packet classification (e.g., a first key field could be a source media access control (MAC) address, and a second key field could be a destination MAC address).
[00133] Packet classification involves executing a lookup in memory to classify the packet by determining which entry (also referred to as a forwarding table entry or flow entry) in the forwarding tables best matches the packet based upon the match structure, or key, of the forwarding table entries. It is possible that many flows represented in the forwarding table entries can correspond/match to a packet; in this case the system is typically configured to determine one forwarding table entry from the many according to a defined scheme (e.g., selecting a first forwarding table entry that is matched). Forwarding table entries include both a specific set of match criteria (a set of values or wildcards, or an indication of what portions of a packet should be compared to a particular value/values/wildcards, as defined by the matching capabilities - for specific fields in the packet header, or for some other packet content), and a set of one or more actions for the data plane to take on receiving a matching packet. For example, an action may be to push a header onto the packet, forward the packet using a particular port, flood the packet, or simply drop the packet. Thus, a forwarding table entry for IPv4/IPv6 packets with a particular transmission control protocol (TCP) destination port could contain an action specifying that these packets should be dropped.
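Schematically, a forwarding table entry pairs match criteria with actions; the following Python sketch illustrates the lookup idea only and does not follow the OpenFlow wire format or any particular controller API (field names and action strings are hypothetical):

from dataclasses import dataclass

@dataclass
class FlowEntry:
    match: dict        # header field -> required value; absent fields act as wildcards
    actions: list      # e.g., ["forward:port2"], ["flood"], or ["drop"]
    priority: int = 0

def classify(packet_headers, table):
    # Return the highest-priority entry whose match fields all equal the
    # corresponding packet header values; None when no entry matches.
    candidates = [e for e in table
                  if all(packet_headers.get(k) == v for k, v in e.match.items())]
    return max(candidates, key=lambda e: e.priority, default=None)

# Example: an entry that drops TCP packets to a particular destination port.
drop_tcp_80 = FlowEntry(match={"ip_proto": 6, "tcp_dst": 80}, actions=["drop"], priority=10)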
[00134] Making forwarding decisions and performing actions occurs, based upon the forwarding table entry identified during packet classification, by executing the set of actions identified in the matched forwarding table entry on the packet.
[00135] However, when an unknown packet (for example, a "missed packet" or a "match-miss" as used in OpenFlow parlance) arrives at the data plane 880, the packet (or a subset of the packet header and content) is typically forwarded to the centralized control plane 876. The centralized control plane 876 will then program forwarding table entries into the data plane 880 to accommodate packets belonging to the flow of the unknown packet. Once a specific forwarding table entry has been programmed into the data plane 880 by the centralized control plane 876, the next packet with matching credentials will match that forwarding table entry and take the set of actions associated with that matched entry.

[00136] The VNF congestion detector 124 may be implemented in the CCP software 950 in one embodiment, and the instance of the VNF congestion detector may be a VNF congestion detector instance 982 within the CCP application layer 980.
[00137] A network interface (NI) may be physical or virtual; and in the context of IP, an interface address is an IP address assigned to a NI, be it a physical NI or virtual NI. A virtual NI may be associated with a physical NI, with another virtual interface, or stand on its own (e.g., a loopback interface, a point-to-point protocol interface). A NI (physical or virtual) may be numbered (a NI with an IP address) or unnumbered (a NI without an IP address). A loopback interface (and its loopback address) is a specific type of virtual NI (and IP address) of a
NE/VNE (physical or virtual) often used for management purposes; where such an IP address is referred to as the nodal loopback address. The IP address(es) assigned to the NI(s) of a ND are referred to as IP addresses of that ND; at a more granular level, the IP address(es) assigned to NI(s) assigned to a NE/VNE implemented on a ND can be referred to as IP addresses of that NE/VNE.
[00138] Certain NDs (e.g., certain edge NDs) internally represent end user devices (or sometimes customer premise equipment (CPE) such as a residential gateway (e.g., a router, modem)) using subscriber circuits. A subscriber circuit uniquely identifies within the ND a subscriber session and typically exists for the lifetime of the session. Thus, a ND typically allocates a subscriber circuit when the subscriber connects to that ND, and correspondingly deallocates that subscriber circuit when that subscriber disconnects. Each subscriber session represents a distinguishable flow of packets communicated between the ND and an end user device (or sometimes CPE such as a residential gateway or modem) using a protocol, such as the point-to-point protocol over another protocol (PPPoX) (e.g., where X is Ethernet or Asynchronous Transfer Mode (ATM)), Ethernet, 802.1Q Virtual LAN (VLAN), Internet Protocol, or ATM. A subscriber session can be initiated using a variety of mechanisms (e.g., manual provisioning, dynamic host configuration protocol (DHCP), DHCP/client-less internet protocol service (CLIPS), or Media Access Control (MAC) address tracking). For example, the point-to-point protocol (PPP) is commonly used for digital subscriber line (DSL) services and requires installation of a PPP client that enables the subscriber to enter a username and a password, which in turn may be used to select a subscriber record. When DHCP is used (e.g., for cable modem services), a username typically is not provided; but in such situations other information (e.g., information that includes the MAC address of the hardware in the end user device (or CPE)) is provided. The use of DHCP and CLIPS on the ND captures the MAC addresses and uses these addresses to distinguish subscribers and access their subscriber records.
[00139] A virtual circuit (VC), synonymous with virtual connection and virtual channel, is a connection-oriented communication service that is delivered by means of packet mode communication. Virtual circuit communication resembles circuit switching, since both are connection oriented, meaning that in both cases data is delivered in correct order and signaling overhead is required during a connection establishment phase. Virtual circuits may exist at different layers. For example, at layer 4, a connection-oriented transport layer protocol such as Transmission Control Protocol (TCP) may rely on a connectionless packet switching network layer protocol such as IP, where different packets may be routed over different paths, and thus be delivered out of order. Where a reliable virtual circuit is established with TCP on top of the underlying unreliable and connectionless IP protocol, the virtual circuit is identified by the source and destination network socket address pair, i.e., the sender and receiver IP addresses and port numbers. However, a virtual circuit is possible since TCP includes segment numbering and reordering on the receiver side to prevent out-of-order delivery. Virtual circuits are also possible at Layer 3 (network layer) and Layer 2 (datalink layer); such virtual circuit protocols are based on connection-oriented packet switching, meaning that data is always delivered along the same network path, i.e., through the same NEs/VNEs. In such protocols, the packets are not routed individually and complete addressing information is not provided in the header of each data packet; only a small virtual channel identifier (VCI) is required in each packet, and routing information is transferred to the NEs/VNEs during the connection establishment phase; switching only involves looking up the virtual channel identifier in a table rather than analyzing a complete address. Examples of network layer and datalink layer virtual circuit protocols, where data always is delivered over the same path, include: X.25, where the VC is identified by a virtual channel identifier (VCI); Frame Relay, where the VC is identified by a VCI; Asynchronous Transfer Mode (ATM), where the circuit is identified by a virtual path identifier (VPI) and virtual channel identifier (VCI) pair; General Packet Radio Service (GPRS); and Multiprotocol Label Switching (MPLS), which can be used for IP over virtual circuits (each circuit is identified by a label).
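For illustration only, the label-based forwarding described above can be sketched in a few lines of Python; the table contents and names here are hypothetical and are not taken from any protocol specification:

# Per-NE/VNE switching table, populated during the connection establishment
# phase: (incoming port, incoming VCI) -> (outgoing port, outgoing VCI).
vc_table = {
    (1, 42): (3, 17),
    (2, 7): (1, 99),
}

def switch_packet(in_port, in_vci):
    # Forwarding needs only a VCI lookup; no complete destination
    # address is parsed from the packet header.
    out_port, out_vci = vc_table[(in_port, in_vci)]
    return out_port, out_vci  # the packet leaves with the rewritten label

print(switch_packet(1, 42))  # -> (3, 17)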
[00140] Certain NDs (e.g., certain edge NDs) use a hierarchy of circuits. The leaf nodes of the hierarchy of circuits are subscriber circuits. The subscriber circuits have parent circuits in the hierarchy that typically represent aggregations of multiple subscriber circuits, and thus the network segments and elements used to provide access network connectivity of those end user devices to the ND. These parent circuits may represent physical or logical aggregations of subscriber circuits (e.g., a virtual local area network (VLAN), a permanent virtual circuit (PVC) (e.g., for Asynchronous Transfer Mode (ATM)), a circuit-group, a channel, a pseudo-wire, a physical NI of the ND, and a link aggregation group). A circuit-group is a virtual construct that allows various sets of circuits to be grouped together for configuration purposes, for example aggregate rate control. A pseudo-wire is an emulation of a layer 2 point-to-point connection-oriented service. A link aggregation group is a virtual construct that merges multiple physical NIs for purposes of bandwidth aggregation and redundancy. Thus, the parent circuits physically or logically encapsulate the subscriber circuits.
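As a purely illustrative sketch of such a hierarchy (the class and names below are hypothetical), subscriber circuits sit at the leaves while parent circuits aggregate them:

class Circuit:
    def __init__(self, name, kind):
        self.name, self.kind, self.children = name, kind, []

    def add(self, child):
        self.children.append(child)
        return child

    def leaves(self):
        # Subscriber circuits are the leaf nodes of the hierarchy.
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

ni = Circuit("eth0", "physical NI")          # a parent circuit
vlan = ni.add(Circuit("vlan100", "VLAN"))    # a logical aggregation
subs = [vlan.add(Circuit(f"sub{i}", "subscriber")) for i in range(3)]
print([leaf.name for leaf in ni.leaves()])   # ['sub0', 'sub1', 'sub2']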
[00141] Each VNE (e.g., a virtual router or a virtual bridge, which may act as a virtual switch instance in a Virtual Private LAN Service (VPLS)) is typically independently administrable. For example, in the case of multiple virtual routers, each of the virtual routers may share system resources but is separate from the other virtual routers with regard to its management domain, AAA (authentication, authorization, and accounting) name space, IP address, and routing database(s). Multiple VNEs may be employed in an edge ND to provide direct network access and/or different classes of services for subscribers of service and/or content providers.
[00142] Within certain NDs, "interfaces" that are independent of physical NIs may be configured as part of the VNEs to provide higher-layer protocol and service information (e.g., Layer 3 addressing). The subscriber records in the AAA server identify, in addition to the other subscriber configuration requirements, to which context (e.g., which of the VNEs/NEs) the corresponding subscribers should be bound within the ND. As used herein, a binding forms an association between a physical entity (e.g., physical NI, channel) or a logical entity (e.g., circuit such as a subscriber circuit or logical circuit (a set of one or more subscriber circuits)) and a context's interface over which network protocols (e.g., routing protocols, bridging protocols) are configured for that context. Subscriber data flows on the physical entity when some higher-layer protocol interface is configured and associated with that physical entity.
[00143] While embodiments of the invention have been described in relation to an SDN system, embodiments of the invention are not limited to SDN systems, and a controlling network device may perform the methods described in Figures 6-7.
[00144] In addition, while the flow diagrams in the figures show a particular order of operations performed by certain embodiments of the invention, it should be understood that such order is exemplary (e.g., alternative embodiments may perform the operations in a different order, combine certain operations, overlap certain operations, etc.).
[00145] While the invention has been described in terms of several embodiments, those skilled in the art will recognize that the invention is not limited to the embodiments described, and can be practiced with modification and alteration within the spirit and scope of the appended claims. The description is thus to be regarded as illustrative instead of limiting.

Claims

What is claimed is:
1. A method implemented in an electronic device to detect a congestion state of a virtual network function (VNF), the method comprising:
determining (602) that a virtual network function (VNF) implemented in a network device is in a congestion state based on packet process performance of the VNF, the packet process performance corresponding to a service level agreement;
obtaining (604), from the network device, values of a plurality of VNF status parameters
corresponding to measurements of the packet process performance;
deriving (606) a plurality of coefficients to apply to the values of the plurality of the VNF status parameters to arrive at an indication of the congestion state;
determining (610) a subsequent congestion state of the VNF based on subsequent values of the plurality of the VNF status parameters and the plurality of coefficients; and
providing (612) a notification indicating the subsequent congestion state of the VNF.
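For concreteness, the method of claim 1 can be sketched as follows. This is a minimal illustration, not a prescribed implementation: the SLA thresholds, function names, and hyperparameters are all hypothetical, and a plain regularized logistic model stands in for whatever model an embodiment uses.

import numpy as np

# Hypothetical SLA thresholds on packet process performance (illustrative values).
SLA = {"delay_ms": 10.0, "jitter_ms": 2.0, "drop_rate": 0.001}

def label_congested(perf):
    # Step 602: the VNF is deemed congested when measured packet process
    # performance violates the service level agreement.
    return int(perf["delay_ms"] > SLA["delay_ms"]
               or perf["jitter_ms"] > SLA["jitter_ms"]
               or perf["drop_rate"] > SLA["drop_rate"])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_coefficients(X, y, lam=0.1, lr=0.1, iters=2000):
    # Steps 604-606: derive coefficients mapping VNF status parameter values
    # (e.g., CPU, memory, bandwidth, storage utilization; first column of X is
    # a constant 1 for the bias) to the congestion indication.
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        grad = X.T @ (sigmoid(X @ theta) - y) / m
        reg = (lam / m) * theta
        reg[0] = 0.0  # regularize only a portion of the coefficients (not the bias)
        theta -= lr * (grad + reg)
    return theta

def predict_congested(theta, x):
    # Step 610: classify a subsequent sample of VNF status parameter values
    # laid out like the training rows of X (leading 1, then the parameters).
    return sigmoid(x @ theta) > 0.5

# Step 612 would then notify, e.g., a management system, of the predicted state.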
2. The method of claim 1, wherein the measurements of the packet process performance include values of one or more parameters indicating a packet delay measure, a packet jitter measure, and a packet drop measure.
3. The method of claim 1, wherein the plurality of the VNF status parameters includes one or more parameters indicating a processor utilization measure, a memory utilization measure, a bandwidth utilization measure, and a data storage utilization measure.
4. The method of claim 1, further comprising:
determining (608) that the plurality of the coefficients have converged prior to the determination of the subsequent congestion state of the VNF.
5. The method of claim 1, wherein the derivation of the plurality of coefficients comprises applying logistic regression that searches for a minimum cost of the logistic regression in consideration of regularization of at least a portion of the plurality of coefficients.
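One standard form of such a regularized cost, given here for concreteness (the claim does not fix the exact expression), is:

J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[\, y^{(i)}\log h_\theta\!\left(x^{(i)}\right) + \left(1 - y^{(i)}\right)\log\!\left(1 - h_\theta\!\left(x^{(i)}\right)\right) \right] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2, \qquad h_\theta(x) = \frac{1}{1 + e^{-\theta^{\mathsf{T}} x}}

Here y^{(i)} is the SLA-based congestion label, x^{(i)} collects the VNF status parameter values, and starting the regularization sum at j = 1 (excluding the bias \theta_0) is one way to regularize only "at least a portion" of the coefficients; searching for the minimum of J(\theta) yields the derived coefficients.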
6. The method of claim 1, further comprising:
updating (702) the measurements of the packet process performance;
obtaining (704), from the network device, values of the plurality of the VNF status parameters corresponding to the updated measurements of the packet process performance;
determining (710) whether the VNF is in the congestion state based on the updated
measurements of the packet process performance; and
updating (712) the plurality of coefficients to apply to the values of the plurality of the VNF status parameters to arrive at an indication of the congestion state.
7. The method of claim 6, further comprising:
prior to determining whether the VNF is in the congestion state based on the updated
measurements of the packet process performance, removing (706) earlier measurements of the packet process performance and corresponding values of the plurality of the VNF status parameters so that a pre-defined number of most recent measurements remain.
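Continuing the earlier sketch (reusing its hypothetical helpers label_congested and fit_coefficients), the update loop of claims 6 and 7 might keep a fixed-size window of the most recent measurements; the window size here is an arbitrary placeholder:

import numpy as np
from collections import deque

WINDOW = 500  # hypothetical pre-defined number of most recent measurements
samples = deque(maxlen=WINDOW)  # step 706: the oldest entries fall off automatically

def ingest(perf, status_row):
    # Steps 702-704: record an updated performance measurement together
    # with the matching VNF status parameter values.
    samples.append((status_row, label_congested(perf)))

def retrain():
    # Steps 710-712: re-evaluate congestion over the retained window and
    # refresh the coefficients applied to the VNF status parameters.
    X = np.array([row for row, _ in samples])
    y = np.array([lab for _, lab in samples], dtype=float)
    return fit_coefficients(X, y)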
8. The method of claim 1, further comprising:
moving (614) a traffic flow from the VNF to another instance of the VNF or moving the traffic flow from the VNF to another type of VNF.
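The remediation of claim 8 could be driven by a routine like the following; the controller interface is entirely hypothetical, since the claims do not prescribe an API:

def remediate(controller, flow_id, congested_vnf, candidates):
    # Step 614: steer the traffic flow away from the congested VNF to another
    # instance of the same VNF, or to another type of VNF.
    for target in candidates:
        if not controller.is_congested(target):  # hypothetical query
            controller.reroute(flow_id, src=congested_vnf, dst=target)  # hypothetical call
            return target
    return None  # no uncongested target available; leave the flow in place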
9. An electronic device to detect a congestion state of a virtual network function (VNF), the electronic device comprising:
a non-transitory machine readable storage medium (948) to store instructions; and
a processor (942) coupled with the non-transitory machine readable storage medium (948) to process the stored instructions to:
determine that a virtual network function (VNF) implemented in a network device is in a congestion state based on packet process performance of the VNF, the packet process performance corresponding to a service level agreement,
obtain, from the network device, values of a plurality of VNF status parameters corresponding to measurements of the packet process performance,
derive a plurality of coefficients to apply to the values of the plurality of the VNF status parameters to arrive at an indication of the congestion state,
determine a subsequent congestion state of the VNF based on subsequent values of the plurality of the VNF status parameters and the plurality of coefficients, and
provide a notification indicating the subsequent congestion state of the VNF.
10. The electronic device of claim 9, wherein the measurements of the packet process performance include values of one or more parameters indicating a packet delay measure, a packet jitter measure, and a packet drop measure.
11. The electronic device of claim 9, wherein the plurality of VNF status parameters includes one or more parameters indicating a processor utilization measure, a memory utilization measure, a bandwidth utilization measure, and a data storage utilization measure.
12. The electronic device of claim 9, wherein the processor is further to:
determine that the plurality of the coefficients have converged prior to the determination of the subsequent congestion state of the VNF.
13. The electronic device of claim 9, wherein the derivation of the plurality of coefficients comprises applying logistic regression that searches for a minimum cost of the logistic regression in consideration of regularization of at least a portion of the plurality of coefficients.
14. The electronic device of claim 9, wherein the processor is further to:
update the measurements of the packet process performance,
obtain, from the network device, values of the plurality of the VNF status parameters corresponding to the updated measurements of the packet process performance,
determine whether the VNF is in the congestion state based on the updated measurements of the packet process performance, and
update the plurality of coefficients to apply to the values of the plurality of the VNF status
parameters to arrive at an indication of the congestion state.
15. The electronic device of claim 9, wherein the electronic device comprises a software-defined networking (SDN) controller.
16. A non-transitory machine readable storage medium (948) that provides instructions, which when executed by a processor (942) of an electronic device (904), cause said processor to perform operations comprising:
determining (602) that a virtual network function (VNF) implemented in a network device is in a congestion state based on packet process performance of the VNF, the packet process performance corresponding to a service level agreement;
obtaining (604), from the network device, values of a plurality of VNF status parameters corresponding to measurements of the packet process performance;
deriving (606) a plurality of coefficients to apply to the values of the plurality of the VNF status parameters to arrive at an indication of the congestion state;
determining (610) a subsequent congestion state of the VNF based on subsequent values of the plurality of the VNF status parameters and the plurality of coefficients; and
providing (612) a notification indicating the subsequent congestion state of the VNF.
17. The non-transitory machine readable storage medium of claim 16, wherein the measurements of the packet process performance include values of one or more parameters indicating a packet delay measure, a packet jitter measure, and a packet drop measure.
18. The non-transitory machine readable storage medium of claim 16, wherein the plurality of the VNF status parameters includes one or more parameters indicating a processor utilization measure, a memory utilization measure, a bandwidth utilization measure, and a data storage utilization measure.
19. The non-transitory machine readable storage medium of claim 16, wherein the derivation of the plurality of coefficients comprises applying logistic regression that searches for a minimum cost of the logistic regression in consideration of regularization of at least a portion of the plurality of coefficients.
20. The non-transitory machine readable storage medium of claim 16, the operations further comprising:
updating (702) the measurements of the packet process performance;
obtaining (704), from the network device, values of the plurality of the VNF status parameters corresponding to the updated measurements of the packet process performance;
determining (710) whether the VNF is in the congestion state based on the updated
measurements of the packet process performance; and
updating (712) the plurality of coefficients to apply to the values of the plurality of the VNF status parameters to arrive at an indication of the congestion state.