EP2865147B1 - Guarantee of predictable and quantifiable network performance - Google Patents

Guarantee of predictable and quantifiable network performance

Info

Publication number
EP2865147B1
Authority
EP
European Patent Office
Prior art keywords
congestion
message processor
act
bandwidth
data flows
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP13730765.8A
Other languages
German (de)
French (fr)
Other versions
EP2865147A1 (en)
Inventor
Changhoon Kim
Albert G. Greenberg
Alireza Dabagh
Yousef A. Khalidi
Deepak Bansal
Srikanth Kandula
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Technology Licensing LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Technology Licensing LLC
Publication of EP2865147A1
Application granted
Publication of EP2865147B1
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/11 Identifying congestion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/12 Avoiding congestion; Recovering from congestion
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/26 Flow control; Congestion control using explicit feedback to the source, e.g. choke packets
    • H04L47/263 Rate modification at the source after receiving feedback
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/70 Admission control; Resource allocation
    • H04L47/76 Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions
    • H04L47/762 Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions triggered by the network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/70 Admission control; Resource allocation
    • H04L47/82 Miscellaneous aspects
    • H04L47/822 Collecting or measuring resource availability data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/70 Virtual switches
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/16 Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45595 Network integration; Enabling network access in virtual machine instances
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/50 Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Definitions

  • Computer systems and related technology affect many aspects of society. Indeed, the computer system's ability to process information has transformed the way we live and work. Computer systems now commonly perform a host of tasks (e.g., word processing, scheduling, accounting, etc.) that prior to the advent of the computer system were performed manually. More recently, computer systems have been coupled to one another and to other electronic devices to form both wired and wireless computer networks over which the computer systems and other electronic devices can transfer electronic data. Accordingly, the performance of many computing tasks is distributed across a number of different computer systems and/or a number of different computing environments.
  • a cloud-service provider uses a common underlying physical network to host multiple customers' applications, sometimes referred to as "tenants".
  • a tenant can be a set of virtual machines (“VMs”) or application processes that is independently deployable and is solely owned by a single customer (i.e., subscription).
  • Reachability isolation can be used to mitigate direct interference between tenants.
  • reachability isolation is not sufficient, since a malicious or careless tenant can still interfere with other tenants in the network data plane by exchanging heavy traffic only among its own members (VMs).
  • US 2008/259798 A1 describes a shared memory switch and switch fabric architecture which employ partitions of the shared memory to implement multiple, independent virtual congestion domains, thereby allowing congestion to be handled for different classes of traffic independently.
  • the present invention extends to the ensuring of predictable and quantifiable networking performance.
  • Embodiments address networking congestion at a receiving computer system.
  • a computing system manages one or more message processors. For instance, in a virtual machine environment, a hypervisor manages one or more message processors.
  • a subscription bandwidth for a message processor is accessed. The subscription bandwidth indicates a quantitative and invariant minimum bandwidth for the message processor.
  • One or more data flows are received from a congestion free network core.
  • the one or more data flows are sent from sending message processors and directed to the receiving message processor.
  • the combined bandwidth of the one or more data flows is calculated.
  • the onset of congestion at the receiving computing system is detected.
  • at least one message processor associated with the one or more data flows is identified as a violator of the subscription bandwidth.
  • the at least one violating message processor is a sending message processor or a receiving message processor of one of the one or more data flows.
  • the extent of the violation by the at least one violating message processor is determined.
  • Feedback for delivery to sender side adaptive rate limiters corresponding to the at least one violating message processor is determined.
  • the feedback instructs the sender side adaptive rate limiters to reduce the bandwidth of the one or more data flows originating from the at least one violating message processor.
  • the feedback is sent onto the congestion free network core for delivery to the sender side adaptive rate limiters.
  • the present invention extends to methods, systems, and computer program products for ensuring predictable and quantifiable networking performance.
  • Embodiments address networking congestion at a computer system.
  • a computing system manages one or more message processors. For instance, in a virtual machine environment, a hypervisor manages one or more message processors.
  • a subscription bandwidth for a message processor is accessed. The subscription bandwidth indicates a quantitative and invariant minimum bandwidth for the message processor.
  • One or more data flows are received from a congestion free network core.
  • the one or more data flows are sent from sending message processors and directed to the message processor.
  • the combined bandwidth of the one or more data flows is calculated.
  • the onset of congestion at the receiving computing system is detected.
  • at least one message processor associated with the one or more data flows is identified as a violator of the subscription bandwidth.
  • the at least one violating message processor is a sending message processor or a receiving message processor of one of the one or more data flows.
  • the extent of the violation by the at least one violating message processor is determined.
  • Feedback for delivery to sender side adaptive rate limiters corresponding to the at least one violating message processor is determined.
  • the feedback instructs the sender side adaptive rate limiters to reduce the bandwidth of the one or more data flows originating from the at least one violating message processor.
  • the feedback is sent onto the congestion free network core for delivery to the sender side adaptive rate limiters.
  • Embodiments of the present invention may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed in greater detail below.
  • Embodiments within the scope of the present invention also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures.
  • Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system.
  • Computer-readable media that store computer-executable instructions are computer storage media (devices).
  • Computer-readable media that carry computer-executable instructions are transmission media.
  • embodiments of the invention can comprise at least two distinctly different kinds of computer-readable media: computer storage media (devices) and transmission media.
  • Computer storage media includes RAM, ROM, EEPROM, CD-ROM, solid state drives (“SSDs”) (e.g., based on RAM), Flash memory, phase-change memory (“PCM”), other types of memory, other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
  • a "network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices.
  • a network or another communications connection can include a network and/or data links which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.
  • program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to computer storage media (devices) (or vice versa).
  • computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a "NIC"), and then eventually transferred to computer system RAM and/or to less volatile computer storage media (devices) at a computer system.
  • computer storage media (devices) can be included in computer system components that also (or even primarily) utilize transmission media.
  • Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions.
  • the computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code.
  • the invention may be practiced in network computing environments with many types of computer system configurations, including, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, pagers, routers, switches, and the like.
  • the invention may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks.
  • program modules may be located in both local and remote memory storage devices.
  • cloud computing is defined as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be provisioned and released with reduced management effort or service provider interaction.
  • a cloud model can be composed of various characteristics (e.g., on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, etc), service models (e.g., Software as a Service (“SaaS”), Platform as a Service (“PaaS”), Infrastructure as a Service (“IaaS”), and deployment models (e.g., private cloud, community cloud, public cloud, hybrid cloud, etc.).
  • hypervisor (or virtual machine manager (“VMM”)) is defined as a component that permits multiple operating system instances (or Virtual Machines (“VMs”)) to share a hardware host.
  • a hypervisor can run directly on a host's hardware (type 1) or on top of an operating system running on a host's hardware (type 2).
  • a hypervisor presents a virtual operating platform and manages the execution of operating system instances. For example, through virtualization a hypervisor can present individual Virtual Network Interface Cards ("VNICs”) to a number of different operating system instances based on the hardware of an underlying Network Interface Card (“NIC").
  • a hypervisor controls the allocation of host processes and resources to each operating system instance to avoid disruptions between the operating system instances. Hypervisors can be used on machines in a cloud computing environment.
  • Embodiments of the invention combine a congestion free network core with a hypervisor based (i.e., edge-based) throttling design.
  • a lightweight shim layer in a hypervisor can adaptively throttle the rate of VM-to-VM traffic flow. Regulation of traffic flow takes into account the speed of VM ports and congestion state visible to receiving-end hosts.
  • a hypervisor based approach has increased simplicity and increased scalability in network Quality of Service (“QoS”) mechanisms.
  • Throttling VM-to-VM traffic promotes fairness enforcement (i.e., regulating connections for different protocols, such as, User Datagram Protocol (“UDP”) and Transmission Control Protocol (“TCP”)).
  • Throttling VM-to-VM traffic also provides a new measure of fairness aligned with per-VM hourly charging models used in cloud based environments.
  • Figure 1 illustrates an example computer architecture 100 that facilitates ensuring predictable and quantifiable networking performance.
  • computer architecture 100 includes computing systems 111, 121, 131, and 141 in a general embodiment.
  • the computing systems are hypervisors 111, 121, 131, and 141.
  • each computing system 111, 121, 131 and 141 manages message processors.
  • each hypervisor manages one or more virtual machines, the virtual machines representing an example of a message processor.
  • hypervisor 111 manages virtual machines 114A and 114B
  • hypervisor 121 manages virtual machine 124
  • hypervisor 131 manages virtual machines 134A and 134B
  • hypervisor 141 manages virtual machines 144A and 144B.
  • Hypervisors 111, 121, 131, and 141 are connected to congestion free network core 101.
  • Each of the depicted components as well as any other connected computer systems and their components, can create message related data and exchange message related data (e.g., Internet Protocol (“IP”) datagrams and other higher layer protocols that utilize IP datagrams, such as, Transmission Control Protocol (“TCP”), Hypertext Transfer Protocol (“HTTP”), Simple Mail Transfer Protocol (“SMTP”), etc.) over congestion free network core 101.
  • Congestion free network core 101 is configured such that there is an extremely low (or even essentially no) chance of congestion within congestion free network core 101.
  • Congestion free network core 101 can be a full bisection-bandwidth network.
  • Congestion free network core 101 can be established using any of a variety of network topologies, including but not limited to Fat Tree and VL2.
  • Hypervisors 111, 121, 131, and 141 include virtual machine switches 112, 122, 132, and 142 respectively.
  • the corresponding virtual switch directs the packets to the appropriate virtual machine (e.g., by a tag or electronic address).
  • a virtual machine switch can include an adaptive rate limiter and/or a congestion detector.
  • virtual machine switch 112 includes congestion detector 113 and virtual machine switches 122, 132, and 142 include adaptive rate limiters 123, 133, and 143 respectively.
  • virtual machine switch 112 can include an adaptive rate limiter (not shown) and each of virtual machine switches 122, 132, and 142 can include a congestion detector (not shown).
  • the congestion detector 113, and the adaptive rate limiters 123, 133, and 143 may be considered as logic implemented by computing systems 111, 121, 131, and 141, respectively.
  • a VM can operate in accordance with a defined subscription (e.g., a Service Level Agreement ("SLA")).
  • a user of a VM can purchase the right to use congestion free network core 101 from a service provider (e.g., a data center provider).
  • the user and service provider can agree to a set of parameters defining a level of network service for the VM.
  • the set of parameters can include a subscription bandwidth that is to be available to the VM.
  • the subscription bandwidth can be a quantitative and invariant minimum bandwidth allocated for the VM.
  • a congestion detector is aware of the maximum bandwidth of underlying networking hardware, such as NICs and Top-of-Rack ("TOR") switches, used by VMs.
  • hypervisor 111 can make congestion detector 113 aware of bandwidth limitations in the hardware supporting VMs 114A and 114B (e.g., a 1GB/s NIC).
  • a congestion detector monitors received packets for various data flows directed to virtual machines.
  • a congestion detector can detect when congestion is likely to occur or is occurring in the supporting hardware. Congestion is likely to occur or is occurring when the received bandwidth at a hypervisor approaches the bandwidth limitations of underlying hardware. For example, when 975 MB/s are being received at a NIC rated for 1 GB/s, there is some likelihood that congestion is occurring.
  • a congestion detector can be configured to indicate congestion when received bandwidth is within a specified threshold (e.g., an amount or percentage) of hardware bandwidth limitations. In this particular illustrated system 100, the congestion detector is placed at a point of potential congestion in the network, and that point of congestion is within a single server. This has the potential to provide quicker and more stable control.
  • a receiving hypervisor uses software metering to detect congestion.
  • a protocol or protocol extension such as, for example, Explicit Congestion Notification ("ECN"), is used to detect congestion.
  • ECN can be used at a last hop device, such as, for example, a TOR switch.
  • the congestion detector can identify sending VMs as violators. Based on the extent of the violations, the congestion detector can send feedback to adaptive rate limiters for the sending VMs.
  • an adaptive rate limiter can regulate the rate of sending data packets onto congestion free network core 101.
  • An adaptive rate limiter can receive feedback from a congestion detector. In response to received feedback, an adaptive rate limiter can reduce the bandwidth used to send packets to avoid further congestion. In the absence of received feedback, an adaptive rate limiter can increase the bandwidth used to send packets onto congestion free network core 101 to promote efficient use of resources.
  • An adaptive rate limiter can use any of a variety of different feedback algorithms to regulate bandwidth when sending packets.
  • adaptive rate limiters use an Additive Increase/Multiplicative Decrease (“AIMD”) algorithm for congestion avoidance.
  • AIMD combines linear growth of the congestion window with an exponential reduction when congestion takes place.
  • Other algorithms such as, for example, multiplicative-increase/multiplicative-decrease (“MIMD”) and additive-increase/additive-decrease (“AIAD”) can also be used.
  • Figure 2 illustrates a flow chart of an example method 200 for addressing network congestion at a computer system. Method 200 will be described with respect to the components and data of computer architecture 100.
  • Method 200 includes an act of accessing a subscription bandwidth for a virtual machine managed by a hypervisor, the subscription bandwidth indicating a quantitative and invariant minimum bandwidth for the virtual machine (act 201).
  • hypervisor 111 can access subscription bandwidth 152 from subscription 151.
  • Subscription 151 can be a previously established subscription for virtual machine 114A.
  • Subscription bandwidth 152 can indicate a quantitative and invariant minimum bandwidth (e.g., 400 MB/s) for the virtual machine 114A.
  • Method 200 includes an act of receiving one or more data flows from a congestion free network core, the one or more data flows sent from sending virtual machines and directed to the virtual machine (act 202).
  • virtual machines 124, 134A, and 144B can send packets 102, 103, and 104 respectively onto congestion free network core 101 as part of corresponding data flows.
  • Packets 102, 103, and 104 can be directed to virtual machines managed by hypervisor 111 (e.g., virtual machines 114A and/or 114B).
  • Hypervisor 111 can receive packets 102, 103, and 104 from congestion free network core 101.
  • Method 200 includes an act of calculating the combined bandwidth of the one or more data flows (act 203).
  • congestion detector 113 can calculate the combined bandwidth for the data flows corresponding to packets 102, 103, and 104.
  • Method 200 includes an act of detecting the onset of congestion at the virtual machine switch (act 204).
  • congestion detector 113 can detect the onset of congestion at virtual machine switch 112. The onset of congestion can be detected by determining that the combined bandwidth of the data flows corresponding to packets 102, 103, and 104 is within a specified threshold of the bandwidth limitations for virtual machine switch 112. For example, the onset of congestion may be detected when the combined bandwidth of the data flows is 9.5 GB/s and virtual machine switch 112 is capable of 10 GB/s.
  • Method 200 includes an act of identifying at least one virtual machine associated with the one or more data flows as a violator of the subscription bandwidth in response to detecting the onset of congestion, the at least one violating virtual machine being a sending virtual machine or a receiving virtual machine of one of the one or more data flows (act 205).
  • congestion detector 113 can identify one or more of virtual machines 124, 134A, and 144B as violating subscription bandwidth 152 in response to detecting the onset of congestion at virtual machine switch 112.
  • Congestion detector 113 may also identify virtual machine 114B as a violator of subscription bandwidth 152. For example, individual bandwidth for each of a plurality of data flows may not violate subscription bandwidth 152. However, when the plurality of data flows are for the same receiving virtual machine, the sum of the individual bandwidths may violate subscription bandwidth 152.
  • Method 200 includes an act of determining the extent of the violation by the at least one violating virtual machine (act 206). For example, congestion detector 113 can determine a bandwidth amount by which subscription bandwidth 152 is being violated by one or more of virtual machines 124, 134A, 144B, and 114B.
  • Method 200 includes an act of formulating feedback for delivery to sender side adaptive rate limiters corresponding to the at least one violating virtual machine, the feedback instructing the sender side adaptive rate limiters to reduce the bandwidth of the one or more data flows originating from the at least one violating virtual machine (act 207).
  • congestion detector 113 can formulate feedback 106 for delivery to one or more of adaptive rate limiters 123, 133, and 143 (or even an adaptive rate limiter at hypervisor 111, for example, when virtual machine 114B is a violator).
  • Feedback 106 can instruct the one or more adaptive rate limiters 123, 133, and 143 to reduce the bandwidth of data flows (corresponding to one or more of packets 102, 103, and 104) from one or more of virtual machines 124, 134A, and 144B respectively.
  • feedback 106 can also be formulated for delivery to an adaptive rate limiter at hypervisor 111.
  • feedback can be based on the subscription bandwidth of a receiving virtual machine and possibly also the subscription bandwidth of one or more sending virtual machines.
  • virtual machine 124 may also have a specified subscription bandwidth.
  • feedback 106 can be formulated based on subscription bandwidth 152 and the specified subscription bandwidth for virtual machine 124.
  • Feedback 106 can be formulated so that adaptive rate limiter 123 does not throttle back the data flow corresponding to packets 102 to a rate below the specified subscription bandwidth for virtual machine 124.
  • Method 200 includes an act of sending the feedback onto the congestion free network core for delivery to the sender side adaptive rate limiters (act 208).
  • congestion detector 113 can send feedback 106 onto congestion free network core 101 for delivery to one or more of adaptive rate limiters 123, 133, and 143.
  • Adaptive rate limiters that receive feedback 106 can reduce bandwidth of respective data flows in accordance with a bandwidth regulation algorithm such as, for example, AIMD, etc.
  • the same or similar feedback may be sent to all of the sending virtual machines, whether violating or not.
  • the send-side hypervisor may then determine an appropriate manner to rate limit the sending virtual machines, perhaps choosing to more aggressively rate limit the violating virtual machines, as compared to the non-violating virtual machines.
  • such computation may occur at a tenant level in which there are multiple sending virtual machines associated with a single tenant.
  • proportional rate adaptation based on the feedback may be performed for all violating (and perhaps non-violating) virtual machines associated with that tenant.
  • Such proportional rate limiting may be performed by some tenant-level logic.
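  • As one possible reading of the tenant-level behavior described above, the sketch below (in Python, with hypothetical names and a hypothetical weighting policy) shows send-side logic that spreads a requested reduction across a tenant's virtual machines while limiting violating virtual machines more aggressively than non-violating ones.

```python
def tenant_rate_limits(requested_reduction_bps: float,
                       vm_rates_bps: dict,
                       violators: set) -> dict:
    """Sketch of tenant-level proportional rate limiting (hypothetical policy):
    violating VMs are weighted twice as heavily as non-violating VMs."""
    weights = {vm: (2.0 if vm in violators else 1.0) * rate
               for vm, rate in vm_rates_bps.items()}
    total_weight = sum(weights.values()) or 1.0
    # Each VM gives up a share of the requested reduction proportional to its weight.
    return {vm: max(0.0, vm_rates_bps[vm] - requested_reduction_bps * weights[vm] / total_weight)
            for vm in vm_rates_bps}
```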
  • Congestion detectors and adaptive rate limiters can be used to regulate data flow bandwidth for various different types of traffic, including TCP traffic and non-TCP (e.g., UDP) traffic.
  • the bandwidth of TCP flows as well as non-TCP flows can be regulated in accordance with AIMD or other congestion avoidance algorithms.
  • Data flow bandwidth from different types of traffic can also be considered together when detecting congestion.
  • a receiving hypervisor can receive at least one data flow of TCP traffic and at least one data flow of non-TCP (e.g., UDP) traffic.
  • the receiving hypervisor can consider the bandwidth of the at least one TCP data flow and the bandwidth of the at least one non-TCP data flow when detecting congestion at the receiving hypervisor.
  • Feedback from the receiving hypervisor can be used to regulate the at least one TCP data flow as well as the at least one non-TCP data flow.
  • embodiments of the invention combine a congestion free network core with a hypervisor based (i.e., edge-based) throttling design to help ensure quantitative and invariant subscription bandwidth rates.
  • Embodiments can confine the scope of congestion to a single physical machine and limit the number of contributors to congestion. Since congestion is visible to a receiving hypervisor, congestion is more easily detected and communicated back to sending hypervisors. Communication back to sending hypervisors provides a closed-loop control approach increasing stability and permitting self-clocking.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Description

    BACKGROUND
    1. Background and Relevant Art
  • Computer systems and related technology affect many aspects of society. Indeed, the computer system's ability to process information has transformed the way we live and work. Computer systems now commonly perform a host of tasks (e.g., word processing, scheduling, accounting, etc.) that prior to the advent of the computer system were performed manually. More recently, computer systems have been coupled to one another and to other electronic devices to form both wired and wireless computer networks over which the computer systems and other electronic devices can transfer electronic data. Accordingly, the performance of many computing tasks is distributed across a number of different computer systems and/or a number of different computing environments.
  • In some environments, computer systems operate in a cloud computing environment. In cloud computing environments, a cloud-service provider uses a common underlying physical network to host multiple customers' applications, sometimes referred to as "tenants". A tenant can be a set of virtual machines ("VMs") or application processes that is independently deployable and is solely owned by a single customer (i.e., subscription). Reachability isolation can be used to mitigate direct interference between tenants. However, reachability isolation is not sufficient, since a malicious or careless tenant can still interfere with other tenants in the network data plane by exchanging heavy traffic only among its own members (VMs).
  • Accordingly, other techniques can be used to attempt to isolate performance of tenants. Some techniques have relied on Transmission Control Protocol's ("TCP's") congestion control. However, a tenant can essentially achieve unbounded utilization of a network by using many TCP flows (connections) and using variations of TCP. Tenants can also use other protocols, such as, for example, User Datagram Protocol ("UDP"), that do not respond to congestion control.
  • Trust of tenant networking stacks is also a problem.
  • Further, conventional in-network Quality of Service ("QoS") mechanisms (e.g., separate queues with Weighted Fair Queuing ("WFQ")) do not scale. These QoS mechanisms are also complicated and expensive to use for differentiating performance when tenants frequently join and leave. Statically throttling each VM on the sender side is inefficient and ineffective as it wastes any unused capacity and given a sufficient number of VMs, a tenant can always cause performance interference at virtually any static rate applied to each VM.
  • Accordingly, in cloud computing environments, due at least in part to one or more of these factors, it can be difficult to regulate network traffic in a way that reliably prevents disproportionate bandwidth consumption.
  • US 2008/259798 A1 describes a shared memory switch and switch fabric architecture which employ partitions of the shared memory to implement multiple, independent virtual congestion domains, thereby allowing congestion to be handled for different classes of traffic independently.
  • BRIEF SUMMARY
  • It is the object of the present invention to ensure predictable and quantifiable networking performance.
  • This object is solved by the subject matter of the independent claims.
  • Preferred embodiments are defined by the dependent claims.
  • The present invention extends to the ensuring of predictable and quantifiable networking performance. Embodiments address networking congestion at a receiving computer system. A computing system manages one or more message processors. For instance, in a virtual machine environment, a hypervisor manages one or more message processors. A subscription bandwidth for a message processor is accessed. The subscription bandwidth indicates a quantitative and invariant minimum bandwidth for the message processor.
  • One or more data flows are received from a congestion free network core. The one or more data flows are sent from sending message processors and directed to the receiving message processor. The combined bandwidth of the one or more data flows is calculated. The onset of congestion at the receiving computing system is detected. In response to detecting the onset of congestion, at least one message processor associated with the one or more data flows is identified as a violator of the subscription bandwidth. The at least one violating message processor is a sending message processor or a receiving message processor of one of the one or more data flows. The extent of the violation by the at least one violating message processor is determined.
  • Feedback for delivery to sender side adaptive rate limiters corresponding to the at least one violating message processor is determined. The feedback instructs the sender side adaptive rate limiters to reduce the bandwidth of the one or more data flows originating from the at least one violating message processor. The feedback is sent onto the congestion free network core for delivery to the sender side adaptive rate limiters.
  • This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
  • Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • In order to describe the manner in which the above-recited and other advantages and features of the invention can be obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
    • Figure 1 illustrates an example computer architecture that facilitates ensuring predictable and quantifiable networking performance.
    • Figure 2 illustrates a flow chart of an example method for ensuring predictable and quantifiable networking performance.
    DETAILED DESCRIPTION
  • The present invention extends to methods, systems, and computer program products for ensuring predictable and quantifiable networking performance. Embodiments address networking congestion at a computer system. A computing system manages one or more message processors. For instance, in a virtual machine environment, a hypervisor manages one or more message processors. A subscription bandwidth for a message processor is accessed. The subscription bandwidth indicates a quantitative and invariant minimum bandwidth for the message processor.
  • One or more data flows are received from a congestion free network core. The one or more data flows are sent from sending message processors and directed to the message processor. The combined bandwidth of the one or more data flows is calculated. The onset of congestion at the receiving computing system is detected. In response to detecting the onset of congestion, at least one message processor associated with the one or more data flows is identified as a violator of the subscription bandwidth. The at least one violating message processor is a sending message processor or a receiving message processor of one of the one or more data flows. The extent of the violation by the at least one violating message processor is determined.
  • Feedback for delivery to sender side adaptive rate limiters corresponding to the at least one violating message processor is determined. The feedback instructs the sender side adaptive rate limiters to reduce the bandwidth of the one or more data flows originating from the at least one violating message processor. The feedback is sent onto the congestion free network core for delivery to the sender side adaptive rate limiters.
  • Embodiments of the present invention may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed in greater detail below. Embodiments within the scope of the present invention also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are computer storage media (devices). Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, embodiments of the invention can comprise at least two distinctly different kinds of computer-readable media: computer storage media (devices) and transmission media.
  • Computer storage media (devices) includes RAM, ROM, EEPROM, CD-ROM, solid state drives ("SSDs") (e.g., based on RAM), Flash memory, phase-change memory ("PCM"), other types of memory, other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
  • A "network" is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmissions media can include a network and/or data links which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.
  • Further, upon reaching various computer system components, program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to computer storage media (devices) (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a "NIC"), and then eventually transferred to computer system RAM and/or to less volatile computer storage media (devices) at a computer system. Thus, it should be understood that computer storage media (devices) can be included in computer system components that also (or even primarily) utilize transmission media.
  • Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code.
  • Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.
  • Those skilled in the art will appreciate that the invention may be practiced in network computing environments with many types of computer system configurations, including, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, pagers, routers, switches, and the like. The invention may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.
  • In this description and the following claims, "cloud computing" is defined as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be provisioned and released with reduced management effort or service provider interaction. A cloud model can be composed of various characteristics (e.g., on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, etc), service models (e.g., Software as a Service ("SaaS"), Platform as a Service ("PaaS"), Infrastructure as a Service ("IaaS"), and deployment models (e.g., private cloud, community cloud, public cloud, hybrid cloud, etc.).
  • In this description and the following claims, "hypervisor" (or virtual machine manager ("VMM")) is defined as a component that permits multiple operating system instances (or Virtual Machines ("VMs")) to share a hardware host. A hypervisor can run directly on a host's hardware (type 1) or on top of an operating system running on a host's hardware (type 2). A hypervisor presents a virtual operating platform and manages the execution of operating system instances. For example, through virtualization a hypervisor can present individual Virtual Network Interface Cards ("VNICs") to a number of different operating system instances based on the hardware of an underlying Network Interface Card ("NIC"). A hypervisor controls the allocation of host processes and resources to each operating system instance to avoid disruptions between the operating system instances. Hypervisors can be used on machines in a cloud computing environment.
  • Embodiments of the invention combine a congestion free network core with a hypervisor based (i.e., edge-based) throttling design. A lightweight shim layer in a hypervisor can adaptively throttle the rate of VM-to-VM traffic flow. Regulation of traffic flow takes into account the speed of VM ports and congestion state visible to receiving-end hosts. A hypervisor based approach has increased simplicity and increased scalability in network Quality of Service ("QoS") mechanisms. Throttling VM-to-VM traffic promotes fairness enforcement (i.e., regulating connections for different protocols, such as, User Datagram Protocol ("UDP") and Transmission Control Protocol ("TCP")). Throttling VM-to-VM traffic also provides a new measure of fairness aligned with per-VM hourly charging models used in cloud based environments.
  • Figure 1 illustrates an example computer architecture 100 that facilitates ensuring predictable and quantifiable networking performance. Referring to Figure 1, computer architecture 100 includes computing systems 111, 121, 131, and 141 in a general embodiment. In the more specific virtual machine embodiment of Figure 1, the computing systems are hypervisors 111, 121, 131, and 141. In the general embodiment, each computing system 111, 121, 131 and 141 manages message processors. For instance, in the specific virtual machine embodiment, each hypervisor manages one or more virtual machines, the virtual machines representing an example of a message processor. For example, hypervisor 111 manages virtual machines 114A and 114B, hypervisor 121 manages virtual machine 124, hypervisor 131 manages virtual machines 134A and 134B, and hypervisor 141 manages virtual machines 144A and 144B. Hypervisors 111, 121, 131, and 141 are connected to congestion free network core 101. Each of the depicted components as well as any other connected computer systems and their components, can create message related data and exchange message related data (e.g., Internet Protocol ("IP") datagrams and other higher layer protocols that utilize IP datagrams, such as, Transmission Control Protocol ("TCP"), Hypertext Transfer Protocol ("HTTP"), Simple Mail Transfer Protocol ("SMTP"), etc.) over congestion free network core 101. Hereafter, the specific virtual machine embodiment will be described, although it will be understood that the principles described herein extend to the general embodiment in which computing systems generally are connected over congestion free network core 101.
  • Congestion free network core 101 is configured such that there is an extremely low (or even essentially no) chance of congestion within congestion free network core 101. Congestion free network core 101 can be a full bisection-bandwidth network. Congestion free network core 101 can be established using any of a variety of network topologies, including but not limited to Fat Tree and VL2.
  • Hypervisors 111, 121, 131, and 141 include virtual machine switches 112, 122, 132, and 142 respectively. In general, when a hypervisor receives packets, the corresponding virtual switch directs the packets to the appropriate virtual machine (e.g., by a tag or electronic address). A virtual machine switch can include an adaptive rate limiter and/or a congestion detector. For example, virtual machine switch 112 includes congestion detector 113 and virtual machine switches 122, 132, and 142 include adaptive rate limiters 123, 133, and 143 respectively. Additionally, virtual machine switch 112 can include an adaptive rate limiter (not shown) and each of virtual machine switches 122, 132, and 142 can include a congestion detector (not shown). In the more general embodiment that extends beyond virtual machine environments, the congestion detector 113, and the adaptive rate limiters 123, 133, and 143, may be considered as logic implemented by computing systems 111, 121, 131, and 141, respectively.
  • In general, a VM (or other message processor) can operate in accordance with a defined subscription (e.g., a Service Level Agreement ("SLA")). For example, a user of a VM can purchase the right to use congestion free network core 101 from a service provider (e.g., a data center provider). As part of the purchase, the user and service provider can agree to a set of parameters defining a level of network service for the VM. The set of parameters can include a subscription bandwidth that is to be available to the VM. The subscription bandwidth can be a quantitative and invariant minimum bandwidth allocated for the VM.
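  • As an illustration only (not part of the patent text), such a subscription might be represented in software as a small record; the names below are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Subscription:
    """Hypothetical record of the network-service parameters agreed for one VM."""
    vm_id: str              # the VM the subscription applies to (e.g., "114A")
    min_bandwidth_bps: int  # quantitative, invariant minimum bandwidth, in bits per second

# Example mirroring the text: subscription 151 guaranteeing 400 MB/s to virtual machine 114A
subscription_151 = Subscription(vm_id="114A", min_bandwidth_bps=400 * 8 * 10**6)
```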
  • Through a corresponding hypervisor, a congestion detector is aware of the maximum bandwidth of underlying networking hardware, such as NICs and Top-of-Rack ("TOR") switches, used by VMs. For example, hypervisor 111 can make congestion detector 113 aware of bandwidth limitations in the hardware supporting VMs 114A and 114B (e.g., a 1 GB/s NIC). During operation, a congestion detector monitors received packets for various data flows directed to virtual machines.
  • From received packets, a congestion detector can detect when congestion is likely to occur or is occurring in the supporting hardware. Congestion is likely to occur or is occurring when the received bandwidth at a hypervisor approaches the bandwidth limitations of underlying hardware. For example, when 975 MB/s are being received at a NIC rated for 1 GB/s, there is some likelihood that congestion is occurring. A congestion detector can be configured to indicate congestion when received bandwidth is within a specified threshold (e.g., an amount or percentage) of hardware bandwidth limitations. In this particular illustrated system 100, the congestion detector is placed at a point of potential congestion in the network, and that point of congestion is within a single server. This has the potential to provide quicker and more stable control.
  • Any of a variety of different mechanisms can be used to detect congestion. In some embodiments, a receiving hypervisor (e.g., hypervisor 111) uses software metering to detect congestion. In other embodiments, a protocol or protocol extension, such as, for example, Explicit Congestion Notification ("ECN"), is used to detect congestion. ECN can be used at a last hop device, such as, for example, a TOR switch.
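  • A minimal sketch (in Python, with hypothetical names) of the software-metering style of detection described above: the receiver sums the measured bandwidth of the incoming data flows and signals the onset of congestion when that sum comes within a configured threshold of the hardware limit.

```python
class CongestionDetector:
    """Sketch of receiver-side congestion detection by software metering (hypothetical API)."""

    def __init__(self, hardware_limit_bps: float, threshold_fraction: float = 0.95):
        self.hardware_limit_bps = hardware_limit_bps  # e.g., the rated speed of the NIC
        self.threshold_fraction = threshold_fraction  # "within a specified threshold" of the limit

    def onset_of_congestion(self, flow_rates_bps: dict) -> bool:
        combined = sum(flow_rates_bps.values())       # combined bandwidth of all received flows
        return combined >= self.threshold_fraction * self.hardware_limit_bps

# Example mirroring the text: flows totaling 9.5 of a possible 10 (units arbitrary)
detector = CongestionDetector(hardware_limit_bps=10e9)
print(detector.onset_of_congestion({"vm124": 4e9, "vm134A": 3e9, "vm144B": 2.5e9}))  # True
```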
  • When congestion occurs, the subscription bandwidth for one or more VMs may be violated. In response to detecting congestion, the congestion detector can identify sending VMs as violators. Based on the extent of the violations, the congestion detector can send feedback to adaptive rate limiters for the sending VMs.
  • Generally, an adaptive rate limiter can regulate the rate of sending data packets onto congestion free network core 101. An adaptive rate limiter can receive feedback from a congestion detector. In response to received feedback, an adaptive rate limiter can reduce the bandwidth used to send packets to avoid further congestion. In the absence of received feedback, an adaptive rate limiter can increase the bandwidth used to send packets onto congestion free network core 101 to promote efficient use of resources.
  • An adaptive rate limiter can use any of a variety of different feedback algorithms to regulate bandwidth when sending packets. In some embodiments, adaptive rate limiters use an Additive Increase/Multiplicative Decrease ("AIMD") algorithm for congestion avoidance. AIMD combines linear growth of the congestion window with an exponential reduction when congestion takes place. Other algorithms, such as, for example, multiplicative-increase/multiplicative-decrease ("MIMD") and additive-increase/additive-decrease ("AIAD") can also be used.
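  • The AIMD behavior described above can be sketched as follows (hypothetical names and parameters): a sender-side limiter grows its allowed rate linearly while no congestion feedback arrives and cuts it multiplicatively when feedback reports congestion.

```python
class AdaptiveRateLimiter:
    """Sketch of a sender-side AIMD rate limiter (hypothetical parameters)."""

    def __init__(self, initial_rate_bps: float, floor_bps: float = 0.0,
                 additive_step_bps: float = 10e6, decrease_factor: float = 0.5):
        self.rate_bps = initial_rate_bps
        self.floor_bps = floor_bps              # e.g., the sender's own subscription bandwidth
        self.additive_step_bps = additive_step_bps
        self.decrease_factor = decrease_factor

    def on_interval_without_feedback(self) -> None:
        # Additive increase: probe for unused capacity on the congestion free core.
        self.rate_bps += self.additive_step_bps

    def on_congestion_feedback(self) -> None:
        # Multiplicative decrease, but never below the configured floor.
        self.rate_bps = max(self.floor_bps, self.rate_bps * self.decrease_factor)
```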
  • Figure 2 illustrates a flow chart of an example method 200 for addressing network congestion at a computer system. Method 200 will be described with respect to the components and data of computer architecture 100.
  • Method 200 includes an act of accessing a subscription bandwidth for a virtual machine managed by a hypervisor, the subscription bandwidth indicating a quantitative and invariant minimum bandwidth for the virtual machine (act 201). For example, hypervisor 111 can access subscription bandwidth 152 from subscription 151. Subscription 151 can be a previously established subscription for virtual machine 114A. Subscription bandwidth 152 can indicate a quantitative and invariant minimum bandwidth (e.g., 400 MB/s) for the virtual machine 114A.
  • Method 200 includes an act of receiving one or more data flows from a congestion free network core, the one or more data flows sent from sending virtual machines and directed to the virtual machine (act 202). For example, virtual machines 124, 134A, and 144B can send packets 102, 103, and 104 respectively onto congestion free network core 101 as part of corresponding data flows. Packets 102, 103, and 104 can be directed to virtual machines managed by hypervisor 111 (e.g., virtual machines 114A and/or 114B). Hypervisor 111 can receive packets 102, 103, and 104 from congestion free network core 101.
  • Method 200 includes an act of calculating the combined bandwidth of the one or more data flows (act 203). For example, congestion detector 113 can calculate the combined bandwidth for the data flows corresponding to packets 102, 103, and 104. Method 200 includes an act of detecting the onset of congestion at the virtual machine switch (act 204). For example, congestion detector 113 can detect the onset of congestion at virtual machine switch 112. The onset of congestion can be detected by determining that the combined bandwidth of the data flows corresponding to packets 102, 103, and 104 is within a specified threshold of the bandwidth limitations for virtual machine switch 112. For example, the onset of congestion may be detected when the combined bandwidth of the data flows is 9.5 GB/s and virtual machine switch 112 is capable of 10 GB/s.
  • Method 200 includes an act of identifying at least one virtual machine associated with the one or more data flows as a violator of the subscription bandwidth in response to detecting the onset of congestion, the at least one violating virtual machine being a sending virtual machine or a receiving virtual machine of one of the one or more data flows (act 205). For example, congestion detector 113 can identify one or more of virtual machines 124, 134A, and 144B as violating subscription bandwidth 152 in response to detecting the onset of congestion at virtual machine switch 112. Congestion detector 113 may also identify virtual machine 114B as a violator of subscription bandwidth 152. For example, the individual bandwidth of each of a plurality of data flows may not violate subscription bandwidth 152. However, when the plurality of data flows are directed to the same receiving virtual machine, the sum of the individual bandwidths may violate subscription bandwidth 152.
  • Method 200 includes an act of determining the extent of the violation by the at least one violating virtual machine (act 206). For example, congestion detector 113 can determine a bandwidth amount by which subscription bandwidth 152 is being violated by one or more of virtual machines 124, 134A, 144B, and 114B.
  • Method 200 includes an act of formulating feedback for delivery to sender side adaptive rate limiters corresponding to the at least one violating virtual machine, the feedback instructing the sender side adaptive rate limiters to reduce the bandwidth of the one or more data flows originating from the at least one violating virtual machine (act 207). For example, congestion detector 113 can formulate feedback 106 for delivery to one or more of adaptive rate limiters 123, 133, and 143 (or even an adaptive rate limiter at hypervisor 111, for example, when virtual machine 114B is a violator). Feedback 106 can instruct the one or more adaptive rate limiters 123, 133, and 143 to reduce the bandwidth of data flows (corresponding to one or more of packets 102, 103, and 104) from one or more of virtual machines 124, 134A, and 144B respectively. When appropriate, feedback 106 can also be formulated for delivery to an adaptive rate limiter at hypervisor 111.
  • In general, feedback can be based on the subscription bandwidth of a receiving virtual machine and possibly also the subscription bandwidth of one or more sending virtual machines. For example, virtual machine 124 may also have a specified subscription bandwidth. As such, feedback 106 can be formulated based on subscription bandwidth 152 and the specified subscription bandwidth for virtual machine 124. Feedback 106 can be formulated so that adaptive rate limiter 123 does not throttle back the data flow corresponding to packets 102 to a rate below the specified subscription bandwidth for virtual machine 124.
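  • A hedged sketch of this kind of feedback formulation follows; the proportional scaling policy and all identifiers are illustrative assumptions rather than the claimed method.

        # Minimal sketch: scale senders down proportionally so that their aggregate
        # fits within the receiver's subscription bandwidth, without instructing any
        # sender to go below its own subscription bandwidth.
        def formulate_feedback(sender_rates_bps, sender_floors_bps, receiver_subscription_bps):
            total = sum(sender_rates_bps.values())
            if total <= receiver_subscription_bps:
                return {}                                   # no violation, no feedback needed
            scale = receiver_subscription_bps / total
            return {sender: max(rate * scale, sender_floors_bps.get(sender, 0.0))
                    for sender, rate in sender_rates_bps.items()}  # instructed rate per sender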
  • Method 200 includes an act of sending the feedback onto the congestion free network core for delivery to the sender side adaptive rate limiters (act 208). For example, congestion detector 113 can send feedback 106 onto congestion free network core 101 for delivery to one or more of adaptive rate limiters 123, 133, and 143. Adaptive rate limiters that receive feedback 106 can reduce the bandwidth of respective data flows in accordance with a bandwidth regulation algorithm such as, for example, AIMD, etc.
  • In some embodiments, the same or similar feedback may be sent to all of the sending virtual machines, whether violating or not. The send-side hypervisor may then determine an appropriate manner to rate limit the sending virtual machines, perhaps choosing to rate limit the violating virtual machines more aggressively than the non-violating virtual machines. Also, such computation may occur at a tenant level in which there are multiple sending virtual machines associated with a single tenant. In that case, proportional rate adaptation based on the feedback may be performed for all violating (and perhaps non-violating) virtual machines associated with that tenant. Such proportional rate limiting may be performed by tenant-level logic.
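  • The tenant-level proportional rate limiting mentioned above might look like the following sketch, in which a single requested reduction is split across a tenant's sending virtual machines in proportion to their contribution; the function and VM names are assumptions for illustration only.

        # Minimal sketch: divide a requested bandwidth reduction among a tenant's
        # sending VMs in proportion to each VM's current sending rate.
        def split_reduction(vm_rates_bps, requested_reduction_bps):
            total = sum(vm_rates_bps.values())
            if total == 0:
                return {vm: 0.0 for vm in vm_rates_bps}
            return {vm: requested_reduction_bps * rate / total
                    for vm, rate in vm_rates_bps.items()}

        # Example: reduce this tenant's traffic by 2 Gb/s across three VMs
        print(split_reduction({"vm-a": 5e9, "vm-b": 3e9, "vm-c": 2e9}, 2e9))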
  • Congestion detectors and adaptive rate limiters can be used to regulate data flow bandwidth for various different types of traffic, including TCP traffic and non-TCP (e.g., UDP) traffic. As such, the bandwidth of TCP flows as well as non-TCP flows can be regulated in accordance with AIMD or other congestion avoidance algorithms. Data flow bandwidth from different types of traffic can also be considered together when detecting congestion. For example, a receiving hypervisor can receive at least one data flow of TCP traffic and at least one data flow of non-TCP (e.g., UDP) traffic. The receiving hypervisor can consider the bandwidth of the at least one TCP data flow and the bandwidth of the at least one non-TCP data flow when detecting congestion at the receiving hypervisor. Feedback from the receiving hypervisor can be used to regulate the at least one TCP data flow as well as the at least one non-TCP data flow.
  • Accordingly, embodiments of the invention combine a congestion free network core with a hypervisor based (i.e., edge-based) throttling design to help ensure quantitative and invariant subscription bandwidth rates. Embodiments can confine the scope of congestion to a single physical machine and limit the number of contributors to congestion. Since congestion is visible to a receiving hypervisor, congestion is more easily detected and communicated back to sending hypervisors. Communication back to sending hypervisors provides a closed-loop control approach, increasing stability and permitting self-clocking.

Claims (10)

  1. A method performed at a computer system including one or more processors and system memory, the computer system connected to a congestion free network core (101), the computer system also including a message processor (111) for processing data flows (102, 103, 104) received from the congestion free network core, the message processor associated with a congestion detector (113), a method (200) for addressing network congestion at the computer system, the method comprising:
    an act of accessing (201) a subscription bandwidth (152) for the message processor, the subscription bandwidth indicating a quantitative and invariant minimum bandwidth for the message processor;
    an act of receiving (202) one or more data flows (102, 103, 104) from the congestion free network core (101), the one or more data flows sent from sending message processors (121, 131, 141) and directed to the message processor (111);
    an act of calculating (203) the combined bandwidth of the one or more data flows;
    an act of detecting (204) the onset of congestion at the computing system;
    an act of identifying (205) at least one message processor associated with the one or more data flows as a violator of the subscription bandwidth in response to detecting the onset of congestion, the at least one violating message processor being a sending message processor or a receiving message processor of one of the one or more data flows;
    an act of determining (206) the extent of the violation by the at least one violating message processor;
    an act of formulating (207) feedback (106) for delivery to sender side adaptive rate limiters that serve to rate limit the at least one violating message processor, the feedback instructing the sender side adaptive rate limiters (123, 133, 143) to reduce the bandwidth of the one or more data flows originating from the at least one violating message processor; and
    an act of sending (208) the feedback onto the congestion free network core for delivery to the sender side adaptive rate limiters.
  2. The method as recited in claim 1, wherein the act of receiving one or more data flows from the congestion free network core comprises:
    an act of receiving at least one data flow of Transmission Control Protocol (TCP) traffic; and
    an act of receiving at least one other data flow of traffic using another different protocol, the other different protocol different than Transmission Control Protocol (TCP).
  3. The method as recited in claim 2, wherein the act of identifying at least one message processor associated with the one or more data flows as a violator of the subscription bandwidth comprises an act of identifying a specified message processor associated with the at least one other data flow as a violator of the subscription bandwidth.
  4. The method as recited in claim 2, wherein the act of receiving at least one other data flow of traffic using another different protocol comprises an act of receiving at least one data flow of User Datagram Protocol ("UDP") traffic.
  5. The method as recited in claim 1, wherein the act of detecting the onset of congestion at the computing system comprises an act of using software metering to detect the onset of congestion at the computing system.
  6. The method as recited in claim 1, wherein the act of detecting the onset of congestion at the computing system comprises an act of using Explicit Congestion Notification ("ECN") to detect the onset of congestion at the computing system.
  7. The method as recited in claim 1, wherein the message processor is a virtual machine served by a hypervisor, and wherein the act of detecting the onset of congestion is performed by a virtual switch within the hypervisor.
  8. The method as recited in claim 1, wherein the at least one violating message processor includes at least one virtual machine that is served by a hypervisor, wherein the adaptive rate limiter that serves to rate limit the virtual machine is included in a virtual switch of the hypervisor.
  9. The method as recited in claim 1, wherein the act of formulating feedback for delivery to sender side adaptive rate limiters corresponding to the at least one violating message processor comprises an act of formulating feedback for reducing the bandwidth of the one or more data flows originating from the at least one violating message processor in accordance with an additive increase/multiplicative decrease ("AIMD") algorithm.
  10. A computer program product for use at a computer system, the computer system connected to a congestion free network core (101), the computer system also including a message processor (111) for processing data flows (102, 103, 104) received from the congestion free network core, the message processor associated with a congestion detector (113), the computer program product for implementing a method (200) for addressing network congestion at the computer system, the computer program product comprising one or more computer storage devices having stored thereon computer-executable instructions that, when executed at a processor, cause the computer system to perform the following steps:
    access (201) a subscription bandwidth (152) for the message processor, the subscription bandwidth indicating a quantitative and invariant minimum bandwidth for the message processor;
    receive (202) one or more data flows (102, 103, 104) from the congestion free network core (101), the one or more data flows sent from sending message processors (121, 131, 141) and directed to the message processor (111);
    calculate (203) the combined bandwidth of the one or more data flows;
    detect (204) the onset of congestion at the computing system;
    identify (205) at least one message processor associated with the one or more data flows as a violator of the subscription bandwidth in response to detecting the onset of congestion, the at least one violating message processor being a sending message processor or a receiving message processor of one of the one or more data flows;
    determine (206) the extent of the violation by the at least one violating message processor;
    formulate (207) feedback (106) for delivery to sender side adaptive rate limiters that serve to rate limit the at least one violating message processor, the feedback instructing the sender side adaptive rate limiters (123, 133, 143) to reduce the bandwidth of the one or more data flows originating from the at least one violating message processor; and
    send (208) the feedback onto the congestion free network core for delivery to the sender side adaptive rate limiters.
EP13730765.8A 2012-06-21 2013-06-10 Guarantee of predictable and quantifiable network performance Active EP2865147B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/530,043 US8804523B2 (en) 2012-06-21 2012-06-21 Ensuring predictable and quantifiable networking performance
PCT/US2013/044869 WO2013191927A1 (en) 2012-06-21 2013-06-10 Ensuring predictable and quantifiable networking performance

Publications (2)

Publication Number Publication Date
EP2865147A1 EP2865147A1 (en) 2015-04-29
EP2865147B1 true EP2865147B1 (en) 2016-11-23

Family

ID=48670846

Family Applications (1)

Application Number Title Priority Date Filing Date
EP13730765.8A Active EP2865147B1 (en) 2012-06-21 2013-06-10 Guarantee of predictable and quantifiable network performance

Country Status (5)

Country Link
US (4) US8804523B2 (en)
EP (1) EP2865147B1 (en)
CN (1) CN104396200B (en)
ES (1) ES2616682T3 (en)
WO (1) WO2013191927A1 (en)

Families Citing this family (57)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8095774B1 (en) 2007-07-05 2012-01-10 Silver Peak Systems, Inc. Pre-fetching data into a memory
US8392684B2 (en) 2005-08-12 2013-03-05 Silver Peak Systems, Inc. Data encryption in a network memory architecture for providing data based on local accessibility
US8929402B1 (en) 2005-09-29 2015-01-06 Silver Peak Systems, Inc. Systems and methods for compressing packet data by predicting subsequent data
US8811431B2 (en) 2008-11-20 2014-08-19 Silver Peak Systems, Inc. Systems and methods for compressing packet data
US8489562B1 (en) * 2007-11-30 2013-07-16 Silver Peak Systems, Inc. Deferred data storage
US8885632B2 (en) 2006-08-02 2014-11-11 Silver Peak Systems, Inc. Communications scheduler
US8755381B2 (en) 2006-08-02 2014-06-17 Silver Peak Systems, Inc. Data matching using flow based packet data storage
US8307115B1 (en) 2007-11-30 2012-11-06 Silver Peak Systems, Inc. Network memory mirroring
US10805840B2 (en) 2008-07-03 2020-10-13 Silver Peak Systems, Inc. Data transmission via a virtual wide area network overlay
US9717021B2 (en) 2008-07-03 2017-07-25 Silver Peak Systems, Inc. Virtual network overlay
US8743683B1 (en) 2008-07-03 2014-06-03 Silver Peak Systems, Inc. Quality of service using multiple flows
US10164861B2 (en) 2015-12-28 2018-12-25 Silver Peak Systems, Inc. Dynamic monitoring and visualization for network health characteristics
US9130991B2 (en) 2011-10-14 2015-09-08 Silver Peak Systems, Inc. Processing data packets in performance enhancing proxy (PEP) environment
US9626224B2 (en) 2011-11-03 2017-04-18 Silver Peak Systems, Inc. Optimizing available computing resources within a virtual environment
US9069598B2 (en) 2012-01-06 2015-06-30 International Business Machines Corporation Providing logical partions with hardware-thread specific information reflective of exclusive use of a processor core
US8804523B2 (en) 2012-06-21 2014-08-12 Microsoft Corporation Ensuring predictable and quantifiable networking performance
JP5874828B2 (en) * 2012-07-03 2016-03-02 富士通株式会社 Control target flow specifying program, control target flow specifying method, and control target flow specifying apparatus
FR2995160B1 (en) * 2012-09-05 2014-09-05 Thales Sa TRANSMISSION METHOD IN AN AD HOC MULTISAUTAL IP NETWORK
US20140126371A1 (en) * 2012-11-08 2014-05-08 Electronics And Telecommunications Research Institute Flow switch and operating method thereof
US9141416B2 (en) * 2013-03-15 2015-09-22 Centurylink Intellectual Property Llc Virtualization congestion control framework for modifying execution of applications on virtual machine based on mass congestion indicator in host computing system
US9971617B2 (en) * 2013-03-15 2018-05-15 Ampere Computing Llc Virtual appliance on a chip
US9762502B1 (en) * 2014-05-12 2017-09-12 Google Inc. Method and system for validating rate-limiter determination made by untrusted software
US9755978B1 (en) * 2014-05-12 2017-09-05 Google Inc. Method and system for enforcing multiple rate limits with limited on-chip buffering
US10469404B1 (en) * 2014-05-12 2019-11-05 Google Llc Network multi-level rate limiter
US9948496B1 (en) 2014-07-30 2018-04-17 Silver Peak Systems, Inc. Determining a transit appliance for data traffic to a software service
US9875344B1 (en) 2014-09-05 2018-01-23 Silver Peak Systems, Inc. Dynamic monitoring and authorization of an optimization device
US10693806B2 (en) * 2015-03-11 2020-06-23 Vmware, Inc. Network bandwidth reservations for system traffic and virtual computing instances
US10025609B2 (en) 2015-04-23 2018-07-17 International Business Machines Corporation Virtual machine (VM)-to-VM flow control for overlay networks
US10277736B2 (en) 2015-07-30 2019-04-30 At&T Intellectual Property I, L.P. Methods, systems, and computer readable storage devices for determining whether to handle a request for communication services by a physical telephone number mapping service or a virtual telephone number mapping service
US9866521B2 (en) 2015-07-30 2018-01-09 At&T Intellectual Property L.L.P. Methods, systems, and computer readable storage devices for determining whether to forward requests from a physical telephone number mapping service server to a virtual telephone number mapping service server
US9851999B2 (en) 2015-07-30 2017-12-26 At&T Intellectual Property I, L.P. Methods, systems, and computer readable storage devices for handling virtualization of a physical telephone number mapping service
US9888127B2 (en) 2015-07-30 2018-02-06 At&T Intellectual Property I, L.P. Methods, systems, and computer readable storage devices for adjusting the use of virtual resources providing communication services based on load
US10091113B2 (en) * 2015-11-06 2018-10-02 At&T Intellectual Property I, L.P. Network functions virtualization leveraging unified traffic management and real-world event planning
CN105471765B (en) * 2015-12-22 2019-12-10 国云科技股份有限公司 Virtual machine external mesh bandwidth limiting method for cloud platform
US9985890B2 (en) * 2016-03-14 2018-05-29 International Business Machines Corporation Identifying a local congestion control algorithm of a virtual machine
US10019280B2 (en) * 2016-03-25 2018-07-10 Intel Corporation Technologies for dynamically managing data bus bandwidth usage of virtual machines in a network device
US10045252B2 (en) * 2016-06-02 2018-08-07 International Business Machines Corporation Virtual switch-based congestion control for multiple TCP flows
US10432484B2 (en) 2016-06-13 2019-10-01 Silver Peak Systems, Inc. Aggregating select network traffic statistics
US9967056B1 (en) 2016-08-19 2018-05-08 Silver Peak Systems, Inc. Forward packet recovery with constrained overhead
US10623330B2 (en) 2016-09-23 2020-04-14 Google Llc Distributed bandwidth allocation and throttling
US10771394B2 (en) 2017-02-06 2020-09-08 Silver Peak Systems, Inc. Multi-level learning for classifying traffic flows on a first packet from DNS data
US10257082B2 (en) 2017-02-06 2019-04-09 Silver Peak Systems, Inc. Multi-level learning for classifying traffic flows
US10892978B2 (en) 2017-02-06 2021-01-12 Silver Peak Systems, Inc. Multi-level learning for classifying traffic flows from first packet data
US11044202B2 (en) 2017-02-06 2021-06-22 Silver Peak Systems, Inc. Multi-level learning for predicting and classifying traffic flows from first packet data
CN109088742B (en) * 2017-06-14 2021-11-19 ***通信有限公司研究院 Service prediction method, network element equipment and computer readable storage medium
CN109104373B (en) * 2017-06-20 2022-02-22 华为技术有限公司 Method, device and system for processing network congestion
US10721294B2 (en) 2017-07-12 2020-07-21 Telefonaktiebolaget Lm Ericsson (Publ) Apparatus and method for subscription-based resource throttling in a cloud environment
US10903985B2 (en) 2017-08-25 2021-01-26 Keysight Technologies Singapore (Sales) Pte. Ltd. Monitoring encrypted network traffic flows in a virtual environment using dynamic session key acquisition techniques
US10992652B2 (en) 2017-08-25 2021-04-27 Keysight Technologies Singapore (Sales) Pte. Ltd. Methods, systems, and computer readable media for monitoring encrypted network traffic flows
US11212210B2 (en) 2017-09-21 2021-12-28 Silver Peak Systems, Inc. Selective route exporting using source type
US10834003B2 (en) * 2018-01-17 2020-11-10 Druva Inc. Systems and methods for adaptive bandwidth throttling
US10637721B2 (en) 2018-03-12 2020-04-28 Silver Peak Systems, Inc. Detecting path break conditions while minimizing network overhead
CN108989880B (en) * 2018-06-21 2020-04-14 北京邮电大学 Code rate self-adaptive switching method and system
US10630539B2 (en) * 2018-08-07 2020-04-21 International Business Machines Corporation Centralized rate limiters for services in cloud based computing environments
US10893030B2 (en) * 2018-08-10 2021-01-12 Keysight Technologies, Inc. Methods, systems, and computer readable media for implementing bandwidth limitations on specific application traffic at a proxy element
US11212227B2 (en) * 2019-05-17 2021-12-28 Pensando Systems, Inc. Rate-optimized congestion management
CN114788243B (en) * 2019-12-31 2023-08-22 华为技术有限公司 Method and device for scheduling message

Family Cites Families (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5377327A (en) * 1988-04-22 1994-12-27 Digital Equipment Corporation Congestion avoidance scheme for computer networks
US6826620B1 (en) * 1998-08-26 2004-11-30 Paradyne Corporation Network congestion control system and method
GB9930428D0 (en) * 1999-12-22 2000-02-16 Nortel Networks Corp A method of provisioning a route in a connectionless communications network such that a guaranteed quality of service is provided
US7061861B1 (en) * 2000-07-06 2006-06-13 Broadband Royalty Corporation Method and system for weighted fair flow control in an asynchronous metro packet transport ring network
US20110238855A1 (en) * 2000-09-25 2011-09-29 Yevgeny Korsunsky Processing data flows with a data flow processor
US20110231564A1 (en) * 2000-09-25 2011-09-22 Yevgeny Korsunsky Processing data flows with a data flow processor
US20040136379A1 (en) * 2001-03-13 2004-07-15 Liao Raymond R Method and apparatus for allocation of resources
US7934020B1 (en) * 2003-09-19 2011-04-26 Vmware, Inc. Managing network data transfers in a virtual computer system
IL167059A (en) * 2005-02-23 2010-11-30 Tejas Israel Ltd Network edge device and telecommunications network
US7733891B2 (en) * 2005-09-12 2010-06-08 Zeugma Systems Inc. Methods and apparatus to support dynamic allocation of traffic management resources in a network element
JP2007265557A (en) 2006-03-29 2007-10-11 Toshiba Corp Semiconductor memory device
EP1848161B1 (en) * 2006-04-20 2008-06-18 Alcatel Lucent Efficient method and system for weighted fair policing
US8477658B2 (en) 2006-04-25 2013-07-02 The Hong Kong University Of Science And Technology Intelligent peer-to-peer media streaming
US7599290B2 (en) * 2006-08-11 2009-10-06 Latitude Broadband, Inc. Methods and systems for providing quality of service in packet-based core transport networks
CN101009655B (en) * 2007-02-05 2011-04-20 华为技术有限公司 Traffic scheduling method and device
US7916718B2 (en) 2007-04-19 2011-03-29 Fulcrum Microsystems, Inc. Flow and congestion control in switch architectures for multi-hop, memory efficient fabrics
US20080298248A1 (en) * 2007-05-28 2008-12-04 Guenter Roeck Method and Apparatus For Computer Network Bandwidth Control and Congestion Management
EP2174450B1 (en) * 2007-07-02 2016-10-12 Telecom Italia S.p.A. Application data flow management in an ip network
US8797850B2 (en) * 2008-01-10 2014-08-05 Qualcomm Incorporated System and method to adapt to network congestion
CN101222296B (en) * 2008-01-31 2010-06-09 上海交通大学 Self-adapting transmission method and system in ascending honeycomb video communication
CN101582836B (en) * 2008-05-16 2011-06-01 华为技术有限公司 Congestion control method, wireless netted network node and system
US8543685B1 (en) * 2008-08-12 2013-09-24 Eden Rock Communications, Llc Usage based multiplier modification in rate limiting schemes
US20100128665A1 (en) * 2008-11-21 2010-05-27 Alcatel-Lucent Usa Inc. Method for providing signaling between a core network and a radio access network
US9407550B2 (en) * 2008-11-24 2016-08-02 Avago Technologies General Ip (Singapore) Pte. Ltd. Method and system for controlling traffic over a computer network
CN101436990B (en) * 2008-12-23 2011-09-14 华为终端有限公司 Method for automatically adjusting encoding rate, receiving device and communication system
WO2010139109A1 (en) * 2009-06-01 2010-12-09 上海贝尔股份有限公司 Method and device for requesting multicasting, processing multicasting requests and assisting in the aforementioned process
WO2011025438A1 (en) * 2009-08-25 2011-03-03 Telefonaktiebolaget L M Ericsson (Publ) Using the ecn mechanism to signal congestion directly to the base station
US8078691B2 (en) 2009-08-26 2011-12-13 Microsoft Corporation Web page load time prediction and simulation
CN102035796A (en) * 2009-09-30 2011-04-27 华为技术有限公司 Method and device for providing differentiated service in video sharing
US9274848B2 (en) 2009-12-03 2016-03-01 International Business Machines Corporation Optimizing cloud service delivery within a cloud computing environment
US8416690B2 (en) * 2010-01-11 2013-04-09 Research In Motion Limited Explicit congestion notification based rate adaptation using binary marking in communication systems
US20110178890A1 (en) 2010-01-15 2011-07-21 Endurance International Group, Inc. Common services web hosting architecture with multiple branding
US8553540B2 (en) * 2010-03-05 2013-10-08 Microsoft Corporation Congestion control for delay sensitive applications
US8464255B2 (en) 2010-03-12 2013-06-11 Microsoft Corporation Managing performance interference effects on cloud computing servers
US20130009450A1 (en) * 2010-03-25 2013-01-10 Minoru Suzuki In-wheel motor-driven device
US8719804B2 (en) * 2010-05-05 2014-05-06 Microsoft Corporation Managing runtime execution of applications on cloud computing systems
US8477610B2 (en) * 2010-05-31 2013-07-02 Microsoft Corporation Applying policies to schedule network bandwidth among virtual machines
US10187353B2 (en) * 2010-06-02 2019-01-22 Symantec Corporation Behavioral classification of network data flows
US8375139B2 (en) 2010-06-28 2013-02-12 Canon Kabushiki Kaisha Network streaming over multiple data communication channels using content feedback information
US8804747B2 (en) * 2010-09-23 2014-08-12 Cisco Technology, Inc. Network interface controller for virtual and distributed services
US8937858B2 (en) * 2010-09-27 2015-01-20 Telefonaktiebolaget L M Ericsson (Publ) Reducing access network congestion caused by oversubscription of multicast groups
GB2485765B (en) * 2010-11-16 2014-02-12 Canon Kk Client based congestion control mechanism
JP5538257B2 (en) * 2011-02-02 2014-07-02 アラクサラネットワークス株式会社 Bandwidth monitoring device and packet relay device
US9450873B2 (en) * 2011-06-28 2016-09-20 Microsoft Technology Licensing, Llc Performance isolation for clouds
US8923294B2 (en) * 2011-06-28 2014-12-30 Polytechnic Institute Of New York University Dynamically provisioning middleboxes
US9009385B1 (en) * 2011-06-30 2015-04-14 Emc Corporation Co-residency detection in a cloud-based system
US8671407B2 (en) * 2011-07-06 2014-03-11 Microsoft Corporation Offering network performance guarantees in multi-tenant datacenters
US20130100803A1 (en) * 2011-10-21 2013-04-25 Qualcomm Incorporated Application based bandwidth control for communication networks
US9590881B2 (en) * 2012-02-07 2017-03-07 Telefonaktiebolaget Lm Ericsson (Publ) Monitoring carrier ethernet networks
US8898295B2 (en) * 2012-03-21 2014-11-25 Microsoft Corporation Achieving endpoint isolation by fairly sharing bandwidth
EP2829078A1 (en) * 2012-03-21 2015-01-28 Lightfleet Corporation A packet-flow interconnect fabric
US8804523B2 (en) * 2012-06-21 2014-08-12 Microsoft Corporation Ensuring predictable and quantifiable networking performance

Also Published As

Publication number Publication date
WO2013191927A1 (en) 2013-12-27
US20130343191A1 (en) 2013-12-26
US9537773B2 (en) 2017-01-03
CN104396200B (en) 2018-08-28
ES2616682T3 (en) 2017-06-14
US8804523B2 (en) 2014-08-12
CN104396200A (en) 2015-03-04
US20140347998A1 (en) 2014-11-27
EP2865147A1 (en) 2015-04-29
US10447594B2 (en) 2019-10-15
US20160134538A1 (en) 2016-05-12
US20170111278A1 (en) 2017-04-20
US9231869B2 (en) 2016-01-05

Similar Documents

Publication Publication Date Title
US10447594B2 (en) Ensuring predictable and quantifiable networking performance
Shieh et al. Sharing the data center network
US11070625B2 (en) Server connection capacity management
US9276864B1 (en) Dynamic network traffic throttling
US9112809B2 (en) Method and apparatus for controlling utilization in a horizontally scaled software application
US8793687B2 (en) Operating virtual switches in a virtualized computing environment
US9092269B2 (en) Offloading virtual machine flows to physical queues
US11729108B2 (en) Queue management in a forwarder
CN112041826B (en) Fine-grained traffic shaping offload for network interface cards
US9292466B1 (en) Traffic control for prioritized virtual machines
US10243816B2 (en) Automatically optimizing network traffic
Sun et al. A price-aware congestion control protocol for cloud services
JP7148596B2 (en) Network-aware elements and how to use them
Sun et al. PACCP: a price-aware congestion control protocol for datacenters
US11902826B2 (en) Acknowledgement of data packet transmission using RLC in am mode operating in 5G protocol stack with mitigation of RLC channel congestion
Xiaocui et al. A price-aware congestion control protocol for cloud services
Shen et al. Rendering differential performance preference through intelligent network edge in cloud data centers

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20141218

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAX Request for extension of the european patent (deleted)
REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Ref document number: 602013014414

Country of ref document: DE

Free format text: PREVIOUS MAIN CLASS: H04L0012803000

Ipc: H04L0012801000

RIC1 Information provided on ipc code assigned before grant

Ipc: H04L 12/801 20130101AFI20160512BHEP

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

INTG Intention to grant announced

Effective date: 20160617

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 848768

Country of ref document: AT

Kind code of ref document: T

Effective date: 20161215

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602013014414

Country of ref document: DE

REG Reference to a national code

Ref country code: NL

Ref legal event code: FP

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161123

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 848768

Country of ref document: AT

Kind code of ref document: T

Effective date: 20161123

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170223

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161123

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170224

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161123

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 5

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161123

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170323

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161123

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161123

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161123

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161123

REG Reference to a national code

Ref country code: ES

Ref legal event code: FG2A

Ref document number: 2616682

Country of ref document: ES

Kind code of ref document: T3

Effective date: 20170614

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161123

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161123

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161123

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161123

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161123

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602013014414

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161123

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161123

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170223

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20170824

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161123

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161123

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170610

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170630

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170630

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170610

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 6

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170610

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20130610

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161123

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161123

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161123

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20161123

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170323

REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Ref document number: 602013014414

Country of ref document: DE

Free format text: PREVIOUS MAIN CLASS: H04L0012801000

Ipc: H04L0047100000

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230523

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NL

Payment date: 20230523

Year of fee payment: 11

Ref country code: IT

Payment date: 20230523

Year of fee payment: 11

Ref country code: FR

Payment date: 20230523

Year of fee payment: 11

Ref country code: DE

Payment date: 20230523

Year of fee payment: 11

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20230523

Year of fee payment: 11

Ref country code: ES

Payment date: 20230703

Year of fee payment: 11