EP4097582A1 - Multi-application packet processing development framework - Google Patents

Multi-application packet processing development framework

Info

Publication number
EP4097582A1
Authority
EP
European Patent Office
Prior art keywords
application
packet
framework
implementation
specific
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP20704090.8A
Other languages
German (de)
French (fr)
Inventor
Martin Julien
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Telefonaktiebolaget LM Ericsson AB
Original Assignee
Telefonaktiebolaget LM Ericsson AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget LM Ericsson AB filed Critical Telefonaktiebolaget LM Ericsson AB
Publication of EP4097582A1 publication Critical patent/EP4097582A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 Arrangements for software engineering
    • G06F8/20 Software design
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/40 Bus networks
    • H04L12/40006 Architecture of a communication node
    • H04L12/40045 Details regarding the feeding of energy to the node from the bus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00 Indexing scheme relating to G06F9/00
    • G06F2209/50 Indexing scheme relating to G06F9/50
    • G06F2209/509 Offload
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/64 Routing or path finding of packets in data switching networks using an overlay routing layer

Definitions

  • Embodiments of the invention relate to the field of packet processing; and more specifically, to a multi-application packet processing development framework.
  • Modern datacenters and cloud systems typically offer several types of accelerators to software developers to improve processing performance.
  • Typical accelerators include Application-Specific Integrated Circuits (ASICs), such as networking ASICs, each with a specialized Software Development Kit (SDK).
  • ASIC Application-Specific Integrated Circuit
  • SDK Software Development Kit
  • Ethernet switch ASICs have recently evolved from a fixed network data plane functional model to a more P4-programmable one.
  • P4 is a technology that is designed to program the data plane functionality of network devices and to partially define interfaces between control and data planes.
  • P4-programmable devices are available in software virtual appliances and in hardware, such as NICs and Ethernet switches. These devices are typically programmed to specifically fulfill the requirements of the cloud infrastructure instead of the applications themselves. As they are starting to be offered commercially, P4-programmable devices should become increasingly available on cloud networks.
  • a method for offloading packet processing of an application includes receiving, by a multi-application framework, a set of application specifications describing a first application; selecting, by the multi-application framework, a set of devices for deployment of the first application based on the set of application specifications; generating, by the multi-application framework, application-specific templates based on the set of application specifications, a set of multi-application reference templates, and architectures of the set of devices; receiving, by the multi-application framework, a first application-specific implementation, wherein the first application-specific implementation includes logic and data objects for deployment on the set of devices; merging, by the multi-application framework, the first application-specific implementation, which corresponds to the first application, with a second application-specific implementation, which corresponds to a second application, to produce a first merged multi-application implementation; and deploying, by the multi-application framework, the first merged multi-application implementation to the set of devices.
  • a non-transitory machine-readable storage medium that provides instructions that, if executed by a processor, will cause said processor to perform operations.
  • the operations include receiving a set of application specifications describing a first application; selecting a set of devices for deployment of the first application based on the set of application specifications; generating application-specific templates based on the set of application specifications, a set of multi-application reference templates, and architectures of the set of devices; receiving a first application-specific implementation, wherein the first application-specific implementation includes logic and data objects for deployment on the set of devices; merging the first application-specific implementation, which corresponds to the first application, with a second application-specific implementation, which corresponds to a second application, to produce a first merged multi-application implementation; and deploying the first merged multi-application implementation to the set of devices.
  • systems are provided for application developers to develop their own application-specific data plane packet processing functions/logic and objects (sometimes referred to as application-specific data plane packet processing implementations) and have them deployed on P4 resources of a cloud infrastructure. Further, multiple application-specific data plane packet processing implementations can be deployed simultaneously on the same P4 resource.
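  • As a rough illustration of this workflow, the following Python sketch models the receive-specifications, select-devices, generate-templates, merge, and deploy steps. It is illustrative only: all names (AppSpec, MultiApplicationFramework, etc.) are hypothetical, and string concatenation stands in for the real generation, merging, and compilation of P4 artifacts.

```python
# Illustrative sketch of the offloading workflow; all names are
# hypothetical stand-ins for framework components described in the text.
from dataclasses import dataclass, field

@dataclass
class AppSpec:
    app_id: str
    signature: dict            # e.g., {"dip": "10.0.0.1", "l3_proto": 17}
    target_arch: str = "PSA"   # requested P4 architecture

@dataclass
class Implementation:
    app_id: str
    logic: str                                      # P4 source (stand-in)
    data_objects: list = field(default_factory=list)

class MultiApplicationFramework:
    def __init__(self, devices):
        self.devices = devices     # available P4 targets
        self.deployed = {}         # app_id -> Implementation

    def select_devices(self, spec):
        # Select targets whose P4 architecture matches the specifications.
        return [d for d in self.devices if d["arch"] == spec.target_arch]

    def generate_templates(self, spec):
        # One template per programmable block plus a control plane template.
        return {"data_plane": f"// DP template for {spec.app_id}",
                "control_plane": f"# CP template for {spec.app_id}"}

    def merge(self, impls):
        # Concatenation stands in for a real per-block P4 code merge.
        logic = "\n".join(i.logic for i in impls)
        objects = [o for i in impls for o in i.data_objects]
        return Implementation("merged", logic, objects)

    def deploy(self, spec, impl):
        self.deployed[impl.app_id] = impl
        merged = self.merge(self.deployed.values())
        for device in self.select_devices(spec):
            device["image"] = merged.logic          # stand-in for target load
        return merged

# Example: deploy two applications on the same PSA device.
fw = MultiApplicationFramework([{"arch": "PSA", "image": None}])
fw.deploy(AppSpec("app1", {"dip": "10.0.0.1"}), Implementation("app1", "// app1 logic"))
fw.deploy(AppSpec("appN", {"dip": "10.0.0.2"}), Implementation("appN", "// appN logic"))
```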
  • Figure 1 shows a P4 Portable Switch Architecture (PSA), according to some embodiments.
  • PSA Portable Switch Architecture
  • Figure 2 shows PSA packet paths, according to some embodiments.
  • Figure 3 shows PSA-based multi-application data plane framework pipeline paths, according to some embodiments.
  • Figure 4 shows a method for determining an application signature associated with a packet and selection of a packet processing logic pipeline, according to some embodiments.
  • Figure 5 shows a multi-application data plane framework pipeline, according to some embodiments.
  • Figure 6 shows a method for determining an application signature associated with a packet and selection of a packet processing logic pipeline, according to some embodiments.
  • Figure 7 shows a multi-application data plane framework pipeline, according to some embodiments.
  • Figure 8 shows a multi-application data plane framework pipeline, according to some embodiments.
  • Figure 9 shows processing of a packet in a multi-application data plane framework pipeline, according to some embodiments.
  • Figure 10 shows processing of a packet in a multi-application data plane framework pipeline, according to some embodiments.
  • Figure 11 shows a multi-application data plane framework pipeline, according to some embodiments.
  • Figure 12 shows processing of a packet in a multi-application data plane framework pipeline using separate application implementations, according to some embodiments.
  • Figure 13 shows processing of a packet in a multi-application data plane framework pipeline using separate application implementations, according to some embodiments.
  • Figure 14 shows deployment dynamicity applied to the control plane, according to some embodiments.
  • Figure 15 shows processing of a packet in multi -application data plane framework pipeline paths using separate application implementations with management, provisioning, and message components, according to some embodiments.
  • Figure 16 shows processing of a packet in multi -application data plane framework pipeline paths using separate application implementations with management, provisioning, and message components, according to some embodiments.
  • Figure 17 shows an arrangement of application-specific data plane and control plane templates, according to some embodiments.
  • Figure 18 shows processing of a packet in multi -application data plane framework pipeline paths using separate application implementations and separate application-specific templates, according to some embodiments.
  • Figure 19 shows an arrangement of multi -application frameworks for data plane and control plane implementations, according to some embodiments.
  • Figure 20 shows a sequence diagram of a possible deployment process for an application-specific implementation, according to some embodiments.
  • Figure 21 shows a sequence diagram of a possible deployment process when updating an application-specific implementation, according to some embodiments.
  • Figure 22 shows a sequence diagram where an application-specific implementation is removed from the multi-application framework deployed on a P4 device, according to some embodiments.
  • Figure 23 shows an arrangement of a datacenter/cloud system, according to some embodiments.
  • Figure 24 shows an arrangement of a datacenter/cloud system with a control plane proxy, according to some embodiments.
  • Figure 25 shows an arrangement of a datacenter/cloud system for use with application migration, according to some embodiments.
  • Figures 26, 27, 28, 29, and 30 show a set of flow diagrams for a method for offloading packet processing of an application, according to some embodiments.
  • Figure 31A illustrates connectivity between network devices (NDs) within an exemplary network, as well as three exemplary implementations of the NDs, according to some embodiments of the invention.
  • Figure 31B illustrates an exemplary way to implement a special-purpose network device according to some embodiments of the invention.
  • Figure 31C illustrates various exemplary ways in which virtual network elements (VNEs) may be coupled according to some embodiments of the invention.
  • VNEs virtual network elements
  • Figure 31D illustrates a network with a single network element (NE) on each of the NDs, and within this straightforward approach contrasts a traditional distributed approach (commonly used by traditional routers) with a centralized approach for maintaining reachability and forwarding information (also called network control), according to some embodiments of the invention.
  • NE network element
  • Figure 31E illustrates the simple case where each of the NDs implements a single NE, but a centralized control plane has abstracted multiple of the NEs in different NDs into (to represent) a single NE in one of the virtual network(s), according to some embodiments of the invention.
  • Figure 31F illustrates a case where multiple VNEs are implemented on different NDs and are coupled to each other, and where a centralized control plane has abstracted these multiple VNEs such that they appear as a single VNE within one of the virtual networks, according to some embodiments of the invention.
  • Figure 32 illustrates a general-purpose control plane device with centralized control plane (CCP), according to some embodiments of the invention.
  • references in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • Bracketed text and blocks with dashed borders may be used herein to illustrate optional operations that add additional features to embodiments of the invention. However, such notation should not be taken to mean that these are the only options or optional operations, and/or that blocks with solid borders are not optional in certain embodiments of the invention.
  • Coupled is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other.
  • Connected is used to indicate the establishment of communication between two or more elements that are coupled with each other.
  • systems are provided for application developers to develop their own application-specific data plane packet processing functions/logic and objects (sometimes referred to as application-specific data plane packet processing implementations) and have them deployed on P4 resources of a cloud infrastructure. Further, multiple application-specific data plane packet processing implementations can be deployed simultaneously on the same P4 resource.
  • a multi-application P4 packet processing development framework (sometimes referred to as a multi-application framework) provides a development framework/environment to allow multiple application-specific data plane packet processing implementations to be developed and deployed simultaneously on the same P4 resource with a logical isolation between all applications/implementations.
  • the multi-application framework requires applications/developers to specify their own application-specific packet processing specifications, which include an application-specific signature that is used as a unique identifier for identifying all traffic and management flows destined to the intended application-specific data plane packet processing implementation.
  • the multi-application framework provides each application/developer with application-specific data plane implementation templates (sometimes referred to as application-specific data plane packet processing P4 implementation templates) to implement their own specifically intended data plane logic and data model/objects, as well as with application-specific control plane implementation templates for accessing its associated resources and for message handling.
  • the application-specific data plane implementation templates provided by the multi-application framework can be based on a P4 implementation of the multi-application framework allowing multiple applications/implementations to co-exist simultaneously on the same P4 target, as each application-specific packet processing logical pipeline is selected based on the provided unique application-specific signature. Different application-specific data plane packet processing implementations can be provided, depending on the selected P4 target and P4 architecture.
  • Each application/developer can develop its own application-specific data plane packet processing functions/logic that are included in an application-specific data plane packet processing implementation using the provided templates and without requiring any knowledge of other application-specific data plane packet processing functions/logic also potentially developed in parallel for the same P4 target.
  • the multi-application framework is responsible for merging all application-specific data plane packet processing implementations from all applications/developers to create the final P4 implementation to be deployed on the selected P4 target. It is the responsibility of the multi-application framework to validate that each application-specific data plane packet processing implementation respects the constraints imposed by the multi-application framework.
  • a deployment model could have the development framework deployed first on the P4 target, as each additional application-specific data plane packet processing P4 implementation could be added individually as it becomes available. That model could be considered as offering a modular deployment model for specific P4 targets.
  • Another deployment model could merge all application-specific data plane packet processing P4 implementations with the development framework P4 implementation and deploy the final combined implementation as one unique image for the P4 target.
  • the multi-application framework could also allow an application to dynamically provision its own application-specific data plane packet processing P4 implementation.
  • the mechanism involves a control plane proxy service provided by the development framework that would be used to relay provisioning operations between an application and its data plane implementation, also assuming logical isolation between all applications.
  • the multi-application framework could allow an application to exchange packets and messages dynamically with its own application-specific data plane packet processing P4 implementation.
  • the mechanism involves a control plane proxy service provided by the development framework that would be used to relay packets and messages between an application and its data plane implementation, also assuming logical isolation between all applications.
  • applications deployed in a cloud infrastructure can be developed to leverage the flexible and dynamic packet processing capabilities of the P4 resources available in the cloud infrastructure.
  • Applications can potentially improve their processing performance by offloading some of their application-specific packet processing tasks to P4-programmable devices available in the associated cloud infrastructure. For example, assuming an application’s logic for packet parsing and header validation could be considered as recurrent processing overhead that remains essential but might represent a relatively heavy burden for the application, such tasks could potentially be performed more efficiently by other system components on behalf of the application itself.
  • the multi-application framework enables applications to share P4-programmable resources and multiple applications to deploy their own application-specific data plane packet processing logic and data model on the same P4 resource.
  • the multi-application framework provides a development framework to allow multiple application- specific data plane packet processing implementations to be developed and deployed simultaneously on the same P4 resource, assuming a logical isolation between all applications.
  • the systems described herein leverage emergent P4 technologies, which standardize a programming language and development model for programmable data planes that can be supported by multiple different types of P4-capable software appliances and hardware devices.
  • the dynamicity of the multi-application framework service allows application-specific implementations to migrate between different nodes as their associated applications would also be requested to move.
  • P4 target (e.g., a P4 device or appliance)
  • P4 architecture (e.g., the P4 Portable Switch Architecture (PSA))
  • development environment for a specific P4 target (e.g., a P4 compiler)
  • P4 architectures are used to abstract the implementation details of P4 resources
  • P4 applications can be written independently of any specific P4 appliances or devices and still be deployed on any P4 resources compatible with the chosen P4 architecture.
  • P4 technology can be considered as a standard programming model for programmable data planes and a P4 architecture can be thought of as a contract between a P4 application and a P4 target.
  • the P4 Portable Switch Architecture was specified by the P4 consortium (p4.org) to represent a standard P4 architecture for P4-capable targets.
  • the PSA specifies a packet processing pipeline (i.e., the PSA packet pipeline 100) for P4-capable switches, while also suggesting standard packet paths, as shown in Figure 2 (e.g., the PSA packet paths 200).
  • the boxes/blocks 102 (i.e., the ingress parser 102A, the ingress 102B, the ingress deparser 102C, the egress parser 102D, the egress 102E, and the egress deparser 102F) are P4-programmable functional blocks.
  • the boxes/blocks 104 are target-specific non-programmable functional blocks.
  • an incoming packet can be a normal packet from port (NFP), a packet from a CPU port (NFCPU), a normal unicast packet from ingress to egress (NU), a normal multicast-replicated packet from ingress to egress (NM), a clone from ingress to egress (CI2E), or a clone from egress to egress (CE2E).
  • NFP normal packet from port
  • NFCPU packet from a CPU port
  • NU normal unicast packet from ingress to egress
  • NM normal multicast-replicated packet from ingress to egress
  • CI2E clone from ingress to egress
  • CE2E clone from egress to egress
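  • For reference, the PSA packet path types listed above can be captured in a small enumeration; this is a convenience sketch, not part of the PSA specification.

```python
from enum import Enum

class PsaPacketPath(Enum):
    """PSA packet path types, as described in the text above."""
    NFP = "normal packet from port"
    NFCPU = "packet from CPU port"
    NU = "normal unicast packet from ingress to egress"
    NM = "normal multicast-replicated packet from ingress to egress"
    CI2E = "clone from ingress to egress"
    CE2E = "clone from egress to egress"
```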
  • P4 applications written for the PSA specify their own data plane data objects and logic by programming the six suggested functional blocks 102, within the confines of the supported PSA packet paths 200.
  • Metadata information is also minimally standardized to carry packet-related information through the PSA packet pipeline 100, such as information parsed from packets themselves, results from table lookups, or instructions relative to packet modifications or packet forwarding.
  • packets and metadata information flow from the ingress parser 102A to the buffer queueing engine 104B (although resubmission and/or recirculation is possible, as shown in Figure 2).
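  • The shape of such metadata carried with each packet might be modeled as follows; the field names are illustrative assumptions, not the patent's actual metadata layout.

```python
# Hypothetical shape of the framework (FW) metadata carried with each
# packet between programmable blocks; all field names are illustrative.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FwMetadata:
    ingress_port: int                    # system-related information
    packet_path: str = "NU"              # e.g., NFP, NU, NM, CI2E, CE2E
    app_id: Optional[int] = None         # filled in once an application
                                         # signature has been matched
    parsed: dict = field(default_factory=dict)  # pipeline-related info,
                                                # e.g., parsed header fields
```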
  • a P4 target might prefer to support its own specific P4 architecture.
  • a target-specific P4 architecture could be required to best leverage a specialized packet processing pipeline and/or functional block (e.g., specialized functional blocks for crypto or machine-learning (ML) inference purposes).
  • the specialized functional blocks could also be made available as optional extensions to the PSA architecture.
  • the standardized P4 PSA has been referenced for simplicity and clarity.
  • a multi-application P4 packet processing development framework (sometimes referred to as a multi-application framework) provides additional capabilities to P4 architectures, allowing multiple applications to be deployed dynamically and simultaneously on the same P4 device, while allowing each application's packet processing logic and data objects to remain isolated from those of other applications.
  • each programmable block 102 can include or otherwise utilize an application signature selection component 304 that is used to differentiate between the different deployed applications and select one application’s packet processing pipeline/implementation corresponding to the processed/received packet 308.
  • framework-related information, such as an application-specific signature or identity, is passed through metadata information messages between the different programmable blocks 102 of the packet processing pipeline (e.g., framework (FW) metadata 306).
  • Although there can be several different types of application signature selection components 304, two types are described in detail herein: (1) a packet-based application signature selection component 304A and (2) a metadata-based application signature selection component 304B.
  • this type of component is supported by programmable blocks 102 that can directly parse packets (e.g., the ingress parser 102A and the egress parser 102D).
  • the information used to uniquely identify an application can be based on the packet headers of a packet 308, while it could also be complemented with metadata information passed along with each packet 308 from a previous block 102/104, if available (e.g., the FW metadata 306). That information is then used to select the corresponding application’s packet processing pipeline logic and data objects to process the packet 308.
  • this type of component is supported on programmable blocks 102 that cannot directly parse packets (e.g., the ingress 102B, the ingress deparser 102C, the egress 102E, and the egress deparser 102F). Instead, the information used to uniquely identify an application can be based on the metadata information passed along with each packet (e.g., the FW metadata 306). That information is then used to select the corresponding application’s packet processing pipeline logic and data objects to process the packet 308.
  • Although Figure 3 shows packet-based application signature selection components 304A and metadata-based application signature selection components 304B outside the programmable blocks 102 and 104 and used by the framework 302, the signature selection components 304 can reside within the programmable blocks 102 (e.g., within the respective frameworks 302).
  • as each programmable block 102 includes packet processing logic provided by the multi-application framework 302 for application signature selection purposes, it is the framework's 302 responsibility to guarantee the co-existence of multiple applications within the system, as well as the required isolation between the different applications' data plane implementations.
  • P4 architectures typically only allow a few P4-programmable blocks 102 to directly parse packet headers.
  • the multi-application framework 302 implements an application signature selection component 304, which can leverage a combination of system data, metadata (e.g., the FW metadata 306), packet header information, and table lookups.
  • each packet 308 could be associated with system data directly extracted from the system itself (e.g., the incoming port of the packet), metadata messages carried along with each packet through the pipeline (e.g., an application identifier), or packet header information extracted directly from the packet headers themselves (e.g., SA, DA, L3 proto, SIP, DIP, etc.), which could be used to uniquely identify different traffic flows and applications.
  • Figure 4 shows a method 400 for determining an application signature associated with a packet 308 and selection of a packet processing logic pipeline, according to one example embodiment.
  • the method 400 may be performed by or using a packet-based application signature selection component 304A.
  • the method 400 may commence at operation 402 with the retrieval of system information for a set of incoming packets 308 (e.g., the incoming port of the packets 308).
  • metadata (e.g., FW metadata 306) may be examined for application-specific signature parameters (e.g., an application identifier).
  • table lookups may be performed (if needed and supported) for application- specific signature parameters.
  • packets 308 are parsed for application-specific signature parameters (e.g., source address (SA), destination address (DA), L3 proto, Source Internet Protocol (SIP) address, Destination Internet Protocol (DIP) address, etc.).
  • SA source address
  • DA destination address
  • L3 proto Layer 3 protocol
  • SIP Source Internet Protocol
  • DIP Destination Internet Protocol
  • a signature key is generated using determined information from operations 402-408 and the key is used to identify a corresponding application-specific signature that is associated with a corresponding packet processing logic pipeline.
  • Each application-specific signature is unique, to avoid any possible conflict between applications, and corresponds to an application-specific data plane packet processing implementation with a corresponding set of data objects and logic for processing packets 308.
  • the key building process could either be a static definition of the key structure or allow for a more dynamic approach. While a static key structure could be built from a fixed and pre-determined set of parameters, a more dynamic approach would instead build the key from the minimum information required to uniquely identify the application-specific signatures. For example, for the static approach, each application could specify its own application key/signature in terms of a set of fixed parameters, for which each application could either specify (1) specific values or ranges of values or (2) leave the values open (i.e., the values of the parameters are not part of the key/signature). For the more dynamic approach, the key/signature could be minimally built according to the deployed applications, implementing the minimum parsing and logical decisions required to uniquely identify an application's packet processing pipeline (e.g., leveraging a decision tree).
  • an application could also be identified through several different application-specific signatures (e.g., depending on the supported packet types).
  • the framework 302 could provide a default behavior, which could be to discard packets 308, or simply forward them using the default packet processing pipeline instead of an application-specific one.
  • the method 400 moves to operation 414 to use the application-specific packet processing logic pipeline (e.g., the application-specific data plane packet processing implementation) associated with the matched application-specific signature.
  • the method 400 moves to operation 416 to use a default packet processing logic pipeline (e.g., a default data plane packet processing implementation).
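  • A minimal sketch of method 400's selection logic follows, assuming a static key structure built from the incoming port and two header fields; the table contents, field names, and pipeline names are hypothetical.

```python
# Illustrative sketch of method 400: build a signature key from system
# data, metadata, and parsed headers, then dispatch to an application-
# specific pipeline or the default one.
SIGNATURES = {
    # (ingress port, L3 proto, DIP) -> application-specific pipeline
    (1, 17, "10.0.0.1"): "app1_pipeline",
    (2, 6, "10.0.0.2"): "appN_pipeline",
}

def build_key(system_info, metadata, headers):
    # Operations 402-408: system information, metadata parameters (unused
    # in this static example), optional table lookups, and header fields.
    return (system_info["ingress_port"],
            headers.get("l3_proto"),
            headers.get("dip"))

def select_pipeline(system_info, metadata, headers):
    key = build_key(system_info, metadata, headers)   # operation 410
    pipeline = SIGNATURES.get(key)
    if pipeline is None:
        return "default_pipeline"                     # operation 416
    return pipeline                                   # operation 414

# A UDP (proto 17) packet arriving on port 1 for 10.0.0.1 matches app 1.
print(select_pipeline({"ingress_port": 1}, {}, {"l3_proto": 17, "dip": "10.0.0.1"}))
```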
  • P4 architectures can allow metadata information (e.g., FW metadata 306) to be carried along with each packet 308.
  • the metadata information can minimally include system-related information (e.g., an incoming or an outgoing port) as well as pipeline-related information, such as packet header information.
  • the P4-programmable functional block 102₁ could be used to extract packet header information from the packets 308 and include it in the metadata (i.e., FW metadata 306).
  • the P4-programmable functional block 102₂ would receive the metadata information along with the corresponding packet 308 and could make decisions based on that information.
  • the multi-application framework 302 implements application signature selection leveraging mainly a combination of system-level and metadata information, and optionally table lookups, to select between all the available application-specific packet processing logic pipelines, as shown in the method 600 of Figure 6. Namely, the method 600 omits operation 408 of the method 400 of Figure 4.
  • the packet 308 can then be processed by the corresponding application-specific data plane packet processing implementation.
  • the application-specific data plane packet processing implementation 702₁ in a multi-application data plane framework pipeline 700 is defined by associated data objects 704₁ and data plane logic 706₁. While the data objects 704 can refer to statically and dynamically provisioned information (e.g., using data structures or tables), they can be used by the corresponding application's/implementation's processing logic 706 to make dynamic decisions regarding packet processing.
  • Figure 7 shows a P4-programmable block 102₁ where the framework application signature selection component 304 is based on packet parsing (e.g., a packet-based application signature selection component 304A) and selects application 1 (App. 1) as the required packet processing pipeline/implementation 702 for the received packet 308.
  • the received packet 308 is then processed by application 1's packet processing logic 706₁ using the associated data objects 704₁.
  • the metadata 306 sent from the functional block 102₁ includes an indication of the identity of the application (e.g., App. 1) for subsequent logical blocks 102 to also be able to easily identify the same application and process their part of the selected application's packet processing pipeline/implementation 702.
  • the data objects 704 associated with an application-specific data plane packet processing implementation 702 can be associated with the same application identifier used to identify the application-specific data plane packet processing implementation 702. As packets 308 progress through a packet processing pipeline, they transit through a number of different functional blocks 102.
  • Figure 8 shows the progress of the same packet 308 through functional blocks 102₂ and 102₃ of the multi-application data plane framework pipeline 800.
  • as the metadata 306 sent from functional block 102₁ includes the application identity (App. ID), functional block 102₂ uses that information directly to also select the packet processing pipeline/implementation 702₁ corresponding to application 1.
  • functional block 102₂ also sends the application identity (App. ID) in the metadata 306 to functional block 102₃, which also uses it to select the packet processing pipeline/implementation 702₁ corresponding to application 1.
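  • The propagation of the application identity through successive blocks can be sketched as follows; the block and pipeline names are illustrative, and dicts stand in for packets and FW metadata.

```python
# Illustrative App ID propagation between functional blocks via metadata.
PIPELINES = {1: "application 1", 2: "application N"}

def block_1(packet, metadata):
    # Packet-based selection (e.g., the ingress parser): match the
    # application signature and record the App ID in the FW metadata.
    metadata["app_id"] = 1 if packet["dip"] == "10.0.0.1" else 2
    return f"{PIPELINES[metadata['app_id']]}: block 1 logic"

def block_n(packet, metadata, n):
    # Metadata-based selection: later blocks reuse the recorded App ID
    # instead of re-parsing the packet.
    return f"{PIPELINES[metadata['app_id']]}: block {n} logic"

md = {}
pkt = {"dip": "10.0.0.1"}
print(block_1(pkt, md))       # selects application 1, records its App ID
print(block_n(pkt, md, 2))    # blocks 2 and 3 follow the recorded App ID
print(block_n(pkt, md, 3))
```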
  • the multi-application framework 302 is meant to process an application's packet processing pipeline/implementation 702 using its own application-specific data objects 704 and packet processing logic 706.
  • the first multi-application framework's application signature selection component 304 is used to identify the application associated with the received packet 308.
  • each functional block 102 keeps selecting the same application.
  • Figure 10 shows an example of a multi-application data plane framework pipeline 1000 where the first functional block 102₁ of a P4 architecture includes the framework's application signature selection component 304A, which is used to select between two different applications (e.g., application 1 and application N).
  • as each application provides its own specific application signature, each packet 308 is tested against those application signatures based on generated keys.
  • if a packet 308 matches application 1's signature, it is processed by application 1's pipeline/implementation 702₁ using corresponding data plane logic and data objects.
  • if a packet 308 matches application N's signature, it is processed by application N's pipeline/implementation 702N using corresponding data plane logic and data objects.
  • in Figure 11, the first three P4-programmable functional blocks 102₁-102₃ of a P4 architecture of an example multi-application data plane framework pipeline 1100 are shown.
  • packets 308 are first processed by the framework's application signature selection component 304, which is used to select between the different applications' available packet processing pipelines/implementations 702 and process packets 308 accordingly.
  • as packets 308 progress through the pipeline/implementation 702, they transit through the different functional blocks 102 with their associated metadata 306.
  • the functional block 102₁ adds the application identity of the selected application to the metadata 306 sent to functional block 102₂, which uses it to also select the corresponding application's packet processing logic and associated data objects of the pipeline/implementation 702₁.
  • functional block 102₂ also sends its selected application identity (App. ID) in the metadata 306 sent to functional block 102₃, which also uses it to select the packet processing logic and data objects of the pipeline/implementation 702₁ corresponding to the same application.
  • the multi-application framework 302 is meant to process an application’s packet processing pipeline/implementation 702 using its own application-specific data objects and packet processing logic.
  • the first signature selection component 304 is used to identify the application associated with the received packet 308.
  • as the packet 308 progresses through the P4 architecture and each functional block 102 keeps selecting the same application, it might be considered that the complete packet processing of the P4 architecture would be application-specific.
  • as both applications (application 1 and application N) are deployed on the same P4 target, each application could be considered as having its own packet processing pipeline implementation.
  • the multi-application framework 302 described herein leverages each application’s signature to differentiate between applications in each functional block 102, starting with the ingress parser functional block 102A.
  • as the traffic destined to each application is identified by the framework's application signature selection component 304, the data plane packet processing logic of each application could be considered as parallel packet processing pipelines, as shown in Figure 12.
  • P4 devices have a P4 architecture that is tightly coupled with their hardware implementation.
  • the multi-application framework 302 can logically separate the packet processing pipeline as well as the corresponding resources. But as P4 architectures/devices can also be deployed on FPGAs, more flexibility could be offered, allowing the multi-application framework 302 to instantiate a new dedicated packet processing pipeline for each application.
  • an FPGA-based implementation of the multi-application framework 302 could assume a first logical block 1302 that would be used to first detect an application signature and then direct packets 308 to the packet processing pipeline 1300/702 completely dedicated to the selected application.
  • the concept would correspond to the instantiation of a completely dedicated packet processing pipeline 1300/702 per application, with dedicated resources. Assuming that each pipeline 1300/702 would be dedicated to a unique application, the different application-specific signature selection components 304 might not be needed in the P4-programmable functional blocks 102 unless deemed useful for an application to determine its own required packet processing logic to execute.
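  • A sketch of this dedicated-pipeline model follows, assuming a first dispatch stage and one pipeline object per application; the class, signature keys, and application names are illustrative.

```python
# Illustrative FPGA-style deployment: a first logical block detects the
# application signature, then hands the packet to a pipeline instance
# fully dedicated to that application.
class DedicatedPipeline:
    def __init__(self, app_id):
        self.app_id = app_id   # pipeline and resources owned by one app
    def process(self, packet):
        return f"application {self.app_id} processed {packet}"

# One dedicated pipeline instance per application signature.
pipelines = {("10.0.0.1",): DedicatedPipeline(1),
             ("10.0.0.2",): DedicatedPipeline("N")}

def detect_and_dispatch(packet):
    # First logical block: signature detection, then direct dispatch;
    # per-block signature selection is no longer needed downstream.
    return pipelines[(packet["dip"],)].process(packet)

print(detect_and_dispatch({"dip": "10.0.0.2"}))
```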
  • Figure 14 shows deployment dynamicity but applied to the control plane, which provides control plane isolation between applications.
  • the multi-application framework 302 leverages (1) a control plane proxy 1402, (2) an application signature selection component 304, (3) a DP-CP proxy communication mechanism 1404, and (4) a control plane proxy-application communication mechanism 1406.
  • the control plane proxy 1402, in the multi-application control plane framework 1410, acts as a proxy between an application's data plane implementation 702₁-702N in the multi-application data plane framework 1412 in a P4 target 1414 and its control plane counterpart 1408₁-1408N (e.g., for providing interface and API adaptations).
  • the control plane proxy 1402 provides certain Quality-of-Service (QoS), security, and analytics services.
  • QoS Quality-of-Service
  • the framework's application signature selection component 304 is used to differentiate between the different applications.
  • each application could instead be provided its own instance of a control plane proxy 1402, which would provide even more isolation between applications.
  • an application signature selection component 304 is used to differentiate between the different deployed applications and select the right application interfaces to use to communicate with either an application’s data plane implementation 702, or an application’s control plane 1408.
  • the DP-CP proxy communication mechanism 1404 facilitates the exchange of metadata between the data plane and the control plane proxy 1402.
  • the information communicated between them can be framework-related (e.g., an application-specific signature or identity), packet-related, and/or application-related information, and would be specified minimally through an API.
  • the control plane proxy-application communication mechanism 1406 facilitates the exchange of metadata 306 between the control plane proxy 1402 and the application itself (e.g., the multi-application framework via the management, provisioning, and message component 1416, the application 1 control plane 1408₁ via the management, provisioning, and message component 1418₁, and the application N control plane 1408N via the management, provisioning, and message component 1418N).
  • the information communicated between them can be framework-related (e.g., an application-specific signature or identity), packet-related, and/or application- related information, and would be specified minimally through an API.
  • the multi-application framework 302 could allow an application to manage and provision its own application-specific data plane packet processing implementation 702. That means that an application can be granted access to its own specifically allocated resources of the multi-application framework 302 (e.g., accessing all the tables specified by its own application-specific data plane packet processing implementation).
  • each application can be provided its own name space to clearly identify the resources that it was allocated, guaranteeing resource isolation between applications.
  • the multi-application framework 302 would allow applications to exchange packets 308 between their own application-specific control plane 1408 and their associated application-specific data plane packet processing implementation 702. Accordingly, packets 308 could be copied or forwarded from an application-specific data plane packet processing implementation 702 to its control plane counterpart or directly originate from an application’s control plane logic implementation 1408 and be forwarded by its application-specific data plane packet processing implementations 702 counterpart. That can also refer to messages exchanged between an application’s data plane and control plane implementations for events or analytics reporting purposes.
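  • A sketch of this proxy behavior follows, assuming per-application name spaces keyed by application identifier; the class and method names are hypothetical.

```python
# Illustrative control plane proxy: provisioning calls and packets are
# relayed between an application's control plane and its data plane
# implementation, with per-application name spaces for isolation.
class ControlPlaneProxy:
    def __init__(self):
        self.tables = {}   # (app_id, table) -> entries: per-app name space

    def provision(self, app_id, table, entry):
        # An application may only touch tables in its own name space.
        self.tables.setdefault((app_id, table), []).append(entry)

    def read(self, app_id, table):
        return self.tables.get((app_id, table), [])

    def relay_packet(self, app_id, packet, direction):
        # Relay packets/messages between CP and DP for one application.
        return f"application {app_id}: {direction} {packet}"

proxy = ControlPlaneProxy()
proxy.provision("app1", "fwd", {"dip": "10.0.0.1", "port": 3})
print(proxy.read("app1", "fwd"))    # app1 sees only its own entries
print(proxy.read("appN", "fwd"))    # [] -> isolation between applications
print(proxy.relay_packet("app1", "pkt-42", "cp-to-dp"))
```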
  • a framework-specific management, provisioning, and message component 1416 could also be used.
  • a centralized multi-application framework 302 could administer all the framework instances 302 deployed on the supported P4 targets 1414. That would allow the provisioning of the tables specified by the multi-application framework data plane implementation 702 and/or the multi-application framework 302, as well as the exchange of messages between the multi-application framework data plane implementation 702 and the centralized multi-application framework 302 itself.
  • the multi-application framework 302 provides a solution so that each application could have its own data plane packet processing implementation 702 deployed on a P4-programmable target device, as well as having access to the interfaces for provisioning and exchanging packets 308 with their application-specific control plane while each application remains logically isolated from each other.
  • management, provisioning, and message components 1418 of each application’s control plane 1408 are shown.
  • each application's control plane 1408₁-1408N could manage and provision its data plane packet processing implementation 702 from its own management, provisioning, and message components 1418₁-1418N, via a corresponding control plane proxy 1402 provided by the multi-application control plane framework 1410.
  • Figure 16 shows the exchange of packets or messages between an application’s data plane packet processing logic and its associated control plane framework 1410.
  • different application control planes 1408 can exchange packets between the multi-application control plane framework 1410 and their data plane packet processing implementation 702, using their own interface generated by the multi-application framework 302.
  • the multi-application framework 302 can develop and/or provide data plane and control plane reference templates 1704 for each supported P4 target architecture 1702.
  • Those reference templates 1704 could be considered as reference implementations of all the required data plane and control plane framework data objects and logic necessary to allow multiple applications to be deployed simultaneously on the same P4-programmable device. Leveraging those reference templates 1704, application-specific data plane and control plane templates 1706 can be generated and then implemented.
  • each application/developer provides its own application-specific specifications 1708 to the multi-application framework 302.
  • the application-specific specifications 1708 can include several pieces of information.
  • the application-specific specifications 1708 can include (1) hardware dependent information (e.g., incoming ports, virtual channel, etc.), (2) packet header (e.g., L2/L3/L4 headers) information (e.g., SA, DA, L3 proto, SIP, DIP, etc.), and/or (3) metadata information (e.g., application identifier, etc.).
  • the application-specific specifications 1708 can include (1) deployment information (e.g., target P4 architecture, resources, device type, location, etc.), (2) performance information (e.g., bandwidth, latency, statistics, debug, etc.), and/or (3) enhanced capabilities information (e.g., management and provisioning interfaces, message and packet interfaces, specialized functions, hardware accelerations (e.g., via externs), etc.).
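  • One possible encoding of such application-specific specifications 1708 is sketched below; the field names mirror the categories above and are assumptions, not a defined schema.

```python
# Hypothetical encoding of application-specific specifications 1708;
# every key and value here is illustrative.
app1_spec = {
    "signature": {                       # unique application signature
        "hardware": {"incoming_ports": [1, 2]},
        "headers": {"l3_proto": 17, "dip": "10.0.0.1"},
        "metadata": {"app_id": 1},
    },
    "deployment": {"arch": "PSA", "device_type": "smart-NIC",
                   "location": "compute-node-A"},
    "performance": {"bandwidth_gbps": 10, "statistics": True},
    "capabilities": {"provisioning_api": True, "packet_interface": True},
}
```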
  • the multi-application framework 302 can use both the application-specific specifications 1708 and the data plane and control plane reference templates 1704 to generate application-specific data plane and control plane templates 1710A and 1710B.
  • the generated templates 1710 can include all the necessary components required by the multi-application framework to support multiple applications on the same P4 target, such as the framework's application signature component and the minimally required metadata information between the P4-programmable blocks 102 of the data plane pipeline, which could include passing the associated application identifier as metadata information.
  • the application-specific data plane and control plane templates 1710 are generated to guarantee isolation between applications and to allow applications to be developed completely independently from each other. That assumes that applications could be developed with different amounts of required resources, different processing logic, and table definitions, as well as different timelines for development and deployment.
  • the development framework 302 can provide the minimally required packet processing logic and metadata information to assure that each application’s packet processing logic would be executed independently of other applications.
  • P4 architectures can include several P4-programmable blocks 102 to program, which requires the multi-application framework 302 to generate several P4 templates per intended data plane packet processing pipeline block (i.e., one per P4-programmable block 102).
  • for the PSA, there are six P4-programmable blocks 102, which corresponds to the generation of six application-specific data plane templates 1706.
  • as the multi-application framework 302 provides each application with their own application-specific data plane and control plane generated templates 1706, it allows each application to develop their application at its own pace, without necessarily depending on other applications. It is assumed that each application-specific data plane packet processing implementation 702 could be developed and tested in parallel by different development teams, and submitted to the multi-application framework 302 for deployment only once the development process is complete.
  • As an application-specific data plane implementation 702 is deployed, new APIs can be generated to exchange management, provisioning, and message information between the application's data plane and control plane implementations. To ease the communication, the control plane proxy 1402 can facilitate proper communication routing and isolation between the multiple applications deployed on the same P4 device or appliance. While the APIs could be generated to populate the table entries used by the application's data plane packet processing logic, they could also be used to exchange packets between an application's data plane and control plane implementation. The communication channels between the framework's data plane and control plane proxy 1402, as well as between the control plane proxy 1402 and each application, can be implemented using a proprietary solution and/or be automatically generated using P4 runtime technology.
  • developing an application-specific data plane packet processing implementation 702 implies that an application/developer would implement its own intended data plane packet processing logic using the P4 language, as well as potentially the data model or provisioning information that might be required by the packet processing logic itself.
  • an application is able to write its own application-specific implementation 1902 for the multi-application framework 302 to deploy on a selected P4 target.
  • as the multi-application framework 302 provides different framework data plane and control plane templates 1706 to each application, it is assumed that each application/developer could be developing its own application-specific data plane and control plane packet processing implementations 1902 independently of other applications.
  • a deployment model can include deploying a multi-application framework 302 first on the P4 target, while each additional application-specific implementation 1902 can be added individually once it becomes available. That model can be considered as offering a modular deployment model for specific P4 targets. As yet another possible deployment model, all application-specific implementations 1902 are merged with the currently deployed implementation 1904 and the final combined implementation 1904 is deployed as one unique image for the P4 target.
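  • The single-image deployment model can be sketched as follows, with per-block string concatenation standing in for a real P4 code merge and compilation; the function and block names are hypothetical.

```python
# Illustrative "one unique image" model: all active application-specific
# implementations are merged with the framework implementation, block by
# block, before deployment.
def build_image(framework_impl, app_impls):
    blocks = {}
    for block, code in framework_impl.items():    # signature selection etc.
        blocks[block] = [code]
    for app, impl in app_impls.items():
        for block, code in impl.items():          # one snippet per P4 block
            blocks.setdefault(block, []).append(f"// {app}\n{code}")
    # Join each block's snippets; a real framework would compile the result.
    return {block: "\n".join(parts) for block, parts in blocks.items()}

image = build_image(
    {"ingress_parser": "// framework signature selection"},
    {"app1": {"ingress_parser": "state parse_app1 { /* ... */ }"},
     "appN": {"ingress_parser": "state parse_appN { /* ... */ }"}},
)
print(image["ingress_parser"])
```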
  • Figure 20 shows a sequence diagram 2000 of a possible deployment process for an application-specific implementation 1902.
  • an application/developer 2002 first provides its own application-specific specifications 1708 to the multi-application framework 302 at operation 2006A.
  • the multi-application framework 302 analyzes the provided information to possibly select one or more P4 device(s) or appliance(s) 2004₁-2004S for potential deployment.
  • the multi-application framework 302 generates the corresponding application-specific data plane and control plane templates 1706 and transmits them or otherwise makes them available to the application/developer 2002 at operation 2006D.
  • the application/developer 2002 can submit its application-specific data plane and control plane implementations 1902 to the multi-application framework 302 at operation 2006F.
  • the multi-application framework 302 is then responsible for merging active application-specific data plane implementations 702 at operation 2006G, validating resource usage on respective P4 targets at operation 2006H, and deploying the merged implementation 1904 onto the selected P4 device(s) or appliance(s) 2004₁-2004S at operation 2006I.
  • the multi-application framework 302 ensures that each P4-programmable block 102 has a proper implementation of the framework’s application signature selection component 304 as well as a proper merge of the P4 code from all applications.
  • the multi-application framework 302 is responsible for generating and deploying a control plane proxy 1402 component that would take into account all the applications.
  • the multi-application framework 302 might be required to merge all application-specific implementations located on the same P4 target first, depending on how flexible a P4 target might be. On some P4 targets, it might be possible to dynamically compile and deploy new data objects and logic completely independently of the already deployed ones, while on some others, any update to a P4-programmable block 102 that would be shared between multiple applications might require a P4 code merge, recompilation, and redeployment of the P4 target.
  • Figure 21 shows a sequence diagram 2100 of a possible deployment process when updating an application-specific implementation 1902.
  • This diagram 2100 describes the use case where only an application-specific data plane logic implementation would be updated but not its data objects or intended characteristics, as the application-specific specifications 1708 are assumed to remain unchanged.
  • the application/developer 2002 updates application-specific implementations 1902 at operation 2102A and submits its application-specific data plane and control plane implementations 1902 to the multi-application framework 302 at operation 2102B.
  • the multi-application framework 302 is then responsible for merging active application-specific data plane implementations 702 at operation 2102C, validating resource usage on respective P4 targets at operation 2102D, and deploying the merged implementation 1904 onto the selected P4 device(s) or appliance(s) 2004₁-2004S at operation 2102E.
  • Figure 22 shows a sequence diagram 2200 where an application-specific implementation 1902 is removed from the multi-application framework 302 deployed on a P4 device 2004.
  • an application/developer 2002 could request that the multi-application framework 302 remove its application-specific implementation 1902 from a P4 device 2004 at operation 2202A.
  • the multi-application framework 302 is responsible for redeploying a proper image on the corresponding P4 device(s) 2004₁-2004S without the requesting application's application-specific implementation 1902.
  • the multi-application framework 302 merges all the remaining application-specific implementations 702, revalidates that all remaining implementations 702 still respect the allocated amount of resources at operation 2202C, and redeploys the newly generated image onto the corresponding P4 device(s) 2004₁-2004S at operation 2202D.
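  • A sketch of this removal flow follows; a length check stands in for real resource validation, the device is modeled as a dict, and all names are hypothetical.

```python
# Illustrative removal flow of Figure 22: drop one application's
# implementation, re-merge and re-validate the remainder, then redeploy.
def remove_application(deployed, app_id, device):
    deployed.pop(app_id, None)                 # remove the app's impl
    image = "\n".join(deployed.values())       # re-merge the remainder
    assert len(image) <= device["capacity"], \
        "remaining implementations exceed the allocated resources"
    device["image"] = image                    # redeploy the new image

deployed = {"app1": "// app1 P4 code", "appN": "// appN P4 code"}
device = {"capacity": 1000, "image": None}
remove_application(deployed, "app1", device)
print(device["image"])    # only application N's implementation remains
```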
  • Figure 23 shows a datacenter/cloud system 2300 where a centralized multi-application framework service 2314 is offered and made available to cloud tenants for their applications 2308Ai-2308Ax and 2308BI-2308BY operating on compute nodes 2318A-2318M.
  • the datacenter/cloud system 2300 can additionally include network nodes 2304A-2304N in addition to the compute nodes 2318A-2318M and each of the network nodes 2304A-2304N and the compute nodes 2318A-2318M can include corresponding multi -application data plane and control plane frameworks 2302 and multi-application framework agents 2316.
  • the multi-application data plane and control plane frameworks 2302 can operate/reside on smart network interface cards (smart-NICs) (e.g., the smart-NICS 2312A and 2312B).
  • smart-NICs smart network interface cards
  • the one or more centralized multi-application framework services 2314 can be connected to multi -application framework agents 2316 across the system 2300.
  • An agent 2316 directly manages the multi -application frameworks 2302 of a P4 target.
  • Figure 24 shows the interactions between an application’s data plane 2302 and control plane implementations 2406 via a multi-application control plane proxy 2402.
  • an application’s data plane implementation could be managed and provisioned by an application control plane via the multi-application control plane proxy 2402 that would be located on the same node as the corresponding P4 target.
  • messages sent and received between an application’s data plane implementation and an application’s control plane would also be transiting through the multi-application control plane proxy 2402.
  • Figure 25 shows a use case where the multi-application framework service 2314 orchestrates the migration of an application-specific implementation between different compute nodes 2318A and 2318M. For example, assuming an application 2308 would be migrated between different compute nodes 2318, it might be necessary to also move their corresponding application-specific implementations from one P4 device to another. In Figure 25, the originally selected P4 device was the smart-NIC 2312Ai located on the same compute node 2318A as the application 2308Ai. As the application 2308Ai is migrated from compute node 2318A to compute node 2318M, the multi-application framework service 2314 is also requested to contribute by migrating its corresponding application-specific implementations between compute nodes 2318A and 2318M.
  • This capability of the multi-application framework service 2314 to allow applications 2308 to easily migrate between nodes offers a lot of flexibility and enables dynamicity of the system. It shows that once application-specific implementations are submitted to the multi application framework service, the same implementations can be migrated without any specific code rewrite by any application developer. It’s a great advantage to have a multi -application framework service 2314 that can support a dynamic deployment of applications 2308.
  • Figures 26, 27, 28, 29, and 30 show a set of flow diagrams for a method 2600 for offloading packet processing of an application 2308, according to one example embodiment.
  • the method 2600 may commence at operation 2602 a multi-application framework 302 receiving a set of application specifications 1708 describing a first application 2308Ai.
  • the set of application specifications 1708 include a definition of a signature that is a unique identifier for identifying all traffic and management flows of the first application 2308Ai.
  • the multi-application framework 302 selects a set of devices 2318/2304 for deployment of the first application 2308Ai based on the set of application specifications 1708.
  • the multi-application framework 302 generates application- specific templates 1706 based on the set of application specifications 1708, a set of multi application reference templates 1704, and architectures 1702 of the set of devices 2318/2304. [00118] At operation 2608, the multi-application framework 302 receives a first application- specific implementation 702i, wherein the first application-specific implementation 702i includes logic and data objects for deployment on the set of devices 2318/2304.
  • the multi-application framework 302 merges the first application-specific implementation 702i, which corresponds to the first application 2308Ai, with a second application-specific implementation 702N, which corresponds to a second application 2308Ax, to produce a first merged multi -application implementation 1904.
  • the merging is performed per device 2318/2304 in the set of devices 2318/2304 such that a separate first merged multi -application implementation 1904 is produced and deployed per each device 2318/2304 in the set of set devices 2318/2304.
  • the multi-application framework 302 deploys the first merged multi-application implementation 1904 to the set of devices 2318/2304.
  • Deploying the first merged multi -application implementation 1904 includes deploying the merged multi -application implementation 1904 on a set of programmable components 102 in each device in the set of devices 2318/2304.
  • the method 2600 may move to either operation 2702 (shown in Figure 27), 2802 (shown in Figure 28), or 2902 (shown in Figure 29).
  • the multi-application framework 302 detects a request 2202A to remove the first application 2308Ai from the set of devices 2318/2304. [00123] At operation 2704, the multi-application framework 302 merges active application- specific implementations 702, including the second application-specific implementation 702N, to produce a second merged multi -application implementation 1904 that, in comparison to the first merged multi -application implementation 1904, does not correspond to the first application- specific implementation 702i.
  • the multi-application framework 302 deploys the second merged multi -application implementation 1904 to the set of devices 2318/2304.
  • the request 2202A is from one of (1) the first application 2308Ai or (2) a developer of the first application 2308Ai or triggered by the multi-application framework 302.
  • the multi-application framework 302 detects a request 2102B to update the first application-specific implementation 702i, wherein the request 2102B includes an updated first application-specific implementation 702i.
  • the multi-application framework 302 merges the updated first application-specific implementation 702i, which corresponds to the first application 2308Ai, with the second application-specific implementation 702N, which corresponds to the second application 2308Ax, to produce a second merged multi-application implementation 1904.
  • the multi-application framework 302 deploys the second merged multi -application implementation 1904 to the set of devices 2318/2304.
  • the request 2102B is from one of (1) the first application 2308Ai or (2) a developer of the first application 2308Ai or triggered by the multi-application framework 302 (e.g., the request 2102B is related to an update of the multi -application framework 302).
  • the multi-application framework 302 receives, on a first programmable component 102i within a first device 2318/2304 in the set of devices 2318/2304, a packet 308;
  • the multi-application framework 302 determines, on the first programmable component 102i, that the packet 308 is associated with the first application 2308Ai.
  • determining that the packet 308 is associated with the first application 2308Ai includes sub-operation 2904A to build, by a framework application signature selection component 304A, a signature key for the packet 308 based on one or more of (1) system information for the packet 308, (2) a set of metadata 306 from a previous programmable component 102 of the first device 2318/2304, (3) application-specific signature parameters from table lookups, and (4) application-specific signature parameters from parsing the packet 308.
  • determining that the packet 308 is associated with the first application 2308Ai may additionally include sub-operation 2904B to match, by the framework application signature selection component 304 A, the signature key with a signature of the first application 2308Ai.
  • the multi-application framework 302 selects, in response to determining that the packet 308 is associated with the first application 2308Ai, the first application-specific implementation 7021 included in the first merged multi-application implementation 1904 for use in processing the packet 308 by the first programmable component 102i within the first device 2318/2304.
  • the multi-application framework 302 processes, by the first application-specific implementation 7021 in the first programmable component 102i in the first device 2318/2304 in response to selecting the first application-specific implementation 702i, the packet 308 to cause the first programmable component 102i to one or more of (1) perform a task of the first application 2308Ai and (2) generate a first set of metadata 306.
  • the method 2600 may either perform operation 2910 or operation 2912.
  • the first programmable component 102i passes one or more of (1) the first set of metadata 306 and (2) the packet 308 to a second programmable component 1022 or a programmable component 102 in the first device 2318/2304 or a second device 2318/2304 in the set of devices 2318/2304.
  • the first programmable component 102i drops the packet 308.
  • the method 2600 moves to operation 3002 in Figure 30.
  • the multi -application framework 302 receives, on the second programmable component 1022 within the first device 2318/2304, the packet 308 and the first set of metadata 306.
  • the multi-application framework 302 determines, on the second programmable component 1022, that the packet 308 is associated with the first application 2308Ai based on the packet 308 and the first set of metadata 306.
  • determining that the packet 308 is associated with the first application 2308Ai includes sub operation 3004A to build, by a framework application signature selection component 304A, a signature key for the packet 308 based on one or more of (1) system information for the packet 308, (2) a set of metadata 306 from a previous programmable component 102 of the first device 2318/2304, (3) application-specific signature parameters from table lookups, and (4) application-specific signature parameters from parsing the packet 308.
  • determining that the packet 308 is associated with the first application 2308Ai may additionally include sub-operation 2904B to match, by the framework application signature selection component 304A, the signature key with a signature of the first application 2308Ai.
  • the multi-application framework 302 selects, in response to determining that the packet 308 is associated with the first application 2308Ai, the first application-specific implementation 7021 included in the first merged multi-application implementation 1904 for use in processing the packet 308 by the second programmable component 1022 within the first device 2318/2304.
  • the multi-application framework 302 processes, by the first application-specific implementation 7021 in the second programmable component 1022 in the first device 2318/2304 in response to selecting the first application-specific implementation 702i, the packet 308 to generate a second set of metadata 306.
  • the multi-application framework 302 passes, by the second programmable component 1022, the metadata 306 and the packet 308 to a third programmable component 1023 in the first device 2318/2304.
  • the metadata 306 and/or the packet 308 a processing pipeline, as indicated by the architecture.
  • An electronic device stores and transmits (internally and/or with other electronic devices over a network) code (which is composed of software instructions and which is sometimes referred to as computer program code or a computer program) and/or data using machine-readable media (also called computer-readable media), such as machine-readable storage media (e.g., magnetic disks, optical disks, solid state drives, read only memory (ROM), flash memory devices, phase change memory) and machine-readable transmission media (also called a carrier) (e.g., electrical, optical, radio, acoustical or other form of propagated signals - such as carrier waves, infrared signals).
  • machine-readable media also called computer-readable media
  • machine-readable storage media e.g., magnetic disks, optical disks, solid state drives, read only memory (ROM), flash memory devices, phase change memory
  • machine-readable transmission media also called a carrier
  • carrier e.g., electrical, optical, radio, acoustical or other form of propagated signals - such as carrier waves, inf
  • an electronic device e.g., a computer
  • hardware and software such as a set of one or more processors (e.g., wherein a processor is a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application specific integrated circuit, field programmable gate array, other electronic circuitry, a combination of one or more of the preceding) coupled to one or more machine-readable storage media to store code for execution on the set of processors and/or to store data.
  • processors e.g., wherein a processor is a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application specific integrated circuit, field programmable gate array, other electronic circuitry, a combination of one or more of the preceding
  • an electronic device may include non-volatile memory containing the code since the non-volatile memory can persist code/data even when the electronic device is turned off (when power is removed), and while the electronic device is turned on that part of the code that is to be executed by the processor(s) of that electronic device is typically copied from the slower non volatile memory into volatile memory (e.g., dynamic random access memory (DRAM), static random access memory (SRAM)) of that electronic device.
  • Typical electronic devices also include a set or one or more physical network interface(s) (NI(s)) to establish network connections (to transmit and/or receive code and/or data using propagating signals) with other electronic devices.
  • NI(s) physical network interface
  • a physical NI may comprise radio circuitry capable of receiving data from other electronic devices over a wireless connection and/or sending data out to other devices via a wireless connection.
  • This radio circuitry may include transmitter(s), receiver(s), and/or transceiver(s) suitable for radiofrequency communication.
  • the radio circuitry may convert digital data into a radio signal having the appropriate parameters (e.g., frequency, timing, channel, bandwidth, etc.). The radio signal may then be transmitted via antennas to the appropriate recipient(s).
  • the set of physical NI(s) may comprise network interface controller(s) (NICs), also known as a network interface card, network adapter, or local area network (LAN) adapter.
  • NICs network interface controller
  • the NIC(s) may facilitate in connecting the electronic device to other electronic devices allowing them to communicate via wire through plugging in a cable to a physical port connected to a NIC.
  • One or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware.
  • a network device is an electronic device that communicatively interconnects other electronic devices on the network (e.g., other network devices, end-user devices).
  • Some network devices are “multiple services network devices” that provide support for multiple networking functions (e.g., routing, bridging, switching, Layer 2 aggregation, session border control, Quality of Service, and/or subscriber management), and/or provide support for multiple application services (e.g., data, voice, and video).
  • Figure 31 A illustrates connectivity between network devices (NDs) within an exemplary network, as well as three exemplary implementations of the NDs, according to some embodiments of the invention.
  • Figure 31 A shows NDs 3100A-H, and their connectivity by way of lines between 3100A-3100B, 3100B-3100C, 31 OOC-3100D, 3100D-3100E, 3100E-31 OOF, 3100F-3100G, and 3100A-3100G, as well as between 3100H and each of 3100A, 3100C,
  • 3100D, and 3100G are physical devices, and the connectivity between these NDs can be wireless or wired (often referred to as a link).
  • An additional line extending from NDs 3100A, 3100E, and 3100F illustrates that these NDs act as ingress and egress points for the network (and thus, these NDs are sometimes referred to as edge NDs; while the other NDs may be called core NDs).
  • Two of the exemplary ND implementations in Figure 31A are: 1) a special-purpose network device 3102 that uses custom application-specific integrated-circuits (ASICs) and a special-purpose operating system (OS); and 2) a general purpose network device 3104 that uses common off-the-shelf (COTS) processors and a standard OS.
  • ASICs application-specific integrated-circuits
  • OS special-purpose operating system
  • COTS common off-the-shelf
  • the special-purpose network device 3102 includes networking hardware 3110 comprising a set of one or more processor(s) 3112, forwarding resource(s) 3114 (which typically include one or more ASICs and/or network processors), and physical network interfaces (NIs) 3116 (through which network connections are made, such as those shown by the connectivity between NDs 3100A-H), as well as non-transitory machine readable storage media 3118 having stored therein networking software 3120.
  • the networking software 3120 may be executed by the networking hardware 3110 to instantiate a set of one or more networking software instance(s) 3122.
  • Each of the networking software instance(s) 3122, and that part of the networking hardware 3110 that executes that network software instance form a separate virtual network element 3130A-R.
  • Each of the virtual network element(s) (VNEs) 3130A-R includes a control communication and configuration module 3132A-R (sometimes referred to as a local control module or control communication module) and forwarding table(s) 3134A-R, such that a given virtual network element (e.g., 3130A) includes the control communication and configuration module (e.g., 3132A), a set of one or more forwarding table(s) (e.g., 3134A), and that portion of the networking hardware 3110 that executes the virtual network element (e.g., 3130A).
  • a control communication and configuration module 3132A-R sometimes referred to as a local control module or control communication module
  • forwarding table(s) 3134A-R forwarding table(s) 3134A-R
  • the special-purpose network device 3102 is often physically and/or logically considered to include: 1) a ND control plane 3124 (sometimes referred to as a control plane) comprising the processor(s) 3112 that execute the control communication and configuration module(s) 3132A-R; and 2) a ND forwarding plane 3126 (sometimes referred to as a forwarding plane, a data plane, or a media plane) comprising the forwarding resource(s) 3114 that utilize the forwarding table(s) 3134A-R and the physical NIs 3116.
  • a ND control plane 3124 (sometimes referred to as a control plane) comprising the processor(s) 3112 that execute the control communication and configuration module(s) 3132A-R
  • a ND forwarding plane 3126 sometimes referred to as a forwarding plane, a data plane, or a media plane
  • the forwarding resource(s) 3114 that utilize the forwarding table(s) 3134A-R and the physical NIs 3116.
  • the ND control plane 3124 (the processor(s) 3112 executing the control communication and configuration module(s) 3132A-R) is typically responsible for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) and storing that routing information in the forwarding table(s) 3134A-R, and the ND forwarding plane 3126 is responsible for receiving that data on the physical NIs 3116 and forwarding that data out the appropriate ones of the physical NIs 3116 based on the forwarding table (s) 3134A-R.
  • data e.g., packets
  • the ND forwarding plane 3126 is responsible for receiving that data on the physical NIs 3116 and forwarding that data out the appropriate ones of the physical NIs 3116 based on the forwarding table (s) 3134A-R.
  • Figure 3 IB illustrates an exemplary way to implement the special-purpose network device 3102 according to some embodiments of the invention.
  • Figure 3 IB shows a special- purpose network device including cards 3138 (typically hot pluggable). While in some embodiments the cards 3138 are of two types (one or more that operate as the ND forwarding plane 3126 (sometimes called line cards), and one or more that operate to implement the ND control plane 3124 (sometimes called control cards)), alternative embodiments may combine functionality onto a single card and/or include additional card types (e.g., one additional type of card is called a service card, resource card, or multi -application card).
  • additional card types e.g., one additional type of card is called a service card, resource card, or multi -application card.
  • a service card can provide specialized processing (e.g., Layer 4 to Layer 7 services (e.g., firewall, Internet Protocol Security (IPsec), Secure Sockets Layer (SSL) / Transport Layer Security (TLS), Intrusion Detection System (IDS), peer-to-peer (P2P), Voice over IP (VoIP) Session Border Controller, Mobile Wireless Gateways (Gateway General Packet Radio Service (GPRS) Support Node (GGSN), Evolved Packet Core (EPC) Gateway)).
  • Layer 4 to Layer 7 services e.g., firewall, Internet Protocol Security (IPsec), Secure Sockets Layer (SSL) / Transport Layer Security (TLS), Intrusion Detection System (IDS), peer-to-peer (P2P), Voice over IP (VoIP) Session Border Controller, Mobile Wireless Gateways (Gateway General Packet Radio Service (GPRS) Support Node (GGSN), Evolved Packet Core (EPC) Gateway)
  • GPRS General Pack
  • the general purpose network device 3104 includes hardware 3140 comprising a set of one or more processor(s) 3142 (which are often COTS processors) and physical NIs 3146, as well as non-transitory machine readable storage media 3148 having stored therein software 3150.
  • the processor(s) 3142 execute the software 3150 to instantiate one or more sets of one or more applications 3164A-R. While one embodiment does not implement virtualization, alternative embodiments may use different forms of virtualization.
  • the virtualization layer 3154 represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple instances 3162A-R called software containers that may each be used to execute one (or more) of the sets of applications 3164A-R; where the multiple software containers (also called virtualization engines, virtual private servers, or jails) are user spaces (typically a virtual memory space) that are separate from each other and separate from the kernel space in which the operating system is ran; and where the set of applications running in a given user space, unless explicitly allowed, cannot access the memory of the other processes.
  • the multiple software containers also called virtualization engines, virtual private servers, or jails
  • user spaces typically a virtual memory space
  • the virtualization layer 3154 represents a hypervisor (sometimes referred to as a virtual machine monitor (VMM)) or a hypervisor executing on top of a host operating system, and each of the sets of applications 3164A-R is run on top of a guest operating system within an instance 3162A-R called a virtual machine (which may in some cases be considered a tightly isolated form of software container) that is run on top of the hypervisor - the guest operating system and application may not know they are running on a virtual machine as opposed to running on a “bare metal” host electronic device, or through para- virtualization the operating system and/or application may be aware of the presence of virtualization for optimization purposes.
  • a hypervisor sometimes referred to as a virtual machine monitor (VMM)
  • VMM virtual machine monitor
  • one, some or all of the applications are implemented as unikemel(s), which can be generated by compiling directly with an application only a limited set of libraries (e.g., from a library operating system (LibOS) including drivers/libraries of OS services) that provide the particular OS services needed by the application.
  • libraries e.g., from a library operating system (LibOS) including drivers/libraries of OS services
  • unikemel can be implemented to run directly on hardware 3140, directly on a hypervisor (in which case the unikemel is sometimes described as running within a LibOS virtual machine), or in a software container
  • embodiments can be implemented fully with unikemels running directly on a hypervisor represented by virtualization layer 3154, unikemels running within software containers represented by instances 3162A-R, or as a combination of unikemels and the above-described techniques (e.g., unikemels and virtual machines both ran directly on a hypervisor, unikemels and sets of applications that are run in different software containers).
  • the virtual network element(s) 3160A-R perform similar functionality to the virtual network element(s) 3130A-R - e.g., similar to the control communication and configuration module(s) 3132A and forwarding table(s) 3134A (this virtualization of the hardware 3140 is sometimes referred to as network function virtualization (NFV)).
  • NFV network function virtualization
  • CPE customer premise equipment
  • each instance 3162A-R corresponding to one VNE 3160A-R
  • alternative embodiments may implement this correspondence at a finer level granularity (e.g., line card virtual machines virtualize line cards, control card virtual machine virtualize control cards, etc.); it should be understood that the techniques described herein with reference to a correspondence of instances 3162A-R to VNEs also apply to embodiments where such a finer level of granularity and/or unikemels are used.
  • the virtualization layer 3154 includes a virtual switch that provides similar forwarding services as a physical Ethernet switch. Specifically, this virtual switch forwards traffic between instances 3162A-R and the physical NI(s) 3146, as well as optionally between the instances 3162A-R; in addition, this virtual switch may enforce network isolation between the VNEs 3160A-R that by policy are not permitted to communicate with each other (e.g., by honoring virtual local area networks (VLANs)).
  • VLANs virtual local area networks
  • the third exemplary ND implementation in Figure 31A is a hybrid network device 3106, which includes both custom ASICs/special-purpose OS and COTS processors/standard OS in a single ND or a single card within an ND.
  • a platform VM i.e., a VM that that implements the functionality of the special- purpose network device 3102
  • each of the VNEs receives data on the physical NIs (e.g., 3116,
  • a VNE implementing IP router functionality forwards IP packets on the basis of some of the IP header information in the IP packet; where IP header information includes source IP address, destination IP address, source port, destination port (where “source port” and “destination port” refer herein to protocol ports, as opposed to physical ports of a ND), transport protocol (e.g., user datagram protocol (UDP), Transmission Control Protocol (TCP), and differentiated services code point (DSCP) values.
  • UDP user datagram protocol
  • TCP Transmission Control Protocol
  • DSCP differentiated services code point
  • FIG 31C shows VNEs 3170A.1-3170A.P (and optionally VNEs 3170A.Q-3170A.R) implemented in ND 3100A and VNE 3170H.1 in ND 3100H.
  • VNEs 3170A.1-P are separate from each other in the sense that they can receive packets from outside ND 3100A and forward packets outside ofND 3100A;
  • VNE 3170A.1 is coupled with VNE 3170H.1, and thus they communicate packets between their respective NDs;
  • VNE 3170A.2-3170A.3 may optionally forward packets between themselves without forwarding them outside of the ND 3100A;
  • VNE 3170A.P may optionally be the first in a chain of VNEs that includes VNE 3170A.Q followed by VNE 3170A.R (this is sometimes referred to as dynamic service chaining, where each of the VNEs in the series of VNEs provides a different service - e.g., one or more layer
  • Figure 31C illustrates various exemplary relationships between the VNEs
  • alternative embodiments may support other relationships (e.g., more/fewer VNEs, more/fewer dynamic service chains, multiple different dynamic service chains with some common VNEs and some different VNEs).
  • the NDs of Figure 31 A may form part of the Internet or a private network; and other electronic devices (not shown; such as end user devices including workstations, laptops, netbooks, tablets, palm tops, mobile phones, smartphones, phablets, multimedia phones, Voice Over Internet Protocol (VOIP) phones, terminals, portable media players, GPS units, wearable devices, gaming systems, set-top boxes, Internet enabled household appliances) may be coupled to the network (directly or through other networks such as access networks) to communicate over the network (e.g., the Internet or virtual private networks (VPNs) overlaid on (e.g., tunneled through) the Internet) with each other (directly or through servers) and/or access content and/or services.
  • VOIP Voice Over Internet Protocol
  • Such content and/or services are typically provided by one or more servers (not shown) belonging to a service/content provider or one or more end user devices (not shown) participating in a peer-to-peer (P2P) service, and may include, for example, public webpages (e.g., free content, store fronts, search services), private webpages (e.g., usemame/password accessed webpages providing email services), and/or corporate networks over VPNs.
  • end user devices may be coupled (e.g., through customer premise equipment coupled to an access network (wired or wirelessly)) to edge NDs, which are coupled (e.g., through one or more core NDs) to other edge NDs, which are coupled to electronic devices acting as servers.
  • one or more of the electronic devices operating as the NDs in Figure 31 A may also host one or more such servers (e.g., in the case of the general purpose network device 3104, one or more of the software instances 3162A-Rmay operate as servers; the same would be true for the hybrid network device 3106; in the case of the special-purpose network device 3102, one or more such servers could also be run on a virtualization layer executed by the processor(s) 3112); in which case the servers are said to be co-located with the VNEs of that ND.
  • the servers are said to be co-located with the VNEs of that ND.
  • a virtual network is a logical abstraction of a physical network (such as that in Figure 31A) that provides network services (e.g., L2 and/or L3 services).
  • a virtual network can be implemented as an overlay network (sometimes referred to as a network virtualization overlay) that provides network services (e.g., layer 2 (L2, data link layer) and/or layer 3 (L3, network layer) services) over an underlay network (e.g., an L3 network, such as an Internet Protocol (IP) network that uses tunnels (e.g., generic routing encapsulation (GRE), layer 2 tunneling protocol (L2TP), IPSec) to create the overlay network).
  • IP Internet Protocol
  • a network virtualization edge sits at the edge of the underlay network and participates in implementing the network virtualization; the network-facing side of the NVE uses the underlay network to tunnel frames to and from other NVEs; the outward-facing side of the NVE sends and receives data to and from systems outside the network.
  • a virtual network instance is a specific instance of a virtual network on a NVE (e.g., a NE VNE on an ND, a part of a NE VNE on a ND where that NE VNE is divided into multiple VNEs through emulation); one or more VNIs can be instantiated on an NVE (e.g., as different VNEs on an ND).
  • a virtual access point is a logical connection point on the NVE for connecting external systems to a virtual network; a VAP can be physical or virtual ports identified through logical interface identifiers (e.g., a VLAN ID).
  • Examples of network services include: 1) an Ethernet LAN emulation service (an Ethernet-based multipoint service similar to an Internet Engineering Task Force (IETF) Multiprotocol Label Switching (MPLS) or Ethernet VPN (EVPN) service) in which external systems are interconnected across the network by a LAN environment over the underlay network (e.g., an NVE provides separate L2 VNIs (virtual switching instances) for different such virtual networks, and L3 (e.g., IP/MPLS) tunneling encapsulation across the underlay network); and 2) a virtualized IP forwarding service (similar to IETF IP VPN (e.g., Border Gateway Protocol (BGP)/MPLS IPVPN) from a service definition perspective) in which external systems are interconnected across the network by an L3 environment over the underlay network (e.g., an NVE provides separate L3 VNIs (forwarding and routing instances) for different such virtual networks, and L3 (e.g., IP/MPLS) tunneling encapsulation across the underlay network)
  • Network services may also include quality of service capabilities (e.g., traffic classification marking, traffic conditioning and scheduling), security capabilities (e.g., filters to protect customer premises from network - originated attacks, to avoid malformed route announcements), and management capabilities (e.g., full detection and processing).
  • quality of service capabilities e.g., traffic classification marking, traffic conditioning and scheduling
  • security capabilities e.g., filters to protect customer premises from network - originated attacks, to avoid malformed route announcements
  • management capabilities e.g., full detection and processing.
  • Fig. 3 ID illustrates a network with a single network element on each of the NDs of Figure 31 A, and within this straight forward approach contrasts a traditional distributed approach (commonly used by traditional routers) with a centralized approach for maintaining reachability and forwarding information (also called network control), according to some embodiments of the invention.
  • Figure 3 ID illustrates network elements (NEs)
  • Figure 3 ID illustrates that the distributed approach 3172 distributes responsibility for generating the reachability and forwarding information across the NEs 3170A-H; in other words, the process of neighbor discovery and topology discovery is distributed.
  • the control communication and configuration module(s) 3132A-R of the ND control plane 3124 typically include a reachability and forwarding information module to implement one or more routing protocols (e.g., an exterior gateway protocol such as Border Gateway Protocol (BGP), Interior Gateway Protocol(s) (IGP) (e.g., Open Shortest Path First (OSPF), Intermediate System to Intermediate System (IS-IS), Routing Information Protocol (RIP), Label Distribution Protocol (LDP), Resource Reservation Protocol (RSVP) (including RSVP-Traffic Engineering (TE): Extensions to RSVP for LSP Tunnels and Generalized Multi-Protocol Label Switching (GMPLS) Signaling RSVP-TE)) that communicate with other NEs to exchange routes, and then selects those routes based on one or more routing metrics.
  • Border Gateway Protocol BGP
  • IGP Interior Gateway Protocol
  • OSPF Open Shortest Path First
  • IS-IS Intermediate System to Intermediate System
  • RIP Routing Information Protocol
  • LDP Label Distribution Protocol
  • RSVP Resource Reservation Protocol
  • TE RSVP-Traffic Engineering
  • GPLS
  • the NEs 3170A-H e.g., the processor(s) 3112 executing the control communication and configuration module(s) 3132A-R
  • Routes and adjacencies are stored in one or more routing structures (e.g., Routing Information Base (RIB), Label Information Base (LIB), one or more adjacency structures) on the ND control plane 3124.
  • routing structures e.g., Routing Information Base (RIB), Label Information Base (LIB), one or more adjacency structures
  • the ND control plane 3124 programs the ND forwarding plane 3126 with information (e.g., adjacency and route information) based on the routing structure(s). For example, the ND control plane 3124 programs the adjacency and route information into one or more forwarding table(s) 3134A-R (e.g., Forwarding Information Base (FIB), Label Forwarding Information Base (LFIB), and one or more adjacency structures) on the ND forwarding plane 3126.
  • the ND can store one or more bridging tables that are used to forward data based on the layer 2 information in that data. While the above example uses the special-purpose network device 3102, the same distributed approach 3172 can be implemented on the general purpose network device 3104 and the hybrid network device 3106.
  • Figure 3 ID illustrates that a centralized approach 3174 (also known as software defined networking (SDN)) that decouples the system that makes decisions about where traffic is sent from the underlying systems that forwards traffic to the selected destination.
  • the illustrated centralized approach 3174 has the responsibility for the generation of reachability and forwarding information in a centralized control plane 3176 (sometimes referred to as a SDN control module, controller, network controller, OpenFlow controller, SDN controller, control plane node, network virtualization authority, or management control entity), and thus the process of neighbor discovery and topology discovery is centralized.
  • a centralized control plane 3176 sometimes referred to as a SDN control module, controller, network controller, OpenFlow controller, SDN controller, control plane node, network virtualization authority, or management control entity
  • the centralized control plane 3176 has a south bound interface 3182 with a data plane 3180 (sometime referred to the infrastructure layer, network forwarding plane, or forwarding plane (which should not be confused with aND forwarding plane)) that includes the NEs 3170A-H (sometimes referred to as switches, forwarding elements, data plane elements, or nodes).
  • the centralized control plane 3176 includes a network controller 3178, which includes a centralized reachability and forwarding information module 3179 that determines the reachability within the network and distributes the forwarding information to the NEs 3170A-H of the data plane 3180 over the south bound interface 3182 (which may use the OpenFlow protocol).
  • the network intelligence is centralized in the centralized control plane 3176 executing on electronic devices that are typically separate from the NDs.
  • each of the control communication and configuration module(s) 3132A-R of the ND control plane 3124 typically include a control agent that provides the VNE side of the south bound interface 3182.
  • the ND control plane 3124 (the processor(s) 3112 executing the control communication and configuration module(s) 3132A-R) performs its responsibility for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) through the control agent communicating with the centralized control plane 3176 to receive the forwarding information (and in some cases, the reachability information) from the centralized reachability and forwarding information module 3179 (it should be understood that in some embodiments of the invention, the control communication and configuration module(s) 3132A-R, in addition to communicating with the centralized control plane 3176, may also play some role in determining reachability and/or calculating forwarding information - albeit less so than in the case of a distributed approach; such embodiments are generally considered to fall under the centralized approach 3174, but may also be considered a hybrid approach).
  • data e.g., packets
  • the control agent communicating with the centralized control plane 3176 to receive the forwarding
  • the same centralized approach 3174 can be implemented with the general purpose network device 3104 (e.g., each of the VNE 3160A-R performs its responsibility for controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) by communicating with the centralized control plane 3176 to receive the forwarding information (and in some cases, the reachability information) from the centralized reachability and forwarding information module 3179; it should be understood that in some embodiments of the invention, the VNEs 3160A-R, in addition to communicating with the centralized control plane 3176, may also play some role in determining reachability and/or calculating forwarding information - albeit less so than in the case of a distributed approach) and the hybrid network device 3106.
  • the general purpose network device 3104 e.g., each of the VNE 3160A-R performs its responsibility for controlling how data (e.g., packets) is to be routed (e.g., the next hop for
  • NFV is able to support SDN by providing an infrastructure upon which the SDN software can be run
  • NFV and SDN both aim to make use of commodity server hardware and physical switches.
  • Figure 3 ID also shows that the centralized control plane 3176 has a north bound interface 3184 to an application layer 3186, in which resides application(s) 3188.
  • the centralized control plane 3176 has the ability to form virtual networks 3192 (sometimes referred to as a logical forwarding plane, network services, or overlay networks (with the NEs 3170A-H of the data plane 3180 being the underlay network)) for the application(s) 3188.
  • virtual networks 3192 sometimes referred to as a logical forwarding plane, network services, or overlay networks (with the NEs 3170A-H of the data plane 3180 being the underlay network)
  • the centralized control plane 3176 maintains a global view of all NDs and configured NEs VNEs, and it maps the virtual networks to the underlying NDs efficiently (including maintaining these mappings as the physical network changes either through hardware (ND, link, or ND component) failure, addition, or removal).
  • Figure 3 ID shows the distributed approach 3172 separate from the centralized approach 3174
  • the effort of network control may be distributed differently or the two combined in certain embodiments of the invention.
  • embodiments may generally use the centralized approach (SDN) 3174, but have certain functions delegated to the NEs (e.g., the distributed approach may be used to implement one or more of fault monitoring, performance monitoring, protection switching, and primitives for neighbor and/or topology discovery); or 2) embodiments of the invention may perform neighbor discovery and topology discovery via both the centralized control plane and the distributed protocols, and the results compared to raise exceptions where they do not agree.
  • Such embodiments are generally considered to fall under the centralized approach 3174, but may also be considered a hybrid approach.
  • Figure 3 ID illustrates the simple case where each of the NDs 3100A-H implements a single NE 3170A-H
  • the network control approaches described with reference to Figure 3 ID also work for networks where one or more of the NDs 3100A-H implement multiple VNEs (e.g., VNEs 3130A-R, VNEs 3160A-R, those in the hybrid network device 3106).
  • the network controller 3178 may also emulate the implementation of multiple VNEs in a single ND.
  • the network controller 3178 may present the implementation of a VNE/NE in a single ND as multiple VNEs in the virtual networks 3192 (all in the same one of the virtual network(s) 3192, each in different ones of the virtual network(s) 3192, or some combination).
  • the network controller 3178 may cause an ND to implement a single VNE (a NE) in the underlay network, and then logically divide up the resources of that NE within the centralized control plane 3176 to present different VNEs in the virtual network(s) 3192 (where these different VNEs in the overlay networks are sharing the resources of the single VNE/NE implementation on the ND in the underlay network).
  • Figures 3 IE and 3 IF respectively illustrate exemplary abstractions of NEs and VNEs that the network controller 3178 may present as part of different ones of the virtual networks 3192.
  • Figure 3 IE illustrates the simple case of where each of the NDs 3100A-H implements a single NE 3170A-H (see Figure 3 ID), but the centralized control plane 3176 has abstracted multiple of the NEs in different NDs (the NEs 3170A-C and G-H) into (to represent) a single NE 31701 in one of the virtual network(s) 3192 of Figure 3 ID, according to some embodiments of the invention.
  • Figure 3 IE shows that in this virtual network, the NE 31701 is coupled to NE 3170D and 3170F, which are both still coupled to NE 3170E.
  • Figure 31F illustrates a case where multiple VNEs (VNE 3170A.1 and VNE 3170H.1) are implemented on different NDs (ND 3100A and ND 3100H) and are coupled to each other, and where the centralized control plane 3176 has abstracted these multiple VNEs such that they appear as a single VNE 3170T within one of the virtual networks 3192 of Figure 3 ID, according to some embodiments of the invention.
  • the abstraction of a NE or VNE can span multiple NDs.
  • the electronic device(s) running the centralized control plane 3176, and thus the network controller 3178 including the centralized reachability and forwarding information module 3179 may be implemented a variety of ways (e.g., a special purpose device, a general-purpose (e.g., COTS) device, or hybrid device). These electronic device(s) would similarly include processor(s), a set or one or more physical NIs, and a non-transitory machine-readable storage medium having stored thereon the centralized control plane software.
  • Figure 32 illustrates, a general purpose control plane device 3204 including hardware 3240 comprising a set of one or more processor(s) 3242 (which are often COTS processors) and physical NIs 3246, as well as non-transitory machine readable storage media 3248 having stored therein centralized control plane (CCP) software 3250.
  • processor(s) 3242 which are often COTS processors
  • NIs 3246 physical NIs
  • CCP centralized control plane
  • the processor(s) 3242 typically execute software to instantiate a virtualization layer 3254 (e.g., in one embodiment the virtualization layer 3254 represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple instances 3262A-R called software containers (representing separate user spaces and also called virtualization engines, virtual private servers, or jails) that may each be used to execute a set of one or more applications; in another embodiment the virtualization layer 3254 represents a hypervisor (sometimes referred to as a virtual machine monitor (VMM)) or a hypervisor executing on top of a host operating system, and an application is run on top of a guest operating system within an instance 3262A-R called a virtual machine (which in some cases may be considered a tightly isolated form of software container) that is run by the hypervisor ; in another embodiment, an application is implemented as a unikemel, which can be generated by compiling directly with an application only a l
  • VMM virtual machine monitor
  • an instance of the CCP software 3250 (illustrated as CCP instance 3276A) is executed (e.g., within the instance 3262A) on the virtualization layer 3254.
  • the CCP instance 3276A is executed, as a unikemel or on top of a host operating system, on the “bare metal” general purpose control plane device 3204.
  • the instantiation of the CCP instance 3276A, as well as the virtualization layer 3254 and instances 3262A-R if implemented, are collectively referred to as software instance(s) 3252.
  • the CCP instance 3276A includes a network controller instance 3278.
  • the network controller instance 3278 includes a centralized reachability and forwarding information module instance 3279 (which is a middleware layer providing the context of the network controller 3178 to the operating system and communicating with the various NEs), and an CCP application layer 3280 (sometimes referred to as an application layer) over the middleware layer (providing the intelligence required for various network operations such as protocols, network situational awareness, and user - interfaces).
  • this CCP application layer 3280 within the centralized control plane 3176 works with virtual network view(s) (logical view(s) of the network) and the middleware layer provides the conversion from the virtual networks to the physical view.
  • the centralized control plane 3176 transmits relevant messages to the data plane 3180 based on CCP application layer 3280 calculations and middleware layer mapping for each flow.
  • a flow may be defined as a set of packets whose headers match a given pattern of bits; in this sense, traditional IP forwarding is also flow-based forwarding where the flows are defined by the destination IP address for example; however, in other implementations, the given pattern of bits used for a flow definition may include more fields (e.g., 10 or more) in the packet headers.
  • Different NDs/NEs/VNEs of the data plane 3180 may receive different messages, and thus different forwarding information.
  • the data plane 3180 processes these messages and programs the appropriate flow information and corresponding actions in the forwarding tables (sometime referred to as flow tables) of the appropriate NE/VNEs, and then the NEs/VNEs map incoming packets to flows represented in the forwarding tables and forward packets based on the matches in the forwarding tables.
  • Standards such as OpenFlow define the protocols used for the messages, as well as a model for processing the packets.
  • the model for processing packets includes header parsing, packet classification, and making forwarding decisions. Header parsing describes how to interpret a packet based upon a well-known set of protocols. Some protocol fields are used to build a match structure (or key) that will be used in packet classification (e.g., a first key field could be a source media access control (MAC) address, and a second key field could be a destination MAC address).
  • MAC media access control
  • Packet classification involves executing a lookup in memory to classify the packet by determining which entry (also referred to as a forwarding table entry or flow entry) in the forwarding tables best matches the packet based upon the match structure, or key, of the forwarding table entries. It is possible that many flows represented in the forwarding table entries can correspond/match to a packet; in this case the system is typically configured to determine one forwarding table entry from the many according to a defined scheme (e.g., selecting a first forwarding table entry that is matched).
  • Forwarding table entries include both a specific set of match criteria (a set of values or wildcards, or an indication of what portions of a packet should be compared to a particular value/values/wildcards, as defined by the matching capabilities - for specific fields in the packet header, or for some other packet content), and a set of one or more actions for the data plane to take on receiving a matching packet. For example, an action may be to push a header onto the packet, for the packet using a particular port, flood the packet, or simply drop the packet.
  • TCP transmission control protocol
  • an unknown packet for example, a “missed packet” or a “match- miss” as used in OpenFlow parlance
  • the packet (or a subset of the packet header and content) is typically forwarded to the centralized control plane 3176.
  • the centralized control plane 3176 will then program forwarding table entries into the data plane 3180 to accommodate packets belonging to the flow of the unknown packet. Once a specific forwarding table entry has been programmed into the data plane 3180 by the centralized control plane 3176, the next packet with matching credentials will match that forwarding table entry and take the set of actions associated with that matched entry.
  • a network interface may be physical or virtual; and in the context of IP, an interface address is an IP address assigned to a NI, be it a physical NI or virtual NI.
  • a virtual NI may be associated with a physical NI, with another virtual interface, or stand on its own (e.g., a loopback interface, a point-to-point protocol interface).
  • a NI physical or virtual
  • a loopback interface (and its loopback address) is a specific type of virtual NI (and IP address) of a NE/VNE (physical or virtual) often used for management purposes; where such an IP address is referred to as the nodal loopback address.
  • IP addresses of that ND are referred to as IP addresses of that ND; at a more granular level, the IP address(es) assigned to NI(s) assigned to a NE/VNE implemented on a ND can be referred to as IP addresses of that NE/VNE.
  • Next hop selection by the routing system for a given destination may resolve to one path (that is, a routing protocol may generate one next hop on a shortest path); but if the routing system determines there are multiple viable next hops (that is, the routing protocol generated forwarding solution offers more than one next hop on a shortest path - multiple equal cost next hops), some additional criteria is used - for instance, in a connectionless network, Equal Cost Multi Path (ECMP) (also known as Equal Cost Multi Pathing, multipath forwarding and IP multipath) may be used (e.g., typical implementations use as the criteria particular header fields to ensure that the packets of a particular packet flow are always forwarded on the same next hop to preserve packet flow ordering).
  • ECMP Equal Cost Multi Path
  • a packet flow is defined as a set of packets that share an ordering constraint.
  • the set of packets in a particular TCP transfer sequence need to arrive in order, else the TCP logic will interpret the out of order delivery as congestion and slow the TCP transfer rate down.
  • a Layer 3 (L3) Link Aggregation (LAG) link is a link directly connecting two NDs with multiple IP-addressed link paths (each link path is assigned a different IP address), and a load distribution decision across these different link paths is performed at the ND forwarding plane; in which case, a load distribution decision is made between the link paths.
  • L3 Link Aggregation (LAG) link is a link directly connecting two NDs with multiple IP-addressed link paths (each link path is assigned a different IP address), and a load distribution decision across these different link paths is performed at the ND forwarding plane; in which case, a load distribution decision is made between the link paths.
  • Some NDs include functionality for authentication, authorization, and accounting (AAA) protocols (e.g., RADIUS (Remote Authentication Dial-In User Service), Diameter, and/or TACACS+ (Terminal Access Controller Access Control System Plus).
  • AAA can be provided through a client/server model, where the AAA client is implemented on a ND and the AAA server can be implemented either locally on the ND or on a remote electronic device coupled with the ND.
  • Authentication is the process of identifying and verifying a subscriber. For instance, a subscriber might be identified by a combination of a username and a password or through a unique key.
  • Authorization determines what a subscriber can do after being authenticated, such as gaining access to certain electronic device information resources (e.g., through the use of access control policies). Accounting is recording user activity.
  • end user devices may be coupled (e.g., through an access network) through an edge ND (supporting AAA processing) coupled to core NDs coupled to electronic devices implementing servers of service/content providers.
  • AAA processing is performed to identify for a subscriber the subscriber record stored in the AAA server for that subscriber.
  • a subscriber record includes a set of attributes (e.g., subscriber name, password, authentication information, access control information, rate-limiting information, policing information) used during processing of that subscriber’s traffic.
  • Certain NDs internally represent end user devices (or sometimes customer premise equipment (CPE) such as a residential gateway (e.g., a router, modem)) using subscriber circuits.
  • CPE customer premise equipment
  • a subscriber circuit uniquely identifies within the ND a subscriber session and typically exists for the lifetime of the session.
  • a ND typically allocates a subscriber circuit when the subscriber connects to that ND, and correspondingly deallocates that subscriber circuit when that subscriber disconnects.
  • Each subscriber session represents a distinguishable flow of packets communicated between the ND and an end user device (or sometimes CPE such as a residential gateway or modem) using a protocol, such as the point-to-point protocol over another protocol (PPPoX) (e.g., where X is Ethernet or Asynchronous Transfer Mode (ATM)), Ethernet, 802.1Q Virtual LAN (VLAN), Internet Protocol, or ATM.
  • PPPoX point-to-point protocol over another protocol
  • a subscriber session can be initiated using a variety of mechanisms (e.g., manual provisioning, a dynamic host configuration protocol (DHCP), DHCP/client-less internet protocol service (CLIPS), or Media Access Control (MAC) address tracking).
  • DHCP dynamic host configuration protocol
  • CLIPS client-less internet protocol service
  • MAC Media Access Control
  • the point-to-point protocol is commonly used for digital subscriber line (DSL) services and requires installation of a PPP client that enables the subscriber to enter a username and a password, which in turn may be used to select a subscriber record.
  • DSL digital subscriber line
  • when DHCP is used, a username typically is not provided; but in such situations other information (e.g., information that includes the MAC address of the hardware in the end user device (or CPE)) is provided.
  • a virtual circuit, synonymous with virtual connection and virtual channel, is a connection oriented communication service that is delivered by means of packet mode communication.
  • Virtual circuit communication resembles circuit switching, since both are connection oriented, meaning that in both cases data is delivered in correct order, and signaling overhead is required during a connection establishment phase.
  • Virtual circuits may exist at different layers. For example, at layer 4, a connection oriented transport layer protocol such as Transmission Control Protocol (TCP) may rely on a connectionless packet switching network layer protocol such as IP, where different packets may be routed over different paths, and thus be delivered out of order.
  • TCP Transmission Control Protocol
  • IP Internet Protocol
  • the virtual circuit is identified by the source and destination network socket address pair, i.e. the sender and receiver IP address and port number.
  • TCP includes segment numbering and reordering on the receiver side to prevent out-of-order delivery.
  • Virtual circuits are also possible at Layer 3 (network layer) and Layer 2 (datalink layer); such virtual circuit protocols are based on connection oriented packet switching, meaning that data is always delivered along the same network path, i.e. through the same NEs/VNEs.
  • the packets are not routed individually and complete addressing information is not provided in the header of each data packet; only a small virtual channel identifier (VCI) is required in each packet; and routing information is transferred to the NEs/VNEs during the connection establishment phase; switching only involves looking up the virtual channel identifier in a table rather than analyzing a complete address.
  • VCI virtual channel identifier
  • examples of virtual circuit protocols at these layers include Asynchronous Transfer Mode (ATM), where a circuit is identified by a virtual path identifier (VPI) and a virtual channel identifier (VCI); General Packet Radio Service (GPRS); and Multiprotocol Label Switching (MPLS).
  • Certain NDs use a hierarchy of circuits.
  • the leaf nodes of the hierarchy of circuits are subscriber circuits.
  • the subscriber circuits have parent circuits in the hierarchy that typically represent aggregations of multiple subscriber circuits, and thus the network segments and elements used to provide access network connectivity of those end user devices to the ND.
  • These parent circuits may represent physical or logical aggregations of subscriber circuits (e.g., a virtual local area network (VLAN), a permanent virtual circuit (PVC) (e.g., for Asynchronous Transfer Mode (ATM)), a circuit-group, a channel, a pseudo-wire, a physical NI of the ND, and a link aggregation group).
  • VLAN virtual local area network
  • PVC permanent virtual circuit
  • ATM Asynchronous Transfer Mode
  • a circuit-group is a virtual construct that allows various sets of circuits to be grouped together for configuration purposes, for example aggregate rate control.
  • a pseudo-wire is an emulation of a layer 2 point-to-point connection-oriented service.
  • a link aggregation group is a virtual construct that merges multiple physical NIs for purposes of bandwidth aggregation and redundancy.
  • the parent circuits physically or logically encapsulate the subscriber circuits.
  • Each VNE (e.g., a virtual router or a virtual bridge, which may act as a virtual switch instance in a Virtual Private LAN Service (VPLS)) is typically independently administrable.
  • each of the virtual routers may share system resources but is separate from the other virtual routers regarding its management domain, AAA (authentication, authorization, and accounting) name space, IP address, and routing database(s).
  • AAA authentication, authorization, and accounting
  • Multiple VNEs may be employed in an edge ND to provide direct network access and/or different classes of services for subscribers of service and/or content providers.
  • interfaces that are independent of physical NIs may be configured as part of the VNEs to provide higher-layer protocol and service information (e.g., Layer 3 addressing).
  • the subscriber records in the AAA server identify, in addition to the other subscriber configuration requirements, to which context (e.g., which of the VNEs/NEs) the corresponding subscribers should be bound within the ND.
  • a binding forms an association between a physical entity (e.g., physical NI, channel) or a logical entity (e.g., circuit such as a subscriber circuit or logical circuit (a set of one or more subscriber circuits)) and a context’s interface over which network protocols (e.g., routing protocols, bridging protocols) are configured for that context. Subscriber data flows on the physical entity when some higher-layer protocol interface is configured and associated with that physical entity.
  • Some NDs provide support for implementing VPNs (Virtual Private Networks) (e.g., Layer 2 VPNs and/or Layer 3 VPNs).
  • VPNs Virtual Private Networks
  • the NDs where a provider’s network and a customer’s network are coupled are respectively referred to as PEs (Provider Edge) and CEs (Customer Edge).
  • PEs Provider Edge
  • CEs Customer Edge
  • Layer 2 VPN forwarding typically is performed on the CE(s) on either end of the VPN and traffic is sent across the network (e.g., through one or more PEs coupled by other NDs).
  • Layer 2 circuits are configured between the CEs and PEs (e.g., an Ethernet port, an ATM permanent virtual circuit (PVC), a Frame Relay PVC).
  • for Layer 3 VPNs, routing typically is performed by the PEs.
  • an edge ND that supports multiple VNEs may be deployed as a PE; and a VNE may be configured with a VPN protocol, in which case that VNE is referred to as a VPN VNE.
  • VPLS Virtual Private LAN Service
  • end user devices access content/services provided through the VPLS network by coupling to CEs, which are coupled through PEs coupled by other NDs.
  • VPLS networks can be used for implementing triple play network applications (e.g., data applications (e.g., high speed Internet access), video applications (e.g., television service such as IPTV (Internet Protocol Television), VoD (Video-on-Demand) service), and voice applications (e.g., VoIP (Voice over Internet Protocol) service)), VPN services, etc.
  • VPLS is a type of layer 2 VPN that can be used for multi-point connectivity.
  • VPLS networks also allow end user devices that are coupled with CEs at separate geographical locations to communicate with each other across a Wide Area Network (WAN) as if they were directly attached to each other in a Local Area Network (LAN) (referred to as an emulated LAN).
  • WAN Wide Area Network
  • LAN Local Area Network
  • each CE typically attaches, possibly through an access network (wired and/or wireless), to a bridge module of a PE via an attachment circuit (e.g., a virtual link or connection between the CE and the PE).
  • the bridge module of the PE attaches to an emulated LAN through an emulated LAN interface.
  • Each bridge module acts as a “Virtual Switch Instance” (VSI) by maintaining a forwarding table that maps MAC addresses to pseudowires and attachment circuits.
  • PEs forward frames (received from CEs) to destinations (e.g., other CEs, other PEs) based on the MAC destination address field included in those frames.

Abstract

A method is described for offloading packet processing of an application. The method includes receiving, by a multi-application framework, a set of application specifications describing a first application; selecting, by the framework, a set of devices for deployment of the first application based on the set of application specifications; generating, by the framework, application-specific templates based on the application specifications, a set of multi-application reference templates, and architectures of the set of devices; receiving, by the framework, a first application-specific implementation that was generated based on the application-specific templates, wherein the first application-specific implementation includes logic and data objects for deployment on the devices; merging, by the framework, the first application-specific implementation, which corresponds to the first application, with a second application-specific implementation, which corresponds to a second application, to produce a first merged multi-application implementation; and deploying, by the framework, the first merged implementation to the devices.

Description

SPECIFICATION
MULTI-APPLICATION PACKET PROCESSING DEVELOPMENT FRAMEWORK TECHNICAL FIELD
[0001] Embodiments of the invention relate to the field of packet processing; and more specifically, to a multi-application packet processing development framework.
BACKGROUND ART
[0002] As Moore’s law seems to be ending, software designers and developers are learning to do more with current hardware as they can no longer assume problems will be smoothed out when a new generation of faster chips arrives. As software developers rethink software architectures and programming paradigms, they are trying to build smarter systems rather than relying simply on brute force. This implies that software developers are starting to offload specific processing tasks to specialized accelerators, such as Graphics Processing Units (GPUs), Tensor Processing Units (TPUs), Field-Programmable Gate Arrays (FPGAs), and Network Interface Cards (NICs), to improve performance and computation efficiency.
[0003] Modern datacenters and cloud systems typically offer several types of accelerators to software developers to improve processing performance. Typical accelerators include Application-Specific Integrated Circuits (ASICs) with specialized Software Development Kits (SDKs), such as networking ASICs. For example, Ethernet switch ASICs have recently evolved from a fixed network data plane functional model to a more P4-programmable one.
[0004] P4 is a technology that is designed to program the data plane functionality of network devices and to partially define interfaces between control and data planes. P4-programmable devices are available in software virtual appliances and in hardware, such as with NICs and Ethernet switches. These devices are typically programmed to specifically fulfill the requirements of the cloud infrastructure instead of the applications themselves. As they are starting to be offered commercially, P4-programmable devices should be increasingly available on cloud networks.
[0005] When applications are deployed on cloud systems, they can specify a number of performance-related requirements for the cloud system to properly allocate the right resources to them. In typical cloud systems, resources are associated with specific performance requirements and allocated to applications from a generic performance acceleration perspective rather than from an application-specific packet processing acceleration perspective. [0006] Several technologies are available to improve the performance of applications, depending on the intended purpose of those applications. For example, for networking applications providing packet switching and routing capabilities, technologies such as a Data Plane Development Kit (DPDK) and protocol offload on a set of NICs could be used to improve the processing performance of such applications. While those technologies are typically meant to accelerate networking performance, they only provide generic networking acceleration capabilities rather than application-specific processing logic acceleration.
SUMMARY
[0007] A method is described for offloading packet processing of an application. The method includes receiving, by a multi-application framework, a set of application specifications describing a first application; selecting, by the multi-application framework, a set of devices for deployment of the first application based on the set of application specifications; generating, by the multi-application framework, application-specific templates based on the set of application specifications, a set of multi-application reference templates, and architectures of the set of devices; receiving, by the multi-application framework, a first application-specific implementation, wherein the first application-specific implementation includes logic and data objects for deployment on the set of devices; merging, by the multi-application framework, the first application-specific implementation, which corresponds to the first application, with a second application-specific implementation, which corresponds to a second application, to produce a first merged multi-application implementation; and deploying, by the multi-application framework, the first merged multi-application implementation to the set of devices. [0008] A non-transitory machine-readable storage medium is also described that provides instructions that, if executed by a processor, will cause said processor to perform operations. In one embodiment, the operations include receiving a set of application specifications describing a first application; selecting a set of devices for deployment of the first application based on the set of application specifications; generating application-specific templates based on the set of application specifications, a set of multi-application reference templates, and architectures of the set of devices; receiving a first application-specific implementation, wherein the first application-specific implementation includes logic and data objects for deployment on the set of devices; merging the first application-specific implementation, which corresponds to the first application, with a second application-specific implementation, which corresponds to a second application, to produce a first merged multi-application implementation; and deploying the first merged multi-application implementation to the set of devices. [0009] As described herein, systems are provided for application developers to develop their own application-specific data plane packet processing functions/logic and objects (sometimes referred to as application-specific data plane packet processing implementations) and have them deployed on P4 resources of a cloud infrastructure. Further, multiple application-specific data plane packet processing implementations can be deployed simultaneously on the same P4 resource.
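By way of a non-limiting illustration, the method steps above could be modeled in ordinary software as in the following sketch (Python is used for readability; the names AppSpec and MultiAppFramework, and the device and template structures, are hypothetical and do not correspond to any actual framework API):

    from dataclasses import dataclass

    @dataclass
    class AppSpec:
        name: str
        signature: tuple                      # unique application-specific signature
        requirements: frozenset = frozenset() # capability/performance requirements

    class MultiAppFramework:
        def __init__(self, devices, reference_templates):
            self.devices = devices                          # available P4 targets
            self.reference_templates = reference_templates  # per P4 architecture

        def select_devices(self, spec):
            # Select targets whose capabilities satisfy the application specifications.
            return [d for d in self.devices if spec.requirements <= d["capabilities"]]

        def generate_templates(self, spec, targets):
            # Specialize the multi-application reference templates for the
            # application and for the architecture of each selected target.
            return {d["name"]: (spec.name, self.reference_templates[d["arch"]])
                    for d in targets}

        def merge(self, first_impl, second_impl):
            # Combine two application-specific implementations into one merged
            # multi-application implementation; signatures keep them isolated.
            return {"pipelines": [first_impl, second_impl]}

        def deploy(self, merged_impl, targets):
            for d in targets:
                d["loaded"] = merged_impl     # stand-in for programming the target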
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] The invention may best be understood by referring to the following description and accompanying drawings that are used to illustrate embodiments of the invention. In the drawings:
[0011] Figure 1 shows a P4 Portable Switch Architecture (PSA), according to some embodiments.
[0012] Figure 2 shows PSA packet paths, according to some embodiments.
[0013] Figure 3 shows PSA-based multi-application data plane framework pipeline paths, according to some embodiments.
[0014] Figure 4 shows a method for determining an application signature associated with a packet and selection of a packet processing logic pipeline, according to some embodiments. [0015] Figure 5 shows a multi-application data plane framework pipeline, according to some embodiments.
[0016] Figure 6 shows a method for determining an application signature associated with a packet and selection of a packet processing logic pipeline, according to some embodiments. [0017] Figure 7 shows a multi-application data plane framework pipeline, according to some embodiments.
[0018] Figure 8 shows a multi-application data plane framework pipeline, according to some embodiments.
[0019] Figure 9 shows processing of a packet in a multi-application data plane framework pipeline, according to some embodiments.
[0020] Figure 10 shows processing of a packet in a multi-application data plane framework pipeline, according to some embodiments.
[0021] Figure 11 shows a multi-application data plane framework pipeline, according to some embodiments.
[0022] Figure 12 shows processing of a packet in a multi-application data plane framework pipeline using separate application implementations, according to some embodiments. [0023] Figure 13 shows processing of a packet in a multi-application data plane framework pipeline using separate application implementations, according to some embodiments.
[0024] Figure 14 shows deployment dynamicity applied to the control plane, according to some embodiments.
[0025] Figure 15 shows processing of a packet in multi-application data plane framework pipeline paths using separate application implementations with management, provisioning, and message components, according to some embodiments.
[0026] Figure 16 shows processing of a packet in multi-application data plane framework pipeline paths using separate application implementations with management, provisioning, and message components, according to some embodiments.
[0027] Figure 17 shows an arrangement of application-specific data plane and control plane templates, according to some embodiments.
[0028] Figure 18 shows processing of a packet in multi-application data plane framework pipeline paths using separate application implementations and separate application-specific templates, according to some embodiments.
[0029] Figure 19 shows an arrangement of multi-application frameworks for data plane and control plane implementations, according to some embodiments.
[0030] Figure 20 shows a sequence diagram of a possible deployment process for an application-specific implementation, according to some embodiments.
[0031] Figure 21 shows a sequence diagram of a possible deployment process when updating an application-specific implementation, according to some embodiments.
[0032] Figure 22 shows a sequence diagram where an application-specific implementation is removed from the multi-application framework deployed on a P4 device, according to some embodiments.
[0033] Figure 23 shows an arrangement of a datacenter/cloud system, according to some embodiments.
[0034] Figure 24 shows an arrangement of a datacenter/cloud system with a control plane proxy, according to some embodiments.
[0035] Figure 25 shows an arrangement of a datacenter/cloud system for use with application migration, according to some embodiments.
[0036] Figures 26, 27, 28, 29, and 30 show a set of flow diagrams for a method for offloading packet processing of an application, according to some embodiments.
[0037] Figure 31A illustrates connectivity between network devices (NDs) within an exemplary network, as well as three exemplary implementations of the NDs, according to some embodiments of the invention. [0038] Figure 31B illustrates an exemplary way to implement a special-purpose network device according to some embodiments of the invention.
[0039] Figure 31C illustrates various exemplary ways in which virtual network elements (VNEs) may be coupled according to some embodiments of the invention.
[0040] Figure 31D illustrates a network with a single network element (NE) on each of the NDs, and within this straightforward approach contrasts a traditional distributed approach (commonly used by traditional routers) with a centralized approach for maintaining reachability and forwarding information (also called network control), according to some embodiments of the invention.
[0041] Figure 31E illustrates the simple case of where each of the NDs implements a single NE, but a centralized control plane has abstracted multiple of the NEs in different NDs into (to represent) a single NE in one of the virtual network(s), according to some embodiments of the invention.
[0042] Figure 31F illustrates a case where multiple VNEs are implemented on different NDs and are coupled to each other, and where a centralized control plane has abstracted these multiple VNEs such that they appear as a single VNE within one of the virtual networks, according to some embodiments of the invention.
[0043] Figure 32 illustrates a general-purpose control plane device with centralized control plane (CCP), according to some embodiments of the invention.
DETAILED DESCRIPTION
[0044] The following description describes methods and apparatus for a multi-application packet processing development framework. In the following description, numerous specific details such as logic implementations, opcodes, means to specify operands, resource partitioning/sharing/duplication implementations, types and interrelationships of system components, and logic partitioning/integration choices are set forth in order to provide a more thorough understanding of the present invention. It will be appreciated, however, by one skilled in the art that the invention may be practiced without such specific details. In other instances, control structures, gate level circuits and full software instruction sequences have not been shown in detail in order not to obscure the invention. Those of ordinary skill in the art, with the included descriptions, will be able to implement appropriate functionality without undue experimentation.
[0045] References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
[0046] Bracketed text and blocks with dashed borders (e.g., large dashes, small dashes, dot-dash, and dots) may be used herein to illustrate optional operations that add additional features to embodiments of the invention. However, such notation should not be taken to mean that these are the only options or optional operations, and/or that blocks with solid borders are not optional in certain embodiments of the invention.
[0047] In the following description and claims, the terms “coupled” and “connected,” along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. “Coupled” is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other. “Connected” is used to indicate the establishment of communication between two or more elements that are coupled with each other.
[0048] As described herein, systems are provided for application developers to develop their own application-specific data plane packet processing functions/logic and objects (sometimes referred to as application-specific data plane packet processing implementations) and have them deployed on P4 resources of a cloud infrastructure. Further, multiple application-specific data plane packet processing implementations can be deployed simultaneously on the same P4 resource.
[0049] In particular, a multi-application P4 packet processing development framework (sometimes referred to as a multi-application framework) provides a development framework/environment to allow multiple application-specific data plane packet processing implementations to be developed and deployed simultaneously on the same P4 resource with a logical isolation between all applications/implementations. In this configuration, the multi-application framework requires applications/developers to specify their own application-specific packet processing specifications, which include an application-specific signature that is used as a unique identifier for identifying all traffic and management flows destined to the intended application-specific data plane packet processing implementation. Based on the provided application-specific specifications, the multi-application framework provides each application/developer with application-specific data plane implementation templates (sometimes referred to as application-specific data plane packet processing P4 implementation templates) to implement their own specifically intended data plane logic and data model/objects, as well as with application-specific control plane implementation templates for accessing its associated resources and for message handling.
[0050] The application-specific data plane implementation templates provided by the multi-application framework can be based on a P4 implementation of the multi-application framework allowing multiple applications/implementations to co-exist simultaneously on the same P4 target, as each application-specific packet processing logical pipeline is selected based on the provided unique application-specific signature. Different application-specific data plane packet processing implementations can be provided, depending on the selected P4 target and P4 architecture.
[0051] Each application/developer can develop its own application-specific data plane packet processing functions/logic that are included in an application-specific data plane packet processing implementation using the provided templates and without requiring any knowledge of other application-specific data plane packet processing functions/logic also potentially developed in parallel for the same P4 target.
[0052] As each application/developer releases its own application-specific data plane packet processing P4 implementation, the multi-application framework is responsible for merging all application-specific data plane packet processing implementations from all applications/developers to create the final P4 implementation to be deployed on the selected P4 target. It is the responsibility of the multi-application framework to validate that each application-specific data plane packet processing implementation respects the constraints imposed by the multi-application framework.
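As a rough illustration of this merge-and-validate responsibility, the following Python sketch enforces one example framework constraint, namely the uniqueness of application-specific signatures, before producing the final implementation; the dictionary structure of an implementation is invented for illustration:

    def merge_implementations(implementations):
        merged, seen_signatures = [], set()
        for impl in implementations:
            # Framework constraint: application-specific signatures must be
            # unique so that traffic can be unambiguously steered to exactly
            # one application's packet processing pipeline.
            if impl["signature"] in seen_signatures:
                raise ValueError("conflicting signature for " + impl["app"])
            seen_signatures.add(impl["signature"])
            merged.append(impl)
        # The merged result represents the final P4 implementation for the
        # selected P4 target.
        return {"pipelines": merged}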
[0053] Depending on the P4 target, a deployment model could have the development framework deployed first on the P4 target, as each additional application-specific data plane packet processing P4 implementation could be added individually as it becomes available. That model could be considered as offering a modular deployment model for specific P4 targets. Another deployment model could merge all application-specific data plane packet processing P4 implementations with the development framework P4 implementation and deploy the final combined implementation as one unique image for the P4 target.
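The two deployment models could be contrasted as in the sketch below (Python; the P4Target object and its methods are hypothetical stand-ins for a target programming interface, and merge_implementations is taken from the previous sketch):

    class P4Target:
        def __init__(self):
            self.image, self.pipelines = None, []
        def load(self, image):
            self.image = image                # flash one complete image
        def add_pipeline(self, impl):
            self.pipelines.append(impl)       # modular addition, if supported

    def deploy_modular(target, framework_image, implementations):
        target.load(framework_image)          # the framework is deployed first
        for impl in implementations:
            target.add_pipeline(impl)         # added individually as available

    def deploy_monolithic(target, framework_image, implementations):
        merged = merge_implementations(implementations)
        merged["framework"] = framework_image
        target.load(merged)                   # one unique combined image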
[0054] As each application-specific data plane packet processing P4 implementation could provide its own statically provisioned data model, the multi-application framework could also allow an application to dynamically provision its own application-specific data plane packet processing P4 implementation. The mechanism involves a control plane proxy service provided by the development framework that would be used to relay provisioning operations between an application and its data plane implementation, also assuming logical isolation between all applications. [0055] As each application-specific data plane packet processing P4 implementation could provide its own control plane interface for sending and receiving packets or messages, the multi-application framework could allow an application to exchange packets and messages dynamically with its own application-specific data plane packet processing P4 implementation. The mechanism involves a control plane proxy service provided by the development framework that would be used to relay packets and messages between an application and its data plane implementation, also assuming logical isolation between all applications.
[0056] Based on the above and as will be described in greater detail below, applications deployed in a cloud infrastructure can be developed to leverage the flexible and dynamic packet processing capabilities of the P4 resources available in the cloud infrastructure. Applications can potentially improve their processing performance by offloading some of their application-specific packet processing tasks to P4-programmable devices available in the associated cloud infrastructure. For example, assuming an application’s logic for packet parsing and header validation could be considered as recurrent processing overhead that remains essential but might represent a relatively heavy burden for the application, such tasks could potentially be performed more efficiently by other system components on behalf of the application itself.
[0057] Further, the multi-application framework enables applications to share P4-programmable resources and for multiple applications to deploy their own application-specific data plane packet processing logic and data model on the same P4 resource. In particular, the multi-application framework provides a development framework to allow multiple application-specific data plane packet processing implementations to be developed and deployed simultaneously on the same P4 resource, assuming a logical isolation between all applications. [0058] Additionally, the systems described herein leverage emergent P4 technologies, which standardize a programming language and development model for programmable data planes that can be supported by multiple different types of P4-capable software appliances and hardware devices. Moreover, the dynamicity of the multi-application framework service allows application-specific implementations to migrate between different nodes as their associated applications would also be requested to move.
[0059] To develop applications using the P4 technology, there must be at least a P4 target (e.g., a P4 device or appliance), a P4 architecture, and a development environment for that specific P4 target (e.g., a P4 compiler). Multiple different types of hardware devices and software appliances can support P4 technology, potentially offering different acceleration capabilities accessible through P4 applications/programs. As P4 architectures are used to abstract the implementation details of P4 resources, P4 applications can be written independently of any specific P4 appliances or devices and still be deployed on any P4 resources compatible with the chosen P4 architecture. P4 technology can be considered as a standard programming model for programmable data planes and a P4 architecture can be thought of as a contract between a P4 application and a P4 target.
[0060] As P4 applications are meant to be portable across all P4-capable devices supporting the same P4 architecture (provided there would be enough resources available), the P4 Portable Switch Architecture (PSA) was specified by the P4 consortium (p4.org) to represent a standard P4 architecture for P4-capable targets. As shown in Figure 1, the PSA specifies a packet processing pipeline (i.e., the PSA packet pipeline 100) for P4-capable switches, while also suggesting standard packet paths, as shown in Figure 2 (e.g., the PSA packet paths 200). In those figures, the boxes/blocks 102 (i.e., the ingress parser 102A, the ingress 102B, the ingress deparser 102C, the egress parser 102D, the egress 102E, and the egress deparser 102F) of the PSA packet pipeline 100 and PSA packet paths 200 are P4-programmable functional blocks, while the boxes/blocks 104 (i.e., the packet buffer and replication 104A and the buffer queuing engine 104B) are target-specific non-programmable functional blocks. As shown in Figure 2, an incoming packet can be a normal packet from port (NFP), a packet from a CPU port (NFCPU), a normal unicast packet from ingress to egress (NU), a normal multicast-replicated packet from ingress to egress (NM), a clone from ingress to egress (CI2E), or a clone from egress to egress (CE2E). In addition, packet processing can be recirculated and/or resubmitted, as shown in Figure 2.
[0061] P4 applications written for the PSA specify their own data plane data objects and logic by programming the six suggested functional blocks 102, within the confines of the supported PSA packet paths 200. Metadata information is also minimally standardized to carry packet-related information through the PSA packet pipeline 100, such as information parsed from packets themselves, results from table lookups, or instructions relative to packet modifications or packet forwarding. In the PSA packet pipeline 100, packets and metadata information are flowing from the ingress parser 102A to the buffer queueing engine 104B (although resubmission and/or recirculation is possible, as shown in Figure 2).
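For reference, the block ordering and packet paths discussed above could be summarized as in the following fragment (Python; the block names follow Figures 1 and 2, while the model itself is purely illustrative):

    # Programmable blocks (102A-102F) and target-specific blocks (104A, 104B),
    # in pipeline order.
    PSA_PIPELINE = [
        "ingress_parser",                   # 102A (programmable)
        "ingress",                          # 102B (programmable)
        "ingress_deparser",                 # 102C (programmable)
        "packet_buffer_and_replication",    # 104A (target-specific)
        "egress_parser",                    # 102D (programmable)
        "egress",                           # 102E (programmable)
        "egress_deparser",                  # 102F (programmable)
        "buffer_queueing_engine",           # 104B (target-specific)
    ]

    # Standard PSA packet paths of Figure 2, plus resubmission/recirculation.
    PACKET_PATHS = {"NFP", "NFCPU", "NU", "NM", "CI2E", "CE2E",
                    "RESUBMIT", "RECIRCULATE"}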
[0062] While it could be considered preferable for all P4 targets to always support the PSA, a P4 target might prefer to support its own specific P4 architecture. For example, a target-specific P4 architecture could be required to best leverage a specialized packet processing pipeline and/or functional block (e.g., specialized functional blocks for crypto or machine-learning (ML) inference purposes). For such devices, the specialized functional blocks could also be made available as optional extensions to the PSA architecture. As used herein, while any P4 architecture could be used, the standardized P4 PSA has been referenced for simplicity and clarity.
[0063] As described herein, a multi-application P4 packet processing development framework (sometimes referred to as multi-application framework) provides additional capabilities to P4 architectures, allowing multiple applications to be deployed dynamically and simultaneously on the same P4 device, while allowing each application’s packet processing logic and data objects to remain isolated from the ones of other applications.
[0064] As shown in the PSA-based multi-application data plane framework pipeline paths 300 of Figure 3, deployment dynamicity and data plane (DP) isolation is provided between applications via the framework 302 (sometimes referred to as the framework service 302 or the multi-application framework 302) by leveraging (1) an application signature selection mechanism and (2) an application signature passing mechanism. In particular, in relation to the application signature selection mechanism, each programmable block 102 can include or otherwise utilize an application signature selection component 304 that is used to differentiate between the different deployed applications and select one application’s packet processing pipeline/implementation corresponding to the processed/received packet 308. In relation to the application signature passing mechanism, this corresponds to framework-related information, such as an application-specific signature or identity, that is passed through metadata information messages between the different programmable blocks 102 of the packet processing pipeline (e.g., framework (FW) metadata 306).
[0065] Although there can be several different types of application signature selection components 304, two types of application signature selection components 304 are described in detail herein: (1) a packet-based application signature selection component 304A and (2) a metadata-based application signature selection component 304B.
[0066] In terms of the packet-based application signature selection component 304A, this type of component is supported by programmable blocks 102 that can directly parse packets (e.g., the ingress parser 102A and the egress parser 102D). The information used to uniquely identify an application can be based on the packet headers of a packet 308, and it could also be complemented with metadata information passed along with each packet 308 from a previous block 102/104, if available (e.g., the FW metadata 306). That information is then used to select the corresponding application’s packet processing pipeline logic and data objects to process the packet 308.
[0067] In terms of the metadata-based application signature selection component 304B, this type of component is supported on programmable blocks 102 that cannot directly parse packets (e.g., the ingress 102B, the ingress deparser 102C, the egress 102E, and the egress deparser 102F). Instead, the information used to uniquely identify an application can be based on the metadata information passed along with each packet (e.g., the FW metadata 306). That information is then used to select the corresponding application’s packet processing pipeline logic and data objects to process the packet 308.
[0068] While Figure 3 shows packet-based application signature selection components 304A and metadata-based application signature selection components 304B outside the functional blocks 102 and 104 and used by the framework 302, the signature selection components 304 can reside within the programmable blocks 102 (e.g., within the respective frameworks 302).
[0069] As each programmable block 102 includes packet processing logic provided by the multi-application framework 302 for application signature selection purposes, it is the framework’s 302 responsibility to guarantee the co-existence of multiple applications within the system, as well as the required isolation between the different applications’ data plane implementations.
[0070] In P4 devices, P4 architectures typically only allow a few P4-programmable blocks 102 to directly parse packet headers. As mentioned above, in those blocks 102, the multi-application framework 302 implements an application signature selection component 304, which can leverage a combination of system data, metadata (e.g., the FW metadata 306), packet header information, and table lookups. In such blocks 102, each packet 308 could be associated with system data directly extracted from the system itself (e.g., the incoming port of the packet), metadata messages carried along with each packet through the pipeline (e.g., an application identifier), or packet header information extracted directly from the packet headers themselves (e.g., SA, DA, L3 proto, SIP, DIP, etc.), which could be used to uniquely identify different traffic flows and applications.
[0071] Figure 4 shows a method 400 for determining an application signature associated with a packet 308 and selection of a packet processing logic pipeline, according to one example embodiment. The method 400 may be performed by or using a packet-based application signature selection component 304A. As shown in Figure 4, the method 400 may commence at operation 402 with the retrieval of system information for a set of incoming packets 308 (e.g., the incoming port of the packets 308). At operation 404, metadata (e.g., FW metadata 306) is parsed (if available) for application-specific signature parameters (e.g., an application identifier). At operation 406, table lookups may be performed (if needed and supported) for application-specific signature parameters. At operation 408, packets 308 are parsed for application-specific signature parameters (e.g., source address (SA), destination address (DA), L3 proto, Source Internet Protocol (SIP) address, Destination Internet Protocol (DIP) address, etc.). This retrieved, extracted, parsed, and otherwise determined information can be used to uniquely identify different traffic flows and applications associated with packets 308. At operation 410, a signature key is generated using determined information from operations 402-408 and the key is used to identify a corresponding application-specific signature that is associated with a corresponding packet processing logic pipeline. Each application-specific signature is unique, to avoid any possible conflict between applications, and corresponds to an application-specific data plane packet processing implementation with a corresponding set of data objects and logic for processing packets 308. The key building process could either be a static definition of key structure or allow for a more dynamic approach. While a static key structure could include building a key from a fixed and pre-determined set of parameters, a more dynamic approach would rather include building the key based on the minimum information required to uniquely identify the application-specific signatures. For example, for the static approach, each application could specify its own application key/signature in terms of a set of fixed parameters for which each application could either (1) specify specific values or ranges of values or (2) leave the values open (i.e., the values of the parameters are not part of the key/signature). For the more dynamic approach, the key/signature could be minimally built according to the deployed applications, implementing the minimum parsing and logical decisions required to uniquely identify an application’s packet processing pipeline (e.g., leveraging a decision tree).
[0072] In some embodiments, an application could also be identified through several different application-specific signatures (e.g., depending on the supported packet types). Also, when an application-specific signature cannot be found that corresponds to a generated key, the framework 302 could provide a default behavior, which could be to discard packets 308, or simply forward them using the default packet processing pipeline instead of an application-specific one. For example, as shown in Figure 4, when a match between the generated key for a packet 308 and an application-specific signature is determined at operation 412, the method 400 moves to operation 414 to use the application-specific packet processing logic pipeline (e.g., the application-specific data plane packet processing implementation) associated with the matched application-specific signature. Conversely, when there is a failure to match the generated key and an application-specific signature at operation 412, the method 400 moves to operation 416 to use a default packet processing logic pipeline (e.g., a default data plane packet processing implementation).
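A minimal sketch of this selection logic, assuming a static key structure, could look as follows (Python; the field names in_port, l3_proto, and dst_ip, and the table contents, are illustrative only):

    DEFAULT_PIPELINE = "default"

    # Provisioned by the framework: application-specific signature -> pipeline.
    signature_table = {
        ("port1", "udp", "10.0.0.1"): "app1_pipeline",
        ("port2", "tcp", "10.0.0.2"): "appN_pipeline",
    }

    def select_pipeline(system_info, fw_metadata, headers):
        # Operation 404: reuse an identity already resolved by a previous block.
        if "app_id" in fw_metadata:
            return fw_metadata["app_id"]
        # Operations 402, 406, 408, and 410: build a signature key from system
        # information and parsed header fields (table lookups are omitted here).
        key = (system_info["in_port"], headers["l3_proto"], headers["dst_ip"])
        # Operations 412, 414, and 416: match the key against the provisioned
        # application-specific signatures, else fall back to the default pipeline.
        return signature_table.get(key, DEFAULT_PIPELINE)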
[0073] For P4-programmable blocks 102 that cannot directly parse packets 308, P4 architectures can allow metadata information (e.g., FW metadata 306) to be carried along with each packet 308. The metadata information can minimally include system-related information (e.g., an incoming or an outgoing port) as well as pipeline-related information, such as packet header information. For example, in the packet pipeline 500 shown in Figure 5, the P4-programmable functional block 102-1 could be used to extract packet header information from the packets 308 and include it in the metadata (i.e., FW metadata 306). In such a case, the P4-programmable functional block 102-2 would receive the metadata information along with the corresponding packet 308 and could make decisions based on that information. Similarly, processing would be performed by the block 102-3 until the packets 308 and the metadata 306 reach a target-specific functional block 104. In some embodiments, the packets 308 and/or the metadata 306 can be passed beyond a target-specific functional block 102/104 (e.g., the metadata 306 can be passed from the block 102F to block 102A or from the block 102A to block 102F). [0074] In the P4-programmable blocks 102 that cannot directly parse packets, the multi-application framework 302 implements application signature selection relying mainly on a combination of system-level and metadata information, and optionally on table lookups, to select between all the available application-specific packet processing logic pipelines, as shown in the method 600 of Figure 6. Namely, the method 600 omits operation 408 of the method 400 of Figure 4.
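For such blocks, the corresponding selection sketch simply omits the packet parsing of operation 408 and relies only on system data and the metadata carried with the packet (Python, reusing the names of the previous sketch; flow_hint and the metadata table contents are invented for illustration):

    metadata_signature_table = {
        ("port1", "flowA"): "app1_pipeline",
    }

    def select_pipeline_no_parsing(system_info, fw_metadata):
        # Method 600: the application identity, if present, was placed in the
        # framework metadata by an earlier block that could parse the packet.
        if "app_id" in fw_metadata:
            return fw_metadata["app_id"]
        key = (system_info["in_port"], fw_metadata.get("flow_hint"))
        return metadata_signature_table.get(key, DEFAULT_PIPELINE)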
[0075] Once the multi-application framework 302 has selected an application for a packet 308, the packet 308 can then be processed by the corresponding application-specific data plane packet processing implementation. As shown in Figure 7, the application-specific data plane packet processing implementation 702-1 in a multi-application data plane framework pipeline 700 is defined by associated data objects 704-1 and data plane logic 706-1. While the data objects 704 can refer to statically and dynamically provisioned information (e.g., using data structures or tables), they can be used by the corresponding application’s/implementation’s processing logic 706 to make dynamic decisions regarding packet processing.
[0076] Figure 7 shows a P4-programmable block 102-1 where the framework application signature selection component 304 is based on packet parsing (e.g., a packet-based application signature selection component 304A) and selects application 1 (App. 1) as the required packet processing pipeline/implementation 702 for the received packet 308. By selecting application 1, the received packet 308 is then processed by application 1’s packet processing logic 706-1 using the associated data objects 704-1. It is also shown that the metadata 306 sent from the functional block 102-1 includes an indication of the identity of the application (e.g., App. 1) for subsequent logical blocks 102 to also be able to easily identify the same application and process their part of the selected application’s packet processing pipeline/implementation 702. In some embodiments, the data objects 704 associated with an application-specific data plane packet processing implementation 702 can be associated with the same application identifier used to identify the application-specific data plane packet processing implementation 702. As packets 308 progress through a packet processing pipeline, they transit through a number of different functional blocks 102. Using the example of Figure 7, in which application 1’s packet processing pipeline/implementation 702 was selected by functional block 102-1, Figure 8 shows the progress of the same packet 308 through functional blocks 102-2 and 102-3 of the multi-application data plane framework pipeline 800. As functional block 102-1 includes the application identity (App. ID) in the metadata 306 sent to functional block 102-2, functional block 102-2 uses that information directly to also select the packet processing pipeline/implementation 702-1 corresponding to application 1. Similarly, functional block 102-2 also sends the application identity (App. ID) in the metadata 306 to functional block 102-3, which also uses it to select the packet processing pipeline/implementation 702-1 corresponding to application 1.
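The propagation of the selected application identity through the pipeline could be modeled as below (Python; classify_packet and the pipeline and data-object tables are illustrative stand-ins for the application signature selection component 304 and the deployed implementations 702):

    app_pipelines = {
        "app1": lambda packet, objects: packet,  # stand-in for logic 706 of application 1
    }
    app_objects = {"app1": {"tables": {}}}       # stand-in for data objects 704

    def classify_packet(packet):
        return "app1"                            # packet-based signature match (stub)

    def run_block(packet, fw_metadata, can_parse):
        # Blocks that can parse packets resolve the application once; later
        # blocks reuse the App. ID carried in the FW metadata 306.
        if "app_id" not in fw_metadata and can_parse:
            fw_metadata["app_id"] = classify_packet(packet)
        app_id = fw_metadata.get("app_id")
        if app_id in app_pipelines:
            packet = app_pipelines[app_id](packet, app_objects[app_id])
        return packet, fw_metadata               # metadata travels with the packet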
[0077] From a high-level perspective, the multi-application framework 302 is meant to process an application’s packet processing pipeline/implementation 702 using its own application-specific data objects 704 and packet processing logic 706. As shown in the multi-application data plane framework pipeline 900 of Figure 9, as each packet 308 is received in a P4 device or appliance, the first multi-application framework’s application signature selection component 304 is used to identify the application associated with the received packet 308. As the packet 308 progresses through the P4 architecture, each functional block 102 keeps selecting the same application.
[0078] Figure 10 shows an example of a multi-application data plane framework pipeline 1000 where the first functional block 102-1 of a P4 architecture includes the framework’s application signature selection component 304A, which is used to select between two different applications (e.g., application 1 and application N). As each application provides its own specific application signature, each packet 308 is tested against those application signatures based on generated keys. When a received packet 308 matches the application signature of application 1, then the packet 308 is processed by application 1’s pipeline/implementation 702-1 using corresponding data plane logic and data objects. Similarly, when a received packet 308 matches the application signature of application N, then the packet 308 is processed by application N’s pipeline/implementation 702-N using corresponding data plane logic and data objects.
[0079] In Figure 11, the first three P4-programmable functional blocks 102-1 through 102-3 of a P4 architecture of an example of a multi-application data plane framework pipeline 1100 are shown. In each block 102, packets 308 are first processed by the framework’s application signature selection component 304, which is used to select between the different applications’ packet processing pipelines/implementations 702 available and process packets 308 accordingly. As packets 308 progress through the pipeline/implementation 702, they transit through the different functional blocks 102 with their associated metadata 306. In Figure 11, it is shown that the functional block 102-1 adds the application identity of the selected application to the metadata 306 sent to functional block 102-2, which uses it to also select the corresponding application’s packet processing logic and associated data objects of the pipeline/implementation 702-1. Similarly, functional block 102-2 also sends its selected application identity (App. ID) in the metadata 306 sent to functional block 102-3, which also uses it to select the packet processing logic and data objects of the pipeline/implementation 702-1 corresponding to the same application.
[0080] As described herein, multiple applications could share the same P4 device (or appliance) for their own application-specific packet processing implementations 702. From a high-level perspective, the multi-application framework 302 is meant to process an application’s packet processing pipeline/implementation 702 using its own application-specific data objects and packet processing logic. As shown in the multi-application data plane framework pipeline 1200 of Figure 12, as each packet 308 is received in a P4 device or appliance, the first signature selection component 304 is used to identify the application associated with the received packet 308. As the packet 308 progresses through the P4 architecture and each functional block 102 keeps selecting the same application, it might be considered that the complete packet processing of the P4 architecture would be application-specific. In Figure 12, while both applications (application 1 and application N) share the same P4 device (or appliance), each application could be considered as having its own packet processing pipeline implementation.
[0081] Considering the P4 PSA, the multi-application framework 302 described herein leverages each application’s signature to differentiate between applications in each functional block 102, starting with the ingress parser functional block 102A. As the traffic destined to each application is identified by the framework’s application signature selection component 304, the data plane packet processing logic of each application could be considered as parallel packet processing pipelines, as shown in Figure 12.
[0082] In some embodiments, P4 devices have a P4 architecture that is tightly coupled with their hardware implementation. In such a case, the multi-application framework 302 can logically separate the packet processing pipeline as well as the corresponding resources. But as P4 architectures/devices can also be deployed on FPGAs, more flexibility could be offered, allowing the multi-application framework 302 to instantiate a new dedicated packet processing pipeline for each application. As shown in the multi-application data plane framework pipelines 1300-1 and 1300-N of Figure 13, in some embodiments, an FPGA-based implementation of the multi-application framework 302 could assume a first logical block 1302 that would be used to first detect an application signature and then direct packets 308 to the packet processing pipeline 1300/702 completely dedicated to the selected application. The concept would correspond to the instantiation of a completely dedicated packet processing pipeline 1300/702 per application, with dedicated resources. Assuming that each pipeline 1300/702 would be dedicated to a unique application, the different application-specific signature selection components 304 might not be needed in the P4-programmable functional blocks 102 unless deemed useful for an application to determine its own required packet processing logic to execute.
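Such an FPGA-style arrangement could be modeled as a front-end dispatch step, as sketched below (Python; PipelineInstance and the dispatch table are hypothetical constructs standing in for the first logical block 1302 and the dedicated pipelines 1300):

    class PipelineInstance:
        # A dedicated packet processing pipeline with its own resources.
        def __init__(self, app_id):
            self.app_id, self.resources = app_id, {}
        def process(self, packet):
            return packet                 # application-specific stages would run here

    dedicated_pipelines = {}              # application signature -> dedicated instance

    def first_logical_block(packet, signature):
        # Block 1302: detect the application signature, then hand the packet to
        # the pipeline instantiated exclusively for the selected application.
        pipeline = dedicated_pipelines.setdefault(signature,
                                                  PipelineInstance(signature))
        return pipeline.process(packet)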
[0083] Figure 14 shows deployment dynamicity, but applied to the control plane, which provides control plane isolation between applications. For its control plane counterpart, the multi-application framework 302 leverages (1) a control plane proxy 1402, (2) an application signature selection component 304, (3) a DP-CP proxy communication mechanism 1404, and (4) a control plane proxy-application communication mechanism 1406.
[0084] The control plane proxy 1402, in the multi-application control plane framework 1410, acts as a proxy between an application’s data plane implementation 702-1 to 702-N in the multi-application data plane framework 1412 in a P4 target 1414 and its control plane counterpart 1408-1 to 1408-N (e.g., for providing interface and API adaptations). The control plane proxy 1402 provides certain Quality-of-Service (QoS), security, and analytics services. When multiple applications are deployed on the multi-application control plane framework 1410, the same control plane proxy 1402 can be shared between them. In that case, the framework’s application signature selection component 304 is used to differentiate between the different applications. In some embodiments, each application could instead be provided its own instance of a control plane proxy 1402, which would provide even more isolation between applications.
[0085] In the control plane proxy 1402, an application signature selection component 304 is used to differentiate between the different deployed applications and select the right application interfaces to use to communicate with either an application’s data plane implementation 702, or an application’s control plane 1408.
[0086] The DP-CP proxy communication mechanism 1404 facilitates the exchange of metadata between the data plane and the control plane proxy 1402. The information communicated between them can be framework-related (e.g., an application-specific signature or identity), packet-related, and/or application-related information, and would be specified minimally through an API.
[0087] The control plane proxy-application communication mechanism 1406 facilitates the exchange of metadata 306 between the control plane proxy 1402 and the application itself (e.g., the multi-application framework via the management, provisioning, and message component 1416, the application 1 control plane 1408-1 via the management, provisioning, and message component 1418-1, and the application N control plane 1408-N via the management, provisioning, and message component 1418-N). The information communicated between them can be framework-related (e.g., an application-specific signature or identity), packet-related, and/or application-related information, and would be specified minimally through an API.
[0088] As shown in Figure 14, in some embodiments, the multi-application framework 302 could allow an application to manage and provision its own application-specific data plane packet processing implementation 702. That means that an application can be granted access to its own specifically allocated resources of the multi-application framework 302 (e.g., accessing all the tables specified by its own application-specific data plane packet processing implementation). In the control plane proxy 1402, each application can be provided its own namespace to clearly identify the resources it has been allocated, guaranteeing resource isolation between applications.
[0089] In a similar way, it is also assumed that the multi-application framework 302 would allow applications to exchange packets 308 between their own application-specific control plane 1408 and their associated application-specific data plane packet processing implementation 702. Accordingly, packets 308 could be copied or forwarded from an application-specific data plane packet processing implementation 702 to its control plane counterpart, or directly originate from an application’s control plane logic implementation 1408 and be forwarded by its application-specific data plane packet processing implementation 702 counterpart. That can also refer to messages exchanged between an application’s data plane and control plane implementations for events or analytics reporting purposes.
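The fragment below is a minimal, hypothetical v1model sketch of this packet exchange: packets matching an application are cloned toward a CPU-facing port for the control plane counterpart, and packets injected by the control plane (arriving on that port) are simply forwarded. The CPU port number and mirror session identifier are assumptions, and reuse of the headers_t/metadata_t declarations from the earlier sketch is implied.

const bit<9>  CPU_PORT       = 255; // assumed CPU-facing port of the target
const bit<32> TO_CPU_SESSION = 100; // assumed mirror session toward CPU_PORT

control PuntAndInject(inout headers_t hdr, inout metadata_t meta,
                      inout standard_metadata_t std) {
    apply {
        if (std.ingress_port == CPU_PORT) {
            // Packet originated from the application's control plane logic:
            // forward it through the application-specific data plane.
            std.egress_spec = 1; // illustrative output port
        } else if (meta.app_id != 0) {
            // Copy the packet to the control plane counterpart while the
            // original continues through the pipeline.
            clone(CloneType.I2E, TO_CPU_SESSION);
        }
    }
}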
[0090] Regarding the multi-application framework 302, a framework-specific management, provisioning, and message component 1416 could also be used. For example, a centralized multi-application framework 302 could administer all the framework instances 302 deployed on the supported P4 targets 1414. That would allow the provisioning of the tables specified by the multi-application framework data plane implementation 702 and/or multi-application framework 302, as well as the exchange of messages between the multi-application framework data plane implementation 702 and the centralized multi-application framework 302 itself.
[0091] The multi-application framework 302 provides a solution in which each application can have its own data plane packet processing implementation 702 deployed on a P4-programmable target device, as well as access to the interfaces for provisioning and exchanging packets 308 with its application-specific control plane, while each application remains logically isolated from the others. In Figure 15, management, provisioning, and message components 1418 of each application’s control plane 1408 are shown. For this configuration, each application’s control plane 1408I-1408N could manage and provision its data plane packet processing implementation 702 from its own management, provisioning, and message components 1418I-1418N, via a corresponding control plane proxy 1402 provided by the multi-application control plane framework 1410.
[0092] Similarly, Figure 16 shows the exchange of packets or messages between an application’s data plane packet processing logic and its associated control plane framework 1410. In Figure 16, different application control planes 1408 can exchange packets between the multi-application control plane framework 1410 and their data plane packet processing implementation 710, using their own interfaces generated by the multi-application framework 302.
[0093] As shown in Figure 17, the multi-application framework 302 can develop and/or provide data plane and control plane reference templates 1704 for each supported P4 target architecture 1702. Those reference templates 1704 can be considered reference implementations of all the data plane and control plane framework data objects and logic implementations necessary to allow multiple applications to be deployed simultaneously on the same P4-programmable device. Leveraging those reference templates 1704, application-specific data plane and control plane templates 1706 can be generated and then implemented.
[0094] To generate application-specific data plane and control plane templates 1706 for the multi-application framework 302, each application/developer provides its own application-specific specifications 1708 to the multi-application framework 302. The application-specific specifications 1708 can include several pieces of information. For example, for application-specific signature definitions, the application-specific specifications 1708 can include (1) hardware-dependent information (e.g., incoming ports, virtual channel, etc.), (2) packet header (e.g., L2/L3/L4 headers) information (e.g., SA, DA, L3 proto, SIP, DIP, etc.), and/or (3) metadata information (e.g., application identifier, etc.). For application-specific resource requirements, the application-specific specifications 1708 can include (1) deployment information (e.g., target P4 architecture, resources, device type, location, etc.), (2) performance information (e.g., bandwidth, latency, statistics, debug, etc.), and/or (3) enhanced capabilities information (e.g., management and provisioning interfaces, message and packet interfaces, specialized functions, hardware accelerations (e.g., via externs), etc.).
[0095] As shown in Figure 17, the multi-application framework 302 can use both the application-specific specifications 1708 and the data plane and control plane reference templates 1704 to generate application-specific data plane and control plane templates 1710A and 1710B. The generated templates 1710 can include all the necessary components required by the multi-application framework to support multiple applications on the same P4 target, such as the framework’s application signature component and the minimally required metadata information between the P4-programmable blocks 102 of the data plane pipeline, which could include passing the associated application identifier as metadata information.
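A generated template could, for example, declare the framework-owned metadata that travels between the P4-programmable blocks 102 and reserve a clearly marked slot for the application's own logic. The sketch below illustrates the idea under the same assumed v1model setting as above; all names are invented for illustration.

// Framework metadata carried from block to block of the data plane pipeline.
struct framework_meta_t {
    bit<16> app_id;   // set by the signature component, then passed along
    bit<8>  fw_flags; // e.g., punt-to-control-plane or analytics hints
}

control GeneratedBlockTemplate(inout headers_t hdr,
                               inout framework_meta_t fw,
                               inout standard_metadata_t std) {
    apply {
        if (fw.app_id == 0) {
            // Framework-owned part of the template: the application
            // signature component would run here on the first block.
        } else {
            // Application-owned part: the developer fills in the
            // application-specific processing for this block.
        }
    }
}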
[0096] The application-specific data plane and control plane templates 1710 are generated to guarantee isolation between applications and to allow applications to be developed completely independently from each other. That assumes that applications could be developed with different amounts of required resources, different processing logic and table definitions, and different timelines for development and deployment.
[0097] In each application-specific data plane and control plane template 1710, the development framework 302 can provide the minimally required packet processing logic and metadata information to ensure that each application’s packet processing logic would be executed independently of other applications.
[0098] As mentioned earlier, it is left to each application/developer to implement its own generated application-specific data plane and control plane templates 1706. Regarding the data plane templates 1710A, an application might be requested to fill in several templates 1710A. As shown in Figure 18, P4 architectures can include several P4-programmable blocks 102 to program, which requires the multi-application framework 302 to generate several P4 templates per intended data plane packet processing pipeline (i.e., one per P4-programmable block 102). In Figure 18, there are six P4-programmable blocks 102, which corresponds to the generation of six application-specific data plane templates 1706.
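Continuing the hypothetical v1model example, the six programmable slots of the V1Switch package can stand in for the six P4-programmable blocks 102 of Figure 18; the framework would then emit one template stub per slot for the application to fill in. All names below are invented for illustration, and the headers_t/metadata_t declarations from the earlier sketch are assumed.

parser App1Parser(packet_in pkt, out headers_t hdr, inout metadata_t meta,
                  inout standard_metadata_t std) {
    state start { pkt.extract(hdr.ethernet); transition accept; }
}
control App1Verify(inout headers_t hdr, inout metadata_t meta) { apply { } }
control App1Ingress(inout headers_t hdr, inout metadata_t meta,
                    inout standard_metadata_t std) {
    apply { /* application-specific ingress logic goes here */ }
}
control App1Egress(inout headers_t hdr, inout metadata_t meta,
                   inout standard_metadata_t std) {
    apply { /* application-specific egress logic goes here */ }
}
control App1Compute(inout headers_t hdr, inout metadata_t meta) { apply { } }
control App1Deparser(packet_out pkt, in headers_t hdr) {
    apply { pkt.emit(hdr.ethernet); }
}

V1Switch(App1Parser(), App1Verify(), App1Ingress(), App1Egress(),
         App1Compute(), App1Deparser()) main;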
[0099] Because the multi-application framework 302 provides each application with its own generated application-specific data plane and control plane templates 1706, each application can be developed at its own pace, without any necessary dependency on other applications. It is assumed that each application-specific data plane packet processing implementation 702 could be developed and tested in parallel by different development teams, and submitted to the multi-application framework 302 for deployment only once the development process is complete.
[00100] Regarding the control plane templates 1710B, as an application-specific data plane implementation 702 is deployed, new APIs can be generated to exchange management, provisioning, and message information between the application’s data plane and control plane implementations. To ease the communication, the control plane proxy 1402 can facilitate proper communication routing and isolation between the multiple applications deployed on the same P4 device or appliance. While the APIs could be generated to populate the table entries used by the application’s data plane packet processing logic, they could also be used to exchange packets between an application’s data plane and control plane implementation. [00101] The communication channels between the framework’s data plane and control plane proxy 1402, as well as between the control plane proxy 1402 and each application, can be implemented using a proprietary solution and/or be automatically generated using a P4 runtime technology.
[00102] As shown in Figure 19, an application-specific data plane packet processing implementation 702 implies that an application/developer would implement its own intended data plane packet processing logic using the P4 language, as well as potentially the data model or provisioning information that might be required by the packet processing logic itself. Based on the application-specific data plane and control plane templates 1706 provided by the multi-application framework 302, an application is able to write its own application-specific implementation 1902 for the multi-application framework 302 to deploy on a selected P4 target. [00103] As the multi-application framework 302 provides different framework data plane and control plane templates 1706 to each application, it is assumed that each application/developer could be developing its own application-specific data plane and control plane packet processing implementations 1902 independently of other applications. Assuming multiple applications would be deployed on the same P4-programmable device, it would be the responsibility of the multi-application framework 302 to merge all application-specific implementations 1902I-1902N into a multi-application framework data plane and control plane implementation 1904 so that all implementations can be deployed on the same P4 resource, as shown in Figure 19.
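One plausible shape for such a merged implementation 1904, sketched under the same v1model assumptions as above, is a framework-owned dispatcher that branches on the application identifier established by the signature component and invokes each application's contributed control; the two application controls below are placeholders, not actual application logic.

control App1Logic(inout headers_t hdr, inout metadata_t meta,
                  inout standard_metadata_t std) {
    apply { std.egress_spec = 1; /* application 1's contributed logic */ }
}
control AppNLogic(inout headers_t hdr, inout metadata_t meta,
                  inout standard_metadata_t std) {
    apply { std.egress_spec = 2; /* application N's contributed logic */ }
}

control MergedIngress(inout headers_t hdr, inout metadata_t meta,
                      inout standard_metadata_t std) {
    App1Logic() app1;
    AppNLogic() appN;
    apply {
        // meta.app_id was set by the framework's signature component.
        if (meta.app_id == 1)      { app1.apply(hdr, meta, std); }
        else if (meta.app_id == 2) { appN.apply(hdr, meta, std); }
        else                       { mark_to_drop(std); }
    }
}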
[00104] Depending on the P4 target, a deployment model can include deploying a multi-application framework 302 first on the P4 target, with each additional application-specific implementation 1902 added individually once it becomes available. That model can be considered a modular deployment model for specific P4 targets. In another possible deployment model, all application-specific implementations 1902 are merged with the currently deployed implementation 1904, and the final combined implementation 1904 is deployed as one unique image for the P4 target.
[00105] Figure 20 shows a sequence diagram 2000 of a possible deployment process for an application-specific implementation 1902. As shown, an application/developer 2002 first provides its own application-specific specifications 1708 to the multi-application framework 302 at operation 2006A. At operation 2006B, the multi-application framework 302 analyzes the provided information to possibly select one or more P4 device(s) or appliance(s) 2004i-2004s for potential deployment. At operation 2006C, the multi-application framework 302 generates the corresponding application-specific data plane and control plane templates 1706 and transmits them or otherwise makes them available to the application/developer 2002 at operation 2006D. [00106] Once an application/developer 2002 has completed its development at operation 2006E, the application/developer 2002 can submit its application-specific data plane and control plane implementations 1902 to the multi-application framework 302 at operation 2006F. The multi-application framework 302 is then responsible for merging active application-specific data plane implementations 702 at operation 2006G, validating resource usage on respective P4 targets at operation 2006H, and deploying the merged implementation 1904 onto the selected P4 device(s) or appliance(s) 2004i-2004s at operation 2006I. For data plane implementations 702, the multi-application framework 302 ensures that each P4-programmable block 102 has a proper implementation of the framework’s application signature selection component 304 as well as a proper merge of the P4 code from all applications. For control plane implementations, the multi-application framework 302 is responsible for generating and deploying a control plane proxy 1402 component that would take into account all the applications.
[00107] When deploying a new application-specific implementation, the multi-application framework 302 might be required to merge all application-specific implementations located on the same P4 target first, depending on how flexible a P4 target might be. On some P4 targets, it might be possible to dynamically compile and deploy new data objects and logic completely independently of the already deployed ones, while on some others, any update to a P4-programmable block 102 that would be shared between multiple applications might require a P4 code merge, recompilation, and redeployment of the P4 target.
[00108] Figure 21 shows a sequence diagram 2100 of a possible deployment process when updating an application-specific implementation 1902. This diagram 2100 describes the use case where only an application-specific data plane logic implementation would be updated, but not its data objects or intended characteristics, as the application-specific specifications 1708 are assumed to remain unchanged. As shown, the application/developer 2002 updates application-specific implementations 1902 at operation 2102A and submits its application-specific data plane and control plane implementations 1902 to the multi-application framework 302 at operation 2102B. The multi-application framework 302 is then responsible for merging active application-specific data plane implementations 702 at operation 2102C, validating resource usage on respective P4 targets at operation 2102D, and deploying the merged implementation 1904 onto the selected P4 device(s) or appliance(s) 2004i-2004s at operation 2102E.
[00109] Figure 22 shows a sequence diagram 2200 where an application-specific implementation 1902 is removed from the multi-application framework 302 and from a P4 device 2004 on which it was deployed. In this diagram 2200, an application/developer 2002 could request that the multi-application framework 302 remove its application-specific implementation 1902 from a P4 device 2004 at operation 2202A. In such a case, the multi-application framework 302 is responsible for redeploying a proper image on the corresponding P4 device(s) 2004i-2004s without the requesting application’s application-specific implementation 1902. Accordingly, at operation 2202B the multi-application framework 302 merges all the remaining application-specific implementations 702, revalidates at operation 2202C that all remaining implementations 702 still respect the allocated amount of resources, and redeploys the newly generated image onto the corresponding P4 device(s) 2004i-2004s at operation 2202D.
[00110] Figure 23 shows a datacenter/cloud system 2300 where a centralized multi-application framework service 2314 is offered and made available to cloud tenants for their applications 2308Ai-2308Ax and 2308BI-2308BY operating on compute nodes 2318A-2318M. The datacenter/cloud system 2300 can additionally include network nodes 2304A-2304N in addition to the compute nodes 2318A-2318M, and each of the network nodes 2304A-2304N and the compute nodes 2318A-2318M can include corresponding multi-application data plane and control plane frameworks 2302 and multi-application framework agents 2316. In some cases, the multi-application data plane and control plane frameworks 2302 can operate/reside on smart network interface cards (smart-NICs) (e.g., the smart-NICs 2312A and 2312B). Depending on the required logical or physical isolation, the one or more centralized multi-application framework services 2314 can be connected to multi-application framework agents 2316 across the system 2300. An agent 2316 directly manages the multi-application frameworks 2302 of a P4 target.
[00111] Figure 24 shows the interactions between an application’s data plane 2302 and control plane implementations 2406 via a multi-application control plane proxy 2402. In this configuration, an application’s data plane implementation could be managed and provisioned by an application control plane via the multi-application control plane proxy 2402 that would be located on the same node as the corresponding P4 target. Similarly, messages sent and received between an application’s data plane implementation and an application’s control plane would also transit through the multi-application control plane proxy 2402.
[00112] Figure 25 shows a use case where the multi-application framework service 2314 orchestrates the migration of an application-specific implementation between different compute nodes 2318A and 2318M. For example, assuming an application 2308 would be migrated between different compute nodes 2318, it might be necessary to also move its corresponding application-specific implementations from one P4 device to another. In Figure 25, the originally selected P4 device was the smart-NIC 2312Ai located on the same compute node 2318A as the application 2308Ai. As the application 2308Ai is migrated from compute node 2318A to compute node 2318M, the multi-application framework service 2314 is also requested to contribute by migrating its corresponding application-specific implementations between compute nodes 2318A and 2318M.
[00113] This capability of the multi-application framework service 2314 to allow applications 2308 to migrate easily between nodes offers considerable flexibility and enables system dynamicity. Once application-specific implementations are submitted to the multi-application framework service, the same implementations can be migrated without any code rewrite by the application developer. A multi-application framework service 2314 that supports dynamic deployment of applications 2308 is therefore a significant advantage.
[00114] Figures 26, 27, 28, 29, and 30 show a set of flow diagrams for a method 2600 for offloading packet processing of an application 2308, according to one example embodiment.
The operations in the flow diagrams will be described with reference to the exemplary embodiments of the other figures. However, it should be understood that the operations of the flow diagrams can be performed by embodiments of the invention other than those discussed with reference to the other figures, and the embodiments of the invention discussed with reference to these other figures can perform operations different than those discussed with reference to the flow diagrams.
[00115] As shown in Figure 26, the method 2600 may commence at operation 2602 with a multi-application framework 302 receiving a set of application specifications 1708 describing a first application 2308Ai. In one embodiment, the set of application specifications 1708 includes a definition of a signature that is a unique identifier for identifying all traffic and management flows of the first application 2308Ai.
[00116] At operation 2604, the multi-application framework 302 selects a set of devices 2318/2304 for deployment of the first application 2308Ai based on the set of application specifications 1708.
[00117] At operation 2606, the multi-application framework 302 generates application-specific templates 1706 based on the set of application specifications 1708, a set of multi-application reference templates 1704, and architectures 1702 of the set of devices 2318/2304. [00118] At operation 2608, the multi-application framework 302 receives a first application-specific implementation 702i, wherein the first application-specific implementation 702i includes logic and data objects for deployment on the set of devices 2318/2304.
[00119] At operation 2610, when more than one application 2308 needs to be deployed on the same device, the multi-application framework 302 merges the first application-specific implementation 702i, which corresponds to the first application 2308Ai, with a second application-specific implementation 702N, which corresponds to a second application 2308Ax, to produce a first merged multi-application implementation 1904. In one embodiment, the merging is performed per device 2318/2304 in the set of devices 2318/2304 such that a separate first merged multi-application implementation 1904 is produced and deployed per each device 2318/2304 in the set of devices 2318/2304.
[00120] At operation 2612, the multi-application framework 302 deploys the first merged multi-application implementation 1904 to the set of devices 2318/2304. Deploying the first merged multi-application implementation 1904 includes deploying the merged multi-application implementation 1904 on a set of programmable components 102 in each device in the set of devices 2318/2304.
[00121] Following operation 2612, the method 2600 may move to either operation 2702 (shown in Figure 27), 2802 (shown in Figure 28), or 2902 (shown in Figure 29).
[00122] Turning to Figure 27, at operation 2702, the multi-application framework 302 detects a request 2202A to remove the first application 2308Ai from the set of devices 2318/2304. [00123] At operation 2704, the multi-application framework 302 merges active application-specific implementations 702, including the second application-specific implementation 702N, to produce a second merged multi-application implementation 1904 that, in comparison to the first merged multi-application implementation 1904, does not correspond to the first application-specific implementation 702i.
[00124] At operation 2706, the multi-application framework 302 deploys the second merged multi-application implementation 1904 to the set of devices 2318/2304. In some embodiments, the request 2202A is from one of (1) the first application 2308Ai or (2) a developer of the first application 2308Ai, or is triggered by the multi-application framework 302.
[00125] Turning to Figure 28, at operation 2802, the multi-application framework 302 detects a request 2102B to update the first application-specific implementation 702i, wherein the request 2102B includes an updated first application-specific implementation 702i.
[00126] At operation 2804, the multi-application framework 302 merges the updated first application-specific implementation 702i, which corresponds to the first application 2308Ai, with the second application-specific implementation 702N, which corresponds to the second application 2308Ax, to produce a second merged multi-application implementation 1904.
[00127] At operation 2806, the multi-application framework 302 deploys the second merged multi-application implementation 1904 to the set of devices 2318/2304. In some embodiments, the request 2102B is from one of (1) the first application 2308Ai or (2) a developer of the first application 2308Ai, or is triggered by the multi-application framework 302 (e.g., the request 2102B is related to an update of the multi-application framework 302). [00128] Turning to Figures 29 and 30, at operation 2902 the multi-application framework 302 receives, on a first programmable component 102i within a first device 2318/2304 in the set of devices 2318/2304, a packet 308.
[00129] At operation 2904, the multi-application framework 302 determines, on the first programmable component 102i, that the packet 308 is associated with the first application 2308Ai. In some embodiments, determining that the packet 308 is associated with the first application 2308Ai includes sub-operation 2904A to build, by a framework application signature selection component 304A, a signature key for the packet 308 based on one or more of (1) system information for the packet 308, (2) a set of metadata 306 from a previous programmable component 102 of the first device 2318/2304, (3) application-specific signature parameters from table lookups, and (4) application-specific signature parameters from parsing the packet 308. In some embodiments, determining that the packet 308 is associated with the first application 2308Ai may additionally include sub-operation 2904B to match, by the framework application signature selection component 304A, the signature key with a signature of the first application 2308Ai.
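Sub-operations 2904A and 2904B could be realized in P4 as a single table lookup whose key combines those sources, as in the hypothetical fragment below (the same v1model assumptions and invented names as before); the table entries themselves would hold the provisioned application signatures.

control SignatureSelect(inout headers_t hdr, inout metadata_t meta,
                        inout standard_metadata_t std) {
    action set_app(bit<16> app_id) { meta.app_id = app_id; }
    table signature_match {
        key = {
            std.ingress_port        : exact;   // (1) system information
            meta.app_id             : ternary; // (2) metadata from the
                                               //     previous block
            hdr.ethernet.ether_type : ternary; // (4) parsed packet fields
        }
        // (3) parameters obtained from table lookups are represented by the
        //     provisioned entries of this table.
        actions = { set_app; NoAction; }
        default_action = NoAction();
    }
    apply { signature_match.apply(); }
}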
[00130] At operation 2906, the multi-application framework 302 selects, in response to determining that the packet 308 is associated with the first application 2308Ai, the first application-specific implementation 702i included in the first merged multi-application implementation 1904 for use in processing the packet 308 by the first programmable component 102i within the first device 2318/2304.
[00131] At operation 2908, the multi-application framework 302 processes, by the first application-specific implementation 702i in the first programmable component 102i in the first device 2318/2304 in response to selecting the first application-specific implementation 702i, the packet 308 to cause the first programmable component 102i to one or more of (1) perform a task of the first application 2308Ai and (2) generate a first set of metadata 306.
[00132] Following performing the task, the method 2600 may either perform operation 2910 or operation 2912. At operation 2910, the first programmable component 102i passes one or more of (1) the first set of metadata 306 and (2) the packet 308 to a second programmable component 1022 or a programmable component 102 in the first device 2318/2304 or a second device 2318/2304 in the set of devices 2318/2304.
[00133] At operation 2912, the first programmable component 102i drops the packet 308. [00134] As shown in Figure 29, following either operation 2910 or operation 2912, the method 2600 moves to operation 3002 in Figure 30. [00135] At operation 3002, the multi -application framework 302 receives, on the second programmable component 1022 within the first device 2318/2304, the packet 308 and the first set of metadata 306.
[00136] At operation 3004, the multi-application framework 302 determines, on the second programmable component 1022, that the packet 308 is associated with the first application 2308Ai based on the packet 308 and the first set of metadata 306. In some embodiments, determining that the packet 308 is associated with the first application 2308Ai includes sub-operation 3004A to build, by a framework application signature selection component 304A, a signature key for the packet 308 based on one or more of (1) system information for the packet 308, (2) a set of metadata 306 from a previous programmable component 102 of the first device 2318/2304, (3) application-specific signature parameters from table lookups, and (4) application-specific signature parameters from parsing the packet 308. In some embodiments, determining that the packet 308 is associated with the first application 2308Ai may additionally include sub-operation 3004B to match, by the framework application signature selection component 304A, the signature key with a signature of the first application 2308Ai.
[00137] At operation 3006, the multi-application framework 302 selects, in response to determining that the packet 308 is associated with the first application 2308Ai, the first application-specific implementation 702i included in the first merged multi-application implementation 1904 for use in processing the packet 308 by the second programmable component 1022 within the first device 2318/2304.
[00138] At operation 3008, the multi-application framework 302 processes, by the first application-specific implementation 702i in the second programmable component 1022 in the first device 2318/2304 in response to selecting the first application-specific implementation 702i, the packet 308 to generate a second set of metadata 306.
[00139] At operation 3010, the multi-application framework 302 passes, by the second programmable component 1022, the metadata 306 and the packet 308 to a third programmable component 1023 in the first device 2318/2304. In particular, the metadata 306 and/or the packet 308 traverse the processing pipeline, as indicated by the architecture.
[00140] An electronic device stores and transmits (internally and/or with other electronic devices over a network) code (which is composed of software instructions and which is sometimes referred to as computer program code or a computer program) and/or data using machine-readable media (also called computer-readable media), such as machine-readable storage media (e.g., magnetic disks, optical disks, solid state drives, read only memory (ROM), flash memory devices, phase change memory) and machine-readable transmission media (also called a carrier) (e.g., electrical, optical, radio, acoustical or other form of propagated signals - such as carrier waves, infrared signals). Thus, an electronic device (e.g., a computer) includes hardware and software, such as a set of one or more processors (e.g., wherein a processor is a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application specific integrated circuit, field programmable gate array, other electronic circuitry, a combination of one or more of the preceding) coupled to one or more machine-readable storage media to store code for execution on the set of processors and/or to store data. For instance, an electronic device may include non-volatile memory containing the code since the non-volatile memory can persist code/data even when the electronic device is turned off (when power is removed), and while the electronic device is turned on that part of the code that is to be executed by the processor(s) of that electronic device is typically copied from the slower non-volatile memory into volatile memory (e.g., dynamic random access memory (DRAM), static random access memory (SRAM)) of that electronic device. Typical electronic devices also include a set of one or more physical network interface(s) (NI(s)) to establish network connections (to transmit and/or receive code and/or data using propagating signals) with other electronic devices. For example, the set of physical NIs (or the set of physical NI(s) in combination with the set of processors executing code) may perform any formatting, coding, or translating to allow the electronic device to send and receive data whether over a wired and/or a wireless connection. In some embodiments, a physical NI may comprise radio circuitry capable of receiving data from other electronic devices over a wireless connection and/or sending data out to other devices via a wireless connection. This radio circuitry may include transmitter(s), receiver(s), and/or transceiver(s) suitable for radiofrequency communication. The radio circuitry may convert digital data into a radio signal having the appropriate parameters (e.g., frequency, timing, channel, bandwidth, etc.). The radio signal may then be transmitted via antennas to the appropriate recipient(s). In some embodiments, the set of physical NI(s) may comprise network interface controller(s) (NICs), also known as a network interface card, network adapter, or local area network (LAN) adapter. The NIC(s) may facilitate connecting the electronic device to other electronic devices, allowing them to communicate via wire through plugging in a cable to a physical port connected to a NIC. One or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware.
[00141] A network device (ND) is an electronic device that communicatively interconnects other electronic devices on the network (e.g., other network devices, end-user devices). Some network devices are “multiple services network devices” that provide support for multiple networking functions (e.g., routing, bridging, switching, Layer 2 aggregation, session border control, Quality of Service, and/or subscriber management), and/or provide support for multiple application services (e.g., data, voice, and video). [00142] Figure 31A illustrates connectivity between network devices (NDs) within an exemplary network, as well as three exemplary implementations of the NDs, according to some embodiments of the invention. Figure 31A shows NDs 3100A-H, and their connectivity by way of lines between 3100A-3100B, 3100B-3100C, 3100C-3100D, 3100D-3100E, 3100E-3100F, 3100F-3100G, and 3100A-3100G, as well as between 3100H and each of 3100A, 3100C,
3100D, and 3100G. These NDs are physical devices, and the connectivity between these NDs can be wireless or wired (often referred to as a link). An additional line extending from NDs 3100A, 3100E, and 3100F illustrates that these NDs act as ingress and egress points for the network (and thus, these NDs are sometimes referred to as edge NDs; while the other NDs may be called core NDs).
[00143] Two of the exemplary ND implementations in Figure 31A are: 1) a special-purpose network device 3102 that uses custom application-specific integrated circuits (ASICs) and a special-purpose operating system (OS); and 2) a general purpose network device 3104 that uses common off-the-shelf (COTS) processors and a standard OS.
[00144] The special-purpose network device 3102 includes networking hardware 3110 comprising a set of one or more processor(s) 3112, forwarding resource(s) 3114 (which typically include one or more ASICs and/or network processors), and physical network interfaces (NIs) 3116 (through which network connections are made, such as those shown by the connectivity between NDs 3100A-H), as well as non-transitory machine readable storage media 3118 having stored therein networking software 3120. During operation, the networking software 3120 may be executed by the networking hardware 3110 to instantiate a set of one or more networking software instance(s) 3122. Each of the networking software instance(s) 3122, and that part of the networking hardware 3110 that executes that network software instance (be it hardware dedicated to that networking software instance and/or time slices of hardware temporally shared by that networking software instance with others of the networking software instance(s) 3122), form a separate virtual network element 3130A-R. Each of the virtual network element(s) (VNEs) 3130A-R includes a control communication and configuration module 3132A-R (sometimes referred to as a local control module or control communication module) and forwarding table(s) 3134A-R, such that a given virtual network element (e.g., 3130A) includes the control communication and configuration module (e.g., 3132A), a set of one or more forwarding table(s) (e.g., 3134A), and that portion of the networking hardware 3110 that executes the virtual network element (e.g., 3130A).
[00145] The special-purpose network device 3102 is often physically and/or logically considered to include: 1) a ND control plane 3124 (sometimes referred to as a control plane) comprising the processor(s) 3112 that execute the control communication and configuration module(s) 3132A-R; and 2) a ND forwarding plane 3126 (sometimes referred to as a forwarding plane, a data plane, or a media plane) comprising the forwarding resource(s) 3114 that utilize the forwarding table(s) 3134A-R and the physical NIs 3116. By way of example, where the ND is a router (or is implementing routing functionality), the ND control plane 3124 (the processor(s) 3112 executing the control communication and configuration module(s) 3132A-R) is typically responsible for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) and storing that routing information in the forwarding table(s) 3134A-R, and the ND forwarding plane 3126 is responsible for receiving that data on the physical NIs 3116 and forwarding that data out the appropriate ones of the physical NIs 3116 based on the forwarding table(s) 3134A-R.
[00146] Figure 31B illustrates an exemplary way to implement the special-purpose network device 3102 according to some embodiments of the invention. Figure 31B shows a special-purpose network device including cards 3138 (typically hot pluggable). While in some embodiments the cards 3138 are of two types (one or more that operate as the ND forwarding plane 3126 (sometimes called line cards), and one or more that operate to implement the ND control plane 3124 (sometimes called control cards)), alternative embodiments may combine functionality onto a single card and/or include additional card types (e.g., one additional type of card is called a service card, resource card, or multi-application card). A service card can provide specialized processing (e.g., Layer 4 to Layer 7 services (e.g., firewall, Internet Protocol Security (IPsec), Secure Sockets Layer (SSL) / Transport Layer Security (TLS), Intrusion Detection System (IDS), peer-to-peer (P2P), Voice over IP (VoIP) Session Border Controller, Mobile Wireless Gateways (Gateway General Packet Radio Service (GPRS) Support Node (GGSN), Evolved Packet Core (EPC) Gateway))). By way of example, a service card may be used to terminate IPsec tunnels and execute the attendant authentication and encryption algorithms. These cards are coupled together through one or more interconnect mechanisms illustrated as backplane 3136 (e.g., a first full mesh coupling the line cards and a second full mesh coupling all of the cards).
[00147] Returning to Figure 31A, the general purpose network device 3104 includes hardware 3140 comprising a set of one or more processor(s) 3142 (which are often COTS processors) and physical NIs 3146, as well as non-transitory machine readable storage media 3148 having stored therein software 3150. During operation, the processor(s) 3142 execute the software 3150 to instantiate one or more sets of one or more applications 3164A-R. While one embodiment does not implement virtualization, alternative embodiments may use different forms of virtualization. For example, in one such alternative embodiment the virtualization layer 3154 represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple instances 3162A-R called software containers that may each be used to execute one (or more) of the sets of applications 3164A-R; where the multiple software containers (also called virtualization engines, virtual private servers, or jails) are user spaces (typically a virtual memory space) that are separate from each other and separate from the kernel space in which the operating system is run; and where the set of applications running in a given user space, unless explicitly allowed, cannot access the memory of the other processes. In another such alternative embodiment the virtualization layer 3154 represents a hypervisor (sometimes referred to as a virtual machine monitor (VMM)) or a hypervisor executing on top of a host operating system, and each of the sets of applications 3164A-R is run on top of a guest operating system within an instance 3162A-R called a virtual machine (which may in some cases be considered a tightly isolated form of software container) that is run on top of the hypervisor - the guest operating system and application may not know they are running on a virtual machine as opposed to running on a “bare metal” host electronic device, or through para-virtualization the operating system and/or application may be aware of the presence of virtualization for optimization purposes. In yet other alternative embodiments, one, some or all of the applications are implemented as unikernel(s), which can be generated by compiling directly with an application only a limited set of libraries (e.g., from a library operating system (LibOS) including drivers/libraries of OS services) that provide the particular OS services needed by the application. As a unikernel can be implemented to run directly on hardware 3140, directly on a hypervisor (in which case the unikernel is sometimes described as running within a LibOS virtual machine), or in a software container, embodiments can be implemented fully with unikernels running directly on a hypervisor represented by virtualization layer 3154, unikernels running within software containers represented by instances 3162A-R, or as a combination of unikernels and the above-described techniques (e.g., unikernels and virtual machines both run directly on a hypervisor, unikernels and sets of applications that are run in different software containers).
[00148] The instantiation of the one or more sets of one or more applications 3164A-R, as well as virtualization if implemented, are collectively referred to as software instance(s) 3152. Each set of applications 3164A-R, corresponding virtualization construct (e.g., instance 3162A- R) if implemented, and that part of the hardware 3140 that executes them (be it hardware dedicated to that execution and/or time slices of hardware temporally shared), forms a separate virtual network element(s) 3160A-R.
[00149] The virtual network element(s) 3160A-R perform similar functionality to the virtual network element(s) 3130A-R - e.g., similar to the control communication and configuration module(s) 3132A and forwarding table(s) 3134A (this virtualization of the hardware 3140 is sometimes referred to as network function virtualization (NFV)). Thus, NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which could be located in data centers, NDs, and customer premise equipment (CPE). While embodiments of the invention are illustrated with each instance 3162A-R corresponding to one VNE 3160A-R, alternative embodiments may implement this correspondence at a finer level of granularity (e.g., line card virtual machines virtualize line cards, control card virtual machines virtualize control cards, etc.); it should be understood that the techniques described herein with reference to a correspondence of instances 3162A-R to VNEs also apply to embodiments where such a finer level of granularity and/or unikernels are used.
[00150] In certain embodiments, the virtualization layer 3154 includes a virtual switch that provides similar forwarding services as a physical Ethernet switch. Specifically, this virtual switch forwards traffic between instances 3162A-R and the physical NI(s) 3146, as well as optionally between the instances 3162A-R; in addition, this virtual switch may enforce network isolation between the VNEs 3160A-R that by policy are not permitted to communicate with each other (e.g., by honoring virtual local area networks (VLANs)).
[00151] The third exemplary ND implementation in Figure 31A is a hybrid network device 3106, which includes both custom ASICs/special-purpose OS and COTS processors/standard OS in a single ND or a single card within an ND. In certain embodiments of such a hybrid network device, a platform VM (i.e., a VM that implements the functionality of the special-purpose network device 3102) could provide for para-virtualization to the networking hardware present in the hybrid network device 3106.
[00152] Regardless of the above exemplary implementations of an ND, when a single one of multiple VNEs implemented by an ND is being considered (e.g., only one of the VNEs is part of a given virtual network) or where only a single VNE is currently being implemented by an ND, the shortened term network element (NE) is sometimes used to refer to that VNE. Also in all of the above exemplary implementations, each of the VNEs (e.g., VNE(s) 3130A-R, VNEs 3160A- R, and those in the hybrid network device 3106) receives data on the physical NIs (e.g., 3116,
3146) and forwards that data out the appropriate ones of the physical NIs (e.g., 3116, 3146). For example, a VNE implementing IP router functionality forwards IP packets on the basis of some of the IP header information in the IP packet; where IP header information includes source IP address, destination IP address, source port, destination port (where “source port” and “destination port” refer herein to protocol ports, as opposed to physical ports of a ND), transport protocol (e.g., user datagram protocol (UDP), Transmission Control Protocol (TCP)), and differentiated services code point (DSCP) values. [00153] Figure 31C illustrates various exemplary ways in which VNEs may be coupled according to some embodiments of the invention. Figure 31C shows VNEs 3170A.1-3170A.P (and optionally VNEs 3170A.Q-3170A.R) implemented in ND 3100A and VNE 3170H.1 in ND 3100H. In Figure 31C, VNEs 3170A.1-P are separate from each other in the sense that they can receive packets from outside ND 3100A and forward packets outside of ND 3100A; VNE 3170A.1 is coupled with VNE 3170H.1, and thus they communicate packets between their respective NDs; VNE 3170A.2-3170A.3 may optionally forward packets between themselves without forwarding them outside of the ND 3100A; and VNE 3170A.P may optionally be the first in a chain of VNEs that includes VNE 3170A.Q followed by VNE 3170A.R (this is sometimes referred to as dynamic service chaining, where each of the VNEs in the series of VNEs provides a different service - e.g., one or more layer 4-7 network services). While Figure 31C illustrates various exemplary relationships between the VNEs, alternative embodiments may support other relationships (e.g., more/fewer VNEs, more/fewer dynamic service chains, multiple different dynamic service chains with some common VNEs and some different VNEs). [00154] The NDs of Figure 31A, for example, may form part of the Internet or a private network; and other electronic devices (not shown; such as end user devices including workstations, laptops, netbooks, tablets, palm tops, mobile phones, smartphones, phablets, multimedia phones, Voice Over Internet Protocol (VOIP) phones, terminals, portable media players, GPS units, wearable devices, gaming systems, set-top boxes, Internet enabled household appliances) may be coupled to the network (directly or through other networks such as access networks) to communicate over the network (e.g., the Internet or virtual private networks (VPNs) overlaid on (e.g., tunneled through) the Internet) with each other (directly or through servers) and/or access content and/or services. Such content and/or services are typically provided by one or more servers (not shown) belonging to a service/content provider or one or more end user devices (not shown) participating in a peer-to-peer (P2P) service, and may include, for example, public webpages (e.g., free content, store fronts, search services), private webpages (e.g., username/password accessed webpages providing email services), and/or corporate networks over VPNs. For instance, end user devices may be coupled (e.g., through customer premise equipment coupled to an access network (wired or wirelessly)) to edge NDs, which are coupled (e.g., through one or more core NDs) to other edge NDs, which are coupled to electronic devices acting as servers.
However, through compute and storage virtualization, one or more of the electronic devices operating as the NDs in Figure 31A may also host one or more such servers (e.g., in the case of the general purpose network device 3104, one or more of the software instances 3162A-R may operate as servers; the same would be true for the hybrid network device 3106; in the case of the special-purpose network device 3102, one or more such servers could also be run on a virtualization layer executed by the processor(s) 3112); in which case the servers are said to be co-located with the VNEs of that ND.
[00155] A virtual network is a logical abstraction of a physical network (such as that in Figure 31A) that provides network services (e.g., L2 and/or L3 services). A virtual network can be implemented as an overlay network (sometimes referred to as a network virtualization overlay) that provides network services (e.g., layer 2 (L2, data link layer) and/or layer 3 (L3, network layer) services) over an underlay network (e.g., an L3 network, such as an Internet Protocol (IP) network that uses tunnels (e.g., generic routing encapsulation (GRE), layer 2 tunneling protocol (L2TP), IPSec) to create the overlay network).
[00156] A network virtualization edge (NVE) sits at the edge of the underlay network and participates in implementing the network virtualization; the network-facing side of the NVE uses the underlay network to tunnel frames to and from other NVEs; the outward-facing side of the NVE sends and receives data to and from systems outside the network. A virtual network instance (VNI) is a specific instance of a virtual network on a NVE (e.g., a NE VNE on an ND, a part of a NE VNE on a ND where that NE VNE is divided into multiple VNEs through emulation); one or more VNIs can be instantiated on an NVE (e.g., as different VNEs on an ND). A virtual access point (VAP) is a logical connection point on the NVE for connecting external systems to a virtual network; a VAP can be a physical or virtual port identified through a logical interface identifier (e.g., a VLAN ID).
[00157] Examples of network services include: 1) an Ethernet LAN emulation service (an Ethernet-based multipoint service similar to an Internet Engineering Task Force (IETF) Multiprotocol Label Switching (MPLS) or Ethernet VPN (EVPN) service) in which external systems are interconnected across the network by a LAN environment over the underlay network (e.g., an NVE provides separate L2 VNIs (virtual switching instances) for different such virtual networks, and L3 (e.g., IP/MPLS) tunneling encapsulation across the underlay network); and 2) a virtualized IP forwarding service (similar to IETF IP VPN (e.g., Border Gateway Protocol (BGP)/MPLS IPVPN) from a service definition perspective) in which external systems are interconnected across the network by an L3 environment over the underlay network (e.g., an NVE provides separate L3 VNIs (forwarding and routing instances) for different such virtual networks, and L3 (e.g., IP/MPLS) tunneling encapsulation across the underlay network). Network services may also include quality of service capabilities (e.g., traffic classification marking, traffic conditioning and scheduling), security capabilities (e.g., filters to protect customer premises from network-originated attacks, to avoid malformed route announcements), and management capabilities (e.g., fault detection and processing). [00158] Figure 31D illustrates a network with a single network element on each of the NDs of Figure 31A, and within this straightforward approach contrasts a traditional distributed approach (commonly used by traditional routers) with a centralized approach for maintaining reachability and forwarding information (also called network control), according to some embodiments of the invention. Specifically, Figure 31D illustrates network elements (NEs)
3170A-H with the same connectivity as the NDs 3100A-H of Figure 31A.
[00159] Figure 31D illustrates that the distributed approach 3172 distributes responsibility for generating the reachability and forwarding information across the NEs 3170A-H; in other words, the process of neighbor discovery and topology discovery is distributed.
[00160] For example, where the special-purpose network device 3102 is used, the control communication and configuration module(s) 3132A-R of the ND control plane 3124 typically include a reachability and forwarding information module to implement one or more routing protocols (e.g., an exterior gateway protocol such as Border Gateway Protocol (BGP), Interior Gateway Protocol(s) (IGP) (e.g., Open Shortest Path First (OSPF), Intermediate System to Intermediate System (IS-IS), Routing Information Protocol (RIP), Label Distribution Protocol (LDP), Resource Reservation Protocol (RSVP) (including RSVP-Traffic Engineering (TE): Extensions to RSVP for LSP Tunnels and Generalized Multi-Protocol Label Switching (GMPLS) Signaling RSVP-TE)) that communicate with other NEs to exchange routes, and then selects those routes based on one or more routing metrics. Thus, the NEs 3170A-H (e.g., the processor(s) 3112 executing the control communication and configuration module(s) 3132A-R) perform their responsibility for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) by distributively determining the reachability within the network and calculating their respective forwarding information. Routes and adjacencies are stored in one or more routing structures (e.g., Routing Information Base (RIB), Label Information Base (LIB), one or more adjacency structures) on the ND control plane 3124. The ND control plane 3124 programs the ND forwarding plane 3126 with information (e.g., adjacency and route information) based on the routing structure(s). For example, the ND control plane 3124 programs the adjacency and route information into one or more forwarding table(s) 3134A-R (e.g., Forwarding Information Base (FIB), Label Forwarding Information Base (LFIB), and one or more adjacency structures) on the ND forwarding plane 3126. For layer 2 forwarding, the ND can store one or more bridging tables that are used to forward data based on the layer 2 information in that data. While the above example uses the special-purpose network device 3102, the same distributed approach 3172 can be implemented on the general purpose network device 3104 and the hybrid network device 3106. [00161] Figure 31D illustrates a centralized approach 3174 (also known as software defined networking (SDN)) that decouples the system that makes decisions about where traffic is sent from the underlying systems that forward traffic to the selected destination. The illustrated centralized approach 3174 has the responsibility for the generation of reachability and forwarding information in a centralized control plane 3176 (sometimes referred to as a SDN control module, controller, network controller, OpenFlow controller, SDN controller, control plane node, network virtualization authority, or management control entity), and thus the process of neighbor discovery and topology discovery is centralized. The centralized control plane 3176 has a south bound interface 3182 with a data plane 3180 (sometimes referred to as the infrastructure layer, network forwarding plane, or forwarding plane (which should not be confused with a ND forwarding plane)) that includes the NEs 3170A-H (sometimes referred to as switches, forwarding elements, data plane elements, or nodes).
The centralized control plane 3176 includes a network controller 3178, which includes a centralized reachability and forwarding information module 3179 that determines the reachability within the network and distributes the forwarding information to the NEs 3170A-H of the data plane 3180 over the south bound interface 3182 (which may use the OpenFlow protocol). Thus, the network intelligence is centralized in the centralized control plane 3176 executing on electronic devices that are typically separate from the NDs.
[00162] For example, where the special-purpose network device 3102 is used in the data plane 3180, each of the control communication and configuration module(s) 3132A-R of the ND control plane 3124 typically include a control agent that provides the VNE side of the south bound interface 3182. In this case, the ND control plane 3124 (the processor(s) 3112 executing the control communication and configuration module(s) 3132A-R) performs its responsibility for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) through the control agent communicating with the centralized control plane 3176 to receive the forwarding information (and in some cases, the reachability information) from the centralized reachability and forwarding information module 3179 (it should be understood that in some embodiments of the invention, the control communication and configuration module(s) 3132A-R, in addition to communicating with the centralized control plane 3176, may also play some role in determining reachability and/or calculating forwarding information - albeit less so than in the case of a distributed approach; such embodiments are generally considered to fall under the centralized approach 3174, but may also be considered a hybrid approach).
[00163] While the above example uses the special-purpose network device 3102, the same centralized approach 3174 can be implemented with the general purpose network device 3104 (e.g., each of the VNEs 3160A-R performs its responsibility for controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) by communicating with the centralized control plane 3176 to receive the forwarding information (and in some cases, the reachability information) from the centralized reachability and forwarding information module 3179; it should be understood that in some embodiments of the invention, the VNEs 3160A-R, in addition to communicating with the centralized control plane 3176, may also play some role in determining reachability and/or calculating forwarding information - albeit less so than in the case of a distributed approach) and the hybrid network device 3106. In fact, the use of SDN techniques can enhance the NFV techniques typically used in the general purpose network device 3104 or hybrid network device 3106 implementations, as NFV is able to support SDN by providing an infrastructure upon which the SDN software can be run, and NFV and SDN both aim to make use of commodity server hardware and physical switches.
[00164] Figure 31D also shows that the centralized control plane 3176 has a north bound interface 3184 to an application layer 3186, in which resides application(s) 3188. The centralized control plane 3176 has the ability to form virtual networks 3192 (sometimes referred to as a logical forwarding plane, network services, or overlay networks (with the NEs 3170A-H of the data plane 3180 being the underlay network)) for the application(s) 3188. Thus, the centralized control plane 3176 maintains a global view of all NDs and configured NEs/VNEs, and it maps the virtual networks to the underlying NDs efficiently (including maintaining these mappings as the physical network changes either through hardware (ND, link, or ND component) failure, addition, or removal).
[00165] While Figure 31D shows the distributed approach 3172 separate from the centralized approach 3174, the effort of network control may be distributed differently or the two combined in certain embodiments of the invention. For example: 1) embodiments may generally use the centralized approach (SDN) 3174, but have certain functions delegated to the NEs (e.g., the distributed approach may be used to implement one or more of fault monitoring, performance monitoring, protection switching, and primitives for neighbor and/or topology discovery); or 2) embodiments of the invention may perform neighbor discovery and topology discovery via both the centralized control plane and the distributed protocols, and the results compared to raise exceptions where they do not agree. Such embodiments are generally considered to fall under the centralized approach 3174, but may also be considered a hybrid approach.
[00166] While Figure 31D illustrates the simple case where each of the NDs 3100A-H implements a single NE 3170A-H, it should be understood that the network control approaches described with reference to Figure 31D also work for networks where one or more of the NDs 3100A-H implement multiple VNEs (e.g., VNEs 3130A-R, VNEs 3160A-R, those in the hybrid network device 3106). Alternatively or in addition, the network controller 3178 may also emulate the implementation of multiple VNEs in a single ND. Specifically, instead of (or in addition to) implementing multiple VNEs in a single ND, the network controller 3178 may present the implementation of a VNE/NE in a single ND as multiple VNEs in the virtual networks 3192 (all in the same one of the virtual network(s) 3192, each in different ones of the virtual network(s) 3192, or some combination). For example, the network controller 3178 may cause an ND to implement a single VNE (a NE) in the underlay network, and then logically divide up the resources of that NE within the centralized control plane 3176 to present different VNEs in the virtual network(s) 3192 (where these different VNEs in the overlay networks are sharing the resources of the single VNE/NE implementation on the ND in the underlay network).
[00167] On the other hand, Figures 31E and 31F respectively illustrate exemplary abstractions of NEs and VNEs that the network controller 3178 may present as part of different ones of the virtual networks 3192. Figure 31E illustrates the simple case where each of the NDs 3100A-H implements a single NE 3170A-H (see Figure 31D), but the centralized control plane 3176 has abstracted multiple of the NEs in different NDs (the NEs 3170A-C and G-H) into (to represent) a single NE 3170I in one of the virtual network(s) 3192 of Figure 31D, according to some embodiments of the invention. Figure 31E shows that in this virtual network, the NE 3170I is coupled to NEs 3170D and 3170F, which are both still coupled to NE 3170E.
[00168] Figure 31F illustrates a case where multiple VNEs (VNE 3170A.1 and VNE 3170H.1) are implemented on different NDs (ND 3100A and ND 3100H) and are coupled to each other, and where the centralized control plane 3176 has abstracted these multiple VNEs such that they appear as a single VNE 3170T within one of the virtual networks 3192 of Figure 31D, according to some embodiments of the invention. Thus, the abstraction of a NE or VNE can span multiple NDs.
[00169] While some embodiments of the invention implement the centralized control plane 3176 as a single entity (e.g., a single instance of software running on a single electronic device), alternative embodiments may spread the functionality across multiple entities for redundancy and/or scalability purposes (e.g., multiple instances of software running on different electronic devices).
[00170] Similar to the network device implementations, the electronic device(s) running the centralized control plane 3176, and thus the network controller 3178 including the centralized reachability and forwarding information module 3179, may be implemented in a variety of ways (e.g., a special purpose device, a general-purpose (e.g., COTS) device, or hybrid device). These electronic device(s) would similarly include processor(s), a set of one or more physical NIs, and a non-transitory machine-readable storage medium having stored thereon the centralized control plane software. For instance, Figure 32 illustrates a general purpose control plane device 3204 including hardware 3240 comprising a set of one or more processor(s) 3242 (which are often COTS processors) and physical NIs 3246, as well as non-transitory machine readable storage media 3248 having stored therein centralized control plane (CCP) software 3250.
[00171] In embodiments that use compute virtualization, the processor(s) 3242 typically execute software to instantiate a virtualization layer 3254 (e.g., in one embodiment the virtualization layer 3254 represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple instances 3262A-R called software containers (representing separate user spaces and also called virtualization engines, virtual private servers, or jails) that may each be used to execute a set of one or more applications; in another embodiment the virtualization layer 3254 represents a hypervisor (sometimes referred to as a virtual machine monitor (VMM)) or a hypervisor executing on top of a host operating system, and an application is run on top of a guest operating system within an instance 3262A-R called a virtual machine (which in some cases may be considered a tightly isolated form of software container) that is run by the hypervisor; in another embodiment, an application is implemented as a unikernel, which can be generated by compiling directly with an application only a limited set of libraries (e.g., from a library operating system (LibOS) including drivers/libraries of OS services) that provide the particular OS services needed by the application, and the unikernel can run directly on hardware 3240, directly on a hypervisor represented by virtualization layer 3254 (in which case the unikernel is sometimes described as running within a LibOS virtual machine), or in a software container represented by one of instances 3262A-R). Again, in embodiments where compute virtualization is used, during operation an instance of the CCP software 3250 (illustrated as CCP instance 3276A) is executed (e.g., within the instance 3262A) on the virtualization layer 3254. In embodiments where compute virtualization is not used, the CCP instance 3276A is executed, as a unikernel or on top of a host operating system, on the “bare metal” general purpose control plane device 3204. The instantiation of the CCP instance 3276A, as well as the virtualization layer 3254 and instances 3262A-R if implemented, are collectively referred to as software instance(s) 3252.
[00172] In some embodiments, the CCP instance 3276A includes a network controller instance 3278. The network controller instance 3278 includes a centralized reachability and forwarding information module instance 3279 (which is a middleware layer providing the context of the network controller 3178 to the operating system and communicating with the various NEs), and a CCP application layer 3280 (sometimes referred to as an application layer) over the middleware layer (providing the intelligence required for various network operations such as protocols, network situational awareness, and user interfaces). At a more abstract level, this CCP application layer 3280 within the centralized control plane 3176 works with virtual network view(s) (logical view(s) of the network) and the middleware layer provides the conversion from the virtual networks to the physical view.
[00173] The centralized control plane 3176 transmits relevant messages to the data plane 3180 based on CCP application layer 3280 calculations and middleware layer mapping for each flow. A flow may be defined as a set of packets whose headers match a given pattern of bits; in this sense, traditional IP forwarding is also flow-based forwarding where the flows are defined by, for example, the destination IP address; however, in other implementations, the given pattern of bits used for a flow definition may include more fields (e.g., 10 or more) in the packet headers. Different NDs/NEs/VNEs of the data plane 3180 may receive different messages, and thus different forwarding information. The data plane 3180 processes these messages and programs the appropriate flow information and corresponding actions in the forwarding tables (sometimes referred to as flow tables) of the appropriate NE/VNEs, and then the NEs/VNEs map incoming packets to flows represented in the forwarding tables and forward packets based on the matches in the forwarding tables.
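To make the flow notion above concrete, the following minimal Python sketch (illustrative only; the header-field names and example patterns are assumptions, not part of this disclosure) shows how a flow defined by a pattern of header bits generalizes destination-based IP forwarding:

```python
# A flow pattern maps header-field names to required values; fields absent
# from the pattern act as wildcards. A destination-IP-only pattern and a
# multi-field pattern (10 or more fields) use the same matching mechanism.

def matches_flow(packet_fields, flow_pattern):
    """Return True if every (field, value) pair in the pattern matches."""
    return all(packet_fields.get(f) == v for f, v in flow_pattern.items())

ip_forwarding_flow = {"ipv4_dst": "192.0.2.1"}  # traditional, destination-based
fine_grained_flow = {"ipv4_dst": "192.0.2.1", "ip_proto": 6, "tcp_dst": 443}

pkt = {"eth_src": "aa:bb:cc:00:00:01", "ipv4_dst": "192.0.2.1",
       "ip_proto": 6, "tcp_dst": 443}
assert matches_flow(pkt, ip_forwarding_flow)
assert matches_flow(pkt, fine_grained_flow)
```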
[00174] Standards such as OpenFlow define the protocols used for the messages, as well as a model for processing the packets. The model for processing packets includes header parsing, packet classification, and making forwarding decisions. Header parsing describes how to interpret a packet based upon a well-known set of protocols. Some protocol fields are used to build a match structure (or key) that will be used in packet classification (e.g., a first key field could be a source media access control (MAC) address, and a second key field could be a destination MAC address).
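As an illustration of that header parsing step, the sketch below (an assumption-laden example, not an OpenFlow implementation; field names are hypothetical) builds a match structure from two key fields, a source MAC address and a destination MAC address:

```python
# Build a match structure (key) from a packet already parsed into header
# fields, using a configurable ordered set of key fields.

def build_match_key(packet_fields, key_fields=("eth_src", "eth_dst")):
    """Extract the configured key fields, in order, into a hashable tuple."""
    return tuple(packet_fields.get(f) for f in key_fields)

pkt = {"eth_src": "aa:bb:cc:00:00:01", "eth_dst": "aa:bb:cc:00:00:02"}
key = build_match_key(pkt)  # ('aa:bb:cc:00:00:01', 'aa:bb:cc:00:00:02')
```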
[00175] Packet classification involves executing a lookup in memory to classify the packet by determining which entry (also referred to as a forwarding table entry or flow entry) in the forwarding tables best matches the packet based upon the match structure, or key, of the forwarding table entries. It is possible that many flows represented in the forwarding table entries can correspond/match to a packet; in this case the system is typically configured to determine one forwarding table entry from the many according to a defined scheme (e.g., selecting a first forwarding table entry that is matched). Forwarding table entries include both a specific set of match criteria (a set of values or wildcards, or an indication of what portions of a packet should be compared to a particular value/values/wildcards, as defined by the matching capabilities - for specific fields in the packet header, or for some other packet content), and a set of one or more actions for the data plane to take on receiving a matching packet. For example, an action may be to push a header onto the packet, forward the packet using a particular port, flood the packet, or simply drop the packet. Thus, a forwarding table entry for IPv4/IPv6 packets with a particular transmission control protocol (TCP) destination port could contain an action specifying that these packets should be dropped.
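By way of illustration, a first-match classification over such entries can be sketched as follows (the wildcard convention, key layout, table contents, and action names are illustrative assumptions):

```python
WILDCARD = None  # a criterion of None matches any value in that key position

def classify(key, table):
    """Return the actions of the first forwarding table entry matching key."""
    for criteria, actions in table:
        if all(c is WILDCARD or c == k for c, k in zip(criteria, key)):
            return actions
    return None  # no entry matched: a "missed packet" (see below)

# Key layout assumed here: (ip_version, tcp_destination_port).
table = [
    (("ipv4", 22), [("drop",)]),                  # drop TCP dst-port 22
    (("ipv4", WILDCARD), [("output", "port3")]),  # everything else to port3
]
assert classify(("ipv4", 22), table) == [("drop",)]
assert classify(("ipv4", 80), table) == [("output", "port3")]
```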
[00176] Making forwarding decisions and performing actions occurs, based upon the forwarding table entry identified during packet classification, by executing the set of actions identified in the matched forwarding table entry on the packet.
[00177] However, when an unknown packet (for example, a “missed packet” or a “match-miss” as used in OpenFlow parlance) arrives at the data plane 3180, the packet (or a subset of the packet header and content) is typically forwarded to the centralized control plane 3176. The centralized control plane 3176 will then program forwarding table entries into the data plane 3180 to accommodate packets belonging to the flow of the unknown packet. Once a specific forwarding table entry has been programmed into the data plane 3180 by the centralized control plane 3176, the next packet with matching credentials will match that forwarding table entry and take the set of actions associated with that matched entry.
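The following minimal sketch (class and method names are hypothetical, not OpenFlow API calls) illustrates that miss-handling loop: the first packet of a flow is punted to the controller, which programs an entry so later packets of the same flow are handled locally:

```python
class Controller:
    """Stand-in for the centralized control plane's reaction to a miss."""
    def packet_in(self, element, key, packet):
        actions = [("output", "port1")]    # assumed policy decision
        element.flow_table[key] = actions  # program the forwarding table entry
        return actions

class DataPlaneElement:
    def __init__(self, controller):
        self.flow_table = {}               # flow key -> actions
        self.controller = controller

    def handle(self, key, packet):
        actions = self.flow_table.get(key)
        if actions is None:                # "match-miss": punt to controller
            actions = self.controller.packet_in(self, key, packet)
        return actions

dp = DataPlaneElement(Controller())
dp.handle(("ipv4", 80), b"first packet")  # miss: entry programmed via packet_in
dp.handle(("ipv4", 80), b"next packet")   # hit: handled in the data plane
```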
[00178] A network interface (NI) may be physical or virtual; and in the context of IP, an interface address is an IP address assigned to a NI, be it a physical NI or virtual NI. A virtual NI may be associated with a physical NI, with another virtual interface, or stand on its own (e.g., a loopback interface, a point-to-point protocol interface). A NI (physical or virtual) may be numbered (a NI with an IP address) or unnumbered (a NI without an IP address). A loopback interface (and its loopback address) is a specific type of virtual NI (and IP address) of a NE/VNE (physical or virtual) often used for management purposes; where such an IP address is referred to as the nodal loopback address. The IP address(es) assigned to the NI(s) of a ND are referred to as IP addresses of that ND; at a more granular level, the IP address(es) assigned to NI(s) assigned to a NE/VNE implemented on a ND can be referred to as IP addresses of that NE/VNE.
[00179] Next hop selection by the routing system for a given destination may resolve to one path (that is, a routing protocol may generate one next hop on a shortest path); but if the routing system determines there are multiple viable next hops (that is, the routing protocol generated forwarding solution offers more than one next hop on a shortest path - multiple equal cost next hops), additional criteria are used - for instance, in a connectionless network, Equal Cost Multi Path (ECMP) (also known as Equal Cost Multi Pathing, multipath forwarding, and IP multipath) may be used (e.g., typical implementations use as the criteria particular header fields to ensure that the packets of a particular packet flow are always forwarded on the same next hop to preserve packet flow ordering). For purposes of multipath forwarding, a packet flow is defined as a set of packets that share an ordering constraint. As an example, the set of packets in a particular TCP transfer sequence need to arrive in order, else the TCP logic will interpret the out of order delivery as congestion and slow the TCP transfer rate down.
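For instance, a hash over flow-identifying header fields keeps a flow pinned to one of the equal-cost next hops, as in this illustrative sketch (the field choice and hash function are assumptions; real implementations vary):

```python
import zlib

def ecmp_next_hop(flow_fields, next_hops):
    """Deterministically map a flow to one of several equal-cost next hops."""
    digest = zlib.crc32(repr(sorted(flow_fields.items())).encode("ascii"))
    return next_hops[digest % len(next_hops)]

hops = ["next_hop_a", "next_hop_b", "next_hop_c"]
flow = {"src": "198.51.100.7", "dst": "192.0.2.1", "sport": 4242, "dport": 443}
# Every packet of the flow hashes to the same next hop, preserving ordering
# (e.g., for the TCP transfer sequence described above).
assert ecmp_next_hop(flow, hops) == ecmp_next_hop(flow, hops)
```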
[00180] A Layer 3 (L3) Link Aggregation (LAG) link is a link directly connecting two NDs with multiple IP-addressed link paths (each link path is assigned a different IP address), with the load distribution decision across these different link paths performed at the ND forwarding plane.
[00181] Some NDs include functionality for authentication, authorization, and accounting (AAA) protocols (e.g., RADIUS (Remote Authentication Dial-In User Service), Diameter, and/or TACACS+ (Terminal Access Controller Access Control System Plus)). AAA can be provided through a client/server model, where the AAA client is implemented on a ND and the AAA server can be implemented either locally on the ND or on a remote electronic device coupled with the ND. Authentication is the process of identifying and verifying a subscriber. For instance, a subscriber might be identified by a combination of a username and a password or through a unique key. Authorization determines what a subscriber can do after being authenticated, such as gaining access to certain electronic device information resources (e.g., through the use of access control policies). Accounting is recording user activity. By way of a summary example, end user devices may be coupled (e.g., through an access network) through an edge ND (supporting AAA processing) coupled to core NDs coupled to electronic devices implementing servers of service/content providers. AAA processing is performed to identify for a subscriber the subscriber record stored in the AAA server for that subscriber. A subscriber record includes a set of attributes (e.g., subscriber name, password, authentication information, access control information, rate-limiting information, policing information) used during processing of that subscriber’s traffic.
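A toy model of that client/server split might look as follows (the record fields and method names are illustrative assumptions, not a RADIUS or Diameter implementation):

```python
from dataclasses import dataclass, field

@dataclass
class SubscriberRecord:
    name: str
    password: str
    access_control: set = field(default_factory=set)  # authorized resources
    rate_limit_kbps: int = 0                          # rate-limiting attribute

class AaaServer:
    def __init__(self, records):
        self.records = records                        # subscriber name -> record

    def authenticate(self, name, password):
        """Identify and verify the subscriber; return its record on success."""
        record = self.records.get(name)
        return record if record and record.password == password else None

    def authorize(self, record, resource):
        """Decide what an authenticated subscriber may access."""
        return resource in record.access_control

    def account(self, record, event):
        """Record user activity (here simply printed)."""
        print(f"accounting: {record.name} {event}")
```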
[00182] Certain NDs (e.g., certain edge NDs) internally represent end user devices (or sometimes customer premise equipment (CPE) such as a residential gateway (e.g., a router, modem)) using subscriber circuits. A subscriber circuit uniquely identifies within the ND a subscriber session and typically exists for the lifetime of the session. Thus, a ND typically allocates a subscriber circuit when the subscriber connects to that ND, and correspondingly deallocates that subscriber circuit when that subscriber disconnects. Each subscriber session represents a distinguishable flow of packets communicated between the ND and an end user device (or sometimes CPE such as a residential gateway or modem) using a protocol, such as the point-to-point protocol over another protocol (PPPoX) (e.g., where X is Ethernet or Asynchronous Transfer Mode (ATM)), Ethernet, 802.1Q Virtual LAN (VLAN), Internet Protocol, or ATM. A subscriber session can be initiated using a variety of mechanisms (e.g., manual provisioning, dynamic host configuration protocol (DHCP), DHCP/client-less internet protocol service (CLIPS), or Media Access Control (MAC) address tracking). For example, the point-to-point protocol (PPP) is commonly used for digital subscriber line (DSL) services and requires installation of a PPP client that enables the subscriber to enter a username and a password, which in turn may be used to select a subscriber record. When DHCP is used (e.g., for cable modem services), a username typically is not provided; but in such situations other information (e.g., information that includes the MAC address of the hardware in the end user device (or CPE)) is provided. The use of DHCP and CLIPS on the ND captures the MAC addresses and uses these addresses to distinguish subscribers and access their subscriber records.

[00183] A virtual circuit (VC), synonymous with virtual connection and virtual channel, is a connection oriented communication service that is delivered by means of packet mode communication. Virtual circuit communication resembles circuit switching, since both are connection oriented, meaning that in both cases data is delivered in correct order, and signaling overhead is required during a connection establishment phase. Virtual circuits may exist at different layers. For example, at layer 4, a connection oriented transport layer protocol such as Transmission Control Protocol (TCP) may rely on a connectionless packet switching network layer protocol such as IP, where different packets may be routed over different paths, and thus be delivered out of order. Where a reliable virtual circuit is established with TCP on top of the underlying unreliable and connectionless IP protocol, the virtual circuit is identified by the source and destination network socket address pair, i.e. the sender and receiver IP address and port number. However, a virtual circuit is possible since TCP includes segment numbering and reordering on the receiver side to prevent out-of-order delivery. Virtual circuits are also possible at Layer 3 (network layer) and Layer 2 (datalink layer); such virtual circuit protocols are based on connection oriented packet switching, meaning that data is always delivered along the same network path, i.e. through the same NEs/VNEs.
In such protocols, the packets are not routed individually and complete addressing information is not provided in the header of each data packet; only a small virtual channel identifier (VCI) is required in each packet; and routing information is transferred to the NEs/VNEs during the connection establishment phase; switching only involves looking up the virtual channel identifier in a table rather than analyzing a complete address. Examples of network layer and datalink layer virtual circuit protocols, where data is always delivered over the same path: X.25, where the VC is identified by a virtual channel identifier (VCI); Frame Relay, where the VC is identified by a VCI; Asynchronous Transfer Mode (ATM), where the circuit is identified by a virtual path identifier (VPI) and virtual channel identifier (VCI) pair; General Packet Radio Service (GPRS); and Multiprotocol Label Switching (MPLS), which can be used for IP over virtual circuits (each circuit is identified by a label).
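The table-lookup forwarding common to these protocols can be sketched as below (a hypothetical example; the port numbers and VCI values are arbitrary assumptions):

```python
class VcSwitch:
    """Forward on a small virtual channel identifier rather than a full address."""
    def __init__(self):
        # (in_port, in_vci) -> (out_port, out_vci), populated at setup time,
        # i.e., during the connection establishment phase described above.
        self.vc_table = {}

    def setup(self, in_port, in_vci, out_port, out_vci):
        self.vc_table[(in_port, in_vci)] = (out_port, out_vci)

    def switch(self, in_port, in_vci, payload):
        out_port, out_vci = self.vc_table[(in_port, in_vci)]  # simple lookup
        return out_port, (out_vci, payload)  # rewrite the VCI, keep the payload

sw = VcSwitch()
sw.setup(in_port=1, in_vci=17, out_port=2, out_vci=42)
print(sw.switch(1, 17, b"data"))  # (2, (42, b'data'))
```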
[00184] Certain NDs (e.g., certain edge NDs) use a hierarchy of circuits. The leaf nodes of the hierarchy of circuits are subscriber circuits. The subscriber circuits have parent circuits in the hierarchy that typically represent aggregations of multiple subscriber circuits, and thus the network segments and elements used to provide access network connectivity of those end user devices to the ND. These parent circuits may represent physical or logical aggregations of subscriber circuits (e.g., a virtual local area network (VLAN), a permanent virtual circuit (PVC) (e.g., for Asynchronous Transfer Mode (ATM)), a circuit-group, a channel, a pseudo-wire, a physical NI of the ND, and a link aggregation group). A circuit-group is a virtual construct that allows various sets of circuits to be grouped together for configuration purposes, for example aggregate rate control. A pseudo-wire is an emulation of a layer 2 point-to-point connection-oriented service. A link aggregation group is a virtual construct that merges multiple physical NIs for purposes of bandwidth aggregation and redundancy. Thus, the parent circuits physically or logically encapsulate the subscriber circuits.
[00185] Each VNE (e.g., a virtual router or a virtual bridge (which may act as a virtual switch instance in a Virtual Private LAN Service (VPLS))) is typically independently administrable. For example, in the case of multiple virtual routers, each of the virtual routers may share system resources but is separate from the other virtual routers regarding its management domain, AAA (authentication, authorization, and accounting) name space, IP address, and routing database(s). Multiple VNEs may be employed in an edge ND to provide direct network access and/or different classes of services for subscribers of service and/or content providers.
[00186] Within certain NDs, “interfaces” that are independent of physical NIs may be configured as part of the VNEs to provide higher-layer protocol and service information (e.g., Layer 3 addressing). The subscriber records in the AAA server identify, in addition to the other subscriber configuration requirements, to which context (e.g., which of the VNEs/NEs) the corresponding subscribers should be bound within the ND. As used herein, a binding forms an association between a physical entity (e.g., physical NI, channel) or a logical entity (e.g., circuit such as a subscriber circuit or logical circuit (a set of one or more subscriber circuits)) and a context’s interface over which network protocols (e.g., routing protocols, bridging protocols) are configured for that context. Subscriber data flows on the physical entity when some higher-layer protocol interface is configured and associated with that physical entity.
[00187] Some NDs provide support for implementing VPNs (Virtual Private Networks) (e.g., Layer 2 VPNs and/or Layer 3 VPNs). For example, the NDs where a provider’s network and a customer’s network are coupled are respectively referred to as PEs (Provider Edge) and CEs (Customer Edge). In a Layer 2 VPN, forwarding typically is performed on the CE(s) on either end of the VPN and traffic is sent across the network (e.g., through one or more PEs coupled by other NDs). Layer 2 circuits are configured between the CEs and PEs (e.g., an Ethernet port, an ATM permanent virtual circuit (PVC), a Frame Relay PVC). In a Layer 3 VPN, routing typically is performed by the PEs. By way of example, an edge ND that supports multiple VNEs may be deployed as a PE; and a VNE may be configured with a VPN protocol, and thus that VNE is referred to as a VPN VNE.
[00188] Some NDs provide support for VPLS (Virtual Private LAN Service). For example, in a VPLS network, end user devices access content/services provided through the VPLS network by coupling to CEs, which are coupled through PEs coupled by other NDs. VPLS networks can be used for implementing triple play network applications (e.g., data applications (e.g., high speed Internet access), video applications (e.g., television service such as IPTV (Internet Protocol Television), VoD (Video-on-Demand) service), and voice applications (e.g., VoIP (Voice over Internet Protocol) service)), VPN services, etc. VPLS is a type of layer 2 VPN that can be used for multi-point connectivity. VPLS networks also allow end user devices that are coupled with CEs at separate geographical locations to communicate with each other across a Wide Area Network (WAN) as if they were directly attached to each other in a Local Area Network (LAN) (referred to as an emulated LAN).
[00189] In VPLS networks, each CE typically attaches, possibly through an access network (wired and/or wireless), to a bridge module of a PE via an attachment circuit (e.g., a virtual link or connection between the CE and the PE). The bridge module of the PE attaches to an emulated LAN through an emulated LAN interface. Each bridge module acts as a “Virtual Switch Instance” (VSI) by maintaining a forwarding table that maps MAC addresses to pseudowires and attachment circuits. PEs forward frames (received from CEs) to destinations (e.g., other CEs, other PEs) based on the MAC destination address field included in those frames.
[00190] While the invention has been described in terms of several embodiments, those skilled in the art will recognize that the invention is not limited to the embodiments described, and can be practiced with modification and alteration within the spirit and scope of the appended claims. The description is thus to be regarded as illustrative instead of limiting.

Claims

What is claimed is:
1. A method (2600) for offloading packet processing of an application (2308), the method comprising:
receiving (2602), by a multi-application framework (302), a set of application specifications (1708) describing a first application;
selecting (2604), by the multi-application framework, a set of devices (2318, 2304) for deployment of the first application based on the set of application specifications;
generating (2606), by the multi-application framework, application-specific templates (1706) based on the set of application specifications, a set of multi-application reference templates (1704), and architectures (1702) of the set of devices;
receiving (2608), by the multi-application framework, a first application-specific implementation (7021), wherein the first application-specific implementation includes logic (7061) and data objects (7041) for deployment on the set of devices;
merging (2610), by the multi-application framework, the first application-specific implementation, which corresponds to the first application, with a second application-specific implementation (702N), which corresponds to a second application (2308), to produce a first merged multi-application implementation (1904); and
deploying (2612), by the multi-application framework, the first merged multi-application implementation to the set of devices.
2. The method of claim 1, further comprising:
detecting (2702), by the multi-application framework, a request (2202A) to remove the first application from the set of devices;
merging (2704), by the multi-application framework, active application-specific implementations, including the second application-specific implementation, to produce a second merged multi-application implementation that, in comparison to the first merged multi-application implementation, does not correspond to the first application-specific implementation; and
deploying (2706), by the multi-application framework, the second merged multi-application implementation to the set of devices,
wherein the request is from one of (1) the first application or (2) a developer of the first application or triggered by the multi-application framework.
3. The method of claim 1, further comprising:
detecting (2802), by the multi-application framework, a request (2102B) to update the first application-specific implementation, wherein the request includes an updated first application-specific implementation;
merging (2804), by the multi-application framework, the updated first application-specific implementation, which corresponds to the first application, with the second application-specific implementation, which corresponds to the second application, to produce a second merged multi-application implementation (1904); and
deploying (2806), by the multi-application framework, the second merged multi-application implementation to the set of devices,
wherein the request is from one of (1) the first application or (2) a developer of the first application or triggered by the multi-application framework.
4. The method of claim 1, wherein deploying the first merged multi-application implementation includes deploying the merged multi-application implementation on a set of programmable components in each device in the set of devices.
5. The method of claim 4, further comprising:
receiving (2902), by the multi-application framework on a first programmable component (102i) within a first device in the set of devices, a packet (308);
determining (2904), by the multi-application framework on the first programmable component, that the packet is associated with the first application; and
selecting (2906), by the multi-application framework in response to determining that the packet is associated with the first application, the first application-specific implementation included in the first merged multi-application implementation for use in processing the packet by the first programmable component within the first device.
6. The method of claim 5, wherein determining that the packet is associated with the first application comprises:
building (2904A), by a framework application signature selection component (304A), a signature key for the packet based on one or more of (1) system information for the packet, (2) a set of metadata (306) from a previous programmable component of the first device, (3) application-specific signature parameters from table lookups, and (4) application-specific signature parameters from parsing the packet; and
matching (2904B), by the framework application signature selection component, the signature key with a signature of the first application.
7. The method of claim 6, further comprising:
processing (2908), by the first application-specific implementation in the first programmable component in the first device in response to selecting the first application-specific implementation, the packet to cause the first programmable component to one or more of (1) perform a task of the first application and (2) generate a first set of metadata (306); and
following performing the task, either:
passing (2910), by the first programmable component, one or more of (1) the first set of metadata and (2) the packet to a second programmable component (1022) or a programmable component in the first device or a second device in the set of devices, or
dropping (2912), by the first programmable component, the packet.
8. The method of claim 7, further comprising:
receiving (3002), by a multi-application framework on the second programmable component within the first device, the packet and the first set of metadata;
determining (3004), by the multi-application framework on the second programmable component, that the packet is associated with the first application based on the packet and the first set of metadata; and
selecting (3006), by the multi-application framework in response to determining that the packet is associated with the first application, the first application-specific implementation included in the first merged multi-application implementation for use in processing the packet by the second programmable component within the first device.
9. The method of claim 8, wherein determining by the multi-application framework on the second programmable component that the packet is associated with the first application comprises:
building (3004A), by a framework application signature selection component (304B), a signature key for the packet based on one or more of (1) system information for the packet, (2) a set of metadata (306) from a previous programmable component of the first device, (3) application-specific signature parameters from table lookups, and (4) application-specific signature parameters from parsing the packet; and
matching (3004B), by the framework application signature selection component, the signature key with the signature of the first application.
10. The method of claim 9, further comprising:
processing (3008), by the first application-specific implementation in the second programmable component in the first device in response to selecting the first application-specific implementation, the packet to generate a second set of metadata; and
passing (3010), by the second programmable component, the metadata and the packet to a third programmable component (1023) in the first device.
11. The method of claim 1, wherein the set of application specifications include a definition of a signature that is a unique identifier for identifying all traffic and management flows of the first application.
12. The method of claim 1, wherein the merging is performed per device in the set of devices such that a separate first merged multi-application implementation is produced and deployed for each device in the set of devices.
13. A non-transitory machine-readable storage medium that provides instructions that, if executed by a processor, will cause said processor to perform operations comprising:
receiving (2602) a set of application specifications (1708) describing a first application;
selecting (2604) a set of devices (2318, 2304) for deployment of the first application based on the set of application specifications;
generating (2606) application-specific templates (1706) based on the set of application specifications, a set of multi-application reference templates (1704), and architectures (1702) of the set of devices;
receiving (2608) a first application-specific implementation (702i), wherein the first application-specific implementation includes logic and data objects for deployment on the set of devices;
merging (2610) the first application-specific implementation, which corresponds to the first application, with a second application-specific implementation (702N), which corresponds to a second application (2308), to produce a first merged multi-application implementation (1904); and
deploying (2612) the first merged multi-application implementation to the set of devices.
14. The non-transitory machine-readable storage medium of claim 13, wherein the instructions, if executed by the processor, will cause said processor to further perform operations comprising:
detecting (2702) a request (2202A) to remove the first application from the set of devices;
merging (2704) active application-specific implementations, including the second application-specific implementation, to produce a second merged multi-application implementation that, in comparison to the first merged multi-application implementation, does not correspond to the first application-specific implementation; and
deploying (2706) the second merged multi-application implementation to the set of devices,
wherein the request is from one of (1) the first application or (2) a developer of the first application or triggered by the multi-application framework.
15. The non-transitory machine-readable storage medium of claim 14, wherein the instructions, if executed by the processor, will cause said processor to further perform operations comprising:
detecting (2802) a request (2102B) to update the first application-specific implementation, wherein the request includes an updated first application-specific implementation;
merging (2804) the updated first application-specific implementation, which corresponds to the first application, with the second application-specific implementation, which corresponds to the second application, to produce a second merged multi-application implementation (1904); and
deploying (2806) the second merged multi-application implementation to the set of devices,
wherein the request is from one of (1) the first application or (2) a developer of the first application or triggered by the multi-application framework.
16. The non-transitory machine-readable storage medium of claim 13, wherein deploying the first merged multi-application implementation includes deploying the merged multi-application implementation on a set of programmable components in each device in the set of devices, and wherein the instructions, if executed by the processor, will cause said processor to further perform operations comprising:
receiving (2902), on a first programmable component (102i) within a first device in the set of devices, a packet (308);
determining (2904), on the first programmable component, that the packet is associated with the first application; and
selecting (2906), in response to determining that the packet is associated with the first application, the first application-specific implementation included in the first merged multi-application implementation for use in processing the packet by the first programmable component within the first device.
17. The non-transitory machine-readable storage medium of claim 16, wherein determining that the packet is associated with the first application comprises:
building (2904A), by a framework application signature selection component (304A), a signature key for the packet based on one or more of (1) system information for the packet, (2) a set of metadata (306) from a previous programmable component of the first device, (3) application-specific signature parameters from table lookups, and (4) application-specific signature parameters from parsing the packet; and
matching (2904B), by the framework application signature selection component, the signature key with a signature of the first application.
18. The non-transitory machine-readable storage medium of claim 17, wherein the instructions, if executed by the processor, will cause said processor to further perform operations comprising:
processing (2908), by the first application-specific implementation in the first programmable component in the first device in response to selecting the first application-specific implementation, the packet to cause the first programmable component to one or more of (1) perform a task of the first application and (2) generate a first set of metadata (306); and
following performing the task, either:
passing (2910), by the first programmable component, one or more of (1) the first set of metadata and (2) the packet to a second programmable component (1022) or a programmable component in the first device or a second device in the set of devices, or
dropping (2912), by the first programmable component, the packet.
19. The non-transitory machine-readable storage medium of claim 18, wherein the instructions, if executed by the processor, will cause said processor to further perform operations comprising:
receiving (3002), by the second programmable component within the first device, the packet and the first set of metadata;
determining (3004), by the second programmable component, that the packet is associated with the first application based on the packet and the first set of metadata; and
selecting (3006), in response to determining that the packet is associated with the first application, the first application-specific implementation included in the first merged multi-application implementation for use in processing the packet by the second programmable component within the first device.
20. The non-transitory machine-readable storage medium of claim 19, wherein determining by the second programmable component that the packet is associated with the first application comprises:
building (3004A), by a framework application signature selection component (304B), a signature key for the packet based on one or more of (1) system information for the packet, (2) a set of metadata (306) from a previous programmable component of the first device, (3) application-specific signature parameters from table lookups, and (4) application-specific signature parameters from parsing the packet; and
matching (3004B), by the framework application signature selection component, the signature key with the signature of the first application.
21. The non-transitory machine-readable storage medium of claim 20, wherein the instructions, if executed by the processor, will cause said processor to further perform operations comprising:
processing (3008), by the first application-specific implementation in the second programmable component in the first device in response to selecting the first application-specific implementation, the packet to generate a second set of metadata; and
passing (3010), by the second programmable component, the metadata and the packet to a third programmable component (1023) in the first device.
EP20704090.8A 2020-01-31 2020-01-31 Multi-application packet processing development framework Withdrawn EP4097582A1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2020/050813 WO2021152354A1 (en) 2020-01-31 2020-01-31 Multi-application packet processing development framework

Publications (1)

Publication Number Publication Date
EP4097582A1 2022-12-07

Family

ID=69500798

Family Applications (1)

Application Number Title Priority Date Filing Date
EP20704090.8A Withdrawn EP4097582A1 (en) 2020-01-31 2020-01-31 Multi-application packet processing development framework

Country Status (2)

Country Link
EP (1) EP4097582A1 (en)
WO (1) WO2021152354A1 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
PH12018050262A1 (en) * 2017-07-21 2019-06-17 Accenture Global Solutions Ltd Automatic provisioning of a software development environment
US11663052B2 (en) * 2018-01-08 2023-05-30 Telefonaktiebolaget Lm Ericsson (Publ) Adaptive application assignment to distributed cloud resources

Also Published As

Publication number Publication date
WO2021152354A1 (en) 2021-08-05


Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20220610

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20230308