WO2007084597A2 - System, network and methods for provisioning optical circuits in a multi-network, multi vendor environment - Google Patents

System, network and methods for provisioning optical circuits in a multi-network, multi vendor environment Download PDF

Info

Publication number
WO2007084597A2
WO2007084597A2 (PCT/US2007/001305)
Authority
WO
WIPO (PCT)
Prior art keywords
network
node
request
ipn
intelligent
Application number
PCT/US2007/001305
Other languages
French (fr)
Other versions
WO2007084597A3 (en)
Inventor
Alex Mashinsky
Original Assignee
Governing Dynamics, Llc
Application filed by Governing Dynamics, Llc
Publication of WO2007084597A2
Publication of WO2007084597A3


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 - Configuration management of networks or network elements
    • H04L41/0803 - Configuration setting
    • H04L41/0806 - Configuration setting for initial configuration or provisioning, e.g. plug-and-play
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/02 - Standardisation; Integration
    • H04L41/022 - Multivendor or multi-standard integration
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/04 - Network management architectures or arrangements
    • H04L41/044 - Network management architectures or arrangements comprising hierarchical management structures
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q - SELECTING
    • H04Q11/00 - Selecting arrangements for multiplex systems
    • H04Q11/0001 - Selecting arrangements for multiplex systems using optical switching
    • H04Q11/0062 - Network aspects
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q - SELECTING
    • H04Q11/00 - Selecting arrangements for multiplex systems
    • H04Q11/0001 - Selecting arrangements for multiplex systems using optical switching
    • H04Q11/0062 - Network aspects
    • H04Q11/0071 - Provisions for the electrical-optical layer interface
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q - SELECTING
    • H04Q11/00 - Selecting arrangements for multiplex systems
    • H04Q11/0001 - Selecting arrangements for multiplex systems using optical switching
    • H04Q11/0062 - Network aspects
    • H04Q2011/0079 - Operation or maintenance aspects
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q - SELECTING
    • H04Q11/00 - Selecting arrangements for multiplex systems
    • H04Q11/0001 - Selecting arrangements for multiplex systems using optical switching
    • H04Q11/0062 - Network aspects
    • H04Q2011/0086 - Network resource allocation, dimensioning or optimisation
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q - SELECTING
    • H04Q11/00 - Selecting arrangements for multiplex systems
    • H04Q11/0001 - Selecting arrangements for multiplex systems using optical switching
    • H04Q11/0062 - Network aspects
    • H04Q2011/0088 - Signalling aspects

Definitions

  • the present invention generally relates to the field of remote accessing and, more particularly, to a system, networks and methods for provisioning optical circuits in a multi-network, multi-vendor environment.
  • network peering, or the exchange of data between two different networks, may be divided into two groups.
  • the first group comprises open peering in, for example, Internet exchanges.
  • the second group comprises private, bilateral peering relationships.
  • Both types of network peering require participants to provide an overabundance of transaction capacity and negotiate each transaction on a case-by-case basis.
  • a network service provider interested in entering into peering agreements must acquire enough capacity to handle spikes in demand if it is to avoid lower quality of service (QoS) during peak access times.
  • Such "over-provisioning" often results in portions of the network sitting idle for the majority of the time, which .is disadvantageous and costly.
  • the Internet Engineering Task Force (IETF) is currently developing the Generalized Multi-Protocol Label Switching (GMPLS) protocol.
  • GMPLS is an extension of the Multi-Protocol Label Switching (MPLS) protocol. GMPLS is designed to support devices that perform switching in the packet, time, wavelength and space domains.
  • the Optical Internetworking Forum (OIF), which was organized to facilitate and accelerate the development of next-generation optical internetworking products, is working on specifications for an optical user network interface (UNI) that defines the protocol for a client device to request the provisioning of a circuit in an optical network.
  • related efforts include the Optical Border Gateway Protocol (OBGP) and the network-to-network interface (NNI).
  • the present invention is directed to a system, networks and methods for provisioning optical circuits in a multi-network, multi-vendor environment.
  • the network includes at least one sub-network, multiple nodes, intelligent provisioning nodes, an intelligent node controller (INC), a central optical cross-connect database (COCCD), a signaling control plane and customers.
  • Each node is in communication with at least one other node.
  • connections between nodes in different networks occur, for example, at public or private peering points, collocation facilities, or the like.
  • the nodes in a first network are different models, vendors, etc. than the nodes in a second network.
  • multiple connections are established between two nodes.
  • two optical switches may be connected by three OC-12s (i.e., Optical Carrier-12s) .
  • Interfacing with each node is an intelligent provisioning node (IPN) .
  • an IPN is typically a separate network element . In alternative contemplated embodiments, however, the IPN is integrated with existing nodes.
  • each node is in communication with another separate IPN.
  • each IPN is also in communication with at least one other IPN. In alternative embodiments, these connections occur across networks.
  • an individual IPN may be connected to more than one other IPN, or not connected to an IPN at all, such as when only connected to the INC.
  • each IPN is in communication with the centralized Intelligent Node Controller (INC), which is disposed "above" all the sub-networks.
  • the INC maintains a central optical cross-connect database (COCCD) that contains information relating to the topology and capability of each of the sub-networks.
  • the COCCD is used to calculate paths for virtual circuits within the involved networks.
  • the endpoints of such paths may be within one network or in different networks.
  • the customers are in communication with the sub- networks.
  • customers are also in direct communication with at least one IPN, such as an IPN that is associated with a node.
  • at least one customer may be in direct communication with the INC and/or customers may be directly connected to each other, such as at a private peering point. In certain embodiments, there is no difference between customers and nodes.
  • the network of inter-connected IPNs forms a signaling control plane for the interconnected networks .
  • This control plane is configured to signal at least one node in each of the sub-networks so as to configure and provide virtual circuits that may extend across multiple networks.
  • the INC forms a part of the signaling control plane.
  • the signaling control plane may utilize in-band or out-of-band signaling.
  • the signaling control plane may interact with a network of optical switches using a separate IP control channel (out-of-band).
  • for in-band signaling, the signaling control plane may utilize, for example, unused synchronous optical network (SONET) overhead bytes or MPLS label-switched routers.
  • the signaling control plane may utilize a combination of in-band and out-of-band signaling.
  • Each localized IPN performs a self-discovery process by which information related to local network topology and capability is discovered, including information on connections between nodes within a sub-network and cross-connections to nodes in other sub-networks.
  • Inventory of the local network topology and capability is stored locally at a respective IPN. In other embodiments, the inventory is uploaded to the INC.
  • a request for the provision of a circuit is received by a specific IPN from a customer.
  • the specific IPN may receive the request directly from the customer, or via a specific node.
  • the request includes information relating to the circuit that is to be provisioned, such as addresses for the origination and destination of the circuit (e.g. IP addresses), the required capacity or the required QoS.
  • the IPN, operating in conjunction with other IPNs, determines a path over which the requested circuit may be provisioned.
  • the specific IPN operates in conjunction with the INC and the COCCD.
  • the determined path may extend across multiple networks or may be located entirely within a single sub-network.
  • the contemplated embodiments of the invention enable real-time provisioning of virtual circuits across multi-carrier, multi-vendor networks.
  • a signaling control plane is provided that may be fully integrated with conventional network infrastructure and operation support systems, and that permits logical (e.g. layer 2) and layer 1 provisioning of virtual circuits across optical and non-optical networks.
  • rapid set-up and teardown of virtual circuits between end-user requested endpoints that may have many different networks in between are also achieved.
  • the ranking and rating of connections that may be established across multiple networks is permitted by the contemplated embodiments of the invention, based on various parameters involving both technical performance and business issues.
  • FIG. 1 is a schematic block diagram of the network in accordance with an embodiment of the invention.
  • FIG. 2 is a schematic block diagram of a network configured in accordance with the embodiments of the invention.
  • FIG. 3 is an exemplary system block diagram of a dense wavelength division multiplexing (DWDM) optical switching environment
  • FIG. 4 is an exemplary schematic block diagram illustrating an Intelligent Provisioning Node (IPN) shown in FIG. 1;
  • FIG. 5 is a schematic block diagram illustrating a network node configured in accordance with the invention;
  • FIG. 6 is a tabular illustration of a central optical cross-connect database in accordance with the invention.
  • FIGS. 7(a), 7(b) and 7(c) are a flow chart illustrating the steps of the method for determining and provisioning a virtual circuit (centralized);
  • FIGS. 8(a), 8(b) and 8(c) are a flow chart illustrating the steps of the method for determining and provisioning an optical circuit (distributed);
  • FIG. 9 is a functional block diagram illustrating testing of a virtual circuit established by the network of FIGS. 1 or FIG. 2;
  • FIGS. 10(a), 10(b) and 10(c) are exemplary schematic illustrations of the effects of the method of the invention on various layers of the exemplary networks of FIGS. 1 and 2; and
  • FIG. 11 is an exemplary block diagram illustrating the distributed aspects of an optical operating system.
  • FIG. 1 is a schematic block diagram of the network in accordance with an embodiment of the invention.
  • the network includes at least one sub-network 101, 102, 103 (long haul network (LHN), Metropolitan Area Network (MAN), LAN, etc.), multiple nodes 110-115, intelligent provisioning nodes (IPNs) 120-125, an intelligent node controller (INC) 130, a central optical cross-connect database (COCCD) 131, a signaling control plane 150 and customers 140, 141.
  • MANs are large computer networks typically arranged throughout a city. Generally, MANs use wireless infrastructure or optical fiber connections to link their sites. A MAN is optimized for a larger geographical area than a LAN, ranging from several blocks of buildings to entire cities. As with local networks, MANs can also depend on communications channels of moderate-to-high data rates. A MAN might be owned and operated by a single organization, but it usually will be used by many individuals and organizations. MANs might also be owned and operated as public utilities. They will often provide means for internetworking of local networks. With further reference to FIG. 1, sub-networks 101, 102, 103 may comprise a combination of networks, such as optical networks, router networks, long-haul networks, MANs or LANs.
  • Each individual sub-network comprises multiple nodes 110-115.
  • the nodes represent optical or non-optical switches, routers, cross-connects, Dense Wavelength Division Multiplexing (DWDM) equipment, and the like.
  • DWDM is an optical technology used to increase bandwidth over existing fiber optic backbones.
  • DWDM operates based on the principle of simultaneously combining and transmitting multiple signals at different wavelengths on the same fiber.
  • DWDM permits a single fiber to transmit data at speeds of up to 400 Gb/s.
  • One key advantage of DWDM is that it is protocol- and bit-rate-independent.
  • DWDM-based networks can transmit data in IP, ATM, synchronous optical network/synchronous digital hierarchy (SONET/SDH) and Ethernet, and can handle bit rates of between 100 Mb/s and 2.5 Gb/s. Therefore, DWDM-based networks can carry different types of traffic at different speeds over an optical channel. From a QoS standpoint, DWDM-based networks provide a low-cost way to quickly respond to customers' bandwidth demands and protocol changes.
  • each node 110-115 is in communication with at least one other node.
  • the communication protocol and technology utilized by each node may differ between different connections, depending on the nature of the nodes involved, i.e. vendor, configuration, etc. It should be noted that some connections between nodes occur within a single network, while others occur across multiple networks.
  • connections between nodes in different networks occur, for example, at public or private peering points, collocation facilities, etc.
  • nodes in a first network are different models, vendors, etc. than the nodes in a second network.
  • multiple connections are established between two nodes.
  • two optical switches may be connected by three OC-12s (i.e., Optical Carrier-12s) .
  • Interfacing with each node is an intelligent provisioning node (IPN) 120-125.
  • an IPN is typically a separate network element. In alternative contemplated embodiments, however, the IPN is integrated with existing nodes.
  • each node is in communication with another separate IPN.
  • node 111 is in communication with IPN 121.
  • a node may be in communication with a single IPN, or more than one IPN may be connected to a single node.
  • each IPN is also in communication with at least one other IPN.
  • IPN 121 is in communication with IPN 120.
  • these connections occur across networks, e.g. between IPN 121 and IPN 122.
  • an individual IPN may be connected to more than one other IPN, or not connected to an IPN at all, such as when only connected to the INC 130 described subsequently.
  • Each IPN is also in communication with a centralized Intelligent Node Controller (INC) 130.
  • the INC does not effectively reside in any one network, but instead is disposed "above" all the sub-networks. In certain embodiments, however, the INC does reside within a specific network. Even though only one INC is shown in FIGS. 1 and 2, it should be appreciated that more than one INC may exist.
  • the IPNs within sub-networks 101 and 102 may be in communication with one individual INC, while the IPNs within sub-network 103 would be in communication with a second INC.
  • with multiple INCs, it would also be possible for the multiple INCs to be in communication with each other.
  • the INC 130 maintains a central optical cross-connect database (COCCD) 131 that contains information relating to the topology and capability of each of the sub-networks.
  • the COCCD is used to calculate paths for virtual circuits within the involved networks.
  • the endpoints of such paths may be within one network, or in different networks.
  • Customers 140, 141 are in communication with subnetworks 101, 103, respectively.
  • the customers may be carriers, ISPs, corporations, individuals with desktop computers, LANs, WANs, etc.
  • customers 140 and 141 are represented as networks of nodes, such as a gigabit Ethernet corporate LAN or an IP router network.
  • customers are also in direct communication with at least one IPN, such as an IPN that is associated with a node.
  • customer 140 is in communication with IPN 120.
  • an IPN is associated with a customer, such as customer 141 in communication with IPN 128.
  • at least one customer may be in direct communication with the INC and/or customers may be directly connected to each other, such as at a private peering point.
  • node 110 may be a customer that requests a virtual circuit be provisioned to node 113.
  • the network of inter-connected IPNs forms a signaling control plane 150 for the interconnected networks 101-103. This control plane 150 is configured to signal at least one node in each of the sub-networks so as to configure and provide virtual circuits that may extend across multiple networks.
  • the INC 130 forms a part of the signaling control plane 150.
  • the signaling control plane 150 may utilize in-band or out-of-band signaling.
  • control plane 150 may interact with a network of optical switches using a separate IP control channel (out-of-band).
  • the signaling control plane 150 may utilize, for example, unused synchronous optical network (SONET) overhead bytes or MPLS label-switched routers.
  • the signaling control plane 150 may utilize a combination of in-band and out-of-band signaling.
  • SONET is a standard for connecting fiber-optic transmission systems.
  • SONET establishes Optical Carrier (OC) levels from 51.8 Mbps (OC-1) upward.
  • SONET permits communication carriers throughout the world to interconnect their existing digital carrier and fiber optic systems.
  • MPLS is an IETF initiative that integrates layer 2 information about network links (e.g., bandwidth, latency, utilization) into layer 3 (IP).
  • MPLS provides network operators with a great deal of flexibility to divert and route traffic around link failures, network congestion and bottlenecks.
  • ISPs are able to more efficiently manage different kinds of data streams based on packet priority and/or service plan. For instance, consumers who subscribe to a premium service plan, or consumers who receive a high number of streaming media or high-bandwidth content, can do so at a minimal level of latency and packet loss.
  • When packets enter an MPLS-based network, label edge routers (LERs) provide these packets with a label (i.e., an identifier). These labels contain not only information based on a routing table entry (i.e., destination, bandwidth, delay and other metrics), but also refer to an IP header field (i.e., source IP address), layer 4 socket number information and differentiated service.
  • labeled packets are then forwarded along labeled switch paths (LSPs) by label switch routers (LSRs).
  • each localized IPN performs a self-discovery process by which information related to local network topology and capability is discovered, including information on connections between nodes within a sub-network and cross-connections to nodes in other sub-networks.
  • Inventory of the local network topology and capability is stored locally at a respective IPN. In other embodiments, the inventory is uploaded to the INC 130.
  • a request for the provision of a circuit is received by a specific IPN 120-125 from a customer 140, 141.
  • the specific IPN 120-125 may receive the request directly from the customer, or via a specific node 110- 115.
  • the request includes information related to the circuit that is to be provisioned, such as addresses for the origination and destination of the circuit (e.g. IP addresses) , the required capacity or the required QoS.
  • the IPN, operating in conjunction with other IPNs, determines a path over which the requested circuit may be provisioned.
  • the specific IPN operates in conjunction with the INC 130 and the COCCD 131.
  • the determined path may extend across multiple networks or may be located entirely within a single sub-network.
  • signaling commands are sent from the IPNs 120-125 to the appropriate sub-network nodes so as to configure the nodes such that a virtual circuit is set up along the determined path.
  • the path is then tested. If the test is successful, the circuit is provided to the customer. After the path is used, the virtual circuit is "torn down", and the network segments used to construct the circuit are returned to inventory for future use.
  • FIG. 2 is a schematic block diagram of a network configured in accordance with the disclosed embodiments of the invention.
  • each optical subnet (i.e., subnet 1, subnet 2 and subnet 3) comprises three optical switches, such as the multiple nodes 110-115 of FIG. 1, each of which comprises a number of transmission and reception ports.
  • optical connections are unidirectional.
  • each port is typically comprised of a transmitter (T) and a receiver (R) .
  • the ports on the optical switches are connected to DWDM equipment that functions to "multiplex" many optical signals together for transmission on single fiber optical links 1-8, and to "de-multiplex" these signals for individual switching. These single fiber optical links are then variously bundled together in link bundles.
  • FIG. 3 is an exemplary system block diagram of a dense wavelength division multiplexing (DWDM) optical switching environment.
  • the exemplary system includes switches 40 and 50.
  • Switch 40 includes ports 401-412, each with respective T/R pairs.
  • switch 50 includes ports 501-512, each with respective T/R pairs.
  • optical links 12, 13 are shown.
  • DWDMs 20-32 are used to multiplex many optical signals together for transmission on single fiber optical links 1-8, and to "de-multiplex" these signals for individual switching. These single fiber optical links are then variously bundled together in bundles 300.
  • FIG. 4 is an exemplary schematic block diagram illustrating an Intelligent Provisioning Node (IPN) shown in FIG. 1.
  • the IPN interacts with optical switches, the INC, other IPNs, and is also in communication with other applications and systems.
  • the IPN is implemented on a variety of hardware platforms, such as a Sun UltraSPARC III running Solaris.
  • the IPN resides in the network as a separate network element that is in communication with the network node.
  • the IPN may reside on a PowerPC platform and may be in communication with an optical switch and the optical operating system (OOS), or the like.
  • the IPN may be integrated with one or more other network nodes.
  • a vendor may manufacture an optical switch that is capable of performing all the functions of the IPN, as described subsequently.
  • the IPN comprises three main elements, i.e., the optical operating system (OOS) 400, the User Network Interface (UNI) 410 and Application Programming Interfaces (APIs) 420.
  • the OOS 400 is used to manage communications from customers, nodes, the INC, other IPNs, databases, etc.
  • the OOS 400 is responsible for performing, i.e., initiating, self-discovery processes, path calculations and/or configuration commands, or the like.
  • the UNI interface 410 comprises a set of protocols used to communicate switching/routing commands with network nodes when setting up and tearing down a virtual circuit.
  • the UNI interface 410 is also responsible for communicating self-discovery commands to the nodes described in FIG. 1, as well as translating information, such as topology information received from the node.
  • the UNI interface 410 may be implemented over a TL1 interface, common object request broker architecture (CORBA), other application programming interfaces (APIs), IP, or other optical equipment management software (e.g., proprietary systems).
  • the IPN may functionally reside between an optical switch and the OSS of the switch.
  • the IPN captures information sent from the switch to the OSS, such as topology and capability information (e.g. link state information) .
  • the IPN also emulates commands from the OSS to the switch, initiating such processes as self-discovery (link state discovery) , provisioning (switch fabric) configuration, etc.
  • each IPN performs a "self-discovery" or "link-state discovery” process whereby inventory reports detailing all the connections under the IPN' s control are generated. These reports include information relating to the nodes connected to the IPN, the ports available on those nodes, the connections between nodes, including cross-network connections, the status of those connections, or the like. Information generated by self-discovery may include, for example, the address of each node in communication with the IPN, the address of the port-switch pair to which a given port is connected, the bandwidth available between the two ports, etc.
  • the self-discovery process may leverage interior gateway protocols (IGPs), such as open shortest path first (OSPF) or Intermediate System to Intermediate System (ISIS), as well as exterior border gateway protocols (EBGPs), such as optical BGP (OBGP).
  • Network service providers may request that only a specified number of ports be made available to the inventive system. For example, a given service provider may only permit availability of, for example, 10% of the ports on a node to the contemplated system.
  • the service provider's node may effectively be partitioned so that the inventive system only recognizes the identified ports as existing on the node.
  • it may be necessary to keep inventory of individual wavelengths available along a given port.
  • one or more nodes in a network may represent all-optical switches, i.e. switches that do not need to translate optical signals into electrical signals in order to communicate the signals through the switch.
  • the contemplated system may maintain an inventory of wavelengths that are in use and not in use.
  • the inventory reports generated by each IPN may be stored locally on the IPN, such as in a link state database (not shown) .
  • the inventory reports may be transmitted to an INC described in FIG. 1.
  • the INC maintains a COCCD (FIG. 1) that stores the information received from all the IPNs in the network. Based on the information contained in the COCCD, as well as information that may be available from other sources, such as simple network management protocol (SNMP) , OSS and Meta managers, the INC maps all the subnets and cross-connections available in all the interconnected subnets .
  • FIG. 5 is a schematic block diagram illustrating a network node configured in accordance with the invention. Specifically, FIG. 5 shows the software and controller extensions provided by the IPN to enable a manageable optical switch.
  • the switching infrastructure 500 or ingress/egress ports, the switching fabric 510, the switch controller 515, grooming controller 520 and protection controllers 525 are not under the exclusive control of the vendor's OSS. Rather, the IPN emulates the OSS (FIG. 3) , while globally improving the management of the switching components that will increase the capability of a switch's components.
  • the network manager 530 includes routing/switch topology, resource visibility/inventory, configuration, customer security and performance services .
  • Systems manager 535 includes fault-management, configuration, accounting, performance, and security (FCAPS) functions, telecommunications management network (TMN) model, transport layer interface (TLI) support and relational database management systems (RDBMS) for configuration and provisioning services.
  • Element manager 540 includes redundancy management, configuration control, accounting and security and protection services.
  • network manager 530 is augmented to enable a switch to cooperate with its (non-identical) peers transparently.
  • the systems manager 535 provides increased (local) resource management support and the element manager 540 increases management services. As a result, the utilization of the individual components, such as the ports, is increased.
  • the IPN of FIG. 5 is further provided with a provisioning controller 545 and a signaling controller 550. With open access to the switching infrastructure, such as via CORBA, SNMP or Transaction Language 1 (TL1) as shown in the services manager 555, the IPN becomes configured to manage the switch efficiently, as well as cooperate with its peers across standards-based signaling system 560 and enable the peers to do the same.
  • the signaling system 560 includes an UNI interface, an INC interface, an IPN interface, an NNI interface and a message protocol engine.
  • the provisioning controller 545 relies on the support of all of the foregoing components. However, it is the applications and SLA manager 560 that summarizes all information that is "learned" by the other components (i.e., other managers) and that provides the IPN with the capability to coordinate the provisioning and global management.
  • the provisioning controller 545 incorporates external or extended provisioning manager 565 that is a separate network element, and manages several IPNs (and their switches) from a centralized location.
  • the external or extended provisioning manager 565 includes shared memory, activation and a provisioning table.
  • FIG. 6 is a tabular illustration of a central optical cross-connect database (COCCD) in accordance with the invention.
  • the COCCD of the contemplated embodiments contains the current inventory and connection information related to the links between network nodes that are available to the system, and is typically maintained by an INC.
  • the COCCD is implemented using in-memory database technology, similar to that offered by companies such as TimesTen, Inc. Such database technology offers the high levels of performance necessary to accomplish the provisioning of virtual circuits in very short time periods, such as 50 ms.
  • at least portions of the COCCD can be replicated across multiple locations within one or multiple networks for easier access .
  • the COCCD of the contemplated embodiments may be populated using a variety of techniques, including self- discovery, multicast requests, etc.
  • each IPN performs a self- discovery process to obtain network topology and capability information related to the optical links that are local to the IPN, including links between networks, nodes, etc.
  • changes in network topology such as the addition or deletion of network nodes, may trigger a self-discovery process.
  • each IPN forwards this information to the INC for storage in the COCCD.
  • information stored in the COCCD may be analyzed in order to create a centralized map of the entire network available to the system.
  • a node ID 604 that identifies a network node is included in each record.
  • the node ID may be an IP address.
  • Associated with each node ID 604 is a number of port IDs 606 that identify individual ports on the node.
  • the protocol field 608 identifies the data protocol implemented (i.e., the encapsulation method) over the port, such as SONET or Gigabit Ethernet.
  • the bandwidth field 610 identifies the bandwidth of the connection associated with the port, such as OC-4 or OC-12. In certain embodiments, the bandwidth field can also include T-3.
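As one illustration of the FIG. 6 record layout (node ID, port ID, protocol, bandwidth), the sketch below keeps COCCD entries in an in-memory Python dictionary; the key scheme, the `remote` and `status` fields, and the sample addresses are assumptions, not part of the patent.

```python
# Hypothetical in-memory COCCD table keyed on (node_id, port_id); the patent
# describes the COCCD as using in-memory database technology but no schema.
coccd = {
    ("10.0.1.1", "port-3"): {"protocol": "SONET", "bandwidth": "OC-12",
                             "remote": ("10.0.2.7", "port-1"), "status": "available"},
    ("10.0.2.7", "port-1"): {"protocol": "SONET", "bandwidth": "OC-12",
                             "remote": ("10.0.1.1", "port-3"), "status": "available"},
}

def free_links():
    # Return the (node_id, port_id) pairs whose link is currently unreserved.
    return [key for key, rec in coccd.items() if rec["status"] == "available"]
```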
  • when the system determines two paths for a given request, such as one path for use and one path for backup purposes, it is advantageous if the two paths do not include links in the same link bundles. As a result, if an entire link bundle becomes damaged, such as by being severed by a back-hoe during construction, the second determined path will not be affected.
  • An addressing scheme is utilized to ensure that two paths do not include links in the same link bundles. In the preferred embodiment, the shared risk link group (SRLG) addressing scheme is used.
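A minimal sketch of how SRLG identifiers could be used to verify that a working path and its protection path share no risk group; the link IDs and the `srlg_of` mapping are hypothetical.

```python
def srlg_disjoint(primary_links, backup_links, srlg_of):
    """Return True if no link in the backup path shares a shared-risk
    link group (SRLG) with any link in the primary path.
    `srlg_of` maps a link ID to the set of SRLG IDs assigned to it."""
    primary_groups = set()
    for link in primary_links:
        primary_groups |= srlg_of.get(link, set())
    return all(srlg_of.get(link, set()).isdisjoint(primary_groups)
               for link in backup_links)

# Example: links "A-B" and "C-D" ride in the same fiber bundle (SRLG 7),
# so a backup path using "C-D" would be rejected.
srlgs = {"A-B": {7}, "B-E": {3}, "C-D": {7}, "C-F": {9}}
print(srlg_disjoint(["A-B", "B-E"], ["C-D"], srlgs))   # False
print(srlg_disjoint(["A-B", "B-E"], ["C-F"], srlgs))   # True
```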
  • an addressing scheme may also be used for individual wavelengths that are available over a given optical connection. For example, if an optical link carries a multiplexed signal (i.e., a signal comprised of more than one wavelength), there may be additional wavelengths available for use that could be multiplexed into the signal. These additional wavelengths represent unused capacity along the optical link, and therefore must be identified in a self-discovery process.
  • the COCCD may track a number of other parameters related to connections within and across specific networks. These parameters may be indicative of both technical concerns and business concerns. Some of the parameters may be automatically discovered via a self-discovery process, while others may be manually entered and associated with a given connection. Such parameters may include those a customer may request, such as endpoints for a connection (e.g., IP addresses), bandwidth (e.g., OC-12), security, protocol (e.g., IP) or term/duration (e.g., 1 hour) parameters.
  • Performance metrics/constraints may include latency, failover flexibility, packet loss, bit error rate and number of hops .
  • other parameters include constraints on resources or capabilities that may be required for a connection to be established. These constraints may be invisible to the customer, and may include available wavelength(s), wavelength conversion, device compatibility (at client, metro, core, etc.), mux/demux speed and availability, MPLS capabilities, OSS links, alarms, scheduling and network percentage fulfillment.
  • parameters associated with technical constraints that may be related to a connection may be stored in the COCCD, such as power, polarization, type of fiber and amplification mechanism.
  • Some or all of the information stored in the COCCD may also be stored at the IPN level .
  • the information stored in the COCCD is only stored at the IPN level.
  • various levels of hierarchical layers with regard to the nodes, IPNs and INCs may be implemented.
  • the contemplated embodiments of the present invention may function with no centralized control (e.g. no INC or COCCD), with all decision-making occurring in a distributed manner.
  • one INC controls all the IPNs in all the interconnected networks.
  • a network of INCs is provided, with each controlling some portion of the interconnected networks .
  • FIGS. 7(a), 7(b) and 7(c) are a flow chart illustrating the steps of the method for determining and provisioning a virtual circuit (centralized).
  • An application, router, desktop or the like sends a request to a local IPN, as indicated in step 702.
  • a request for a circuit is sent by a customer and received by an IPN.
  • customer 140 may transmit a request for a circuit to IPN 120.
  • customer 140 may represent, e.g. a network of IP routers, where the request is sent from an IP router that operates as a "border" for the network of routers .
  • the request is received by the IPN.
  • the customer 140 may have a direct connection to IPN 120, for example an IP connection, over which a request may be sent.
  • alternatively, the request is sent by the customer to a network node, such as node 110 of FIG. 1. The request would then be forwarded from node 110 to IPN 120 for processing.
  • the request may be sent directly to INC 130, bypassing IPN 120 altogether.
  • the customer may be a desktop computer from which a request may be initiated, such as by launching a desktop application.
  • the request is a web-based request, where a customer logs onto a Web page and enters a request via the Web page.
  • the network administrator of a data center may log onto a Web page and request an additional optical circuit be provisioned between the data center and e.g. a second data center.
  • a request typically includes information related to the circuit that is to be provisioned. This information may include addresses of the endpoints of the circuit (e.g. IP addresses, BGP addressing, etc.). The request may include varying levels of specificity regarding the endpoints of the requested circuit, such as which ports on the switches should be used or what wavelength should be used. Furthermore, a request may include the bandwidth required, such as OC-12, or the term desired, such as 30 minutes. In certain embodiments, a request typically includes other parameters with which the provisioned circuit must comply. Here, the parameters may comprise technical requirements, security requirements and business requirements, such as a required quality of service, acceptable costs or redundancy requirements.
  • the customer may specify acceptable parameters in the request, or the customer may have defined default parameters .
  • the customer may register a profile with the inventive system, where the profile defines a number of default values for the various parameters. The system is then permitted to access the profile of the customer when a request is received from the customer. As a result, the customer is relieved of the requirement to indicate all the desired parameters in every request.
  • the system may maintain sets of default parameters associated with each application. For example, if a request is received for a virtual circuit that is to be used for video conferencing, the system may access a set of default parameters that must typically be met for a video conferencing application to function properly.
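As a sketch only: the request fields below (endpoints, bandwidth, term, QoS, protection) mirror the kinds of parameters listed above, and the profile merge shows how registered defaults could relieve the customer of restating them; all names and values are illustrative.

```python
# Hypothetical request and profile structures; the patent lists the kinds of
# fields (endpoints, bandwidth, term, QoS) but not a concrete format.
default_profiles = {
    "customer-140": {"bandwidth": "OC-12", "term_minutes": 30,
                     "qos": "gold", "protection": True},
}

def build_request(customer_id, ingress, egress, **overrides):
    """Start from the customer's registered defaults, then apply any
    parameters given explicitly in this request."""
    request = dict(default_profiles.get(customer_id, {}))
    request.update({"customer": customer_id, "ingress": ingress,
                    "egress": egress, **overrides})
    return request

req = build_request("customer-140", "10.0.1.1", "10.0.3.9", term_minutes=60)
```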
  • the sending of the request may be triggered in a number of ways.
  • the request may be triggered manually, such as by a network administrator operating a border IP router.
  • an application running on or in conjunction with the border IP router may trigger a request. For example, heavy traffic levels may cause congestion along certain routes associated with the network of IP routers, as well as networks to which it is connected, which would then trigger the request by the application.
  • an application is developed that automatically launches a request for additional bandwidth along a given route when, for example, traffic or congestion levels exceed certain threshold parameters.
  • layer 3 type activity such as at the IP layer, may trigger changes in the layer 1 configuration of the network.
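A hedged sketch of the kind of automatic trigger described above: an application watches utilization along a route and launches a circuit request when a threshold is crossed. The threshold value and the callback signature are assumptions, not taken from the patent.

```python
CONGESTION_THRESHOLD = 0.85   # illustrative: 85% link utilization

def maybe_request_capacity(route, utilization, send_request):
    """Launch a request for an additional circuit along `route` when the
    measured utilization exceeds the configured threshold."""
    if utilization > CONGESTION_THRESHOLD:
        send_request({"ingress": route[0], "egress": route[-1],
                      "reason": "congestion", "utilization": utilization})
        return True
    return False
```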
  • a check is performed to determine whether a direct connection with an egress node is available, as indicated in step 706. Having received the request, IPN 120 first determines whether a direct connection exists between the egress node specified in the request and the ingress node with which the IPN is in communication.
  • Information related to the ability and capability to reach a local node is typically available to the IPN.
  • a link state database (not shown) may be maintained at node 110 (or at IPN 120) .
  • Such a database stores information related to all the connections from node 110.
  • the IPN compares parameters specified in the request to parameters associated with the currently available connections to determine whether any available connections satisfy the parameters of the request. If a direct connection is available, then the local IPN immediately reserves the direct connection and provisions it for the customer, as indicated in step 708. If a direct connection is unavailable, then the local IPN sends the request to the INC for additional processing, as indicated in step 710.
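One way the local IPN's check against its link state database might look, assuming entries shaped like the self-discovery records sketched earlier; the bandwidth comparison by OC level is an illustrative simplification.

```python
def find_direct_connection(link_state_db, request):
    """Return the first locally known link that terminates on the requested
    egress node and meets the requested bandwidth, or None if the request
    must be forwarded to the INC."""
    def oc_level(bw):            # "OC-12" -> 12
        return int(bw.split("-")[1])
    for link in link_state_db:
        if (link["remote_node"] == request["egress"]
                and link["status"] == "available"
                and oc_level(link["bandwidth"]) >= oc_level(request["bandwidth"])):
            return link
    return None
```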
  • the INC queries the COCCD for at least one path from the ingress to egress node specified in the request, as indicated in step 712.
  • the COCCD maintains information related to the topology and capability of the networks and nodes of the contemplated embodiments of the invention. Based on this information, the INC is able to compute a path through at least one of the involved networks to connect the ingress and egress nodes specified in the request.
  • the path calculation is based on open shortest path first (OSPF) or intermediate system to intermediate system (ISIS) protocols.
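The patent leaves the path computation to OSPF/ISIS-style algorithms; purely as an illustration, the sketch below runs an unweighted breadth-first search over an adjacency map that could be derived from the COCCD.

```python
from collections import deque

def shortest_path(adjacency, ingress, egress):
    """Breadth-first search over node adjacency derived from the COCCD;
    returns a list of node IDs, or None if the egress is unreachable.
    A real implementation would weight links (latency, cost, SRLG, ...)."""
    queue, seen = deque([[ingress]]), {ingress}
    while queue:
        path = queue.popleft()
        if path[-1] == egress:
            return path
        for nxt in adjacency.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

adj = {"110": ["111", "112"], "111": ["113"], "112": ["113"], "113": []}
print(shortest_path(adj, "110", "113"))   # ['110', '111', '113']
```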
  • the COCCD may contain varying amounts of additional information regarding the connections available to the system.
  • the COCCD contains enough current information for the INC to determine a path from the ingress node to the egress node that complies with all of the requirements set forth in the request.
  • the COCCD contains only enough information for the INC to determine at least one path between the egress and ingress nodes without being able to verify that the at least one path fulfills the other parameters specified in the request.
  • a check by the INC is performed to determine whether at least one path exists between the ingress and egress nodes, as indicated in step 714. If the INC determines that no paths exist between the ingress and egress nodes, a failure notification is transmitted to the customer, as indicated in step 716. That is, if the INC is unable to determine a path between the ingress and egress nodes, a failure message may be sent to the customer, indicating that the system currently cannot fulfill the customer's request.
  • the INC retrieves at least one determined path from the COCCD, as indicated in step 718. That is, the INC retrieves information relating to the at least one determined path from the COCCD.
  • the retrieved information may include node addresses, port IDs or wavelengths.
  • the INC may retrieve multiple paths .
  • the INC retrieves at least two paths for a request. For example, the request may specify that at least two paths must be determined for protection purposes .
  • the INC transmits a resource reservation command to each node (IPN) of the at least one path, as indicated in step 720.
  • a resource reservation command instructs the receiving node to reserve requested resources for the customer's use.
  • the command may include the port ID, wavelength or bandwidth that must be reserved for use in the at least one determined path.
  • reserve resource commands are also sent to nodes that comprise protection paths.
  • Each IPN receives a respective resource reservation command and queries a link state database of the respective IPN for resource availability, as indicated in steps 722a, 722b and 722c.
  • each of the IPNs comprising the at least one determined path receives the resource reservation command from the INC and determines whether the requested resources are available by querying the link state database.
  • each IPN, i.e., IPN #1, 2, ..., N, signals to the INC successful resource allocation, as indicated in steps 728a, 728b and 728c. That is, if the resources requested in the resource reservation command are available, the IPN signals a successful resource reservation to the INC, indicating, for example, that the resources are available, as well as any additional information with respect to the necessary resources, such as port ID or wavelength.
  • the INC transmits provisioning commands to the IPNs in the at least one determined path, as indicated in step 730.
  • the INC issues a provisioning command, instructing the IPNs to provision the determined path.
  • Each IPN then executes their respective provisioning command and signals completion to the INC, as indicated in step 732a, 732b and 732c.
  • the INC then receives the provisioning completion signals and generates a signal to the IPNs to test the at least one determined path, as indicated in step 734.
  • the circuit associated with the at least one determined path is then tested, as indicated in step 736.
  • a check is performed to determine whether the test of the circuit was successful, as indicated in step 738. If the test is unsuccessful, the INC transmits a provisioning command to a protection circuit, as indicated in step 740.
  • the system may determine more than one path for the entire initial path or portions of the path, depending on the level of protection specified in the request. In cases where the initial path that is provisioned fails the test, the protection path may be provisioned instead.
  • additional error detection steps are performed.
  • an additional step comprises determining which links in the connection are responsible for the failure.
  • a step comprises determining which links may be provisioned to route around the failure. If the test of the circuit was successful, then a hand-off to the customer is performed, as indicated in step 742.
  • the circuit associated with the at least one determined path is monitored to detect failure or degradation of the QoS, as indicated in step 744.
  • a check is performed to detect whether a failure has occurred, as indicated in step 746. If a failure of the circuit associated with the at least one determined path or degradation of the QoS is detected, then a return to step 740 occurs. If a failure of the circuit associated with the at least one determined path or degradation of the QoS is not detected, then the circuit is returned to inventory when the customer's transaction is completed, as indicated in step 748. Subsequent to expiration of the term of the circuit, the INC signals each individual IPN to release the resources comprising each segment of the circuit.
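The following sketch condenses the FIG. 7 sequence (reserve on each IPN, provision, test, fall back to a protection circuit, release on failure) into one function. The `inc` and IPN handles and their reserve/provision/test/release methods are placeholders, not an API defined by the patent.

```python
def provision_centralized(inc, request, primary, protection=None):
    """Sketch of the FIG. 7 flow.  `primary` and `protection` are lists of
    IPN handles along each candidate path; each handle is assumed to expose
    reserve/provision/release calls, and the INC a test_circuit call."""
    paths = [p for p in (primary, protection) if p]
    for path in paths:
        if not all(ipn.reserve(request) for ipn in path):   # steps 720-728
            for p in paths:
                for ipn in p:
                    ipn.release(request)                     # give back anything held
            return None
    for ipn in primary:
        ipn.provision(request)                               # steps 730-732
    if inc.test_circuit(primary):                            # steps 734-738
        return primary                                       # step 742: hand-off
    if protection:
        for ipn in protection:
            ipn.provision(request)                           # step 740: protection circuit
        if inc.test_circuit(protection):
            return protection
    for p in paths:
        for ipn in p:
            ipn.release(request)                             # segments back to inventory
    return None
```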
  • FIGS. 8(a), 8(b) and 8(c) are a flow chart illustrating the steps of the method for determining and provisioning an optical circuit (distributed).
  • An application, router, desktop or the like sends a request to a local IPN, as indicated in step 802.
  • the local IPN receives the request, as indicated in step 804.
  • the customer 140 may have a direct connection to IPN 120, such as an IP connection, over which a request may be sent.
  • alternatively, the request is sent by the customer to a network node, such as node 110 of FIG. 1.
  • the request would then be forwarded from node 110 to IPN 120 for processing.
  • the request may instead be sent directly to INC 130, bypassing IPN 120 altogether.
  • a check is performed to determine whether a direct connection with the egress node is available, as indicated in step 806. If a direct connection to the egress node is available, then the local IPN immediately reserves the direct connection and provisions it for the customer, as indicated in step 808. If a direct connection to the egress node is unavailable, then the local IPN sends the request to the IPN associated with the egress node, as indicated in step 810.
  • each IPN queries its associated link state database to determine whether any direct connections are available that meet the parameters set forth in the request.
  • a request specifies the amount of bandwidth, term or security. The query is performed to determine whether any connections currently available fulfill the requirements specified in the request .
  • at least one connection identifier associated with connections that comply with the request parameters is retrieved, as indicated in steps 818a and 818b.
  • the resource reservation command for the at least one retrieved connection identifier is transmitted to the associated node, as indicated in steps 820a and 820b.
  • each respective IPN transmits a reserve resource command to its associated node in order to reserve the connections associated with the at least one retrieved connection identifier.
  • the IPNs that form the at least one retrieved connection are determined, as indicated in steps 822a and 822b. Having identified the at least one connection that fulfills the requirements contained in the request, IDs associated with the IPNs that form the opposite ends of those connections are determined. In certain embodiments, the IDs are retrieved from the link state database. A multicast of the request to the determined IPNs is then performed, as indicated in steps 824a and 824b. Here, the request is only sent to those IPNs that reside on the opposite side of the connections that comply with the requirements in the request, as identified by the IPNs that form the opposite ends of the connections.
  • each IPN that receives the request first determines whether the same request has been received from an IPN in the egress/ingress request tree.
  • receipt of the same request indicates that an IPN that received both requests (i.e., the multicast request and the egress/ingress request tree) is now in possession of a complete path from ingress to egress node that fulfills the requirements set forth in the request.
  • Each request includes a "time-to-live" value that is set when the request is created. In accordance with the contemplated embodiments, the time-to-live value represents a time after which the request is no longer valid. As a result, the unlimited spread of requests through the interconnected networks can be controlled.
  • information related to the complete path is sent to the customer for approval, as indicated in step 834.
  • a check is performed to determine whether the customer has approved the complete path, as indicated in step 836. If the customer does not approve the path, then a new path for approval of the customer is determined, as indicated in step 838. If the customer approves the path, then a provision command is sent to every IPN in the path to instruct the IPNs to provision the circuit specified in the path, as indicated in step 840.
  • the system may determine more than one path for the entire initial path or portions of the path, depending on the level of protection specified in the request.
  • an additional step comprises determining which links in the connection are responsible for the failure. In another embodiment, a step comprises determining which links may be provisioned to route around the failure.
  • the circuit specified in the path is then tested, as indicated in step 842. A check is performed to determine whether the test was successful, as indicated in step 844. If the test of the circuit was successful, then a hand-off to the customer is performed, as indicated in step 848. Next, the circuit associated with the path is monitored to detect failure or degradation of the QoS, as indicated in step 850.
  • a check is performed to detect whether a failure has occurred, as indicated in step 852. If a failure of the circuit associated with the path or degradation of the QoS is detected, then a return to step 846 occurs. If a failure of the path or degradation of the QoS is not detected, then the circuit associated with the path is returned to inventory when the customer's transaction is completed, as indicated in step 854.
  • FIG. 9 is a functional block diagram illustrating the testing of a virtual circuit established by the network of FIG. 1 or FIG. 2.
  • in addition to a signaling connection, an IPN is provided with a data connection to its corresponding node.
  • IPNs test circuits that are set up in accordance with the disclosed embodiments of the system.
  • IPN 902 is connected to node 904 via signaling connection 910 and data connection 912.
  • IPN 906 is connected to node 908 via signaling connection 914 and data connection 916.
  • IPNs 902 and 906 are connected to each other via signaling connection 920, and nodes 904 and 908 are connected to each other via data connection 918.
  • data connection 918 may also comprise an indirect connection, such as a connection that includes a number of intermediate segments .
  • IPN 902 may configure node 904 to transmit data to node 908 over data connection 918. Such a configuration occurs over signaling connection 910 in accordance with the disclosed methods of the present invention.
  • IPN 902 may signal IPN 906 over signaling connection 920, informing IPN 906 that node 904 is about to send a test data transmission.
  • IPN 902 may then transmit a data signal to node 904 over data connection 912 that is to be switched to node 908. If node 908 receives the data signal, then the signal is switched to IPN 906 via data connection 916.
  • Upon receiving the data signal, IPN 906 informs IPN 902 via signaling connection 920 that the data transmission was successfully received. If IPN 902 does not receive the confirmation signal from IPN 906 within a set time period, then IPN 902 concludes that the data transmission was not received, and the test of the connection failed.
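From IPN 902's point of view, the FIG. 9 test might be driven roughly as below: announce the test over the signaling connection, inject a probe via the data connection, then wait with a timeout for the far-end confirmation. The method names on the IPN handle are invented for the sketch.

```python
import time

def test_circuit(ipn_local, ipn_remote, timeout_s=2.0):
    """Announce the test, inject a probe through the local node, then wait
    for the far-end IPN's confirmation over the signaling connection."""
    ipn_local.signal(ipn_remote, "TEST_START")      # signaling connection 920
    ipn_local.inject_probe()                        # data connection 912 -> 918
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if ipn_local.poll_signal(ipn_remote) == "TEST_OK":
            return True                             # probe arrived at far end
        time.sleep(0.05)
    return False                                    # no confirmation: test failed
```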
  • FIGS. 10(a), 10(b) and 10(c) are schematic illustrations of the effects of the method of the invention on various layers of the exemplary networks of FIGS. 1 and 2.
  • the nodes that accommodate routing and/or switching typically operate at up to layer 3 (i.e., the network layer) of the International Standard Organization's Open System Interconnect (ISO/OSI) network model.
  • a very common network layer protocol is the IP protocol, which forms a significant portion of all traffic in conventional communication networks, and which accommodates the routing of individual IP datagrams in a connectionless environment .
  • a SONET node can only switch SONET frames, but cannot simultaneously switch wavelengths or route IP datagrams.
  • an IP router can only route IP datagrams, even if this IP router operates with SONET as a datalink layer.
  • the increase in switch management that is accomplished by the usage of the disclosed IPN permits a switching/routing node that is provided with the correct hardware to efficiently switch and/or route data on all three layers .
  • layer 3, i.e., the network layer, is responsible for the routing of data in the network.
  • an example of a layer 3 protocol is the IP protocol.
  • a switch/router managed by an IPN that is configured in accordance with the contemplated embodiments is able to accommodate any combination of these protocols.
  • in such a scenario, it is the responsibility of the IPNs to ensure that the agreed-upon requirements of a connection will not be affected by any translations in a node. In combination with such a requirement, the IPNs are also responsible for programming the switch so as to enable it to effectively translate data between any layers, thereby enabling transmission between networks that operate using different protocols.
  • FIG. 10(a) is an illustration of an interface between each layer of the switch and the associated managing IPN in accordance with the contemplated embodiments of the invention. Also illustrated is the data path X through a switch, if a translation of data is required. Without any sort of electrical conversion, it is currently impossible to extract the data from an optical signal. However, it is contemplated that such a capability may become available in the future.
  • without such a conversion, a node is only capable of layer 1 switching. At the instant that a node is able to receive an electrical signal or convert an incoming photonic signal into electronic form, the node is able to extract the layer 3 data, which enables the node to encapsulate and transfer the data using a different datalink, such as a layer 2 protocol.
  • an IPN initiates translations between different network layers based upon criteria that can range from cost to utilization, such as an associated price per interface, where translation at the network layer is the most expensive and translation at the physical layer is the least expensive (a cost-based selection sketch follows this list).
  • FIG. 10(b) provides exemplary illustrations of changes to the layer 2 protocol to enable communication between interconnected networks.
  • the layer 1 protocol remains unchanged while the layer 2 protocol requires changing so as to enable communication between the interconnected networks.
  • Such a translation may be caused by high utilization on one interface, or an increase in costs associated with the interface.
  • the IPN will initially map connection requirements to establish equivalent connections across the different datalink layers with respect to their QoS, and then proceed with the required programming to accommodate layer conversions.
  • FIG. 10(b) illustrates two exemplary configurations 1000, 1100 for accommodating layer conversions. These two configurations illustrate the flexibility introduced with the usage of a managing IPN, such as IPN #1.
  • a physical layer is not connected through the first switch.
  • the managing IPN is able to set up a datalink layer connection between the first and second switches, as well as the second and third switches, while managing the translation at the second switch to enable the data transfer between these two connections.
  • the managing IPN utilizes its "intelligence" to determine the least expensive connection based on all cost parameters. Further, the managing IPN is configured to set up a physical layer connection through the first and third switches, while managing the translation of the data in the second switch.
  • FIG. 10 (c) illustrates another exemplary configuration for accommodating layer conversions.
  • the datalink layer needs to remain the same, while data needs to be forwarded to a different physical layer of another switch.
  • the managing IPN (e.g., IPN #1) programs the switch to enable such a translation to occur dynamically.
  • FIG. 11 is an exemplary block diagram illustrating the distributed aspects of an optical operating system. That is, FIG. 11 illustrates an exemplary contemplated embodiment of a network and system in which an IPN functions as a link between applications requesting circuits and layers 1, 2, 3 of the network.
  • the network and system is configured such that a combination of in-band and out-of-band signaling may be utilized.
  • QoS issues for, e.g., IP transmissions may be addressed by provisioning connections for IP traffic flows (e.g., MPLS), and the contemplated system and networks may be utilized to dynamically allocate, e.g., legacy SONET rings by enabling edge nodes with IPNs.
  • the update of router MPLS and BGP tables through IPN updates is also performed, and select applications are permitted to request a guaranteed QoS by provisioning on-demand circuits, while all other traffic uses shared "pipes" with other protocols such as resource reservation protocol (RSVP) or differentiated services (diffserv).
  • signaling between networks and IPNs (e.g., handshakes) and the dynamic concatenation of network segments from multiple service providers for failures and maintenance cutovers are also permitted.
  • the contemplated embodiments of the present invention may be advantageously utilized to eliminate as many layer 3 and layer 2 transactions as possible to reduce the cost of routing and switching in the network.
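The test exchange of FIG. 9 described in the bullets above lends itself to a short sketch. This is a minimal illustration under assumed names (the TestIPN class, its run_test/answer_test methods and the timeout value are not from the disclosure); it only mirrors the signal-then-send-then-confirm pattern.

```python
import queue
import threading

class TestIPN:
    """Minimal model of the FIG. 9 test exchange between IPN 902 and IPN 906."""

    def __init__(self, name):
        self.name = name
        self.signaling = queue.Queue()   # stands in for signaling connection 920
        self.data = queue.Queue()        # stands in for the data path 912 -> 918 -> 916
        self.peer = None

    def run_test(self, timeout_s=2.0):
        """IPN 902 side: announce the test, send the data, wait for confirmation."""
        # Configuring the local node over signaling connection 910 is elided here.
        self.peer.signaling.put(("TEST_PENDING", self.name))   # inform the peer over 920
        self.peer.data.put(("TEST_DATA", self.name))           # transmit the test data
        try:
            msg, _sender = self.signaling.get(timeout=timeout_s)
            return msg == "TEST_OK"
        except queue.Empty:
            return False                                        # no confirmation: test failed

    def answer_test(self):
        """IPN 906 side: wait for the announcement and the data, then confirm over signaling."""
        self.signaling.get()                                    # TEST_PENDING announcement
        self.data.get()                                         # test data on data connection 916
        self.peer.signaling.put(("TEST_OK", self.name))

ipn_902, ipn_906 = TestIPN("IPN 902"), TestIPN("IPN 906")
ipn_902.peer, ipn_906.peer = ipn_906, ipn_902
threading.Thread(target=ipn_906.answer_test, daemon=True).start()
print("circuit test passed" if ipn_902.run_test() else "circuit test failed")
```

The cost-driven choice of translation layer mentioned above can be sketched in the same spirit; the per-layer prices and the utilization threshold below are placeholder assumptions, chosen only so that translation at the network layer comes out most expensive and the physical layer least expensive, as the text states.

```python
LAYER_COST = {"physical": 1.0, "datalink": 3.0, "network": 10.0}   # assumed relative prices

def choose_translation_layer(supported_layers, utilization, max_util=0.8):
    """Return the least expensive supported layer whose interface is not over-utilized."""
    candidates = [layer for layer in supported_layers if utilization.get(layer, 0.0) < max_util]
    if not candidates:
        raise RuntimeError("no layer available for translation at this node")
    return min(candidates, key=lambda layer: LAYER_COST[layer])

# The physical-layer interface is congested, so the IPN falls back to a layer 2 translation.
print(choose_translation_layer(["physical", "datalink"], {"physical": 0.95, "datalink": 0.4}))
```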

Abstract

A system, networks and methods for provisioning optical circuits in a multi-network, multi-vendor environment. The network includes at least one sub-network, multiple nodes, intelligent provisioning nodes, an intelligent node controller (INC), a central optical cross-connect database (COCCD), a signaling control plane and customers, wherein a request for a circuit is sent from at least one customer to at least one intelligent provisioning node; a check is performed to determine whether direct connection with an egress node is available; and the direct connection with the egress node is reserved if the connection is available and the connection for the at least one customer is provisioned.

Description

SYSTEM, NETWORK AND METHODS FOR PROVISIONING OPTICAL
CIRCUITS IN A MULTI-NETWORK, MULTI-VENDOR ENVIRONMENT
RELATED APPLICATIONS
This application claims priority from U.S. Provisional Patent Application Serial Number 60/759,995 which was filed on January 18, 2006.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention generally relates to the field of remote accessing and, more particularly, to a system, networks and methods for provisioning optical circuits in a multi-network, multi-vendor environment.
2. Description of the Related Art
Currently, network peering, or the exchange of data between two different networks, may be divided into two groups. The first group comprises open peering in, for example, Internet exchanges. The second group comprises private, bilateral peering relationships. Both types of network peering, however, require participants to provide an overabundance of transaction capacity and negotiate each transaction on a case-by-case basis. For example, a network service provider interested in entering into peering agreements must acquire enough capacity to handle spikes in demand if it is to avoid lower quality of service (QoS) during peak access times. Such "over-provisioning" often results in portions of the network sitting idle for the majority of the time, which is disadvantageous and costly.
Furthermore, typical Internet peering agreements require lengthy contracts that are disadvantageous for a variety of reasons. For example, it may be difficult to anticipate future needs prior to formation of the contract and such agreements may also be prohibitively expensive .
Several standards are emerging to allow systems to request and provision circuits from nodes in the same or different networks. For example, the Internet Engineering Task Force (IETF) is a large open international community of network designers, operators, vendors and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet. The IETF is currently developing the Generalized Multi-Protocol Label Switching protocol
(GMPLS) for this purpose. GMPLS is an extension of the Multi-Protocol Label Switching (MPLS) protocol. GMPLS is designed to support devices that perform switching in the packet, time, wavelength and space domains. The Optical Internetworking Forum (OIF), which was organized to facilitate and accelerate the development of next-generation optical internetworking products, is working on specifications for an optical user network interface (UNI) that defines the protocol for a client device to request the provisioning of a circuit in an optical network. Lastly, similar work is underway on the Automatically Switched Transport Network (ASTN) and Optical Border Gateway Protocol (OBGP) to define a network-to-network interface (NNI). However, these proposed solutions fail to address many issues related to linking the QoS of network connections and events transpiring on the transmission control protocol/internet protocol (TCP/IP) layer with dynamic allocation of bandwidth at the layer 1 level.
SUMMARY OF THE INVENTION The present invention is directed to a system, networks and methods for provisioning optical circuits in a multi-network, multi-vendor environment. The network includes at least one sub-network, multiple nodes, intelligent provisioning nodes, an intelligent node controller (INC) , a central optical cross-connect database (COCCD) , a signaling control plane and customers .
Each node is in communication with at least one other node. In certain embodiments, connections between nodes in different networks occur, for example, at public or private peering points, collocation facilities, or the like. In other embodiments, the nodes in a first network are different models, vendors, etc. than the nodes in a second network. In other alternative embodiments, multiple connections are established between two nodes. For example, two optical switches may be connected by three OC-12s (i.e., Optical Carrier-12s) .
Interfacing with each node is an intelligent provisioning node (IPN). Here, an IPN is typically a separate network element. In alternative contemplated embodiments, however, the IPN is integrated with existing nodes. In addition, each node is in communication with another separate IPN. Moreover, each IPN is also in communication with at least one other IPN. In alternative embodiments, these connections occur across networks. In certain embodiments, an individual IPN may be connected to more than one other IPN, or not connected to an IPN at all, such as when only connected to the INC. Furthermore, each IPN is in communication with the centralized Intelligent Node Controller (INC), which is disposed "above" all the sub-networks.
The INC maintains a central optical cross-connect database (COCCD) that contains information relating to the topology and capability of each of the sub-networks. The COCCD is used to calculate paths for virtual circuits within the involved networks. The endpoints of such paths may be within one network, or in different networks, i.e., memory resident.
The customers are in communication with the sub- networks. In certain embodiments, customers are also in direct communication with at least one IPN, such as an IPN that is associated with a node. In accordance with additional embodiments, at least one customer may be in direct communication with the INC and/or customers may be directly connected to each other, such as at a private peering point. In certain embodiments, there is no difference between customers and nodes.
In accordance with the contemplated embodiments, the network of inter-connected IPNs forms a signaling control plane for the interconnected networks. This control plane is configured to signal at least one node in each of the sub-networks so as to configure and provide virtual circuits that may extend across multiple networks. In certain embodiments, the INC forms a part of the signaling control plane. The signaling control plane may utilize in-band or out-of-band signaling. For example, the signaling control plane may interact with a network of optical switches using a separate IP control channel (out-of-band). Alternatively, the signaling control plane may utilize, for example, unused synchronous optical network (SONET) overhead bytes or multi-protocol label switching (MPLS) labels in a network of label-switched routers. In other contemplated embodiments, the signaling control plane may utilize a combination of in-band and out-of-band signaling.
Each localized IPN performs a self-discovery process by which information related to local network topology and capability is discovered, including information on connections between nodes within a sub-network and cross-connections to nodes in other sub-networks. Inventory of the local network topology and capability is stored locally at a respective IPN. In other embodiments, the inventory is uploaded to the INC. A request for the provision of a circuit is received by a specific IPN from a customer. The specific IPN may receive the request directly from the customer, or via a specific node. Here, the request includes information relating to the circuit that is to be provisioned, such as addresses for the origination and destination of the circuit (e.g. IP addresses), the required capacity or the required QoS. Based in part on the request, the IPN, operating in conjunction with other IPNs, determines a path over which the requested circuit may be provisioned. In alternative embodiments, the specific IPN operates in conjunction with the INC and the COCCD. In the contemplated embodiments, the determined path may extend across multiple networks or may be located entirely within a single sub-network. Once the path is determined, signaling commands are sent from the IPNs to the appropriate sub-network nodes so as to configure the nodes such that a virtual circuit is set up along the determined path. Subsequent to setting up such a virtual circuit, the path is then tested. If the test is successful, it is provided to the customer. After the path is used, the virtual circuit is "torn down", and the network segments used to construct the circuit are returned to inventory for future use.
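As a compact illustration of the request-to-teardown life cycle just described, the sketch below walks a request through path determination, provisioning, testing, hand-off and teardown. It is a schematic outline under assumed names (CircuitRequest, determine_path, serve_request and the lambda stubs are not from the disclosure), not the system's implementation.

```python
from dataclasses import dataclass

@dataclass
class CircuitRequest:
    src: str            # origination address (e.g. an IP address)
    dst: str            # destination address
    bandwidth: str      # e.g. "OC-12"
    qos: str = "best-effort"

def serve_request(request, topology, provision, test, hand_off, tear_down):
    """Schematic life cycle: determine path -> provision -> test -> hand off -> tear down."""
    path = topology.determine_path(request)           # may span several sub-networks
    if path is None:
        return "failure: no path satisfies the request"
    provision(path)                                    # signaling commands to each node on the path
    if not test(path):
        tear_down(path)                                # segments go back to inventory
        return "failure: circuit test failed"
    hand_off(request, path)                            # customer uses the virtual circuit
    tear_down(path)                                    # after use, return segments to inventory
    return "success"

class DemoTopology:
    def determine_path(self, request):
        return [request.src, "node-B", request.dst]    # pretend path across two sub-networks

print(serve_request(CircuitRequest("node-A", "node-C", "OC-12"), DemoTopology(),
                    provision=lambda p: None, test=lambda p: True,
                    hand_off=lambda r, p: None, tear_down=lambda p: None))   # -> success
```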
The contemplated embodiments of the invention enable real-time provisioning of virtual circuits across multi-carrier, multi-vendor networks. A signaling control plane is provided that may be fully integrated with conventional network infrastructure and operation support systems, and that permits logical (e.g. layer 2) and layer 1 provisioning of virtual circuits across optical and non-optical networks. Moreover, rapid set- up and teardown of virtual circuits between end-user requested endpoints that may have many different networks in between are also achieved. The ranking and rating of connections that may be established across multiple networks is permitted by the contemplated embodiments of the invention, based on various parameters involving both technical performance and business issues. Moreover, the contemplated embodiments of the invention permit evolution of networks from protected rings and mesh topologies to "on-demand" provisioned circuits with real-time switching for each QoS. Other objects and features of the present invention will become apparent from the following detailed description considered in conjunction with the accompanying drawings. It is to be understood, however, that the drawings are designed solely for purposes of illustration and not as a definition of the limits of the invention, for which reference should be made to the appended claims. It should be further understood that the drawings are not necessarily drawn to scale and that, unless otherwise indicated, they are merely intended to conceptually illustrate the structures and procedures described herein.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing and other advantages and features of the invention will become more apparent from the detailed description of the preferred embodiments of the invention given below with reference to the accompanying drawings in which:
FIG. 1 is a schematic block diagram of the network in accordance with an embodiment of the invention;
FIG. 2 is a schematic block diagram of a network configured in accordance with the embodiments of the invention;
FIG. 3 is an exemplary system block diagram of a dense wavelength division multiplexing (DWDM) optical switching environment;
FIG. 4 is an exemplary schematic block diagram illustrating an Intelligent Provisioning Node (IPN) shown in FIG. 1; FIG. 5 is a schematic block diagram illustrating a network node configured in accordance with the invention;
FIG. 6 is a tabular illustration of a central optical cross-connect database in accordance with the invention;
FIGS. 7(a), 7(b) and 7(c) are flow charts illustrating the steps of the method for determining and provisioning a virtual circuit (centralized);
FIGS. 8(a), 8(b) and 8(c) are flow charts illustrating the steps of the method for determining and provisioning an optical circuit (distributed);
FIG. 9 is a functional block diagram illustrating testing of a virtual circuit established by the network of FIG. 1 or FIG. 2; FIGS. 10(a), 10(b) and 10(c) are exemplary schematic illustrations of the effects of the method of the invention on various layers of the exemplary networks of FIGS. 1 and 2; and
FIG. 11 is an exemplary block diagram illustrating the distributed aspects of an optical operating system.
DETAILED DESCRIPTION OF THE EXEMPLARY EMBODIMENTS
The present invention is directed to a system, networks and methods for provisioning optical circuits in a multi-network, multi-vendor environment. FIG. 1 is a schematic block diagram of the network in accordance with an embodiment of the invention. The network includes at least one sub-network 101, 102, 103 (long haul network (LHN), Metropolitan Area Networks (MAN), LAN, etc.), multiple nodes 110-115, intelligent provisioning nodes (IPNs) 120-125, intelligent node controller (INC) 130, central optical cross-connect database (COCCD) 131, signaling control plane 150 and customers 140, 141.
Metropolitan Area Networks or MANs are large computer networks typically arranged throughout a city. Generally, MANs use wireless infrastructure or optical fiber connections to link their sites. A MAN is optimized for a larger geographical area than a LAN, ranging from several blocks of buildings to entire cities. As with local networks, MANs can also depend on communications channels of moderate-to-high data rates. A MAN might be owned and operated by a single organization, but it usually will be used by many individuals and organizations. MANs might also be owned and operated as public utilities. They will often provide means for internetworking of local networks . With further reference to FIG. 1, sub-networks 101, 102, 103 may comprise a combination of networks, such as optical networks, router networks, long-haul networks, MANs or LANs . These different sub-networks may operate based on different protocols, such as Internet protocol (IP), Ethernet, asynchronous transfer mode (ATM), may possess equipment from different vendors, such as Cisco, Corvis, CIENA or Extreme Networks, and may be owned and operated by different carriers, such as UUNet or 360Networks . Each individual sub-network comprises multiple nodes 110-115. In certain embodiments, the nodes represent optical or non-optical switches, routers, cross-connects , Dense Wavelength Division Multiplexing (DWDM) equipment, and the like. DWDM is an optical technology used to increase bandwidth over existing fiber optic backbones . DWDM ' operates based on the principle of simultaneously combining and transmitting multiple signals at different wavelengths on the same fiber. In effect, one fiber is transformed into multiple virtual fibers. For example, if eight OC-48 signals were multiplexed into one fiber, then the carrying capacity of the fiber would be increased from 2.5 Gb/s to 20 Gb/s. DWDM permits a single fiber to transmit data at speeds of up to 400 Gb/s. One key advantage of DWDM is that it is protocol and bit-rate- independent. DWDM-based networks can transmit data in IP, ATM, synchronous optical network/synchronous digital hierarchy (SONET/SDH) and Ethernet, and can handle bit rates of between 100 Mb/s and 2.5 Gb/s. Therefore, DWDM-based networks can carry different types of traffic at different speeds over an optical channel . From a QoS standpoint, DWDM-based networks provide a low cost way to quickly respond to customers' bandwidth demands and protocol changes.
As further shown in FIG. 1, each node 110-115 is in communication with at least one other node. The communication protocol and technology utilized by each node may differ between different connections, depending on the nature of the nodes involved, i.e. vendor, configuration, etc. It should be noted that some connections between nodes occur within a single network, while others occur across multiple networks.
In certain embodiments, connections between nodes in different networks occur, for example, at public or private peering points, collocation facilities, etc. In other embodiments, nodes in a first network are different models, vendors, etc. than the nodes in a second network. In other alternative embodiments, multiple connections are established between two nodes. For example, two optical switches may be connected by three OC-12s (i.e., Optical Carrier-12s) .
Interfacing with each node is an intelligent provisioning node (IPN) 120-125. Here, an IPN is typically a separate network element. In alternative contemplated embodiments, however, the IPN is integrated with existing nodes. As shown in FIG. 1, each node is in communication with another separate IPN. For example, node 111 is in communication with IPN 121. Naturally, it will be appreciated that more than one node may be in communication with a single IPN, or that more than one IPN may be connected to a single node.
With additional reference to FIG. 1, each IPN is also in communication with at least one other IPN. For example, IPN 121 is in communication with IPN 120. In alternative embodiments, these connections occur across networks, e.g. between IPN 121 and IPN 122. In certain embodiments, an individual IPN may be connected to more than one other IPN, or not connected to an IPN at all, such as when only connected to the INC 130 described subsequently.
Each IPN is also in communication with a centralized Intelligent Node Controller (INC) 130. As shown in FIG. 2, the INC does not effectively reside in any one network, but instead it is disposed "above" all the sub-networks. In certain embodiments, however, the INC does reside within a specific network. Even though only one INC is shown in FIGS. 1 and 2, it should be appreciated that more than one INC may exist. For example, the IPNs within sub-networks 101 and 102 may be in communication with one individual INC, while the IPNs within sub-network 103 would be in communication with a second INC. Here, it would also be possible for the multiple INCs to be in communication with each other.
Returning to FIG. 1, the INC 130 maintains a central optical cross-connect database (COCCD) 131 that contains information relating to the topology and capability of each of the sub-networks. The COCCD is used to calculate paths for virtual circuits within the involved networks. The endpoints of such paths may be within one network, or in different networks, i.e., memory resident .
Customers 140, 141 are in communication with subnetworks 101, 103, respectively. The customers may be carriers, ISPs, corporations, individuals with desktop computers, LANs, WANs, etc. As shown in FIG. 1, customers 140 and 141 are represented as networks of nodes, such as a gigabit Ethernet corporate LAN or an IP router network.
In certain embodiments, customers are also in direct communication with at least one IPN, such as an IPN that is associated with a node. As shown in FIG. 1, customer 140 is in communication with IPN 120. In an alternative embodiment, an IPN is associated with a customer, such as customer 141 in communication with IPN 128. In accordance with additional embodiments, at least one customer may be in direct communication with the INC and/or customers may be directly connected to each other, such as at a private peering point. In certain embodiments, there is no difference between customers and nodes. For example, node 110 may be a customer that requests a virtual circuit be provisioned to node 113. In accordance with the contemplated embodiments, the network of inter-connected IPNs forms a signaling control plane 150 for the interconnected networks 101-103. This control plane 150 is configured to signal at least one node in each of the sub-networks so as to configure and provide virtual circuits that may extend across multiple networks. In certain embodiments, the INC 130 forms a part of the signaling control plane 150. The signaling control plane 150 may utilize in-band or out-of-band signaling. For example, the signaling control plane 150 may interact with a network of optical switches using a separate IP control channel (out-of-band). Alternatively, the signaling control plane 150 may utilize, for example, unused synchronous optical network (SONET) overhead bytes or multi-protocol label switching (MPLS) labels in a network of label-switched routers. In other contemplated embodiments, the signaling control plane 150 may utilize a combination of in-band and out-of-band signaling.
SONET is a standard for connecting fiber-optic transmission systems, which defines interface standards at the physical layer of the OSI seven-layer model, and defines a hierarchy of interface rates that allow data streams at different rates to be multiplexed. SONET establishes Optical Carrier (OC) levels from 51.8 Mbps (OC-1) to 9.95 Gbps (OC-192). SONET permits communication carriers throughout the world to interconnect their existing digital carrier and fiber optic systems.
MPLS is an IETF initiative that integrates layer 2 information about network links (i.e., bandwidth, latency or utilization) into layer 3 IP within a particular autonomous system or ISP in order to simplify and improve IP-packet exchange. MPLS provides network operators with a great deal of flexibility to divert and route traffic around link failures, network congestion and bottlenecks. From a QoS standpoint, ISPs are able to more efficiently manage different kinds of data streams based on packet priority and/or service plan. For instance, consumers who subscribe to a premium service plan, or consumers who receive a high number of streaming media or high-bandwidth content, can do so at a minimal level of latency and packet loss.
When packets enter a MPLS-based network, label edge routers (LERs) provide these packets with a label (i.e., an identifier) . These labels contain not only information based on a routing table entry (i.e., destination, bandwidth, delay and other metrics) , but also refer to an IP header field (i.e., source IP address) , layer 4 socket number information and differentiated service. Once this classification is completed and mapped, different packets are assigned to corresponding labeled switch paths (LSPs) , where label switch routers (LSRs) place outgoing labels on the packets . These LSPs provide network operators with the ability to divert and route traffic based on data-stream type and Internet-access customer. With further reference to FIG. 1, each localized IPN performs a self-discovery process by which information related to local network topology and capability is discovered, including information on connections between nodes within a sub-network and cross-connections to nodes in other sub-networks. Inventory of the local network topology and capability is stored locally at a respective IPN. In other embodiments, the inventory is uploaded to the INC 130.
A request for the provision of a circuit is received by a specific IPN 120-125 from a customer 140, 141. The specific IPN 120-125 may receive the request directly from the customer, or via a specific node 110-115. Here, the request includes information related to the circuit that is to be provisioned, such as addresses for the origination and destination of the circuit (e.g. IP addresses), the required capacity or the required QoS. Based in part on the request, the IPN, operating in conjunction with other IPNs, determines a path over which the requested circuit may be provisioned. In alternative embodiments, the specific IPN operates in conjunction with the INC 130 and the COCCD 131. In the contemplated embodiments, the determined path may extend across multiple networks or may be located entirely within a single sub-network. Once the path is determined, signaling commands are sent from the IPNs 120-125 to the appropriate sub-network nodes so as to configure the nodes such that a virtual circuit is set up along the determined path. Subsequent to setting up such a virtual circuit, the path is then tested. If the test is successful, it is provided to the customer. After the path is used, the virtual circuit is "torn down", and the network segments used to construct the circuit are returned to inventory for future use.
FIG. 2 is a schematic block diagram of a network configured in accordance with the disclosed embodiments of the invention. Here, each optical subnet (i.e., subnet 1, subnet 2 and subnet 3) comprises three optical switches, such as the multiple nodes 110-115 of FIG. 1, which each comprise a number of transmission and reception ports. Typically, optical connections are unidirectional. As a result, it is necessary to utilize two links for bi-directional communication. For this reason, each port is typically comprised of a transmitter (T) and a receiver (R) . The ports on the optical switches are connected to DWDM equipment that function to "multiplex" many optical signals together for transmission on a single fiber optical link 1-8, and "de-multiplex" these signals for individual switching. These single fiber optical links are then variously bundled together in link bundles.
It is common for multiple optical links to be bundled, as described previously, when they are placed into service to save on infrastructure costs. However, this bundling can be problematic when, for example, a link bundle is damaged (e.g. accidentally severed by a construction backhoe), thereby rendering all the optical links in the bundle inoperable. Therefore, in the contemplated embodiments, it is necessary to track and record which optical links are bundled together as shared risk link groups (SRLGs), so that bundled optical links are not included together in predetermined restoration paths.
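Because links that share a bundle share fate, a working path and its protection path should not traverse the same shared risk link group. The sketch below shows one way such a disjointness check could look; the SRLG identifiers, node names and link tuples are illustrative assumptions, not data from the disclosure.

```python
def srlg_ids(path, link_srlg):
    """Collect the SRLG identifiers of every link on a path.

    `path` is a sequence of node names; `link_srlg` maps a link (node pair)
    to the identifier of the bundle/conduit it is carried in.
    """
    ids = set()
    for a, b in zip(path, path[1:]):
        ids.add(link_srlg[frozenset((a, b))])
    return ids

def srlg_disjoint(primary, backup, link_srlg):
    """True if the two paths share no bundle, i.e. no single backhoe cut takes out both."""
    return srlg_ids(primary, link_srlg).isdisjoint(srlg_ids(backup, link_srlg))

# Illustrative data: links A-B and A-C ride in the same conduit (SRLG 7).
link_srlg = {frozenset(("A", "B")): 7, frozenset(("A", "C")): 7,
             frozenset(("B", "D")): 9, frozenset(("C", "D")): 12}
print(srlg_disjoint(["A", "B", "D"], ["A", "C", "D"], link_srlg))   # False: both use SRLG 7
```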
FIG. 3 is an exemplary system block diagram of a dense wavelength division multiplexing (DWDM) optical switching environment. With specific reference to FIG. 3, the exemplary system includes switches 40 and 50. Switch 40 includes ports 401-412, each with respective T/R pairs. Similarly, switch 50 includes ports 501-512, each with respective T/R pairs. Also shown are optical links 12, 13 (see COCCD 131 of FIG. 1) . As described previously, the COCCD is used to calculate paths for virtual circuits within the involved networks. DWDM 20- 32 are used to multiplex many optical signals together for transmission on a single fiber optical link 1-8, and "de-multiplex" these signals for individual switching. These single fiber optical links are then variously bundled together in bundles 300.
FIG. 4 is an exemplary schematic block diagram illustrating an Intelligent Provisioning Node (IPN) shown in FIG. 1. In accordance with the contemplated embodiments of the invention, the IPN interacts with optical switches, the INC, other IPNs, and is also in communication with other applications and systems. In one embodiment, the IPN is implemented on a variety of hardware platforms, such as a Sun UltraSPARC III running Solaris. In accordance with the contemplated embodiments, the IPN resides in the network as a separate network element that is in communication with the network node. For example, the IPN may reside on a Power PC platform and may be in communication with an optical switch and the optical operating system (OOS), or the like.
In certain embodiments, the IPN may be integrated with one or more other network nodes. For example, a vendor may manufacture an optical switch that is capable of performing all the functions of the IPN, as described subsequently. From a functional perspective, the OOS comprises three main elements, i.e., the OSS 400, the User Network Interface (UNI) 410 and Application Programming Interfaces (APIs) 420.
The OOS 400 is used to manage communications from customers, nodes, the INC, other IPNs, databases, etc. The OOS 400 is responsible for performing, i.e., initiating, self-discovery processes, path calculations and/or configuration commands, or the like.
The UNI interface 410 comprises a set of protocols used to communicate switching/routing commands with network nodes when setting up and tearing down a virtual circuit. The UNI interface 410 is also responsible for communicating self-discovery commands to the nodes described in FIG. 1, as well as translating information, such as topology information received from the node. Depending on the node that is used, the UNI interface 410 may be implemented over a TL1 interface, common object request broker architecture (CORBA), other application programming interfaces (APIs), IP, or other optical equipment management software (e.g., proprietary systems). In contemplated embodiments, an optical UNI may be used, which may include at least some of the functionality necessary for implementation in the IPN.
It is contemplated that other APIs may be utilized by the IPN to communicate with other applications (e.g., including integration with network management software, billing software), communicate with other OSSs, other applications (e.g., the application used by customer A to request a circuit) and to communicate with optical switch management software (e.g., Corvis's CorWave Manager and CIENA'S Lightworks OS) . In certain embodiments, the IPN may functionally reside between an optical switch and the OSS of the switch. Here, the IPN captures information sent from the switch to the OSS, such as topology and capability information (e.g. link state information) . The IPN also emulates commands from the OSS to the switch, initiating such processes as self-discovery (link state discovery) , provisioning (switch fabric) configuration, etc.
In accordance with the contemplated embodiments of the invention, each IPN performs a "self-discovery" or "link-state discovery" process whereby inventory reports detailing all the connections under the IPN's control are generated. These reports include information relating to the nodes connected to the IPN, the ports available on those nodes, the connections between nodes, including cross-network connections, the status of those connections, or the like. Information generated by self-discovery may include, for example, the address of each node in communication with the IPN, the address of the port-switch pair to which a given port is connected, the bandwidth available between the two ports, etc. Interior gateway protocols (IGPs) may be utilized to discover local network and node topology and capability, such as open shortest path first (OSPF), Intermediate System to Intermediate System (ISIS), etc. Alternatively, exterior border gateway protocols (EBGPs) may be utilized to discover topology and capability information in other networks, such as optical BGP (OBGP).
Network service providers may request that only a specified number of ports be made available to the inventive system. For example, a given service provider may only permit availability of, for example, 10% of the ports on a node to the contemplated system. Here, the service provider's node may effectively be partitioned so that the inventive system only recognizes the identified ports as existing on the node. It should be noted that in networks that deploy DWDM technology, it may be necessary to keep inventory of individual wavelengths available along a ' given port. For example, one or more nodes in a network may represent all-optical switches, i.e. switches that do not need to translate optical signals into electrical signals in order to communicate the signals through the switch. Here, it may be necessary to use the same value wavelength between multiple all-optical switches. As a result, the contemplated system may maintain .an inventory of wavelengths that are in use and not in use. In accordance with the contemplated embodiments, the inventory reports generated by each IPN may be stored locally on the IPN, such as in a link state database (not shown) . Alternatively, the inventory reports may be transmitted to an INC described in FIG. 1. The INC maintains a COCCD (FIG. 1) that stores the information received from all the IPNs in the network. Based on the information contained in the COCCD, as well as information that may be available from other sources, such as simple network management protocol (SNMP) , OSS and Meta managers, the INC maps all the subnets and cross-connections available in all the interconnected subnets .
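The self-discovery behavior just described — a carrier exposing only a fraction of a node's ports, and tracking which wavelengths on a port are free — could be sketched roughly as follows. The report fields and the 10% figure come from the example in the text; everything else (class and function names, the far-end addressing) is an assumption made for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PortReport:
    port_id: str
    connected_to: str                      # address of the far-end port/switch pair
    bandwidth: str                         # e.g. "OC-48"
    wavelengths_free: List[str] = field(default_factory=list)   # unused wavelengths on a DWDM port

def discover(node_address, all_ports, exposed_fraction=0.10):
    """Build an inventory report, exposing only the carrier-permitted share of ports.

    `all_ports` is the full list of PortReport entries learned from the node; in the
    text's example the carrier only makes 10% of them visible to the provisioning system.
    """
    visible = all_ports[: max(1, int(len(all_ports) * exposed_fraction))]
    return {"node": node_address, "ports": visible}

ports = [PortReport(f"p{i}", f"far-node/p{i}", "OC-48", ["1550.12nm"]) for i in range(20)]
report = discover("10.0.0.1", ports)
print(len(report["ports"]), "of", len(ports), "ports exposed")   # -> 2 of 20 ports exposed
```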
FIG. 5 is a schematic block diagram illustrating a network node configured in accordance with the invention. Specifically, FIG. 5 shows the software and controller extensions provided by the IPN to enable a manageable optical switch. In accordance with the contemplated embodiments, the switching infrastructure 500 or ingress/egress ports, the switching fabric 510, the switch controller 515, grooming controller 520 and protection controllers 525 are not under the exclusive control of the vendor's OSS. Rather, the IPN emulates the OSS (FIG. 3) , while globally improving the management of the switching components that will increase the capability of a switch's components.
Current services that are available to switches through their OSS are summarized in the network manager 530, systems manager 535 and element manager 540. As shown, the network manager 530 includes routing/switch topology, resource visibility/inventory, configuration, customer security and performance services . Systems manager 535 includes fault-management, configuration, accounting, performance, and security (FCAPS) functions, telecommunications management network (TMN) model, transport layer interface (TLI) support and relational database management systems (RDBMS) for configuration and provisioning services. Element manager 540 includes, redundancy management, configuration control, accounting and security and protection services.
In accordance with the contemplated embodiments, network manager 530 is augmented to enable a switch to cooperate with its (non-identical) peers transparently. The systems manager 535 provides increased (local) resource management support and the element manager 540 increases management services. As a result, the utilization of the individual components, such as the ports, is increased. The IPN of FIG. 5 is further provided with a provisioning controller 545 and a signaling controller 550. With open access to the switching infrastructure, such as via CORBA, SNMP or Transaction Language 1 (TLl) as shown in the services manager 555, the IPN becomes configured to manage the switch efficiently, as well as cooperate with its peers across standards based signaling system 560 and enable the peers to do the same. The signaling system 560 includes an UNI interface, an INC interface, an IPN interface, an NNI interface and a message protocol engine. The provisioning controller 545 relies on the support of all of the foregoing components. However, it is the applications and SLA manager 560 that summarizes all information that is "learned" by the other components (i.e., other managers) and that provides the IPN with the capability to coordinate the provisioning and global management. In alternative embodiments, the provisioning controller 545 incorporates external or extended provisioning manager 565 that is a separate network element, and manages several IPNs (and their switches) from a centralized location. The external or extended provisioning manager 565 includes shared memory, activation and a provisioning table.
FIG. 6 is a tabular illustration of a central optical cross-connect database (COCCD) in accordance with the invention. The COCCD of the contemplated embodiments contains the current inventory and connection information related to the links between network nodes that are available to the system, and is typically maintained by an INC. In certain embodiments, the COCCD is implemented using in-memory database technology, similar to that offered by companies such as TimesTen, Inc. Such database technology offers the high levels of performance necessary to accomplish the provisioning of virtual circuits in very short time periods, such as 50 ms. In other embodiments, at least portions of the COCCD can be replicated across multiple locations within one or multiple networks for easier access.
The COCCD of the contemplated embodiments may be populated using a variety of techniques, including self- discovery, multicast requests, etc. In accordance with the specific embodiments, each IPN performs a self- discovery process to obtain network topology and capability information related to the optical links that are local to the IPN, including links between networks, nodes, etc. Moreover, changes in network topology, such as the addition or deletion of network nodes, may trigger a self-discovery process. In accordance with the invention, each IPN forwards this information to the INC for storage in the COCCD. Here, information stored in the COCCD may be analyzed in order to create a centralized map of the entire network available to the system.
With specific reference to FIG. 6, two exemplary records 600 and 602 are shown that may be stored in the COCCD. A node ID 604 that identifies a network node is included in each record. For example, the node ID may be an IP address. Associated with each node ID 604 is a number of port IDs 606 that identify individual ports on the node. The protocol field 608 identifies the data protocol implemented (i.e., the encapsulation method) over the port, such as SONET or Gigabit Ethernet. The bandwidth field 610 identifies the bandwidth of the connection associated with the port, such as OC-4 or OC-12. In certain embodiments, the bandwidth field can also include T-3. In alternative embodiments, it is necessary to track the link bundle in which a given link between nodes resides. For example, in certain embodiments in which the system determines two paths for a given request, such as one path for use and one path for backup purposes, it is advantageous if the two paths do not include links in the same link bundles. As a result, if an entire link bundle becomes damaged, such as by being severed by a back-hoe during construction, the second determined path will not be affected. An addressing scheme is utilized to ensure that two paths do not include links in the same link bundles. In the preferred embodiment, the shared risk link group (SRLG) addressing scheme is used.
In additional embodiments, it is necessary to develop an addressing scheme for individual wavelengths that are available over a given optical connection. For example, if an optical link carries a multiplexed signal (i.e. a signal comprised of more than one wavelength), there may be additional wavelengths that may be multiplexed into the signal available for use. These additional wavelengths represent unused capacity along the optical link, and therefore must be identified in a self-discovery process.
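FIG. 6's record layout — node ID, per-port protocol and bandwidth, plus the bundle (SRLG) and free-wavelength bookkeeping discussed above — might be represented as in the sketch below. The patent only names the fields; the class and field names, types and sample values here are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class PortEntry:
    protocol: str                  # encapsulation over the port, e.g. "SONET" or "Gigabit Ethernet"
    bandwidth: str                 # e.g. "OC-12"
    srlg: Optional[int] = None     # bundle/shared-risk link group of the attached link, if tracked
    free_wavelengths: List[str] = field(default_factory=list)   # unused wavelengths, if DWDM

@dataclass
class CoccdRecord:
    node_id: str                   # e.g. the node's IP address (field 604)
    ports: Dict[str, PortEntry] = field(default_factory=dict)   # keyed by port ID (field 606)

record_600 = CoccdRecord(
    node_id="192.0.2.10",
    ports={"port-1": PortEntry("SONET", "OC-12", srlg=7, free_wavelengths=["1550.12nm"]),
           "port-2": PortEntry("Gigabit Ethernet", "OC-12")},
)
print(record_600.ports["port-1"].protocol, record_600.ports["port-1"].bandwidth)   # -> SONET OC-12
```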
The COCCD may track a number of other parameters related to connections within and across specific networks. These parameters may be indicative of both technical concerns and business concerns. Some of the parameters may be automatically discovered via a self-discovery process, while others may be manually entered and associated with a given connection. Such parameters may include those a customer may request, such as endpoints for a connection (e.g., IP addresses), bandwidth (e.g., OC-12), security, protocol (e.g., IP) or term/duration (e.g., 1 hour) parameters.
In addition, it is possible for a customer to also specify some performance related constraints, or there may be a performance "class-of-service" that the customer specifies that encompasses one or more performance metrics. Performance metrics/constraints may include latency, failover flexibility, packet loss, bit error rate and number of hops .
Additional constraints that the customer may specify include constraints on resources or capabilities that may be required for a connection to be established. These constraints may be invisible to the customer, and may include available wavelength (s) , wavelength conversion, device compatibility (at client, metro, core, etc.) , mux/demux speed and availability, MPLS capabilities, OSS links, alarms, scheduling and network percentage fulfillment. Moreover, parameters associated with technical constraints that may be related to a connection may be stored in the COCCD, such as power, polarization, type of fiber and amplification mechanism.
In order to keep the information stored in the COCCD as up-to-date as possible, periodic self-discovery updates are performed. Here, updates may be received from the IPNs, through other known applications, such as Lightworks OS or by network operators. In addition, changes in the network, such as added or subtracted nodes or ports, may trigger a self-discovery process. As a result , changes in network topology become reflected in the COCCD in a timely manner. In other embodiments, the information stored in the COCCD is kept confidential such that the networks of competing network service providers remain invisible to each other. Here, the entity that operates and maintains the network of IPNs and INC may be a "carrier- neutral" entity, providing the service to any/all qualifying network service providers. In alternative embodiments, each network maintains a separate INC containing information related only to the given network.
Some or all of the information stored in the COCCD may also be stored at the IPN level. Alternatively, the information stored in the COCCD is only stored at the IPN level. Generally, various levels of hierarchical layers with regard to the nodes, IPNs and INCs may be implemented. For example, the contemplated embodiments of the present invention may function with no centralized control (e.g. no INC or COCCD), with all decision-making occurring in a distributed manner. In other embodiments, one INC controls all the IPNs in all the interconnected networks. Alternatively, a network of INCs is provided, with each controlling some portion of the interconnected networks.
In the embodiments with multiple INCs, the contemplated invention employs a single master INC to control the multiple INCs. As a result, multiple levels of signal control are created. In the preferred embodiment, three levels of signaling control are created: (1) IPNs, (2) INCs, and (3) master INC. Naturally, it will be appreciated that additional layers of signaling control may also be implemented. FIGS. 7(a), 7(b) and 7(c) are flow charts illustrating the steps of the method for determining and provisioning a virtual circuit (centralized). An application, router, desktop or the like sends a request to a local IPN, as indicated in step 702. In accordance with the embodiments of the present invention, a request for a circuit is sent by a customer and received by an IPN. For example, referring to FIG. 1, customer 140 may transmit a request for a circuit to IPN 120. Here, customer 140 may represent, e.g. a network of IP routers, where the request is sent from an IP router that operates as a "border" for the network of routers. As stated, the request is received by the IPN. Here, the customer 140 may have a direct connection to IPN 120, for example an IP connection, over which a request may be sent. In other embodiments, the request to a network node is sent by the customer, such as a request for node 110 of FIG. 1. The request would then be forwarded from node 110 to IPN 120 for processing. In alternative embodiments, the request may be sent directly to INC 130, bypassing IPN 120 altogether.
In other embodiments, the customer may be a desktop computer from which a request may be initiated, such as by launching a desktop application. In an embodiment, the request is a web-based request, where a customer logs onto a Web page and enters a request via the Web page. For example, the network administrator of a data center may log onto a Web page and request an additional optical circuit be provisioned between the data center and e.g. a second data center.
Typically a request includes information related to the circuit that is to be provisioned. This information may include addresses of the endpoints of the circuit (e.g. IP addresses, BGP addressing, etc.) . The request may include varying levels of specificity regarding the endpoints of the requested circuit, such as which ports on the switches should be used or what wavelength should be used. Furthermore, a request may include the bandwidth required, such as OC-12 or the term desired, such as 30 minutes. In certain embodiments, a request typically includes other parameters with which the provisioned circuit must comply. Here, the parameters may comprise technical requirements, security requirements and business requirements, such as a required quality of service, acceptable costs or redundancy requirements.
In alternative embodiments , the customer may specify acceptable parameters in the request, or the customer may have defined default parameters . For example, the customer may register a profile with the inventive system where the profile defines a number of default values for the various parameters. As a result, the system becomes permitted to access the profile of the customer when a request is received from the customer. As a result, the customer becomes relieved of the requirement to indicate all the parameters desired in every request .
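A circuit request carrying the parameters listed above, merged with the defaults from a registered customer profile, could look like the sketch below. The field names, profile contents and the apply_profile helper are illustrative assumptions rather than the system's actual data model.

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class CircuitParams:
    src: str                            # endpoint addresses (e.g. IP or BGP addressing)
    dst: str
    bandwidth: Optional[str] = None     # e.g. "OC-12"
    term_minutes: Optional[int] = None  # e.g. 30
    qos: Optional[str] = None
    redundancy: Optional[bool] = None

# A registered profile supplies default values for anything the request leaves unspecified.
CUSTOMER_PROFILE = {"bandwidth": "OC-12", "term_minutes": 60, "qos": "gold", "redundancy": True}

def apply_profile(request: CircuitParams, profile: dict) -> CircuitParams:
    """Fill unspecified request fields from the customer's stored defaults."""
    merged = asdict(request)
    for key, default in profile.items():
        if merged.get(key) is None:
            merged[key] = default
    return CircuitParams(**merged)

req = apply_profile(CircuitParams(src="10.1.1.1", dst="10.2.2.2", term_minutes=30), CUSTOMER_PROFILE)
print(req.bandwidth, req.term_minutes)   # -> OC-12 30
```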
In accordance with embodiments where requests are received from specific applications, the system may maintain sets of default parameters associated with each application. For example, if a request is received for a virtual circuit that is to be used for video conferencing, the system may access a set of default parameters that must typically be met for a video conferencing application to function properly. The sending of the request may be triggered in a number of ways. Here, the request may be triggered manually, such as by a network administrator operating a border IP router. In alternative embodiments, an application running on or in conjunction with the border IP router may trigger a request. For example, heavy traffic levels may cause congestion along certain routes associated with the network of IP routers, as well as networks to which it is connected, which would then trigger the request by the application.
In other embodiments, an application is developed that automatically launches a request for additional bandwidth along a given route when, for example, traffic or congestion levels exceed certain threshold parameters. As a result, layer 3 type activity, such as at the IP layer, may trigger changes in the layer 1 configuration of the network. Next, a check is performed to determine whether a direct connection with an egress node is available, as indicated in step 706. Having received the request, IPN 120 first determines whether a direct connection exists between the egress node specified in the request and the ingress node with which the IPN is in communication.
Information related to the ability and capability to reach a local node is typically available to the IPN. For example, a link state database (not shown) may be maintained at node 110 (or at IPN 120) . Such a database stores information related to all the connections from node 110. Here, the IPN compares parameters specified in the request to parameters associated with the currently available connections to determine whether any available connections satisfy the parameters of the request. If a direct connection is available, then the local IPN immediately reserves the direct connection and provisions it for the customer, as indicated in step 708. If a direct connection is unavailable, then the local IPN sends the request to the INC for additional processing, as indicated in step 710.
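Steps 706–710 — checking the locally held link-state information for a suitable direct connection to the requested egress node, then either reserving it or handing the request up to the INC — could be sketched as follows. The link-state layout and the forward_to_inc hook are assumptions made for the sketch.

```python
def handle_request(request, link_state, forward_to_inc):
    """Sketch of steps 706-710: try a direct connection first, otherwise escalate to the INC.

    `link_state` maps an egress-node address to a list of locally known connections,
    each a dict with at least 'id', 'bandwidth' and 'available' keys (assumed layout).
    """
    for conn in link_state.get(request["egress"], []):
        if conn["available"] and conn["bandwidth"] == request["bandwidth"]:
            conn["available"] = False          # step 708: reserve and provision the direct link
            return {"status": "provisioned", "connection": conn["id"]}
    # Step 710: no suitable direct connection -- send the request to the INC for path computation.
    return forward_to_inc(request)

link_state = {"node-113": [{"id": "link-42", "bandwidth": "OC-12", "available": True}]}
print(handle_request({"egress": "node-113", "bandwidth": "OC-12"}, link_state,
                     forward_to_inc=lambda r: {"status": "escalated"}))
```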
Next, the INC queries the COCCD for at least one path from the ingress to egress node specified in the request, as indicated in step 712. The COCCD maintains information related to the topology and capability of the networks and nodes of the contemplated embodiments of the invention. Based on this information, the INC is able to compute a path through at least one of the involved networks to connect the ingress and egress nodes specified in the request. In the preferred embodiment, the path calculation is based on open shortest path first (OSPF) or intermediate system to intermediate system (ISIS) protocols.
In accordance with the contemplated embodiments, the COCCD may contain varying amounts of additional information regarding the connections available to the system. In one exemplary embodiment, the COCCD contains enough current information for the INC to determine a path from the ingress node to the egress node that complies with all of the requirements set forth in the request. In another embodiment, the COCCD contains only enough information for the INC to determine at least one path between the egress and ingress nodes without being able to verify that the at least one path fulfills the other parameters specified in the request. Here, it is necessary for the INC to signal each of the nodes in the path to determine whether the path complies with the parameters of the request .
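Step 712's path computation over the COCCD topology can be approximated by an ordinary shortest-path search. The text names OSPF/ISIS-style calculation; the breadth-first search below is only a stand-in over an assumed adjacency map, not the protocols themselves.

```python
from collections import deque

def find_path(topology, ingress, egress):
    """Breadth-first stand-in for the INC's path calculation over the COCCD topology.

    `topology` is an adjacency map {node: [neighbour, ...]} assembled from COCCD records.
    Returns a node list from ingress to egress, or None if no path exists (steps 714/716).
    """
    frontier = deque([[ingress]])
    seen = {ingress}
    while frontier:
        path = frontier.popleft()
        if path[-1] == egress:
            return path
        for nxt in topology.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

topology = {"110": ["111"], "111": ["112", "114"], "114": ["113"], "112": []}
print(find_path(topology, "110", "113"))   # -> ['110', '111', '114', '113']
```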
Next, a check by the INC is performed to determine whether at least one path exists between the ingress and egress nodes, as indicated in step 714. If the INC determines that no paths exist between the ingress and egress nodes, a failure notification is transmitted to the customer, as indicated in step 716. That is, if the INC is unable to determine a path between the ingress and egress nodes, a failure message may be sent to the customer, indicating that the system currently cannot fulfill the customer's request.
Next, if the INC determines that at least one path does exist between the ingress and egress nodes, the INC retrieves at least one determined path from the COCCD, as indicated in step 718. That is, the INC retrieves information relating to the at least one determined path from the COCCD. Here, the retrieved information may include node addresses, port IDs or wavelengths. In accordance with the contemplated embodiments, the INC may retrieve multiple paths. In the preferred embodiment, the INC retrieves at least two paths for a request. For example, the request may specify that at least two paths must be determined for protection purposes. Next, the INC transmits a resource reservation command to each node (IPN) of the at least one path, as indicated in step 720. Here, a resource reservation command instructs the receiving node to reserve requested resources for the customer's use. For example, the command may include the port ID, wavelength or bandwidth that must be reserved for use in the at least one determined path. In certain embodiments, reserve resource commands are also sent to nodes that comprise protection paths.
Each IPN, e.g., IPN #1 ... N, receives a respective resource reservation command and queries a link state database of the respective IPN for resource availability, as indicated in steps 722a., 722b and 722c. Here, each of the IPNs comprising the at least one determined path receives the resource reservation command from the INC and determines whether the requested resources are available by querying the link state database.
A check is then performed by each respective IPN to determine whether the resources associated with an individual IPN are available, as indicated in steps 724a, 724b and 724c. However, this step may not be necessary if, for example, the COCCD contains enough current information about the link states in each node. If the resources associated with an individual IPN specified in the resource reservation command are not available, then the INC is queried for an alternate path, as indicated in step 726. That is, if the required resources are not available at an IPN, then the IPN will signal the INC to select an alternate path for that network segment, or for the entire path. If a path cannot be located, then the INC transmits a failure notification to the customer, and processing is terminated.
If the resources specified in the resource reservation command are available, then each IPN, i.e., IPN #1, 2 ... N, signals to the INC successful resource allocation, as indicated in steps 728a, 728b and 728c. That is, if the resources requested in the resource reservation command are available, the IPN signals a successful resource reservation to the INC, indicating, for example, that the resources are available, as well as any additional information with respect to the necessary resources, such as port ID or wavelength.
Next, the INC transmits provisioning commands to the IPNs in the at least one determined path, as indicated in step 730. Thus, having received successful resource reservation signals from each of the IPNs in the determined path, the INC issues a provisioning command, instructing the IPNs to provision the determined path.
Each IPN then executes its respective provisioning command and signals completion to the INC, as indicated in steps 732a, 732b and 732c. The INC then receives the provisioning completion signals and generates a signal to the IPNs to test the at least one determined path, as indicated in step 734. The circuit associated with the at least one determined path is then tested, as indicated in step 736.
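A minimal, hypothetical sketch of steps 730 through 736 is given below; the callables send_provision, await_completion and run_test are placeholders for the signaling described above, not an actual interface of the system.

# Hypothetical sketch: the INC fans out provisioning commands, waits for the
# completion signals, and then triggers a test of the provisioned circuit.

def provision_and_test(path, send_provision, await_completion, run_test) -> bool:
    """Provision every IPN on the path, wait for each completion signal,
    then test the end-to-end circuit; returns the test result."""
    for node in path:
        send_provision(node)
    for node in path:
        if not await_completion(node, timeout_s=30):
            return False       # a missing completion signal is treated as a failure
    return run_test(path)      # step 736: test the circuit associated with the path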
A check is performed to determine whether the test of the circuit was successful, as indicated in step 738. If the test is unsuccessful, the INC transmits a provisioning command to a protection circuit, as indicated in step 740. Here, the system may determine more than one path for the entire initial path or portions of the path, depending on the level of protection specified in the request. In cases where the initial path that is provisioned fails the test, the protection path may be provisioned instead. In certain embodiments, additional error detection steps are performed. In one embodiment, an additional step comprises determining which links in the connection are responsible for the failure. In another embodiment, a step comprises determining which links may be provisioned to route around the failure. If the test of the circuit was successful, then a hand-off to the customer is performed, as indicated in step 742. Next, the circuit associated with the at least one determined path is monitored to detect failure or degradation of the QoS, as indicated in step 744. Next, a check is performed to detect whether a failure has occurred, as indicated in step 746. If a failure of the circuit associated with the at least one determined path or degradation of the QoS is detected, then a return to step 740 occurs. If a failure of the circuit associated with the at least one determined path or degradation of the QoS is not detected, then the circuit is returned to inventory when the customer's transaction is completed, as indicated in step 748. Subsequent to expiration of the term of the circuit, the INC signals each individual IPN to release the resources comprising each segment of the circuit. In certain embodiments, the individual IPNs may track circuit terms and release resources automatically.
FIGS. 8(a), 8(b) and 8(c) are a flow chart illustrating the steps of the method for determining and provisioning an optical circuit (distributed). An application, router, desktop or the like sends a request to a local IPN, as indicated in step 802. Next, the local IPN receives the request, as indicated in step 804. Here, the customer 140 may have a direct connection to IPN 120, such as an IP connection, over which a request may be sent. In other embodiments, the request is sent by the customer to a network node, such as node 110 of FIG. 1. The request would then be forwarded from node 110 to IPN 120 for processing. In other embodiments, the request may instead be sent directly to INC 130, bypassing IPN 120 altogether.
Next, a check is performed to determine whether a direct connection with an egress node is available, as indicated in step 806. If a direct connection to the egress node is available, then the local IPN immediately reserves the direct connection and provisions it for the customer, as indicated in step 808. If a direct connection to the egress node is unavailable, then the local IPN sends the request to the IPN associated with the egress node, as indicated in step 810.
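For illustration only, and assuming hypothetical request fields and helper functions, the local IPN's handling of an incoming request (steps 806 through 810) might be sketched as follows: provision a direct connection to the egress node when one exists, and otherwise forward the request toward the IPN associated with the egress node.

# Hypothetical sketch of the local IPN's direct-connection check.

def handle_local_request(request: dict, direct_links: dict, forward):
    """Provision a direct connection if one exists; otherwise forward the request."""
    link = direct_links.get(request["egress"])
    if link is not None and link["capacity_gbps"] >= request["bandwidth_gbps"]:
        link["state"] = "provisioned"          # step 808: reserve and provision
        return {"status": "provisioned", "link": link}
    forward(request, destination=request["egress_ipn"])   # step 810
    return {"status": "forwarded"}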
Next, the link state database of an IPN is queried for all connections that comply with the requested parameters, as indicated in steps 812a and 812b. Here, it should be understood that reference characters "a" and "b" refer to the same step that occurs in different nodes or IPNs. For example, as shown, "a" refers to an ingress IPN and "b" refers to the egress IPN. Here, each IPN queries its associated link state database to determine whether any direct connections are available that meet the parameters set forth in the request. In certain embodiments, a request specifies the amount of bandwidth, term or security. The query is performed to determine whether any currently available connections fulfill the requirements specified in the request. A check is then performed to determine whether at least one connection that complies with the requested parameters has been located, as indicated in steps 814a and 814b. If a direct connection to an ingress or egress node that fulfills the requirements in the request does not exist, then the system sends a failure notification to the customer, notifying them that their request cannot be met, as indicated in step 816. In certain embodiments, additional programming is implemented to determine whether a connection that is currently set up may be "torn down" in favor of the current request.
If a direct connection to an ingress or egress node that fulfills the requirements in the request exists, then at least one connection identifier associated with connections that comply with the request parameters is retrieved, as indicated in steps 818a and 818b. The resource reservation command for the at least one retrieved connection identifier is transmitted to the associated node, as indicated in steps 820a and 820b. Here, each respective IPN transmits a reserve resource command to its associated node in order to reserve the connections associated with the at least one retrieved connection identifier.
Next, the IPNs that form the at least one retrieved connection are determined, as indicated in steps 822a and 822b. Having identified the at least one connection that fulfills the requirements contained in the request, IDs associated with the IPNs that form the opposite ends of those connections are determined. In certain embodiments, the IDs are retrieved from the link state database. A multicast of the request to the determined IPNs is then performed, as indicated in steps 824a and 824b. Here, the request is only sent to those IPNs that reside on the opposite side of the connections that comply with the requirements in the request, as identified by the IPNs that form the opposite ends of the connections.
The multicast of the request is then received by each respective IPN, as indicated in steps 826a and 826b. A check is performed to determine whether a request from an egress/ingress request "tree" has been received, as indicated at steps 828a and 828b. Here, each IPN that receives the request first determines whether the same request has been received from an IPN in the egress/ingress request tree. Receipt of the same request indicates that an IPN that received both requests (i.e., the multicast request and the egress/ingress request tree) is now in possession of a complete path from ingress to egress node that fulfills the requirements set forth in the request.
If the egress/ingress request tree was not received along with the multicast request, then a check is performed to determine whether the request has "timed out", as indicated in steps 830a and 830b. If the request has not timed out, then the method returns to steps 812a and 812b. If, on the other hand, the request has timed out, then the process is terminated, as indicated in step 832. Each request includes a "time-to-live" value that is set when the request is created. In accordance with the contemplated embodiments, the time-to-live value represents a time after which the request is no longer valid. As a result, it becomes possible to control the unlimited spread of requests through the interconnected networks.
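The distributed search of steps 812 through 832 can be sketched, purely for illustration, as each IPN forwarding the request along compliant links and checking whether the same request has already arrived from the opposite request tree; the field names and callables below are assumptions, not the disclosed protocol.

# Hypothetical sketch of the distributed request flooding with a time-to-live
# value and detection of a complete path where the two request trees meet.

import time


def on_request(request: dict, seen: dict, compliant_neighbors, multicast):
    """Process a multicast request at one IPN; returns a complete path or None."""
    if time.time() > request["expires_at"]:
        return None                              # steps 830/832: request has timed out
    rid, side = request["id"], request["side"]   # side is "ingress" or "egress"
    other = "egress" if side == "ingress" else "ingress"
    if rid in seen and seen[rid]["side"] == other:
        # Both request trees met at this IPN, so a full ingress-to-egress path exists.
        ing = request["path"] if side == "ingress" else seen[rid]["path"]
        egr = seen[rid]["path"] if side == "ingress" else request["path"]
        return ing + list(reversed(egr))
    if rid not in seen:
        seen[rid] = {"side": side, "path": request["path"]}
        for neighbor in compliant_neighbors(request):   # links meeting the parameters
            multicast(neighbor, {**request, "path": request["path"] + [neighbor]})
    return None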
If the egress/ingress request tree was also received along with the multicast request, then information related to the complete path is sent to the customer for approval, as indicated in step 834. A check is performed to determine whether the customer has approved the complete path, as indicated in step 836. If the customer does not approve the path, then a new path is determined for approval by the customer, as indicated in step 838. If the customer approves the path, then a provision command is sent to every IPN in the path to instruct the IPNs to provision the circuit specified in the path, as indicated in step 840. Here, the system may determine more than one path for the entire initial path or portions of the path, depending on the level of protection specified in the request. In cases where the initial path that is provisioned fails the test, the protection path may be provisioned instead. In certain embodiments, additional error detection steps are performed. In one embodiment, an additional step comprises determining which links in the connection are responsible for the failure. In another embodiment, a step comprises determining which links may be provisioned to route around the failure. The circuit specified in the path is then tested, as indicated in step 842. A check is performed to determine whether the test was successful, as indicated in step 844. If the test is unsuccessful, then a provisioning command is transmitted to a protection circuit, as indicated in step 846. If the test of the circuit was successful, then a hand-off to the customer is performed, as indicated in step 848. Next, the circuit associated with the path is monitored to detect failure or degradation of the QoS, as indicated in step 850. A check is performed to detect whether a failure has occurred, as indicated in step 852. If a failure of the circuit associated with the path or degradation of the QoS is detected, then a return to step 846 occurs. If a failure of the path or degradation of the QoS is not detected, then the circuit associated with the path is returned to inventory when the customer's transaction is completed, as indicated in step 854.
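A brief, hypothetical sketch of the approval loop in steps 834 through 840 follows; ask_customer and provision are placeholders for the customer approval check and the provisioning command described above.

# Hypothetical sketch: offer candidate paths to the customer and provision
# the first one that is approved.

def approve_and_provision(candidates, ask_customer, provision):
    """Offer candidate paths in order; provision the first one the customer approves."""
    for path in candidates:
        if ask_customer(path):          # step 836: customer approval check
            for ipn in path:
                provision(ipn)          # step 840: provision command to every IPN
            return path
    return None                          # no acceptable path was found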
FIG. 9 is a functional block diagram illustrating the testing of a virtual circuit established by the network of FIG. 1 or FIG. 2. In accordance with certain disclosed embodiments of the invention, in addition to a signaling connection, an IPN is provided with a data connection to its corresponding node. Here, IPNs test circuits that are set up in accordance with the disclosed embodiments of the system.
With specific reference to FIG. 9, IPN 902 is connected to node 904 via signaling connection 910 and data connection 912. Similarly, IPN 906 is connected to node 908 via signaling connection 914 and data connection 916. IPNs 902 and 906 are connected to each other via signaling connection 920, and nodes 904 and 908 are connected to each other via data connection 918. It should be noted that data connection 918 may also comprise an indirect connection, such as a connection that includes a number of intermediate segments.
Continuing with FIG. 9, IPN 902 may configure node 904 to transmit data to node 908 over data connection 918. Such a configuration occurs over signaling connection 910 in accordance with the disclosed methods of the present invention. In order to test the connection between nodes 904 and 908, IPN 902 may signal IPN 906 over signaling connection 920, informing IPN 906 that node 904 is about to send a test data transmission. IPN 902 may then transmit a data signal to node 904 over data connection 912 that is to be switched to node 908. If node 908 receives the data signal, then the signal is switched to IPN 906 via data connection 916. Upon receiving the data signal, IPN 906 informs IPN 902 via signaling connection 920 that the data transmission was successfully received. If IPN 902 does not receive the confirmation signal from IPN 906 within a set time period, then IPN 902 concludes that the data transmission was not received, and the test of the connection failed.
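For illustration only, the test procedure described in connection with FIG. 9 might be sketched as follows, with the queue standing in for signaling connection 920 and the callables standing in for the announcement over the signaling plane and the injection of the test signal over the data connection; none of these names are part of the disclosure.

# Hypothetical sketch: announce the test, inject the test signal, and wait a
# bounded time for the far-end IPN's confirmation over the signaling plane.

import queue


def test_circuit(announce, inject_test_signal, confirmations: "queue.Queue",
                 timeout_s: float = 5.0) -> bool:
    """Return True if the far-end IPN confirms receipt of the test signal."""
    announce("test-start")             # e.g., IPN 902 -> IPN 906 over the signaling plane
    inject_test_signal()               # e.g., IPN 902 -> node 904 over the data connection
    try:
        return confirmations.get(timeout=timeout_s) == "received"
    except queue.Empty:
        return False                   # no confirmation: the connection test failed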
FIGS. 10(a), 10(b) and 10(c) are schematic illustrations of the effects of the method of the invention on various layers of the exemplary network of FIGS. 1 and 2. In a communication network, the nodes that accommodate routing and/or switching typically operate at up to layer 3 (i.e., the network layer) of the International Standards Organization's Open System Interconnect (ISO/OSI) network model. A very common network layer protocol is the IP protocol, which forms a significant portion of all traffic in conventional communication networks, and which accommodates the routing of individual IP datagrams in a connectionless environment.
In a highly dynamic network, it is possible for all nodes within the network to support layer 3. However, such a configuration is inefficient, due to the high cost of operation associated with each interface port. For this reason, equipment is available that only operates at up to layer 2, i.e. the datalink layer, such as SONET/SDH or Ethernet, or even only layer 1, such as DWDM.
Due to the lack of adequate switch management tools, conventional equipment is able to switch (or route) data on only one layer. At present, cooperation between the layers to efficiently switch/route data is impossible. For example, a SONET node can only switch SONET frames, but cannot simultaneously switch wavelengths or route IP datagrams. In addition, an IP router can only route IP datagrams, even if this IP router operates with SONET as a datalink layer. In accordance with the disclosed embodiments, the increase in switch management that is accomplished by the usage of the disclosed IPN permits a switching/routing node that is provided with the correct hardware to efficiently switch and/or route data on all three layers.
Proceeding from the bottom of the OSI stack, there are several options for implementing the physical layer. Conventional layer 1 switching can be accommodated solely in the electrical domain, such as in LANs across coaxial cable, in a combination of the photonic and electrical domains, or solely in the photonic domain, such as with MEMS technology. Coupled with this is the existence of several layer 2 protocols that provide the framing to enable the transfer of data across the physical medium, which protocols include SONET/SDH and Ethernet. Layer 3, i.e., the network layer, is responsible for the routing of data in the network. An example of a layer 3 protocol is the IP protocol. A switch/router managed by an IPN that is configured in accordance with the contemplated embodiments is able to accommodate any combination of these protocols. In such a scenario, it is the responsibility of the IPNs to ensure that the agreed upon requirements of a connection will not be affected by any translations in a node. In combination with such a requirement, the IPNs are also responsible for programming the switch so as to enable it to effectively translate data between any layers, to thereby enable transmission between networks that operate using different protocols.
FIG. 10(a) is an illustration of an interface between each layer of the switch and the associated managing IPN in accordance with the contemplated embodiments of the invention. Also illustrated is the data path X through a switch, if a translation of data is required. Without any sort of electrical conversion, it is currently impossible to extract the data from an optical signal. However, it is contemplated that such a capability may become available in the future. Here, a node is only capable of layer 1 switching. At the instant that a node is able to receive an electrical signal or convert an incoming photonic signal into electronic form, the node is able to extract the layer 3 data, which enables the node to encapsulate and transfer the data using a different datalink, such as a layer 2 protocol.
With the availability of different network layers and the ability to program their usage, the IPN of the contemplated embodiment can efficiently manage the switch. Instead of hard-provisioning the interfaces, in accordance with the contemplated embodiments, an IPN initiates translations between different network layers based upon criteria that can range from cost to utilization, such as an associated price per interface, where translation at the network layer is the most expensive, and translation at the physical layer the least expensive.
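Purely as an illustrative assumption of such cost-based criteria, the following sketch selects the cheapest layer at which to translate, subject to a utilization threshold; the price and utilization figures are invented for the example and are not taken from the disclosure.

# Hypothetical sketch: pick the lowest-cost eligible layer for a translation,
# given per-layer interface prices and current utilization.

def choose_translation_layer(price_per_port: dict, utilization: dict,
                             max_utilization: float = 0.9) -> str:
    """Pick the cheapest eligible layer (layer1 = physical, layer2 = datalink, layer3 = network)."""
    candidates = [
        layer for layer in ("layer1", "layer2", "layer3")
        if utilization.get(layer, 0.0) < max_utilization
    ]
    if not candidates:
        raise RuntimeError("no layer has spare capacity for the translation")
    return min(candidates, key=lambda layer: price_per_port[layer])


# Example: layer 1 is cheapest but congested, so layer 2 is selected.
print(choose_translation_layer(
    price_per_port={"layer1": 1.0, "layer2": 3.0, "layer3": 10.0},
    utilization={"layer1": 0.95, "layer2": 0.4, "layer3": 0.2},
))  # -> "layer2"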
FIG. 10(b) provides exemplary illustrations of changes to the layer 2 protocol to enable communication between interconnected networks. Here, the layer 1 protocol remains unchanged while the layer 2 protocol requires changing so as to enable communication between the interconnected networks. Such a translation may be caused by high utilization on one interface, or an increase in costs associated with the interface. In such a case, in accordance with the contemplated embodiments, the IPN will initially map connection requirements to establish equivalent connections across the different datalink layers with respect to their QoS, and then proceed with the required programming to accommodate layer conversions. FIG. 10(b) illustrates two exemplary configurations 1000, 1100 for accommodating layer conversions. These two configurations illustrate the flexibility introduced with the usage of a managing IPN, such as IPN #1. Tracing along the data path X from left to right, in configuration 1000 a physical layer is not connected through the first switch. However, the managing IPN is able to set up a datalink layer connection between the first and second switches, as well as the second and third switches, while managing the translation at the second switch to enable the data transfer between these two connections. Here, even though a SONET and an
Ethernet connection between all the switches exists, the managing IPN utilizes its "intelligence" to determine the least expensive connection based on all cost parameters. Further, the managing IPN is configured to set up a physical layer connection through the first and third switches, while managing the translation of the data in the second switch.
FIG. 10(c) illustrates another exemplary configuration for accommodating layer conversions. Here, the datalink layer needs to remain the same, while data needs to be forwarded to a different physical layer of another switch. In accordance with the contemplated embodiments, the managing IPN, e.g., IPN #1, evaluates the properties of both the photonic domain, such as MEMS technology equipment, and the electrical domain, such as LANs across coaxial cable, to provide a requested quality of service (QoS). In addition, the IPN programs the switch to enable such a translation to occur dynamically.
FIG. 11 is an exemplary block diagram illustrating the distributed aspects of an optical operating system. That is, FIG. 11 illustrates an exemplary contemplated embodiment of a network and system in which an IPN functions as a link between applications requesting circuits and layers 1, 2 and 3 of the network. In addition, the network and system is configured such that a combination of in-band and out-of-band signaling may be utilized. In such a network and system, QoS issues are managed (e.g., by provisioning connections for IP transmissions), and IP traffic flows (e.g., MPLS) are redirected to the provisioned connections in order to divert the traffic flows from congested areas of the network.
Furthermore, the contemplated system and networks may be utilized to dynamically allocate, e.g., legacy SONET rings by enabling edge nodes with IPNs. The update of router MPLS and BGP tables through IPN updates is also performed, and select applications are permitted to request a guaranteed QoS by provisioning on-demand circuits, while all other traffic uses shared "pipes" with other protocols such as the resource reservation protocol (RSVP) or differentiated services (diffserv). In such a system and networks that are configured in accordance with the contemplated embodiments, signaling between networks and IPNs (e.g., handshakes) may occur in advance of and during transaction setup and provisioning. Furthermore, the dynamic concatenation of network segments from multiple service providers for failures and maintenance cutovers is also permitted. The contemplated embodiments of the present invention may be advantageously utilized to eliminate as many layer 3 and layer 2 transactions as possible to reduce the cost of routing and switching in the network.
Thus, while there have been shown and described and pointed out fundamental novel features of the invention as applied to a preferred embodiment thereof, it will be understood that various omissions and substitutions and changes in the form and details of the devices illustrated, and in their operation, may be made by those skilled in the art without departing from the spirit of the invention. For example, it is expressly intended that all combinations of those elements and/or method steps which perform substantially the same function in substantially the same way to achieve the same results are within the scope of the invention. Moreover, it should be recognized that structures and/or elements and/or method steps shown and/or described in connection with any disclosed form or embodiment of the invention may be incorporated in any other disclosed or described or suggested form or embodiment as a general matter of design choice. It is the intention, therefore, to be limited only as indicated by the scope of the claims appended hereto.

Claims

WHAT IS CLAIMED IS:
1. A system for provisioning optical circuits in a multi-network, multi-vendor environment, comprising:
at least one sub-network comprising a plurality of nodes and at least one intelligent provisioning node in communication with each other, each of said plural nodes being in communication with at least one other node, and at least one intelligent provisioning node being in communication with another intelligent provisioning node;
an intelligent node controller in operative communication with the at least one intelligent provisioning node of the at least one sub-network;
a central optical cross-connect database in operative communication with the intelligent node controller;
a signaling control plane comprising the at least one intelligent provisioning node in operative communication with another intelligent provisioning node; and
at least one customer in operative communication with the at least one sub-network.
2. The system of claim 1, wherein the intelligent node controller maintains the central optical cross-connect database.
3. The system of claim 1, wherein the central optical cross-connect database contains information relating to topology and capability of the at least one sub-network.
4. The system of claim 2, wherein the central optical cross-connect database contains information relating to topology and capability of the at least one sub-network.
5. The system of claim 1, wherein the central optical cross-connect database calculates paths for virtual circuits.
6. The system of claim 2, wherein the central optical cross-connect database calculates paths for virtual circuits.
7. The system of claim 1, wherein the at least one customer is also in direct communication with the intelligent provisioning node of the at least one sub-network.
8. The system of claim 1, wherein the at least one customer is also at least one of in direct communication with the intelligent node controller and directly connected to other customers.
9. The system of claim 1, wherein the signaling control plane utilizes in-band or out-of-band signaling.
10. The system of claim 9, wherein the out-of-band signaling comprises interacting with a network of optical switches using a separate IP control channel.
11. The system of claim 1, wherein the signaling control plane utilizes unused synchronous optical network overhead bytes or multi-protocol label switching labels in a network of label-switched routers.
12. The system of claim 1, wherein the signaling control plane utilizes a combination of in-band and out-of-band signaling.
13. The system of claim 1, wherein the at least one intelligent provisioning node performs a self-discovery process to discover information related to local network topology and capability.
14. The system of claim 13, wherein the information related to local network topology and capability includes information on connections between nodes within the at least one sub-network and cross-connections to nodes in other sub-networks.
15. The system of claim 1, wherein a request for provision of a circuit is received by a specific intelligent provisioning node from the at least one customer.
16. The system of claim 15, wherein the request includes information relating to the circuit that is to be provisioned.
17. The system of claim 16, wherein the information relating to the circuit that is to be provisioned includes one of addresses for an origination and destination of the circuit, a required capacity and a required quality of service.
18. The system of claim 15, wherein the specific intelligent provisioning node, operating in conjunction with other intelligent provisioning nodes, determines a path over which the requested circuit may be provisioned based partly on the request.
19. The system of claim 15, wherein the specific intelligent provisioning node operates in conjunction with the intelligent node controller and the central optical cross-connect database.
20. The system of claim 18, wherein the determined path extends across multiple networks or the determined path is located entirely within a single sub-network.
21. The system of claim 18, wherein once the path is determined, signaling commands are sent from the at least one intelligent provisioning node to the at least one sub-network to configure the nodes such that a virtual circuit is set up along the determined path.
22. A method for provisioning optical circuits in a multi-network, multi-vendor environment, comprising:
sending a request for a circuit from at least one customer to at least one intelligent provisioning node;
checking to determine whether a direct connection with an egress node is available; and
reserving the direct connection with the egress node if said connection is available and provisioning said connection for the at least one customer.
23. The method of claim 22, further comprising: sending the request from the at least one intelligent provisioning node to an intelligent node controller for additional processing.
24. The method of claim 23, wherein said additional processing comprises: sending a query from the at least one intelligent provisioning node to a central optical cross-connect database for at least one path from an ingress node to an egress node specified in the request.
25. The method of claim 23, further comprising:
determining whether at least one path exists between an ingress and the egress node;
retrieving information relating to at least one determined path from the central optical cross-connect database if at least one path exists between the ingress and the egress node; and
transmitting a resource reservation command to the at least one intelligent provisioning node of the at least one path.
26. The method of claim 22, wherein the at least one customer comprises a network of IP routers and the request is sent from a border IP router of the network of IP routers.
27. The method of claim 22, wherein the at least one customer comprises a desktop computer from which the request is initiated.
28. The method of claim 27, wherein the request is initiated by activating a desktop application.
29. The method of claim 22, wherein the request is a web-based request, and a customer logs onto a Web page and enters the request via the Web page.
30. The method of claim 26, wherein the request is a web-based request, and a customer logs onto a Web page and enters the request via the Web page.
31. The method of claim 27, wherein the request is a web-based request, and a customer logs onto a Web page and enters the request via the Web page.
32. The method of claim 22, wherein the request includes information related to the circuit that is to be provisioned.
33. The method of claim 32, wherein the information related to the circuit that is to be provisioned includes addresses of endpoints of the circuit.
34. The method of claim 33, wherein the addresses of the endpoints of the circuit comprise IP addresses.
35. The method of claim 33, wherein the request includes varying levels of specificity regarding the endpoints of the requested circuit.
36. The method of claim 35, wherein the varying levels of specificity comprise which ports on switches or what wavelength should be used to provision the circuit.
37. The method of claim 24, wherein the central optical cross-connect database contains information relating to topology and capability of at least one sub-network.
38. The method of claim 24, wherein the central optical cross-connect database calculates paths for virtual circuits.
PCT/US2007/001305 2006-01-18 2007-01-18 System, network and methods for provisioning optical circuits in a multi-network, multi vendor environment WO2007084597A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US75999506P 2006-01-18 2006-01-18
US60/759,995 2006-01-18

Publications (2)

Publication Number Publication Date
WO2007084597A2 true WO2007084597A2 (en) 2007-07-26
WO2007084597A3 WO2007084597A3 (en) 2007-12-27

Family

ID=38288224

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2007/001305 WO2007084597A2 (en) 2006-01-18 2007-01-18 System, network and methods for provisioning optical circuits in a multi-network, multi vendor environment

Country Status (1)

Country Link
WO (1) WO2007084597A2 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107070790B (en) * 2016-12-16 2020-05-19 浙江宇视科技有限公司 Route learning method and routing equipment


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6195425B1 (en) * 1996-11-21 2001-02-27 Bell Atlantic Network Services, Inc. Telecommunications system with wide area internetwork control
US6950391B1 (en) * 1999-01-15 2005-09-27 Cisco Technology, Inc. Configurable network router
US20020109879A1 (en) * 2000-08-23 2002-08-15 Wing So John Ling Co-channel modulation
US20040252995A1 (en) * 2003-06-11 2004-12-16 Shlomo Ovadia Architecture and method for framing control and data bursts over 10 GBIT Ethernet with and without WAN interface sublayer support

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ARNAUD: 'CA*net 4 Research Program Update - UCLP Roadmap for creating User Controlled and Architected Networks using Service Oriented Architecture' 03 January 2006, *
ZHENG ET AL.: 'CHEETAH: CIRCUIT-SWITCHED HIGH-SPEED END-TO-END TRANSPORT ARCHITECTURE TESTBED' IEEE OPTICAL COMMUNICATIONS August 2005, *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016154248A1 (en) * 2015-03-25 2016-09-29 Tevetron, Llc Communication network employing network devices with packet delivery over pre-assigned optical channels
US10419152B2 (en) 2015-03-25 2019-09-17 Tevetron, Llc Communication network employing network devices with packet delivery over pre-assigned optical channels

Also Published As

Publication number Publication date
WO2007084597A3 (en) 2007-12-27

Similar Documents

Publication Publication Date Title
US7167648B2 (en) System and method for an ethernet optical area network
US20090122707A1 (en) Multi-layer cascading network bandwidth control
US7991872B2 (en) Vertical integration of network management for ethernet and the optical transport
US8233487B2 (en) Communication network system that establishes communication path by transferring control signal
Zhang et al. An overview of virtual private network (VPN): IP VPN and optical VPN
US20090103533A1 (en) Method, system and node apparatus for establishing identifier mapping relationship
Haddaji et al. Towards end-to-end integrated optical packet network: Empirical analysis
US20030035411A1 (en) Service discovery using a user device interface to an optical transport network
Tomic et al. ASON and GMPLS—overview and comparison
WO2007084597A2 (en) System, network and methods for provisioning optical circuits in a multi-network, multi vendor environment
Amokrane et al. Dynamic capacity management and traffic steering in enterprise passive optical networks
Jukan QoS-based wavelength routing in multi-service WDM networks
WO2003001397A1 (en) Method and apparatus for provisioning a communication path
Hote et al. Developing and deploying a carrier-class sdn-centric network management system for a tier 1 service provider network
Takeda et al. Layer 1 VPN architecture and its evaluation
Liu Intelligent network control middleware platform
Chen et al. End-to-end service provisioning in carrier-grade ethernet networks: The 100 GET-E3 approach
JP3778138B2 (en) Network path setting method and system
Perello et al. Assessment of LMP-based recovery mechanisms for GMPLS control planes
Paggi 13 Network Core
Paggi Network Core
Pinart et al. Integration of peer-to-peer strategies and SOAP/XML for inter-domain user-driven provisioning in an ASON/GMPLS network
Liu et al. On the tradeoffs between path computation efficiency and information abstraction in optical mesh networks
Verchere et al. The Advances in Control and Management for Transport Networks
Di Giglio et al. The Emerging Core and Metropolitan Networks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase in:

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC DATED 23.10.08

122 Ep: pct application non-entry in european phase

Ref document number: 07716753

Country of ref document: EP

Kind code of ref document: A2