EP2814205A1 - Computer system and method for visualizing virtual network - Google Patents

Computer system and method for visualizing virtual network

Info

Publication number
EP2814205A1
Authority
EP
European Patent Office
Prior art keywords
virtual
data
networks
managing unit
virtual networks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP13747150.4A
Other languages
German (de)
French (fr)
Other versions
EP2814205A4 (en)
Inventor
Takahisa Masuda
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEC Corp
Original Assignee
NEC Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corp filed Critical NEC Corp
Publication of EP2814205A1 publication Critical patent/EP2814205A1/en
Publication of EP2814205A4 publication Critical patent/EP2814205A4/en

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/12 Discovery or management of network topologies
    • H04L41/122 Discovery or management of network topologies of virtualised topologies, e.g. software-defined networks [SDN] or network function virtualisation [NFV]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/02 Topology update or discovery
    • H04L45/028 Dynamic adaptation of the update intervals, e.g. event-triggered updates
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/46 Interconnection of networks
    • H04L12/4604 LAN interconnection over a backbone network, e.g. Internet, Frame Relay
    • H04L12/462 LAN interconnection over a bridge based backbone
    • H04L12/4625 Single bridge functionality, e.g. connection of two networks over a single bridge
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/46 Interconnection of networks
    • H04L12/4641 Virtual LANs, VLANs, e.g. virtual private networks [VPN]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/64 Hybrid switching systems
    • H04L12/6418 Hybrid transport
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0896 Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/22 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/14 Routing performance; Theoretical aspects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/12 Discovery or management of network topologies

Definitions

  • the present invention relates to a computer system and a visualization method of a computer system, more particularly, to a virtual network visualization method of a computer system which uses an OpenFlow (also referred to as programmable flow) technology.
  • packet route determination and packet transfer from the source to the destination have been achieved by a plurality of switches provided on the route.
  • the network configuration is being continuously modified due to halts of devices caused by failures or additions of new devices for scale expansion. This has necessitated flexibility for promptly adapting to the modification of the network configuration to determine appropriate routes. It has been, however, impossible to perform a centralized control and management of the whole network, since the route determination programs installed on the switches have been unable to be externally modified.
  • a technology for achieving a centralized control of the transfer operations and the like in respective switches by using an external controller in a computer network (that is, the OpenFlow technique) has been proposed by the Open Networking Foundation (see non-patent literature 1).
  • a network switch adapted to this technology (hereinafter, referred to as OpenFlow switch (OFS)) holds detailed information, including the protocol type, the port number and the like, in a flow table and allows a flow control and obtainment of statistic information.
  • an OpenFlow controller also referred to as programmable flow controller and abbreviated to "OFC", hereinafter.
  • the OFC sets flow entries, which correlate rules for identifying flows (packet data) with actions defining operations to be performed on the identified flows, into flow tables held by the OFSs.
  • OFSs on a communication route determine the transfer destination of received packet data in accordance with the flow entries set by the OFC, to achieve transmittals. This allows a client terminal to exchange packet data with another client terminal by using a communication route set by the OFC.
  • an OpenFlow-based computer system in which an OFC which sets communication routes is separated from OFSs which perform transmittals, allows a centralized control and management of communications over the whole system.
  • the OFC can control transfer among client terminals in units of flows which are defined by header data of L1 to L4, and therefore can virtualize a network in a desired form. This loosens restrictions on the physical configuration and facilitates establishment of a virtual tenant environment, reducing the initial investment cost resulting from scaling out.
  • a plurality of OFCs may be disposed in a single system (network) in order to reduce the load imposed on each OFC.
  • the network defined over the whole system is managed by a plurality of OFCs, because one OFC is usually disposed for each data center.
  • Disclosed in patent literature 1 is a system in which the flow control of an OpenFlow-based network is achieved by a plurality of controllers which share topology data.
  • Disclosed in patent literature 2 is a system which includes: a plurality of controllers which instruct switches on communication routes to set flow entries for which an ordering of priority is determined; and switches which determine based on the ordering of priority whether to set flow entries and provide relaying for received packets matching flow entries set thereto in accordance with the flow entries.
  • Disclosed in patent literature 3 is a system which includes: a plurality of controllers 1 which instruct switches on communication routes to set flow entries; and a plurality of switches which specify one of the plurality of controllers 1 as a route deciding entity and perform relaying of received packets in accordance with flow entries set by the route deciding entity.
  • an objective of the present invention is to perform centralized management of the whole of a virtual network controlled by a plurality of controllers which use an OpenFlow technology.
  • a computer system in an aspect of the present invention includes a plurality of controllers, switches and a managing unit.
  • Each of the plurality of controllers calculates communication routes and sets flow entries onto switches on the communication routes.
  • the switches perform relaying of received packets in accordance with flow entries set in flow tables thereof.
  • the managing unit outputs a plurality of virtual networks managed by the plurality of controllers in a visually perceivable form with the plurality of virtual networks combined, on the basis of topology data of the virtual networks, the topology data being generated based on the communication routes.
  • a virtual network visualization method in another aspect of the present invention is implemented over a computer system, including: a plurality of controllers which each calculate communication routes and set flow entries onto switches on the communication routes; and switches which perform relaying of received packets in accordance with the flow entries set in flow tables thereof.
  • the virtual network visualization method according to the present invention includes steps of: by a managing unit, obtaining topology data of the plurality of virtual networks managed by the plurality of controllers, from the plurality of controllers; and by the managing unit, outputting the plurality of virtual networks in a visually perceivable form with the plurality of virtual networks combined, on the basis of topology data of the respective virtual networks.
  • the virtual network visualization method according to the present invention is preferably achieved by a visualization program executable by a computer.
  • the present invention enables centralized management of the whole of a virtual network controlled by a plurality of controllers which use an OpenFlow technology.
  • Fig. 1 is a diagram illustrating the configuration of a computer system according to the present invention in an exemplary embodiment.
  • the computer system according to the present invention uses OpenFlow to perform establishment of communication routes and transfer control of packet data.
  • the computer system according to the present invention includes: OpenFlow controllers 1-1 to 1-5 (hereinafter, referred to as OFCs 1-1 to 1-5), a plurality of OpenFlow switches 2 (hereinafter, referred to as OFSs 2), a plurality of L3 routers 3, a plurality of hosts 4 (e.g., storages 4-1, servers 4-2 and client terminals 4-3) and a managing unit 100.
  • OFCs 1-1 to 1-5 may be collectively referred to as OFCs 1, if they are not distinguished from each other.
  • the hosts 4 which are computer apparatuses including a not-shown CPU, main storage and auxiliary storage, each communicate with other hosts 4 by executing programs stored in the auxiliary storage. Communications between hosts 4 are achieved via the switches 2 and the L3 routers 3.
  • the hosts 4 implement their own functions of the storages 4-1, the servers 4-2 (e.g., web servers, file servers and application servers) and the client terminals 4-3, for example, depending on the programs executed therein and their hardware configurations.
  • the OFCs 1 each include a flow control section 12 which controls communication routes and packet transfer processing in the system, on the basis of an OpenFlow technology.
  • the OpenFlow technology is a technology in which controllers (the OFCs 1 in this exemplary embodiment) set multilayer routing data in units of flows onto the OFSs 2 in accordance with a routing policy (flow entries: flow and action), to achieve a route control and node control (see non-patent literature 1 for details). This separates the route control function from the routers and switches, allowing optimized routing and traffic management through a centralized control by the controllers.
  • the OFSs 2 to which the OpenFlow technology is applied handle communications as end-to-end flows rather than in units of packets or frames, differently from conventional routers and switches.
  • the OFCs 1 control the operations of OFSs 2 (e.g., relaying of packet data) by setting flow entries (rules and actions) into flow tables (not shown) held by the OFSs 2.
  • the setting of flow entries onto the OFSs 2 by the OFCs 1 and notifications of first packets (packet-in) from the OFSs 2 to the OFCs 1 are performed via control networks 200 (hereinafter referred to as control NWs 200).
  • the OFCs 1-1 to 1-4 are disposed as OFCs 1 which control the network (the OFSs 2) in a data center DC1 and the OFC 1-5 is disposed as an OFC 1 which controls the network (the OFSs 2) in a data center DC2.
  • the OFCs 1-1 to 1-4 are connected to the OFSs 2 in the data center DC1 via a control NW 200-1 and the OFC 1-5 is connected to the OFSs 2 in the data center DC2 via a control NW 200-2.
  • the network (OFSs 2) of the data center DC1 and the network (OFSs 2) of the data center DC2 are networks (subnetworks) of different IP address ranges connected via the L3 routers 3, which perform Layer 3 routing.
  • Fig. 2 is a diagram illustrating the configuration of the OFCs 1 according to the present invention. It is preferable that the OFCs 1 are embodied as a computer including a CPU and storage device. In each OFC 1, the respective functions of a VN topology data notification section 11 and flow control section 12 illustrated in Fig. 2 are implemented by executing programs stored in the storage device by the not-shown CPU. Also, each OFC 1 holds VN topology data 13 stored in the storage device.
  • the flow control section 12 performs setting and deletion of flow entries (rules and actions) for OFSs 2 to be managed by the flow control section 12 itself.
  • the flow control section 12 sets the flow entries (rules and action data) into flow tables of the OFSs 2 so that the flow entries are correlated with the controller ID of the OFC 1.
  • the OFSs 2 refer to the flow entries set thereto to perform the action (e.g., relaying or discarding of packet data) associated with the rule matching the header data of a received packet. Details of the rules and actions are described in the following.
  • Specified in a rule is, for example, a combination of addresses and identifiers defined in Layers 1 to 4 of the OSI (open system interconnection) model, which are included in header data in TCP/IP packet data.
  • OSI open system interconnection
  • a combination of a physical port defined in Layer 1, a MAC address and VLAN tag (VLAN id) defined in Layer 2, an IP address defined in Layer 3 and a port number defined in Layer 4 may be described in a rule.
  • the VLAN tag may be given a priority (VLAN priority).
  • An identifier, address and the like described in a rule may be specified as a certain range. It is preferable that the source and destination are distinguished with respect to an address or the like described in a rule. For example, a range of the destination MAC address, a range of the destination port number identifying the connection-destination application, and a range of the source port number identifying the connection-source application may be described in a rule. Furthermore, an identifier specifying the data transfer protocol may be described in a rule.
  • Specified in an action is, for example, how to handle TCP/IP packet data. For example, data indicating whether to relay received packet data or not, and if so, the destination may be described in an action. Also, data to instruct duplication or discarding of packet data may be described in an action.
  • a predetermined virtual network (VN) is built for each OFC 1 through a flow control by each OFC 1.
  • one virtual tenant network (VTN) is built with at least one virtual network (VN), which is individually managed by an OFC 1.
  • for example, one virtual tenant network VTN1 is built with the virtual networks respectively managed by OFCs 1-1 to 1-5, which control different IP networks.
  • one virtual tenant network VTN2 may be built with virtual networks respectively managed by OFCs 1-1 to 1-4, which control the same IP network.
  • one virtual tenant network VTN3 may be composed of a virtual network managed by one OFC 1 (e.g. the OFC 1-5). It should be noted that a plurality of virtual tenant networks (VTNs) may be built in the system, as illustrated in Fig. 1 .
  • the VN topology data notification section 11 transmits VN topology data 13 of the virtual network (VN) managed by the VN topology data notification section 11 itself to the managing unit 100.
  • the VN topology data 13 include data related to the topology of the virtual network (VN) managed (or controlled) by the OFC 1.
  • a plurality of virtual tenant networks VTN1, VTN2... are provided by the controls by a plurality of OFCs 1.
  • the virtual tenant networks include virtual networks (VN) respectively managed (or controlled) by the OFCs 1-1 to 1-5.
  • Each OFC 1 holds data related to the topology of the virtual network managed by the OFC 1 itself (hereinafter, referred to as management target virtual network) as the VN topology data 13.
  • Fig. 3 is a diagram illustrating one example of the VN topology data 13 held in an OFC 1.
  • Fig. 4 is a conceptual diagram of the VN topology data 13 held in the OFC 1.
  • the VN topology data 13 include data related to connections among virtual nodes in a virtual network embodied by OFSs 2 and physical switches, such as not-shown routers.
  • the VN topology data 13 include data identifying virtual nodes belonging to the management target virtual network (virtual node data 132) and connection data 133 indicating the connections among the virtual nodes.
  • the virtual node data 132 and connection data 133 are recorded to be correlated with a VTN number 131, which is an identifier of the virtual network (for example, a virtual tenant network) to which the management target virtual network belongs.
  • the virtual node data 132 include, for example, data identifying respective virtual bridges, virtual externals and virtual routers as virtual nodes.
  • the virtual external is a terminal (host) or router which operates as a connection destination of a virtual bridge.
  • the virtual node data 132 may be defined, for example, with combinations of the names of the VLANs to which virtual nodes are connected and MAC addresses (or port numbers).
  • the identifier of a virtual router (virtual router name) is described in the virtual node data 132 with the identifier of the virtual router correlated with a MAC address (or a port number).
  • the virtual node names such as virtual bridge names, virtual external names and virtual router names, may be defined to be specific to each OFC 1 in the virtual node data 132; alternatively, common names may be defined for all the OFCs 1 in the system.
  • connection data 133 include data identifying connection destinations of virtual nodes, correlated with the virtual node data 132 of the virtual nodes.
  • a virtual router (vRouter) "VR11” and a virtual external (vExternal) "VE11” may be described as the connection destination of the virtual bridge (vBridge) "VB11" in the connection data 133.
  • the connection data 133 may include a connection type identifying the connection counterpart (bridge/external/ router/external network (L3 router)) or data identifying the connection destination (e.g., the port number, the MAC address and the VLAN name).
  • the identifier of a virtual bridge (virtual bridge name) is described in the connection data 133 with the described identifier correlated with the name of the VLAN to which the virtual bridge belongs.
  • the identifier of a virtual external (virtual external name) is described in the connection data 133 with the described identifier correlated with a combination of the VLAN name and the MAC address (or the port number).
  • a virtual external is defined with a VLAN name and a MAC address (or a port number).
  • the virtual network illustrated in Fig. 4 belongs to the virtual tenant network VTN1 and is composed of a virtual router "VR11", virtual bridges "VB11” and “VB12” and virtual externals "VE11” and "VE12".
  • the virtual bridges "VB11” and “VB12” represent different subnetworks connected via the virtual router "VR11".
  • the virtual bridge "VB11” is connected to the virtual external "VE11” and the virtual external "VE11” is associated with the MAC address of a virtual router "VR22" managed by the OFC 1-2 named "OFC2".
  • the VN topology data notification section 11 transmits the VN topology data 13 managed by the VN topology data notification section 11 itself to the managing unit 100 via a secure management network 300 (hereinafter, referred to as management NW 300).
  • the managing unit 100 combines the VN topology data 13 obtained from the OFCs 1-1 to 1-5 on the basis of the virtual node data 105 to generate a virtual network of the whole system (e.g., the virtual tenant networks VTN1, VTN2...).
  • Fig. 5 is a diagram illustrating the configuration of the managing unit 100 according to the present invention in an exemplary embodiment. It is preferable that the managing unit 100 is embodied as a computer including a CPU and storage device. In the managing unit 100, the respective functions of a VN data collecting section 101, a VN topology combining section 102 and a VTN topology outputting section 103 are implemented by executing a visualization program stored in the storage device by the not-shown CPU. In addition, the managing unit 100 holds VTN topology data 104 and virtual node data 105 stored in the storage device.
  • VTN topology data 104 are not recorded in the initial state; the VTN topology data 104 are recorded only after being generated by the VN topology combining section 102. It is preferable, on the other hand, that the virtual node data 105 are preset in the initial state.
  • the VN data collecting section 101 issues VN topology data collection instructions to the OFCs 1 via the management NW 300 to obtain the VN topology data 13 from the OFCs 1.
  • the VN topology data 13 thus obtained are temporarily stored in the not-shown storage device.
  • the VN topology combining section 102 combines (or unifies) the obtained VN topology data 13 on the basis of the virtual node data 105 in units of virtual networks defined over the whole system (e.g., in units of virtual tenant networks) to generate topology data corresponding to virtual networks defined over the whole system.
  • the topology data generated by the VN topology combining section 102 are recorded as VTN topology data 104 and outputted by the VTN topology outputting section 103 in a visually perceivable form.
  • the VTN topology outputting section 103 displays the VTN topology data 104 on an output device (not shown) such as a monitor in a text style or in a graphical style.
  • the VTN topology data 104 which has a similar configuration to the VN topology data 13 illustrated in Fig. 3 , include virtual node data and connection data associated with VTN numbers.
  • On the basis of the VN topology data 13 obtained from the OFCs 1 and the virtual node data 105, the VN topology combining section 102 identifies a common (or the same) virtual node out of the virtual nodes on the management target virtual networks of the individual OFCs 1.
  • the VN topology combining section 102 combines the virtual networks to which the common virtual node belongs, via the common virtual node.
  • when combining virtual networks (subnetworks) of the same IP address range, the VN topology combining section 102 combines the virtual networks via a common virtual bridge shared by the instant networks.
  • when combining virtual networks (subnetworks) of different IP address ranges, the VN topology combining section 102 combines the virtual networks via a virtual external shared by the networks.
  • the virtual node data 105 are data which correlate virtual node names individually defined in the respective OFCs 1 with the same virtual node.
  • Fig. 6 is a diagram illustrating one example of the virtual node data 105 held by the managing unit 100 according to the present invention.
  • the virtual node data 105 illustrated in Fig. 6 include controller names 51, common virtual node names 52 and corresponding virtual node names 53.
  • the virtual node names corresponding to the same virtual node out of virtual node names individually defined in the respective OFCs are recorded as the corresponding virtual node names 53, correlated with the common virtual node name 52.
  • a virtual bridge "VBx1" defined in the OFC 1 with a controller name 51 of "OFC1” and a virtual bridge “VBy1" defined in the OFC 1 with a controller name 51 of "OFC2" are described in the virtual node data 105, correlated with a common virtual node name "VB1".
  • the VN topology combining section 102 can recognize that the virtual bridge "VBx1" described in the VN topology data 13 received from the OFC 1 named "OFC1" and the virtual bridge "VBy1" described in the VN topology data 13 received from the OFC 1 named “OFC2" are the same virtual bridge "VB1", by referring to the virtual node data 105 by using the controller name 51 and the corresponding virtual node name 53 as keys.
  • the VN topology combining section 102 can recognize that the virtual bridge "VBx2" defined in the OFC 1 named "OFC1" and the virtual bridge "VBy2" defined in the OFC 1 named "OFC2" are the same virtual bridge "VB2", by referring to the virtual node data 105 illustrated in Fig. 6.
  • a virtual external "VEx1" defined in the OFC 1 named "OFC1" and a virtual external "VEy1" defined in the OFC 1 named "OFC2" are described in the virtual node data 105, correlated with a common virtual node name "VE1".
  • the VN topology combining section 102 can recognize that the virtual external "VEx1" described in the VN topology data 13 received from the OFC 1 named "OFC1” and the virtual external "VEy1” described in the VN topology data 13 received from the OFC 1 named “OFC2” are the same virtual external "VE1", by referring to the virtual node data 105.
  • the VN topology combining section 102 can recognize a virtual external "VEx2" defined in the OFC 1 named "OFC1" and a virtual external "VEy2" defined in the OFC 1 named "OFC2" as the same virtual external "VE2", by referring to the virtual node data 105 illustrated in Fig. 6.
  • Fig. 7 is a diagram illustrating another example of the virtual node data 105 held by the managing unit 100 according to the present invention.
  • the virtual node data 105 illustrated in Fig. 7 include virtual node names 61, VLAN names 62 and MAC addresses 63.
  • VLANs to which virtual nodes belong and MAC addresses which belong to the virtual nodes are described as the virtual node data 105, correlated with the name (the virtual node name 61) of the virtual nodes.
  • the VN data collecting section 101 collects virtual node data 132 including the names of VLANs to which virtual nodes belong and MAC addresses which belong to the virtual nodes, from the OFCs 1.
  • the VN topology combining section 102 identifies virtual node names 61 by referring to the virtual node data 105, using the VLAN names and MAC addresses included in the virtual node data 132 received from the OFCs 1 as keys, and correlates the identified virtual node names with the virtual node names included in the virtual node data 132. This allows the VN topology combining section 102 to recognize that the virtual nodes with the same virtual node name 61 identified by the VLAN names and MAC addresses are the same virtual node, even when the virtual node names obtained from different OFCs are different.
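  • The following is a minimal, purely illustrative sketch (in Python, not part of the claimed subject matter) of this Fig. 7 style of identification: a virtual node reported by any OFC is resolved to a virtual node name 61 by the (VLAN name, MAC address) pair it carries. The concrete VLAN names and MAC addresses are invented placeholders.

      # Virtual node data 105 in the Fig. 7 style: virtual node name 61,
      # VLAN name 62 and MAC address 63 (placeholder values).
      virtual_node_data_fig7 = [
          {"virtual_node_name": "VB1", "vlan": "VLAN10", "mac": "00:00:00:00:01:01"},
          {"virtual_node_name": "VE1", "vlan": "VLAN20", "mac": "00:00:00:00:02:01"},
      ]

      def identify_virtual_node(vlan, mac, table=virtual_node_data_fig7):
          # Return the virtual node name 61 for a (VLAN name, MAC address) pair,
          # or None when the pair is unknown to the managing unit 100.
          for entry in table:
              if entry["vlan"] == vlan and entry["mac"] == mac:
                  return entry["virtual_node_name"]
          return None

      # Two OFCs may report different controller-local names, but the same
      # (VLAN, MAC) pair resolves to the same virtual node "VB1".
      assert identify_virtual_node("VLAN10", "00:00:00:00:01:01") == "VB1"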
  • Fig. 8 is a diagram illustrating one example of the VN topology data 13 of virtual networks belonging to the virtual tenant network VTN1, wherein the VN topology data 13 are respectively held by the OFCs 1-1 to 1-5 illustrated in Fig. 1 .
  • the OFC 1-1 named “OFC1” holds a virtual bridge "VB11” and a virtual external "VE11”, which are connected with each other, as the VN topology data 13 of the management target virtual network of the OFC 1-1 itself.
  • the OFC 1-2 named “OFC2” holds a virtual router "VR21”, virtual bridges “VB21” and “VB22” and virtual externals "VE21” and “VE22” as the VN topology data 13 of the management target virtual network of the OFC 1-2 itself.
  • the virtual bridges "VB21” and “VB22” represent different subnetworks connected via the virtual router "VR21".
  • the virtual bridge "VB21” is connected to the virtual external "VE21".
  • the virtual bridge "VB22” is connected to the virtual external "VE22” and the virtual external “VE22” is associated with an L3 router "SW1".
  • the OFC 1-3 named “OFC3” holds a virtual bridge “VB31” and virtual externals “VE31” and “VE32” as the VN topology data 13 of the management target virtual network of the OFC 1-3 itself.
  • the OFC 1-4 named “OFC4" holds a virtual bridge "VB41” and a virtual external "VE41" as the VN topology data 13 of the management target virtual network of the OFC 1-4 itself.
  • the OFC 1-5 named “OFC5" holds a virtual router "VR51", virtual bridges "VB51” and “VB52” and virtual externals “VE51” and “VE52” as the VN topology data 13 of the management target virtual network of the OFC 1-5 itself.
  • the virtual bridges "VB51” and “VB52” represent different subnetworks connected via the virtual router "VR51”.
  • the virtual bridge "VB51” is connected to the virtual external "VE51” and the virtual external "VE51” is associated with an L3 router "SW2".
  • the virtual bridge "VB52” is connected to the virtual external "VE52".
  • the VN data collecting section 101 of the managing unit 100 issues VN topology data collection instructions with respect to the virtual tenant network "VTN1", to the OFCs 1-1 to 1-5.
  • the OFCs 1-1 to 1-5 each transmit the VN topology data 13 related to the virtual tenant network "VTN1" to the managing unit 100 via the management NW 300. This allows the managing unit 100 to collect the VN topology data 13, for example, as illustrated in Fig. 8 , from the respective OFCs 1-1 to 1-5.
  • the VN topology combining section 102 of the managing unit 100 identifies common virtual nodes in the collected VN topology data 13 by referring to the virtual node data 105.
  • when finding that virtual bridges on two virtual networks are correlated by referring to the virtual node data 105, the VN topology combining section 102 acknowledges that the two virtual networks are connected via a Layer 2 connection. In this case, the VN topology combining section 102 combines the two virtual networks via the correlated virtual bridges.
  • the VN topology combining section 102 connects the virtual bridges "VB11", “VB21”, “VB31” and “VB41", which are correlated with each other, to the virtual router "VR21", defining the virtual bridges "VB11", “VB21”, “VB31” and “VB41” as the same virtual bridge "VB1". Also, when finding that virtual externals on two virtual networks are correlated by referring to the virtual node data 105, the VN topology combining section 102 acknowledges that the two virtual networks are connected via a Layer 3 connection. In this case, the VN topology combining section 102 combines the two virtual networks via the correlated virtual externals.
  • the VN topology combining section 102 connects the virtual bridges "VB22" and "VB51" with each other, defining the virtual externals "VE22" and "VE51" as the same virtual external "VE1".
  • the VN topology combining section 102 combines (or unifies) the VN topology data 13 defined in the respective OFCs 1 as illustrated in Fig. 8 to generate and record topology data (VTN topology data 104) of the whole of the virtual tenant network "VTN1" illustrated in Fig. 9 .
  • the VTN topology data 104 thus generated are outputted in a visually perceivable form as illustrated in Fig. 9 . This allows the network administrator to perform centralized management of the topology of a virtual network defined over the whole of the system illustrated in Fig. 1 .
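  • A data-level restatement of this example may help; the following sketch (Python, illustrative only) lists the collected per-OFC VN topology data 13 of Fig. 8 as plain dictionaries, with comments summarizing the unified Fig. 9 result. Only the virtual nodes and connections named in the text are included, and the exact link set is an assumption reconstructed from the description.

      # Per-OFC VN topology data 13 for VTN1 (cf. Fig. 8), as reported to the
      # managing unit 100 by the OFCs 1-1 to 1-5.
      collected_vtn1 = {
          "OFC1": {"bridges": ["VB11"], "externals": ["VE11"],
                   "links": [("VB11", "VE11")]},
          "OFC2": {"routers": ["VR21"], "bridges": ["VB21", "VB22"],
                   "externals": ["VE21", "VE22"],
                   "links": [("VB21", "VR21"), ("VB22", "VR21"),
                             ("VB21", "VE21"), ("VB22", "VE22")]},
          "OFC3": {"bridges": ["VB31"], "externals": ["VE31", "VE32"]},
          "OFC4": {"bridges": ["VB41"], "externals": ["VE41"]},
          "OFC5": {"routers": ["VR51"], "bridges": ["VB51", "VB52"],
                   "externals": ["VE51", "VE52"],
                   "links": [("VB51", "VR51"), ("VB52", "VR51"),
                             ("VB51", "VE51"), ("VB52", "VE52")]},
      }

      # After combination (cf. Fig. 9):
      #  - VB11, VB21, VB31 and VB41 are unified into one virtual bridge "VB1"
      #    attached to the virtual router "VR21" (Layer 2 combination), and
      #  - VE22 and VE51 are unified into one virtual external "VE1", joining
      #    the OFC2 and OFC5 subnetworks across the L3 routers (Layer 3
      #    combination).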
  • Although the managing unit 100 is illustrated in Fig. 1 as being disposed separately from the OFCs 1, the implementation is not limited to this configuration; the managing unit 100 may be mounted in any of the OFCs 1-1 to 1-5.
  • Although a computer system including five OFCs is illustrated in Fig. 1, the numbers of the OFCs 1 and hosts 4 connected to the network are not limited to those illustrated in Fig. 1.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Human Computer Interaction (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A computer system according to the present invention includes a managing unit which outputs a plurality of virtual networks managed by a plurality of controllers in a visually perceivable form with the plurality of virtual networks combined, on the basis of topology data of the virtual networks, the topology data being generated based on communication routes. This enables centralized management of the whole of a virtual network controlled by a plurality of controllers which use an OpenFlow technology.

Description

    Technical Field
  • The present invention relates to a computer system and a visualization method of a computer system, more particularly, to a virtual network visualization method of a computer system which uses an OpenFlow (also referred to as programmable flow) technology.
  • Background Art
  • Conventionally, packet route determination and packet transfer from the source to the destination have been achieved by a plurality of switches provided on the route. In a recent large-sized network such as a data center, the network configuration is being continuously modified due to halts of devices caused by failures or additions of new devices for scale expansion. This has necessitated flexibility for promptly adapting to the modification of the network configuration to determine appropriate routes. It has been, however, impossible to perform a centralized control and management of the whole network, since the route determination programs installed on the switches have been unable to be externally modified.
  • On the other hand, a technology for achieving a centralized control of the transfer operations and the like in respective switches by using an external controller in a computer network (that is, the OpenFlow technique) has been proposed by the Open Networking Foundation (see non-patent literature 1). A network switch adapted to this technology (hereinafter, referred to as OpenFlow switch (OFS)) holds detailed information, including the protocol type, the port number and the like, in a flow table and allows a flow control and obtainment of statistic information.
  • In a system using the OpenFlow protocol, the setting of communication routes, transfer operations (relay operations) and the like to OFSs on the routes is achieved by an OpenFlow controller (also referred to as programmable flow controller and abbreviated to "OFC", hereinafter). In this operation, the OFC sets flow entries, which correlate rules for identifying flows (packet data) with actions defining operations to be performed on the identified flows, into flow tables held by the OFSs. OFSs on a communication route determine the transfer destination of received packet data in accordance with the flow entries set by the OFC, to achieve transmittals. This allows a client terminal to exchange packet data with another client terminal by using a communication route set by the OFC. In other words, an OpenFlow-based computer system, in which an OFC which sets communication routes is separated from OFSs which perform transmittals, allows a centralized control and management of communications over the whole system.
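  • As a purely illustrative sketch (Python, not part of the claimed subject matter), the division of roles between an OFC and an OFS may be modeled as follows; the class and field names are assumptions chosen for illustration, not a normative interface.

      class FlowEntry:
          # A flow entry correlates a rule (header fields to match) with an
          # action (how to handle matching packet data).
          def __init__(self, rule, action):
              self.rule = rule          # e.g. {"dst_ip": "192.168.1.10"}
              self.action = action      # e.g. ("output", 3) or ("drop",)

      class OpenFlowSwitch:
          def __init__(self):
              self.flow_table = []      # entries set by the OFC via the control NW

          def set_flow_entry(self, entry):
              self.flow_table.append(entry)

          def handle_packet(self, header):
              # Relay (or discard) per the first matching entry; otherwise
              # notify the controller of the first packet (packet-in).
              for entry in self.flow_table:
                  if all(header.get(k) == v for k, v in entry.rule.items()):
                      return entry.action
              return ("packet_in",)

      # The controller programs a route; the switch then forwards matching packets.
      ofs = OpenFlowSwitch()
      ofs.set_flow_entry(FlowEntry({"dst_ip": "192.168.1.10"}, ("output", 3)))
      assert ofs.handle_packet({"dst_ip": "192.168.1.10"}) == ("output", 3)
      assert ofs.handle_packet({"dst_ip": "192.168.1.99"}) == ("packet_in",)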
  • The OFC can control transfer among client terminals in units of flows which are defined by header data of L1 to L4, and therefore can virtualize a network in a desired form. This loosens restrictions on the physical configuration and facilitates establishment of a virtual tenant environment, reducing the initial investment cost resulting from scaling out.
  • When the number of terminals such as client terminals, servers and storages connected to an OpenFlow-based system is increased, the load imposed on an OFC which manages flows is increased. Accordingly, a plurality of OFCs may be disposed in a single system (network) in order to reduce the load imposed on each OFC. Also, in a system including a plurality of data centers, the network defined over the whole system is managed by a plurality of OFCs, because one OFC is usually disposed for each data center.
  • Systems in which one network is managed by a plurality of controllers are disclosed, for example, in JP 2011-166692 A (see patent literature 1), JP 2011-166384 A (see patent literature 2) and JP 2011-160363 A (see patent literature 3). Disclosed in patent literature 1 is a system in which the flow control of an OpenFlow-based network is achieved by a plurality of controllers which share topology data. Disclosed in patent literature 2 is a system which includes: a plurality of controllers which instruct switches on communication routes to set flow entries for which an ordering of priority is determined; and switches which determine based on the ordering of priority whether to set flow entries and provide relaying for received packets matching flow entries set thereto in accordance with the flow entries. Disclosed in patent literature 3 is a system which includes: a plurality of controllers 1 which instruct switches on communication routes to set flow entries; and a plurality of switches which specify one of the plurality of controllers 1 as a route deciding entity and perform relaying of received packets in accordance with flow entries set by the route deciding entity.
  • Citation List Patent Literature
    • [Patent literature 1] JP 2011-166692 A
    • [Patent literature 2] JP 2011-166384 A
    • [Patent literature 3] JP 2011-160363 A
    Non-Patent Literature
  • [Non-patent literature 1] OpenFlow Switch Specification Version 1.1.0 Implemented (Wire Protocol 0x02), February 28, 2011
  • Summary of Invention
  • When a single virtual network is managed by a plurality of controllers, it is impossible to monitor the whole virtual network managed by the plurality of controllers as a single virtual network, although each individual controller can monitor the status and the like of the virtual network managed by each controller. When one virtual tenant network "VTN1" is constituted with two virtual networks "VNW1" and "VNW2" respectively managed by two OFCs, for example, the statuses of the two virtual networks "VNW1" and "VNW2" can be monitored by the two OFCs, respectively. It has been, however, impossible to perform centralized monitoring of the status of the whole of the virtual tenant network "VTN1", since the two virtual networks "VNW1" and "VNW2" cannot be unified.
  • Accordingly, an objective of the present invention is to perform centralized management of the whole of a virtual network controlled by a plurality of controllers which use an OpenFlow technology.
  • A computer system in an aspect of the present invention includes a plurality of controllers, switches and a managing unit. Each of the plurality of controllers calculates communication routes and sets flow entries onto switches on the communication routes. The switches perform relaying of received packets in accordance with flow entries set in flow tables thereof. The managing unit outputs a plurality of virtual networks managed by the plurality of controllers in a visually perceivable form with the plurality of virtual networks combined, on the basis of topology data of the virtual networks, the topology data being generated based on the communication routes.
  • A virtual network visualization method in another aspect of the present invention is implemented over a computer system, including: a plurality of controllers which each calculate communication routes and set flow entries onto switches on the communication routes; and switches which perform relaying of received packets in accordance with the flow entries set in flow tables thereof. The virtual network visualization method according to the present invention includes steps of: by a managing unit, obtaining topology data of the plurality of virtual networks managed by the plurality of controllers, from the plurality of controllers; and by the managing unit, outputting the plurality of virtual networks in a visually perceivable form with the plurality of virtual networks combined, on the basis of topology data of the respective virtual networks.
  • The virtual network visualization method according to the present invention is preferably achieved by a visualization program executable by a computer.
  • The present invention enables centralized management of the whole of a virtual network controlled by a plurality of controllers which use an OpenFlow technology.
  • Brief Description of Drawings
  • Objectives, effects and features of the above-described invention will be made more apparent from the description of exemplary embodiments in cooperation with the attached drawings in which:
    • Fig. 1 is a diagram illustrating the configuration of a computer system according to the present invention in an exemplary embodiment;
    • Fig. 2 is a diagram illustrating the configuration of an OpenFlow controller according to the present invention in an exemplary embodiment;
    • Fig. 3 is a diagram illustrating one example of VN topology data held by the OpenFlow controller according to the present invention;
    • Fig. 4 is a conceptual diagram of the VN topology data held by the OpenFlow controller according to the present invention;
    • Fig. 5 is a diagram illustrating the configuration of a managing unit according to the present invention in an exemplary embodiment;
    • Fig. 6 is a diagram illustrating one example of virtual node data held by the managing unit according to the present invention;
    • Fig. 7 is a diagram illustrating another example of virtual node data held by the managing unit according to the present invention;
    • Fig. 8 is a diagram illustrating one example of the VN topology data held by each of the OpenFlow controllers illustrated in Fig. 1; and
    • Fig. 9 is a diagram illustrating one example of VTN topology data of the whole of a virtual network generated by unifying the VN topology data illustrated in Fig. 8.
    Description of Exemplary Embodiments
  • In the following, a description is given of exemplary embodiments of the present invention with reference to the attached drawings. The same or similar reference numerals denote the same, similar or equivalent components in the drawings.
  • (Computer System Configuration)
  • The configuration of a computer system according to the present invention is described with reference to Fig. 1. Fig. 1 is a diagram illustrating the configuration of a computer system according to the present invention in an exemplary embodiment. The computer system according to the present invention uses OpenFlow to perform establishment of communication routes and transfer control of packet data. The computer system according to the present invention includes: OpenFlow controllers 1-1 to 1-5 (hereinafter, referred to as OFCs 1-1 to 1-5), a plurality of OpenFlow switches 2 (hereinafter, referred to as OFSs 2), a plurality of L3 routers 3, a plurality of hosts 4 (e.g., storages 4-1, servers 4-2 and client terminals 4-3) and a managing unit 100. It should be noted that the OFCs 1-1 to 1-5 may be collectively referred to as OFCs 1, if they are not distinguished from each other.
  • The hosts 4, which are computer apparatuses including a not-shown CPU, main storage and auxiliary storage, each communicate with other hosts 4 by executing programs stored in the auxiliary storage. Communications between hosts 4 are achieved via the switches 2 and the L3 routers 3. The hosts 4 implement their own functions of the storages 4-1, the servers 4-2 (e.g., web servers, file servers and application servers) and the client terminals 4-3, for example, depending on the programs executed therein and their hardware configurations.
  • The OFCs 1 each include a flow control section 12 which controls communication routes and packet transfer processing in the system, on the basis of an OpenFlow technology. The OpenFlow technology is a technology in which controllers (the OFCs 1 in this exemplary embodiment) set multilayer routing data in units of flows onto the OFSs 2 in accordance with a routing policy (flow entries: flow and action), to achieve a route control and node control (see non-patent literature 1 for details). This separates the route control function from the routers and switches, allowing optimized routing and traffic management through a centralized control by the controllers. The OFSs 2 to which the OpenFlow technology is applied handle communications as end-to-end flows rather than in units of packets or frames, differently from conventional routers and switches.
  • The OFCs 1 control the operations of OFSs 2 (e.g., relaying of packet data) by setting flow entries (rules and actions) into flow tables (not shown) held by the OFSs 2. The setting of flow entries onto the OFSs 2 by the OFCs 1 and notifications of first packets (packet-in) from the OFSs 2 to the OFCs 1 are performed via control networks 200 (hereinafter referred to as control NWs 200).
  • In one example illustrated in Fig. 1, the OFCs 1-1 to 1-4 are disposed as OFCs 1 which control the network (the OFSs 2) in a data center DC1 and the OFC 1-5 is disposed as an OFC 1 which controls the network (the OFSs 2) in a data center DC2. The OFCs 1-1 to 1-4 are connected to the OFSs 2 in the data center DC1 via a control NW 200-1 and the OFC 1-5 is connected to the OFSs 2 in the data center DC2 via a control NW 200-2. Note that the network (OFSs 2) of the data center DC1 and the network (OFSs 2) of the data center DC2 are networks (subnetworks) of different IP address ranges connected via the L3 routers 3, which perform Layer 3 routing.
  • Referring to Fig. 2, details of the configuration of the OFCs 1 are described in the following. Fig. 2 is a diagram illustrating the configuration of the OFCs 1 according to the present invention. It is preferable that the OFCs 1 are embodied as a computer including a CPU and storage device. In each OFC 1, the respective functions of a VN topology data notification section 11 and flow control section 12 illustrated in Fig. 2 are implemented by executing programs stored in the storage device by the not-shown CPU. Also, each OFC 1 holds VN topology data 13 stored in the storage device.
  • The flow control section 12 performs setting and deletion of flow entries (rules and actions) for OFSs 2 to be managed by the flow control section 12 itself. In this operation, the flow control section 12 sets the flow entries (rules and action data) into flow tables of the OFSs 2 so that the flow entries are correlated with the controller ID of the OFC 1. The OFSs 2 refer to the flow entries set thereto to perform the action (e.g., relaying or discarding of packet data) associated with the rule matching the header data of a received packet. Details of the rules and actions are described in the following.
  • Specified in a rule is, for example, a combination of addresses and identifiers defined in Layers 1 to 4 of the OSI (open system interconnection) model, which are included in header data in TCP/IP packet data. For example, a combination of a physical port defined in Layer 1, a MAC address and VLAN tag (VLAN id) defined in Layer 2, an IP address defined in Layer 3 and a port number defined in Layer 4 may be described in a rule. Note that the VLAN tag may be given a priority (VLAN priority).
  • An identifier, address and the like described in a rule, such as a port number, may be specified as a certain range. It is preferable that the source and destination are distinguished with respect to an address or the like described in a rule. For example, a range of the destination MAC address, a range of the destination port number identifying the connection-destination application, and a range of the source port number identifying the connection-source application may be described in a rule. Furthermore, an identifier specifying the data transfer protocol may be described in a rule.
  • Specified in an action is, for example, how to handle TCP/IP packet data. For example, data indicating whether to relay received packet data or not, and if so, the destination may be described in an action. Also, data to instruct duplication or discarding of packet data may be described in an action.
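  • The rule and action described above can be pictured, purely for illustration, as the following Python sketch, in which a rule combines L1 to L4 header fields and a field may be given as a range; every field value shown is an invented example rather than part of the claimed subject matter.

      def rule_matches(rule, header):
          # A field may be an exact value or a range (e.g. a port number range).
          for field, expected in rule.items():
              value = header.get(field)
              if isinstance(expected, range):
                  if value not in expected:
                      return False
              elif value != expected:
                  return False
          return True

      rule = {
          "in_port": 1,                        # Layer 1: physical port
          "dst_mac": "00:11:22:33:44:55",      # Layer 2: MAC address
          "vlan_id": 100,                      # Layer 2: VLAN tag (VLAN id)
          "dst_ip": "10.0.1.20",               # Layer 3: IP address
          "dst_port": range(8000, 8100),       # Layer 4: destination port range
      }
      action = {"type": "output", "port": 7}   # how to handle matching packet data

      header = {"in_port": 1, "dst_mac": "00:11:22:33:44:55", "vlan_id": 100,
                "dst_ip": "10.0.1.20", "dst_port": 8080}
      assert rule_matches(rule, header)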
  • A predetermined virtual network (VN) is built for each OFC 1 through a flow control by each OFC 1. In addition, one virtual tenant network (VTN) is built with at least one virtual network (VN), which is individually managed by an OFC 1. For example, one virtual tenant network VTN1 is built with the virtual networks respectively managed by OFCs 1-1 to 1-5, which control different IP networks. Alternatively, one virtual tenant network VTN2 may be built with virtual networks respectively managed by OFCs 1-1 to 1-4, which control the same IP network. Furthermore, one virtual tenant network VTN3 may be composed of a virtual network managed by one OFC 1 (e.g. the OFC 1-5). It should be noted that a plurality of virtual tenant networks (VTNs) may be built in the system, as illustrated in Fig. 1.
  • The VN topology data notification section 11 transmits VN topology data 13 of the virtual network (VN) managed by the VN topology data notification section 11 itself to the managing unit 100. As illustrated in Figs. 3 and 4, the VN topology data 13 include data related to the topology of the virtual network (VN) managed (or controlled) by the OFC 1. Referring to Fig. 1, in the computer system according to the present invention a plurality of virtual tenant networks VTN1, VTN2... are provided by the controls by a plurality of OFCs 1. The virtual tenant networks include virtual networks (VN) respectively managed (or controlled) by the OFCs 1-1 to 1-5. Each OFC 1 holds data related to the topology of the virtual network managed by the OFC 1 itself (hereinafter, referred to as management target virtual network) as the VN topology data 13.
  • Fig. 3 is a diagram illustrating one example of the VN topology data 13 held in an OFC 1. Fig. 4 is a conceptual diagram of the VN topology data 13 held in the OFC 1. The VN topology data 13 include data related to connections among virtual nodes in a virtual network embodied by OFSs 2 and physical switches, such as not-shown routers. Specifically, the VN topology data 13 include data identifying virtual nodes belonging to the management target virtual network (virtual node data 132) and connection data 133 indicating the connections among the virtual nodes. The virtual node data 132 and connection data 133 are recorded to be correlated with a VTN number 131, which is an identifier of the virtual network (for example, a virtual tenant network) to which the management target virtual network belongs.
  • The virtual node data 132 include, for example, data identifying respective virtual bridges, virtual externals and virtual routers as virtual nodes. The virtual external is a terminal (host) or router which operates as a connection destination of a virtual bridge. The virtual node data 132 may be defined, for example, with combinations of the names of the VLANs to which virtual nodes are connected and MAC addresses (or port numbers). In one example, the identifier of a virtual router (virtual router name) is described in the virtual node data 132 with the identifier of the virtual router correlated with a MAC address (or a port number). The virtual node names, such as virtual bridge names, virtual external names and virtual router names, may be defined to be specific to each OFC 1 in the virtual node data 132; alternatively, common names may be defined for all the OFCs 1 in the system.
  • The connection data 133 include data identifying connection destinations of virtual nodes, correlated with the virtual node data 132 of the virtual nodes. Referring to Fig. 4, for example, a virtual router (vRouter) "VR11" and a virtual external (vExternal) "VE11" may be described as the connection destination of the virtual bridge (vBridge) "VB11" in the connection data 133. The connection data 133 may include a connection type identifying the connection counterpart (bridge/external/ router/external network (L3 router)) or data identifying the connection destination (e.g., the port number, the MAC address and the VLAN name). In detail, the identifier of a virtual bridge (virtual bridge name) is described in the connection data 133 with the described identifier correlated with the name of the VLAN to which the virtual bridge belongs. Furthermore, the identifier of a virtual external (virtual external name) is described in the connection data 133 with the described identifier correlated with a combination of the VLAN name and the MAC address (or the port number). In other words, a virtual external is defined with a VLAN name and a MAC address (or a port number).
  • Referring to Fig. 4, one example of a virtual network established on the basis of VN topology data 13 held by an OFC 1 is described in the following. The virtual network illustrated in Fig. 4 belongs to the virtual tenant network VTN1 and is composed of a virtual router "VR11", virtual bridges "VB11" and "VB12" and virtual externals "VE11" and "VE12". The virtual bridges "VB11" and "VB12" represent different subnetworks connected via the virtual router "VR11". The virtual bridge "VB11" is connected to the virtual external "VE11" and the virtual external "VE11" is associated with the MAC address of a virtual router "VR22" managed by the OFC 1-2 named "OFC2". This implies that the MAC address of the virtual router "VR22", which is managed by the OFC 1-2 named "OFC2", is recognizable from the virtual bridge "VB11". Similarly, the virtual bridge "VB12" is connected to the virtual external "VE12" and the virtual external "VE12" is associated with an L3 router. This implies that the virtual bridge "VB12" is connected to an external network via the L3 router.
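  • For illustration only, the VN topology data 13 of the Fig. 4 example may be written down as the following Python dictionary; the VLAN names, the MAC address placeholders and the "controller" field are assumptions added for readability, while the virtual node and connection entries follow Figs. 3 and 4.

      vn_topology_ofc1 = {
          "controller": "OFC1",
          "vtn": "VTN1",                       # VTN number 131
          "virtual_nodes": [                   # virtual node data 132
              {"name": "VR11", "type": "router"},
              {"name": "VB11", "type": "bridge", "vlan": "VLAN_A"},
              {"name": "VB12", "type": "bridge", "vlan": "VLAN_B"},
              {"name": "VE11", "type": "external", "vlan": "VLAN_A",
               "mac": "mac-of-VR22"},          # MAC of the virtual router managed by OFC2
              {"name": "VE12", "type": "external", "vlan": "VLAN_B",
               "mac": "mac-of-L3-router"},     # connection to an external network
          ],
          "connections": [                     # connection data 133
              {"node": "VB11", "peer": "VR11", "peer_type": "router"},
              {"node": "VB12", "peer": "VR11", "peer_type": "router"},
              {"node": "VB11", "peer": "VE11", "peer_type": "external"},
              {"node": "VB12", "peer": "VE12", "peer_type": "external network (L3 router)"},
          ],
      }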
  • Referring to Fig. 1, the VN topology data notification section 11 transmits the VN topology data 13 managed by the VN topology data notification section 11 itself to the managing unit 100 via a secure management network 300 (hereinafter, referred to as management NW 300). The managing unit 100 combines the VN topology data 13 obtained from the OFCs 1-1 to 1-5 on the basis of the virtual node data 105 to generate a virtual network of the whole system (e.g., the virtual tenant networks VTN1, VTN2...).
  • Referring to Fig. 5, details of the configuration of the managing unit 100 are described in the following. Fig. 5 is a diagram illustrating the configuration of the managing unit 100 according to the present invention in an exemplary embodiment. It is preferable that the managing unit 100 is embodied as a computer including a CPU and storage device. In the managing unit 100, the respective functions of a VN data collecting section 101, a VN topology combining section 102 and a VTN topology outputting section 103 are implemented by executing a visualization program stored in the storage device by the not-shown CPU. In addition, the managing unit 100 holds VTN topology data 104 and virtual node data 105 stored in the storage device. It should be noted that the VTN topology data 104 are not recorded in the initial state; the VTN topology data 104 are recorded only after being generated by the VN topology combining section 102. It is preferable, on the other hand, that the virtual node data 105 are preset in the initial state.
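  • As a non-normative Python sketch, the managing unit 100 may be organized as follows; get_vn_topology stands in for the collection request issued over the management NW 300, and merge_vn_topologies and render_as_text are placeholder helpers sketched in the paragraphs below. All names are assumptions for illustration.

      class ManagingUnit:
          def __init__(self, controllers, virtual_node_data):
              self.controllers = controllers              # handles to the OFCs 1
              self.virtual_node_data = virtual_node_data  # preset virtual node data 105
              self.vtn_topology_data = {}                 # VTN topology data 104 (initially empty)

          def collect(self, vtn):
              # VN data collecting section 101: obtain VN topology data 13 from each OFC.
              return [ofc.get_vn_topology(vtn) for ofc in self.controllers]

          def combine(self, vtn):
              # VN topology combining section 102: unify the collected data and record it.
              self.vtn_topology_data[vtn] = merge_vn_topologies(
                  self.collect(vtn), self.virtual_node_data)
              return self.vtn_topology_data[vtn]

          def output(self, vtn):
              # VTN topology outputting section 103: show the result in a text style.
              print(render_as_text(self.vtn_topology_data[vtn]))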
  • The VN data collecting section 101 issues VN topology data collection instructions to the OFCs 1 via the management NW 300 to obtain the VN topology data 13 from the OFCs 1. The VN topology data 13 thus obtained are temporarily stored in the not-shown storage device.
  • The VN topology combining section 102 combines (or unifies) the obtained VN topology data 13 on the basis of the virtual node data 105 in units of virtual networks defined over the whole system (e.g., in units of virtual tenant networks) to generate topology data corresponding to virtual networks defined over the whole system. The topology data generated by the VN topology combining section 102 are recorded as VTN topology data 104 and outputted by the VTN topology outputting section 103 in a visually perceivable form. For example, the VTN topology outputting section 103 displays the VTN topology data 104 on an output device (not shown) such as a monitor in a text style or in a graphical style. The VTN topology data 104, which have a similar configuration to the VN topology data 13 illustrated in Fig. 3, include virtual node data and connection data associated with VTN numbers.
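A minimal text-style rendering of combined topology data might look like the following; the output format and the sample links are assumptions, since the description only requires a visually perceivable form.

```python
def render_text(vtn: str, connections: list[tuple[str, str]]) -> str:
    """Render combined topology data as plain text, one link per line (assumed format)."""
    lines = [f"[{vtn}]"]
    lines += [f"  {a} -- {b}" for a, b in sorted(connections)]
    return "\n".join(lines)

# Hypothetical fragment of a combined VTN1 topology.
print(render_text("VTN1", [("VB1", "VR21"), ("VB22", "VR21"), ("VB22", "VE1"), ("VB51", "VE1")]))
```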
  • On the basis of the VN topology data 13 obtained from the OFCs 1 and the virtual node data 105, the VN topology combining section 102 identifies a common (or the same) virtual node out of the virtual nodes on the management target virtual networks of the individual OFCs 1. The VN topology combining section 102 combines the virtual networks to which the common virtual node belongs, via the common virtual node. In this operation, when combining virtual networks (subnetworks) of the same IP address range, the VN topology combining section 102 combines the virtual networks via a common virtual bridge shared by those networks. When combining virtual networks (subnetworks) of different IP address ranges, the VN topology combining section 102 combines the virtual networks via a virtual external shared by the networks.
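The combining rule can be sketched as follows, assuming the virtual node data 105 are available as a lookup table from (controller name, locally defined node name) to a common node name; the data layout is an assumption made for illustration.

```python
def combine(networks: list[dict], virtual_node_data: dict) -> dict:
    """Combine per-OFC topologies by renaming nodes to their common names.

    Virtual networks that share a common virtual bridge (same IP address range,
    Layer 2) or a common virtual external (different IP address ranges, Layer 3)
    coalesce automatically at the shared node once it carries a single name.
    """
    nodes: set = set()
    links: set = set()
    for net in networks:
        def rename(name: str, controller: str = net["controller"]) -> str:
            return virtual_node_data.get((controller, name), name)
        nodes |= {rename(n) for n in net["nodes"]}
        links |= {tuple(sorted((rename(a), rename(b)))) for a, b in net["connections"]}
    return {"nodes": nodes, "connections": links}
```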
  • The virtual node data 105 are data which correlate virtual node names individually defined in the respective OFCs 1 with the same virtual node. Fig. 6 is a diagram illustrating one example of the virtual node data 105 held by the managing unit 100 according to the present invention. The virtual node data 105 illustrated in Fig. 6 include controller names 51, common virtual node names 52 and corresponding virtual node names 53. In detail, the virtual node names corresponding to the same virtual node out of the virtual node names individually defined in the respective OFCs are recorded as the corresponding virtual node names 53, correlated with the common virtual node name 52. In the example illustrated in Fig. 6, a virtual bridge "VBx1" defined in the OFC 1 with a controller name 51 of "OFC1" and a virtual bridge "VBy1" defined in the OFC 1 with a controller name 51 of "OFC2" are described in the virtual node data 105, correlated with a common virtual node name "VB1". In this case, the VN topology combining section 102 can recognize that the virtual bridge "VBx1" described in the VN topology data 13 received from the OFC 1 named "OFC1" and the virtual bridge "VBy1" described in the VN topology data 13 received from the OFC 1 named "OFC2" are the same virtual bridge "VB1", by referring to the virtual node data 105 using the controller name 51 and the corresponding virtual node name 53 as keys. Similarly, the VN topology combining section 102 can recognize that the virtual bridge "VBx2" defined in the OFC 1 named "OFC1" and the virtual bridge "VBy2" defined in the OFC 1 named "OFC2" are the same virtual bridge "VB2", by referring to the virtual node data 105 illustrated in Fig. 6. In addition, a virtual external "VEx1" defined in the OFC 1 named "OFC1" and a virtual external "VEy1" defined in the OFC 1 named "OFC2" are described in the virtual node data 105, correlated with a common virtual node name "VE1". In this case, the VN topology combining section 102 can recognize that the virtual external "VEx1" described in the VN topology data 13 received from the OFC 1 named "OFC1" and the virtual external "VEy1" described in the VN topology data 13 received from the OFC 1 named "OFC2" are the same virtual external "VE1", by referring to the virtual node data 105. In the same way, the VN topology combining section 102 can recognize a virtual external "VEx2" defined in the OFC 1 named "OFC1" and a virtual external "VEy2" defined in the OFC 1 named "OFC2" as the same virtual external "VE2", by referring to the virtual node data 105 illustrated in Fig. 6.
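The Fig. 6 correspondence can be expressed as such a lookup table, keyed by the controller name 51 and the corresponding virtual node name 53; the Python encoding below is illustrative only.

```python
# (controller name 51, corresponding virtual node name 53) -> common virtual node name 52
virtual_node_data_105 = {
    ("OFC1", "VBx1"): "VB1", ("OFC2", "VBy1"): "VB1",
    ("OFC1", "VBx2"): "VB2", ("OFC2", "VBy2"): "VB2",
    ("OFC1", "VEx1"): "VE1", ("OFC2", "VEy1"): "VE1",
    ("OFC1", "VEx2"): "VE2", ("OFC2", "VEy2"): "VE2",
}

# "VBx1" reported by "OFC1" and "VBy1" reported by "OFC2" resolve to the same bridge "VB1".
assert virtual_node_data_105[("OFC1", "VBx1")] == virtual_node_data_105[("OFC2", "VBy1")] == "VB1"
```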
  • Fig. 7 is a diagram illustrating another example of the virtual node data 105 held by the managing unit 100 according to the present invention. The virtual node data 105 illustrated in Fig. 7 include virtual node names 61, VLAN names 62 and MAC addresses 63. In detail, VLANs to which virtual nodes belong and MAC addresses which belong to the virtual nodes are described as the virtual node data 105, correlated with the name (the virtual node name 61) of the virtual nodes. When the virtual node data 105 have been registered as illustrated in Fig. 7, the VN data collecting section 101 collects virtual node data 132 including the names of VLANs to which virtual nodes belong and MAC addresses which belong to the virtual nodes, from the OFCs 1. The VN topology combining section 102 identifies virtual node names 61 by referring to the virtual node data 105, using the VLAN names and MAC addresses included in the virtual node data 132 received from the OFCs 1 as keys, and correlates the identified virtual node names with the virtual node names included in the virtual node data 132. This allows the VN topology combining section 102 to recognize that the virtual nodes with the same virtual node name 61 identified by the VLAN names and MAC addresses are the same virtual node, even when the virtual node names obtained from different OFCs are different.
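The Fig. 7 variant can be sketched the same way, keyed by the VLAN name 62 and the MAC address 63; the VLAN names and MAC addresses below are placeholders, not values taken from the description.

```python
# (VLAN name 62, MAC address 63) -> virtual node name 61; values are placeholders.
virtual_node_data_105 = {
    ("vlan-a", "00:00:00:00:00:01"): "VB1",
    ("vlan-b", "00:00:00:00:00:02"): "VE1",
}

def resolve(vlan: str, mac: str, local_name: str) -> str:
    """Return the common virtual node name for a registered (VLAN, MAC) pair,
    otherwise keep the name locally defined by the reporting OFC."""
    return virtual_node_data_105.get((vlan, mac), local_name)
```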
  • (Combining (Unifying) Operation of Virtual Networks)
  • Next, details of the combining operation of virtual networks in the managing unit 100 are described with reference to Figs. 8 and 9. Fig. 8 is a diagram illustrating one example of the VN topology data 13 of virtual networks belonging to the virtual tenant network VTN1, wherein the VN topology data 13 are respectively held by the OFCs 1-1 to 1-5 illustrated in Fig. 1.
  • Referring to Fig. 8, the OFC 1-1 named "OFC1" holds a virtual bridge "VB11" and a virtual external "VE11", which are connected with each other, as the VN topology data 13 of the management target virtual network of the OFC 1-1 itself. The OFC 1-2 named "OFC2" holds a virtual router "VR21", virtual bridges "VB21" and "VB22" and virtual externals "VE21" and "VE22" as the VN topology data 13 of the management target virtual network of the OFC 1-2 itself. The virtual bridges "VB21" and "VB22" represent different subnetworks connected via the virtual router "VR21". The virtual bridge "VB21" is connected to the virtual external "VE21". The virtual bridge "VB22" is connected to the virtual external "VE22" and the virtual external "VE22" is associated with an L3 router "SW1". The OFC 1-3 named "OFC3" holds a virtual bridge "VB31" and virtual externals "VE31" and "VE32" as the VN topology data 13 of the management target virtual network of the OFC 1-3 itself. The OFC 1-4 named "OFC4" holds a virtual bridge "VB41" and a virtual external "VE41" as the VN topology data 13 of the management target virtual network of the OFC 1-4 itself. The OFC 1-5 named "OFC5" holds a virtual router "VR51", virtual bridges "VB51" and "VB52" and virtual externals "VE51" and "VE52" as the VN topology data 13 of the management target virtual network of the OFC 1-5 itself. The virtual bridges "VB51" and "VB52" represent different subnetworks connected via the virtual router "VR51". The virtual bridge "VB51" is connected to the virtual external "VE51" and the virtual external "VE51" is associated with an L3 router "SW2". The virtual bridge "VB52" is connected to the virtual external "VE52".
  • The VN data collecting section 101 of the managing unit 100 issues VN topology data collection instructions with respect to the virtual tenant network "VTN1", to the OFCs 1-1 to 1-5. The OFCs 1-1 to 1-5 each transmit the VN topology data 13 related to the virtual tenant network "VTN1" to the managing unit 100 via the management NW 300. This allows the managing unit 100 to collect the VN topology data 13, for example, as illustrated in Fig. 8, from the respective OFCs 1-1 to 1-5. The VN topology combining section 102 of the managing unit 100 identifies common virtual nodes in the collected VN topology data 13 by referring to the virtual node data 105. In this exemplary embodiment, it is assumed that, in the virtual node data 105, the virtual bridges "VB11", "VB21", "VB31" and "VB41" are registered and correlated with a virtual bridge "VB1" and the virtual externals "VE22" and "VE51" are registered and correlated with a virtual external "VE1". When finding that virtual bridges on two virtual networks are correlated by referring to the virtual node data 105, the VN topology combining section 102 acknowledges that the two virtual networks are connected via a Layer 2 connection. In this case, the VN topology combining section 102 combines the two virtual networks via the correlated virtual bridges. In this example, on the basis of the virtual node data 105, the VN topology combining section 102 connects the virtual bridges "VB11", "VB21", "VB31" and "VB41", which are correlated with each other, to the virtual router "VR21", defining the virtual bridges "VB11", "VB21", "VB31" and "VB41" as the same virtual bridge "VB1". Also, when finding that virtual externals on two virtual networks are correlated by referring to the virtual node data 105, the VN topology combining section 102 acknowledges that the two virtual networks are connected via a Layer 3 connection. In this case, the VN topology combining section 102 combines the two virtual networks via the correlated virtual externals. In this example, since the virtual externals "VE22" and "VE51" are correlated with each other, the VN topology combining section 102 connects the virtual bridges "VB22" and "VB51" with each other, defining the virtual externals "VE22" and "VE51" as the same virtual external "VE1". As described above, the VN topology combining section 102 combines (or unifies) the VN topology data 13 defined in the respective OFCs 1 as illustrated in Fig. 8 to generate and record topology data (VTN topology data 104) of the whole of the virtual tenant network "VTN1" illustrated in Fig. 9.
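The unification of the Fig. 8 data into the Fig. 9 topology can be reproduced with a few lines of illustrative code; the encoding of the per-OFC connection data and of the virtual node data 105 is an assumption made for this sketch.

```python
# Per-OFC connections for VTN1 as read off Fig. 8 (assumed encoding).
per_ofc = {
    "OFC1": [("VB11", "VE11")],
    "OFC2": [("VB21", "VR21"), ("VB22", "VR21"), ("VB21", "VE21"), ("VB22", "VE22")],
    "OFC3": [("VB31", "VE31"), ("VB31", "VE32")],
    "OFC4": [("VB41", "VE41")],
    "OFC5": [("VB51", "VR51"), ("VB52", "VR51"), ("VB51", "VE51"), ("VB52", "VE52")],
}
# Virtual node data 105 for VTN1: VB11/VB21/VB31/VB41 are the common bridge "VB1",
# VE22/VE51 are the common external "VE1".
common = {"VB11": "VB1", "VB21": "VB1", "VB31": "VB1", "VB41": "VB1",
          "VE22": "VE1", "VE51": "VE1"}

links = set()
for conns in per_ofc.values():
    for a, b in conns:
        links.add(tuple(sorted((common.get(a, a), common.get(b, b)))))

# After renaming, the Layer 2-correlated bridges collapse into "VB1" attached to "VR21",
# and the Layer 3-correlated externals collapse into "VE1" joining "VB22" and "VB51".
print(sorted(links))
```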
  • The VTN topology data 104 thus generated are outputted in a visually perceivable form as illustrated in Fig. 9. This allows the network administrator to perform centralized management of the topology of a virtual network defined over the whole of the system illustrated in Fig. 1.
  • Although exemplary embodiments of the present invention are described above in detail, the specific configuration is not limited to the above-described exemplary embodiments; the present invention encompasses modifications which do not depart from the scope of the present invention. For example, although the managing unit 100 is illustrated in Fig. 1 as being disposed separately from the OFCs 1, the implementation is not limited to this configuration; the managing unit 100 may be mounted in any of the OFCs 1-1 to 1-5. Although a computer system including five OFCs is illustrated in Fig. 1, the numbers of the OFCs 1 and hosts 4 connected to the network are not limited to those illustrated in Fig. 1.
  • It should be noted that the present application is based on Japanese Patent Application No. 2012-027779 and the disclosure of Japanese Patent Application No. 2012-027779 is incorporated herein by reference.

Claims (12)

  1. A computer system, comprising:
    a plurality of controllers, each of which calculates communication routes and sets flow entries onto switches on said communication routes;
    switches which perform relaying of received packets in accordance with said flow entries set in flow tables of the switches; and
    a managing unit which outputs a plurality of virtual networks managed by said plurality of controllers in a visually perceivable form with the plurality of virtual networks combined, based on topology data of the virtual networks, the topology data being generated based on said communication routes.
  2. The computer system according to claim 1, wherein said managing unit holds virtual node data identifying virtual nodes constituting said virtual networks and identifies a common virtual node shared by said plurality of virtual networks based on said topology data and said virtual node data to combine said plurality of virtual networks via said common virtual node.
  3. The computer system according to claim 2, wherein said virtual nodes include virtual bridges,
    wherein a combination of corresponding virtual bridges of said plurality of virtual bridges is described in said virtual node data, and
    wherein said managing unit identifies a common virtual bridge shared by said plurality of virtual networks based on said topology data and said virtual node data to combine said plurality of virtual networks via said common virtual bridge.
  4. The computer system according to claim 3, wherein said virtual nodes include virtual externals which are recognized as connection destinations of said virtual bridges,
    wherein a combination of corresponding virtual externals of said plurality of virtual externals is described in said virtual node data, and
    wherein said managing unit identifies a common virtual external shared by said plurality of virtual networks based on said topology data and said virtual node data to combine said plurality of virtual networks via said common virtual external.
  5. The computer system according to claim 2,
    wherein virtual nodes and VLAN names are described to be correlated in said virtual node data, and
    wherein said managing unit identifies a common virtual node shared by said plurality of virtual networks based on VLAN names included in said topology data and said virtual node data to combine said plurality of virtual networks via said common virtual node.
  6. The computer system according to any one of claims 1 to 5, wherein said managing unit is mounted on any of said plurality of controllers.
  7. A virtual network visualization method implemented on a computer system including:
    a plurality of controllers which each calculate communication routes and set flow entries onto switches on said communication routes; and
    switches which perform relaying of received packets in accordance with said flow entries set in flow tables of the switches, said method comprising steps of:
    by a managing unit, obtaining topology data of a plurality of virtual networks managed by said plurality of controllers, from said plurality of controllers; and
    by said managing unit, outputting said plurality of virtual networks in a visually perceivable form with said plurality of virtual networks combined, based on the topology data of said respective virtual networks.
  8. The visualization method according to claim 7, wherein said managing unit holds virtual node data identifying virtual nodes constituting said virtual networks, and
    wherein the step of outputting said plurality of virtual networks in the visually perceivable form with the plurality of virtual networks combined includes steps of:
    by said managing unit, identifying a common virtual node shared by said plurality of virtual networks based on said topology data and said virtual node data; and
    by said managing unit, combining said plurality of virtual networks via said common virtual node.
  9. The visualization method according to claim 8, wherein said virtual nodes include virtual bridges,
    wherein a combination of corresponding virtual bridges of said plurality of virtual bridges is described in said virtual node data, and
    wherein the step of outputting said plurality of virtual networks in the visually perceivable form with the plurality of virtual networks combined includes steps of:
    by said managing unit, identifying a common virtual bridge shared by said plurality of virtual networks based on said topology data and said virtual node data; and
    by said managing unit, combining said plurality of virtual networks via said common virtual bridge.
  10. The visualization method according to claim 9, wherein said virtual nodes include virtual externals which are recognized as connection destinations of said virtual bridges,
    wherein a combination of corresponding virtual externals of said plurality of virtual externals is described in said virtual node data, and
    wherein the step of outputting said plurality of virtual networks in the visually perceivable form with the plurality of virtual networks combined includes steps of:
    by said managing unit, identifying a common virtual external shared by said plurality of virtual networks based on said topology data and said virtual node data; and
    by said managing unit, combining said plurality of virtual networks via said common virtual external.
  11. The visualization method according to claim 8, wherein virtual nodes and VLAN names are described to be correlated in said virtual node data,
    wherein the step of outputting said plurality of virtual networks in the visually perceivable form with the plurality of virtual networks combined includes steps of:
    by said managing unit, identifying a common virtual node shared by said plurality of virtual networks based on VLAN names included in said topology data and said virtual node data; and
    by said managing unit, combining said plurality of virtual networks via said common virtual node.
  12. A recording device in which a visualization program is recorded, the visualization program causing a computer to implement the visualization method set forth in any one of claims 7 to 11.
EP13747150.4A 2012-02-10 2013-02-05 Computer system and method for visualizing virtual network Withdrawn EP2814205A4 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012027779 2012-02-10
PCT/JP2013/052523 WO2013118687A1 (en) 2012-02-10 2013-02-05 Computer system and method for visualizing virtual network

Publications (2)

Publication Number Publication Date
EP2814205A1 true EP2814205A1 (en) 2014-12-17
EP2814205A4 EP2814205A4 (en) 2015-09-16

Family

ID=48947451

Family Applications (1)

Application Number Title Priority Date Filing Date
EP13747150.4A Withdrawn EP2814205A4 (en) 2012-02-10 2013-02-05 Computer system and method for visualizing virtual network

Country Status (5)

Country Link
US (1) US20150019756A1 (en)
EP (1) EP2814205A4 (en)
JP (1) JP5967109B2 (en)
CN (1) CN104106237B (en)
WO (1) WO2013118687A1 (en)

Families Citing this family (65)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9781004B2 (en) 2014-10-16 2017-10-03 Cisco Technology, Inc. Discovering and grouping application endpoints in a network environment
US9800549B2 (en) * 2015-02-11 2017-10-24 Cisco Technology, Inc. Hierarchical clustering in a geographically dispersed network environment
CN104717095B (en) * 2015-03-17 2018-04-10 大连理工大学 A kind of visualization SDN management method of integrated multi-controller
US9521071B2 (en) * 2015-03-22 2016-12-13 Freescale Semiconductor, Inc. Federation of controllers management using packet context
US10440054B2 (en) * 2015-09-25 2019-10-08 Perspecta Labs Inc. Customized information networks for deception and attack mitigation
US10560328B2 (en) 2017-04-20 2020-02-11 Cisco Technology, Inc. Static network policy analysis for networks
US10826788B2 (en) 2017-04-20 2020-11-03 Cisco Technology, Inc. Assurance of quality-of-service configurations in a network
US10623264B2 (en) 2017-04-20 2020-04-14 Cisco Technology, Inc. Policy assurance for service chaining
US10812318B2 (en) 2017-05-31 2020-10-20 Cisco Technology, Inc. Associating network policy objects with specific faults corresponding to fault localizations in large-scale network deployment
US10623271B2 (en) 2017-05-31 2020-04-14 Cisco Technology, Inc. Intra-priority class ordering of rules corresponding to a model of network intents
US10693738B2 (en) 2017-05-31 2020-06-23 Cisco Technology, Inc. Generating device-level logical models for a network
US10581694B2 (en) 2017-05-31 2020-03-03 Cisco Technology, Inc. Generation of counter examples for network intent formal equivalence failures
US10439875B2 (en) 2017-05-31 2019-10-08 Cisco Technology, Inc. Identification of conflict rules in a network intent formal equivalence failure
US20180351788A1 (en) 2017-05-31 2018-12-06 Cisco Technology, Inc. Fault localization in large-scale network policy deployment
US10505816B2 (en) 2017-05-31 2019-12-10 Cisco Technology, Inc. Semantic analysis to detect shadowing of rules in a model of network intents
US10554483B2 (en) 2017-05-31 2020-02-04 Cisco Technology, Inc. Network policy analysis for networks
US10547715B2 (en) 2017-06-16 2020-01-28 Cisco Technology, Inc. Event generation in response to network intent formal equivalence failures
US11150973B2 (en) 2017-06-16 2021-10-19 Cisco Technology, Inc. Self diagnosing distributed appliance
US10574513B2 (en) 2017-06-16 2020-02-25 Cisco Technology, Inc. Handling controller and node failure scenarios during data collection
US10904101B2 (en) 2017-06-16 2021-01-26 Cisco Technology, Inc. Shim layer for extracting and prioritizing underlying rules for modeling network intents
US11469986B2 (en) 2017-06-16 2022-10-11 Cisco Technology, Inc. Controlled micro fault injection on a distributed appliance
US10686669B2 (en) 2017-06-16 2020-06-16 Cisco Technology, Inc. Collecting network models and node information from a network
US10587621B2 (en) 2017-06-16 2020-03-10 Cisco Technology, Inc. System and method for migrating to and maintaining a white-list network security model
US11645131B2 (en) 2017-06-16 2023-05-09 Cisco Technology, Inc. Distributed fault code aggregation across application centric dimensions
US10498608B2 (en) 2017-06-16 2019-12-03 Cisco Technology, Inc. Topology explorer
US10567228B2 (en) 2017-06-19 2020-02-18 Cisco Technology, Inc. Validation of cross logical groups in a network
US11283680B2 (en) 2017-06-19 2022-03-22 Cisco Technology, Inc. Identifying components for removal in a network configuration
US10560355B2 (en) 2017-06-19 2020-02-11 Cisco Technology, Inc. Static endpoint validation
US10805160B2 (en) 2017-06-19 2020-10-13 Cisco Technology, Inc. Endpoint bridge domain subnet validation
US10437641B2 (en) 2017-06-19 2019-10-08 Cisco Technology, Inc. On-demand processing pipeline interleaved with temporal processing pipeline
US10567229B2 (en) 2017-06-19 2020-02-18 Cisco Technology, Inc. Validating endpoint configurations between nodes
US10623259B2 (en) 2017-06-19 2020-04-14 Cisco Technology, Inc. Validation of layer 1 interface in a network
US11343150B2 (en) 2017-06-19 2022-05-24 Cisco Technology, Inc. Validation of learned routes in a network
US10528444B2 (en) 2017-06-19 2020-01-07 Cisco Technology, Inc. Event generation in response to validation between logical level and hardware level
US10644946B2 (en) 2017-06-19 2020-05-05 Cisco Technology, Inc. Detection of overlapping subnets in a network
US10505817B2 (en) 2017-06-19 2019-12-10 Cisco Technology, Inc. Automatically determining an optimal amount of time for analyzing a distributed network environment
US10333787B2 (en) 2017-06-19 2019-06-25 Cisco Technology, Inc. Validation of L3OUT configuration for communications outside a network
US10218572B2 (en) 2017-06-19 2019-02-26 Cisco Technology, Inc. Multiprotocol border gateway protocol routing validation
US10411996B2 (en) 2017-06-19 2019-09-10 Cisco Technology, Inc. Validation of routing information in a network fabric
US10341184B2 (en) 2017-06-19 2019-07-02 Cisco Technology, Inc. Validation of layer 3 bridge domain subnets in in a network
US10348564B2 (en) 2017-06-19 2019-07-09 Cisco Technology, Inc. Validation of routing information base-forwarding information base equivalence in a network
US10554493B2 (en) 2017-06-19 2020-02-04 Cisco Technology, Inc. Identifying mismatches between a logical model and node implementation
US10700933B2 (en) 2017-06-19 2020-06-30 Cisco Technology, Inc. Validating tunnel endpoint addresses in a network fabric
US10432467B2 (en) 2017-06-19 2019-10-01 Cisco Technology, Inc. Network validation between the logical level and the hardware level of a network
US10673702B2 (en) 2017-06-19 2020-06-02 Cisco Technology, Inc. Validation of layer 3 using virtual routing forwarding containers in a network
US10652102B2 (en) 2017-06-19 2020-05-12 Cisco Technology, Inc. Network node memory utilization analysis
US10536337B2 (en) 2017-06-19 2020-01-14 Cisco Technology, Inc. Validation of layer 2 interface and VLAN in a networked environment
US10812336B2 (en) 2017-06-19 2020-10-20 Cisco Technology, Inc. Validation of bridge domain-L3out association for communication outside a network
US10587484B2 (en) 2017-09-12 2020-03-10 Cisco Technology, Inc. Anomaly detection and reporting in a network assurance appliance
US10587456B2 (en) 2017-09-12 2020-03-10 Cisco Technology, Inc. Event clustering for a network assurance platform
US10554477B2 (en) 2017-09-13 2020-02-04 Cisco Technology, Inc. Network assurance event aggregator
US10333833B2 (en) 2017-09-25 2019-06-25 Cisco Technology, Inc. Endpoint path assurance
US11102053B2 (en) 2017-12-05 2021-08-24 Cisco Technology, Inc. Cross-domain assurance
US10873509B2 (en) 2018-01-17 2020-12-22 Cisco Technology, Inc. Check-pointing ACI network state and re-execution from a check-pointed state
US10572495B2 (en) 2018-02-06 2020-02-25 Cisco Technology Inc. Network assurance database version compatibility
US10812315B2 (en) 2018-06-07 2020-10-20 Cisco Technology, Inc. Cross-domain network assurance
US10659298B1 (en) 2018-06-27 2020-05-19 Cisco Technology, Inc. Epoch comparison for network events
US10911495B2 (en) 2018-06-27 2021-02-02 Cisco Technology, Inc. Assurance of security rules in a network
US11044273B2 (en) 2018-06-27 2021-06-22 Cisco Technology, Inc. Assurance of security rules in a network
US11019027B2 (en) 2018-06-27 2021-05-25 Cisco Technology, Inc. Address translation for external network appliance
US11218508B2 (en) 2018-06-27 2022-01-04 Cisco Technology, Inc. Assurance of security rules in a network
US10904070B2 (en) 2018-07-11 2021-01-26 Cisco Technology, Inc. Techniques and interfaces for troubleshooting datacenter networks
US10826770B2 (en) 2018-07-26 2020-11-03 Cisco Technology, Inc. Synthesis of models for networks using automated boolean learning
US10616072B1 (en) 2018-07-27 2020-04-07 Cisco Technology, Inc. Epoch data interface
CN113824615B (en) * 2021-09-26 2024-07-12 济南浪潮数据技术有限公司 Virtual network flow visualization method, device and equipment based on OpenFlow

Family Cites Families (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5948055A (en) * 1996-08-29 1999-09-07 Hewlett-Packard Company Distributed internet monitoring system and method
US7752024B2 (en) * 2000-05-05 2010-07-06 Computer Associates Think, Inc. Systems and methods for constructing multi-layer topological models of computer networks
US20030115319A1 (en) * 2001-12-17 2003-06-19 Dawson Jeffrey L. Network paths
US7219300B2 (en) * 2002-09-30 2007-05-15 Sanavigator, Inc. Method and system for generating a network monitoring display with animated utilization information
US7584298B2 (en) * 2002-12-13 2009-09-01 Internap Network Services Corporation Topology aware route control
US8627005B1 (en) * 2004-03-26 2014-01-07 Emc Corporation System and method for virtualization of networked storage resources
JP4334419B2 (en) * 2004-06-30 2009-09-30 富士通株式会社 Transmission equipment
US7681130B1 (en) * 2006-03-31 2010-03-16 Emc Corporation Methods and apparatus for displaying network data
US10313191B2 (en) * 2007-08-31 2019-06-04 Level 3 Communications, Llc System and method for managing virtual local area networks
US8161393B2 (en) * 2007-09-18 2012-04-17 International Business Machines Corporation Arrangements for managing processing components using a graphical user interface
US9083609B2 (en) * 2007-09-26 2015-07-14 Nicira, Inc. Network operating system for managing and securing networks
US8447181B2 (en) * 2008-08-15 2013-05-21 Tellabs Operations, Inc. Method and apparatus for displaying and identifying available wavelength paths across a network
US8255496B2 (en) * 2008-12-30 2012-08-28 Juniper Networks, Inc. Method and apparatus for determining a network topology during network provisioning
US8213336B2 (en) * 2009-02-23 2012-07-03 Cisco Technology, Inc. Distributed data center access switch
US7937438B1 (en) * 2009-12-07 2011-05-03 Amazon Technologies, Inc. Using virtual networking devices to manage external connections
JPWO2011083780A1 (en) * 2010-01-05 2013-05-13 日本電気株式会社 Communication system, control device, processing rule setting method, packet transmission method and program
JP5488979B2 (en) 2010-02-03 2014-05-14 日本電気株式会社 Computer system, controller, switch, and communication method
JP5488980B2 (en) * 2010-02-08 2014-05-14 日本電気株式会社 Computer system and communication method
JP5521613B2 (en) 2010-02-15 2014-06-18 日本電気株式会社 Network system, network device, route information update method, and program
US8612627B1 (en) * 2010-03-03 2013-12-17 Amazon Technologies, Inc. Managing encoded multi-part communications for provided computer networks
US8407366B2 (en) * 2010-05-14 2013-03-26 Microsoft Corporation Interconnecting members of a virtual network
US8830823B2 (en) * 2010-07-06 2014-09-09 Nicira, Inc. Distributed control platform for large-scale production networks
JP2012027779A (en) 2010-07-26 2012-02-09 Denso Corp On-vehicle driving support device and road-vehicle communication system
AU2011343699B2 (en) * 2010-12-15 2014-02-27 Shadow Networks, Inc. Network stimulation engine
US8625597B2 (en) * 2011-01-07 2014-01-07 Jeda Networks, Inc. Methods, systems and apparatus for the interconnection of fibre channel over ethernet devices
US9715222B2 (en) * 2011-02-09 2017-07-25 Avocent Huntsville, Llc Infrastructure control fabric system and method
US9043452B2 (en) * 2011-05-04 2015-05-26 Nicira, Inc. Network control apparatus and method for port isolation
US9209998B2 (en) * 2011-08-17 2015-12-08 Nicira, Inc. Packet processing in managed interconnection switching elements
US8593958B2 (en) * 2011-09-14 2013-11-26 Telefonaktiebolaget L M Ericsson (Publ) Network-wide flow monitoring in split architecture networks
US9178833B2 (en) * 2011-10-25 2015-11-03 Nicira, Inc. Chassis controller
US9337931B2 (en) * 2011-11-01 2016-05-10 Plexxi Inc. Control and provisioning in a data center network with at least one central controller
US9311160B2 (en) * 2011-11-10 2016-04-12 Verizon Patent And Licensing Inc. Elastic cloud networking
CN103930882B (en) * 2011-11-15 2017-10-03 Nicira股份有限公司 The network architecture with middleboxes
US8824274B1 (en) * 2011-12-29 2014-09-02 Juniper Networks, Inc. Scheduled network layer programming within a multi-topology computer network
US8948054B2 (en) * 2011-12-30 2015-02-03 Cisco Technology, Inc. System and method for discovering multipoint endpoints in a network environment

Also Published As

Publication number Publication date
JP5967109B2 (en) 2016-08-10
CN104106237A (en) 2014-10-15
US20150019756A1 (en) 2015-01-15
WO2013118687A1 (en) 2013-08-15
JPWO2013118687A1 (en) 2015-05-11
CN104106237B (en) 2017-08-11
EP2814205A4 (en) 2015-09-16

Similar Documents

Publication Publication Date Title
EP2814205A1 (en) Computer system and method for visualizing virtual network
JP5811196B2 (en) Computer system and virtual network visualization method
JP5300076B2 (en) Computer system and computer system monitoring method
CN105052083B (en) For handling the method and network node of management plane flow
RU2651149C2 (en) Sdn-controller, data processing center system and the routed connection method
JP5488980B2 (en) Computer system and communication method
JP5757552B2 (en) Computer system, controller, service providing server, and load distribution method
JP5488979B2 (en) Computer system, controller, switch, and communication method
EP2608459B1 (en) Router, virtual cluster router system and establishing method thereof
WO2011155510A1 (en) Communication system, control apparatus, packet capture method and program
WO2012081549A1 (en) Computer system, controller, controller manager, and communication path analysis method
JPWO2012090996A1 (en) Information system, control device, virtual network providing method and program
CN103069754A (en) Communication device, communication system, communication method, and recording medium
WO2021047011A1 (en) Data processing method and apparatus, and computer storage medium
JP6492660B2 (en) COMMUNICATION SYSTEM, CONTROL DEVICE, CONTROL METHOD, AND PROGRAM
US11336564B1 (en) Detection of active hosts using parallel redundancy protocol in software defined networks
Kumar et al. Open flow switch with intrusion detection system
US20230061491A1 (en) Improving efficiency and fault tolerance in a software defined network using parallel redundancy protocol
WO2018095095A1 (en) Method and apparatus for establishing disjoint path
KR20130096762A (en) Server management apparatus, server management method, and program
WO2014104277A1 (en) Control apparatus, communication system, communication node control method and program
US11750502B2 (en) Detection of in-band software defined network controllers using parallel redundancy protocol
US20230061215A1 (en) Detection of parallel redundancy protocol traffic in software defined networks
Lin et al. A Routing Framework in Software Defined Network Environment

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20140808

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAX Request for extension of the european patent (deleted)
RA4 Supplementary search report drawn up and despatched (corrected)

Effective date: 20150813

RIC1 Information provided on ipc code assigned before grant

Ipc: H04L 12/24 20060101AFI20150807BHEP

Ipc: H04L 12/28 20060101ALI20150807BHEP

Ipc: H04L 12/46 20060101ALI20150807BHEP

Ipc: H04L 12/64 20060101ALI20150807BHEP

Ipc: H04L 12/70 20130101ALI20150807BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20180228