WO2015196647A1 - Path selection method and apparatus for ring topology stacking system, and master device - Google Patents

Path selection method and apparatus for ring topology stacking system, and master device

Info

Publication number
WO2015196647A1
Authority
WO
WIPO (PCT)
Prior art keywords
path
bandwidth
bandwidth resource
stacking
link
Prior art date
Application number
PCT/CN2014/088797
Other languages
English (en)
French (fr)
Inventor
潘庭山
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Publication of WO2015196647A1 publication Critical patent/WO2015196647A1/zh

Definitions

  • The present invention relates to the field of communications, and in particular to a path selection method and apparatus for a ring topology stacking system, and to a master device.
  • Stacking combines multiple switches that support the stacking feature into a single logical switching device.
  • Stacking is a virtualization technology that, without changing the physical topology of the network, virtualizes multiple devices at the same network layer into a single logical device, thereby simplifying the network structure, simplifying network protocol deployment, and improving network reliability and manageability.
  • Stacking technology can simplify a complex network topology into a clearly layered network with simple interconnections.
  • Links between network layers are aggregated, which naturally eliminates loops, so there is no need to deploy protocols such as the Multiple Spanning Tree Protocol (MSTP) or the Virtual Router Redundancy Protocol (VRRP).
  • The connection topology of a stack is shown in Figure 1 and Figure 2; there are two structures:
  • Chain connection: a stacking cable connects the left port (right port) of one device to the right port (left port) of the next device, and so on; the right port (left port) of the first device and the left port (right port) of the last device are not connected by a stacking cable.
  • Ring connection: the right port (left port) of the first device and the left port (right port) of the last device of the chain connection are connected.
  • Master switch (Master): responsible for managing the entire stacking system. There is only one master switch in a stacking system. In Figure 1 and Figure 2, switch 1 is the master switch, that is, the master device.
  • Standby switch (Standby): the backup of the master switch. When the master switch fails, the standby switch takes over all services of the master switch. There is only one standby switch in a stack.
  • In Figure 1 and Figure 2, switch 2 is the standby switch, that is, the standby device; all switches other than the master switch are slave switches, the standby switch playing both the standby and slave roles.
  • The stack ID, that is, the member ID, is used to identify and manage member devices.
  • The stack IDs of all member devices in a stack are unique.
  • The stack priority is an attribute of a member device, used to determine the role of the member device during role election; the larger the value, the higher the priority and the greater the probability of being elected as the master switch.
  • A stacking split occurs when member devices are removed from a running stacking system while powered on, or stacking cables fail, so that one stacking system becomes several stacking systems.
  • After a split, multiple stacking systems with the same configuration may exist, causing IP address and Media Access Control (MAC) address conflicts in the network and network failures.
  • A chain-connected stack is more likely to split, because a single stacking-cable fault can split the stack; for safety, a ring connection is therefore generally recommended in practical networking.
  • Ring topology networking is a common networking mode for stacking systems. Since forwarding between devices on the ring can take two directions, path selection is a problem that must be considered in a ring topology.
  • A common practice is to determine the optimal path from the number of links on the path, the bandwidth of each link, and the duplex mode of the ports where the links reside.
  • In a typical ring network, however, the stacking ports on the ring all have the same bandwidth and are essentially all full duplex, so the only factor that actually takes effect is the number of links: the path with the fewest links is selected as the optimal path, that is, the shortest path.
  • This selection method does not consider link bandwidth utilization. In some cases, even if some links on the shortest path are heavily congested while the links on the other path are idle, the shortest path is still chosen as the optimal path, which leads to unreasonable resource allocation and reduces system availability and networking flexibility.
  • Embodiments of the present invention provide a path selection method and apparatus for a ring topology stacking system, and a master device, to solve the unreasonable resource allocation caused by the shortest-path selection method used in ring topology stacking systems of the related art.
  • In a path selection method for a ring topology stacking system, the ring topology stacking system includes a plurality of devices connected in a ring stack; the method includes: acquiring the current bandwidth resource occupancy of each stacking link on a first path and a second path in different directions between two of the devices; obtaining the path bandwidth resource occupancy of the first path and of the second path from the current bandwidth resource occupancy of the stacking links; and selecting, from the first path and the second path, the path with the smaller path bandwidth resource occupancy as the working path between the two devices.
  • In an embodiment, before acquiring the current bandwidth resource occupancy of the stacking links on the first path and the second path in different directions between the two devices, the method further includes: acquiring the priorities of the stacking ports on the first path and the second path, different port types corresponding to different priorities;
  • if the lowest priority among the stacking ports on the first path differs from the lowest priority among the stacking ports on the second path, the path whose lowest priority is higher is selected directly as the working path.
  • In an embodiment, before acquiring the current bandwidth resource occupancy of the stacking links on the first path and the second path in different directions between the two devices, the method further includes: acquiring the bandwidth supported by the stacking links on the first path and the second path, and taking the minimum bandwidth supported by the stacking links on each path as that path's path bandwidth;
  • if the path bandwidth of the first path and the path bandwidth of the second path are not equal, the path with the larger path bandwidth is selected directly as the working path.
  • In an embodiment, when it is determined that the lowest priority among the stacking ports on the first path is the same as the lowest priority among the stacking ports on the second path, before acquiring the current bandwidth resource occupancy of the stacking links on the first path and the second path in different directions between the two devices, the method further includes: acquiring the bandwidth supported by the stacking links on the two paths, and taking the minimum supported bandwidth on each path as that path's path bandwidth;
  • if the path bandwidth of the first path and the path bandwidth of the second path are not equal, the path with the larger path bandwidth is selected directly as the working path.
  • If the path bandwidth resource occupancies of the first path and the second path are equal, the total numbers of stacking links on the first path and on the second path are acquired respectively, and the path with the smaller total number of stacking links is selected as the working path.
  • Obtaining the path bandwidth resource occupancy of the first path and of the second path from the current bandwidth resource occupancy of the stacking links includes: setting a correspondence between the bandwidth resource occupancy of a stacking link and different weight values, a larger occupancy corresponding to a larger weight; converting the bandwidth resource occupancy of each stacking link into the corresponding weight; and calculating the sum of the weights of the stacking links on the first path and the sum of the weights of the stacking links on the second path respectively.
  • The sum of the weights of the stacking links on the first path is the path bandwidth resource occupancy of the first path;
  • the sum of the weights of the stacking links on the second path is the path bandwidth resource occupancy of the second path.
  • In an embodiment, after the working path between the two devices is determined, the method further includes: acquiring the current bandwidth resource occupancy of the stacking links on the working path according to a set rule; and determining whether there is a stacking link whose bandwidth resource occupancy is greater than a bandwidth resource occupancy threshold, and if so, re-determining the working path between the two devices.
  • The present invention further provides a path selection apparatus for a ring topology stacking system, used to determine a working path between two devices in the ring topology stacking system, including a first information acquiring module, a processing module, and a first selection module.
  • The first information acquiring module is configured to acquire the current bandwidth resource occupancy of the stacking links on a first path and a second path in different directions between the two devices;
  • the processing module is configured to obtain the path bandwidth resource occupancy of the first path and of the second path from the current bandwidth resource occupancy of the stacking links;
  • the first selection module is configured to select, from the first path and the second path, the path with the smaller path bandwidth resource occupancy as the working path between the two devices.
  • In an embodiment, the apparatus further includes a second information acquiring module and a second selection module. The second information acquiring module is configured to acquire, before the resource information acquiring module acquires the current bandwidth resource occupancy of the stacking links, the priorities of the stacking ports on the first path and the second path, different port types corresponding to different priorities;
  • the second selection module is configured to select, directly, the path whose lowest priority is higher as the working path when the lowest priority among the stacking ports on the first path differs from the lowest priority among the stacking ports on the second path.
  • In an embodiment, the apparatus further includes a third information acquiring module and a third selection module. The third information acquiring module is configured to acquire, when the lowest priority among the stacking ports on the first path is the same as the lowest priority among the stacking ports on the second path, and before the current bandwidth resource occupancy of the stacking links on the first path and the second path in different directions between the two devices is acquired, the bandwidth supported by the stacking links on the two paths, and to take the minimum bandwidth supported by the stacking links on the first path and on the second path as the path bandwidth of the first path and of the second path, respectively;
  • the third selection module is configured to select, directly, the path with the larger path bandwidth as the working path when the path bandwidth of the first path and the path bandwidth of the second path are not equal.
  • In an embodiment, the apparatus further includes a statistics module configured to count, when the path bandwidth resource occupancies of the first path and the second path are equal, the total number of stacking links on the first path and on the second path respectively, and to select the path with the smaller total number of stacking links as the working path.
  • The processing module includes a setting submodule, a conversion submodule, and a calculation submodule:
  • the setting submodule is configured to set the correspondence between the bandwidth resource occupancy of a stacking link and different weight values, a larger bandwidth resource occupancy corresponding to a larger weight value;
  • the conversion submodule is configured to convert the bandwidth resource occupancy of each stacking link into the corresponding weight value;
  • the calculation submodule is configured to calculate the sum of the weight values of the stacking links on the first path and the sum of the weight values of the stacking links on the second path respectively; the former is the path bandwidth resource occupancy of the first path, and the latter is the path bandwidth resource occupancy of the second path.
  • In an embodiment, the apparatus further includes a fourth information acquiring module, a determining module, and a triggering module.
  • The fourth information acquiring module is configured to acquire, according to a set rule after the working path between the two devices has been determined, the current bandwidth resource occupancy of the stacking links on the working path;
  • the determining module is configured to determine whether there is a stacking link whose bandwidth resource occupancy is greater than the bandwidth resource occupancy threshold, and if so, to notify the triggering module;
  • the triggering module is configured to trigger re-determination of the working path between the two devices.
  • An embodiment of the present invention further provides a master device of a ring topology stacking system, including a memory and a processor; the memory is configured to store instructions, and the processor is configured, when determining a working path between two devices in the ring topology stacking system, to call the instructions to perform the following steps:
  • acquiring the current bandwidth resource occupancy of the stacking links on a first path and a second path in different directions between the two devices; obtaining the path bandwidth resource occupancy of the first path and of the second path from the current bandwidth resource occupancy of the stacking links; and selecting, from the first path and the second path, the path with the smaller path bandwidth resource occupancy as the working path between the two devices.
  • Embodiments of the present invention also provide a computer program, including program instructions which, when executed by the master device of a ring topology stacking system, enable the master device to perform the above method.
  • Embodiments of the present invention also provide a carrier carrying the above computer program.
  • The path selection method and apparatus and the master device of a ring topology stacking system provided by the embodiments of the present invention can avoid selecting a congested path as the working path and make resource allocation more reasonable, thereby improving system availability and networking flexibility.
  • FIG. 1 is a schematic structural diagram of a chain topology stacking system
  • FIG. 2 is a schematic structural diagram of a ring topology stacking system
  • FIG. 3 is a schematic flowchart of a path selection method of a ring topology stacking system according to Embodiment 1 of the present invention
  • FIG. 4 is a schematic structural diagram of a path selection device of a ring topology stacking system according to Embodiment 2 of the present invention.
  • FIG. 5 is a schematic structural diagram of a chain topology stacking system according to Embodiment 3 of the present invention.
  • Embodiment 1:
  • A ring topology stacking system includes a plurality of devices connected in a ring stack, one of which is the master device and one of which is the standby device, the remaining devices being slave devices; the standby device is also a slave device.
  • The ports connected between devices by stacking cables are stacking ports;
  • the connecting cable is a stacking link.
  • For example, in the ring topology stacking system of Figure 2, switch 1, switch 2, switch 3, and switch 4 are connected in a stack; each switch has two stacking ports, and the link connecting two stacking ports on different switches
  • is a stacking link.
  • From one device in a ring topology stacking system to another device there are two paths in different directions; in this embodiment one of them is called the first path and the other the second path, and there is at least one stacking link on each path.
  • For example, in Figure 2, between switch 1 and switch 2 there are the first path: switch 1 -> switch 2, and the second path: switch 1 -> switch 4 -> switch 3 -> switch 2.
  • There is one stacking link on the first path and three stacking links on the second path; for the stacking link between switch 1 and switch 2, in the direction switch 1 -> switch 2, the stacking port on switch 1 is the egress of the stacking
  • link and the stacking port on switch 2 is the ingress of the stacking link.
  • The bandwidth resource occupancy of a stacking link can be obtained from the traffic at the egress and/or ingress of the stacking link and the bandwidth supported by the egress and/or ingress. For example, if the stacking ports serving as the egress and the ingress of a stacking link both support a bandwidth of 10 Gbps (that is, they are 10-gigabit ports) and the current traffic through the port is 2 Gbps, the bandwidth resource occupancy of that stacking link is 20%; at the same time, the bandwidth supported by that stacking link is 10 Gbps.
  • Assuming every stacking link in Figure 2 supports 10 Gbps, the path bandwidth of the first path (switch 1 -> switch 2) and of the second path (switch 1 -> switch 4 -> switch 3 -> switch 2) is also 10 Gbps; that is, in this embodiment the bandwidth of the stacking link with the smallest supported bandwidth on a path is taken as the path bandwidth (a small sketch of these calculations follows).
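As an illustration of the calculation just described, the following Python sketch (function names are illustrative, not taken from the patent) derives a stacking link's bandwidth resource occupancy from the traffic through its port and the bandwidth the port supports, and takes a path's bandwidth as the minimum link bandwidth along the path.

```python
# Illustrative helpers (names are not from the patent).

def link_occupancy(traffic_gbps, port_bandwidth_gbps):
    """Bandwidth resource occupancy of a stacking link: traffic through the
    stacking port divided by the bandwidth the port supports."""
    return traffic_gbps / port_bandwidth_gbps

def path_bandwidth(link_bandwidths_gbps):
    """Path bandwidth: the smallest bandwidth supported by any stacking link
    on the path."""
    return min(link_bandwidths_gbps)

# The example from the text: 2 Gbps of traffic through a 10 Gbps stacking port.
assert abs(link_occupancy(2, 10) - 0.20) < 1e-9
# In Figure 2 every stacking link supports 10 Gbps, so the direct path (1 link)
# and the long path (3 links) both have a path bandwidth of 10 Gbps.
assert path_bandwidth([10]) == path_bandwidth([10, 10, 10]) == 10
```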
  • Stacking port types generally include full-duplex ports and half-duplex ports, with full-duplex ports having a higher priority than half-duplex ports.
  • Referring to Figure 3, the path selection method for a ring topology stacking system in this embodiment, when determining a working path between two devices in the ring topology stacking system, includes:
  • Step 301 Obtain the current bandwidth resource occupancy rate of the stacking link on the first path and the second path in the different directions between the two devices.
  • Step 302 Obtain a path bandwidth resource occupancy rate of the first path and the second path according to the current bandwidth resource occupancy rate of the stack link.
  • Step 303: Select, from the first path and the second path, the path with the smaller path bandwidth resource occupancy as the working path between the two devices.
  • In this embodiment, when the path bandwidth resource occupancies of the first path and the second path are equal, the total numbers of stacking links on the first path and on the second path may be acquired respectively, and the path with the smaller total number of stacking links selected as the working path; that is, the shortest path may then be selected as the working path, the number of stacking links being taken into account as well.
  • In this embodiment, the path with the smaller path bandwidth resource occupancy is selected from the first path and the second path as the working path between the two devices; this avoids selecting a congested path as the working path, makes resource allocation more reasonable, and thereby improves system availability and networking flexibility. A minimal sketch of this selection appears below.
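A minimal sketch of steps 301-303 plus the link-count tie-break, assuming each path is given as the list of its stacking links' current occupancy rates; the mapping from link occupancy to weight is left as a parameter (a concrete mapping appears later in this embodiment), and all names are illustrative rather than taken from the patent.

```python
from typing import Callable, Sequence

def path_occupancy(link_occupancies: Sequence[float],
                   weight_of: Callable[[float], float]) -> float:
    """Step 302: the path bandwidth resource occupancy is the sum of the
    weights of the stacking links on the path."""
    return sum(weight_of(occ) for occ in link_occupancies)

def select_working_path(first: Sequence[float], second: Sequence[float],
                        weight_of: Callable[[float], float] = lambda occ: occ) -> str:
    """Steps 301-303 plus the tie-break: pick the path with the smaller path
    occupancy; if the occupancies are equal, pick the path with fewer links."""
    occ1 = path_occupancy(first, weight_of)
    occ2 = path_occupancy(second, weight_of)
    if occ1 != occ2:
        return "first" if occ1 < occ2 else "second"
    return "first" if len(first) <= len(second) else "second"

# A congested direct link (80%) versus three idle links (10% each):
print(select_working_path([0.8], [0.1, 0.1, 0.1]))  # -> "second"
```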
  • Of course, when determining the path, in addition to bandwidth resource utilization, at least one of the type of the stacking ports on the path, the path bandwidth of the path, and even the number of stacking links on the path may also be taken into account. Several examples follow:
  • Example 1: determining the working path between the two devices by combining the type of the stacking ports on the paths with bandwidth resource utilization: the priorities of the stacking ports on the first path and the second path between the two devices are acquired, different port types corresponding to different priorities;
  • if the lowest priority among the stacking ports on the first path differs from the lowest priority among the stacking ports on the second path, the path whose lowest priority is higher is selected directly as the working path;
  • if the lowest priority among the stacking ports on the first path is the same as the lowest priority among the stacking ports on the second path, the current bandwidth resource occupancy of the stacking links on the first path and the second path in different directions between the two devices is then acquired;
  • the path bandwidth resource occupancies of the two paths are obtained from it, and the path with the smaller path bandwidth resource occupancy is selected as the working path between the two devices.
  • Example 2: determining the working path between the two devices by combining the path bandwidth of the paths with bandwidth resource utilization: the bandwidth supported by the stacking links on the first path and the second path between the two devices is acquired, and the minimum supported bandwidth on each path is taken as that path's path bandwidth; if the path bandwidth of the first path and the path bandwidth of the second path are not equal, the path with the larger path bandwidth is selected directly as the working path;
  • if the path bandwidth of the first path is equal to the path bandwidth of the second path, the current bandwidth resource occupancy of the stacking links on the first path and the second path is then acquired;
  • the path bandwidth resource occupancies are obtained from it, and the path with the smaller path bandwidth resource occupancy is selected as the working path between the two devices.
  • Example 3: determining the working path between the two devices by combining the type of the stacking ports on the paths, the path bandwidth of the paths, and bandwidth resource utilization (a sketch of this ordered decision appears after this example):
  • the priorities of the stacking ports on the first path and the second path between the two devices are acquired, different port types corresponding to different priorities;
  • if the lowest priority among the stacking ports on the first path differs from the lowest priority among the stacking ports on the second path, the path whose lowest priority is higher is selected directly as the working path;
  • if the lowest priorities are the same, the bandwidth supported by the stacking links on the first path and the second path between the two devices is acquired, and the minimum supported bandwidth on each path is taken
  • as that path's path bandwidth;
  • if the path bandwidth of the first path and the path bandwidth of the second path are not equal, the path with the larger path bandwidth is selected directly as the working path;
  • if they are equal, the current bandwidth resource occupancy of the stacking links on the first path and the second path is then acquired;
  • the path bandwidth resource occupancies are obtained from it, and the path with the smaller path bandwidth resource occupancy is selected as the working path between the two devices. The above examples are only illustrative; other approaches derived from them are also within the scope of the present application.
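The ordered tie-breaking of Example 3 could be sketched as follows; the data structure and function names are assumptions made only for illustration, and the plain occupancy sum at the end stands in for the weighted comparison described elsewhere in this embodiment.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PathInfo:
    port_priorities: List[int]     # stacking port priorities (full duplex > half duplex)
    link_bandwidths: List[float]   # bandwidth supported by each stacking link
    link_occupancies: List[float]  # current occupancy of each stacking link

def choose_path(first: PathInfo, second: PathInfo) -> str:
    """Example 3: compare port type first, then path bandwidth, and only then
    the bandwidth resource occupancy."""
    low1, low2 = min(first.port_priorities), min(second.port_priorities)
    if low1 != low2:                                  # higher lowest priority wins
        return "first" if low1 > low2 else "second"
    bw1, bw2 = min(first.link_bandwidths), min(second.link_bandwidths)
    if bw1 != bw2:                                    # larger path bandwidth wins
        return "first" if bw1 > bw2 else "second"
    occ1, occ2 = sum(first.link_occupancies), sum(second.link_occupancies)
    return "first" if occ1 <= occ2 else "second"      # smaller occupancy wins
```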
  • Since the bandwidth resource occupancy of the stacking links changes dynamically, in this embodiment the working path can also be adjusted dynamically according to the bandwidth occupancy of the stacking links on the working path, making resource allocation more reasonable. For this purpose a bandwidth resource occupancy threshold is set; it is generally set relatively high, for example 80% or more, to avoid frequent recalculation. In this case the application further includes the following steps after the working path between the two devices has been determined:
  • the current bandwidth resource occupancy of the stacking links on the working path is acquired according to a set rule (for example, at a set period); it is then determined whether any stacking link's occupancy exceeds the bandwidth resource occupancy threshold, and if so, the working path between the two devices is re-determined, either by selecting the other path directly or by recalculating completely according to the above procedure (a small periodic-check sketch follows).
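The dynamic adjustment described above might look like the following periodic check; the polling and re-selection callbacks and the 30-second period are assumptions, while the 80% threshold is the example value suggested in the text.

```python
import time

OCCUPANCY_THRESHOLD = 0.80  # the bandwidth resource occupancy threshold suggested above

def monitor_working_path(get_link_occupancies, reselect_path, period_s=30):
    """Per the set rule (here: a fixed polling period), check the occupancy of
    the stacking links on the working path; if any link is at or above the
    threshold, trigger re-determination of the working path."""
    while True:
        if any(occ >= OCCUPANCY_THRESHOLD for occ in get_link_occupancies()):
            reselect_path()
        time.sleep(period_s)
```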
  • In this embodiment, obtaining the path bandwidth resource occupancy of the first path and of the second path from the current bandwidth resource occupancy of the stacking links in step 302 includes:
  • setting the correspondence between the bandwidth resource occupancy of a stacking link and different weight values, a larger occupancy corresponding to a larger weight; converting the bandwidth resource occupancy of each stacking link into the corresponding weight;
  • and calculating the sum of the weights of the stacking links on the first path and the sum of the weights of the stacking links on the second path respectively; the sum of the weights of the stacking links on the first path is the path bandwidth resource occupancy of the first path,
  • and the sum of the weights of the stacking links on the second path is the path bandwidth resource occupancy of the second path.
  • The correspondence between occupancy and weight can be set flexibly according to the application scenario. One possible setting (to which the invention is not limited) is as follows. When the number of devices in the ring topology stacking system is greater than or equal to 4: if the egress bandwidth resource occupancy (that is, the stacking link bandwidth resource occupancy) equals the bandwidth resource occupancy threshold, the corresponding weight is (number of devices in the stacking system + 1) / 2; if the occupancy is greater than or equal to 99%, the weight is (number of devices in the stacking system - 1.5); if the occupancy is below the threshold, the weight is 1; and for an occupancy between the threshold and 99%, the weight is (number of devices + 1) / 2 + ((number of devices - 4) / 2) * (occupancy - trigger threshold) / (99% - trigger threshold).
  • When the number of devices is less than 4, the weight for an occupancy below the bandwidth resource occupancy threshold is 1, and the weight above the trigger threshold is (number of devices in the ring stacking system - 1.5). A sketch of this mapping follows.
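A sketch of the weight mapping just described, for a ring stacking system of n devices and a configured trigger threshold (for example 80%); this is one reading of the piecewise rule above, not a verified implementation. Note that at 99% occupancy the interpolation reduces to (n + 1) / 2 + (n - 4) / 2 = n - 1.5, so the pieces join consistently.

```python
def link_weight(occ, n_devices, threshold=0.80):
    """Map a stacking link's bandwidth resource occupancy to a weight; larger
    occupancy gives a larger weight (one reading of the rule in the text)."""
    if occ < threshold:
        return 1.0
    if n_devices < 4 or occ >= 0.99:
        return n_devices - 1.5
    if occ == threshold:
        return (n_devices + 1) / 2
    # threshold < occ < 99%: interpolate between (n + 1) / 2 and n - 1.5
    return (n_devices + 1) / 2 + ((n_devices - 4) / 2) * (occ - threshold) / (0.99 - threshold)

# The 5-switch example in Embodiment 3: a link at exactly the 80% threshold
# gets weight (5 + 1) / 2 = 3, while an idle link gets weight 1.
assert link_weight(0.80, 5) == 3.0
assert link_weight(0.10, 5) == 1.0
```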
  • Embodiment 2:
  • This embodiment provides a path selection apparatus for a ring topology stacking system, used to determine a working path between two devices in the ring topology stacking system.
  • Referring to Figure 4, it includes a first information acquiring module, a processing module, and a first selection module;
  • the first information acquiring module is configured to acquire the current bandwidth resource occupancy of the stacking links on a first path and a second path in different directions between the two devices;
  • the processing module is configured to obtain the path bandwidth resource occupancy of the first path and of the second path from the current bandwidth resource occupancy of the stacking links;
  • the first selection module is configured to select, from the first path and the second path, the path with the smaller path bandwidth resource occupancy as the working path between the two devices.
  • The path selection apparatus for a ring topology stacking system in this embodiment further includes a second information acquiring module and a second selection module. The second information acquiring module is configured to acquire, before the resource information acquiring module acquires the current bandwidth resource occupancy of the stacking links,
  • the priorities of the stacking ports on the first path and the second path, different port types corresponding to different priorities;
  • the second selection module is configured to select, directly, the path whose lowest priority is higher as the working path when the lowest priority among the stacking ports on the first path differs from the lowest priority among the stacking ports on the second path.
  • The apparatus in this embodiment further includes a third information acquiring module and a third selection module. The third information acquiring module is configured to acquire, when the lowest priority among the stacking ports on the first path is the same as the lowest priority among the stacking ports on the second path, and before the current bandwidth resource occupancy of the stacking links on the first path and the second path in different directions between the two devices is acquired, the bandwidth supported by the stacking links on the first path and the second path, and to take the minimum bandwidth supported by the stacking links on each path as that path's path bandwidth;
  • the third selection module is configured to select, directly, the path with the larger path bandwidth as the working path when the path bandwidth of the first path and the path bandwidth of the second path are not equal.
  • The apparatus in this embodiment further includes a statistics module configured to count, when the path bandwidth resource occupancies of the first path and the second path are equal, the total number of stacking links on the first path and on the second path respectively, and to select the path with the smaller total number of stacking links as the working path.
  • The apparatus in this embodiment further includes a fourth information acquiring module, a determining module, and a triggering module.
  • The fourth information acquiring module is configured to acquire, according to the set rule after the working path between the two devices has been determined, the current bandwidth resource occupancy of the stacking links on the working path;
  • the determining module is configured to determine whether there is a stacking link whose bandwidth resource occupancy is greater than the bandwidth resource occupancy threshold, and if so, to notify the triggering module;
  • the triggering module is configured to trigger re-determination of the working path between the two devices; when the path is re-determined, the other path may be selected directly as the working path, or the path may be recalculated completely according to the above procedure.
  • In this embodiment, the processing module of the path selection apparatus includes a setting submodule, a conversion submodule, and a calculation submodule:
  • the setting submodule is configured to set the correspondence between the bandwidth resource occupancy of a stacking link and different weight values, a larger bandwidth resource occupancy corresponding to a larger weight value;
  • the specific setting is not repeated here;
  • the conversion submodule is configured to convert the bandwidth resource occupancy of each stacking link into the corresponding weight value;
  • the calculation submodule is configured to calculate the sum of the weight values of the stacking links on the first path and the sum of the weight values of the stacking links on the second path respectively; the sum of the weight values of the stacking links on the first path is the path bandwidth resource occupancy of the first path,
  • and the sum of the weight values of the stacking links on the second path is the path bandwidth resource occupancy of the second path.
  • Embodiment 3:
  • The master device of the ring topology stacking system in this embodiment includes a memory and a processor; the memory is configured to store instructions, and the processor is configured, when determining a working path between two devices in the ring topology stacking system (including the case where one of the two devices is the master device), to call the instructions in the memory and perform the following steps:
  • acquiring the current bandwidth resource occupancy of the stacking links on a first path and a second path in different directions between the two devices; obtaining the path bandwidth resource occupancy of the first path and of the second path from the current bandwidth resource occupancy of the stacking links; and selecting, from the first path and the second path, the path with the smaller path bandwidth resource occupancy as the working path between the two devices.
  • Besides the above steps, the processor may call the instructions to perform other steps such as those exemplified in Embodiment 1.
  • The following takes a specific ring topology stacking system as an example. As shown in Figure 5, the system includes switch 1, switch 2, switch 3, switch 4, and switch 5; the five devices are connected pairwise through stacking ports
  • into a ring topology stacking system, and all stacking ports are full-duplex 10G ports.
  • In the topology discovery phase it is assumed that the stacking ports carry no service traffic.
  • After topology election, switch 1 is the master device,
  • switch 2 is the standby device,
  • and switches 3, 4, and 5 are slave devices.
  • Assume the egress bandwidth utilization threshold of a stacking port that triggers the master device to recalculate paths (that is, the bandwidth resource occupancy threshold) is 80%.
  • the stacking system operates as follows:
  • 1. Switches 2, 3, 4, and 5 report their stacking ports and stacking port bandwidth utilization (currently 0) to the master device, switch 1.
  • 2. The master device, switch 1, runs the path selection algorithm. Since all port bandwidths are 10G, all ports work in full duplex, and the egress bandwidth utilization of every port is below 80%, the path is determined entirely by link length; for example, the path from switch 4
  • to switch 2 is switch 4 -> switch 1 -> switch 2,
  • and the path from switch 1 to switch 2 is switch 1 -> switch 2.
  • 3. After the system has run for some time, suppose switch 1 has 20 ports carrying a total of 800M of traffic destined for switch 2. The slave devices and the standby device periodically advertise the egress bandwidth utilization of their stacking ports to the master device, switch 1;
  • switch 1 finds that the bandwidth utilization of all stacking ports is below 80%, so the path is not recalculated.
  • 4. The 800M of traffic from the 20 ports of switch 1 goes directly to switch 2, and the stacking port of switch 2 (the port between switches 1 and 2) reaches an egress bandwidth utilization of 80%.
  • 5. The slave devices and the standby device periodically advertise the bandwidth utilization of their stacking ports to the master device, switch 1;
  • switch 1 finds that the egress bandwidth utilization of a stacking port is greater than or equal to 80% and re-triggers the path selection algorithm within the stacking system.
  • Taking the weight setting of Embodiment 1 as an example:
  • the link weight of the link between switches 1 and 2 is (5 + 1) / 2 = 3, and the link weight of every other link is 1, so the path from switch 1 to switch 2 is unchanged. There are two possible paths from switch 4 to switch 2: switch 4 -> switch 1 -> switch 2, with a total link weight of 1 + 3 = 4, and switch 4 -> switch 5 -> switch 3 -> switch 2, with a total link weight of 1 + 1 + 1 = 3. The path with the smaller sum of link weights is preferred, so the master device finally computes the path from switch 4 to switch 2 as switch 4 -> switch 5 -> switch 3 -> switch 2; path selection between other devices in the stacking system is similar.
  • 6. The master device finds that the path selection result differs from the previous one and notifies all other devices.
  • 7. Slave device switch 4 finds that its path to switch 2 has changed and programs the new forwarding behaviour in hardware. If at this point 5 ports carry 300M of traffic from switch 4 to switch 2, it will take the path switch 4 -> switch 5 -> switch 3 -> switch 2. If the algorithm of the present invention were not used, the traffic from switch 4 and from switch 1 to switch 2 would both take the shortest path, all passing over the link between switches 1 and 2, causing congestion and packet loss on that link and thereby reducing networking flexibility (the arithmetic above is checked in the short snippet below).
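For the scenario above, the arithmetic can be checked with a short standalone snippet (numbers taken directly from the example):

```python
n = 5                       # switches in the ring
w_congested = (n + 1) / 2   # link between switches 1 and 2, at the 80% threshold -> 3.0
w_idle = 1                  # every other stacking link

weight_via_switch_1 = w_idle + w_congested           # switch 4 -> 1 -> 2
weight_via_switches_5_3 = w_idle + w_idle + w_idle   # switch 4 -> 5 -> 3 -> 2
print(weight_via_switch_1, weight_via_switches_5_3)  # 4.0 3 -> the idle detour wins
```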
  • All or part of the steps of the above embodiments may also be implemented with integrated circuits; these steps may be made into individual integrated circuit modules, or several of the modules or steps may be made into a single integrated circuit module. Thus, the invention is not limited to any specific combination of hardware and software.
  • the devices/function modules/functional units in the above embodiments may be implemented by a general-purpose computing device, which may be centralized on a single computing device or distributed over a network of multiple computing devices.
  • When each device/function module/functional unit in the above embodiments is implemented in the form of a software function module and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium.
  • the above mentioned computer readable storage medium may be a read only memory, a magnetic disk or an optical disk or the like.
  • The embodiments of the present invention can avoid selecting a congested path as the working path and make resource allocation more reasonable, thereby improving system availability and networking flexibility.

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A path selection method and apparatus for a ring topology stacking system, and a master device. When determining a working path between two devices in the ring topology stacking system, the current bandwidth resource occupancy of each stacking link on a first path and a second path in different directions between the two devices is acquired; the path bandwidth resource occupancy of the first path and of the second path is obtained from the current bandwidth resource occupancy of each stacking link; and the path with the smaller path bandwidth resource occupancy is then selected from the first path and the second path as the working path between the two devices.

Description

Path selection method and apparatus for ring topology stacking system, and master device
Technical field
The present invention relates to the field of communications, and in particular to a path selection method and apparatus for a ring topology stacking system, and to a master device.
Background
Stacking combines multiple switches that support the stacking feature into a single logical switching device. Stacking is a virtualization technology that, without changing the physical topology of the network, virtualizes multiple devices at the same network layer into a single logical device, thereby simplifying the network structure, simplifying network protocol deployment, and improving network reliability and manageability.
Stacking has mainly the following advantages:
1. High reliability: 1:N redundancy backup among the member devices of a stacking system; stacking supports cross-device link aggregation, providing cross-device link redundancy backup.
2. Strong network scalability: by adding member devices, the number of ports, the bandwidth, and the processing capacity of the stacking system can easily be expanded.
3. Simplified network structure and protocol deployment: stacking technology can simplify a complex network topology into a clearly layered network with simple interconnections; links between network layers are aggregated, naturally eliminating loops, so there is no longer any need to deploy protocols such as the Multiple Spanning Tree Protocol (MSTP) or the Virtual Router Redundancy Protocol (VRRP).
4. Simplified configuration and management: once a stack is formed, multiple physical devices are virtualized into one device, and a user can log in to the stacking system through any member device to configure and manage all member devices of the stacking system in a unified way.
The connection topology of a stack is shown in Figure 1 and Figure 2; there are two structures:
1. Chain connection: a stacking cable connects the left port (right port) of one device to the right port (left port) of the next device, and so on; the right port (left port) of the first device and the left port (right port) of the last device are not connected by a stacking cable.
2. Ring connection: the right port (left port) of the first device and the left port (right port) of the last device of the chain connection are connected.
Every individual device in a stack becomes a member device. By function, member devices fall into three roles:
1. Master switch (Master): responsible for managing the entire stacking system. There is only one master switch in a stacking system. In Figure 1 and Figure 2, switch 1 is the master switch, that is, the master device.
2. Standby switch (Standby): the backup of the master switch. When the master switch fails, the standby switch takes over all services of the master switch. There is only one standby switch in a stacking system. In Figure 1 and Figure 2, switch 2 is the standby switch, that is, the standby device.
3. Slave switch (Slave): all switches in the stack other than the master switch are slave switches, and the standby switch plays both the standby and slave roles. In Figure 1 and Figure 2, all switches other than switch 1 are slave switches, that is, slave devices.
The stack ID, that is, the member ID, is used to identify and manage member devices; the stack IDs of all member devices in a stacking system are unique.
The stack priority is an attribute of a member device, mainly used to determine the role of the member device during role election; the larger the value, the higher the priority and the greater the probability of being elected as the master switch.
A stacking split means that, in a stacking system running in steady state, some members are removed while powered on or stacking cables fail, so that one stacking system becomes several stacking systems. After a split, multiple stacking systems with the same configuration may exist, causing conflicts of IP addresses and Media Access Control (MAC) addresses in the network and thus network failures. A chain-connected stack is more likely to split, because a single stacking-cable fault may split the stack; for safety, a ring connection is therefore generally recommended in practical networking.
Based on the above, ring topology networking is a commonly used networking mode for stacking systems. Since forwarding between devices on the ring can take paths in two directions, path selection is a problem that must be considered in a ring topology. A common practice in the industry is to determine the optimal path jointly from the number of links on the path, the bandwidth of each link, and the duplex mode of the ports where the links reside. In a typical ring network, however, the stacking ports on the ring all have the same bandwidth and essentially all work in full duplex, so the only factor that actually takes effect is the number of links: the path with the fewest links is the selected optimal path, that is, the shortest path. This selection method does not consider link bandwidth utilization; in some cases, even if some links on the shortest path are heavily congested while the links on the other path are idle, the shortest path is still taken as the optimal path, leading to unreasonable resource allocation and reducing system availability and networking flexibility.
Summary of the invention
Embodiments of the present invention provide a path selection method and apparatus for a ring topology stacking system, and a master device, to solve the unreasonable resource allocation caused by the shortest-path selection method used in ring topology stacking systems of the related art.
A path selection method for a ring topology stacking system, the ring topology stacking system including a plurality of devices connected in a ring stack, the method including:
acquiring the current bandwidth resource occupancy of each stacking link on a first path and a second path in different directions between the two devices;
obtaining the path bandwidth resource occupancy of the first path and of the second path from the current bandwidth resource occupancy of the stacking links;
selecting, from the first path and the second path, the path with the smaller path bandwidth resource occupancy as the working path between the two devices.
In an embodiment of the present invention, before acquiring the current bandwidth resource occupancy of the stacking links on the first path and the second path in different directions between the two devices, the method further includes:
acquiring the priorities of the stacking ports on the first path and the second path, different port types corresponding to different priorities;
if the lowest priority among the stacking ports on the first path differs from the lowest priority among the stacking ports on the second path, directly selecting the path whose lowest priority is higher as the working path.
In an embodiment of the present invention, before acquiring the current bandwidth resource occupancy of the stacking links on the first path and the second path in different directions between the two devices, the method further includes:
acquiring the bandwidth supported by the stacking links on the first path and the second path, and taking the minimum bandwidth supported by the stacking links on the first path and on the second path as the path bandwidth of the first path and of the second path, respectively;
if the path bandwidth of the first path and the path bandwidth of the second path are not equal, directly selecting the path with the larger path bandwidth as the working path.
In an embodiment of the present invention, when it is determined that the lowest priority among the stacking ports on the first path is the same as the lowest priority among the stacking ports on the second path, before acquiring the current bandwidth resource occupancy of the stacking links on the first path and the second path in different directions between the two devices, the method further includes:
acquiring the bandwidth supported by the stacking links on the first path and the second path, and taking the minimum bandwidth supported by the stacking links on the first path and on the second path as the path bandwidth of the first path and of the second path, respectively;
if the path bandwidth of the first path and the path bandwidth of the second path are not equal, directly selecting the path with the larger path bandwidth as the working path.
In an embodiment of the present invention, if the path bandwidth resource occupancies of the first path and the second path are equal, the total numbers of stacking links on the first path and on the second path are acquired respectively, and the path with the smaller total number of stacking links is selected as the working path.
In an embodiment of the present invention, obtaining the path bandwidth resource occupancy of the first path and of the second path from the current bandwidth resource occupancy of the stacking links includes:
setting the correspondence between the bandwidth resource occupancy of a stacking link and different weight values, a larger bandwidth resource occupancy corresponding to a larger weight value;
converting the bandwidth resource occupancy of each stacking link into the corresponding weight value;
calculating the sum of the weight values of the stacking links on the first path and the sum of the weight values of the stacking links on the second path respectively; the sum of the weight values of the stacking links on the first path is the path bandwidth resource occupancy of the first path, and the sum of the weight values of the stacking links on the second path is the path bandwidth resource occupancy of the second path.
In an embodiment of the present invention, after the working path between the two devices is determined, the method further includes:
acquiring the current bandwidth resource occupancy of the stacking links on the working path according to a set rule;
determining whether there is a stacking link whose bandwidth resource occupancy is greater than a bandwidth resource occupancy threshold, and if so, re-determining the working path between the two devices.
To solve the above problem, the present invention further provides a path selection apparatus for a ring topology stacking system, used to determine a working path between two devices in the ring topology stacking system, including a first information acquiring module, a processing module, and a first selection module;
the first information acquiring module is configured to acquire the current bandwidth resource occupancy of the stacking links on a first path and a second path in different directions between the two devices;
the processing module is configured to obtain the path bandwidth resource occupancy of the first path and of the second path from the current bandwidth resource occupancy of the stacking links;
the first selection module is configured to select, from the first path and the second path, the path with the smaller path bandwidth resource occupancy as the working path between the two devices.
In an embodiment of the present invention, the apparatus further includes a second information acquiring module and a second selection module; the second information acquiring module is configured to acquire, before the resource information acquiring module acquires the current bandwidth resource occupancy of the stacking links, the priorities of the stacking ports on the first path and the second path, different port types corresponding to different priorities;
the second selection module is configured to directly select the path whose lowest priority is higher as the working path when the lowest priority among the stacking ports on the first path differs from the lowest priority among the stacking ports on the second path.
In an embodiment of the present invention, the apparatus further includes a third information acquiring module and a third selection module; the third information acquiring module is configured to acquire, when the lowest priority among the stacking ports on the first path is the same as the lowest priority among the stacking ports on the second path, and before the current bandwidth resource occupancy of the stacking links on the first path and the second path in different directions between the two devices is acquired, the bandwidth supported by the stacking links on the first path and the second path, and to take the minimum bandwidth supported by the stacking links on the first path and on the second path as the path bandwidth of the first path and of the second path, respectively;
the third selection module is configured to directly select the path with the larger path bandwidth as the working path when the path bandwidth of the first path and the path bandwidth of the second path are not equal.
In an embodiment of the present invention, the apparatus further includes a statistics module configured to count, when the path bandwidth resource occupancies of the first path and the second path are equal, the total numbers of stacking links on the first path and on the second path respectively, and to select the path with the smaller total number of stacking links as the working path.
In an embodiment of the present invention, the processing module includes a setting submodule, a conversion submodule, and a calculation submodule:
the setting submodule is configured to set the correspondence between the bandwidth resource occupancy of a stacking link and different weight values, a larger bandwidth resource occupancy corresponding to a larger weight value;
the conversion submodule is configured to convert the bandwidth resource occupancy of each stacking link into the corresponding weight value;
the calculation submodule is configured to calculate the sum of the weight values of the stacking links on the first path and the sum of the weight values of the stacking links on the second path respectively; the sum of the weight values of the stacking links on the first path is the path bandwidth resource occupancy of the first path, and the sum of the weight values of the stacking links on the second path is the path bandwidth resource occupancy of the second path.
In an embodiment of the present invention, the apparatus further includes a fourth information acquiring module, a determining module, and a triggering module; the fourth information acquiring module is configured to acquire, according to a set rule after the working path between the two devices has been determined, the current bandwidth resource occupancy of the stacking links on the working path;
the determining module is configured to determine whether there is a stacking link whose bandwidth resource occupancy is greater than the bandwidth resource occupancy threshold, and if so, to notify the triggering module;
the triggering module is configured to trigger re-determination of the working path between the two devices.
To solve the above problem, an embodiment of the present invention further provides a master device of a ring topology stacking system, including a memory and a processor; the memory is configured to store instructions, and the processor is configured, when determining a working path between two devices in the ring topology stacking system, to call the instructions and perform the following steps:
acquiring the current bandwidth resource occupancy of the stacking links on a first path and a second path in different directions between the two devices;
obtaining the path bandwidth resource occupancy of the first path and of the second path from the current bandwidth resource occupancy of the stacking links;
selecting, from the first path and the second path, the path with the smaller path bandwidth resource occupancy as the working path between the two devices.
An embodiment of the present invention also provides a computer program, including program instructions which, when executed by the master device of a ring topology stacking system, enable the master device to perform the above method.
An embodiment of the present invention also provides a carrier carrying the above computer program.
The path selection method and apparatus and the master device of a ring topology stacking system provided by the embodiments of the present invention can avoid selecting a congested path as the working path and make resource allocation more reasonable, thereby improving system availability and networking flexibility.
Brief description of the drawings
FIG. 1 is a schematic structural diagram of a chain topology stacking system;
FIG. 2 is a schematic structural diagram of a ring topology stacking system;
FIG. 3 is a schematic flowchart of a path selection method for a ring topology stacking system according to Embodiment 1 of the present invention;
FIG. 4 is a schematic structural diagram of a path selection apparatus for a ring topology stacking system according to Embodiment 2 of the present invention;
FIG. 5 is a schematic structural diagram of a chain topology stacking system according to Embodiment 3 of the present invention.
Preferred embodiments of the present invention
Specific embodiments of the present invention are described in detail below with reference to the accompanying drawings. Where no conflict arises, the embodiments of the present invention and the features in the embodiments may be combined with each other arbitrarily.
Embodiment 1:
For a better understanding of the present invention, this embodiment first briefly describes the ring topology stacking system. A ring topology stacking system includes a plurality of devices connected in a ring stack, one of which is the master device and one of which is the standby device, the remaining devices being slave devices; the standby device is also a slave device. The ports connected between devices by stacking cables are stacking ports, and the connecting cable is a stacking link. For example, the ring topology stacking system shown in Figure 2 consists of switch 1, switch 2, switch 3, and switch 4 connected in a stack; each switch has two stacking ports, and the link connecting two stacking ports on different switches is a stacking link. From one device in a ring topology stacking system to another device there are paths in two different directions; in this embodiment one of them is called the first path and the other the second path, and there is at least one stacking link on each path. For example, in Figure 2, between switch 1 and switch 2 there are the first path: switch 1 -> switch 2, and the second path: switch 1 -> switch 4 -> switch 3 -> switch 2; there is one stacking link on the first path and three stacking links on the second path. For the stacking link between switch 1 and switch 2, in the direction switch 1 -> switch 2 the corresponding stacking port on switch 1 is the egress of the stacking link and the stacking port on switch 2 is the ingress of the stacking link. In this embodiment, the bandwidth resource occupancy of a stacking link can be obtained from the traffic at the egress and/or ingress of the stacking link and the bandwidth supported by the egress and/or ingress. For example, if the stacking ports serving as the egress and the ingress of a stacking link both support a bandwidth of 10 Gbps (that is, they are 10-gigabit ports) and the current traffic through the port is 2 Gbps, the bandwidth resource occupancy of that stacking link is 20%; at the same time, the bandwidth supported by that stacking link is 10 Gbps. Assuming every stacking link in Figure 2 supports 10 Gbps, the path bandwidth of the first path (switch 1 -> switch 2) and of the second path (switch 1 -> switch 4 -> switch 3 -> switch 2) is also 10 Gbps; that is, in this embodiment the bandwidth of the stacking link with the smallest supported bandwidth on a path is taken as the path bandwidth.
In this embodiment, stacking port types generally include full-duplex ports and half-duplex ports, with full-duplex ports having a higher priority than half-duplex ports.
Referring to Figure 3, the path selection method for a ring topology stacking system in this embodiment, when determining a working path between two devices in the ring topology stacking system, includes:
Step 301: acquiring the current bandwidth resource occupancy of the stacking links on the first path and the second path in different directions between the two devices;
Step 302: obtaining the path bandwidth resource occupancy of the first path and of the second path from the current bandwidth resource occupancy of the stacking links;
Step 303: selecting, from the first path and the second path, the path with the smaller path bandwidth resource occupancy as the working path between the two devices.
In this embodiment, when the path bandwidth resource occupancies of the first path and the second path are equal, the total numbers of stacking links on the first path and on the second path may be acquired respectively and the path with the smaller total number of stacking links selected as the working path; that is, the shortest path may then be selected as the working path, the number of stacking links being taken into account as well.
In this embodiment, the path with the smaller path bandwidth resource occupancy is selected from the first path and the second path as the working path between the two devices; this avoids selecting a congested path as the working path, makes resource allocation more reasonable, and thereby improves system availability and networking flexibility.
Of course, when determining the path, in addition to bandwidth resource utilization, at least one of the type of the stacking ports on the path, the path bandwidth of the path, and even the number of stacking links on the path may also be taken into account. Several examples follow:
Example 1:
Determining the working path between the two devices by combining the type of the stacking ports on the paths with bandwidth resource utilization:
the priorities of the stacking ports on the first path and the second path between the two devices are acquired, different port types corresponding to different priorities;
if the lowest priority among the stacking ports on the first path differs from the lowest priority among the stacking ports on the second path, the path whose lowest priority is higher is selected directly as the working path;
if the lowest priority among the stacking ports on the first path is the same as the lowest priority among the stacking ports on the second path, the current bandwidth resource occupancy of each stacking link on the first path and the second path in different directions between the two devices is then acquired;
the path bandwidth resource occupancy of the first path and of the second path is obtained from the current bandwidth resource occupancy of the stacking links;
the path with the smaller path bandwidth resource occupancy is selected from the first path and the second path as the working path between the two devices.
Example 2:
Determining the working path between the two devices by combining the path bandwidth of the paths with bandwidth resource utilization:
the bandwidth supported by the stacking links on the first path and the second path between the two devices is acquired, and the minimum bandwidth supported by the stacking links on each path is taken as the path bandwidth of the first path and of the second path, respectively;
if the path bandwidth of the first path and the path bandwidth of the second path are not equal, the path with the larger path bandwidth is selected directly as the working path;
if the path bandwidth of the first path is equal to the path bandwidth of the second path, the current bandwidth resource occupancy of the stacking links on the first path and the second path in different directions between the two devices is then acquired;
the path bandwidth resource occupancy of the first path and of the second path is obtained from the current bandwidth resource occupancy of the stacking links;
the path with the smaller path bandwidth resource occupancy is selected from the first path and the second path as the working path between the two devices.
Example 3:
Determining the working path between the two devices by combining the type of the stacking ports on the paths, the path bandwidth of the paths, and bandwidth resource utilization:
the priorities of the stacking ports on the first path and the second path between the two devices are acquired, different port types corresponding to different priorities;
if the lowest priority among the stacking ports on the first path differs from the lowest priority among the stacking ports on the second path, the path whose lowest priority is higher is selected directly as the working path;
if the lowest priorities are the same, the bandwidth supported by the stacking links on the first path and the second path between the two devices is acquired, and the minimum bandwidth supported by the stacking links on each path is taken as the path bandwidth of the first path and of the second path, respectively;
if the path bandwidth of the first path and the path bandwidth of the second path are not equal, the path with the larger path bandwidth is selected directly as the working path;
if the path bandwidth of the first path is equal to the path bandwidth of the second path, the current bandwidth resource occupancy of the stacking links on the first path and the second path in different directions between the two devices is then acquired;
the path bandwidth resource occupancy of the first path and of the second path is obtained from the current bandwidth resource occupancy of the stacking links;
the path with the smaller path bandwidth resource occupancy is selected from the first path and the second path as the working path between the two devices.
The above examples are only illustrative. It should be understood that, besides the factors and specific rules described above, other approaches derived from these examples for determining the path are also within the scope of protection of the present application.
Since the bandwidth resource occupancy of the stacking links changes dynamically, in this embodiment the working path can also be adjusted dynamically according to the bandwidth resource occupancy of the stacking links on the working path, making resource allocation more reasonable. For this purpose, a bandwidth resource occupancy threshold is set; it is generally set relatively high, for example 80% or more, to avoid frequent recalculation. In this case the application further includes the following steps:
after the working path between the two devices has been determined, the method further includes:
acquiring the current bandwidth resource occupancy of the stacking links on the working path according to a set rule; the preset rule may be to acquire it at a set period;
determining whether there is a stacking link whose bandwidth resource occupancy is greater than the bandwidth resource occupancy threshold, and if so, re-determining the working path between the two devices. When re-determining the path between the two devices, the other path may be selected directly as the working path, or the path may be recalculated completely according to the above procedure.
In this embodiment, obtaining the path bandwidth resource occupancy of the first path and of the second path from the current bandwidth resource occupancy of the stacking links in step 302 includes:
setting the correspondence between the bandwidth resource occupancy of a stacking link and different weight values, a larger bandwidth resource occupancy corresponding to a larger weight value;
converting the bandwidth resource occupancy of each stacking link into the corresponding weight value;
calculating the sum of the weight values of the stacking links on the first path and the sum of the weight values of the stacking links on the second path respectively; the sum of the weight values of the stacking links on the first path is the path bandwidth resource occupancy of the first path, and the sum of the weight values of the stacking links on the second path is the path bandwidth resource occupancy of the second path.
It should be understood that the correspondence between the bandwidth resource occupancy of a stacking link and the different weight values can be set flexibly according to the specific application scenario and other factors. One specific setting is described below by way of example, it being understood that the setting is not limited to the following:
when the number of devices in the ring topology stacking system is greater than or equal to 4, an egress bandwidth resource occupancy (that is, the stacking link bandwidth resource occupancy) equal to the bandwidth resource occupancy threshold corresponds to a weight of (number of devices in the stacking system + 1) / 2; a bandwidth resource occupancy greater than or equal to 99% corresponds to a weight of (number of devices in the stacking system - 1.5); a bandwidth resource occupancy below the bandwidth resource occupancy threshold corresponds to a weight of 1; and a bandwidth resource occupancy between the bandwidth resource occupancy threshold and 99% corresponds to a weight of (number of devices in the stacking system + 1) / 2 + ((number of devices - 4) / 2) * (bandwidth utilization - bandwidth utilization trigger threshold) / (99% - bandwidth utilization trigger threshold).
When the number of devices is less than 4, the weight corresponding to an occupancy below the bandwidth resource occupancy threshold is 1, and the weight above the trigger threshold is (number of devices in the ring stacking system - 1.5).
Embodiment 2:
This embodiment provides a path selection apparatus for a ring topology stacking system, used to determine a working path between two devices in the ring topology stacking system. Referring to Figure 4, it includes a first information acquiring module, a processing module, and a first selection module;
the first information acquiring module is configured to acquire the current bandwidth resource occupancy of the stacking links on a first path and a second path in different directions between the two devices;
the processing module is configured to obtain the path bandwidth resource occupancy of the first path and of the second path from the current bandwidth resource occupancy of the stacking links;
the first selection module is configured to select, from the first path and the second path, the path with the smaller path bandwidth resource occupancy as the working path between the two devices.
The path selection apparatus for a ring topology stacking system in this embodiment further includes a second information acquiring module and a second selection module; the second information acquiring module is configured to acquire, before the resource information acquiring module acquires the current bandwidth resource occupancy of the stacking links, the priorities of the stacking ports on the first path and the second path, different port types corresponding to different priorities;
the second selection module is configured to directly select the path whose lowest priority is higher as the working path when the lowest priority among the stacking ports on the first path differs from the lowest priority among the stacking ports on the second path.
The path selection apparatus for a ring topology stacking system in this embodiment further includes a third information acquiring module and a third selection module; the third information acquiring module is configured to acquire, when the lowest priority among the stacking ports on the first path is the same as the lowest priority among the stacking ports on the second path, and before the current bandwidth resource occupancy of the stacking links on the first path and the second path in different directions between the two devices is acquired, the bandwidth supported by the stacking links on the first path and the second path, and to take the minimum bandwidth supported by the stacking links on the first path and on the second path as the path bandwidth of the first path and of the second path, respectively;
the third selection module is configured to directly select the path with the larger path bandwidth as the working path when the path bandwidth of the first path and the path bandwidth of the second path are not equal.
The path selection apparatus for a ring topology stacking system in this embodiment further includes a statistics module configured to count, when the path bandwidth resource occupancies of the first path and the second path are equal, the total numbers of stacking links on the first path and on the second path respectively, and to select the path with the smaller total number of stacking links as the working path.
The path selection apparatus for a ring topology stacking system in this embodiment further includes a fourth information acquiring module, a determining module, and a triggering module; the fourth information acquiring module is configured to acquire, according to a set rule after the working path between the two devices has been determined, the current bandwidth resource occupancy of the stacking links on the working path;
the determining module is configured to determine whether there is a stacking link whose bandwidth resource occupancy is greater than the bandwidth resource occupancy threshold, and if so, to notify the triggering module;
the triggering module is configured to trigger re-determination of the working path between the two devices. When re-determining the path between the two devices, the other path may be selected directly as the working path, or the path may be recalculated completely according to the above procedure.
In this embodiment, the processing module of the path selection apparatus for a ring topology stacking system includes a setting submodule, a conversion submodule, and a calculation submodule:
the setting submodule is configured to set the correspondence between the bandwidth resource occupancy of a stacking link and different weight values, a larger bandwidth resource occupancy corresponding to a larger weight value; the specific setting is not repeated here.
The conversion submodule is configured to convert the bandwidth resource occupancy of each stacking link into the corresponding weight value;
the calculation submodule is configured to calculate the sum of the weight values of the stacking links on the first path and the sum of the weight values of the stacking links on the second path respectively; the sum of the weight values of the stacking links on the first path is the path bandwidth resource occupancy of the first path, and the sum of the weight values of the stacking links on the second path is the path bandwidth resource occupancy of the second path.
Embodiment 3:
The master device of the ring topology stacking system in this embodiment includes a memory and a processor; the memory is configured to store instructions, and the processor is configured, when determining a working path between two devices in the ring topology stacking system (including the case where one of the two devices is the master device), to call the instructions in the memory and perform the following steps:
acquiring the current bandwidth resource occupancy of the stacking links on a first path and a second path in different directions between the two devices;
obtaining the path bandwidth resource occupancy of the first path and of the second path from the current bandwidth resource occupancy of the stacking links;
selecting, from the first path and the second path, the path with the smaller path bandwidth resource occupancy as the working path between the two devices.
Of course, besides the above steps, the processor may call the instructions to perform other steps such as those exemplified in Embodiment 1. The following takes a specific ring topology stacking system as an example. As shown in Figure 5, the system includes switch 1, switch 2, switch 3, switch 4, and switch 5; the five devices are connected pairwise through stacking ports into a ring topology stacking system, and all stacking ports are full-duplex 10G ports. In the topology discovery phase it is assumed that the stacking ports carry no service traffic. After topology election in the discovery phase, switch 1 is the master device, switch 2 is the standby device, and switches 3, 4, and 5 are slave devices. Assume that the egress bandwidth utilization threshold of a stacking port that triggers the master device to recalculate paths (that is, the bandwidth resource occupancy threshold) is 80%. The stacking system operates as follows:
1. Switches 2, 3, 4, and 5 report their stacking ports and stacking port bandwidth utilization (currently 0) to the master device, switch 1.
2. The master device, switch 1, runs the path selection algorithm. Since all port bandwidths are 10G, all ports work in full duplex, and the egress bandwidth utilization of every port is below 80%, the path is determined entirely by link length; for example, the path from switch 4 to switch 2 is switch 4 -> switch 1 -> switch 2, and the path from switch 1 to switch 2 is switch 1 -> switch 2.
After the system has run for some time, suppose switch 1 has 20 ports carrying a total of 800M of traffic destined for switch 2; the stacking system then operates as follows:
3. The slave devices and the standby device periodically advertise the egress bandwidth utilization of their stacking ports to the master device, switch 1; switch 1 finds that the bandwidth utilization of all stacking ports is below 80%, so the path is not recalculated.
4. The 800M of traffic from the 20 ports of switch 1 goes directly to switch 2, and the stacking port of switch 2 (the port between switches 1 and 2) reaches an egress bandwidth utilization of 80%.
5. The slave devices and the standby device periodically advertise the bandwidth utilization of their stacking ports to the master device, switch 1; switch 1 finds that the egress bandwidth utilization of a stacking port is greater than or equal to 80% and re-triggers the path selection algorithm within the stacking system. Taking the weight setting of Embodiment 1 as an example:
the link weight of the link between switches 1 and 2 is
(5 + 1) / 2 = 3, and the link weight of every other link is 1, so the path from switch 1 to switch 2 is unchanged. There are two possible paths from switch 4 to switch 2: one is switch 4 -> switch 1 -> switch 2, with a total link weight of 1 + 3 = 4; the other is switch 4 -> switch 5 -> switch 3 -> switch 2, with a total link weight of 1 + 1 + 1 = 3. The path with the smaller sum of link weights is preferred, so the master device finally computes the path from switch 4 to switch 2 as switch 4 -> switch 5 -> switch 3 -> switch 2; path selection between other devices in the stacking system is similar.
6. The master device finds that the path selection result differs from the previous one and notifies all other devices.
7. Slave device switch 4 finds that its path to switch 2 has changed and programs the new forwarding behaviour in hardware.
If at this point 5 ports carry 300M of traffic from switch 4 to switch 2, it will take the path switch 4 -> switch 5 -> switch 3 -> switch 2. If the algorithm of the present invention were not used, the traffic from switch 4 and from switch 1 to switch 2 would both take the shortest path, all passing over the link between switches 1 and 2, which would cause congestion and packet loss on that link and thereby reduce networking flexibility.
The above is a further detailed description of the present invention in combination with specific embodiments, and the specific implementation of the present invention shall not be considered limited to these descriptions. For those of ordinary skill in the art to which the present invention belongs, several simple deductions or substitutions may be made without departing from the concept of the present invention, all of which shall be regarded as falling within the scope of protection of the present invention.
Those of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be implemented using a computer program flow; the computer program may be stored in a computer-readable storage medium and executed on a corresponding hardware platform (such as a system, device, apparatus, or component), and when executed it includes one of, or a combination of, the steps of the method embodiments.
Optionally, all or part of the steps of the above embodiments may also be implemented with integrated circuits; these steps may be made into individual integrated circuit modules, or several of the modules or steps may be made into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The devices/function modules/functional units in the above embodiments may be implemented with general-purpose computing devices; they may be centralized on a single computing device or distributed over a network formed by multiple computing devices.
When the devices/function modules/functional units in the above embodiments are implemented in the form of software function modules and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. The above-mentioned computer-readable storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed by the present invention, and such changes or substitutions shall be covered by the scope of protection of the present invention. Therefore, the scope of protection of the present invention shall be subject to the scope of protection defined by the claims.
Industrial applicability
The embodiments of the present invention can avoid selecting a congested path as the working path and make resource allocation more reasonable, thereby improving system availability and networking flexibility.

Claims (16)

  1. A path selection method for a ring topology stacking system, the ring topology stacking system comprising a plurality of devices connected in a ring stack, the method comprising:
    acquiring the current bandwidth resource occupancy of the stacking links on a first path and a second path in different directions between two devices among the plurality of devices;
    obtaining the path bandwidth resource occupancy of the first path and the path bandwidth resource occupancy of the second path from the current bandwidth resource occupancy of the stacking links;
    selecting, from the first path and the second path, the path with the smaller path bandwidth resource occupancy as the working path between the two devices.
  2. The path selection method for a ring topology stacking system according to claim 1, wherein,
    before acquiring the current bandwidth resource occupancy of the stacking links on the first path and the second path in different directions between the two devices, the method further comprises:
    acquiring the priorities of the stacking ports on the first path and the second path, different port types corresponding to different priorities;
    if the lowest priority among the stacking ports on the first path differs from the lowest priority among the stacking ports on the second path, directly selecting the path whose lowest priority is higher as the working path.
  3. The path selection method for a ring topology stacking system according to claim 1, wherein,
    before acquiring the current bandwidth resource occupancy of the stacking links on the first path and the second path in different directions between the two devices, the method further comprises:
    acquiring the bandwidth supported by the stacking links on the first path and the second path, and taking the minimum bandwidth supported by the stacking links on the first path and on the second path as the path bandwidth of the first path and the path bandwidth of the second path, respectively;
    if the path bandwidth of the first path and the path bandwidth of the second path are not equal, directly selecting the path with the larger path bandwidth as the working path.
  4. The path selection method for a ring topology stacking system according to claim 2, wherein,
    when it is determined that the lowest priority among the stacking ports on the first path is the same as the lowest priority among the stacking ports on the second path, before acquiring the current bandwidth resource occupancy of each stacking link on the first path and the second path in different directions between the two devices, the method further comprises:
    acquiring the bandwidth supported by the stacking links on the first path and the second path, and taking the minimum bandwidth supported by the stacking links on the first path and on the second path as the path bandwidth of the first path and the path bandwidth of the second path, respectively;
    if the path bandwidth of the first path and the path bandwidth of the second path are not equal, directly selecting the path with the larger path bandwidth as the working path.
  5. The path selection method for a ring topology stacking system according to any one of claims 1 to 4, further comprising: if the path bandwidth resource occupancies of the first path and the second path are equal, acquiring the total numbers of stacking links on the first path and on the second path respectively, and selecting the path with the smaller total number of stacking links as the working path.
  6. The path selection method for a ring topology stacking system according to any one of claims 1 to 4, wherein obtaining the path bandwidth resource occupancy of the first path and of the second path from the current bandwidth resource occupancy of the stacking links comprises:
    setting the correspondence between the bandwidth resource occupancy of a stacking link and different weight values, a larger bandwidth resource occupancy corresponding to a larger weight value;
    converting the bandwidth resource occupancy of each stacking link into the corresponding weight value;
    calculating the sum of the weight values of the stacking links on the first path and the sum of the weight values of the stacking links on the second path respectively; the sum of the weight values of the stacking links on the first path being the path bandwidth resource occupancy of the first path, and the sum of the weight values of the stacking links on the second path being the path bandwidth resource occupancy of the second path.
  7. The path selection method for a ring topology stacking system according to any one of claims 1 to 4, wherein, after the working path between the two devices is determined, the method further comprises:
    acquiring the current bandwidth resource occupancy of the stacking links on the working path according to a set rule;
    determining whether there is a stacking link whose bandwidth resource occupancy is greater than a bandwidth resource occupancy threshold, and if so, re-determining the working path between the two devices.
  8. A path selection apparatus for a ring topology stacking system, comprising a first information acquiring module, a processing module, and a first selection module;
    the first information acquiring module being configured to acquire the current bandwidth resource occupancy of the stacking links on a first path and a second path in different directions between two devices in the ring topology stacking system;
    the processing module being configured to obtain the path bandwidth resource occupancy of the first path and the path bandwidth resource occupancy of the second path from the current bandwidth resource occupancy of each stacking link;
    the first selection module being configured to select, from the first path and the second path, the path with the smaller path bandwidth resource occupancy as the working path between the two devices.
  9. The path selection apparatus for a ring topology stacking system according to claim 8, further comprising a second information acquiring module and a second selection module, the second information acquiring module being configured to acquire, before the resource information acquiring module acquires the current bandwidth resource occupancy of the stacking links, the priorities of the stacking ports on the first path and the second path, different port types corresponding to different priorities;
    the second selection module being configured to directly select the path whose lowest priority is higher as the working path when the lowest priority among the stacking ports on the first path differs from the lowest priority among the stacking ports on the second path.
  10. The path selection apparatus for a ring topology stacking system according to claim 9,
    further comprising a third information acquiring module and a third selection module, the third information acquiring module being configured to acquire, when the lowest priority among the stacking ports on the first path is the same as the lowest priority among the stacking ports on the second path, and before the current bandwidth resource occupancy of the stacking links on the first path and the second path in different directions between the two devices is acquired, the bandwidth supported by the stacking links on the first path and the second path, and to take the minimum bandwidth supported by the stacking links on the first path and on the second path as the path bandwidth of the first path and the path bandwidth of the second path, respectively;
    the third selection module being configured to directly select the path with the larger path bandwidth as the working path when the path bandwidth of the first path and the path bandwidth of the second path are not equal.
  11. The path selection apparatus for a ring topology stacking system according to any one of claims 8 to 10, further comprising a statistics module configured to count, when the path bandwidth resource occupancies of the first path and the second path are equal, the total numbers of stacking links on the first path and on the second path respectively, and to select the path with the smaller total number of stacking links as the working path.
  12. The path selection apparatus for a ring topology stacking system according to any one of claims 8 to 10, wherein the processing module comprises a setting submodule, a conversion submodule, and a calculation submodule:
    the setting submodule being configured to set the correspondence between the bandwidth resource occupancy of a stacking link and different weight values, a larger bandwidth resource occupancy corresponding to a larger weight value;
    the conversion submodule being configured to convert the bandwidth resource occupancy of each stacking link into the corresponding weight value;
    the calculation submodule being configured to calculate the sum of the weight values of the stacking links on the first path and the sum of the weight values of the stacking links on the second path respectively, the sum of the weight values of the stacking links on the first path being the path bandwidth resource occupancy of the first path, and the sum of the weight values of the stacking links on the second path being the path bandwidth resource occupancy of the second path.
  13. The path selection apparatus for a ring topology stacking system according to any one of claims 8 to 10, further comprising a fourth information acquiring module, a determining module, and a triggering module; the fourth information acquiring module being configured to acquire, according to a set rule after the working path between the two devices has been determined, the current bandwidth resource occupancy of each stacking link on the working path,
    the determining module being configured to determine whether there is a stacking link whose bandwidth resource occupancy is greater than a bandwidth resource occupancy threshold, and if so, to notify the triggering module;
    the triggering module being configured to trigger re-determination of the working path between the two devices.
  14. A master device of a ring topology stacking system, comprising a memory and a processor; the memory being configured to store instructions; the processor being configured, when determining a working path between two devices in the ring topology stacking system, to call the instructions and perform the following steps:
    acquiring the current bandwidth resource occupancy of the stacking links on a first path and a second path in different directions between the two devices;
    obtaining the path bandwidth resource occupancy of the first path and the path bandwidth resource occupancy of the second path from the current bandwidth resource occupancy of the stacking links;
    selecting, from the first path and the second path, the path with the smaller path bandwidth resource occupancy as the working path between the two devices.
  15. A computer program, comprising program instructions which, when executed by the master device of a ring topology stacking system, enable the master device to perform the method according to any one of claims 1 to 7.
  16. A carrier carrying the computer program according to claim 15.
PCT/CN2014/088797 2014-06-24 2014-10-17 Path selection method and apparatus for ring topology stacking system, and master device WO2015196647A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410286510.X 2014-06-24
CN201410286510.XA CN105323170A (zh) 2014-06-24 2014-06-24 Path selection method and apparatus for ring topology stacking system, and master device

Publications (1)

Publication Number Publication Date
WO2015196647A1 true WO2015196647A1 (zh) 2015-12-30

Family

ID=54936602

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/088797 WO2015196647A1 (zh) 2014-06-24 2014-10-17 环形拓扑堆叠***路径选择方法、装置及主设备

Country Status (2)

Country Link
CN (1) CN105323170A (zh)
WO (1) WO2015196647A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113037627A (zh) * 2021-03-03 2021-06-25 烽火通信科技股份有限公司 Method and apparatus for selecting network service line resources
CN113067659A (zh) * 2020-01-02 2021-07-02 ***通信有限公司研究院 Information processing method, apparatus and device, and computer-readable storage medium
CN113225241A (zh) * 2021-04-19 2021-08-06 中国科学院计算技术研究所 Data transmission congestion control method and system for ring data packet networks

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105763483B (zh) * 2016-02-19 2019-01-22 新华三技术有限公司 Packet sending method and apparatus
CN106921527B (zh) * 2017-04-27 2019-08-06 新华三技术有限公司 Method and apparatus for handling stacking conflicts
CN108933720B (zh) * 2017-05-25 2021-11-02 中兴通讯股份有限公司 Information processing method, apparatus and system for a ring system, and storage medium
CN108282406B (zh) * 2017-12-15 2021-03-23 瑞斯康达科技发展股份有限公司 Data transmission method, stacking device, and stacking system
CN108199986B (zh) * 2017-12-15 2022-05-27 瑞斯康达科技发展股份有限公司 Data transmission method, stacking device, and stacking system
CN109246671B (zh) * 2018-09-30 2020-12-08 Oppo广东移动通信有限公司 Data transmission method, apparatus, and system
CN112104406B (zh) * 2020-08-19 2022-09-30 合肥工业大学 Adaptive autonomous task planning method and system
CN117424664A (zh) * 2023-12-19 2024-01-19 南京华鹄科技发展有限公司 Emergency broadcasting system and method based on a composite communication network

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080259920A1 (en) * 2007-04-17 2008-10-23 Tellabs Operations, Inc. Method and apparatus for establishing virtual resilient packet ring (RPR) subrings over a common communications path
CN101645850A (zh) * 2009-09-25 2010-02-10 杭州华三通信技术有限公司 Forwarding path determination method and device
CN102075428A (zh) * 2011-01-20 2011-05-25 中国电信股份有限公司 Joint routing configuration method and apparatus
CN102104533A (zh) * 2009-12-22 2011-06-22 杭州华三通信技术有限公司 Data transmission path optimization method for RRPP single-ring networks and ring network node
CN103595626A (zh) * 2013-10-15 2014-02-19 苏州拓康自动化技术有限公司 Method for implementing dynamic path planning in a ring network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102075248A (zh) * 2010-11-29 2011-05-25 东北大学 Waveband merging method in hybrid multi-layer optical networks

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080259920A1 (en) * 2007-04-17 2008-10-23 Tellabs Operations, Inc. Method and apparatus for establishing virtual resilient packet ring (RPR) subrings over a common communications path
CN101645850A (zh) * 2009-09-25 2010-02-10 杭州华三通信技术有限公司 Forwarding path determination method and device
CN102104533A (zh) * 2009-12-22 2011-06-22 杭州华三通信技术有限公司 Data transmission path optimization method for RRPP single-ring networks and ring network node
CN102075428A (zh) * 2011-01-20 2011-05-25 中国电信股份有限公司 Joint routing configuration method and apparatus
CN103595626A (zh) * 2013-10-15 2014-02-19 苏州拓康自动化技术有限公司 Method for implementing dynamic path planning in a ring network

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113067659A (zh) * 2020-01-02 2021-07-02 ***通信有限公司研究院 Information processing method, apparatus and device, and computer-readable storage medium
CN113067659B (zh) * 2020-01-02 2022-08-23 ***通信有限公司研究院 Information processing method, apparatus and device, and computer-readable storage medium
CN113037627A (zh) * 2021-03-03 2021-06-25 烽火通信科技股份有限公司 Method and apparatus for selecting network service line resources
CN113225241A (zh) * 2021-04-19 2021-08-06 中国科学院计算技术研究所 Data transmission congestion control method and system for ring data packet networks

Also Published As

Publication number Publication date
CN105323170A (zh) 2016-02-10

Similar Documents

Publication Publication Date Title
WO2015196647A1 (zh) Path selection method and apparatus for ring topology stacking system, and master device
JP7417825B2 (ja) スライスベースルーティング
US20210367853A1 (en) Transmit specific traffic along blocked link
Tam et al. Use of devolved controllers in data center networks
US8855116B2 (en) Virtual local area network state processing in a layer 2 ethernet switch
JP2023503063A (ja) 高性能コンピューティング環境においてプライベートファブリックにおける帯域幅輻輳制御を提供するためのシステムおよび方法
US8806031B1 (en) Systems and methods for automatically detecting network elements
US8284791B2 (en) Systems and methods for load balancing of management traffic over a link aggregation group
US8630171B2 (en) Policing virtual connections
US20150124612A1 (en) Multi-tenant network provisioning
EP2777229A1 (en) System and method for providing deadlock free routing between switches in a fat-tree topology
JP2013510459A (ja) 分離的なパス計算アルゴリズム
WO2014044093A1 (en) Disjoint multi-paths with service guarantee extension
WO2016165142A1 (zh) 一种虚拟网络的保护方法和装置
WO2017053452A1 (en) Methods, systems, and computer readable media for advanced distribution in a link aggregation group
EP2901634A1 (en) Method and apparatus for communication path selection
US10284457B2 (en) System and method for virtual link trunking
Barakabitze et al. Multipath protections and dynamic link recoveryin softwarized 5G networks using segment routing
US20150301571A1 (en) Methods and apparatus for dynamic mapping of power outlets
Vanamoorthy et al. Congestion-free transient plane (CFTP) using bandwidth sharing during link failures in SDN
WO2015135284A1 (zh) Control method and system for data flow forwarding, and computer storage medium
WO2016095610A1 (zh) Method and system for restoring optical layer services
Szymanski Low latency energy efficient communications in global-scale cloud computing systems
Mon et al. Flow path computing in software defined networking
Zhang et al. VLAN-based routing infrastructure for an all-optical circuit switched LAN

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14895843

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14895843

Country of ref document: EP

Kind code of ref document: A1