CN114615155A - Method and device for deploying service - Google Patents

Method and device for deploying service

Info

Publication number
CN114615155A
Authority
CN
China
Prior art keywords: link, network, service, nodes, priority queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011340814.1A
Other languages
Chinese (zh)
Inventor
徐菊华
鲍磊
肖亚群
郑娟
陈新隽
方伟
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202011340814.1A
Publication of CN114615155A

Classifications

    • H ELECTRICITY; H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
        • H04L 41/5003 Managing SLA; Interaction between SLA and QoS (network service management, e.g. ensuring proper service fulfilment according to agreements)
        • H04L 41/0896 Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities (configuration management of networks or network elements)
        • H04L 41/12 Discovery or management of network topologies
        • H04L 45/32 Flooding (routing or path finding of packets in data switching networks)
    • H04W WIRELESS COMMUNICATION NETWORKS
        • H04W 28/20 Negotiating bandwidth (central resource management; negotiation of resources or communication parameters, e.g. negotiating bandwidth or QoS)
        • H04W 28/24 Negotiating SLA [Service Level Agreement]; Negotiating QoS [Quality of Service]

Abstract

The application discloses a method and a device for deploying a service. The method includes: acquiring a network topology of a network slice corresponding to a first priority queue, where the network topology includes a link between two adjacent nodes and performance parameters of the link; and determining, according to the SLA required by the service and the network topology, a first link for carrying the service, where the first link is a logical link in the network topology that meets the SLA, and the service belongs to the first priority queue. A corresponding apparatus is also disclosed. With the scheme of the application, services can be deployed efficiently while the SLA requirements of the deployed services are met.

Description

Method and device for deploying service
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method and an apparatus for deploying a service.
Background
Currently, Internet Protocol (IP) services are transmitted mainly over distributed routing. As shown in fig. 1, taking enterprise private-line services as an example, a committed access rate (CAR) is configured at the access controller (AC) side ingress of each private line (for example, in fig. 1, the AC-side ingress commits 100M for one enterprise private-line service and 200M for another). A network-side device decides whether a new private line can be admitted by manually querying the network load; for example, if the average load of the links along the planned private line is below 70%, the new private line is considered acceptable, but this does not guarantee that the private line will not lose packets. Because the link bandwidth utilization can only be judged manually, the efficiency is too low; moreover, a judgment based on average bandwidth utilization is inaccurate, so packet loss cannot be prevented.
Disclosure of Invention
The application provides a method and a device for deploying a service, which can achieve efficient service deployment while meeting the service level agreement (SLA) requirement of the deployed service.
In a first aspect, a method for deploying a service is provided, the method including: acquiring a network topology of a network slice corresponding to a first priority queue, where the network topology includes a link between two adjacent nodes and performance parameters of the link; and determining, according to the SLA required by the service and the network topology, a first link for carrying the service, where the first link is a logical link in the network topology that meets the SLA, and the service belongs to the first priority queue.
In this aspect, the network topology of the network slice corresponding to the first priority queue is obtained, and the first link carrying the service is determined according to the SLA required by the service and the network topology, where the first link is a logical link in the network topology that meets the SLA, thereby achieving efficient service deployment while meeting the SLA requirement of the deployed service.
In a possible implementation, the acquiring a network topology of a network slice corresponding to a first priority queue includes: acquiring, through interior gateway protocol (IGP) flooding, a performance parameter of any one of N nodes in a network and a link between any two nodes, where N is greater than 1; and acquiring the network topology of the network slice corresponding to the first priority queue according to the performance parameter of any node and the link between any two nodes.
In this implementation, the performance parameter of each node and the link between any two nodes in the network are acquired by IGP flooding, from which the network topology of the network slice corresponding to the first priority queue is obtained. The controller does not need to collect the performance parameters and links from each node one by one, which improves acquisition efficiency.
In yet another possible implementation, the performance parameter is a bandwidth. The acquiring, through IGP flooding, the performance parameter of any one of the N nodes in the network and the link between any two nodes includes: acquiring, through IGP flooding, the link between any two nodes and the maximum bandwidth of that link. Correspondingly, the acquiring the network topology of the network slice corresponding to the first priority queue includes: acquiring the available bandwidth of the network slice corresponding to the first priority queue according to the maximum bandwidth of the link between any two nodes and the weight of the network slice corresponding to the first priority queue.
In this implementation, the controller computes the available bandwidth of the network slice corresponding to the first priority queue from the acquired maximum bandwidth of the link between any two nodes and the preset weight of that network slice. Each node does not need to calculate the available bandwidth itself, so the demand on the computing capability of the nodes is low.
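As a minimal sketch of this implementation, the controller's per-link computation is the flooded maximum bandwidth multiplied by the slice's preconfigured weight. The function name and units below are hypothetical; the 30% weight matches the AF3 example used later in Table 1.

```python
def slice_available_bandwidth(max_link_bw_mbps: float, slice_weight: float) -> float:
    """Available bandwidth of a slice on one link: maximum bandwidth x slice weight.

    Hypothetical helper: the controller applies the preset weight of the
    network slice for the first priority queue to each flooded link bandwidth.
    """
    if not 0.0 < slice_weight <= 1.0:
        raise ValueError("slice weight must be in (0, 1]")
    return max_link_bw_mbps * slice_weight

# Example: on a 10 Gbit/s link, a slice with a 30% weight gets 3000 Mbit/s.
print(slice_available_bandwidth(10_000, 0.30))
```

Because the weight is preconfigured on the controller, the nodes only ever flood the raw maximum bandwidth, which is what keeps the per-node computation requirement low.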
In yet another possible implementation, the performance parameter is a bandwidth. The acquiring, through IGP flooding, the performance parameter of any one of the N nodes in the network and the link between any two nodes includes: acquiring, through IGP flooding, the link between any two nodes and the available bandwidth, carried by that link, of the network slice corresponding to the first priority queue. Correspondingly, the acquiring the network topology of the network slice corresponding to the first priority queue includes: taking the available bandwidth of the network slice corresponding to the first priority queue, carried by the link between any two nodes, as the available bandwidth of the network slice corresponding to the first priority queue.
In this implementation, each node calculates the available bandwidth of the network slice corresponding to the first priority queue and floods it to the controller through IGP; the controller does not need to calculate the available bandwidth again, which improves the efficiency of service deployment.
In yet another possible implementation, the determining, according to the SLA required by the service and the network topology, a first link for carrying the service includes: acquiring the bandwidth required by the service; and determining the first link according to the bandwidth required by the service and the network topology, where the available bandwidth of the network slice corresponding to the first priority queue is greater than or equal to the bandwidth required by the service.
In yet another possible implementation, after determining the first link according to the network topology, the method further includes: updating the available bandwidth of the network slice corresponding to the first priority queue, where the updated available bandwidth is the difference between the available bandwidth of the network slice before the first link was determined and the bandwidth required by the service.
In this implementation, after a link is determined, the available bandwidth of the network slice corresponding to the first priority queue is updated in time, which improves the accuracy of subsequent link determination.
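The bookkeeping just described can be sketched as follows. The data layout (a map from node pairs to the slice's remaining bandwidth on that hop) and the function name are assumptions for illustration, not the patent's concrete implementation.

```python
def update_slice_bandwidth(available, path, required_bw):
    """Return a copy of `available` with `required_bw` subtracted on every hop
    of `path`; `available` maps (node_a, node_b) to the slice's remaining
    bandwidth on that link."""
    updated = dict(available)
    for hop in zip(path, path[1:]):  # consecutive node pairs along the path
        if updated[hop] < required_bw:
            raise ValueError(f"hop {hop} cannot carry {required_bw}")
        updated[hop] -= required_bw
    return updated

# After deploying a 100 Mbit/s service on A-B-C, both hops drop from 300 to 200.
before = {("A", "B"): 300.0, ("B", "C"): 300.0}
after = update_slice_bandwidth(before, ["A", "B", "C"], 100.0)
print(after)
```

Updating the map immediately after each deployment is what keeps the next link determination accurate, as the text notes.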
In yet another possible implementation, the method is performed by a controller, a path planning device, a router, or a switch.
In a second aspect, an apparatus for deploying a service is provided, including a processor and a communication interface. The processor is configured to: acquire a network topology of a network slice corresponding to a first priority queue, where the network topology includes a link between two adjacent nodes and performance parameters of the link; and determine, according to a service level agreement (SLA) required by a service and the network topology, a first link for carrying the service, where the first link is a logical link in the network topology that meets the SLA, and the service belongs to the first priority queue.
In a possible implementation, the communication interface is configured to acquire, through interior gateway protocol (IGP) flooding, a performance parameter of any one of N nodes in a network and a link between any two nodes, where N is greater than 1; and the processor is configured to acquire the network topology of the network slice corresponding to the first priority queue according to the performance parameter of any node and the link between any two nodes.
In yet another possible implementation, the performance parameter is a bandwidth; the communication interface is configured to acquire, through IGP flooding, the link between any two nodes and the maximum bandwidth of that link; and the processor is configured to acquire the available bandwidth of the network slice corresponding to the first priority queue according to the maximum bandwidth of the link between any two nodes and the weight of the network slice corresponding to the first priority queue.
In yet another possible implementation, the performance parameter is a bandwidth; the communication interface is configured to acquire, through IGP flooding, the link between any two nodes and the available bandwidth, carried by that link, of the network slice corresponding to the first priority queue; and the processor is configured to use that available bandwidth as the available bandwidth of the network slice corresponding to the first priority queue.
In yet another possible implementation, the processor is configured to: acquire the bandwidth required by the service; and determine the first link according to the bandwidth required by the service and the network topology, where the available bandwidth of the network slice corresponding to the first priority queue is greater than or equal to the bandwidth required by the service.
In yet another possible implementation, the processor is further configured to update the available bandwidth of the network slice corresponding to the first priority queue, where the updated available bandwidth is the difference between the available bandwidth of the network slice before the first link was determined and the bandwidth required by the service.
The number of communication interfaces may be one or more. A communication interface may include a wireless interface and/or a wired interface. For example, the wireless interface may include a wireless local area network (WLAN) interface, a Bluetooth interface, a cellular network interface, or any combination thereof. The wired interface may include an Ethernet interface, an asynchronous transfer mode interface, a fibre channel interface, or any combination thereof. The Ethernet interface may be an electrical or an optical interface. The communication interface does not necessarily include an Ethernet interface, although it typically does.
The number of processors may be one or more. The processor includes a central processing unit (CPU), a network processor, a graphics processing unit (GPU), an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or any combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
In a third aspect, an apparatus for deploying a service is provided, including: a transceiver unit, configured to acquire a network topology of a network slice corresponding to a first priority queue, where the network topology includes a link between two adjacent nodes and performance parameters of the link; and a processing unit, configured to determine, according to a service level agreement (SLA) required by a service and the network topology, a first link for carrying the service, where the first link is a logical link in the network topology that meets the SLA, and the service belongs to the first priority queue.
In a possible implementation, the transceiver unit is configured to acquire, through IGP flooding, a performance parameter of any one of N nodes in a network and a link between any two nodes, where N is greater than 1; and the processing unit is configured to acquire the network topology of the network slice corresponding to the first priority queue according to the performance parameter of any node and the link between any two nodes.
In yet another possible implementation, the performance parameter is a bandwidth; the transceiver unit is configured to acquire, through IGP flooding, the link between any two nodes and the maximum bandwidth of that link; and the processing unit is configured to acquire the available bandwidth of the network slice corresponding to the first priority queue according to the maximum bandwidth of the link between any two nodes and the weight of the network slice corresponding to the first priority queue.
In yet another possible implementation, the performance parameter is a bandwidth; the transceiver unit is configured to acquire, through IGP flooding, the link between any two nodes and the available bandwidth, carried by that link, of the network slice corresponding to the first priority queue; and the processing unit is configured to use that available bandwidth as the available bandwidth of the network slice corresponding to the first priority queue.
In yet another possible implementation, the transceiver unit is further configured to acquire the bandwidth required by the service; and the processing unit is further configured to determine the first link according to the bandwidth required by the service and the network topology, where the available bandwidth of the network slice corresponding to the first priority queue is greater than or equal to the bandwidth required by the service.
In yet another possible implementation, the processing unit is further configured to update the available bandwidth of the network slice corresponding to the first priority queue, where the updated available bandwidth is the difference between the available bandwidth of the network slice before the first link was determined and the bandwidth required by the service.
In a fourth aspect, an apparatus for deploying a service is provided that includes a transceiver, a memory, and a processor; wherein the memory stores program code and the processor is configured to call the program code stored in the memory to perform the method of the first aspect or any implementation thereof.
With reference to the first aspect, the second aspect, the third aspect, the fourth aspect, or any implementation thereof, in yet another implementation, the performance parameter and the SLA are both bandwidth guarantees, or the performance parameter and the SLA are both transmission delays.
With reference to the second aspect, the third aspect, the fourth aspect, or any implementation thereof, in yet another implementation, the apparatus is a controller, a path planning device, a router, or a switch.
In a fifth aspect, there is provided a computer-readable storage medium having stored therein instructions, which, when run on a computer, cause the computer to perform the method of the first aspect described above or any implementation thereof.
In a sixth aspect, a computer program product including instructions is provided which, when run on a computer, causes the computer to perform the method of the first aspect described above or any implementation thereof.
Drawings
Fig. 1 is a schematic diagram of service deployment using a committed access rate at the AC-side ingress;
fig. 2 is a schematic architecture diagram of a communication system according to an embodiment of the present application;
fig. 3 is a flowchart illustrating a method for deploying a service according to an embodiment of the present application;
fig. 4 is a schematic diagram of an example service deployment provided in an embodiment of the present application;
fig. 5 is a flowchart illustrating a further method for deploying a service according to an embodiment of the present application;
fig. 6 is a diagram illustrating an example of determining bandwidths of different links in the same priority queue according to an embodiment of the present application;
fig. 7 is a schematic diagram of a further example service deployment provided by an embodiment of the present application;
fig. 8 is a flowchart illustrating a further method for deploying a service according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an apparatus for deploying a service according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of another apparatus for deploying a service according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of another apparatus for deploying a service according to an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described below with reference to the drawings.
Current networks prioritize different services based on quality of service (QoS) priority. As shown in table 1 below, there are 8 classes of service (COS), that is, 8 priority queues:
TABLE 1

COS  Bit  Congestion management  Weight  Service type
CS7  7    PQ                     -       Reserved
CS6  6    PQ                     -       Control-plane protocol packets
EF   5    PQ                     -       High-priority services such as voice and 4G/5G signaling
AF4  4    WFQ                    30%     Real-time video and data-class services
AF3  3    WFQ                    30%     Enterprise private-line services
AF2  2    WFQ                    20%     Mobile data services
AF1  1    WFQ                    20%     Traffic with relatively high priority
BE   0    WFQ                    20%     Ordinary Internet access services, etc.
According to table 1, the above 8 priority queues are configured on the physical ports of each node. Priority queuing (PQ) and weighted fair queuing (WFQ) are different congestion management methods. The three queues scheduled by PQ, namely class selector (CS) 7, CS6, and expedited forwarding (EF), may be used to carry protocol packets and high-value voice, signaling, and similar services. Illustratively, the CS6 queue may carry control-plane protocol packets, and the EF queue may carry high-priority traffic such as voice and 4G/5G signaling. The 5 queues scheduled by WFQ carry services of different priorities, such as real-time video, enterprise private lines, and mobile data. Illustratively, the assured forwarding (AF) 4 queue may carry real-time video and data-class traffic; the AF3 queue may carry enterprise private-line services; the AF2 queue may carry mobile data services; the AF1 queue may carry traffic with relatively high priority; and the best effort (BE) queue carries ordinary Internet access traffic and the like. Each WFQ queue is configured with a specific weight to guarantee its bandwidth, and the queues are identified by their respective bit values.
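The queue plan in Table 1 can be captured as data. The dictionary layout below is illustrative; the COS names, bit values, scheduling modes, and weights come straight from the table (PQ queues have no WFQ weight).

```python
# Queue plan from Table 1, keyed by COS name.
QUEUES = {
    "CS7": {"bit": 7, "scheduling": "PQ", "weight": None},
    "CS6": {"bit": 6, "scheduling": "PQ", "weight": None},
    "EF": {"bit": 5, "scheduling": "PQ", "weight": None},
    "AF4": {"bit": 4, "scheduling": "WFQ", "weight": 0.30},
    "AF3": {"bit": 3, "scheduling": "WFQ", "weight": 0.30},
    "AF2": {"bit": 2, "scheduling": "WFQ", "weight": 0.20},
    "AF1": {"bit": 1, "scheduling": "WFQ", "weight": 0.20},
    "BE": {"bit": 0, "scheduling": "WFQ", "weight": 0.20},
}

def wfq_queues():
    """The five WFQ-scheduled queues, each of which needs a configured weight."""
    return [name for name, q in QUEUES.items() if q["scheduling"] == "WFQ"]

print(wfq_queues())  # ['AF4', 'AF3', 'AF2', 'AF1', 'BE']
```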
Fig. 2 is a schematic architecture diagram of a communication system to which the embodiments of the present application are applicable. The communication system 100 includes a plurality of interconnected routers 11 (the number of routers 11 is not limited) and an area border router (ABR) 12; the routers 11 and the ABR 12 are located in one IGP flooding domain, and the routers 11 support the IGP and BGP protocols. The communication system 100 further includes a device 13 for deploying services. The device 13 communicates with the ABR 12, obtains the network topology of the network slice corresponding to the first priority queue, and determines, according to the SLA required by a service and the network topology, a first link for carrying the service, so as to meet the SLA requirement of the deployed service. The device 13 may be a controller, a path planning device, a router, or a switch. The routers 11 and the ABR 12 support QoS management and/or configuration, and the device 13 supports centralized segment-routing traffic-engineering (SR-TE) tunnels. In the following embodiments, the device 13 is described by taking a controller as an example.
Based on this communication system architecture, how to deploy services so as to meet their SLA requirements is described in detail below.
Fig. 3 is a flowchart of a method for deploying a service provided in an embodiment of the present application; exemplarily, the method includes the following steps:
s101, obtaining a network topology of a network fragment corresponding to a first priority queue, wherein the network topology comprises a link between two adjacent nodes and performance parameters of the link.
In this embodiment, different priority queues carry different priority services, in other words, different services enter different priority queues. For example, AF3 carries enterprise private line traffic, and AF2 carries mobile data traffic. For WFQ priority queues, each priority queue needs to be configured with a specific weight to guarantee its bandwidth. The present embodiment is described by taking the deployment of WFQ priority service as an example.
For example, each priority queue has a corresponding network slice (network slice) on which traffic belonging to the priority queue is carried. As shown in fig. 4, which is a schematic diagram of an example service deployment provided in the embodiment of the present application, an AF3 carries an enterprise private line service, and configures a dedicated network slice (referred to as enterprise network slice) for an AF3 queue; and for other non-priority services that are not the AF3 queue, a default network slice (default slice) is configured. The enterprise network segment may be an Ethernet Virtual Private Network (EVPN) based on SR-TE (EVPN over SR-TE).
For example, a network segment corresponding to a priority queue includes a plurality of nodes having a certain network topology relationship, and data of a service belonging to the priority queue is transmitted on the plurality of nodes included in the network segment. And acquiring the network topology of the network fragment corresponding to the first priority queue so as to deploy the service on the network fragment. For example, the network (specifically, an IGP flooding domain) includes N nodes, where N is greater than 1. Acquiring a network topology of the network fragment corresponding to the first priority queue, which may be acquiring performance parameters of any node of N nodes in the network and a link between any two nodes through IGP flooding; and acquiring the network topology of the network fragment corresponding to the first priority queue according to the performance parameter of any node and the link between any two nodes. In specific implementation, the ABR establishes a border gateway protocol-link state (BGP-LS) BGP LS neighbor (peer) with the controller. The performance parameters of any node in the N nodes and the link between any two nodes are diffused in the domain by the N nodes in the network through IGP, and the performance parameters of any node in the N nodes and the link between any two nodes flooded by IGP in the domain can be sent to a device for deploying services by ABR in the domain through BGP-LS. The performance parameters include at least one of: bandwidth, transmission delay. The device obtains the network topology of the network segment corresponding to the first priority queue according to the received performance parameter of any node in the N nodes and the link between any two nodes, specifically according to the performance parameter required by the first priority queue.
For example, the data of a service belonging to a priority queue may be transmitted through routers in multiple IGP flooding domains, and a network slice may likewise be deployed across multiple IGP flooding domains. As shown in fig. 4, the dedicated network slice spans the first, second, and third IGP flooding domains. Enterprise private-line data travels from the source end through nodes in the first, second, and third IGP flooding domains to the destination end, the enterprise headquarters, or in the reverse direction from the enterprise headquarters as the source end to the destination end. However, deploying a service in this embodiment mainly refers to deployment within one IGP flooding domain, so the network topology of the network slice corresponding to the first priority queue is obtained per IGP flooding domain.
S102: determine, according to the SLA required by the service and the network topology, a first link for carrying the service.
For example, an SLA is an agreement or contract between a service provider and a customer, recognized by both parties, regarding the quality, level, performance, and similar attributes of the service. To guarantee these attributes, the SLA required by the service must be satisfied; the SLA may specifically concern bandwidth, transmission delay, and so on.
The service belongs to a first priority queue; for example, an enterprise private-line service belongs to the AF3 queue. Having obtained the network topology of the network slice corresponding to the first priority queue, the first link for carrying the service can be determined according to the SLA required by the service and that topology. The first link is a logical link in the network topology that satisfies the SLA; that is, the performance parameters of the service belonging to the first priority queue can be guaranteed on it. Specifically, the controller determines one or more candidate links for carrying the service data according to the network topology. For example, service data transmitted from source node A to destination node H has two candidate links (or tunnels): A-B-C-H and A-D-E-F-H. The controller then checks hop by hop whether the performance parameter value required by the service is less than or equal to the available performance parameter value, in the network slice corresponding to the first priority queue, of the link between each pair of adjacent nodes. For the nodes traversed from the source end to the destination end, the available performance parameter values of the links between each pair of nodes in the slice may be the same or different, and the controller compares the value required by the service against the available value of each hop separately.
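The hop-by-hop check on the two candidate links above can be sketched as follows, with bandwidth as the performance parameter. The function names and the concrete numbers are illustrative, not taken from the patent.

```python
def path_meets_sla(available, path, required_bw):
    """True if every hop of `path` has at least `required_bw` available for
    the slice; `available` maps (node_a, node_b) to that hop's value."""
    return all(available[hop] >= required_bw for hop in zip(path, path[1:]))

def select_first_link(available, candidates, required_bw):
    """Return the first candidate link that satisfies the SLA, else None."""
    for path in candidates:
        if path_meets_sla(available, path, required_bw):
            return path
    return None

# A-B-C-H only has 80 on each hop, so a service needing 100 falls through
# to A-D-E-F-H, where every hop offers 200.
available = {
    ("A", "B"): 80.0, ("B", "C"): 80.0, ("C", "H"): 80.0,
    ("A", "D"): 200.0, ("D", "E"): 200.0, ("E", "F"): 200.0, ("F", "H"): 200.0,
}
candidates = [["A", "B", "C", "H"], ["A", "D", "E", "F", "H"]]
chosen = select_first_link(available, candidates, 100.0)
print(chosen)  # ['A', 'D', 'E', 'F', 'H']
```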
According to the method for deploying a service provided in this embodiment of the application, the network topology of the network segment corresponding to the first priority queue is obtained, and the first link for carrying the service is determined according to the SLA required by the service and the network topology, where the first link is a logical link in the network topology that satisfies the SLA. Efficient service deployment can thereby be realized, and the SLA requirement of the deployed service can be met.
In one application scenario, the performance parameter and the SLA are bandwidth. Fig. 5 is a schematic flowchart of another method for deploying a service provided in an embodiment of this application. Exemplarily, the method includes the following steps:
S201. Acquire, through IGP flooding, the link between any two of the N nodes in the network and the maximum bandwidth of the link between any two nodes.
For example, this embodiment generally relates to how a controller deploys a service in an IP network (e.g., within an IGP flooding domain). Each of the N nodes in the IGP flooding domain acquires the links between itself and its adjacent nodes, and acquires the maximum bandwidth of its physical ports as set by the network (i.e., the maximum bandwidth of the link between any two nodes). Each node may then flood, in the IGP domain, the link between any two nodes and the maximum bandwidth of that link. The ABR in the IGP flooding domain may send the received flooded information to the controller, so that the controller acquires, through IGP flooding, the link between any two of the N nodes in the network and the maximum bandwidth of the link between any two nodes. The network topology of the IGP flooding domain includes the links between any two of the N nodes in the network.
For example, there may be multiple physical ports on a node, and multiple links between each physical port of a node and a neighboring node. Here, the maximum bandwidth of a link between any two nodes refers to the maximum bandwidth of all links of one physical port of one node. The individual nodes can flood the maximum bandwidth of all links of each physical port through IGP.
As shown in fig. 6, the maximum bandwidth of physical port 1 (or the link between physical port 1 and physical port 2) is 10G, and the maximum bandwidth of physical port 2 (the link between physical port 2 and the next physical port) is 10G. The maximum bandwidth of physical port 1 and physical port 2 may also be different.
S202. Obtain the available bandwidth of the network segment corresponding to the first priority queue according to the maximum bandwidth of the link between any two nodes and the weight of the network segment corresponding to the first priority queue.
For example, after the controller receives the maximum bandwidth of a link between any two nodes, the controller may obtain the available bandwidth of the network segment corresponding to each priority queue according to the weight of the network segment corresponding to each priority queue, which is preset by the controller. The available bandwidth may also be referred to as residual bandwidth. Here, taking the first priority queue as an example, the available bandwidth of the network segment corresponding to the first priority queue is calculated according to the maximum bandwidth of the link between any two nodes and the weight of the network segment corresponding to the first priority queue. Specifically, the available bandwidth of the network segment corresponding to the first priority queue may be a product of a maximum bandwidth of a link between any two nodes and a weight of the network segment corresponding to the first priority queue.
For example, each physical port is provided with a plurality of priority queues, such as an EF queue and an AF3 queue. Assume that the maximum bandwidth of the physical port is 10G, of which the bandwidth of the PQ queue is 1G, so the total bandwidth left for the WFQ queues is 9G. The available bandwidth calculated for each WFQ priority queue according to its weight is shown in Table 2 below:
TABLE 2

COS                      | Queue mode    | Input bandwidth | Output bandwidth
EF                       | PQ            | 1G              | 1G
AF4                      | WFQ, weight 3 | 2.25G           | (10-1)*3/12 = 2.25G
AF3 (private line queue) | WFQ, weight 3 | 2.25G           | (10-1)*3/12 = 2.25G
AF2                      | WFQ, weight 2 | 2.25G           | (10-1)*2/12 = 1.5G
AF1                      | WFQ, weight 2 | 3G              | (10-1)*2/12 = 1.5G
BE                       | WFQ, weight 2 | 5G              | (10-1)*2/12 = 1.5G
The input bandwidth of each priority queue is the actual traffic of the service, and the output bandwidth is the actual capability (i.e., the available bandwidth) of the physical port.
As shown in fig. 6, taking the AF3 queues of physical port 1 and physical port 2 as an example, if the weight of the network segment corresponding to the AF3 queue is 3/12, the available bandwidth of the network segment corresponding to the AF3 queue is calculated as (10-1)G * 3/12 = 2.25G.
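The weight-based calculation shown in Table 2 and in the example above can be reproduced as follows (a minimal sketch, assuming the example's figures: a 10G port, a 1G PQ reservation, and WFQ weights 3, 3, 2, 2, 2).

```python
# Reproduces the output-bandwidth column of Table 2: the PQ (EF) queue is
# served first, and the remaining (10 - 1) G is shared among the WFQ queues
# in proportion to their weights (total weight 3 + 3 + 2 + 2 + 2 = 12).

PORT_MAX_G = 10.0   # maximum bandwidth of the physical port
PQ_G = 1.0          # bandwidth reserved for the PQ (EF) queue
wfq_weights = {'AF4': 3, 'AF3': 3, 'AF2': 2, 'AF1': 2, 'BE': 2}

total_weight = sum(wfq_weights.values())
available = {q: (PORT_MAX_G - PQ_G) * w / total_weight
             for q, w in wfq_weights.items()}
print(available['AF3'])  # 2.25, matching Table 2 and the AF3 example above
```

This is simply the product of the remaining port bandwidth and the slice weight, as described in step S202.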
S203. Acquire the bandwidth required by the service.
To prevent the service data from being congested, each service has a bandwidth to be guaranteed, i.e., the bandwidth required by the service. The bandwidth required by the service may be the maximum bandwidth of service data transmission.
For example, the controller may obtain the bandwidth required by the service from a user, a network management system, or the like. The controller may identify the priority queue to which the service belongs based on attributes of the service or the like; here, the service belongs to the first priority queue. Since the service belongs to the first priority queue and the available bandwidth of the network segment corresponding to each priority queue has been obtained previously, the bandwidth required by the service should be less than or equal to the available bandwidth of the network segment corresponding to the first priority queue.
S204. Determine the first link according to the bandwidth required by the service and the network topology.
For example, the controller determines one or more links for transmitting the service data based on the network topology. For example, service data is transmitted from source node A to destination node H over two links (or tunnels): A-B-C-H and A-D-E-F-H. The controller then determines the first link based on the bandwidth required by the service and the available bandwidth of each link. The first link is a link in the network topology that satisfies the bandwidth required by the service. Specifically, for the nodes traversed from the source to the destination, the available bandwidths of the links between every two nodes in the network segment corresponding to the first service priority may be the same or different; the controller judges, for each pair of nodes, whether the bandwidth required by the service is less than or equal to the available bandwidth of the link between them.
According to the above example, the controller judges hop by hop whether the bandwidth required by the service is less than or equal to the available bandwidth of the link between any two nodes in the network segment corresponding to the first service priority. For the link A-B-C-H, it is respectively judged whether the bandwidth required by the service is less than or equal to the available bandwidth, in the network segment corresponding to the first service priority, of the link between nodes A and B, of the link between nodes B and C, and of the link between nodes C and H. For the link A-D-E-F-H, it is respectively judged whether the bandwidth required by the service is less than or equal to the available bandwidth, in the network segment corresponding to the first service priority, of the link between nodes A and D, of the link between nodes D and E, of the link between nodes E and F, and of the link between nodes F and H. In this example, the bandwidth required by the service is greater than the available bandwidth of the link between nodes D and E; therefore, the first link is determined to be A-B-C-H.
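The selection of the first link among the candidate links can be sketched as follows; the available-bandwidth values and function name are hypothetical, chosen so that the D-E link is the bottleneck as in the example.

```python
# Returns the first candidate path whose every hop offers at least the
# bandwidth required by the service in the slice for the first priority
# queue, or None if no candidate can carry the service.

def select_first_link(candidates, available_bw, required_bw):
    for path in candidates:
        hops = zip(path, path[1:])
        if all(required_bw <= available_bw[h] for h in hops):
            return path
    return None

# Hypothetical per-hop available bandwidths (in G); D-E is the bottleneck.
available_bw = {('A', 'B'): 2.25, ('B', 'C'): 2.25, ('C', 'H'): 2.25,
                ('A', 'D'): 2.25, ('D', 'E'): 0.3, ('E', 'F'): 2.25,
                ('F', 'H'): 2.25}
candidates = [['A', 'D', 'E', 'F', 'H'], ['A', 'B', 'C', 'H']]
print(select_first_link(candidates, available_bw, 0.5))  # ['A', 'B', 'C', 'H']
```

The failing path is rejected at its first infeasible hop, reflecting the hop-by-hop judgment in the text.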
Still referring to fig. 6, assume that a link for transmitting data of service 1 is physical port 1 of node 1 to physical port 2 of node 2, the bandwidth required by service 1 is 500M, service 1 belongs to the AF3 queue, and the weight of the AF3 queue is 3/12. The available bandwidth of physical port 1 (or of the link between physical port 1 and physical port 2) in the network segment corresponding to the AF3 queue is 2.25G, and the available bandwidth of physical port 2 (or of the link between physical port 2 and the next physical port) is 2.25G. The bandwidth required by service 1 is smaller than the available bandwidths of both physical port 1 and physical port 2; therefore, the link between physical port 1 and physical port 2 is determined to be the first link. After determining the first link, the controller records the available bandwidth (or remaining bandwidth) of physical port 1 and physical port 2 as 2.25G - 0.5G = 1.75G.
Assume that a link for transmitting data of service 2 is a physical port of node 1, the bandwidth required by service 2 is 1G, and service 2 belongs to the AF3 queue. The available bandwidth of physical port 1 in the network segment corresponding to the AF3 queue is 1.75G, and the bandwidth required by service 2 is smaller than this available bandwidth; therefore, the link of physical port 1 is determined to be a second link. After determining the second link, the controller records the available bandwidth of physical port 1 as 1.75G - 1G = 0.75G.
S205. Update the available bandwidth of the network segment corresponding to the first priority queue, where the updated available bandwidth is the difference between the available bandwidth of the network segment corresponding to the first priority queue before the first link is determined and the bandwidth required by the service.
After the first link is determined, the available bandwidth of the network segment corresponding to the first priority queue may be updated in time. For example, in the example shown in fig. 6, after determining the first link of service 1, the controller records the available bandwidth (or remaining bandwidth) of physical port 1 and physical port 2 as 2.25G - 0.5G = 1.75G; after determining the second link of service 2, the controller records the available bandwidth of physical port 1 as 1.75G - 1G = 0.75G.
Further, the controller may continue to update the available bandwidth of the network segment corresponding to the first priority queue as subsequent links are determined.
As can be seen, after the first link and the second link are determined, the available bandwidths of the physical port 1 and the physical port 2 in the network segment corresponding to the first priority queue are different.
According to the method for deploying a service provided in this embodiment of the application, the link between any two of the N nodes in the network and the maximum bandwidth of the link between any two nodes are obtained through IGP flooding; the available bandwidth of the network segment corresponding to the first priority queue is obtained according to the maximum bandwidth of the link between any two nodes and the weight of the network segment corresponding to the first priority queue; and the first link for carrying the service is determined according to the bandwidth required by the service and the network topology, where the first link is a logical link in the network topology that satisfies the SLA. Efficient service deployment can thereby be realized, and the SLA requirement of the deployed service can be met. In addition, after a link is determined, the available bandwidth of the network segment corresponding to the first priority queue is updated in time, which can improve the accuracy of link determination.
Fig. 7 is a schematic diagram of a further example of service deployment provided in an embodiment of this application. In fig. 7, an access node (AN), an access gateway (AG), and an access aggregation gateway (AGG) form one IGP flooding domain (IGP flooding domain 1); the provider edge (PE) and provider (P) nodes form another IGP flooding domain (IGP flooding domain 2). Taking IGP flooding domain 1 as an example, in the first step, the bandwidth required by a service accessed from one AN is 100M, and the bandwidth required by a service accessed from another AN is 200M; both services belong to the AF3 queue. Each node in IGP flooding domain 1 may flood, through IGP in the domain, the link between any two nodes and the maximum bandwidth of the link between any two nodes. The controller configures the maximum bandwidth of the link between any two nodes in the network segment corresponding to the AF3 queue to be 10G. In the second step, the ABR (AGG) in the IGP flooding domain may send the flooded information of the nodes in its domain to the controller through BGP-LS. Assuming that the maximum bandwidth of the link between any two nodes received by the controller is 10G, and the weight of the network segment corresponding to the AF3 queue is set to 3/12 by the controller, the controller may calculate the available bandwidth of the network segment corresponding to the AF3 queue as (10-1)G * 3/12 = 2.25G according to the maximum bandwidth of the link between any two nodes and the weight of the network segment corresponding to the AF3 queue.
In the third step, since the bandwidth required by one service is 100M, it is judged that the bandwidth required by the service is smaller than the available bandwidth of the network segment corresponding to the AF3 queue, and a first link is determined; since the bandwidth required by the other service is 200M, it is judged that the bandwidth required by the service is smaller than the available bandwidth of the network segment corresponding to the AF3 queue, and a second link is determined. In the fourth step, after the first link is determined, the available bandwidth of the network segment corresponding to the AF3 queue is updated to 2.25G - 0.1G = 2.15G; after the second link is determined, the available bandwidth of the network segment corresponding to the AF3 queue is updated to 2.15G - 0.2G = 1.95G.
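The second through fourth steps of the fig. 7 example reduce to a small bookkeeping calculation. The following is an illustrative sketch using the example's figures (10G maximum bandwidth, 1G PQ reservation, AF3 weight 3/12, services of 100M and 200M); the variable names are assumptions.

```python
# Steps 2-4 of the Fig. 7 example: compute the available bandwidth of the
# AF3 slice from the flooded maximum bandwidth and the configured weight,
# then deploy each service and deduct its bandwidth from the slice.

max_bw_g, pq_g, af3_weight = 10.0, 1.0, 3 / 12
available = (max_bw_g - pq_g) * af3_weight      # (10-1) * 3/12 = 2.25 G
for service_bw in (0.1, 0.2):                   # the 100M and 200M services
    if service_bw <= available:                 # bandwidth (SLA) check
        available -= service_bw                 # update after deployment
print(round(available, 2))  # 1.95, matching the fourth step
```

Keeping this remaining-bandwidth record current is what step S205 describes: each admitted service shrinks the slice's budget for the next one.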
In another application scenario, the performance parameter is bandwidth. Fig. 8 is a schematic flowchart of another method for deploying a service provided in an embodiment of this application. Exemplarily, the method includes the following steps:
S301. Acquire, through IGP flooding, the link between any two of the N nodes in the network and the available bandwidth, carried by the link between any two nodes, of the network segment corresponding to the first priority queue.
In this embodiment, each node in the network (for example, in one IGP flooding domain) obtains in advance the weight of the network segment corresponding to each configured priority queue, and obtains the maximum bandwidth of the link between any two nodes configured by the controller, so that each node can obtain the available bandwidth of the network segment corresponding to the first priority queue according to the maximum bandwidth of the link between any two nodes and the weight of the network segment corresponding to the first priority queue.
Each node floods in the domain, through IGP, the link between any two nodes carrying the available bandwidth of the network segment corresponding to the first priority queue, and the ABR in the domain sends the flooded information of the nodes in its domain to the controller through BGP-LS.
S302. Take the available bandwidth of the network segment corresponding to the first priority queue, carried by the link between any two nodes, as the available bandwidth of the network segment corresponding to the first priority queue.
After receiving the flooded node information sent by the ABR, the controller takes the available bandwidth of the network segment corresponding to the first priority queue, carried by the link between any two nodes, as the available bandwidth of that network segment, without needing to calculate it.
S303. Acquire the bandwidth required by the service.
For a specific implementation of this step, refer to step S203 of the embodiment shown in fig. 5.
S304. Determine the first link according to the bandwidth required by the service and the network topology.
For a specific implementation of this step, refer to step S204 of the embodiment shown in fig. 5.
S305. Update the available bandwidth of the network segment corresponding to the first priority queue, where the updated available bandwidth is the difference between the available bandwidth of the network segment corresponding to the first priority queue before the first link is determined and the bandwidth required by the service.
For a specific implementation of this step, refer to step S205 of the embodiment shown in fig. 5.
According to the method for deploying a service provided in this embodiment of the application, the link between any two of the N nodes in the network and the available bandwidth, carried by the link between any two nodes, of the network segment corresponding to the first priority queue are obtained through IGP flooding; the carried value is taken directly as the available bandwidth of the network segment corresponding to the first priority queue; and the first link for carrying the service is determined according to the bandwidth required by the service and the network topology, where the first link is a logical link in the network topology that satisfies the SLA. Efficient service deployment can thereby be realized, and the SLA requirement of the deployed service can be met. In addition, after a link is determined, the available bandwidth of the network segment corresponding to the first priority queue is updated in time, which can improve the accuracy of link determination.
For example, fig. 5 and fig. 8 are described taking bandwidth as an example of the performance parameter and the SLA. The performance parameter and the SLA may also be transmission delay: the transmission delay of a service cannot be greater than the available transmission delay of the network segment corresponding to the first priority queue; otherwise congestion may occur. When the performance parameter and the SLA are transmission delay, the method for deploying the service can refer to the embodiments shown in fig. 5 and fig. 8.
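For the transmission-delay case, a per-link check analogous to the bandwidth check might look as follows. This is a hedged sketch: the per-hop comparison mirrors the bandwidth embodiments as the text suggests, and the delay values and function name are invented for the example (an end-to-end delay budget could instead sum the per-hop delays).

```python
# Per-hop delay check mirroring the bandwidth case: a path is acceptable
# only if the delay of every hop in the slice stays within the delay the
# service can tolerate. All values below are hypothetical.

def path_within_delay(path, link_delay_ms, max_delay_ms):
    return all(link_delay_ms[(u, v)] <= max_delay_ms
               for u, v in zip(path, path[1:]))

link_delay_ms = {('A', 'B'): 2.0, ('B', 'C'): 3.5, ('C', 'H'): 2.5}
print(path_within_delay(['A', 'B', 'C', 'H'], link_delay_ms, 5.0))  # True
```

With a tighter tolerance (e.g., 3.0 ms) the B-C hop would fail the check and the path would be rejected, just as an over-budget link is rejected in the bandwidth case.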
Based on the same concept of the method for deploying the service, the embodiment of the present application further provides the following apparatus for deploying the service:
The means for deploying a service may be a router or a switch. Fig. 9 is a schematic structural diagram of an apparatus for deploying a service according to an embodiment of this application. The apparatus 400 for deploying a service comprises a communication interface 41 and a processor 42. The communication interface 41 is configured to acquire, through IGP flooding, the link between any two nodes and the maximum bandwidth of the link between any two nodes, or to acquire, through IGP flooding, the link between any two nodes and the available bandwidth, carried by the link between any two nodes, of the network segment corresponding to the first priority queue. The processor 42 is configured to determine a first link for carrying the service according to the SLA required by the service and the network topology.
The number of communication interfaces 41 may be one or more. The communication interface 41 may specifically be a physical interface, and may include a wireless interface and/or a wired interface. For example, the wireless interface may include a wireless local area network (WLAN) interface, a Bluetooth interface, a cellular network interface, or any combination thereof. The wired interface may include an Ethernet interface, an asynchronous transfer mode interface, a fibre channel interface, or any combination thereof. The Ethernet interface may be an electrical interface or an optical interface. The communication interface 41 typically, but not necessarily, includes an Ethernet interface.
The number of processors 42 may be one or more. The processor 42 includes a central processing unit (CPU), a network processor, a graphics processing unit (GPU), an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or any combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof. The processor 42 may include a control plane 421 and a forwarding plane 422. The control plane 421 and the forwarding plane 422 may be implemented by separate circuits or integrated into one circuit. For example, the processor 42 is a multi-core CPU, in which one or some cores implement the control plane 421 and the others implement the forwarding plane 422. As another example, the control plane 421 is implemented by a CPU, and the forwarding plane 422 is implemented by a network processor (NP), an ASIC, an FPGA, or any combination thereof. As another example, the device for deploying the service is a frame-shaped network device, the control plane 421 is implemented by a main control card, and the forwarding plane 422 is implemented by a line card. As yet another example, the control plane 421 and the forwarding plane 422 are both implemented by NPs with control-plane capabilities.
The device for deploying the service may also be a controller or a path planning device. As shown in fig. 10, an embodiment of this application further provides a schematic structural diagram of an apparatus for deploying a service, where the apparatus is configured to execute the method for deploying a service. Some or all of the above methods may be implemented by hardware, or by software or firmware.
Optionally, the apparatus for deploying a service may be a chip or an integrated circuit when implemented specifically.
Alternatively, when part or all of the method for deploying the service in the foregoing embodiments is implemented by software or firmware, it may be implemented by the apparatus 500 for deploying a service provided in fig. 10. As shown in fig. 10, the apparatus 500 for deploying a service may include:
the memory 53 and the processor 54 (the processor 54 in the device may be one or more, and fig. 10 illustrates one processor as an example), and may further include an input device 51 and an output device 52. In the present embodiment, the input device 51, the output device 52, the memory 53 and the processor 54 may be connected by a bus or other means, wherein the bus connection is taken as an example in fig. 10.
Alternatively, a program of the above-described method for deploying a service may be stored in the memory 53. The memory 53 may be a physically separate unit or may be integrated with the processor 54. The memory 53 may also be used for storing data.
Optionally, when part or all of the method for deploying the service in the foregoing embodiments is implemented by software, the apparatus for deploying the service may also include only a processor. The memory for storing the program is located outside the apparatus for deploying services, and the processor is connected to the memory through a circuit or a wire, and is used for reading and executing the program stored in the memory.
The processor 54 is arranged to call program code stored in the memory 53 to perform the method steps in the embodiments of fig. 3, 5 or 8.
The processor may be a Central Processing Unit (CPU), a Network Processor (NP), or a WLAN device.
The processor may further include a hardware chip. The hardware chip may be an ASIC, PLD, or a combination thereof. The PLD may be a CPLD, an FPGA, a GAL, or any combination thereof.
The memory may include volatile memory (volatile memory), such as random-access memory (RAM); the memory may also include a non-volatile memory (non-volatile memory), such as a flash memory (flash memory), a Hard Disk Drive (HDD) or a solid-state drive (SSD); the memory may also comprise a combination of memories of the kind described above.
Fig. 11 is a schematic structural diagram of an apparatus for deploying a service according to an embodiment of the present application. The apparatus 600 comprises: a transceiver unit 61 and a processing unit 62; wherein:
The transceiver unit 61 is configured to obtain a network topology of the network segment corresponding to the first priority queue, where the network topology includes a link between two adjacent nodes and a performance parameter of the link. The processing unit 62 is configured to determine, according to a service level agreement (SLA) required by a service and the network topology, a first link for carrying the service, where the first link is a logical link in the network topology that satisfies the SLA, and the service belongs to the first priority queue.
In a possible implementation, the transceiver unit 61 is configured to acquire, through IGP flooding, performance parameters of any node of N nodes in a network and a link between any two nodes, where N is greater than 1; and the processing unit 62 is configured to obtain a network topology of the network segment corresponding to the first priority queue according to the performance parameter of any node and a link between any two nodes.
In yet another possible implementation, the performance parameter is a bandwidth; the transceiver unit 61 is configured to acquire the maximum bandwidth of the link between any two nodes and the link between any two nodes through IGP flooding; and the processing unit 62 is configured to obtain an available bandwidth of the network segment corresponding to the first priority queue according to the maximum bandwidth of the link between any two nodes and the weight of the network segment corresponding to the first priority queue.
In yet another possible implementation, the performance parameter is a bandwidth; the transceiver unit 61 is configured to acquire, through IGP flooding, available bandwidths of the link between any two nodes and the network segment corresponding to the first priority queue carried by the link between any two nodes; and the processing unit 62 is configured to use the available bandwidth of the network segment corresponding to the first priority queue, which is carried by the link between any two nodes, as the available bandwidth of the network segment corresponding to the first priority queue.
In yet another possible implementation, the transceiver unit 61 is further configured to obtain a bandwidth required by the service; and the processing unit 62 is further configured to determine the first link according to the bandwidth required by the service and the network topology, where an available bandwidth of a network slice corresponding to the first priority queue is greater than or equal to the bandwidth required by the service.
In yet another possible implementation, the processing unit 62 is further configured to update an available bandwidth of the network segment corresponding to the first priority queue, where the updated available bandwidth of the network segment corresponding to the first priority queue is a difference between the available bandwidth of the network segment corresponding to the first priority queue before the first link is determined and the bandwidth required by the service.
The transceiver unit 61 and the processing unit 62 may be implemented as described with reference to the embodiments shown in fig. 3, fig. 5 or fig. 8.
According to the apparatus for deploying a service provided in this embodiment of the application, the link between any two of the N nodes in the network and the available bandwidth, carried by the link between any two nodes, of the network segment corresponding to the first priority queue are obtained through IGP flooding; the carried value is taken directly as the available bandwidth of the network segment corresponding to the first priority queue; and the first link for carrying the service is determined according to the bandwidth required by the service and the network topology, where the first link is a logical link in the network topology that satisfies the SLA. Efficient service deployment can thereby be realized, and the SLA requirement of the deployed service can be met. In addition, after a link is determined, the available bandwidth of the network segment corresponding to the first priority queue is updated in time, which can improve the accuracy of link determination.
One skilled in the art will appreciate that one or more embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
An embodiment of this application further provides a chip system, including at least one processor and an interface, the at least one processor being coupled with a memory through the interface. When the at least one processor executes the computer program or instructions in the memory, the method in any of the foregoing method embodiments is performed. Optionally, the chip system may consist of a chip, or may include a chip and other discrete devices; this is not specifically limited in this embodiment of the application.
Embodiments of the present disclosure also provide a computer-readable storage medium, on which a computer program may be stored, where the computer program, when executed by a processor, implements the steps of the method described in any embodiment of the present disclosure.
Embodiments of the present disclosure also provide a computer program product containing instructions which, when executed on a computer, cause the computer to perform the steps of the method described in any of the embodiments of the present disclosure.
An embodiment of the present application further provides a communication system, where the communication system includes the above apparatus for deploying a service.
It is clear to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
It should be understood that in the description of the present application, unless otherwise indicated, "/" indicates an "or" relationship between the associated objects; for example, A/B may indicate A or B, where A and B may be singular or plural. Also, in the description of the present application, "a plurality" means two or more unless otherwise specified. "At least one of the following" and similar expressions refer to any combination of the listed items, including any combination of single items or plural items. For example, at least one of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be single or multiple. In addition, to facilitate a clear description of the technical solutions of the embodiments of the present application, terms such as "first" and "second" are used to distinguish between identical or similar items having substantially the same functions and actions. Those skilled in the art will appreciate that such terms do not limit quantity or order of execution, nor do they denote any difference in importance. Also, in the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described as "exemplary" or "for example" is not to be construed as preferred or advantageous over other embodiments or designs; rather, such words are intended to present relevant concepts in a concrete fashion for ease of understanding.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the described division of units is merely a division by logical function; in practice there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through some interfaces, and may be electrical, mechanical or in another form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When software is used, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape, or a magnetic disk), an optical medium (e.g., a digital versatile disc (DVD)), or a semiconductor medium (e.g., a solid state disk (SSD)).

Claims (16)

1. A method for deploying a service, the method comprising:
acquiring a network topology of a network fragment corresponding to a first priority queue, wherein the network topology comprises a link between two adjacent nodes and performance parameters of the link;
determining a first link for bearing the service according to a service level agreement SLA required by the service and the network topology, wherein the first link is a logical link meeting the SLA in the network topology, and the service belongs to the first priority queue.
2. The method according to claim 1, wherein the obtaining the network topology of the network segment corresponding to the first priority queue comprises:
acquiring performance parameters of any one of N nodes in a network and a link between any two nodes through IGP flooding, wherein N is greater than 1;
and acquiring the network topology of the network fragment corresponding to the first priority queue according to the performance parameter of any node and the link between any two nodes.
3. The method of claim 2, wherein the performance parameter is bandwidth;
the acquiring the performance parameter of any node of the N nodes in the network and the link between any two nodes through IGP flooding includes: acquiring, through IGP flooding, the link between any two nodes and the maximum bandwidth of the link between any two nodes;
the acquiring, according to the performance parameter of any node and the link between any two nodes, the network topology of the network segment corresponding to the first priority queue includes: and acquiring the available bandwidth of the network fragment corresponding to the first priority queue according to the maximum bandwidth of the link between any two nodes and the weight of the network fragment corresponding to the first priority queue.
4. The method of claim 2, wherein the performance parameter is bandwidth;
the acquiring the performance parameter of any node of the N nodes in the network and the link between any two nodes through IGP flooding includes: acquiring, through IGP flooding, the link between any two nodes and the available bandwidth, carried by the link between any two nodes, of the network fragment corresponding to the first priority queue;
the acquiring, according to the performance parameter of any node and the link between any two nodes, the network topology of the network segment corresponding to the first priority queue includes: and taking the available bandwidth of the network fragment corresponding to the first priority queue, which is borne by the link between any two nodes, as the available bandwidth of the network fragment corresponding to the first priority queue.
5. The method according to claim 3 or 4, wherein the determining the first link carrying the service according to the SLA required by the service and the network topology comprises:
acquiring a bandwidth required by the service;
and determining the first link according to the bandwidth required by the service and the network topology, wherein the available bandwidth of the network fragment corresponding to the first priority queue is greater than or equal to the bandwidth required by the service.
6. The method of claim 5, wherein after determining the first link according to the network topology, the method further comprises:
and updating the available bandwidth of the network fragment corresponding to the first priority queue, wherein the updated available bandwidth of the network fragment corresponding to the first priority queue is the difference between the available bandwidth of the network fragment corresponding to the first priority queue before the first link is determined and the bandwidth required by the service.
7. The method according to any of claims 1 to 6, wherein the method is performed by a controller, a path planning apparatus, a router or a switch.
8. The method according to claim 1, 2 or 7, wherein the performance parameter and the SLA are both bandwidth guarantees, or the performance parameter and the SLA are both transmission delays.
9. An apparatus for deploying a service, comprising a processor and a communication interface, the processor being configured to:
acquiring a network topology of a network fragment corresponding to a first priority queue, wherein the network topology comprises a link between two adjacent nodes and performance parameters of the link;
determining a first link for bearing the service according to a service level agreement SLA required by the service and the network topology, wherein the first link is a logical link meeting the SLA in the network topology, and the service belongs to the first priority queue.
10. The apparatus of claim 9, wherein:
the communication interface is used for acquiring the performance parameters of any one node of N nodes in the network and a link between any two nodes through IGP flooding, wherein N is greater than 1;
the processor is configured to obtain a network topology of the network segment corresponding to the first priority queue according to the performance parameter of any node and a link between any two nodes.
11. The apparatus of claim 10, wherein the performance parameter is bandwidth;
the communication interface is configured to acquire, through IGP flooding, the link between any two nodes and the maximum bandwidth of the link between any two nodes;
the processor is configured to obtain an available bandwidth of the network segment corresponding to the first priority queue according to the maximum bandwidth of the link between any two nodes and the weight of the network segment corresponding to the first priority queue.
12. The apparatus of claim 10, wherein the performance parameter is bandwidth;
the communication interface is configured to acquire, through IGP flooding, the link between any two nodes and the available bandwidth, carried by the link between any two nodes, of the network segment corresponding to the first priority queue;
the processor is configured to take the available bandwidth of the network segment corresponding to the first priority queue, which is carried by the link between any two nodes, as the available bandwidth of the network segment corresponding to the first priority queue.
13. The apparatus of claim 11 or 12, wherein the processor is configured to:
acquiring a bandwidth required by the service;
and determining the first link according to the bandwidth required by the service and the network topology, wherein the available bandwidth of the network fragment corresponding to the first priority queue is greater than or equal to the bandwidth required by the service.
14. The apparatus of claim 13, wherein the processor is further configured to update an available bandwidth of the network slice corresponding to the first priority queue, and wherein the updated available bandwidth of the network slice corresponding to the first priority queue is a difference between the available bandwidth of the network slice corresponding to the first priority queue before the first link is determined and a bandwidth required by the service.
15. The apparatus according to any one of claims 9 to 14, wherein the apparatus is a controller, a path planning device, a router or a switch.
16. The apparatus according to claim 9, 10 or 15, wherein the performance parameter and the SLA are both bandwidth guarantees, or wherein the performance parameter and the SLA are both transmission delays.
CN202011340814.1A 2020-11-25 2020-11-25 Method and device for deploying service Pending CN114615155A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011340814.1A CN114615155A (en) 2020-11-25 2020-11-25 Method and device for deploying service

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011340814.1A CN114615155A (en) 2020-11-25 2020-11-25 Method and device for deploying service

Publications (1)

Publication Number Publication Date
CN114615155A true CN114615155A (en) 2022-06-10

Family

ID=81856704

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011340814.1A Pending CN114615155A (en) 2020-11-25 2020-11-25 Method and device for deploying service

Country Status (1)

Country Link
CN (1) CN114615155A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115473825A (en) * 2022-09-14 2022-12-13 中国电信股份有限公司 Business service level agreement guarantee method and system, controller and storage medium


Similar Documents

Publication Publication Date Title
CN114073052B (en) Systems, methods, and computer readable media for slice-based routing
US8144629B2 (en) Admission control for services
US6594268B1 (en) Adaptive routing system and method for QOS packet networks
US6538991B1 (en) Constraint-based routing between ingress-egress points in a packet network
US7646734B2 (en) Method for traffic engineering of connectionless virtual private network services
JP7288980B2 (en) Quality of Service in Virtual Service Networks
EP1589696A1 (en) The system and method for realizing the resource distribution in the communication network
WO2017024824A1 (en) Aggregated link-based traffic management method and device
US20080317045A1 (en) Method and System for Providing Differentiated Service
KR20060064661A (en) Flexible admission control for different traffic classes in a communication network
CN112448885A (en) Method and device for transmitting service message
EP3588880B1 (en) Method, device, and computer program for predicting packet lifetime in a computing device
WO2016194089A1 (en) Communication network, communication network management method and management system
WO2015101066A1 (en) Method and node for establishing quality of service reservation
CN109088822B (en) Data flow forwarding method, device, system, computer equipment and storage medium
CN109274589B (en) Service transmission method and device
CN113676412A (en) Network control method and equipment
US10382582B1 (en) Hierarchical network traffic scheduling using dynamic node weighting
US20230142425A1 (en) Virtual dual queue core stateless active queue management (agm) for communication networks
CN114615155A (en) Method and device for deploying service
CN113765796B (en) Flow forwarding control method and device
US9185042B2 (en) System and method for automated quality of service configuration through the access network
CN114448903A (en) Message processing method, device and communication equipment
CN113595915A (en) Method for forwarding message and related equipment
CN117255048A (en) Method, device and system for determining service transmission strategy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination