CN116938723A - Planning method and device for slice bandwidth - Google Patents

Planning method and device for slice bandwidth

Info

Publication number
CN116938723A
Application number
CN202210344303.XA
Authority
CN (China)
Prior art keywords
slice, node, bandwidth, queue, type
Legal status
Pending
Other languages
Chinese (zh)
Inventors
白宇, 李伟峰, 杨文斌, 李广, 王震, 王小忠
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd; priority to CN202210344303.XA

Classifications

    • H: Electricity
    • H04: Electric communication technique
    • H04L: Transmission of digital information, e.g. telegraphic communication
    • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08: Configuration management of networks or network elements
    • H04L41/0896: Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
    • H04L41/0876: Aspects of the degree of configuration automation
    • H04L41/0886: Fully automatic configuration
    • H04L41/0893: Assignment of logical groups to network elements
    • H04L41/50: Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/5041: Network service management characterised by the time relationship between creation and deployment of a service
    • H04L41/5051: Service on demand, e.g. definition and deployment of services in real time

Landscapes

  • Engineering & Computer Science
  • Computer Networks & Wireless Communication
  • Signal Processing
  • Automation & Control Theory
  • Data Exchanges In Wide-Area Networks

Abstract

The application discloses a slice bandwidth planning method and apparatus, and relates to the field of network technologies. The method can avoid insufficient or wasted slice bandwidth when a network slice transmits service data. The method is applied to a planning apparatus and includes: obtaining user information and a target performance parameter of a first user; determining slice information of a target network slice according to the user information; and determining a bandwidth resource of the target network slice according to the target performance parameter and the slice information of the target network slice. The target performance parameter is used to indicate the transmission performance of the target network slice desired by the first user.

Description

Planning method and device for slice bandwidth
Technical Field
The present application relates to the field of network technologies, and in particular, to a method and an apparatus for planning slice bandwidth.
Background
With the development of the internet, new applications running on the internet are constantly emerging, and different services of a network application often have different demands on the network. For example, for video and cloud gaming services, users often want the network to provide large bandwidth. For enterprise services, users typically want the network to support a massive number of connections. For production or financial services, users typically want the network to provide high reliability and low latency (delay). To cope with users' differentiated performance requirements on the network, network operators and network equipment providers guarantee the service level agreement (service level agreement, SLA) requirements of user services by means of network slicing.
Here, network slicing refers to multiple end-to-end (i.e., user end to application server) networks that a network operator or network equipment provider isolates, logically and/or in hardware, on a unified infrastructure (devices). Each such end-to-end network may be referred to as a network slice, and network slices are isolated from each other from the access network, through the bearer network, to the core network.
In current practice, a network operator or network equipment provider typically sets the slice bandwidth of a network slice according to a slice bandwidth value specified by the user, or according to an estimate made from expert experience based on the user's SLA requirements (e.g., delay requirements and/or packet loss (drop) rate requirements). However, neither a user-specified slice bandwidth nor a slice bandwidth estimated from expert experience can avoid insufficient or wasted slice bandwidth when the network slice transmits service data.
Disclosure of Invention
The application provides a slice bandwidth planning method and apparatus, which can avoid insufficient or wasted slice bandwidth when a network slice transmits service data.
To achieve the above purpose, the present application provides the following technical solutions:
In a first aspect, the present application provides a slice bandwidth planning method, where the method is applied to a planning apparatus. The method includes: obtaining user information and a target performance parameter of a first user; determining slice information of a target network slice according to the obtained user information; and determining a bandwidth resource of the target network slice according to the target performance parameter and the slice information. The target performance parameter is used to indicate the transmission performance of the target network slice desired by the first user.
According to the slice bandwidth planning method provided by the application, the planning apparatus can automatically plan a bandwidth resource for the target network slice based on the target performance parameter of the target network slice desired by the user and the slice information of the target network slice. In this way, a target slice bandwidth that satisfies the user-specified target performance parameter is automatically determined for each node on the slice path of the target network slice, which avoids wasted or insufficient slice bandwidth when the target network slice transmits service data.
In one possible design, obtaining the user information and the target performance parameter of the first user includes: receiving user information and a target performance parameter entered on a slice bandwidth planning interface, for example, by an operator. With this possible design, the planning apparatus obtains the user information and the target performance parameter of the first user through interaction with the operator.
In another possible design, the bandwidth resource of the target network slice includes a target slice bandwidth configured for each node in the target network slice. With this possible design, the planning apparatus may enable determining bandwidth resources for the target network slice by planning a slice bandwidth for each node in the target network slice.
In another possible design, the method further includes: and displaying the target slice bandwidth configured for each node in the target network slice on a slice bandwidth display interface. With this possible design, the planning apparatus may output bandwidth resources planned for the target network slice to the operator via the slice bandwidth display interface.
In another possible design, the method further includes: and displaying estimated performance parameters of each node in the target network slice on the slice bandwidth display interface, wherein the estimated performance parameters of each node are determined based on the target slice bandwidth configured for each node.
In another possible design, the slice path of the target network slice includes one or more queues, and the method further includes: and displaying estimated performance parameters of the one or more queues on the slice bandwidth display interface, wherein the estimated performance parameters of the one or more queues are determined based on the bandwidth resources of the target network slice.
With the above two possible designs, in addition to outputting the bandwidth resource planned for the target network slice to the operator through the slice bandwidth display interface, the planning apparatus can also show the operator the transmission performance that the currently planned bandwidth resource can achieve.
In another possible design, the performance parameters include delay and/or packet loss rate at one or more predetermined probabilities.
In another possible design, the target performance parameter includes a target performance parameter of at least one queue in a slice path of the target network slice, where the target performance parameter of any one of the at least one queue is used to indicate the transmission performance of that queue desired by the first user. With this possible design, the first user can specify the transmission performance of the target network slice at queue granularity, which improves the user experience.
In another possible design manner, the slice information includes slice path information of the target network slice, and determining the bandwidth resource of the target network slice according to the target performance parameter and the slice information includes: and acquiring a first slice bandwidth of each node on the slice path of the target network slice according to the slice path information of the target network slice, wherein the first slice bandwidth of each node is the slice bandwidth of each node at the current moment. And determining a first performance parameter of each queue in the at least one queue according to the first slice bandwidth of each node. And determining the slice path type of the target network slice according to the first performance parameter of each queue and the target performance parameter of each queue. And adjusting the slice bandwidth of at least one node on the slice path of the target network slice according to the slice path type of the target network slice so as to obtain the target slice bandwidth of each node. The first performance parameter of each queue in the slice path of the target network slice is the performance parameter of each queue when data is transmitted between the client and the server when the slice bandwidth of each node on the slice path is the first slice bandwidth. The slice path types include: a first type of path type that requires a reduction in slice bandwidth, a second type of path type that does not require an adjustment in slice bandwidth, and a third type of path type that requires an increase in slice bandwidth.
With this possible design, the planning apparatus can automatically plan a bandwidth resource for the target network slice based on the target performance parameter of the target network slice desired by the user and the slice information of the target network slice, so that a target slice bandwidth satisfying the user-specified target performance parameter is automatically determined for each node on the slice path of the target network slice, avoiding wasted or insufficient slice bandwidth when the target network slice transmits service data.
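For intuition only, the iterative adjustment described above can be sketched in code. The sketch collapses the slice path to a single node and a single queue; the toy delay model, the margin used to distinguish the first and second path types, the adjustment step, and all names are assumptions made for illustration, not the method defined by this application.

```python
# A minimal, self-contained sketch of the iterative bandwidth adjustment,
# assuming a toy delay model (delay inversely proportional to spare bandwidth).
REDUCE, KEEP, INCREASE = "reduce", "keep", "increase"   # the three path types

def queue_delay_ms(bw_mbps, rate_mbps):
    """Toy end-to-end delay model: grows as spare bandwidth shrinks."""
    spare = max(bw_mbps - rate_mbps, 1e-6)
    return 100.0 / spare

def path_type(delay, target, margin=0.2):
    if delay > target:
        return INCREASE                       # third type: target not met
    if target - delay > margin * target:
        return REDUCE                         # first type: met with a large margin
    return KEEP                               # second type: just met

def plan_target_bandwidth(first_bw, rate, target_delay, step=10.0):
    """Adjust the slice bandwidth until the path type becomes 'keep'."""
    bw = first_bw
    while True:
        kind = path_type(queue_delay_ms(bw, rate), target_delay)
        if kind == INCREASE:
            bw += step
        elif kind == REDUCE and queue_delay_ms(bw - step, rate) <= target_delay:
            bw -= step
        else:
            return bw

print(plan_target_bandwidth(first_bw=1000.0, rate=800.0, target_delay=1.0))  # 920.0
```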
In another possible design, determining the first performance parameter of each queue in the at least one queue according to the first slice bandwidth of each node includes: determining the bandwidth occupied by each queue in the slice path of the target network slice at each node according to the first slice bandwidth of each node and the queue configuration of each node; and determining the first performance parameter of each queue according to the bandwidth occupied by each queue at each node, the slice traffic information of each node, and a preset probability. The queue configuration of each node is used to indicate the proportion of bandwidth that each node allocates to each queue in the slice path of the target network slice, and the slice traffic information of each node includes the input traffic rate of the target network slice at that node.
In another possible design, the slice traffic information of each node further includes a burst traffic size and a burst traffic rate input by the target network slice at the each node.
With this possible design, burst traffic in the target network slice is taken into account, so that the bandwidth resource planned for the target network slice can still satisfy the user-specified transmission performance when burst traffic occurs in the target network slice.
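As an illustration of the data involved in this design, the sketch below assumes that the queue configuration is expressed as a per-queue share of the node's slice bandwidth and that the slice traffic information carries the input rate plus optional burst fields; all field names and numbers are hypothetical.

```python
# Hypothetical per-node data layout and per-queue bandwidth split.
node = {
    "first_slice_bw_mbps": 1000.0,            # slice bandwidth at the current moment
    "queue_share": {"q1": 0.6, "q2": 0.4},    # proportion allocated to each queue
    "slice_traffic": {
        "input_rate_mbps": 700.0,             # input traffic rate of the target slice
        "burst_size_mbit": 50.0,              # burst traffic size (optional)
        "burst_rate_mbps": 200.0,             # burst traffic input rate (optional)
    },
}

def queue_bandwidth(node, queue_id):
    """Bandwidth occupied by one queue at this node: slice bandwidth x share."""
    return node["first_slice_bw_mbps"] * node["queue_share"][queue_id]

print(queue_bandwidth(node, "q1"))   # 600.0 Mbps for queue q1 in this sketch
```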
In another possible design manner, for any one of the at least one queue, determining the first performance parameter of each queue according to the bandwidth occupied by each queue at each node, the slice flow information of each node, and the preset probability includes: and determining a first performance function of any queue on each node according to the bandwidth occupied by the any queue on each node and the slice flow information of each node. And determining a second performance function of the any queue according to the first performance function of the any queue on each node. And determining a first performance parameter of any queue according to the preset probability and the second performance function. The first performance function is used for indicating the relation between the performance parameters and the probability when any queue transmits data on each node, and the performance parameters comprise time delay and/or packet loss rate. The second performance function is used for indicating the relation between the performance parameters and the probability when any queue transmits data between the client and the server.
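One possible concrete reading of this design, offered only as an illustration, is to represent each node's first performance function as a discrete delay distribution, obtain the second (end-to-end) performance function by convolving the per-node distributions (assuming per-node delays are independent), and read the first performance parameter off at the preset probability. The application does not prescribe this particular model; the numbers below are made up.

```python
# Sketch: compose per-node delay distributions into an end-to-end distribution
# and take the delay bound at a preset probability (e.g., 99%).
def convolve(d1, d2):
    """Combine two delay distributions given as {delay_ms: probability}."""
    out = {}
    for a, pa in d1.items():
        for b, pb in d2.items():
            out[a + b] = out.get(a + b, 0.0) + pa * pb
    return out

def delay_at_probability(dist, prob):
    """Smallest delay D such that P(delay <= D) >= prob."""
    acc = 0.0
    for d in sorted(dist):
        acc += dist[d]
        if acc >= prob:
            return d
    return max(dist)

# First performance functions of one queue at each node (made-up values).
node_delay_dists = [
    {0.1: 0.9, 0.5: 0.1},     # node 1
    {0.2: 0.8, 1.0: 0.2},     # node 3
    {0.1: 0.95, 0.8: 0.05},   # node 4
]

end_to_end = node_delay_dists[0]
for dist in node_delay_dists[1:]:
    end_to_end = convolve(end_to_end, dist)      # second performance function

print(delay_at_probability(end_to_end, 0.99))    # first performance parameter at 99%
```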
In another possible design, determining the slice path type of the target network slice according to the first performance parameter of each queue and the target performance parameter of each queue includes: for any one of the at least one queue, determining a queue type of that queue according to its first performance parameter and its target performance parameter; and determining the slice path type of the target network slice according to the queue type of each queue. The queue type of any one queue is one of: a first type of queue, for which the first performance parameter is less than the target performance parameter of the queue and the difference between the first performance parameter and the target performance parameter is greater than a threshold; a second type of queue, for which the first performance parameter is less than the target performance parameter and the difference is less than the threshold; and a third type of queue, for which the first performance parameter is greater than the target performance parameter of the queue.
For a queue of the first type, the slice bandwidth currently obtained by the queue satisfies the user-specified target performance parameter with a large margin. In this case, to avoid bandwidth waste, the slice bandwidth obtained by the queue may be appropriately reduced. For a queue of the second type, the slice bandwidth currently obtained by the queue just satisfies the user-specified target performance parameter, and no adjustment to the slice bandwidth obtained by the queue is needed. For a queue of the third type, the slice bandwidth currently obtained by the queue does not satisfy the user-specified target performance parameter; to meet it, the slice bandwidth obtained by the queue needs to be increased.
In another possible design, determining the slice path type of the target network slice according to the queue type of each queue includes: when the queue types of all queues in the slice path of the target network slice are the first type, determining that the slice path type of the target network slice is the first type of path type; when the queue types of all queues in the slice path of the target network slice are the second type, or the queue types of the queues include both the first type and the second type, determining that the slice path type of the target network slice is the second type of path type; and when the slice path includes at least one queue whose queue type is the third type, determining that the slice path type of the target network slice is the third type of path type.
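The queue-type and path-type rules above can be condensed into a short sketch; the threshold value and the names used are illustrative assumptions.

```python
# Queue-type and slice-path-type classification as described above.
FIRST, SECOND, THIRD = 1, 2, 3

def queue_type(first_perf, target_perf, threshold):
    """Classify one queue from its first performance parameter and its target."""
    if first_perf > target_perf:
        return THIRD                 # target not met
    if target_perf - first_perf > threshold:
        return FIRST                 # met with a margin larger than the threshold
    return SECOND                    # just met

def slice_path_type(queue_types):
    """Derive the slice path type from the queue types on the path."""
    if THIRD in queue_types:
        return THIRD                 # at least one queue misses its target: increase
    if all(t == FIRST for t in queue_types):
        return FIRST                 # every queue has a large margin: reduce
    return SECOND                    # all second, or a mix of first and second: keep

print(slice_path_type([queue_type(0.4, 1.0, 0.3), queue_type(0.9, 1.0, 0.3)]))  # 2
```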
In another possible design manner, adjusting the slice bandwidth of at least one node on the slice path of the target network slice according to the slice path type of the target network slice to obtain the target slice bandwidth of each node includes: if the slice path type of the target network slice is the first type of path type, reducing the first slice bandwidth of at least one node on the slice path of the target network slice to obtain the target slice bandwidth of each node; if the slice path type of the target network slice is the third type of path type, the first slice bandwidth of at least one node on the slice path of the target network slice is increased to obtain the target slice bandwidth of each node.
Based on these two possible designs: First, since the slice bandwidth currently obtained by a queue of the second type just satisfies the user-specified target performance parameter, the slice bandwidth obtained by that queue does not need to be adjusted. Therefore, for a target network slice whose slice path type is the second type (the path includes at least one queue of the second type and no queue of the third type), there is no need to adjust the slice bandwidth of nodes on the slice path, which avoids an adjustment causing the slice to no longer meet the target performance parameter. Second, since the slice bandwidth currently obtained by a queue of the first type satisfies the user-specified target performance parameter with a large margin, the slice bandwidth obtained by that queue can be appropriately reduced to avoid bandwidth waste. Therefore, for a target network slice whose slice path type is the first type (the path includes only queues of the first type), the slice bandwidth of nodes on the slice path can be appropriately reduced. Third, since the slice bandwidth currently obtained by a queue of the third type does not satisfy the user-specified target performance parameter, the slice bandwidth obtained by that queue should be increased. Therefore, for a target network slice whose slice path type is the third type (the path includes at least one queue of the third type), the slice bandwidth of nodes on the slice path can be increased so that the target network slice meets the user-specified target performance parameter, avoiding insufficient bandwidth when the target network slice transmits service data.
In another possible design, if the slice path type of the target network slice is the first type or the third type, the method further includes: determining a second performance parameter of each node according to the first slice bandwidth of each node and the input traffic rate of the target network slice at each node, where the second performance parameter of each node is the performance parameter of the target network slice when data flows through that node. Reducing the first slice bandwidth of at least one node on the slice path of the target network slice when the slice path type is the first type includes: reducing the first slice bandwidth of a first node on the slice path of the target network slice. The first node is the node with the smallest second performance parameter on the slice path, the reduced first slice bandwidth of the first node is larger than the input traffic rate of the target network slice at the first node, and the first node is marked with a preset mark. The preset mark of the first node indicates that the slice bandwidth of the first node is not adjusted when planning bandwidth resources of other slice paths that include the first node.
With this possible design, the property of the second performance parameter shows that when the first slice bandwidth of the first node, which has the smallest second performance parameter, is compressed, the end-to-end performance parameter of the target network slice increases the slowest, so more bandwidth can be reclaimed and bandwidth waste reduced. In addition, because nodes marked with the preset mark are not adjusted when planning other slice paths, affecting the transmission performance of other network slices whose bandwidth resources have already been planned can be avoided.
In another possible design manner, if the slice path type of the target network slice is the third type of path type, increasing the first slice bandwidth of at least one node on the slice path of the target network slice includes: and if the slice path type of the target network slice is the third type of path type, increasing the slice bandwidth of the second node on the slice path of the target network slice. The second node is a node with the largest second performance parameter on the slice path of the target network slice, and the first slice bandwidth of the second node after the increase is larger than the input flow rate of the target network slice on the second node.
In another possible design, the second node is marked with a preset mark. The preset mark of the second node indicates that the slice bandwidth of the second node is not adjusted when planning bandwidth resources of other slice paths that include the second node.
With these two possible designs, the property of the second performance parameter shows that when the first slice bandwidth of the second node, which has the largest second performance parameter, is increased, the end-to-end performance parameter of the target network slice decreases the fastest, so the required performance can be reached while adding the least bandwidth, which saves bandwidth. In addition, because nodes marked with the preset mark are not adjusted when planning other slice paths, affecting the transmission performance of other network slices whose bandwidth resources have already been planned can be avoided.
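The node-selection rules in the designs above might be sketched as follows; the step size, the field names, and the example values are assumptions made for illustration.

```python
# Sketch: shrink the unmarked node with the smallest second performance parameter,
# grow the unmarked node with the largest one, keep bandwidth above the input rate,
# and mark adjusted nodes with the preset mark.
nodes = [
    {"name": "node1", "bw": 1000.0, "rate": 700.0, "second_perf": 0.2, "marked": False},
    {"name": "node3", "bw": 1200.0, "rate": 900.0, "second_perf": 0.8, "marked": True},
    {"name": "node4", "bw": 900.0,  "rate": 600.0, "second_perf": 0.5, "marked": False},
]

def adjustable(ns):
    return [n for n in ns if not n["marked"]]

def reduce_one_step(ns, step=50.0):
    """First-type path: compress the node with the smallest second performance parameter."""
    node = min(adjustable(ns), key=lambda n: n["second_perf"])
    node["bw"] = max(node["bw"] - step, node["rate"] + step)  # stay above the input rate
    node["marked"] = True      # preset mark: leave this node alone for other slice paths
    return node

def increase_one_step(ns, step=50.0):
    """Third-type path: expand the node with the largest second performance parameter."""
    node = max(adjustable(ns), key=lambda n: n["second_perf"])
    node["bw"] = node["bw"] + step
    node["marked"] = True
    return node

print(reduce_one_step(nodes)["name"])   # node1 (smallest second performance parameter)
```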
In another possible design manner, if the slice path type of the target network slice is the first type of path type, reducing the first slice bandwidth of at least one node on the slice path of the target network slice to obtain the target slice bandwidth of each node specifically includes: and when the slice path type of the target network slice determined according to the first slice bandwidth after the reduction of at least one node on the slice path of the target network slice is the second type path type, determining the slice bandwidth of each node at the current moment as the target slice bandwidth of each node.
In another possible design manner, if the slice path type of the target network slice is the third type of path type, increasing the first slice bandwidth of at least one node on the slice path of the target network slice to obtain the target slice bandwidth of each node specifically includes: and when the slice path type of the target network slice determined according to the first slice bandwidth increased by at least one node on the slice path of the target network slice is the first type path type or the second type path type, determining the slice bandwidth of each node at the current moment as the target slice bandwidth of each node.
In a second aspect, the present application provides a slice bandwidth planning apparatus.
In one possible design, the slice bandwidth planning apparatus is configured to perform any of the methods provided in the first aspect. The present application may divide the functional modules of the slice bandwidth planning apparatus according to any of the methods provided in the first aspect. For example, each functional module may be divided corresponding to each function, or two or more functions may be integrated in one processing module. By way of example, the present application may divide the planning apparatus of the slice bandwidth into an acquisition unit, a determination unit, and the like according to functions. The description of possible technical solutions and beneficial effects executed by each of the above-divided functional modules may refer to the technical solutions provided by the above first aspect or the corresponding possible designs thereof, and will not be repeated herein.
In another possible design, the slice bandwidth planning apparatus includes a memory and a processor. Wherein the processor executes program instructions in the memory to cause the apparatus for planning slice bandwidth to perform any one of the methods as provided by the first aspect and any one of its possible designs.
In a third aspect, the application provides a computer readable storage medium comprising program instructions which, when run on a computer or processor, cause the computer or processor to perform any of the methods provided by any of the possible implementations of the first aspect.
In a fourth aspect, the application provides a computer program product which, when run on a slice bandwidth planning apparatus, causes any one of the methods provided by any one of the possible implementations of the first aspect to be performed.
In a fifth aspect, the present application provides a chip system comprising: a processor for calling from a memory and running a computer program stored in the memory, performing any one of the methods provided by the implementation of the first aspect.
It should be appreciated that any of the apparatus, computer storage medium, computer program product, or chip system provided above may be applied to the corresponding method provided above, and thus, the benefits achieved by the apparatus, computer storage medium, computer program product, or chip system may refer to the benefits in the corresponding method, which are not described herein.
In the present application, the names of the above-described planning apparatuses for slicing bandwidths do not constitute limitations on devices or function modules themselves, which may appear under other names in actual implementations. Insofar as the function of each device or function module is similar to that of the present application, it falls within the scope of the claims of the present application and the equivalents thereof.
Drawings
FIG. 1 is a schematic diagram of two slice paths in a network system;
fig. 2 is a schematic hardware structure of a planning apparatus according to an embodiment of the present application;
fig. 3 is a flow chart of a planning method for slice bandwidth according to an embodiment of the present application;
fig. 4 is a schematic diagram of a planning apparatus according to an embodiment of the present application, where the planning apparatus obtains user information and target performance parameters of a first user through a slice bandwidth planning interface 40;
fig. 5 is a schematic diagram of a slice bandwidth display interface and a slice bandwidth planning interface displayed in a split screen mode according to an embodiment of the present application;
fig. 6 is a schematic diagram of displaying node bandwidth resources in a target network slice on a slice bandwidth display interface according to an embodiment of the present application;
fig. 7 is a flow chart of another method for planning slice bandwidth according to an embodiment of the present application;
FIG. 8 is a graph of a function of a first mathematical model provided by an embodiment of the present application;
fig. 9 is a schematic diagram of a planning apparatus according to an embodiment of the present application, where the planning apparatus determines a second performance function of the queue 1 according to a first performance function of the queue 1 on each node in the slicing path 1;
fig. 10 is a schematic structural diagram of a planning apparatus 100 for slice bandwidth according to an embodiment of the present application;
Fig. 11 is a schematic diagram of a chip system according to an embodiment of the present application.
Detailed Description
For a clearer understanding of embodiments of the present application, some terms or techniques related to the embodiments of the present application are described below:
1) Network slice
Network slicing is a new type of network architecture and refers specifically to multiple logical networks provided through the same shared network infrastructure. In particular, based on different user groups, or different network performance requirements of different user groups, a network operator or network device provider may isolate multiple end-to-end (i.e., user end-to-application server end) networks, each of which is referred to as a network slice, on a unified infrastructure (device) by way of logical and/or hardware isolation.
It should be noted that each network slice serves a particular service type or industry user. Each network slice can flexibly define its own logical topology, SLA requirements, reliability, and security level, thereby meeting the differentiated requirements of different services, industries, or users. Through network slicing, network operators and network equipment providers can reduce the cost of building multiple dedicated networks and can provide highly flexible network services allocated on demand, thereby improving operators' network value and monetization capability and assisting the digital transformation of various industries.
In a network slice, all nodes that traffic data (i.e., traffic) passes through when traveling from end to end may form a slice path, and each node on the slice path is referred to as a forwarding node (or simply node) in the network slice. It should be appreciated that a network slice may include at least one slice path, and a slice path may also correspond to at least one network slice.
As an example, referring to fig. 1, fig. 1 shows a schematic diagram of two slice paths in a network system. As shown in fig. 1, the network system includes a node 1, a node 2, a node 3, and a node 4. The nodes 1, 3 and 4 are nodes on the slicing path 1, and the slicing path 1 is used for transmitting service data between the client 1 and the server 1. Node 2, node 3 and node 4 are nodes on slice path 2, and slice path 2 is used to transmit traffic data between client 2 and server 2. It should be noted that, the slice path 1 and the slice path 2 are two different data transmission paths, and the slice path 1 and the slice path 2 may belong to different network slices, and of course, the slice path 1 and the slice path 2 may also belong to the same network slice.
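Purely as an illustration of fig. 1, the two slice paths can be represented as ordered node lists; the structure and the names below are hypothetical.

```python
# The two slice paths of fig. 1 as ordered lists of forwarding nodes.
slice_paths = {
    "slice_path_1": {"client": "client1", "nodes": ["node1", "node3", "node4"], "server": "server1"},
    "slice_path_2": {"client": "client2", "nodes": ["node2", "node3", "node4"], "server": "server2"},
}

# node3 and node4 are traversed by both paths, so their slice bandwidth has to be
# planned per network slice (or per slice path), not only per physical link.
shared = set(slice_paths["slice_path_1"]["nodes"]) & set(slice_paths["slice_path_2"]["nodes"])
print(sorted(shared))   # ['node3', 'node4']
```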
It will be appreciated that when the nodes in a network slice include network nodes in the access network, the bearer network, and the core network, different network slices need to be isolated from each other, logically and/or in hardware, from the access network through the bearer network to the core network.
Illustratively, different network slices may be isolated from each other by different ports of nodes in the access network, the bearer network, and the core network. In this case, the traffic data of the different network slices is transmitted through ports corresponding to the network slices on the nodes in the access network, the bearer network and the core network. In connection with fig. 1, a slice path 1 of a network slice may transmit traffic data through preset ports on nodes 1, 3 and 4. The slice path 2 of the network slice can transmit service data through the ports preset on the node 2, the node 3 and the node 4.
As another example, the network nodes in the access network, the bearer network, and the core network may transmit the service data of different network slices in a time division multiplexing manner, so that the service data of different network slices is isolated in the time dimension. In connection with fig. 1, node 1, node 3, and node 4 may transmit the traffic data to be transmitted on slice path 1 during period 1, and node 2, node 3, and node 4 may transmit the traffic data to be transmitted on slice path 2 during period 2.
In general, one slice path in a network slice may include one or more queues, and different queues may be used to transmit different types of traffic data. Alternatively, multiple queues in a slice path may have different priorities. For queues of different priorities, each node on the slice path may allocate a larger bandwidth for queues with high priority and a smaller bandwidth for queues with low priority.
Thus, for a service requiring low latency, the traffic data of that service may be transmitted through a high-priority queue in the slice path, whereas for a service that is not latency-sensitive, the traffic data may be transmitted through a lower-priority queue. Therefore, even within a user-customized network slice whose slice bandwidth is limited, the transmission performance of the traffic data of latency-sensitive services can be guaranteed; that is, traffic data is transmitted on demand within one network slice.
As an example, take slice path a of the network slice customized by hospital a: slice path a may include m queues, where m is a positive integer. The m queues have different priorities; for example, the priorities of the m queues decrease as the queue numbers increase, i.e., queue 1 has the highest priority and queue m has the lowest priority. In this way, the queue numbered 1 in slice path a may be used to transmit high-definition live video data (e.g., real-time surgical video, which may be used for remote teaching), and the queue numbered m in slice path a may be used to transmit data related to database access (e.g., data of materials consulted in a digital library), which is not described further.
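As an illustration of priority-based bandwidth sharing within one slice path, the sketch below splits a slice's bandwidth across m queues using a simple linear weighting; the weighting rule is an assumption and is not taken from this application.

```python
# Toy split of a slice path's bandwidth across m priority queues:
# queue 1 (highest priority) gets the largest share, queue m the smallest.
def split_by_priority(slice_bw_mbps: float, m: int) -> dict:
    weights = {q: m - q + 1 for q in range(1, m + 1)}      # m, m-1, ..., 1
    total = sum(weights.values())
    return {q: slice_bw_mbps * w / total for q, w in weights.items()}

# A 10 Gbps slice path with 4 queues: queue 1 (e.g., live surgical video) gets
# the largest share, queue 4 (e.g., database access) the smallest.
print(split_by_priority(10_000.0, 4))   # {1: 4000.0, 2: 3000.0, 3: 2000.0, 4: 1000.0}
```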
2) Other terms
In embodiments of the application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs; rather, such words are intended to present related concepts in a concrete fashion. In embodiments of the application, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated; thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more such features. In the description of the present application, unless otherwise indicated, "a plurality" means two or more. The term "and/or" describes an association relationship between associated objects and covers any and all combinations of one or more of the associated listed items; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. In the embodiments of the present application, the sequence numbers of the processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and shall not limit the implementation of the embodiments of the present application. Determining B according to A does not mean that B is determined according to A alone; B may also be determined according to A and/or other information.
Currently, a network operator or a network equipment provider typically sets the slice bandwidth of a network slice according to a slice bandwidth value specified by the user, or according to a value estimated from expert experience based on the user's SLA requirements (e.g., delay requirements and/or packet loss rate requirements). When the slice bandwidth of a network slice is set in this way, no direct relationship is established between the slice bandwidth and the user's SLA, so insufficient or wasted slice bandwidth may occur.
For example, the network operator or the network equipment provider sets a slice bandwidth of 10Gbps for the network slice 1 according to a slice bandwidth value specified by the user or an expert experience value estimated according to the SLA requirement of the user, and the SLA requirement of the user is that the delay of the service data when transmitted end to end through the network slice 1 is not more than 20ms. However, in actual situations:
Case 1: the 10 Gbps slice bandwidth cannot meet the requirement that the end-to-end delay not exceed 20 ms; a slice bandwidth of 15 Gbps may actually be needed to meet that requirement.
Case 2: the 10 Gbps slice bandwidth satisfies the requirement that the end-to-end delay not exceed 20 ms with a large margin; a slice bandwidth of 5 Gbps may actually be sufficient to meet that requirement.
Based on this, the embodiment of the application provides a method for planning slice bandwidth, which is applied to a planning device, and the method can adjust the slice bandwidth of each node in a target network slice in an iterative adjustment manner according to slice information of the created network slice and target performance parameters specified by a user, so as to determine bandwidth resources of the target network slice, for example, determine the target slice bandwidth required to be configured for each node on a slice path of the target network slice.
Based on this method, reasonable bandwidth resources can be configured for a user-customized network slice adaptively according to the user-specified target performance parameter, which avoids the insufficient or wasted slice bandwidth that arises when the slice bandwidth is set only according to a user-specified value or an expert-experience estimate.
The embodiment of the application also provides a planning device for the slice bandwidth, which can be any computing device with computing processing capability or a functional module in the computing device, and the embodiment of the application is not limited to the above.
Alternatively, the computing device may be a general purpose computer, a tablet computer, a notebook computer, or the like. As an example, the computing device may be a server of a network operator or a network device provider, a network management device, a network controller, or the like, without limitation.
Referring to fig. 2, fig. 2 shows a schematic hardware structure of a planning apparatus according to an embodiment of the present application. As shown in fig. 2, the planning apparatus 20 includes a processor 21, a memory 22, a communication interface 23, and a bus 24. The processor 21, the memory 22 and the communication interface 23 are connected by a bus 24. Optionally, the planning apparatus 20 further comprises an input-output interface 25, the input-output interface 25 being in communication with the processor 21, the memory 22, the communication interface 23, etc. via the bus 24.
The processor 21 is the control center of the planning apparatus 20 and may be a general-purpose central processing unit (central processing unit, CPU). The processor 21 may also be another general-purpose processor, a digital signal processor (digital signal processor, DSP), an application-specific integrated circuit (application-specific integrated circuit, ASIC), a field-programmable gate array (field-programmable gate array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, a graphics processing unit (graphics processing unit, GPU), a neural network processing unit (neural processing unit, NPU), a tensor processing unit (tensor processing unit, TPU), an artificial intelligence (artificial intelligence, AI) chip, a data processing unit (data processing unit, DPU), or the like.
As one example, the processor 21 includes one or more CPUs, such as CPU 0 and CPU 1 shown in fig. 2. In addition, the present application does not limit the number of processor cores in each processor.
The memory 22 is used for storing program instructions or data to be accessed by an application process, and the processor 21 can implement the slice bandwidth planning method provided by the embodiment of the present application by executing the program instructions in the memory 22.
The memory 22 includes volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (read-only memory, ROM), a programmable ROM (programmable ROM, PROM), an erasable PROM (erasable PROM, EPROM), an electrically erasable PROM (electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (random access memory, RAM), which is used as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct rambus RAM (DR RAM). The nonvolatile memory may also be a storage class memory (storage class memory, SCM), a solid state drive (solid state drive, SSD), a hard disk drive (hard disk drive, HDD), or the like. The storage class memory may be, for example, a nonvolatile memory (non-volatile memory, NVM), a phase-change memory (phase-change memory, PCM), a persistent memory, or the like.
In one possible implementation, the memory 22 exists independently of the processor 21. The memory 22 is coupled to the processor 21 via a bus 24 for storing data, instructions or program code. The processor 21, when calling and executing instructions or program code stored in the memory 22, is capable of implementing the slice bandwidth planning method provided by the embodiment of the present application.
In another possible implementation, the memory 22 and the processor 21 are integrated.
A communication interface 23 for connecting the planning apparatus 20 with other devices, such as forwarding nodes in a target network, etc., via a communication network, which may be an ethernet network, a radio access network (radio access network, RAN), a wireless local area network (wireless local area networks, WLAN), etc. The communication interface 23 includes a receiving unit for receiving data/messages and a transmitting unit for transmitting data/messages.
The bus 24 may be an industry standard architecture (industry standard architecture, ISA) bus, a peripheral component interconnect (peripheral component interconnect, PCI) bus, a peripheral component interconnect express (peripheral component interconnect express, PCIe) bus, a compute express link (compute express link, CXL), an extended industry standard architecture (extended industry standard architecture, EISA) bus, or the like. The bus may be classified into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in fig. 2, but this does not mean that there is only one bus or only one type of bus.
An input-output interface 25 for enabling human-machine interaction between a person for operating the planning apparatus 20 (hereinafter referred to as operator) and the planning apparatus 20. Such as text interactions or voice interactions between the operator and planning apparatus 20.
The input/output interface 25 includes an input interface for enabling an operator to input information to the planning apparatus 20, and an output interface for enabling the planning apparatus 20 to output information to the operator.
By way of example, input interfaces include, but are not limited to, a touch screen, keyboard, mouse, microphone, etc., and output interfaces include, but are not limited to, a display screen, speakers, etc. The touch screen, the keyboard or the mouse is used for inputting text/image information, the microphone is used for inputting voice information, the display screen is used for outputting the text/image information, and the loudspeaker is used for outputting the voice information.
In an embodiment of the present application, the operator may input information, such as user information of the first user described below, to the planning apparatus 20 based on the above-described input interface. The output interface of the planning apparatus 20 may be used to display information such as the target slice bandwidth of each node on the slice path of the target network slice described below to an operator, which will not be described in detail.
It should be noted that the structure shown in fig. 2 does not constitute a limitation on the planning apparatus 20; the planning apparatus 20 may include more or fewer components than those shown in fig. 2, or combine some components, or use a different arrangement of components.
The following describes a method for planning slice bandwidth according to an embodiment of the present application in detail with reference to the accompanying drawings.
Referring to fig. 3, fig. 3 shows a flow chart of a slice bandwidth planning method according to an embodiment of the present application. The method may be performed by a planning apparatus having the hardware configuration shown in fig. 2, and the method may include the following steps.
S101, acquiring user information and target performance parameters of a first user.
Wherein the first user is a user who customizes the target network slice. It will be appreciated that the target network slice may be a network slice customized by a network operator or network device provider for the first user. The user information of the first user may comprise identification information for identifying the identity of the first user. For example, the user information of the first user includes a name of the first user.
Optionally, when the network operator or the network device provider customizes a plurality of network slices for the first user, the user information of the first user may include, in addition to identification information for identifying the first user identity, an identification for identifying the target network slice among the plurality of network slices. For example, the user information of the first user includes a name of the first user, and includes a name of the target network slice.
The target performance parameter is used to indicate a transmission performance of the target network slice desired by the first user.
It will be appreciated that when the transmission performance of the data is characterized by a time delay and/or a packet loss rate at the time of transmission of the data, then the performance parameters may include the time delay and/or the packet loss rate at one or more predetermined probabilities at the time of transmission of the data. In this case, the target performance parameter may include a delay and/or a packet loss rate when the target network slice transmits the service data under one or more preset probabilities desired by the first user.
For example, the target performance parameter may be that with a 99% probability, the delay for the target network slice to transmit traffic data is 0.5ms, and/or the packet loss rate for the target network slice to transmit traffic data is 5%. As another example, the target performance parameter may be, without limitation, that the delay time for transmitting traffic data by the target network slice is 1ms and/or the packet loss rate for transmitting traffic data by the target network slice is 8% with a probability of 99.9%.
Optionally, each slice path in the target network slice may include one or more queues. Based on the method of the embodiment of the application, the planning device can plan the bandwidth for the nodes on each slice path in the target network slice in turn by taking the slice path in the target network slice as granularity, thereby realizing the purpose of planning (or determining) the bandwidth resource for the target network slice. For simplicity of description, embodiments of the present application are described below by taking the example that the target network slice includes only one slice path.
The slice path of the target network slice may include m queues having different priorities and used to transmit traffic data having different transmission performance requirements. The target performance parameter may comprise, in particular, a target performance parameter of at least one of the m queues. The target performance parameter of any one of the at least one queue is used for indicating the transmission performance of the any one queue desired by the first user.
For example, for any queue on the slice path of the target network slice, the target performance parameter for that any queue may be that with a 99% probability, the delay for traffic data transmission by that any queue is 0.5ms, and/or the packet loss rate for traffic data transmission by that any queue is 5%. For another example, the target performance parameter of any one of the queues may be, with a probability of 99.9%, a delay of 1ms for transmission of traffic data by any one of the queues, and/or a packet loss rate of 8% for transmission of traffic data by any one of the queues, and the like, without being limited thereto.
It will be appreciated that the first user may specify the target performance parameter of each queue in the slice path of the target network slice, or may specify the target performance parameter of a portion of the queues in the slice path of the target network slice, which is not limited.
Alternatively, the operator may enter the user information of the first user and the target performance parameter on the slice bandwidth planning interface. The slice bandwidth planning interface may be an interface displayed on a display screen of the planning device (for example, a display screen in the output interface shown in fig. 2), or an interface displayed by a display device connected to and communicating with the planning device, which is not limited in the embodiment of the present application. For convenience of description, the following description will take an example in which the slice bandwidth planning interface is an interface displayed on a display screen of the planning apparatus.
For example, an operator may enter the user information and the target performance parameter of the first user, through an input interface (e.g., keyboard, mouse, touch screen, or microphone) of the planning apparatus, on a slice bandwidth planning interface displayed on a display screen of the planning apparatus. In response, the planning apparatus obtains the user information and the target performance parameter of the first user. The operator may be, for example, a network administrator of a network operator or of a network equipment provider, without limitation.
Illustratively, referring to fig. 4, fig. 4 shows a schematic diagram of a planning apparatus acquiring user information and target performance parameters of a first user through a slice bandwidth planning interface 40. The slice bandwidth planning interface 40 is an interface displayed on a display screen of the planning apparatus. As shown in fig. 4, the operator may enter the user name of the first user via the "user name" field on the slice bandwidth planning interface 40 via an input interface (e.g., keyboard) of the planning apparatus, and enter the target performance parameter for the target network slice desired (or specified) by the first user in the "target performance parameter" field.
If the first user-specified target performance parameter comprises target performance parameters for a plurality of queues in the slice path of the target network slice, the operator may separate the target performance parameters for different queues by a separator (e.g., a semicolon or space, etc.) when the target performance parameters are entered in the "target performance parameters" column. For example, when the first user specifies that the target performance parameters of queue 1 in the slice path of the target network slice are: with a probability of 99%, the delay is 5ms. The first user also specifies that the target performance parameters for queue 2 in the slice path of the target network slice are: with a probability of 99%, the delay is 10ms. The operator may enter in the "target performance parameter" field shown in fig. 4: queue 1,99%,5ms; queue 2, 99%,10ms.
It will be appreciated that the first user may also specify multiple target performance parameters for the same queue in the slice path of the target network slice, in which case the operator may also separate by a separator (e.g., a semicolon or space, etc.) when entering multiple target performance parameters for the same queue in the "target performance parameters" column. For example, the first user specifies that target performance parameter 1 for queue 1 in the slice path of the target network slice is: with a probability of 99%, the delay is 5ms. The target performance parameter 2 of the queue 1 in the slice path of the target network slice specified by the first user is: with a probability of 99.9%, the delay is 20ms. The first user also specifies that the target performance parameters for queue 2 in the slice path of the target network slice are: with a probability of 99%, the delay is 10ms. The operator may enter in the "target performance parameter" field: queue 1,99%,5ms; queue 1,99.9%,20ms; queue 2, 99%,10ms.
Optionally, the slice bandwidth planning interface 40 shown in fig. 4 may also include a plurality of "target performance parameter" fields (not shown). Thus, when the target performance parameter specified by the first user includes a plurality of target performance parameters, the operator may enter one target performance parameter in each "target performance parameter" field. For example, the first user specifies that the target performance parameter of queue 1 in the slice path of the target network slice is: with a probability of 99%, the delay is 5 ms. The first user also specifies that the target performance parameter of queue 2 in the slice path of the target network slice is: with a probability of 99%, the delay is 10 ms. The operator may enter in the 1st "target performance parameter" field: queue 1, 99%, 5ms, and enter in the 2nd "target performance parameter" field: queue 2, 99%, 10ms.
Alternatively, the "target performance parameter" field displayed on the slice bandwidth planning interface 40 shown in fig. 4 may include a plurality of drop-down menu fields (not shown), where each menu field corresponds to an input box, and each input box is used to input a target performance parameter for a queue, or is used to input a target performance parameter. Thus, the operator can select one menu bar of the "target performance parameter" bar through an input interface (such as a mouse, a keyboard, etc.) of the planning device, and input the target performance parameter of the queue corresponding to the menu bar in the corresponding input box. For example, the first user specifies that the target performance parameters for queue 1 in the slice path for the target network slice are: with a probability of 99%, the delay is 5ms. The first user also specifies that the target performance parameters for queue 2 in the slice path of the target network slice are: with a probability of 99%, the delay is 10ms. The operator may select the menu bar corresponding to queue 1 in the "target performance parameter" bar and enter in the corresponding input box: 99%,5ms. Selecting a menu bar corresponding to the queue 2 in a target performance parameter bar, and inputting the menu bar in a corresponding input box; 99%,10ms.
It should be understood that the above implementation of the first user's user information and target performance parameters entered by the operator on the sliced bandwidth planning interface 40 is merely exemplary and is not intended to limit the scope of the present application.
Thus, after the operator enters the user information of the first user and the target performance parameter on the slice bandwidth planning interface 40, the "ok" button may be operated (e.g., by a mouse click) to indicate to the planning device that the related information has been entered. In response, the planning apparatus obtains the user information and the target performance parameter of the first user entered by the operator on the slice bandwidth planning interface 40.
S102, determining slice information of the target network slice according to the acquired user information.
Optionally, the planning device may search the pre-stored network slice database for the target network slice according to the acquired user information of the first user, and further determine slice information of the target network slice.
The network slice database comprises slice information of network slices created by a network operator or a network equipment provider for different users. In this way, the planning device can find the target network slice in the network slice database based on the acquired user information of the first user, and determine the slice information of the target network slice.
The slice information of the target network slice may include slice path information of the target network slice, which may include an identification number (identity document, ID) of each node on a slice path of the target network slice, and a port number of a port (port) on each node for transmitting traffic data of the target network slice.
The slice information of the target network slice may also include a queue configuration of each node on the slice path of the target network slice. The queue configuration of each node indicates the proportion of bandwidth that the node allocates to each queue in the slice path of the target network slice. As an example, assume that the slice path of the target network slice includes m queues; any node on the slice path configures these queues, i.e., pre-specifies the proportion of bandwidth allocated to each of the m queues. Typically, the node will allocate more bandwidth to queues with higher priority and less bandwidth to queues with lower priority. That is, queues with higher priority occupy a higher proportion of the bandwidth, and queues with lower priority occupy a lower proportion.
The slice information of the target network slice may also include slice traffic information of the target network slice. The slice traffic information of the target network slice may include an input traffic rate of each node on a slice path of the target network slice when transmitting traffic data of the target network slice. The input traffic rate may be understood as a macroscopic input traffic rate of each node when transmitting traffic data of the target network slice. The macroscopic input traffic rate may be understood as the size of service data received by the node within a preset time granularity, and the size of the time granularity is not specifically limited in the embodiment of the present application. As an example, the time granularity may be 1 second, 5 seconds, etc.
Optionally, the slice traffic information of the target network slice may further include the burst traffic size and the burst traffic rate input at each node on the slice path of the target network slice. For any node on the slice path of the target network slice, assuming that a packet transmitted on that node and used for carrying service data of the target network slice is referred to as a target slice message, the burst traffic size input by the target network slice at that node may refer to the sum of the sizes of a plurality of target slice messages received in succession by that node, where the time interval between every two adjacent target slice messages is smaller than a first threshold. The burst traffic rate input by the target network slice at that node satisfies: the sum of the sizes of the plurality of target slice messages divided by the sum of the time intervals between them. The first threshold is less than the time granularity of the macroscopic input traffic rate.
As an example, assume that node 1 is a node on the slice path of the target network slice, and that node 1 successively receives message 1, message 2, message 3, and message 4 carrying the service data of the target network slice. The time interval between node 1 receiving message 1 and receiving message 2 is t1, the interval between receiving message 2 and receiving message 3 is t2, and the interval between receiving message 3 and receiving message 4 is t3, with t1, t2, and t3 all smaller than the first threshold. The burst traffic size input by the target network slice at node 1 is then the sum of the sizes of message 1, message 2, message 3, and message 4, and the burst traffic rate input by the target network slice at node 1 satisfies: (size of message 1 + size of message 2 + size of message 3 + size of message 4)/(t1+t2+t3).
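As a minimal sketch of this burst computation, the following Python code (all function and variable names, such as burst_traffic and first_threshold, are hypothetical and chosen only for illustration) accumulates the sizes of consecutively received target slice messages and divides by the sum of the inter-arrival intervals:

    def burst_traffic(message_sizes, gaps, first_threshold):
        # message_sizes  : sizes of target slice messages received in succession
        # gaps           : time intervals between adjacent messages (one fewer than sizes)
        # first_threshold: all gaps must stay below this value for the messages
        #                  to count as a single burst
        if len(gaps) != len(message_sizes) - 1:
            raise ValueError("expected one gap per adjacent pair of messages")
        if any(gap >= first_threshold for gap in gaps):
            raise ValueError("gaps must all be below the first threshold")
        burst_size = sum(message_sizes)          # e.g. message 1 + message 2 + message 3 + message 4
        burst_rate = burst_size / sum(gaps)      # e.g. burst_size / (t1 + t2 + t3)
        return burst_size, burst_rate

    # Four messages (bytes) with gaps t1, t2, t3 (seconds) all below a 10 ms threshold.
    size, rate = burst_traffic([1500, 1500, 1200, 800], [0.001, 0.002, 0.001], 0.01)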
In a possible case, slice information of the target network slice may be pre-stored in the network slice database. That is, the slice path information of the target network slice, the queue configuration of each node on the slice path of the target network slice, and the slice traffic information of the target network slice may be pre-stored in the network slice database. In this case, the planning apparatus may directly find the target network slice in the network slice database based on the acquired user information of the first user, and determine the slice information of the target network slice.
In another possible case, a part of slice information of the target network slice is pre-stored in the network slice database, and another part of slice information is acquired by the planning device in real time.
For example, the slice path information of the target network slice may be pre-stored in the network slice database. In this way, the planning device can directly find the target network slice in the network slice database based on the acquired user information of the first user, and determine the slice path information of the target network slice. The planning device then acquires, in real time, the slice traffic information of each node on the slice path of the target network slice according to the ID and the port number of each node in the determined slice path information.
As another example, the queue configuration of each node on the slice path of the target network slice may be pre-stored in the network slice database, or the planning device may obtain the queue configuration of each node from each node in real time according to the ID of each node in the determined slice path information, which is not limited.
Optionally, in some embodiments, some or all of the slice information of the target network slice may be input by an operator through an input interface of the planning apparatus, so that the planning apparatus may obtain the slice information input by the operator. The embodiment of the present application is not limited thereto.
S103, determining bandwidth resources of the target network slice according to the target performance parameters and slice information of the target network slice.
Wherein the bandwidth resources of the target network slice include a target slice bandwidth that needs to be configured for each node on the slice path of the target network slice. A detailed description of how the planning apparatus determines the bandwidth resources of the target network slice according to the target performance parameter and the slice information of the target network slice is given below and is not repeated here.
In the embodiment of the application, according to the target performance parameter and the slice information of the target network slice, the planning device determines, on the premise of meeting the target performance parameter, target slice bandwidths whose sum over the nodes on the slice path of the target network slice is relatively small. That is, the method provided by the embodiment of the application can avoid the bandwidth waste that would otherwise be incurred in order to meet the target performance parameter.
S104, outputting the bandwidth resource of the target network slice.
The planning apparatus may output the bandwidth resources of the target network slice through an output interface (e.g., a display screen described in the output interface shown in fig. 2).
Taking the example that the output interface is a display screen, the planning device can display the bandwidth resource of the target network slice on the slice bandwidth display interface displayed by the display screen, so as to achieve the purpose of outputting the bandwidth resource of the target network slice. The slice bandwidth display interface may be an interface displayed on a display screen of the planning device, or may be an interface displayed by a display device connected to the planning device, which is not limited thereto. For simplicity of description, the embodiment of the application is illustrated by taking a display screen of the planning device on which a slice bandwidth display interface is displayed as an example.
Wherein the bandwidth resources of the target network slice determined by the planning apparatus include a target slice bandwidth determined for each node on the slice path of the target network slice. In this way, the planning apparatus may display the target slice bandwidth determined for each node on the slice path of the target network slice on the slice bandwidth display interface. It can be understood that the target slice bandwidth of each node is the target slice bandwidth determined by the planning device based on the target performance parameter and the slice information of the target network slice.
Optionally, the slice bandwidth display interface may be displayed sequentially with the slice bandwidth planning interface in S101 through a display screen of the planning device. Or, the slice bandwidth display interface and the slice bandwidth planning interface described in S101 may be displayed on the same display interface in a split screen manner through a display screen of the planning device. The split display on the display interface may be, for example, left and right split display on the display screen or up and down split display on the display screen, which is not limited in the embodiment of the present application. For simplicity of description, the embodiment of the application is illustrated by taking a left-right split screen display slice bandwidth display interface and a slice bandwidth planning interface as examples on a display interface by a planning device.
As an example, referring to fig. 5, fig. 5 shows a schematic diagram of a slice bandwidth display interface and a slice bandwidth planning interface displayed in a split screen mode according to an embodiment of the present application. As shown in fig. 5, the right side of the display interface 50 of the planning apparatus is a slice bandwidth planning interface 51, and the left side of the display interface 50 is a slice bandwidth display interface 52. After the planning device obtains the user information and the target performance parameter of the first user input by the operator through the slice bandwidth planning interface 51 (refer to the description of the user information and the target performance parameter of the first user input by the operator through the slice bandwidth planning interface 40 in S101), the planning device may execute S102-S103 through a background operation, so as to determine the target slice bandwidth planned for each node on the slice path of the target network slice.
In one possible implementation, the planning apparatus may display the target slice bandwidth determined for each node on the slice path of the target network slice in a list on the slice bandwidth display interface. For example, assume that the slice path of the target network slice includes node 1, node 2, node 3, and node 4, and the planning apparatus determines the target slice bandwidth for node 1 as bandwidth 1, the target slice bandwidth for node 2 as bandwidth 2, the target slice bandwidth for node 3 as bandwidth 3, and the target slice bandwidth for node 4 as bandwidth 4. The planning apparatus may display the target slice bandwidths of node 1, node 2, node 3, and node 4 at the slice bandwidth display interface 62 in the manner shown in (a) of fig. 6.
In one possible implementation, the planning apparatus may display the network topology of the slice path of the target network slice on the slice bandwidth display interface, and mark each node in the network topology with the target slice bandwidth determined by the planning apparatus for that node. For example, assume that the slice path of the target network slice includes node 1, node 2, node 3, and node 4, and the planning apparatus determines the target slice bandwidth for node 1 as bandwidth 1, the target slice bandwidth for node 2 as bandwidth 2, the target slice bandwidth for node 3 as bandwidth 3, and the target slice bandwidth for node 4 as bandwidth 4. The planning apparatus may display the network topology including node 1, node 2, node 3, and node 4 on the slice bandwidth display interface 62, and mark, for each node in the network topology, the target slice bandwidth determined by the planning apparatus for that node in the form of an annotation (or any other form, such as a floating/hover display), as shown in fig. 6 (b).
In addition, as shown in fig. 6, the slice bandwidth planning interface 61 may also display an identification (e.g., a name) of the target network slice. It will be appreciated that the slice bandwidth planning interface 61 may display the identifications of a plurality of network slices. When an operator selects a network slice from the plurality of network slices through an input interface (e.g., mouse or keyboard) of the planning apparatus, the slice bandwidth display interface 62 displays the network topology of that network slice and marks each node in the network topology with the target slice bandwidth determined by the planning apparatus for that node. Alternatively, the slice bandwidth display interface 62 may also display the global network topology of the network operator or network equipment provider; when an operator selects one of the plurality of network slices through an input interface (e.g., mouse or keyboard) of the planning apparatus, the slice bandwidth display interface 62 may highlight the network topology of that network slice on the global network topology, and mark each node in the network topology of that network slice with the target slice bandwidth determined by the planning apparatus for that node. In this case, the network topology of the network slice is a sub-topology of the global network topology of the network operator or network device provider displayed by the slice bandwidth display interface 62.
Optionally, the planning device may further display estimated performance parameters of one or more queues in the slice path of the target network slice on the slice bandwidth display interface. For any queue in the slice path of the target network slice, the estimated performance parameter of that queue indicates the transmission performance of that queue when it transmits the service data of the target network slice, with the slice bandwidth of each node on the slice path of the target network slice set to the target slice bandwidth determined after the planning device executes the method provided by the embodiment of the application.
Specifically, the estimated performance parameter of any one of the queues may be determined based on the bandwidth resource of the target network slice determined in S103. Optionally, the estimated performance parameter of any queue may be determined based on a target slice bandwidth of each node on a slice path of the target network slice. The target slice bandwidth of each node is the target slice bandwidth determined by each node on the slice path of the target network slice after the planning device executes the slice bandwidth planning method provided by the embodiment of the application. The planning apparatus may determine the specific description of the estimated performance parameter of the any queue based on the target slice bandwidth of each node, and reference may be made to the following specific description of the planning apparatus determining the first performance parameter of the any queue based on the first slice bandwidth of each node, which will not be described herein.
Optionally, the planning device may further display estimated performance parameters of each node in the target network slice on the slice bandwidth display interface. For any node on the slice path of the target network slice, the estimated performance parameter of that node indicates the transmission performance of that node when its slice bandwidth is the target slice bandwidth determined for it by the planning device after executing the method of the embodiment of the application.
Specifically, the estimated performance parameter of any node may be determined based on the target slice bandwidth of any node. The target slice bandwidth of any node is the target slice bandwidth determined for the any node after the planning device executes the slice bandwidth planning method provided by the embodiment of the application. The planning device determines the specific description of the estimated performance parameter of any node based on the target slice bandwidth of the any node, and reference may be made to the following specific description of the planning device determining the second performance parameter of any node based on the first slice bandwidth of any node, which will not be repeated.
In this way, by the method described in S101-S104, the planning apparatus may automatically plan bandwidth resources for the target network slice based on the target performance parameter of the target network slice desired by the user and the slice information of the target network slice, so as to automatically determine, for each node on the slice path of the target network slice, a target slice bandwidth capable of meeting the target performance parameter specified by the user, and avoid the problem that the slice bandwidth is wasted or insufficient when the target network slice transmits service data.
Next, S103 is described with reference to fig. 7. As shown in fig. 7, S103 may be implemented by the following steps:
S1031, determining a first slice bandwidth of each node on a slice path of the target network slice.
Alternatively, the first slice bandwidth may be any preset bandwidth. The specific value of the preset bandwidth is not limited in this embodiment of the present application; for example, the preset bandwidth value may be 1 gigabit per second (Gbps).
One possible implementation may be that a preset bandwidth value is preset in the planning apparatus. In this way, when the planning device starts planning the slice bandwidth of each node for the target network slice, the preset bandwidth value can be used as the value of the first slice bandwidth of each node in the target network slice. In another possible implementation, the operator may input a preset bandwidth value to the planning apparatus through the bandwidth input interface. In response, the planning device receives the preset bandwidth value as a value of the first slice bandwidth of each node in the target network slice. It can be understood that the preset bandwidth values corresponding to different nodes in the target network slice may be the same or different, which is not limited by the embodiment of the present application.
Alternatively, the first slice bandwidth may also be a bandwidth actually configured by each node at the current moment on the slice path of the target network slice. In this way, when the planning device starts planning the slice bandwidth of each node for the target network slice, based on the slice path information in the slice information acquired in S102, the planning device may acquire, in real time, the slice bandwidth configured for the target network slice by each node at the current time from each node on the slice path of the target network slice, and use the acquired slice bandwidth as the first slice bandwidth of each node.
It will be understood that the slice path information in the slice information acquired by the planning apparatus in S102 includes the ID of each node on the slice path of the target network slice. Thus, the planning apparatus may send a query request to each node based on the ID of that node, to query the slice bandwidth configured by that node for the target network slice at the current time. In response, each node may return to the planning device, after receiving the query request, the slice bandwidth it currently has configured for the target network slice. In this way, the planning device obtains the slice bandwidth configured by each node for the target network slice at the current moment, and takes that slice bandwidth as the first slice bandwidth of each node.
For example, for node 1 on the slice path of the target network slice, the planning apparatus sends a query request to node 1 based on the ID of node 1 to request query node 1 for the slice bandwidth 1 configured for the target network slice at the current time. The node 1 may then return the slice bandwidth 1 configured for the target network slice to the planning apparatus after receiving the query request. In this way, the planning device acquires the slice bandwidth 1 configured by the node 1 for the target network slice at the current moment, and takes the slice bandwidth 1 as the first slice bandwidth of the node 1.
Alternatively, the planning device may determine the slice path information and then display it to the operator through an information display interface. The operator then manually queries, based on the displayed slice path information, the slice bandwidth configured for the target network slice at the current moment by each node on the slice path of the target network slice. The operator can then input, through the bandwidth input interface, the slice bandwidth configured for the target network slice at the current moment by each node on the slice path of the target network slice. In response, the planning device receives the slice bandwidth of each node on the slice path of the target network slice at the current moment input by the operator, and takes it as the first slice bandwidth of each node.
S1032, determining a first performance parameter of each queue in at least one queue in the slice path of the target network slice based on the first slice bandwidth of each node on the slice path of the target network slice and the slice information of the target network slice.
The at least one queue is a queue corresponding to the target performance parameter acquired by the planning device. For example, the slice path of the target network slice includes m queues, and the planning device acquires the target performance parameters of n (n is a positive integer less than or equal to m) queues in the m queues in S101, where the at least one queue is the n queues.
For convenience of description, embodiments of the present application will be described hereinafter by taking n equal to m as an example. That is, the planning apparatus may determine the first performance parameter for each queue in the slice path of the target network slice based on the first slice bandwidth of each node on the slice path of the target network slice.
Specifically, the planning apparatus may determine, based on a first slice bandwidth of each node and a queue configuration of each node on a slice path of the target network slice, a bandwidth occupied by each queue on each node on the slice path of the target network slice. Wherein the queues of each node are configured to indicate a proportion of bandwidth allocated by each node to each queue in a slice path of a target network slice.
For any node on the slice path of the target network slice, the planning device may determine, according to the first slice bandwidth of the any node and the queue configuration of the any node, a bandwidth size allocated by the any node for each queue of the target network slice.
Illustratively, for node 1 on the slice path of the target network slice, assume that the first slice bandwidth of node 1 is 10 Gbps and the slice path of the target network slice includes 2 queues (queue 1 and queue 2, respectively). If the queue configuration of node 1 indicates that the bandwidth allocation ratio of queue 1 to queue 2 is 1:4, the bandwidth allocated by node 1 to queue 1 is [10/(1+4)] × 1 = 2 Gbps, and the bandwidth allocated to queue 2 is [10/(1+4)] × 4 = 8 Gbps.
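This allocation step can be sketched as follows (a minimal illustration; the name split_slice_bandwidth and the dictionary representation of the queue configuration are assumptions, not part of the embodiment):

    def split_slice_bandwidth(slice_bandwidth_gbps, queue_weights):
        # Split one node's first slice bandwidth among its queues according to the
        # node's queue configuration (bandwidth-allocation ratio per queue).
        total = sum(queue_weights.values())
        return {queue: slice_bandwidth_gbps * weight / total
                for queue, weight in queue_weights.items()}

    # Node 1 from the text: first slice bandwidth 10 Gbps, queue 1 : queue 2 = 1 : 4.
    per_queue_bandwidth = split_slice_bandwidth(10.0, {"queue 1": 1, "queue 2": 4})
    # -> {"queue 1": 2.0, "queue 2": 8.0}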
In this way, the planning device can determine the first performance parameter of each queue in the slice path of the target network slice according to the bandwidth occupied by each node by each queue in the slice path of the target network slice, the slice flow information of each node on the slice path of the target network slice and the preset probability. The preset probability is the preset probability corresponding to the target performance parameter acquired by the planning device.
Optionally, the planning device may substitute the bandwidth occupied by each queue in the slice path of the target network slice, the slice flow information of each node on the slice path of the target network slice, and the preset probability into a preset model, so as to obtain the first performance parameter of each queue under the preset probability. The preset model is used for indicating the corresponding relation between the probability and performance parameters (such as time delay or packet loss rate) of a single node when the single node transmits the service data of the target network slice.
Optionally, the preset model may include a first mathematical model constructed based on queuing theory, where the first mathematical model is used to characterize a correspondence between probability and time delay of a single node when transmitting the target network slice service data. For any node on the slice path of the target network slice, the first mathematical model is used for indicating the corresponding relation between queuing delay and probability of the any node when transmitting service data of the target network slice. And/or the preset model can further comprise a second mathematical model which is constructed based on queuing theory and used for representing the corresponding relation between the probability and the packet loss rate of the single node when the target network slice service data are transmitted. For any node on the slice path of the target network slice, the second mathematical model is used for indicating the corresponding relation between the packet loss rate and the probability of the any node when the traffic data of the target network slice is transmitted. It may be appreciated that the second mathematical model may be a mathematical model determined based on the first mathematical model and a packet loss threshold set by the node, which is not limited by the embodiment of the present application.
By way of example, referring to fig. 8, fig. 8 shows a graph of a function of a first mathematical model provided by an embodiment of the present application. As shown by the dashed curve in fig. 8, when the queuing delay of the node when transmitting the service data of the target network slice is 5 ms, the corresponding probability is A; that is, the probability that the node's queuing delay is 5 ms when transmitting the service data of the target network slice is A. For another example, when the queuing delay of the node when transmitting the service data of the target network slice is 8 ms, the corresponding probability is B; that is, the probability that the node's queuing delay is 8 ms when transmitting the service data of the target network slice is B.
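The embodiment does not fix a particular queuing-theory formula for the preset model. Purely as an illustration of what such a single-node probability-delay relation could look like, the following sketch assumes an M/M/1-style node whose total (sojourn) delay exceeds t with probability exp(−(μ−λ)·t); both the model choice and the names are assumptions, not the patent's model:

    import math

    def delay_exceed_probability(delay_s, service_rate, arrival_rate):
        # Assumed M/M/1-style single-node model (illustration only): the total delay
        # exceeds delay_s seconds with probability exp(-(mu - lambda) * delay_s).
        if arrival_rate >= service_rate:
            return 1.0   # overloaded node: the delay grows without bound
        return math.exp(-(service_rate - arrival_rate) * delay_s)

    # Probability that the delay exceeds 5 ms on a node serving 2000 packets/s
    # while the slice's input traffic rate is 1500 packets/s.
    p_exceed_5ms = delay_exceed_probability(0.005, 2000.0, 1500.0)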
Specifically, for any queue in the slice path of the target network slice, the planning device may determine the first performance function of the any queue on each node according to the bandwidth occupied by the any queue on the slice path of the target network slice and the slice flow information of each node. Wherein the first performance function is used to indicate a relationship between performance parameters (e.g., delay and/or packet loss rate) and probability of the any queue transmitting traffic data on each node in the target network slice. Taking the time delay as an example, the first performance function is the time delay probability density function.
Optionally, the planning device may bring the bandwidth occupied by each node of the any queue on the slice path of the target network slice and the slice flow information of each node into the preset model, so as to obtain the first performance function of the any queue on each node. Taking any node on the slice path of the target network slice as an example, the planning device can bring the bandwidth occupied by the any node of the any queue and the slice flow information of the any node into the preset model, so as to obtain a first performance function of the any queue on the any node. It should be appreciated that for any of the queues described above, each node in the target network slice corresponds to a first performance function.
Further, the planning apparatus may determine the second performance function of any one of the queues according to the first performance function of the any one of the queues on each node on the slice path of the target network slice. The second performance function is used for indicating the relation between the performance parameter and the probability when the any queue transmits the service data of the target network slice between the client and the server. In other words, the second performance function is an end-to-end performance function of the either queue. Taking the performance parameter as the time delay as an example, the second performance function is the time delay cumulative distribution function.
Alternatively, the planning apparatus may convolutionally sum the first performance functions of the any one of the queues over all nodes of the target network slice to obtain a probability density function of the end-to-end performance parameter of the any one of the queues. The planning device may also perform an integral operation on the probability density function to obtain a second performance function for the any one of the queues.
As an example, when the performance parameter is a time delay, taking the slice path 1 of the network slice shown in fig. 1 as an example, for the queue 1 in the slice path 1, referring to fig. 9, fig. 9 shows a schematic diagram of the planning apparatus determining the second performance function of queue 1 according to the first performance function of queue 1 on each node in slice path 1. As shown in fig. 9, in slice path 1, the first performance function of queue 1 at node 1 may be characterized as a time delay probability density function curve 901, the first performance function of queue 1 at node 3 may be characterized as a time delay probability density function curve 902, and the first performance function of queue 1 at node 4 may be characterized as a time delay probability density function curve 903. The planning device may convolve and sum the time delay probability density function curve 901, the time delay probability density function curve 902, and the time delay probability density function curve 903 to obtain an end-to-end time delay probability density function curve 91 of queue 1. The planning means may then integrate the time delay probability density function curve 91 to obtain the end-to-end delay cumulative distribution function of queue 1 (the function characterized by curve 92 in fig. 9), which is the second performance function of queue 1.
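This convolution-and-integration step can be sketched numerically as follows. The exponential per-node delay probability density functions in the example are made-up placeholders used only to make the sketch runnable; they are not the preset model of the embodiment, and all names are illustrative:

    import numpy as np

    def end_to_end_delay_cdf(per_node_pdfs, dt):
        # Convolve the per-node delay probability density functions (first performance
        # functions) of one queue and integrate the result, yielding the queue's
        # end-to-end delay cumulative distribution function (second performance function).
        pdf = per_node_pdfs[0]
        for node_pdf in per_node_pdfs[1:]:
            pdf = np.convolve(pdf, node_pdf) * dt   # path delay = sum of per-node delays
        cdf = np.cumsum(pdf) * dt                   # numerical integration of the PDF
        return pdf, np.clip(cdf, 0.0, 1.0)

    # Toy example for slice path 1 (node 1, node 3, node 4), sampled every 0.1 ms.
    dt = 1e-4
    t = np.arange(0.0, 0.05, dt)
    pdfs = [np.exp(-t / 0.002) / 0.002,
            np.exp(-t / 0.003) / 0.003,
            np.exp(-t / 0.001) / 0.001]
    end_to_end_pdf, end_to_end_cdf = end_to_end_delay_cdf(pdfs, dt)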
In this way, the planning apparatus may obtain a second performance function for each queue in the slice path of the target network slice. It will be appreciated that the number of second performance functions is the same as the number of queues in the slice path of the target network slice, i.e. one second performance function for each queue in the slice path of the target network slice.
Further, the planning device may determine the first performance parameter of each queue in the slice path of the target network slice under each preset probability according to the second performance function of each queue in the slice path of the target network slice and at least one preset probability corresponding to the target performance parameter. Taking any queue in the slice path of the target network slice as an example, the planning device may determine the first performance parameter of the any queue under each preset probability according to the second performance function of the any queue and the preset probability corresponding to the target performance parameter of the any queue.
It may be understood that the first performance parameter is a performance parameter calculated by the planning apparatus based on the first slice bandwidth of each node on the slice path of the target network slice, i.e. the first performance parameter may be referred to as a calculation performance parameter or a theoretical performance parameter.
Optionally, for at least one preset probability corresponding to the target performance parameter, the planning device may substitute each preset probability into the second performance function of any queue in the slice path of the target network slice, so as to obtain, under each preset probability, the first performance parameter, such as the delay or the packet loss rate, of the target network slice service data transmitted by the any queue between the client and the server.
Taking a probability 1 as an example, the planning device may bring the probability 1 into the second performance function of any queue, so as to calculate the performance parameter (such as delay and/or packet loss rate) of the any queue corresponding to the probability 1 when transmitting the target network slice service data between the client and the server.
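Reading the first performance parameter off the second performance function at a preset probability amounts to an inverse-CDF lookup, which can be sketched as follows (the sampled CDF in the example is a toy array, not computed from real slice traffic; the names are illustrative):

    import numpy as np

    def delay_at_probability(cdf, dt, preset_probability):
        # First performance parameter of a queue: the smallest end-to-end delay whose
        # cumulative probability reaches the preset probability (e.g. 0.99).
        idx = int(np.searchsorted(cdf, preset_probability))
        if idx >= len(cdf):
            return float("inf")   # the target probability is never reached on this grid
        return idx * dt

    # Toy CDF sampled every 1 ms; delay_t for a preset probability of 99% is 3 ms here.
    delay_t = delay_at_probability(np.array([0.2, 0.6, 0.9, 0.995, 1.0]), 1e-3, 0.99)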
S1033, determining the queue type of each queue according to the first performance parameter of each queue and the target performance parameter of each queue.
Specifically, the planning device may determine a queue type of each queue according to a first performance parameter of each queue in a slice path of the target network slice under at least one preset probability and a target performance parameter of each queue under the at least one preset probability. For simplicity of description, the at least one probability will be described below as including only one preset probability.
For any queue in the slice path of the target network slice: under the preset probability, if the planning device determines that the first performance parameter of the queue is smaller than the target performance parameter of the queue and the difference between the first performance parameter and the target performance parameter is larger than a threshold value, it determines that the queue type of the queue is the first type of queue type. If the planning device determines that the first performance parameter of the queue is smaller than the target performance parameter of the queue and the difference between the first performance parameter and the target performance parameter is smaller than the threshold value, it determines that the queue type of the queue is the second type of queue type. If the planning device determines that the first performance parameter of the queue is greater than the target performance parameter of the queue, it determines that the queue type of the queue is the third type of queue type. The specific value of the threshold is not specifically limited in the embodiment of the present application, nor is the case of equality in the comparisons above.
It should be noted that, for a queue whose queue type is the first type, it means that the slice bandwidth currently obtained by the queue far satisfies the target performance parameter specified by the user. In this case, to avoid bandwidth waste, the slice bandwidth obtained by the queue may be reduced appropriately. And for the queues with the queue types of the second type, the slice bandwidth obtained by the queues at present just meets the target performance parameters specified by the user. In this case, no adjustment may be made to the queue acquisition slice bandwidth. And for the queue with the queue type of the third type, the current obtained slice bandwidth of the queue does not meet the target performance parameter specified by the user. In this case, to meet the target performance parameter specified by the user, the slice bandwidth obtained by the queue needs to be increased.
As an example, assume that the first performance parameter and the target performance parameter are time delays, and for any queue in the slice path of the target network slice, the first performance parameter of the any queue is denoted by delay_t, the target performance parameter of the any queue is denoted by delay_e, and the above threshold is denoted by Y as an example:
When the planning device determines that delay_t < delay_e and |delay_t − delay_e| > Y, the planning device determines that the type of the queue is the first type of queue type. Alternatively, the conditions delay_t < delay_e and |delay_t − delay_e| > Y may be replaced with delay_t < a·delay_e, where a is a constant coefficient taking any value greater than 0 and less than 1. That is, when the planning apparatus determines that delay_t < a·delay_e, the type of the queue may also be determined to be the first type of queue type.
When the planning device determines that delay_t ≤ delay_e and |delay_t − delay_e| ≤ Y, it determines that the type of the queue is the second type of queue type. Alternatively, these conditions may be replaced with a·delay_e ≤ delay_t ≤ delay_e; that is, when the planning apparatus determines that a·delay_e ≤ delay_t ≤ delay_e, the type of the queue may also be determined to be the second type of queue type.
When the planning apparatus determines that delay_t > delay_e, it is determined that the type of either queue is a third type of queue type.
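The three conditions above can be sketched as a small classification function. The numeric codes 1, 2, 3 for the queue types and the default coefficient value a = 0.8 are assumptions made only for this illustration:

    def classify_queue(delay_t, delay_e, a=0.8):
        # delay_t : first (calculated) performance parameter of the queue
        # delay_e : target performance parameter specified by the user
        # a       : constant coefficient, 0 < a < 1 (the value 0.8 is an assumption)
        if delay_t < a * delay_e:
            return 1      # first type: slice bandwidth can be reduced
        if delay_t <= delay_e:
            return 2      # second type: a*delay_e <= delay_t <= delay_e, leave as is
        return 3          # third type: delay_t > delay_e, bandwidth must be increased

    # Queue 1 from the text: target 5 ms at 99%; a calculated delay of 2 ms -> type 1.
    queue_type = classify_queue(delay_t=0.002, delay_e=0.005)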
S1034, determining the slice path type of the target network slice according to the queue type of each queue.
The slice path types of the target slice network include: a first type of path type that requires a reduction in slice bandwidth, a second type of path type that does not require an adjustment in slice bandwidth, and a third type of path type that requires an increase in slice bandwidth.
Optionally, the planning apparatus may determine the slice path type of the target network slice according to a queue type of each queue in the slice path of the target network slice and the following rule.
Specifically, when the planning device determines that the queue types of all queues in the slice path of the target network slice are the first type of queue types, the planning device determines that the slice path type of the target network slice is the first type of path type.
When the planning device determines that the queue types of all the queues in the slice path of the target network slice are the second type queue types, or the planning device determines that the queue types of the queues in the slice path of the target network slice only comprise the first type queue types and the second type queue types, the slice path type of the target network slice is determined to be the second type path type. That is, when the planning apparatus determines that the queues having the queue type of the third type of queue are not included in all the queues included in the slice path of the target network slice and at least one queue having the queue type of the second type of queue is included, the slice path type of the target network slice is determined to be the second type of path type.
And when the planning device determines that the queue of the slice path of the target network slice comprises at least one queue with a queue type of a third type of queue type, determining that the slice path type of the target network slice is the third type of path type.
As an example, assume that the slice path of the target network slice includes 3 queues, queue 1, queue 2, and queue 3, respectively. When the planning device determines that the queue types of the queue 1, the queue 2 and the queue 3 are all the first type of queue types, the slice path type of the target network slice is determined to be the first type of path type needing to reduce the slice bandwidth. When the planning device determines that the queues 1, 2 and 3 are not the queues of the third type of queue types, and at least one queue type of queues 1, 2 and 3 is the queue of the second type of queue type, the slice path type of the target network slice is determined to be the second type of path type without adjusting the slice bandwidth. When the planning device determines that the queues 1, 2 and 3 comprise at least one queue type of the third class, the slice path type of the target network slice is determined to be the type of the third class path which needs to increase the slice bandwidth.
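The rule of S1034 can be sketched as follows; the numeric codes 1, 2, 3 for queue types and path types are only a convenient encoding for this sketch:

    def classify_slice_path(queue_types):
        # queue_types: queue type (1, 2 or 3) of every queue on the slice path
        if any(t == 3 for t in queue_types):
            return 3      # third type of path type: increase slice bandwidth
        if all(t == 1 for t in queue_types):
            return 1      # first type of path type: reduce slice bandwidth
        return 2          # second type: no third-type queue, at least one second-type queue

    # Example from the text: queue 1, queue 2 and queue 3 all of the first type.
    path_type = classify_slice_path([1, 1, 1])   # -> 1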
S1035, determining the direction of adjusting the first slice bandwidth of the node on the slice path of the target network slice according to the slice path type of the target network slice.
The planning device determines the direction of adjusting the first slice bandwidth of the nodes on the slice path of the target network slice according to the slice path type of the target network slice. Here, adjusting the direction of the first slice bandwidth includes increasing the first slice bandwidth and decreasing the first slice bandwidth.
Specifically, when the planning device determines that the slice path type of the target network slice is the second type of path type, it is determined that the first slice bandwidth of all nodes on the slice path of the target network slice is not adjusted. In this case, the planning apparatus determines the first slice bandwidth of each node on the slice path of the current target network slice as the target slice bandwidth of the each node, and ends the flow of adjusting the target slice bandwidth of each node on the slice path of the target network slice.
It will be appreciated that since the slice bandwidth currently obtained by a queue of the second type of queue type just meets the user-specified target performance parameter, there is no need to adjust the slice bandwidth obtained by that queue. Thus, for a target network slice whose slice path type is the second type of path type, which includes at least one second-type queue and no third-type queue, there is no need to adjust the slice bandwidth of the nodes on the slice path of the target network slice, so as to avoid an adjusted first slice bandwidth causing the target performance parameter to no longer be met.
When the planning device determines that the slice path type of the target network slice is the first type of path type, it determines to reduce the first slice bandwidth of at least one node on the slice path of the target network slice. In this case, the planning apparatus executes S1036.
It can be appreciated that since the slice bandwidth currently obtained by a queue of the first type of queue type more than meets the target performance parameter specified by the user, the slice bandwidth obtained by that queue can be reduced appropriately to avoid wasting bandwidth. Therefore, for a target network slice whose slice path type is the first type of path type, which includes only first-type queues, the slice bandwidth of nodes on the slice path of the target network slice can be appropriately reduced to avoid bandwidth waste.
When the planning device determines that the slice path type of the target network slice is the third type of path type, it is determined to increase the first slice bandwidth of at least one node on the slice path of the target network slice. In this case, the planning apparatus executes S1037.
It will be appreciated that since the current available slice bandwidth of a queue of the third type does not meet the user specified target performance parameter, the available slice bandwidth of the queue may be increased appropriately to meet the user specified target performance parameter. Therefore, for the target network slice with the slice path type of the third type path type comprising at least one queue type queue of the third type, the slice bandwidth of the node on the slice path of the target network slice can be increased, so that the target network slice meets the target performance parameters specified by the user, and the problem of insufficient bandwidth of the target network slice when the target network slice transmits service data is avoided.
And S1036, when the direction of adjusting the first slice bandwidth is determined to be reducing the first slice bandwidth, reducing the first slice bandwidth of at least one node on the slice path of the target network slice by a preset step length.
The size of the preset step is not particularly limited in the embodiment of the application. The preset step length can be a fixed step length or a variable step length. When the preset step size is a variable step size, the step size used for each adjustment of a node's slice bandwidth can be determined by the magnitude of the difference between the target performance parameter and the first performance parameter. For example, when the difference between the target performance parameter and the first performance parameter is large, a larger step size can be selected to adjust the bandwidth of the node, so as to improve the efficiency of the slice bandwidth adjustment. Conversely, when the difference between the target performance parameter and the first performance parameter is small, a smaller step size can be selected to adjust the bandwidth of the node, so as to avoid over-adjusting the slice bandwidth.
Optionally, the planning device may randomly reduce the first slice bandwidth of at least one node in the slice path of the target network slice by a preset step size.
Optionally, the planning device may first determine, according to a first constraint condition, a first node whose first slice bandwidth needs to be reduced, and reduce the first slice bandwidth of the first node by a preset step. A node satisfies the first constraint condition if, after its first slice bandwidth is reduced by the preset step, the reduced slice bandwidth of the node is still larger than the input traffic rate of the target network slice at that node. Among the nodes satisfying the first constraint condition, the node with the smallest second performance parameter (described below) may, for example, be selected as the first node.
The second performance parameter of each node on the slice path of the target network slice is the performance parameter of the data of the target network slice when flowing through each node. In particular, the planning apparatus may determine the second performance parameter of each node based on the first slice bandwidth of each node on the slice path of the target network slice and the input traffic rate of the target network slice at the each node. Here, the description of the input traffic rate may refer to the above, and will not be repeated. As an example, for any node on the slice path of the target network slice, the planning apparatus may substitute the first slice bandwidth of the any node, the input traffic rate of the target network slice at the any node, and the predetermined probability value into the preset model described above, so that the second performance parameter of the any node may be obtained. It will be appreciated that the planning means is for determining the second performance parameter for each node in the target network slice based on the same predetermined probability value.
It should be understood that, based on the characteristics of the second performance parameter, when the first slice bandwidth of the node with the smallest second performance parameter is compressed, the end-to-end performance parameter of the target network slice increases the slowest, so that the purpose of compressing more bandwidth can be achieved, and the bandwidth waste can be reduced.
In addition, a node marked with the preset mark refers to a node whose slice bandwidth was not adjusted when bandwidth resources of other network slices were planned before the bandwidth resources of the target network slice are planned. In other words, when any node in the target network slice is marked with a preset mark, it indicates that the slice bandwidth of that node was not adjusted when bandwidth resources of other network slices including that node were planned before the bandwidth resources of the target network slice are planned. In this way, when planning bandwidth resources of the target network slice, the first slice bandwidth of a node that has not previously been adjusted is adjusted, so that affecting the transmission performance of other network slices whose bandwidth resources have already been planned can be avoided.
Or, a node marked with the preset mark refers to a node whose slice bandwidth was not adjusted when bandwidth resources of nodes on other slice paths of the target network slice were planned before the bandwidth resources of nodes on a given slice path of the target network slice are planned. In other words, when any node in the target network slice is marked with a preset mark, it indicates that the slice bandwidth of that node was not adjusted when the bandwidth resources of nodes on other slice paths including that node were planned. In this way, when planning bandwidth resources of nodes on the given slice path, the first slice bandwidth of a node that has not previously been adjusted is adjusted, so that affecting the transmission performance of slice paths whose bandwidth resources have already been planned can be avoided.
And S1037, when the direction of adjusting the first slice bandwidth is determined to be the direction of increasing the first slice bandwidth, increasing the first slice bandwidth of at least one node on the slice path of the target network slice by a preset step length.
The description of the preset step length refers to the description of S1036, and will not be repeated.
Optionally, the planning device may randomly increase the first slice bandwidth of at least one node on the slice path of the target network slice by a preset step size.
Optionally, the planning device may determine, according to a second constraint condition, a second node whose first slice bandwidth needs to be increased, and increase the first slice bandwidth of the second node by a preset step size. A node satisfies the second constraint condition if, after its first slice bandwidth is increased by the preset step, the increased slice bandwidth of the node is larger than the input traffic rate of the target network slice at that node. For the second performance parameter, the determination of a node's second performance parameter, and the description of the preset mark, reference may be made to the description of S1036, which is not repeated here.
It should be understood that, based on the characteristics of the second performance parameter, when the first slice bandwidth of the node with the largest second performance parameter is increased, the end-to-end performance parameter of the target network slice decreases the fastest, so that the target performance parameter can be met with a smaller bandwidth increase, thereby saving bandwidth.
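The node-selection logic of S1036 and S1037 can be sketched together as follows. The record keys ('bandwidth', 'input_rate', 'perf') are hypothetical, and the preset-mark filtering described above is omitted for brevity:

    def pick_node_to_adjust(nodes, step, increase):
        # nodes : per-node records with hypothetical keys
        #         'bandwidth'  - current first slice bandwidth of the node
        #         'input_rate' - input traffic rate of the target network slice at the node
        #         'perf'       - second performance parameter of the node
        # The adjusted bandwidth must still exceed the slice's input traffic rate at the
        # node; a reduction targets the node with the smallest second performance
        # parameter, an increase the node with the largest one.
        delta = step if increase else -step
        candidates = [n for n in nodes if n['bandwidth'] + delta > n['input_rate']]
        if not candidates:
            return None
        chooser = max if increase else min
        return chooser(candidates, key=lambda n: n['perf'])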
S1038, determining the target slice bandwidth of each node on the slice path of the target network slice based on the increased or decreased slice bandwidth.
Specifically, the planning apparatus may take the reduced first slice bandwidth as a new first slice bandwidth of the first node, or take the increased first slice bandwidth as a new first slice bandwidth of the second node, and execute S1031-S1034 again to determine a new slice path type of the target network slice after adjusting the slice bandwidth once, and determine the target slice bandwidth of each node on the slice path of the target network slice based on the new slice path type.
In one possible scenario, when the planning apparatus determines at the kth execution of S1034 that the slice path type of the target network slice is the first type of path type, if the planning apparatus determines at the (k+1)th execution of S1034 that the new slice path type is the second type of path type, the flow is stopped. Here, k is a positive integer. In this case, the planning apparatus determines the first slice bandwidth of each node on the slice path of the target network slice at the current time as the target slice bandwidth of that node. If the new slice path type determined by the planning apparatus at the (k+1)th execution of S1034 is still the first type of path type, S1035-S1038 are performed again to determine the target slice bandwidth of each node on the slice path of the target network slice.
Alternatively, when the planning apparatus determines that the slice path type of the target network slice is the first type of path type at S1034 for the kth time, if the new slice path type determined at S1034 for the (k+1) th time of the planning apparatus is the third type of path type, the flow is stopped. And the planning device determines the first slice bandwidth of each node on the slice path of the target network slice after the k-1 th slice bandwidth adjustment as the target slice bandwidth of each node on the slice path of the target network slice.
In another possible case, if the planning apparatus determines that the slice path type of the target network slice is the third type of path type at S1034 at the kth time, if the planning apparatus determines that the new slice path type is the first type of path type or the second type of path type at S1034 at the kth+1th time, the flow is stopped. In this case, the planning apparatus determines the first slice bandwidth of each node on the slice path of the target network slice at the current time as the target slice bandwidth of each node on the slice path of the target network slice. If the new slice path type determined by the planning apparatus k+1th time at S1034 is still of the third class path type, then S1035-S1038 are performed to determine a target slice bandwidth for each node on the slice path of the target network slice.
In this way, the planning device can finally determine the target slice bandwidth of each node on the slice path of the target network slice by iteratively adjusting the slice bandwidths of the nodes on the slice path of the target network slice.
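As a rough illustration of this iterative adjustment (not the literal implementation), the loop below re-evaluates the slice path type after each single-node adjustment and stops according to the transitions described above. slice_path_type and decrease_slice_bandwidth are assumed helpers analogous to the increase sketch earlier; node and field names are hypothetical.

```python
# Illustrative sketch of iterating over S1031-S1038. slice_path_type() is
# assumed to re-run the classification (S1031-S1034) and return "first",
# "second" or "third"; decrease_slice_bandwidth() mirrors the increase sketch.
def plan_target_bandwidths(path, step):
    while True:
        path_type = slice_path_type(path)
        if path_type == "second":
            break                                   # no adjustment needed
        if path_type == "first":
            node = decrease_slice_bandwidth(path, step)
        else:                                       # "third"
            node = increase_slice_bandwidth(path, step)
        if node is None:
            break                                   # nothing left to adjust
        new_type = slice_path_type(path)
        if path_type == "first" and new_type == "third":
            node["first_slice_bw"] += step          # roll back the last reduction
            break
        if path_type == "first" and new_type == "second":
            break
        if path_type == "third" and new_type in ("first", "second"):
            break
    return {n["name"]: n["first_slice_bw"] for n in path}
```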
Optionally, after determining the target slice bandwidth of each node on the slice path of the target network slice, the planning apparatus may further mark, with the preset mark, each node whose slice bandwidth has been adjusted. For the description of the preset mark, refer to the description of S1036; details are not repeated here.
Further, when the target network slice includes a plurality of slice paths, the planning device can sequentially adjust, by the method described above, the slice bandwidths of the nodes on each slice path of the target network slice to obtain the target slice bandwidth of each node in the target network slice, thereby avoiding the problem that the slice bandwidth is wasted or insufficient when the target network slice is used to transmit service data.
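Continuing the same assumptions, a multi-path slice could be handled by planning each slice path in turn and marking the nodes whose bandwidth was changed so that later paths leave them untouched. This is only a sketch built on the hypothetical plan_target_bandwidths helper shown earlier.

```python
# Sketch of per-path processing for a slice with several paths: plan each path,
# then mark every node whose first slice bandwidth actually changed with the
# preset mark so that subsequent paths do not adjust it again.
def plan_all_paths(slice_paths, step):
    target = {}
    for path in slice_paths:
        before = {n["name"]: n["first_slice_bw"] for n in path}
        target.update(plan_target_bandwidths(path, step))
        for n in path:
            if n["first_slice_bw"] != before[n["name"]]:
                n["preset_mark"] = True
    return target
```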
For a network operator or a network equipment provider, the network operator or the network equipment provider can adjust and plan the slice bandwidths of the nodes in all the created network slices based on the method, so that the problem that the created slice bandwidths are wasted or insufficient when service data are transmitted to users is avoided.
In summary, the method for planning slice bandwidth provided by the embodiments of the present application can automatically plan bandwidth resources for the target network slice based on the target performance parameter of the target network slice desired by the user and the slice information of the target network slice, thereby automatically determining, for each node on the slice path of the target network slice, a target slice bandwidth that meets the target performance parameter specified by the user, and avoiding the problem of slice bandwidth waste or shortage when the target network slice transmits service data.
The foregoing description of the solution provided by the embodiments of the present application has been mainly presented in terms of a method.
In order to achieve the above functions, as shown in fig. 10, fig. 10 shows a schematic structural diagram of a slice bandwidth planning apparatus 100 according to an embodiment of the present application. The planning apparatus 100 is configured to perform the above-described planning method of slice bandwidths, for example, to perform the method shown in fig. 3 or fig. 7. The planning apparatus 100 may include an acquisition unit 101 and a determination unit 102.
An obtaining unit 101, configured to obtain user information of a first user and a target performance parameter. A determining unit 102, configured to determine slice information of a target network slice according to user information; and determining bandwidth resources of the target network slice according to the target performance parameters and the slice information. Wherein the target performance parameter is used to indicate a transmission performance of the target network slice desired by the first user.
As an example, in connection with fig. 3, the acquisition unit 101 may be used to perform S101, and the determination unit 102 may be used to perform S102 and S103.
Optionally, the acquiring unit 101 is specifically configured to: and receiving the user information and the target performance parameters which are input on the slice bandwidth planning interface.
As an example, in connection with fig. 3, the acquisition unit 101 may be used to perform S101.
Optionally, the bandwidth resource of the target network slice includes a target slice bandwidth configured for each node in the target network slice.
Optionally, the planning apparatus 100 further includes: and a display unit 103 for displaying the target slice bandwidth configured for each node on the slice bandwidth display interface.
As an example, in connection with fig. 3, the display unit 103 may be used to perform S104.
Optionally, the display unit 103 is further configured to: and displaying the estimated performance parameters of each node on the slice bandwidth display interface. Wherein the estimated performance parameter for each node is a performance parameter determined based on a target slice bandwidth configured for each node.
As an example, in connection with fig. 3, the display unit 103 may be used to perform S104.
Optionally, the slice path of the target network slice includes one or more queues, and the display unit 103 is further configured to: and displaying the estimated performance parameters of the one or more queues on the slice bandwidth display interface. Wherein the estimated performance parameter of the one or more queues is a performance parameter determined based on bandwidth resources of the target network slice.
As an example, in connection with fig. 3, the display unit 103 may be used to perform S104.
Optionally, the performance parameters include time delay and/or packet loss rate under one or more preset probabilities.
Optionally, the target performance parameter includes a target performance parameter of at least one queue in a slice path of the target network slice, where the target performance parameter of any queue in the at least one queue is used to indicate a transmission performance of the any queue desired by the first user.
Optionally, the slice information includes slice path information of the target network slice. The obtaining unit 101 is further configured to obtain, according to the slice path information, a first slice bandwidth of each node on a slice path of the target network slice, where the first slice bandwidth of each node is a slice bandwidth of the each node at a current time. The determining unit 102 is specifically configured to determine a first performance parameter of each queue in the at least one queue according to the first slice bandwidth of each node, and determine a slice path type of the target network slice according to the first performance parameter of each queue and the target performance parameter of each queue. The planning apparatus 100 further includes: and the adjusting unit 104 is configured to adjust a slice bandwidth of at least one node on the slice path of the target network slice according to the slice path type of the target network slice, so as to obtain the target slice bandwidth of each node. The first performance parameter of each queue is a performance parameter when the slice bandwidth of each node is the first slice bandwidth and each queue transmits data between the client and the server. The slice path types include: a first type of path type that requires a reduction in slice bandwidth, a second type of path type that does not require an adjustment in slice bandwidth, and a third type of path type that requires an increase in slice bandwidth.
As an example, in connection with fig. 7, the acquisition unit 101 may be used to perform S1031, the determination unit 102 may be used to perform S1032 and S1034, and the adjustment unit 104 may be used to perform S1035.
Optionally, the determining unit 102 is further specifically configured to: determine the bandwidth occupied by each queue at each node according to the first slice bandwidth of each node and the queue configuration of each node; and determine a first performance parameter of each queue according to the bandwidth occupied by each queue at each node, the slice traffic information of each node, and the preset probability. The queue configuration of each node is used to indicate the proportion of bandwidth allocated by each node to each queue. The slice traffic information of each node includes an input traffic rate of the target network slice at the each node.
As an example, in connection with fig. 7, the determination unit 102 may be used to perform S1032.
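For illustration, the per-queue bandwidth derivation could look like the sketch below, where the queue configuration is modeled as a mapping from a queue identifier to its bandwidth share; the names are assumptions, and the subsequent delay/loss evaluation from the slice traffic information and the preset probability is not shown.

```python
# Hypothetical sketch: a queue's bandwidth at a node is the node's first slice
# bandwidth multiplied by the share given in the node's queue configuration.
def queue_bandwidths(node):
    return {queue_id: node["first_slice_bw"] * share
            for queue_id, share in node["queue_config"].items()}
```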
Optionally, the slice traffic information of each node further includes a burst traffic size and a burst traffic rate input by the target network slice at the each node.
Optionally, for any one of the at least one queue, the determining unit 102 is further specifically configured to: and determining a first performance function of the any queue on each node according to the bandwidth occupied by the any queue on each node and the slice flow information of each node. And determining a second performance function of any queue according to the first performance function of the any queue on each node, and determining a first performance parameter of any queue according to the preset probability and the second performance function, wherein the performance parameter comprises time delay and/or packet loss rate. The first performance function is used for indicating the relation between the performance parameters and the probabilities when the any queue transmits data on each node, and the second performance function is used for indicating the relation between the performance parameters and the probabilities when the any queue transmits data between the client and the server.
As an example, in connection with fig. 7, the determination unit 102 may be used to perform S1032.
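The exact composition of the per-node performance functions into the end-to-end (client-to-server) performance function is not spelled out here, so the following is only a heavily simplified placeholder: it assumes each node exposes a delay-versus-probability function for the queue and sums the per-node delays evaluated at the preset probability.

```python
# Simplified placeholder for combining per-node performance functions: sum the
# per-node delays read off at the preset probability. The actual composition
# rule used by the embodiments may differ; this only illustrates the data flow.
def end_to_end_delay(per_node_delay_fns, probability):
    return sum(fn(probability) for fn in per_node_delay_fns)
```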
Optionally, the determining unit 102 is further specifically configured to: for any one of the at least one queue, determine the queue type of the any one queue according to the first performance parameter of the any one queue and the target performance parameter of the any one queue; and determine the slice path type of the target network slice according to the queue type of each queue. The queue type of the any one queue includes: a first type of queue, for which the first performance parameter of the any one queue is smaller than the target performance parameter of the any one queue and the difference between the two is larger than a threshold; a second type of queue, for which the first performance parameter of the any one queue is smaller than the target performance parameter of the any one queue and the difference is smaller than the threshold; and a third type of queue, for which the first performance parameter of the any one queue is larger than the target performance parameter of the any one queue.
As an example, in connection with fig. 7, the determination unit 102 may be used to perform S1033 and S1034.
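Treating the performance parameter as a delay-style quantity (smaller is better), the per-queue classification described above can be sketched as follows; the string labels and the threshold argument are assumptions for illustration.

```python
# Sketch of the per-queue classification: target missed -> third type; target
# met with a margin above the threshold -> first type; otherwise second type.
def classify_queue(first_perf, target_perf, threshold):
    if first_perf > target_perf:
        return "third"
    if target_perf - first_perf > threshold:
        return "first"
    return "second"
```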
Optionally, the determining unit 102 is further specifically configured to: when the queue types of all queues in the slice path of the target network slice are the first type of queue type, determine that the slice path type of the target network slice is the first type of path type; when the queue types of all the queues in the slice path of the target network slice are the second type of queue type, or the queue types of the queues in the target network slice include the first type of queue type and the second type of queue type, determine that the slice path type of the target network slice is the second type of path type; and when the queues included in the slice path of the target network slice include at least one queue whose queue type is the third type of queue type, determine that the slice path type of the target network slice is the third type of path type.
As an example, in connection with fig. 7, the determination unit 102 may be used to perform S1034.
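The mapping from queue types to the slice path type can then be sketched in the same style, again with assumed string labels:

```python
# Sketch of the slice-path-type decision: any third-type queue forces the third
# type of path type; all first-type queues give the first type; a mixture of
# first/second (or all second) gives the second type.
def slice_path_type_from_queues(queue_types):
    if any(t == "third" for t in queue_types):
        return "third"
    if all(t == "first" for t in queue_types):
        return "first"
    return "second"
```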
Optionally, the adjusting unit 104 is specifically configured to: if the slice path type of the target network slice is the first type of path type, reducing the first slice bandwidth of at least one node on the slice path of the target network slice to obtain the target slice bandwidth of each node; if the slice path type of the target network slice is the third type of path type, the first slice bandwidth of at least one node on the slice path of the target network slice is increased to obtain the target slice bandwidth of each node.
As an example, in connection with fig. 7, the adjusting unit 104 may be used to perform S1035.
Optionally, if the slice path type of the target network slice is the first type path type or the third type path type, the determining unit 102 is further configured to determine the second performance parameter of each node according to the first slice bandwidth of each node and the input traffic rate of the target network slice on each node. Wherein the second performance parameter of each node is a performance parameter of the target network slice when the data flows through each node. The adjusting unit 104 is further specifically configured to reduce the first slice bandwidth of the first node on the slice path of the target network slice if the slice path type of the target network slice is the first type of path type. The first node is a node with the minimum second performance parameter on a slice path of the target network slice, the reduced first slice bandwidth of the first node is larger than the input traffic rate of the target network slice on the first node, and the first node is marked with a preset mark. The preset mark marked on the first node is used to indicate that the slice bandwidth of the first node is not adjusted when planning bandwidth resources of other slice paths including the first node.
As an example, in connection with fig. 7, the determining unit 102 and the adjusting unit 104 may be used to perform S1035.
Optionally, the adjusting unit 104 is further specifically configured to: and if the slice path type of the target network slice is the third type of path type, increasing the slice bandwidth of the second node on the slice path of the target network slice. The second node is a node with the largest second performance parameter on the slice path of the target network slice, and the first slice bandwidth of the second node after the increase is larger than the input flow rate of the target network slice on the second node.
As an example, in connection with fig. 7, the adjusting unit 104 may be used to perform S1035.
Optionally, the second node is marked with a preset mark. The preset mark marked on the second node is used to indicate that the slice bandwidth of the second node is not adjusted when planning bandwidth resources of other slice paths including the second node.
As an example, in connection with fig. 7, the adjusting unit 104 may be used to perform S1035.
Optionally, the adjusting unit 104 is further specifically configured to: and when the slice path type of the target network slice determined according to the first slice bandwidth after the reduction of at least one node on the slice path of the target network slice is the second type path type, determining the slice bandwidth of each node at the current moment as the target slice bandwidth of each node.
As an example, in connection with fig. 7, the adjusting unit 104 may be used to perform S1035.
Optionally, the adjusting unit 104 is further specifically configured to: and when the slice path type of the target network slice determined according to the first slice bandwidth increased by at least one node on the slice path of the target network slice is the first type path type or the second type path type, determining the slice bandwidth of each node at the current moment as the target slice bandwidth of each node.
As an example, in connection with fig. 7, the adjusting unit 104 may be used to perform S1035.
For a specific description of the above alternative modes, reference may be made to the foregoing method embodiments, and details are not repeated here. In addition, any explanation and description of the beneficial effects of the planning apparatus 100 provided above may refer to the corresponding method embodiments described above, and will not be repeated.
As an example, in connection with fig. 2, the functions implemented by the determining unit 102 and the adjusting unit 104 in the planning apparatus 100 may be implemented by the processor 21 in fig. 2 executing the program code in the memory 22 in fig. 2. The functions implemented by the acquisition unit 101 and the display unit 103 can be implemented by the input-output interface 25 in fig. 2.
Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It should be noted that the division of the modules in fig. 10 is illustrative, and is merely a logic function division, and other division manners may be implemented in practice. For example, two or more functions may also be integrated in one processing module. The integrated modules may be implemented in hardware or in software functional modules.
The embodiment of the application also provides a chip system 110, as shown in fig. 11, where the chip system 110 includes at least one processor and at least one interface circuit. By way of example, when the chip system 110 includes one processor and one interface circuit, the one processor may be the processor 111 shown in the solid line box (or the processor 111 shown in the broken line box) in fig. 11, and the one interface circuit may be the interface circuit 112 shown in the solid line box (or the interface circuit 112 shown in the broken line box) in fig. 11. When the chip system 110 includes two processors and two interface circuits, the two processors include the processor 111 shown in the solid line box and the processor 111 shown in the broken line box in fig. 11, and the two interface circuits include the interface circuit 112 shown in the solid line box and the interface circuit 112 shown in the broken line box in fig. 11. This is not limited herein.
The processor 111 and the interface circuit 112 may be interconnected by wires. For example, the interface circuit 112 may be configured to receive signals (e.g., receive target performance parameters, etc.). For another example, the interface circuit 112 may be used to send signals to other devices (e.g., the processor 111). The interface circuit 112 may, for example, read instructions stored in a memory and send the instructions to the processor 111. The instructions, when executed by the processor 111, may cause the planning apparatus to perform the various steps described in the embodiments above. Of course, the chip system 110 may also include other discrete devices, which are not particularly limited in this embodiment of the present application.
Embodiments of the present application also provide a computer program product, and a computer readable storage medium for storing the computer program product. The computer program product may include one or more program instructions that, when executed by one or more processors, may provide the functionality or portions of the functionality described above with respect to fig. 3 or 7. Thus, for example, one or more features of S101-S104 of FIG. 3 may be carried by one or more instructions in the computer program product.
In some examples, an apparatus such as the planning apparatus 100 described with respect to fig. 10 may be configured to provide various operations, functions, or actions in response to one or more program instructions stored by a computer-readable storage medium.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented using a software program, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another, for example, from a website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, integrating one or more available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), or the like.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (42)

1. A method of planning a slice bandwidth, applied to a planning apparatus, the method comprising:
acquiring user information of a first user and target performance parameters, wherein the target performance parameters are used for indicating the transmission performance of a target network slice expected by the first user;
determining slice information of the target network slice according to the user information;
and determining the bandwidth resource of the target network slice according to the target performance parameter and the slice information.
2. The method of claim 1, wherein the obtaining user information and target performance parameters of the first user comprises:
and receiving the user information and the target performance parameters which are input on a slice bandwidth planning interface.
3. The method of claim 1 or 2, wherein the bandwidth resources of the target network slice comprise a target slice bandwidth configured for each node in the target network slice.
4. A method according to claim 3, characterized in that the method further comprises:
and displaying the target slice bandwidth configured for each node on a slice bandwidth display interface.
5. The method according to claim 4, wherein the method further comprises:
displaying the estimated performance parameters of each node on the slice bandwidth display interface, wherein the estimated performance parameters of each node are determined based on the target slice bandwidth configured for each node.
6. The method of claim 4 or 5, wherein the slice path of the target network slice comprises one or more queues, the method further comprising:
displaying estimated performance parameters of the one or more queues on the slice bandwidth display interface, wherein the estimated performance parameters of the one or more queues are determined performance parameters based on the bandwidth resources of the target network slice.
7. The method according to any of claims 1-6, wherein the performance parameters comprise delay and/or packet loss rate at one or more predetermined probabilities.
8. The method of any of claims 1-7, wherein the target performance parameter comprises a target performance parameter of at least one queue in a slice path of the target network slice, the target performance parameter of any of the at least one queue indicating transmission performance of the any of the at least one queue in the slice path of the target network slice desired by the first user.
9. The method of claim 8, wherein the slice information comprises slice path information for the target network slice, and wherein determining bandwidth resources for the target network slice based on the target performance parameter and the slice information comprises:
acquiring a first slice bandwidth of each node on a slice path of the target network slice according to the slice path information, wherein the first slice bandwidth of each node is the slice bandwidth of each node at the current moment;
determining a first performance parameter of each queue in the at least one queue according to the first slice bandwidth of each node, wherein the first performance parameter of each queue is a performance parameter when the slice bandwidth of each node is the first slice bandwidth and each queue transmits data between a client and a server;
determining a slice path type of the target network slice according to the first performance parameter of each queue and the target performance parameter of each queue; wherein the slice path type includes: a first type of path type requiring reduced slice bandwidth, a second type of path type requiring no adjustment of slice bandwidth, and a third type of path type requiring increased slice bandwidth;
And adjusting the slice bandwidth of at least one node on the slice path of the target network slice according to the slice path type of the target network slice so as to obtain the target slice bandwidth of each node.
10. The method of claim 9, wherein determining the first performance parameter for each of the at least one queue based on the first slice bandwidth for each node comprises:
determining the bandwidth occupied by each queue at each node according to the first slice bandwidth of each node and the queue configuration of each node; wherein the queue configuration of each node is used to indicate the proportion of bandwidth allocated by each node to each queue;
determining a first performance parameter of each queue according to the bandwidth occupied by each queue at each node, the slice flow information of each node and the preset probability; the slice traffic information of each node comprises an input traffic rate of the target network slice on each node.
11. The method of claim 10, wherein the slice traffic information of each node further comprises a burst traffic size and a burst traffic rate input by the target network slice at each node.
12. The method according to claim 10 or 11, wherein for any one of the at least one queue, the determining the first performance parameter of each queue according to the bandwidth occupied by each queue at each node, the slice traffic information of each node, and the preset probability comprises:
determining a first performance function of the any queue on each node according to the bandwidth occupied by the any queue on each node and the slice flow information of each node, wherein the first performance function is used for indicating the relation between performance parameters and probability when the any queue transmits data on each node, and the performance parameters comprise time delay and/or packet loss rate;
determining a second performance function of any queue according to the first performance function of the any queue on each node, wherein the second performance function is used for indicating the relation between performance parameters and probabilities of the any queue when data are transmitted between the client and the server;
and determining a first performance parameter of any queue according to the preset probability and the second performance function.
13. The method of any of claims 9-12, wherein determining the slice path type of the target network slice based on the first performance parameter of each queue and the target performance parameter of each queue comprises:
for any one of the at least one queue, determining a queue type of the any one queue according to the first performance parameter of the any one queue and the target performance parameter of the any one queue; wherein, the queue type of any one queue comprises: a first type of queue having a first performance parameter less than a target performance parameter of the any queue and a difference between the first performance parameter of the any queue and the target performance parameter of the any queue greater than a threshold, a second type of queue having a first performance parameter less than the target performance parameter of the any queue and the difference less than the threshold, and a third type of queue having a first performance parameter greater than the target performance parameter of the any queue;
and determining the slice path type of the target network slice according to the queue type of each queue.
14. The method of claim 13, wherein determining the slice path type of the target network slice based on the queue type of each queue comprises:
when the queue types of all queues in the slice path of the target network slice are the first type of queue types, determining that the slice path type of the target network slice is the first type of path type;
when the queue types of all the queues in the slice path of the target network slice are the second type queue types, or the queue types of the queues in the target network slice comprise the first type queue types and the second type queue types, determining that the slice path type of the target network slice is the second type path type;
and when the queues included in the slice path of the target network slice include at least one queue whose queue type is the third type of queue type, determining that the slice path type of the target network slice is the third type of path type.
15. The method of any of claims 9-14, wherein adjusting the slice bandwidth of at least one node in the slice path of the target network slice according to the slice path type of the target network slice to obtain the target slice bandwidth configured for each node comprises:
If the slice path type of the target network slice is the first type of path type, reducing the first slice bandwidth of at least one node on the slice path of the target network slice to obtain the target slice bandwidth of each node;
and if the slice path type of the target network slice is the third type of path type, increasing the first slice bandwidth of at least one node on the slice path of the target network slice to obtain the target slice bandwidth of each node.
16. The method of claim 15, wherein if the slice path type of the target network slice is the first type of path type or the third type of path type, the method further comprises:
determining a second performance parameter of each node according to the first slice bandwidth of each node and the input flow rate of the target network slice on each node; the second performance parameter of each node is a performance parameter of the target network slice when the data flows through each node;
the reducing the first slice bandwidth of at least one node on the slice path of the target network slice if the slice path type of the target network slice is the first type path type includes:
If the slice path type of the target network slice is the first type path type, reducing a first slice bandwidth of a first node on a slice path of the target network slice; the first node is a node with the minimum second performance parameter on a slice path of the target network slice, the reduced first slice bandwidth of the first node is larger than the input flow rate of the target network slice on the first node, and the first node is marked with a preset mark; wherein the preset mark marked on the first node is used to indicate that a slice bandwidth of the first node is not adjusted when planning bandwidth resources of other slice paths including the first node.
17. The method of claim 16, wherein increasing the slice bandwidth of at least one node in the target network slice if the slice path type of the target network slice is the third class path type comprises:
if the slice path type of the target network slice is the third type of path type, increasing the slice bandwidth of a second node on the slice path of the target network slice; the second node is the node with the largest second performance parameter on the slice path of the target network slice, and the increased first slice bandwidth of the second node is greater than the input traffic rate of the target network slice on the second node.
18. The method of claim 17, wherein the second node is marked with the preset mark, the preset mark marked on the second node being used to indicate that a slice bandwidth of the second node is not adjusted when planning bandwidth resources of other slice paths including the second node.
19. The method according to any one of claims 15-18, wherein if the slice path type of the target network slice is the first type of path type, reducing the slice bandwidth of at least one node on the slice path of the target network slice to obtain the target slice bandwidth of each node specifically comprises:
and when the slice path type of the target network slice determined according to the first slice bandwidth after the reduction of at least one node on the slice path of the target network slice is the second type path type, determining the slice bandwidth of each node at the current moment as the target slice bandwidth of each node.
20. The method according to any one of claims 15-19, wherein if the slice path type of the target network slice is the third type of path type, increasing the slice bandwidth of at least one node on the slice path of the target network slice to obtain the target slice bandwidth of each node specifically comprises:
And when the slice path type of the target network slice determined according to the first slice bandwidth increased by at least one node on the slice path of the target network slice is the first type path type or the second type path type, determining the slice bandwidth of each node at the current moment as the target slice bandwidth of each node.
21. A slice bandwidth planning apparatus, comprising:
the acquisition unit is used for acquiring user information of a first user and target performance parameters, wherein the target performance parameters are used for indicating the transmission performance of a target network slice expected by the first user;
a determining unit, configured to determine slice information of the target network slice according to the user information; and determining bandwidth resources of the target network slice according to the target performance parameter and the slice information.
22. The apparatus according to claim 21, wherein the acquisition unit is specifically configured to:
and receiving the user information and the target performance parameters which are input on a slice bandwidth planning interface.
23. The apparatus of claim 21 or 22, wherein the bandwidth resources of the target network slice comprise a target slice bandwidth configured for each node in the target network slice.
24. The apparatus of claim 23, wherein the apparatus further comprises:
and the display unit is used for displaying the target slice bandwidth configured for each node on the slice bandwidth display interface.
25. The apparatus of claim 24, wherein the display unit is further configured to:
displaying the estimated performance parameters of each node on the slice bandwidth display interface, wherein the estimated performance parameters of each node are determined based on the target slice bandwidth configured for each node.
26. The apparatus of claim 24 or 25, wherein the slice path of the target network slice comprises one or more queues, the display unit further configured to:
displaying estimated performance parameters of the one or more queues on the slice bandwidth display interface, wherein the estimated performance parameters of the one or more queues are determined performance parameters based on the bandwidth resources of the target network slice.
27. The apparatus according to any of claims 21-26, wherein the performance parameters comprise delay and/or packet loss rate at one or more predetermined probabilities.
28. The apparatus of any of claims 21-27, wherein the target performance parameter comprises a target performance parameter of at least one queue in a slice path of the target network slice, the target performance parameter of any of the at least one queue indicating transmission performance of the any of the at least one queue in the slice path of the target network slice desired by the first user.
29. The apparatus of claim 28, wherein the slice information comprises slice path information for the target network slice;
the obtaining unit is further configured to obtain, according to the slice path information, a first slice bandwidth of each node on a slice path of the target network slice, where the first slice bandwidth of each node is the slice bandwidth of each node at the current time;
the determining unit is specifically configured to determine a first performance parameter of each queue in the at least one queue according to the first slice bandwidth of each node, and determine a slice path type of the target network slice according to the first performance parameter of each queue and the target performance parameter of each queue; the first performance parameter of each queue is a performance parameter when each queue transmits data between a client and a server when the slice bandwidth of each node is the first slice bandwidth; the slice path types include: a first type of path type requiring reduced slice bandwidth, a second type of path type requiring no adjustment of slice bandwidth, and a third type of path type requiring increased slice bandwidth;
The apparatus further comprises:
and the adjusting unit is used for adjusting the slice bandwidth of at least one node on the slice path of the target network slice according to the slice path type of the target network slice so as to obtain the target slice bandwidth of each node.
30. The apparatus according to claim 29, wherein the determining unit is further specifically configured to:
determining the bandwidth occupied by each queue in each node according to the first slice bandwidth of each node and the queue configuration of each node; the queues of each node are configured to indicate the proportion of bandwidth allocated by each node to each queue;
determining a first performance parameter of each queue according to the bandwidth occupied by each queue at each node, the slice flow information of each node and the preset probability; the slice traffic information of each node comprises an input traffic rate of the target network slice on each node.
31. The apparatus of claim 30, wherein the slice traffic information for each node further comprises a burst traffic size and a burst traffic rate entered by the target network slice at each node.
32. The apparatus according to claim 30 or 31, wherein for any of the at least one queue, the determining unit is further specifically configured to:
determining a first performance function of the any queue on each node according to the bandwidth occupied by the any queue on each node and the slice flow information of each node, wherein the first performance function is used for indicating the relation between performance parameters and probability when the any queue transmits data on each node, and the performance parameters comprise time delay and/or packet loss rate;
determining a second performance function of any queue according to the first performance function of the any queue on each node, wherein the second performance function is used for indicating the relation between performance parameters and probabilities of the any queue when data are transmitted between the client and the server;
and determining a first performance parameter of any queue according to the preset probability and the second performance function.
33. The apparatus according to any one of claims 29-32, wherein the determining unit is further specifically configured to:
for any one of the at least one queue, determining a queue type of the any one queue according to the first performance parameter of the any one queue and the target performance parameter of the any one queue; wherein, the queue type of any one queue comprises: a first type of queue having a first performance parameter less than a target performance parameter of the any queue and a difference between the first performance parameter of the any queue and the target performance parameter of the any queue greater than a threshold, a second type of queue having a first performance parameter less than the target performance parameter of the any queue and the difference less than the threshold, and a third type of queue having a first performance parameter greater than the target performance parameter of the any queue;
And determining the slice path type of the target network slice according to the queue type of each queue.
34. The apparatus according to claim 33, wherein the determining unit is further specifically configured to:
when the queue types of all queues in the slice path of the target network slice are the first type of queue types, determining that the slice path type of the target network slice is the first type of path type;
when the queue types of all the queues in the slice path of the target network slice are the second type queue types, or the queue types of the queues in the target network slice comprise the first type queue types and the second type queue types, determining that the slice path type of the target network slice is the second type path type;
and when at least one queue with the queue type being the third type of queue type is included in the queues included in the slice path of the target network slice, determining that the slice path type of the target network slice is the third type of path type.
35. The device according to any one of claims 29-34, wherein the adjustment unit is specifically adapted to:
If the slice path type of the target network slice is the first type of path type, reducing the first slice bandwidth of at least one node on the slice path of the target network slice to obtain the target slice bandwidth of each node;
and if the slice path type of the target network slice is the third type of path type, increasing the first slice bandwidth of at least one node on the slice path of the target network slice to obtain the target slice bandwidth of each node.
36. The apparatus of claim 35, wherein if the slice path type of the target network slice is the first type of path type or the third type of path type;
the determining unit is further configured to determine a second performance parameter of each node according to the first slice bandwidth of each node and the input traffic rate of the target network slice on each node; the second performance parameter of each node is a performance parameter of the target network slice when the data flows through each node;
the adjusting unit is further specifically configured to reduce a first slice bandwidth of a first node on a slice path of the target network slice if a slice path type of the target network slice is the first type of path type; the first node is a node with the minimum second performance parameter on a slice path of the target network slice, the reduced first slice bandwidth of the first node is larger than the input flow rate of the target network slice on the first node, and the first node is marked with a preset mark; wherein the preset mark marked on the first node is used to indicate that a slice bandwidth of the first node is not adjusted when planning bandwidth resources of other slice paths including the first node.
37. The device according to claim 36, wherein the adjusting unit is further specifically configured to:
if the slice path type of the target network slice is the third type of path type, increasing the slice bandwidth of a second node on the slice path of the target network slice; the second node is the node with the largest second performance parameter on the slice path of the target network slice, and the increased first slice bandwidth of the second node is greater than the input traffic rate of the target network slice on the second node.
38. The apparatus of claim 37, wherein the second node is marked with the preset mark, the preset mark marked on the second node being used to indicate that a slice bandwidth of the second node is not adjusted when planning bandwidth resources of other slice paths including the second node.
39. The device according to any one of claims 35-38, wherein the adjusting unit is further specifically adapted to:
and when the slice path type of the target network slice determined according to the first slice bandwidth after the reduction of at least one node on the slice path of the target network slice is the second type path type, determining the slice bandwidth of each node at the current moment as the target slice bandwidth of each node.
40. The device according to any one of claims 35-39, wherein the adjusting unit is further specifically adapted to:
and when the slice path type of the target network slice determined according to the first slice bandwidth increased by at least one node on the slice path of the target network slice is the first type path type or the second type path type, determining the slice bandwidth of each node at the current moment as the target slice bandwidth of each node.
41. A planning device for slice bandwidth, comprising a memory and a processor; the processor executing program instructions in the memory to cause the planning apparatus to perform the method of any one of claims 1-20.
42. A computer readable storage medium comprising program instructions which, when run on a computer or a processor, cause the computer or the processor to perform the method of any of claims 1-20.
CN202210344303.XA 2022-04-02 2022-04-02 Planning method and device for slice bandwidth Pending CN116938723A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210344303.XA CN116938723A (en) 2022-04-02 2022-04-02 Planning method and device for slice bandwidth


Publications (1)

Publication Number Publication Date
CN116938723A true CN116938723A (en) 2023-10-24

Family

ID=88376037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210344303.XA Pending CN116938723A (en) 2022-04-02 2022-04-02 Planning method and device for slice bandwidth

Country Status (1)

Country Link
CN (1) CN116938723A (en)


Legal Events

Date Code Title Description
PB01 Publication