CN112188557A - Method and device for ensuring priority QoS of green packet during congestion - Google Patents

Method and device for ensuring priority QoS of green packet during congestion

Info

Publication number
CN112188557A
CN112188557A (application CN202010973217.6A)
Authority
CN
China
Prior art keywords
packet
congestion
queue
value
priority
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010973217.6A
Other languages
Chinese (zh)
Other versions
CN112188557B (en)
Inventor
许航
邓卉文
谢力
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fiberhome Telecommunication Technologies Co Ltd
Original Assignee
Fiberhome Telecommunication Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fiberhome Telecommunication Technologies Co Ltd
Priority to CN202010973217.6A
Publication of CN112188557A
Application granted
Publication of CN112188557B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W28/00: Network traffic management; Network resource management
    • H04W28/16: Central resource management; Negotiation of resources or communication parameters, e.g. negotiating bandwidth or QoS [Quality of Service]
    • H04W28/24: Negotiating SLA [Service Level Agreement]; Negotiating QoS [Quality of Service]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W28/00: Network traffic management; Network resource management
    • H04W28/02: Traffic management, e.g. flow control or congestion control
    • H04W28/06: Optimizing the usage of the radio link, e.g. header compression, information sizing, discarding information

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to the field of computer networks and provides a method and a device for guaranteeing the priority QoS of green packets during congestion. Each ingress side monitors the buffer usage of the one or more queues it owns and matches the buffer occupancy of each priority's packets in each queue against a stored mapping to obtain the congestion value to carry in a reported congestion message; the egress side receives the congestion messages sent by the ingress sides and generates a packet-loss policy, or packet-loss policy parameters, issued to each ingress side. The invention retains the benefits of the WRED discard strategy, avoids TCP global synchronization, and improves the forwarding efficiency of bearer-network equipment under congestion.

Description

Method and device for ensuring priority QoS of green packet during congestion
[ technical field ]
The invention relates to the technical field of computer networks, and in particular to a method and a device for guaranteeing the priority QoS of green packets during congestion.
[ background of the invention ]
In recent years, with the iteration of network technology, 4G/5G base stations have been deployed on a large scale as access-network devices, bringing explosive growth of data traffic into core-network equipment. The different types of data flows in the network all place a heavy load on routing nodes, and network congestion follows. In network communication, congestion is a key factor behind the delay and throughput QoS metrics. On a heavily loaded network node, it is particularly important to improve bandwidth utilization under congestion and to guarantee the quality of service of different services. The common congestion strategies today are Tail Drop (Tail-Drop) and Weighted Random Early Detection (WRED).
Tail-Drop is the traditional discard strategy: when the queue length reaches the configured maximum buffer depth, packets arriving at the buffer are dropped. This traditional strategy causes TCP global synchronization: the access side avoids congestion by shrinking its TCP sliding window, so the network oscillates repeatedly and transmission efficiency drops.
The WRED congestion-discard strategy drops packets entering a queue at random once the queue buffer reaches a certain congestion level. This avoids TCP global synchronization: because early discards are random, TCP sessions do not all enter slow start or high-rate sending at the same time; some sessions are always sending at a high rate, so high bandwidth utilization is maintained. WRED can be bound to queues of different priorities and can configure RED parameters per packet color. However, in a distributed system the CIR of ingress green packets must be guaranteed, and neither Tail-Drop nor WRED can guarantee that green packets pass ahead of high-priority yellow packets; once the egress is congested, green-packet bandwidth at the ingress is seized by high-priority yellow packets, defeating the design purpose of the green-packet CIR. A common workaround in distributed systems is to map green packets to a higher priority at the ingress; this lets green packets pass preferentially, but yellow and green packets then enter different queues and arrive out of order.
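As a rough illustration of the classic WRED behavior described above (this is generic WRED, not the patent's scheme; the thresholds and maximum probability are illustrative assumptions):

```python
import random

def wred_should_drop(queue_len, min_th, max_th, max_prob):
    """Classic WRED decision: no drops below min_th, certain drop at or
    above max_th, and a linearly increasing random drop probability in
    between."""
    if queue_len < min_th:
        return False
    if queue_len >= max_th:
        return True
    p = max_prob * (queue_len - min_th) / (max_th - min_th)
    return random.random() < p

# Endpoints are deterministic; mid-range drops are random by design.
assert wred_should_drop(10, min_th=20, max_th=40, max_prob=0.5) is False
assert wred_should_drop(50, min_th=20, max_th=40, max_prob=0.5) is True
```

The randomness in the middle band is exactly what desynchronizes TCP sessions and avoids the global-synchronization oscillation of Tail-Drop.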
In view of the above, overcoming the drawbacks of the prior art is an urgent problem in the art.
[ summary of the invention ]
The invention aims to solve the problem that, with the existing WRED and Tail-Drop discard strategies in a distributed system, the egress cannot sense the congestion state of the ingress buffers and cannot guarantee preferential passage of green packets from the ingress switching equipment.
The invention adopts the following technical scheme:
in a first aspect, the present invention provides a method for guaranteeing green-packet priority QoS during congestion, in which different packets on the ingress side are assigned respective priority levels, and packets with the same priority are directed into the same buffer queue; the ingress side stores a mapping between packet priority, queue buffer occupancy, and the corresponding congestion value to report. The method comprises:
each ingress side monitors the buffer usage of the one or more queues it owns, and matches the buffer occupancy of each priority's packets in each queue against the mapping to obtain the congestion value to carry in a reported congestion message;
one or more of the corresponding packet priority, the corresponding congestion value, and the ingress identification number are carried in a congestion message and sent to the egress side;
and the egress side receives the congestion messages sent by the ingress sides, extracts one or more of the corresponding packet priority, congestion value, and ingress identification number, and then generates a packet-loss policy or packet-loss policy parameters issued to each ingress side, so that each ingress side completes QoS transmission of green packets through that policy or those parameters.
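The reporting half of the claimed flow can be sketched as a minimal model (the message fields follow the text above; all function and field names are hypothetical, and the mapping is supplied by the caller):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CongestionMessage:
    congestion_value: int             # always carried, per the text
    priority: Optional[int] = None    # optional, per the text
    ingress_id: Optional[int] = None  # optional, per the text

def ingress_report(ingress_id, queue_occupancy, mapping):
    """Match each priority's queue occupancy against the stored mapping
    and build the congestion messages to send to the egress side."""
    msgs = []
    for priority, occupancy in queue_occupancy.items():
        value = mapping(priority, occupancy)
        if value is not None:  # below the reporting thresholds: nothing to send
            msgs.append(CongestionMessage(value, priority, ingress_id))
    return msgs

# e.g. with a toy mapping that reports value 14 at >= 95% occupancy:
toy = lambda prio, occ: 14 if occ >= 0.95 else None
assert [m.congestion_value for m in ingress_report(1, {0: 0.97, 1: 0.2}, toy)] == [14]
```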
Preferably, the packet-loss policy parameter is the queue discard threshold. After one or more of the corresponding packet priority, congestion value, and ingress identification number are extracted, the packet-loss policy or packet-loss policy parameters issued to each ingress side are generated so that each ingress side completes QoS transmission of green packets through them, specifically including:
computing a queue discard threshold from the priority and corresponding congestion value in the congestion messages reported by the ingress sides;
and returning the queue discard threshold to each ingress side, so that the ingress side completes the packet-drop operation for each packet at the corresponding priority according to it.
Preferably, computing the queue discard threshold from the priority and corresponding congestion value in the reported congestion messages specifically includes:
taking the maximum of the congestion values in the congestion messages reported by the ingress sides as the queue discard threshold.
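The maximum-value statistic above is a one-liner; a sketch under the assumption that the egress has already collected the reported congestion values:

```python
def queue_discard_threshold(reported_values):
    """Simplest statistic per the text: the queue discard threshold is the
    maximum congestion value reported across all ingress sides."""
    return max(reported_values, default=0)

# e.g. three ingresses report congestion values 3, 7, and 5:
assert queue_discard_threshold([3, 7, 5]) == 7
```

Taking the maximum means the issued threshold tracks the single most congested ingress queue, so every ingress tightens its dropping as soon as any one of them nears congestion.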
Preferably, the corresponding ingress side completes QoS transmission of green packets through the packet-loss policy or packet-loss policy parameters, specifically including:
storing a packet-loss policy table on the ingress side, in which green packets and packets of other colors are each assigned discard attribute values per packet priority; the discard attribute value is an inherent attribute, tied to the buffered occupancy length of that priority's packets in the corresponding queue, against which the packet-loss policy parameter returned by the egress side is compared, and once the corresponding discard attribute value is smaller than the queue discard threshold, the drop operation for that priority's packets is performed;
wherein the largest discard attribute value is set for green packets, and the discard attribute values for packets of the other colors increase stepwise with their priority.
Preferably, the discard attribute values include a first discard attribute value and a second discard attribute value, the second being larger than the first, with a first discard probability and a second discard probability set for them respectively. The drop operation for the corresponding priority's packets then proceeds as follows:
when the queue discard threshold is smaller than the first discard attribute value, no discard action is performed;
when the queue discard threshold is larger than the first discard attribute value and smaller than the second, packets of the specified color in the queue are discarded with the first discard probability;
and when the queue discard threshold is larger than the second discard attribute value, packets of the specified color in the corresponding queue are discarded with the second discard probability.
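The three cases above can be sketched directly (a minimal sketch; the concrete attribute values and probabilities are left to configuration in the text, so the values used here are illustrative):

```python
import random

def drop_decision(queue_discard_threshold, attr1, attr2, p1, p2):
    """Two discard attribute values attr1 < attr2 with discard
    probabilities p1 and p2: no drop below attr1, drop with p1 between
    attr1 and attr2, drop with p2 above attr2."""
    if queue_discard_threshold < attr1:
        return False                 # case 1: no discard action
    if queue_discard_threshold < attr2:
        return random.random() < p1  # case 2: first discard probability
    return random.random() < p2      # case 3: second discard probability

# Deterministic corner cases (p = 0.0 never drops, p = 1.0 always drops):
assert drop_decision(1, attr1=5, attr2=10, p1=1.0, p2=1.0) is False
assert drop_decision(7, attr1=5, attr2=10, p1=0.0, p2=1.0) is False
assert drop_decision(12, attr1=5, attr2=10, p1=0.0, p2=1.0) is True
```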
Preferably, the method further comprises:
configuring rate-limiting meters on the different ingress sides to measure packet rates and perform the corresponding coloring, the traffic entering at a user-network-side interface (UNI) being colored green or yellow according to the committed information rate (CIR) and peak information rate (PIR) intervals;
wherein a green packet is a packet whose rate is below the CIR, and a yellow packet is a packet whose rate lies between the CIR and the PIR.
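A simplified sketch of this coloring rule, reduced to the rate comparison alone (a real two-rate meter, e.g. a trTCM, also maintains token buckets; the rate values below are arbitrary):

```python
def color_packet(rate_bps, cir_bps, pir_bps):
    """Color by measured rate: below CIR -> green, between CIR and PIR
    -> yellow; traffic above PIR would typically be marked red."""
    if rate_bps < cir_bps:
        return "green"
    if rate_bps < pir_bps:
        return "yellow"
    return "red"

assert color_packet(40_000_000, cir_bps=50_000_000, pir_bps=100_000_000) == "green"
assert color_packet(80_000_000, cir_bps=50_000_000, pir_bps=100_000_000) == "yellow"
```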
Preferably, the queue buffer occupancy operates as follows:
when the buffered data of each priority reaches a preset buffer value, the ingress side triggers a congestion message reported to the egress side, the congestion message carrying the corresponding congestion value;
the preset buffer values are the data proportions at which one or more typical queues are about to reach and/or are in congestion;
wherein the higher a packet's priority, the larger the congestion value mapped when it reaches the corresponding preset buffer value.
Preferably, once the occupancy proportion of a priority's packets exceeds a second preset buffer value, the corresponding congestion value is reported at its highest value, thereby reserving sufficient transmission space for green packets; the second preset buffer value is 90%-98%.
Preferably, the egress side generates a packet-loss policy or packet-loss policy parameters in one-to-one correspondence with each ingress side according to the priority, congestion value, and ingress identification number carried in the congestion messages reported by the ingress sides, carries the corresponding packet-loss policy parameters in a congestion response message, and returns them to the corresponding ingress side using the ingress identification number as the destination address, so that that ingress side completes QoS transmission of green packets through the policy or parameters.
In a second aspect, the present invention further provides a device for guaranteeing priority QoS of a green packet during congestion, which is used to implement the method for guaranteeing priority QoS of a green packet during congestion in the first aspect, and the device includes:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to perform the method for guaranteeing green packet priority QoS in congestion according to the first aspect.
In a third aspect, the present invention further provides a non-transitory computer storage medium storing computer-executable instructions for execution by one or more processors to perform the method for guaranteeing priority QoS of green packets in congestion according to the first aspect.
The invention retains the benefits of the WRED discard strategy, avoids TCP global synchronization, and improves the forwarding efficiency of bearer-network equipment under congestion. The strategy guarantees the priority of service packets at the ingress switching equipment while preserving the preferential passage of green packets from upstream equipment. This matters for the higher service-bearing requirements of the coming 5G and cloud era.
[ description of the drawings ]
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the embodiments of the present invention will be briefly described below. It is obvious that the drawings described below are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a flowchart illustrating a method for guaranteeing a priority QoS of a green packet during congestion according to an embodiment of the present invention;
fig. 2 is a schematic diagram illustrating a mapping relationship between priorities, queue occupation ratios and congestion values according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating a method for guaranteeing priority QoS of a green packet during congestion according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a structure of a packet loss policy table on an ingress side according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating an effect of quantifying an ingress buffer according to an embodiment of the present invention;
fig. 6 is a schematic diagram of an architecture for performing congestion value reporting on an ingress side queue buffer and an egress side according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an apparatus for guaranteeing priority QoS of a green packet during congestion according to an embodiment of the present invention.
[ detailed description of the embodiments ]
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the description of the present invention, the terms "inner", "outer", "longitudinal", "lateral", "upper", "lower", "top", "bottom", and the like indicate orientations or positional relationships based on those shown in the drawings, and are for convenience only to describe the present invention without requiring the present invention to be necessarily constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention.
In the prior art, in a distributed system, a common method to guarantee the CIR of green packets is to map green packets to a higher priority at the ingress; this lets green packets pass preferentially, but yellow and green packets then enter different queues and arrive out of order. This is because packets of different priorities are placed into different queues on the ingress side: if a packet's priority is modified before it is buffered in the ingress queue, a packet that should enter queue 1 is buffered into queue 2 instead, causing out-of-order delivery. This behavior is determined by the existing ingress buffering mechanism; it is the objective cause of the technical problem identified by the present invention and the problem scenario that the technical scheme of the invention solves.
In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Example 1:
embodiment 1 of the present invention provides a method for guaranteeing green-packet priority QoS during congestion. As the scene characteristics of this embodiment: different packets on the ingress side are assigned respective priority levels, and packets with the same priority are directed into the same buffer queue; the ingress side stores a mapping between packet priority, queue buffer occupancy, and the corresponding congestion value to report. The method comprises:
in step 201, each ingress side monitors the buffer usage of the one or more queues it owns and matches the buffer occupancy of each priority's packets in each queue against the mapping, obtaining the congestion value to carry in the corresponding reported congestion message.
As shown in fig. 2, an embodiment of the present invention provides a schematic mapping relationship in which the first row holds the congestion values "TH0", "TH1", …, "TH14", fifteen congestion values in all. Each queue on the ingress side is bound to this mapping (elsewhere in this description the mapping is also called a buffer quantization table); the binding takes place when L2 and L3 services are configured, and the egress and ingress sides together cover eight packet priorities, over which the mapping drives the subsequent packet-loss policy. In the scenario of fig. 6, ingress devices A, B, …, M each carry 10G of port traffic toward device N (fig. 6 shows port Port1 of device N; ports 1-255 all belong to device N); the eight queues of device N's 100G port are configured with the scheduling policy enabled, and finally the queues of all ingress forwarding devices can be bound to the same mapping. In the mapping of fig. 2, priority 0 corresponds to the entry whose profile-ID (the column labeled "ID" in the figure) is 32, and so on up to priority 7, which corresponds to profile-ID 39.
Still taking fig. 2 as an example to describe the queue buffer occupancy of step 201: in the queue storing profile-ID 32, if the proportion of the queue occupied by the corresponding data reaches 60%, the congestion value TH0 is selected according to the mapping of fig. 2 and carried, as the payload to be sent to the egress side, in the congestion message. The other priority queues likewise extract their congestion values from the mapping when their data occupancy reaches the configured proportions. The last column of fig. 2 means that once a queue's data occupancy reaches 95%, the highest congestion value is reported; its principle and mechanism are explained in the subsequent expansion of the invention's features. The last row of fig. 2, profile-id 40, whose content is marked "/", denotes the highest priority, which is exempt from the packet-loss operation proposed in this embodiment: even under congestion its packets are never treated as drop candidates. This accommodates data services with special transmission requirements in practice, whose packets must receive such treatment.
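The lookup just described can be sketched as follows (the 60% tier mapping to TH0 for priority 0 and the 95% tier mapping to the highest value follow the fig. 2 walk-through; the per-priority tiering in between is a hypothetical stand-in for the full table):

```python
def congestion_value(priority, occupancy):
    """Map (priority, queue occupancy) to a congestion value index
    TH0..TH14, loosely following the fig. 2 example."""
    if occupancy >= 0.95:
        return 14                     # report the highest congestion value
    if occupancy >= 0.60:
        return min(priority * 2, 13)  # hypothetical per-priority tiering
    return None                       # below reporting thresholds: no report

assert congestion_value(0, 0.60) == 0   # TH0, as in the fig. 2 example
assert congestion_value(3, 0.97) == 14  # >= 95% reports the highest value
assert congestion_value(5, 0.30) is None
```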
In step 202, one or more of the corresponding packet priority, the corresponding congestion value, and the ingress identification number are carried in a congestion message and sent to the egress side.
In a specific implementation, the congestion value must be carried, because the egress side computes the packet-loss policy or packet-loss policy parameters fed back to each ingress side from the congestion values reported by the ingress sides. The priority and the ingress identification number are optional, configured per application scenario. For example, if the egress side locally stores the address information of each ingress side and that address information is fixed, the ingress identification number need not be carried in the congestion message. Carrying the packet priority, for its part, lets the egress side better account for the congestion state of the data; but in the most basic scheme of this embodiment, because a uniform mapping has been established between the egress and ingress sides, the egress side only needs the congestion value to generate the packet-loss policy or parameters of step 203, so the priority is likewise an optional parameter. If the ingress identification number and the priority are carried in the congestion message together with the congestion value, the egress side can reconstruct the congestion picture of the queues at the different ingress sides and can then issue packet-loss policies or policy parameters differentially per ingress (in the most basic scheme of the invention, the packet-loss policy parameters are issued to all ingress sides and are identical for the queues of every ingress side; the corresponding content is introduced in a specific implementation of this embodiment).
In step 203, the egress side receives the congestion messages sent by the ingress sides, extracts one or more of the corresponding packet priority, congestion value, and ingress identification number, and then generates a packet-loss policy or packet-loss policy parameters issued to each ingress side, so that each ingress side completes QoS transmission of green packets through them.
In the preferred scheme of this embodiment, packet-loss policy parameters are issued: each ingress side maintains a packet-loss policy table, and for the table to trigger its drop action it needs the queue discard threshold given by the egress side. This reduces both the signaling complexity and the message bandwidth between egress and ingress, as well as the message-processing load on the egress side, improving overall performance.
This embodiment retains the benefits of the WRED discard strategy, avoids TCP global synchronization, and improves the forwarding efficiency of bearer-network equipment under congestion. The strategy guarantees the priority of service packets at the ingress switching equipment while preserving the preferential passage of green packets from upstream equipment. This matters for the higher service-bearing requirements of the coming 5G and cloud era.
For the generation, after the extraction referred to in step 203, of the packet-loss policy or packet-loss policy parameters issued to each ingress side so that each ingress side completes QoS transmission of green packets through them, this embodiment further provides a most intuitive and effective implementation, as shown in fig. 3, specifically including:
in step 2031, a queue discard threshold is computed from the priority and corresponding congestion value in the congestion messages reported by the ingress sides.
In the briefest possible implementation of this embodiment, the statistic can be the maximum of the congestion values in the congestion messages reported by the ingress sides, taken as the queue discard threshold.
In step 2032, the queue discard threshold is returned to each ingress side, so that the ingress side completes the packet-drop operation for each packet at the corresponding priority according to it.
The queue discard threshold corresponds to the packet-loss policy parameter introduced in step 203, and its execution on the ingress side completes the drop operation for each packet at the corresponding priority; how the two combine effectively is described via the ingress-side packet-loss policy of fig. 4, which together completes the packet-loss process of the invention. Note in advance that fig. 4 serves only to explain how the ingress side sets parameters according to the queue discard threshold (i.e. the packet-loss policy parameter) in this embodiment, and does not directly represent parameter values of an actual implementation.
Specifically, a table like fig. 4 is the queue-discard-threshold table entry (i.e. another form of the packet-loss policy table of this embodiment) stored locally on each ingress side. The "quantization template" column of the table corresponds to the profile-ID values of the eight priorities of data stored in the ingress queues; compare the ID column of fig. 2, where the values have the same meaning, both being profile-IDs. The 0-7 in the adjacent "priority" column is merely another expression of the same thing; both columns appear in fig. 4 only to show their association, and since a default association exists between them, maintaining either one suffices in practice. As for color: rate-limiting meters are configured on the different ingress sides to measure packet rates and perform the corresponding coloring, and traffic entering at a User Network Interface (UNI) is colored green or yellow according to the Committed Information Rate (CIR) and Peak Information Rate (PIR) intervals; a green packet is one whose rate is below the CIR, and a yellow packet is one whose rate lies between the CIR and the PIR.
The discard threshold min and discard threshold max in fig. 4 (i.e. one specific expression of the discard attribute value in the embodiment of the present invention) are the key to implementing the packet loss policy in the queue discard threshold table entry. The queue drop threshold (i.e. the packet loss policy parameter) is obtained by statistics on the egress side and is then sent to each ingress side. With respect to fig. 4, the queue drop threshold is used together with the per-priority, per-color drop thresholds to implement packet dropping. For example: if the queue drop threshold issued to the ingress side by the egress side is 4, the corresponding ingress side performs packet-dropping operations on the queues of its priorities. When the queue with priority 0 (corresponding profile-id 32) is processed, the drop threshold specified for each color is matched against the queue drop threshold: because the drop threshold of the yellow packet is 1, which is smaller than the queue drop threshold of 4, the yellow packets in the priority-0 queue are 100% dropped according to fig. 4. By analogy, each ingress side compares the drop threshold specified for each priority with the queue drop threshold according to its locally maintained queue discard threshold table entry of fig. 4, and completes the drop operation for the data packets in the other queues in the same way as for the priority-0 queue. For convenience of exemplary description, the discard threshold min and discard threshold max are not shown separately in fig. 4 but are presented with the two values being equal; in a specific implementation the corresponding technical characteristics may be expanded, and the corresponding discard rates may also be set to different percentage values. How to set these values can be determined by empirical testing and is not described in detail here.
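The comparison walked through above can be sketched as a table lookup plus a comparison. The table values below are illustrative stand-ins: only the yellow threshold of 1 for priority 0 and the green threshold of 15 come from the example; the full contents of fig. 4 are not reproduced.

```python
# Illustrative queue discard threshold table: (priority, color) -> drop threshold.
# Green always carries the largest threshold (15), so it survives any
# egress-issued queue drop threshold, whose maximum is 14.
DROP_TABLE = {
    (0, "yellow"): 1,
    (0, "green"): 15,
    (1, "yellow"): 2,    # assumed value for illustration
    (1, "green"): 15,
}

def should_drop(priority: int, color: str, queue_drop_threshold: int) -> bool:
    """Drop when the row's threshold is smaller than the egress-issued one."""
    return DROP_TABLE[(priority, color)] < queue_drop_threshold

# Egress issues a queue drop threshold of 4, as in the example above:
assert should_drop(0, "yellow", 4)        # 1 < 4: yellow 100% dropped
assert not should_drop(0, "green", 4)     # 15 >= 4: green kept
assert not should_drop(0, "green", 14)    # green survives even the maximum, 14
```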
To sum up, as for the technical characteristics shown in the example of fig. 4, in an alternative of the embodiment of the present invention, a packet loss policy table is stored on the ingress side, and the green packet and the packets of other colors are each given a drop attribute value (i.e. the drop threshold column in fig. 4) corresponding to the priorities of the different packets. The drop attribute value is an inherent attribute, tied to the buffered length of packets of that priority, that takes effect once the corresponding egress side returns the packet loss policy parameter for the queue: whenever the corresponding drop attribute value is smaller than the queue drop threshold, the packet-dropping operation for packets of that priority is carried out.
The largest drop threshold is given to green packets (e.g., as shown in fig. 4, green packets are assigned a drop threshold of 15, i.e. the highest setting; the corresponding green packets are then never dropped on the ingress side, because the maximum queue drop threshold issued from the egress side is 14), while the drop thresholds of the packets of the other colors increase stepwise as the corresponding priority increases.
In the embodiment of the present invention, the mapping relationship illustrated by fig. 2 and exemplarily described in embodiment 1 is further summarized as a technical characteristic applicable to the general technical solution of the present invention, specifically as follows:
the method includes that the inlet side triggers a congestion message reported to the outlet side when each priority level data message reaches a preset value (as shown in fig. 2, typical values are 60%, 80% and 95%, and setting of specific parameter values can be adjusted according to specific application scenarios and is not necessarily limited to the given parameter values), wherein the congestion message carries different congestion values;
the preset buffer value is a data proportion at which one or more typical queues are approaching and/or already in congestion. The higher the priority of a data packet, the larger the congestion value mapped when its queue reaches the corresponding preset buffer value; this is the stepped structure formed by the "60%" and "80%" columns for each profile-id value in fig. 2.
In the present example, a corresponding explanation of principle is further given for the 95% value that appears where fig. 2 is cited in embodiment 1. In the implementation of the present invention, the 60% and 80% occupancy ratios in fig. 2 both belong to the normal situation in which yellow packets and green packets share a queue; however, once the occupancy ratio of a priority's data packets exceeds the second preset buffer value, the corresponding congestion value is reported at its highest value, thereby reserving sufficient transmission space for green packets. The second preset buffer value is 90%-98% (e.g., 95% in fig. 2). After the congestion value reporting of the 60% and 80% stages shown in fig. 2 has completed, if the data occupancy in the queue reaches 95% (i.e. one expression of the second preset buffer value), this indicates that the congestion in the queue originates from the green packets themselves. The result of "reporting the corresponding congestion value at its highest value" then follows: according to the egress-side feedback of the queue drop threshold (i.e. the packet loss policy parameter) described in the above embodiment, the corresponding egress side issues this highest congestion value as the queue drop threshold to each ingress side, thereby effectively guaranteeing, under the corresponding extreme conditions, full buffering of the green packets in the queue and the matching transmission of the buffered green packets.
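The stepped occupancy-to-congestion-value mapping, including the second preset buffer value, can be sketched as below. The per-step values are assumptions chosen only to reproduce the stepped shape of fig. 2; the real profile-id templates configure their own values.

```python
HIGHEST = 0xF          # highest congestion value, reserved for the 95% case
SECOND_PRESET = 0.95   # second preset buffer value (90%-98% per the text)

def congestion_value(priority: int, occupancy: float) -> int:
    """Map a priority's queue occupancy (0.0-1.0) to a reported congestion value."""
    if occupancy >= SECOND_PRESET:
        return HIGHEST          # congestion now reaches the green packets
    if occupancy >= 0.80:
        return priority + 7     # 80% step: values 7..14, stepped by priority
    if occupancy >= 0.60:
        return priority + 1     # 60% step: values 1..8, stepped by priority
    return 0                    # below the first preset value: nothing to report

assert congestion_value(0, 0.50) == 0
assert congestion_value(7, 0.96) == HIGHEST
assert congestion_value(3, 0.85) > congestion_value(2, 0.85)   # stepped by priority
assert congestion_value(7, 0.85) < HIGHEST                     # 14, below 0xF
```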
In a specific implementation, the table parameters presented in fig. 4 may be further adjusted through testing. There is considerable room for adjusting the difference between the discard threshold min and the discard threshold max (the same value is used for both in fig. 4 for convenience of illustration), or for adjusting discard probability 1 and discard probability 2; this adjustment space is large and must be tuned to the actual packet characteristics. This particular tuning method is not the focus of the present invention, and determining the optimum values through a limited number of test attempts is within the skill of the art.
In combination with the embodiment of the present invention, there is also a preferred implementation scheme: the egress side generates the packet loss policy or packet loss policy parameters corresponding to each ingress side according to the priorities, congestion values, and ingress identification numbers carried in the congestion messages reported by the ingress sides, carries the corresponding packet loss policy parameter in a congestion response message, and returns the congestion response message to the corresponding ingress side using its ingress identification number as the destination address, so that the corresponding ingress side completes QoS transmission of green packets through the packet loss policy or packet loss policy parameters.
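A sketch of this preferred scheme on the egress side, under the assumption (made explicit in claim 3) that the policy parameter is the maximum reported congestion value; the message structure and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CongestionMessage:
    ingress_id: str        # ingress identification number (used as return address)
    priority: int
    congestion_value: int

def build_congestion_responses(messages: list[CongestionMessage]) -> dict[str, int]:
    """Return {ingress_id: packet loss policy parameter} for the responses."""
    if not messages:
        return {}
    # Per claim 3: the statistic is the maximum congestion value reported.
    parameter = max(m.congestion_value for m in messages)
    return {m.ingress_id: parameter for m in messages}

msgs = [
    CongestionMessage("ingress-A", priority=0, congestion_value=3),
    CongestionMessage("ingress-B", priority=5, congestion_value=7),
]
assert build_congestion_responses(msgs) == {"ingress-A": 7, "ingress-B": 7}
```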
Example 2:
This embodiment of the present invention further describes the core implementation of embodiment 1, using terminology from the specific technical scenario where appropriate. In this embodiment, the corresponding ingress side is represented as an ingress forwarding device, the egress side as an egress forwarding device, the corresponding configured rate limiter as a Meter, the mapping relationship as a buffer quantization table entry, and the congestion value as the quantized value field MAXQSZ (also written Maxqsize in the related drawings of the present invention).
The different ingress forwarding devices configure a Meter metering-and-coloring policy and color the traffic accessed on the UNI side green or yellow according to the CIR and PIR intervals (a green packet is below the CIR; a yellow packet is between the CIR and the PIR).
As shown in fig. 2, queue buffer quantization entries (Buffer Range), i.e. the percentage values in the figure, are configured, and segmented quantization of the buffer depth is implemented by binding, for each VOQ (Virtual Output Queue, i.e. the concrete representation of the queue of embodiment 1), a buffer-depth template corresponding to a profile-id. The buffer quantization entries exist independently of the VOQs (meaning that different VOQs can be associated with the same buffer quantization entry), and the mapping between a VOQ and a profile-id can be set through these entries; for the ingress forwarding device, the priority of the data buffered in the queues it manages and maintains is pre-configured and cannot be adjusted at will, hence the role and meaning of setting the VOQ-to-profile-id mapping. Different buffer quantization values correspond to ranges of queue buffer depth; as shown in fig. 2, TH0 indicates that the maximum queue buffer depth of the profile-id 32 queue has reached 60%.
When an ingress forwarding device applies to the egress forwarding device for scheduling credit signaling, the buffer depth of the queue is reported to the egress forwarding device through the quantized value field MAXQSZ. The egress forwarding device collects statistics on the MAXQSZ values of all ingress forwarding devices over a period of time and feeds the egress-counted MAXQSZ back to the ingress forwarding devices through credit signaling. In this way, the egress senses the buffer depth of the ingresses (that is, it deduces the current queue buffer depth from the queue priority carried in the credit signaling and the congestion value of the corresponding queue, on the premise that the egress forwarding device knows that the ingress forwarding devices use the buffer quantization table entries of fig. 2) and feeds it back to all ingress forwarding devices, thereby establishing a mechanism in which the ingress forwarding devices detect congestion and drop packets in combination with the drop policy. MAXQSZ is the maximum buffer value across all 8 priority queues in an ingress forwarding device. The specific drop policy can be implemented with reference to the scheme of embodiment 1 and is not repeated here.
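The egress-side bookkeeping described above can be sketched as a sliding window over recent reports, matching the "period of time" mentioned in the text; the window length and data structure are assumptions made for illustration.

```python
from collections import deque

class MaxqszTracker:
    """Collect MAXQSZ reports from credit requests; feed back the window maximum."""

    def __init__(self, window: int = 4):
        # Keep only the most recent `window` reports (the statistics period).
        self.samples: deque[int] = deque(maxlen=window)

    def on_credit_request(self, maxqsz: int) -> None:
        self.samples.append(maxqsz)       # one MAXQSZ per credit application

    def credit_response_value(self) -> int:
        """MAXQSZ statistic carried back in the next credit response."""
        return max(self.samples, default=0)

t = MaxqszTracker(window=4)
for v in (1, 3, 2):
    t.on_credit_request(v)
assert t.credit_response_value() == 3
t.on_credit_request(5)
assert t.credit_response_value() == 5     # new maximum enters the window
```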
The optional packet loss policy of the embodiment of the present invention is further illustrated by the queue discard threshold table entry shown in fig. 4 as follows: the ingress forwarding device obtains the quantized value (i.e. the packet loss policy parameter of embodiment 1) from the MAXQSZ field in the credit signaling response message sent by the egress forwarding device and compares it with the "discard threshold min" and "discard threshold max" in the queue discard threshold table entry. Specifically, when the current ingress queue buffer depth quantized value is smaller than the "discard threshold min", no drop action is performed; when it is larger than the "discard threshold min" and smaller than the "discard threshold max", the ingress forwarding device drops according to "discard probability 1"; and when it is larger than the "discard threshold max", it drops according to "discard probability 2". It should be noted that embodiment 1 uses only part of the parameters of the table shown in fig. 4 to present the basic functions completely, whereas this embodiment presents the contents of fig. 4 more fully.
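The three-region decision above can be sketched as follows. The threshold and probability values are illustrative, and the handling of exact boundary equality is an assumption, since the text only specifies "smaller than" and "larger than".

```python
import random

def drop_decision(quantized: int, th_min: int, th_max: int,
                  p1: float, p2: float, rng=random.random) -> bool:
    """Three-region decision on the MAXQSZ quantized value:
    at or below min never drop; up to max drop with probability p1;
    above max drop with probability p2. Boundary handling is assumed."""
    if quantized <= th_min:
        return False
    if quantized <= th_max:
        return rng() < p1
    return rng() < p2

# Deterministic checks with a stubbed RNG (always 0.0, a maximally "unlucky" draw):
assert drop_decision(2, 4, 8, 0.5, 1.0, rng=lambda: 0.0) is False  # below min
assert drop_decision(6, 4, 8, 1.0, 1.0, rng=lambda: 0.0) is True   # p1 region
assert drop_decision(9, 4, 8, 0.0, 1.0, rng=lambda: 0.0) is True   # p2 region
```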
The egress device needs to sense the queue congestion status of all ingress forwarding devices and feed it back to each ingress. Therefore, the buffer quantization tables on the devices need to be kept consistent so that they serve as a global reference.
The buffer quantization table currently available for configuration on the chip has 64 sets of configuration items; each item provides 15 levels, and the buffer length (in bytes) between levels is configurable. This means that when the egress port rate cannot keep up with the ingress rate and congestion results, the MAXQSZ (ingress Max Queue Size) field carried by the ingress when applying for credit signaling to the egress is at most 0xf and at least 1, as shown in fig. 5.
Fig. 2 is a specific setting of the buffer depth entries for profile-ids 32 to 39. The binding of queues to the buffer quantization table takes effect when L2 and L3 services are configured and the 8-priority packet loss policy is enabled. The ingress forwarding devices A, B, …, M establish a 10G port service with device N; the 8 queues of device N's 100G port are configured to enable the scheduling policy; and finally all ingress forwarding device queues can be bound to one and the same buffer-depth template. Here, priority 0 corresponds to the entry of profile-id 32, and so on up to priority 7 corresponding to the entry of profile-id 39.
As shown in fig. 6, the MAXQSZ of the ingress forwarding devices A, B, …, M (shown as Maxqsize in fig. 6) is computed after their reports are read against the buffer entries in device N, and is transferred from device N, as the egress side, back to the ingress side, as shown by the dotted line for Maxqsize = 5 in fig. 6. Specifically, each ingress looks up the buffer interval (one of the 15 levels) corresponding to its buffer size and reports it to the egress side as a quantized value, so that device N on the egress side can compute the MAXQSZ. The interval from L0 to L1 gives MAXQSZ = 0x1, the interval from L13 to L14 reports 0xe, and when the size exceeds L14, MAXQSZ = 0xf. The duration of the refresh interval must also be set: for example, if the traffic ratio of the ingress forwarding devices A, B, …, M is 1:2: …: M, the queue of device M applies for credit more frequently and updates its MAXQSZ to the egress more often, so the egress-side MAXQSZ statistics held for the other ingress forwarding devices would most likely come from M's reports; this would increase the packet loss time of the other devices and defeat the goal of synchronizing the ingress devices' MAXQSZ in real time.
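The level lookup described above can be sketched as follows; the per-level byte boundaries are illustrative placeholders, since the actual buffer lengths between levels are chip configuration.

```python
def quantize_depth(depth_bytes: int, levels: list[int]) -> int:
    """Map a queue buffer depth to its MAXQSZ quantized value (0x1..0xf)."""
    assert len(levels) == 15, "a template carries 15 level boundaries L0..L14"
    for i, boundary in enumerate(levels):
        if depth_bytes < boundary:
            return max(i, 1)   # L0..L1 -> 0x1; the value never falls below 1
    return 0xF                 # beyond L14: saturate at 0xf

LEVELS = [(i + 1) * 1000 for i in range(15)]   # L0=1000 .. L14=15000 bytes (assumed)
assert quantize_depth(1500, LEVELS) == 0x1     # between L0 and L1
assert quantize_depth(14500, LEVELS) == 0xE    # between L13 and L14
assert quantize_depth(20000, LEVELS) == 0xF    # past L14
```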
When the egress forwarding device returns the credit response to an ingress forwarding device, the response carries the MAXQSZ value (i.e. the packet loss policy parameter of embodiment 1) counted over the period. This parameter is associated with the discard template bound at the ingress and is the key parameter for deciding whether to discard packets at the ingress.
In the embodiment of the present invention, a queue may hold packets of different colors (green, yellow, red, black), each of which can be given its own discard threshold; the discard template has 32 sets of entries (as in fig. 4, if the queue contains only green and yellow, just 16 sets are used).
Example 3:
fig. 7 is a schematic diagram illustrating an architecture of a device for guaranteeing QoS priority of green packets during congestion according to an embodiment of the present invention. The apparatus for guaranteeing priority QoS for green packets in case of congestion according to the present embodiment includes one or more processors 21 and a memory 22. In fig. 7, one processor 21 is taken as an example.
The processor 21 and the memory 22 may be connected by a bus or other means, as exemplified by the bus connection in fig. 7.
The memory 22, which is a non-volatile computer-readable storage medium, may be used to store a non-volatile software program and a non-volatile computer-executable program, such as the method of guaranteeing priority QoS for green packets when congestion occurs in embodiment 1. The processor 21 performs a method of guaranteeing green packet priority QoS in congestion by executing a non-volatile software program and instructions stored in the memory 22.
The memory 22 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 22 may optionally include memory located remotely from the processor 21, and these remote memories may be connected to the processor 21 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The program instructions/modules are stored in the memory 22 and, when executed by the one or more processors 21, perform the method for guaranteeing a priority QoS for green packets in congestion in embodiment 1 described above, for example, perform the respective steps shown in fig. 1 and 3 described above.
It should be noted that, for the information interaction, execution processes, and other details of the modules and units in the above apparatus and system, reference may be made to the description in the method embodiments of the present invention, since they are based on the same concept as the processing method embodiments; they are not described again here.
Those of ordinary skill in the art will appreciate that all or part of the steps of the methods of the embodiments may be implemented by hardware instructed by a program, and the program may be stored on a computer-readable storage medium, which may include: Read-Only Memory (ROM), Random Access Memory (RAM), magnetic disks, optical discs, and the like.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. A method for guaranteeing priority QoS of green packets during congestion, characterized in that different data packets on an ingress side correspond to respective priority levels, and packets with the same priority level are directed into the same buffer queue; wherein the ingress side stores a mapping relationship among the priority level of a data packet, the queue buffer occupancy, and the corresponding congestion value to be reported, and the method comprises:
each ingress side monitors the buffer status of the one or more queues it owns, and matches the mapping relationship against the queue buffer occupancy of each priority's data packets contained in each queue to obtain the corresponding congestion value to carry in a reported congestion message;
one or more of the corresponding data packet priority, the corresponding congestion value, and the ingress identification number are carried in a congestion message and sent to an egress side;
and the egress side receives the congestion messages sent by the ingress sides, extracts one or more of the corresponding data packet priority, the corresponding congestion value, and the ingress identification number, and then generates a packet loss policy or packet loss policy parameter issued to each ingress side, so that the corresponding ingress side completes QoS transmission of green packets through the packet loss policy or packet loss policy parameter.
2. The method according to claim 1, wherein the packet loss policy parameter represents a queue drop threshold, and generating the packet loss policy or packet loss policy parameter issued to each ingress side after extracting one or more of the corresponding data packet priority, the corresponding congestion value, and the ingress identification number, so that the corresponding ingress side completes QoS transmission of green packets through the packet loss policy or packet loss policy parameter, specifically comprises:
calculating a queue drop threshold by statistics according to the priority levels and the corresponding congestion values in the congestion messages reported by the ingress sides;
and returning the queue drop threshold to each ingress side, so that the ingress side completes the packet-dropping operation for the data packets under each corresponding priority according to the queue drop threshold.
3. The method for guaranteeing priority QoS of green packets during congestion according to claim 2, wherein calculating a queue drop threshold according to the priority levels and corresponding congestion values in the congestion messages reported by the ingress sides specifically comprises:
taking the maximum value among the corresponding congestion values in the congestion messages reported by the ingress sides as the queue drop threshold.
4. The method according to claim 2, wherein the corresponding ingress side completing QoS transmission of green packets through the packet loss policy or packet loss policy parameter specifically comprises:
storing a packet loss policy table on the ingress side, in which the green packet and the data packets of other colors are each given a drop attribute value corresponding to the priorities of the different data packets; the drop attribute value is an inherent attribute, tied to the buffered length of packets of that priority, that takes effect when the corresponding egress side returns the packet loss policy parameter for the queue, and once the corresponding drop attribute value is smaller than the queue drop threshold, the packet-dropping operation for data packets of that priority is performed;
wherein the largest drop threshold is set for the green packet, and the drop thresholds corresponding to the data packets of other colors increase stepwise as the corresponding priority increases.
5. The method according to claim 4, wherein the drop attribute values comprise a first drop attribute value and a second drop attribute value, the second drop attribute value being greater than the first drop attribute value, with a first drop probability and a second drop probability set for the first and second drop attribute values respectively; performing the packet-dropping operation for data packets of the corresponding priority once the corresponding drop attribute value is smaller than the queue drop threshold then specifically comprises:
when the queue drop threshold is smaller than the first drop attribute value, performing no drop action;
when the queue drop threshold is larger than the first drop attribute value and smaller than the second drop attribute value, dropping the data packets of the specified color in the queue according to the first drop probability;
and when the queue drop threshold is larger than the second drop attribute value, dropping the data packets of the specified color in the corresponding queue according to the second drop probability.
6. The method of guaranteeing priority QoS for green packets during congestion according to any of claims 1-5, wherein the method further comprises:
configuring rate-limiting meters on the different ingress sides to measure the rate of data packets and complete the corresponding coloring, coloring the traffic accessed through the user network interface UNI green or yellow according to the committed information rate CIR and peak information rate PIR intervals;
wherein a green packet is a data packet whose rate is below the CIR, and a yellow packet is a data packet whose rate is between the CIR and the PIR.
7. The method for ensuring priority QoS of green packets during congestion according to any one of claims 1 to 5, wherein the queue buffer occupancy specifically is:
the method comprises the steps that when each priority level data message reaches a preset value, the inlet side triggers a congestion message reported to the outlet side, wherein the congestion message carries different congestion values;
the preset buffer value is the data proportion of one or more typical queue data which are to reach and/or are in congestion;
wherein, the higher the priority of the data packet is, the larger the congestion value mapped when the data packet correspondingly reaches the preset buffer value is.
8. The method of claim 7, wherein once the occupancy ratio of a priority's data packets exceeds a second preset buffer value, the corresponding congestion value is reported at its highest value, thereby reserving sufficient transmission space for the green packet; wherein the second preset buffer value is 90%-98%.
9. The method according to claim 1, wherein the egress side generates the packet loss policy or packet loss policy parameters corresponding to each ingress side according to the priorities, congestion values, and ingress identification numbers carried in the congestion messages reported by the ingress sides, carries the corresponding packet loss policy parameter in a congestion response message, and returns the congestion response message to the corresponding ingress side using the ingress identification number as the destination address, so that the corresponding ingress side completes QoS transmission of green packets through the packet loss policy or packet loss policy parameters.
10. An apparatus for guaranteeing green packet priority QoS when congested, the apparatus comprising:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to perform the method for guaranteeing green packet priority QoS when congested as recited in any of claims 1-9.
CN202010973217.6A 2020-09-16 2020-09-16 Method and device for ensuring priority QoS of green packet during congestion Active CN112188557B (en)
Publications (2)

Publication Number Publication Date
CN112188557A true CN112188557A (en) 2021-01-05
CN112188557B CN112188557B (en) 2022-06-17

