WO2023226716A1 - Packet transmission method, forwarding node, transmission end and storage medium - Google Patents

Packet transmission method, forwarding node, transmission end and storage medium

Info

Publication number
WO2023226716A1
Authority
WO
WIPO (PCT)
Prior art keywords
scheduling
scheduling queue
queue
object block
forwarding node
Application number
PCT/CN2023/092332
Other languages
French (fr)
Chinese (zh)
Inventor
Xu Anmin (徐安民)
Yu Delei (于德雷)
Cheng Hongtao (程宏涛)
Li Fengkai (李凤凯)
Meng Rui (孟锐)
Wang Chuang (王闯)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2023226716A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/2425 Traffic characterised by specific attributes, e.g. priority or QoS for supporting services specification, e.g. SLA
    • H04L 47/50 Queue scheduling
    • H04L 47/62 Queue scheduling characterised by scheduling criteria
    • H04L 47/625 Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L 47/6275 Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority

Definitions

  • the embodiments of the present application relate to the field of communication technology, and in particular to a data packet sending method, a forwarding node, a sending end and a storage medium.
  • the basic unit of data processing by upper-layer applications is an object block.
  • An object block includes a complete piece of data.
  • the data corresponding to a video frame in a media application is an object block.
  • the basic unit of network transmission data is a packet. Therefore, when the sending end sends data to the receiving end, it needs to divide the object block provided by the upper-layer application into multiple data packets and send each data packet to a forwarding node in the network, and the forwarding node then sends each data packet to the receiving end.
  • a best effort queue is configured on the forwarding node.
  • when the forwarding node sends data, it selects the data packet with the earliest enqueuing time from the best-effort queue through a comparator, determines through a matcher which data packets in the best-effort queue belong to the same object block as that earliest-enqueued data packet, and then sends these data packets in turn.
  • data packets belonging to the same object block can be sent together as much as possible, thereby reducing the frequency of interleaved transmission of data packets of different object blocks.
  • Embodiments of the present application provide a data packet sending method, a forwarding node, a sending end and a storage medium, which can improve the forwarding efficiency of the forwarding node.
  • the technical solution is as follows:
  • the first aspect provides a data packet forwarding method.
  • in the method, the forwarding node determines a first scheduling queue from multiple scheduling queues based on the priority of each of the multiple scheduling queues; each of the multiple scheduling queues is used to cache the data packets of at least one object block, and data packets of the same object block are cached in the same scheduling queue.
  • the priority of each scheduling queue is determined based on the target attribute of the object block to which the data packets cached in the corresponding scheduling queue belong.
  • the target attribute is an attribute of the object block that remains unchanged while the data packets in the corresponding scheduling queue are being sent; the forwarding node then sends the data packets in the first scheduling queue.
  • because the target attribute is fixed, the relative order of the priorities of the scheduling queues basically does not change within a short period of time.
  • therefore, within this short period of time, the forwarding node keeps selecting data packets from the same scheduling queue to send.
  • since data packets of the same object block are cached in the same scheduling queue, the data packets sent by the forwarding node within this short period of time have a high probability of belonging to the same object block. In this way, the data packets of the same object block are sent together as much as possible, which reduces the frequency of interleaved transmission of data packets of different object blocks.
  • the embodiments of the present application achieve this using only scheduling queues and their priorities, so the hardware overhead is low and the forwarding efficiency is high.
  • in one implementation, the forwarding node receives a first data packet that carries the identifier of a target object block, the target object block being the object block to which the first data packet belongs; if the first data packet does not carry a first-packet label, the forwarding node adds the first data packet to a second scheduling queue based on the identifier of the target object block.
  • the second scheduling queue is the scheduling queue, among the multiple scheduling queues, that already caches data packets of the target object block.
  • the first-packet label is used to indicate whether the corresponding data packet is the first data packet of the target object block.
  • if the first data packet does not carry the first-packet label, the first data packet is not the first packet of the target object block. In this scenario, the first data packet only needs to be added to the scheduling queue that cached data packets of the target object block before the current time. In this way, data packets of the same object block are cached in the same scheduling queue.
  • after the forwarding node receives the first data packet, if the first data packet also carries a first-packet label, the forwarding node selects one scheduling queue from the multiple scheduling queues as a third scheduling queue and adds the first data packet to the third scheduling queue.
  • the target object block is equivalent to a new object block.
  • a scheduling queue needs to be allocated for the target object block, so that subsequent data packets of the target object block are cached in the allocated scheduling queue.
  • in one implementation, the forwarding node selects, from the multiple scheduling queues, a scheduling queue with no cached data packets as the third scheduling queue.
  • that is, the forwarding node selects one of the remaining empty scheduling queues as the third scheduling queue. This enqueuing manner is referred to below as the first enqueuing manner.
  • in this case, each of the multiple scheduling queues caches the data packets of at most one object block; through the first enqueuing manner, each scheduling queue is used to cache only one object block.
  • the target attribute is the allowed network delay of the corresponding object block
  • the first data packet also carries the allowed network delay of the target object block.
  • after adding the first data packet, the forwarding node updates the priority of the third scheduling queue so that the updated priority of the third scheduling queue is higher than the priority of a fourth scheduling queue and lower than the priority of a fifth scheduling queue.
  • the fourth scheduling queue is a scheduling queue, among the multiple scheduling queues, whose cached object blocks have a network-allowed delay greater than that of the target object block.
  • the fifth scheduling queue is a scheduling queue, among the multiple scheduling queues, whose cached object blocks have a network-allowed delay smaller than that of the target object block.
  • in this way, the scheduling queue holding object blocks with stricter network-allowed-delay requirements has a higher priority, which ensures that such object blocks are sent out first.
  • in another implementation, the forwarding node selects the third scheduling queue from the multiple scheduling queues as follows: the forwarding node determines the remaining capacity of a sixth scheduling queue, the sixth scheduling queue being the scheduling queue holding the most recently received first packet before the current time; if the remaining capacity of the sixth scheduling queue is not enough to cache the target object block, the forwarding node selects a scheduling queue with no cached data packets from the multiple scheduling queues as the third scheduling queue. Correspondingly, after the forwarding node determines the remaining capacity of the sixth scheduling queue, if the remaining capacity is sufficient to cache the target object block, the forwarding node uses the sixth scheduling queue as the third scheduling queue.
  • in this case, each scheduling queue can be used to cache several object blocks.
  • when one scheduling queue is full, new object blocks enter the next empty scheduling queue. This enqueuing manner is referred to below as the second enqueuing manner.
  • in one implementation, the forwarding node determines the remaining capacity of the sixth scheduling queue as follows: the forwarding node determines the number of cached object blocks, that is, the number of object blocks to which the data packets cached in the sixth scheduling queue belong, and determines the difference between a first threshold and the number of cached object blocks as the remaining capacity of the sixth scheduling queue, where the first threshold is the number of object blocks that the sixth scheduling queue can cache. In this case, that the remaining capacity of the sixth scheduling queue is not enough to cache the data packets of the target object block means that the remaining capacity of the sixth scheduling queue is 0.
  • in another implementation, the forwarding node determines the remaining capacity of the sixth scheduling queue as follows: the forwarding node determines the total size of cached data, that is, the total size of the data packets cached in the sixth scheduling queue, and determines the difference between a second threshold and the total size of cached data as the remaining capacity of the sixth scheduling queue, where the second threshold is the total size of data packets that the sixth scheduling queue can cache. In this case, the first data packet also carries the data size of the target object block, and that the remaining capacity of the sixth scheduling queue is not enough to cache the data packets of the target object block means that the remaining capacity of the sixth scheduling queue is smaller than the data size of the target object block.
  • in this way, whether a queue is full can be determined based on either the maximum number of object blocks that the scheduling queue can carry or the maximum data size that it can carry, which improves the flexibility of first-packet enqueuing.
  • the target attribute is the arrival time of the first data packet of the corresponding object block.
  • after the forwarding node selects a scheduling queue with no cached data packets from the multiple scheduling queues as the third scheduling queue, the forwarding node updates the priority of the third scheduling queue so that the updated priority of the third scheduling queue is lower than the priorities of the other scheduling queues that already cache data packets.
  • that is, the scheduling queue holding the object block whose first packet arrived earlier has a higher priority.
  • in this method, if all the data packets of all the object blocks cached by any one of the multiple scheduling queues have been sent, the priority of that scheduling queue is updated so that its updated priority is lower than the priorities of the other scheduling queues that still cache data packets.
  • since a scheduling queue may be used to cache only one object block, when all the data packets of the object block cached by a scheduling queue (that is, of all its object blocks) have been sent, a priority update of that scheduling queue needs to be triggered.
  • alternatively, the priority of a scheduling queue is determined based on the target attribute of the first object block cached in the scheduling queue, a scheduling queue caches multiple object blocks, and the first packets of these object blocks arrive consecutively; in that case, when all the data packets of all the object blocks cached in a scheduling queue have been sent, the priority of that scheduling queue needs to be updated.
  • multiple scheduling queues are arranged in sequence.
  • in yet another implementation, the forwarding node selects the third scheduling queue from the multiple scheduling queues as follows: the forwarding node determines a sixth scheduling queue from the multiple scheduling queues, the sixth scheduling queue being the scheduling queue holding the most recently received first packet before the current time, and determines a scheduling queue ranked after the sixth scheduling queue as the third scheduling queue.
  • in this case, each scheduling queue can be used to cache several object blocks, and new object blocks enter each scheduling queue in turn. This enqueuing manner is referred to below as the third enqueuing manner.
  • the target attribute is the arrival time of the first data packet of the corresponding object block.
  • after the forwarding node selects the third scheduling queue from the multiple scheduling queues, if the third scheduling queue caches no data packets other than the first data packet, the forwarding node updates the priority of the third scheduling queue so that the updated priority of the third scheduling queue is lower than the priorities of the other scheduling queues that already cache data packets.
  • that is, the scheduling queue holding the object block whose first packet arrived earlier has a higher priority.
  • in this method, if all the data packets of one of the object blocks cached by any one of the multiple scheduling queues have been sent, the priorities of the scheduling queues are updated so that the updated priority of that scheduling queue is lower than the updated priorities of the other scheduling queues that still cache data packets.
  • here, the priority of a scheduling queue is determined based on the target attribute of an object block cached in the scheduling queue, a scheduling queue caches multiple object blocks, and the first packets of these object blocks do not arrive consecutively, that is, first packets that arrive at the forwarding node one after another are cached in different scheduling queues. Therefore, whenever all the data packets of one object block cached in a scheduling queue have been sent, the priority of that scheduling queue needs to be updated.
  • in one implementation, the priority of a scheduling queue is updated only if that scheduling queue currently has the highest priority.
  • in this way, a priority update is triggered only when all the data packets of an object block in the highest-priority scheduling queue have been sent, which reduces the data processing load on the node.
  • in a second aspect, a data packet sending method is provided.
  • in the method, the sending end determines multiple data packets of an object block to be sent.
  • each of the multiple data packets carries the identifier of the object block, and the first packet among the multiple data packets carries a first-packet label, which is used to indicate that the corresponding data packet is the first data packet of the object block to be sent; the sending end then sends the multiple data packets.
  • the first packet also carries the allowed network delay of the object block to be sent.
  • the first packet also carries the data size of the object block to be sent.
  • the tail packet among the multiple data packets carries a tail-packet label, and the tail-packet label is used to indicate that the corresponding data packet is the last data packet of the object block to be sent.
  • a third aspect provides a forwarding node, which has the function of implementing the behavior of the data packet sending method in the first aspect.
  • the forwarding node includes at least one module, and the at least one module is used to implement the data packet sending method provided in the first aspect.
  • a fourth aspect provides a sending end, which has the function of implementing the behavior of the data packet sending method in the above second aspect.
  • the sending end includes at least one module, and the at least one module is used to implement the data packet sending method provided in the second aspect.
  • a fifth aspect provides a forwarding node, which includes a processor and a memory.
  • the memory is used to store a program that supports the forwarding node in executing the data packet sending method provided in the first aspect, and to store the data involved in implementing the method provided in the first aspect.
  • the processor is configured to execute a program stored in the memory.
  • the forwarding node may further include a communication bus, which is used to establish a connection between the processor and the memory.
  • a sixth aspect provides a sending end, which includes a processor and a memory.
  • the memory is used to store a program that supports the sending end in executing the data packet sending method provided in the second aspect, and to store the data involved in implementing the method provided in the second aspect.
  • the processor is configured to execute a program stored in the memory.
  • the sending end may further include a communication bus, which is used to establish a connection between the processor and the memory.
  • a seventh aspect provides a computer-readable storage medium that stores instructions which, when run on a computer, cause the computer to execute the data packet sending method described in the first aspect or the second aspect.
  • An eighth aspect provides a computer program product containing instructions that, when run on a computer, causes the computer to execute the data packet sending method described in the first aspect or the second aspect.
  • Figure 1 is a schematic diagram of a queue scheduling process based on FIFO technology provided by an embodiment of the present application
  • Figure 2 is a schematic diagram of a FIFO-based scheduling result provided by an embodiment of the present application.
  • Figure 3 is a schematic diagram of an object block-based scheduling result for the three object blocks in Figure 2;
  • Figure 4 is a schematic diagram of a network architecture provided by an embodiment of the present application.
  • Figure 5 is a flow chart of a data packet sending method provided by an embodiment of the present application.
  • Figure 6 is a flow chart of a data packet sending method provided by an embodiment of the present application.
  • Figure 7 is a flow chart of a data packet sending method provided by an embodiment of the present application.
  • Figure 8 is a flow chart of a data packet sending method provided by an embodiment of the present application.
  • Figure 9 is a schematic structural diagram of a forwarding node provided by an embodiment of the present application.
  • Figure 10 is a schematic structural diagram of a sending end provided by an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • an object block may also be referred to as an object or object data.
  • An object block can be understood as a complete piece of data processed by the upper-layer application.
  • an object block can be understood as an entire video frame.
  • an object block can be understood as a whole picture.
  • the basic unit of network data transmission currently supported by network protocols is the data packet (also called a data block or message). Therefore, the basic process of sending data from the sending end to the receiving end can be: the upper-layer application of the sending end provides an object block to the network transport layer of the sending end, the network transport layer of the sending end divides the object block into multiple data packets and sends each data packet to the network transport layer of the receiving end, and after receiving the data packets, the network transport layer of the receiving end reassembles them into the object block and submits the reassembled object block to the upper-layer application of the receiving end.
  • a forwarding node may also be referred to as an intermediate node, a forwarding end, a forwarding device, and the like.
  • the efficiency of the forwarding node in forwarding data packets will affect the final transmission efficiency of the object block to a certain extent, which in turn affects the efficiency of the upper-layer application in processing the object block. Therefore, how forwarding nodes forward data packets is a hot topic in current research.
  • First in first out (FIFO) technology is a technology used by forwarding nodes to forward data packets.
  • in FIFO technology, each time a forwarding node receives a data packet, it caches the data packet in a best-effort queue.
  • when the forwarding node sends data, it directly sends the data packets in the best-effort queue in order of their enqueuing time, from earliest to latest. That is, the data packets that entered the best-effort queue first are sent out first, which is why it is called FIFO technology.
  • Figure 1 is a schematic diagram of a queue scheduling process based on FIFO technology provided by an embodiment of the present application.
  • the data packets received by the forwarding node enter the best-effort queue sequentially from the left, that is, the received data packets are sequentially added to the end of the best-effort queue.
  • the forwarding node sends each packet sequentially from the head of the best-effort queue (the right side of the queue in Figure 1).
  • the block marked 1 in Figure 1 indicates the data packet of the object block 1
  • the block marked 2 indicates the data packet of the object block 2
  • the block marked 3 indicates the data packet of the object block 3.
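  • The following is a minimal, illustrative sketch (not taken from the patent text) of the FIFO baseline described above; the (object id, sequence) tuple used as a packet is a hypothetical representation:

```python
from collections import deque

# Minimal sketch of the FIFO baseline: every received packet, regardless of
# which object block it belongs to, is appended to a single best-effort queue
# and sent strictly in arrival order.
best_effort_queue = deque()

def on_packet_received(packet):
    """Cache the packet at the tail of the single best-effort queue."""
    best_effort_queue.append(packet)

def send_next_packet(transmit):
    """Send the packet that has waited longest (earliest enqueuing time)."""
    if best_effort_queue:
        transmit(best_effort_queue.popleft())

# Example: packets of object blocks 1, 2 and 3 that arrive interleaved stay
# interleaved on the wire, which is the behaviour Figure 2 illustrates.
if __name__ == "__main__":
    for pkt in [("obj1", 0), ("obj2", 0), ("obj3", 0), ("obj2", 1), ("obj1", 1)]:
        on_packet_received(pkt)
    while best_effort_queue:
        send_next_packet(lambda p: print("sending", p))
```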
  • the service levels of different terminals may be different.
  • the service level of each terminal is planned by the operator. Based on this, a best-effort queue is configured for each service level at the forwarding node, so that data packets sent by different terminals belonging to the same service level are cached in the same best-effort queue.
  • since data packets sent by different terminals belonging to the same service level are cached in the same best-effort queue, the forwarding node cannot guarantee that data packets with special requirements arrive at the receiving end first, that is, it cannot guarantee a user's quality of service (QoS) level or a particular priority. In other words, the forwarding node only sends the data packets in sequence based on their enqueuing time in the best-effort queue. If the communication load of the current network is high, for example, the data packets of many terminals are cached in the best-effort queue, the data transmission bit rate and transmission time experienced by the user side of a given terminal are not fixed.
  • FIFO technology has the following problems:
  • the forwarding node only sends the data packets in sequence based on their enqueuing time in the best-effort queue. If the sender has requirements on the completion time (CT) of an object block, for example, the sender requires the CT to be minimized or the CT not to exceed the deadline (DL), FIFO technology cannot meet the sender's needs.
  • CT stands for completion time, and DL stands for deadline.
  • DL can be understood as the sender's requirement on the network transmission duration of the data, so DL is referred to below as the network-allowed delay.
  • Figure 2 is a schematic diagram of a FIFO-based scheduling result provided by an embodiment of the present application.
  • assume there are currently three object blocks, marked as object block 1, object block 2 and object block 3.
  • the data size of object block 1 is shown as 1
  • the data size of object block 2 is shown as 2
  • the data size of object block 3 is shown as 3.
  • this data size does not represent the actual data size of the object block, but is only used to illustrate the data size relationship between object block 1, object block 2, and object block 3.
  • the allowable network delay of object block 1 is 1 second (s)
  • the allowable network delay of object block 2 is 6s
  • the allowable network delay of object block 3 is 6s.
  • as shown in Figure 2, the data packets of object block 2 and object block 3 are sent in an interleaved manner, that is, data packets of both object block 2 and object block 3 are sent during this period, and by the end of this period all the data packets of object block 2 have been sent; during the period of 5-6s, the remaining data packets of object block 3 are sent.
  • the total completion time of object block 1 is 3s
  • the total completion time of object block 2 is 5s
  • the total completion time of object block 3 is 6s.
  • it can be seen that the total completion time of each object block is relatively large, and the total completion time of object block 1 is 3 seconds, which exceeds the 1-second network-allowed delay of object block 1.
  • therefore, when forwarding data packets, the forwarding node can consider sending data packets belonging to the same object block together, that is, avoiding interleaved transmission of data packets of different object blocks.
  • Figure 3 is a schematic diagram of an object block-based scheduling result for the three object blocks in Figure 2.
  • object block 1 is sent first, then object block 2, and finally object block 3
  • the total completion time of object block 1 is 1s
  • the total completion time of each object block is shorter than the total completion time scheduled based on FIFO technology, and the total completion time of each object block does not exceed the corresponding network allowable delay.
  • forwarding nodes can configure comparators and matchers to avoid interleaved transmission of data packets of different object blocks.
  • this technology requires a comparator and a matcher to be configured on the forwarding node, resulting in high hardware overhead on the forwarding node.
  • the algorithm complexity involved in this technology is too high, which affects the forwarding efficiency of the forwarding node.
  • in view of this, embodiments of the present application provide a data packet sending method in which the frequency of interleaved sending of data packets of different object blocks can be reduced based on scheduling queues.
  • the hardware overhead is low.
  • the algorithm is simple, thereby improving the efficiency of forwarding nodes in forwarding data packets.
  • FIG. 4 is a schematic diagram of a network architecture provided by an embodiment of the present application.
  • the network architecture is used to implement the data packet sending method provided by an embodiment of the present application.
  • the network architecture includes a sending end 401, a forwarding node 402 and a receiving end 403.
  • the sending end 401 and the forwarding node 402 are connected through a wired or wireless manner for communication, and the forwarding node 402 and the receiving end 403 are connected through a wired or wireless manner for communication.
  • the sending end 401 divides the object block into multiple data packets. These data packets include the first packet (that is, the first data packet, marked B in Figure 4), the tail packet (that is, the last data packet, marked E in Figure 4), and the intermediate data packets between the first packet and the tail packet (marked M in Figure 4).
  • the sending end sends each data packet in order from the first packet to the last packet.
  • the sending end marks the identifier of the object block in each data packet; in the first packet of the object block, the data packet is marked as the first packet, and in the tail packet, the data packet is marked as the tail packet.
  • in other words, each data packet of the object block carries the identifier of the object block, the first packet of the object block carries the first-packet label, and optionally the tail packet carries the tail-packet label.
  • the first packet label is used to indicate that the corresponding data packet is the first data packet of the object block
  • the tail packet label is used to indicate that the corresponding data packet is the last data packet of the object block.
  • when the forwarding node 402 receives any data packet, since the data packet carries the identifier of the object block to which it belongs, the forwarding node 402 can determine the object block to which the data packet belongs. If the data packet further carries a first-packet label, the forwarding node 402 can determine that the data packet is the first packet of the object block to which it belongs; if the data packet further carries a tail-packet label, the forwarding node 402 can determine that the data packet is the tail packet of the object block to which it belongs.
  • the forwarding node 402 can implement scheduling and transmission of data packets in units of object blocks, thereby reducing the frequency of interleaved transmission of data packets of different object blocks.
  • after receiving the data packets, the receiving end 403 reassembles them into the object block based on the object block identifier carried in each data packet, and submits the object block to the upper-layer application.
  • the data packet sent by the sending end 401 can also carry the data size of the object block and/or the allowable network delay.
  • the function of the data size of the object block and of the network-allowed delay will be explained in detail in subsequent embodiments and is not expanded on here.
  • the data size of the object block and/or the network-allowed delay may be carried only in the first packet of the object block, or may be carried in each data packet of the object block, which is not limited in the embodiments of this application.
  • the sending end 401 and the receiving end 403 can be any data processing end, such as mobile phones, computers and other terminals or servers.
  • the forwarding node 402 may be a forwarding device such as a switch or router.
  • the sending end 401, the forwarding node 402, and the receiving end 403 will perform clock synchronization during network initialization to facilitate subsequent data transmission, which will not be described in detail in the embodiment of this application.
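  • As an illustration of the information described above, the following sketch defines a hypothetical packet structure; the field names are assumptions and simply mirror the object block identifier, first-packet label, tail-packet label, optional data size and optional network-allowed delay mentioned in the description:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch only: the field names below are hypothetical and mirror
# the information the description says a data packet can carry.
@dataclass
class DataPacket:
    object_block_id: int             # identifier of the object block (carried by every packet)
    payload: bytes                   # the fragment of the object block carried by this packet
    is_first_packet: bool = False    # first-packet label ("B" in Figure 4)
    is_tail_packet: bool = False     # tail-packet label ("E" in Figure 4)
    object_size: Optional[int] = None       # optional: total data size of the object block
    allowed_delay_ms: Optional[int] = None  # optional: network-allowed delay (DL)

# Example: the first packet of object block 7 with a 4 ms allowed delay.
first_pkt = DataPacket(object_block_id=7, payload=b"...", is_first_packet=True,
                       object_size=3000, allowed_delay_ms=4)
```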
  • embodiments of the present application provide a data packet sending method.
  • the overall concept of the embodiments of the present application is first explained below.
  • in order to enable the forwarding node to schedule and send data packets in units of object blocks, the forwarding node is configured with multiple scheduling queues, and each scheduling queue is used to cache the data packets of at least one object block.
  • for example, each scheduling queue is used to cache the data packets of one object block, or the data packets of two or more object blocks. In either case, data packets of the same object block are cached in the same scheduling queue.
  • the priority can be configured for each scheduling queue. In this way, when forwarding data packets, the forwarding node can quickly select a scheduling queue from multiple scheduling queues based on the priority of each scheduling queue, and then send the data packets in the selected scheduling queue.
  • the priority of the scheduling queue is related to the fixed target attribute of the object block cached by the scheduling queue, rather than to the specific data packet.
  • therefore, the relative order of the priorities of the scheduling queues basically does not change within a short period of time.
  • the forwarding node will thus select data packets from the same scheduling queue to send each time within this short period of time.
  • since the data packets of the same object block are cached in the same scheduling queue, there is a high probability that the data packets sent by the forwarding node in this short period of time belong to the same object block. In this way, the data packets of the same object block are sent together as much as possible, thus reducing the frequency of interleaved sending of data packets of different object blocks.
  • for example, an object block that arrives at the forwarding node earlier needs to be sent out first to avoid a long network transmission time for that object block.
  • the target attribute may be, for example, the arrival time of the first packet of the object block.
  • the arrival time of the first packet specifically refers to the time when the forwarding node receives the first packet.
  • alternatively, the forwarding node can schedule the object blocks based on the delay requirement of each object block; that is, the target attribute may be, for example, the network-allowed delay of the object block.
  • the arrival time of the first packet of the object block and the network-allowed delay are only two examples of the target attribute.
  • the target attribute may also be other attributes of the object block, such as other requirements of the sender for the object block, which are not limited in the embodiments of the present application.
  • the target attributes are carried in the data packet of the object block.
  • when the sending end sends an object block, it needs to carry the target attribute at least in the first packet of the object block, so that the forwarding node can adjust the priority of each scheduling queue based on the target attribute.
  • in general, the method provided by the embodiments of the present application includes: the sending end sends the data packets and carries information related to the object block in the data packets.
  • the forwarding node receives the data packet and caches the data packet in the scheduling queue in units of object blocks.
  • the forwarding node sends data packets based on the priority of each scheduling queue. This is explained below in three embodiments.
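  • Before turning to the embodiments, the following is a minimal sketch, under assumed data structures, of the core idea described above: each scheduling queue carries a priority derived from the fixed target attribute, and the forwarding node always dequeues from the highest-priority non-empty scheduling queue:

```python
from collections import deque

# Minimal sketch; the class and function names are illustrative assumptions.
class SchedulingQueue:
    def __init__(self):
        self.packets = deque()   # cached data packets of at least one object block
        self.priority = 0        # set from the target attribute when a first packet enqueues

def select_first_scheduling_queue(queues):
    """Return the non-empty scheduling queue with the highest priority, if any."""
    non_empty = [q for q in queues if q.packets]
    return max(non_empty, key=lambda q: q.priority) if non_empty else None

def forward_once(queues, transmit):
    """Send one packet from the first scheduling queue (highest priority)."""
    queue = select_first_scheduling_queue(queues)
    if queue is not None:
        transmit(queue.packets.popleft())
```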
  • Figure 5 is a flow chart of a data packet sending method provided by an embodiment of the present application. This method is used to explain how the sender sends the data packet. As shown in Figure 5, the method includes the following steps 501 and 502.
  • Step 501 The sending end determines multiple data packets of the object block to be sent. Each of the multiple data packets carries the identifier of the object block, and the first packet among the multiple data packets carries a first-packet label, which is used to indicate that the corresponding data packet is the first packet of the object block.
  • the purpose of carrying the first packet label in the first packet is to facilitate the forwarding node to identify the new object block and allocate a scheduling queue to the new object block.
  • Each data packet carries the identifier of the object block so that the forwarding node can cache data packets of the same object block in the same scheduling queue.
  • the first packet may further carry the data size of the object block and/or the allowed network delay of the object block.
  • each data packet of the object block may carry the data size of the object block and/or the allowed network delay of the object block.
  • the data size of the object block indicates the total size of all data packets included in the object block.
  • the purpose of carrying the data size of the object block in the first packet is to facilitate the forwarding node to allocate a scheduling queue to the newly received object block based on the capacity of the scheduling queue.
  • the purpose of carrying the network-allowed delay of the object block in the first packet is to facilitate the forwarding node in adjusting the priority of the scheduling queue based on the network-allowed delay of the object block, so that object blocks with strict delay requirements can be sent out first.
  • the tail packet among the plurality of data packets may also carry a tail packet label, and the tail packet label is used to indicate that the corresponding data packet is the last data packet of the object block.
  • the purpose of the tail packet carrying the tail packet label is to facilitate the forwarding node to recognize that all data of the object block has been forwarded, and then update the priority of the scheduling queue.
  • the tail packet may not carry a tail packet label.
  • the forwarding node may determine whether the data of the object block has been forwarded based on the amount of data of the object block that has been sent.
  • Step 502 The sending end sends multiple data packets of the object block.
  • the sending end sends the multiple data packets in order from the first packet to the last packet.
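  • The following sketch illustrates steps 501 and 502 under assumed names (build_packets, mtu and the dictionary keys are not from the patent); it splits an object block into packets, labels the first and tail packets, and carries the object block identifier together with the optional data size and network-allowed delay:

```python
# Sketch of steps 501 and 502 under assumed field names.
def build_packets(object_block_id, data, allowed_delay_ms=None, mtu=1400):
    chunks = [data[i:i + mtu] for i in range(0, len(data), mtu)] or [b""]
    packets = []
    for index, chunk in enumerate(chunks):
        packets.append({
            "object_block_id": object_block_id,          # carried by every packet
            "payload": chunk,
            "is_first_packet": index == 0,               # first-packet label
            "is_tail_packet": index == len(chunks) - 1,  # tail-packet label (optional)
            "object_size": len(data),                    # optional, useful for capacity checks
            "allowed_delay_ms": allowed_delay_ms,        # optional network-allowed delay
        })
    return packets

def send_object_block(object_block_id, data, transmit, allowed_delay_ms=None):
    """Step 502: send the packets in order, from the first packet to the tail packet."""
    for packet in build_packets(object_block_id, data, allowed_delay_ms):
        transmit(packet)
```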
  • Figure 6 is a flow chart of a data packet sending method provided by an embodiment of the present application. This method is used to explain how forwarding nodes receive and cache data packets. As shown in Figure 6, the method includes the following steps 601 to 603.
  • Step 601 The forwarding node receives the first data packet.
  • the first data packet carries the identifier of the target object block, and the target object block is the object block to which the first data packet belongs.
  • Step 602 If the first data packet does not carry the first packet label, the forwarding node adds the first data packet to the second scheduling queue based on the identification of the target object block.
  • the second scheduling queue is the scheduling queue, among the multiple scheduling queues, that already caches data packets of the target object block.
  • the first packet label is used to indicate whether the first data packet is the first data packet of the target object block.
  • in other words, the forwarding node has already received data packets of the target object block before the current time.
  • in order to enable the forwarding node to quickly determine which object block's data packets are cached in each scheduling queue, the forwarding node is configured with a mapping relationship between scheduling queues and object blocks.
  • the mapping relationship includes an object block identifier corresponding to each scheduling queue, and the object block identifier corresponding to each scheduling queue is used to mark the object blocks cached in the scheduling queue.
  • the forwarding node can determine the second scheduling queue from the mapping relationship between the scheduling queue and the object block based on the identification of the target object block.
  • Step 603 If the first data packet also carries a first packet label, the forwarding node selects one scheduling queue from multiple scheduling queues as the third scheduling queue, and adds the first data packet to the third scheduling queue.
  • the target object block is equivalent to a new object block.
  • a scheduling queue needs to be allocated for the target object block, so that subsequent data packets of the target object block are cached in the allocated scheduling queue.
  • the forwarding node may also add the correspondence between the identifier of the target object block and the third scheduling queue to the mapping relationship between scheduling queues and object blocks, so that the forwarding node can subsequently cache the remaining data packets of the target object block in the third scheduling queue.
  • the forwarding node selects one scheduling queue from multiple scheduling queues as the third scheduling queue in the following three ways.
  • each scheduling queue caches data packets of at most one object block, and each time a new object block is received, an empty scheduling queue is allocated for the new object block.
  • An empty scheduling queue is a scheduling queue that does not cache any data.
  • that is, a scheduling queue can be selected from the remaining empty scheduling queues as the third scheduling queue.
  • in other words, in step 603 the forwarding node selects, from the multiple scheduling queues, a scheduling queue with no cached data packets as the third scheduling queue.
  • for example, there are 10 scheduling queues at the forwarding node, labeled scheduling queue 1 to scheduling queue 10.
  • when the forwarding node receives the first data packet, scheduling queue 1 to scheduling queue 3 among the 10 scheduling queues already cache data packets.
  • in this case, the forwarding node selects any scheduling queue from scheduling queue 4 to scheduling queue 10, for example scheduling queue 4, and uses the selected scheduling queue as the third scheduling queue.
  • in addition, if there is no empty scheduling queue at the current time, the forwarding node can add the first data packet to a default queue.
  • the default queue may be, for example, a best-effort queue.
  • the priority of the best-effort queue is lower than the priorities of the multiple scheduling queues. Therefore, when the forwarding node receives a data packet, it preferentially adds the data packet to one of the multiple scheduling queues; if none of the multiple scheduling queues meets the conditions, the data packet is added to the best-effort queue.
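  • The following sketch illustrates steps 601 to 603 under the first enqueuing manner; the dictionary-based mapping, the fixed number of ten scheduling queues and the field names are assumptions used only for illustration:

```python
from collections import deque

# Sketch of steps 601-603 under the first enqueuing manner (at most one object
# block per scheduling queue); all structures are illustrative assumptions.
scheduling_queues = [deque() for _ in range(10)]   # scheduling queue 1..10
best_effort_queue = deque()
object_to_queue = {}                               # object block id -> index of its scheduling queue
free_queues = set(range(10))                       # indices of scheduling queues not yet assigned

def enqueue(packet):
    obj_id = packet["object_block_id"]
    if not packet["is_first_packet"]:
        # Step 602: a non-first packet joins the queue that already caches its object block.
        scheduling_queues[object_to_queue[obj_id]].append(packet)
        return
    # Step 603: a first packet claims an empty scheduling queue, if one exists.
    if not free_queues:
        best_effort_queue.append(packet)           # no empty scheduling queue: use the default queue
        return
    index = free_queues.pop()
    object_to_queue[obj_id] = index
    scheduling_queues[index].append(packet)
```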
  • the target attribute can be the arrival time of the first data packet (that is, the first packet) of the corresponding object block, or it can be the network allowable delay of the corresponding object block.
  • when the target attribute is the arrival time of the first packet of the corresponding object block, after the forwarding node selects a scheduling queue with no cached data packets from the multiple scheduling queues as the third scheduling queue and adds the first data packet to it, the forwarding node further updates the priority of the third scheduling queue so that the updated priority of the third scheduling queue is lower than the priorities of the other scheduling queues that already cache data packets. That is, the scheduling queue holding the object block whose first packet arrived earlier has a higher priority.
  • the first update method is to set the priority of the third scheduling queue to one level lower than the lowest priority among the other scheduling queues that already cache data packets, while the priorities of those other scheduling queues remain unchanged.
  • for example, the forwarding node selects scheduling queue 4 from scheduling queue 4 to scheduling queue 10 as the third scheduling queue, and the other scheduling queues that already cache data packets are scheduling queue 1 to scheduling queue 3. Assume the priorities of scheduling queue 1 to scheduling queue 3 are 10, 9 and 8 respectively; in this case, the priority of scheduling queue 4 can be directly set to 7.
  • the second update method is to raise the priorities of the other scheduling queues that already cache data packets by one level, and to set the priority of the third scheduling queue to one level lower than the lowest of the raised priorities.
  • for example, the forwarding node selects scheduling queue 4 from scheduling queue 4 to scheduling queue 10 as the third scheduling queue, and the other scheduling queues that already cache data packets are scheduling queue 1 to scheduling queue 3. Assume the priorities of scheduling queue 1 to scheduling queue 3 are 3, 2 and 1 respectively; in this case, the priorities of scheduling queue 1 to scheduling queue 3 are raised to 4, 3 and 2 respectively, and the priority of scheduling queue 4 is set to 1.
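  • The two update methods can be sketched as follows, using plain integers for priorities; the function and variable names are illustrative assumptions:

```python
# "priorities" holds the priorities of the scheduling queues that already
# cache data packets, e.g. {"Q1": 10, "Q2": 9, "Q3": 8}.
def first_update_method(priorities, new_queue):
    """Place the new queue one level below the current lowest priority."""
    priorities[new_queue] = min(priorities.values()) - 1

def second_update_method(priorities, new_queue):
    """Raise every existing priority by one level, then place the new queue
    one level below the new lowest priority."""
    for name in list(priorities):
        priorities[name] += 1
    priorities[new_queue] = min(priorities.values()) - 1

# Example matching the text: Q1..Q3 at 10, 9, 8 -> Q4 gets 7 (first method).
p = {"Q1": 10, "Q2": 9, "Q3": 8}
first_update_method(p, "Q4")
assert p["Q4"] == 7
```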
  • when the target attribute is the network-allowed delay of the corresponding object block, after the forwarding node selects a scheduling queue with no cached data packets from the multiple scheduling queues as the third scheduling queue and adds the first data packet to it, the forwarding node further updates the priority of the third scheduling queue so that the updated priority of the third scheduling queue is higher than the priority of the fourth scheduling queue and lower than the priority of the fifth scheduling queue.
  • the fourth scheduling queue refers to a scheduling queue, among the multiple scheduling queues, whose cached object blocks have a network-allowed delay greater than that of the target object block, and the fifth scheduling queue refers to a scheduling queue whose cached object blocks have a network-allowed delay smaller than that of the target object block.
  • in this way, the scheduling queue holding object blocks with stricter network-allowed-delay requirements has a higher priority, which ensures that such object blocks are sent out first.
  • the forwarding node can further update the priority of the third scheduling queue in two ways.
  • in the first way, the priority of the fifth scheduling queue remains unchanged, the priority of the fourth scheduling queue is lowered by one level, and the priority of the third scheduling queue is set to one level lower than the lowest priority among the fifth scheduling queues.
  • for example, the forwarding node selects scheduling queue 4 from scheduling queue 4 to scheduling queue 10 as the third scheduling queue, and the other scheduling queues that already cache data packets are scheduling queue 1 to scheduling queue 3.
  • assume the priorities of scheduling queue 1 to scheduling queue 3 are 10, 9 and 8 respectively, the network-allowed delays of the object blocks cached by these three scheduling queues are 1, 3 and 6 respectively, and the network-allowed delay of the target object block is 4.
  • in this case, the priority of scheduling queue 3 is lowered to 7, and the priority of scheduling queue 4 is set to 8, which is one level lower than 9.
  • in the second way, the priority of the fourth scheduling queue remains unchanged, the priority of the fifth scheduling queue is raised by one level, and the priority of the third scheduling queue is set to one level higher than the highest priority among the fourth scheduling queues.
  • for example, the forwarding node selects scheduling queue 4 from scheduling queue 4 to scheduling queue 10 as the third scheduling queue, and the other scheduling queues that already cache data packets are scheduling queue 1 to scheduling queue 3.
  • assume the priorities of scheduling queue 1 to scheduling queue 3 are 10, 9 and 8 respectively, the network-allowed delays of the object blocks cached by these three scheduling queues are 1, 3 and 6 respectively, and the network-allowed delay of the target object block is 4.
  • in this case, the priorities of scheduling queue 1 and scheduling queue 2 are raised to 11 and 10 respectively, and the priority of scheduling queue 4 is set to 9, which is one level higher than 8.
  • if there is no fourth scheduling queue, the updated priority of the third scheduling queue only needs to be lower than the priority of the fifth scheduling queue. That is, when the network-allowed delays of the object blocks cached in the multiple scheduling queues are all smaller than the network-allowed delay of the target object block, the priority of the third scheduling queue can simply be set to the lowest.
  • correspondingly, if there is no fifth scheduling queue, the updated priority of the third scheduling queue only needs to be higher than the priority of the fourth scheduling queue. That is, when the network-allowed delays of the object blocks cached in the multiple scheduling queues are all greater than the network-allowed delay of the target object block, the priority of the third scheduling queue can simply be set to the highest.
  • the above implementation of updating the priority of the third scheduling queue according to the network-allowed delay can be understood as follows: scanning from the first scheduling queue in the arrangement, the third scheduling queue is inserted in front of the first scheduling queue that satisfies the following condition: the network-allowed delay of its cached object block exceeds the network-allowed delay of the target object block.
  • after the third scheduling queue is inserted, the priority order of the scheduling queues still follows the sorted arrangement.
  • the scheduling queues that have buffered data packets are marked as A, B, C, and D in order from high to low priority.
  • the network-allowed delay (DL) of the object block to which the data packets cached by A belong is 1, the DL of the object blocks to which the data packets cached by B and C belong is 3, and the DL of the object block to which the data packets cached by D belong is 6.
  • if the network-allowed delay (DL) carried by the first data packet is 4 and the selected third scheduling queue is denoted N, then after the first data packet is added to N, the scheduling queues are sorted in order of priority from high to low. After sorting, scheduling queue N should be before scheduling queue D and after scheduling queues B and C; that is, the new priority order from high to low is A, B, C, N, D.
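  • The insertion rule described above can be sketched as follows; the list-of-tuples representation of the priority ordering is an illustrative assumption:

```python
# Insert the new scheduling queue in front of the first queue whose cached
# object block has a larger network-allowed delay (DL) than the target DL.
def insert_by_allowed_delay(ordered_queues, new_queue, new_dl):
    """ordered_queues: list of (queue_name, dl), highest priority first."""
    for position, (_, dl) in enumerate(ordered_queues):
        if dl > new_dl:
            ordered_queues.insert(position, (new_queue, new_dl))
            return
    ordered_queues.append((new_queue, new_dl))  # no stricter queue behind: lowest priority

# Example from the text: A(DL=1), B(DL=3), C(DL=3), D(DL=6) and a first packet
# with DL=4 cached in queue N -> new order A, B, C, N, D.
order = [("A", 1), ("B", 3), ("C", 3), ("D", 6)]
insert_by_allowed_delay(order, "N", 4)
assert [name for name, _ in order] == ["A", "B", "C", "N", "D"]
```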
  • for the scheduling queues that do not cache any data packets, the priorities of these scheduling queues can be set to a default priority, such as level 0, which is not limited in the embodiments of the present application.
  • since the priority of a scheduling queue is determined based on the target attribute of the object block cached in the scheduling queue, when all the data packets of the object blocks cached in a scheduling queue have been sent, the priority of that scheduling queue needs to be updated.
  • in one implementation, the forwarding node updates the priority of the scheduling queue in the following manner: if all the data packets of the object blocks cached by any one of the multiple scheduling queues have been sent, the priority of that scheduling queue is updated so that its updated priority is lower than the priorities of the other scheduling queues that still cache data packets.
  • the priorities of the other scheduling queues that still cache data packets may or may not be updated. If they are updated, the relative order of their priorities before and after the update remains unchanged.
  • for example, the scheduling queues that cache data packets are sorted from high to low priority as A, B, C, N and D, and the priorities of these five scheduling queues are 10, 9, 8, 7 and 6 respectively. If all the data packets of the object blocks cached by scheduling queue C have been sent at the current time, the priority of scheduling queue C is set to the lowest, and the priorities of the other four scheduling queues A, B, N and D can remain unchanged, or can be reset to 10, 9, 8 and 7 respectively.
  • in another implementation, the priority update is triggered only when all the data packets of the object block in the highest-priority scheduling queue have been sent. In this scenario, if all the data packets of the object blocks cached by any one of the multiple scheduling queues have been sent, it is still necessary to determine whether that scheduling queue is the highest-priority scheduling queue; if it is not, no priority update is triggered.
  • if the scheduling queue is the scheduling queue with the highest priority, a priority update is triggered.
  • during the update, the scheduling queues in which all the data packets of the cached object blocks have been sent by the current time are determined, and the priorities of these scheduling queues are set lower than the priorities of the other scheduling queues that still cache data packets.
  • likewise, the priorities of the other scheduling queues that still cache data packets may or may not be updated; if they are updated, the relative order of their priorities before and after the update remains unchanged.
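  • The priority update for a drained scheduling queue can be sketched as follows; the dictionary and set used to represent the queue state are illustrative assumptions:

```python
# "priorities" maps queue name -> priority; "caching" is the set of queues
# that still cache data packets; both are illustrative, not patent structures.
def on_queue_drained(priorities, caching, drained, highest_only=False):
    if highest_only and priorities[drained] != max(priorities.values()):
        return  # only the highest-priority queue triggers an update in this variant
    others = [priorities[q] for q in caching if q != drained]
    # Demote the drained queue below every queue that still caches packets;
    # the relative order of the other queues is left unchanged.
    priorities[drained] = (min(others) - 1) if others else priorities[drained]

# Example from the text: A,B,C,N,D at 10,9,8,7,6 and C drains -> C drops below D.
p = {"A": 10, "B": 9, "C": 8, "N": 7, "D": 6}
on_queue_drained(p, caching={"A", "B", "N", "D"}, drained="C")
assert p["C"] < p["D"]
```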
  • based on the tail-packet label, the forwarding node can confirm that all the data packets of the object block cached in the scheduling queue have been sent.
  • alternatively, the forwarding node can determine through other means (such as the data size of the object block) whether all the data packets of the object block cached by the scheduling queue have been sent, which is not described in detail here.
  • In the second way, each scheduling queue is used to cache several object blocks. When one scheduling queue is full, new object blocks enter the next empty scheduling queue.
  • in this way, the implementation in which the forwarding node selects one scheduling queue from the multiple scheduling queues as the third scheduling queue may be: the forwarding node determines the remaining capacity of the sixth scheduling queue, the sixth scheduling queue being the scheduling queue holding the most recently received first packet before the current time; if the remaining capacity of the sixth scheduling queue is not enough to cache the target object block, the forwarding node selects a scheduling queue with no cached data packets from the multiple scheduling queues as the third scheduling queue. Correspondingly, if the remaining capacity of the sixth scheduling queue is sufficient to cache the target object block, the forwarding node uses the sixth scheduling queue as the third scheduling queue.
  • in other words, when the forwarding node receives a new first packet, it determines whether the scheduling queue to which a first packet was most recently added can still cache the object block to which the new first packet belongs. If it can, the object block to which the new first packet belongs is cached in that scheduling queue; if it cannot, the next empty scheduling queue is selected to cache the object block to which the new first packet belongs.
  • that is, the forwarding node selects each scheduling queue in turn to cache object blocks; whenever the remaining capacity of a scheduling queue is not enough, the next empty scheduling queue is selected to cache the object block.
  • in one implementation, the forwarding node determines the remaining capacity of the sixth scheduling queue as follows: the forwarding node determines the number of cached object blocks, that is, the number of object blocks to which the data packets cached in the sixth scheduling queue belong, and determines the difference between the first threshold and the number of cached object blocks as the remaining capacity of the sixth scheduling queue, where the first threshold is the number of object blocks that the sixth scheduling queue can cache. In this scenario, that the remaining capacity of the sixth scheduling queue is not enough to cache the data packets of the target object block means that the remaining capacity of the sixth scheduling queue is 0.
  • the first threshold may also be referred to as the upper limit of the number of bearers of the scheduling queue, and the upper limit of the number of bearers may be understood as the maximum number of object blocks that the scheduling queue can bear.
• For example, there are four scheduling queues configured at the forwarding node, labeled scheduling queue 1 to scheduling queue 4, and the first threshold of each scheduling queue is 3. Assume that the first packets of object block 1 to object block 6 arrive in sequence; then the first packets of object block 1 to object block 3 are added to scheduling queue 1, and the first packets of object block 4 to object block 6 are added to scheduling queue 2.
  • the forwarding node may further determine the remaining capacity of the third scheduling queue.
  • the remaining capacity of the third scheduling queue is: the first threshold of the third scheduling queue minus 1 (1 represents that the third scheduling queue has been used to cache an object block).
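• To make this first-threshold (bearer-count) variant concrete, the sketch below shows one possible way to pick the third scheduling queue; the SchedulingQueue class and select_third_queue function are illustrative assumptions, not an implementation prescribed by this application.
```python
# Illustrative sketch (assumed names): selecting the third scheduling queue when
# the capacity limit is an object-block count ("first threshold", i.e. the
# upper limit of the number of bearers).

class SchedulingQueue:
    def __init__(self, first_threshold):
        self.first_threshold = first_threshold    # max object blocks this queue can bear
        self.cached_object_ids = set()            # object blocks currently cached
        self.packets = []                         # cached data packets

    def remaining_capacity(self):
        # remaining capacity = first threshold - number of cached object blocks
        return self.first_threshold - len(self.cached_object_ids)

def select_third_queue(queues, sixth_queue):
    """Pick the queue for a newly arrived first packet.

    `sixth_queue` is the queue that received the most recent first packet
    before the current time (None if no first packet has been received yet).
    """
    if sixth_queue is not None and sixth_queue.remaining_capacity() > 0:
        return sixth_queue                        # the sixth queue still has room
    for q in queues:                              # otherwise pick an empty queue
        if not q.cached_object_ids:
            return q
    return None                                   # no empty queue: fall back to the default queue
```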
• In another implementation, the forwarding node determines the remaining capacity of the sixth scheduling queue as follows: the forwarding node determines the total size of the cached data, i.e., the total size of the data packets already cached in the sixth scheduling queue; the forwarding node then determines the difference between the second threshold and the total size of the cached data as the remaining capacity of the sixth scheduling queue, where the second threshold is the total size of the data packets that the sixth scheduling queue can cache.
• In this implementation, the first data packet also carries the data size of the target object block, and the remaining capacity of the sixth scheduling queue being insufficient to cache the data packets of the target object block means that the remaining capacity of the sixth scheduling queue is smaller than the data size of the target object block.
  • the second threshold may also be called a token of the scheduling queue, and the token may be understood as the total amount of data that can be cached by the cache space of the scheduling queue.
• For example, there are four scheduling queues configured at the forwarding node, labeled scheduling queue 1 to scheduling queue 4, and the second threshold of each scheduling queue is 10. Assume that the first packets of object block 1 to object block 3 arrive in sequence, and the data sizes of object block 1 and object block 2 are both 5; then the first packets of object block 1 and object block 2 are added to scheduling queue 1. For object block 3, since the remaining capacity of scheduling queue 1 is 0, the first packet of object block 3 needs to be added to the next empty scheduling queue (that is, scheduling queue 2).
  • the forwarding node may further determine the remaining capacity of the third scheduling queue.
  • the remaining capacity of the third scheduling queue is: the second threshold of the third scheduling queue minus the data size of the target object block.
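• The sketch below illustrates the corresponding second-threshold ("token") check; the helper names are hypothetical, and the asserts replay the worked example above.
```python
# Illustrative sketch of the second-threshold ("token") capacity check: the
# first packet carries the data size of the target object block, so fitting
# is a simple comparison against the remaining byte budget.

def remaining_capacity_by_token(cached_bytes, second_threshold):
    return second_threshold - cached_bytes

def block_fits(cached_bytes, second_threshold, object_block_size):
    return remaining_capacity_by_token(cached_bytes, second_threshold) >= object_block_size

# Matching the example above: with a second threshold of 10, object blocks 1
# and 2 (size 5 each) fill scheduling queue 1, so object block 3 does not fit
# and its first packet goes to the next empty queue.
assert block_fits(0, 10, 5)       # object block 1 enters queue 1
assert block_fits(5, 10, 5)       # object block 2 also fits
assert not block_fits(10, 10, 5)  # object block 3 must go to queue 2
```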
• In addition, when the forwarding node selects a scheduling queue with no cached data packets from the multiple scheduling queues as the third scheduling queue, if there is no empty scheduling queue at the current time, the forwarding node can add the first data packet to a default queue. The default queue may be, for example, a best-effort queue.
  • the target attribute in this scenario can be the arrival time of the first data packet of the corresponding object block.
• In this case, the priority of a scheduling queue can be determined based on the arrival time of the first packet that is added to the scheduling queue for the first time. That is, for any scheduling queue, the priority of the scheduling queue is configured when an object block is added to the scheduling queue for the first time; when subsequent object blocks are added to the scheduling queue, the priority of the scheduling queue is not updated.
• Therefore, if the forwarding node uses the sixth scheduling queue as the third scheduling queue and adds the first data packet to the third scheduling queue, the forwarding node does not update the priority of the third scheduling queue (that is, the sixth scheduling queue).
  • the forwarding node selects a scheduling queue that has not cached the data packet from the multiple scheduling queues as the third scheduling queue, and adds the first data packet to the third scheduling queue.
  • the forwarding node updates the priority of the third scheduling queue so that the priority of the updated third scheduling queue is lower than the priority of other scheduling queues that have cached data packets.
• The relevant implementation is similar to the case in the first enqueuing method in which the target attribute is the arrival time of the first data packet of the corresponding object block, and is not described in detail here.
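• As an illustration of this arrival-time-based priority configuration, the following sketch fixes a queue's priority only when the first object block enters it; the SchedQueue fields are assumptions for illustration only.
```python
# Illustrative sketch: the priority of a scheduling queue is fixed when the
# first object block enters it (based on the arrival time of that first
# packet) and is not changed when later object blocks are added.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SchedQueue:
    priority_key: Optional[float] = None    # arrival time of the first packet ever enqueued
    packets: list = field(default_factory=list)

def on_first_packet_enqueued(queue: SchedQueue, arrival_time: float) -> None:
    # Configure the priority only when an object block enters an empty queue;
    # later object blocks do not change the queue's priority.
    if queue.priority_key is None:
        queue.priority_key = arrival_time

def highest_priority_queue(queues):
    # Among queues that cache data packets, the earliest first-packet arrival wins.
    candidates = [q for q in queues if q.priority_key is not None]
    return min(candidates, key=lambda q: q.priority_key) if candidates else None
```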
• In the second enqueuing method, the priority of a scheduling queue is determined based on the target attribute of the first object block cached by that scheduling queue, while a scheduling queue caches multiple object blocks whose first packets arrive consecutively. Therefore, when all data packets of all object blocks cached in a certain scheduling queue have been sent, the priority of that scheduling queue needs to be updated.
• In this case, the forwarding node may update the priority of a scheduling queue in the following manner: if the data packets of all object blocks cached by any one of the multiple scheduling queues have all been sent, the priority of that scheduling queue is updated so that its updated priority is lower than the priorities of the other scheduling queues that have cached data packets.
• When doing so, the priorities of the other scheduling queues that have cached data packets may or may not be updated. If they are updated, the relative order of the priorities of those scheduling queues before and after the update remains unchanged.
• Optionally, the priority update may be triggered only when all data packets of the object blocks cached in the scheduling queue with the highest priority have been sent. In this scenario, if the data packets of all object blocks cached by any one of the multiple scheduling queues have all been sent, it is still necessary to determine whether that scheduling queue is the scheduling queue with the highest priority. If it is not the scheduling queue with the highest priority, no priority update is triggered; if it is the scheduling queue with the highest priority, an update of the priorities is triggered.
• Alternatively, the scheduling queues whose cached object blocks have had all of their data packets sent by the current time are determined, and the priorities of these scheduling queues are set to be lower than the priorities of the other scheduling queues that cache data packets.
  • the priorities of other scheduling queues that have buffered data packets may or may not be updated. If updated, the size relationship between the priorities of other scheduling queues that have cached data packets before and after the update remains unchanged.
• As for how the forwarding node determines whether all data packets of an object block have been sent, reference may also be made to the first enqueuing method, which is not described again here.
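• One possible way to realize this priority demotion once a queue has sent every cached object block is sketched below; the list-based priority order is an assumption, not a required data structure.
```python
# Illustrative sketch: once every data packet of every object block cached by
# a queue has been sent, that queue is moved behind all queues that still
# cache data packets; the relative order of those other queues is preserved.
# `priority_order` is a list of queues from highest to lowest priority.

def demote_drained_queue(priority_order, drained_queue, only_if_highest=True):
    if only_if_highest and priority_order and priority_order[0] is not drained_queue:
        return priority_order          # optional optimisation: only the top queue triggers an update
    rest = [q for q in priority_order if q is not drained_queue]
    return rest + [drained_queue]      # the drained queue now has the lowest priority
```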
  • Each scheduling queue is used to cache several object blocks, and new object blocks enter each scheduling queue in turn.
• In this case, the way in which the forwarding node selects one scheduling queue from the multiple scheduling queues as the third scheduling queue in step 603 can be as follows: the forwarding node determines the sixth scheduling queue from the multiple scheduling queues, where the sixth scheduling queue is the scheduling queue that received the most recent first packet before the current time; a scheduling queue ranked after the sixth scheduling queue is then determined as the third scheduling queue.
• For example, there are four scheduling queues configured at the forwarding node, labeled scheduling queue 1 to scheduling queue 4. Assume that the first packets of object block 1 to object block 6 arrive in sequence; then the first packets of object block 1 to object block 4 are added to scheduling queue 1 to scheduling queue 4 respectively, the first packet of object block 5 is added to scheduling queue 1, and the first packet of object block 6 is added to scheduling queue 2.
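• A minimal sketch of this round-robin selection of the third scheduling queue is shown below; the index-based bookkeeping is an illustrative assumption.
```python
# Illustrative sketch of the third enqueuing method: first packets of new
# object blocks are spread over the scheduling queues in turn.

def select_queue_round_robin(num_queues, last_first_packet_queue):
    """Return the index of the third scheduling queue.

    `last_first_packet_queue` is the index of the queue (the sixth scheduling
    queue) that received the most recent first packet, or None if no first
    packet has been received yet.
    """
    if last_first_packet_queue is None:
        return 0
    return (last_first_packet_queue + 1) % num_queues

# With four queues, object blocks 1-6 land in queue indices 0, 1, 2, 3, 0, 1,
# i.e. scheduling queue 1 to scheduling queue 4 and then wrapping around.
```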
• The target attribute in this scenario can be the arrival time of the first data packet of the corresponding object block.
• In this case, the priority of a scheduling queue can be determined based on the arrival time of the first packet that is added to the scheduling queue for the first time. That is, for any scheduling queue, the priority of the scheduling queue is configured when an object block is added to the scheduling queue for the first time; when subsequent object blocks are added to the scheduling queue, the priority of the scheduling queue is not updated.
• In this case, the forwarding node selects one scheduling queue from the multiple scheduling queues as the third scheduling queue and adds the first data packet to the third scheduling queue. If, in addition to the first data packet, the third scheduling queue also caches data packets of other object blocks, the forwarding node does not update the priority of the third scheduling queue. Correspondingly, if the third scheduling queue does not cache data packets of any other object block except the first data packet, the forwarding node updates the priority of the third scheduling queue.
• The relevant implementation is similar to the case in the first enqueuing method in which the target attribute is the arrival time of the first data packet of the corresponding object block, and is not described again here.
• In the third enqueuing method, the priority of a scheduling queue is determined based on the target attribute of the object blocks cached by that scheduling queue, while a scheduling queue caches multiple object blocks whose first packets do not arrive consecutively; that is, first packets that arrive at the forwarding node adjacently are cached in different scheduling queues. In this scenario, when all data packets of one of the object blocks cached in a certain scheduling queue have been sent, the priority of the scheduling queues needs to be updated.
• In this case, the forwarding node may update the priority of a scheduling queue in the following manner: if all the data packets of one of the object blocks cached by any one of the multiple scheduling queues have been sent, the priority of that scheduling queue is updated so that its updated priority is lower than the priorities of the other scheduling queues that have cached data packets.
  • the priorities of other scheduling queues that have cached data packets may or may not be updated. If updated, the size relationship between the priorities of other scheduling queues that have cached data packets before and after the update remains unchanged.
• Optionally, the priority update may be triggered only when all data packets of a certain object block in the scheduling queue with the highest priority have been sent. In this scenario, if all the data packets of one of the object blocks cached by any one of the multiple scheduling queues have been sent, it is still necessary to determine whether that scheduling queue is the scheduling queue with the highest priority. If it is not the scheduling queue with the highest priority, no priority update is triggered.
• If the scheduling queue is the scheduling queue with the highest priority, an update of the priorities is triggered.
  • the scheduling queues that have all sent the data packets of at least one object block cached before the current time are determined, and the priorities of these scheduling queues are set to be lower than the priorities of other scheduling queues that have cached data packets.
  • the priorities of other scheduling queues that have buffered data packets may or may not be updated. If updated, the size relationship between the priorities of other scheduling queues that have cached data packets before and after the update remains unchanged.
• As for how the forwarding node determines whether all data packets of an object block have been sent, reference may also be made to the first enqueuing method, which is not described again here.
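• The following sketch shows one way the completion of a single object block could be detected (here via a per-block packet counter and the tail packet label, both assumptions) and how the highest-priority queue could then be rotated to the back.
```python
# Illustrative sketch for the third enqueuing method: the forwarding node
# tracks how many packets of each object block are still cached; once the
# last one (e.g. the packet carrying the tail label) has been sent and the
# queue is currently the highest-priority queue, the priorities rotate.

def on_packet_sent(priority_order, queue, packet, outstanding):
    """`outstanding` maps object-block id -> number of packets still cached."""
    outstanding[packet.object_id] -= 1
    block_done = outstanding[packet.object_id] == 0 and packet.is_tail
    if block_done and priority_order and priority_order[0] is queue:
        # Move the queue behind the other queues that still cache data packets,
        # keeping their relative order unchanged.
        priority_order.remove(queue)
        priority_order.append(queue)
```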
  • Figure 7 is a flow chart of a data packet sending method provided by an embodiment of the present application. This method is used to explain how a forwarding node sends data packets in each scheduling queue. As shown in Figure 7, the method includes the following steps 701 and 702.
• Step 701: The forwarding node determines the first scheduling queue from multiple scheduling queues based on the priority of each of the multiple scheduling queues. Each of the multiple scheduling queues is used to cache data packets of at least one object block, and data packets of the same object block are cached in the same scheduling queue. The priority of each scheduling queue is determined based on the target attribute of the object block to which the data packets cached in the corresponding scheduling queue belong, and the target attribute is an attribute that remains unchanged during the sending process of the data packets of the corresponding object block.
• As described above, each scheduling queue is used to cache data packets of at least one object block; that is, each scheduling queue is used to cache data packets of one object block, or of two or more object blocks. In other words, data packets of the same object block are cached in the same scheduling queue.
• On this basis, a priority can be configured for each scheduling queue. In this way, when forwarding data packets, the forwarding node can quickly select a scheduling queue from the multiple scheduling queues based on the priority of each scheduling queue, and then send the data packets in the selected scheduling queue. Based on this, the following technical effects can be achieved:
• Because the priority of each scheduling queue is related to a fixed target attribute of the object blocks it caches rather than to specific data packets, the relative order of the priorities of the scheduling queues basically does not change within a short period of time.
• While the priorities remain unchanged, the forwarding node will select data packets from the same scheduling queue to send each time within this short period of time. And since the data packets of the same object block are cached in the same scheduling queue, there is a high probability that the data packets sent by the forwarding node in this short period of time belong to the same object block. In this way, the data packets of the same object block can be sent together as much as possible, thus reducing the frequency of interleaved sending between data packets of different object blocks.
• In step 701, the forwarding node determines the first scheduling queue specifically by selecting the scheduling queue with the highest priority from the multiple scheduling queues as the first scheduling queue.
  • Step 702 The forwarding node sends the data packet in the first scheduling queue.
• In one implementation, the process of step 702 may be: the forwarding node determines a second data packet from the first scheduling queue, where the second data packet is the data packet with the earliest enqueuing time in the first scheduling queue; the forwarding node then sends the second data packet. This method is simple to operate and easy to implement.
• In another implementation, the process of step 702 may be: the forwarding node determines the object block to which the most recently sent data packet before the current time belongs, and then selects a data packet belonging to that object block from the first scheduling queue and sends it. In this way, it is strictly guaranteed that data packets of different object blocks will not be sent interleaved.
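• The two variants of step 702 described above could look roughly as follows; the packet attributes and the assumption that the packet list is ordered by enqueuing time are for illustration only.
```python
# Illustrative sketch of step 701/702: pick the highest-priority scheduling
# queue, then send either the earliest-enqueued packet (first variant) or a
# packet belonging to the object block of the most recently sent packet
# (second variant, which strictly avoids interleaving).

def pick_packet(first_queue, last_sent_object_id=None):
    if not first_queue.packets:
        return None
    if last_sent_object_id is not None:
        for pkt in first_queue.packets:              # second variant
            if pkt.object_id == last_sent_object_id:
                first_queue.packets.remove(pkt)
                return pkt
    return first_queue.packets.pop(0)                # first variant: earliest enqueued
```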
• In the overall flow, the sending end packetizes the object block to obtain multiple data packets and marks the first packet and the last packet among them; that is, the first packet carries the first packet label and the last packet carries the tail packet label. In addition, each data packet also carries the identifier of the object block. The sending end then sends these multiple data packets.
• When the forwarding node receives the first packet, it selects a scheduling queue and adds the first packet to the selected scheduling queue (that is, the first packet is enqueued). In addition, the forwarding node determines the priority of the scheduling queue based on the arrival order of the object blocks (that is, the arrival time of the first packets) or the DL of the object blocks (that is, the allowed network delay). The forwarding node then forwards the data packets in each scheduling queue based on the priority of the scheduling queues. After receiving the data packets, the receiving end splices the data packets into object blocks, thereby enabling the receiving end to receive data on a per-object-block basis.
• When the forwarding node is sending packets, if all the data packets of a certain object block have been sent, the priority update of each scheduling queue can be triggered.
• For each step in FIG. 8, reference can be made to the embodiments of FIGS. 5 to 7, and FIG. 8 does not constitute a limitation on the embodiments of FIGS. 5 to 7.
• A single-object-block, single-scheduling-queue mode can be implemented: the data packets of one object block are cached in one scheduling queue to avoid interleaved transmission of data packets of different object blocks, as in the first enqueuing method in the embodiment of Figure 6.
• Alternatively, multiple object blocks can be cached in one scheduling queue, as in the second enqueuing method and the third enqueuing method in the embodiment of Figure 6.
• Circular scheduling of the scheduling queues can be realized, so that each scheduling queue has the opportunity to become the highest-priority queue.
• Object blocks enter the scheduling queues in a loop, so each scheduling queue may become the scheduling queue with the highest priority, thereby realizing priority circulation among the different scheduling queues.
• Through this round-robin priority method, fair forwarding opportunities among the scheduling queues are ensured, thereby avoiding starvation of any particular scheduling queue.
• When all data packets of an object block have been sent, the priority of the corresponding scheduling queue is updated.
  • the scheduling queue to be added is selected based on the data size of the object block or the first threshold (upper limit of the number of bearers) or the second threshold (token) of the scheduling queue.
• Compared with traditional scheduling, which depends directly on the priority of individual data packets without considering information about the object block, the embodiments of the present application take into account information such as the data size of the object block and the first packet and tail packet of the object block, and schedule accordingly, thereby achieving a de-interleaving effect between different object blocks.
  • the priority of the scheduling queue where the object block is located can be determined based on the network allowable delay, so that more urgent objects can be sent out faster.
• In summary, the embodiments of this application send object blocks based on scheduling queues. Because data packets of the same object block are cached in the same scheduling queue, and the priority of a scheduling queue is related to the fixed target attribute of the object blocks it caches rather than to specific data packets, the relative order of the priorities of the scheduling queues basically does not change within a short period of time. While the priorities remain unchanged, the forwarding node selects data packets from the same scheduling queue to send each time within this short period. And since the data packets of the same object block are cached in the same scheduling queue, there is a high probability that the data packets sent by the forwarding node in this short period belong to the same object block. In this way, the data packets of the same object block can be sent together as much as possible, thus reducing the frequency of interleaved sending between data packets of different object blocks.
  • the embodiment of the present application can be implemented by scheduling queues, and the hardware overhead is small.
• The embodiments of this application enable the forwarding node to forward data packets at high speed without loss of throughput.
• If comparators and matchers are used to reduce the frequency of interleaved transmission between data packets of different object blocks, the forwarding efficiency of the forwarding node is reduced.
  • the embodiment of the present application can be implemented by using a scheduling queue, thereby achieving high-speed forwarding of data packets by the forwarding node.
• In some related techniques, a resource reservation method (such as reserving a time slice for a certain object block) is used to forward each object block in order to avoid interleaved transmission between different object blocks.
  • the embodiment of the present application does not require resource reservation by intermediate nodes and does not require a negotiation process to achieve efficient and low-latency transmission of data packets.
  • FIG 9 is a schematic structural diagram of a forwarding node provided by an embodiment of the present application. As shown in Figure 9, the forwarding node 900 includes the following modules 901-903.
  • the processing module 901 is configured to determine a first scheduling queue from multiple scheduling queues based on the priority of each scheduling queue in the multiple scheduling queues. For specific implementation, please refer to step 701 in the embodiment of Figure 7 .
  • each scheduling queue in multiple scheduling queues is used to cache data packets of at least one object block, and data packets of the same object block are cached in the same scheduling queue.
• The priority of each scheduling queue is determined based on the target attribute of the object block to which the data packets cached in the corresponding scheduling queue belong, and the target attribute is an attribute that remains unchanged during the sending process of the data packets of the corresponding object block;
  • the sending module 902 is used to send the data packets in the first scheduling queue.
  • the forwarding node also includes:
  • the receiving module 903 is configured to receive a first data packet, where the first data packet carries an identifier of a target object block, and the target object block is the object block to which the first data packet belongs.
• The processing module 901 is also configured to: if the first data packet does not carry a first packet label, add the first data packet to a second scheduling queue based on the identifier of the target object block, where the second scheduling queue is a scheduling queue among the multiple scheduling queues that has already cached data packets of the target object block, and the first packet label is used to indicate whether the first data packet is the first data packet of the target object block. For specific implementation, please refer to step 602 in the embodiment of Figure 6.
  • the processing module 901 is also configured to: if the first data packet also carries a first packet label, select one scheduling queue from multiple scheduling queues as the third scheduling queue, and add the first data packet to the third scheduling queue.
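• A compact sketch of this enqueue decision is given below; node.queue_of_object and node.select_third_queue are hypothetical helpers standing in for the behavior described above.
```python
# Illustrative sketch of the enqueue decision made by the processing module:
# a packet without the first-packet label follows its object block's existing
# queue (the second scheduling queue); a packet with the label triggers the
# selection of a third scheduling queue for the new object block.

def enqueue_packet(node, pkt):
    if not pkt.is_head:
        queue = node.queue_of_object[pkt.object_id]   # second scheduling queue
    else:
        queue = node.select_third_queue(pkt)          # new object block
        node.queue_of_object[pkt.object_id] = queue
    queue.packets.append(pkt)
```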
  • processing module is used to:
  • each of the plurality of scheduling queues buffers at most one object block of data packets.
  • the target attribute is the allowable network delay of the corresponding object block, and the first data packet also carries the allowable network delay of the target object block;
• The processing module is also used to: update the priority of the third scheduling queue so that the updated priority of the third scheduling queue is higher than the priority of a fourth scheduling queue and lower than the priority of a fifth scheduling queue, where the fourth scheduling queue refers to a scheduling queue, among the multiple scheduling queues, whose cached object blocks have an allowed network delay greater than that of the target object block, and the fifth scheduling queue refers to a scheduling queue, among the multiple scheduling queues, whose cached object blocks have an allowed network delay smaller than that of the target object block.
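• The placement of the third scheduling queue between the fourth and fifth scheduling queues could be realized, for example, as an ordered insertion by allowed network delay, as sketched below under these assumptions.
```python
# Illustrative sketch: when the target attribute is the allowed network delay,
# the new (third) scheduling queue is inserted so that queues holding object
# blocks with a smaller allowed delay keep a higher priority.
import bisect

def insert_by_allowed_delay(priority_order, delays, new_queue, new_delay):
    """`priority_order` is sorted from highest priority (smallest allowed
    delay) to lowest; `delays` is the parallel list of allowed delays."""
    pos = bisect.bisect_left(delays, new_delay)
    # Queues with a larger allowed delay (fourth scheduling queues) end up
    # below the new queue; queues with a smaller allowed delay (fifth
    # scheduling queues) stay above it.
    priority_order.insert(pos, new_queue)
    delays.insert(pos, new_delay)
```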
• Optionally, the processing module is used to: select a scheduling queue that has not cached data packets from the multiple scheduling queues as the third scheduling queue.
• The processing module is also used to: if the remaining capacity of the sixth scheduling queue is sufficient to cache the target object block, use the sixth scheduling queue as the third scheduling queue.
• Optionally, the processing module determines the remaining capacity of the sixth scheduling queue based on the first threshold and the number of cached object blocks (see the first-threshold implementation above); in this case, the remaining capacity of the sixth scheduling queue being insufficient to cache the data packets of the target object block means that the remaining capacity of the sixth scheduling queue is 0.
• Optionally, the processing module is used to: determine the total size of the cached data, i.e., the total size of the data packets already cached in the sixth scheduling queue, and determine the difference between the second threshold and the total size of the cached data as the remaining capacity of the sixth scheduling queue, where the second threshold is the total size of the data packets that the sixth scheduling queue can cache. In this case, the first data packet also carries the data size of the target object block, and the remaining capacity of the sixth scheduling queue being insufficient to cache the data packets of the target object block means that the remaining capacity of the sixth scheduling queue is smaller than the data size of the target object block.
  • the target attribute is the arrival time of the first data packet of the corresponding object block
• The processing module is also used to:
• The processing module is also used to: if the data packets of all object blocks cached by any one of the scheduling queues have all been sent, update the priority of that scheduling queue so that its updated priority is lower than the priorities of the other scheduling queues that have cached data packets.
  • multiple scheduling queues are arranged in sequence
• The processing module is used to: determine the sixth scheduling queue from the multiple scheduling queues, where the sixth scheduling queue is the scheduling queue that received the most recent first packet before the current time, and determine a scheduling queue ranked after the sixth scheduling queue as the third scheduling queue.
  • the target attribute is the arrival time of the first data packet of the corresponding object block
• The processing module is also used to: if the third scheduling queue does not cache data packets of any other object block except the first data packet, update the priority of the third scheduling queue so that the updated priority of the third scheduling queue is lower than the priorities of the other scheduling queues that have cached data packets.
• The processing module is also used to: if all the data packets of one of the object blocks cached by any one of the multiple scheduling queues have been sent, update the priorities of the multiple scheduling queues so that the updated priority of that scheduling queue is lower than the updated priorities of the other scheduling queues that have cached data packets.
  • processing module is used to:
• Because data packets of the same object block are cached in the same scheduling queue, and the priority of a scheduling queue is related to the fixed target attribute of the object blocks it caches rather than to specific data packets, the relative order of the priorities of the scheduling queues basically does not change within a short period of time.
  • the forwarding node will select the data packet in the same scheduling queue to send each time within this short period of time.
• And since the data packets of the same object block are cached in the same scheduling queue, there is a high probability that the data packets sent by the forwarding node in this short period of time belong to the same object block. In this way, the data packets of the same object block can be sent together as much as possible, thus reducing the frequency of interleaved sending between data packets of different object blocks.
  • the embodiment of the present application can achieve this by scheduling queues and corresponding priorities, with low hardware overhead and high forwarding efficiency.
• It should be noted that when the forwarding node provided in the above embodiment forwards data packets, the division into the above functional modules is used only as an example. In practical applications, the above functions can be allocated to different functional modules as needed; that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above.
  • the forwarding node and the method embodiment for sending data packets provided in the above embodiments belong to the same concept. Please refer to the method embodiment for details of the specific implementation process, which will not be described again here.
  • FIG 10 is a schematic structural diagram of a sending end provided by an embodiment of the present application. As shown in Figure 10, the sending end 1000 includes the following modules 1001-1002.
• The processing module 1001 is used to determine multiple data packets of an object block to be sent, where each of the multiple data packets carries the identifier of the object block, and the first packet among the multiple data packets carries a first packet label used to indicate that the corresponding data packet is the first data packet of the object block to be sent. For specific implementation, please refer to step 501 in the embodiment of Figure 5.
  • the sending module 1002 is used to send multiple data packets. For specific implementation methods, reference can be made to step 502 in the embodiment of Figure 5 .
  • the first packet also carries the allowed network delay of the object block to be sent.
  • the first packet also carries the data size of the object block to be sent.
  • the tail packet in the multiple data packets carries a tail packet label, and the tail packet label is used to indicate that the corresponding data packet is the last data packet of the object block to be sent.
  • the purpose of carrying the first packet label in the first packet is to facilitate the forwarding node to identify the new object block and allocate a scheduling queue to the new object block.
  • Each data packet carries the identifier of the object block so that the forwarding node can cache data packets of the same object block in the same scheduling queue.
  • the forwarding node can implement scheduling of data packets in units of object blocks through the embodiments shown in Figures 6 and 7.
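• A minimal sketch of the sender-side packetization described above is shown below; the field names and the MTU-based splitting are assumptions for illustration.
```python
# Illustrative sketch of the sending end: the object block is split into
# packets, each carrying the object-block identifier; the first packet also
# carries the first-packet label (plus, optionally, the allowed network delay
# and the data size of the object block), and the last packet carries the
# tail-packet label.

def packetize(object_id, payload, mtu, allowed_delay=None):
    chunks = [payload[i:i + mtu] for i in range(0, len(payload), mtu)] or [b""]
    packets = []
    for i, chunk in enumerate(chunks):
        pkt = {
            "object_id": object_id,       # identifier of the object block
            "data": chunk,
            "is_head": i == 0,            # first packet label
            "is_tail": i == len(chunks) - 1,  # tail packet label
        }
        if i == 0:
            pkt["object_size"] = len(payload)
            if allowed_delay is not None:
                pkt["allowed_delay"] = allowed_delay
        packets.append(pkt)
    return packets
```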
• It should be noted that when the sending end provided in the above embodiment sends data packets, the division into the above functional modules is used only as an example. In practical applications, the above functions can be allocated to different functional modules as needed; that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above.
  • the sending end provided by the above embodiments and the method embodiments for sending data packets belong to the same concept. Please refer to the method embodiments for the specific implementation process, which will not be described again here.
  • FIG. 11 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • the forwarding node, sending end, receiving end, etc. in the foregoing embodiments can all be implemented by the computer device shown in Figure 11.
  • the computer device includes a processor 1101, a communication bus 1102, a memory 1103 and at least one communication interface 1104.
  • the processor 1101 may be a general central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits used to control the execution of the program of the present application.
  • CPU central processing unit
  • ASIC application-specific integrated circuit
  • Communication bus 1102 is used to transfer information between the above-mentioned components.
• The memory 1103 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, or a random access memory (RAM) or another type of dynamic storage device that can store information and instructions; it may also be an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compressed optical discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory 1103 may exist independently and be connected to the processor 1101 through the communication bus 1102.
  • the memory 1103 may also be integrated with the processor 1101.
  • the memory 1103 is used to store the program code for executing the solution of the present application, and the processor 1101 controls the execution.
  • the processor 1101 is used to execute program codes stored in the memory 1103.
• The program code may include one or more software modules. Both the forwarding node and the sending end can implement the corresponding functions through the processor 1101 and one or more software modules in the program code stored in the memory 1103.
  • the communication interface 1104 uses any device such as a transceiver to communicate with other devices or communication networks.
• The communication network can be an Ethernet, a wireless access network (RAN), a wireless local area network (WLAN), etc.
  • the computer device may include multiple processors, such as the processor 1101 and the processor 1105 shown in FIG. 11 .
  • processors may be a single-CPU processor or a multi-CPU processor.
  • a processor here may refer to one or more devices, circuits, and/or processing cores for processing data (eg, computer program instructions).
  • the above computer device may be a general computer device or a special computer device.
  • the computer device can be a desktop computer, a portable computer, a network server, a personal digital assistant (PDA), a mobile phone, a tablet computer, a wireless terminal device, a router and other communication devices or an embedded device.
  • PDA personal digital assistant
  • the embodiments of the present application do not limit the type of computer equipment.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
• The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center through wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) means.
• The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media.
• The available media may be magnetic media (such as floppy disks, hard disks, or tapes), optical media (such as digital versatile discs (DVD)), or semiconductor media (such as solid state disks (SSD)), etc.

Abstract

Disclosed are a packet transmission method, a forwarding node, a transmission end and a storage medium, which belong to the technical field of communications. The method comprises: a forwarding node determining a first scheduling queue from a plurality of scheduling queues on the basis of the priority of each of the plurality of scheduling queues, wherein each of the plurality of scheduling queues is used for caching packets of at least one object block, the packets of the same object block are cached in the same scheduling queue, the priority of each scheduling queue is determined on the basis of a target attribute of an object block to which the packets cached in the corresponding scheduling queue belong, and the target attribute is an attribute remaining unchanged during the transmission of the packets of the corresponding object block; and the forwarding node transmitting packets in the first scheduling queue. Compared with reducing the frequency of interleaved transmission of packets of different object blocks on the basis of a comparator and a matcher, the embodiments of the present application can have the same function by means of scheduling queues and corresponding priorities, which requires small hardware overheads and achieves a high forwarding efficiency.

Description

Data packet sending method, forwarding node, sending end and storage medium
This application claims priority to the Chinese patent application with application number 202210575912.6, titled "Data packet sending method, forwarding node, sending end and storage medium" and filed on May 24, 2022, the entire content of which is incorporated by reference in this application.
Technical Field
The embodiments of the present application relate to the field of communication technology, and in particular to a data packet sending method, a forwarding node, a sending end and a storage medium.
Background
The basic unit of data processed by upper-layer applications is an object block. An object block includes a complete piece of data; for example, the data corresponding to one video frame in a media application is an object block. The basic unit of data transmitted over the network is a packet. Therefore, when the sending end sends data to the receiving end, it needs to divide the object block provided by the upper-layer application into multiple data packets, send each data packet to a forwarding node of the network, and the forwarding node sends each data packet to the receiving end.
In the related art, a best effort queue is configured on the forwarding node. Each time the forwarding node receives a data packet, it adds the data packet to the best-effort queue. When the forwarding node sends data, it uses a comparator to select the data packet with the earliest enqueuing time from the best-effort queue, uses a matcher to determine, from the best-effort queue, the data packets that belong to the same object block as that earliest-enqueued data packet, and then sends these data packets in turn. In this way, data packets belonging to the same object block can be sent together as much as possible, thereby reducing the frequency of interleaved transmission of data packets of different object blocks.
However, the above process of sending data packets is complicated, resulting in high algorithm overhead at the forwarding node, which affects the forwarding efficiency of the forwarding node.
Summary of the Invention
Embodiments of the present application provide a data packet sending method, a forwarding node, a sending end and a storage medium, which can improve the forwarding efficiency of the forwarding node. The technical solution is as follows:
第一方面,提供了一种数据包转发方法。在该方法中,转发节点基于多个调度队列中每个调度队列的优先级,从多个调度队列中确定第一调度队列;其中,多个调度队列中的每个调度队列用于缓存至少一个对象块的数据包、且同一对象块的数据包缓存在同一调度队列中,每个调度队列的优先级是基于相应调度队列缓存的数据包所属对象块的目标属性确定的,目标属性为在相应对象块的数据包发送过程中保持不变的属性;转发节点发送第一调度队列中的数据包。The first aspect provides a data packet forwarding method. In this method, the forwarding node determines the first scheduling queue from the multiple scheduling queues based on the priority of each scheduling queue in the multiple scheduling queues; wherein each scheduling queue in the multiple scheduling queues is used to cache at least one The data packets of the object block and the data packets of the same object block are cached in the same dispatch queue. The priority of each dispatch queue is determined based on the target attribute of the object block to which the data packet cached by the corresponding dispatch queue belongs. The target attribute is in the corresponding dispatch queue. The properties of the object block remain unchanged during the data packet sending process; the forwarding node sends the data packet in the first scheduling queue.
因为同一对象块的数据包是缓存在同一调度队列中的,且调度队列的优先级和该调度队列缓存的对象块的固定的目标属性相关,而不是和具体的数据包相关,如此各个调度队列的优先级之间的大小关系在较短一段时间内基本不会发生变化。在各个调度队列的优先级不变的情况下,转发节点在该较短时间内每次都会选择同一调度队列中的数据包发送。又由于同 一对象块的数据包缓存在同一调度队列中,因此在该较短时间内转发节点发送的数据包属于同一对象块的概率很大,以此来实现同一对象块中的数据块尽量一起发送,从而降低了不同对象块的数据包之间交织发送的频率。Because data packets of the same object block are cached in the same dispatch queue, and the priority of the dispatch queue is related to the fixed target attribute of the object block cached by the dispatch queue, rather than to the specific data packet, so each dispatch queue The size relationship between the priorities will basically not change in a short period of time. When the priority of each scheduling queue remains unchanged, the forwarding node will select the data packet in the same scheduling queue to send each time within this short period of time. And because of the same The data packets of an object block are cached in the same scheduling queue, so the data packets sent by the forwarding node within this short period of time have a high probability of belonging to the same object block. In this way, the data blocks in the same object block are sent together as much as possible. This reduces the frequency of interleaved transmission of data packets of different object blocks.
相对于基于比较器和匹配器来降低不同对象块的数据包之间交织发送的频率,本申请实施例通过调度队列以及对应的优先级便可实现,硬件开销小,且转发效率高。Compared with reducing the frequency of interleaved transmission of data packets of different object blocks based on comparators and matchers, the embodiment of the present application can achieve this by scheduling queues and corresponding priorities, with low hardware overhead and high forwarding efficiency.
基于第一方面提供的数据包转发方法,在一种可能的实现方式中,在该方法中,转发节点接收第一数据包,第一数据包携带目标对象块的标识,目标对象块为第一数据包所属的对象块;如果第一数据包没有携带首包标签,转发节点基于目标对象块的标识,将第一数据包添加至第二调度队列,第二调度队列为多个调度队列中已缓存有目标对象块的数据包的调度队列,首包标签用于指示第一数据包是否为目标对象块的首个数据包。Based on the data packet forwarding method provided in the first aspect, in a possible implementation manner, in this method, the forwarding node receives the first data packet, the first data packet carries the identifier of the target object block, and the target object block is the first The object block to which the data packet belongs; if the first data packet does not carry the first packet label, the forwarding node adds the first data packet to the second scheduling queue based on the identification of the target object block. The second scheduling queue is the one already in multiple scheduling queues. A scheduling queue that caches data packets of the target object block. The first packet tag is used to indicate whether the first data packet is the first data packet of the target object block.
如果第一数据包没有携带首包标签,则表明第一数据包不是目标对象块的首包。这种场景下,只需将第一数据包添加到当前时间之前缓存目标对象块的数据包的调度队列即可。通过该方式,便可实现将同一对象块的数据包缓存在同一调度队列中。If the first data packet does not carry the first packet label, it indicates that the first data packet is not the first packet of the target object block. In this scenario, just add the first data packet to the scheduling queue of the data packets that cache the target object block before the current time. In this way, data packets of the same object block can be cached in the same scheduling queue.
基于第一方面提供的数据包转发方法,在一种可能的实现方式中,在该方法中,转发节点接收第一数据包之后,如果第一数据包还携带首包标签,转发节点从多个调度队列中选择一个调度队列作为第三调度队列,将第一数据包添加至第三调度队列。Based on the data packet forwarding method provided in the first aspect, in a possible implementation manner, in this method, after the forwarding node receives the first data packet, if the first data packet also carries a first packet label, the forwarding node receives the first data packet from multiple Select one of the dispatch queues as the third dispatch queue, and add the first data packet to the third dispatch queue.
如果第一数据包携带首包标签,则表明第一数据包是目标对象块的首包。这种场景下,对于转发节点而言,目标对象块相当于新的对象块,此时需要为目标对象块分配一个调度队列,以便后续目标对象块的数据包均缓存到分配的调度队列中。If the first data packet carries a first packet tag, it indicates that the first data packet is the first packet of the target object block. In this scenario, for the forwarding node, the target object block is equivalent to a new object block. At this time, a scheduling queue needs to be allocated for the target object block, so that subsequent data packets of the target object block are cached in the allocated scheduling queue.
基于第一方面提供的数据包转发方法,在一种可能的实现方式中,转发节点从多个调度队列中选择一个调度队列作为第三调度队列的实现过程为:转发节点从多个调度队列中选择未缓存数据包的调度队列作为第三调度队列。Based on the data packet forwarding method provided in the first aspect, in a possible implementation manner, the forwarding node selects one scheduling queue from multiple scheduling queues as the third scheduling queue. The implementation process is: the forwarding node selects one scheduling queue from multiple scheduling queues. Select a dispatch queue with no buffered data packets as the third dispatch queue.
为了实现将不同对象块缓存在不同的调度队列中,对于新接收到的目标对象块,转发节点可以从剩下的空的调度队列中选择一个调度队列作为第三调度队列。后续将这种入队方式称为第一种入队方式。In order to cache different object blocks in different dispatch queues, for the newly received target object block, the forwarding node can select a dispatch queue from the remaining empty dispatch queues as the third dispatch queue. This way of joining the team will be called the first way of joining the team later.
基于第一方面提供的数据包转发方法,在一种可能的实现方式中,多个调度队列中的每个调度队列至多缓存一个对象块的数据包。通过第一种入队方式,可以实现每个调度队列仅仅用于缓存一个对象块。Based on the data packet forwarding method provided in the first aspect, in one possible implementation manner, each of the multiple scheduling queues caches at most one object block of data packets. Through the first queuing method, each scheduling queue can be used to cache only one object block.
基于第一方面提供的数据包转发方法,在一种可能的实现方式中,目标属性为相应对象块的网络允许时延,第一数据包还携带目标对象块的网络允许时延。Based on the data packet forwarding method provided in the first aspect, in a possible implementation manner, the target attribute is the allowed network delay of the corresponding object block, and the first data packet also carries the allowed network delay of the target object block.
这种场景下,转发节点从多个调度队列中选择未缓存数据包的调度队列作为第三调度队列之后,转发节点更新第三调度队列的优先级,以使更新后的第三调度队列的优先级高于第四调度队列的优先级、且低于第五调度队列的优先级,第四调度队列是指多个调度队列中缓存的对象块的网络允许时延大于目标对象块的网络允许时延的调度队列,第五调度队列是指多个调度队列中缓存的对象块的网络允许时延小于目标对象块的网络允许时延的调度队列。In this scenario, after the forwarding node selects a scheduling queue that has not cached data packets from multiple scheduling queues as the third scheduling queue, the forwarding node updates the priority of the third scheduling queue so that the updated third scheduling queue has priority. The level is higher than the priority of the fourth scheduling queue and lower than the priority of the fifth scheduling queue. The fourth scheduling queue means that the network allowed delay of the object blocks cached in multiple scheduling queues is greater than the network allowed time of the target object block. The fifth scheduling queue refers to a scheduling queue in which the allowable network delay of the object blocks cached in multiple scheduling queues is smaller than the allowable network delay of the target object block.
在第一种入队方式中,通过上述实现方式,可以实现对网络允许时延要求高的对象块所在的调度队列的优先级更高,从而保证对网络允许时延要求高的对象块优先发送出去。In the first enqueuing method, through the above implementation method, the priority of the scheduling queue where the object blocks with high network allowable delay requirements are located is higher, thereby ensuring that the object blocks with high network allowable delay requirements are sent first. go out.
基于第一方面提供的数据包转发方法,在一种可能的实现方式中,转发节点从多个调度队列中选择一个调度队列作为第三调度队列的实现过程为:转发节点确定第六调度队列的剩 余容量,第六调度队列为当前时间之前最近一次接收的首包所在的调度队列;如果第六调度队列的剩余容量不足以缓存目标对象块,则转发节点从多个调度队列中选择未缓存数据包的调度队列作为第三调度队列。相应地,转发节点确定第六调度队列的剩余容量之后,如果第六调度队列的剩余容量足以缓存目标对象块,则转发节点将第六调度队列作为第三调度队列。Based on the data packet forwarding method provided in the first aspect, in a possible implementation manner, the implementation process of the forwarding node selecting one scheduling queue as the third scheduling queue from multiple scheduling queues is as follows: the forwarding node determines the number of the sixth scheduling queue. left The remaining capacity, the sixth scheduling queue is the scheduling queue where the first packet received most recently before the current time is located; if the remaining capacity of the sixth scheduling queue is not enough to cache the target object block, the forwarding node selects uncached data from multiple scheduling queues The dispatch queue of the packet is used as the third dispatch queue. Accordingly, after the forwarding node determines the remaining capacity of the sixth scheduling queue, if the remaining capacity of the sixth scheduling queue is sufficient to cache the target object block, the forwarding node uses the sixth scheduling queue as the third scheduling queue.
通过上述实现方式,可以实现每个调度队列用于缓存若干个对象块,在一个调度队列满载的情况下,新来的对象块才会进入下一个空的调度队列。后续将这种入队方式称为第二种入队方式。Through the above implementation method, each scheduling queue can be used to cache several object blocks. When one scheduling queue is full, new object blocks will enter the next empty scheduling queue. This way of joining the team will be called the second way of joining the team later.
基于第一方面提供的数据包转发方法,在一种可能的实现方式中,转发节点确定第六调度队列的剩余容量的实现过程为:转发节点确定已缓存对象块数量,已缓存对象块数量为第六调度队列已缓存的数据包所属的对象块的数量;转发节点将第一阈值和已缓存对象块数量之间的差值,确定为第六调度队列的剩余容量,第一阈值为第六调度队列能够缓存的对象块的数量;第六调度队列的剩余容量不足以缓存目标对象块的数据包是指:第六调度队列的剩余容量为0。Based on the data packet forwarding method provided in the first aspect, in a possible implementation manner, the implementation process of the forwarding node determining the remaining capacity of the sixth scheduling queue is: the forwarding node determines the number of cached object blocks, and the number of cached object blocks is The number of object blocks to which the cached data packets of the sixth dispatch queue belong; the forwarding node determines the difference between the first threshold and the number of cached object blocks as the remaining capacity of the sixth dispatch queue, and the first threshold is the sixth dispatch queue. The number of object blocks that the dispatch queue can cache; the remaining capacity of the sixth dispatch queue is insufficient to cache the data packet of the target object block means: the remaining capacity of the sixth dispatch queue is 0.
基于第一方面提供的数据包转发方法,在一种可能的实现方式中,转发节点确定第六调度队列的剩余容量的实现过程为:转发节点确定已缓存数据总大小,已缓存数据总大小为第六调度队列已缓存的数据包的总大小;转发节点将第二阈值和已缓存数据总大小之间的差值,确定为第六调度队列的剩余容量,第二阈值为第六调度队列能够缓存的数据包的总大小;第一数据包还携带目标对象块的数据大小,第六调度队列的剩余容量不足以缓存目标对象块的数据包是指:第六调度队列的剩余容量小于目标对象块的数据大小。Based on the data packet forwarding method provided in the first aspect, in a possible implementation manner, the implementation process of the forwarding node determining the remaining capacity of the sixth scheduling queue is: the forwarding node determines the total size of the cached data, and the total size of the cached data is The total size of the cached data packets of the sixth scheduling queue; the forwarding node determines the difference between the second threshold and the total size of the cached data as the remaining capacity of the sixth scheduling queue, and the second threshold is the amount that the sixth scheduling queue can The total size of cached data packets; the first data packet also carries the data size of the target object block, and the remaining capacity of the sixth scheduling queue is not enough to cache the data packet of the target object block means: the remaining capacity of the sixth scheduling queue is smaller than the target object The data size of the block.
在第二种入队方式中,基于上述两种实现方式,可以根据调度队列能够承载的对象块的最大数量或能够承载的最大数据大小来确定一个队列是否满载,提高了首包入队的灵活性。In the second enqueuing method, based on the above two implementation methods, whether a queue is full can be determined based on the maximum number of object blocks that the scheduling queue can carry or the maximum data size that can be carried, which improves the flexibility of first packet enqueueing. sex.
基于第一方面提供的数据包转发方法,在一种可能的实现方式中,目标属性为相应对象块的首个数据包的到达时间。Based on the data packet forwarding method provided in the first aspect, in a possible implementation manner, the target attribute is the arrival time of the first data packet of the corresponding object block.
这种场景下,转发节点从多个调度队列中选择未缓存数据包的调度队列作为第三调度队列之后,转发节点更新第三调度队列的优先级,以使更新后的第三调度队列的优先级低于其他已缓存有数据包的调度队列的优先级。In this scenario, after the forwarding node selects a scheduling queue that has not cached data packets from multiple scheduling queues as the third scheduling queue, the forwarding node updates the priority of the third scheduling queue so that the updated third scheduling queue has priority. The priority is lower than other scheduling queues that have cached data packets.
在第一种入队方式或第二种入队方式中,基于上述方式,可以实现首包早到达的对象块所在的调度队列的优先级更高。In the first queuing method or the second queuing method, based on the above method, the priority of the scheduling queue where the object block whose first packet arrived earlier is located is higher.
基于第一方面提供的数据包转发方法,在一种可能的实现方式中,在该方法中,如果多个调度队列中任一调度队列缓存的所有对象块的数据包全部发送完毕,则更新该任一调度队列的优先级,以使该任一调度队列更新后的优先级,低于其他已缓存有数据包的调度队列的优先级。Based on the data packet forwarding method provided in the first aspect, in a possible implementation manner, in this method, if all the data packets of all object blocks cached by any one of the multiple scheduling queues are sent, then the The priority of any scheduling queue, so that the updated priority of any scheduling queue is lower than the priority of other scheduling queues that have cached data packets.
在第一种入队方式中,由于一个调度队列仅仅用于缓存一个对象块,因此当某个调度队列缓存的对象块(也即全部对象块)的数据包全部发送完毕时,需要触发调度队列的优先级的更新。In the first enqueuing method, since a scheduling queue is only used to cache one object block, when all the data packets of the object block cached by a scheduling queue (that is, all object blocks) are sent, the scheduling queue needs to be triggered. priority update.
在第二种入队方式中,由于调度队列的优先级是基于调度队列缓存的首个对象块的目标属性确定的,而一个调度队列缓存有多个对象块且这多个对象块的首包是连续到达的,因此,在某个调度队列中缓存的全部对象块的数据包全部发送完毕的情况下,需要对调度队列的优先级进行更新。 In the second enqueuing method, because the priority of the dispatch queue is determined based on the target attribute of the first object block cached by the dispatch queue, and a dispatch queue cache has multiple object blocks and the first packet of these multiple object blocks They arrive continuously. Therefore, when all data packets of all object blocks cached in a certain scheduling queue are sent, the priority of the scheduling queue needs to be updated.
基于第一方面提供的数据包转发方法,在一种可能的实现方式中,多个调度队列按照顺序依次排列。这种场景下,转发节点从多个调度队列中选择一个调度队列作为第三调度队列的实现过程为:转发节点从多个调度队列中确定第六调度队列,第六调度队列为当前时间之前最近一次接收到的首包所在的调度队列;将排序在第六调度队列之后的一个调度队列确定为第三调度队列。Based on the data packet forwarding method provided in the first aspect, in a possible implementation manner, multiple scheduling queues are arranged in sequence. In this scenario, the implementation process of the forwarding node selecting one scheduling queue as the third scheduling queue from multiple scheduling queues is as follows: the forwarding node determines the sixth scheduling queue from the multiple scheduling queues, and the sixth scheduling queue is the most recent one before the current time. The scheduling queue where the first packet received at one time is located; a scheduling queue ranked after the sixth scheduling queue is determined as the third scheduling queue.
通过上述实现方式,可以实现每个调度队列用于缓存若干个对象块,新来的对象块轮流进入各个调度队列。后续将这种入队方式称为第三种入队方式。Through the above implementation method, each scheduling queue can be used to cache several object blocks, and new object blocks enter each scheduling queue in turn. This way of joining the team will be called the third way of joining the team later.
Based on the packet sending method provided in the first aspect, in a possible implementation, the target attribute is the arrival time of the first packet of the corresponding object block. In this scenario, after the forwarding node selects one scheduling queue from the multiple scheduling queues as the third scheduling queue, if the third scheduling queue buffers no data packets of any object block other than the first data packet, the forwarding node updates the priority of the third scheduling queue so that the updated priority of the third scheduling queue is lower than the priorities of the other scheduling queues in which data packets are already buffered.
In the third enqueuing manner, the foregoing approach ensures that the scheduling queue holding an object block whose first packet arrived earlier has a higher priority.
Based on the packet sending method provided in the first aspect, in a possible implementation, if all data packets of one of the object blocks buffered in any one of the multiple scheduling queues have been sent, the priorities of the multiple scheduling queues are updated so that the updated priority of that scheduling queue is lower than the updated priorities of the other scheduling queues in which data packets are still buffered.
In the third enqueuing manner, the priority of a scheduling queue is determined based on the target attribute of the object blocks buffered in that queue, a scheduling queue buffers multiple object blocks, and the first packets of these object blocks do not arrive consecutively; that is, first packets that arrive at the forwarding node one after another are buffered in different scheduling queues. In this scenario, to prevent an earlier-arriving object block from experiencing a long transmission time, the priority of a scheduling queue needs to be updated once all data packets of one of the object blocks buffered in that queue have been sent.
Based on the packet sending method provided in the first aspect, in a possible implementation, updating the priority of any scheduling queue may be implemented as follows: the priority of that scheduling queue is updated if it is currently the highest priority.
In this way, the priority update is triggered only when all data packets of an object block in the highest-priority scheduling queue have been sent, which reduces the data processing load on the node.
According to a second aspect, a packet sending method is provided. In this method, a sending end determines multiple data packets of an object block to be sent, where each of the multiple data packets carries an identifier of the object block, and the first packet among the multiple data packets carries a first-packet label indicating that the corresponding data packet is the first packet of the object block to be sent; the sending end then sends the multiple data packets.
Based on the packet sending method provided in the second aspect, in a possible implementation, the first packet further carries the allowed network delay of the object block to be sent.
Based on the packet sending method provided in the second aspect, in a possible implementation, the first packet further carries the data size of the object block to be sent.
Based on the packet sending method provided in the second aspect, in a possible implementation, the tail packet among the multiple data packets carries a tail-packet label indicating that the corresponding data packet is the last data packet of the object block to be sent.
For the technical effects of the packet sending method provided in the second aspect, refer to the first aspect; details are not repeated here.
According to a third aspect, a forwarding node is provided. The forwarding node has the function of implementing the behavior of the packet sending method in the first aspect, and includes at least one module configured to implement the packet sending method provided in the first aspect.
According to a fourth aspect, a sending end is provided. The sending end has the function of implementing the behavior of the packet sending method in the second aspect, and includes at least one module configured to implement the packet sending method provided in the second aspect.
According to a fifth aspect, a forwarding node is provided. The forwarding node includes a processor and a memory. The memory is configured to store a program that supports the forwarding node in performing the packet sending method provided in the first aspect, and to store the data involved in implementing the method performed by the forwarding node in the first aspect. The processor is configured to execute the program stored in the memory. The forwarding node may further include a communication bus used to establish a connection between the processor and the memory.
According to a sixth aspect, a sending end is provided. The sending end includes a processor and a memory. The memory is configured to store a program that supports the sending end in performing the packet sending method provided in the second aspect, and to store the data involved in implementing the method performed by the sending end in the second aspect. The processor is configured to execute the program stored in the memory. The sending end may further include a communication bus used to establish a connection between the processor and the memory.
According to a seventh aspect, a computer-readable storage medium is provided. The computer-readable storage medium stores instructions that, when run on a computer, cause the computer to perform the packet sending method according to the first aspect or the second aspect.
According to an eighth aspect, a computer program product containing instructions is provided. When the computer program product runs on a computer, the computer is caused to perform the packet sending method according to the first aspect or the second aspect.
The technical effects obtained in the second to eighth aspects are similar to those obtained by the corresponding technical means in the first aspect, and are not repeated here.
Description of the drawings
Figure 1 is a schematic diagram of a queue scheduling process based on FIFO technology according to an embodiment of this application;
Figure 2 is a schematic diagram of a FIFO-based scheduling result according to an embodiment of this application;
Figure 3 is a schematic diagram of an object-block-based scheduling result for the three object blocks in Figure 2;
Figure 4 is a schematic diagram of a network architecture according to an embodiment of this application;
Figure 5 is a flowchart of a packet sending method according to an embodiment of this application;
Figure 6 is a flowchart of a packet sending method according to an embodiment of this application;
Figure 7 is a flowchart of a packet sending method according to an embodiment of this application;
Figure 8 is a flowchart of a packet sending method according to an embodiment of this application;
Figure 9 is a schematic structural diagram of a forwarding node according to an embodiment of this application;
Figure 10 is a schematic structural diagram of a sending end according to an embodiment of this application;
Figure 11 is a schematic structural diagram of a computer device according to an embodiment of this application.
Detailed description of embodiments
To make the objectives, technical solutions, and advantages of the embodiments of this application clearer, the implementations of this application are described in further detail below with reference to the accompanying drawings.
Before the embodiments of this application are explained in detail, the application scenarios of the embodiments of this application are first introduced.
Currently, when an upper-layer application (or application program, or service) of a network processes data, it usually does so with an object block (or object, or object data) as the basic unit. An object block can be understood as a complete piece of data processed by the upper-layer application. For example, when the upper-layer application is a media application, an object block can be understood as an entire video frame; when the upper-layer application is a picture application, an object block can be understood as an entire picture.
However, the basic unit of network data transmission supported by current network protocols is the data packet (or data block, or message). Therefore, the basic process of sending data from a sending end to a receiving end can be as follows: the upper-layer application of the sending end provides an object block to the network transport layer of the sending end; the transport layer of the sending end divides the object block into multiple data packets and sends each data packet to the transport layer of the receiving end; after receiving the data packets, the transport layer of the receiving end reassembles them into the object block and delivers the reassembled object block to the upper-layer application of the receiving end.
As networks grow in scale, data packets sent by the sending end are usually forwarded to the receiving end by forwarding nodes (or intermediate nodes, forwarding ends, forwarding devices, and the like) in the network. The efficiency with which a forwarding node forwards data packets therefore affects, to some extent, the final transmission efficiency of the object block, and in turn the efficiency with which the upper-layer application processes the object block. How a forwarding node forwards data packets is thus a major focus of current research.
First in first out (FIFO) is one technology used by forwarding nodes to forward data packets. With FIFO, each time the forwarding node receives a data packet, it buffers the packet in a best-effort queue. When sending data, the forwarding node simply sends the packets in the best-effort queue in order of their enqueuing time, from earliest to latest. That is, the packets that entered the best-effort queue first are sent out first, hence the name FIFO.
Figure 1 is a schematic diagram of a queue scheduling process based on FIFO technology according to an embodiment of this application. As shown in Figure 1, data packets received by the forwarding node enter the best-effort queue from the left, that is, they are appended to the tail of the queue, and the forwarding node sends packets one by one from the head of the queue (the right side of the queue in Figure 1).
In Figure 1, the squares labeled 1 indicate data packets of object block 1, the squares labeled 2 indicate data packets of object block 2, and the squares labeled 3 indicate data packets of object block 3.
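For illustration only, a minimal Python sketch of this best-effort FIFO behavior is shown below; the class and method names are hypothetical and simplified assumptions, not an implementation of any particular device.

```python
from collections import deque

# Hypothetical sketch of a single best-effort FIFO queue: packets are appended
# to the tail on arrival and sent from the head, regardless of which object
# block they belong to.

class BestEffortQueue:
    def __init__(self):
        self.queue = deque()

    def on_packet_arrival(self, packet):
        self.queue.append(packet)        # enqueue at the tail

    def send_next(self):
        if self.queue:
            return self.queue.popleft()  # dequeue from the head (earliest arrival)
        return None
```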
It should be noted that different terminals may have different service levels, and the service level of each terminal is planned by the operator. On this basis, the forwarding node configures one best-effort queue for each service level, so that data packets sent by different terminals belonging to the same service level are buffered in the same best-effort queue.
Because data packets sent by different terminals of the same service level are buffered in the same best-effort queue, the forwarding node cannot guarantee that packets with special requirements arrive at the receiving end first; in other words, it cannot guarantee the user's quality of service (QoS) level or any particular priority. Put differently, the forwarding node simply sends the packets in the best-effort queue in order of their enqueuing time. If the communication load on the network is heavy, for example if the best-effort queue buffers packets from many terminals, the data transmission bit rate and transmission time experienced on the user side of a given terminal are not fixed.
From the above introduction, FIFO technology has the following problems:
(1) With FIFO, data packets sent by different terminals of the same service level are buffered in the same best-effort queue, so data packets of different object blocks sent by different upper-layer applications of different terminals, or by different upper-layer applications of the same terminal, are likewise buffered in the same best-effort queue. This easily leads to packet loss for a given object block. For most upper-layer applications, however, an object block must be complete and cannot be missing any data packet; otherwise the upper-layer application cannot process it. In other words, FIFO does not take into account the upper-layer application's transmission requirements for object blocks and simply schedules on a per-packet basis.
(2) With FIFO, the forwarding node sends packets solely in order of their enqueuing time in the best-effort queue. If the sending end has requirements on the completion time (CT) of an object block, for example that the CT be minimized or that the CT not exceed a deadline (DL), FIFO cannot satisfy these requirements.
Here, the DL can be understood as the sending end's requirement on the network transmission duration of the data; the DL is therefore referred to as the allowed network delay hereinafter.
Figure 2 is a schematic diagram of a FIFO-based scheduling result according to an embodiment of this application. As shown in Figure 2, assume there are currently three object blocks, denoted object block 1, object block 2, and object block 3, whose data sizes are indicated as 1, 2, and 3 respectively. Note that these values do not represent the actual data sizes of the object blocks; they merely indicate the relative sizes of object blocks 1, 2, and 3. In addition, as shown in Figure 2, the allowed network delay of object block 1 is 1 second (s), that of object block 2 is 6 s, and that of object block 3 is 6 s.
As shown in Figure 2, because the data packets of object blocks 1, 2, and 3 are interleaved in the best-effort queue, during 0-3 s the packets of the three object blocks are sent interleaved, that is, packets of all three object blocks are sent in this period, and all packets of object block 1 have been sent by the end of it. During 3-5 s, the packets of object blocks 2 and 3 are sent interleaved, and all packets of object block 2 have been sent by the end of this period. During 5-6 s, the remaining packets of object block 3 are sent.
Thus the total completion time of object block 1 is 3 s, that of object block 2 is 5 s, and that of object block 3 is 6 s. Clearly, the individual total completion time of each object block is relatively long, and the 3 s total completion time of object block 1 exceeds its allowed network delay of 1 s.
On this basis, when forwarding data packets, the forwarding node can consider sending the packets belonging to the same object block together, that is, avoiding interleaved transmission of packets of different object blocks.
Figure 3 is a schematic diagram of an object-block-based scheduling result for the three object blocks in Figure 2. As shown in Figure 3, if object block 1 is sent first, then object block 2, and finally object block 3, the total completion time of object block 1 is 1 s, that of object block 2 is 3 - 1 = 2 s, and that of object block 3 is 6 - 3 = 3 s. Clearly, the individual total completion time of each object block is shorter than under FIFO scheduling, and no object block's total completion time exceeds its allowed network delay.
In this scenario, assume that object blocks 1, 2, and 3 arrive at the forwarding node at the same time. If the waiting time of the other object blocks while one object block is being sent is also counted, the actual completion time of object block 1 is 1 s, that of object block 2 is 3 s, and that of object block 3 is 6 s. The total actual completion time of the three object blocks is then 1 + 3 + 6 = 10 s, whereas in Figure 2 it is 3 + 5 + 6 = 14 s. Clearly, the total actual completion time of all object blocks is much lower.
Alternatively, as shown in Figure 3, if object block 1 is sent first, then object block 3, and finally object block 2, the total completion time of object block 1 is 1 s, that of object block 3 is 4 - 1 = 3 s, and that of object block 2 is 6 - 4 = 2 s. Again, each object block's individual total completion time is shorter than under FIFO scheduling, and none exceeds the corresponding allowed network delay.
Likewise, assuming the three object blocks arrive at the forwarding node at the same time and counting the waiting time of the other object blocks while one is being sent, the actual completion time of object block 1 is 1 s, that of object block 2 is 6 s, and that of object block 3 is 4 s. The total actual completion time of the three object blocks is then 1 + 4 + 6 = 11 s, compared with 3 + 5 + 6 = 14 s in Figure 2. Again, the total actual completion time of all object blocks is much lower.
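For illustration only, the following minimal Python sketch reproduces the completion-time arithmetic above for object-block-based (back-to-back) scheduling under the stated assumptions (unit-rate link, simultaneous arrival at time 0); the sizes are taken from the example and the function and variable names are hypothetical.

```python
# Hypothetical sketch: actual completion times when whole object blocks are
# sent back-to-back and all blocks arrive at time 0. Each entry is
# (name, transmission_time_in_seconds).

def completion_times(order):
    elapsed, result = 0, {}
    for name, size in order:
        elapsed += size          # the block finishes once all its packets are sent
        result[name] = elapsed   # waiting time behind earlier blocks is included
    return result

blocks = {"obj1": ("obj1", 1), "obj2": ("obj2", 2), "obj3": ("obj3", 3)}

print(completion_times([blocks["obj1"], blocks["obj2"], blocks["obj3"]]))
# {'obj1': 1, 'obj2': 3, 'obj3': 6}  -> total 1 + 3 + 6 = 10 s
print(completion_times([blocks["obj1"], blocks["obj3"], blocks["obj2"]]))
# {'obj1': 1, 'obj3': 4, 'obj2': 6}  -> total 1 + 4 + 6 = 11 s
```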
At present, a forwarding node can avoid interleaved transmission of packets of different object blocks by configuring a comparator and a matcher. However, this technique requires a comparator and a matcher to be configured on the forwarding node, which results in high hardware overhead. Moreover, the algorithm involved is too complex, which in turn reduces the forwarding efficiency of the forwarding node.
In view of this, embodiments of this application provide a packet sending method in which the frequency of interleaved transmission of packets of different object blocks can be reduced on the basis of scheduling queues. On the one hand, there is no need to configure excessive hardware, so the hardware overhead is low; on the other hand, the algorithm is simple, which improves the efficiency with which the forwarding node forwards packets.
The packet sending method provided in the embodiments of this application is explained in detail below.
Figure 4 is a schematic diagram of a network architecture according to an embodiment of this application; the network architecture is used to implement the packet sending method provided in the embodiments of this application. As shown in Figure 4, the network architecture includes a sending end 401, a forwarding node 402, and a receiving end 403. The sending end 401 and the forwarding node 402 are connected, in a wired or wireless manner, for communication, and so are the forwarding node 402 and the receiving end 403.
When sending any object block, the sending end 401 divides the object block into multiple data packets. These packets include a first packet (that is, the first data packet of the block, labeled B in Figure 4), a tail packet (that is, the last data packet, labeled E in Figure 4), and the intermediate packets between them (labeled M in Figure 4). When sending the packets of the object block, the sending end sends them in order from the first packet to the tail packet.
In addition, to make it easier for the forwarding node 402 to perform forwarding scheduling in units of object blocks, the sending end marks the identifier of the object block in each of its data packets, marks the first packet of the object block as the first packet, and optionally marks the tail packet of the object block as the tail packet. In other words, for any object block sent by the sending end 401, every data packet of the object block carries the identifier of the object block, the first packet carries a first-packet label, and optionally the tail packet carries a tail-packet label. The first-packet label indicates that the corresponding packet is the first packet of the object block, and the tail-packet label indicates that the corresponding packet is the last data packet of the object block.
Based on the above configuration of the sending end 401, when the forwarding node 402 receives any data packet, it can determine the object block to which the packet belongs, because the packet carries that object block's identifier. If the packet further carries a first-packet label, the forwarding node 402 can determine that the packet is the first packet of its object block; if the packet further carries a tail-packet label, the forwarding node 402 can determine that the packet is the tail packet of its object block.
On this basis, the forwarding node 402 can schedule and send data packets in units of object blocks, thereby reducing the frequency of interleaved transmission of packets of different object blocks. After receiving the packets, the receiving end 403 reassembles them into object blocks based on the object block identifiers carried in the packets and delivers the object blocks to the upper-layer application.
Furthermore, the packets sent by the sending end 401 may also carry the data size and/or the allowed network delay of the object block to which they belong; the functions of the data size and the allowed network delay are described in detail in subsequent embodiments and are not expanded on here. The data size and/or allowed network delay of the object block may be carried only in the first packet of the object block or, optionally, in every packet of the object block, which is not limited in the embodiments of this application.
The sending end 401 and the receiving end 403 may be any data processing ends, such as terminals (mobile phones, computers, and the like) or servers, and the forwarding node 402 may be a forwarding device such as a switch or a router. In addition, the sending end 401, the forwarding node 402, and the receiving end 403 perform clock synchronization during network initialization to facilitate subsequent data transmission, which is not described in detail in the embodiments of this application.
Based on the network architecture shown in Figure 4, embodiments of this application provide a packet sending method. To facilitate the description of subsequent embodiments, the overall idea of the embodiments of this application is first explained below.
In the embodiments of this application, to enable the forwarding node to schedule and send data packets in units of object blocks, multiple scheduling queues are configured on the forwarding node, and each scheduling queue is used to buffer the data packets of at least one object block. For example, each scheduling queue buffers the packets of one object block, or each scheduling queue buffers the packets of two or more object blocks. In other words, the packets of the same object block are buffered in the same scheduling queue.
Further, to reduce the algorithmic complexity of the forwarding node, a priority can be configured for each scheduling queue. When forwarding packets, the forwarding node can then quickly select one scheduling queue from the multiple scheduling queues based on their priorities and send the packets in the selected queue.
The priority of a scheduling queue is related to a fixed target attribute of the object blocks buffered in that queue rather than to specific data packets, so the relative order of the priorities of the scheduling queues essentially does not change over a short period of time. While the priorities remain unchanged, the forwarding node selects packets from the same scheduling queue each time within that short period. Since the packets of the same object block are buffered in the same scheduling queue, the packets sent by the forwarding node within that short period are very likely to belong to the same object block, so that the packets of one object block are sent together as far as possible, which reduces the frequency of interleaved transmission of packets of different object blocks.
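For illustration only, the following minimal Python sketch shows one way such a priority-based dequeue decision could look; the data structures and names are hypothetical assumptions, and details such as link scheduling and the best-effort fallback queue are ignored.

```python
# Hypothetical sketch: pick the non-empty scheduling queue with the highest
# priority and send its head-of-line packet. Because a queue's priority is
# tied to an object-block-level attribute (not to individual packets), the
# same queue tends to be chosen repeatedly until its block is drained.

def send_one_packet(queues, priorities):
    """queues: dict queue_id -> list of packets; priorities: dict queue_id -> int."""
    candidates = [qid for qid, pkts in queues.items() if pkts]
    if not candidates:
        return None
    best = max(candidates, key=lambda qid: priorities[qid])
    return queues[best].pop(0)
```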
In some embodiments, for an object block that the sending end sends first, the forwarding node needs to send that object block first to avoid a long network transmission time for it. On this basis, the target attribute may, for example, be the arrival time of the first packet of the object block, where the first-packet arrival time specifically refers to the time at which the forwarding node receives the first packet.
In other embodiments, the sending end may have special requirements on the network transmission delay of a certain object block. In this scenario, the forwarding node can schedule object blocks based on their network transmission delays; that is, the target attribute may, for example, be the allowed network delay of the object block.
It should be noted that the first-packet arrival time and the allowed network delay of an object block are two examples of the target attribute. Optionally, the target attribute may also be another attribute of the object block, such as another requirement of the sending end on the object block, which is not limited in the embodiments of this application.
In addition, the target attribute is carried in the data packets of the object block. When sending an object block, the sending end needs to carry the target attribute at least in the first packet of the object block, so that the forwarding node can adjust the priorities of the scheduling queues based on the target attribute.
Accordingly, for the sending end, the method provided in the embodiments of this application includes the sending end sending data packets that carry information related to the object block. For the forwarding node, the method provided in the embodiments of this application includes two parts: on the one hand, the process in which the forwarding node receives data packets and buffers them in scheduling queues in units of object blocks; on the other hand, the process in which the forwarding node sends data packets based on the priorities of the scheduling queues. These are described below in three embodiments.
Figure 5 is a flowchart of a packet sending method according to an embodiment of this application. This method explains how the sending end sends data packets. As shown in Figure 5, the method includes the following step 501 and step 502.
Step 501: The sending end determines multiple data packets of an object block to be sent, where each of the multiple data packets carries the identifier of the object block, and the first packet among them carries a first-packet label indicating that the corresponding packet is the first packet of the object block.
The purpose of carrying the first-packet label in the first packet is to make it easy for the forwarding node to recognize a new object block and allocate a scheduling queue for it. The purpose of carrying the object block's identifier in every packet is to enable the forwarding node to buffer the packets of the same object block in the same scheduling queue.
In addition, in step 501, the first packet may further carry the data size of the object block and/or the allowed network delay of the object block. Optionally, every packet of the object block may carry the data size of the object block and/or the allowed network delay of the object block.
The data size of an object block indicates the total size of all data packets included in the object block. The purpose of carrying the data size in the first packet is to make it easy for the forwarding node to allocate a scheduling queue to a newly received object block based on the capacities of the scheduling queues.
The purpose of carrying the allowed network delay of the object block in the first packet is to make it easy for the forwarding node to adjust the priorities of the scheduling queues based on the allowed network delay, so that object blocks with strict network transmission delay requirements can be sent out first.
Furthermore, in step 501, the tail packet among the multiple data packets may also carry a tail-packet label indicating that the corresponding packet is the last data packet of the object block. The purpose of carrying the tail-packet label is to make it easy for the forwarding node to recognize that all data of the object block has been forwarded and then update the priority of the scheduling queue.
Optionally, in scenarios where the forwarding node can determine by other means whether all data of the object block has been forwarded, the tail packet may not carry a tail-packet label. For example, when the first packet carries the data size of the object block, the forwarding node can determine whether the object block has been completely forwarded based on the amount of that object block's data already sent.
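For illustration only, the following minimal Python sketch shows one possible per-packet representation of the fields described in step 501 (object block identifier, first-packet and tail-packet labels, optional data size and allowed network delay); the field and function names are hypothetical assumptions, not a defined wire format.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the per-packet metadata described in step 501.
@dataclass
class ObjectBlockPacket:
    block_id: int                           # identifier of the object block (every packet)
    is_first: bool = False                  # first-packet label (first packet only)
    is_tail: bool = False                   # optional tail-packet label (tail packet only)
    block_size: Optional[int] = None        # optional total data size of the object block
    allowed_delay_ms: Optional[int] = None  # optional allowed network delay (DL)
    payload: bytes = b""

def split_block(block_id, data, mtu, allowed_delay_ms=None):
    """Divide an object block into packets, labelling the first and tail packets."""
    chunks = [data[i:i + mtu] for i in range(0, len(data), mtu)] or [b""]
    packets = []
    for i, chunk in enumerate(chunks):
        packets.append(ObjectBlockPacket(
            block_id=block_id,
            is_first=(i == 0),
            is_tail=(i == len(chunks) - 1),
            block_size=len(data) if i == 0 else None,
            allowed_delay_ms=allowed_delay_ms if i == 0 else None,
            payload=chunk,
        ))
    return packets
```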
The specific implementation of the information carried in the data packets is described in detail later and is not expanded on here.
Step 502: The sending end sends the multiple data packets of the object block.
Specifically, the sending end sends the multiple data packets in order from the first packet to the tail packet.
Figure 6 is a flowchart of a packet sending method according to an embodiment of this application. This method explains how the forwarding node receives and buffers data packets. As shown in Figure 6, the method includes the following steps 601 to 603.
Step 601: The forwarding node receives a first data packet, where the first data packet carries the identifier of a target object block, and the target object block is the object block to which the first data packet belongs.
Step 602: If the first data packet does not carry a first-packet label, the forwarding node adds the first data packet to a second scheduling queue based on the identifier of the target object block, where the second scheduling queue is the scheduling queue, among the multiple scheduling queues, in which data packets of the target object block are already buffered, and the first-packet label is used to indicate whether the first data packet is the first packet of the target object block.
If the first data packet does not carry a first-packet label, it indicates that the first data packet is not the first packet of the target object block; in other words, the forwarding node has already received packets of the target object block before the current time. In this scenario, the first data packet only needs to be added to the scheduling queue that already buffers packets of the target object block. In this way, the packets of the same object block are buffered in the same scheduling queue.
In some embodiments, to enable the forwarding node to quickly determine which object blocks' packets are buffered in which scheduling queues, the forwarding node is configured with a mapping relationship between scheduling queues and object blocks. The mapping relationship includes the object block identifier corresponding to each scheduling queue, which marks the object blocks buffered in that scheduling queue.
In this scenario, in step 602, the forwarding node can determine the second scheduling queue from the mapping relationship between scheduling queues and object blocks based on the identifier of the target object block.
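For illustration only, the following minimal Python sketch shows how such a mapping between object block identifiers and scheduling queues could be kept and consulted in step 602; the class and attribute names are hypothetical assumptions.

```python
# Hypothetical sketch: map object block identifier -> scheduling queue index,
# so that a non-first packet can be appended to the queue that already buffers
# packets of the same object block (the "second scheduling queue").

class BlockQueueMap:
    def __init__(self, num_queues):
        self.queues = [[] for _ in range(num_queues)]
        self.block_to_queue = {}            # block_id -> queue index

    def enqueue_follow_up_packet(self, packet):
        qid = self.block_to_queue[packet.block_id]   # second scheduling queue
        self.queues[qid].append(packet)
        return qid

    def register_new_block(self, block_id, qid):
        # Called after a third scheduling queue has been chosen for a new block.
        self.block_to_queue[block_id] = qid
```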
Step 603: If the first data packet also carries a first-packet label, the forwarding node selects one scheduling queue from the multiple scheduling queues as a third scheduling queue and adds the first data packet to the third scheduling queue.
If the first data packet carries a first-packet label, it indicates that the first data packet is the first packet of the target object block. In this scenario, the target object block is a new object block from the forwarding node's point of view, and a scheduling queue needs to be allocated for it so that subsequent packets of the target object block are all buffered in the allocated scheduling queue.
In addition, in some embodiments, after determining the third scheduling queue, the forwarding node may also add the correspondence between the identifier of the target object block and the third scheduling queue to the mapping relationship between scheduling queues and object blocks, so that the forwarding node subsequently buffers all packets of the target object block in the third scheduling queue.
Furthermore, there are, for example, the following three ways in which the forwarding node can select one scheduling queue from the multiple scheduling queues as the third scheduling queue.
First enqueuing manner: each scheduling queue buffers the packets of at most one object block, and each time a new object block is received, an empty scheduling queue is allocated for it.
An empty scheduling queue is a scheduling queue that buffers no data. In the first enqueuing manner, because each scheduling queue is used to buffer the packets of one object block, a scheduling queue can be selected as the third scheduling queue from the remaining empty scheduling queues for the newly received target object block.
That is, in step 603, the forwarding node selects one scheduling queue from the multiple scheduling queues as the third scheduling queue by selecting, from the multiple scheduling queues, a scheduling queue in which no data packet is buffered.
For example, suppose the forwarding node has 10 scheduling queues, denoted scheduling queue 1 to scheduling queue 10. When the forwarding node receives the first data packet, scheduling queues 1 to 3 already buffer data packets, so the forwarding node selects any one of scheduling queues 4 to 10, for example scheduling queue 4, as the third scheduling queue.
Optionally, if there is no empty scheduling queue at the current time, the forwarding node may add the first data packet to a default queue, which may, for example, be the best-effort queue.
It should be noted that, in the embodiments of this application, the priority of the best-effort queue is lower than the priorities of all of the multiple scheduling queues. Therefore, when receiving a data packet, the forwarding node first considers adding the first data packet to the multiple scheduling queues, and adds it to the best-effort queue only if none of the multiple scheduling queues satisfies the conditions. A sketch of this selection logic follows.
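For illustration only, a minimal Python sketch of this first enqueuing manner (pick an empty scheduling queue for a new object block, otherwise fall back to the best-effort queue) is shown below; the structure and names are hypothetical assumptions, and bookkeeping such as releasing the block-to-queue mapping when a block finishes is omitted.

```python
# Hypothetical sketch of the first enqueuing manner: one object block per
# scheduling queue. A new block (first packet) takes any empty scheduling
# queue; if none is empty, the packet falls back to the best-effort queue,
# whose priority is lower than that of every scheduling queue.

def enqueue_new_block_packet(packet, queues, block_to_queue, best_effort):
    for qid, pkts in enumerate(queues):
        if not pkts:                          # empty scheduling queue found
            pkts.append(packet)
            block_to_queue[packet.block_id] = qid
            return qid
    best_effort.append(packet)                # no empty scheduling queue available
    return None
```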
In addition, in the first enqueuing manner, the target attribute may be the arrival time of the first packet of the corresponding object block, or it may be the allowed network delay of the corresponding object block.
In some embodiments, where the target attribute is the arrival time of the first packet of the corresponding object block, after the forwarding node selects, from the multiple scheduling queues, a scheduling queue in which no data packet is buffered as the third scheduling queue and adds the first data packet to it, the forwarding node further updates the priority of the third scheduling queue so that its updated priority is lower than the priorities of the other scheduling queues in which data packets are already buffered. That is, the scheduling queue holding an object block whose first packet arrived earlier has a higher priority.
There are two ways in which the forwarding node can update the priority of the third scheduling queue.
First update manner: set the priority of the third scheduling queue to one level below the lowest priority among the other scheduling queues that already buffer data packets, and keep the priorities of those other scheduling queues unchanged.
For example, in the scenario above where the forwarding node selects scheduling queue 4 from scheduling queues 4 to 10 as the third scheduling queue, the other scheduling queues that already buffer packets are scheduling queues 1 to 3. Assuming their priorities are 10, 9, and 8 respectively, the priority of scheduling queue 4 can be set directly to 7.
Second update manner: raise the priorities of the other scheduling queues that already buffer data packets by one level, and set the priority of the third scheduling queue to one level below the lowest of those raised priorities.
For example, in the same scenario, assuming the priorities of scheduling queues 1 to 3 are 3, 2, and 1 respectively, they can be raised to 4, 3, and 2 respectively, and the priority of scheduling queue 4 set to 1.
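For illustration only, the following minimal Python sketch shows the first update manner above (the newly occupied queue gets a priority one level below the current lowest among occupied queues); higher numbers mean higher priority, and the function name and default value are hypothetical assumptions.

```python
# Hypothetical sketch of the first update manner for the arrival-time attribute:
# the third scheduling queue (which just received a new block's first packet)
# is placed one level below the lowest-priority occupied queue, so queues whose
# first packets arrived earlier keep higher priorities.

def update_priority_on_new_block(third_qid, queues, priorities):
    occupied = [qid for qid, pkts in queues.items() if pkts and qid != third_qid]
    if occupied:
        priorities[third_qid] = min(priorities[qid] for qid in occupied) - 1
    else:
        priorities[third_qid] = 0   # assumed default when no other queue is occupied

# Example matching the text: queues 1-3 occupied with priorities 10, 9, 8;
# the new queue 4 ends up with priority 7.
```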
In other embodiments, where the target attribute is the allowed network delay of the corresponding object block, the first data packet also carries the allowed network delay of the target object block. In this scenario, after the forwarding node selects, from the multiple scheduling queues, a scheduling queue in which no data packet is buffered as the third scheduling queue and adds the first data packet to it, the forwarding node further updates the priority of the third scheduling queue so that its updated priority is higher than the priority of a fourth scheduling queue and lower than the priority of a fifth scheduling queue. The fourth scheduling queue refers to a scheduling queue, among the multiple scheduling queues, whose buffered object block has an allowed network delay greater than that of the target object block, and the fifth scheduling queue refers to a scheduling queue whose buffered object block has an allowed network delay smaller than that of the target object block.
In this way, the scheduling queue holding object blocks with stricter allowed network delays has a higher priority, which ensures that such object blocks are sent out first.
Likewise, there are two ways in which the forwarding node can update the priority of the third scheduling queue.
First update manner: keep the priority of the fifth scheduling queue unchanged, lower the priority of the fourth scheduling queue by one level, and set the priority of the third scheduling queue to one level below the lowest priority among the fifth scheduling queues.
For example, in the scenario above where the forwarding node selects scheduling queue 4 from scheduling queues 4 to 10 as the third scheduling queue, the other scheduling queues that already buffer packets are scheduling queues 1 to 3. Assume their priorities are 10, 9, and 8 respectively, the allowed network delays of the object blocks they buffer are 1, 3, and 6 respectively, and the allowed network delay of the target object block is 4. Then the priority of scheduling queue 3 is lowered to 7, and the priority of scheduling queue 4 is set to 8, one level below 9.
Second update manner: keep the priority of the fourth scheduling queue unchanged, raise the priority of the fifth scheduling queue by one level, and set the priority of the third scheduling queue to one level above the highest priority among the fourth scheduling queues.
For example, in the same scenario, with scheduling queues 1 to 3 having priorities 10, 9, and 8, buffered allowed network delays of 1, 3, and 6, and a target allowed network delay of 4, the priorities of scheduling queues 1 and 2 are raised to 11 and 10 respectively, and the priority of scheduling queue 4 is set to 9, one level above 8.
It should be noted that if no fourth scheduling queue exists among the multiple scheduling queues, the updated priority of the third scheduling queue only needs to be lower than the priority of the fifth scheduling queue. That is, the allowed network delays of the object blocks buffered in all the scheduling queues are smaller than that of the target object block, so the priority of the third scheduling queue can simply be set to the lowest.
Correspondingly, if no fifth scheduling queue exists among the multiple scheduling queues, the updated priority of the third scheduling queue only needs to be higher than the priority of the fourth scheduling queue. That is, the allowed network delays of the object blocks buffered in all the scheduling queues are greater than that of the target object block, so the priority of the third scheduling queue can simply be set to the highest.
If the scheduling queues that already buffer data packets are arranged in order of priority from high to low, the above way of updating the priority of the third scheduling queue according to the allowed network delay can be understood as follows: starting from the first scheduling queue in the arrangement, the third scheduling queue is inserted in front of the first scheduling queue whose buffered object block has an allowed network delay exceeding that of the target object block; after the insertion, the relative order of the priorities of the scheduling queues still matches the arrangement.
For example, suppose the scheduling queues that already buffer data packets are denoted A, B, C, and D in order of priority from high to low, where the allowed network delay (DL) of the object block buffered in A is 1, the DL of the object blocks buffered in B and C is 3, and the DL of the object block buffered in D is 6. If the DL carried in the first data packet is 4 and queue N is selected as the third scheduling queue, then after the first data packet is added to N and the queues are sorted by priority from high to low, N should come before D and after B and C; that is, the new priority order from high to low is A, B, C, N, D.
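For illustration only, the following minimal Python sketch implements this insertion-by-deadline view of the priority update; priorities are represented implicitly by position in a list ordered from highest to lowest priority, and all names are hypothetical assumptions.

```python
# Hypothetical sketch: occupied scheduling queues kept in a list ordered from
# highest to lowest priority, each entry being (queue_id, allowed_delay).
# A new queue is inserted in front of the first queue whose buffered object
# block has a larger allowed network delay (DL) than the new block.

def insert_by_allowed_delay(priority_order, new_queue_id, new_dl):
    for pos, (_qid, dl) in enumerate(priority_order):
        if dl > new_dl:
            priority_order.insert(pos, (new_queue_id, new_dl))
            return priority_order
    priority_order.append((new_queue_id, new_dl))   # no queue with a larger DL
    return priority_order

# Example matching the text: A(DL=1), B(DL=3), C(DL=3), D(DL=6); new queue N with DL=4.
order = [("A", 1), ("B", 3), ("C", 3), ("D", 6)]
print(insert_by_allowed_delay(order, "N", 4))
# [('A', 1), ('B', 3), ('C', 3), ('N', 4), ('D', 6)]
```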
It should be noted that, for scheduling queues that have not buffered any data packets, the priorities of these scheduling queues may be set to a default priority, for example level 0, which is not limited in the embodiments of this application.
In addition, since the priority of a scheduling queue is determined based on the target attribute of the object block buffered in the scheduling queue, when all data packets of the object block buffered in a scheduling queue have been sent, the priority of that scheduling queue needs to be updated.
In some embodiments, when all data packets of the object block buffered in a scheduling queue have been sent, the forwarding node may update the priority of the scheduling queue as follows: if all data packets of the object block buffered by any one of the multiple scheduling queues have been sent, the priority of that scheduling queue is updated so that its updated priority is lower than the priorities of the other scheduling queues that have buffered data packets.
The priorities of the other scheduling queues that have buffered data packets may or may not be updated. If they are updated, the relative order of their priorities before and after the update remains unchanged.
For example, the scheduling queues that have buffered data packets are, in descending order of priority, A, B, C, N, and D, with priorities 10, 9, 8, 7, and 6 respectively. If all data packets of the object block buffered by scheduling queue C have been sent at the current time, the priority of scheduling queue C is set to the lowest, while the priorities of the other four scheduling queues A, B, N, and D may remain unchanged or may be reset to 10, 9, 8, and 7 respectively.
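A minimal illustrative sketch of this demotion rule follows, continuing the Python conventions above; demote_drained_queue is an illustrative name, and the absolute priority values are not significant, only their relative order.

```python
# Illustrative sketch: once every data packet of the object block buffered by a
# queue has been sent, place that queue below all other queues that still hold
# data packets, keeping the relative order of the remaining queues unchanged.
from typing import Dict

def demote_drained_queue(priorities: Dict[str, int], drained: str) -> Dict[str, int]:
    others = {q: p for q, p in priorities.items() if q != drained}
    updated = dict(others)
    updated[drained] = min(others.values()) - 1 if others else 0
    return updated

# Example from the text: A, B, C, N, D with priorities 10..6; C finishes sending.
print(demote_drained_queue({"A": 10, "B": 9, "C": 8, "N": 7, "D": 6}, "C"))
# -> {'A': 10, 'B': 9, 'N': 7, 'D': 6, 'C': 5}
```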
Optionally, in other embodiments, the priority update may be triggered only when all data packets of the object block in the scheduling queue with the highest priority have been sent. In this scenario, if all data packets of the object block buffered by any one of the multiple scheduling queues have been sent, it is further determined whether that scheduling queue is the scheduling queue with the highest priority; if it is not, the priority update is not triggered.
Correspondingly, if that scheduling queue is the scheduling queue with the highest priority, the priority update is triggered. Specifically, the scheduling queues in which all data packets of the buffered object blocks have been sent by the current time are determined, and the priorities of these scheduling queues are set lower than the priorities of the other scheduling queues that still buffer data packets.
Likewise, the priorities of the other scheduling queues that have buffered data packets may or may not be updated. If they are updated, the relative order of their priorities before and after the update remains unchanged.
In addition, in the scenario where the sending end adds a tail-packet label to the tail packet of an object block, for any scheduling queue, if the data packet currently sent by the forwarding node belongs to that scheduling queue and carries the tail-packet label, the forwarding node can determine that all data packets of the object block buffered in that scheduling queue have been sent.
Optionally, if the sending end does not add a tail-packet label to the tail packet of an object block, then for any scheduling queue, if the data packet currently sent by the forwarding node belongs to that scheduling queue, the forwarding node can determine by other means (for example, based on the data size of the object block) whether all data packets of the object block buffered in that scheduling queue have been sent, which is not described in detail here.
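As a rough, non-limiting illustration of the two completion checks just mentioned (a tail-packet label, or accumulated bytes compared with the object block's declared data size), the following sketch assumes hypothetical packet fields named block_id, size, and tail_flag that are not defined by this application.

```python
# Illustrative sketch: decide whether the object block currently being sent
# from a queue has been fully transmitted, either from a tail-packet label or
# from the number of bytes already sent versus the block's declared data size.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SentPacket:
    block_id: int
    size: int                 # payload size of this data packet
    tail_flag: bool = False   # tail-packet label, if the sending end sets one

def block_finished(packet: SentPacket, bytes_sent_for_block: int,
                   block_data_size: Optional[int]) -> bool:
    if packet.tail_flag:                 # tail-packet label present
        return True
    if block_data_size is not None:      # fall back to the object block's data size
        return bytes_sent_for_block >= block_data_size
    return False

print(block_finished(SentPacket(7, 1400, tail_flag=True), 2800, None))   # -> True
```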
Second enqueuing manner: each scheduling queue is used to buffer several object blocks, and a newly arriving object block enters the next empty scheduling queue only when the current scheduling queue is full.
Based on the second enqueuing manner, in step 603 the forwarding node may select one scheduling queue from the multiple scheduling queues as the third scheduling queue as follows: the forwarding node determines the remaining capacity of a sixth scheduling queue, where the sixth scheduling queue is the scheduling queue in which the most recently received first packet before the current time is located; if the remaining capacity of the sixth scheduling queue is insufficient to buffer the target object block, the forwarding node selects, from the multiple scheduling queues, a scheduling queue that has not buffered any data packets as the third scheduling queue. Correspondingly, if the remaining capacity of the sixth scheduling queue is sufficient to buffer the target object block, the forwarding node uses the sixth scheduling queue as the third scheduling queue.
That is, when the forwarding node receives a new first packet, it determines whether the scheduling queue to which a first packet was most recently added can still buffer the object block to which the new first packet belongs. If it can, that scheduling queue is selected to buffer the object block; if it cannot, the next empty scheduling queue is selected to buffer the object block.
In other words, in the second enqueuing manner, the forwarding node fills the scheduling queues one after another with object blocks, and selects the next empty scheduling queue to buffer an object block only when the remaining capacity of the current scheduling queue is insufficient.
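The selection logic can be sketched as follows. This is a non-limiting Python illustration; the callbacks fits and is_empty stand in for either of the two remaining-capacity checks described below, and the queue names are arbitrary.

```python
# Illustrative sketch of the second enqueuing manner: keep using the queue that
# received the previous first packet while the new object block still fits;
# otherwise take the next scheduling queue that has not buffered any packets;
# returning None lets the caller fall back to a default (best-effort) queue.
from typing import Callable, Optional, Sequence

def select_queue(queues: Sequence[str],
                 last_used: Optional[str],
                 fits: Callable[[str], bool],
                 is_empty: Callable[[str], bool]) -> Optional[str]:
    if last_used is not None and fits(last_used):
        return last_used
    return next((q for q in queues if is_empty(q)), None)

# Example: Q1 has no remaining capacity for a block of size 5, Q2 is empty.
remaining = {"Q1": 0, "Q2": 10, "Q3": 10}
has_packets = {"Q1": True, "Q2": False, "Q3": False}
print(select_queue(["Q1", "Q2", "Q3"], "Q1",
                   fits=lambda q: remaining[q] >= 5,
                   is_empty=lambda q: not has_packets[q]))   # -> Q2
```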
In some embodiments, the forwarding node may determine the remaining capacity of the sixth scheduling queue as follows: the forwarding node determines the number of buffered object blocks, that is, the number of object blocks to which the data packets already buffered in the sixth scheduling queue belong, and determines the difference between a first threshold and the number of buffered object blocks as the remaining capacity of the sixth scheduling queue, where the first threshold is the number of object blocks that the sixth scheduling queue can buffer. In this scenario, that the remaining capacity of the sixth scheduling queue is insufficient to buffer the data packets of the target object block means that the remaining capacity of the sixth scheduling queue is 0.
The first threshold may also be referred to as the carrying-quantity upper limit of the scheduling queue, which can be understood as the maximum number of object blocks that the scheduling queue can carry.
For example, four scheduling queues, labeled scheduling queue 1 to scheduling queue 4, are configured on the forwarding node, and the first threshold of each scheduling queue is 3. Assuming that the first packets of object blocks 1 to 6 arrive in sequence, the first packets of object blocks 1 to 3 are added to scheduling queue 1, and the first packets of object blocks 4 to 6 are added to scheduling queue 2.
In this scenario, after adding the first data packet to the third scheduling queue, the forwarding node may further determine the remaining capacity of the third scheduling queue. For example, the remaining capacity of the third scheduling queue is the first threshold of the third scheduling queue minus 1 (the 1 indicating that the third scheduling queue has already been used to buffer one object block).
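A brief, non-limiting sketch of this count-based capacity check follows; the function name remaining_capacity_by_count is illustrative only.

```python
# Illustrative sketch: remaining capacity measured in object blocks. The first
# threshold is the number of object blocks the queue can carry; "insufficient"
# means the remaining capacity is 0.
def remaining_capacity_by_count(first_threshold: int, buffered_blocks: int) -> int:
    return first_threshold - buffered_blocks

# Example from the text: first threshold 3, three object blocks already buffered,
# so the next first packet must go to the next empty scheduling queue.
print(remaining_capacity_by_count(3, 3))   # -> 0
# After a new block's first packet enters an empty third scheduling queue:
print(remaining_capacity_by_count(3, 1))   # -> 2
```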
In some other embodiments, the forwarding node may determine the remaining capacity of the sixth scheduling queue as follows: the forwarding node determines the total size of buffered data, that is, the total size of the data packets already buffered in the sixth scheduling queue, and determines the difference between a second threshold and the total size of buffered data as the remaining capacity of the sixth scheduling queue, where the second threshold is the total size of data packets that the sixth scheduling queue can buffer.
In this scenario, the first data packet also carries the data size of the target object block, and that the remaining capacity of the sixth scheduling queue is insufficient to buffer the data packets of the target object block means that the remaining capacity of the sixth scheduling queue is smaller than the data size of the target object block.
The second threshold may also be referred to as the token of the scheduling queue, which can be understood as the total amount of data that the buffer space of the scheduling queue can hold.
For example, four scheduling queues, labeled scheduling queue 1 to scheduling queue 4, are configured on the forwarding node, and the second threshold of each scheduling queue is 10. Assuming that the first packets of object blocks 1 to 3 arrive in sequence and that the data sizes of object block 1 and object block 2 are both 5, the first packets of object block 1 and object block 2 are added to scheduling queue 1. For object block 3, since the remaining capacity of scheduling queue 1 is 0, the first packet of object block 3 needs to be added to the next empty scheduling queue (that is, scheduling queue 2).
In this scenario, after adding the first data packet to the third scheduling queue, the forwarding node may further determine the remaining capacity of the third scheduling queue. For example, the remaining capacity of the third scheduling queue is the second threshold of the third scheduling queue minus the data size of the target object block.
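The size-based check can be sketched in the same illustrative style; block_fits is an assumed helper name, and the units are arbitrary.

```python
# Illustrative sketch: remaining capacity measured as an amount of data (the
# queue's "token"). The second threshold is the total data size the queue's
# buffer space can hold; the check fails when the remaining capacity is smaller
# than the data size carried in the object block's first packet.
def remaining_capacity_by_size(second_threshold: int, buffered_total_size: int) -> int:
    return second_threshold - buffered_total_size

def block_fits(second_threshold: int, buffered_total_size: int, block_data_size: int) -> bool:
    return remaining_capacity_by_size(second_threshold, buffered_total_size) >= block_data_size

# Example from the text: token = 10, object blocks 1 and 2 (size 5 each) fill
# scheduling queue 1, so object block 3 must go to the next empty queue.
print(block_fits(10, 10, 1))   # -> False
```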
In addition, for the second enqueuing manner, when the forwarding node selects, from the multiple scheduling queues, a scheduling queue that has not buffered any data packets as the third scheduling queue, if there is no empty scheduling queue at the current time, the forwarding node may add the first data packet to a default queue. The default queue may be, for example, a best-effort queue.
In the second enqueuing manner, since one scheduling queue buffers multiple object blocks, the target attribute in this scenario may be the arrival time of the first data packet of the corresponding object block, in order to reduce the complexity of the priority algorithm. In this case, the priority of a scheduling queue may be determined based on the arrival time of the first packet that was first added to that scheduling queue. That is, for any scheduling queue, a priority is configured for the scheduling queue when an object block is added to it for the first time, and the priority is not updated when further object blocks are subsequently added to it.
Based on this, in the second enqueuing manner, if the remaining capacity of the sixth scheduling queue is sufficient to buffer the target object block, the forwarding node uses the sixth scheduling queue as the third scheduling queue, and after adding the first data packet to the third scheduling queue, the forwarding node does not update the priority of the third scheduling queue (that is, the sixth scheduling queue).
Correspondingly, if the remaining capacity of the sixth scheduling queue is insufficient to buffer the target object block, the forwarding node selects, from the multiple scheduling queues, a scheduling queue that has not buffered any data packets as the third scheduling queue, and after adding the first data packet to the third scheduling queue, the forwarding node updates the priority of the third scheduling queue so that the updated priority of the third scheduling queue is lower than the priorities of the other scheduling queues that have buffered data packets.
For a detailed implementation of updating the priority of the third scheduling queue by the forwarding node, reference may be made to the two priority update manners described for the first enqueuing manner in the scenario where the target attribute is the arrival time of the first data packet of the corresponding object block, and details are not described again here.
In addition, in the second enqueuing manner, the priority of a scheduling queue is determined based on the target attribute of the object blocks buffered in the scheduling queue, one scheduling queue buffers multiple object blocks, and the first packets of these object blocks arrive consecutively. Therefore, when all data packets of all object blocks buffered in a scheduling queue have been sent, the priority of the scheduling queue needs to be updated.
In some embodiments, when all data packets of all object blocks buffered in a scheduling queue have been sent, the forwarding node may update the priority of the scheduling queue as follows: if all data packets of all object blocks buffered by any one of the multiple scheduling queues have been sent, the priority of that scheduling queue is updated so that its updated priority is lower than the priorities of the other scheduling queues that have buffered data packets.
The priorities of the other scheduling queues that have buffered data packets may or may not be updated. If they are updated, the relative order of their priorities before and after the update remains unchanged.
For a specific implementation, reference may be made to the first enqueuing manner, and details are not described again here.
Optionally, in other embodiments, the priority update may be triggered only when all data packets of the object blocks in the scheduling queue with the highest priority have been sent. In this scenario, if all data packets of all object blocks buffered by any one of the multiple scheduling queues have been sent, it is further determined whether that scheduling queue is the scheduling queue with the highest priority; if it is not, the priority update is not triggered.
Correspondingly, if that scheduling queue is the scheduling queue with the highest priority, the priority update is triggered. Specifically, the scheduling queues in which all data packets of all buffered object blocks have been sent by the current time are determined, and the priorities of these scheduling queues are set lower than the priorities of the other scheduling queues that still buffer data packets.
Likewise, the priorities of the other scheduling queues that have buffered data packets may or may not be updated. If they are updated, the relative order of their priorities before and after the update remains unchanged.
In addition, in the second enqueuing manner, for the manner in which the forwarding node determines whether all data packets of an object block have been sent, reference may likewise be made to the first enqueuing manner, and details are not described again here.
Third enqueuing manner: each scheduling queue is used to buffer several object blocks, and newly arriving object blocks enter the scheduling queues in turn.
Since newly arriving object blocks enter the scheduling queues in turn, the multiple scheduling queues are arranged in order in advance, so that each time the forwarding node receives a new object block, it selects the scheduling queues in turn to buffer the new object block.
The specific manner of arranging the multiple scheduling queues in order is not limited and is not described in detail here.
Based on this, in step 603 the forwarding node may select one scheduling queue from the multiple scheduling queues as the third scheduling queue as follows: the forwarding node determines a sixth scheduling queue from the multiple scheduling queues, where the sixth scheduling queue is the scheduling queue in which the most recently received first packet before the current time is located, and determines the scheduling queue ranked after the sixth scheduling queue as the third scheduling queue.
For example, four scheduling queues, labeled scheduling queue 1 to scheduling queue 4, are configured on the forwarding node. Assuming that the first packets of object blocks 1 to 6 arrive in sequence, the first packets of object blocks 1 to 4 are added to scheduling queues 1 to 4 respectively, the first packet of object block 5 is added to scheduling queue 1, and the first packet of object block 6 is added to scheduling queue 2.
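A non-limiting Python sketch of this round-robin selection, using the example's assumptions (four queues, arbitrary names), is as follows.

```python
# Illustrative sketch of the third enqueuing manner: newly arriving object
# blocks enter the pre-arranged scheduling queues in turn (round robin).
from typing import List, Optional

def select_queue_round_robin(queues: List[str], last_used: Optional[str]) -> str:
    """Return the queue ranked after the one that received the previous first
    packet; the very first object block goes to the first queue in the order."""
    if last_used is None:
        return queues[0]
    return queues[(queues.index(last_used) + 1) % len(queues)]

# Example from the text: first packets of object blocks 1..6 arrive in sequence.
queues, last = ["Q1", "Q2", "Q3", "Q4"], None
for block in range(1, 7):
    last = select_queue_round_robin(queues, last)
    print(f"object block {block} -> {last}")
# -> Q1, Q2, Q3, Q4, Q1, Q2
```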
Likewise, in the third enqueuing manner, since one scheduling queue buffers multiple object blocks, the target attribute in this scenario may be the arrival time of the first data packet of the corresponding object block, in order to reduce the complexity of the priority algorithm. In this case, the priority of a scheduling queue may be determined based on the arrival time of the first packet that was first added to that scheduling queue. That is, for any scheduling queue, a priority is configured for the scheduling queue when an object block is added to it for the first time, and the priority is not updated when further object blocks are subsequently added to it.
Based on this, in the third enqueuing manner, after the forwarding node selects one scheduling queue from the multiple scheduling queues as the third scheduling queue and adds the first data packet to the third scheduling queue, the forwarding node does not update the priority of the third scheduling queue if the third scheduling queue also buffers data packets of other object blocks besides the first data packet. Correspondingly, if the third scheduling queue does not buffer data packets of any other object block besides the first data packet, the forwarding node updates the priority of the third scheduling queue.
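One possible, non-authoritative way to express this decision is sketched below. Mapping the arrival time to a priority by negation is an assumption made only for this illustration; it relies on the fact that the first packet that has just arrived is later than the first-added first packets of all queues that already buffer data, so the newly configured queue ends up with the lowest priority among them.

```python
# Illustrative sketch: in the third enqueuing manner, the chosen queue's
# priority is configured from the first packet's arrival time only when this is
# the first object block ever buffered by that queue; later additions leave the
# priority unchanged.
from typing import Dict

def on_first_packet_added(queue: str, blocks_already_in_queue: int,
                          arrival_time: float, priorities: Dict[str, float]) -> None:
    if blocks_already_in_queue == 0:
        priorities[queue] = -arrival_time   # later arrival -> lower priority

priorities: Dict[str, float] = {}
on_first_packet_added("Q1", 0, arrival_time=1.0, priorities=priorities)
on_first_packet_added("Q2", 0, arrival_time=2.0, priorities=priorities)
on_first_packet_added("Q1", 1, arrival_time=3.0, priorities=priorities)  # no change
print(priorities)   # -> {'Q1': -1.0, 'Q2': -2.0}
```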
For a detailed implementation of updating the priority of the third scheduling queue by the forwarding node, reference may likewise be made to the two priority update manners described for the first enqueuing manner in the scenario where the target attribute is the arrival time of the first data packet of the corresponding object block, and details are not described again here.
In addition, in the third enqueuing manner, the priority of a scheduling queue is determined based on the target attribute of the object blocks buffered in the scheduling queue, one scheduling queue buffers multiple object blocks, and the first packets of these object blocks do not arrive consecutively; that is, first packets that arrive at the forwarding node one after another are buffered in different scheduling queues. In this scenario, to prevent an object block that arrived earlier from experiencing an excessively long transmission time, the priority of a scheduling queue needs to be updated when all data packets of one of the object blocks buffered in that scheduling queue have been sent.
Therefore, in some embodiments, when all data packets of one of the object blocks buffered in a scheduling queue have been sent, the forwarding node may update the priority of the scheduling queue as follows: if all data packets of one of the object blocks buffered by any one of the multiple scheduling queues have been sent, the priority of that scheduling queue is updated so that its updated priority is lower than the priorities of the other scheduling queues that have buffered data packets.
The priorities of the other scheduling queues that have buffered data packets may or may not be updated. If they are updated, the relative order of their priorities before and after the update remains unchanged.
For a specific implementation, reference may be made to the first enqueuing manner, and details are not described again here.
Optionally, in other embodiments, the priority update may be triggered only when all data packets of one of the object blocks in the scheduling queue with the highest priority have been sent. In this scenario, if all data packets of one of the object blocks buffered by any one of the multiple scheduling queues have been sent, it is further determined whether that scheduling queue is the scheduling queue with the highest priority; if it is not, the priority update is not triggered.
Correspondingly, if that scheduling queue is the scheduling queue with the highest priority, the priority update is triggered. Specifically, the scheduling queues in which all data packets of at least one buffered object block have been sent before the current time are determined, and the priorities of these scheduling queues are set lower than the priorities of the other scheduling queues that still buffer data packets.
Likewise, the priorities of the other scheduling queues that have buffered data packets may or may not be updated. If they are updated, the relative order of their priorities before and after the update remains unchanged.
In addition, in the third enqueuing manner, for the manner in which the forwarding node determines whether all data packets of an object block have been sent, reference may likewise be made to the first enqueuing manner, and details are not described again here.
It should be noted that the foregoing three enqueuing manners are used as examples to explain step 603, and the embodiments of this application do not limit the specific manner in which the forwarding node allocates the third scheduling queue to a newly received object block.
FIG. 7 is a flowchart of a data packet sending method provided by an embodiment of this application. The method explains how the forwarding node sends the data packets in the scheduling queues. As shown in FIG. 7, the method includes the following step 701 and step 702.
Step 701: The forwarding node determines a first scheduling queue from multiple scheduling queues based on the priority of each of the multiple scheduling queues. Each of the multiple scheduling queues is used to buffer data packets of at least one object block, data packets of the same object block are buffered in the same scheduling queue, the priority of each scheduling queue is determined based on a target attribute of the object block to which the data packets buffered in the scheduling queue belong, and the target attribute is an attribute that remains unchanged while the data packets of the corresponding object block are being sent.
As can be seen from the embodiment shown in FIG. 6, in this embodiment of this application, to enable the forwarding node to schedule and send data packets in units of object blocks, multiple scheduling queues are configured on the forwarding node, and each scheduling queue is used to buffer data packets of at least one object block. For example, each scheduling queue is used to buffer data packets of one object block, or each scheduling queue is used to buffer data packets of two or more object blocks. In other words, data packets of the same object block are buffered in the same scheduling queue.
Further, to reduce the algorithm complexity of the forwarding node, a priority may be configured for each scheduling queue. In this way, when forwarding data packets, the forwarding node can quickly select one scheduling queue from the multiple scheduling queues based on the priorities of the scheduling queues and then send the data packets in the selected scheduling queue. On this basis, the following technical effect can be achieved:
Since the priority of a scheduling queue is related to the fixed target attribute of the object blocks it buffers rather than to specific data packets, the relative order of the priorities of the scheduling queues basically does not change within a short period of time. While the priorities of the scheduling queues remain unchanged, the forwarding node selects data packets from the same scheduling queue to send each time within that short period. Furthermore, since data packets of the same object block are buffered in the same scheduling queue, the data packets sent by the forwarding node within that short period are very likely to belong to the same object block. In this way, the data packets of the same object block are sent together as far as possible, which reduces the frequency with which data packets of different object blocks are sent interleaved.
In step 701, the forwarding node may determine the first scheduling queue specifically by selecting, from the multiple scheduling queues, the scheduling queue with the highest priority as the first scheduling queue.
Step 702: The forwarding node sends the data packets in the first scheduling queue.
In some embodiments, step 702 may be implemented as follows: the forwarding node determines a second data packet from the first scheduling queue, where the second data packet is the data packet with the earliest enqueuing time in the first scheduling queue, and the forwarding node sends the second data packet. This manner is simple to operate and easy to implement.
Optionally, in other embodiments, step 702 may be implemented as follows: the forwarding node determines the object block to which the most recently sent data packet before the current time belongs, and then selects, from the first scheduling queue, a data packet belonging to that object block and sends it. In this manner, it can be strictly guaranteed that data packets of different object blocks are not sent interleaved.
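The sketch below puts steps 701 and 702 together in illustrative Python, using the simpler of the two variants above (send the earliest-enqueued packet of the highest-priority non-empty queue). The per-queue FIFO model and the tuple representation of a packet are assumptions made only for this example.

```python
# Illustrative sketch of steps 701-702: pick the scheduling queue with the
# highest priority among the queues that still buffer packets, then send its
# earliest-enqueued data packet.
from collections import deque
from typing import Deque, Dict, Optional, Tuple

def send_one(queues: Dict[str, Deque[Tuple[int, int]]],
             priorities: Dict[str, int]) -> Optional[Tuple[int, int]]:
    non_empty = [q for q, buf in queues.items() if buf]
    if not non_empty:
        return None
    first_queue = max(non_empty, key=lambda q: priorities[q])   # step 701
    return queues[first_queue].popleft()                        # step 702

# Packets are represented as (object block id, sequence number) for illustration.
queues = {"A": deque([(1, 0), (1, 1)]), "B": deque([(2, 0)])}
priorities = {"A": 10, "B": 9}
print(send_one(queues, priorities))   # -> (1, 0): block 1's packets go out first
```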
To facilitate a further understanding of the embodiments of this application, the interaction among the sending end, the forwarding node, and the receiving end is described below with reference to FIG. 8.
As shown in FIG. 8, when an upper-layer application of the sending end needs to send an object block, the sending end splits the object block into multiple data packets and marks the first packet and the tail packet among them, that is, the first packet carries a first-packet label and the tail packet carries a tail-packet label. In addition, each data packet carries the identifier of the object block. The sending end then sends the multiple data packets.
When the forwarding node receives a first packet, it selects a scheduling queue and adds the first packet to the selected scheduling queue (that is, the first packet selects the queue to enter). In addition, the forwarding node determines the priority of the scheduling queue based on the arrival order of the object blocks (that is, the arrival time of the first packet) or the DL of the object block (that is, the network-allowed delay). The forwarding node forwards the data packets in the scheduling queues based on the priorities of the scheduling queues. After receiving the data packets, the receiving end splices them back into object blocks, so that the receiving end receives data on an object-block basis.
In addition, during packet sending, the forwarding node may trigger a priority update of the scheduling queues once all data packets of an object block have been sent.
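As a rough end-to-end illustration of the sending end's side of FIG. 8, the following non-limiting sketch splits an object block into data packets and marks the first and tail packets. The field names (block_id, payload, head_flag, tail_flag) and the MTU value are assumptions introduced for the example and do not represent a packet format defined by this application.

```python
# Illustrative sketch: split an object block into data packets that each carry
# the object block identifier, with the first packet marked by a first-packet
# label and the last packet marked by a tail-packet label.
from dataclasses import dataclass
from typing import List

@dataclass
class DataPacket:
    block_id: int
    payload: bytes
    head_flag: bool = False   # first-packet label
    tail_flag: bool = False   # tail-packet label

def packetize(block_id: int, data: bytes, mtu: int = 1400) -> List[DataPacket]:
    chunks = [data[i:i + mtu] for i in range(0, len(data), mtu)] or [b""]
    packets = [DataPacket(block_id, chunk) for chunk in chunks]
    packets[0].head_flag = True    # the first packet may additionally carry the
    packets[-1].tail_flag = True   # block's network-allowed delay and data size
    return packets

pkts = packetize(block_id=7, data=b"x" * 3000)
print(len(pkts), pkts[0].head_flag, pkts[-1].tail_flag)   # -> 3 True True
```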
It should be noted that each step in FIG. 8 may refer to the embodiments of FIG. 5 to FIG. 7 and does not constitute a limitation on the embodiments of FIG. 5 to FIG. 7.
As can be seen from the embodiments shown in FIG. 5 to FIG. 8, the embodiments of this application include the following:
1. A single scheduling queue per object block can be implemented.
The data packets of one object block are buffered in one scheduling queue, which prevents data packets of different object blocks from being sent interleaved, as in the first enqueuing manner in the embodiment of FIG. 6. Optionally, if there are too many object blocks, multiple object blocks may be buffered in one scheduling queue, as in the second and third enqueuing manners in the embodiment of FIG. 6.
2. Round-robin scheduling of the scheduling queues can be implemented, so that every scheduling queue gets the highest-priority opportunity.
With the three enqueuing manners of FIG. 6, object blocks enter the scheduling queues cyclically, so every scheduling queue may become the scheduling queue with the highest priority, allowing the priority to rotate among the different scheduling queues. This rotation of priority ensures fair forwarding opportunities among the scheduling queues and prevents any scheduling queue from being starved.
3. The update of a scheduling queue's priority is triggered based on the first data packet (head packet) and the last data packet (tail packet) of an object block, and the scheduling queue to be joined is selected based on the data size of the object block or the first threshold (carrying-quantity upper limit) or second threshold (token) of the scheduling queue.
In contrast to traditional scheduling, which is based directly on the priority of a data packet without considering information about the object block, the embodiments of this application take into account information such as the data size of the object block and its head and tail packets and schedule accordingly, thereby de-interleaving different object blocks.
4. Other information about the object block, such as the network-allowed delay, is used to achieve a better transmission effect.
When certain object blocks have a network-allowed delay requirement, the priority of the scheduling queue in which an object block is located can be determined according to the network-allowed delay, so that more urgent objects are sent out sooner.
In summary, based on the foregoing, the embodiments of this application can achieve the following technical effects:
1. The embodiments of this application send object blocks based on scheduling queues. Because data packets of the same object block are buffered in the same scheduling queue, and the priority of a scheduling queue is related to the fixed target attribute of the object blocks it buffers rather than to specific data packets, the relative order of the priorities of the scheduling queues basically does not change within a short period of time. While the priorities of the scheduling queues remain unchanged, the forwarding node selects data packets from the same scheduling queue to send each time within that short period. Furthermore, since data packets of the same object block are buffered in the same scheduling queue, the data packets sent by the forwarding node within that short period are very likely to belong to the same object block. In this way, the data packets of the same object block are sent together as far as possible, which reduces the frequency with which data packets of different object blocks are sent interleaved.
Compared with reducing the frequency of interleaved sending between data packets of different object blocks by means of a comparator and a matcher, the embodiments of this application achieve this with scheduling queues alone, with low hardware overhead.
2. The embodiments of this application enable the forwarding node to forward data packets at high speed without loss of throughput.
If a comparator and a matcher were used to reduce the frequency of interleaved sending between data packets of different object blocks, the forwarding efficiency of the forwarding node would be reduced. The embodiments of this application can be implemented with scheduling queues, so that the forwarding node forwards data packets at high speed.
3. The entire process of the embodiments of this application requires no protocol negotiation.
In contrast to having each node forward object blocks by means of resource reservation (for example, reserving a time slice for a certain object block) to avoid interleaved sending between different object blocks, the embodiments of this application transmit data packets efficiently and with low delay without requiring intermediate nodes to reserve resources and without a negotiation process.
FIG. 9 is a schematic structural diagram of a forwarding node provided by an embodiment of this application. As shown in FIG. 9, the forwarding node 900 includes the following modules 901 to 903.
A processing module 901, configured to determine a first scheduling queue from multiple scheduling queues based on the priority of each of the multiple scheduling queues. For a specific implementation, refer to step 701 in the embodiment of FIG. 7.
Each of the multiple scheduling queues is used to buffer data packets of at least one object block, data packets of the same object block are buffered in the same scheduling queue, the priority of each scheduling queue is determined based on a target attribute of the object block to which the data packets buffered in the scheduling queue belong, and the target attribute is an attribute that remains unchanged while the data packets of the corresponding object block are being sent.
A sending module 902, configured to send the data packets in the first scheduling queue. For a specific implementation, refer to step 702 in the embodiment of FIG. 7.
Optionally, the forwarding node further includes:
a receiving module 903, configured to receive a first data packet, where the first data packet carries an identifier of a target object block, and the target object block is the object block to which the first data packet belongs. For a specific implementation, refer to step 601 in the embodiment of FIG. 6.
The processing module 901 is further configured to: if the first data packet does not carry a first-packet label, add the first data packet to a second scheduling queue based on the identifier of the target object block, where the second scheduling queue is a scheduling queue, among the multiple scheduling queues, that has buffered data packets of the target object block, and the first-packet label is used to indicate whether the first data packet is the first data packet of the target object block. For a specific implementation, refer to step 602 in the embodiment of FIG. 6.
Optionally, the processing module 901 is further configured to: if the first data packet also carries the first-packet label, select one scheduling queue from the multiple scheduling queues as a third scheduling queue, and add the first data packet to the third scheduling queue. For a specific implementation, refer to step 603 in the embodiment of FIG. 6.
Optionally, the processing module is configured to:
select, from the multiple scheduling queues, a scheduling queue that has not buffered any data packets as the third scheduling queue.
Optionally, each of the multiple scheduling queues buffers data packets of at most one object block.
Optionally, the target attribute is the network-allowed delay of the corresponding object block, and the first data packet also carries the network-allowed delay of the target object block;
the processing module is further configured to:
update the priority of the third scheduling queue so that the updated priority of the third scheduling queue is higher than the priority of a fourth scheduling queue and lower than the priority of a fifth scheduling queue, where the fourth scheduling queue is a scheduling queue, among the multiple scheduling queues, whose buffered object block has a network-allowed delay greater than the network-allowed delay of the target object block, and the fifth scheduling queue is a scheduling queue, among the multiple scheduling queues, whose buffered object block has a network-allowed delay less than the network-allowed delay of the target object block.
Optionally, the processing module is configured to:
determine the remaining capacity of a sixth scheduling queue, where the sixth scheduling queue is the scheduling queue in which the most recently received first packet before the current time is located; and
if the remaining capacity of the sixth scheduling queue is insufficient to buffer the target object block, select, from the multiple scheduling queues, a scheduling queue that has not buffered any data packets as the third scheduling queue.
Optionally, the processing module is further configured to:
if the remaining capacity of the sixth scheduling queue is sufficient to buffer the target object block, use the sixth scheduling queue as the third scheduling queue.
Optionally, the processing module is configured to:
determine the number of buffered object blocks, where the number of buffered object blocks is the number of object blocks to which the data packets already buffered in the sixth scheduling queue belong; and
determine the difference between a first threshold and the number of buffered object blocks as the remaining capacity of the sixth scheduling queue, where the first threshold is the number of object blocks that the sixth scheduling queue can buffer;
that the remaining capacity of the sixth scheduling queue is insufficient to buffer the data packets of the target object block means that the remaining capacity of the sixth scheduling queue is 0.
Optionally, the processing module is configured to:
determine the total size of buffered data, where the total size of buffered data is the total size of the data packets already buffered in the sixth scheduling queue; and
determine the difference between a second threshold and the total size of buffered data as the remaining capacity of the sixth scheduling queue, where the second threshold is the total size of data packets that the sixth scheduling queue can buffer;
the first data packet also carries the data size of the target object block, and that the remaining capacity of the sixth scheduling queue is insufficient to buffer the data packets of the target object block means that the remaining capacity of the sixth scheduling queue is smaller than the data size of the target object block.
Optionally, the target attribute is the arrival time of the first data packet of the corresponding object block;
the processing module is further configured to:
update the priority of the third scheduling queue so that the updated priority of the third scheduling queue is lower than the priorities of the other scheduling queues that have buffered data packets.
Optionally, the processing module is further configured to:
if all data packets of all object blocks buffered by any one of the multiple scheduling queues have been sent, update the priority of that scheduling queue so that its updated priority is lower than the priorities of the other scheduling queues that have buffered data packets.
Optionally, the multiple scheduling queues are arranged in order;
the processing module is configured to:
determine a sixth scheduling queue from the multiple scheduling queues, where the sixth scheduling queue is the scheduling queue in which the most recently received first packet before the current time is located; and
determine the scheduling queue ranked after the sixth scheduling queue as the third scheduling queue.
Optionally, the target attribute is the arrival time of the first data packet of the corresponding object block;
the processing module is further configured to:
if the third scheduling queue does not buffer data packets of any other object block besides the first data packet, update the priority of the third scheduling queue so that the updated priority of the third scheduling queue is lower than the priorities of the other scheduling queues that have buffered data packets.
Optionally, the processing module is further configured to:
if all data packets of one of the object blocks buffered by any one of the multiple scheduling queues have been sent, update the priorities of the multiple scheduling queues so that the updated priority of that scheduling queue is lower than the updated priorities of the other scheduling queues that have buffered data packets.
Optionally, the processing module is configured to:
update the priority of that scheduling queue if its priority is the highest priority.
In this embodiment of this application, because data packets of the same object block are buffered in the same scheduling queue, and the priority of a scheduling queue is related to the fixed target attribute of the object blocks it buffers rather than to specific data packets, the relative order of the priorities of the scheduling queues basically does not change within a short period of time. While the priorities of the scheduling queues remain unchanged, the forwarding node selects data packets from the same scheduling queue to send each time within that short period. Furthermore, since data packets of the same object block are buffered in the same scheduling queue, the data packets sent by the forwarding node within that short period are very likely to belong to the same object block. In this way, the data packets of the same object block are sent together as far as possible, which reduces the frequency with which data packets of different object blocks are sent interleaved.
Compared with reducing the frequency of interleaved sending between data packets of different object blocks by means of a comparator and a matcher, the embodiments of this application achieve this through the scheduling queues and their corresponding priorities, with low hardware overhead and high forwarding efficiency.
It should be noted that when the forwarding node provided in the foregoing embodiment forwards data packets, the division into the foregoing functional modules is merely used as an example. In practical applications, the foregoing functions may be allocated to different functional modules as required; that is, the internal structure of the device is divided into different functional modules to complete all or some of the functions described above. In addition, the forwarding node provided in the foregoing embodiment and the data packet sending method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, and details are not described again here.
FIG. 10 is a schematic structural diagram of a sending end provided by an embodiment of this application. As shown in FIG. 10, the sending end 1000 includes the following modules 1001 and 1002.
A processing module 1001, configured to determine multiple data packets of an object block to be sent, where each of the multiple data packets carries an identifier of the object block, the first packet among the multiple data packets carries a first-packet label, and the first-packet label is used to indicate that the corresponding data packet is the first data packet of the object block to be sent. For a specific implementation, refer to step 501 in the embodiment of FIG. 5.
A sending module 1002, configured to send the multiple data packets. For a specific implementation, refer to step 502 in the embodiment of FIG. 5.
Optionally, the first packet also carries the network-allowed delay of the object block to be sent.
Optionally, the first packet also carries the data size of the object block to be sent.
Optionally, the tail packet among the multiple data packets carries a tail-packet label, and the tail-packet label is used to indicate that the corresponding data packet is the last data packet of the object block to be sent.
The purpose of carrying the first-packet label in the first packet is to make it easy for the forwarding node to identify a new object block and allocate a scheduling queue to it. Each data packet carries the identifier of the object block so that the forwarding node can buffer the data packets of the same object block in the same scheduling queue. Based on the foregoing configuration of the sending end, the forwarding node can schedule data packets in units of object blocks through the embodiments shown in FIG. 6 and FIG. 7.
It should be noted that when the sending end provided in the foregoing embodiment sends data packets, the division into the foregoing functional modules is merely used as an example. In practical applications, the foregoing functions may be allocated to different functional modules as required; that is, the internal structure of the device is divided into different functional modules to complete all or some of the functions described above. In addition, the sending end provided in the foregoing embodiment and the data packet sending method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, and details are not described again here.
Figure 11 is a schematic structural diagram of a computer device provided by an embodiment of this application. The forwarding node, the sending end and the receiving end in the foregoing embodiments can all be implemented by the computer device shown in Figure 11. Referring to Figure 11, the computer device includes a processor 1101, a communication bus 1102, a memory 1103 and at least one communication interface 1104.
The processor 1101 may be a general-purpose central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of this application.
The communication bus 1102 is used to transfer information between the above components.
The memory 1103 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 1103 may exist independently and be connected to the processor 1101 through the communication bus 1102, or the memory 1103 may be integrated with the processor 1101.
The memory 1103 is used to store the program code for executing the solution of this application, and execution is controlled by the processor 1101. The processor 1101 is used to execute the program code stored in the memory 1103. The program code may include one or more software modules. Both the forwarding node and the sending end can determine the data used for developing applications through the processor 1101 and one or more software modules in the program code in the memory 1103.
The communication interface 1104 uses any transceiver-like device to communicate with other devices or communication networks, such as an Ethernet network, a radio access network (RAN), or a wireless local area network (WLAN).
In a specific implementation, as an embodiment, the computer device may include multiple processors, such as the processor 1101 and the processor 1105 shown in Figure 11. Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor here may refer to one or more devices, circuits, and/or processing cores for processing data (for example, computer program instructions).
The above computer device may be a general-purpose computer device or a special-purpose computer device. In a specific implementation, the computer device may be a desktop computer, a portable computer, a network server, a personal digital assistant (PDA), a mobile phone, a tablet computer, a wireless terminal device, a router or another communication device, or an embedded device. The embodiments of this application do not limit the type of the computer device.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of this application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wire (for example, coaxial cable, optical fiber or digital subscriber line (DSL)) or wirelessly (for example, infrared, radio or microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, a hard disk or a magnetic tape), an optical medium (for example, a digital versatile disc (DVD)), or a semiconductor medium (for example, a solid state disk (SSD)).
Those of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above are embodiments provided by this application and are not intended to limit the embodiments of this application. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the embodiments of this application shall fall within the protection scope of the embodiments of this application.

Claims (36)

  1. A data packet sending method, characterized in that the method comprises:
    a forwarding node determines a first scheduling queue from a plurality of scheduling queues based on a priority of each of the plurality of scheduling queues;
    wherein each of the plurality of scheduling queues is used to buffer data packets of at least one object block, data packets of the same object block are buffered in the same scheduling queue, and the priority of each scheduling queue is determined based on a target attribute of the object block to which the data packets buffered in the corresponding scheduling queue belong, the target attribute being an attribute that remains unchanged while the data packets of the corresponding object block are being sent; and
    the forwarding node sends the data packets in the first scheduling queue.
  2. The method according to claim 1, characterized in that the method further comprises:
    the forwarding node receives a first data packet, wherein the first data packet carries an identifier of a target object block, and the target object block is the object block to which the first data packet belongs; and
    if the first data packet does not carry a first-packet label, the forwarding node adds the first data packet to a second scheduling queue based on the identifier of the target object block, wherein the second scheduling queue is a scheduling queue, among the plurality of scheduling queues, in which data packets of the target object block have already been buffered, and the first-packet label is used to indicate whether the first data packet is the first data packet of the target object block.
  3. The method according to claim 2, characterized in that after the forwarding node receives the first data packet, the method further comprises:
    if the first data packet also carries the first-packet label, the forwarding node selects one scheduling queue from the plurality of scheduling queues as a third scheduling queue and adds the first data packet to the third scheduling queue.
  4. The method according to claim 3, characterized in that the forwarding node selecting one scheduling queue from the plurality of scheduling queues as the third scheduling queue comprises:
    the forwarding node selects, from the plurality of scheduling queues, a scheduling queue in which no data packets are buffered as the third scheduling queue.
  5. The method according to claim 4, characterized in that each of the plurality of scheduling queues buffers data packets of at most one object block.
  6. The method according to claim 5, characterized in that the target attribute is the network-allowed delay of the corresponding object block, and the first data packet also carries the network-allowed delay of the target object block;
    after the forwarding node selects, from the plurality of scheduling queues, a scheduling queue in which no data packets are buffered as the third scheduling queue, the method further comprises:
    the forwarding node updates the priority of the third scheduling queue so that the updated priority of the third scheduling queue is higher than the priority of a fourth scheduling queue and lower than the priority of a fifth scheduling queue, wherein the fourth scheduling queue refers to a scheduling queue, among the plurality of scheduling queues, in which the buffered object blocks have a network-allowed delay greater than the network-allowed delay of the target object block, and the fifth scheduling queue refers to a scheduling queue, among the plurality of scheduling queues, in which the buffered object blocks have a network-allowed delay smaller than the network-allowed delay of the target object block.
  7. The method according to claim 3, characterized in that the forwarding node selecting one scheduling queue from the plurality of scheduling queues as the third scheduling queue comprises:
    the forwarding node determines the remaining capacity of a sixth scheduling queue, wherein the sixth scheduling queue is the scheduling queue in which the first packet most recently received before the current time is located; and
    if the remaining capacity of the sixth scheduling queue is insufficient to buffer the target object block, the forwarding node selects, from the plurality of scheduling queues, a scheduling queue in which no data packets are buffered as the third scheduling queue.
  8. The method according to claim 7, characterized in that after the forwarding node determines the remaining capacity of the sixth scheduling queue, the method further comprises:
    if the remaining capacity of the sixth scheduling queue is sufficient to buffer the target object block, the forwarding node uses the sixth scheduling queue as the third scheduling queue.
  9. The method according to claim 7 or 8, characterized in that the forwarding node determining the remaining capacity of the sixth scheduling queue comprises:
    the forwarding node determines a number of buffered object blocks, the number of buffered object blocks being the number of object blocks to which the data packets already buffered in the sixth scheduling queue belong; and
    the forwarding node determines the difference between a first threshold and the number of buffered object blocks as the remaining capacity of the sixth scheduling queue, the first threshold being the number of object blocks that the sixth scheduling queue can buffer;
    wherein the remaining capacity of the sixth scheduling queue being insufficient to buffer the data packets of the target object block means that the remaining capacity of the sixth scheduling queue is 0.
  10. The method according to claim 7 or 8, characterized in that the forwarding node determining the remaining capacity of the sixth scheduling queue comprises:
    the forwarding node determines a total size of buffered data, the total size of buffered data being the total size of the data packets already buffered in the sixth scheduling queue; and
    the forwarding node determines the difference between a second threshold and the total size of buffered data as the remaining capacity of the sixth scheduling queue, the second threshold being the total size of the data packets that the sixth scheduling queue can buffer;
    wherein the first data packet also carries the data size of the target object block, and the remaining capacity of the sixth scheduling queue being insufficient to buffer the data packets of the target object block means that the remaining capacity of the sixth scheduling queue is smaller than the data size of the target object block.
  11. The method according to claim 4 or 7, characterized in that the target attribute is the arrival time of the first data packet of the corresponding object block;
    after the forwarding node selects, from the plurality of scheduling queues, a scheduling queue in which no data packets are buffered as the third scheduling queue, the method further comprises:
    the forwarding node updates the priority of the third scheduling queue so that the updated priority of the third scheduling queue is lower than the priorities of other scheduling queues in which data packets are already buffered.
  12. The method according to any one of claims 4-11, characterized in that the method further comprises:
    if all data packets of all object blocks buffered in any one of the plurality of scheduling queues have been sent, updating the priority of that scheduling queue so that its updated priority is lower than the priorities of other scheduling queues in which data packets are already buffered.
  13. The method according to claim 3, characterized in that the plurality of scheduling queues are arranged in order;
    the forwarding node selecting one scheduling queue from the plurality of scheduling queues as the third scheduling queue comprises:
    the forwarding node determines a sixth scheduling queue from the plurality of scheduling queues, wherein the sixth scheduling queue is the scheduling queue in which the first packet most recently received before the current time is located; and
    determining a scheduling queue ordered after the sixth scheduling queue as the third scheduling queue.
  14. The method according to claim 13, characterized in that the target attribute is the arrival time of the first data packet of the corresponding object block;
    after the forwarding node selects one scheduling queue from the plurality of scheduling queues as the third scheduling queue, the method further comprises:
    if the third scheduling queue buffers no data packets of object blocks other than the first data packet, the forwarding node updates the priority of the third scheduling queue so that the updated priority of the third scheduling queue is lower than the priorities of other scheduling queues in which data packets are already buffered.
  15. The method according to claim 13 or 14, characterized in that the method further comprises:
    if all data packets of one of the object blocks buffered in any one of the plurality of scheduling queues have been sent, updating the priorities of the plurality of scheduling queues so that the updated priority of that scheduling queue is lower than the updated priorities of other scheduling queues in which data packets are already buffered.
  16. The method according to claim 12 or 15, characterized in that updating the priority of that scheduling queue comprises:
    if the priority of that scheduling queue is the highest priority, updating the priority of that scheduling queue.
  17. A data packet sending method, characterized in that the method comprises:
    a sending end determines a plurality of data packets of an object block to be sent, wherein each of the plurality of data packets carries an identifier of the object block, and the first packet among the plurality of data packets carries a first-packet label, the first-packet label being used to indicate that the corresponding data packet is the first data packet of the object block to be sent; and
    the sending end sends the plurality of data packets.
  18. The method according to claim 17, characterized in that the first packet also carries the network-allowed delay of the object block to be sent.
  19. The method according to claim 17 or 18, characterized in that the first packet also carries the data size of the object block to be sent.
  20. The method according to any one of claims 17-19, characterized in that the tail packet among the plurality of data packets carries a tail-packet label, the tail-packet label being used to indicate that the corresponding data packet is the last data packet of the object block to be sent.
  21. A forwarding node, characterized in that the forwarding node comprises:
    a processing module configured to determine a first scheduling queue from a plurality of scheduling queues based on a priority of each of the plurality of scheduling queues;
    wherein each of the plurality of scheduling queues is used to buffer data packets of at least one object block, data packets of the same object block are buffered in the same scheduling queue, and the priority of each scheduling queue is determined based on a target attribute of the object block to which the data packets buffered in the corresponding scheduling queue belong, the target attribute being an attribute that remains unchanged while the data packets of the corresponding object block are being sent; and
    a sending module configured to send the data packets in the first scheduling queue.
  22. The forwarding node according to claim 21, characterized in that the forwarding node further comprises:
    a receiving module configured to receive a first data packet, wherein the first data packet carries an identifier of a target object block, and the target object block is the object block to which the first data packet belongs;
    wherein the processing module is further configured to: if the first data packet does not carry a first-packet label, add the first data packet to a second scheduling queue based on the identifier of the target object block, the second scheduling queue being a scheduling queue, among the plurality of scheduling queues, in which data packets of the target object block have already been buffered, and the first-packet label being used to indicate whether the first data packet is the first data packet of the target object block.
  23. The forwarding node according to claim 22, characterized in that the processing module is further configured to:
    if the first data packet also carries the first-packet label, select one scheduling queue from the plurality of scheduling queues as a third scheduling queue and add the first data packet to the third scheduling queue.
  24. The forwarding node according to claim 23, characterized in that the processing module is configured to:
    select, from the plurality of scheduling queues, a scheduling queue in which no data packets are buffered as the third scheduling queue.
  25. The forwarding node according to claim 24, characterized in that each of the plurality of scheduling queues buffers data packets of at most one object block.
  26. The forwarding node according to claim 25, characterized in that the target attribute is the network-allowed delay of the corresponding object block, and the first data packet also carries the network-allowed delay of the target object block;
    the processing module is further configured to:
    update the priority of the third scheduling queue so that the updated priority of the third scheduling queue is higher than the priority of a fourth scheduling queue and lower than the priority of a fifth scheduling queue, wherein the fourth scheduling queue refers to a scheduling queue, among the plurality of scheduling queues, in which the buffered object blocks have a network-allowed delay greater than the network-allowed delay of the target object block, and the fifth scheduling queue refers to a scheduling queue, among the plurality of scheduling queues, in which the buffered object blocks have a network-allowed delay smaller than the network-allowed delay of the target object block.
  27. The forwarding node according to claim 23, characterized in that the processing module is configured to:
    determine the remaining capacity of a sixth scheduling queue, the sixth scheduling queue being the scheduling queue in which the first packet most recently received before the current time is located; and
    if the remaining capacity of the sixth scheduling queue is insufficient to buffer the target object block, select, from the plurality of scheduling queues, a scheduling queue in which no data packets are buffered as the third scheduling queue.
  28. The forwarding node according to claim 24 or 27, characterized in that the target attribute is the arrival time of the first data packet of the corresponding object block;
    the processing module is further configured to:
    update the priority of the third scheduling queue so that the updated priority of the third scheduling queue is lower than the priorities of other scheduling queues in which data packets are already buffered.
  29. The forwarding node according to any one of claims 24-28, characterized in that the processing module is further configured to:
    if all data packets of all object blocks buffered in any one of the plurality of scheduling queues have been sent, update the priority of that scheduling queue so that its updated priority is lower than the priorities of other scheduling queues in which data packets are already buffered.
  30. The forwarding node according to claim 23, characterized in that the plurality of scheduling queues are arranged in order;
    the processing module is configured to:
    determine a sixth scheduling queue from the plurality of scheduling queues, the sixth scheduling queue being the scheduling queue in which the first packet most recently received before the current time is located; and
    determine a scheduling queue ordered after the sixth scheduling queue as the third scheduling queue.
  31. The forwarding node according to claim 30, characterized in that the target attribute is the arrival time of the first data packet of the corresponding object block;
    the processing module is further configured to:
    if the third scheduling queue buffers no data packets of object blocks other than the first data packet, update the priority of the third scheduling queue so that the updated priority of the third scheduling queue is lower than the priorities of other scheduling queues in which data packets are already buffered.
  32. The forwarding node according to claim 30 or 31, characterized in that the processing module is further configured to:
    if all data packets of one of the object blocks buffered in any one of the plurality of scheduling queues have been sent, update the priorities of the plurality of scheduling queues so that the updated priority of that scheduling queue is lower than the updated priorities of other scheduling queues in which data packets are already buffered.
  33. A sending end, characterized in that the sending end comprises:
    a processing module configured to determine a plurality of data packets of an object block to be sent, wherein each of the plurality of data packets carries an identifier of the object block, and the first packet among the plurality of data packets carries a first-packet label, the first-packet label being used to indicate that the corresponding data packet is the first data packet of the object block to be sent; and
    a sending module configured to send the plurality of data packets.
  34. A forwarding node, characterized in that the forwarding node comprises a memory and a processor;
    the memory is used to store a program that supports the forwarding node in performing the method according to any one of claims 1-16, and to store data involved in implementing the method according to any one of claims 1-16; and
    the processor is configured to execute the program stored in the memory.
  35. A sending end, characterized in that the sending end comprises a memory and a processor;
    the memory is used to store a program that supports the sending end in performing the method according to any one of claims 17-20, and to store data involved in implementing the method according to any one of claims 17-20; and
    the processor is configured to execute the program stored in the memory.
  36. A computer-readable storage medium, characterized in that the computer-readable storage medium stores instructions which, when run on a computer, cause the computer to perform the method according to any one of claims 1-16 or 17-20.
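For illustration only, the sketch below outlines in Python how a forwarding node might realize the enqueue and scheduling behaviour described in claims 1-4, assuming packets shaped like those in the earlier sender sketch (with block_id and first_flag fields). The class names, the convention that a smaller priority value means a higher priority, the in-memory deques and the link object with a send() method are assumptions made here; they are not taken from the claims or the description.

```python
from collections import deque
from typing import Dict, List

class SchedulingQueue:
    def __init__(self, priority: int):
        self.priority = priority       # smaller value = higher priority (assumed convention)
        self.packets: deque = deque()  # buffered data packets
        self.block_ids: set = set()    # object blocks currently buffered in this queue

class ForwardingNode:
    def __init__(self, num_queues: int):
        self.queues: List[SchedulingQueue] = [SchedulingQueue(priority=i) for i in range(num_queues)]
        self.block_to_queue: Dict[int, SchedulingQueue] = {}

    def enqueue(self, pkt) -> None:
        """Claims 2-3: same block goes to the same queue; a first packet opens a new queue."""
        if not pkt.first_flag:
            # Sketch assumes the first packet of the block arrived earlier, so the mapping exists.
            queue = self.block_to_queue[pkt.block_id]   # second scheduling queue
        else:
            queue = self._pick_empty_queue()            # third scheduling queue
            queue.block_ids.add(pkt.block_id)
            self.block_to_queue[pkt.block_id] = queue
        queue.packets.append(pkt)

    def _pick_empty_queue(self) -> SchedulingQueue:
        """Claim 4: choose a scheduling queue that currently buffers no data packets."""
        for q in self.queues:
            if not q.packets:
                return q
        raise RuntimeError("no empty scheduling queue available")

    def send_from_first_queue(self, link) -> None:
        """Claim 1: pick the non-empty queue with the best priority and send its packets."""
        candidates = [q for q in self.queues if q.packets]
        if not candidates:
            return
        first_queue = min(candidates, key=lambda q: q.priority)   # first scheduling queue
        while first_queue.packets:
            link.send(first_queue.packets.popleft())
```

In this sketch the priority values are simply the initial queue indices; the claims that update priorities (for example claim 6, which orders queues by network-allowed delay, or claim 11, which uses the arrival time of the first packet) would adjust queue.priority after a new block is assigned to its queue.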
PCT/CN2023/092332 2022-05-24 2023-05-05 Packet transmission method, forwarding node, transmission end and storage medium WO2023226716A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210575912.6 2022-05-24
CN202210575912.6A CN117155874A (en) 2022-05-24 2022-05-24 Data packet transmitting method, forwarding node, transmitting terminal and storage medium

Publications (1)

Publication Number Publication Date
WO2023226716A1 2023-11-30

Family

ID=88906821

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/092332 WO2023226716A1 (en) 2022-05-24 2023-05-05 Packet transmission method, forwarding node, transmission end and storage medium

Country Status (2)

Country Link
CN (1) CN117155874A (en)
WO (1) WO2023226716A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117579705B (en) * 2024-01-16 2024-04-02 四川并济科技有限公司 System and method for dynamically scheduling servers based on batch data requests

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190332426A1 (en) * 2018-04-28 2019-10-31 EMC IP Holding Company LLC Method, device and computer program product for processing data
CN112311693A (en) * 2019-07-26 2021-02-02 华为技术有限公司 Service data transmission method and device
CN113067778A (en) * 2021-06-04 2021-07-02 新华三半导体技术有限公司 Flow management method and flow management chip
CN113327053A (en) * 2021-06-21 2021-08-31 中国农业银行股份有限公司 Task processing method and device
CN114153581A (en) * 2021-11-29 2022-03-08 北京金山云网络技术有限公司 Data processing method, data processing device, computer equipment and storage medium
CN114448903A (en) * 2020-10-20 2022-05-06 华为技术有限公司 Message processing method, device and communication equipment

Also Published As

Publication number Publication date
CN117155874A (en) 2023-12-01

Similar Documents

Publication Publication Date Title
US10341260B2 (en) Early queueing network device
RU2419226C2 (en) Memory control for high-speed control of access to environment
US6879561B1 (en) Method and system for wireless packet scheduling with per packet QoS support and link adaptation
TWI477127B (en) Computer-implemented method,machine-readable medium and client device for scheduling packet transmission
US8532030B2 (en) Techniques for initiating communication in a wireless network
JP2006500830A (en) System and method for managing wireless network channels
US7570634B2 (en) Priority queuing of frames in a TDMA network
US11533654B2 (en) Data packet prioritization for downlink transmission at network level
WO2023226716A1 (en) Packet transmission method, forwarding node, transmission end and storage medium
US11190980B1 (en) Tag-based data packet routing in dual connectivity systems
US20220086681A1 (en) Data packet prioritization for downlink transmission at sender level
KR101143211B1 (en) Method and apparatus for increasing radio frequency efficiency for mixed voice over internet protocol and data traffic
US11936761B2 (en) Tag-based data packet prioritization in dual connectivity systems
WO2021101640A1 (en) Method and apparatus of packet wash for in-time packet delivery
CN112838992A (en) Message scheduling method and network equipment
WO2019165855A1 (en) Message transmission method and device
US20210135999A1 (en) Packet Control Method, Flow Table Update Method, and Node Device
US20230412522A1 (en) Control apparatus, resource allocation method and program
CN111756557B (en) Data transmission method and device
WO2021026740A1 (en) Traffic balancing method, network device and electronic device
CA3119033C (en) Method and apparatus for dynamic track allocation in a network
WO2022246710A1 (en) Method for controlling data stream transmission and communication device
US20240049043A1 (en) Prioritizing data packets in wireless communication network
EP3902215A1 (en) Method for transmitting data and network device

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23810789

Country of ref document: EP

Kind code of ref document: A1