CN107968748A - Method and device for sending multipath traffic - Google Patents

Method and device for sending multipath traffic Download PDF

Info

Publication number
CN107968748A
Authority
CN
China
Prior art keywords
path
packet
switched
sent
edge node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610915269.1A
Other languages
Chinese (zh)
Inventor
肖阳
陈凯
李本超
毛航宇
肖臻
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Huawei Technologies Co Ltd
Original Assignee
Peking University
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University, Huawei Technologies Co Ltd filed Critical Peking University
Priority to CN201610915269.1A priority Critical patent/CN107968748A/en
Publication of CN107968748A publication Critical patent/CN107968748A/en
Pending legal-status Critical Current

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/24 Multipath
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/74 Address processing for routing
    • H04L45/742 Route cache; Operation thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/12 Avoiding congestion; Recovering from congestion
    • H04L47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/33 Flow control; Congestion control using forward notification

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a method and device for sending multipath traffic. In the method, an edge node obtains packets of a path to be switched, the packets of the path to be switched being packets that are switched from a first path to a second path during traffic forwarding because of load balancing, where the packets currently being sent on the first path and the packets of the path to be switched belong to the same data flow. The edge node caches the packets of the path to be switched in a buffer and, after a set time length, sends the packets of the path to be switched cached in the buffer over the second path. By first caching in a buffer the packets of the path to be switched that belong to the same data flow, and forwarding the packets of the path to be switched cached in the buffer over the second path only after the packets on the first path have been sent, the method avoids the reordering that occurs when packets belonging to the same data flow are forwarded over different paths.

Description

Method and device for sending multipath traffic
Technical field
Embodiments of the present invention relate to the field of communication technology, and in particular to a method and device for sending multipath traffic.
Background technology
The current Internet is a network of interconnected autonomous systems: its basic elements are autonomous systems, and autonomous systems at different levels are connected through key nodes to form larger autonomous systems and, ultimately, the whole Internet. For the Internet to guarantee high-quality communication, an efficient traffic forwarding mechanism is necessary. Traditional traffic forwarding is hop-by-hop forwarding based on IP (Internet Protocol) addresses. This approach often concentrates traffic on backbone paths or in the core network, which puts great pressure on the corresponding equipment and makes network performance hard to guarantee; such a state is, in essence, risky because the equipment runs at high load for long periods. The simplest remedy for this problem is to improve single-node performance so as to cope with peak-period traffic, but its drawbacks are obvious: on the one hand, single-device performance has a technical bottleneck; on the other hand, traffic peaks are usually short, which leads to a large waste of resources, a state that is essentially a contradiction between light average load and wasted resources. Another remedy is to make full use of the multiple paths in the network. This approach not only makes full use of network resources, but also has great scalability and is not limited by single-device performance.
To make full use of the multiple paths in a network, researchers have done a great deal of work. In wide-area networks, multipath traffic adjustment has been studied mainly in the field of traffic engineering, and in recent years the focus has shifted to data centers. Forwarding traffic over multiple paths usually takes four steps: 1) compute the available multiple paths; 2) measure the quality of the available paths; 3) distribute traffic across the different paths in proportions determined by path quality; 4) implement the actual traffic forwarding. Throughout this process, the granularity of traffic scheduling is critical: if the granularity is too coarse, the adjustment has no effect; if it is too fine, packets within a flow easily become reordered.
Among the various traffic forwarding techniques, per-packet forwarding is relatively common. A router processes data packets one packet at a time, so the per-packet approach is the most natural way to split traffic across multiple paths. A per-packet traffic splitting algorithm works as follows: all packets destined for the same destination are forwarded, in order and in rotation, over all the paths leading to that destination. For example, with three paths (path 1, path 2, path 3) and packets 1, 2, 3, 4, …, packet 1 takes path 1, packet 2 takes path 2, packet 3 takes path 3, packet 4 takes path 1, and so on. If the traffic sharing ratios on the paths are not the same, weighted round robin can be used instead. For example, if the weight ratio of the three paths is 1:2:1, then packet 1 takes path 1, packets 2 and 3 take path 2, packet 4 takes path 3, packet 5 takes path 1, and so on.
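As a toy illustration (not part of the patent text), the weighted round-robin splitting described above can be sketched in Python; path indices here are 0-based and packets are plain strings:

```python
from itertools import cycle

def weighted_round_robin(packets, weights):
    """Assign packets to paths by weighted round robin.

    weights: per-path integer weights, e.g. [1, 2, 1] means the
    second path carries two packets per round for every one packet
    on the first and third paths. Returns (packet, path_index) pairs.
    """
    # Expand weights into one path slot per packet of a round:
    # [1, 2, 1] -> [0, 1, 1, 2], then cycle through the slots.
    slots = [i for i, w in enumerate(weights) for _ in range(w)]
    return list(zip(packets, cycle(slots)))

# Weight ratio 1:2:1 over three paths, as in the example above.
schedule = weighted_round_robin(["m1", "m2", "m3", "m4", "m5"], [1, 2, 1])
```

With weights 1:2:1 this sends m1 on the first path, m2 and m3 on the second, m4 on the third, then wraps back to the first; note that this per-packet rotation is precisely what causes the reordering problem the invention addresses.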
The drawback of the per-packet splitting algorithm in such a scheme is equally obvious. Because it ignores data flow information, packets belonging to the same data flow may take different paths, and different paths may have different delays, so reordering within a data flow easily occurs. Reordering has a large impact on upper-layer transport protocols and easily degrades the performance of the whole network.
Summary of the invention
Embodiments of the present invention provide a method and device for sending multipath traffic, to solve the reordering problem that arises during traffic forwarding in the prior art.
In a first aspect, a method for sending multipath traffic is provided, including:
an edge node obtains packets of a path to be switched, the packets of the path to be switched being packets that are switched from a first path to a second path during traffic forwarding because of load balancing, where the packets currently being sent on the first path and the packets of the path to be switched belong to the same data flow;
the edge node caches the packets of the path to be switched in a buffer;
after a set time length, the edge node sends the packets of the path to be switched cached in the buffer over the second path, where the set time length satisfies the duration required for the packets currently being sent on the first path to finish sending.
With reference to the first aspect, in a first possible implementation of the first aspect, while the edge node is sending the packets of the path to be switched cached in the buffer over the second path, the method further includes:
if packets of a new path to be switched are obtained, the edge node continues to cache the packets of the new path to be switched in the buffer.
With reference to the first aspect, in a second possible implementation of the first aspect, after the edge node has finished sending the packets of the path to be switched cached in the buffer over the second path, the method further includes:
if packets of a new path to be switched are obtained, the edge node sends the packets of the new path to be switched directly.
With reference to the first aspect, in a third possible implementation of the first aspect, the edge node obtaining the packets of the path to be switched includes:
the edge node obtains the traffic sharing ratio of each path and the number of packets actually waiting to be sent on each path;
the edge node determines the number of packets required on each path according to the traffic sharing ratio of each path;
the edge node determines the packets of the path to be switched according to the number of packets required on each path and the number of packets actually waiting to be sent on each path.
With reference to the first aspect, in a fourth possible implementation of the first aspect, the edge node obtaining the packets of the path to be switched includes:
the edge node obtains the traffic sharing ratio of each path and the actual time each path has spent sending packets to the same destination node in the current adjustment cycle;
the edge node determines the time required for sending packets on each path according to the traffic sharing ratio of each path;
the edge node determines the packets of the path to be switched according to the time required for sending packets on each path and the actual time each path has spent sending packets to the same destination node in the current adjustment cycle.
With reference to the first aspect or the fourth possible implementation of the first aspect, in a fifth possible implementation of the first aspect, the time for sending packets on each path, determined by the edge node according to the traffic sharing ratio of each path, satisfies the following formula (1):
t_i = T * F_i - ΔT    (1)
where t_i is the time for sending packets on path i, T is the adjustment cycle, F_i is the traffic sharing ratio of path i, and ΔT is the set time length.
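As a quick numeric illustration of formula (1) (the cycle length, sharing ratios and ΔT below are invented values for the sketch, not from the patent):

```python
def send_time_per_path(T, ratios, delta_T):
    """Formula (1): t_i = T * F_i - delta_T for each path i.

    T       -- length of the adjustment cycle
    ratios  -- traffic sharing ratios F_i (summing to 1)
    delta_T -- the set time length reserved for draining in-flight packets
    """
    return [T * f - delta_T for f in ratios]

# Invented example: a 60-unit cycle, ratios 1/2 : 1/4 : 1/4, delta_T = 1.
times = send_time_per_path(60, [0.5, 0.25, 0.25], 1)  # [29.0, 14.0, 14.0]
```

Each path's sending budget is its proportional share of the cycle minus the fixed ΔT spent waiting for the old path to drain.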
In a second aspect, a device for sending multipath traffic is provided, including:
a transceiver unit, configured to obtain packets of a path to be switched, the packets of the path to be switched being packets that are switched from a first path to a second path during traffic forwarding because of load balancing, where the packets currently being sent on the first path and the packets of the path to be switched belong to the same data flow;
a processing unit, configured to cache, in a buffer, the packets of the path to be switched obtained by the transceiver unit;
the transceiver unit being further configured to send, after a set time length, the packets of the path to be switched cached in the buffer by the processing unit over the second path, where the set time length satisfies the duration required for the packets currently being sent on the first path to finish sending.
With reference to the second aspect, in a first possible implementation of the second aspect, the processing unit is further configured to:
while the transceiver unit is sending the packets of the path to be switched cached in the buffer over the second path, obtain packets of a new path to be switched through the transceiver unit and continue to cache the packets of the new path to be switched in the buffer.
With reference to the second aspect, in a second possible implementation of the second aspect, the transceiver unit is further configured to:
after finishing sending the packets of the path to be switched cached in the buffer over the second path, obtain packets of a new path to be switched and send the packets of the new path to be switched directly.
With reference to the second aspect, in a third possible implementation of the second aspect, the transceiver unit is specifically configured to:
obtain the traffic sharing ratio of each path and the number of packets actually waiting to be sent on each path;
determine the number of packets required on each path according to the traffic sharing ratio of each path;
determine the packets of the path to be switched according to the number of packets required on each path and the number of packets actually waiting to be sent on each path.
With reference to the second aspect, in a fourth possible implementation of the second aspect, the transceiver unit is specifically configured to:
obtain the traffic sharing ratio of each path and the actual time each path has spent sending packets to the same destination node in the current adjustment cycle;
determine the time required for sending packets on each path according to the traffic sharing ratio of each path;
determine the packets of the path to be switched according to the time required for sending packets on each path and the actual time each path has spent sending packets to the same destination node in the current adjustment cycle.
With reference to the second aspect or the fourth possible implementation of the second aspect, in a fifth possible implementation of the second aspect, the transceiver unit is specifically configured to:
determine, according to the traffic sharing ratio of each path, the time for sending packets on each path so that it satisfies the following formula (1):
t_i = T * F_i - ΔT    (1)
where t_i is the time for sending packets on path i, T is the adjustment cycle, F_i is the traffic sharing ratio of path i, and ΔT is the set time length.
In a third aspect, an edge node is provided, including: a transceiver, a processor and a memory;
the transceiver obtains packets of a path to be switched, the packets of the path to be switched being packets that are switched from a first path to a second path during traffic forwarding because of load balancing, where the packets currently being sent on the first path and the packets of the path to be switched belong to the same data flow; the processor caches, in a buffer, the packets of the path to be switched obtained by the transceiver; the transceiver is further configured to send, after a set time length, the packets of the path to be switched cached in the buffer by the processor over the second path, where the set time length satisfies the duration required for the packets currently being sent on the first path to finish sending.
With reference to the third aspect, in a first possible implementation of the third aspect, while the transceiver is sending the packets of the path to be switched cached in the buffer over the second path, the processor obtains packets of a new path to be switched through the transceiver and continues to cache the packets of the new path to be switched in the buffer.
With reference to the third aspect, in a second possible implementation of the third aspect, after the transceiver has finished sending the packets of the path to be switched cached in the buffer over the second path, the transceiver obtains packets of a new path to be switched and sends the packets of the new path to be switched directly.
With reference to the third aspect, in a third possible implementation of the third aspect, the transceiver obtains the traffic sharing ratio of each path and the number of packets actually waiting to be sent on each path; determines the number of packets required on each path according to the traffic sharing ratio of each path; and determines the packets of the path to be switched according to the number of packets required on each path and the number of packets actually waiting to be sent on each path.
With reference to the third aspect, in a fourth possible implementation of the third aspect, the transceiver obtains the traffic sharing ratio of each path and the actual time each path has spent sending packets to the same destination node in the current adjustment cycle; determines the time required for sending packets on each path according to the traffic sharing ratio of each path; and determines the packets of the path to be switched according to the time required for sending packets on each path and the actual time each path has spent sending packets to the same destination node in the current adjustment cycle.
With reference to the third aspect or the fourth possible implementation of the third aspect, in a fifth possible implementation of the third aspect, the time for sending packets on each path, determined by the transceiver according to the traffic sharing ratio of each path, satisfies the following formula (1):
t_i = T * F_i - ΔT    (1)
where t_i is the time for sending packets on path i, T is the adjustment cycle, F_i is the traffic sharing ratio of path i, and ΔT is the set time length.
The embodiments of the present invention show that an edge node obtains packets of a path to be switched, the packets of the path to be switched being packets that are switched from a first path to a second path during traffic forwarding because of load balancing, where the packets currently being sent on the first path and the packets of the path to be switched belong to the same data flow; caches the packets of the path to be switched in a buffer; and, after a set time length, sends the packets of the path to be switched cached in the buffer over the second path, where the set time length satisfies the duration required for the packets currently being sent on the first path to finish sending. By first caching in a buffer the packets of the path to be switched that belong to the same data flow, and forwarding the packets of the path to be switched cached in the buffer over the second path only after the packets on the first path have been sent, the reordering that occurs when packets belonging to the same data flow are forwarded over different paths can be avoided.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below.
Fig. 1 is a schematic diagram of a system architecture provided by an embodiment of the present invention;
Fig. 2 is a flow diagram of a method for sending multipath traffic provided by an embodiment of the present invention;
Fig. 3 is a structural diagram of a device for sending multipath traffic provided by an embodiment of the present invention;
Fig. 4 is a structural diagram of an edge node provided by an embodiment of the present invention.
Embodiments
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments.
Fig. 1 shows a system architecture to which an embodiment of the present invention applies; based on this system architecture, the sending of multipath traffic can be controlled. The system architecture for sending multipath traffic provided by an embodiment of the present invention includes multiple edge nodes 101.
In the embodiments of the present invention, an edge node 101 may be an edge router, also known as an access router, that is, a router located at the boundary of a network and used to connect to other networks. Traffic entering the network through an edge node 101 needs to be distributed by the other edge nodes 101 connected to that edge node 101 and sent to the destination node.
Based on the foregoing description, Fig. 2 shows the flow of a method for sending multipath traffic provided by an embodiment of the present invention. The flow may be performed by a device for sending multipath traffic; the device may be an edge node, or may be located in the edge node.
As shown in Fig. 2, the specific steps of the flow include:
Step 201: the edge node obtains packets of a path to be switched.
Step 202: the edge node caches the packets of the path to be switched in a buffer.
Step 203: after a set time length, the edge node sends the packets of the path to be switched cached in the buffer over the second path.
In step 201, the packets of the path to be switched are packets that are switched from the first path to the second path during traffic forwarding because of load balancing, and the packets of the path to be switched and the packets currently being sent on the first path belong to the same data flow. For example, a data flow includes packet 1, packet 2, …, packet n, of which packet 1, packet 2 and packet 3 are sent on the first path. If, in an adjustment cycle of load balancing, the first path is found to be congested, packet 3, which was originally to be sent on the first path, needs to be switched to the second path for transmission to achieve load balancing; packet 3 is thereby determined to be a packet of the path to be switched.
In the embodiments of the present invention, the edge node may obtain the packets of the path to be switched in the following manners.
Manner 1
The edge node obtains the traffic sharing ratio of each path to the destination node and the number of packets actually waiting to be sent on each path, then determines the number of packets required on each path according to the traffic sharing ratio of each path, and then determines the packets of the path to be switched according to the number of packets required on each path and the number of packets actually waiting to be sent on each path.
For example, the edge node has path 1, path 2 and path 3 to the destination node. In the current adjustment cycle, the traffic sharing ratio of the paths is 2:3:1, and the numbers of packets actually waiting to be sent are 3 packets on path 1, 2 packets on path 2 and 1 packet on path 3. From the traffic sharing ratio, the edge node can determine that path 1 should currently send 2 packets, path 2 should currently send 3 packets, and path 3 should currently send 1 packet. Therefore, within this load-balancing adjustment cycle, one packet on path 1 needs to be switched to path 2 for transmission; the packet to be switched from path 1 to path 2 is a packet of the path to be switched.
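The packet-count comparison just described can be sketched as follows; this is an illustrative simplification (it assumes the total packet count divides evenly by the ratio), not code from the patent:

```python
def packets_to_switch(ratios, queued):
    """Per-path difference between required and queued packet counts.

    ratios -- traffic sharing ratio of each path, e.g. [2, 3, 1]
    queued -- packets actually waiting on each path, e.g. [3, 2, 1]
    A negative entry means that path holds surplus packets that must
    be switched to a path with a positive entry.
    """
    total = sum(queued)
    required = [total * r // sum(ratios) for r in ratios]
    return [req - q for req, q in zip(required, queued)]

# Ratios 2:3:1 with queues of 3, 2 and 1 packets, as in the example.
diff = packets_to_switch([2, 3, 1], [3, 2, 1])  # [-1, 1, 0]
```

Path 1 must give up one packet and path 2 must receive one: that packet is the packet of the path to be switched.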
Manner 2
The edge node obtains the traffic sharing ratio of each path to the destination node and the actual time each path has spent sending packets to the same destination node in the current adjustment cycle, then determines the time required for sending packets on each path according to the traffic sharing ratio of each path, and then determines the packets of the path to be switched according to the time required for sending packets on each path and the actual time each path has spent sending packets to the same destination node in the current adjustment cycle.
For example, the edge node has path 1, path 2 and path 3 to the destination node. In the current adjustment cycle, the traffic sharing ratio of the paths is 2:3:1, and the actual times the paths have spent sending packets to the same destination node in the current load-balancing adjustment cycle are 3 minutes on path 1, 2 minutes on path 2 and 1 minute on path 3. From the traffic sharing ratio, the time path 1 should have spent sending packets is 2 minutes, path 2 should have spent 3 minutes, and path 3 should have spent 1 minute. Therefore, within the current load-balancing adjustment cycle, some of the packets on path 1 need to be switched to path 2 for transmission; the packets to be switched from path 1 to path 2 are the packets of the path to be switched.
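Likewise, the time-based comparison can be sketched as a target-versus-actual budget check (an illustrative reading with invented units, not code from the patent):

```python
def time_deficits(T, ratios, actual):
    """Target send time minus actual send time for each path.

    T      -- total sending-time budget of the adjustment cycle
    ratios -- traffic sharing ratios, e.g. [2, 3, 1]
    actual -- time each path has already spent sending to the destination
    A negative entry means the path overran its share, so some of its
    packets should be switched to a path with a positive entry.
    """
    s = sum(ratios)
    return [T * r / s - a for r, a in zip(ratios, actual)]

# Ratios 2:3:1, six minutes of sending, actual times 3, 2 and 1 minutes.
deficits = time_deficits(6, [2, 3, 1], [3, 2, 1])  # [-1.0, 1.0, 0.0]
```

Path 1 has overrun its two-minute share by one minute, so its excess packets become the packets of the path to be switched, destined for path 2.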
The time for sending packets on each path, determined by the edge node according to the traffic sharing ratio of each path, needs to satisfy the following formula (1):
t_i = T * F_i - ΔT    (1)
where t_i is the time for sending packets on path i, T is the adjustment cycle, F_i is the traffic sharing ratio of path i, and ΔT is the set time length.
In step 202, after the edge node obtains the packets of the path to be switched in step 201, it first needs to cache these packets in a buffer. Multiple buffers are provided in the edge node, one buffer per path, each used to buffer the packets sent on its respective path.
In step 203, after the set time length, the edge node sends the packets of the path to be switched cached in the buffer over the second path. The set time length needs to satisfy the duration required for the packets currently being sent on the first path to finish sending; it must exceed the largest difference between path delays, which guarantees that the transmission will not be reordered.
While the edge node is sending the packets of the path to be switched cached in the buffer over the second path, if the edge node obtains packets of a new path to be switched, it continues to cache the packets of the new path to be switched in the buffer and sends them only after the currently cached packets of the path to be switched have been sent, which guarantees that the sent packets will not be reordered.
Alternatively, after the edge node has finished sending the packets of the path to be switched cached in the buffer over the second path, if the edge node obtains packets of a new path to be switched, it sends the newly obtained packets of the path to be switched directly, without caching them again.
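The caching behavior of steps 202 and 203, including the two cases for newly obtained packets, can be modeled as a small state machine. This is a toy sketch of the described rule, not an implementation from the patent; the ΔT wait is represented by an explicit flush() call:

```python
from collections import deque

class SwitchBuffer:
    """Hold packets of a path to be switched until the old path drains."""

    def __init__(self):
        self.buffer = deque()
        self.drained = False  # True once the buffer has been flushed

    def on_packet(self, pkt):
        """A packet of a (possibly new) path to be switched arrives."""
        if self.buffer or not self.drained:
            self.buffer.append(pkt)   # still draining: keep caching
            return None
        return pkt                    # buffer already sent: forward directly

    def flush(self):
        """Called after the set time length: send cached packets in order."""
        self.drained = True
        sent = list(self.buffer)
        self.buffer.clear()
        return sent
```

Packets arriving before the flush queue behind the buffer, preserving order; packets arriving after it bypass the buffer, matching the two cases described above.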
The above embodiment shows that the edge node obtains packets of a path to be switched, the packets of the path to be switched being packets that are switched from a first path to a second path during traffic forwarding because of load balancing, where the packets currently being sent on the first path and the packets of the path to be switched belong to the same data flow; caches the packets of the path to be switched in a buffer; and, after a set time length, sends the packets of the path to be switched cached in the buffer over the second path, where the set time length satisfies the duration required for the packets currently being sent on the first path to finish sending. By first caching in a buffer the packets of the path to be switched that belong to the same data flow, and forwarding the packets of the path to be switched cached in the buffer over the second path only after the packets on the first path have been sent, the reordering that occurs when packets belonging to the same data flow are forwarded over different paths can be avoided.
To better explain the embodiments of the present invention, the flow of sending multipath traffic provided by the embodiments of the present invention is described below through specific implementation scenarios.
Manner 1
In the embodiments of the present invention, the multiple flows going to the same destination are mapped, by five-tuple hashing, onto a fixed number of aggregated flows. The number of aggregated flows hashed onto is adjustable: the more aggregated flows there are, the larger the flow table that must be recorded, but the higher the precision of traffic splitting.
The system needs to agree on the following parameters.
Assume the number of edge nodes in the current network is N, numbered E1, E2, …, EN. For a specific edge node, the traffic entering that edge node from outside has N-1 corresponding exits from the network. For edge node E1, for example, external traffic enters the network from E1 and there are N-1 corresponding exits: E2, E3, …, EN. The traffic that enters from one edge node and leaves from another is called an aggregated flow, e.g., E1-E2.
The same aggregated flow is mapped, by five-tuple hashing, onto X sub-flows, numbered B1, B2, …, BX. Assume each aggregated flow has P usable paths, numbered P1, P2, …, PP. Assume the traffic sharing ratio of each path given by the upper-layer traffic splitting algorithm is F = (F1, F2, …, FP).
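Mapping flows onto the sub-flows B1…BX by five-tuple hashing can be illustrated as follows; the choice of MD5 is an assumption made for the sketch, and any stable hash function serves:

```python
import hashlib

def subflow_of(five_tuple, num_subflows):
    """Map a packet's five-tuple to one of the sub-flows B1..BX.

    five_tuple -- (src_ip, dst_ip, src_port, dst_port, protocol)
    Every packet of one transport flow carries the same five-tuple,
    so the whole flow lands in one sub-flow and follows one path.
    """
    key = "|".join(map(str, five_tuple)).encode()
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_subflows

flow = ("10.0.0.1", "10.0.0.2", 12345, 80, "tcp")
b = subflow_of(flow, 16)  # always the same sub-flow for this flow
```

Raising the sub-flow count X improves splitting precision at the cost of a larger flow table, as noted above.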
Assume there are Y buffers in the router, used for buffering traffic while a path is being switched. For example, if B1 currently needs to be switched from path P1 to path P2, B1 needs to be cached in some buffer for a time ΔT, after which normal sending continues. These buffers are physical caches added to the edge node.
The process described below is for one aggregated flow in one edge node, e.g., the E1-E2 aggregated flow in E1. The sending process for all other traffic is the same.
In one adjustment cycle, first calculate, according to the traffic sharing ratio F given by the upper-layer traffic splitting algorithm, the set {B1, B2, …} of sub-flows whose paths are to be switched. The calculation can proceed as follows: first compute from F how many sub-flows each path should carry; F*X gives the number of sub-flows each path needs. Then subtract the number of sub-flows actually on each path: a positive result means the path needs more sub-flows, and a negative result means the path's surplus sub-flows must be transferred to other paths. These sub-flows are found and added to the set of sub-flows whose paths are to be switched.
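The F*X bookkeeping described above can be sketched as follows; splitting the result into needed and surplus counts is an illustrative reading of the scheme, not code from the patent:

```python
def switch_set_sizes(F, assigned):
    """How many sub-flows each path must gain or shed.

    F        -- traffic sharing ratios per path, summing to 1.0
    assigned -- number of sub-flows currently on each path
    Returns (needed, surplus) per path: paths with surplus sub-flows
    donate them to paths that need more.
    """
    X = sum(assigned)                   # total number of sub-flows
    target = [round(f * X) for f in F]  # F * X, per the scheme above
    diff = [t - a for t, a in zip(target, assigned)]
    return [max(d, 0) for d in diff], [max(-d, 0) for d in diff]

# Invented example: 8 sub-flows, ratios 1/2 : 1/4 : 1/4.
needed, surplus = switch_set_sizes([0.5, 0.25, 0.25], [2, 4, 2])
```

Here the first path needs two more sub-flows and the second must shed two; those two sub-flows join the set to be switched.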
Then, the message in the set of to be switched message, which is respectively enterd in buffering area, is cached, and a message corresponds to One buffering area.If the quantity Y of buffering area is less than the size of the set for the message for needing toggle path, then Y before only selecting A BIN carries out path switching, other paths temporarily without switching, are put into next regulating cycle and switch over.
After Δ T time, first the flow cached in buffering area is sent, during transmission, is newly obtained Message continues to be cached in buffering area, after the message in buffering area is sent, it is possible to by the message of the new acquisition without Buffering area is crossed to be directly transmitted.The regulating cycle terminates at this time, waits and enters next regulating cycle.
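Purely as an illustrative sketch of the per-cycle rebalancing just described (the function name, data layout, and example values are assumptions, not part of the disclosure), the "compare F*X with the actual bin counts and move the surplus" step might look like:

```python
# Sketch of Mode one: decide which hash bins to move between paths
# so that per-path bin counts match the target sharing ratio F.
def bins_to_switch(current, F, X):
    """current: dict path -> set of bin ids now on that path.
    F: dict path -> target sharing ratio. X: total number of bins.
    Returns a list of (bin_id, src_path, dst_path) moves."""
    # positive need: path wants more bins; negative: it has surplus
    need = {p: round(F[p] * X) - len(current[p]) for p in F}
    # collect surplus bins from over-loaded paths
    surplus = [(b, p) for p in F if need[p] < 0
               for b in list(current[p])[: -need[p]]]
    # one destination slot per missing bin on under-loaded paths
    deficit = [p for p in F for _ in range(max(need[p], 0))]
    return [(b, src, dst) for (b, src), dst in zip(surplus, deficit)]

# example: P1 carries 3 of 4 bins but should carry only half
current = {"P1": {"B1", "B2", "B3"}, "P2": {"B4"}}
moves = bins_to_switch(current, {"P1": 0.5, "P2": 0.5}, X=4)
# exactly one bin is moved from P1 to P2
```

If Y buffers are available, only the first Y entries of the returned list would be acted on in the current cycle, matching the text above.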
Mode two
In this embodiment of the present invention, the multiple flows destined for the same destination are mapped, by five-tuple hashing, onto a fixed number of aggregated flows. The system first agrees on the following parameters:
Assume the number of edge nodes in the current network is N, numbered E1, E2, …, EN. For any given edge node, the traffic that enters the network at that node has N−1 possible exits. For edge node E1, for example, external traffic enters the network at E1, and the corresponding exits are the N−1 nodes E2, E3, …, EN. Traffic that enters the network at one edge node and leaves it at another is called an aggregated flow, e.g. E1-E2.
Assume each aggregated flow has P usable paths, numbered P1, P2, …, PP, and that the upper-layer traffic-splitting algorithm assigns the paths a traffic-sharing ratio F = (F1, F2, …, FP).
Assume the router contains Y buffers, which are used to buffer traffic while a path is being switched. For example, if B1 currently needs to be switched from path P1 to path P2, B1 must first be cached in one of the buffers for a time ΔT before normal sending resumes. These buffers are physical caches added to the edge node.
The process described below applies to one aggregated flow at one border router, e.g. the E1-E2 aggregated flow at router E1; the adjustment process for every other flow is identical.
Within one adjustment cycle T, the sending time on each path is first computed from the traffic-sharing ratio F provided by the upper-layer traffic-splitting algorithm, i.e. the sending time on each path is computed by the aforementioned formula (1).
Next, packets destined for the same destination node are placed into the corresponding buffer for caching, one buffer per destination node.
After time ΔT, the traffic cached in the buffers is sent first. Packets that arrive while this is happening continue to be cached in the buffers; once the packets in a buffer have all been sent, newly arriving packets can be sent directly, bypassing the buffer. The adjustment cycle then ends, and the system waits for the next cycle.
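As a minimal sketch of the Mode two schedule (the function name and example values are illustrative assumptions, not from the disclosure), formula (1), t_i = T * F_i − ΔT, can be applied per path as follows:

```python
# Sketch of Mode two: given the adjustment cycle T, the sharing
# ratios F, and the buffering delay dT, compute the sending time
# of each path within one cycle per t_i = T * F_i - dT.
def sending_schedule(T, F, dT):
    """F: dict path -> sharing ratio. Returns dict path ->
    sending time in the cycle; a negative result is clamped to 0
    (that path sends nothing this cycle)."""
    return {path: max(T * ratio - dT, 0.0) for path, ratio in F.items()}

# worked example: T = 100 time units, dT = 5, 60/40 split
sched = sending_schedule(T=100.0, F={"P1": 0.6, "P2": 0.4}, dT=5.0)
# sched == {"P1": 55.0, "P2": 35.0}
```

The clamping of negative times to zero is an assumption for robustness; the disclosure itself only states the formula.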
Based on the same technical concept, Fig. 3 shows a multipath traffic sending device that can perform the multipath traffic sending procedure. The device may be an edge node, or may be located within an edge node.
As shown in Fig. 3, the device specifically includes:
a transceiver unit 301, configured to obtain packets of a to-be-switched path, where the packets of the to-be-switched path are packets that, owing to load balancing during traffic forwarding, are switched from a first path to a second path for forwarding, and the packets currently being sent on the first path and the packets of the to-be-switched path belong to the same data flow; and
a processing unit 302, configured to cache, in a buffer, the packets of the to-be-switched path obtained by the transceiver unit 301;
wherein the transceiver unit 301 is further configured to send, after a set time length, the packets of the to-be-switched path cached in the buffer by the processing unit 302 over the second path, the set time length satisfying the duration required for the packets currently being sent on the first path to finish sending.
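The interaction of units 301 and 302 amounts to delaying the switched packets by the set time length so that in-flight packets on the first path drain first. A toy sketch (function name, timestamps, and payloads are assumptions for illustration only):

```python
# Sketch of the buffer-then-send behavior: packets destined for
# the second path are held until now + dT, so packets already in
# flight on the first path finish first and ordering is preserved.
def switch_with_delay(packets, dT, now=0.0):
    """packets: list of (arrival_time, payload). Returns list of
    (send_time, payload) on the second path: nothing leaves the
    buffer before now + dT; later arrivals pass straight through."""
    release = now + dT
    return [(max(t, release), p) for t, p in packets]

out = switch_with_delay([(0.0, "a"), (1.0, "b"), (6.0, "c")], dT=5.0)
# "a" and "b" wait until t = 5.0; "c" arrives after the flush
# point and is sent directly at t = 6.0
```

This mirrors the optional behaviors below: new to-be-switched packets keep being cached while the buffer drains, and are sent directly once it is empty.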
Optionally, the processing unit 302 is further configured to:
while the transceiver unit 301 is sending the packets of the to-be-switched path cached in the buffer over the second path, obtain packets of a new to-be-switched path through the transceiver unit 301, and continue caching the packets of the new to-be-switched path in the buffer.
Optionally, the transceiver unit 301 is further configured to:
after the transceiver unit has finished sending the packets of the to-be-switched path cached in the buffer over the second path, obtain packets of a new to-be-switched path and send the packets of the new to-be-switched path directly.
Optionally, the transceiver unit 301 is specifically configured to:
obtain the traffic-sharing ratio of each path and the number of packets actually to be sent on each path;
determine, from the traffic-sharing ratio of each path, the number of packets required on each path; and
determine the packets of the to-be-switched path from the number of packets required on each path and the number of packets actually to be sent on each path.
Optionally, the transceiver unit 301 is specifically configured to:
obtain the traffic-sharing ratio of each path and the actual sending time, within the current adjustment cycle, of the packets on each path destined for the same destination node;
determine, from the traffic-sharing ratio of each path, the time required to send packets on each path; and
determine the packets of the to-be-switched path from the time required to send packets on each path and the actual sending time, within the current adjustment cycle, of the packets on each path destined for the same destination node.
Optionally, the transceiver unit 301 is specifically configured to:
determine, from the traffic-sharing ratio of each path, the time for sending packets on each path according to the following formula (1):
t_i = T * F_i − ΔT ……………………(1)
where t_i is the time for sending packets on path i, T is the adjustment cycle, F_i is the traffic-sharing ratio of path i, and ΔT is the set time length.
Referring to Fig. 4, based on the same idea, an embodiment of the present invention provides an edge node 400. The edge node 400 can perform the functions or steps performed by the receiver in the foregoing embodiments. The edge node 400 may include a transceiver 401, a processor 402, and a memory 403. The processor 402 controls the operation of the edge node 400. The memory 403 may include read-only memory and random access memory, and stores instructions and data executable by the processor 402; a part of the memory 403 may also include non-volatile random access memory (NVRAM). The transceiver 401, the processor 402, the memory 403, and the other components are connected by a bus 409, where the bus 409 may include, in addition to a data bus, a power bus, a control bus, and a status signal bus. For clarity of description, however, the various buses are all labeled bus 409 in the figure.
The multipath traffic sending method disclosed in the embodiment of the present invention may be applied to, or implemented by, the processor 402. During implementation, each step of the processing flow may be completed by an integrated logic circuit of hardware in the processor 402 or by instructions in the form of software. The processor 402 may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logic diagrams disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in the embodiments of the present invention may be executed and completed directly by a hardware processor, or by a combination of hardware in the processor and software modules. The software modules may reside in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory 403; the processor 402 reads the information stored in the memory 403 and, in combination with its hardware, completes the steps of the multipath traffic sending method.
The transceiver 401 obtains packets of a to-be-switched path, where the packets of the to-be-switched path are packets that, owing to load balancing during traffic forwarding, are switched from a first path to a second path for forwarding, and the packets currently being sent on the first path and the packets of the to-be-switched path belong to the same data flow. The processor 402 caches, in a buffer, the packets of the to-be-switched path obtained by the transceiver 401. After a set time length, the transceiver 401 sends the packets of the to-be-switched path cached in the buffer over the second path, where the set time length satisfies the duration required for the packets currently being sent on the first path to finish sending.
Optionally, while the transceiver 401 is sending the packets of the to-be-switched path cached in the buffer over the second path, the transceiver 401 obtains packets of a new to-be-switched path, and the processor 402 continues caching the packets of the new to-be-switched path in the buffer.
Optionally, after the transceiver 401 has finished sending the packets of the to-be-switched path cached in the buffer over the second path, the transceiver 401 obtains packets of a new to-be-switched path and sends the packets of the new to-be-switched path directly.
Optionally, the transceiver 401 obtains the traffic-sharing ratio of each path and the number of packets actually to be sent on each path; determines, from the traffic-sharing ratio of each path, the number of packets required on each path; and determines the packets of the to-be-switched path from the number of packets required on each path and the number of packets actually to be sent on each path.
Optionally, the transceiver 401 obtains the traffic-sharing ratio of each path and the actual sending time, within the current adjustment cycle, of the packets on each path destined for the same destination node; determines, from the traffic-sharing ratio of each path, the time required to send packets on each path; and determines the packets of the to-be-switched path from the time required to send packets on each path and the actual sending time, within the current adjustment cycle, of the packets on each path destined for the same destination node.
Optionally, the transceiver 401 determines, from the traffic-sharing ratio of each path, the time for sending packets on each path according to the following formula (1):
t_i = T * F_i − ΔT ……………………(1)
where t_i is the time for sending packets on path i, T is the adjustment cycle, F_i is the traffic-sharing ratio of path i, and ΔT is the set time length.
Those skilled in the art should understand that embodiments of the present invention may be provided as a method or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical memory) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, the instruction device implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps is performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, can make further changes and modifications to these embodiments. The appended claims are therefore intended to be construed as covering the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
Obviously, those skilled in the art can make various modifications and variations to the present invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to encompass them.

Claims (12)

1. A multipath traffic sending method, characterized by comprising:
obtaining, by an edge node, packets of a to-be-switched path, where the packets of the to-be-switched path are packets that, owing to load balancing during traffic forwarding, are switched from a first path to a second path for forwarding, and the packets currently being sent on the first path and the packets of the to-be-switched path belong to the same data flow;
caching, by the edge node, the packets of the to-be-switched path in a buffer; and
sending, by the edge node after a set time length, the packets of the to-be-switched path cached in the buffer over the second path, where the set time length satisfies the duration required for the packets currently being sent on the first path to finish sending.
2. The method according to claim 1, characterized in that, while the edge node is sending the packets of the to-be-switched path cached in the buffer over the second path, the method further comprises:
if packets of a new to-be-switched path are obtained, continuing, by the edge node, to cache the packets of the new to-be-switched path in the buffer.
3. The method according to claim 1, characterized in that, after the edge node has finished sending the packets of the to-be-switched path cached in the buffer over the second path, the method further comprises:
if packets of a new to-be-switched path are obtained, sending, by the edge node, the packets of the new to-be-switched path directly.
4. The method according to claim 1, characterized in that the obtaining, by the edge node, of the packets of the to-be-switched path comprises:
obtaining, by the edge node, the traffic-sharing ratio of each path and the number of packets actually to be sent on each path;
determining, by the edge node from the traffic-sharing ratio of each path, the number of packets required on each path; and
determining, by the edge node, the packets of the to-be-switched path from the number of packets required on each path and the number of packets actually to be sent on each path.
5. The method according to claim 1, characterized in that the obtaining, by the edge node, of the packets of the to-be-switched path comprises:
obtaining, by the edge node, the traffic-sharing ratio of each path and the actual sending time, within the current adjustment cycle, of the packets on each path destined for the same destination node;
determining, by the edge node from the traffic-sharing ratio of each path, the time required to send packets on each path; and
determining, by the edge node, the packets of the to-be-switched path from the time required to send packets on each path and the actual sending time, within the current adjustment cycle, of the packets on each path destined for the same destination node.
6. The method according to claim 5, characterized in that the edge node determines, from the traffic-sharing ratio of each path, the time for sending packets on each path according to the following formula (1):
t_i = T * F_i − ΔT ……………………(1)
where t_i is the time for sending packets on path i, T is the adjustment cycle, F_i is the traffic-sharing ratio of path i, and ΔT is the set time length.
7. A multipath traffic sending device, characterized by comprising:
a transceiver unit, configured to obtain packets of a to-be-switched path, where the packets of the to-be-switched path are packets that, owing to load balancing during traffic forwarding, are switched from a first path to a second path for forwarding, and the packets currently being sent on the first path and the packets of the to-be-switched path belong to the same data flow; and
a processing unit, configured to cache, in a buffer, the packets of the to-be-switched path obtained by the transceiver unit;
wherein the transceiver unit is further configured to send, after a set time length, the packets of the to-be-switched path cached in the buffer by the processing unit over the second path, the set time length satisfying the duration required for the packets currently being sent on the first path to finish sending.
8. The device according to claim 7, characterized in that the processing unit is further configured to:
while the transceiver unit is sending the packets of the to-be-switched path cached in the buffer over the second path, obtain packets of a new to-be-switched path through the transceiver unit, and continue caching the packets of the new to-be-switched path in the buffer.
9. The device according to claim 7, characterized in that the transceiver unit is further configured to:
after the transceiver unit has finished sending the packets of the to-be-switched path cached in the buffer over the second path, obtain packets of a new to-be-switched path, and send the packets of the new to-be-switched path directly.
10. The device according to claim 7, characterized in that the transceiver unit is specifically configured to:
obtain the traffic-sharing ratio of each path and the number of packets actually to be sent on each path;
determine, from the traffic-sharing ratio of each path, the number of packets required on each path; and
determine the packets of the to-be-switched path from the number of packets required on each path and the number of packets actually to be sent on each path.
11. The device according to claim 7, characterized in that the transceiver unit is specifically configured to:
obtain the traffic-sharing ratio of each path and the actual sending time, within the current adjustment cycle, of the packets on each path destined for the same destination node;
determine, from the traffic-sharing ratio of each path, the time required to send packets on each path; and
determine the packets of the to-be-switched path from the time required to send packets on each path and the actual sending time, within the current adjustment cycle, of the packets on each path destined for the same destination node.
12. The device according to claim 11, characterized in that the transceiver unit is specifically configured to:
determine, from the traffic-sharing ratio of each path, the time for sending packets on each path according to the following formula (1):
t_i = T * F_i − ΔT ……………………(1)
where t_i is the time for sending packets on path i, T is the adjustment cycle, F_i is the traffic-sharing ratio of path i, and ΔT is the set time length.
CN201610915269.1A 2016-10-20 2016-10-20 The method and device that a kind of multichannel run-off is sent Pending CN107968748A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101146361A (en) * 2006-09-14 2008-03-19 富士通株式会社 Mobile communication system and communication method thereof
CN101247348A (en) * 2008-03-12 2008-08-20 华为技术有限公司 Load sharing method and device
CN101651965A (en) * 2008-08-14 2010-02-17 索尼株式会社 Wireless communication device, communication system, and communication control method, and program
CN103918307A (en) * 2011-11-03 2014-07-09 高通股份有限公司 Packet ordering based on delivery route changes
CN105704022A (en) * 2014-11-27 2016-06-22 华为技术有限公司 Path switching method and network nodes


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109347623A (en) * 2018-10-18 2019-02-15 盛科网络(苏州)有限公司 Conversate the method and device of holding in a kind of application of balancing link load
CN112737940A (en) * 2019-10-28 2021-04-30 华为技术有限公司 Data transmission method and device
WO2021083160A1 (en) * 2019-10-28 2021-05-06 华为技术有限公司 Data transmission method and apparatus
CN112737940B (en) * 2019-10-28 2023-12-08 华为技术有限公司 Data transmission method and device
CN111193663A (en) * 2019-12-18 2020-05-22 苏州浪潮智能科技有限公司 Method and device for dynamically calculating delay required by switching paths of link aggregation group


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180427