CN1972239A - Ethernet cache exchanging and scheduling method and apparatus - Google Patents

Ethernet cache exchanging and scheduling method and apparatus

Info

Publication number
CN1972239A
CN1972239A · CNA2005101238855A · CN200510123885A
Authority
CN
China
Prior art keywords
packet
cell
port
unit
output port
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2005101238855A
Other languages
Chinese (zh)
Other versions
CN100550833C (en)
Inventor
范其蓬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan FiberHome Networks Co Ltd
Original Assignee
Wuhan FiberHome Networks Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan FiberHome Networks Co Ltd filed Critical Wuhan FiberHome Networks Co Ltd
Priority to CNB2005101238855A priority Critical patent/CN100550833C/en
Publication of CN1972239A publication Critical patent/CN1972239A/en
Application granted granted Critical
Publication of CN100550833C publication Critical patent/CN100550833C/en
Status: Expired - Fee Related
Anticipated expiration

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

This invention provides an Ethernet cache switching and scheduling method comprising the following steps: receiving a packet from an input port; dividing the packet into fixed-length cells by a cell cutting unit; storing the cells in a central cache unit; performing equal-time-slot polling scheduling among the output ports through a port and queue scheduling unit, so that the scheduling right rotates to the next output port at every time interval; scheduling among the different queues of the same output port; and, after an output port obtains the scheduling right, reading cells from the shared central cache unit, reassembling them into a packet and sending it onto the link. The invention also provides an Ethernet cache switching and scheduling apparatus.

Description

Method and apparatus for Ethernet cache exchanging and scheduling
Technical field
The present invention relates to network switching methods, and in particular to a method and apparatus for Ethernet cache exchanging and scheduling.
Background art
Current Ethernet switching technology mainly comprises two modes: cut-through and store-and-forward.
Cut-through Ethernet switching can be realized by a switching fabric between the Ethernet ports. When the fabric detects a packet at an input port, it examines the packet header, obtains the destination address, converts the destination address into the corresponding output port through the fabric's internal dynamic lookup table, and forwards the packet straight through to that port, thereby completing the switching function.
Because packets are not stored, the delay of cut-through mode is very small and the switching speed is high. Its drawbacks are: since the switch does not retain the packet contents, it cannot check whether a forwarded packet is erroneous and therefore provides no error-detection capability; and since there is no buffering, high-speed and low-speed ports cannot be coordinated, so input and output ports of different rates cannot be connected directly. The method is also unsuitable for multi-port Ethernet switching.
Store-and-forward is currently the most widely used mode. According to how the cache space is occupied, it can be further divided into central shared cache and independent cache. Taking an Ethernet packet system with a central shared cache as an example, the processing flow is as follows:
After a packet enters the switch through an Ethernet interface, the Ethernet receive controller and receive processing logic unit, which have no cache of their own, process the packet at line rate and send it to the central cache unit. The receive processing logic executes all forwarding decisions; that is, it determines to which output port each incoming packet is to be sent, and it also performs layer-2 forwarding, layer-3 routing and other functions. The receive processing logic notifies the central cache unit of the destination port of each packet.
The central cache unit buffers the packets and schedules their transmission, sending packets to each output port at line rate. For packets destined to multiple ports (multicast, broadcast, or DLF packets), the input logic sends only one copy to the central cache unit but indicates multiple destination ports; the central cache unit then sends the packet to each output port separately.
Shared-memory Ethernet switches usually adopt the packet-switching scheme described above. This works acceptably for switching with a low port count and low bandwidth requirements, but problems arise in multi-port switching at gigabit or even 10-gigabit rates. First, at high transfer rates store-and-forward requires a very large cache, which increases system cost. Second, the store-and-forward delay during data processing is long; in particular, with shared-memory packet switching the packet lengths differ, so forwarding and processing each packet produces a different delay, resulting in delay jitter.
Summary of the invention
To address the large cache overhead of store-and-forward mode at high data rates and the delay jitter produced by packet-based queue scheduling in the prior art, the present invention proposes an Ethernet cache exchanging and scheduling method and an Ethernet cache exchanging and scheduling apparatus based on fixed-length cells, which achieve high-speed switching with a smaller system cache overhead.
The Ethernet cache exchanging and scheduling method provided by the invention comprises: receiving a packet from an input port; dividing the packet into cells of a certain length by a cell cutting unit, wherein each cell carries a label representing the data information of the cell, and storing the cells in a central cache unit; performing equal-time-slot polling scheduling among the different output ports by a port and queue scheduling unit, so that the scheduling right rotates to the next output port at every time interval, and scheduling among the different queues of the same output port; and, after an output port obtains the scheduling right, reading cells from the central cache unit shared by the output ports, reassembling the cells by a packet reassembly unit, and sending the reassembled data onto the link.
Further, the scheduling among the different queues of the same output port may use a strict-priority algorithm, a round-robin algorithm, or a weighted round-robin algorithm.
After a packet is received from the input port, a lookup forwarding logic unit adds a tag to the header portion of the packet to indicate the output port to which the packet is to be sent, and a packet filtering logic unit adds a tag to the header portion to indicate whether the packet is to be dropped and its class of service. The cell cutting unit strips off these header tags and segments the packet into cells according to them.
Packets from the input ports are arranged into queues by a queue address control and stack unit, and the cells of each input queue are mapped to a portion of the central cache unit.
The data information of a cell comprises the number of valid bytes of the cell, the packet-head/packet-tail attribute, and the source port information.
Packets from the input ports are arranged into queues according to output port, class-of-service (COS) classification, and unicast/multicast.
The Ethernet cache exchanging and scheduling method also comprises: when the number of packets input from an input port exceeds a set value, the central cache unit notifies the controller of that input port to send a flow-control frame; when the number of packets at the input port falls below the set value, the input port exits the flow-control state.
When the number of packets input from an input port exceeds the set value, packets input from that port are dropped.
The Ethernet cache exchanging and scheduling apparatus provided by the invention comprises input ports and output ports, and further comprises: a cell cutting unit for dividing packets received from the input ports into cells of a certain length; a central cache unit for storing the cells produced by the cell cutting unit; a queue address control and stack unit for classifying the input ports into queues; a port and queue scheduling unit for performing equal-time-slot polling among the different output ports, so that the scheduling right rotates to the next output port at every time interval, with round-robin or weighted round-robin scheduling among the different queues of the same output port; and a packet reassembly unit for packing, for an output port, the cells read from the shared central cache unit back into packets.
The Ethernet cache exchanging and scheduling apparatus provided by the invention also comprises: a lookup forwarding logic unit for adding a tag to the header portion of each packet to indicate the output port to which the packet is to be sent; and a packet filtering logic unit for adding a tag to the header portion of each packet to indicate whether the packet is to be dropped.
The Ethernet cache exchanging and scheduling apparatus provided by the invention also comprises a queue address control and stack unit for arranging the input ports into queues and mapping the cells of each input queue to a portion of the central cache unit. The queue address control and stack unit arranges the input queues according to output port, COS classification, and unicast/multicast.
With the method and apparatus provided by the present invention, under a limited central cache capacity, cell switching together with equal-time-slot scheduling between ports allows the addresses of the central cache pool storing the cells to be updated and released more efficiently, thereby reducing the cache overhead of network switching.
Because the unit of data processing is the fixed-length cell, Ethernet packets of different lengths are divided into cells before switching and are reassembled at the output before being re-sent onto the link. Queuing and scheduling based on fixed-length cells exploits the fixed cell output time and therefore reduces delay jitter.
In addition, the queue address control and stack unit realizes queue classification by output port, unicast/multicast and COS, and can support hundreds of virtual queues simultaneously. Combining time-slot scheduling with traditional scheduling achieves the low delay jitter of packet switching together with flexible scheduling of the different service classes.
Description of drawings
Fig. 1 is a flow chart of an embodiment of the Ethernet cache exchanging and scheduling method of the present invention.
Fig. 2 is a structural diagram of an embodiment of the Ethernet cache exchanging and scheduling apparatus of the present invention.
Fig. 3 is a diagram of the cell structure in the embodiment of the invention.
Embodiment
Fig. 1 is a flow chart of the Ethernet cache exchanging and scheduling method of the present invention. According to the method, first, in step 10, the traffic of an input port is monitored, and the action on the packet is determined according to the monitoring result, for example receiving the packet or dropping it.
When the number of packets input from the input port exceeds a set value, in step 80 the central cache unit notifies the controller of the input port to send a flow-control frame, and, in step 90, the input packets are dropped.
When the number of packets input from the input port is below the set value, packets are received from the input port in step 20; the input ports may be 12 gigabit ports plus a PCI port and a 10-gigabit port. Packets from the input ports are arranged into queues by the queue address control and stack unit. In the present embodiment:
Each input port is queued by output port, COS classification and unicast/multicast. The total number of queues in the central cache unit is 13 × 8 (eight COS queues for each of the 13 gigabit and PCI ports) + 13 (the 10-gigabit port, queued per ingress port) + 13 (one multicast/broadcast queue for each of the 13 gigabit and PCI ports), i.e. 104 + 13 + 13 = 130 queues, as sketched below.
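As an illustration only, the queue indexing described above might be organized as in the following C sketch; the function and constant names and the exact index layout are assumptions added for clarity and are not taken from the patent.

```c
#include <stdio.h>
#include <stdbool.h>

#define GE_PCI_PORTS 13   /* 12 gigabit ports + 1 PCI port */
#define COS_LEVELS    8   /* eight class-of-service levels */

/* Unicast queues toward a GE/PCI output port: 13 ports x 8 COS = 104.
 * Unicast queues toward the 10GE port: one per ingress port   =  13.
 * Multicast/broadcast queues, one per GE/PCI port             =  13.
 * Total                                                       = 130. */
enum { Q_GE_UNICAST = 0, Q_10GE = 104, Q_MCAST = 117, Q_TOTAL = 130 };

/* Hypothetical queue index derivation for this embodiment. */
static int queue_index(bool multicast, bool to_10ge_port,
                       int out_port, int in_port, int cos)
{
    if (multicast)    return Q_MCAST + out_port;             /* 117..129 */
    if (to_10ge_port) return Q_10GE  + in_port;               /* 104..116 */
    return Q_GE_UNICAST + out_port * COS_LEVELS + cos;        /*   0..103 */
}

int main(void)
{
    printf("total queues: %d\n", GE_PCI_PORTS * COS_LEVELS + 13 + 13);
    printf("unicast GE port 5, COS 3 -> queue %d\n", queue_index(false, false, 5, 0, 3));
    printf("to 10GE from ingress 2   -> queue %d\n", queue_index(false, true, 0, 2, 0));
    printf("multicast to GE port 7   -> queue %d\n", queue_index(true, false, 7, 0, 0));
    return 0;
}
```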
In step 30, after the packet passes through the lookup forwarding unit and the packet filtering unit, a tag is added to the header portion of each packet to indicate whether the packet is to be dropped, the port to which it is to be sent, and whether it is to be sent to the CPU interface; if it is sent to the CPU interface, the tag also indicates the reason and the class of service of the packet.
In the present embodiment, the tag added to a packet is 3 to 7 bytes long: packets sent to the CPU port carry a 7-byte tag, and packets sent to other ports carry a 3-byte tag, namely the following fields (a hedged struct sketch follows the list):
DESTINATION_PORT[13:0]: indicates the port to which the packet is to go;
where DESTINATION_PORT[11:0] correspond to GE1-GE12 (gigabit port 1 to gigabit port 12), DESTINATION_PORT[12] is the 10GE port, and DESTINATION_PORT[13] is the CPU port.
DROP: indicates whether the current packet is to be dropped.
COS[2:0]: indicates the packet priority information. In addition, packets sent to the CPU port carry the following information:
rx_reason[15:0] (16 bits): indicates the reason the packet is sent to the CPU for processing;
rx_port[3:0] (4 bits): indicates the source port number of the packet;
rx_cpu_cos[2:0] (3 bits): indicates the COS level of the packet;
rx_untagged (1 bit): indicates whether the packet still needs to be tagged.
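For clarity only, the tag fields listed above could be modeled by the following C structure; the field names and bit widths follow the text, while the structure name, member types and packing are illustrative assumptions (the real tag is packed into 3 or 7 bytes on the wire).

```c
#include <stdint.h>
#include <stdbool.h>

/* Hedged sketch of the per-packet ingress tag described above. */
struct ingress_tag {
    uint16_t destination_port;   /* [13:0]: bits 0-11 = GE1-GE12,
                                    bit 12 = 10GE port, bit 13 = CPU port     */
    bool     drop;               /* whether the packet is to be dropped       */
    uint8_t  cos;                /* [2:0] packet priority                     */

    /* Extra fields present only in the 7-byte tag of CPU-bound packets. */
    uint16_t rx_reason;          /* [15:0] reason the packet goes to the CPU  */
    uint8_t  rx_port;            /* [3:0]  source port number                 */
    uint8_t  rx_cpu_cos;         /* [2:0]  COS level of the packet            */
    bool     rx_untagged;        /* whether the packet still needs to be tagged */
};
```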
In step 40, the cell cutting unit first strips off these tag bytes, i.e. removes the tag after reading its information, and then, according to the tag, cuts the packet into cells whose payload length is 34 bytes; each cell additionally carries a 2-byte label indicating the number of valid bytes, the packet-head/packet-tail attribute and the source port information.
In step 45, the cells of each input queue are mapped to a portion of the central cache unit by the queue address control and stack unit, which classifies the input ports into queues; the queues of all input ports share the central cache unit. In the present embodiment a 12K × 288-bit RAM is used as the central cache unit and can hold 12K cells in total. If the space is divided evenly, each of the 130 queues can cache on average 92 cells (packets of 64 to 1522 bytes occupy 2 to 45 cells respectively, so each queue can buffer two packets of maximum length). A small arithmetic sketch follows.
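A minimal arithmetic sketch of this sizing is given below, assuming that "12K" denotes 12000 cells (an assumption consistent with the stated figure of 92 cells per queue); the function and macro names are illustrative.

```c
#include <stdio.h>

#define CELL_PAYLOAD_BYTES 34      /* payload bytes carried by one cell          */
#define TOTAL_CELLS        12000   /* "12K" cells, assumed here to mean 12000    */
#define NUM_QUEUES         130

/* Number of cells needed to carry a packet of the given length. */
static unsigned cells_for_packet(unsigned packet_bytes)
{
    return (packet_bytes + CELL_PAYLOAD_BYTES - 1) / CELL_PAYLOAD_BYTES;
}

int main(void)
{
    printf("64-byte packet   -> %u cells\n", cells_for_packet(64));       /*  2 */
    printf("1522-byte packet -> %u cells\n", cells_for_packet(1522));     /* 45 */
    printf("even share per queue: %u cells\n", TOTAL_CELLS / NUM_QUEUES); /* 92 */
    return 0;
}
```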
Then, in step 50, the cells are stored in the central cache unit. Preferably, the central cache unit has a bit width matching the cell; in the present embodiment the central cache unit is 288 bits wide.
In step 55, the port and queue scheduling unit performs equal-time-slot polling scheduling among the different output ports, so that the scheduling right rotates to the next output port at every time interval. In the present embodiment the scheduling right rotates to the next output port every 34 system clocks, and the output port that obtains the scheduling right decides the scheduling among its queues according to the number of packets and cells queued per input port and COS, the priority levels, or the state of the output port. For the scheduling among the queues of an output port, three traditional scheduling algorithms are preferably used: strict priority, round robin, and weighted round robin. Those skilled in the art will appreciate that other suitable scheduling algorithms may also be combined with the equal-time-slot polling of the invention. A hedged scheduler sketch follows.
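The following is a minimal C sketch of equal-time-slot port polling combined with strict-priority selection among one port's COS queues (one of the three algorithms named above); the data structures, constants and function names are assumptions added for illustration, not the patent's implementation.

```c
#include <stdio.h>

#define NUM_PORTS  14   /* e.g. 12 GE + PCI + 10GE output ports (illustrative) */
#define NUM_QUEUES  8   /* COS queues per output port                          */

struct port {
    unsigned backlog[NUM_QUEUES];   /* cells waiting in each COS queue */
};

/* Strict priority among a port's queues: highest COS with backlog wins. */
static int pick_queue_strict_priority(const struct port *p)
{
    for (int q = NUM_QUEUES - 1; q >= 0; q--)
        if (p->backlog[q] > 0)
            return q;
    return -1;                      /* nothing to send this slot */
}

int main(void)
{
    struct port ports[NUM_PORTS] = {0};
    ports[3].backlog[6] = 4;
    ports[3].backlog[1] = 9;
    ports[7].backlog[0] = 2;

    /* Equal-time-slot polling: every time slot (34 system clocks in the
     * embodiment) the scheduling right rotates to the next output port. */
    for (unsigned slot = 0; slot < NUM_PORTS; slot++) {
        int port = slot % NUM_PORTS;
        int q = pick_queue_strict_priority(&ports[port]);
        if (q >= 0) {
            ports[port].backlog[q]--;   /* one cell read from the central cache */
            printf("slot %2u: port %2d sends from COS queue %d\n", slot, port, q);
        }
    }
    return 0;
}
```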
In step 60, after an output port determines which of its queues is to transmit, the output port issues a read-cell instruction to the central cache unit so that the cell is read from the corresponding storage region. In step 70, the packet reassembly unit reassembles the cells that have been read, and the reassembled data is sent onto the link.
Fig. 2 is a structural diagram of an embodiment of the Ethernet cache exchanging and scheduling apparatus of the present invention. The apparatus comprises input ports 110, output ports 150, a cell cutting unit 120, a central cache unit 130, a packet reassembly unit 140, and a port and queue scheduling unit 160. Preferably, the apparatus may also comprise a queue address control and stack unit 105, a lookup forwarding logic unit 115 and a packet filtering logic unit 116.
The queue address control and stack unit 105 in the Ethernet cache exchanging and scheduling apparatus classifies the data of the input ports 110 into queues. The number of queues is COS*PNUM + PNUM, where COS is the number of priority levels and PNUM is the number of ports of the system. The address space of the central cache unit 130 is then allocated, for example by dividing the space of the central cache unit 130 evenly or dynamically. Preferably, unicast queues and multicast queues are further separated.
The input ports 110 and output ports 150 comprise a plurality of gigabit ports, a PCI port and a 10-gigabit port, all connected to external links. The input ports 110 receive external packets; preferably, each input port comprises a counting unit (not shown) that counts the input packets in order to perform flow control on the incoming traffic.
The output ports 150 issue, through their controllers, read-cell instructions to the central cache unit 130, so that the cells in the central cache unit 130 are read into the packet reassembly unit 140.
Preferably, the Ethernet cache exchanging and scheduling apparatus also comprises a lookup forwarding logic unit 115 and a packet filtering logic unit 116. The lookup forwarding logic unit 115 adds a tag to the header portion of each packet received from an input port 110, indicating the output port to which the packet is to be sent. The packet filtering logic unit 116 may also add a tag to the header portion of each packet received from an input port, indicating whether the packet is to be dropped and its class of service.
The cell cutting unit 120 divides the packets received from the input ports 110 into cells of a certain length. In the present embodiment, the cell cutting unit 120 preferably first strips off the above tag information, i.e. removes the tag after reading it, and then cuts the packets into cells of 34 bytes according to the tag.
The central cache unit 130 stores the cells produced by the cell cutting unit 120. In the present embodiment, the central cache unit 130 uses a 288-bit-wide SRAM; alternatively, memories such as DRAM or SDRAM may be used as the central cache unit. The segmented cells are stored in the central cache unit 130: the 34-byte cell converts to 272 bits, and the remaining 16 bits are the cell's header flag.
Fig. 3 shows the cell structure of the embodiment of the present invention (a hedged struct sketch follows the field list):
CI[1:0]: 10B: packet payload part; 11B: packet tail part; 01B: packet head part.
BE[5:0]: number of valid bytes of the packet-tail cell; for example BE[5:0] = 100000B indicates 32 valid bytes.
SP[3:0]: source port number of the packet; for example SP[3:0] = 0001B indicates gigabit port 1.
COS[2:0]: indicates the COS level.
CEL[271:0]: the valid bytes of the cell, 272 bits in total, i.e. 34 bytes.
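As an illustration only, the 2-byte cell label and 272-bit payload described above could be modeled as below; the structure and enumeration names, the bit-field packing and the spare bit are assumptions (the fields named in the text occupy 15 of the 16 label bits), while the field names and widths follow Fig. 3.

```c
#include <stdint.h>

#define CELL_PAYLOAD_BYTES 34           /* CEL[271:0] = 272 bits = 34 bytes */

/* Hedged model of one cell: a 16-bit label alongside the 272-bit payload
 * stored in each 288-bit word of the central cache. */
enum cell_ci {
    CI_HEAD    = 0x1,   /* 01B: packet head part    */
    CI_PAYLOAD = 0x2,   /* 10B: packet payload part */
    CI_TAIL    = 0x3    /* 11B: packet tail part    */
};

struct cell {
    unsigned ci  : 2;   /* CI[1:0]  head / payload / tail indicator        */
    unsigned be  : 6;   /* BE[5:0]  valid bytes of a tail cell (e.g. 32)   */
    unsigned sp  : 4;   /* SP[3:0]  source port number (0001B = GE port 1) */
    unsigned cos : 3;   /* COS[2:0] class-of-service level                 */
    unsigned rsv : 1;   /* remaining bit of the 16-bit label (assumption)  */
    uint8_t  payload[CELL_PAYLOAD_BYTES];   /* CEL[271:0]                  */
};
```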
The Ethernet cache exchanging and scheduling apparatus also comprises a packet reassembly unit 140, which re-assembles and packs, for an output port 150, the cells that the scheduler has read out of the shared central cache unit 130. Preferably, while reassembling the cells, the packet reassembly unit 140 also verifies the reassembled data, drops erroneous packets, and keeps packet-loss counting statistics. A minimal reassembly sketch follows.
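The sketch below, under assumed data structures, accumulates cells until a tail cell arrives, uses BE for the valid bytes of the tail cell, and counts dropped malformed packets; the structure and function names are illustrative and not taken from the patent.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>
#include <stdbool.h>

#define CELL_PAYLOAD_BYTES 34
#define MAX_PACKET_BYTES 1522

/* Minimal cell view for this sketch (see the label fields above). */
struct cell {
    uint8_t ci;                              /* 1=head, 2=payload, 3=tail  */
    uint8_t be;                              /* valid bytes of a tail cell */
    uint8_t payload[CELL_PAYLOAD_BYTES];
};

struct reassembler {
    uint8_t  packet[MAX_PACKET_BYTES];
    unsigned length;
    unsigned drops;                          /* packet-loss statistics     */
};

/* Append one scheduled cell; returns true when a whole packet is ready. */
static bool reassemble(struct reassembler *r, const struct cell *c)
{
    if (c->ci == 1)                          /* head cell starts a packet  */
        r->length = 0;
    unsigned n = (c->ci == 3) ? c->be : CELL_PAYLOAD_BYTES;
    if (r->length + n > MAX_PACKET_BYTES) {  /* malformed: drop and count  */
        r->drops++;
        r->length = 0;
        return false;
    }
    memcpy(r->packet + r->length, c->payload, n);
    r->length += n;
    return c->ci == 3;                       /* tail cell completes packet */
}

int main(void)
{
    struct reassembler r = {0};
    struct cell head = {1, 0, {0}}, tail = {3, 30, {0}};
    reassemble(&r, &head);
    if (reassemble(&r, &tail))
        printf("reassembled packet of %u bytes\n", r.length);  /* 34 + 30 = 64 */
    return 0;
}
```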
The Ethernet cache exchanging and scheduling apparatus also comprises a port and queue scheduling unit 160, which performs equal-time-slot polling among the different output ports 150 so that the scheduling right rotates to the next output port 150 at every time interval. The output port 150 that obtains the scheduling right then schedules among its different queues according to a round-robin or weighted round-robin algorithm and issues a read-cell instruction to the central cache unit 130, so that the packet reassembly unit 140 reassembles the cells and the cells are sent from the central cache unit 130 to the corresponding output port 150. Combining the equal-time-slot scheduling algorithm with scheduling algorithms such as strict priority, round robin and weighted round robin achieves the low delay jitter of packet switching together with scheduling of the different service classes.
It should be noted that the implementation of the present invention is not limited to the above embodiment; any modification of other forms that does not depart from the spirit of the invention also falls within the protection scope of the invention.

Claims (13)

1. A method of Ethernet cache exchanging and scheduling, characterized by comprising:
A. receiving packets from an input port, the packets received by the input port being arranged into queues by a queue address control and stack unit;
B. dividing the packets into cells of fixed length by a cell cutting unit, wherein each cell has a label representing the data information of the cell;
C. storing the cells in a central cache unit;
D. performing equal-time-slot polling scheduling among different output ports by a port and queue scheduling unit, so that the scheduling right rotates to the next output port at every time interval, and scheduling among the different queues of the same output port;
E. after an output port determines which of its queues is to transmit, issuing, by the output port, a read-cell instruction to the central cache unit;
F. reassembling the cells that have been read by a packet reassembly unit, and sending the reassembled data onto the link.
2. The method according to claim 1, characterized in that in step D the scheduling among the different queues of the same output port uses a strict-priority algorithm, a round-robin algorithm or a weighted round-robin algorithm.
3. The method according to claim 1, characterized in that, between steps B and C, the cells of each input queue are mapped to a portion of the central cache unit by the queue address control and stack unit.
4. The method according to claim 3, characterized in that, in step A, the arrangement into queues is performed according to output port, COS classification and unicast/multicast.
5. The method according to claim 1, characterized in that the method also comprises, between steps A and B:
adding a tag to the header portion of each packet by a lookup forwarding logic unit, to indicate the output port to which the packet is to be sent;
adding a tag to the header portion of each packet by a packet filtering logic unit, to indicate whether the packet is to be dropped and its class of service;
stripping off the header tags by the cell cutting unit.
6. The method according to claim 1, characterized in that the data information of the cell comprises the number of valid bytes of the cell, the packet-head/packet-tail attribute and the source port information.
7. The method according to claim 1, characterized in that, before step A, when the number of packets input from the input port exceeds a set value, the central cache unit notifies the controller of the input port to send a flow-control frame, and when the number of packets at the input port is below the set value, the input port receives data.
8. The method according to claim 7, characterized in that, when the number of packets input from the input port exceeds the set value, packets input from the input port are dropped.
9. An apparatus of Ethernet cache exchanging and scheduling, comprising input ports and output ports, characterized in that the apparatus also comprises:
a cell cutting unit for dividing packets received from the input ports into cells of a certain length;
a central cache unit for storing the cells produced by the cell cutting unit;
a port and queue scheduling unit for performing equal-time-slot polling scheduling among different output ports, so that the scheduling right rotates to the next output port at every time interval, and for scheduling among the different queues of the same output port;
a packet reassembly unit for reassembling, for an output port, the cells read from the shared central cache unit.
10. The apparatus according to claim 9, characterized in that the apparatus also comprises:
a lookup forwarding logic unit for adding a tag to the header portion of each packet, to indicate the output port to which the packet is to be sent;
a packet filtering logic unit for adding a tag to the header portion of each packet, to indicate whether the packet is to be dropped and its class of service.
11. The apparatus according to claim 9, characterized in that the apparatus also comprises a queue address control and stack unit for arranging packets from the input ports into queues and for mapping the cells of each input queue to a portion of the central cache unit.
12. The apparatus according to claim 11, characterized in that the queue address control and stack unit arranges the input queues according to output port, COS classification and unicast/multicast.
13. The apparatus according to claim 9, characterized in that the port and queue scheduling unit uses a strict-priority algorithm, a round-robin algorithm or a weighted round-robin algorithm to schedule among the different queues of the same output port.
CNB2005101238855A 2005-11-24 2005-11-24 The method and apparatus of Ethernet cache exchanging and scheduling Expired - Fee Related CN100550833C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2005101238855A CN100550833C (en) 2005-11-24 2005-11-24 The method and apparatus of Ethernet cache exchanging and scheduling

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2005101238855A CN100550833C (en) 2005-11-24 2005-11-24 The method and apparatus of Ethernet cache exchanging and scheduling

Publications (2)

Publication Number Publication Date
CN1972239A true CN1972239A (en) 2007-05-30
CN100550833C CN100550833C (en) 2009-10-14

Family

ID=38112839

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2005101238855A Expired - Fee Related CN100550833C (en) 2005-11-24 2005-11-24 The method and apparatus of Ethernet cache exchanging and scheduling

Country Status (1)

Country Link
CN (1) CN100550833C (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101834787A (en) * 2010-04-12 2010-09-15 中兴通讯股份有限公司 Method and system for dispatching data
WO2010124516A1 (en) * 2009-04-30 2010-11-04 中兴通讯股份有限公司 Method and device for scheduling data communication input ports
CN102104548A (en) * 2011-03-02 2011-06-22 中兴通讯股份有限公司 Method and device for receiving and processing data packets
CN102118304A (en) * 2010-01-05 2011-07-06 中兴通讯股份有限公司 Cell switching method and cell switching device
CN101674250B (en) * 2009-11-09 2012-05-02 盛科网络(苏州)有限公司 Port bandwidth guaranteed packet switching chip and implementation method thereof
CN102447608A (en) * 2010-10-08 2012-05-09 中兴通讯股份有限公司 Method, device and system for realizing packet reorganization by adopting accelerating technology
CN102684983A (en) * 2011-03-15 2012-09-19 中兴通讯股份有限公司 Cell scheduling method and device
WO2012092894A3 (en) * 2012-02-01 2012-12-27 华为技术有限公司 Multicore processor system
CN102970249A (en) * 2012-12-25 2013-03-13 武汉烽火网络有限责任公司 Routing switching device and method
CN103023806A (en) * 2012-12-18 2013-04-03 武汉烽火网络有限责任公司 Control method and control device of cache resource of shared cache type Ethernet switch
CN103595658A (en) * 2013-11-18 2014-02-19 清华大学 Scalable fixed-length multi-path switching system without closed-loop flow control
WO2014166092A1 (en) * 2013-04-11 2014-10-16 华为技术有限公司 Resource allocation method, switch, and controller
CN104158770A (en) * 2014-08-20 2014-11-19 电子科技大学 A method and device for dividing and recombining switch packet
CN105262562A (en) * 2015-09-07 2016-01-20 香港中文大学深圳研究院 Preprocessing method for grouping and recombining algebraic exchange engine data packets
CN105635000A (en) * 2015-12-30 2016-06-01 华为技术有限公司 Message storing and forwarding method, circuit and device
CN107483405A (en) * 2017-07-17 2017-12-15 中国科学院空间应用工程与技术中心 A kind of dispatching method for supporting elongated cell and scheduling system
CN109525518A (en) * 2018-12-25 2019-03-26 北京物芯科技有限责任公司 A kind of IP packet method for network address translation and device based on FPGA
CN110430146A (en) * 2019-06-26 2019-11-08 天津芯海创科技有限公司 Cell recombination method and switching fabric based on CrossBar exchange
CN112104451A (en) * 2020-11-20 2020-12-18 武汉绿色网络信息服务有限责任公司 Method and device for refreshing data packet transmission port
CN115242728A (en) * 2022-06-27 2022-10-25 新华三技术有限公司 Message transmission method and device

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010124516A1 (en) * 2009-04-30 2010-11-04 中兴通讯股份有限公司 Method and device for scheduling data communication input ports
CN101674250B (en) * 2009-11-09 2012-05-02 盛科网络(苏州)有限公司 Port bandwidth guaranteed packet switching chip and implementation method thereof
CN102118304B (en) * 2010-01-05 2014-03-12 中兴通讯股份有限公司 Cell switching method and cell switching device
CN102118304A (en) * 2010-01-05 2011-07-06 中兴通讯股份有限公司 Cell switching method and cell switching device
CN101834787A (en) * 2010-04-12 2010-09-15 中兴通讯股份有限公司 Method and system for dispatching data
CN102447608A (en) * 2010-10-08 2012-05-09 中兴通讯股份有限公司 Method, device and system for realizing packet reorganization by adopting accelerating technology
CN102447608B (en) * 2010-10-08 2014-11-05 中兴通讯股份有限公司 Method, device and system for realizing packet reorganization by adopting accelerating technology
CN102104548A (en) * 2011-03-02 2011-06-22 中兴通讯股份有限公司 Method and device for receiving and processing data packets
CN102684983A (en) * 2011-03-15 2012-09-19 中兴通讯股份有限公司 Cell scheduling method and device
CN102684983B (en) * 2011-03-15 2016-08-03 中兴通讯股份有限公司 A kind of cell scheduling method and apparatus
US9152482B2 (en) 2012-02-01 2015-10-06 Huawei Technologies Co., Ltd. Multi-core processor system
WO2012092894A3 (en) * 2012-02-01 2012-12-27 华为技术有限公司 Multicore processor system
CN103023806A (en) * 2012-12-18 2013-04-03 武汉烽火网络有限责任公司 Control method and control device of cache resource of shared cache type Ethernet switch
CN103023806B (en) * 2012-12-18 2015-09-16 武汉烽火网络有限责任公司 The cache resources control method of shared buffer memory formula Ethernet switch and device
CN102970249A (en) * 2012-12-25 2013-03-13 武汉烽火网络有限责任公司 Routing switching device and method
WO2014166092A1 (en) * 2013-04-11 2014-10-16 华为技术有限公司 Resource allocation method, switch, and controller
CN103595658A (en) * 2013-11-18 2014-02-19 清华大学 Scalable fixed-length multi-path switching system without closed-loop flow control
CN103595658B (en) * 2013-11-18 2016-09-21 清华大学 Expansible fixed length multipath exchange system without closed-loop flow control
CN104158770A (en) * 2014-08-20 2014-11-19 电子科技大学 A method and device for dividing and recombining switch packet
CN104158770B (en) * 2014-08-20 2018-02-13 电子科技大学 A kind of method and apparatus of exchange data bag cutting and restructuring
CN105262562A (en) * 2015-09-07 2016-01-20 香港中文大学深圳研究院 Preprocessing method for grouping and recombining algebraic exchange engine data packets
CN105635000A (en) * 2015-12-30 2016-06-01 华为技术有限公司 Message storing and forwarding method, circuit and device
CN105635000B (en) * 2015-12-30 2019-02-01 华为技术有限公司 A kind of message storage forwarding method and circuit and equipment
CN107483405A (en) * 2017-07-17 2017-12-15 中国科学院空间应用工程与技术中心 A kind of dispatching method for supporting elongated cell and scheduling system
CN107483405B (en) * 2017-07-17 2020-01-31 中国科学院空间应用工程与技术中心 scheduling method and scheduling system for supporting variable length cells
CN109525518A (en) * 2018-12-25 2019-03-26 北京物芯科技有限责任公司 A kind of IP packet method for network address translation and device based on FPGA
CN109525518B (en) * 2018-12-25 2021-01-12 北京物芯科技有限责任公司 IP message network address conversion method and device based on FPGA
CN110430146A (en) * 2019-06-26 2019-11-08 天津芯海创科技有限公司 Cell recombination method and switching fabric based on CrossBar exchange
CN112104451A (en) * 2020-11-20 2020-12-18 武汉绿色网络信息服务有限责任公司 Method and device for refreshing data packet transmission port
CN115242728A (en) * 2022-06-27 2022-10-25 新华三技术有限公司 Message transmission method and device
CN115242728B (en) * 2022-06-27 2023-07-21 新华三技术有限公司 Message transmission method and device

Also Published As

Publication number Publication date
CN100550833C (en) 2009-10-14

Similar Documents

Publication Publication Date Title
CN100550833C (en) The method and apparatus of Ethernet cache exchanging and scheduling
US8009569B2 (en) System and a method for maintaining quality of service through a congested network
CN101136854B (en) Method and apparatus for implementing data packet linear speed processing
CN100405344C (en) Apparatus and method for distributing buffer status information in a switching fabric
CN1543149B (en) Flow control in a network environment
CN100579065C (en) Transmission method and device for high speed data flow and data exchange device
EP1045558B1 (en) Very wide memory TDM switching system
CN101478483A (en) Method for implementing packet scheduling in switch equipment and switch equipment
US9602436B2 (en) Switching device
CN104378308A (en) Method and device for detecting message sending rate
US7352766B2 (en) High-speed memory having a modular structure
US7126959B2 (en) High-speed packet memory
CN104954292A (en) System and method for segmenting and regrouping data packets on basis of CLOS (Chinese library of science) switch network
CN102111327B (en) Method and system for cell dispatching
CN106789734B (en) Control system and method for macro frame in exchange control circuit
EP2526478B1 (en) A packet buffer comprising a data section an a data description section
US7680043B2 (en) Network processor having fast flow queue disable process
CN110290074A (en) The Crossbar crosspoint design method interconnected between FPGA piece
CN114531488B (en) High-efficiency cache management system for Ethernet switch
CN104468156B (en) A kind of method and apparatus that resource overhead is saved using time-slot arbitration
CN1172488C (en) Dividing method for bond ports of switch and switch chip
CN1165142C Output queue method and device for network data packets
CN104618083B (en) Method for forwarding multi-channel message
CN100379216C (en) High speed port device for communication equipment
CN110233805B (en) Switching device, system and method for variable cell

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20091014

Termination date: 20171124