WO2012162949A1 - Packet reassembly and reordering method, apparatus, and system - Google Patents

Packet reassembly and reordering method, apparatus, and system

Info

Publication number
WO2012162949A1
Authority
WO
WIPO (PCT)
Prior art keywords
cell
message
queue
sequence number
source
Prior art date
Application number
PCT/CN2011/078504
Other languages
English (en)
French (fr)
Inventor
拉米
乌曼斯基•艾利克斯
王研
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to CN201180001422.5A (CN103069757B)
Priority to KR1020147005957A (KR101607180B1)
Priority to JP2014525279A (JP5863076B2)
Priority to PCT/CN2011/078504
Priority to EP11866504.1A (EP2736197B1)
Publication of WO2012162949A1
Priority to US14/182,106 (US9380007B2)

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 - Packet switching elements
    • H04L 49/90 - Buffering arrangements
    • H04L 49/9057 - Arrangements for supporting packet reassembly or resequencing
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/10 - Flow control; Congestion control
    • H04L 47/34 - Flow control; Congestion control ensuring sequence integrity, e.g. using sequence numbers
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 - Data switching networks
    • H04L 12/02 - Details
    • H04L 12/16 - Arrangements for providing special services to substations
    • H04L 12/18 - Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L 12/185 - Arrangements for providing special services to substations for broadcast or conference, e.g. multicast with management of multicast group membership
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/10 - Flow control; Congestion control
    • H04L 47/24 - Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/2408 - Traffic characterised by specific attributes, e.g. priority or QoS, for supporting different services, e.g. a differentiated services [DiffServ] type of service
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/50 - Queue scheduling
    • H04L 47/62 - Queue scheduling characterised by scheduling criteria
    • H04L 47/624 - Altering the ordering of packets in an individual queue

Definitions

  • The present invention relates to the field of communications technologies, and in particular to a packet reassembly and reordering method, apparatus, and system.
  • A data communication device is composed of a plurality of line cards and switch fabric boards, where the line cards and the switch fabric boards are vertically connected.
  • After receiving packets of various lengths destined for different destinations from an upstream device, the input line card sends the packets to a packet processor (PP, Packet Processor); the packet processor segments each packet into cells according to its destination and delivers the cells to traffic management (TM, Traffic Management), which schedules them across the switch fabric to the destination line card.
  • When the cells arrive at the destination, they may be out of order (that is, the cells belonging to the same packet do not arrive in the order of their position within the packet), so the out-of-order cells need to be resequenced, the cells belonging to the same packet need to be reassembled into complete packets, and the packets sent by the same source port need to be arranged in the order in which the source port sent them, before finally being sent from the output port to the downstream device.
  • The destination therefore performs a series of operations such as resequencing cells, reassembling the cells that belong to the same packet into a complete packet, and arranging the packets sent by the same source port in the order in which the source port sent them.
  • The embodiments of the present invention provide a packet reassembly and reordering method, apparatus, and system, which can be applied to high-performance devices and occupy less memory and fewer control resources.
  • A packet reassembly and reordering method includes: receiving a cell sent by a source port, where the cell carries a source identifier, a message sequence number, and a cell sequence number; preprocessing the received cell according to the source identifier to determine whether to insert the cell into a message reassembly database; sorting the cells in the message reassembly database according to the message sequence number to obtain correctly sorted messages; if a correctly sorted message is a complete message, sorting the cells in the correctly sorted message according to the cell sequence number to obtain correctly sorted cells; and performing message reassembly on the correctly sorted cells.
  • A network device includes:
  • a receiving unit, configured to receive a cell sent by the source port, where the cell carries a source identifier, a message sequence number, and a cell sequence number;
  • a pre-processing unit, configured to perform pre-processing on the cell received by the receiving unit according to the source identifier, to determine whether to insert the cell into the message reassembly database;
  • a message sorting unit, configured to sort the cells in the message reassembly database according to the message sequence number, to obtain correctly sorted messages;
  • a cell sorting unit, configured to: when determining that a message is a complete message, sort the cells in the correctly sorted message according to the cell sequence number, to obtain correctly sorted cells; and
  • a reassembly unit, configured to perform message reassembly on the correctly sorted cells.
  • A communication system includes a source port device and any network device provided by the embodiments of the present invention; the source port device is configured to send a cell to the network device, where the cell carries a source identifier, a message sequence number, and a cell sequence number.
  • In the embodiments of the present invention, the received cell is preprocessed according to the source identifier to determine whether to insert the received cell into the message reassembly database, the cells in the database are then sorted directly according to the message sequence number and the cell sequence number of the cell, and finally the message is reassembled. Because the solution reorders packets directly on a per-source basis, it avoids the prior-art packet reordering based on timescale synchronization and can be applied to high-performance devices. Moreover, because the solution controls and manages the received cells (that is, the packets waiting to be reassembled), resource utilization is high, and less memory and fewer control resources are occupied.
  • FIG. 1 is a flowchart of a packet reassembly and reordering method according to Embodiment 1 of the present invention;
  • FIG. 2a is an architecture diagram of a packet reassembly and reordering process according to an embodiment of the present invention;
  • FIG. 2b is a flowchart of a packet reassembly and reordering method according to Embodiment 2 of the present invention;
  • FIG. 2c is a schematic diagram of the message sorting queue and message reassembly database structure in unicast;
  • FIG. 2d is a flowchart of the preprocessing method in an embodiment of the present invention;
  • FIG. 2e is a schematic diagram of a cell sorting mechanism in a packet reassembly and reordering method according to an embodiment of the present invention;
  • FIG. 3a is a schematic structural diagram of a network device according to an embodiment of the present invention;
  • FIG. 3b is another schematic structural diagram of a network device according to an embodiment of the present invention.
  • Embodiments of the present invention provide a packet reassembly and reordering method, apparatus, and system, which are described in detail below. Embodiment 1
  • A packet reassembly and reordering method includes: receiving a cell sent by a source port, where the cell carries a source identifier, a message sequence number, and a cell sequence number; preprocessing the received cell according to the source identifier to determine whether to insert the cell into the message reassembly database; sorting the cells in the message reassembly database according to the message sequence number to obtain correctly sorted messages; if a correctly sorted message is a complete message, sorting the cells in the correctly sorted message according to the cell sequence number to obtain correctly sorted cells; and reassembling the correctly sorted cells.
  • the specific process can be as follows:
  • Receive a cell sent by a source port, where the cell carries a source identifier (SID, Source Identification), a message sequence number, and a cell sequence number, as follows:
  • Source identifier: used to identify the source and type of the cell. From the source identifier it can be determined which source port the cell came from.
  • The source identifier can be divided into a unicast source identifier and a multicast source identifier.
  • The unicast source identifier is the source port number.
  • The multicast source identifier is the set of the source port number, the packet service type, and the identifier of the multicast replication domain (MGID, Multicast Group ID), that is, {source port number, packet service type, MGID}, where the MGID is assigned to a multicast packet by the PP of the source line card.
  • Message sequence number: used to identify the position of the message in the data stream.
  • Cell sequence number: used to identify the position of the cell within the message.
  • the cell sent by the source port may be specifically received through a fabric.
  • The cells are obtained by the source port segmenting a packet, and the source port assigns the message sequence number and the cell sequence number to each cell. For example, two packets destined for the same destination port are each divided into cells: the first packet is divided into 3 cells and the second packet is divided into 4 cells.
  • The sequence numbers of the cells of the first packet are (1, 1), (1, 2), (1, 3); the sequence numbers of the cells of the second packet are (2, 1), (2, 2), (2, 3), (2, 4).
  • The first number in parentheses is the message sequence number, and the second number in parentheses is the cell sequence number.
  • the cell may include cell data and cell information.
  • the details can be as follows:
  • The cell data of the cell is saved in the payload memory (also referred to below as the load memory) according to the source identifier, and a corresponding load memory index value is configured; according to the load memory index value, the corresponding cell data can be retrieved from the load memory.
  • The cell information may include information related to the cell such as the source identifier of the cell, a local timestamp, the load memory index value, the message sequence number, and the cell sequence number.
  • The local timestamp is mainly used to judge how long the cell has been waiting locally; if the waiting time is too long, the cell is discarded. A sketch of such a cell information record, and of the tagging of cells at the source, is given below.
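  • The following sketch is illustrative only; the class, field, and function names are hypothetical and not taken from the original disclosure. It shows one way the cell information described above, and the tagging of cells with (message sequence number, cell sequence number) at the source, could be represented:

```python
from dataclasses import dataclass

@dataclass
class CellInfo:
    source_id: int        # SID: identifies which source port (and, for multicast, which group) sent the cell
    local_timestamp: int  # arrival time at the destination port, used for wait/timeout decisions
    payload_index: int    # index into the payload (load) memory where the cell data was saved
    packet_seq: int       # message sequence number: position of the message in the data stream
    cell_seq: int         # cell sequence number: position of the cell within the message

def segment(packet_seq: int, payload: bytes, cell_size: int):
    """Split one message into cells and tag each cell with (message sequence number, cell sequence number)."""
    return [
        (packet_seq, cell_seq + 1, payload[i:i + cell_size])
        for cell_seq, i in enumerate(range(0, len(payload), cell_size))
    ]

# Two messages for the same destination port: 3 cells and 4 cells, as in the example above.
print([c[:2] for c in segment(1, b"x" * 300, 128)])  # [(1, 1), (1, 2), (1, 3)]
print([c[:2] for c in segment(2, b"y" * 450, 128)])  # [(2, 1), (2, 2), (2, 3), (2, 4)]
```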
  • When the depth of all PF queues is less than a preset threshold, it is determined whether the cell corresponding to the head cell information of the IF queue should be processed. If it should be processed, the head cell information of the IF queue is popped; if it should not be processed, the head cell information of the IF queue is pushed into one of the PF queues (multiple PF queues can be constructed to buffer cells with different waiting times), and the head cell information of the fullest PF queue is popped.
  • If the popped cell has not timed out, the cell information is inserted into the message reassembly database; when the message sequence number of the cell is greater than the maximum open message sequence number of the same source in the message reassembly database, the cell information is inserted into one of the PF queues instead.
  • When the depth of a PF queue is greater than the preset threshold, it is determined whether the cell corresponding to the head cell information of the overflowing PF queue should be processed. If it should not be processed, the cell corresponding to the head cell information of the overflowing PF queue (that is, the PF queue whose depth is greater than the preset threshold, for example a PF queue that is full) is recovered, and then the step of determining whether the cell corresponding to the head cell information of the IF queue should be processed is performed (see the description above); if it should be processed, the step of pushing the head cell information of the IF queue into one of the PF queues and popping the head cell information of the fullest PF queue is performed.
  • The PF queue is mainly used to store cells that reach the destination port too early. That is, if a cell reaches the destination port too early, it is not processed for the moment but is placed in the PF queue, and at the correct time it is taken out of the PF queue and inserted into the message reassembly database.
  • Whether a cell should be processed may be determined according to the number of open packets (that is, packets waiting to be processed) allowed in the queue: it is determined whether the difference between the message sequence number of the current cell and the minimum message sequence number of the packets being processed is less than a preset value. If it is less than the preset value, the cell should be processed and can be handled now; otherwise the cell should not be processed yet and is delayed, that is, pushed into a PF queue. For example, if the maximum number of open packets in the queue is 16 and the minimum message sequence number of the packets currently being processed is 1024, then the cells of all packets whose message sequence number is less than 1024 should be discarded directly; the cells of packets whose message sequence number is greater than or equal to 1024 and less than or equal to 1024+16 should be processed, for example, after further determining that the cell has not timed out and that its message sequence number does not exceed the maximum open message sequence number, the cell is inserted into the message reassembly database; and the cells of all packets whose message sequence number is greater than 1024+16 should be cached, that is, inserted into a PF queue. In this way, the step of processing the cell information in the IF queue according to the source identifier, the message sequence number, and the state of the PF (Postponed FIFO) queues determines whether to insert the cell information into the message reassembly database; a sketch of this window check is given below.
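  • As a minimal sketch of the open-packet window check described above (the function name, the return labels, and the policy of discarding stale cells outright are assumptions for illustration):

```python
def classify(cell_packet_seq: int, min_open_seq: int, max_open_packets: int) -> str:
    """Decide what the pre-processing stage does with a cell, based on its message sequence number."""
    if cell_packet_seq < min_open_seq:
        return "discard"            # belongs to a message that is no longer open
    if cell_packet_seq <= min_open_seq + max_open_packets:
        return "insert_into_db"     # inside the open window: insert into the message reassembly database
    return "postpone_to_PF"         # arrived too early: buffer the cell in a Postponed FIFO queue

# Example from the text: a window of 16 open packets, minimum open sequence number 1024.
for seq in (1000, 1024, 1040, 1041):
    print(seq, classify(seq, 1024, 16))
# 1000 discard, 1024 insert_into_db, 1040 insert_into_db, 1041 postpone_to_PF
```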
  • For example, the way it is determined whether to insert the cell information into the message reassembly database can be as follows:
  • When the depth of all the PF queues is less than the preset threshold, it is determined whether the difference between the message sequence number of the cell corresponding to the head cell information of the IF queue and the minimum message sequence number of the currently processed packets is less than a fourth threshold; if yes, the head cell information of the IF queue is popped; if not, the head cell information of the IF queue is pushed into one of the PF queues, and the head cell information of the fullest PF queue is popped.
  • If the popped cell has not timed out, the cell information is inserted into the message reassembly database; when the message sequence number of the cell is larger than the largest open message sequence number of the same source in the message reassembly database, the cell information is inserted into one of the PF queues instead.
  • When the depth of a PF queue is greater than the preset threshold, it is determined whether the difference between the message sequence number of the cell corresponding to the head cell information of the overflowing PF queue and the minimum message sequence number of the currently processed packets is less than a fifth threshold; if yes, the cell corresponding to the head cell information of the overflowing PF queue is recovered, and then the step of determining whether the difference between the message sequence number of the cell corresponding to the head cell information of the IF queue and the minimum message sequence number of the currently processed packets is less than the fourth threshold is performed; if not, the step of pushing the head cell information of the IF queue into one of the PF queues and popping the head cell information of the fullest PF queue is performed.
  • The fourth threshold and the fifth threshold may be set according to actual application requirements. Whether for unicast or multicast, the message reassembly database can include a large number of message sorting queues, and each message sorting queue holds several messages waiting to be reordered (the packets waiting for reassembly and reordering are also called open packets).
  • For unicast, the message sorting queues may be divided according to the packet service type (CoS, Class of Service) and the source identifier (SID, Source Identification), that is, cells with different packet service types and source identifiers are placed in different message sorting queues.
  • For multicast, the message sorting queues can be divided according to the packet service type (CoS, Class of Service), the source identifier (SID, Source Identification), and the multicast group identifier (MGID, Multicast Group ID).
  • Each message sorting queue has a message descriptor array, that is, a pre-allocated piece of memory used to accommodate the messages waiting to be sorted.
  • Each entry in the message descriptor array is actually a pointer to a cell descriptor of a cell belonging to that message.
  • Each cell descriptor in turn has a pointer to the next cell, so that all cells belonging to a message form a linked list, and all cells can be traversed starting from the pointer in the message descriptor array; a sketch of this structure is given below.
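  • The sketch below illustrates this message-descriptor-array and cell-descriptor linked list; the class and field names are illustrative, not the patent's:

```python
class CellDescriptor:
    def __init__(self, cell_seq, payload_index):
        self.cell_seq = cell_seq
        self.payload_index = payload_index
        self.next = None                      # pointer to the next cell of the same message

class MessageSortingQueue:
    def __init__(self, max_open_packets):
        # pre-allocated message descriptor array: one head pointer per open message
        self.descriptor_array = [None] * max_open_packets

    def insert_cell(self, slot, cell):
        # link the new cell in front of the previously received cells of this message
        cell.next = self.descriptor_array[slot]
        self.descriptor_array[slot] = cell    # the entry points to the most recently received cell

    def cells_of(self, slot):
        node = self.descriptor_array[slot]
        while node is not None:
            yield node.cell_seq
            node = node.next

q = MessageSortingQueue(16)
for seq in (1, 3, 2):                         # cells of one message arriving out of order
    q.insert_cell(0, CellDescriptor(seq, payload_index=seq))
print(list(q.cells_of(0)))                    # [2, 3, 1]: traversal starts from the newest cell
```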
  • The size of the message descriptor array (that is, how many open messages it can accommodate) can be dynamically adjusted. For example, several message descriptor arrays of different sizes may be allocated in advance; if the number of open messages in a sorting queue increases and causes the message descriptor array to overflow, a larger message descriptor array can be used instead of the previous smaller one. That is, the database can be organized as follows:
  • For unicast, the message reassembly database may include a source descriptor memory (SDM, Source Descriptor Memory) and a source queue memory (SQM, Source Queue Memory). The source descriptor memory is used to store source descriptors (Source Descriptor), indexed by the packet service type and the source port number, that is, {packet service type, source port number} is used as an index. The source queue memory is used to store the source queue corresponding to each source descriptor; the source queue memory includes a pointer to a sorted message buffer (Reorder-Packet-Array-Buffer), and the sorted message buffer is used to store the message descriptors (Packet Descriptor) of the cells waiting to be sorted, where each message descriptor points to the cell descriptor (Cell Descriptor) of the most recently received cell belonging to that message.
  • For multicast, the message reassembly database can include an MGID descriptor memory and an MGID queue memory.
  • The MGID descriptor memory is used to store MGID descriptors, indexed by the packet service type, the MGID, and the source port number, for example, using a hash value of {packet service type, MGID, source port number} as the index.
  • The MGID queue memory is used to store the MGID queue corresponding to each MGID descriptor; the MGID queue memory may include a pointer to the sorted message buffer, and the sorted message buffer is used to store the message descriptors of the cells waiting to be sorted.
  • The sorted message buffer is a dynamic message descriptor array and includes M types of memory blocks of different capacities, where M is a positive integer; this is similar to the unicast case and is not described here again.
  • The MGID descriptor memory and the MGID queue memory can support a hash function, that is, each process (such as the IF queues, the PF queues, the complete queues, the sorted message buffer, and the cell sorting buffer) can access the MGID descriptor memory and the MGID queue memory through the hash function, where the hash table can be constructed with multiple hash buckets per entry.
  • The so-called hash function maps {packet service type, MGID, source port number} into a relatively small space: {packet service type, MGID, source port number} is input into the hash function to calculate an index value, and the index value corresponds to one or more message sorting queues (that is, multiple hash buckets).
  • Control information may be sent from the current destination port to the multicast source to reduce the amount of multicast data sent to this destination port.
  • the first threshold may be set according to the requirements of the actual application.
  • For example, suppose each hash table entry includes 2 hash buckets (that is, two message sorting queues), and (1, 2, 128), (7, 0, 256), (11, 3, 567), and so on, are all mapped to 16; that is, with these combinations as input, the result of the hash function is 16.
  • Then the cells of (1, 2, 128) can use the first message sorting queue and the cells of (7, 0, 256) can use the second message sorting queue, while the cells of (11, 3, 567) can only be discarded when they arrive (because there are only two message sorting queues per hash entry). Of course, once all the cells of (1, 2, 128) have been successfully reassembled and that sorting queue has been scheduled, the first message sorting queue becomes empty again.
  • The resource conflict lookup table indicates which (SID, CoS, MGID) index values currently occupy these message sorting queues, and it is dynamically maintained according to the current state; a sketch of this lookup is given below.
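  • The following sketch shows one possible form of the hashed lookup with two buckets per entry and a conflict table; the hash function, the table sizes, and all names are arbitrary assumptions, not the patent's design:

```python
NUM_ENTRIES, BUCKETS_PER_ENTRY = 64, 2

def hash_index(cos, mgid, source_port):
    return (cos * 31 + mgid * 17 + source_port) % NUM_ENTRIES

# conflict lookup table: which (CoS, MGID, source port) currently owns each bucket (None = free)
conflict_table = [[None] * BUCKETS_PER_ENTRY for _ in range(NUM_ENTRIES)]

def find_sorting_queue(cos, mgid, source_port):
    """Return (entry, bucket) for this combination, claiming a free bucket if needed; None means discard."""
    key, entry = (cos, mgid, source_port), hash_index(cos, mgid, source_port)
    for bucket, owner in enumerate(conflict_table[entry]):
        if owner == key:                      # this combination already owns a sorting queue
            return entry, bucket
    for bucket, owner in enumerate(conflict_table[entry]):
        if owner is None:                     # claim a free bucket of this hash entry
            conflict_table[entry][bucket] = key
            return entry, bucket
    return None                               # both buckets busy: the cell can only be discarded

print(find_sorting_queue(1, 2, 128))          # first combination claims a bucket
print(find_sorting_queue(1, 2, 128))          # the same combination finds its existing bucket again
```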
  • Specifically, after the packet-level sorting the result may be as follows: the first message sorting queue is (1, 1), (1, 3), (1, 2); the second message sorting queue is (2, 3), (2, 1), (2, 4), (2, 2).
  • If a correctly sorted message is a complete message, the cells in the message are then sorted according to the cell sequence number. The details can be as follows:
  • The message descriptor of the complete message is added to the corresponding complete queue (CF, Complete-FIFO) according to the broadcast type of the cell and the packet service type; according to the message descriptor, the corresponding cell information is inserted into the cell sorting buffer (also referred to as the cell reorder buffer, Complete Packets FIFO), and the cell information extracted into the cell sorting buffer is sorted according to the cell sequence number.
  • the broadcast type of the cell may include unicast or multicast.
  • To improve sorting efficiency, the cell sorting buffer needs to have sufficient blank space when cell sorting is performed. Therefore, it can be required that the blank space of the cell sorting buffer be larger than a preset threshold before cell sorting is performed; for example, cell sorting may be performed only when less than half of the buffer is occupied.
  • Sorting the extracted cell information in the cell sorting buffer according to the cell sequence number may be done as follows (a sketch is given below):
  • The cell sorting buffer maintains a read pointer and a write pointer. The write pointer always points to the start position of the message currently being cell-sorted, and the read pointer is used to read the sorted cells one by one for message reassembly; therefore, the write pointer and the cell sequence number can be added to calculate the offset position of the inserted cell information, and each cell is then placed according to its offset position within the message (that is, its correct position in the message) to obtain correctly sorted cell information.
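  • A rough sketch of this write-pointer placement, under the simplifying assumptions that cell sequence numbers start at 1 and that message completeness is known externally:

```python
class CellSortingBuffer:
    def __init__(self, size):
        self.slots = [None] * size
        self.wr_ptr = 0          # start position of the message currently being cell-sorted
        self.rd_ptr = 0          # next sorted cell to hand to the reassembly stage

    def insert(self, cell_seq, cell_info):
        # offset position = write pointer + cell sequence number (minus 1, since numbering starts at 1)
        self.slots[(self.wr_ptr + cell_seq - 1) % len(self.slots)] = cell_info

    def read_message(self, num_cells):
        """Read the sorted cells of one complete message, one by one, for reassembly."""
        out = []
        for _ in range(num_cells):
            out.append(self.slots[self.rd_ptr])
            self.slots[self.rd_ptr] = None
            self.rd_ptr = (self.rd_ptr + 1) % len(self.slots)
        self.wr_ptr = self.rd_ptr            # the next message starts where this one ended
        return out

buf = CellSortingBuffer(8)
for seq, name in [(2, "c2"), (1, "c1"), (3, "c3")]:   # cells of one message, out of order
    buf.insert(seq, name)
print(buf.read_message(3))                            # ['c1', 'c2', 'c3']
```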
  • A reassembly resource then performs message reassembly on the cell data.
  • At least two reassembly resources may be allocated for the messages whose cell information has been correctly sorted.
  • The reassembly resources are managed in a resource pool and are assigned to each source when needed; "when needed" means when reassembly is about to start. For example, when the cells in a sorting buffer have been correctly ordered, an indication signal requesting message reassembly, such as a FIFO-RDY signal, can be sent to the message reassembly mechanism; after the message reassembly mechanism receives the indication signal, it can start to reassemble the message in that sorting buffer.
  • The message reassembly mechanism may receive multiple indication signals requesting message reassembly at the same time. In this case, the message reassembly mechanism may select a cell sorting buffer to process according to the packet service type; that is, reassembling the correctly sorted cells can be performed as follows (a sketch is given below):
  • A cell sorting buffer is selected according to the packet service type, the correctly sorted cell information in the selected cell sorting buffer is read one by one, and the cell data is extracted according to the read cell information to reassemble the message.
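  • A sketch of this selection-and-reassembly step, using simplified stand-ins for the cell sorting buffers and the load memory (the data layout and the rule that a lower CoS value means higher priority are assumptions):

```python
def reassemble(ready_buffers, payload_memory):
    """ready_buffers: {cos: list of sorted cell-info dicts}; returns (cos, reassembled message) or None."""
    if not ready_buffers:
        return None
    cos = min(ready_buffers)                  # select a cell sorting buffer by packet service type
    sorted_cells = ready_buffers.pop(cos)
    # read the correctly sorted cell information one by one and fetch the cell data by its index
    message = b"".join(payload_memory[c["payload_index"]] for c in sorted_cells)
    return cos, message

payload_memory = {10: b"HEAD", 11: b"-BODY", 12: b"-TAIL"}
ready = {2: [{"payload_index": 10}, {"payload_index": 11}, {"payload_index": 12}]}
print(reassemble(ready, payload_memory))      # (2, b'HEAD-BODY-TAIL')
```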
  • If message reassembly times out, the packet is discarded. For example, it may be determined whether the difference between the current time and the arrival time of the first cell of the packet to reach the destination port (the local timestamp may be used to identify the arrival time) is greater than a preset second threshold; if yes, it is determined that message reassembly has timed out and the packet is discarded; if not, it is confirmed that message reassembly has not timed out and reassembly of the message continues.
  • In addition to the reassembly timeout, the estimated delay of the switching network may also be taken into account.
  • The delay of the switching network may be estimated according to the sequence number of the packet; for details, refer to the related art, which is not described here. If a packet stays in the switching network for too long, it can be discarded directly without waiting for the reassembly timeout. That is, before reassembling the correctly sorted cells into a message, the method may further include: discarding the packet when the estimated delay of the switching network exceeds a preset third threshold.
  • the second threshold and the third threshold may be set according to requirements of an actual application.
  • For example, suppose the packet with message sequence number 123 is delayed. A subsequent packet, such as the packet with sequence number 124, cannot be reassembled even if all of its cells have been collected; instead, it must wait until the packet with message sequence number 123 has received all of its cells, or until the packet with sequence number 123 is discarded, before it can be processed. Because the reassembly timeout is generally long, a packet with sequence number 123 that is delayed particularly long in the switching network would stall the queue while its reassembly timer has not yet expired.
  • In that case, if the estimated switching-network delay exceeds the third threshold (for example, the third threshold is set to 100), it is not necessary to wait for the packet with sequence number 123 to time out; the packet with sequence number 123 can be discarded directly.
  • the above reorganization reordering process may be divided into multiple parallel reordering engines to support unicast or multicast data streams of different message service types (CoS, Class of Service).
  • the packet service type is equivalent to the packet priority.
  • In this embodiment, the received cell is preprocessed according to the source identifier to determine whether to insert the received cell into the message reassembly database, the cells in the database are then sorted directly according to the message sequence number and the cell sequence number of the cell, and finally the message is reassembled. Since the solution reorders packets directly per source, it avoids the prior-art packet reordering based on timescale synchronization and can be applied to high-performance devices. Moreover, since the scheme controls and manages the received cells (that is, the packets waiting to be reassembled), resource utilization is high, and less memory and fewer control resources are occupied.
  • FIG. 2a is an architecture diagram of the packet reassembly and reordering process, which can be mainly divided into the following steps: Step 1, preprocessing. After a cell reaches the destination port, the memory allocation module allocates load memory for the received cell, and the pre-processing module saves the cell data of the cell in the load memory according to the source identifier.
  • According to the source identifier of the cell, the cell information of the cell is classified, and the cell information is stored in different IF queues according to the category, for example, according to the broadcast type: if it is unicast data, it is placed in the Unicast Input FIFO (UIF) queue; if it is multicast data, it is placed in the Multicast Input FIFO (MIF) queue, see FIG. 2a.
  • The cell information may include information such as the source identifier of the cell, a local timestamp, the load memory index value, the message sequence number, and the cell sequence number.
  • the preprocessing module processes the cell information in the UIF queue and the MIF queue according to the source identifier, the local timestamp, and the state of the PF queue to determine whether to insert the cell information into the message reassembly database of the message sorting module.
  • The UIF queue and the MIF queue have corresponding processing modules in the message reassembly database. For example, referring to FIG. 2b, the UIF queue corresponds to the "unicast" module in the message reassembly database, and the MIF queue corresponds to the "multicast" module in the message reassembly database.
  • Step 2, message sorting: the message sorting module sorts the packets from each source to obtain correctly sorted packets. For example, the message descriptors in the sorted message buffer can be sorted according to the message sequence number, and cells with the same message sequence number in the message reassembly database are placed into the same message sorting queue, see "CoS0, CoS1, CoS2" in FIG. 2a; in this figure, CoS0, CoS1, and CoS2 respectively represent the packet sorting queues of different packet service types.
  • At this point the packets have been correctly sorted, but the cells in the packets are still out of order; that is, the packets in CoS0, CoS1, and CoS2 are correctly sorted, but the cells in CoS0, CoS1, and CoS2 are still out of order.
  • Step 3 Cell sorting: The cell sorting module respectively sorts the cells in each complete packet to obtain correctly sorted cells.
  • A complete packet means that all of the cells of the packet have been received.
  • Step 4 The message reassembly module reassembles the correctly sorted cells into a complete message.
  • The memory allocation unit may allocate a reassembly resource for the message reassembly module; the message reassembly module reads the correctly sorted cell information in the cell sorting module one by one, extracts the cell data from the load memory according to the read cell information to reassemble the message, and outputs the reassembled message, see FIG. 2a.
  • the above reorganization reordering process can be further divided into multiple parallel reorganization engines to support Unicast traffic and multicast traffic with the same CoS (Class of Service).
  • The load memory should be designed to satisfy one read request from the message reassembly module per clock cycle, and message reassembly, once started, cannot be interrupted. Therefore, the message reassembly module does not need a FIFO queue capable of accommodating an entire message; a FIFO queue capable of accommodating 2 to 4 cells is sufficient. After one message begins to be reassembled, its reassembly is completed before another message begins.
  • the memory allocation unit is mainly used to allocate and reclaim the following types of memory: cell descriptor, message descriptor array, MGID queue memory (MGID queue memory is for multicast data);
  • the memory includes at least one FIFO queue for managing the allocatable buffer. Each FIFO queue should be able to allocate a buffer entry in one clock cycle.
  • After the packet sorting module dequeues a packet, it submits the packet to the packet recovery module.
  • The packet recovery module submits the cells in the packet to the cell recovery module one by one. When a packet is discarded, it is likewise submitted to the packet recovery module and handled in the same way, so that the discarded packet is recycled.
  • This also applies to packets whose sequence numbers are smaller than that of the discarded packet and that still reside in the message reassembly database.
  • The source port divides a packet into cells and configures a corresponding message sequence number and cell sequence number for each cell.
  • For example, the sequence numbers of the cells of the first packet are (1, 1), (1, 2), (1, 3), (1, 4); the sequence numbers of the cells of the second packet are (2, 1), (2, 2), (2, 3), (2, 4), (2, 5).
  • the first number is the message serial number and the second number is the cell serial number.
  • The message sequence number is inserted into each cell and is maintained per data stream. For example, for unicast data, the message sequence number is maintained according to the destination port and the packet service type; for multicast data, the message sequence number is maintained according to the MGID and the packet service type. A sketch of such per-stream numbering is given below.
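  • A sketch of such per-stream numbering at the source (the class and method names are hypothetical):

```python
from collections import defaultdict

class SequenceNumbering:
    """Maintains message sequence numbers per data stream, as described above."""
    def __init__(self):
        self.counters = defaultdict(int)

    def next_unicast(self, dest_port, cos):
        self.counters[("u", dest_port, cos)] += 1      # unicast: per destination port and service type
        return self.counters[("u", dest_port, cos)]

    def next_multicast(self, mgid, cos):
        self.counters[("m", mgid, cos)] += 1           # multicast: per MGID and service type
        return self.counters[("m", mgid, cos)]

sn = SequenceNumbering()
print(sn.next_unicast(dest_port=5, cos=0), sn.next_unicast(dest_port=5, cos=0))  # 1 2
print(sn.next_unicast(dest_port=6, cos=0))                                       # 1 (a separate stream)
print(sn.next_multicast(mgid=7, cos=1))                                          # 1
```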
  • the source port transmits the cell to the destination port through the switching network, where the cell carries the source identifier, the packet sequence number, and the cell sequence number.
  • After receiving the cells, the pre-processing module of the destination port adds a local timestamp to each received cell, saves the cell data of the cell in the load memory allocated by the memory allocation module, and configures the corresponding load memory index value; according to the load memory index value, the corresponding cell data can later be retrieved from the load memory.
  • The pre-processing module of the destination port inserts the cell information of the cell into the UIF queue according to the source identifier, where the cell information may include information related to the cell such as the source identifier of the cell, the local timestamp, the load memory index value, the message sequence number, and the cell sequence number.
  • the pre-processing module of the destination port performs pre-processing on the cell information in the UIF queue according to the source identifier, the packet sequence number, and the state of the PF queue to determine whether to insert the cell information into the message reassembly database.
  • The message reassembly database is composed of a source descriptor memory and a source queue memory. The source descriptor memory is used to store source descriptors, and each source descriptor corresponds to a source queue in the source queue memory; the source queue memory is used to store the source queues, and each source queue corresponds to multiple message descriptors in the sorted message buffer.
  • The source queue memory contains a pointer to the sorted message buffer. This sorted message buffer can accommodate 32 to 512 message descriptors; each message descriptor points to the cell descriptor of the most recently received cell belonging to that message, and each cell descriptor contains the load memory index value pointing to the cell data in the payload memory.
  • The sorted message buffer can be directly accessed; the access index can be calculated by subtracting the first message sequence number from the current message sequence number (Packet-Sequence-Number - First-Packet-Sequence).
  • The source queue memory can be built as a dynamic array of message descriptors, including N types of memory blocks of different capacities. Each source descriptor contains a pointer to a memory block of this kind, and the type of memory block is determined by the size of the required message descriptor array. Inserting or deleting a message descriptor may require switching from one memory block to another. To obtain sufficient bandwidth, the memory blocks support simultaneous reading and writing. There is no need to maintain a read pointer, because after a message is fetched the positions of the message descriptors in the array shift by one, so the read position always remains the start of the array. A sketch of this directly indexed, dynamically sized buffer is given below.
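  • A sketch of such a directly indexed, dynamically sized buffer; the block sizes and the growth policy are assumptions for illustration:

```python
class SortedMessageBuffer:
    BLOCK_SIZES = (32, 128, 512)                 # several kinds of memory blocks of different capacity

    def __init__(self, first_packet_seq):
        self.first_packet_seq = first_packet_seq
        self.block = [None] * self.BLOCK_SIZES[0]

    def slot(self, packet_seq):
        # direct access: Packet-Sequence-Number - First-Packet-Sequence
        idx = packet_seq - self.first_packet_seq
        while idx >= len(self.block):            # overflow: switch to the next, larger memory block
            bigger = self.BLOCK_SIZES[self.BLOCK_SIZES.index(len(self.block)) + 1]
            self.block = self.block + [None] * (bigger - len(self.block))
        return idx

buf = SortedMessageBuffer(first_packet_seq=1024)
print(buf.slot(1024), buf.slot(1050))            # 0 26
print(buf.slot(1200), len(buf.block))            # 176 512  (grew past the 32- and 128-entry blocks)
```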
  • A1: The pre-processing module (Prepare Packet) of the destination port determines whether the depth of all the PF queues is less than a preset threshold. If yes, step A2 is performed; if not, step A6 is performed.
  • the depth of the PF queue is smaller than the preset threshold, indicating that there is enough space in the PF queue to cache more cells that need to be delayed.
  • the threshold can be set according to the requirements of the actual application.
  • A2: The pre-processing module of the destination port determines, according to the local timestamp, whether the cell corresponding to the head cell information of the UIF queue should be processed; if yes, step A3 is performed; if not, step A8 is performed.
  • A3: The pre-processing module of the destination port pops the head cell information of the IF queue, and then performs step A4.
  • A4: The pre-processing module of the destination port determines whether the popped cell (the cell corresponding to the head cell information popped from the IF queue or from the PF queue) has timed out. If yes, step A5 is performed; otherwise, step A10 is performed.
  • A5: The pre-processing module of the destination port recovers the cell corresponding to the popped head cell information and the packet to which the cell belongs; for example, the cell corresponding to the head cell information is sent to the cell recovery module (Cell Recycle), and the packet to which it belongs is sent to the packet recovery module (Packet Recycle).
  • The cell recovery module is mainly used for recovering cells, and the packet recovery module is mainly used for recovering packets.
  • A6: If not all the PF queues have a depth less than the preset threshold, the pre-processing module of the destination port determines, according to the local timestamp, whether the cell corresponding to the head cell information of the overflowing PF queue should be processed; if it should not be processed, step A7 is performed; if it should be processed, step A8 is performed.
  • The overflowing PF queue refers to a PF queue whose depth is greater than the preset threshold. A7: If the cell corresponding to the head cell information of the overflowing PF queue should not be processed, the pre-processing module pops the head cell information of the overflowing PF queue and recovers the corresponding cell; for example, the cell corresponding to the head cell information is sent to the cell recovery module, and then the flow returns to step A2.
  • A8: The pre-processing module pops the head cell information of the IF queue and pushes it into one of the PF queues, and then goes to step A9.
  • A9: The pre-processing module pops the head cell information of the fullest PF queue, and then returns to the step of determining whether the popped cell has timed out (step A4).
  • A10: The pre-processing module takes the popped head cell information and reads the source descriptor memory in the message reassembly database based on the source identifier, and then performs step A11.
  • A11: The pre-processing module determines whether the message sequence number of the cell corresponding to the head cell information is greater than the maximum open message sequence number in the source descriptor memory; if yes, step A13 is performed; if not, step A12 is performed.
  • A12: The pre-processing module inserts the head cell information into the source descriptor memory.
  • A13: The pre-processing module inserts the head cell information into one of the PF queues.
  • the pre-processing module delays the cells that are not in the current open message processing window until the open message processing window covers the cells.
  • A monitor can also be set to check the PF queues. If it detects that a non-empty PF queue has not been accessed for a long time, the monitor triggers the pre-processing module to preferentially process that PF queue.
  • the PF queue can be divided into multiple parallel queues according to the delay time required by the cells.
  • The required cell delay time can be evaluated from the difference between the message sequence number of the current cell and the largest message sequence number in the current message reassembly database; the larger the difference, the longer the delay time required by the cell. A sketch of selecting a PF queue by this difference is given below.
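  • A sketch of choosing among parallel PF queues by the estimated delay (the bucket boundaries are arbitrary assumptions):

```python
def choose_pf_queue(cell_packet_seq, max_seq_in_db, boundaries=(4, 16, 64)):
    """Return the index of the PF queue to use: a larger gap means a longer wait, hence a later queue."""
    gap = cell_packet_seq - max_seq_in_db        # how far ahead of the reassembly database the cell is
    for queue_index, limit in enumerate(boundaries):
        if gap <= limit:
            return queue_index
    return len(boundaries)                       # longest-delay PF queue

print(choose_pf_queue(1042, 1040))   # 0: barely early, shortest-delay PF queue
print(choose_pf_queue(1100, 1040))   # 2: gap of 60
print(choose_pf_queue(1500, 1040))   # 3: far ahead, longest-delay PF queue
```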
  • The message sorting module puts the message descriptors into the sorted message buffer, sorts the message descriptors in the sorted message buffer according to the message sequence number, and places the cell information that has the same message sequence number in the message reassembly database into the message sorting queue corresponding to that message descriptor.
  • the cell sorting module sorts the cell information in the correctly sorted message according to the cell serial number, and obtains the correctly sorted cell information.
  • The cell sorting module can be constructed as a plurality of synchronized complete queues, and each packet service type (also referred to as priority, or CoS) can work independently with its own control logic and data buffer queue.
  • the cell sorting module extracts the message descriptor of each complete message from the message sorting queue, and adds it to the corresponding complete queue according to the broadcast type and the service type of the packet (ie, CoS);
  • According to the message descriptor, the corresponding cell information is inserted into the cell sorting buffer, and the extracted cell information in the cell sorting buffer is sorted according to the cell sequence number.
  • A complete message means that all the cells of the message have been received in the message reassembly database.
  • The cell sorting buffer can hold 2 jumbo packets, which is about 20K bytes. Only when less than half of the buffer is occupied is a message descriptor extracted and the corresponding cell information sorted into the buffer.
  • FIG. 2e is a schematic diagram of the cell sorting mechanism: the cell sorting buffer maintains a read pointer (RD_PTR) and a write pointer (WR_PTR).
  • The write pointer always points to the beginning of the message currently being sorted, and the offset position of the inserted cell information is calculated by adding the write pointer and the cell sequence number.
  • The cell information of a message can be sorted into the buffer only when the size of the message is smaller than the blank space remaining in the cell sorting buffer. The read pointer waits until the cell information of the next message has been sorted, and then the cell information of that message is read out one by one.
  • "EOP” in Figure 2e represents the last cell in a message
  • "SOP" indicates the start position of the message.
  • A concatenated cell (Concatenated-Cell) can be processed as follows: when the first message is inserted, the pointer of the last cell of the first message can be inserted twice into the input FIFO, while in the cell reorder buffer (Cell-Reorder-Buffer) a flag is added indicating that the first copy of this cell must not be delivered to the reassembly mechanism.
  • After the second copy has been reassembled, the duplicated data is discarded. The background is as follows: when the source splits a message into cells, the last cell often has a data hole, that is, the length of the message cannot be split into an integer number of complete cells, so the valid data of the last cell is smaller than that of a complete cell. Such cells cause resource waste during transmission and processing.
  • The optimization is that cells are no longer segmented strictly along message boundaries; instead, the continuous stream of messages is treated as a sequence of stream slices, so that all cells carry complete data segments.
  • This inevitably results in some cells containing both the last piece of data of the previous message and the first piece of data of the following message; a cell that contains the last piece of data of the previous message and the first piece of data of the following message is called a spliced cell.
  • Such a cell needs to be inserted into the input FIFO queue twice, once as the EOP cell of the previous message and once as the SOP cell of the next message.
  • The two copies will then exist consecutively in the cell sorting module. Since the contents of the two copies are identical, both cannot be submitted to the message reassembly unit. Therefore, a flag bit is required to mark the first copy (the EOP copy) so that it is not submitted to the message reassembly module; after the reordering of the message containing the second copy is completed, the first copy is discarded, ensuring that only one copy of the data is submitted to the message reassembly module. A sketch of this double insertion is given below.
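  • A sketch of the double insertion of a spliced cell with a "do not submit" flag on the first copy; the dictionary keys and flag names are assumptions:

```python
from collections import deque

def enqueue_cell(input_fifo, cell, is_spliced):
    if is_spliced:
        # first copy: EOP of the previous message, flagged so it is never handed to reassembly
        input_fifo.append({"cell": cell, "role": "EOP", "submit_to_reassembly": False})
        # second copy: SOP of the next message, the only copy whose data is submitted
        input_fifo.append({"cell": cell, "role": "SOP", "submit_to_reassembly": True})
    else:
        input_fifo.append({"cell": cell, "role": "normal", "submit_to_reassembly": True})

def drain_for_reassembly(input_fifo):
    """Only one copy of a spliced cell's data ever reaches the message reassembly module."""
    return [e["cell"] for e in input_fifo if e["submit_to_reassembly"]]

fifo = deque()
enqueue_cell(fifo, "cellA", is_spliced=False)
enqueue_cell(fifo, "cellB", is_spliced=True)     # carries the end of message 1 and the start of message 2
print(len(fifo), drain_for_reassembly(fifo))     # 3 ['cellA', 'cellB']
```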
  • When a message is ready for reassembly, the cell sorting buffer sends an indication signal requesting message reassembly to the reassembly mechanism, such as the FIFO-RDY signal.
  • The cell sorting mechanism (including the cell sorting buffers and the complete queues) is composed of a plurality of parallel first-in first-out (FIFO) queues, and the queues are distinguished by the packet service type of the message (also known as priority, or CoS) and by the broadcast type, so the sorting mechanism needs no additional queues, which saves resources.
  • The message reassembly module receives the indication signal requesting message reassembly, for example, the FIFO_RDY signal, reads the correctly sorted cell information in the cell sorting buffer one by one, and extracts the cell data according to the read cell information to reassemble the message.
  • Because multiple cell sorting buffers may send indication signals requesting message reassembly (such as FIFO_RDY signals) to the message reassembly module at the same time, the message reassembly module can select which cell sorting buffer to process based on the packet service type (that is, the priority) of the message.
  • In this embodiment, the received cell information is preprocessed according to the source identifier to determine whether to insert the received cell information into the message reassembly database, the cell information in the database is then sorted directly according to the message sequence number and the cell sequence number, and finally the corresponding cell data is extracted according to the sorted cell information to implement message reassembly. Since the solution reorders packets directly per source, it avoids the prior-art packet reordering based on timescale synchronization and can be applied to high-performance devices. Moreover, since the scheme controls and manages the received cells (that is, the packets waiting to be reassembled), resource utilization is high, and less memory and fewer control resources are occupied.
  • Unicast messages require the source descriptor memory, indexed by {priority, source port number}.
  • Multicast messages require the MGID descriptor memory and are indexed by the hash value (Hash) of {priority, MGID, source port number}.
  • the MGID represents a group of destination ports that belong to the same replication domain.
  • The source queue memory used in unicast is replaced by the MGID queue memory. That is, the multicast message reassembly database may include an MGID descriptor memory and an MGID queue memory, where the MGID descriptor memory is used to store the MGID descriptors and the hash value of {priority, MGID, source port number} is used as the index.
  • The MGID queue memory is used to store the MGID queue corresponding to each MGID descriptor; the MGID queue memory may include a pointer to the sorted message buffer, and the sorted message buffer is used to store the message descriptors of the cells waiting to be sorted, where each message descriptor points to the cell descriptor of the most recently received cell belonging to that message.
  • The MGID queue memory can be constructed as a dynamic message descriptor array, including M types of memory blocks of different capacities, where M is a positive integer; each MGID descriptor contains a pointer to such a memory block, and the type of memory block is determined by the size of the required message descriptor array.
  • The rows of these memories may correspond to multiple sources (that is, multiple {priority, MGID, source port number} combinations). Since these memory rows are allocated per source, a new requirement is introduced: when requesting resources for a new multicast source, it is necessary to check whether the rows have been released. To solve this problem, another memory is needed to maintain this information.
  • In addition, a threshold may be configured, which is referred to as the first threshold in this embodiment; when the first threshold is reached, control information is sent to the source to reduce the traffic of multicast data transmitted to this destination port.
  • the multicast sorting mechanism and the reorganization mechanism are the same as the unicast, and therefore will not be described here. For details, refer to the second embodiment.
  • In addition, each message descriptor should include an earliest cell timestamp (Earliest-Cell-Timestamp) indicating the arrival time, at the destination port, of the first of all the cells of the message to arrive. If the difference between the current time and the earliest cell timestamp is greater than the preset second threshold, the message is marked as a reassembly timeout; when such a message arrives at the head of the message descriptor array, even if it has not received all of its cells, the other cells are not waited for, and the message is popped and discarded. Otherwise, before the message arrives at the head of the sorting array, no judgment is made on whether its reassembly has timed out; instead, the following operation is performed: when the first cell of each message is inserted, the earliest cell timestamp field in the message descriptor is updated. A sketch of this check is given below.
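  • A sketch of this earliest-cell-timestamp check at the head of the message descriptor array; the threshold value and the descriptor fields are assumptions:

```python
SECOND_THRESHOLD = 1_000   # maximum allowed reassembly wait, in the same units as the timestamps

def on_first_cell(descriptor, now):
    # the earliest cell timestamp is updated when the first cell of the message is inserted
    descriptor["earliest_cell_ts"] = now

def at_array_head(descriptor, now):
    """Decide what to do once the message reaches the head of the message descriptor array."""
    if descriptor["complete"]:
        return "reassemble"
    if now - descriptor["earliest_cell_ts"] > SECOND_THRESHOLD:
        return "pop_and_discard"   # reassembly timeout: do not wait for the missing cells
    return "keep_waiting"

d = {"complete": False}
on_first_cell(d, now=0)
print(at_array_head(d, now=500))    # keep_waiting
print(at_array_head(d, now=1500))   # pop_and_discard
```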
  • In addition, whenever a cell arrives, the oldest cell timestamp in the source descriptor is checked, regardless of which message the arriving cell belongs to; and in the case where no data arrives from a source, silent source detection (SSD, Silent-Source-Detection) triggers such a check.
  • Timed-out messages are transferred to the message recovery mechanism for processing.
  • The packet reassembly and reordering method may further include:
  • the destination port discards the packet when the estimated value of the delay of the switching network (that is, the transmission time of the packet in the switching network) exceeds the preset third threshold.
  • the transmission time of the packet in the switching network can be estimated by the destination port according to the sequence number of the packet.
  • the embodiment of the present invention further provides a network device, which can be specifically used as a destination port.
  • the network device includes a receiving unit 301, a pre-processing unit 302, a message sorting unit 303, a cell sorting unit 304, and a recombining unit 305.
  • the receiving unit 301 is configured to receive a cell that is sent by the source port, where the cell carries a source identifier, a message sequence number, and a cell sequence number, and the source identifier may be a source port number.
  • the pre-processing unit 302 is configured to pre-process the cells received by the receiving unit 301 according to the source identifier to determine whether to insert the cell into the database;
  • the message sorting unit 303 is configured to sort the cells in the message reorganization database according to the message sequence number, to obtain correctly sorted messages;
  • the cell sorting unit 304 is configured to: when determining that the message is a complete message, sort the cells in the correctly sorted message obtained by the message sorting unit 303 according to the cell sequence number, to obtain the correctly sorted cells;
  • the recombining unit 305 is configured to perform message reassembly on the correctly sorted cells obtained by the cell sorting unit 304.
  • The cell may include cell data and cell information. To save resources, only the cell information may be operated on during sorting; when the recombining unit 305 reassembles the message, the corresponding cell data is extracted according to the cell information and the message is reassembled from the cell data.
  • the pre-processing unit 302 can include a data saving sub-unit 3021, an information insertion sub-unit 3022, and a decision sub-unit 3023.
  • The data saving subunit 3021 is configured to: add a local timestamp recording the arrival of the cell received by the receiving unit 301, save the cell data of the cell in the load memory according to the source identifier, and configure the corresponding load memory index value;
  • the information insertion subunit 3022 is configured to insert the cell information of the cell received by the receiving unit 301 into the input FIFO queue, where the cell information includes information such as the source identifier of the cell, the local timestamp, the load memory index value, the message sequence number, and the cell sequence number;
  • the decision subunit 3023 is configured to process, according to the source identifier, the message sequence number, and the state of the PF queues, the cell information inserted into the IF queue by the information insertion subunit 3022, to determine whether to insert the cell information into the message reassembly database.
  • the details can be as follows:
  • The decision subunit 3023 is specifically configured to: when the depth of all the PF queues is less than a preset threshold, determine whether the difference between the message sequence number of the cell corresponding to the head cell information of the IF queue and the minimum message sequence number of the currently processed messages is less than the fourth threshold; if yes, pop the head cell information of the IF queue; if not, push the head cell information of the IF queue into one of the PF queues and pop the head cell information of the fullest PF queue; then determine whether the cell corresponding to the popped head cell information (from the IF queue or from the PF queue) has timed out; if it has timed out, recover the cell corresponding to the cell information and the message to which the cell belongs; if it has not timed out, insert the cell information into the message reassembly database, and when the message sequence number of the cell is greater than the maximum open message sequence number of the same source in the message reassembly database, insert the cell information into one of the PF queues instead.
  • The decision subunit is further configured to: when the depth of a PF queue is greater than the preset threshold, determine whether the difference between the message sequence number of the cell corresponding to the head cell information of the overflowing PF queue and the minimum message sequence number of the currently processed messages is less than the fourth threshold; if yes, recover the cell corresponding to the head cell information of the overflowing PF queue, and then perform the step of determining whether the difference between the message sequence number of the cell corresponding to the head cell information of the IF queue and the minimum message sequence number of the currently processed messages is less than the fourth threshold; if not, perform the step of pushing the head cell information of the IF queue into one of the PF queues and popping the head cell information of the fullest PF queue.
  • the message reassembly database can be designed separately for unicast and multicast; for example, a message reassembly database for unicast and a message reassembly database for multicast can be set up separately within one message reassembly database.
  • for unicast, the message reassembly database may include a source descriptor memory and a source queue memory. The source descriptor memory is used to store the source descriptors and is indexed by the message service type and the source port number, that is, {message service type, source port number} is used as the index; the source queue memory is used to store the source queue corresponding to the source descriptor and may include a pointer to a reorder-packet buffer area, where the reorder-packet buffer area is used to store the message descriptors of the cells waiting to be sorted, and each message descriptor points to the cell descriptor of the cell of this message that was received last.
  • the source descriptor memory and the source queue memory may be created by a dynamic message descriptor array, including N types of memory blocks of different capacities, where N is a positive integer.
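  • a minimal Python sketch of these data structures is given below, under stated assumptions: the class names, the 32-entry buffer and the slot() index helper (current message sequence number minus the first sequence number held in the buffer, as described elsewhere in this document) are illustrative choices rather than the patent's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class CellDescriptor:
    payload_index: int                    # where the cell data lives in payload memory
    cell_seq: int
    next: "CellDescriptor" = None         # link to another cell descriptor of the same message

@dataclass
class PacketDescriptor:
    msg_seq: int
    earliest_cell_timestamp: int          # arrival time of the first cell of this message
    last_cell: CellDescriptor = None      # points at the most recently received cell

@dataclass
class SourceQueue:
    # Reorder-packet buffer: a pre-allocated array of packet descriptors; a real
    # implementation would pick one of N block sizes and grow when it overflows.
    reorder_buffer: list = field(default_factory=lambda: [None] * 32)
    first_msg_seq: int = 0                # sequence number stored at index 0

# Source descriptor memory: indexed by {message service type, source port number}.
source_descriptors = {(1, 7): SourceQueue(first_msg_seq=1024)}   # CoS 1, source port 7

def slot(queue, msg_seq):
    """Direct access into the reorder buffer: current sequence number minus the
    first sequence number held in the buffer."""
    return msg_seq - queue.first_msg_seq

q = source_descriptors[(1, 7)]
q.reorder_buffer[slot(q, 1026)] = PacketDescriptor(msg_seq=1026, earliest_cell_timestamp=100)
```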
  • for multicast, the message reassembly database includes an MGID descriptor memory and an MGID queue memory.
  • the MGID descriptor memory is used to store the MGID descriptors and is indexed by the message service type, the MGID, and the source port number, for example by a hash value of {message service type, MGID, source port number}.
  • the MGID queue memory is used to store the MGID queue corresponding to the MGID descriptor and may include a pointer to a reorder-packet buffer area, where the reorder-packet buffer area is used to store the message descriptors of the cells waiting to be sorted, and each message descriptor points to the cell descriptor of the cell of this message that was received last.
  • the MGID descriptor memory and the MGID queue memory are dynamic message descriptor arrays, including M types of memory blocks of different capacities, where M is a positive integer.
  • the MGID descriptor memory and the MGID queue memory support a hash function, that is, each process (such as the IF queues, the PF queues, the complete queues, the reorder-packet buffer, and the cell sorting buffer) can access the MGID descriptor memory and the MGID queue memory through the hash function; that is:
  • the message sorting unit 303 is specifically configured to access the MGID descriptor memory and the MGID queue memory by using the hash function;
  • the hash function can be constructed as multiple hash buckets. Since a hash function is adopted, resource conflicts may occur; therefore, a resource conflict lookup table (FAVT) also needs to be maintained, so that when a resource conflict occurs, the cell in which the resource conflict occurs can be pushed into another MGID queue or discarded according to the preset resource conflict lookup table. Therefore, as shown in FIG. 3b, the network device may further include a conflict processing unit 306;
  • the conflict processing unit 306 is configured to: when a resource conflict occurs, push the cell in which the resource conflict occurs into another MGID queue, or discard (that is, recycle) it, according to the preset resource conflict lookup table. Specifically, when the message sorting unit 303 accesses the MGID descriptor memory and the MGID queue memory through the hash function, if a resource conflict occurs, the cell in which the resource conflict occurs is pushed into another MGID queue or discarded according to the preset resource conflict lookup table.
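  • the following Python sketch illustrates, under assumptions, how a hashed MGID table with a small number of buckets per row and a conflict lookup table could behave; the table sizes, the use of Python's hash(), and the policy of simply recording the conflict and dropping the cell are illustrative simplifications, since the document allows either pushing the conflicting cell into another MGID queue or discarding it.

```python
NUM_HASH_ROWS = 4096      # assumed number of rows in the hashed MGID table
BUCKETS_PER_ROW = 2       # assumed number of hash buckets per row

def mgid_hash(cos, mgid, src_port):
    """Illustrative hash over {message service type, MGID, source port number}."""
    return hash((cos, mgid, src_port)) % NUM_HASH_ROWS

mgid_rows = [[None] * BUCKETS_PER_ROW for _ in range(NUM_HASH_ROWS)]
favt = {}                 # resource conflict lookup table (FAVT): conflicting keys seen so far

def lookup_or_allocate(cos, mgid, src_port):
    """Return (row, bucket index) for this multicast source, or None on conflict."""
    key = (cos, mgid, src_port)
    row = mgid_rows[mgid_hash(*key)]
    for i, owner in enumerate(row):
        if owner == key:
            return row, i                 # this source already owns a bucket in the row
    for i, owner in enumerate(row):
        if owner is None:
            row[i] = key                  # claim a free bucket (message sorting queue)
            return row, i
    favt[key] = row                       # conflict: record it; the cell is redirected or dropped
    return None

print(lookup_or_allocate(1, 300, 7) is not None)   # -> True
```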
  • in addition, to avoid the resource conflict lookup table becoming full, a threshold may be configured, which is referred to as a first threshold in the embodiment of the present invention; when the amount of information in the resource conflict lookup table exceeds this first threshold, a piece of control information is triggered and sent to the source, so as to reduce the amount of multicast data sent to this destination port.
  • therefore, as shown in FIG. 3b, the network device may further include a control message sending unit 307;
  • the control message sending unit 307 is configured to: when the amount of information in the resource conflict lookup table exceeds the preset first threshold, trigger the destination port to send control information to the multicast source, so as to reduce the amount of multicast data sent to the destination port.
  • the cell sorting unit 304 may include an enqueue subunit 3041, a buffer subunit 3042, and a sorting subunit 3043;
  • the enqueue subunit 3041 is configured to: when it is determined that a message sorted by the message sorting unit 303 is a complete message, add the message descriptor of the complete message to the corresponding complete queue according to the broadcast type and priority of the cell;
  • the buffer subunit 3042 is configured to insert the corresponding cell information into the cell sorting buffer according to the message descriptors in the complete queue;
  • the sorting sub-unit 3043 is configured to sort the extracted cell information in the cell sorting buffer according to the cell sequence number.
  • the cell sorting buffer maintains a read pointer and a write pointer.
  • the write pointer always points to the start position of the message whose cells are currently being sorted, and the read pointer is used to read out the sorted cell information one by one for message reassembly; therefore, the write pointer and the cell sequence number can be added together to calculate the offset position of the inserted cell information, and the position of the inserted cell information in the message is then adjusted according to the offset position, so as to obtain correctly sorted cell information; that is:
  • the sorting sub-unit 3043 is specifically configured to add the write pointer and the cell sequence number to calculate an offset position of the inserted cell information, and adjust the position of the inserted cell information in the message according to the offset position.
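  • a minimal Python model of this buffer is sketched below; the 64-slot capacity, the helper names, and the assumption that cell sequence numbers start at 1 (as in the examples earlier in this document) are illustrative choices, not the patent's implementation.

```python
from collections import namedtuple

CellRef = namedtuple("CellRef", "cell_seq payload_index")   # minimal stand-in for cell information

class CellSortBuffer:
    """Cells of one message land at write_ptr + cell_seq, then are read out in order."""

    def __init__(self, capacity=64):
        self.slots = [None] * capacity
        self.write_ptr = 0   # start position of the message currently being sorted
        self.read_ptr = 0    # next sorted cell to hand to the reassembly mechanism

    def insert(self, cell):
        # Offset = write pointer + cell sequence number (1-based in this sketch).
        offset = (self.write_ptr + cell.cell_seq - 1) % len(self.slots)
        self.slots[offset] = cell

    def finish_message(self, num_cells):
        # The next message starts right after the one that has just been sorted.
        self.write_ptr = (self.write_ptr + num_cells) % len(self.slots)

    def read_out(self, num_cells):
        out = []
        for _ in range(num_cells):
            out.append(self.slots[self.read_ptr])
            self.slots[self.read_ptr] = None
            self.read_ptr = (self.read_ptr + 1) % len(self.slots)
        return out

buf = CellSortBuffer()
for seq in (3, 1, 2):                                   # cells arrive out of order
    buf.insert(CellRef(cell_seq=seq, payload_index=seq * 10))
buf.finish_message(3)
print([c.cell_seq for c in buf.read_out(3)])            # -> [1, 2, 3]
```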
  • the reassembly unit 305 may include an allocation subunit 3051, a reading subunit 3052, and a reassembly subunit 3053; the allocation subunit 3051 is configured to allocate reassembly resources to the message to which the correctly sorted cell information belongs;
  • the reading subunit 3052 is configured to read out the correctly sorted cell information in the cell sorting buffer one by one;
  • the reassembly subunit 3053 is configured to obtain the corresponding cell data according to the cell information read by the reading subunit 3052, and reassemble the obtained cell data into the message by using the reassembly resources allocated by the allocation subunit. For example, the reassembly subunit 3053 can read out, according to the cell information read by the reading subunit 3052, the cell data that the data saving subunit 3021 stored in the payload memory.
  • further, to improve efficiency, at least two reassembly resources may be allocated to the message to which the correctly sorted cell information belongs, that is, the allocation subunit 3051 is further configured to allocate at least two reassembly resources to the message to which the correctly sorted cell information belongs.
  • to save resources, the reassembly resources are managed in a resource pool and are allocated to each source only when needed, that is, when reassembly is about to start; for example, when the cells in a sorting buffer have been correctly ordered, an indication signal requesting message reassembly, such as a FIFO-RDY signal, can be sent to the message reassembly mechanism, and after receiving this indication signal the message reassembly mechanism can start to reassemble the message in that sorting buffer.
  • of course, if there are at least two cell sorting buffers, the message reassembly mechanism may receive multiple indication signals requesting message reassembly at the same time; in this case, the message reassembly mechanism may select a cell sorting buffer for processing according to the message service type, that is, the reassembly unit may further include a selection subunit 3054;
  • the selection subunit 3054 is configured to select a cell sorting buffer according to the message service type; the allocation subunit 3051 is then specifically configured to allocate reassembly resources to the message to which the correctly sorted cell information in the cell sorting buffer selected by the selection subunit 3054 belongs;
  • the reading subunit 3052 is configured to read out the correctly sorted cell information in the cell sorting buffer selected by the selecting subunit 3054 one by one.
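  • one way to picture this arbitration is the Python sketch below; the min-heap ordering on service type, the pool size, and the rule of drawing exactly two resources per job are assumptions layered on top of the text, which only states that at least two reassembly resources may be allocated from a pool and that buffers are selected by message service type.

```python
import heapq

class ReassemblyScheduler:
    """Buffers that raise a FIFO-RDY style indication are queued; the buffer with the
    highest-priority service type (lowest CoS value here, by assumption) is served first."""

    def __init__(self, resource_pool_size=8):
        self.ready = []                                  # min-heap of (service type, buffer id)
        self.resource_pool = list(range(resource_pool_size))

    def signal_ready(self, service_type, buffer_id):
        heapq.heappush(self.ready, (service_type, buffer_id))

    def next_job(self):
        if not self.ready or len(self.resource_pool) < 2:
            return None                                  # need at least two reassembly resources
        service_type, buffer_id = heapq.heappop(self.ready)
        resources = [self.resource_pool.pop(), self.resource_pool.pop()]
        return buffer_id, resources                      # caller reassembles, then releases

    def release(self, resources):
        self.resource_pool.extend(resources)             # resources go back into the pool

sched = ReassemblyScheduler()
sched.signal_ready(service_type=2, buffer_id="multicast-0")
sched.signal_ready(service_type=0, buffer_id="unicast-0")
print(sched.next_job()[0])   # -> 'unicast-0' (higher-priority service type wins)
```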
  • the network device may further include a discarding unit, and the discarding unit is configured to discard the packet when the packet reassembly timeout is determined.
  • the discarding unit is configured to confirm that the packet reassembly is timed out when the difference between the arrival time of the first cell that reaches the destination port in the packet and the current time is greater than a preset second threshold.
  • the details may be as follows: each message descriptor includes an earliest cell timestamp, which indicates the arrival time of the cell that was the first of all the cells of the message to reach the destination port. If the difference between the current time and this earliest cell timestamp is greater than the preset second threshold, the message is marked as a reassembly timeout; when the message arrives at the head of the message descriptor array, even if the message has not yet received all of its cells, the other cells of the message are no longer waited for; instead, the message is popped from the queue and discarded.
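  • a minimal sketch of this check, assuming a numeric second threshold and message descriptors that carry an earliest cell timestamp, could look as follows; the value 5000 and the timed_out flag are hypothetical names introduced only for illustration.

```python
SECOND_THRESHOLD = 5000   # assumed reassembly-timeout value, in local-timestamp units

def mark_reassembly_timeouts(reorder_buffer, now):
    """Mark messages whose earliest cell has been waiting longer than the second threshold.

    A marked message is popped and discarded when it reaches the head of the message
    descriptor array, even if some of its cells never arrived."""
    for pkt in reorder_buffer:
        if pkt is not None and now - pkt.earliest_cell_timestamp > SECOND_THRESHOLD:
            pkt.timed_out = True

class _Pkt:                   # minimal stand-in for a message descriptor
    def __init__(self, ts):
        self.earliest_cell_timestamp = ts
        self.timed_out = False

buf = [_Pkt(100), None, _Pkt(4000)]
mark_reassembly_timeouts(buf, now=6000)
print([p.timed_out for p in buf if p is not None])   # -> [True, False]
```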
  • the receiving unit 301 is configured to receive the cell that is sent by the source port by using the switching network.
  • the discarding unit is further configured to: when the estimated value of the switching network delay exceeds a preset third threshold, discard the packet.
  • the transmission time of the packet in the switching network can be estimated by the destination port according to the sequence number of the packet.
  • the specific values of the second threshold and the third threshold may be set according to actual application requirements.
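  • the sequence-number-based estimate can be pictured with the sketch below; the function name and the use of a simple sequence-number gap are assumptions, while the numeric example follows the illustration given elsewhere in this document (a message with sequence number 123 still open when a cell of message 256 arrives, with a third threshold of 100).

```python
THIRD_THRESHOLD = 100   # assumed fabric-delay threshold, measured in message sequence numbers

def fabric_timed_out(oldest_open_msg_seq, newest_seen_msg_seq):
    """Estimate the switching-network delay from the sequence-number gap seen at the
    destination port: if cells of much newer messages are already arriving while an old
    message is still missing cells, that old message is discarded without waiting for
    the reassembly timeout."""
    return newest_seen_msg_seq - oldest_open_msg_seq > THIRD_THRESHOLD

# The document's own illustration: message 123 is still open when a cell of message 256
# arrives; 256 - 123 exceeds the third threshold of 100, so message 123 is discarded.
print(fabric_timed_out(123, 256))   # -> True
```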
  • the foregoing units may be implemented as separate entities, or may be implemented as the same entity or multiple entities.
  • for example, the pre-processing unit 302 may be implemented by the pre-processing module in the second embodiment;
  • the message sorting unit 303 may be implemented by the message sorting module in the second embodiment, and the cell sorting unit 304 may be implemented by the cell sorting module in the second embodiment;
  • the reassembly unit 305 can be implemented by the message reassembly module in the second embodiment;
  • the discarding unit can be implemented by the message recycling module and the cell recycling module in the second embodiment, and so on.
  • as can be seen from the above, the network device in this embodiment preprocesses the received cells according to the source identifier to determine whether the received cells are inserted into the message reassembly database; the message sorting unit 303 and the cell sorting unit 304 then sort the cells in the database directly according to the message sequence number and the cell sequence number of the cells, and finally the reassembly unit 305 reassembles the message according to the sorted cells. Since this solution can reassemble and reorder messages directly according to the source, it can be applied to high-performance devices, in contrast to the prior art that implements reassembly and reordering of messages based on timescale synchronization. Moreover, since this solution can control and manage the received cells (that is, the messages waiting to be reassembled), resource utilization is high, and less memory and fewer control resources are occupied.
  • the embodiment of the present invention further provides a communication system, including a source port device and any network device provided by the embodiment of the present invention, where the network device serves as a destination port (referred to as a destination port device); for details, see the foregoing embodiments. For example, the system can be as follows:
  • the destination port device, serving as a destination port, is configured to receive the cells sent by the source port, where each cell carries the source identifier, the message sequence number, and the cell sequence number; preprocess the received cells according to the source identifier to determine whether to insert each cell into the message reassembly database; sort the cells in the message reassembly database according to the message sequence number to obtain correctly sorted messages; if a correctly sorted message is a complete message, sort the cells in the correctly sorted message according to the cell sequence number to obtain correctly sorted cells; and reassemble the correctly sorted cells into the message;
  • the source port device is used as a source port to send a cell to the destination port device.
  • the cell carries the source identifier, the packet sequence number, and the cell sequence number.
  • the destination port device is specifically configured to: after stamping a received cell with a local timestamp on arrival, save the cell data of the cell in the payload memory according to the source identifier and configure a corresponding payload memory index value; insert the cell information of the cell into the IF queue, where the cell information includes the source identifier of the cell, the local timestamp, the payload memory index value, the message sequence number, and the cell sequence number; and process the cell information in the IF queue according to the source identifier, the local timestamp, and the state of the PF queues, so as to determine whether to insert the cell information into the message reassembly database. For details, see the foregoing method embodiments and FIG. 2d.
  • as can be seen from the above, the destination port device of the communication system in this embodiment preprocesses the received cells according to the source identifier to determine whether to insert the received cells into the message reassembly database, then sorts the cells in the database directly according to the message sequence number and the cell sequence number, and finally reassembles the message. Since this solution can reassemble and reorder messages directly according to the source, it can be applied to high-performance devices, in contrast to the prior art that implements reassembly and reordering of messages based on timescale synchronization. Moreover, since this solution can control and manage the received cells (that is, the messages waiting to be reassembled), resource utilization is high, and less memory and fewer control resources are occupied.
  • persons of ordinary skill in the art may understand that all or part of the steps of the methods in the foregoing embodiments may be completed by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.

Abstract

A packet reassembly and reordering method includes: receiving cells sent by a source port, the cells carrying a source identifier, a packet sequence number, and a cell sequence number; preprocessing the received cells according to the source identifier to determine whether to insert the cells into a packet reassembly database; sorting the cells in the packet reassembly database according to the packet sequence number to obtain correctly ordered packets; if a correctly ordered packet is a complete packet, sorting the cells in the correctly ordered packet according to the cell sequence number to obtain correctly ordered cells; and reassembling the correctly ordered cells into the packet. A network device and a communication system are also provided accordingly.

Description

一种报文重组重排序方法、 装置和***
技术领域
本发明涉及通信技术领域, 具体涉及一种报文重组重排序方法、装置和系 统。
背景技术
数据通信设备由多个线卡和交换网 (Fabric Switch )板卡组成, 线卡与交 换网板卡之间交叉垂直连接。输入端线卡从上游设备接收到去往不同目的地的 各种长度的报文 (packet )后, 将这些报文输送给报文处理器 (PP , Packet Processor ), 由报文处理器根据报文的目的地查找路由表, 以确定其输出端, 并输送给流量管理器(TM, Traffic Management ), 报文经过流量管理器整形 之后在交换网接口 ( FI , fabric switch Interface )被切分成为多个固定大小的信 元(cell )。 这些信元被连续发送到交换网。 交换网对每个信元独立选择路径, 并传送到目的端口。
由于不同的信元可能是经过交换网的不同路径来进行传输的,所以当信元 到达目的端口时,各信元之间可能是乱序的(即属于同一个报文的信元没有按 照信元在报文中的位置顺序到达), 所以, 需要对这些乱序的信元进行重新排 序( resequence ), 并将属于同一才艮文的信元重组 ( reassembly )为完整的才艮文, 然后再将属于同一源端口发送的报文按照源端口发送的顺序进行排列,最后再 从输出端口发送到下游设备。其中,将属于同一报文的信元重组为完整的报文, 以及将属于同一源端口发送的报文按照源端口发送的顺序进行排列等一系列 操作, 统称为报文的重组重排序。现有技术一般利用时标在目的端口对各个源 的报文进行重组重排序, 这种方式对时标的同步具有较高的要求。 由于在具有 很多源的情况下, 很难达到十分精确的时标同步, 所以, 这种方式并不适用于 高性能的设备。但如果不基于时标, 而是筒单地在目的端口按照多个源来进行 排序的话, 又需要浪费大量的内存和控制资源。
发明内容
本发明实施例提供一种报文重组重排序方法、 装置和***, 即可以适用于 高性能的设备, 又可以占用较少的内存和控制资源。
一种 文重组重排序方法, 包括: 接收源端口 (source port )发送的信元, 所述信元携带源标识、 报文序列 号和信元序列号;
根据源标识对接收到的信元进行预处理,以确定是否将所述信元***报文 重组数据库;
根据报文序列号对报文重组数据库中的信元进行排序,得到正确排序的报 文;
若所述正确排序的报文为完整报文,则根据信元序列号对所述正确排序的 报文中的信元进行排序, 得到正确排序的信元;
对所述正确排序的信元进行 >¾文重组。
一种网络设备, 包括:
接收单元, 用于接收源端口发送的信元, 所述信元携带源标识、 报文序列 号和信元序列号;
预处理单元, 用于根据源标识对接收单元接收到的信元进行预处理, 以确 定是否将所述信元***数据库;
报文排序单元, 用于根据报文序列号对报文重组数据库中的信元进行排 序, 得到正确排序的报文;
信元排序单元, 用于在确定报文为完整报文时,根据信元序列号对所述正 确排序的报文中的信元进行排序, 得到正确排序的信元;
重组单元, 用于对所述正确排序的信元进行 "¾文重组。
一种通信***, 包括源端口设备和本发明实施例提供的任一种网络设备; 所述源端口设备, 用于发送信元给网络设备, 所述信元携带源标识、 报文 序列号和信元序列号。
本发明实施例采用根据源标识对接收到的信元进行预处理,以确定是否将 接收到的信元***报文重组数据库,然后再直接根据信元的报文序列号和信元 序列号对数据库中的信元进行排序, 最终实现报文的重组。 由于本方案可以直 接根据源来对报文进行重组重排序,所以相对于现有技术中基于时标同步来实 现报文的重组重排序而言, 可以适用于高性能设备。 而且, 由于本方案可以对 接收到的信元(即等待重组的报文)进行控制管理,所以对资源的利用率较高, 可以占用较少的内存和控制资源。 附图说明
为了更清楚地说明本发明实施例中的技术方案,下面将对实施例描述中所 需要使用的附图作筒单地介绍,显而易见地, 下面描述中的附图仅仅是本发明 的一些实施例, 对于本领域技术人员来讲, 在不付出创造性劳动的前提下, 还 可以根据这些附图获得其他的附图。
图 1是本发明实施例一提供的报文重组重排序方法的流程图;
图 2a是本发明实施例提供的报文重组重排序过程的架构图
图 2b是本发明实施例二提供的报文重组重排序方法的流程图;
图 2c是单播中报文排序和报文重组数据库结构的示意图;
图 2d是本发明实施例中预处理的方法流程图;
图 2e是本发明实施例提供的报文重组重排序方法中信元排序机制的示意 图;
图 3a是本发明实施例提供的网络设备的结构示意图;
图 3b是本发明实施例提供的网络设备的另一结构示意图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清 楚、 完整地描述, 显然, 所描述的实施例仅仅是本发明一部分实施例, 而不是 全部的实施例。基于本发明中的实施例, 本领域技术人员在没有作出创造性劳 动前提下所获得的所有其他实施例, 都属于本发明保护的范围。
本发明实施例提供一种报文重组重排序方法、装置和***。 以下分别进行 详细说明。 实施例一、
本实施例将从作为目的端口的网络设备的角度进行描述。
一种报文重组重排序方法, 包括: 接收源端口发送的信元, 其中, 该信元 携带源标识、报文序列号和信元序列号; 根据源标识对接收到的信元进行预处 理, 以确定是否将该信元***报文重组数据库; 根据报文序列号对报文重组数 据库中的信元进行排序,得到正确排序的报文;若正确排序的报文为完整报文, 则根据信元序列号对该正确排序的报文中的信元进行排序,得到正确排序的信 元; 对该正确排序的信元进行^艮文重组。 参见图 1, 具体流程可以如下:
101、 接收源端口发送的信元, 其中, 信元携带源标识 ( SID , Source Identification ), 4艮文序列号和信元序列号, 如下:
源标识: 用于标识信元的来源和类型,根据源标识可以获知信元来自哪个 源端口; 该源标识又可以分为单播源标识和多播源标识,单播源标识为源端口 号和报文服务类型 (CoS, Class of Service) 的集合, 即{源端口号, 报文服务 类型 }; 多播源标识为源端口号、 服务类型和多播复制域的标识 (MGID, Multicast Group ID ) 的集合, 即{源端口号, 报文服务类型, MGID}, 其中, MGID由源线卡的 PP赋予多播报文。
报文序列号: 用于标识报文在数据流中的位置。
信元序列号: 用于标识信元在报文中的位置。
例如, 具体可以通过交换网 (fabric)接收源端口发送的信元。 其中, 该 信元由源端口对 ^艮文切分而成, 并由源端口赋予信元 ^艮文序列号和信元序列 号; 例如, 有两个同一目的端口的^艮文被切分为信元, 第一个 ^¾文切分为 3个 信元, 第二个报文切分为 4个信元。第一个报文的信元的序列号分别是(1, 1 ), ( 1, 2), ( 1, 3); 第二个报文的信元的序列号分别是(2, 1 ), (2, 2), (2, 3), (2, 4)。 其中, 括号里的第一个数字是报文序列号, 括号里的第二个数字 是信元序列号。
102、 根据源标识对接收到的信元进行预处理, 以确定是否将该信元*** 报文重组数据库。
其中, 信元可以包括信元数据和信元信息, 为了节省资源, 在排序的过程 中, 可以只对信元信息进行操作, 等到报文重组时, 才根据信元信息提取相应 的信元数据进行重组。 例如, 具体可以如下:
( 1 ) 为接收到的信元打上到达时的本地时间戳后, 根据源标识将信元的 信元数据保存在负载内存( Payload Memory )中, 并配置相应的负载内存索引 值, 根据负载内存索引值可以从负载内存中获取相应的信元数据;
(2)将信元的信元信息***到输入先进先出 (IF, Input FIFO) 队列中, 其中, 信元信息可以包括信元的源标识、 本地时间戳、 负载内存索引值、 报文 序列号和信元序列号等信元相关的一些信息; 其中, 本地时间戳主要是为了判断信元在本地等待的时间,如果等待时间 太长就会被丟弃。 而源标识、 负载内存索引值、 才艮文序列号和信元序列号等信 息的作用可参见前面的说明, 在此不再赘述。
( 3 )根据源标识、 4艮文序列号和延时先进先出 (PF, Postponed FIFO ) 队列的状态对 IF队列中的信元信息进行处理,以确定是否将该信元信息***报 文重组数据库。 如下:
当所有 PF队列的深度小于预设的阈值时,确定 IF队列中队首信元信息所对 应的信元是否应该被处理; 若应该被处理, 则弹出 IF队列的队首信元信息; 若 不应该被处理,则将 IF队列的队首信元信息压入 PF队列之一( PF队列可以构建 有多个, 用于緩存不同等待时间的信元), 并弹出最满的 PF队列的队首信元信 息;
判断所弹出的 IF队列的队首信元信息所对应的信元或 PF队列的队首信元 信息所对应的信元是否超时; 若超时, 则回收该信元信息所对应的信元和该信 元所属的报文; 若没有超时, 则在该信元的报文序列号小于报文重组数据库中 同一源的最大报文序列号时,将该信元信息***报文重组数据库; 在该信元的 报文序列号大于报文重组数据库中同一源的最大开放报文序列号时,将该信元 信息*** PF队列之一。
反之, 当有 PF队列的深度大于预设的阈值时, 确定溢出 PF队列中的队首 信元信息所对应的信元是否应该被处理; 若不应该被处理, 则回收该溢出 PF 队列 (即队列的深度大于预设的阈值的 PF队列, 比如该 PF队列已满) 的队首 信元信息所对应的信元,然后执行确定 IF队列中的队首信元信息所对应的信元 是否应该被处理的步骤(参见前面的描述); 若应该被处理, 则执行将 IF队列 的队首信元信息压入 PF队列之一, 并弹出最满的 PF队列的队首信元信息步骤。
其中, PF队列主要用于存放到达目的端口太早的信元, 也就是说, 如果 信元到达目的端口太早, 则当前不会被处理, 而是会被放入 PF队列中, 然后在 正确的时间, 才从 PF队列中取出, 并***到报文重组数据库中。
其中, 信元是否应该被处理, 可以根据队列中的开放报文(open packet, 即等待处理的报文)数量来确定, 即确定正在处理的报文的最小报文序列号和 当前信元的报文序列号的差值是否小于某个预设的值, 若小于这个预设的值, 就表示信元应该被处理, 此时可以处理当前信元, 否则, 就表示信元不应该被 处理, 此时可以把信元延时, 即放入 PF队歹 ll中。 比如, 队列中的开放报文最大 数量是 16个, 而当前正在处理的报文的最小报文序列号是 1024, 那么所有报文 序列号小于 1024的报文的信元都应该被直接丟弃; 报文序列号大于等于 1024, 并且小于等于 "1024+16" 的 "¾文的信元应该被处理, 比如在进一步确定该信 元没有超时,且该信元的报文序列号小于报文重组数据库中同一源的最大报文 序列号时, 将该信元***到报文重组数据库中; 而所有序号大于 "1024+16" 的报文的信元, 则应该被緩存, 也就是*** PF队列。 即, 步骤 "根据源标识、 才艮文序列号和延时先进先出 (PF, Postponed FIFO ) 队列的状态对 IF队列中的 信元信息进行处理, 以确定是否将该信元信息***报文重组数据库"具体可以 :¾口下:
当所有 PF队列的深度小于预设的阈值时,确定 IF队列中队首信元信息所对 应的信元的报文序列号与当前正在处理的报文的最小报文序列号的差值,是否 小于第四阈值; 若是, 则弹出 IF队列的队首信元信息; 若否, 则将 IF队列的队 首信元信息压入 PF队列之一, 并弹出最满的 PF队列的队首信元信息; 判断所 弹出的 IF队列的队首信元信息所对应的信元或 PF队列的队首信元信息所对应 的信元是否超时; 若超时, 则回收该信元信息所对应的信元和该信元所属的报 文; 若没有超时, 则在该信元的报文序列号小于报文重组数据库中同一源的最 大开放报文序列号时,将该信元信息***报文重组数据库; 在该信元的报文序 列号大于报文重组数据库中同一源的最大报文序列号时,将该信元信息*** PF 队列之一。
当有 PF队列的深度大于预设的阈值时, 确定溢出 PF队列中的队首信元信 息所对应的信元的报文序列号与当前正在处理的报文的最小报文序列号的差 值, 是否小于第五阈值; 若是, 则回收该溢出 PF队列的队首信元信息所对应的 信元,然后执行确定 IF队列中队首信元信息所对应的信元的报文序列号与当前 正在处理的报文的最小报文序列号的差值, 是否小于第四阈值的步骤; 若否, 则执行将 IF队列的队首信元信息压入 PF队列之一,并弹出最满的 PF队列的队首 信元信息步骤。
其中, 第四阈值和第五阈值可以根据实际应用的需求进行设置。 无论是单播, 还是多播, 都可以包括很多个报文排序队列, 而每个报文排 序队列又都有若干个报文等待重组重排序(在等待重组重排序的报文也成为开 放报文, 即 open packet )。 其中, 单播可以根据报文服务类型 (CoS , Class of Service )和源标识(SID, Source Identification )来划分报文排序队列, 也就是 不同报文服务类型和源标识的信元应该在不同的报文排序队列中。而多播则可 以根据 4艮文服务类型 ( CoS , Class of Service ) , 源标识 ( SID , Source Identification )和多播复制域的标识(MGID, Multicast Group ID )来划分报文 排序队列。 其中, 每个报文排序队列都有一个报文描述符阵, 也就是预先分配 的一块内存作为数据来容纳等待排序的报文。 报文描述符阵中的每一个表项, 其实就是一个指针, 可以指向属于此报文的信元描述符。信元描述符中也有一 个指针指向下一个信元, 这样所有属于一个报文的信元就组成了一个链表, 并 且从报文阵列中的第一个指针开始可以遍历到所有信元。 其中,报文描述符阵 的大小(即可以容纳多少开放报文)是可以动态调整的。 例如, 可以预先分配 了若干种大小不等的报文描述符阵。如果一个排序队列的开放报文增加,导致 报文描述符阵溢出,则可以用一个更大的报文描述符阵取代先前较小的报文描 述符阵。 即, 可以如下:
对于单播(Unicast ) 来说, 该报文重组数据库可以包括源描述符内存 ( SDM , Source Descriptor Memory ) 和源队歹 'J内存 ( SQM , Source Queue Memory ); 其中, 源描述符内存用于存储源描述符 ( Source Descriptor ), 并以 报文服务类型和源端口号作为索引,即以 {报文服务类型,源端口号 }作为索引; 而源队列内存则用于存放与所述源描述符对应的源队列,该源队列内存包括一 个指向排序报文緩存区( Reorder-Packet- Array-Buffer )的指针, 排序报文緩存 区用于存放等待排序的信元的报文描述符 ( Packet Descriptor ), 每个报文描述 符指向最后一次收到的属于此报文的信元的信元描述符( Cell Descriptor)。
其中, 排序报文緩沖区可以由动态的报文描述符阵 ( Dynamic Packet Descriptor Array )构建, 具体可以包括 N类不同容量的内存块, 其中, N为正 整数。 例如, 当 N = 3时, 可以保护如下容量的三种内存块:
( 32*X )、 ( 128*X )和(512*X ), 其中, X是一个报文描述符的宽度。 对于多播(Multicast ) 来说, 该报文重组数据库可以包括 MGID描述符内 存和 MGID队列内存; 其中, MGID描述符内存用于存储 MGID描述符, 并以报 文服务类型、 MGID和源端口号作为索引, 比如用 {报文服务类型, MGID, 源 端口号 }的哈希(Hash )值作为索引, 而 MGID队列内存则用于存放与 MGID描 述符对应的 MGID队列; 该 MGID队列内存可以包括一个指向排序报文緩存区 的指针,排序报文緩存区用于存放等待排序的信元的报文描述符,每个报文描 述符指向最后一次收到的属于此报文的信元的信元描述符。
其中, 排序报文緩沖区为动态的报文描述符阵, 包括 M类不同容量的内存 块, 其中, M为正整数, 与单播类似, 在此不再赘述。
因为 {报文服务类型, MGID, 源端口号}表示的空间非常大, 很难为每个 {报文服务类型, MGID, 源端口号}索引创建一个对应的报文排序队列。 所以, 为了节省资源, 可以让 MGID描述符内存和 MGID队列内存支持哈希功能, 即 各个进程(如 IF队列、 PF队列、 完整队列、 排序报文緩沖区和信元排序緩沖区 等)可以通过哈希功能访问 MGID描述符内存和 MGID队列内存; 其中, 哈希 功能可以被构造为多个哈希桶。所谓哈希功能,就是把 {报文服务类型, MGID, 源端口号}映射到一个比较小的空间中。 比如, 可以让哈希函数以 {报文服务类 型, MGID, 源端口号}为输入, 计算出一个索引值, 而这个索引值对应了一 个或者多个(就是多个哈希桶)报文排序队列。
由于采用了哈希功能, 所以可能会产生资源沖突, 因此, 还需要维护一个 资源沖突查询表 ( FAVT, Fully Associative Victim Table ), 以便在发生资源沖 突时, 可以根据该预置的资源沖突查询表将发生资源沖突的信元的压入另一 MGID队列或丟弃。
此外, 当该资源沖突查询表的信息量超过预置第一阈值时, 可以激发本目 的端口发送控制信息给多播源, 以减少发往本目的端口的多播数据量。 其中, 第一阈值可以根据实际应用的需求进行设置。
例如, 每个哈希表项包括 2个哈希桶(即包括两个报文排序队列), (1 , 2, 128 ) , ( 7, 0, 256 ) , ( 11 , 3 , 567 ) ··· ··· , 等都映射到了 16, 也就说以这 些组合作为输入, 哈希函数的结果都是 16。 (1 , 2, 128 )的信元可以利用第一 个报文排序队列, (7, 0, 256 )的信元可以利用第二个报文排序队列, (11 , 3 , 567 )的信元达到时只能被丟弃(因为每个哈希只有两个报文排序队列)。 当然, 如果(1, 2, 128) 的信元全部重组成功, 已经被调度出了排序队列, 第一个 报文排序队列就是空的。 这是( 11, 3, 567)的信元达到, 就可以利用第一个 报文排序队列了。 资源沖突查询表就是表示这些报文排序队列为哪些 (SID, CoS, MGID)索引值占用, 是根据当前状态动态维护的。
103、 根据报文序列号对报文重组数据库中的信元进行排序, 得到正确排 序的 文; 例如, 具体可以如下:
按照报文序列号对排序报文緩存区中的报文描述符进行排序,并将报文重 组数据库中具有相同报文序列号的信元信息放在报文描述符所对应的报文排 序队列中;比如,如果报文重组数据库中保存的信元信息中的序列号分别为( 1, 1)、 (2, 3)、 (1, 3)、 (2, 1)、 (2, 4)、 (1, 2)和(2, 2), 其中, 第一数字 为报文序列号, 第二个数字为信元序列号, 则报文排序后的情况可以如下: 第一个报文排序队列为: (1, 1), (1, 3), (1, 2);
第二个报文排序队列为: (2, 3), (2, 1), (2, 4), (2, 2)。
可见, 此时报文已经被正确排序, 但是报文中的信元仍然属于乱序状态。
104、若正确排序的报文为完整报文, 则根据信元序列号对步骤 103得到的 正确排序的报文中的信元进行排序, 得到正确排序的信元; 其中, 完整报文指 的是报文中的信元已全部接收完毕。 例如, 具体可以如下:
若报文为完整报文,则根据信元的广播类型和报文服务类型将该完整报文 的才艮文描述符加入到相应的完整队列 (CF, Complete-FIFO ) 中; 根据完整队 列中的报文描述符将对应的信元信息***到信元排序緩沖区(也称为信元重排 序緩沖区, Complete Packets FIFO ) 中, 根据信元序列号在信元排序緩沖区中 对提取到的信元信息进行排序。 其中, 信元的广播类型可以包括单播或多播。
为了提高排序的效率, 在进行信元排序时,信元排序緩沖区需要保证有足 够的空白空间, 所以, 可以设定当信元排序緩沖区的空白空间大于预置阈值, 比如在信元排序緩沖区的实际数据量小于可容纳数据量的二分之一时,才根据 完整队列中的报文描述符将对应的信元信息***到信元排序緩沖区中。
其中,根据信元序列号在信元排序緩沖区中对提取到的信元信息进行排序 具体可以如下:
信元排序緩沖区维护一个读指针和一个写指针,写指针总是指向当前正在 进行信元排序的报文的开始位置,读指针用于将排好序的信元逐个读出以进行 报文重组; 因此, 可以将写指针和信元序列号相加计算出***的信元信息的偏 移位置, 然后根据偏移位置调整该***的信元信息在报文中的位置(即在报文 中的正确位置), 以得到正确排序的信元信息。
105、 对正确排序的信元进行 ^艮文重组; 例如, 具体可以如下:
为该正确排序的信元信息的所属报文分配重组资源,将信元排序緩沖区中 的正确排序的信元信息逐个读出, 根据读出的信元信息获取相应的信元数据, 利用分配的重组资源对信元数据进行报文重组。
其中, 为了提高效率, 可以为该正确排序的信元信息的所属报文分配至少 两个重组资源。 为了节约资源, 重组资源被管理在一个资源池中, 在需要的时 候才分配给每个源。所谓需要的时候,指的是将要开始进行重组的时候; 比如, 当排序緩沖区中的信元已经正确排好序后,可以发送一个请求报文重组的指示 信号, 比如 FIFO— RDY信号给报文重组机制, 报文重组机制接收到该请求报文 重组的指示信号后, 就可以开始进行对排序緩沖区中的信元进行报文重组。
当然,如果信元排序緩沖区至少具有两个, 则报文重组机制可能会同时接 收到多个请求报文重组的指示信号,对于这种情况,报文重组机制可以根据报 文服务类型选择信元排序緩沖区来进行处理, 即此时,对正确排序的信元进行 才艮文重组具体可以为:
根据报文服务类型选择信元排序緩沖区,将选择的信元排序緩沖区中的正 确排序的信元信息逐个读出, 根据读出的信元信息提取信元数据以重组报文。
可选的, 为了节约资源和提高效率, 避免在某一报文上浪费过多的时间, 可以对 ^艮文重组的时间进行监控, 如果 文重组超时( Reassembly Timeout ), 则将报文丟弃。例如, 可以判断报文中第一个到达目的端口的信元的到达时间 (比如可以采用本地时间戳来标识到达时间)与当前时间的差值是否大于预置 的第二阈值, 若是, 则确定报文重组是否超时, 于是将报文丟弃; 反之, 若否, 则确认 ^艮文重组没有超时, 继续对^艮文进行重组。
其中, 重组超时的时间还可以包括交换网时延的估算时间, 其中, 该交换 网时延可以根据报文序列号估算出来, 具体可参见现有技术, 在此不再赘述。 然后报文在交换网中耽误的时间太多, 则可以被直接丟弃, 而无需等待重组超 时再丟弃。 即可选的, 在将正确排序的信元重组为报文之前, 该方法还可以包 括:
确定交换网时延的估算值超过预置的第三阈值时, 则将报文丟弃。
其中, 第二阈值和第三阈值可以根据实际应用的需求进行设置。
(即报文序列号为 123的报文)还有一个信元没有达到, 则此时后续报文, 如 报文序列号为 124的报文即使已经收齐了所有信元也不能被重组, 而是必须等 待报文序列号为 123的报文接收到所有信元后,或者等待报文序列号为 123的报 文超时被丟弃之后, 才能被处理。 由于等待超时一般是一个比较长的时间, 所 以如果报文序列号为 123的某个信元在交换网中时延特别长, 则此时可以不等 待报文序列号为 123的报文超时, 而是确定交换网时延的估算值是否超过预置 的第三阈值, 比如如果此时接收到了这个排序队列中报文序列号为 256的报文 中的信元, 由于 256与 123的差大于第三阈值(比如第三阈值设定为 100 ), 所以 可以不需要等待报文序列号为 123的报文超时, 而是可以直接把报文序列号为 123的4艮文丟弃。
另外, 需说明的是, 为了提高效率, 以上重组重排序过程可以划分为多个 并行的重组重排序引擎, 以支持不同报文服务类型 (CoS, Class of Service ) 的单播或多播数据流。 其中, 报文服务类型相当于报文优先级。
本实施例采用根据源标识对接收到的信元进行预处理,以确定是否将接收 到的信元***报文重组数据库,然后再直接根据信元的报文序列号和信元序列 号对数据库中的信元进行排序, 最终实现报文的重组。 由于本方案可以直接根 据源来对报文进行重组重排序,所以相对于现有技术中基于时标同步来实现报 文的重组重排序而言, 可以适用于高性能设备。 而且, 由于本方案可以对接收 到的信元(即等待重组的报文)进行控制管理, 所以对资源的利用率较高, 可 以占用较少的内存和控制资源。 实施例二、
根据实施例一所描述的方法, 以下将举例作进一步详细说明。
如图 2a所示, 该图为报文重组重排序过程的架构图, 主要可以分为如下 几个步骤: 步骤 1、 信元到达目的端口后, 内存分配模块为接收到的信元分配负载内 存,预处理模块根据源标识将信元的信元数据保存在负载内存中, 并根据信元 的源标识对信元的信元信息进行分类,根据类别将信元信息分别存入不同的 IF 中, 比如, 可以按照广播类型进行分类, 如果是单播数据, 则放入单播输入队 列( UIF, Unicast Input FIFO )中,如果是多播数据,则放入多播输入队列( MIF, Multicast Input FIFO )中, 参见图 2a。 其中, 信元信息可以包括信元的源标识、 本地时间戳、 负载内存索引值、报文序列号和信元序列号等信元相关的一些信 息。
预处理模块根据源标识、 本地时间戳和 PF队列的状态分别对 UIF队列和 MIF队列中的信元信息进行处理,以确定是否将信元信息***到报文排序模块 的报文重组数据库中去, 其中, UIF队列和 MIF 队列在报文重组数据库中都 具有相对应的处理模块, 例如, 参见图 2b, UIF队列对应着报文重组数据库中 的 "单播" 模块, 而 MIF对应着报文重组数据库中的 "多播" 模块。
步骤 2、 报文排序: 报文排序模块分别对各个源中的报文进行排序, 以得 到正确排序的报文, 比如, 可以按照报文序列号对排序报文緩存区中的报文描 述符进行排序,并将报文重组数据库中具有相同报文序列号的信元放入同一报 文排序队列, 参见图 2a中的 "CoSO, CoSl , CoS2" , 在该图中, CoS0、 CoSl 和 CoS2分别代表着不同报文服务类型的报文排序队列; 此时, 报文已经被正 确排序, 但是报文中的信元仍然处于乱序状态, 即 CoS0、 CoSl和 CoS2已经 正确排序, 但是 CoS0、 CoSl和 CoS2中信元仍然处于乱序状态。
步骤 3、信元排序:信元排序模块分别对各个完整报文中的信元进行排序, 以得到正确排序的信元。其中, 完整报文指的是该报文中的信元已经全部接收 午。
步骤 4、 报文重组: 报文重组模块把正确排序的信元重组为一个完整的报 文。 参见图 2a, 此时, 内存分配单元可以为报文重组模块分配重组资源, 报 文重组模块将信元排序模块中的正确排序的信元信息逐个读出,根据读出的信 元信息在负载内存中提取信元数据以重组报文, 并输出重组好的报文, 参见图 2a。
以上重组重排序过程又可以进一步划分为多个并行的重组引擎,以支持不 同报文服务类型 (CoS, Class of Service ) 的单播流量和多播流量。 需说明的是, 其中, 负载内存的设计应该满足报文重组模块每个时钟周期 一次读取请求。 在报文重组过程中, 不能被中断。 因此, 在报文重组模块不需 要能够容纳整个报文的 FIFO队列,而仅仅只需能够容纳 2至 4个信元的 FIFO 队列就足够了。 一个报文开始重组之后, 会在另一个报文开始之前完成重组。
另外, 内存分配单元主要用于分配和回收下列类型的内存: 信元描述符、 报文描述符阵列、 MGID队列内存( MGID队列内存是针对多播数据来而言的); 上述每种类型的内存都包括至少一个 FIFO队列, 用于管理可分配緩沖区。 每 个 FIFO队列应该能够在一个时钟周期分配一个緩沖区表项。
报文排序模块出队一个报文之后, 就把报文提交给报文回收模块,报文回 收模块把报文中的信元逐个提交给信元回收模块。 当一个报文被丟弃的时候, 也会被提交给报文回收模块,执行相同的处理以丟弃报文, 同时也会丟弃所有 报文序列号小于被丟弃报文的序列号且仍然驻留在报文重组数据库的报文。
以下将以单播为例作进一步详细说明。 参见图 2b, 具体流程可以如下:
201、 源端口将 ^艮文切分成信元, 并为各个信元配置相应的 文序列号和 信元序列号;
例如, 在源端口, 两个去往同一目的端口的 文被切分为信元, 第一个才艮 文被切分为 4个信元, 第二个 文被切分为 5个信元。 第一个 文的信元的序 列号分别是(1 , 1 ), ( 1 , 2 ), ( 1 , 3 ), ( 1 , 4 ); 第二个报文的信元的序列号 分别是(2, 1 ), ( 2, 2 ), ( 2, 3 ), ( 2, 4 ), ( 2, 5 )。 其中第一个数字是报文 序列号, 第二个数字是信元序列号。
在源端口,报文序列号被***到每个信元中,报文序列号基于数据流进行 维护。 比如, 对于单播数据来说, 就是根据目的端口和报文服务类型来对报文 序列号进行维护, 而对于多播数据来说, 就是根据 MGID和报文服务类型来 对>¾文序列号进行维护。
202、 源端口将信元通过交换网传递给目的端口, 其中, 信元携带源标识、 报文序列号和信元序列号。
203、 目的端口的预处理模块接收到这些信元后, 为接收到的信元打上到 达时的本地时间戳,将这些信元的信元数据保存在内存分配模块为其分配的负 载内存中, 并配置相应的负载内存索引值,根据负载内存索引值可以从负载内 存中获取相应的信元数据。
204、 目的端口的预处理模块根据源标识将信元的信元信息***到 UIF队 列中,其中,信元信息可以包括信元的源标识、本地时间戳、 负载内存索引值、 报文序列号和信元序列号等信元相关的一些信息。
205、 目的端口的预处理模块根据源标识、报文序列号和 PF队列的状态对 UIF队列中的信元信息进行预处理,以确定是否将该信元信息***报文重组数 据库。
其中, 参见图 2c, 该报文重组数据库由源描述符内存和源队列内存组成, 源描述符内存用于存储源描述符,每个源描述符对应着源队列内存中的一个源 队列; 源队列内存用于存储源队列,每个源队列对应着排序报文緩沖区中多个 报文描述符, 源队列内存包含一个指向排序报文緩沖区的指针, 这种排序报文 緩沖区可以容纳 32到 512个报文描述符, 每个报文描述符指向最后一次收到 的属于此报文的信元的信元描述符,每个信元描述符包含一个指向负载内存中 的信元数据的负载内存索引值。报文緩沖区可以直接访问,访问索引可以用当 前报文序列号减去第一个报文序列号 ( Packet-Sequence-Number - First-Packet-Sequence ) 来计算。
源队列内存可以被构建为动态的报文描述符阵列, 包括 N类不同容量的 内存块。每个源描述符包含一个指向这类内存块的指针, 内存块的类型根据所 需报文描述符阵列大小决定。***或者删除一个报文描述符, 可能需要从一种 内存块切换为另一种内存块。 为了得到带宽, 内存块支持同时读写, 因此不需 要维护读指针。原因是,提取报文之后会引起阵列中一个报文描述符位置的位 移, 读指针总是指向开始的位置。
参见图 2d, 该预处理的具体流程可以如下:
A1、 目的端口的预处理模块( Prepare Packet )判断所有 PF队列的深度是 否小于预设的阈值, 若是, 则执行步骤 A2, 若否, 则执行步骤 A7。
其中, PF队列的深度小于预设的阈值,表明 PF队列中还具有足够的空间, 可以緩存更多需要延时的信元。其中,阈值可以根据实际应用的需求进行设置。
A2、 若所有 PF队列的深度都小于预设的阈值, 则目的端口的预处理模块 根据本地时间戳确定 UIF队列中队首信元信息所对应的信元是否应该被处理, 若是, 则执行步骤 A3 , 若否, 则执行步骤 A9。
A3、 若 UIF队列中队首信元信息所对应的信元应该被处理, 则目的端口 的预处理模块弹出 IF队列的队首信元信息, 然后执行步骤 A4。
A4、 目的端口的预处理模块判断所弹出的 IF队列的队首信元信息所对应 的信元或 PF队列的队首信元信息所对应的信元是否超时, 若是, 则执行步骤 A5, 若否, 则执行步骤 A10。
A5、 若信元已超时, 则目的端口的预处理模块回收该队首信元信息所对 应的信元和该信元所属的报文,比如将该队首信元信息所对应的信元发送给信 元回收模块 (Cell Recycle ), 并将该信元所属的报文发送给报文回收模块 ( Packet Recycle )。
其中,信元回收模块主要用于回收信元,报文回收模块主要用于回收报文。 A6、 若并不是所有 PF队列的深度都小于预设的阈值, 则目的端口的预处 理模块根据本地时间戳确定溢出 PF队列中的队首信元信息所对应的信元是否 应该被处理, 若是, 则执行步骤 A7, 若否, 则执行步骤 A8。
其中, 溢出 PF队列指的是 PF队列的深度大于预设的阈值的 PF队列。 A7、 若溢出 PF队列中的队首信元信息所对应的信元不应该被处理, 则预 处理模块弹出溢出 PF队列中的队首信元信息, 并回收该队首信元信息所对应 的信元。 比如将该队首信元信息所对应的信元发送给信元回收模块, 然后返回 执行步骤 A2。
A8、 若溢出 PF队列中的队首信元信息所对应的信元应该被处理, 则预处 理模块弹出 IF队列的队首信元信息, 并将该队首信元信息压入 PF队列之一, 然后执行步骤 A9。
A9、 预处理模块弹出最满的 PF队列的队首信元信息, 然后返回执行步骤
A4。
A10、若信元没有超时,则预处理模块弹出该信元所对应的队首信元信息, 并基于源标识读取报文重组数据库中的源描述符内存, 然后执行步骤 All.
Al 1、 预处理模块判断该队首信元信息所对应的信元的报文序列号是否大 于源描述符内存中的最大开放报文序列号, 若是, 则执行步骤 A12, 若否, 则 执行步骤 A13。
A12、 预处理模块将该队首信元信息***源描述符内存。
A13、 预处理模块将该队首信元信息*** PF队列之一。
由上述预处理流程可以看出, 一般的,预处理模块会对不在当前开放报文 处理窗口中的信元进行延时处理, 直到开放报文处理窗口覆盖了信元。
可选的, 还可以设置一个监视器(watchdog ), 以对 PF队列进行检查, 如 果检查到存在非空的 PF队列长时间没有被访问, 则监视器触发预处理模块优 先处理这个 PF。
另外。 为了充分利用 PF队列的空间, 并避免 PF队列提取数据产生突发, 根据信元需要的延时时间, PF 队列可以被划分为多个并列的队列。 信元延时 时间可以这样评估,用当前信元中的报文序列号减去当前报文重组数据库中的 最大报文序列号, 这个差值越大, 信元需要的延时时间就越长。
206、 报文排序模块将报文描述符放进排序报文緩存区, 按照报文序列号 对排序报文緩存区中的报文描述符进行排序,并将报文重组数据库中具有相同 报文序列号的信元信息放在报文描述符所对应的报文排序队列中。
此时报文已经被正确排序, 但是报文中的信元仍然属于乱序状态。
207、 若正确排序的报文为完整报文, 则信元排序模块根据信元序列号对 正确排序的报文中的信元信息进行排序, 得到正确排序的信元信息。
其中,信元排序模块可以被构建为多个同步的完整队列,报文的每个服务 类型(也称为优先级, 即 CoS )可以用其自身的控制逻辑和数据緩沖队列独立 工作。
信元排序模块把每个完整报文的报文描述符从报文排序队列中提取出来, 根据广播类型和报文的服务类型 (即 CoS )加入到相应的完整队列中; 然后, 根据完整队列中的报文描述符将对应的信元信息***到信元排序緩沖区中,根 据信元序列号在信元排序緩沖区中对提取到的信元信息进行排序。
其中, 完整报文指的是报文重组数据库接收到了这个报文的所有信元。 而 信元排序緩沖区可以容纳 2个巨型包(Jumbo Packet ), 也就是大约 20K字节。 只有在其数据量小于一半的时候,才开始提取报文描述符将对应的信元信息到 其中排序。 参见图 2e, 该图为信元排序机制的示意图, 信元排序緩沖区维护一个读 指针(RD— PTR )和一个写指针(WR— PTR )。 写指针总是指向当前正在排序 的报文的开始位置,用写指针和信元序列号相加计算出***信元信息的偏移位 置。 只有当报文大小小于信元排序緩沖区中空白位置的大小时, 才可以进行报 文的信元信息排序。读指针等到下一个报文的信元信息完成排序之后, 开始一 个一个的把本报文中的信元信息读出。 其中, 图 2e 中的 "EOP" 表示一个报 文中的最后一个信元, "SOP" 表示报文的开始位置。
其中, 对于拼接信元(Concatenated-Cell )可以这样处理: 当***第一个 报文时, 可以把该第一个报文最后一个信元的指针***两次在输入 FIFO中, 同时在信元同步緩沖区 (Cell-Reorder-Buffer ) 添加标示, 指示第一个信元不 能被提取到重组机制, 当第二个信元重组完成后, 这个数据将被丟弃。 这是因为,当源端将报文切分成信元时,最后一个信元往往存在数据黑洞, 也就是报文的长度不能刚好被切分成为几个完整的信元,导致最后一个信元的 有效数据小于前面的完整信元。这样的信元在传输和处理过程中会造成资源浪 费。 优化措施就是, 不再按照报文边界切分信元, 而是把连续的报文看作是一 个数据流切片, 这样所有信元都包含完整的数据段。 但是这样的话, 必然导致 某些信元既包含前一个报文的最后一段数据, 也包含后一个报文的第一段数 据。这种即包含前一个报文的最后一段数据,也包含后一个报文的第一段数据 的信元就称为拼接信元。 在进行信元重排序时需要知道信元的属性, 标出 "SOP" 和 "E0P"。 对于拼接信元, 既是前一个 文的 E0P信元, 也是后一 个报文的 SOP信元, 所以在接收到这样的拼接信元后, 需要将该信元两次插 入输入 FIFO队列中,一个作为前一个^艮文的 E0P信元,一个作为后一个 4艮文 的 SOP信元。 这样当信元需要进行重排序时, 这两个信元会在信元排序模块 中连续存在, 由于这两个信元的内容是相同的, 所以不能把这两个信元都提交 给报文重组单元。 所以需要一个标志位标识作为 E0P的第一个信元不能提交 报文重组模块, 而是等到第二个信元的报文重排序完成之后,将该第一个信元 丟弃, 以保证提交到报文重组模块的数据只有一份。
当一个报文准备好重组的时候,信元排序緩沖区会向重组机制发送一个请 求报文重组的指示信号, 比如 FIFO— RDY的信号。
需要说明的是, 信元排序机制(包括信元排序緩沖区和完整队列 )是由多 个并列的先进先出 (FIFO, First Input First Output ) 队列组成, 这些队列之间 基于报文的服务类型(也称为优先级, 即 CoS )和广播类型进行区分, 因此消 除了排序机制对额外队列的需求, 可以节省资源。
208、 报文重组模块接收到请求报文重组的指示信号, 比如 FIFO— RDY的 信号后,将信元排序緩沖区中的正确排序的信元信息逐个读出,根据读出的信 元信息提取信元数据以重组报文。
由于可能会有多个信元排序緩沖区同时发送请求报文重组的指示信号给 报文重组模块,所以在这种情况下,报文重组模块可以基于报文的服务类型(即 优先级)在多个发送了请求报文重组的指示信号 (比如 FIFO— RDY信号) 的 信元排序緩沖区之间进行仲裁, 以选择合适的信元排序緩沖区, 然后将选择的 信元排序緩沖区中的正确排序的信元信息逐个读出,根据读出的信元信息(具 体是信元信息中的负载内存索引值)在负载内存中提取信元数据以重组报文。
由上可知, 本实施例采用根据源标识对接收到的信元信息进行预处理, 以 确定是否将接收到的信元信息***报文重组数据库,然后再直接根据信元信息 中的报文序列号和信元序列号对数据库中的信元信息进行排序,最终根据排好 序的信元信息提取相应的信元数据, 以实现报文重组。 由于本方案可以直接根 据源来对报文进行重组重排序,所以相对于现有技术中基于时标同步来实现报 文的重组重排序而言, 可以适用于高性能设备。 而且, 由于本方案可以对接收 到的信元(即等待重组的报文)进行控制管理, 所以对资源的利用率较高, 可 以占用较少的内存和控制资源。 实施例三、
实施二中所描述的方法同样也适用于多播数据,相对于单播而言, 多播主 要具有如下区别:
单播报文需要源描述符内存, 以{优先级, 源端口号}为索引。 而多播报文 需要 MGID描述符内存, 以{优先级, MGID, 源端口号}的哈希值( Hash ) 为 索引。 其中, MGID表示属于同一个复制域的一组目的端口。 相应的, 在多播 中, 用 MGID队列内存取代单播中的源队列内存。 即, 多播的报文重组数据库 具体可以如下:
多播的才艮文重组数据库可以包括 MGID描述符内存和 MGID队列内存; 其 中, MGID描述符内存用于存储 MGID描述符, 并以 {优先级, MGID, 源端口 号}的哈希值作为索引, 而 MGID队列内存则用于存放与 MGID描述符对应的 MGID队列; 该 MGID队列内存可以包括一个指向排序报文緩存区的指针, 排 序报文緩存区用于存放等待排序的信元的报文描述符,每个报文描述符指向最 后一次收到的属于此报文的信元的信元描述符。
其中, MGID队列内存可以被构建为动态的报文描述符阵, 包括 M类不 同容量的内存块, 其中, M为正整数; 每个 MGID描述符包含一个指向这类 内存块的指针, 内存块的类型根据所需报文描述符阵列大小决定。
需说明的是, 因为采用哈希函数访问 MGID描述符内存和 MGID队列内 存, 所以这些内存的行就可能对应多个源 (也就是{优先级, MGID, 源端口 号 } )。 既然这些内存的行是为一个源申请的, 就引入了一个新的需求: 在为一 个新的多播源申请资源时, 需要检查这些行已经被释放。 为了解决这个问题, 需要另一个内存来维护这些信息。
另外, 还需说明的是, 由于采用了哈希功能, 所以可能会产生资源沖突, 因此,还需要维护一个资源沖突查询表( FAVT, Fully Associative Victim Table ), 以便在发生资源沖突时,可以根据该预置的资源沖突查询表将发生资源沖突的 信元的压入另一 MGID队列或丟弃。为了避免 FAVT表满的情况,可以通过配置 一个阈值, 在本实施例中称为第一阈值, 在 FAVT表中的信息量超过该第一阈 值时, 激发一个控制信息发送到源, 以减少发往这个目的端口的多播数据的流 量。
多播排序机制和重组机制与单播相同,故在此不再赘述, 具体可参见实施 例二。
由上可知, 本发明实施例所提供的重组重排序方法不仅适用于单播, 而且 也适用了多播, 可以实现与实施例二同样的有益效果。 实施例四、
为了避免在某一个报文的重排序重组时花费过多的时间,即避免流量超时 ( Timeout Traffic ), 在上述方法实施例的基础上, 进一步的, 还可以根据本地 时间戳来检查信元和所属的报文不会在交换网或报文重组阶段花费太长时间。 :¾口下:
(一)重组超时 ( Reassembly Timeout )
为了检验一个报文不会在重组阶段花费太多时间,每个报文描述符应该包 括一个最早信元时间戳 ( Earliest-Cell-Timestamp ), 表示这个报文的所有信元 中第一个到达目的端口的信元的到达时间。如果当前时间和此最早信元时间戳 的差值大于预设第二阈值, 就把此报文标记为重组超时, 当这个报文到达报文 描述符阵列的队首时, 即使这个报文还没有收到其所有信元,也不再等待接收 该报文的其他信元, 而是把该报文弹出队列并且丟弃。 否则, 在报文到达排序 阵列队首之前, 不对其该报文是否重组超时进行判断, 而是执行如下操作: ***每个报文的第一个信元时, 更新报文描述符中的最早信元时间戳字 段。
当一个报文达到报文描述符阵列的队首时,其最早信元时间戳被复制到所 属的源描述符。
当***一个信元时, 源描述符中的最早信元时间戳都会被检查, 不管到达 的信元属于哪个报文。 比如, 在某个源没有数据到达的情况下, 静态源检测 ( SSD, Silent-Source-Detection )将会触发这类检查。
对于超时报文, 将会被转移到报文回收机制进行处理。
(二) 交换网超时( Fabric Timeout )
在交换网中耽误时间太多的报文可以被直接丟弃, 而不再等待重组超时。 即 ·艮文重组重排序方法还可以包括:
目的端口确定交换网时延的估算值(即报文在交换网中的传输时间)超过 预置的第三阈值时, 则将报文丟弃。
其中,报文在交换网中的传输时间可以在目的端口根据报文的序列号通过 估算得到, 具体可参见现有技术, 在此不再赘述。
由上可知, 本实施例除了可以实现实施例二和四同样的有益效果之外,还 可以根据本地时间戳对 ^艮文在交换网传输的时间和^艮文重组时间进行监控,以 避免产生过多的时延, 有利于提高报文重排序重组的效率。 实施例五、 为了更好地实施以上方法, 本发明实施例还提供一种网络设备, 具体可以 作为目的端口。 参见图 3a, 该网络设备包括接收单元 301、 预处理单元 302、 报 文排序单元 303、 信元排序单元 304和重组单元 305;
接收单元 301 , 用于接收源端口发送的信元, 其中, 信元携带源标识、 报 文序列号和信元序列号, 源标识具体可以为源端口号。
预处理单元 302 , 用于根据源标识对接收单元 301接收到的信元进行预处 理, 以确定是否将该信元***数据库;
报文排序单元 303 , 用于根据报文序列号对报文重组数据库中的信元进行 排序, 得到正确排序的报文;
信元排序单元 304, 用于在确定报文为完整报文时, 根据信元序列号对报 文排序单元 303得到的正确排序的报文中的信元进行排序, 得到正确排序的信 元;
重组单元 305 ,用于对信元排序单元 304得到的正确排序的信元进行报文重 组。
其中, 信元可以包括信元数据和信元信息, 为了节省资源, 在排序的过程 中, 可以只对信元信息进行操作, 等到重组单元 305对报文进行重组时, 才根 据信元信息提取相应的信元数据进行重组。
则参见图 3b, 预处理单元 302可以包括数据保存子单元 3021、 信息***子 单元 3022和决策子单元 3023;
数据保存子单元 3021 , 用于为接收单元 301接收到的信元打上到达时的本 地时间戳后,根据源标识将信元的信元数据保存在负载内存中, 并配置相应的 负载内存索引值;
信息***子单元 3022, 用于将接收单元 301接收到的信元的信元信息*** 到输入先进先出 IF队列中, 其中, 信元信息包括信元的源标识、 本地时间戳、 负载内存索引值、 报文序列号和信元序列号等信息;
决策子单元 3023 , 用于根据源标识、 报文序列号和 PF队列的状态对信息 ***子单元 3022***到 IF队列中的信元信息进行处理,以确定是否将该信元信 息*** 文重组数据库中去。 例如, 具体可以如下:
决策子单元 3023 , 具体用于当所有 PF队列的深度小于预设的阈值时, 确 定 IF队列中队首信元信息所对应的信元的报文序列号与当前正在处理的报文 的最小报文序列号的差值, 是否小于第四阈值; 若是, 则弹出 IF队列的队首信 元信息; 若否, 则将 IF队列的队首信元信息压入 PF队列之一, 并弹出最满的 PF 队列的队首信元信息;判断所弹出的 IF队列的队首信元信息所对应的信元或 PF 队列的队首信元信息所对应的信元是否超时; 若超时, 则回收该信元信息所对 应的信元和该信元所属的报文; 若没有超时, 则在该信元的报文序列号小于报 文重组数据库中同一源的最大开放报文序列号时,将该信元信息***报文重组 数据库;在该信元的报文序列号大于报文重组数据库中同一源的最大开放报文 序列号时, 将该信元信息*** PF队列之一。
该决策子单元, 还用于当有 PF队列的深度大于预设的阈值时, 根据本地 时间戳确定溢出 PF队列中的队首信元信息所对应的信元的报文序列号与当前 正在处理的报文的最小报文序列号的差值, 是否小于第四阈值; 若是, 则回收 该溢出 PF队列的队首信元信息所对应的信元,然后执行确定 IF队列中队首信元 信息所对应的信元的报文序列号与当前正在处理的报文的最小报文序列号的 差值, 是否小于第四阈值的步骤; 若否, 则执行将 IF队列的队首信元信息压入 PF队列之一, 并弹出最满的 PF队列的队首信元信息步骤。
具体可参见实施例一、 实施例二中的预处理流程以及图 2d。
报文重组数据库具体可以针对单播和多播分别进行设计。例如, 可以在一 个报文重组数据库下又分别设置针对单播的报文重组数据库和针对多播的报 文重组数据库。
其中,对于单播来说,报文重组数据库可以包括源描述符内存和源队列内 存。 其中, 源描述符内存用于存储源描述符, 并以报文服务类型和源端口号为 索引, 即以 {报文服务类型, 源端口号}作为索引; 而源队列内存用于存放与该 源描述符对应的源队列,该源队列内存可以包括一个指向排序报文緩存区的指 针,排序报文緩存区用于存放等待排序的信元的报文描述符,每个报文描述符 指向最后一次收到的属于此报文的信元的信元描述符。
其中, 源描述符内存和源队列内存可以由动态的报文描述符阵创建, 包括 N类不同容量的内存块, 其中, N为正整数。
对于多播来说,报文重组数据库包括 MGID描述符内存和 MGID队列内存, 其中, 源 MGID描述符内存用于存储 MGID描述符, 并以报文服务类型、 MGID 和源端口号为索引, 比如用 {报文服务类型, MGID, 源端口号}的哈希(Hash ) 值作为索引, 而 MGID队列内存用于存放与所述 MGID描述符对应的 MGID队 列, 所述 MGID队列内存可以包括一个指向排序报文緩存区的指针, 该排序报 文緩存区用于存放等待排序的信元的报文描述符,每个报文描述符指向最后一 次收到的属于此报文的信元的信元描述符。
其中, MGID描述符内存和 MGID队列内存为动态的报文描述符阵, 包括 M类不同容量的内存块, 其中, M为正整数。
MGID描述符内存和 MGID队列内存支持哈希功能, 即各个进程 (如 IF队 列、 PF队列、 完整队列、 排序报文緩沖区和信元排序緩沖区等)可以通过哈希 功能访问 MGID描述符内存和 MGID队列内存; 即:
报文排序单元 303 ,具体用于通过哈希功能访问 MGID描述符内存和 MGID 队列内存;
其中, 哈希功能可以被构造为多个哈希桶。 由于采用了哈希功能, 所以可 能会产生资源沖突, 因此, 还需要维护一个资源沖突查询表(FAVT ), 以便在 发生资源沖突时,可以根据该预置的资源沖突查询表将发生资源沖突的信元的 压入另一 MGID队列或丟弃。 所以, 如图 3b所示, 该网络设备还可以包括沖突 处理单元 306;
沖突处理单元 306, 用于在发生资源沖突时, 根据预置的资源沖突查询表 将发生资源沖突的信元的压入另一 MGID队列或丟弃 (即回收)。 具体为在报 文排序单元 303通过哈希功能访问 MGID描述符内存和 MGID队列内存时,如果 发生资源沖突,则根据预置的资源沖突查询表将发生资源沖突的信元的压入另 一 MGID队列或丟弃。
此外, 为了避免 FAVT表满的情况, 还可以通过配置一个阈值, 在本发明 实施例中称为第一阈值, 在 FAVT表中的信息量超过该第一阈值时, 激发一个 控制信息发送到源, 以减少发往这个目的端口的多播数据的流量。 所以, 如图 3b所示, 该网络设备还可以包括控制消息发送单元 307;
控制消息发送单元 307, 用于当资源沖突查询表的信息量超过预置第一阈 值时, 激发目的端口发送控制信息给多播源, 以减少发往所述目的端口的多播 数据量。
其中, 如图 3b所示, 信元排序单元 304可以包括入队子单元 3041、 緩存子 单元 3042和排序子单元 3043 ;
入队子单元 3041 , 用于在确定报文排序单元 302排好序的报文为完整报文 时,根据信元的广播类型和优先级将完整报文的报文描述符加入到相应的完整 队列中;
緩存子单元 3042 ,用于根据完整队列中的报文描述符将对应的信元信息插 入到所述信元排序緩沖区中;
排序子单元 3043 ,用于根据信元序列号在信元排序緩沖区中对提取到的信 元信息进行排序。
其中,信元排序緩沖区维护一个读指针和一个写指针, 写指针总是指向当 前正在进行信元排序的报文的开始位置,读指针用于将排好序的信元信息逐个 读出以进行 ·艮文重组, 因此, 可以将写指针和信元序列号相加计算出***的信 元信息的偏移位置, 然后根据偏移位置调整该***的信元信息在报文中的位 置, 以得到正确排序的信元信息; 即:
排序子单元 3043 ,具体用于将写指针和信元序列号相加计算出***的信元 信息的偏移位置, 根据该偏移位置调整所述***的信元信息在报文中的位置。
其中, 如图 3b所示, 重组单元 305可以包括分配子单元 3051、 读取子单元 3052和重组子单元 3053 ; 正确排序的信元信息的所属 文分配至重组资源;
读取子单元 3052 ,用于将信元排序緩沖区中的正确排序的信元信息逐个读 出;
重组子单元 3053 ,用于根据读取子单元 3052读出的信元信息获取相应的信 元数据, 利用分配子单元分配的重组资源对获取到的信元数据进行报文重组。 比如, 重组子单元 3053可以根据读取子单元 3052读出的信元信息,将数据保存 子单元 3021保存在负载内存中的信元数据读取出来。
进一步的, 为了提高效率, 可以为该正确排序的信元信息的所属报文分配 至少两个重组资源, 即: 分配子单元 3051 ,还用于为正确排序的信元信息的所属 "¾文分配至少两个 重组资源。
其中, 为了节约资源, 重组资源被管理在一个资源池中, 在需要的时候才 分配给每个源。 所谓需要的时候, 指的是将要开始进行重组的时候; 比如, 当 排序緩沖区中的信元已经正确排好序后,可以发送一个请求报文重组的指示信 号, 比如 FIFO— RDY信号给报文重组机制, 报文重组机制接收到该请求报文重 组的指示信号后, 就可以开始进行对排序緩沖区中的信元进行报文重组。
当然,如果信元排序緩沖区至少具有两个, 则报文重组机制可能会同时接 收到多个请求报文重组的指示信号,对于这种情况,报文重组机制可以根据报 文服务类型选择信元排序緩沖区来进行处理,即该重组单元还可以包括选择子 单元 3054;
选择子单元 3054, 用于根据报文服务类型选择信元排序緩沖区; 则分配子单元 3051 ,具体用于为选择子单元 3054选择的信元排序緩沖区中 的正确排序的信元信息的所属>¾文分配至重组资源;
读取子单元 3052,用于将选择子单元 3054选择的信元排序緩沖区中的正确 排序的信元信息逐个读出。
为了避免在某一个报文的重组重排序时花费过多的时间,即避免流量超时 进一步的,还可以根据本地时间戳来检查信元和所属的报文不会在交换网或报 文重组阶段花费太长时间。 即如图 3b所示, 该网络设备还可以包括丟弃单元; 丟弃单元, 用于确定报文重组超时时, 将报文丟弃。
该丟弃单元,具体用于在报文中第一个到达目的端口的信元的到达时间与 当前时间的差值大于预置的第二阈值时, 确认报文重组超时。 具体可以如下: 每个报文描述符包括一个最早信元时间戳,表示这个报文的所有信元中第 一个到达目的端口的信元的到达时间。如果当前时间和此最早信元时间戳的差 值大于预设第二阈值, 就把此报文标记为重组超时, 当这个报文到达报文描述 符阵列的队首时, 即使这个报文还没有收到其所有信元,也不再等待接收该报 文的其他信元, 而是把该报文弹出队列并且丟弃。
此外, 在交换网中耽误时间太多的报文也可以直接被丟弃, 此时, 可以不 再等待重组超时。 如下: 接收单元 301 , 具体用于通过交换网接收源端口发送的信元; 则丟弃单元,还用于确定交换网时延的估算值超过预置的第三阈值时, 则 将报文丟弃。
其中,报文在交换网中的传输时间可以在目的端口根据报文的序列号通过 估算得到, 具体可参见现有技术, 在此不再赘述。
第二阈值和第三阈值的具体取值可以根据实际应用的需求进行设置。 具体实施时, 以上各个单元可以分别作为独立的实体实现,也可以作为同 一或若干个实体来实现, 例如, 预处理单元 302可以由实施例二中的预处理模 块来实现, 报文排序单元 303可以由实施例二中的报文排序模块来实现, 信元 排序单元 304可以由实施例二中的信元排序模块来实现,重组单元 305可以由实 施例二中的报文重组模块来实现,而丟弃单元可以实施例二中的报文回收模块 和信令回收模块来实现, 等等。
以上各个单元的具体实施可参见前面实施例, 在此不再赘述。 到的信元进行预处理, 以确定是否将接收到的信元***报文重组数据库, 然后 由报文排序单元 303和信元排序单元 304直接根据信元的报文序列号和信元序 列号对数据库中的信元进行排序, 最终由重组单元 305根据排好序的信元对报 文进行重组。 由于本方案可以直接根据源来对 文进行重组重排序, 所以相对 于现有技术中基于时标同步来实现报文的重组重排序而言,可以适用于高性能 设备。 而且, 由于本方案可以对接收到的信元(即等待重组的报文)进行控制 管理, 所以对资源的利用率较高, 可以占用较少的内存和控制资源。 实施例六、
相应的, 本发明实施例还相应地提供一种通信***, 包括源端口设备和本 发明实施例提供的任意一种网络设备, 该网络设备作为目的端口(即称为目的 端口设备), 具体可参见前面的实施例; 例如, 可以如下:
目的端口设备, 作为目的端口, 用于接收源端口发送的信元, 其中, 该信 元携带源标识、报文序列号和信元序列号,根据源标识对接收到的信元进行预 处理, 以确定是否将该信元***报文重组数据库,根据报文序列号对报文重组 数据库中的信元进行排序,得到正确排序的报文, 若该正确排序的报文为完整 报文, 则根据信元序列号对所述正确排序的报文中的信元进行排序,得到正确 排序的信元, 对该正确排序的信元进行 ^¾文重组;
源端口设备, 作为源端口, 用于发送信元给目的端口设备, 其中, 信元携 带源标识、 报文序列号和信元序列号。
其中, 目的端口设备, 具体用于为接收到的信元打上到达时的本地时间戳 后,根据源标识将信元的信元数据保存在负载内存中, 并配置相应的负载内存 索引值; 将信元的信元信息***到 IF队列中, 其中, 信元信息包括信元的源标 识、 本地时间戳、 负载内存索引值、 报文序列号和信元序列号; 根据源标识、 本地时间戳和 PF队列的状态对 IF队列中的信元信息进行处理,以确定是否将所 述信元信息*** "¾文重组数据库, 具体可参见前面的方法实施例以及图 2d。
以上各个设备的具体实施可参见前面实施例, 在此不再赘述。
由上可知,本实施例的通信***的目的端口设备采用根据源标识对接收到 的信元进行预处理, 以确定是否将接收到的信元***报文重组数据库, 然后再 直接根据信元的报文序列号和信元序列号对数据库中的信元进行排序,最终实 现报文的重组。 由于本方案可以直接根据源来对报文进行重组重排序, 所以相 对于现有技术中基于时标同步来实现报文的重组重排序而言,可以适用于高性 能设备。 而且, 由于本方案可以对接收到的信元(即等待重组的报文)进行控 制管理, 所以对资源的利用率较高, 可以占用较少的内存和控制资源。 本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步 骤是可以通过程序来指令相关的硬件来完成,该程序可以存储于一计算机可读 存储介质中, 存储介质可以包括: 只读存储器(ROM, Read Only Memory ), 随机存取记忆体(RAM, Random Access Memory ) , 磁盘或光盘等。
以上对本发明实施例所提供的一种报文重组重排序方法、装置和***进行 以上实施例的说明只是用于帮助理解本发明的方法及其核心思想; 同时,对于 本领域的技术人员,依据本发明的思想,在具体实施方式及应用范围上均会有 改变之处, 综上所述, 本说明书内容不应理解为对本发明的限制。

Claims

权 利 要 求
1、 一种 文重组重排序方法, 其特征在于, 包括:
接收源端口发送的信元,所述信元携带源标识、报文序列号和信元序列号; 根据源标识对接收到的信元进行预处理,以确定是否将所述信元***报文 重组数据库;
根据报文序列号对报文重组数据库中的信元进行排序,得到正确排序的报 文;
若所述正确排序的报文为完整报文,则根据信元序列号对所述正确排序的 报文中的信元进行排序, 得到正确排序的信元;
对所述正确排序的信元进行 >¾文重组。
2、 根据权利要求 1所述的方法, 其特征在于, 所述根据源标识对接收到的 信元进行预处理, 以确定是否将所述信元***报文重组数据库包括:
为接收到的信元打上到达时的本地时间戳后,根据源标识将信元的信元数 据保存在负载内存中, 并配置相应的负载内存索引值;
将信元的信元信息***到输入先进先出 IF队列中,所述信元信息包括信元 的源标识、 本地时间戳、 负载内存索引值、 ·艮文序列号和信元序列号;
根据源标识、报文序列号和延时先进先出 PF队列的状态对 IF队列中的信元 信息进行处理, 以确定是否将所述信元信息***报文重组数据库。
3、 根据权利要求 2所述的方法, 其特征在于, 所述根据源标识、 报文序列 号和 PF队列的状态对 IF队列中的信元信息进行处理,以确定是否将所述信元插 入报文重组数据库包括:
当所有 PF队列的深度小于预设的阈值时,确定 IF队列中队首信元信息所对 应的信元的报文序列号与当前正在处理的报文的最小报文序列号的差值,是否 小于第四阈值;
若是, 则弹出 IF队列的队首信元信息;
若否, 则将 IF队列的队首信元信息压入 PF队列之一, 并弹出最满的 PF队 列的队首信元信息;
判断所弹出的 IF队列的队首信元信息所对应的信元或 PF队列的队首信元 信息所对应的信元是否超时; 若超时, 则回收该信元信息所对应的信元和该信元所属的报文; 若没有超时,则在该信元的报文序列号小于报文重组数据库中同一源的最 大报文序列号时,将该信元信息***报文重组数据库; 在该信元的报文序列号 大于报文重组数据库中同一源的最大报文序列号时,将该信元信息*** PF队列 之一。
4、 根据权利要求 3所述的方法, 其特征在于, 还包括:
当有 PF队列的深度大于预设的阈值时, 确定溢出 PF队列中的队首信元信 息所对应的信元的报文序列号与当前正在处理的报文的最小报文序列号的差 值, 是否小于第五阈值;
若是, 则回收该溢出 PF队列的队首信元信息所对应的信元, 然后执行确 定 IF队列中队首信元信息所对应的信元的报文序列号与当前正在处理的报文 的最小报文序列号的差值, 是否小于第四阈值的步骤;
若否,则执行将 IF队列的队首信元信息压入 PF队列之一,并弹出最满的 PF 队列的队首信元信息步骤。
5、 根据权利要求 1至 4中任一项所述的方法, 其特征在于,
所述报文重组数据库包括源描述符内存和源队列内存,所述源描述符内存 用于存储源描述符, 并以报文服务类型和源标识为索引, 所述源队列内存用于 存放与所述源描述符对应的源队列,所述源队列内存包括一个指向排序报文緩 存区的指针, 所述排序报文緩存区用于存放等待排序的信元的报文描述符,每 个报文描述符指向最后一次收到的属于此报文的信元的信元描述符。
6、 根据权利要求 5所述的方法, 其特征在于,
所述排序报文緩沖区为动态的报文描述符阵, 包括 N类不同容量的内存 块, 其中, N为正整数。
7、 根据权利要求 1至 4中任一项所述的方法, 其特征在于,
所述报文重组数据库包括多播复制域的标识 MGID描述符内存和 MGID队 列内存, 所述源 MGID描述符内存用于存储 MGID描述符, 并以报文服务类型、 MGID和源标识为索引,所述 MGID队列内存用于存放与所述 MGID描述符对应 的 MGID队列, 所述 MGID队列内存包括一个指向排序报文緩存区的指针, 所 述排序报文緩存区用于存放等待排序的信元的报文描述符,每个报文描述符指 向最后一次收到的属于此报文的信元的信元描述符。
8、 根据权利要求 7所述的方法, 其特征在于,
所述排序报文緩沖区为动态的报文描述符阵, 包括 M类不同容量的内存 块, 其中, M为正整数。
9、 根据权利要求 7所述的方法, 其特征在于,
通过哈希功能访问 MGID描述符内存和 MGID队列内存;
在发生资源沖突时,根据预置的资源沖突查询表将发生资源沖突的信元的 压入另一 MGID队列或丟弃。
10、 根据权利要求 9所述的方法, 其特征在于,
当所述资源沖突查询表的信息量超过预置第一阈值时,激发目的端口发送 控制信息给多播源, 以减少发往所述目的端口的多播数据量。
11、 根据权利要求 2至 4、 6或 8至 10中任一项所述的方法, 其特征在于, 所 述根据信元序列号对所述正确排序的报文中的信元进行排序包括:
若报文为完整报文,则根据信元的广播类型和报文服务类型将所述完整报 文的 文描述符加入到相应的完整队列中;
根据完整队列中的报文描述符将对应的信元信息***到信元排序緩沖区 中;
根据信元序列号在所述信元排序緩沖区中对提取到的信元信息进行排序。
12、 根据权利要求 11所述的方法, 其特征在于, 所述根据信元序列号在所 述信元排序緩沖区中对提取到的信元信息进行排序包括:
所述信元排序緩沖区维护一个读指针和一个写指针,写指针总是指向当前 正在进行信元排序的报文的开始位置,读指针用于将排好序的信元信息逐个读 出以进行报文重组;
将写指针和信元序列号相加计算出***的信元信息的偏移位置; 根据所述偏移位置调整所述***的信元信息在报文中的位置。
13、 根据权利要求 11所述的方法, 其特征在于, 所述对正确排序的信元进 行报文重组包括:
为所述正确排序的信元信息的所属 文分配至少两个重组资源; 将选择的信元排序緩沖区中的正确排序的信元信息逐个读出; 根据读出的信元信息获取相应的信元数据,利用分配的重组资源对获取到 的信元数据进行报文重组。
14、 根据权利要求 13所述的方法, 其特征在于, 所述信元排序緩沖区至少 具有两个, 则所述对正确排序的信元进行 "¾文重组还包括:
根据报文服务类型选择信元排序緩沖区;
则所述将选择的信元排序緩沖区中的正确排序的信元信息逐个读出具体 为: 将选择的信元排序緩沖区中的正确排序的信元信息逐个读出。
15、 根据权利要求 2至 4、 6、 8至 10或 12至 14中任一项所述的方法, 其特征 在于, 还包括:
确定报文重组超时时, 将报文丟弃。
16、 根据权利要求 15所述的方法, 其特征在于, 所述确定报文重组超时具 体为:
若才艮文中第一个到达目的端口的信元的本地时间戳与当前时间的差值大 于预置的第二阈值时, 则确认报文重组超时。
17、 根据权利要求 2至 4、 6、 8至 10、 12至 14或 16中任一项所述的方法, 其 特征在于,
所述接收源端口发送的信元具体为: 通过交换网接收源端口发送的信元; 则所述对正确排序的信元进行报文重组之前,还包括: 确定交换网时延的 估算值超过预置的第三阈值时, 则将报文丟弃。
18、 一种网络设备, 其特征在于, 包括:
接收单元, 用于接收源端口发送的信元, 所述信元携带源标识、 报文序列 号和信元序列号;
预处理单元, 用于根据源标识对接收单元接收到的信元进行预处理, 以确 定是否将所述信元***数据库;
报文排序单元, 用于根据报文序列号对报文重组数据库中的信元进行排 序, 得到正确排序的报文;
信元排序单元, 用于在确定报文为完整报文时,根据信元序列号对所述正 确排序的报文中的信元进行排序, 得到正确排序的信元;
重组单元, 用于对所述正确排序的信元进行 "¾文重组。
19、根据权利要求 18所述的网络设备,其特征在于,所述预处理单元包括: 数据保存子单元, 用于为接收到的信元打上到达时的本地时间戳后,根据 源标识将信元的信元数据保存在负载内存中, 并配置相应的负载内存索引值; 信息***子单元, 用于将信元的信元信息***到输入先进先出 IF队列中, 所述信元信息包括信元的源标识、 本地时间戳、 负载内存索引值、 报文序列号 和信元序列号;
决策子单元, 用于根据源标识、 报文序列号和延时先进先出 PF队列的状 态对 IF队列中的信元信息进行处理,以确定是否将所述信元信息***报文重组 数据库。
20、 根据权利要求 19所述的网络设备, 其特征在于, 包括:
所述决策子单元, 具体用于当所有 PF队列的深度小于预设的阈值时, 确 定 IF队列中队首信元信息所对应的信元的报文序列号与当前正在处理的报文 的最小报文序列号的差值, 是否小于第四阈值; 若小于第四阈值, 则弹出 IF队 列的队首信元信息; 若大于等于第四阈值, 则将 IF队列的队首信元信息压入 PF 队列之一, 并弹出最满的 PF队列的队首信元信息; 判断所弹出的 IF队列的队首 信元信息所对应的信元或 PF队列的队首信元信息所对应的信元是否超时;若超 时, 则回收该信元信息所对应的信元和该信元所属的报文; 若没有超时, 则在 该信元的报文序列号小于报文重组数据库中同一源的最大报文序列号时,将该 信元信息***报文重组数据库;在该信元的报文序列号大于报文重组数据库中 同一源的最大报文序列号时, 将该信元信息*** PF队列之一。
21、 根据权利要求 20所述的网络设备, 其特征在于,
所述决策子单元, 还用于当有 PF队列的深度大于预设的阈值时, 确定溢 出 PF队列中的队首信元信息所对应的信元的报文序列号与当前正在处理的报 文的最小报文序列号的差值, 是否小于第五阈值; 若小于第五阈值, 则回收该 溢出 PF队列的队首信元信息所对应的信元,然后执行确定 IF队列中队首信元信 息所对应的信元的报文序列号与当前正在处理的报文的最小报文序列号的差 值, 是否小于第四阈值的步骤; 若大于等于第五阈值, 则执行将 IF队列的队首 信元信息压入 PF队列之一, 并弹出最满的 PF队列的队首信元信息步骤。
22、 根据权利要求 18至 21中任一项所述的网络设备, 其特征在于, 所述报文重组数据库包括源描述符内存和源队列内存,所述源描述符内存 用于存储源描述符, 并以报文服务类型和源标识为索引, 所述源队列内存用于 存放与所述源描述符对应的源队列,所述源队列内存包括一个指向排序报文緩 存区的指针, 所述排序报文緩存区用于存放等待排序的信元的报文描述符,每 个报文描述符指向最后一次收到的属于此报文的信元的信元描述符。
23、 根据权利要求 22所述的网络设备, 其特征在于,
所述排序报文緩沖区为动态的报文描述符阵, 包括 N类不同容量的内存 块, 其中, N为正整数。
24、 根据权利要求 18至 21中任一项所述的网络设备, 其特征在于, 所述报文重组数据库包括多播复制域的标识 MGID描述符内存和 MGID队 列内存, 所述源 MGID描述符内存用于存储 MGID描述符, 并以报文服务类型、 MGID和源端口号为索引,所述 MGID队列内存用于存放与所述 MGID描述符对 应的 MGID队列, 所述 MGID队列内存包括一个指向排序报文緩存区的指针, 所述排序报文緩存区用于存放等待排序的信元的报文描述符,每个报文描述符 指向最后一次收到的属于此报文的信元的信元描述符。
25、 根据权利要求 24所述的网络设备, 其特征在于,
所述排序报文緩沖区为动态的报文描述符阵, 包括 M类不同容量的内存 块, 其中, M为正整数。
26、根据权利要求 24所述的网络设备,其特征在于,还包括沖突处理单元; 所述报文排序单元, 具体用于通过哈希功能访问 MGID描述符内存和
MGID队列内存;
则沖突处理单元, 用于在发生资源沖突时,根据预置的资源沖突查询表将 发生资源沖突的信元的压入另一 MGID队列或丟弃。
27、 根据权利要求 26所述的网络设备, 其特征在于, 还包括:
控制消息发送单元,用于当所述资源沖突查询表的信息量超过预置第一阈 值时, 激发目的端口发送控制信息给多播源, 以减少发往所述目的端口的多播 数据量。
28、 根据权利要求 19至 21、 23或 25至 27中任一项所述的网络设备, 其特征 在于, 所述信元排序单元包括: 入队子单元, 用于在确定报文为完整报文时,根据信元的广播类型和优先 级将所述完整报文的报文描述符加入到相应的完整队列中;
緩存子单元,用于根据完整队列中的报文描述符将对应的信元信息***到 所述信元排序緩沖区中;
排序子单元,用于根据信元序列号在所述信元排序緩沖区中对提取到的信 元信息进行排序。
29、 根据权利要求 28所述的网络设备, 其特征在于, 所述信元排序緩沖区 维护一个读指针和一个写指针,写指针总是指向当前正在进行信元排序的报文 的开始位置, 读指针用于将排好序的信元信息逐个读出以进行 ·艮文重组; 贝' J :
所述排序子单元,具体用于将写指针和信元序列号相加计算出***的信元 信息的偏移位置, 根据所述偏移位置调整所述***的信元信息在报文中的位 置。
30、 根据权利要求 28所述的网络设备, 其特征在于, 所述重组单元包括: 分配子单元,用于为所述正确排序的信元信息的所属 文分配至少两个重 组资源;
读取子单元, 用于将信元排序緩沖区中的正确排序的信元信息逐个读出; 重组子单元, 用于根据读取子单元读出的信元信息获取相应的信元数据, 利用分配子单元分配的重组资源对获取到的信元数据进行报文重组。
31、 根据权利要求 30所述的网络设备, 其特征在于, 所述重组单元还包括 选择子单元;
选择子单元, 用于根据报文服务类型选择信元排序緩沖区;
则所述分配子单元,具体用于为选择子单元选择的信元排序緩沖区中的正 确排序的信元信息的所属报文分配至重组资源;
所述读取子单元,用于将选择子单元选择的信元排序緩沖区中的正确排序 的信元信息逐个读出。
32、根据权利要求 19至 21、 23、 25至 27或 29至 31中任一项所述的网络设备, 其特征在于, 还包括:
丟弃单元, 用于确定报文重组超时时, 将报文丟弃。
33、 根据权利要求 32所述的网络设备, 其特征在于, 丟弃单元,具体用于在报文中第一个到达目的端口的信元的到达时间与当 前时间的差值大于预置的第二阈值时, 确认报文重组超时。
34、 根据权利要求 32所述的网络设备, 其特征在于,
所述接收单元, 具体用于通过交换网接收源端口发送的信元;
则所述丟弃单元, 还用于确定交换网时延的估算值超过预置的第三阈值 时, 则将报文丟弃。
35、 一种通信***, 其特征在于, 包括源端口设备和权利要求 19至 34所述 的任一种网络设备;
所述源端口设备, 用于发送信元给网络设备, 所述信元携带源标识、报文 序列号和信元序列号。
PCT/CN2011/078504 2011-08-17 2011-08-17 一种报文重组重排序方法、装置和*** WO2012162949A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
CN201180001422.5A CN103069757B (zh) 2011-08-17 2011-08-17 一种报文重组重排序方法、装置和***
KR1020147005957A KR101607180B1 (ko) 2011-08-17 2011-08-17 패킷 재조립 및 재배열 방법, 장치 및 시스템
JP2014525279A JP5863076B2 (ja) 2011-08-17 2011-08-17 パケットを再構築し再順序付けするための方法、装置、およびシステム
PCT/CN2011/078504 WO2012162949A1 (zh) 2011-08-17 2011-08-17 一种报文重组重排序方法、装置和***
EP11866504.1A EP2736197B1 (en) 2011-08-17 2011-08-17 Packet reassembly and resequence method, apparatus and system
US14/182,106 US9380007B2 (en) 2011-08-17 2014-02-17 Method, apparatus and system for packet reassembly and reordering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2011/078504 WO2012162949A1 (zh) 2011-08-17 2011-08-17 一种报文重组重排序方法、装置和***

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/182,106 Continuation US9380007B2 (en) 2011-08-17 2014-02-17 Method, apparatus and system for packet reassembly and reordering

Publications (1)

Publication Number Publication Date
WO2012162949A1 true WO2012162949A1 (zh) 2012-12-06

Family

ID=47258297

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2011/078504 WO2012162949A1 (zh) 2011-08-17 2011-08-17 一种报文重组重排序方法、装置和***

Country Status (6)

Country Link
US (1) US9380007B2 (zh)
EP (1) EP2736197B1 (zh)
JP (1) JP5863076B2 (zh)
KR (1) KR101607180B1 (zh)
CN (1) CN103069757B (zh)
WO (1) WO2012162949A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6781992B1 (en) * 2000-11-30 2004-08-24 Netrake Corporation Queue engine for reassembling and reordering data packets in a network
US20090252168A1 (en) * 2008-04-02 2009-10-08 Alaxala Networks Corporation Multi-plane cell switch fabric system
CN101795236A (zh) * 2010-03-31 2010-08-04 成都市华为赛门铁克科技有限公司 报文保序方法及装置
CN101932006A (zh) * 2009-06-19 2010-12-29 中兴通讯股份有限公司 数据包发送、接收、传输方法及装置

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2848784B2 (ja) * 1994-08-02 1999-01-20 沖電気工業株式会社 パケット交換方式
US5802050A (en) 1996-06-10 1998-09-01 Telefonaktiebolaget Lm Ericsson Minicell sequence number count
US6791947B2 (en) * 1996-12-16 2004-09-14 Juniper Networks In-line packet processing
US6148001A (en) * 1997-06-12 2000-11-14 Nokia Telecommunications, Oy Multipoint-to-point system which multiplexes complete packets comprised of ATM cells on to a single virtual channel connection
US6741552B1 (en) * 1998-02-12 2004-05-25 Pmc Sierra Inertnational, Inc. Fault-tolerant, highly-scalable cell switching architecture
JP2001036549A (ja) * 1999-07-15 2001-02-09 Toshiba Corp データ処理システムおよびタイムスタンプ生成方法
US7102999B1 (en) * 1999-11-24 2006-09-05 Juniper Networks, Inc. Switching device
US6977930B1 (en) * 2000-02-14 2005-12-20 Cisco Technology, Inc. Pipelined packet switching and queuing architecture
US7107359B1 (en) * 2000-10-30 2006-09-12 Intel Corporation Host-fabric adapter having hardware assist architecture and method of connecting a host system to a channel-based switched fabric in a data network
US7095744B2 (en) * 2000-11-22 2006-08-22 Dune Networks Method and system for switching variable sized packets
JP4489308B2 (ja) * 2001-01-05 2010-06-23 富士通株式会社 Packet switch
US7342942B1 (en) * 2001-02-07 2008-03-11 Cortina Systems, Inc. Multi-service segmentation and reassembly device that maintains only one reassembly context per active output port
US7085274B1 (en) * 2001-09-19 2006-08-01 Juniper Networks, Inc. Context-switched multi-stream pipelined reorder engine
US7167476B1 (en) * 2002-04-12 2007-01-23 Juniper Networks, Inc. Systems and methods for routing data in a network device
US7124231B1 (en) * 2002-06-14 2006-10-17 Cisco Technology, Inc. Split transaction reordering circuit
US7486678B1 (en) 2002-07-03 2009-02-03 Greenfield Networks Multi-slice network processor
US20050100035A1 (en) * 2003-11-11 2005-05-12 Avici Systems, Inc. Adaptive source routing and packet processing
US7873693B1 (en) * 2004-02-13 2011-01-18 Habanero Holdings, Inc. Multi-chassis fabric-backplane enterprise servers
US8279866B2 (en) * 2004-11-17 2012-10-02 Samsung Electronics Co., Ltd. Method and system for switching packets in a communication network
US8553684B2 (en) * 2006-04-24 2013-10-08 Broadcom Corporation Network switching system having variable headers and addresses
JP4867778B2 (ja) 2007-05-07 2012-02-01 株式会社日立製作所 Distributed switch fabric system
ES2363942T3 (es) * 2008-12-19 2011-08-19 Alcatel Lucent Scalable network element with segmentation and reassembly (SAR) functionality for switching time-division multiplexed signals
US9602439B2 (en) * 2010-04-30 2017-03-21 Juniper Networks, Inc. Methods and apparatus for flow control associated with a switch fabric
US9071499B2 (en) * 2011-03-28 2015-06-30 Citrix Systems, Inc. Systems and methods for emulating a NIC for packet transmission on hardware RSS unaware NICs in a multi-core system

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105573711A (zh) * 2014-10-14 2016-05-11 深圳市中兴微电子技术有限公司 Data caching method and apparatus
US10205673B2 (en) 2014-10-14 2019-02-12 Sanechips Technology Co. Ltd. Data caching method and device, and storage medium
CN105573711B (zh) * 2014-10-14 2019-07-19 深圳市中兴微电子技术有限公司 Data caching method and apparatus
CN106911644A (zh) * 2015-12-23 2017-06-30 ***通信集团广西有限公司 Packet reassembly method and device
CN109561443A (zh) * 2017-09-26 2019-04-02 电信科学技术研究院 Information processing method, apparatus, device, and computer-readable storage medium
CN109561443B (zh) * 2017-09-26 2021-06-11 大唐移动通信设备有限公司 Information processing method, apparatus, device, and computer-readable storage medium
CN114390004A (zh) * 2022-03-24 2022-04-22 成都数联云算科技有限公司 Packet processing method, system, apparatus, and medium
CN114679425B (zh) * 2022-03-24 2024-05-31 深圳震有科技股份有限公司 Packet processing method, network device, and storage medium in a 5G network

Also Published As

Publication number Publication date
EP2736197A4 (en) 2014-05-28
US20140160935A1 (en) 2014-06-12
CN103069757A (zh) 2013-04-24
EP2736197B1 (en) 2016-03-09
EP2736197A1 (en) 2014-05-28
JP2014522202A (ja) 2014-08-28
KR20140046052A (ko) 2014-04-17
US9380007B2 (en) 2016-06-28
KR101607180B1 (ko) 2016-03-29
JP5863076B2 (ja) 2016-02-16
CN103069757B (zh) 2015-07-22

Similar Documents

Publication Publication Date Title
WO2012162949A1 (zh) Message reassembly and reordering method, apparatus, and system
US8505013B2 (en) Reducing data read latency in a network communications processor architecture
CN107948094B (zh) Apparatus and method for collision-free enqueue processing of high-speed data frames
EP0960511B1 (en) Method and apparatus for reclaiming buffers
US8514874B2 (en) Thread synchronization in a multi-thread network communications processor architecture
US5311509A (en) Configurable gigabits switch adapter
US9864633B2 (en) Network processor having multicasting protocol
US7406041B2 (en) System and method for late-dropping packets in a network switch
JP4779955B2 (ja) Packet processing apparatus and packet processing method
US8910171B2 (en) Thread synchronization in a multi-thread network communications processor architecture
CN112084136B (zh) Queue buffer management method, system, storage medium, computer device, and application
US8432908B2 (en) Efficient packet replication
JP5892500B2 (ja) Message processing method and apparatus
JPH07321822A (ja) Apparatus with multicasting function
US8868889B2 (en) Instruction breakpoints in a multi-core, multi-thread network communications processor architecture
EP2526478B1 (en) A packet buffer comprising a data section and a data description section
WO2006098763A2 (en) Intelligent memory interface
CN114531488B (zh) Efficient buffer management system for Ethernet switches
CN104158770A (zh) Method and apparatus for switch data packet segmentation and reassembly
CN102916902A (zh) Data storage method and apparatus
WO2022121808A1 (zh) Scheduling method, device, and storage medium for cut-through forwarding mode
US11201831B1 (en) Packed ingress interface for network apparatuses
US7751422B2 (en) Group tag caching of memory contents
CN111131089A (zh) Queue management method for alleviating HOL blocking of multicast traffic
CN100495974C (zh) Traffic shaping method during data transmission

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase
    Ref document number: 201180001422.5
    Country of ref document: CN

121 EP: the EPO has been informed by WIPO that EP was designated in this application
    Ref document number: 11866504
    Country of ref document: EP
    Kind code of ref document: A1

ENP Entry into the national phase
    Ref document number: 2014525279
    Country of ref document: JP
    Kind code of ref document: A

NENP Non-entry into the national phase
    Ref country code: DE

WWE Wipo information: entry into national phase
    Ref document number: 2011866504
    Country of ref document: EP

ENP Entry into the national phase
    Ref document number: 20147005957
    Country of ref document: KR
    Kind code of ref document: A