WO2017070851A1 - Channelization for flexible Ethernet

Channelization for flexible Ethernet

Info

Publication number
WO2017070851A1
Authority
WO
WIPO (PCT)
Prior art keywords: rate, standard, interface, data, mii
Application number
PCT/CN2015/092992
Other languages
French (fr)
Inventor
Bin Liu
Sheping SHI
Chengbin Wu
Xiaobing Niu
Original Assignee
Zte Corporation
Application filed by ZTE Corporation
Priority to PCT/CN2015/092992
Publication of WO2017070851A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04J - MULTIPLEX COMMUNICATION
    • H04J3/00 - Time-division multiplex systems
    • H04J3/16 - Time-division multiplex systems in which the time allocation to individual channels within a transmission cycle is variable, e.g. to accommodate varying complexity of signals, to vary number of channels transmitted
    • H04J3/1605 - Fixed allocated frame structures
    • H04J3/1652 - Optical Transport Network [OTN]
    • H04J3/1658 - Optical Transport Network [OTN] carrying packets or ATM cells
    • H04Q - SELECTING
    • H04Q11/00 - Selecting arrangements for multiplex systems

Definitions

  • This document relates to a channelization method based on flexible Ethernet, and in particular to flexible Ethernet technology in the Optical Internetworking Forum (OIF) field.
  • In early 2015, the Optical Internetworking Forum (OIF) set up a flexible Ethernet (Flex Ethernet, or FlexE) project group to address problems encountered in current data transmission networks. FlexE is expected to support varying payload bandwidth per wavelength, enabled by optimization of the modulation format.
  • This document discloses techniques for channelization and data bandwidth scheduling for flexible Ethernet traffic generation and transmission.
  • a method for transferring data from one or more input data streams to a flexible Ethernet (FlexE) shim layer for transmission includes operating flexible containers to receive the one or more input data streams to generate, from each flexible container, one or more output streams; processing one or more output streams from each flexible container in one or more reconciliation sublayers to generate data signals; and providing, over a modified industry-standard interface, the data signals to a FlexE shim layer in which the data signals are encoded to generate logical serial data streams of encoded blocks representing FlexE clients which are mapped to allocated timing slots for transmission.
  • Each flexible container and the processing in the one or more reconciliation sublayers enable the modified industry-standard interface to include one or more interfaces having respective rates to accommodate for transmission via the FlexE shim layer.
  • a method and an apparatus for implementing a technique of transferring data from one or more input data streams to a FlexE shim layer for transmission are disclosed.
  • the technique includes operating flexible containers to receive the one or more input data streams to generate, from each flexible container, one or more output streams, processing one or more output streams from each flexible container in one or more reconciliation sublayers to generate 64-bit data signals, and providing, over a modified industry-standard interface, the 64-bit data signals to a FlexE shim layer in which the 64-bit data signals are each encoded using a 64B/66B encoding to generate logical serial data streams of 64B/66B blocks representing FlexE clients which are mapped to allocated timing slots for transmission.
  • the modified industry-standard interface comprises one or more industry-standard interfaces having a same standard rate, or one or more MII interfaces having different standard rates, or one or more industry-standard interfaces having a non-standard rate.
  • each flexible container corresponds to a FlexE client and a processing rate of each flexible container matches a rate of a logical data stream for a corresponding FlexE client.
  • a method and an apparatus for implementing a technique of transferring data from a flexible Ethernet (FlexE) shim layer to one or more output data streams during reception of the data includes providing, over a modified industry standard interface, 64-bit data signals from a FlexE shim layer that decodes, using a 64B/66B decoding, logical serial data streams of 64B/66B blocks representing FlexE clients which are mapped to allocated timing slots in the data, processing the 64-bit data signals through one or more reconciliation sublayers to generate one or more data stream inputs to flexible containers and operating the flexible containers to output the one or more output data streams from the received one or more input streams.
  • the modified industry-standard interface comprises one or more industry-standard interfaces having a same standard rate, or one or more MII interfaces having different standard rates, or one or more industry-standard interfaces having a non-standard rate.
  • each flexible container corresponds to a FlexE client and processing rate of each flexible container matches a rate of a logical data stream for a corresponding FlexE client.
  • a method and an apparatus for a technique of communicating data from multiple Ethernet inputs having multiple interface rates to an optical network include constructing one or more elastic/flexible containers operating on data at a media access control (MAC) layer, wherein the elastic/flexible container comprises a variable length data structure, configuring an output interface having an interface rate equal to a total of the multiple interface rates, wherein the output interface is configured to carry data in time slots according to a transmission schedule, processing data packets received from the multiple Ethernet interfaces through a network protocol stack implemented on the network apparatus to generate multiple processed Ethernet data streams, allocating data packets from the processed Ethernet data streams to time slots of an output data stream transmitted out of the output interface according to the transmission schedule, and communicating, to an optical network, a routing policy by which multiple optical data units of the optical network are to receive the output data stream.
  • a method and apparatus for receiving data from an optical network and transmitting over multiple Ethernet interfaces implement a technique that includes executing at least one MAC protocol stack instance on the network apparatus, constructing an elastic/flexible container operating on data at a MAC layer, wherein the elastic/flexible container comprises a variable length data structure, configuring an input interface having an interface rate equal to a total of the multiple interface rates, wherein the input interface is configured to carry data in time slots according to a transmission schedule, receiving a routing policy by which multiple optical data units of the optical network are transmitting data on to the input interface, and selecting data packets from the input interface according to the transmission schedule for processing through a network protocol stack implemented on the network apparatus for transmission to the multiple Ethernet outputs.
  • a technique for transmitting and receiving data using a FlexE shim includes an apparatus for transferring data from a first group of one or more input data streams to a flexible Ethernet (FlexE) shim layer for transmission, and transferring, from the FlexE shim layer to a first group of one or more output data streams for reception.
  • the apparatus includes a number of flexible containers for receiving the first group of one or more input data streams to generate, from each flexible container, a second group of one or more output streams, one or more reconciliation sublayer modules for processing the second group of one or more output streams from each flexible container to generate a first group of 64-bit data signals, and a modified industry-standard interface for providing the first group of 64-bit data signals to a FlexE shim layer in which the first group of 64-bit data signals are each encoded using a 64B/66B encoding to generate a first group of logical serial data streams of 64B/66B blocks representing FlexE clients which are mapped to a first group of allocated timing slots for transmission.
  • the one or more reconciliation sublayers further process the second group of 64-bit data signals to generate a second group of one or more data stream inputs to flexible containers.
  • the flexible containers further output the first group of one or more output data streams from the received second group of one or more input streams.
  • the modified industry-standard interface comprises one or more industry-standard interfaces having a same standard rate, or one or more MII interfaces having different standard rates, or one or more industry-standard interfaces having a non-standard rate.
  • each flexible container corresponds to a FlexE client and a processing rate of each flexible container matches a rate of a logical data stream for a corresponding FlexE client.
  • FIG. 1 shows a block diagram of an example of a 100G line card with multiple Traffic Management (TM) chips.
  • FIG. 2 illustrates an example of a bottleneck of fixed-rate Ethernet with respect to the current flexible IP and optical transmission synergetic networks.
  • FIG. 3 is a block diagram example depicting the location of FlexE SHIM layer within the IEEE802.3 stack.
  • FIG. 4 shows an example of data channelization for a FlexE client data stream channelized to form a calendar.
  • FIG. 5 shows another example of data channelization for a FlexE client data stream channelized to form a calendar.
  • FIG. 6 illustrates an example of flexible Ethernet networking by a router and an OTN device connected via a port.
  • FIG. 7 illustrates an example of flexible Ethernet networking by a router and an OTN device connected with four PHY bonding together.
  • FIG. 8 shows an example of an optical communication network.
  • FIG. 9 illustrates an example of mapping of a media independent interface (MII) corresponding to an elastic/flexible container.
  • FIG. 10 illustrates an example of a mapping of an MII corresponding to a 150G elastic/flexible container.
  • FIG. 11 illustrates an example of a mapping of an MII corresponding to a 135G elastic/flexible container.
  • FIG. 12 illustrates an example 10 Gigabit media independent interface (XGMII).
  • FIG. 13 illustrates an example of a ten Gigabit attachment unit interface (XAUI) .
  • FIG. 14 illustrates an example of three FlexE client flows set in a router.
  • FIG. 15 is a distribution diagram showing an example of a FlexE calendar.
  • FIG. 16 illustrates an example cache structure of a chip.
  • FIG. 17 illustrates an example of a free cache organized using a linked queue.
  • FIG. 18 illustrates an example of allocation and freeing of a cache space.
  • FIG. 19 illustrates an example of a data-send operation of a data transmission apparatus.
  • FIG. 20 illustrates an example of a data-receive operation of a data transmission apparatus.
  • FIG. 21 illustrates an example flowchart for a method of data transmission.
  • FIG. 22 illustrates an example of a data transmission apparatus.
  • FIG. 23 illustrates an example flowchart for a method of receiving data transmissions.
  • FIG. 24 illustrates an example of a data reception apparatus.
  • FIG. 25 illustrates an example of a data structure.
  • FIG. 26 shows an example flowchart for a method of data communication.
  • FIG. 27 shows an example flowchart for another method of data communication.
  • FIG. 28 shows an example of a FlexE mux structure.
  • FIG. 29 shows an example of a FlexE de-mux structure.
  • FIG. 30 shows an example transport network that is unaware of FlexE carrying FlexE data.
  • FIG. 31 shows an example of a transport network that is aware of Flex E carrying FlexE data.
  • Ethernet is a universally used data connection interface.
  • Currently deployed Ethernet products are often named after the connection rate achieved by the physical layer, e.g., 10 Mbit/s, 1 Gbps, and so on.
  • current Ethernet interfaces are fixed in bandwidth and therefore cannot exploit the bandwidth flexibility that packet switching equipment can provide; such equipment presents only a single fixed-bandwidth stream externally, because Ethernet is available only at a few fixed rates.
  • present day technologies fail to make use of the transmission bandwidth flexibility that can be achieved by aggregating, or combining, traffic from multiple Ethernet devices with different connection rates.
  • Flex Ethernet defines channelization, binding and sub-rate functions by standardizing a flexible Ethernet (FlexE) MAC interface.
  • FlexE enables a standard Ethernet physical media dependent layer (PMD) to connect one or more Ethernet MACs, and provides efficient channel bonding at standard and non-standard rates, so that Ethernet switches and routers can be configured with different bandwidths on demand, thereby increasing the flexibility of bandwidth configuration in data center networks.
  • the terms used in the present document are consistent with their meaning in the FlexE Implementation Agreement Draft 1.1., Release date July 2015 (IA OIF-FLEXE-01.0) , which is incorporated herein in its entirety.
  • the transmission pipeline rate may be lower than the PMD rate of Ethernet, and the router port rate may be matched to the transfer rate to support non-standard Ethernet rates.
  • binding may refer to bonding to an Ethernet physical layer.
  • a 200G MAC can be supported on two bound 100GBASE-R physical layers.
  • a large Ethernet rate may be divided into multiple (e.g., three or four) sub-rates, and resources corresponding to each sub-rate may be bound to that sub-rate. This mapping between a high bandwidth connection and multiple low bandwidth connections is also sometimes called channelization.
  • For optical networks, with the development of ODUflex (Optical channel Data Unit flexible), flexible grid, bandwidth variable transponders (BVT), flexible reconfigurable optical add-drop multiplexers (ROADM) and flexible OTN, flexible bandwidth can be carried via the optical network.
  • the fiber-optic network has the infrastructure capability that can be used for traffic management with flexible bandwidths.
  • data packet forwarding devices, such as Ethernet routers or switches, can handle substantially flexible bandwidth streams.
  • FIG. 1 is a block diagram illustration of an example of a line card 100 that communicatively couples Ethernet data connections with optical transmission equipment.
  • the technology disclosed in the present document can be implemented on a line card that offers data connectivity between Ethernet electrical signals and optical signals carrying data traffic in an optical network.
  • FIG. 2 shows an example block diagram illustrating the bottleneck that fixed-rate Ethernet presents to current flexible IP and optical coordinated transport networks.
  • the packets flow through the network processor (NP) units and traffic management (TM) units 202 (both these units can control the flow of bandwidth) e.g., as depicted by flexible rate output streams 210.
  • an Ethernet router/switch can easily generate different streams with different rates and bandwidth on demand at the Ethernet interface 204.
  • bandwidth of Ethernet interface defined by IEEE 802.3 is a fixed rate, such as 10G, 40G, 100G and 400Gbps, e.g., as depicted by 208. Due to limitations of fixed-rate Ethernet interfaces, the flexibility of an Ethernet router/switch is not fully utilized for feeding data to flexible optical equipment 206 that includes such optical transport devices as a flexible optical module, an ODUflex, a flex ROADM, etc.
  • the FlexE framework proposed in the OIF provides a common mechanism to support multiple Ethernet MAC layer rates, which may or may not correspond to an existing Ethernet PHY rate. Its more flexible and universal channel bonding characteristics, the resulting channel and sub-rate flexibility, and the important feature that no PMD modification is needed will enable flexible Ethernet to address future market applications and usher in an emerging Ethernet and optical transport market.
  • FIG. 3 shows an example of the positioning of the FlexE SHIM layer 302 in the IEEE802.3 stack 300 that implements the flexible Ethernet technology.
  • After data streams having certain bandwidths reach the MAC layer, e.g., through the MII interface, they form parallel data streams (e.g., 32-bit parallel data streams over an XGMII interface). These data streams are combined into a 64-bit data signal TXD<63:0>. The FlexE SHIM layer 302 then performs 64B/66B encoding on the data from the MII interface and generates a 66-bit block. The 66-bit block is formed of two parts: a 2-bit synchronization header and a 64-bit payload. The logical serial stream of 64B/66B blocks is called a FlexE client in FlexE technology.
  • the flexible Ethernet technology adds a FlexE SHIM layer 302 above the original scramble function defined in the conventional 802.3 stack, bypasses the original 64B/66B encoding and decoding process, and relocates the 64B/66B encoding and decoding process to the top of the SHIM layer.
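  • As a minimal illustration of the 66B block structure described above (a sketch only; it omits the IEEE 802.3 transmission-order and scrambling details), a 66-bit block can be modeled in Python as a 2-bit synchronization header prepended to a 64-bit payload:

        def make_66b_block(payload64: int, is_control: bool = False) -> int:
            # A 66-bit block = 2-bit sync header + 64-bit payload.
            # Header 0b01 marks an all-data block, 0b10 marks a block
            # carrying control information (IEEE 802.3 clause 82 convention).
            assert 0 <= payload64 < (1 << 64)
            sync_header = 0b10 if is_control else 0b01
            return (sync_header << 64) | payload64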
  • a FlexE group may be composed of 1 to n 100GBASE-R Ethernet physical layer devices (PHY) .
  • PHY physical layer devices
  • Each physical layer device uses most PCS functions described in clause 82 of the IEEE draft standard 802.3-2015, including PCS channel distribution, channel tag insertion, alignment and correction. All PHYs in a FlexE group may use the same physical layer clock.
  • The FlexE payload carried on each PHY of the FlexE group has the format of a logical serial stream of valid 64B/66B blocks, except for the positions occupied by the alignment markers of the PCS channels (which cannot carry FlexE payload).
  • Each FlexE client may represent a 64B/66B block logical serial stream of an Ethernet MAC layer.
  • an elastic/flexible container may be configured to operate at a variable speed to match the rate of the MAC stream.
  • the MAC layer of a FlexE client can run at a rate of 10, 40, or m × 25 Gb/s, where m is an integer.
  • the 64B/66B encoding is based on IEEE standard 802.3-2015 Figure 82-4.
  • the FlexE mechanism functions by using a calendar that allocates 66B block locations on each PHY of the FlexE group to each FlexE client.
  • a calendar may use a basic unit of a time slot.
  • each time slot may correspond to 5 Gbps of capacity, with a length of 20 time slots for every 100G of FlexE group capacity.
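  • The slot arithmetic implied above can be illustrated with a small sketch (the client names and the simple first-fit policy below are assumptions for illustration; actual FlexE calendars are configured rather than computed this way):

        SLOT_RATE_GBPS = 5            # each calendar slot carries 5 Gbps
        SLOTS_PER_100G_PHY = 20       # 20 slots per 100G of FlexE group capacity

        def allocate_calendar(phy_count: int, client_rates_gbps: dict) -> list:
            # Assign 5G calendar slots to FlexE clients across a group of
            # `phy_count` 100G PHYs; returns a master calendar of length
            # 20 * phy_count whose entries name the owning client (or None).
            calendar = [None] * (SLOTS_PER_100G_PHY * phy_count)
            cursor = 0
            for client, rate in client_rates_gbps.items():
                for _ in range(rate // SLOT_RATE_GBPS):
                    calendar[cursor] = client
                    cursor += 1
            return calendar

        # Example: a 2-PHY (200G) group carrying 10G, 25G and 150G clients.
        cal = allocate_calendar(2, {"clientA": 10, "clientB": 25, "clientC": 150})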
  • FIG. 28 shows an example of functions performed in a FlexE mux in the transmit direction.
  • Data from FlexE clients is passed through a 64B/66B encoding module and an idle insert/delete module before being distributed and inserted into a master calendar.
  • What is presented for insertion into the slots of the FlexE master calendar is a stream of 64B/66B encoded blocks encoded per IEEE Std 802.3-2015 Table 82-4 which has been rate-matched to other clients of the same FlexE shim. This stream of 66B blocks might be created directly at the required rate using back-pressure from a network processor.
  • the stream of blocks may come from a multi-lane Ethernet PHY, where the lanes need to be deskewed and re-interleaved with alignment markers removed prior to performing idle insertion/deletion to rate match with other clients of the same FlexE shim. Or the stream may have come from another FlexE shim, for example, connected across an OTN network, where all that is required is to perform idle insertion/deletion to rate match with other clients of the same FlexE shim.
  • the logical length of the master calendar is 20n (1502) .
  • each block allocated in the master calendar is distributed to one of n sub-calendars, each of length 20, one on each PHY of the FlexE group (1504).
  • the 66B blocks are allocated in a simple cyclic order across the sub-calendars of 20 slots, which allows a PHY to be added to the FlexE group without changing the existing calendar slots allocated to FlexE clients.
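  • A sketch of this distribution, assuming a simple round-robin interleave of master-calendar slots across the n PHY sub-calendars (the exact interleave order is defined in the FlexE Implementation Agreement; this only illustrates the shape):

        def distribute_master_calendar(master: list, n_phys: int) -> list:
            # Split a master calendar of length 20*n into n sub-calendars of
            # length 20, one per PHY, assuming slot i goes to PHY i % n.
            assert len(master) == 20 * n_phys
            subs = [[] for _ in range(n_phys)]
            for i, slot in enumerate(master):
                subs[i % n_phys].append(slot)
            return subs  # each entry is one PHY's sub-calendar of 20 slots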
  • FIG. 29 shows example functions of the FlexE demux (the FlexE shim in the receive direction) .
  • the 100GBASE-R lower layers (PMA, lane deskew, interleave, AM removal, descramble), which represent the layers of each 100GBASE-R PHY below the PCS, are used exactly as specified in IEEE Std 802.3-2012.
  • the PCS lanes are recovered, deskewed, reinterleaved, and the alignment markers are removed.
  • the aggregate stream is descrambled.
  • Calendar interleaving and overhead extraction: the calendar slots of each PHY are logically interleaved in a specified order.
  • the FlexE overhead is recovered from each PHY.
  • the 66B blocks are extracted from the master calendar positions assigned to each FlexE client in a pre-specified order.
  • Idle insertion/deletion and 66B decoding: where these functions are performed, and whether they are inside or outside the FlexE implementation, is embodiment-specific.
  • the 66B blocks could be delivered directly to a network processor. If delivered to a single-lane PHY, idle insertion/deletion may be used to increase the rate to the PHY rate, realigning to 4-byte boundaries in the process (for 10G or 25G) and recoding 64B/66B according to clause 49. For a multi-lane PHY, idle insertion/deletion is used to increase the rate to the PHY rate less the space needed for alignment markers, the blocks are distributed to PCS lanes with AM insertion. For a FlexE client mapped over OTN, idle insertion/deletion may be used to adjust the rate as required for the OTN mapping.
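  • A minimal sketch of the per-client extraction step described above (the calendar map and client names are illustrative; the real demux operates on the recovered calendar configuration):

        def extract_client_blocks(block_stream, calendar_map, client):
            # Pick out the 66B blocks at the master-calendar positions assigned
            # to one FlexE client. `block_stream` is a flat list of 66B blocks
            # in calendar order; `calendar_map` maps each calendar position to
            # the owning client.
            period = len(calendar_map)
            return [blk for i, blk in enumerate(block_stream)
                    if calendar_map[i % period] == client]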
  • a SHIM layer of flexible Ethernet realizes the basic flexible Ethernet functions by mapping FlexE client data streams to the corresponding sets of calendar time slots.
  • the interface for 10 Gb/s is the XAUI or XGMII interface.
  • the media-independent interface for 100 Gb/s is the CGMII interface, a 64-bit parallel interface (data distribution is based on 64-bit units) whose output is passed to the physical coding sublayer for processing. Because current MII interface rates are fixed at standard rates, they cannot match the rate of a FlexE client, so the potential of the open architecture of flexible Ethernet is not fully exploited.
  • the FlexE technology currently provides no methods for allocating FlexE client data streams to different sub-rate channels, or for traffic shaping and rate matching.
  • the present document provides, among other things, a method of stream distribution to different sub-rate channels.
  • an apparatus to distribute traffic to various sub-rate channels includes one or more optical modules that support FlexE technology and various stack methods that support FlexE.
  • flexibility and changeability solutions can be implemented at one or more of the MAC layer, the FlexE SHIM layer and the physical layer.
  • an elastic/flexible container structure can be constructed in the MAC layer in order to carry the MAC stream with the changed rate.
  • a flexible MII interface in the SHIM layer may be used to carry data streams with a different rate from an upper layer.
  • the modified MII interface can be adapted to MAC layer data stream with changed bandwidth.
  • the reconciliation sublayers map the logical MAC physical layer service primitives to and from standard electrical interfaces used by the PHYs.
  • the RS maps signals from the upper layer (MAC) to physical signaling primitives understood by the sublayer below the RS.
  • a FlexE module encodes the data arriving from the MII interface at the SHIM using 64B/66B encoding. The resulting 64B/66B blocks are mapped to the corresponding slots on demand in accordance with the general principles of flexible Ethernet.
  • the technique may include constructing in the MAC layer an elastic/flexible container that matches target FlexE client’s various rates.
  • the components of the container include, but are not limited to, a cache, a FIFO (buffer) , logic resources, related circuit resources, etc., for loading FlexE client data streams.
  • the elastic/flexible container corresponding to each data stream for different FlexE clients may be logically separated from each other.
  • the technique may include constructing an MII interface type set that matches commonly used FlexE client rates.
  • the set members include:
  • an MII interface with a rate of 5 Gbps, and a new logical MII interface formed by combining various MII interfaces;
  • a new logical MII interface formed by combining n same-type interfaces of each MII interface, where n is a natural number.
  • the various MII interfaces still correspond to their matching RSs.
  • the new logical MII interfaces formed by various combinations still correspond to their matching logical RSs.
  • a new MII interface, with a rate of 5 Gbps unlike the current standard Ethernet rates, is constructed by reducing the clock rate of the 10 Gbps XGMII interface to half of the original.
  • a clock rate change of N/M may be used, where N and M are both integers.
  • a new MII interface with a rate of 150 Gbps is constructed by increasing the clock rate of the 100 Gbps MII interface to 1.5 times the original.
  • a new MII interface with a rate of 300 Gbps is constructed by reducing the clock rate of the 400 Gbps CDMII interface to 0.75 times the original.
  • a new MII interface with a rate of 400 Gbps is constructed by increasing the clock rate of the 100 Gbps MII interface to four times the original.
  • the above various new non-standard-rate MII interfaces still correspond to their matching RS sublayers.
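  • The rate arithmetic for these scaled-clock interfaces can be sketched as follows (the function name and example values are illustrative):

        def scaled_mii_rate_gbps(base_rate_gbps: float, n: int, m: int) -> float:
            # Rate of a non-standard MII obtained by scaling the transmit and
            # receive clocks of a standard MII by the rational factor N/M.
            return base_rate_gbps * n / m

        assert scaled_mii_rate_gbps(10, 1, 2) == 5      # 10G clock halved -> 5G
        assert scaled_mii_rate_gbps(100, 3, 2) == 150   # 100G clock x1.5  -> 150G
        assert scaled_mii_rate_gbps(400, 3, 4) == 300   # 400G clock x0.75 -> 300G
        assert scaled_mii_rate_gbps(100, 4, 1) == 400   # 100G clock x4    -> 400G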
  • Each FlexE client data stream corresponds, through an RS, to a member MII interface or logical MII interface of the MII interface type set, so as to generate 64B/66B data to be mapped into a corresponding ODUflex; the ODUflex corresponds to the time slots assigned to the MAC client in a 20n Master Calendar.
  • An optical module that supports FlexE technology does not need a modified PMD; methods that support the general principles of FlexE can refer to the oif2015.127.02 document.
  • FIG. 4 depicts a system 400 in which each single instance of a FlexE client provides data being coupled to a single instance of ODUFlex.
  • a system 500 is depicted in which a single FlexE client supports data communication over traffic split into two MII flows, each having its own 64B/66B conversion stage and its own reconciliation stage (each of which may individually have a standard MII rate), that are fed to the ODUflex stage.
  • the FlexE channelization can be accomplished by implementing the following processing operations.
  • an elastic/flexible container that matches target FlexE client’s various rates.
  • the components of the container include, but are not limited to, cache, FIFO, logic resources, the related circuit resources, etc., for loading FlexE client data streams.
  • the elastic/flexible containers corresponding to the data streams of different FlexE clients are separated from each other.
  • the set members comprise the already-existing MII interfaces, an MII interface with a rate of 5 Gbps, a new type of interface formed by combining various MII interfaces, and a new type of interface formed by combining n same-type interfaces of each MII interface, where n is a natural number.
  • a logical MII interface is constructed that matches such rate.
  • Various MII interface still correspond to their matching RSs, and the new logic MII interfaces formed by various combinations still correspond to their matching logic RSs.
  • the MAC layer of a FlexE client may operate at a rate of 10, 40 or m x 25 Gbps.
  • the MAC layer of a FlexE client may also operate at a multiple of the following rates: 5 Gb/s, 10 Gb/s, 25 Gb/s, 40 Gb/s, 50 Gb/s, 100 Gb/s, 200 Gb/s, 300 Gb/s, 400 Gb/s, or 1 Tb/s.
  • multiples, or linear combinations of sub-multiples, of these rates may be used (e.g., combinations using weights that are rational numbers, where the numerator and denominator of each multiple are integers).
  • each MAC data stream passes through an RS sublayer and MII interfaces in a one-to-one correspondence (or a 1:n mapping relationship, where n is a natural number).
  • the generated 64B/66B data (which belong to the FlexE SHIM layer from this point) are mapped to a corresponding ODUflex.
  • the MII interface can also be replaced with XAUI, XGMII interfaces, etc., as appropriate.
  • the ODUflex corresponds to the time slots assigned to the MAC client in the 20n Master Calendar.
  • the data inside ODUflex are put into the time slot set to form a multiframe.
  • the multiframe is formed by FlexE data blocks, separated every 20 × 1024 blocks by a corresponding overhead block.
  • for the structure of the overhead block, refer to FIG. 25; for the time slot structure in the multiframe, refer to FIG. 15.
  • the router/switch setting policies enable multiple data streams (for example, data streams with a specific destination MAC address, a specific destination IP address, or a specific protocol; the setting mode is not limited) to be sent to a destination output port that supports FlexE technology in the local device;
  • This operation can be achieved via the command lines, network management software, or SDN controllers.
  • the generated FlexE client is processed in the order of the first through fourth steps above.
  • the method described here achieves channelized allocation of traffic to different sub-rate channels, with each FlexE client's data flowing into the channel of its own time slots and not being mixed with the others.
  • This technique thus gives business applications a very flexible means to control the pipeline. Parameters of sub-channel can be flexibly controlled through either the command line, or SDN network controller.
  • an elastic/flexible container that matches target FlexE client’s various rates.
  • the components of the container include, but are not limited to, cache, FIFO, logic resources, the related circuit resources, etc., for loading FlexE client data streams.
  • the elastic/flexible containers corresponding to the data streams of different FlexE clients are separated from each other.
  • the set members comprise the already-existing MII interfaces, an MII interface with a rate of 5 Gbps, a new type of interface formed by combining various MII interfaces, and a new type of interface formed by combining n same-type interfaces of each MII interface, where n is a natural number.
  • a new MII interface, with a rate of 5 Gbps unlike the current standard Ethernet rates, is constructed by reducing the clock rate of the 10 Gbps XGMII interface to half of the original.
  • a new MII interface with a rate of 150 Gbps is constructed by increasing the clock rate of the 100 Gbps MII interface to 1.5 times the original.
  • a new MII interface with a rate of 300 Gbps is constructed by reducing the clock rate of the 400 Gbps CDMII interface to 0.75 times the original.
  • a new MII interface with a rate of 400 Gbps is constructed by increasing the clock rate of the 100 Gbps MII interface to four times the original.
  • a MAC layer of a FlexE client may operate at a rate of 10, 40, or m × 25 Gb/s.
  • the MAC layer rate of a FlexE client may also be a multiple of the following rates: 5 Gb/s, 10 Gb/s, 25 Gb/s, 40 Gb/s, 50 Gb/s, 100 Gb/s, 200 Gb/s, 300 Gb/s, 400 Gb/s, or 1 Tb/s.
  • each MAC data stream passes through an RS sublayer and MII interfaces in a one-to-one correspondence (or a 1:n mapping relationship, where n is a natural number).
  • the generated 64B/66B data (which belong to the FlexE SHIM layer from this point) are mapped to a corresponding ODUflex; the MII interface can also be replaced with XAUI, XGMII interfaces, etc.
  • the ODUflex corresponds to the time slots assigned to the MAC client in the 20n Master Calendar.
  • the data inside ODUflex are put into the time slot set to form a multiframe.
  • the router/switch setting policies are set to enable multiple data streams (for example, data streams with a specific destination MAC address, a specific destination IP address, or a specific protocol; the setting mode is not limited) to be sent to a destination output port that supports FlexE technology in the local device;
  • This operation can be achieved via the command lines, network management software, or SDN controllers.
  • the logical serial stream is called a FlexE client in FlexE technology.
  • the generated FlexE client is processed in the order of the above operations.
  • This embodiment illustrates an example of how to construct an elastic/flexible container, and how to associate sub-streams having different rates with the elastic/flexible container to which they belong and with a cache rate.
  • elastic/flexible containers that match the various target rates of FlexE clients may be constructed. To maximize the use of cache resources while meeting the design conditions, the resources that form the elastic/flexible container may be established as a shared structure, for example a shared cache structure, shared TM (traffic management), etc.
  • Elastic/flexible container planning is considered in both the inbound and outbound directions, and therefore the corresponding resources should be divided into two sets, sending and receiving, as shown in FIG. 19 and FIG. 20.
  • the transmission direction uses three modules, including a stream distributor, an elastic/flexible container and a channel distributor.
  • the receiving direction uses three modules, including a streaming aggregator, an elastic/flexible container and a channel aggregator.
  • the stream distributor module is a switching module responsible for the formation of sub-streams; its input comes from the input port of a router/switch, and its output comprises n independent sub-streams that are sent to the flexible containers.
  • the stream distributor recognizes the input streams, and assigns each of them to one of n output sub streams.
  • the input of channel distributor is from a sub-stream from the elastic/flexible container.
  • a few designated physical channels (e.g., time slot set) are used for loading sub-streams based on the configuration.
  • Channel allocation is responsible for loading a sub-stream to the specified physical channel.
  • the channel aggregator functions as the opposite of the channel distributor.
  • the receiving direction receives the stream from the specified channel, restores the data packets, and then sends them to the elastic/flexible container.
  • the data streams from the channels are grouped and aggregated by the channel aggregator, the packets are restored and written into a flexible container. The packets are then read out by the stream aggregator from all the flexible containers, and a complete stream in the receiving direction is combined and recovered.
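  • A minimal sketch of the stream distributor's role (the policy keyed on destination MAC address and the container names are assumptions for illustration; real policies may also key on destination IP, protocol, etc., as noted above):

        from collections import defaultdict

        def distribute(packets, policy):
            # Assign each input packet to one of n sub-streams (one per
            # elastic/flexible container) according to a configured policy.
            sub_streams = defaultdict(list)
            for pkt in packets:
                sub_streams[policy.get(pkt["dst_mac"], "default")].append(pkt)
            return sub_streams

        policy = {"MAC1": "container_10G", "MAC2": "container_25G"}
        flows = distribute([{"dst_mac": "MAC1", "payload": b"..."}], policy)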
  • FIG. 16 shows the internal cache structure of a switch chip. Assume an on-chip data packet cache space with a capacity of 180 KB, divided into 720 units, each called a page, with each page having a capacity of 256 bytes for storing received data packets. Cache space is allocated in units of pages for each data packet, and larger packets can occupy more than one page of cache space. In addition to the packet cache region, the data cache space also stores several groups of queues for transmit descriptors. Each queue corresponds to a transmission port in the chip and stores the transmit descriptors for that port.
  • Each transmit descriptor contains the index number for the address of the data packet cache space and associated control information.
  • the internal chip of router/switch device organizes free cache space by linked queue.
  • a queue represented by a linked list can be called, for short, a linked queue.
  • a linked queue is uniquely determined by two pointers indicating the head and tail of the queue (referred to as the head pointer and tail pointer). It is a singly linked list with both a head pointer and a tail pointer, and its operating characteristic is FIFO. Addresses for all free cache spaces in the switch and control chip are managed using a linked list.
  • the linked list is a free address queue: the free list represents the free address queue, formed by the free address head pointer Free_Hptr and tail pointer Free_Tptr. All free cache management operations are performed on the free list.
  • FIG. 17 shows an example state in which cache space blocks 0, 3, 5, 1, 7, 719 and 6 are free, as well as the contents of the CRAM and of the Free_Hptr and Free_Tptr registers.
  • The free block list is initialized before use. After initialization, the content of the Free_Hptr register is 0, the content of word 0 of the CRAM is 1, the content of word 1 is 2, ..., the content of word 718 is 719, and the content of the Free_Tptr register is 719.
  • Free cache space is allocated by taking a free cache unit from the head of the list, while cache space is released by adding the already-processed cache unit to the end of the list.
  • FIG. 18 depicts the allocation and release of cache space.
  • When the list has only one item, i.e., the Free_Hptr register has the same contents as the Free_Tptr register, the value in the Free_Tptr register should be modified accordingly after an allocation operation. Setting both the Free_Hptr and Free_Tptr registers to all 1s indicates an invalid state: when the list is empty, the contents of both Free_Hptr and Free_Tptr are all 1s, and at that point there is no cache space to be allocated.
  • When releasing into an empty list, the Free_Hptr and Free_Tptr registers only need to be set to the newly released free unit number. While the embodiment described here assumes an on-chip buffer of 180 KB as an example, larger on-chip buffers may be used with processing consistent with the above description, including on-chip buffers with storage capacities 3-4 orders of magnitude greater than 180 KB, or even larger buffers.
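  • A minimal software sketch of the free-list mechanics just described (class and field names are illustrative; a real chip implements this in hardware registers and CRAM, and the width of the all-ones marker depends on the register size):

        class FreePageQueue:
            INVALID = 0xFFFF  # stand-in for the "all 1s" empty/invalid marker

            def __init__(self, pages: int = 720):
                self.cram = [i + 1 for i in range(pages)]   # word i -> i+1 after init
                self.head = 0                               # Free_Hptr
                self.tail = pages - 1                       # Free_Tptr

            def allocate(self) -> int:
                # Take a free page from the head of the list.
                if self.head == self.INVALID:
                    raise MemoryError("no free cache space")
                page = self.head
                if page == self.tail:                       # last item: queue empties
                    self.head = self.tail = self.INVALID
                else:
                    self.head = self.cram[page]
                return page

            def release(self, page: int) -> None:
                # Add a processed page back to the tail of the list.
                if self.head == self.INVALID:               # empty: both pointers -> page
                    self.head = self.tail = page
                else:
                    self.cram[self.tail] = page
                    self.tail = page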
  • FIG. 9 is a diagram corresponding to this embodiment. After the elastic/flexible container is constructed, the MII interface may be constructed next.
  • Assume three FlexE clients have rates of 400G, 25G and 150G respectively, as shown in the example illustration of FIG. 9. Three elastic/flexible containers are constructed in the MAC layer to match the target FlexE client rates, for loading the three FlexE clients' data streams.
  • the elastic/flexible containers corresponding to different FlexE clients’ data streams are isolated from each other.
  • the MII interface combinations matching commonly used FlexE client’s rate are constructed. If a FlexE client has a rate of 400G, the corresponding MII interface is constructed, which comprises: four 100G MII interfaces, or eight 50G MII interfaces, or eighty 5G MII interfaces.
  • FIG. 10 is a diagram that is corresponding to the following embodiment.
  • a combination of different types of MII interfaces can be constructed. For example, if a FlexE client has a rate of 150G, the corresponding MII interface can be constructed, which comprises: one 100G MII interface and one 50G MII interface.
  • different types and numbers of MII interface combinations can also be constructed, for example, if a FlexE client has a rate of 135G, the corresponding MII interface can be constructed, which comprises: two 50G MII interfaces, one 20G of MII interface, and three 5G MII interfaces, as depicted in FIG. 11.
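  • A greedy sketch of composing a logical MII interface for a target FlexE client rate from standard-rate members, in the spirit of the 400G, 150G and 135G examples above (the greedy choice and the set of available rates are assumptions; the combination actually used is a configuration decision, and several valid combinations may exist for the same rate):

        def mii_combination(target_gbps: int, available=(100, 50, 25, 20, 10, 5)):
            # Compose a list of member-interface rates summing to the target.
            combo, remaining = [], target_gbps
            for rate in sorted(available, reverse=True):
                while remaining >= rate:
                    combo.append(rate)
                    remaining -= rate
            if remaining:
                raise ValueError(f"{target_gbps}G cannot be formed from {available}")
            return combo

        print(mii_combination(400))   # [100, 100, 100, 100]
        print(mii_combination(150))   # [100, 50]
        print(mii_combination(135))   # [100, 25, 10] (one of several valid combinations)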
  • the previous embodiments described how to plan and build a system for transferring FlexE packet traffic.
  • the system can be pre-set.
  • the Ethernet port on the router has a rate of 400G.
  • The MII interface group (four 100G MII interfaces) corresponding to the 400G rate, constructed in accordance with Embodiment 2, is called the logical MII interface corresponding to the FlexE client's rate.
  • FlexE (flexible Ethernet) is enabled on the port, without performing flexible Ethernet bonding with the PHYs of other ports.
  • the port of the OTN equipment also has a speed of 400G, and an MII interface group (comprising four 100G MII interfaces) corresponding to the 400G rate can likewise be constructed in accordance with Embodiment 2.
  • Destination MAC address of data stream from Port1 is set to MAC1, which enters the elastic/flexible container having a capacity of 10G.
  • Destination MAC address of data streams from port2, port3, port4, port5, port6, port7, port8, port9 is set to MAC2, which enters the elastic/flexible container having a capacity of 25G.
  • Destination MAC address of data streams from port9, port10 is set to MAC3, which enters the elastic/flexible container having a capacity of 20G.
  • the above configurations and assignment of port IDs to MACs in the router are done by the network management software.
  • Each set FlexE client data stream corresponds to a MII media independent interface or a logical MII interface in a one-to-one correspondence via the RS sublayer.
  • The data within each MAC data stream form parallel data streams via the MII interface, relying on 64B/66B encoding and mapping techniques to generate 66-bit blocks for use by the FlexE (flexible Ethernet) SHIM layer.
  • the FlexE SHIM layer is generally understood to include the 64B/66B conversion or encoding module.
  • the generated 64B/66B data are mapped into a corresponding ODUflex.
  • the ODUflex corresponds to a time slot set assigned to the MAC client in the 20n Master Calendar, and the data inside the ODUflex are put into these time slot set.
  • the MAC data streams mapped into the corresponding ODUflex correspond to time slot sets, or groups, assigned to the MAC client in the 20n Master Calendar.
  • These time slot sets and the ODUflex constitute a logical PHY that belongs to the FlexE client data stream, and are thereby channelized.
  • the MAC data stream can therefore be used as a whole, and be transmitted to the destination node intact.
  • three FlexE client data streams may not be distinguishable from each other: the logical PHY they are sent to will carry all three FlexE clients' data streams and will deliver the data to one destination node. If the three FlexE clients' data streams are destined for different nodes, some FlexE client's data stream will be sent to the wrong destination OTN node. To avoid this situation, the logical PHY data should be unloaded at the local OTN node, and a layer 2 processing chip performs a switching operation on the Ethernet data carried as payload. After switching, the data are re-packaged and sent to their different destinations.
  • the port PHYs of a router are bonded together.
  • the networking environment of connected routers and OTN devices is shown in FIG. 7, wherein four Ethernet ports with rates of 100 Gb/s are bonded into a flexible Ethernet port with a rate of 400 Gb/s.
  • the port rate of the OTN equipment is also 400G, where similarly four Ethernet ports with rates of 100 Gb/s are bonded into a flexible Ethernet port with a rate of 400 Gb/s.
  • Corresponding detection and configuration are done by network management.
  • FlexE client data flows FlexE client 1, FlexE client 2 and FlexE client 3 are set up in the router and are able to pass through the flexible Ethernet port, with respective rates of 10 Gb/s, 25 Gb/s and 20 Gb/s.
  • Each FlexE client data stream corresponds to a MII media independent interface (or logical MII interface) in a one-to-one correspondence via the RS sublayer.
  • The data within each FlexE client data stream (the MAC data stream) form parallel data streams via the MII interface, relying on 64B/66B encoding and mapping techniques to generate 66-bit blocks for use by the FlexE (flexible Ethernet) SHIM layer.
  • everything from the point where the 66-bit blocks are formed onward belongs to the FlexE SHIM layer.
  • the generated 64B/66B data are mapped into a corresponding ODUflex.
  • the ODUflex corresponds to a time slot set assigned to the MAC client in the 20n Master Calendar, and the data inside the ODUflex are put into these time slot sets; other processes are same as the conventional FlexE flexible Ethernet technology.
  • the MAC data streams mapped into the corresponding ODUflex correspond to time slot sets assigned to the MAC client in the 20n Master Calendar.
  • These time slot sets and the ODUflex constitute a logical PHY that belongs to the FlexE client data stream, and are thereby channelized.
  • the FlexE client data stream can therefore be used as a whole, and be transmitted to the destination node intact.
  • Each FlexE client data stream corresponds to a RS sublayer and a MII media independent interface in the following manner:
  • FlexE client 1 corresponds to one MII interface.
  • FlexE client 2 corresponds to an MII group formed by a combination of three 50G MII interfaces (a logical MII interface).
  • FlexE client 3 corresponds to an MII group formed by a combination of two 10G MII interfaces (a logical MII interface).
  • Data within the FlexE client 1 data stream form parallel data streams via the MII interface, relying on 64B/66B encoding and mapping techniques to generate 66-bit blocks, which are mapped to two 5G time slots in the first Master Calendar;
  • data within the FlexE client 2 data stream form three groups of parallel data streams via three 50G MII interfaces, relying on 64B/66B encoding and mapping techniques to generate three streams of 66-bit blocks, which are mapped to 30 5G time slots in the second and third Master Calendars;
  • data within the FlexE client 3 data stream form two groups of parallel data streams via two 10G MII interfaces, relying on 64B/66B encoding and mapping techniques to generate two streams of 66-bit blocks, which are mapped to two 5G time slots in the fourth Master Calendar.
  • the data can be sent by FlexE SHIM layer.
  • everything from the point where the 66-bit blocks are formed onward belongs to the FlexE SHIM layer.
  • the generated 64B/66B data are mapped into a corresponding ODUflex.
  • the ODUflex corresponds to a time slot set assigned to the MAC client in the 20n Master Calendar, and the data inside the ODUflex are put into these time slot sets.
  • the remaining processes are same as the conventional FlexE flexible Ethernet technology.
  • the FlexE client data streams mapped into the corresponding ODUflex correspond to time slot sets assigned to the MAC client in the 20n Master Calendar. These time slot sets and the ODUflex constitute a logical PHY that belongs to the FlexE client data stream, and are thereby channelized. The FlexE client data stream can therefore be handled as a whole and transmitted to the destination node intact.
  • an existing MII interface is used to construct MII interfaces with different rates.
  • Some embodiments may consider reusing existing interfaces.
  • a new MII interface, with a rate of 5 Gbps unlike the current standard Ethernet rates, is constructed by reducing the transmit and receive clocks of the 10 Gbps XGMII interface to half of the original clock rate.
  • a new MII interface with a rate of 150 Gbps is constructed by increasing the transmit and receive clocks of the 100 Gbps MII interface to 1.5 times the original clock rate;
  • a new MII interface with a rate of 300 Gbps is constructed by reducing the transmit and receive clocks of the 400 Gbps CDMII interface to 0.75 times the original clock rate;
  • a new MII interface with a rate of 800 Gbps is constructed by increasing the transmit and receive clocks of the 400 Gbps MII interface to twice the original clock rate;
  • TXD[31:0]: data transmission channel, 32-bit parallel data.
  • RXD[31:0]: data receiving channel, 32-bit parallel data.
  • TXC[3:0]: respectively correspond to TXD[31:24], TXD[23:16], TXD[15:8] and TXD[7:0].
  • RXC[3:0]: respectively correspond to RXD[31:24], RXD[23:16], RXD[15:8] and RXD[7:0].
  • TX_CLK: reference clock for TXD and TXC.
  • RX_CLK: reference clock for RXD and RXC.
  • the clock frequency is (1/2) × 156.25 MHz; data is sampled on both the rising and falling edges of the clock signal.
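  • The resulting data rate of such a parallel interface follows directly from the bus width and clock (a sketch; the half-clock value corresponds to the 5 Gbps variant described above, and the full 156.25 MHz clock to standard XGMII):

        def mii_data_rate_gbps(clock_mhz: float, bus_width_bits: int = 32,
                               double_data_rate: bool = True) -> float:
            # Bus width times transfer rate; doubled when both clock edges carry data.
            transfers_per_sec = clock_mhz * 1e6 * (2 if double_data_rate else 1)
            return transfers_per_sec * bus_width_bits / 1e9

        print(mii_data_rate_gbps(156.25))        # 10.0 -> standard XGMII
        print(mii_data_rate_gbps(156.25 / 2))    # 5.0  -> half-clock 5G variant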
  • Due to its electrical characteristics, the maximum PCB trace length of an XGMII interface is only 7 cm, and XGMII interfaces have too many signal lines, which is inconvenient for practical applications. Therefore, in practice, XGMII interfaces are usually replaced with XAUI interfaces.
  • XAUI is the 10 Gigabit attachment unit interface.
  • XAUI, based on XGMII, extends the physical reach of the XGMII interface, increasing the PCB trace length to 50 cm and enabling traces across the backplane.
  • FIG. 30 shows an example transport network that is unaware of FlexE carrying FlexE data.
  • data traffic from a transmitting FlexE shim is shown being sent over four fiber connections through the transport network to the receiving FlexE shim.
  • Data packets from a single Flex client may thus be carried over multiple different physical connections, e.g., optical fibers in FIG. 30.
  • the physical distance between the transmitting shim and the receiving shim may be hundreds or even thousands of kilometers.
  • the data packets for the same transmitting FlexE client may experience large differences in data propagation times during transmission from the transmitting shim to the receiving shim.
  • While Ethernet allows for some clock adjustments to remove differential delays or skews, the differential delays experienced in long-haul networks may be too large to overcome with the currently prescribed FlexE technologies.
  • FIG. 31 shows an example of a transport network that is aware of Flex E carrying FlexE data.
  • a similar clock skew problem may exist when data packets from the same FlexE client at FlexE shim 3102 travel through two different paths (the 150G path or the 25G/50G path) to the receiving FlexE shim 3104. This clock skew may be due to the differential propagation delay and also to processing delay through the protocol stack implementation at the shim 3106.
  • a timing skew correction mechanism similar to the Precision Time Protocol (PTP) specified in IEEE 1588, which is incorporated by reference herein, may be utilized. While PTP was specified for improving time synchronization in a local area network with smaller physical reach, e.g., a few hundred meters, its basic principles can be applied to achieve timing synchronization in FlexE data transport through long-haul networks.
  • the equipment running one of the FlexE shims may be selected or chosen as the timing server, and may provide accurate timing information to the other equipment.
  • the clock synchronization may be performed at a protocol sublayer that is between the FlexE shim and the physical layer, e.g., in the PCS.
  • a mechanism similar to synchronous Ethernet (SyncE) may be used for correcting clock skews.
  • clock information may be passed between phy layers of all receiving and transmitting devices and may be used for correcting clock skews.
  • a very high precision clock source, e.g., a master clock, may be made available in the network. The precision of this clock source may be on the order of 10^-11 clock inaccuracy.
  • the clock synchronization is achieved by using an operations administration and maintenance (OAM) protocol data unit (PDU) that is identified by a specific Ethernet frame header.
  • the synchronization may be achieved at the FlexE physical layer by processing OAM PDUs after the 64B/66B encoding is performed.
  • FIG. 21 illustrates a flowchart for an example method 2100 for communicating data from multiple Ethernet inputs having multiple interface rates to an optical network.
  • the method 2100 includes constructing (2104) one or more elastic/flexible containers operating on data at a media access control (MAC) layer.
  • the elastic/flexible container comprises a variable length data structure and variable logical resources.
  • the method 2100 includes configuring (2106) an output interface having an interface rate equal to a total of the multiple Ethernet interface rates, to carry data in time slots according to a transmission schedule corresponding to the constructed one or more elastic/flexible containers in the MAC layer.
  • the method 2100 includes processing (2108) data packets received from the multiple Ethernet interfaces through the one or more elastic/flexible containers to generate multiple processed Ethernet data streams.
  • the method 2100 includes allocating (2110) data packets from the processed Ethernet data streams according to time slots of the transmission schedule to generate an output data stream.
  • the method 2100 includes communicating (2112) , to an optical network, a routing policy by which multiple optical data units of the optical network are to receive the output data stream.
  • FIG. 22 illustrates a block diagram for an example apparatus 2200 for communicating data from multiple Ethernet inputs having multiple interface rates to an optical network
  • the apparatus 2200 includes a module for constructing (2204) one or more elastic/flexible containers operating on data at a media access control (MAC) layer, wherein each elastic/flexible container comprises a variable length data structure and variable logic resources, a module for configuring (2206) an output interface having an interface rate equal to a total of the multiple Ethernet interface rates to carry data in time slots according to a transmission schedule corresponding to the constructed one or more elastic/flexible containers in the MAC layer, a module for processing (2208) data packets received from the multiple Ethernet interfaces through the one or more elastic/flexible containers to generate processed multiple Ethernet data streams, a module for allocating (2210) data packets from the processed multiple Ethernet data streams according to time slots of the transmission schedule to generate an output data stream, and a module for communicating (2212) , to an optical network, a routing policy by which multiple optical data units of the optical network are to receive the output data stream
  • FIG. 23 illustrates a flowchart for an example method 2300 for receiving data from an optical network and transmitting over multiple Ethernet interfaces.
  • the method 2300 includes constructing (2304) one or more elastic/flexible containers operating on data at a media access control (MAC) layer, wherein each elastic/flexible container comprises a variable length data structure and variable logic resources, configuring (2306) an input interface having an interface rate equal to a total of the multiple Ethernet interface rates, wherein the input interface is configured to carry data in time slots according to a transmission schedule corresponding to the constructed one or more elastic/flexible containers in the MAC layer, receiving (2308) a routing policy by which multiple optical data units of the optical network are transmitting data on to the input interface, and selecting (2310) data packets from the input interface according to the transmission schedule for processing through a network protocol stack implemented on the network apparatus for transmission to the multiple Ethernet outputs.
  • the method 2300 may be implemented to transfer data from an ingress point of a network to an egress point of the transmission network using the
  • FIG. 24 illustrates a block diagram of an example apparatus 2400 for receiving data from an optical network and transmitting over multiple Ethernet interfaces.
  • the apparatus 2400 includes a module for constructing (2404) one or more elastic/flexible containers operating on data at a media access control (MAC) layer, wherein each elastic/flexible container comprises a variable length data structure and variable logic resources, a module for configuring (2406) an input interface having an interface rate equal to a total of the multiple Ethernet interface rates, wherein the input interface is configured to carry data in time slots according to a transmission schedule corresponding to the constructed one or more elastic/flexible containers in the MAC layer, a module for receiving (2408) a routing policy by which multiple optical data units of the optical network are transmitting data on to the input interface, and a module for selecting (2410) data packets from the input interface according to the transmission schedule for processing through a network protocol stack implemented on the network apparatus for transmission to the multiple Ethernet outputs.
  • the output interface is configured in the form of one or multiple ODUflexes.
  • each elastic/flexible container further comprises at least some of a data cache, a first in first out (FIFO) structure and a logic circuit.
  • the method 2100 or 2300 may include constructing the one or more elastic/flexible containers such that there is one elastic/flexible container corresponding to each of the multiple Ethernet inputs.
  • the output interface rate is a linear combination of submultiples of at least two standard media independent interface rates.
  • the standard media independent interface rates include 5 Gbps, 10 Gbps, 25 Gbps, 40 Gbps, 50 Gbps, 100 Gbps, 200 Gbps, 300 Gbps, 400 Gbps and 1 Tbps.
  • achieving a submultiple rate for a given standard media independent interface rate by dividing a clock for the given standard media independent interface rate by an integer factor.
  • the linear combination may be obtained by increasing clock rate of at least one standard media independent interface by an integer factor.
  • FIG. 26 shows a flowchart example of a method 2600 of transferring data from one or more input data streams to a flexible Ethernet (FlexE) shim layer for transmission.
  • the method may be implemented in a network processor or another network apparatus such as a router or a switch that is used to transmit data to a transport network at an ingress point.
  • the method 2600 includes, at 2602, operating flexible containers to receive the one or more input data streams to generate, from each flexible container, one or more output streams.
  • the method 2600 includes, at 2604, processing one or more output streams from each flexible container in one or more reconciliation sublayers to generate 64-bit data signals.
  • the method 2600 includes, at 2606, providing, over a modified industry-standard interface, the 64-bit data signals to a FlexE shim layer in which the 64-bit data signals are each encoded using a 64B/66B encoding to generate logical serial data streams of 64B/66B blocks representing FlexE clients which are mapped to allocated timing slots for transmission.
  • the master calendar mechanism, e.g., as described in the FlexE specification document, may be used for the timing slot allocation.
  • the modified industry-standard interface may include one or more industry-standard interfaces having a same standard rate, or one or more MII interfaces having different standard rates, or one or more industry-standard interfaces having a non-standard rate;
  • each flexible container may correspond to a FlexE client and the processing rate of each flexible container may match a rate of a logical data stream for a corresponding FlexE client.
  • an apparatus for transferring data from one or more input data streams to a flexible Ethernet (FlexE) shim layer for transmission includes a number of flexible containers for receiving the one or more input data streams to generate, from each flexible container, one or more output streams, one or more reconciliation sublayer modules for processing the one or more output streams from each flexible container to generate 64-bit data signals, and a modified industry-standard interface for providing the 64-bit data signals to a FlexE shim layer in which the 64-bit data signals are each encoded using a 64B/66B encoding to generate logical serial data streams of 64B/66B blocks representing FlexE clients which are mapped to allocated timing slots for transmission.
  • the modified industry-standard interface comprises one or more industry-standard interfaces having a same standard rate, or one or more MII interfaces having different standard rates, or one or more industry-standard interfaces having a non-standard rate.
  • Each flexible container corresponds to a FlexE client and the processing rate of each flexible container matches a rate of a logical data stream for a corresponding FlexE client.
  • the method 2600 further includes performing, by the FlexE shim layer, idle insert/delete processing on the 64B/66B blocks according to IEEE 802.3 standard.
  • the method 2600 may also include inserting, prior to the transmission, timing information for clock synchronization.
  • the timing information is inserted according to the Precision Time Protocol of IEEE 1588 or the Synchronous Ethernet protocol.
  • the clock synchronization information is inserted in an operations administration and maintenance (OAM) protocol data unit (PDU) .
  • FIG. 27 shows a flowchart example of a method 2700 of transferring data from a FlexE shim layer to one or more output data streams during reception of the data.
  • the input data streams may be at a transport network ingress point, while the output data stream may be at a transport network egress point.
  • the method 2700 includes, at 2702, providing, over a modified industry standard interface, 64-bit data signals from a FlexE shim layer that decodes, using a 64B/66B decoding, logical serial data streams of 64B/66B blocks representing FlexE clients which are mapped to allocated timing slots in the data.
  • the data may have been allocated to timing slots using the master calendar mechanism, e.g., as described in the FlexE specification document.
  • the method 2700 includes, at 2704, processing the 64-bit data signals through one or more reconciliation sublayers to generate one or more data stream inputs to flexible containers.
  • the method 2700 includes, at 2706, operating the flexible containers to output the one or more output data streams from the received one or more input streams.
  • an apparatus for transferring data from a flexible Ethernet (FlexE) shim layer to one or more output data streams during reception of the data includes a modified industry standard interface that provides 64-bit data signals from a FlexE shim layer that decodes, using a 64B/66B decoding, logical serial data streams of 64B/66B blocks representing FlexE clients which are mapped to allocated timing slots in the data, one or more reconciliation sublayers to process the 64-bit data signals to generate one or more data stream inputs to flexible containers and flexible containers to output the one or more output data streams from the received one or more input streams.
  • the method 2700 includes performing clock synchronization using the above-described techniques for achieving clock synchronization.
  • the non-standard rate may be constructed by reducing a rate of a standard MII. Alternately, or in addition, the non-standard rate may be constructed by increasing a rate of a standard MII.
  • the non-standard rate may be constructed by changing the clock rate of a standard MII, by changing the data transfer rate of a SerDes interface, or by using a 4-level pulse amplitude modulation (PAM-4) data signal modulation technique, e.g., in place of NRZ modulation.
  • the non-standard rate is constructed by combining two or more standard MII rates to form a logic MII interface.
  • the non-standard rate is constructed by combining two or more different MII interfaces wherein one of the different MII interface rates is a rate that is adjusted from a standard rate.
  • the standard rate corresponds to a rate of a 100 Gigabit MII interface (CGMII) , a 10 Gigabit MII (XGMII) , a 10 Gigabit attachment unit interface (XAUI) or a 400 Gigabit MII interface (CDMII) , or another rate, e.g., a reduced media independent interface rate (RMII) , an RGMII, a quad serial GMII, a serial MII (SMII) , and so on.
  • the assignment of which input and output streams are to be processed by which flexible container may be performed by a stream distributor module.
  • the resources allocated to each flexible container e.g., hardware resources, may be proportional to the bandwidth of the corresponding input data stream processed by the flexible container.
  • the hardware resources may, e.g., be a cache space or a buffer.
  • some flexible containers may receive data from, or provide data to, multiple data streams and/or MII streams.
  • method 2600 further includes encoding the data into a logical serial stream as 64B/66B blocks and performing idle insert/delete processing on the logical serial stream according to the IEEE 802.3 standard. An example of this processing is described with reference to FIG. 28.
  • a technique for transmitting and receiving data using a FlexE shim includes an apparatus for transferring data from a first group of one or more input data streams to a flexible Ethernet (FlexE) shim layer for transmission, and transferring, from the FlexE shim layer to a first group of one or more output data streams for reception.
  • the apparatus includes a number of flexible containers for receiving the first group of one or more input data streams to generate, from each flexible container, a second group of one or more output streams, one or more reconciliation sublayer modules for processing the second group of one or more output streams from each flexible container to generate a first group of 64-bit data signals, and a modified industry-standard interface for providing the first group of 64-bit data signals to a FlexE shim layer in which the first group of 64-bit data signals are each encoded using a 64B/66B encoding to generate a first group of logical serial data streams of 64B/66B blocks representing FlexE clients which are mapped to a first group of allocated timing slots for transmission.
  • the one or more reconciliation sublayers further process the second group of 64-bit data signals to generate a second group of one or more data stream inputs to flexible containers.
  • the flexible containers further output the first group of one or more output data streams from the received second group of one or more input streams.
  • the modified industry-standard interface comprises one or more industry-standard interfaces having a same standard rate, or one or more MII interfaces having different standard rates, or one or more industry-standard interfaces having a non-standard rate.
  • each flexible container corresponds to a FlexE client and a processing rate of each flexible container matches a rate of a logical data stream for a corresponding FlexE client. Examples of transmitting embodiments are shown and described with respect to FIGs. 9, 10, 11 and 19, and examples of receive side operations are described with respect to FIG. 20.
  • the disclosed and other embodiments and the functional operations and modules described in this document can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this document and their structural equivalents, or in combinations of one or more of them.
  • the disclosed and other embodiments can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus.
  • the computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them.
  • data processing apparatus encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.
  • the apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
  • a propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus.
  • a computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program does not necessarily correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document) , in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code) .
  • a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • the processes and logic flows described in this document can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output.
  • the processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit) .
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read only memory or a random access memory or both.
  • the essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
  • a computer need not have such devices.
  • Computer readable media suitable for storing computer program instructions and data include all forms of non volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

Abstract

Techniques for use in Flexible Ethernet data communication include communicating data received on multiple Ethernet inputs having multiple interface rates to an optical network, including executing at least one media access control (MAC) protocol stack instance, constructing an elastic/flexible container operating on data at a MAC layer, configuring an output interface to carry data in time slots according to a transmission schedule, processing data packets received from the multiple Ethernet interfaces through a network protocol stack implemented on the network apparatus to generate processed multiple Ethernet data streams, allocating data packets from the processed multiple Ethernet data streams to time slots of an output data stream transmitted out of the output interface according to the transmission schedule, and using a routing policy by which multiple optical data units of the optical network are to receive the output data stream.

Description

CHANNELIZATION FOR FLEXIBLE ETHERNET
TECHNICAL FIELD
This document relates to a channelization method based on flexible Ethernet, and in particular to flexible Ethernet technology in the Optical Interworking Forum (OIF) field.
BACKGROUND
To satisfy the ever-increasing demand for data bandwidth, and continued pressure to keep capital and operational expenses low, technology vendors and network operators are looking for ways by which to use available network capacity as efficiently as possible. Optical Interworking Forum (OIF) organization in early 2015 set up a flexible Ethernet (Flex Ethernet or FlexE) project group to address some problems encountered in current data transmission networks. Flex Ethernet is expected to support varying payload bandwidth per wavelength due to optimization of the modulation format.
SUMMARY
This document discloses techniques for channelization and data bandwidth scheduling for flexible Ethernet traffic generation and transmission.
In one example aspect, a method is provided for transferring data from one or more input data streams to a flexible Ethernet (FlexE) shim layer for transmission and includes operating flexible containers to receive the one or more input data streams to generate, from each flexible container, one or more output streams; processing one or more output streams from each flexible container in one or more reconciliation sublayers to generate data signals; and providing, over a modified industry-standard interface, the data signals to a FlexE shim layer in which the data signals are encoded to generate logical serial data streams of encoded blocks representing FlexE clients which are mapped to allocated timing slots for transmission. Each flexible container and the processing in the one or more reconciliation sublayers enable the modified industry-standard interface to include one or more interfaces having respective rates to accommodate for transmission via the FlexE shim layer.
In another example aspect, a method and an apparatus for implementing a technique of transferring data from one or more input data streams to a FlexE shim layer for transmission are disclosed. The technique includes operating flexible containers to receive the one or more input data streams to generate, from each flexible container, one or more output streams, processing one or more output streams from each flexible container in one or more reconciliation sublayers to generate 64-bit data signals, and providing, over a modified industry-standard interface, the 64-bit data signals to a FlexE shim layer in which the 64-bit data signals are each encoded using a 64B/66B encoding to generate logical serial data streams of 64B/66B blocks representing FlexE clients which are mapped to allocated timing slots for transmission. The modified industry-standard interface comprises one or more industry-standard interfaces having a same standard rate, or one or more MII interfaces having different standard rates, or one or more industry-standard interfaces having a non-standard rate. In the technique, each flexible container corresponds to a FlexE client and a processing rate of each flexible container matches a rate of a logical data stream for a corresponding FlexE client.
In another example aspect, a method and an apparatus for implementing a technique of transferring data from a flexible Ethernet (FlexE) shim layer to one or more output data streams during reception of the data is disclosed. The technique includes providing, over a modified industry standard interface, 64-bit data signals from a FlexE shim layer that decodes, using a 64B/66B decoding, logical serial data streams of 64B/66B blocks representing FlexE clients which are mapped to allocated timing slots in the data, processing the 64-bit data signals through one or more reconciliation sublayers to generate one or more data stream inputs to flexible containers and operating the flexible containers to output the one or more output data streams from the received one or more input streams. The modified industry-standard interface comprises one or more industry-standard interfaces having a same standard rate, or one or more MII interface having different standard rates, or one or more industry-standard interfaces having a non-standard rate. Using the technique, each flexible container corresponds to a FlexE client and processing rate of each flexible container matches a rate of a logical data stream for a corresponding FlexE client.
In another example aspect, a method and an apparatus for a technique of communicating data from multiple Ethernet inputs having multiple interface rates to an optical network are disclosed. The technique includes constructing one or more elastic/flexible containers operating on data at a media access control (MAC) layer, wherein each elastic/flexible container comprises a variable length data structure, configuring an output interface having an interface rate equal to a total of the multiple interface rates, wherein the output interface is configured to carry data in time slots according to a transmission schedule, processing data packets received from the multiple Ethernet interfaces through a network protocol stack implemented on the network apparatus to generate processed multiple Ethernet data streams, allocating data packets from the processed multiple Ethernet data streams to time slots of an output data stream transmitted out of the output interface according to the transmission schedule, and communicating, to an optical network, a routing policy by which multiple optical data units of the optical network are to receive the output data stream.
In another example aspect, a method and apparatus for receiving data from an optical network and transmitting over multiple Ethernet interfaces are disclosed. The disclosed method and apparatus implement a technique that includes executing at least one MAC protocol stack instance on the network apparatus, constructing an elastic/flexible container operating on data at a MAC layer, wherein the elastic/flexible container comprises a variable length data structure, configuring an input interface having an interface rate equal to a total of the multiple interface rates, wherein the input interface is configured to carry data in time slots according to a transmission schedule, receiving a routing policy by which multiple optical data units of the optical network are transmitting data on to the input interface, and selecting data packets from the input interface according to the transmission schedule for processing through a network protocol stack implemented on the network apparatus for transmission to the multiple Ethernet outputs.
In yet another example aspect, a technique for transmitting and receiving data using a FlexE shim includes an apparatus for transferring data from a first group of one or more input data streams to a flexible Ethernet (FlexE) shim layer for transmission, and transferring, from the FlexE shim layer to a first group of one or more output data streams for reception. The apparatus includes a number of flexible containers for receiving the first group of one or more input data streams to generate, from each flexible container, a second group of one or more output streams, one or more reconciliation sublayer modules for processing the second group of one or more output streams from each flexible container to generate a first group of 64-bit data signals, and a modified industry-standard interface for providing the first group of 64-bit data signals to a FlexE shim layer in which the first group of 64-bit data signals are each encoded using a 64B/66B encoding to generate a first group of logical serial data streams of 64B/66B blocks representing FlexE clients which are mapped to a first group of allocated timing slots for transmission. The one or more reconciliation sublayers further process the second group of 64-bit data signals to generate a second group of one or more data stream inputs to flexible containers. The flexible containers further output the first group of one or more output data streams from the received second group of one or more input streams. The modified industry-standard interface comprises one or more industry-standard interfaces having a same standard rate, or one or more MII interfaces having different standard rates, or one or more industry-standard interfaces having a non-standard rate. In the technique, each flexible container corresponds to a FlexE client and a processing rate of each flexible container matches a rate of a logical data stream for a corresponding FlexE client.
These and other aspects, and their implementations and variations are set forth in the drawings, the description and the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows a block diagram of an example of a 100G line card with multiple Traffic Management (TM) chips.
FIG. 2 illustrates an example of a bottleneck of fixed-rate Ethernet with respect to the current flexible IP and optical transmission synergetic networks.
FIG. 3 is a block diagram example depicting the location of FlexE SHIM layer within the IEEE802.3 stack.
FIG. 4 shows an example of data channelization for a FlexE client data stream channelized to form a calendar.
FIG. 5 shows another example of data channelization for a FlexE client data stream channelized to form a calendar.
FIG. 6 illustrates an example of flexible Ethernet networking by a router and an OTN device connected via a port.
FIG. 7 illustrates an example of flexible Ethernet networking by a router and an OTN device connected with four PHY bonding together.
FIG. 8 shows an example of an optical communication network.
FIG. 9 illustrates an example of mapping of a media independent interface (MII) corresponding to an elastic/flexible container.
FIG. 10 illustrates an example of a mapping of an MII corresponding to a 150G elastic/flexible container.
FIG. 11 illustrates an example of a mapping of an MII corresponding to a 135G elastic/flexible container.
FIG. 12 illustrates an example extended 10 Gigabit MII (XGMII) interface.
FIG. 13 illustrates an example of a ten Gigabit attachment unit interface (XAUI) .
FIG. 14 illustrates an example of three FlexE client flows set in a router.
FIG. 15 is a distribution diagram showing an example of a FlexE calendar.
FIG. 16 illustrates an example cache structure of a chip.
FIG. 17 illustrates an example of a free cache organized using a linked queue.
FIG. 18 illustrates an example of allocation and freeing of a cache space.
FIG. 19 illustrates an example of a data-send operation of a data transmission apparatus.
FIG. 20 illustrates an example of a data-receive operation of a data transmission apparatus.
FIG. 21 illustrates an example flowchart for a method of data transmission.
FIG. 22 illustrates an example of a data transmission apparatus.
FIG. 23 illustrates an example flowchart for a method of receiving data transmissions.
FIG. 24 illustrates an example of a data reception apparatus.
FIG. 25 illustrates an example of a data structure.
FIG. 26 shows an example flowchart for a method of data communication.
FIG. 27 shows an example flowchart for another method of data communication.
FIG. 28 shows an example of a FlexE mux structure.
FIG. 29 shows an example of a FlexE de-mux structure.
FIG. 30 shows an example transport network that is unaware of FlexE carrying FlexE data.
FIG. 31 shows an example of a transport network that is aware of FlexE carrying FlexE data.
DETAILED DESCRIPTION
Ethernet is a universally used data connection interface. Currently deployed Ethernet products are often named after the connection rate achieved by the physical layer, e.g., 10 Mbit/s, 1 Gbps, and so on. The current Ethernet interfaces are fixed in bandwidth, and thus are not able to use the flexibility of bandwidth that can be achieved by packet switching equipment, because such equipment only provides a single stream with fixed bandwidth externally, e.g., because Ethernet is available only at a few fixed rates. Thus, present day technologies fail to make use of the transmission bandwidth flexibility that can be achieved by aggregating, or combining, traffic from multiple Ethernet devices with different connection rates.
The Flex Ethernet project managed by the OIF is led by Cisco, and co-initiated by Juniper, Finisar, Xilinx and other manufacturers. Flex Ethernet defines channelization, binding and sub-rate functions by standardizing a flexible Ethernet (FlexE) MAC interface. FlexE enables a standard Ethernet physical media dependent layer (PMD) to connect one or more Ethernet MACs, and provides efficient channel binding at standard and non-standard rates, so that Ethernet switches and routers can be configured with different bandwidths on demand, thus increasing the flexibility of bandwidth configuration of data center networks. Unless otherwise noted, the terms used in the present document are consistent with their meaning in the FlexE Implementation Agreement Draft 1.1, Release date July 2015 (IA OIF-FLEXE-01.0), which is incorporated herein in its entirety.
When using a sub-rate function, the transmission pipeline rate may be lower than the PMD rate of Ethernet, and a matching of the rate of the router port to the transfer rate may be performed to support a non-standard Ethernet rate. In this context, binding may refer to binding to an Ethernet physical layer. For example, a 200G MAC can be supported on two bound 100GBASE-R physical layers. For example, a large Ethernet rate may be divided into multiple (e.g., three or four) sub-rates, and resources corresponding to each sub-rate may be bound to that sub-rate. This mapping between a high bandwidth connection and multiple low bandwidth connections is also sometimes called channelization.
The Flex Ethernet initiative is also expected to extend the multi-link gearbox (MLG) specification, which is a protocol specification for data transmission, to be physical medium dependent (PMD) , and will provide efficient channel for binding standard and non-standard rates. 
For optical networks, with the development of ODUflex (Optical channel Data Unit flexible), flexible grid, bandwidth variable transponders (BVT), flexible reconfigurable optical add-drop multiplexers (ROADM) and flexible OTN, streams with flexible bandwidth can be carried via the optical network. This means that the fiber-optic network has the infrastructure capability that can be used for traffic management with flexible bandwidths. Similarly, data packet forwarding devices, such as Ethernet routers or switches, can handle substantially flexible bandwidth streams.
FIG. 1 is a block diagram illustration of an example of a line card 100 that communicatively couples Ethernet data connections with optical transmission equipment. In some embodiments, the technology disclosed in the present document can be implemented on a line card that offers data connectivity between Ethernet electrical signals and optical signals carrying data traffic in an optical network.
FIG. 2 shows an example block diagram to illustrate the bottleneck that fixed-rate Ethernet presents to the current flexible IP and optical coordinated transport networks. During processing of data packets in some devices, the packets flow through the network processor (NP) units and traffic management (TM) units 202 (both of these units can control the flow of bandwidth), e.g., as depicted by flexible rate output streams 210. Using these two units, an Ethernet router/switch can easily generate different streams with different rates and bandwidths on demand at the Ethernet interface 204. However, for an Ethernet router/switch, the bandwidth of an Ethernet interface defined by IEEE 802.3 is a fixed rate, such as 10G, 40G, 100G and 400Gbps, e.g., as depicted by 208. Due to limitations of fixed-rate Ethernet interfaces, the flexibility of an Ethernet router/switch is not fully utilized for feeding data to flexible optical equipment 206 that includes such optical transport devices as a flexible optical module, an ODUflex, a flex ROADM, etc.
The potential for flexibility that grouping devices already have can be used to build a flexible network, but a new type of flexible Ethernet technology is useful in enabling such a network. The FlexE framework proposed in the OIF provides a common mechanism to support multiple Ethernet MAC layer rates, which may or may not correspond to any existing Ethernet PHY rate. Because of its more flexible and universal channel bonding characteristics, the resulting channel flexibility and sub-rate flexibility, as well as the important feature of not needing to modify the PMD, will enable flexible Ethernet to leverage future market applications and usher in an emerging Ethernet and optical transport market.
FIG. 3 shows an example of the positioning of the FlexE SHIM layer 302 in the IEEE802.3 stack 300 that implements the flexible Ethernet technology.
After data streams having certain bandwidths reach the MAC layer, e.g., through the MII interface, they form parallel data streams (e.g., an XGMII interface 32-bit parallel data stream). These data streams are combined into a 64-bit data signal TXD <63: 0>. Then, the FlexE SHIM layer 302 performs 64B/66B encoding on the data from the MII interface, and generates a 66-bit block. The 66-bit block is formed of two parts: one part is the 2-bit synchronization header, and the other part is a 64-bit payload. The logical serial stream of 64B/66B blocks is called a FlexE client in FlexE technology.
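For illustration of the 66-bit block structure just described (a 2-bit synchronization header followed by a 64-bit payload), the following minimal Python sketch assembles a data block from eight payload bytes. Only the data-block case is shown; the control block formats and the full IEEE 802.3 64B/66B block-type handling are omitted, and the helper name is an illustrative assumption rather than part of the described embodiments.

```python
# Minimal sketch of forming a 66-bit block: 2-bit sync header + 64-bit payload.
# For a data block the sync header is 0b01; control blocks (header 0b10) and the
# full IEEE 802.3 block-type handling are omitted in this illustration.

def encode_data_block(payload_bytes: bytes) -> int:
    """Return a 66-bit integer: sync header 0b01 followed by the 64-bit payload."""
    if len(payload_bytes) != 8:
        raise ValueError("a data block carries exactly 8 payload bytes")
    payload = int.from_bytes(payload_bytes, "big")
    return (0b01 << 64) | payload        # 2-bit header occupies the top bits

block = encode_data_block(b"ABCDEFGH")
print(f"{block:066b}"[:2], "+ 64-bit payload")   # header bits followed by payload
```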
As seen in the example from FIG. 3, the flexible Ethernet technology adds a FlexE SHIM layer 302 on the original Scramble (scramble) as defined in conventional 802.3 stack, and bypasses the original 64B/66B encoding and decoding process, while locating the 64B/66B encoding and decoding process to the top of the SHIM layer.
A FlexE group may be composed of 1 to n 100GBASE-R Ethernet physical layer devices (PHY). Each physical layer device uses most PCS functions described in section 82 of the IEEE draft standard 802.3-2015, including the PCS channel distribution, channel tag insertion, alignment and correction. All PHYs in a FlexE group may use the same physical layer clock. The FlexE payload carried on each PHY of the FlexE group has the format of a logical serial stream of valid 64B/66B blocks, except for the positions occupied by the alignment markers of the PCS channels (which cannot carry FlexE payload).
Each FlexE client may represent a 64B/66B block logical serial stream of an Ethernet MAC layer. In the MAC layer, an elastic/flexible container may be configured to operate at a variable speed to match the rate of the MAC stream. According to the OIF standard, the MAC layer of a FlexE client can run at a rate of 10, 40 and m×25Gb/s, where m is an integer. The 64B/66B encoding is based on IEEE standard 802.3-2015 Figure 82-4.
The FlexE mechanism functions by using a calendar, and allocates 66B block locations on each PHY of the FlexE group to each FlexE client. A calendar may use a time slot as its basic unit. In some embodiments, the time slot may correspond to a 5Gbps granularity, with a length of 20 time slots for every 100G of FlexE group capacity.
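Given the 5 Gbps slot granularity and the 20 slots per 100G of FlexE group capacity mentioned above, the number of calendar slots needed by a FlexE client follows directly from its rate. The short sketch below illustrates this arithmetic; the assumption that client rates are multiples of 5 Gbps is made only for the example.

```python
# Illustrative slot arithmetic: 5 Gbps per calendar slot, 20 slots per 100G PHY.

SLOT_GBPS = 5
SLOTS_PER_100G_PHY = 20

def slots_for_client(client_rate_gbps: int) -> int:
    """Number of 5G calendar slots needed to carry a FlexE client of the given rate."""
    if client_rate_gbps % SLOT_GBPS:
        raise ValueError("client rates are assumed to be multiples of 5 Gbps")
    return client_rate_gbps // SLOT_GBPS

def group_capacity_slots(num_100g_phys: int) -> int:
    """Total calendar length for a FlexE group of n bound 100GBASE-R PHYs."""
    return num_100g_phys * SLOTS_PER_100G_PHY

print(slots_for_client(25), "slots for a 25G client")        # -> 5
print(group_capacity_slots(4), "slots in a 4-PHY group")      # -> 80
```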
FIG. 28 shows an example of functions performed in a FlexE mux in the transmit direction. Data from FlexE clients is passed through a 64B/66B encoding module and an idle  insert/delete module before being distributed and inserted into a master calendar. What is presented for insertion into the slots of the FlexE master calendar is a stream of 64B/66B encoded blocks encoded per IEEE Std 802.3-2015 Table 82-4 which has been rate-matched to other clients of the same FlexE shim. This stream of 66B blocks might be created directly at the required rate using back-pressure from a network processor. It might come from a single-lane Ethernet PHY such as 10G or 25G, where the process of rate-matching involves both idle insertion/deletion, plus converting the rate-aligned stream from the 4-byte alignment of IEEE Std 802.3-2015 clause 49 to the 8-byte alignment of IEEE Std 802.3-2015 clause 82. The stream of blocks may come from a multi-lane Ethernet PHY, where the lanes need to be deskewed and re-interleaved with alignment markers removed prior to performing idle insertion/deletion to rate match with other clients of the same FlexE shim. Or the stream may have come from another FlexE shim, for example, connected across an OTN network, where all that is required is to perform idle insertion/deletion to rate match with other clients of the same FlexE shim.
With reference to FIG. 15, for a FlexE group comprising n bound 100GBASE-R PHYs, the logical length of the master calendar is 20n (1502). The blocks allocated in the master calendar are distributed to n sub-calendars, each with a length of 20, on each PHY of the FlexE group (1504). The allocation distributes 66B blocks in groups of 20 at a time in a simple cyclic order, to facilitate adding a PHY to the FlexE group without the need to change the existing calendar slots allocated to FlexE clients.
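The distribution just described, in which a master calendar of logical length 20n is handed out to n per-PHY sub-calendars of length 20 in groups of 20 blocks, can be illustrated by the following hedged sketch. The representation of slot entries as client identifiers is an assumption made only for the example.

```python
# Illustrative sketch of distributing a master calendar of logical length 20*n
# into n per-PHY sub-calendars of length 20, twenty blocks at a time.

def distribute_master_calendar(master, num_phys):
    """master: sequence of 20*num_phys slot entries (e.g., FlexE client IDs).
    Returns a list of num_phys sub-calendars, each of length 20."""
    if len(master) != 20 * num_phys:
        raise ValueError("master calendar length must be 20 * n")
    return [list(master[i * 20:(i + 1) * 20]) for i in range(num_phys)]

# Example: a 2-PHY group (master calendar length 40) carrying clients A and B.
master = ["A"] * 20 + ["B"] * 20
subs = distribute_master_calendar(master, 2)
print(len(subs), "sub-calendars of length", len(subs[0]))
```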
FIG. 29 shows example functions of the FlexE demux (the FlexE shim in the receive direction). The 100GBASE-R Lower Layers, PMA, lane deskew, interleave, AM removal, descramble, which represent the layers of each 100GBASE-R PHY below the PCS, are used exactly as specified in IEEE Std 802.3-2012. The PCS lanes are recovered, deskewed, reinterleaved, and the alignment markers are removed. The aggregate stream is descrambled.
Calendar Interleaving and Overhead Extraction: The calendar slots of each PHY are logically interleaved in a specified order. The FlexE overhead is recovered from each PHY.
LF Generator: In the case that any PHY of the FlexE group has failed (PCS_Status=FALSE) or overhead lock or calendar lock has not been achieved on the overhead of any of the PHYs, LF is generated to be demapped from the master calendar for each FlexE PHY.
66B Block Extraction from Master Calendar: The 66B blocks are extracted from the master calendar positions assigned to each FlexE client in a pre-specified order.
Idle Insertion/Deletion, 66B Decoding: Where these functions are performed and whether they are inside or outside the FlexE implementation is embodiment-specific. The 66B blocks could be delivered directly to a network processor. If delivered to a single-lane PHY, idle insertion/deletion may be used to increase the rate to the PHY rate, realigning to 4-byte boundaries in the process (for 10G or 25G) and recoding 64B/66B according to clause 49. For a multi-lane PHY, idle insertion/deletion is used to increase the rate to the PHY rate less the space needed for alignment markers, the blocks are distributed to PCS lanes with AM insertion. For a FlexE client mapped over OTN, idle insertion/deletion may be used to adjust the rate as required for the OTN mapping.
A SHIM layer of flexible Ethernet realizes the basic flexible Ethernet functions by mapping FlexE client data streams to the corresponding calendar time slot set.
However, the current flexible Ethernet technology only considers splitting the standard rate of a port into different sub-rates according to need, without addressing how to distribute traffic to channels with various sub-rates, and there is no clear method for how the data stream is assigned to each sub-channel. Currently, the data stream transmitted from the MAC layer passes through the MII. For a 400G connection rate, this interface is called CDMII, and the name of the MII corresponding to other rates changes accordingly. For example, the interface name for 10Gb/s is the XAUI interface, or the XGMII interface, and the media-independent interface name for 100Gb/s is the CGMII interface, which is a 64-bit parallel interface (the data distribution is based on 64-bit units); the data are then passed to the physical coding sublayer for processing. Because the current MII interface rate is fixed at the standard rate, it cannot match the rate of a FlexE client, which prevents the potential of the open architecture of flexible Ethernet from being fully exploited.
Due to the current limitations of flexible Ethernet technology, multiple FlexE client data streams from the router can only share a logical path, which is named a logic PHY. If these FlexE client data streams are destined for different OTN nodes, appropriate treatments must be carried out after these MAC data streams are transmitted to the OTN equipment; otherwise the error shown in FIG. 8 will occur.
In the current proposal of the flexible Ethernet standard, there is no clear explanation of how each FlexE client is mapped to time slots in the calendar. The techniques disclosed in the present document can be used to, among other aspects, allocate a FlexE client data stream to channels with different sub-rates.
Brief overview
The FlexE technology currently provides no methods for allocating FlexE client data streams to different sub-rate channels, nor for traffic shaping and rate matching. The present document provides, among other things, a method of stream distribution to different sub-rate channels.
In some embodiments, an apparatus to distribute traffic to various sub-rate channels includes one or more optical modules that support FlexE technology and various stack methods that support FlexE.
In some embodiments, in order to structure flexible channel bandwidths from the MAC layer to a FlexE SHIM layer, and finally to the Physical layer, and to match the rates of the MAC layer, the FlexE SHIM and the Physical layer with each other, flexibility and changeability solutions can be implemented at one or more of the MAC layer, the FlexE SHIM layer and the Physical layer.
In some embodiments, an elastic/flexible container structure can be constructed in the MAC layer in order to carry the MAC stream with the changed rate. A flexible MII interface in the SHIM layer may be used to carry data streams with a different rate from an upper layer. The modified MII interface can be adapted to MAC layer data stream with changed bandwidth.
After a data stream having a certain bandwidth reaches the MAC layer, it travels through the reconciliation sublayer (RS) and MII interfaces of different types configured to form parallel data streams with a variety of different rates, and the data streams are combined into a 64-bit data signal TXD <63: 0>. In some embodiments, the reconciliation sublayers map the logical MAC physical layer service primitives to and from standard electrical interfaces used by the PHYs. In some embodiments, the RS maps signals from the upper layer (MAC) to physical signaling primitives understood by the sublayer below the RS. A FlexE SHIM module encodes the data from the MII interface using 64B/66B encoding. The resulting 64B/66B encoded data is mapped to the corresponding slot on demand in accordance with the general principles of flexible Ethernet.
In some embodiments, the technique may include constructing in the MAC layer an elastic/flexible container that matches the target FlexE client's various rates. The components of the container include, but are not limited to, a cache, a FIFO (buffer), logic resources, related circuit resources, etc., for loading FlexE client data streams. The elastic/flexible containers corresponding to the data streams of different FlexE clients may be logically separated from each other.
In some embodiments, the technique may include constructing an MII interface type set that matches commonly used FlexE client rates. In some embodiments, the set members include:
Already-existing various MII interfaces, MII interface with the rate of 5Gbps, new logic MII interface formed by combining various MII interfaces, and a new logic MII interface formed by combining n same-type interfaces of each MII interface, and n is a natural number.
In some embodiments, the various MII interfaces still correspond to their matching RSs, and the new logic MII interfaces formed by various combinations still correspond to their matching logic RSs.
In some embodiments, a new MII interface, unlike the current standard Ethernet rates, has a rate of 5Gbps, which is constructed by reducing the clock rate of the XGMII interface with 10Gbps to half of the original one. In some embodiments, a clock rate change of N/M may be used, where N and M are both integers.
In some embodiments, the new MII interface, unlike the current standard Ethernet rates, has a rate of 150Gbps, which is constructed by increasing the clock rate of the CGMII interface with 100Gbps to 1.5 times of the original one.
In some embodiments, the new MII interface, unlike the current standard Ethernet rates, has a rate of 300Gbps, which is constructed by reducing the clock rate of the CDMII interface with 400Gbps to 0.75 times of the original one.
In some embodiments, the new MII interface, unlike current rate of standard Ethernet, has a rate of 5Gbps, which is constructed by increasing the clock rate of XGMII interface with 100Gbps to four times of the original one.
In some embodiments, the above various new non-standard-rate MII interfaces still correspond to their matching RS sublayers.
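The clock-scaling construction described in the preceding paragraphs amounts to multiplying a standard MII rate by a rational factor N/M. The hedged Python sketch below reproduces example ratios of the kind given above (5G from XGMII at half clock, 150G from a 100G interface at 1.5x clock, 300G from CDMII at 0.75x clock); the dictionary of standard rates is an illustrative assumption.

```python
# Illustrative computation of non-standard MII rates obtained by scaling the
# clock of a standard interface by a rational factor N/M (N, M integers).
from fractions import Fraction

STANDARD_MII_GBPS = {"XGMII": 10, "CGMII": 100, "CDMII": 400}

def scaled_rate(interface: str, n: int, m: int) -> Fraction:
    """Rate of an MII whose clock is scaled by N/M relative to the standard one."""
    return Fraction(STANDARD_MII_GBPS[interface]) * Fraction(n, m)

print(scaled_rate("XGMII", 1, 2))   # 10G  * 1/2 -> 5 Gbps
print(scaled_rate("CGMII", 3, 2))   # 100G * 3/2 -> 150 Gbps
print(scaled_rate("CDMII", 3, 4))   # 400G * 3/4 -> 300 Gbps
```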
Each FlexE client data stream corresponds, through the RS, to a member MII interface or logic MII interface in the MII interface type set, so as to generate 64B/66B data to be mapped into a corresponding ODUflex, and the ODUflex corresponds to a time slot assigned to the MAC client in a 20n Master Calendar.
An optical module that supports FlexE technology does not need to modify the PMD, and methods that support the general principles of FlexE can refer to the oif2015.127.02 document.
FIG. 4 depicts a system 400 in which each single instance of a FlexE client provides data being coupled to a single instance of ODUflex. In comparison, FIG. 5 depicts a system 500 in which a single FlexE client supports data communication over traffic split into two MII flows, each having its own 64B/66B conversion stage and its own reconciliation stage, which individually may have a standard MII rate, that are fed to the ODUflex stage.
The FlexE channelization can be accomplished by implementing the following processing operations.
Preparatory operation:
-Construct, in the MAC layer, an elastic/flexible container that matches the target FlexE client's various rates. The components of the container include, but are not limited to, cache, FIFO, logic resources, the related circuit resources, etc., for loading FlexE client data streams. The elastic/flexible containers corresponding to the data streams of different FlexE clients are separated from each other.
-Construct an MII interface type set that matches commonly used FlexE client rates. The set members comprise already-existing various MII interfaces, MII interface with the rate of 5Gbps, new type interface formed by combining various MII interfaces, and a new type interface formed by combining n same-type interfaces of each MII interface, and n is a natural number.
According to the demanded FlexE client rate, a logical MII interface is constructed that matches such rate. The various MII interfaces still correspond to their matching RSs, and the new logic MII interfaces formed by various combinations still correspond to their matching logic RSs.
Next, in a first operation, the MAC layer of a FlexE client may operate at a rate of 10, 40 or m x 25 Gbps. In some embodiments, the MAC layer of a FlexE client may also operate at a multiple of the following rates: 5 Gb/s, 10 Gb/s, 25 Gb/s, 40 Gb/s, 50 Gb/s, 100 Gb/s, 200 Gb/s, 300 Gb/s, 400 Gb/s, 1 Tb/s. Alternatively or additionally, multiples, or linear combinations of sub-multiples, of the rates may be used (e.g., combinations using weights that are rational numbers, where the numerator and denominator of each multiple are integers).
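As a numerical illustration of the preceding paragraph, a target client rate can be expressed as a linear combination of sub-multiples of the standard rates, each sub-multiple obtained by dividing a standard rate by an integer factor. The example rates and coefficients in this hedged Python sketch are assumptions, not values mandated by the embodiments.

```python
# Illustrative check that a target rate can be written as a linear combination of
# sub-multiples of standard MII rates (rate_i / k_i, with k_i an integer divisor).
# The chosen standard rates and coefficients are example values only.

STANDARD_RATES_GBPS = [5, 10, 25, 40, 50, 100, 200, 300, 400, 1000]

def combine(terms):
    """terms: list of (standard_rate, divisor, multiplier); returns the combined rate."""
    return sum(rate / divisor * mult for rate, divisor, mult in terms)

# Example: 150 Gbps = 2 x (100/2) + 1 x (50/1), using only sub-multiples of standard rates.
example = [(100, 2, 2), (50, 1, 1)]
assert all(rate in STANDARD_RATES_GBPS for rate, _, _ in example)
print(combine(example), "Gbps")   # -> 150.0
```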
In a second operation, each MAC data stream passes through the RS sublayer and MII interfaces in a one-to-one correspondence (or 1: n mapping relationship, where n is a natural number). The 64B/66B data generated (belonging to the FlexE SHIM layer from this point) are mapped to a corresponding ODUflex. In some embodiments, the MII interface can also be replaced with XAUI, XGMII interfaces, etc., as appropriate.
In a third operation, the ODUflex corresponds to time slots assigned to the MAC client in the 20n Master Calendar. The data inside the ODUflex are put into the time slot set to form a multiframe.
The multiframe is formed by FlexE data blocks, separated every 20 × 1024 blocks by a corresponding overhead block. The structure of the overhead block is shown in FIG. 25, and the time slot structure in the multiframe is shown in FIG. 15.
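To illustrate the multiframe structure just described, the sketch below emits payload blocks in calendar order and inserts an overhead block at the start of each section of a configurable length. The section length is a parameter (the text above uses 20 × 1024 blocks); a small value is used here only so the example output stays readable, and the block representation is an assumption.

```python
# Illustrative sketch of building a multiframe: payload blocks are emitted in
# calendar order and an overhead block is inserted once per `spacing` payload
# blocks. The spacing and block encoding here are placeholders for the example.

def build_multiframe(payload_blocks, spacing):
    frame = []
    for i, block in enumerate(payload_blocks):
        if i % spacing == 0:
            frame.append(("OH", i // spacing))   # overhead block for this section
        frame.append(("DATA", block))
    return frame

print(build_multiframe(range(8), spacing=4))
```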
In the fourth operation, the router/switch setting policies enable multiple data streams (for example, data streams with a specific destination MAC address, a specific destination IP address, or data stream with a specific protocol, and the setting mode is not limited) to be sent to a destination output port that supports FlexE technology in the local device;
This operation can be achieved via the command lines, network management software, or SDN controllers.
After the data stream reaches the MAC layer, it passes through the MII interface and through the processing of the FlexE SHIM to form a logic serial stream of 64B/66B blocks, that is, a FlexE client.
In the fifth operation, the FlexE client thus generated is processed in the order of the above first operation to the fourth operation.
In one advantageous aspect, the method described here achieves the effect of channelized allocation of traffic to each of the different sub-rate channels, with each FlexE client's data flowing into the channel of its own time slots and not being mixed with other clients' data. This technique thus gives business applications a very flexible means to control the pipeline. Parameters of a sub-channel can be flexibly controlled through either the command line or an SDN network controller.
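At a very high level, the operations above amount to giving each FlexE client a set of time slots proportional to its rate and then serving the per-client containers according to that schedule. The Python sketch below illustrates only this scheduling aspect under assumed values (5 Gbps slot granularity, a simple proportional schedule, an output rate equal to the total of the input rates); it is not a normative implementation of the described operations.

```python
# Illustrative sketch (not a normative implementation) of channelized scheduling:
# the output interface rate equals the sum of the input rates, and packets drawn
# from per-input elastic/flexible containers are placed into time slots according
# to a schedule that gives each input a slot count proportional to its rate.

from collections import deque

SLOT_GBPS = 5  # assumed calendar slot granularity

def build_schedule(input_rates_gbps):
    """Return (output_rate, schedule), where schedule[i] is the index of the
    input whose container feeds time slot i."""
    output_rate = sum(input_rates_gbps)            # interface rate = total of inputs
    schedule = []
    for idx, rate in enumerate(input_rates_gbps):
        schedule += [idx] * (rate // SLOT_GBPS)    # slots proportional to each rate
    return output_rate, schedule

def transmit(containers, schedule):
    """One pass over the schedule: pull one packet per slot from the container
    that owns the slot and append it to the output data stream."""
    output_stream = []
    for owner in schedule:
        if containers[owner]:
            output_stream.append((owner, containers[owner].popleft()))
    return output_stream

if __name__ == "__main__":
    rates = [10, 25, 40]                           # three Ethernet inputs, in Gbps
    containers = [deque(f"pkt{i}-{n}" for n in range(3)) for i in range(len(rates))]
    out_rate, schedule = build_schedule(rates)
    print("output interface rate:", out_rate, "Gbps, slots per cycle:", len(schedule))
    print(transmit(containers, schedule))
```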
Example Embodiments
Preparatory operation:
-Construct in the MAC layer an elastic/flexible container that matches the target FlexE client's various rates. The components of the container include, but are not limited to, cache, FIFO, logic resources, the related circuit resources, etc., for loading FlexE client data streams. The elastic/flexible containers corresponding to the data streams of different FlexE clients are separated from each other.
-Construct a MII interface type set that matches commonly used FlexE client rates. The set members comprise already-existing various MII interfaces, MII interface with the rate of 5Gbps, new type interface formed by combining various MII interfaces, and a new type interface formed by combining n same-type interfaces of each MII interface, and n is a natural number.
According to the demanded FlexE client rate, a logical MII interface is constructed that matches such rate.
The various MII interfaces still correspond to their matching RSs, and the new logic MII interfaces formed by various combinations still correspond to their matching logic RSs. In some embodiments, the new MII interface, unlike the current standard Ethernet rates, has a rate of 5Gbps, which is constructed by reducing the clock rate of the XGMII interface with 10Gbps to half of the original one. In some embodiments, the new MII interface, unlike the current standard Ethernet rates, has a rate of 150Gbps, which is constructed by increasing the clock rate of the CGMII interface with 100Gbps to 1.5 times of the original one.
In some embodiments, the new MII interface, unlike current rate of standard Ethernet, has a rate of 300Gbps, which is constructed by reducing the clock rate of CDMII interfaces with 400Gbps to 0.75 times of the original one.
In some embodiments, the new MII interface, unlike current rate of standard Ethernet, has a rate of 5GBps, which is constructed by increasing the clock rate of XGMII interface with 100Gbps to four times of the original one.
The above various new non-standard-rate MII interfaces still correspond to their matching RS sublayers.
In a first operation, a MAC layer of a FlexE client may operate at a rate of 10, 40, or m × 25 Gb/s. The MAC layer rate of a FlexE client may also be a multiple of the following rates: 5 Gb/s, 10 Gb/s, 25 Gb/s, 40 Gb/s, 50 Gb/s, 100 Gb/s, 200 Gb/s, 300 Gb/s, 400 Gb/s, 1 Tb/s.
In a second operation, each MAC data stream passes through the RS sublayer and MII interfaces in a one-to-one correspondence (or 1: n mapping relationship, where n is a natural number). The 64B/66B data generated (belonging to the FlexE SHIM layer from this point) are mapped to a corresponding ODUflex; said MII interface can also be replaced with XAUI, XGMII interfaces, etc.
In a third operation, the ODUflex corresponds to time slots assigned to the MAC client in the 20n Master Calendar. The data inside the ODUflex are put into the time slot set to form a multiframe.
In a next operation, the router/switch setting policies are set to enable multiple data streams (for example, data streams with a specific destination MAC address, a specific destination IP address, or data stream with a specific protocol, and the setting mode is not limited) to be sent to a destination output port that supports FlexE technology in the local device;
This operation can be achieved via the command lines, network management software, or SDN controllers.
After the data stream reaches the MAC layer, a logic serial stream of 64B/66B blocks forms; the logic serial stream is called a FlexE client in FlexE technology;
In a next operation, the FlexE client generated is processed in the order of above operations.
With the above processing, FlexE channelization is realized. Some example embodiments are discussed herein.
Embodiment 1
This embodiment illustrates an example of how to construct an elastic/flexible container, and how to associate sub-streams having different rates with the elastic/flexible container to which the sub-streams belong and with the corresponding cache rate.
In designing an Ethernet switch and a control chip, elastic/flexible containers that match various target rates of FlexE clients may be constructed. To maximize the use of cache resources while meeting the design conditions, the resources that form the elastic/flexible container may be established as a shared structure, for example a shared cache structure, shared TM traffic management, etc.
Elastic/flexible container planning is considered in both the in and out directions, and therefore the corresponding resources should be divided into two sets, sending and receiving, which are shown in FIG. 19 and FIG. 20.
The transmission direction uses three modules, including a stream distributor, an elastic/flexible container and a channel distributor. The receiving direction uses three modules, including a stream aggregator, an elastic/flexible container and a channel aggregator.
Referring to FIG. 19, the stream distributor module is a switching module that is responsible for the formation of sub-streams; its input is from the input port of a router/switch, and the output of the stream distributor comprises n independent sub-streams that are sent to flexible containers. The stream distributor recognizes the input streams, and assigns each of them to one of n output sub-streams. The input of the channel distributor is a sub-stream from the elastic/flexible container. A few designated physical channels (e.g., time slot sets) are used for loading sub-streams based on the configuration. The channel distributor is responsible for loading a sub-stream onto the specified physical channel. The channel aggregator functions as the opposite of the channel distributor: it receives the stream from the specified channel, restores the data packets, and then sends them to the elastic/flexible container. In the receiving direction, the data streams from the channels are grouped and aggregated by the channel aggregator, the packets are restored and written into a flexible container. Then, the packets are read out by the stream aggregator from all the flexible containers, and a complete stream in the receiving direction is combined and recovered.
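A minimal sketch of the transmit-side chain of FIG. 19 described above (stream distributor, elastic/flexible containers, channel distributor) follows. The destination-based classifier and the channel map from sub-streams to time slots are illustrative assumptions; the actual configuration would be determined by the policies discussed elsewhere in this document.

```python
# Illustrative sketch of the transmit-side chain of FIG. 19: a stream distributor
# assigns incoming packets to per-sub-stream flexible containers, and a channel
# distributor loads each container onto its configured physical channel (time
# slot set). The classifier and channel map below are assumptions for the example.

from collections import deque

class StreamDistributor:
    def __init__(self, classifier, num_substreams):
        self.classifier = classifier
        self.containers = [deque() for _ in range(num_substreams)]

    def ingest(self, packet):
        """Recognize the input packet and append it to the matching container."""
        self.containers[self.classifier(packet)].append(packet)

class ChannelDistributor:
    def __init__(self, channel_map):
        self.channel_map = channel_map     # sub-stream index -> list of time slots

    def load(self, containers):
        """Load packets from each container onto that sub-stream's time slots."""
        out = {}
        for idx, container in enumerate(containers):
            for slot in self.channel_map[idx]:
                if container:
                    out.setdefault(slot, []).append(container.popleft())
        return out

# Example: destination-based classification of packets into two sub-streams.
dist = StreamDistributor(classifier=lambda pkt: 0 if pkt["dst"] == "A" else 1,
                         num_substreams=2)
for pkt in [{"dst": "A", "id": 1}, {"dst": "B", "id": 2}, {"dst": "A", "id": 3}]:
    dist.ingest(pkt)
chan = ChannelDistributor(channel_map={0: [0, 2], 1: [1]})
print(chan.load(dist.containers))
```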
Because the bandwidth of each sub-stream is not necessarily the same, the elastic/flexible container resources may also change. FIG. 16 shows the internal cache structure of a switch chip. Assume an on-chip data packet cache space with a capacity of 180 KB, divided into 720 units, each called a page and each with a capacity of 256 bytes, for storing received data packets. Cache space is allocated in units of pages for each data packet, and a larger packet can occupy more than one page of cache space. In addition to the packet cache region, the data cache space also stores several groups of queues for transmit descriptors. Each queue is dedicated to one transmission port in the chip and stores the transmit descriptors for that port. Each transmit descriptor contains the index number addressing the data packet cache space and associated control information. The chip inside the router/switch device organizes free cache space as a linked queue. A queue represented by a linked list is referred to as a linked queue for short. A linked queue is uniquely determined by two pointers indicating the head and the tail of the queue (referred to as the head pointer and the tail pointer). It is a singly linked list with both a head pointer and a tail pointer, and it operates as a FIFO. The addresses of all free cache spaces in the switch and control chip are managed using such a linked list. This linked list is a free address queue, denoted free list, which is formed by the free address head pointer Free_Hptr and the tail pointer Free_Tptr. All operations on free cache management center on the free list of free addresses.
FIG. 17 shows a list of states when cache space blocks 0, 3, 5, 1, 7, 719 and 6 are free, as well as the contents of the CRAM and of the Free_Hptr and Free_Tptr registers. The free block list is initialized before use. After initialization, the content of the Free_Hptr register is 0, the content of word 0 of the CRAM is 1, the content of word 1 is 2, ..., the content of word 718 is 719, and the content of the Free_Tptr register is 719.
The allocation of free cache space takes a free cache unit from the head of the list, while the release of cache space adds the already-processed cache unit to the end of the list. FIG. 18 depicts the allocation and release of cache space. When the list has only one item, that is, when the Free_Hptr register has the same content as the Free_Tptr register, the value in the Free_Tptr register should be modified accordingly after an allocation operation. Setting both the Free_Hptr and Free_Tptr registers to all 1s is defined as an invalid state; when the list is empty, the contents of both Free_Hptr and Free_Tptr are all 1s, and at that point there is no cache space to be allocated. For the operation of releasing a free unit, the Free_Hptr and Free_Tptr registers only need to be set to the newly released free unit number. While the embodiment described herein assumes an on-chip buffer of 180 KB as an example, larger on-chip buffers may be used for processing consistent with the above description, including on-chip buffers with storage capacities 3-4 orders of magnitude greater than 180 KB, or even larger buffers.
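A minimal C sketch of the free-address linked queue described above is given below. It assumes 720 pages and uses an array standing in for the CRAM to hold the next-pointer of each page; the names cram, free_hptr and free_tptr mirror the description, but the code itself is illustrative only.

#include <stdint.h>

#define NUM_PAGES 720u
#define INVALID   0xFFFFu   /* all ones: list empty / pointer invalid */

static uint16_t cram[NUM_PAGES];   /* cram[i] = index of the page linked after page i */
static uint16_t free_hptr, free_tptr;

/* Initialization: pages 0..718 link to the following page; page 719 is the tail. */
void free_list_init(void)
{
    for (uint16_t i = 0; i + 1 < NUM_PAGES; i++)
        cram[i] = i + 1;
    free_hptr = 0;
    free_tptr = NUM_PAGES - 1;
}

/* Allocation: take a page from the head of the list; return INVALID if the list is empty. */
uint16_t page_alloc(void)
{
    if (free_hptr == INVALID)
        return INVALID;                  /* no cache space left */
    uint16_t page = free_hptr;
    if (free_hptr == free_tptr)          /* single-item list becomes empty */
        free_hptr = free_tptr = INVALID;
    else
        free_hptr = cram[page];
    return page;
}

/* Release: append an already-processed page to the tail of the list. */
void page_free(uint16_t page)
{
    if (free_hptr == INVALID) {          /* list was empty */
        free_hptr = free_tptr = page;
    } else {
        cram[free_tptr] = page;
        free_tptr = page;
    }
}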
In the above cache design example, setting the number of pages managed by the linked queue of an elastic/flexible container according to the expected size of that container effectively corresponds to the rate of the FlexE client. The ratio of the numbers of pages allocated to FlexE clients with different rates may be made equal to the ratio of the respective rates. In this way, dynamic and flexible adjustment of cache resources for each sub-stream is achieved under a shared-cache condition. Depending on the expected size of an elastic container and the size or number of the pages managed by the corresponding linked list of that container, the resource allocation can be adjusted dynamically so that the ratio of pages assigned to sub-streams of different rates equals the ratio of those rates.
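The proportional sharing of pages can be illustrated with a short calculation. The client rates below are assumed for the example; the 720-page total comes from the 180 KB cache described above.

#include <stdio.h>

#define TOTAL_PAGES 720

int main(void)
{
    /* Assumed FlexE client rates in Gb/s; only the ratios matter. */
    int rate[] = { 10, 25, 20 };
    int n = sizeof(rate) / sizeof(rate[0]);
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += rate[i];

    /* Pages assigned to each client's elastic container, proportional to its rate. */
    for (int i = 0; i < n; i++)
        printf("client %d: %d pages\n", i + 1, TOTAL_PAGES * rate[i] / sum);
    return 0;
}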
Embodiment 2
FIG. 9 is a diagram corresponding to this embodiment. After the elastic/flexible containers are constructed, the MII interfaces may be constructed next.
Suppose three FlexE clients have rates of 400G, 25G and 150G, respectively, as shown in the example illustration of FIG. 9. Three elastic/flexible containers are constructed in the MAC layer to match the target FlexE client rates and to carry the three FlexE clients' data streams. The elastic/flexible containers corresponding to different FlexE clients' data streams are isolated from each other.
MII interface combinations matching commonly used FlexE client rates are constructed. If a FlexE client has a rate of 400G, the corresponding MII interface is constructed to comprise four 100G MII interfaces, or eight 50G MII interfaces, or eighty 5G MII interfaces.
Embodiment 3
FIG. 10 is a diagram corresponding to this embodiment. As previously described, a combination of different types of MII interfaces can be constructed. For example, if a FlexE client has a rate of 150G, the corresponding MII interface can be constructed to comprise one 100G MII interface and one 50G MII interface. Similarly, combinations of different types and numbers of MII interfaces can also be constructed; for example, if a FlexE client has a rate of 135G, the corresponding MII interface can be constructed to comprise two 50G MII interfaces, one 20G MII interface, and three 5G MII interfaces, as depicted in FIG. 11.
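One way to derive such a combination is a simple greedy decomposition over a set of available MII rates, sketched below in C. The rate set is illustrative, and other decompositions of the same target, such as the two-50G, one-20G, three-5G split mentioned above, are equally valid.

#include <stdio.h>

int main(void)
{
    /* Candidate MII rates in Gb/s, largest first (illustrative set). */
    int mii_rates[] = { 400, 200, 100, 50, 40, 25, 20, 10, 5 };
    int n = sizeof(mii_rates) / sizeof(mii_rates[0]);

    int target = 135;   /* FlexE client rate to be matched, e.g. 135G */
    for (int i = 0; i < n && target > 0; i++) {
        int count = target / mii_rates[i];
        if (count > 0) {
            printf("%d x %dG MII\n", count, mii_rates[i]);
            target -= count * mii_rates[i];
        }
    }
    /* For 135G this prints: 1 x 100G MII, 1 x 25G MII, 1 x 10G MII. */
    return 0;
}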
Embodiment 4
The previous embodiments described how to plan and build a system for transferring FlexE packet traffic. In some embodiments, the system can be pre-set.
In the networking environment in which routers and OTN devices are connected, as shown in FIG. 6, the Ethernet port on the router has a rate of 400G. The MII interface group (four 100G MII interfaces) corresponding to the 400G rate, constructed in accordance with Embodiment 2, is called the logical MII interface corresponding to the FlexE client's rate. FlexE flexible Ethernet is enabled on the port, without performing flexible Ethernet binding to the PHYs of other ports. The port of the OTN equipment also has a rate of 400G, and an MII interface group (comprising four 100G MII interfaces) corresponding to the 400G rate can be constructed in the same manner in accordance with Embodiment 2.
Three FlexE client data flows, FlexE client 1, FlexE client 2 and FlexE client 3, are set up in the router and are able to pass through the flexible Ethernet port, with respective rates of 10 Gb/s, 25 Gb/s and 20 Gb/s; the setting method is shown in FIG. 14.
The destination MAC address of the data stream from port1 is set to MAC1, and this stream enters the elastic/flexible container having a capacity of 10G.
The destination MAC address of the data streams from port2, port3, port4, port5, port6, port7, port8 and port9 is set to MAC2, and these streams enter the elastic/flexible container having a capacity of 25G.
The destination MAC address of the data streams from port9 and port10 is set to MAC3, and these streams enter the elastic/flexible container having a capacity of 20G. In some embodiments, the above configurations and the assignment of port IDs to MACs in the router are done by the network management software.
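Purely as a sketch, the configuration of this embodiment could be captured in a table such as the one below. The structure and field names are hypothetical; the port lists, MAC labels and capacities simply restate the assignments above.

/* Configuration entries set by network management: which input ports feed
 * which destination MAC, and the capacity of the elastic container behind it. */
struct container_cfg {
    const char *ports;        /* input ports whose traffic is redirected */
    const char *dst_mac;      /* destination MAC address label */
    int         capacity_gbps; /* capacity of the elastic/flexible container */
};

static const struct container_cfg cfg[] = {
    { "port1",         "MAC1", 10 },
    { "port2-port9",   "MAC2", 25 },
    { "port9-port10",  "MAC3", 20 },
};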
Each configured FlexE client data stream corresponds to an MII media independent interface, or a logical MII interface, in a one-to-one correspondence via the RS sublayer. The data within each MAC data stream form parallel data streams via the MII interface and rely on 64B/66B encoding, mapping and coding techniques to generate 66-bit blocks for use by the FlexE flexible Ethernet SHIM layer. As described with reference to FIG. 3, under the existing FlexE specification, the FlexE SHIM layer is generally understood to include the 64B/66B conversion or encoding module.
Next, the generated 64B/66B data are mapped into a corresponding ODUflex. The ODUflex corresponds to a time slot set assigned to the MAC client in the 20n Master Calendar, and the data inside the ODUflex are put into this time slot set.
Due to the above processing, the MAC data streams mapped into their corresponding ODUflex correspond to time slot sets, or groups, assigned to the MAC client in the 20n Master Calendar. These time slot sets and ODUflex constitute a logical PHY that belongs to the FlexE client data stream and is channelized. The MAC data stream can therefore be handled as a whole and be transmitted to the destination node intact.
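For illustration, the slot counts implied by this mapping can be computed as below, assuming the 5G slot granularity of the FlexE master calendar and the client rates of this embodiment. The contiguous calendar positions printed here are only one possible assignment; in practice the positions are configured.

#include <stdio.h>

#define SLOT_GBPS 5   /* granularity of a master-calendar time slot */

int main(void)
{
    /* Assumed FlexE client rates from this embodiment, in Gb/s. */
    int client_rate[] = { 10, 25, 20 };
    int n = sizeof(client_rate) / sizeof(client_rate[0]);

    int next_slot = 0;   /* next free slot in the 20n master calendar */
    for (int i = 0; i < n; i++) {
        int slots = client_rate[i] / SLOT_GBPS;
        printf("FlexE client %d: %d x 5G slots, calendar positions %d..%d\n",
               i + 1, slots, next_slot, next_slot + slots - 1);
        next_slot += slots;
    }
    return 0;
}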
Without the above-described processing flow, in accordance with the existing flexible Ethernet technology, the three FlexE client data streams may not be distinguishable from each other; the logical PHY to which they are sent will carry all three FlexE clients' data streams and will deliver the data to a single destination node. If the three FlexE clients' data streams are destined to different nodes, some FlexE client's data stream will be sent to the wrong destination OTN node. To avoid this situation, the logical PHY data would have to be unloaded at the local OTN node, and a layer 2 processing chip would perform a switching operation on the Ethernet data carried as payload; after switching, the data would be re-packaged and then sent to the different destinations.
With the processing mechanism of some described embodiments, different FlexE client data streams are channelized before entering the shared physical pipe, which eliminates the need for a subsequent Layer 2 switch in the OTN equipment and truly realizes the sub-rate function that flexible Ethernet pursues.
Embodiment 5
In this embodiment, the port PHYs of a router are composed together. The networking environment of connected routers and OTN devices is shown in FIG. 7, wherein four Ethernet ports with rates of 100 Gb/s are bound into a flexible Ethernet port with a rate of 400 Gb/s.
The port rate of the OTN equipment is also 400G; similarly, four Ethernet ports with rates of 100 Gb/s are bound into a flexible Ethernet port with a rate of 400 Gb/s. The corresponding detection and configuration are done by network management.
Three FlexE client data flows, FlexE client 1, FlexE client 2 and FlexE client 3, are set up in the router and are able to pass through the flexible Ethernet port, with respective rates of 10 Gb/s, 25 Gb/s and 20 Gb/s. Each FlexE client data stream corresponds to an MII media independent interface (or logical MII interface) in a one-to-one correspondence via the RS sublayer. The data within each FlexE client data stream (the MAC data stream) form parallel data streams via the MII interface and rely on 64B/66B encoding, mapping and coding techniques to generate 66-bit blocks for use by the FlexE flexible Ethernet SHIM layer. According to the specification defined by the flexible Ethernet technology, the point from which the 66-bit blocks are formed belongs to the FlexE SHIM layer.
Next, the generated 64B/66B data are mapped into a corresponding ODUflex. The ODUflex corresponds to a time slot set assigned to the MAC client in the 20n Master Calendar, and the data inside the ODUflex are put into these time slots; the other processes are the same as in the conventional FlexE flexible Ethernet technology.
Due to the above processing, the MAC data streams mapped into their corresponding ODUflex correspond to time slot sets assigned to the MAC client in the 20n Master Calendar. These time slot sets and ODUflex constitute a logical PHY that belongs to the FlexE client data stream and is channelized. The FlexE client data stream can therefore be handled as a whole and be transmitted to the destination node intact.
Without the above-described processing flow, in accordance with the existing flexible Ethernet technology, the three FlexE client data streams will not be distinguished; the logical PHY to which they are sent will carry all three FlexE clients' data streams and will deliver the data to a single destination node. If the three FlexE clients' data streams are respectively destined to different nodes, some FlexE client's data stream will be sent to the wrong destination OTN node. To avoid this situation, the logical PHY data would have to be unloaded at the local OTN node, and a layer 2 processing chip would perform a switching operation on the Ethernet data carried as payload; after switching, the data would be re-packaged and then sent to the different destinations.
With the described processing mechanism, different FlexE client data streams are channelized before entering the shared physical pipe, which eliminates the need for a subsequent Layer 2 switch in the OTN equipment and truly realizes the sub-rate function that flexible Ethernet pursues.
Embodiment 6
This embodiment describes another composition of the port PHY. In the networking environment of connected routers and OTN devices, as shown in FIG. 7, four Ethernet ports with rates of 100 Gb/s are bound into a flexible Ethernet port with a rate of 400 Gb/s. Three FlexE client data flows, FlexE client 1, FlexE client 2 and FlexE client 3, are set up in the router and are able to pass through the flexible Ethernet port, with respective rates of 10 Gb/s, 150 Gb/s and 20 Gb/s. Each FlexE client data stream corresponds to an RS sublayer and an MII media independent interface in the following manner:
FlexE client 1 corresponds to one MII interface.
FlexE client 2 corresponds to an MII group formed by a combination of three 50G MII interfaces (a logical MII interface).
FlexE client 3 corresponds to an MII group formed by a combination of two 10G MII interfaces (a logical MII interface).
The data within the FlexE client 1 data stream form parallel data streams via the MII interface, rely on 64B/66B encoding, mapping and coding techniques to generate 66-bit blocks, and are mapped to two 5G time slots in the first Master Calendar. The data within the FlexE client 2 data stream form three groups of parallel data streams via the three 50G MII interfaces, rely on 64B/66B encoding, mapping and coding techniques to generate three streams of 66-bit blocks, and are mapped to thirty 5G time slots in the second and third Master Calendars. The data within the FlexE client 3 data stream form two groups of parallel data streams via the two 10G MII interfaces, rely on 64B/66B encoding, mapping and coding techniques to generate two streams of 66-bit blocks, and are mapped to four 5G time slots in the fourth Master Calendar. After the mapping, the data can be sent by the FlexE SHIM layer.
According to the FlexE protocol, the point from which the 66-bit blocks are formed belongs to the FlexE SHIM layer. Next, the generated 64B/66B data are mapped into a corresponding ODUflex. The ODUflex corresponds to a time slot set assigned to the MAC client in the 20n Master Calendar, and the data inside the ODUflex are put into these time slots. The remaining processes are the same as in the conventional FlexE flexible Ethernet technology.
Due to the above processing, the FlexE client data streams mapped into their corresponding ODUflex correspond to time slot sets assigned to the MAC client in the 20n Master Calendar. These time slot sets and ODUflex constitute a logical PHY that belongs to the FlexE client data stream and is channelized. The FlexE client data stream can therefore be handled as a whole and be transmitted to the destination node intact.
Without the above-described processing flow, in accordance with the existing flexible Ethernet technology, the three FlexE client data streams will not be distinguished; the logical PHY to which they are sent will carry all three FlexE clients' data streams and will deliver the data to a single destination node. If the three FlexE clients' data streams are respectively destined to different nodes, some FlexE client's data stream will be sent to the wrong destination OTN node. To avoid this situation, the logical PHY data would have to be unloaded at the local OTN node, and a layer 2 processing chip would perform a switching operation on the Ethernet data carried as payload; after switching, the data would be re-packaged and then sent to the different destinations.
With the processing mechanism described herein, different FlexE client data streams are channelized before entering the shared physical pipe, which eliminates the need for a subsequent Layer 2 switch in the OTN equipment and truly realizes the sub-rate function that flexible Ethernet pursues.
Embodiment 7
In Embodiment 7, existing MII interfaces are used to construct MII interfaces with different rates.
Some embodiments may consider reusing existing interfaces. A new MII interface with a rate of 5 Gbps, unlike any current standard Ethernet rate, is constructed by reducing the transmitting and receiving clocks of a 10 Gbps XGMII interface to half of their original rate.
A new MII interface with a rate of 150 Gbps, unlike any current standard Ethernet rate, is constructed by increasing the transmitting and receiving clocks of a 100 Gbps CGMII interface to 1.5 times their original rate.
A new MII interface with a rate of 300 Gbps, unlike any current standard Ethernet rate, is constructed by reducing the transmitting and receiving clocks of a 400 Gbps CDMII interface to 0.75 times their original rate.
A new MII interface with a rate of 800 Gbps, unlike any current standard Ethernet rate, is constructed by increasing the transmitting and receiving clocks of a 400 Gbps CDMII interface to twice their original rate.
The above new non-standard-rate MII interfaces still correspond to their matching RS sublayers.
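The scaling relationships of this embodiment can be summarized numerically with the short sketch below. The table entries follow the description above; the structure and names are illustrative only.

#include <stdio.h>

struct derived_rate {
    const char *base;      /* standard interface whose clock is scaled */
    double      base_gbps; /* standard rate in Gb/s */
    double      factor;    /* scaling applied to the TX/RX clocks */
};

int main(void)
{
    struct derived_rate t[] = {
        { "XGMII (10G)",   10.0, 0.5  },  /* -> 5G MII   */
        { "CGMII (100G)", 100.0, 1.5  },  /* -> 150G MII */
        { "CDMII (400G)", 400.0, 0.75 },  /* -> 300G MII */
        { "CDMII (400G)", 400.0, 2.0  },  /* -> 800G MII */
    };
    for (int i = 0; i < 4; i++)
        printf("%s x %.2f -> %.0fG\n",
               t[i].base, t[i].factor, t[i].base_gbps * t[i].factor);
    return 0;
}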
From FIG. 12:
TXD [31:0]: data transmission channel, 32-bit parallel data.
RXD [31:0]: data receiving channel, 32-bit parallel data.
TXC [3:0]: transmit channel control signals. When TXC = 0, the signals transmitted on TXD are data; when TXC = 1, the signals transmitted on TXD are control characters. TXC [3:0] respectively correspond to TXD [31:24], TXD [23:16], TXD [15:8] and TXD [7:0].
RXC [3:0]: receive channel control signals. When RXC = 0, the signals transmitted on RXD are data; when RXC = 1, the signals transmitted on RXD are control characters. RXC [3:0] respectively correspond to RXD [31:24], RXD [23:16], RXD [15:8] and RXD [7:0].
TX_CLK: reference clock for TXD and TXC. The clock frequency is (1/2) * 156.25 MHz, and data are sampled on both the rising and falling edges of the clock signal: (1/2) * 156.25 MHz * 2 * 32 = 5 Gbps.
RX_CLK: reference clock for RXD and RXC. The clock frequency is (1/2) * 156.25 MHz, and data are sampled on both the rising and falling edges of the clock signal.
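The 5 Gbps figure follows from the clock frequency, the double-data-rate sampling, and the 32-bit bus width; a one-line check of the arithmetic stated above:

#include <stdio.h>

int main(void)
{
    double clk_hz = 156.25e6 / 2.0;  /* TX_CLK / RX_CLK frequency */
    int    edges  = 2;               /* data sampled on both rising and falling edges */
    int    width  = 32;              /* TXD/RXD bus width in bits */
    printf("%.2f Gbps\n", clk_hz * edges * width / 1e9);  /* prints 5.00 Gbps */
    return 0;
}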
From FIG. 13:
Due to electrical characteristics, the maximum transmission distance of the PCB traces of an XGMII interface is only 7 cm, and XGMII interfaces have too many line connections, which is inconvenient for practical applications. Therefore, in practice, XGMII interfaces are usually replaced with XAUI interfaces. XAUI is the 10 Gigabit attachment unit interface. Based on XGMII, XAUI extends the physical reach of the XGMII interface, increases the transmission distance of PCB traces to 50 cm, and allows traces to be routed on the back side of the board.
The source XGMII divides the transmitted/received 32-bit-wide data stream into four separate lanes, each lane corresponding to a byte. After 8B/10B encoding is completed by the XGXS (XGMII Extender Sublayer), the four lanes correspond to four independent channels of XAUI, with a XAUI port rate of (1/2) * 2.5 Gbps * 1.25 * 4 = 6.25 Gbps, which can carry 5 Gbps signals.
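The corresponding line-rate arithmetic, as a check of the values in the paragraph above:

#include <stdio.h>

int main(void)
{
    double lane_baud = 2.5e9 / 2.0;  /* halved lane rate, matching the 5G example above */
    double overhead  = 1.25;         /* 8B/10B encoding expands 8 bits to 10 bits */
    int    lanes     = 4;            /* XAUI uses four serial lanes */
    double port_rate = lane_baud * overhead * lanes;
    printf("XAUI port rate: %.2f Gbps (carries %.0f Gbps of payload)\n",
           port_rate / 1e9, port_rate / overhead / 1e9);
    return 0;
}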
With reference to FIG. 30 and FIG. 31, an operational problem associated with transfer of FlexE data over a transport network is described.
FIG. 30 shows an example transport network that is unaware of the FlexE data it carries. Starting from the left side of FIG. 30 to the right side, data traffic from a transmitting FlexE shim is shown to be sent over four fiber connections through the transport network to the receiving FlexE shim. Data packets from a single FlexE client may thus be carried over multiple different physical connections, e.g., the optical fibers in FIG. 30. In practice, the physical distance between the transmitting shim and the receiving shim may be hundreds or even thousands of kilometers. As a result, the data packets for the same transmitting FlexE client may experience large differences in propagation times during transmission from the transmitting shim to the receiving shim. While Ethernet allows for some clock adjustments to remove differential delays or skews, the differential delays experienced in long haul networks may be too large to overcome by the currently prescribed FlexE technologies.
FIG. 31 shows an example of a transport network that is aware of the FlexE data it carries. A similar clock skew problem may exist when data packets from the same FlexE client at FlexE shim 3102 travel through two different paths (the 150G path or the 25G/50G path) to the receiving FlexE shim 3104. This clock skew may be due to the differential propagation delay and also to the processing delay through the protocol stack implementation at the shim 3106.
In order to overcome such clock skew problems, in some embodiments, a timing skew correction mechanism similar to the Precision Time Protocol (PTP) specified in IEEE 1588, which is incorporated by reference herein, may be utilized. While PTP was specified for improving time synchronization in a local area network with a smaller physical reach, e.g., a few hundred meters, its basic principles can be applied to achieve timing synchronization for FlexE data transport through long haul networks. The equipment running one of the FlexE shims may be selected or chosen as the timing server and may provide accurate timing information to the other equipment. The clock synchronization may be performed at a protocol sublayer between the FlexE shim and the physical layer, e.g., in the PCS.
Alternatively, or additionally, in some embodiments, a mechanism similar to synchronous Ethernet (SyncE) may be used to correct clock skews. In this embodiment, clock information may be passed between the PHY layers of all receiving and transmitting devices and may be used to correct clock skews. In some embodiments, a very high precision clock source, e.g., a master clock, may be made available in the network. The precision of this clock source may be on the order of 10^-11 clock inaccuracy.
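A PTP-style exchange estimates the offset between a timing server and a client from four timestamps. The sketch below shows the generic PTP offset and delay calculation with assumed timestamp values; it is not a FlexE-specific mechanism.

#include <stdio.h>

int main(void)
{
    /* Example timestamps in nanoseconds (assumed values):
     * t1: server sends Sync, t2: client receives it,
     * t3: client sends Delay_Req, t4: server receives it. */
    double t1 = 1000.0, t2 = 1460.0, t3 = 2000.0, t4 = 2380.0;

    double offset = ((t2 - t1) - (t4 - t3)) / 2.0;  /* client clock minus server clock */
    double delay  = ((t2 - t1) + (t4 - t3)) / 2.0;  /* one-way propagation delay */
    printf("offset = %.1f ns, delay = %.1f ns\n", offset, delay);
    return 0;
}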
In some embodiments, the clock synchronization is achieved by using an operations administration and maintenance (OAM) protocol data unit (PDU) that is identified by a specific Ethernet frame header. The synchronization may be achieved at the FlexE physical layer by processing OAM PDUs after the 64B/66B encoding is performed.
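Purely as an illustration of such an OAM PDU, the structure below shows the kind of fields that could be involved. The field names, the opcode and the idea of a dedicated Ethertype value are hypothetical; the description only states that the OAM PDU is identified by a specific Ethernet frame header. The struct is a conceptual sketch, not a packed wire-format layout.

#include <stdint.h>

/* Hypothetical clock-synchronization OAM PDU carried in an Ethernet frame;
 * only the identification fields and the timestamp payload are shown. */
struct oam_sync_pdu {
    uint8_t  dst_mac[6];
    uint8_t  src_mac[6];
    uint16_t ethertype;     /* a dedicated value marks the frame as an OAM PDU */
    uint8_t  opcode;        /* e.g. 0 = timestamp announcement (assumed) */
    uint64_t tx_timestamp;  /* transmit time inserted after 64B/66B encoding */
};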
FIG. 21 illustrates a flowchart of an example method 2100 for communicating data from multiple Ethernet inputs having multiple interface rates to an optical network. The method 2100 includes constructing (2104) one or more elastic/flexible containers operating on data at a media access control (MAC) layer, wherein each elastic/flexible container comprises a variable length data structure and variable logical resources. The method 2100 includes configuring (2106) an output interface having an interface rate equal to a total of the multiple Ethernet interface rates to carry data in time slots according to a transmission schedule corresponding to the constructed one or more elastic/flexible containers in the MAC layer. The method 2100 includes processing (2108) data packets received from the multiple Ethernet interfaces through the one or more elastic/flexible containers to generate multiple processed Ethernet data streams. The method 2100 includes allocating (2110) data packets from the processed multiple Ethernet data streams according to time slots of the transmission schedule to generate an output data stream. The method 2100 includes communicating (2112), to an optical network, a routing policy by which multiple optical data units of the optical network are to receive the output data stream.
FIG. 22 illustrates a block diagram of an example apparatus 2200 for communicating data from multiple Ethernet inputs having multiple interface rates to an optical network. The apparatus 2200 includes a module for constructing (2204) one or more elastic/flexible containers operating on data at a media access control (MAC) layer, wherein each elastic/flexible container comprises a variable length data structure and variable logic resources; a module for configuring (2206) an output interface having an interface rate equal to a total of the multiple Ethernet interface rates to carry data in time slots according to a transmission schedule corresponding to the constructed one or more elastic/flexible containers in the MAC layer; a module for processing (2208) data packets received from the multiple Ethernet interfaces through the one or more elastic/flexible containers to generate processed multiple Ethernet data streams; a module for allocating (2210) data packets from the processed multiple Ethernet data streams according to time slots of the transmission schedule to generate an output data stream; and a module for communicating (2212), to an optical network, a routing policy by which multiple optical data units of the optical network are to receive the output data stream.
FIG. 23 illustrates a flowchart of an example method 2300 for receiving data from an optical network and transmitting over multiple Ethernet interfaces. The method 2300 includes constructing (2304) one or more elastic/flexible containers operating on data at a media access control (MAC) layer, wherein each elastic/flexible container comprises a variable length data structure and variable logic resources; configuring (2306) an input interface having an interface rate equal to a total of the multiple Ethernet interface rates, wherein the input interface is configured to carry data in time slots according to a transmission schedule corresponding to the constructed one or more elastic/flexible containers in the MAC layer; receiving (2308) a routing policy by which multiple optical data units of the optical network transmit data onto the input interface; and selecting (2310) data packets from the input interface according to the transmission schedule for processing through a network protocol stack implemented on the network apparatus for transmission to the multiple Ethernet outputs. The method 2300 may be implemented to transfer data from an ingress point of a network to an egress point of the transmission network using the FlexE technology and the flexible/elastic container described herein.
FIG. 24 illustrates a block diagram of an example apparatus 2400 for receiving data from an optical network and transmitting over multiple Ethernet interfaces. The apparatus 2400 includes a module for constructing (2404) one or more elastic/flexible containers operating on data at a media access control (MAC) layer, wherein each elastic/flexible container comprises a variable length data structure and variable logic resources; a module for configuring (2406) an input interface having an interface rate equal to a total of the multiple Ethernet interface rates, wherein the input interface is configured to carry data in time slots according to a transmission schedule corresponding to the constructed one or more elastic/flexible containers in the MAC layer; a module for receiving (2408) a routing policy by which multiple optical data units of the optical network transmit data onto the input interface; and a module for selecting (2410) data packets from the input interface according to the transmission schedule for processing through a network protocol stack implemented on the network apparatus for transmission to the multiple Ethernet outputs.
With reference to methods 2100, 2300 and apparatus 2200, 2400, in some embodiments, the output interface is configured in the form of one or multiple ODUflexes. In some embodiments, each elastic/flexible container further comprises at least some of a data cache, a first-in-first-out (FIFO) structure and a logic circuit. In some embodiments, the method 2100 or 2300 may include constructing the one or more elastic/flexible containers such that there is one elastic/flexible container corresponding to each of the multiple Ethernet inputs. In some embodiments, the output interface rate is a linear combination of submultiples of at least two standard media independent interface rates.
In some embodiments, the standard media independent interface rates include 5 Gbps, 10 Gbps, 25 Gbps, 40 Gbps, 50 Gbps, 100 Gbps, 200 Gbps, 300 Gbps, 400 Gbps and 1 Tbps. In some embodiments, in method 2100, a submultiple rate for a given standard media independent interface rate is achieved by dividing a clock for the given standard media independent interface rate by an integer factor. In some embodiments, the linear combination may be obtained by increasing the clock rate of at least one standard media independent interface by an integer factor.
FIG. 26 shows a flowchart example of a method 2600 of transferring data from one or more input data streams to a flexible Ethernet (FlexE) shim layer for transmission. The method may be implemented in a network processor or another network apparatus such as a router or a switch that is used to transmit data to a transport network at an ingress point.
The method 2600 includes, at 2602, operating flexible containers to receive the one or more input data streams to generate, from each flexible container, one or more output streams.
The method 2600 includes, at 2604, processing one or more output streams from each flexible container in one or more reconciliation sublayers to generate 64-bit data signals.
The method 2600 includes, at 2606, providing, over a modified industry-standard interface, the 64-bit data signals to a FlexE shim layer in which the 64-bit data signals are each encoded using a 64B/66B encoding to generate logical serial data streams of 64B/66B blocks representing FlexE clients which are mapped to allocated timing slots for transmission. In some embodiments, the master calendar mechanism, e.g., as described in the FlexE specification document, may be used for the timing slot allocation.
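For reference, the 64B/66B framing used here prepends a 2-bit synchronization header to each 64-bit payload. The following is a minimal sketch only: the sync header values (0b01 for an all-data block, 0b10 for a block carrying control characters) follow IEEE 802.3, while the struct layout, bit packing order and the omission of scrambling are simplifications.

#include <stdint.h>

/* A 66-bit block represented as a 2-bit sync header plus a 64-bit payload. */
struct block_66b {
    uint8_t  sync;     /* 0b01 = all-data block, 0b10 = block containing control */
    uint64_t payload;  /* 64 data bits (scrambling is omitted in this sketch) */
};

/* Encode eight data octets into one 66-bit data block. */
struct block_66b encode_data_block(const uint8_t data[8])
{
    struct block_66b b = { .sync = 0x1 /* 0b01 */, .payload = 0 };
    for (int i = 0; i < 8; i++)
        b.payload |= (uint64_t)data[i] << (8 * i);
    return b;
}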
The modified industry-standard interface may include one or more industry-standard interfaces having a same standard rate, or one or more MII interfaces having different standard rates, or one or more industry-standard interfaces having a non-standard rate.
During implementation of method 2600, each flexible container may correspond to a FlexE client and the processing rate of each flexible container may match a rate of a logical data stream for a corresponding FlexE client.
In some embodiments, an apparatus for transferring data from one or more input data streams to a flexible Ethernet (FlexE) shim layer for transmission includes a number of flexible containers for receiving the one or more input data streams to generate, from each flexible container, one or more output streams; one or more reconciliation sublayer modules for processing the one or more output streams from each flexible container to generate 64-bit data signals; and a modified industry-standard interface for providing the 64-bit data signals to a FlexE shim layer in which the 64-bit data signals are each encoded using a 64B/66B encoding to generate logical serial data streams of 64B/66B blocks representing FlexE clients, which are mapped to allocated timing slots for transmission. The modified industry-standard interface comprises one or more industry-standard interfaces having a same standard rate, or one or more MII interfaces having different standard rates, or one or more industry-standard interfaces having a non-standard rate. Each flexible container corresponds to a FlexE client, and the processing rate of each flexible container matches a rate of a logical data stream for the corresponding FlexE client. In some embodiments, the method 2600 further includes performing, by the FlexE shim layer, idle insert/delete processing on the 64B/66B blocks according to the IEEE 802.3 standard. In some embodiments, the method 2600 may also include inserting, prior to the transmission, timing information for clock synchronization. The timing information is inserted according to the Precision Time Protocol of IEEE 1588 or the Synchronous Ethernet protocol. In some embodiments, the clock synchronization information is inserted in an operations, administration and maintenance (OAM) protocol data unit (PDU).
FIG. 27 shows a flowchart example of a method 2700 of transferring data from a FlexE shim layer to one or more output data streams during reception of the data. The input data streams may be at a transport network ingress point, while the output data streams may be at a transport network egress point.
The method 2700 includes, at 2702, providing, over a modified industry standard interface, 64-bit data signals from a FlexE shim layer that decodes, using a 64B/66B decoding, logical serial data streams of 64B/66B blocks representing FlexE clients which are mapped to allocated timing slots in the data. In some embodiments, the data may have been allocated to timing slots using the master calendar mechanism, e.g., as described in the FlexE specification document.
The method 2700 includes, at 2704, processing the 64-bit data signals through one or more reconciliation sublayers to generate one or more data stream inputs to flexible containers.
The method 2700 includes, at 2706, operating the flexible containers to output the one or more output data streams from the received one or more input streams.
In some embodiments, an apparatus for transferring data from a flexible Ethernet (FlexE) shim layer to one or more output data streams during reception of the data includes a modified industry standard interface that provides 64-bit data signals from a FlexE shim layer that decodes, using a 64B/66B decoding, logical serial data streams of 64B/66B blocks representing FlexE clients which are mapped to allocated timing slots in the data; one or more reconciliation sublayers to process the 64-bit data signals to generate one or more data stream inputs to flexible containers; and flexible containers to output the one or more output data streams from the received one or more input streams. In some embodiments, the method 2700 includes performing clock synchronization using the above-described techniques.
With respect to the methods 2600, 2700 and the apparatus for transferring data described above, the non-standard rate may be constructed by reducing a rate of a standard MII. Alternatively, or in addition, the non-standard rate may be constructed by increasing a rate of a standard MII. The non-standard rate may be constructed by changing the clock rate of a standard MII, by changing the data transfer rate of a SerDes interface, or by using a 4-pulse amplitude modulation (PAM-4) data signal modulation technique, e.g., in place of NRZ modulation. In some embodiments, the non-standard rate is constructed by combining two or more standard MII rates to form a logic MII interface. In some embodiments, the non-standard rate is constructed by combining two or more different MII interfaces, wherein one of the different MII interface rates is a rate that is adjusted from a standard rate.
In various embodiments, the standard rate corresponds to a rate of a 100 Gigabit MII interface (CGMII) , a 10 Gigabit MII (XGMII) , a 10 Gigabit attachment unit interface (XAUI) or a 400 Gigabit MII interface (CDMII) , or another rate, e.g., a reduced media independent interface rate (RMII) , an RGMII, a quad serial GMII, a serial MII (SMII) , and so on.
Furthermore, with reference to the apparatus for transferring data and the methods 2600 and 2700 described above, the assignment of which input and output streams are to be processed by which flexible container may be performed by a stream distributor module. In some embodiments, the resources allocated to each flexible container, e.g., hardware resources, may be proportional to the bandwidth of the corresponding input data stream processed by that flexible container. The hardware resources may, e.g., be a cache space or a buffer. In some embodiments, e.g., as shown in FIGs. 9, 10 and 11, some flexible containers may receive data from, or provide data to, multiple data streams and/or MII streams.
In some embodiments, the rate of operation of the flexible/elastic container matches the data rate of the targeted FlexE client. In some embodiments, method 2600 further includes encoding the data into a logical serial stream of 64B/66B blocks and performing idle insert/delete processing on the logical serial stream according to the IEEE 802.3 standard. An example of this processing is described with reference to FIG. 28.
In another example aspect, a technique for transmitting and receiving data using a FlexE shim includes an apparatus for transferring data from a first group of one or more input data streams to a flexible Ethernet (FlexE) shim layer for transmission, and for transferring data from the FlexE shim layer to a first group of one or more output data streams for reception. The apparatus includes a number of flexible containers for receiving the first group of one or more input data streams to generate, from each flexible container, a second group of one or more output streams; one or more reconciliation sublayer modules for processing the second group of one or more output streams from each flexible container to generate a first group of 64-bit data signals; and a modified industry-standard interface for providing the first group of 64-bit data signals to a FlexE shim layer in which the first group of 64-bit data signals are each encoded using a 64B/66B encoding to generate a first group of logical serial data streams of 64B/66B blocks representing FlexE clients, which are mapped to a first group of allocated timing slots for transmission. The one or more reconciliation sublayers further process a second group of 64-bit data signals to generate a second group of one or more data stream inputs to the flexible containers. The flexible containers further output the first group of one or more output data streams from the received second group of one or more input streams. The modified industry-standard interface comprises one or more industry-standard interfaces having a same standard rate, or one or more MII interfaces having different standard rates, or one or more industry-standard interfaces having a non-standard rate. In this technique, each flexible container corresponds to a FlexE client, and a processing rate of each flexible container matches a rate of a logical data stream for the corresponding FlexE client. Examples of transmitting embodiments are shown and described with respect to FIGs. 9, 10, 11 and 19, and examples of receive side operations are described with respect to FIG. 20.
It will be appreciated that several techniques have been disclosed for facilitating transfer of flexible Ethernet traffic rates to and from optical networks in which flexible data rates can be used for data communication. It will further be appreciated that the disclosed techniques may be able to be combined with an industry standard FlexE shim layer implementation to allow the use of multiple FlexE clients and mapping/de-mapping of data generated to and from the clients for facilitating transportation and reception of FlexE data streams at transmission network ingress points or egress points.
The disclosed and other embodiments and the functional operations and modules described in this document can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this document and their structural equivalents, or in combinations of one or more of them. The disclosed and other embodiments can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term "data processing apparatus" encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus.
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document) , in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code) . A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this document can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit) .
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of non volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this document contains many specifics, these should not be construed as limitations on the scope of an invention that is claimed or of what may be claimed, but rather as descriptions of features specific to particular embodiments. Certain features that are described in this document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or a variation of a sub-combination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results.
Only a few examples and implementations are disclosed. Variations, modifications, and enhancements to the described examples and implementations and other implementations can be made based on what is disclosed.

Claims (76)

  1. A method of transferring data from one or more input data streams to a flexible Ethernet (FlexE) shim layer for transmission, comprising:
    operating flexible containers to receive the one or more input data streams to generate, from each flexible container, one or more output streams;
    processing one or more output streams from each flexible container in one or more reconciliation sublayers to generate data signals; and
    providing, over a modified industry-standard interface, the data signals to a FlexE shim layer in which the data signals are encoded to generate logical serial data streams of encoded blocks representing FlexE clients which are mapped to allocated timing slots for transmission;
    wherein each flexible container and the processing in the one or more reconciliation sublayers enable the modified industry-standard interface to include one or more interfaces having respective rates to accommodate for transmission via the FlexE shim layer.
  2. The method of claim 1, wherein the one or more reconciliation sublayers generate 64-bit data signals; and
    the 64-bit data signals are each encoded using a 64B/66B encoding to generate logical serial data streams of 64B/66B blocks representing FlexE clients which are mapped to allocated timing slots for transmission.
  3. The method of claim 2, wherein the modified industry-standard interface comprises one or more industry-standard interfaces having a same standard rate, or one or more MII interfaces having different standard rates, or one or more industry-standard interfaces having a non-standard rate.
  4. The method of any one of claims 1 through 3, wherein each flexible container corresponds to a FlexE client.
  5. The method of claim 2 or 3, wherein a processing rate of each flexible container matches a rate of a logical data stream for a corresponding FlexE client before the 64B/66B encoding.
  6. The method of claim 3, wherein the non-standard rate is constructed by reducing a rate of a standard media independent interface (MII) .
  7. The method of claim 3, wherein the non-standard rate is constructed by increasing a rate of a standard media independent interface (MII) .
  8. The method of claim 3, wherein the non-standard rate is constructed by changing clock rate of a standard media independent interface (MII) or by changing data transfer rate of a SerDes interface or by using a 4-pulse amplitude modulation (PAM-4) data signal modulation technique.
  9. The method of claim 3, wherein the non-standard rate is constructed by combining two or more standard media independent interface (MII) rates to form a logic MII interface.
  10. The method of claim 3, wherein the non-standard rate is constructed by combining two or more different media independent interface (MII) interfaces wherein one of the different MII interface rates is a rate that is adjusted from a standard rate.
  11. The method of claim 10, wherein the standard rate corresponds to a rate of a 100 Gigabit MII interface (CGMII) , a 10 Gigabit MII (XGMII) , a 10 Gigabit attachment unit interface (XAUI) or a 400 Gigabit MII interface (CDMII) , or a different rate.
  12. The method of any one of claims 1 through 3, wherein operating the flexible containers to receive the one or more input data streams and generate corresponding output streams includes assigning an input data stream to a flexible container using a stream distributor module.
  13. The method of claim 1, further including:
    allocating resources to each flexible container proportional to a bandwidth of a corresponding input data stream processed by that flexible container.
  14. The method of claim 13, wherein the resources include a cache space.
  15. The method of claim 1, further including:
    operating at least one of the flexible containers to produce two or more output streams.
  16. The method of claim 1, further including:
    performing, by the FlexE shim layer, idle insert/delete processing on the 64B/66B blocks according to IEEE 802.3 standard.
  17. The method of claim 1, further including:
    inserting, prior to the transmission, timing information for clock synchronization.
  18. The method of claim 17, wherein the inserting the timing information includes inserting the timing information in an operations administration and maintenance (OAM) protocol data unit (PDU) .
  19. The method of claim 17, wherein the timing information is inserted according to the Precision Time Protocol of IEEE 1588 or the Synchronous Ethernet protocol.
  20. An apparatus for transferring data from one or more input data streams to a flexible Ethernet (FlexE) shim layer for transmission, comprising:
    a number of flexible containers for receiving the one or more input data streams to generate, from each flexible container, one or more output streams;
    one or more reconciliation sublayer modules for processing the one or more output streams from each flexible container to generate 64-bit data signals; and
    a modified industry-standard interface for providing the 64-bit data signals to a FlexE shim layer in which the 64-bit data signals are each encoded using a 64B/66B encoding to generate  logical serial data streams of 64B/66B blocks representing FlexE clients which are mapped to allocated timing slots for transmission;
    wherein the modified industry-standard interface comprises one or more industry-standard interfaces having a same standard rate, or one or more MII interfaces having different standard rates, or one or more industry-standard interfaces having a non-standard rate;
    wherein each flexible container corresponds to a FlexE client; and
    wherein a processing rate of each flexible container matches a rate of a logical data stream for a corresponding FlexE client before the 64B/66B encoding.
  21. The apparatus of claim 20, wherein the non-standard rate is constructed by reducing a rate of a standard media independent interface (MII) .
  22. The apparatus of claim 20, wherein the non-standard rate is constructed by increasing a rate of a standard media independent interface (MII) .
  23. The apparatus of claim 20, wherein the non-standard rate is constructed by changing clock rate of a standard media independent interface (MII) or by changing data transfer rate of a SerDes interface or by using a 4-pulse amplitude modulation (PAM-4) data signal modulation technique.
  24. The apparatus of claim 20, wherein the non-standard rate is constructed by combining two or more standard media independent interface (MII) rates to form a logic MII interface.
  25. The apparatus of claim 20, wherein the non-standard rate is constructed by combining two or more different media independent interface (MII) interfaces wherein one of the different MII interface rates is a rate that is adjusted from a standard rate.
  26. The apparatus of claim 25, wherein the standard rate corresponds to a rate of a 100 Gigabit MII interface (CGMII) , a 10 Gigabit MII (XGMII) , a 10 Gigabit attachment unit interface (XAUI) or a 400 Gigabit MII interface (CDMII) , or another rate.
  27. The apparatus of claim 20, further including:
    a stream distributor module that assigns an input data stream to the flexible containers.
  28. The apparatus of claim 20, wherein each flexible container includes:
    hardware resources proportional to a bandwidth of a corresponding input data stream processed by the flexible container.
  29. The apparatus of claim 28, wherein the hardware resources include a cache space.
  30. The apparatus of claim 20, wherein at least one of the flexible containers produces two or more output streams.
  31. The apparatus of claim 20, wherein the FlexE shim layer performs idle insert/delete processing on the 64B/66B blocks according to IEEE 802.3 standard.
  32. The apparatus of claim 20, further including:
    a module for inserting, prior to the transmission, timing information for clock synchronization.
  33. The apparatus of claim 32, wherein the timing information is inserted according to the Precision Time Protocol of IEEE 1588 or the Synchronous Ethernet protocol.
  34. The apparatus of claim 32, wherein the timing information includes the timing information in an operations administration and maintenance (OAM) protocol data unit (PDU) .
  35. A method of transferring data from a flexible Ethernet (FlexE) shim layer to one or more output data streams during reception of the data, comprising:
    providing, over a modified industry standard interface, 64-bit data signals from a FlexE shim layer that decodes, using a 64B/66B decoding, logical serial data streams of 64B/66B blocks representing FlexE clients which are mapped to allocated timing slots in the data;
    processing the 64-bit data signals through one or more reconciliation sublayers to generate one or more data stream inputs to flexible containers; and
    operating the flexible containers to output the one or more output data streams from the received one or more input streams;
    wherein the modified industry-standard interface comprises one or more industry-standard interfaces having a same standard rate, or one or more MII interfaces having different standard rates, or one or more industry-standard interfaces having a non-standard rate;
    wherein each flexible container corresponds to a FlexE client; and
    wherein a processing rate of each flexible container matches a rate of a logical data stream for a corresponding FlexE client.
  36. The method of claim 35, wherein the non-standard rate is constructed by reducing a rate of a standard media independent interface (MII) .
  37. The method of claim 35, wherein the non-standard rate is constructed by increasing a rate of a standard media independent interface (MII) .
  38. The method of claim 35, wherein the non-standard rate is constructed by changing clock rate of a standard media independent interface (MII) or by changing data transfer rate of a SerDes interface or by using a 4-pulse amplitude modulation (PAM-4) data signal modulation technique.
  39. The method of claim 35, wherein the non-standard rate is constructed by combining two or more standard media independent interface (MII) rates to form a logic MII interface.
  40. The method of claim 35, wherein the non-standard rate is constructed by combining two or more different media independent interface (MII) interfaces wherein one of the different MII interface rates is a rate that is adjusted from a standard rate.
  41. The method of claim 40, wherein the standard rate corresponds to a rate of a 100 Gigabit MII interface (CGMII) , a 10 Gigabit MII (XGMII) , a 10 Gigabit attachment unit interface (XAUI) or a 400 Gigabit MII interface (CDMII) , or a different rate.
  42. The method of claim 35, wherein operating the flexible containers to receive the one or more input data streams and generate corresponding output streams includes assigning an input data stream to a flexible container using a stream distributor module.
  43. The method of claim 35, further including:
    allocating resources to each flexible container proportional to a bandwidth of a corresponding input data stream processed by that flexible container.
  44. The method of claim 43, wherein the resources include a cache space.
  45. The method of claim 35, further including:
    operating at least one of the flexible containers to receive two or more input streams.
  46. The method of claim 35, further including:
    performing, by the FlexE shim layer, idle insert/delete processing on the 64B/66B blocks according to IEEE 802.3 standard.
  47. The method of claim 35, further including:
    receiving, prior to the transmission, timing information and performing clock synchronization using the timing information.
  48. The method of claim 47, wherein the timing information conforms to the Precision Time Protocol of IEEE-1588 or the Synchronous Ethernet protocol.
  49. The method of claim 47, wherein the timing information includes timing information in an operations administration and maintenance (OAM) protocol data unit (PDU).
  50. An apparatus for transferring data from a flexible Ethernet (FlexE) shim layer to one or more output data streams during reception of the data, comprising:
    a modified industry-standard interface that provides 64-bit data signals from a FlexE shim layer that decodes, using a 64B/66B decoding, logical serial data streams of 64B/66B blocks representing FlexE clients which are mapped to allocated timing slots in the data;
    one or more reconciliation sublayers to process the 64-bit data signals to generate one or more data stream inputs to flexible containers; and
    flexible containers to output the one or more output data streams from the received one or more input streams;
    wherein the modified industry-standard interface comprises one or more industry-standard interfaces having a same standard rate, or one or more MII interfaces having different standard rates, or one or more industry-standard interfaces having a non-standard rate;
    wherein each flexible container corresponds to a FlexE client; and
    wherein a processing rate of each flexible container matches a rate of a logical data stream for a corresponding FlexE client.
  51. The apparatus of claim 50, wherein the non-standard rate is constructed by reducing a rate of a standard media independent interface (MII).
  52. The apparatus of claim 50, wherein the non-standard rate is constructed by increasing a rate of a standard media independent interface (MII).
  53. The apparatus of claim 50, wherein the non-standard rate is constructed by changing a clock rate of a standard media independent interface (MII), by changing a data transfer rate of a SerDes interface, or by using a 4-level pulse amplitude modulation (PAM-4) data signal modulation technique.
  54. The apparatus of claim 50, wherein the non-standard rate is constructed by combining two or more standard media independent interface (MII) rates to form a logical MII interface.
  55. The apparatus of claim 50, wherein the non-standard rate is constructed by combining two or more different media independent interface (MII) interfaces, wherein one of the different MII interface rates is adjusted from a standard rate.
  56. The apparatus of claim 55, wherein the standard rate corresponds to a rate of a 100 Gigabit MII interface (CGMII), a 10 Gigabit MII (XGMII), a 10 Gigabit attachment unit interface (XAUI) or a 400 Gigabit MII interface (CDMII), or a different rate.
  57. The apparatus of claim 50, further including a stream distributor that assigns an input data stream to a flexible container.
  58. The apparatus of claim 50, wherein each flexible container includes hardware resources proportional to a bandwidth of a corresponding input data stream processed by that flexible container.
  59. The apparatus of claim 58, wherein the hardware resources include a cache space.
  60. The apparatus of claim 50, wherein at least one of the flexible containers receives two or more input streams.
  61. The apparatus of claim 50, wherein the FlexE shim layer further performs idle insert/delete processing on the 64B/66B blocks according to the IEEE 802.3 standard.
  62. The apparatus of claim 50, further including:
    a module for receiving, prior to the transmission, timing information and performing clock synchronization using the timing information.
  63. The apparatus of claim 62, wherein the timing information conforms to the Precision Time Protocol of IEEE-1588 or the Synchronous Ethernet protocol.
  64. An apparatus for transferring data from a first group of one or more input data streams to a flexible Ethernet (FlexE) shim layer for transmission, and for transferring data from the FlexE shim layer to a first group of one or more output data streams, comprising:
    a number of flexible containers for receiving the first group of one or more input data streams to generate, from each flexible container, a second group of one or more output streams;
    one or more reconciliation sublayer modules for processing the second group of one or more output streams from each flexible container to generate a first group of 64-bit data signals; and
    a modified industry-standard interface for providing the first group of 64-bit data signals to a FlexE shim layer in which the first group of 64-bit data signals are each encoded using a 64B/66B encoding to generate a first group of logical serial data streams of 64B/66B blocks representing FlexE clients which are mapped to a first group of allocated timing slots for transmission;
    the modified industry-standard interface further providing a second group of 64-bit data signals from the FlexE shim layer that decodes, using a 64B/66B decoding, a second group of logical serial data streams of 64B/66B blocks representing FlexE clients which are mapped to a second group of allocated timing slots in the data;
    the one or more reconciliation sublayers further processing the second group of 64-bit data signals to generate a second group of one or more data stream inputs to flexible containers; and
    the flexible containers further outputting the first group of one or more output data streams from the received second group of one or more input streams;
    wherein the modified industry-standard interface comprises one or more industry-standard interfaces having a same standard rate, or one or more MII interfaces having different standard rates, or one or more industry-standard interfaces having a non-standard rate;
    wherein each flexible container corresponds to a FlexE client; and
    wherein a processing rate of each flexible container matches a rate of a logical data stream for a corresponding FlexE client.
  65. The apparatus of claim 64, wherein the non-standard rate is constructed by reducing a rate of a standard media independent interface (MII).
  66. The apparatus of claim 64, wherein the non-standard rate is constructed by increasing a rate of a standard media independent interface (MII).
  67. The apparatus of claim 64, wherein the non-standard rate is constructed by changing a clock rate of a standard media independent interface (MII), by changing a data transfer rate of a SerDes interface, or by using a 4-level pulse amplitude modulation (PAM-4) data signal modulation technique.
  68. The apparatus of claim 64, wherein the non-standard rate is constructed by combining two or more standard media independent interface (MII) rates to form a logical MII interface.
  69. The apparatus of claim 64, wherein the non-standard rate is constructed by combining two or more different media independent interface (MII) interfaces, wherein one of the different MII interface rates is adjusted from a standard rate.
  70. A method of communicating data from multiple Ethernet inputs having multiple Ethernet interfaces at respective interface rates to an optical network, implemented at a network apparatus, comprising:
    constructing one or more elastic/flexible containers operating on data at a media access control (MAC) layer, wherein each elastic/flexible container comprises a variable length data structure and variable logic resources;
    configuring an output interface having an interface rate equal to a total of the multiple Ethernet interface rates to carry data in time slots according to a transmission schedule corresponding to the constructed one or more elastic/flexible containers in the MAC layer;
    processing data packets received from the multiple Ethernet interfaces through the one or more elastic/flexible containers to generate processed multiple Ethernet data streams; and
    allocating data packets from the processed multiple Ethernet data streams according to time slots of the transmission schedule to generate an output data stream.
  71. The method of claim 70, wherein the output interface comprises one or multiple flexible optical data units (ODUflexes) .
  72. The method of claim 70, wherein each elastic/flexible container further comprises at least some of: a data cache, a first-in first-out (FIFO) structure, and a logic circuit.
  73. The method of claim 70, further including:
    constructing the one or more elastic/flexible containers such that there is one elastic/flexible container corresponding to each of the multiple Ethernet inputs.
  74. The method of claim 70, wherein the output interface rate is a combination of submultiples of at least two standard media independent interface rates.
  75. The method of claim 74, further comprising:
    achieving a submultiple rate for a given standard media independent interface rate by dividing a clock for the given standard media independent interface rate by an integer factor.
  76. The method of claim 74, wherein the combination is obtained by:
    increasing a clock rate of at least one standard media independent interface by an integer factor.
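
The rate arithmetic behind claims 36 to 41, 51 to 56, 65 to 69, and 74 to 76 can be made concrete with a small worked example: a 25 Gb/s interface rate, for instance, can be reached by combining a fifth of a 100 Gb/s CGMII clock (20 Gb/s) with half of a 10 Gb/s XGMII clock (5 Gb/s). The Python sketch below is purely illustrative; the rate table, the maximum scaling factor, and the `scaled_rates`/`combine_for` helpers are assumptions for illustration, not the claimed apparatus.

```python
# Illustrative sketch only: derive non-standard interface rates from standard
# MII rates by integer clock scaling (submultiples and multiples) and by
# combining scaled rates into one logical interface. The rate table, the
# maximum scaling factor, and the exhaustive search are assumptions.
from itertools import combinations_with_replacement

STANDARD_MII_GBPS = {"XGMII": 10, "XLGMII": 40, "CGMII": 100, "CDMII": 400}

def scaled_rates(max_factor: int = 8):
    """Rates reachable from one standard MII by an integer clock factor."""
    rates = set()
    for name, rate in STANDARD_MII_GBPS.items():
        for k in range(1, max_factor + 1):
            rates.add((f"{name}/{k}", rate / k))   # reduced clock (submultiple)
            rates.add((f"{name}x{k}", rate * k))   # increased clock (multiple)
    return sorted(rates, key=lambda item: item[1])

def combine_for(target_gbps: float, parts: int = 2):
    """Find `parts` scaled MII rates whose sum equals the target rate."""
    for combo in combinations_with_replacement(scaled_rates(), parts):
        if abs(sum(rate for _, rate in combo) - target_gbps) < 1e-9:
            return combo
    return None

if __name__ == "__main__":
    # A 25 Gb/s client-facing interface built from two scaled standard rates,
    # e.g. CGMII/5 (20 Gb/s) plus XGMII/2 (5 Gb/s); the search prints the
    # first exact combination it finds.
    print(combine_for(25.0))
```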
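
Claims 35, 50, 64, and 70 revolve around mapping FlexE clients to allocated timing slots. The following minimal sketch, assuming a 100 Gb/s group divided into twenty 5 Gb/s calendar slots, shows one way such a slot calendar and block interleaving could be modeled; the `Client` structure and the `build_calendar`/`interleave` helpers are hypothetical, and the FlexE overhead frame, multiframe, and skew handling are omitted.

```python
# Illustrative sketch only: maps hypothetical FlexE clients onto 5G calendar
# slots and interleaves their 66-bit blocks in calendar order.
from dataclasses import dataclass, field
from typing import List

SLOT_RATE_GBPS = 5            # assumed calendar-slot granularity
GROUP_RATE_GBPS = 100         # assumed FlexE group (PHY) rate
NUM_SLOTS = GROUP_RATE_GBPS // SLOT_RATE_GBPS   # 20 slots per 100G PHY

@dataclass
class Client:
    name: str
    rate_gbps: int            # client MAC rate, a multiple of the slot rate
    blocks: List[bytes] = field(default_factory=list)  # 66b blocks (as bytes)

def build_calendar(clients: List[Client]) -> List[str]:
    """Assign each client a number of slots proportional to its rate."""
    calendar: List[str] = []
    for c in clients:
        calendar.extend([c.name] * (c.rate_gbps // SLOT_RATE_GBPS))
    if len(calendar) > NUM_SLOTS:
        raise ValueError("clients exceed FlexE group capacity")
    calendar += ["UNUSED"] * (NUM_SLOTS - len(calendar))
    return calendar

def interleave(clients: List[Client], calendar: List[str], rounds: int) -> List[bytes]:
    """Emit one 66b block per occupied slot, in calendar order, per round."""
    by_name = {c.name: iter(c.blocks) for c in clients}
    out: List[bytes] = []
    for _ in range(rounds):
        for slot in calendar:
            if slot == "UNUSED":
                out.append(b"IDLE")          # stand-in for a filler block
            else:
                out.append(next(by_name[slot], b"IDLE"))
    return out

if __name__ == "__main__":
    clients = [Client("A", 25, [b"A%d" % i for i in range(10)]),
               Client("B", 10, [b"B%d" % i for i in range(4)])]
    cal = build_calendar(clients)
    print(cal)                       # 5 slots for A, 2 for B, 13 unused
    print(interleave(clients, cal, rounds=2))
```

The round-robin slot order here stands in for the repeating FlexE calendar; a real shim also signals the calendar configuration in its overhead rather than fixing it in code.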
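
Claims 42 to 45, 57 to 60, and 72 to 73 concern assigning input streams to flexible containers and provisioning each container's resources, such as cache space, in proportion to client bandwidth. The sketch below illustrates both ideas under simplified assumptions; the `InputStream`/`FlexibleContainer` structures, the first-fit rate-matching rule in `assign_streams`, and the shared cache budget are invented for illustration. It also happens to show one container accepting more than one stream, as in claims 45 and 60.

```python
# Illustrative sketch only: a stream distributor that assigns each input data
# stream to a flexible container with enough spare rate, then splits a shared
# cache budget across containers in proportion to their client rates.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class InputStream:
    name: str
    rate_gbps: float

@dataclass
class FlexibleContainer:
    rate_gbps: float                       # matches the FlexE client rate
    streams: List[InputStream] = field(default_factory=list)
    cache_bytes: int = 0

def assign_streams(streams: List[InputStream],
                   containers: List[FlexibleContainer]) -> None:
    """Assign each stream to the first container with matching spare rate."""
    for s in streams:
        target: Optional[FlexibleContainer] = next(
            (c for c in containers
             if c.rate_gbps - sum(x.rate_gbps for x in c.streams) >= s.rate_gbps),
            None)
        if target is None:
            raise ValueError(f"no container can accept stream {s.name}")
        target.streams.append(s)

def allocate_cache(containers: List[FlexibleContainer],
                   total_cache_bytes: int) -> None:
    """Size each container's cache proportionally to its client rate."""
    total_rate = sum(c.rate_gbps for c in containers)
    for c in containers:
        c.cache_bytes = int(total_cache_bytes * c.rate_gbps / total_rate)

if __name__ == "__main__":
    containers = [FlexibleContainer(25.0), FlexibleContainer(10.0)]
    assign_streams([InputStream("web", 10.0), InputStream("video", 10.0),
                    InputStream("voice", 5.0)], containers)
    allocate_cache(containers, total_cache_bytes=1 << 20)   # 1 MiB shared budget
    for c in containers:
        print(c.rate_gbps, [s.name for s in c.streams], c.cache_bytes)
```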
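
Claims 46 and 61 refer to idle insert/delete processing on 64B/66B blocks per IEEE 802.3. The sketch below captures only the rate-adaptation idea, treating the block stream as a list of strings with "IDLE" standing in for idle control blocks; the `rate_adapt` function and its target-length interface are assumptions, not the standard's state machine.

```python
# Illustrative sketch only: rate adaptation by inserting or deleting idle
# blocks between packets so a client block stream fits its allocated capacity.
from typing import List

def rate_adapt(blocks: List[str], target_len: int) -> List[str]:
    """Pad with idles (insert) or drop surplus idles (delete) to hit target_len.

    Idle blocks only ever sit between packets, so removing an "IDLE" entry
    never touches packet data; data blocks are anything other than "IDLE".
    """
    out = list(blocks)
    if len(out) < target_len:
        out.extend(["IDLE"] * (target_len - len(out)))        # idle insertion
    else:
        i = len(out) - 1
        while len(out) > target_len and i >= 0:
            if out[i] == "IDLE":
                out.pop(i)                                     # idle deletion
            i -= 1
        if len(out) > target_len:
            raise ValueError("cannot delete enough idles without touching data")
    return out

if __name__ == "__main__":
    print(rate_adapt(["D1", "D2", "IDLE", "IDLE"], target_len=3))  # drops one idle
    print(rate_adapt(["D1", "D2"], target_len=4))                  # inserts idles
```
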
PCT/CN2015/092992 2015-10-27 2015-10-27 Channelization for flexible ethernet WO2017070851A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/092992 WO2017070851A1 (en) 2015-10-27 2015-10-27 Channelization for flexible ethernet


Publications (1)

Publication Number Publication Date
WO2017070851A1 (en) 2017-05-04

Family

ID=58629666


Country Status (1)

Country Link
WO (1) WO2017070851A1 (en)

Legal Events

Code  Description
121   Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15906917; Country of ref document: EP; Kind code of ref document: A1)
NENP  Non-entry into the national phase (Ref country code: DE)
122   Ep: pct application non-entry in european phase (Ref document number: 15906917; Country of ref document: EP; Kind code of ref document: A1)