WO2006103169A1 - Host Ethernet adapter for network offload in a server environment - Google Patents



Publication number
WO2006103169A1
Authority
WO
WIPO (PCT)
Prior art keywords
processor
adapter
packet
layer
ethernet
Prior art date
Application number
PCT/EP2006/060734
Other languages
English (en)
Inventor
Ravi Kumar Arimilli
Jean Calvignac
Claude Basso
Chih-Jen Chang
Philippe Damon
Ronald Edward Fuhs
Satya Prakash Sharma
Natarajan Vaidhyanathan
Fabrice Verplanken
Colin Beaton Verrilli
Original Assignee
International Business Machines Corporation
Compagnie Ibm France
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corporation, Compagnie Ibm France filed Critical International Business Machines Corporation
Priority to CN2006800108202A priority Critical patent/CN101151851B/zh
Priority to EP06708771A priority patent/EP1864444A1/fr
Priority to JP2008503471A priority patent/JP4807861B2/ja
Publication of WO2006103169A1 publication Critical patent/WO2006103169A1/fr


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/40 Bus networks
    • H04L 12/407 Bus networks with decentralised control
    • H04L 12/413 Bus networks with decentralised control with random access, e.g. carrier-sense multiple-access with collision detection [CSMA-CD]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/40 Bus networks
    • H04L 12/40006 Architecture of a communication node
    • H04L 12/40032 Details regarding a bus interface enhancer
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/12 Protocol engines
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/16 Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/16 Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • H04L 69/161 Implementation details of TCP/IP or UDP/IP stack architecture; Specification of modified or new header fields
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/30 Definitions, standards or architectural aspects of layered protocol stacks
    • H04L 69/32 Architecture of open systems interconnection [OSI] 7-layer type protocol stacks, e.g. the interfaces between the data link level and the physical level
    • H04L 69/322 Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions
    • H04L 69/324 Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the data link layer [OSI layer 2], e.g. HDLC

Description

  • RPS920050076US1/3505P entitled “Method and Apparatus for Blind Checksum and Correction for Network Transmissions”, filed on even date herewith and assigned to the assignee of the present invention.
  • RPS920050082US1/3512P entitled “Method and System for Performing a Packet Header Lookup”, filed on even date herewith and assigned to the assignee of the present invention.
  • FIG. 1 illustrates a conventional server system 10.
  • the server system 10 includes a processor 12 which is coupled to a main memory 14.
  • the processor 12 is coupled via its private bus (GX) 16 to systems which include a network interface system 18.
  • the network interface system 18 is in turn coupled to an adapter 20 via a PCI bus 22 or the like.
  • the PCI bus 22 has a limited bandwidth which restricts the amount of traffic that can flow through it.
  • the internet and its applications have tremendously increased the number of client requests a server has to satisfy. Each client request generates both network and storage I/Os.
  • the advent of 10 gigabit Ethernet and IP storage makes it possible to consolidate the data center communications on a single backbone infrastructure: Ethernet, TCP/IP.
  • the TCP/IP protocol at 10 gigabit speed consumes tremendous processing and memory bandwidth in mainstream servers, severely limiting the server's ability to run applications.
  • Mainstream server network interface controllers (NICs) support TCP and IP checksum offload.
  • TCP and IP checksum offload is adequate up to 1G, but does not solve the problem at higher speeds such as 10G and above.
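For context on the checksum offload discussed above: the TCP and IP checksums an adapter computes are both instances of the 16-bit one's-complement Internet checksum of RFC 1071. A minimal software sketch (illustrative only, not the HEA hardware implementation):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement Internet checksum (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]   # add the next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF
```

Verifying a received packet is the same operation: summing the data together with its checksum field yields 0 when the packet is intact, which is why the check is cheap to do in hardware on the fly.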
  • It is known to use a TCP offload engine (TOE) to totally offload the complete TCP/IP protocol stack from the server.
  • the TOE is generally implemented in hardware or in picocode on picoprocessor architectures, which are relatively complex. There are also debugging, problem determination and stack maintainability issues. In addition, there are scalability issues when using picocode because picoengines do not follow the main processor roadmap.
  • offload engines typically introduce new protocols and APIs, and thus require changes in applications and raise interoperability issues.
  • the Ethernet adapter comprises a plurality of layers for allowing the adapter to receive and transmit packets from and to a processor.
  • the plurality of layers include a demultiplexing mechanism to allow for partitioning of the processor.
  • a Host Ethernet Adapter (HEA) is an integrated Ethernet adapter providing a new approach to Ethernet and TCP acceleration.
  • a set of TCP/IP acceleration features have been introduced in a toolkit approach: server TCP/IP stacks use these accelerators when and as required.
  • the interface between the server and the network interface controller has been streamlined by bypassing the PCI bus.
  • the HEA supports network virtualization.
  • the HEA can be shared by multiple OSs, providing the essential isolation and protection without affecting its performance.

BRIEF DESCRIPTION OF THE DRAWINGS
  • Figure 1 illustrates a conventional server system.
  • FIG. 2 is a block diagram of a server system in accordance with the present invention.
  • FIG 3 is a simple block diagram of the HEA in accordance with the present invention.
  • Figure 4 is a block diagram of the HEA with a more detailed view of the MAC and Serdes Layer.
  • Figure 5 shows the components and dataflow for one of the RxNet.
  • Figure 6 shows the components and dataflow for one TxEnet.
  • Figure 7 is a block diagram of the HEA with a more detailed view of the Packet Acceleration and Virtualization Layer.
  • FIG. 8 is a more detailed view of the RxAccel unit.
  • FIG. 9 shows that the TxAccel unit is composed of two Transmit Backbones (XBB), two Transmit Checksum units, two Transmit MIBs, one Wrap Unit and one Pause Unit.
  • Figure 10 is a block diagram of the HEA 110 with a more detailed view of the Host Interface Layer.
  • Figure 11 illustrates the HEA providing a logical layer 2 switch per physical port.
  • Figure 12 shows the HEA used with Legacy OS TCP/IP stacks.
  • Figure 13 shows the HEA used in a system where some partitions are supporting User Space TCP stacks.
  • Figure 14 illustrates the HEA supporting all acceleration features, including per connection queueing.
  • Figure 15 illustrates inbound multicast transmission.
  • Figure 16 illustrates outbound multicast transmission.
  • FIG. 2 is a block diagram of a server system 100 in accordance with the present invention.
  • the server system 100 includes a processor 102 which is coupled between a memory 104 and an interface adapter chip 106.
  • the interface adapter chip 106 includes an interface 108 to the private (Gx) bus of the processor 102 and a Host Ethernet Adapter (HEA) 110.
  • the HEA 110 receives and transmits signals from and to the processor 102.
  • the HEA 110 is an integrated Ethernet adapter.
  • a set of accelerator features are provided in a TCP/IP stack within the server.
  • the interface between the processor 102 and the interface adapter chip 106 has been streamlined by bypassing the PCI bus and providing interface techniques that enable demultiplexing, multiqueueing and packet header separation.
  • the HEA 110 achieves an unmatched performance level by being directly connected to the GX+ bus and therefore enjoying a tremendous bandwidth (55.42 Gbps at 866 MHz) to really support the full 40 Gbps bandwidth of two 10 Gbps ports.
  • by comparison, a 64-bit PCI-X 133 MHz bus is limited to 8.51 Gbps, and at least a PCI Express x8 bus is required to match the throughput of two 10 Gbps ports.
  • Being on the GX bus also removes intermediate logic and therefore improves transfer latency.
  • the Ethernet adapter allows for improved functionality with high speed systems while allowing for compatibility with legacy server environments.
  • the HEA 110 supports advanced acceleration features.
  • One key observation is that the current acceleration functions perform adequately on the transmit side (i.e., transmitting packets from the processor) but are not adequate on the receive side (i.e., receiving packets via the adapter).
  • the HEA 110 addresses this gap by introducing new features such as Packet Demultiplexing and Multiqueueing, and Header separation.
  • All of the HEA 110 new features are optional; it is up to the TCP/IP stack to take advantage of them if and when required.
  • a TCP/IP stack can use the HEA 110 and take advantage of the other features of HEA such as throughput, low latency and virtualization support.

Packet Demultiplexing and Multiqueueing
  • Multiqueueing and Demultiplexing are the key features to support functions such as virtualization, per connection queueing, and OS bypass.
  • HEA demultiplexing uses the concept of Queue Pairs, Completion Queues and Event Queues. Enhancements have been added to better address OS protocol stack requirements and short packet latency reduction.
  • HEA can demultiplex incoming packets based on:
  • the HEA 110 is capable of separating the TCP/IP header from the data payload. This feature allows the header to be directed to the protocol stack for processing without polluting the received buffers posted by the applications.
  • the Queue Pair concept is extended to support more than one receive queue per pair. This enables the stack to better manage its buffer pool memory.
  • one queue can be assigned to small packets, one to medium packets and one to large packets.
  • the HEA 110 will select the ad hoc queue according to the received packet size.
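The size-based receive queue selection described above can be sketched as follows. The thresholds are illustrative assumptions, not values from the patent:

```python
def select_receive_queue(packet_len: int, thresholds=(128, 1024)) -> str:
    """Pick the small/medium/large receive sub-queue for a packet.

    One queue is assigned to small packets, one to medium packets and
    one to large packets; the adapter selects the queue according to
    the received packet size. Threshold values are hypothetical.
    """
    small_max, medium_max = thresholds
    if packet_len <= small_max:
        return "small"
    if packet_len <= medium_max:
        return "medium"
    return "large"
```

Letting the stack size each queue's buffer pool to its class is what enables the better buffer pool management the text describes.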
Low Latency Queue

  • On the transmit side a descriptor (WQE) may contain immediate data; in such a case no indirection, i.e., no additional DMA from system memory, is required to send the data. On the receive side, low latency queues do not supply buffers but rather receive immediate packet data. The HEA 110 writes directly to the receive queue. Short packets take advantage of this feature, leading to a dramatic reduction of DMA operations: a single DMA write per packet as opposed to one DMA read and one DMA write per packet.
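The transmit-side descriptor choice above can be sketched in software. The inline-data size limit and the descriptor field names are illustrative assumptions:

```python
IMMEDIATE_LIMIT = 128  # hypothetical cut-off for inlining data in the WQE

def build_tx_wqe(payload: bytes, buffer_addr: int) -> dict:
    """Build a transmit descriptor (WQE).

    Short payloads are carried as immediate data inside the descriptor,
    so the adapter needs no additional DMA read from system memory to
    fetch them; larger payloads are referenced indirectly by address.
    """
    if len(payload) <= IMMEDIATE_LIMIT:
        return {"type": "immediate", "data": payload}
    return {"type": "indirect", "addr": buffer_addr, "len": len(payload)}
```

This mirrors the DMA saving the text describes: an immediate-data WQE costs the adapter one descriptor fetch, while an indirect WQE adds a DMA read of the payload buffer.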
  • Receive low latency queues are also used to support the packet header separation: the header is written in the low latency queue while the payload is DMAed to a buffer indicated in the ad-hoc receive queues.
  • Demultiplexing and Multiqueueing, and Packet Header Separation are the basic building blocks to virtualization and provide low latency operation. Furthermore, it should be noted that these features can also be used to improve traditional OS protocol stack performance, for example, per-connection queueing allows for the removal of code and more importantly reduces the memory accesses - and associated stalls/cache pollution - consumed to locate the appropriate information in memory.
  • FIG. 3 is a simple block diagram of the HEA 110 in accordance with the present invention.
  • the HEA 110 has a three layer architecture.
  • the first layer comprises a Media Access Controller (MAC) and Serialization/Deserialization (Serdes) layer 202 which provides a plurality of interfaces from and to other devices on the Ethernet network.
  • the same chip I/Os are used to provide a plurality of interfaces.
  • the same chip I/Os are utilized to provide either a 10 Gigabit interface or a 1 Gigabit interface.
  • the second layer comprises a Packet Acceleration and Virtualization Layer 204.
  • the layer 204 provides for receiving packets and demultiplexing the flow of packets for enabling virtualization.
  • the layer 204 enables virtualization or partitioning of the operating system of a server based upon the packets.
  • the layer 204 also provides packet header separation to enable zero copy operations and therefore provide improved latency. Also since layer 204 interacts directly with the private bus (Gx) through the Host Interface Layer 206, a low latency, high bandwidth connection is provided.
  • the third layer comprises the Host Interface Layer 206.
  • the 206 provides the interface to the Gx or private bus of the processor and communicates with layer 204.
  • the layer 206 provides for multiple receive sub-queues per Queue Pair (QP) to enable effective buffer management for a TCP stack.
  • the host layer 206 provides the context management for a given flow of data packets.
  • FIG. 4 is a block diagram of the HEA 110 with a more detailed view of the MAC and Serdes Layer 202.
  • the MACs 302, 304a and 304b include analog coding units 308a, 308b and 308c for aligning and coding the received packets.
  • the MACs 302, 304a and 304b are coupled to a High Speed Serializer/Deserialization (HSS) 306.
  • the high speed serdes 306 is capable of receiving data from one 10 Gigabit source or four 1 Gigabit sources.

Receive Ethernet Function (RxNet) Overview
  • Figure 5 shows the high level structure and flow through the receive Ethernet function within layer 202.
  • the Rx accelerator unit 400 as will be explained in more detail hereinafter is part of Packet Acceleration and Virtualization layer 204.
  • Figure 5 shows the components and dataflow for one of the RxNet. Data arrives on the interface 302 and is processed by the high speed serdes 304, analog coding units 308a and 308b and MAC which assembles and aligns the packet data in this embodiment in a 64 bit (10G) or 32 bit (IG) parallel data bus. Control signals are also generated which indicate start and end of frame and other packet information.
  • the data and control pass through the RxAccel unit 400 which performs parsing, filtering, checksum and lookup functions in preparation for processing by the Receive Packet Processor (RPP) of the layer 206 ( Figure 2).
  • the clock is converted to a 4.6ns clock and the data width is converted to 128b as it enters the RxAccel unit 400.
  • the RxAccel unit 400 As data flows through the RxAccel unit 400 to the data buffers within the host layer 206, the RxAccel unit 400 snoops on the control and data and starts its processing. The data flow is delayed in the RxAccel unit 400 such that the results of the RxAccel unit 400 are synchronized with the end of the packet. At this time, the results of the RxAccel unit 400 are passed to a command queue along with some original control information from the MAC 302. This control information is stored along with the data in the buffers. If the RxAccel unit 400 does not have the lookup entry cached, it may need to go to main memory through the GX bus interface. The GX bus operates at 4.6ns. The host layer 206 can asynchronously read the queue pair resolution information from the RxAccel unit 400.
  • the Tx accelerator unit 500 as will be explained in more detail hereinafter is part of Packet Acceleration and Virtualization layer 204.
  • Figure 6 shows the components and dataflow for one TxEnet. Packet data and control arrives from the TxAccel 500 component of the HEA 110.
  • the Tx Accelerator (TxAccel) unit 500 interprets the control information and modifies fields in a header of a packet that flows through the unit 500. It makes the wrap versus port decision based on control information or information found in the packet header. It also generates the appropriate controls for the TxMAC 302 and 304.
  • the data flow is delayed in the TxAccel unit 500 such that the TxAccel unit 500 can update packet headers before flowing to the MAC 302 and 304.
  • the data width is converted from 128 bits to 64 bits (10G) or 32 bits (1G).
  • the data and control pass through a clock conversion function in the TxAccel unit 500 in order to enter the differing clock domain of the MAC 302 and 304.
  • the MAC 302 and 304, analog converters 508a and 508b and high speed serdes 306 format packets for the Ethernet interface.
  • FIG. 7 is a block diagram of the HEA 110 with a more detailed view of the Packet Acceleration and Virtualization Layer 204.
  • the HEA Layer 204 comprises the previously mentioned receive (RxAccel) acceleration unit 400 and the transmit acceleration (TxAccel) unit 500.
  • the RxAccel unit 400 comprises a receive backbone (RBB) 402, a parser filter checksum unit (PFC) 404, a lookup engine (LUE) 406 and a MIB database 408.
  • the TxAccel unit 500 comprises the transmit backbone 502, lookup checks 504 and an MIB engine 506. The operation of the Rx acceleration unit 400 and the Tx acceleration unit 500 will be described in more detail hereinbelow.
  • the RxAccel unit 400 includes the Receive Backbone (RBB) 402, the Parser, Filter and Checksum Unit (PFC) 404, the Local Lookup Unit (LLU) 406, the Remote Lookup Unit (RLU) 408 and an MIB database 410.
  • the RBB 402 manages the flow of data and is responsible for the clock and data bus width conversion functions. Control and Data received from the receive MAC is used by the PFC 404 to perform acceleration functions and to make a discard decision.
  • the PFC 404 passes control and data extracted from the frame, including the 5-tuple key, to the LLU 406 in order to resolve a Queue Pair number (QPN) for the RBB 402.
  • the LLU 406 either finds the QPN immediately or allocates a cache entry to reserve the slot. If the current key is not in the cache, the LLU 406 searches for the key in main memory.
  • the PFC 404 interfaces to the MIB database 410 to store packet statistics.
  • This section describes the high level structure and flow through the Transmit Acceleration unit 500 (TxAccel).
  • FIG. 9 shows that the TxAccel unit 500 is composed of two Transmit Backbones (XBB) 502a and 502b, two Transmit Checksum units (XCS) 504a and 504b, two Transmit MIBs 506a and 506b, one Wrap Unit (WRP) 508 and one Pause Unit (PAU) logic 510.
  • Data flows through the TxAccel from the ENop and is modified to adjust the IP and TCP checksum fields.
  • the XBBs 502a and 502b manage the flow of data and are responsible for the clock and data bus width conversion functions. Control and data received from the ENop are used by the XCS 504a and 504b to perform checksum functions.
  • the XBB 502 transforms the information to the clock domain of the TxAccel unit 500.
  • the status information is merged with original information obtained from the packet by the XCS 504 and passed to the MIB Counter logic 506a and 506b.
  • the MIB logic 506a and 506b updates the appropriate counters in the MIB array.
  • the Wrap Unit (WRP) 508 is responsible for transferring to the receive side packets that the XCSs 504a and 504b have decided to wrap.
  • the Pause Unit (PAU) 510 orders the MAC to transmit pause frames based on the receive buffer's occupancy.
  • FIG. 10 is a block diagram of the HEA 110 with a more detailed view of the Host Interface Layer 206.
  • the Host Interface Layer 206 includes input and output buffers 602 and 604 for receiving packets from the layer 204 and providing packets to layer 204.
  • the layer 206 includes a Receive Packet Processor (RPP) 606 for appropriately processing the packets in the input buffer.
  • the context management mechanism 908 provides multiple sub-queues per queue pair to enable effective buffer management for the TCP stack.
  • Before the Receive Packet Processor (RPP) 606 can work on a received packet, the queue pair context must be retrieved.
  • the QP connection manager does this using a QP number. Since QP numbers are not transported in TCP/IP packets, the number must be determined by other means. There are two general classes of QPs, a per-connection QP and a default QP.

Per-connection Queue Pairs (QPs)
  • Per-connection QPs are intended to be used for long-lived connections where fragmentation of the IP packets is not expected and for which low latency is expected. They require that the application support a user-space queueing mechanism provided by the HEA 110. In this embodiment the logical port must first be found using the destination MAC address. Three types of lookup exist for per-connection QPs:
  • New TCP connections for a particular destination IP address and destination TCP port are made based on the TCP/IP (DA, DP, Logical port) if the packet was a TCP SYN packet.
  • New TCP connections for a particular destination TCP port only are made based on the TCP/IP (DA, DP, Logical port) if the packet was a TCP SYN packet.
  • a lookup is performed based on the TCP/IP (DP, Logical port) if the packet was a TCP SYN packet.
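The demultiplexing lookups above can be sketched as a fallback chain. The ordering, the table names, and the initial established-connection 5-tuple lookup are illustrative assumptions rather than HEA specifics:

```python
def resolve_qpn(key, is_syn, conn_table, da_dp_listeners, dp_listeners,
                default_qpn):
    """Resolve the queue pair number (QPN) for a received TCP packet.

    key = (src_ip, src_port, dst_ip, dst_port, logical_port).
    Tries an established per-connection entry first, then the SYN-time
    listener lookups described in the text, then the default OS queue.
    """
    if key in conn_table:                 # established per-connection QP
        return conn_table[key]
    _src_ip, _src_port, da, dp, lport = key
    if is_syn and (da, dp, lport) in da_dp_listeners:
        return da_dp_listeners[(da, dp, lport)]   # listener on (DA, DP)
    if is_syn and (dp, lport) in dp_listeners:
        return dp_listeners[(dp, lport)]          # listener on DP only
    return default_qpn                    # fall back to the default QP
```

The final fallback corresponds to the default-QP handling described next: packets with no per-connection match land on the logical port's default OS queue.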
  • Default QPs are used if no per-connection QP can be found for the packet, if per-connection lookup is not enabled for a MAC address, or if the packet is a recirculated multicast/broadcast packet. Generally default QPs are handled by the kernel networking stack. The following types of default QPs exist in the HEA 110: 1. Default OS queue per logical port. A logical port corresponds to a logical Ethernet interface with its own default queue.
  • Each logical port has a separate port on the logical switch.
  • a lookup is performed based on MAC address.
  • a direct index (logical port number) to the default OS queue is provided with recirculated (wrapped) multicast/broadcast packets.
  • 2. Multicast (MC) or Broadcast (BC) queue.
  • This mechanism allows for flexibility between the two extremes of queueing per connection and queueing per logical port (OS queue). Both models can operate together with some connections having their own queueing and some connections being queued with the default logical port queues.
  • Connection lookup is performed by the RxAccel unit 400.
  • One such unit exists for each port group. Within the RxAccel unit 400, each component performs a portion of the process.
  • the PFC 404 extracts the needed fields from the packet header and determines the logical port number based on the destination MAC address.
  • the Local Lookup Unit (LLU) 406 and Remote Lookup Unit (RLU) 408 are then responsible for resolving the QP number.
  • the LLU 406 attempts to find a QPN using local resources only (cache and registers).
  • the purpose of the LLU 406 is to attempt to determine the QP number associated with the received packet.
  • the QP number is required by the VLIM and the RPP 606. The LLU 406 performs this task locally if possible (i.e. without going to system memory).
  • the QP number can be found locally in one of several ways:
  • the LLU 406 communicates with the RBB 402 providing the QP number and/or the queue index to use for temporary queueing. If no eligible entries are available in the cache, the LLU 406 indicates to the RBB 402 that the search is busy. The packet must be dropped in this case.
  • the LLU 406 provides the QPN to the host layer 206 when a queue index resolution is requested and has been resolved.
  • the RLU 408 attempts to find a QPN using system memory tables.
  • the LLU 406 utilizes a local 64 entry cache in order to find the QPN for TCP/UDP packets. If the entry is found in the cache, the RLU 408 does not need to be invoked. If the entry is not found in the cache, a preliminary check is made in the cache to see if the entry might be in the connection table.
  • the cache is useful for eliminating unnecessary accesses to main memory when there is a small number of configured queues.
  • the RLU 408 uses a hash of a 6-tuple (including the logical port number) to fetch a 128 byte Direct Table (DT) entry from memory.
  • This DT entry contains up to eight 6-tuple patterns and their associated QPNs. If a match is found, no further action is required.
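The Direct Table lookup above can be sketched in software. Python's builtin hash and the list-based table layout are illustrative stand-ins for the hardware hash and the 128-byte memory entry:

```python
def lookup_direct_table(six_tuple: tuple, direct_table: list):
    """Resolve a QPN from a hashed Direct Table (DT).

    The 6-tuple (including the logical port number) is hashed to pick
    one DT entry; each entry holds up to eight 6-tuple patterns with
    their associated QPNs, and an exact match resolves the QPN.
    """
    bucket = direct_table[hash(six_tuple) % len(direct_table)]
    for pattern, qpn in bucket[:8]:   # up to eight patterns per DT entry
        if pattern == six_tuple:
            return qpn
    return None  # miss: resolution falls back (e.g. to a default QP)
```

Bounding each entry to eight patterns is what lets the hardware fetch a fixed-size (128 byte) entry per lookup instead of chasing a variable-length chain.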
  • the QPN cannot be determined on the fly as the packet is being placed into the input buffers. In fact the QPN may be determined several packets later. For this reason, the RxAccel unit 400 may provide either a QPN or a queue index to the host layer 206 for packet queueing. If a QPN is provided, then the host layer 206 (unloader) may queue the packet directly for work by the RPP. If a queue index is provided, then the host layer 206 must hold this packet to wait for resolution of the QPN. The QPN is always determined by the time the RPP is dispatched.
  • Partitions must be able to communicate transparently, i.e., the same way regardless of whether they are collocated on the same physical server or located on different physical servers connected by a real Ethernet.
  • Today Ethernet virtualization is supported by switching or routing in the server partition owning the adapter; this extra hop creates performance bottlenecks (data copy, three device drivers, etc.).
  • the HEA 110 is designed to provide direct data and control paths (no extra hop) between the using partitions and the adapter.
  • the HEA provides each partition with its own "virtual" adapter and "logical" ports.
  • all HEA resources and functions can be allocated/enabled per partition; the exact same mechanisms are used to provide inter-partition protection and isolation.

Data Path
  • the HEA 110 provides a logical layer 2 switch 906 and 908 per physical port 902 and 904 in order to provide multicast handling and partition to partition 910a-910c communication.
  • Implementing this support within the HEA keeps the overall system solution simple (In particular, transparency for software) and provides high performance. All the HEA hardware acceleration and protection are available for partition to partition communication.
  • a convenient way to think of this is to picture a logical Layer 2 switch 906 and 908 to which all the logical ports associated with a given physical port, as well as the physical port itself, are attached.
  • the issue is how and where this logical switch is implemented; alternatives span from a complete emulation in firmware/software to a complete implementation in the HEA hardware.
  • Figure 12 shows the HEA used with Legacy OS TCP/IP stacks 1102, 1104 and 1106.
  • TCP/IP stacks 1102, 1104 and 1106 are unchanged.
  • Device Drivers 1107a, 1107b and 1107c supporting the HEA 110 are required.
  • the TCP/IP stack can be optionally enhanced to take advantage of features such as low latency queues for short packets or packet demultiplexing per TCP connection. As seen, the demultiplexing of packets is performed based upon the MAC address and the QPN per partition.

Virtualized HEA with Legacy OS stacks and User Space TCP/IP
  • Figure 13 shows the HEA 110 used in a system where some partitions are supporting User Space TCP stacks 1220 as well as legacy OS stacks 1208a and 1208b:
  • the User Space TCP 1220 is demultiplexed by the HEA based upon customer identification (Cid) information and the QPN for the customer.
  • the logical switch is completely supported in the adapter.
  • the HEA relies on a software entity, the Multicast manager, for Multicast/Broadcast packet replication.
  • HEA provides assist to the Multicast manager to deliver packet copies to the destination partitions.
  • Received unicast traffic is handled through QPs allocated to the partitions. It can be a dedicated queue pair per connection, a single queue pair per logical port, or both. Fair scheduling among the Send queues is provided by the HEA. Depending upon system configuration, QP access can be granted to the application (User space) or only to the OS stack (Privileged). Received unicast traffic is demultiplexed as follows:
  • Figure 14 is a block diagram that illustrates that all the HEA acceleration features, including per connection queueing, are supported. Full transparency is offered to the partition's device drivers.
  • the partition stack uses either the per connection QPs or default QP to transmit a packet.
  • the HEA detects that the destination MAC address is a MAC address associated to a logical port defined on the same physical port (in other words the destination MAC address identifies a receiving logical link belonging to the same Layer 2 Logical Switch as the transmit logical link). Therefore, the HEA wraps the packet.
  • the HEA receive side then processes the packet as if it was received from the physical link and therefore the exact same acceleration features are used.
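The transmit-side wrap decision above can be sketched as follows. The MAC-set representation, the "Force Out" flag handling (described with outbound multicast later in the text), and the return labels are illustrative assumptions:

```python
def route_tx_packet(dest_mac: str, logical_port_macs: set,
                    force_out: bool = False) -> str:
    """Decide whether a transmitted packet is wrapped back to the
    receive side or sent out on the physical port.

    A destination MAC belonging to a logical port on the same logical
    switch is wrapped; a multicast/broadcast destination (group bit of
    the first octet set) is wrapped unless "Force Out" is set.
    """
    is_mc_bc = int(dest_mac.split(":")[0], 16) & 0x01  # group/multicast bit
    if dest_mac in logical_port_macs:
        return "wrap"
    if is_mc_bc and not force_out:
        return "wrap"
    return "physical_port"
```

A wrapped packet then traverses the receive path exactly as if it had arrived from the physical link, so the same acceleration features apply.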
  • the IP stack can use regular mechanism to find out the destination MAC address of a destination partition located on the same IP subnet.
  • This partition may or may not be collocated on the same server; this is transparent for both the stack and the device drivers.
  • the HEA has no provision for replicating multicast and broadcast packets to the interested partitions. Instead, it forwards all received MC/BC packets to a QP owned by a Multicast Manager function. This function replicates the packets as required and uses the HEA transport capabilities to distribute the copies to the interested partitions.
  • FIG. 15 illustrates inbound multicast transmission.
  • Received Multicast and Broadcast packets go first through an HEA filtering function. If not discarded, the packet is directed to the QP owned by the Multicast Manager 1500. The packet is transferred to the system memory and Multicast Manager 1500 is activated. The Multicast Manager 1500 determines which logical ports should receive a copy of the packet (Multicast filtering) and handles packet replication. The Multicast Manager can use the HEA 110 facilities to redistribute the packet to the recipient partitions 1502a-1502c. To do so, the Multicast Manager enqueues n descriptors (WQEs), where n is the number of recipients, each referencing the received packet, into its Send Queue.
  • the packet descriptor must contain information so that the HEA can direct the packet to its proper destination. This information can be the recipient's default QP, its logical port ID, or its MAC address.
  • the HEA transmit side determines, from information contained in both the QP and the WQE, that the packet needs to be sent over the wrap. Along with the data, information identifying the recipient QP is transferred to the HEA receive side, which uses this information to enqueue the packet to the recipient QP.
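The replication-by-reference scheme above can be sketched as follows. The WQE field names (`buffer`, `dest_qp`) are illustrative placeholders, not the HEA's actual descriptor layout.

```python
# Sketch: the Multicast Manager replicates a received MC/BC packet by
# enqueueing one send descriptor (WQE) per recipient into its Send
# Queue. Each descriptor references the single received buffer, so the
# payload is never copied in software.

def replicate(packet_ref, recipients, send_queue):
    """Enqueue one WQE per recipient; return the queue depth."""
    for r in recipients:
        send_queue.append({
            "buffer": packet_ref,        # shared reference to the packet
            "dest_qp": r["default_qp"],  # or logical port ID / MAC address
        })
    return len(send_queue)
```

The HEA transmit side then wraps each of these descriptors to the receive side, which enqueues the data to the recipient QP identified in the descriptor.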
  • Figure 16 illustrates outbound multicast transmission. Broadcast/Multicast packets are transmitted by originating partitions using the normal procedures. As the packet is processed by the HEA transmit side, the HEA detects that the destination MAC address is broadcast or multicast and that the "Force Out" option is not set in the WQE. The HEA therefore wraps the packet. The HEA receive side then processes the packet as described above. The Multicast Manager processes the packet as described above, with the following additions: it must ensure that the sender is removed from the list of recipients; it does so using the source MAC address of the packet as a filter.
  • VLAN filtering may be performed during the packet replication process. Packets will only be sent to members of the VLAN.
  • the Multicast Manager must also send the packet out the physical port. It does so by enabling the force-out function of its QP and setting the "Force Out" bit in the WQE. When this bit is set, the HEA sends the packet directly out on the physical link.

Multicast filtering in a multi-partition environment
  • On the receive side, the HEA provides Multicast filtering.
  • the HEA, like other "off-the-shelf" adapters, provides best-effort filtering based on a hash value of the destination MAC address and a lookup into one filtering table per physical port. The intent of this function is to limit the multicast traffic, but the "final" filtering is left to the stack. In the multi-partition case, the filtering requirements from all the involved partitions should be merged by the Multicast Manager and then configured in the HEA.
  • the Multicast Manager can then perform multicast filtering per partition when handling packet distribution to the interested partitions.
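The best-effort hash filter can be sketched as follows. The 6-bit XOR hash and 64-entry table size are assumptions for illustration; the patent does not specify the hash function or filter-table width.

```python
# Sketch of best-effort multicast filtering: a hash of the destination
# MAC indexes a per-physical-port bit table, merged from the filtering
# requirements of all partitions. False positives are allowed; the
# "final" filtering is left to the stack (or the Multicast Manager).

def mac_hash(mac_bytes):
    """Illustrative 6-bit hash: XOR of the MAC's octets."""
    h = 0
    for b in mac_bytes:
        h ^= b
    return h & 0x3F  # index into a 64-entry filter table (assumed size)

def accept_multicast(mac_bytes, filter_table):
    """True if the packet should be delivered for further filtering."""
    return filter_table[mac_hash(mac_bytes)]
```

Because distinct group addresses can collide in the table, a set bit only means "possibly wanted"; a clear bit means the packet can be safely discarded in hardware.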
  • the HEA 110 is capable of separating the TCP/IP header from the data payload. This feature enables zero-copy operations and therefore improves latency.
  • Packet header separation is performed by the HEA 110 when configured in the QP context.
  • an Ethernet/IP/TCP or Ethernet/IP/UDP header is separated from the body of the packet and placed in different memory.
  • the TCP/IP stack processes the header and the application processes the body. Separation in hardware allows user data to be aligned into the user buffer, thus avoiding copies.
  • the PFC 404 within the layer 204 passes the total header length (8 bits) to the RPP 606 of the host interface 206 (Figure 10), indicating the number of bytes of Ethernet, IP and TCP/UDP header.
  • the header length is set to 0 when there is no header split performed.
  • the QP must be configured for two or more receive queues (RQs). If the packet is TCP or UDP (header length not zero), the RPP 606 places the header into the RQ1 WQE. The RPP 606 then chooses an appropriate RQ for the data part of the packet (RQ2 or RQ3). The descriptors in the RQ2 or RQ3 WQE are used to place the remaining data. The RPP 606 indicates that a CQE should be generated with the complete information. The header split flag is set. The correlator in the correlator field of the CQE is copied from the RQ2 or RQ3 WQE used. The count of header bytes placed in the first WQE is also put in the CQE.
  • If the header does not fit in the RQ1 WQE, the WQE is filled with as much of the header as possible and the Header Too Long flag is set in the CQE. The remainder of the header is placed with the data in the RQ2/RQ3 WQE.
  • When header split mode is set to ALL and header split is being performed (header length is non-zero), none of the body of the packet is ever placed in the RQ1 WQE.
  • If the packet is an IP fragment or is not TCP or UDP (header length is zero) and the packet is too large to fit in the RQ1 WQE, then the entire packet is placed using the RQ2 or RQ3 WQE.
  • the header count is set to zero.
  • the header split flag is off. An RQ1 WQE is not consumed (unless completion information is to be placed in the RQ1 WQE).
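The placement rules above can be condensed into one decision sketch. The function, its parameters, and the returned tuple are illustrative abstractions of the RPP behavior, not a register-level description.

```python
# Sketch of header-split placement: RQ1 WQEs receive headers, RQ2/RQ3
# WQEs receive data. `rq1_wqe_size` (the RQ1 buffer capacity) is an
# assumed parameter for illustration.

def place(header_len, pkt_len, rq1_wqe_size):
    """Return (header destination, data destination, CQE flags)."""
    if header_len == 0:  # IP fragment, or neither TCP nor UDP: no split
        if pkt_len > rq1_wqe_size:
            return (None, "RQ2/RQ3", {"header_split": False})
        return (None, "RQ1", {"header_split": False})
    if header_len > rq1_wqe_size:  # header spills into the data WQE
        return ("RQ1+RQ2/RQ3", "RQ2/RQ3",
                {"header_split": True, "header_too_long": True})
    return ("RQ1", "RQ2/RQ3",
            {"header_split": True, "header_too_long": False})
```

The flags returned here model the CQE bits described above (header split flag, Header Too Long flag); the CQE additionally carries the correlator and the count of header bytes placed in the first WQE.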
  • the HEA 110 is capable of separating the TCP/IP header from the data payload. This feature allows the header to be directed to the protocol stack for processing without polluting the receive buffers posted by the applications, and therefore reduces latency for certain transactions.
  • a Host Ethernet Adapter in accordance with the present invention achieves an unmatched performance level by being directly connected to the private bus of the processor, therefore having sufficient bandwidth (for example, 55.42 Gbps at 866 MHz) to support the full 40 Gbps bandwidth of two 10 Gbps ports.
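The quoted bus figure is consistent with a 64-bit-wide datapath at the stated clock; the bus width is an inference from the numbers, not stated in the text.

```python
# Check of the quoted figure: an assumed 64-bit datapath clocked at
# 866 MHz gives 866e6 * 64 = 55.424e9 bits/s, i.e. the ~55.42 Gbps
# private-bus bandwidth cited above.
clock_hz = 866e6
bus_width_bits = 64  # assumed, inferred from 55.42 Gbps / 866 MHz
bandwidth_gbps = clock_hz * bus_width_bits / 1e9
assert abs(bandwidth_gbps - 55.42) < 0.01
```

The 40 Gbps figure for two 10 Gbps ports counts both directions of each full-duplex link, so the 55.42 Gbps bus leaves headroom over the aggregate line rate.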

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Small-Scale Networks (AREA)
  • Communication Control (AREA)

Abstract

An Ethernet adapter is disclosed. The Ethernet adapter comprises a plurality of layers allowing it to receive packets from, and transmit packets to, a processor. The plurality of layers include a demultiplexing mechanism allowing the processor to be partitioned. A Host Ethernet Adapter (HEA) is an integrated Ethernet adapter offering a new approach to Ethernet and TCP acceleration. A set of TCP/IP acceleration features has been introduced in a toolkit design; server TCP/IP stacks use these accelerators when and as appropriate. The interface between the server and the network interface controller has been streamlined by bypassing the PCI bus. The HEA supports network virtualization: it can be shared by multiple OSs, providing the required isolation and protection without performance degradation.
PCT/EP2006/060734 2005-04-01 2006-03-15 Adaptateur ethernet hote pour delestage reseau dans un environnement serveur WO2006103169A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN2006800108202A CN101151851B (zh) 2005-04-01 2006-03-15 用于服务器环境中的联网卸载的主机以太网适配器
EP06708771A EP1864444A1 (fr) 2005-04-01 2006-03-15 Adaptateur ethernet hote pour delestage reseau dans un environnement serveur
JP2008503471A JP4807861B2 (ja) 2005-04-01 2006-03-15 サーバ環境においてオフロードをネットワーク化するためのホスト・イーサネット・アダプタ

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/097,608 US7586936B2 (en) 2005-04-01 2005-04-01 Host Ethernet adapter for networking offload in server environment
US11/097,608 2005-04-01

Publications (1)

Publication Number Publication Date
WO2006103169A1 true WO2006103169A1 (fr) 2006-10-05

Family

ID=36441302

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2006/060734 WO2006103169A1 (fr) 2005-04-01 2006-03-15 Adaptateur ethernet hote pour delestage reseau dans un environnement serveur

Country Status (6)

Country Link
US (2) US7586936B2 (fr)
EP (1) EP1864444A1 (fr)
JP (1) JP4807861B2 (fr)
CN (1) CN101151851B (fr)
TW (1) TWI392275B (fr)
WO (1) WO2006103169A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008058254A2 (fr) * 2006-11-08 2008-05-15 Standard Microsystems Corporation Contrôleur de trafic de réseau (ntc)
CN101206623B (zh) * 2006-12-19 2010-04-21 国际商业机器公司 迁移虚拟端点的***和方法
CN102710813A (zh) * 2012-06-21 2012-10-03 杭州华三通信技术有限公司 一种mac地址表项存取方法和设备

Families Citing this family (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7007103B2 (en) * 2002-04-30 2006-02-28 Microsoft Corporation Method to offload a network stack
US7586936B2 (en) * 2005-04-01 2009-09-08 International Business Machines Corporation Host Ethernet adapter for networking offload in server environment
US8706987B1 (en) 2006-12-01 2014-04-22 Synopsys, Inc. Structured block transfer module, system architecture, and method for transferring
US8289966B1 (en) * 2006-12-01 2012-10-16 Synopsys, Inc. Packet ingress/egress block and system and method for receiving, transmitting, and managing packetized data
US8127113B1 (en) 2006-12-01 2012-02-28 Synopsys, Inc. Generating hardware accelerators and processor offloads
US7715428B2 (en) * 2007-01-31 2010-05-11 International Business Machines Corporation Multicore communication processing
US20080285551A1 (en) * 2007-05-18 2008-11-20 Shamsundar Ashok Method, Apparatus, and Computer Program Product for Implementing Bandwidth Capping at Logical Port Level for Shared Ethernet Port
US8284792B2 (en) * 2007-06-01 2012-10-09 Apple Inc. Buffer minimization in interface controller
US7930462B2 (en) * 2007-06-01 2011-04-19 Apple Inc. Interface controller that has flexible configurability and low cost
US8250254B2 (en) * 2007-07-31 2012-08-21 Intel Corporation Offloading input/output (I/O) virtualization operations to a processor
KR101365595B1 (ko) * 2007-08-16 2014-02-21 삼성전자주식회사 Gui기반의 디스플레이부를 구비한 디바이스의 입력 방법및 그 장치
US20090055831A1 (en) * 2007-08-24 2009-02-26 Bauman Ellen M Allocating Network Adapter Resources Among Logical Partitions
US8103785B2 (en) * 2007-12-03 2012-01-24 Seafire Micros, Inc. Network acceleration techniques
US7751400B2 (en) * 2008-02-25 2010-07-06 International Business Machines Coproration Method, system, and computer program product for ethernet virtualization using an elastic FIFO memory to facilitate flow of unknown traffic to virtual hosts
US7760736B2 (en) * 2008-02-25 2010-07-20 International Business Machines Corporation Method, system, and computer program product for ethernet virtualization using an elastic FIFO memory to facilitate flow of broadcast traffic to virtual hosts
JP5125777B2 (ja) * 2008-06-03 2013-01-23 富士通株式会社 入出力ポートの割当て識別装置、その割当て識別方法及び情報処理装置
TWI379567B (en) * 2008-09-12 2012-12-11 Realtek Semiconductor Corp Single network interface card (nic) with multiple-ports and method thereof
US8214653B1 (en) 2009-09-04 2012-07-03 Amazon Technologies, Inc. Secured firmware updates
US10177934B1 (en) * 2009-09-04 2019-01-08 Amazon Technologies, Inc. Firmware updates inaccessible to guests
US8887144B1 (en) 2009-09-04 2014-11-11 Amazon Technologies, Inc. Firmware updates during limited time period
US9565207B1 (en) 2009-09-04 2017-02-07 Amazon Technologies, Inc. Firmware updates from an external channel
US8601170B1 (en) 2009-09-08 2013-12-03 Amazon Technologies, Inc. Managing firmware update attempts
US8971538B1 (en) 2009-09-08 2015-03-03 Amazon Technologies, Inc. Firmware validation from an external channel
US8959611B1 (en) 2009-09-09 2015-02-17 Amazon Technologies, Inc. Secure packet management for bare metal access
US8300641B1 (en) 2009-09-09 2012-10-30 Amazon Technologies, Inc. Leveraging physical network interface functionality for packet processing
US8381264B1 (en) 2009-09-10 2013-02-19 Amazon Technologies, Inc. Managing hardware reboot and reset in shared environments
US20110283278A1 (en) 2010-05-13 2011-11-17 Vmware, Inc. User interface for managing a distributed virtual switch
US8873389B1 (en) * 2010-08-09 2014-10-28 Chelsio Communications, Inc. Method for flow control in a packet switched network
US8843628B2 (en) * 2010-08-31 2014-09-23 Harris Corporation System and method of capacity management for provisioning and managing network access and connectivity
US8576864B2 (en) 2011-01-21 2013-11-05 International Business Machines Corporation Host ethernet adapter for handling both endpoint and network node communications
US8838837B2 (en) 2011-06-23 2014-09-16 Microsoft Corporation Failover mechanism
US9444651B2 (en) 2011-08-17 2016-09-13 Nicira, Inc. Flow generation from second level controller to first level controller to managed switching element
US8839044B2 (en) 2012-01-05 2014-09-16 International Business Machines Corporation Debugging of adapters with stateful offload connections
US9135092B2 (en) 2012-02-02 2015-09-15 International Business Machines Corporation Multicast message filtering in virtual environments
CN103269333A (zh) * 2013-04-23 2013-08-28 深圳市京华科讯科技有限公司 基于虚拟化的多媒体加速***
US10809866B2 (en) 2013-12-31 2020-10-20 Vmware, Inc. GUI for creating and managing hosts and virtual machines
US9444754B1 (en) 2014-05-13 2016-09-13 Chelsio Communications, Inc. Method for congestion control in a network interface card
JP6300366B2 (ja) * 2014-08-01 2018-03-28 日本電信電話株式会社 仮想ネットワーク設定情報管理装置および仮想ネットワーク設定情報管理プログラム
JP2016201683A (ja) * 2015-04-10 2016-12-01 ソニー株式会社 ビデオサーバー、ビデオサーバーシステムおよび命令処理方法
US10243848B2 (en) 2015-06-27 2019-03-26 Nicira, Inc. Provisioning logical entities in a multi-datacenter environment
GB201616413D0 (en) 2016-09-28 2016-11-09 International Business Machines Corporation Monitoring network addresses and managing data transfer
US11165635B2 (en) * 2018-09-11 2021-11-02 Dell Products L.P. Selecting and configuring multiple network components in enterprise hardware
US11088902B1 (en) 2020-04-06 2021-08-10 Vmware, Inc. Synchronization of logical network state between global and local managers
US11115301B1 (en) 2020-04-06 2021-09-07 Vmware, Inc. Presenting realized state of multi-site logical network
US11882000B2 (en) 2020-04-06 2024-01-23 VMware LLC Network management system for federated multi-site logical network
US11303557B2 (en) 2020-04-06 2022-04-12 Vmware, Inc. Tunnel endpoint group records for inter-datacenter traffic
US11777793B2 (en) 2020-04-06 2023-10-03 Vmware, Inc. Location criteria for security groups
US20220103430A1 (en) 2020-09-28 2022-03-31 Vmware, Inc. Defining logical networks for deploying applications
CN113127390B (zh) * 2021-05-13 2023-03-14 西安微电子技术研究所 一种多协议数据总线适配器引擎架构设计方法
US11522931B1 (en) * 2021-07-30 2022-12-06 Avago Technologies International Sales Pte. Limited Systems and methods for controlling high speed video

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050053060A1 (en) 2003-01-21 2005-03-10 Nextio Inc. Method and apparatus for a shared I/O network interface controller

Family Cites Families (102)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5058110A (en) 1989-05-03 1991-10-15 Ultra Network Technologies Protocol processor
EP0549924A1 (fr) * 1992-01-03 1993-07-07 International Business Machines Corporation Méthode de transfert de données à processeur auxiliaire asynchrone et dispositif
US5430842A (en) 1992-05-29 1995-07-04 Hewlett-Packard Company Insertion of network data checksums by a network adapter
JPH06187178A (ja) * 1992-12-18 1994-07-08 Hitachi Ltd 仮想計算機システムの入出力割込み制御方法
JPH086882A (ja) * 1994-06-17 1996-01-12 Hitachi Cable Ltd 通信装置
US5752078A (en) 1995-07-10 1998-05-12 International Business Machines Corporation System for minimizing latency data reception and handling data packet error if detected while transferring data packet from adapter memory to host memory
US6330005B1 (en) * 1996-02-23 2001-12-11 Visionael Corporation Communication protocol binding in a computer system for designing networks
US5831610A (en) * 1996-02-23 1998-11-03 Netsuite Development L.P. Designing networks
US5821937A (en) * 1996-02-23 1998-10-13 Netsuite Development, L.P. Computer method for updating a network design
US6112015A (en) * 1996-12-06 2000-08-29 Northern Telecom Limited Network management graphical user interface
US5983274A (en) * 1997-05-08 1999-11-09 Microsoft Corporation Creation and use of control information associated with packetized network data by protocol drivers and device drivers
US6434620B1 (en) 1998-08-27 2002-08-13 Alacritech, Inc. TCP/IP offload network interface device
US6226680B1 (en) 1997-10-14 2001-05-01 Alacritech, Inc. Intelligent network interface system method for protocol processing
US6408393B1 (en) * 1998-01-09 2002-06-18 Hitachi, Ltd. CPU power adjustment method
US6658002B1 (en) 1998-06-30 2003-12-02 Cisco Technology, Inc. Logical operation unit for packet processing
US6970419B1 (en) * 1998-08-07 2005-11-29 Nortel Networks Limited Method and apparatus for preserving frame ordering across aggregated links between source and destination nodes
US6510552B1 (en) 1999-01-29 2003-01-21 International Business Machines Corporation Apparatus for keeping several versions of a file
US6650640B1 (en) 1999-03-01 2003-11-18 Sun Microsystems, Inc. Method and apparatus for managing a network flow in a high performance network interface
US6400730B1 (en) 1999-03-10 2002-06-04 Nishan Systems, Inc. Method and apparatus for transferring data between IP network devices and SCSI and fibre channel devices over an IP network
US6937574B1 (en) * 1999-03-16 2005-08-30 Nortel Networks Limited Virtual private networks and methods for their operation
GB2352360B (en) * 1999-07-20 2003-09-17 Sony Uk Ltd Network terminator
US6427169B1 (en) 1999-07-30 2002-07-30 Intel Corporation Parsing a packet header
US6404752B1 (en) 1999-08-27 2002-06-11 International Business Machines Corporation Network switch using network processor and methods
US6724769B1 (en) 1999-09-23 2004-04-20 Advanced Micro Devices, Inc. Apparatus and method for simultaneously accessing multiple network switch buffers for storage of data units of data frames
US6788697B1 (en) 1999-12-06 2004-09-07 Nortel Networks Limited Buffer management scheme employing dynamic thresholds
US6822968B1 (en) * 1999-12-29 2004-11-23 Advanced Micro Devices, Inc. Method and apparatus for accounting for delays caused by logic in a network interface by integrating logic into a media access controller
US7308006B1 (en) * 2000-02-11 2007-12-11 Lucent Technologies Inc. Propagation and detection of faults in a multiplexed communication system
US6988235B2 (en) * 2000-03-02 2006-01-17 Agere Systems Inc. Checksum engine and a method of operation thereof
US6795870B1 (en) 2000-04-13 2004-09-21 International Business Machines Corporation Method and system for network processor scheduler
US6735670B1 (en) 2000-05-12 2004-05-11 3Com Corporation Forwarding table incorporating hash table and content addressable memory
US6678746B1 (en) 2000-08-01 2004-01-13 Hewlett-Packard Development Company, L.P. Processing network packets
US6754662B1 (en) 2000-08-01 2004-06-22 Nortel Networks Limited Method and apparatus for fast and consistent packet classification via efficient hash-caching
EP1305931B1 (fr) * 2000-08-04 2006-06-28 Avaya Technology Corp. Procédé et système permettant la reconnaissance suivant la démande des transactions en mode connexion
US8019901B2 (en) 2000-09-29 2011-09-13 Alacritech, Inc. Intelligent network storage interface system
US7218632B1 (en) 2000-12-06 2007-05-15 Cisco Technology, Inc. Packet processing engine architecture
US6954463B1 (en) 2000-12-11 2005-10-11 Cisco Technology, Inc. Distributed packet processing architecture for network access servers
US7131140B1 (en) * 2000-12-29 2006-10-31 Cisco Technology, Inc. Method for protecting a firewall load balancer from a denial of service attack
US7023811B2 (en) 2001-01-17 2006-04-04 Intel Corporation Switched fabric network and method of mapping nodes using batch requests
WO2002061592A1 (fr) * 2001-01-31 2002-08-08 International Business Machines Corporation Procede et appareil permettant de commander les flux de donnees entre des systemes informatiques via une memoire
US7149817B2 (en) * 2001-02-15 2006-12-12 Neteffect, Inc. Infiniband TM work queue to TCP/IP translation
US6728929B1 (en) 2001-02-16 2004-04-27 Spirent Communications Of Calabasas, Inc. System and method to insert a TCP checksum in a protocol neutral manner
US7292586B2 (en) 2001-03-30 2007-11-06 Nokia Inc. Micro-programmable protocol packet parser and encapsulator
JP4291964B2 (ja) * 2001-04-19 2009-07-08 株式会社日立製作所 仮想計算機システム
CA2444066A1 (fr) * 2001-04-20 2002-10-31 Egenera, Inc. Systeme et procede de reseautage virtuel dans un systeme de traitement
US7274706B1 (en) * 2001-04-24 2007-09-25 Syrus Ziai Methods and systems for processing network data
JP3936550B2 (ja) 2001-05-14 2007-06-27 富士通株式会社 パケットバッファ
US7164678B2 (en) 2001-06-25 2007-01-16 Intel Corporation Control of processing order for received network packets
US20030026252A1 (en) * 2001-07-31 2003-02-06 Thunquest Gary L. Data packet structure for directly addressed multicast protocol
US6976205B1 (en) 2001-09-21 2005-12-13 Syrus Ziai Method and apparatus for calculating TCP and UDP checksums while preserving CPU resources
US7124198B2 (en) 2001-10-30 2006-10-17 Microsoft Corporation Apparatus and method for scaling TCP off load buffer requirements by segment size
US6907466B2 (en) 2001-11-08 2005-06-14 Extreme Networks, Inc. Methods and systems for efficiently delivering data to a plurality of destinations in a computer network
WO2003043271A1 (fr) 2001-11-09 2003-05-22 Vitesse Semiconductor Corporation Moyen et procede pour commuter des paquets ou des trames de donnees
US7286557B2 (en) * 2001-11-16 2007-10-23 Intel Corporation Interface and related methods for rate pacing in an ethernet architecture
US7236492B2 (en) 2001-11-21 2007-06-26 Alcatel-Lucent Canada Inc. Configurable packet processor
JP4018900B2 (ja) * 2001-11-22 2007-12-05 株式会社日立製作所 仮想計算機システム及びプログラム
AU2002365837A1 (en) 2001-12-03 2003-06-17 Tagore-Brage, Jens P. Interface to operate groups of inputs/outputs
US8370936B2 (en) * 2002-02-08 2013-02-05 Juniper Networks, Inc. Multi-method gateway-based network security systems and methods
US7269661B2 (en) 2002-02-12 2007-09-11 Bradley Richard Ree Method using receive and transmit protocol aware logic modules for confirming checksum values stored in network packet
US20040022094A1 (en) 2002-02-25 2004-02-05 Sivakumar Radhakrishnan Cache usage for concurrent multiple streams
US7283528B1 (en) 2002-03-22 2007-10-16 Raymond Marcelino Manese Lim On the fly header checksum processing using dedicated logic
US20040030766A1 (en) * 2002-08-12 2004-02-12 Michael Witkowski Method and apparatus for switch fabric configuration
US7251704B2 (en) * 2002-08-23 2007-07-31 Intel Corporation Store and forward switch device, system and method
US7031304B1 (en) 2002-09-11 2006-04-18 Redback Networks Inc. Method and apparatus for selective packet Mirroring
KR100486713B1 (ko) 2002-09-17 2005-05-03 삼성전자주식회사 멀티미디어 스트리밍 장치 및 방법
US7271706B2 (en) * 2002-10-09 2007-09-18 The University Of Mississippi Termite acoustic detection
KR100454681B1 (ko) * 2002-11-07 2004-11-03 한국전자통신연구원 프레임 다중화를 이용한 이더넷 스위칭 장치 및 방법
KR100460672B1 (ko) * 2002-12-10 2004-12-09 한국전자통신연구원 10 기가비트 이더넷 회선 정합 장치 및 그 제어 방법
US7493409B2 (en) * 2003-04-10 2009-02-17 International Business Machines Corporation Apparatus, system and method for implementing a generalized queue pair in a system area network
US20040218623A1 (en) 2003-05-01 2004-11-04 Dror Goldenberg Hardware calculation of encapsulated IP, TCP and UDP checksums by a switch fabric channel adapter
US7298761B2 (en) * 2003-05-09 2007-11-20 Institute For Information Industry Link path searching and maintaining method for a bluetooth scatternet
US20050022017A1 (en) * 2003-06-24 2005-01-27 Maufer Thomas A. Data structures and state tracking for network protocol processing
US7098685B1 (en) 2003-07-14 2006-08-29 Lattice Semiconductor Corporation Scalable serializer-deserializer architecture and programmable interface
JP4587446B2 (ja) * 2003-08-07 2010-11-24 キヤノン株式会社 ネットワークシステム、並びにスイッチ装置及び経路管理サーバ及びそれらの制御方法、及び、コンピュータプログラム及びコンピュータ可読記憶媒体
US8776050B2 (en) 2003-08-20 2014-07-08 Oracle International Corporation Distributed virtual machine monitor for managing multiple virtual resources across multiple physical nodes
JP4437650B2 (ja) 2003-08-25 2010-03-24 株式会社日立製作所 ストレージシステム
US20050091383A1 (en) * 2003-10-14 2005-04-28 International Business Machines Corporation Efficient zero copy transfer of messages between nodes in a data processing system
US20050080869A1 (en) * 2003-10-14 2005-04-14 International Business Machines Corporation Transferring message packets from a first node to a plurality of nodes in broadcast fashion via direct memory to memory transfer
US20050080920A1 (en) * 2003-10-14 2005-04-14 International Business Machines Corporation Interpartition control facility for processing commands that effectuate direct memory to memory information transfer
US7668923B2 (en) * 2003-10-14 2010-02-23 International Business Machines Corporation Master-slave adapter
US7441179B2 (en) 2003-10-23 2008-10-21 Intel Corporation Determining a checksum from packet data
US7219294B2 (en) * 2003-11-14 2007-05-15 Intel Corporation Early CRC delivery for partial frame
US20050114710A1 (en) 2003-11-21 2005-05-26 Finisar Corporation Host bus adapter for secure network devices
US7873693B1 (en) * 2004-02-13 2011-01-18 Habanero Holdings, Inc. Multi-chassis fabric-backplane enterprise servers
US7633955B1 (en) * 2004-02-13 2009-12-15 Habanero Holdings, Inc. SCSI transport for fabric-backplane enterprise servers
US7292591B2 (en) 2004-03-30 2007-11-06 Extreme Networks, Inc. Packet processing system architecture and method
US7502474B2 (en) 2004-05-06 2009-03-10 Advanced Micro Devices, Inc. Network interface with security association data prefetch for high speed offloaded security processing
US20050265252A1 (en) * 2004-05-27 2005-12-01 International Business Machines Corporation Enhancing ephemeral port allocation
US7461183B2 (en) * 2004-08-03 2008-12-02 Lsi Corporation Method of processing a context for execution
US7134796B2 (en) 2004-08-25 2006-11-14 Opnext, Inc. XFP adapter module
US7436773B2 (en) * 2004-12-07 2008-10-14 International Business Machines Corporation Packet flow control in switched full duplex ethernet networks
US8040903B2 (en) 2005-02-01 2011-10-18 Hewlett-Packard Development Company, L.P. Automated configuration of point-to-point load balancing between teamed network resources of peer devices
US7620754B2 (en) 2005-03-25 2009-11-17 Cisco Technology, Inc. Carrier card converter for 10 gigabit ethernet slots
US20060230098A1 (en) * 2005-03-30 2006-10-12 International Business Machines Corporation Routing requests to destination application server partitions via universal partition contexts
US7586936B2 (en) * 2005-04-01 2009-09-08 International Business Machines Corporation Host Ethernet adapter for networking offload in server environment
US7903687B2 (en) 2005-04-01 2011-03-08 International Business Machines Corporation Method for scheduling, writing, and reading data inside the partitioned buffer of a switch, router or packet processing device
US20070079103A1 (en) * 2005-10-05 2007-04-05 Yasuyuki Mimatsu Method for resource management in a logically partitioned storage system
US20070094402A1 (en) * 2005-10-17 2007-04-26 Stevenson Harold R Method, process and system for sharing data in a heterogeneous storage network
JP4542514B2 (ja) * 2006-02-13 2010-09-15 株式会社日立製作所 計算機の制御方法、プログラム及び仮想計算機システム
US7716356B2 (en) * 2006-06-30 2010-05-11 International Business Machines Corporation Server-based acquisition, distributed acquisition and usage of dynamic MAC addresses in a virtualized Ethernet environment
US7827331B2 (en) * 2006-12-06 2010-11-02 Hitachi, Ltd. IO adapter and data transferring method using the same
US20090055831A1 (en) * 2007-08-24 2009-02-26 Bauman Ellen M Allocating Network Adapter Resources Among Logical Partitions
JP5216336B2 (ja) * 2008-01-23 2013-06-19 株式会社日立製作所 計算機システム、管理サーバ、および、不一致接続構成検知方法

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050053060A1 (en) 2003-01-21 2005-03-10 Nextio Inc. Method and apparatus for a shared I/O network interface controller

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ROSS M ET AL: "FX1000: a high performance single chip Gigabit Ethernet NIC", COMPCON '97. PROCEEDINGS, IEEE SAN JOSE, CA, USA 23-26 FEB. 1997, LOS ALAMITOS, CA, USA,IEEE COMPUT. SOC, US, 23 February 1997 (1997-02-23), pages 218 - 223, XP010219537, ISBN: 0-8186-7804-6 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008058254A2 (fr) * 2006-11-08 2008-05-15 Standard Microsystems Corporation Contrôleur de trafic de réseau (ntc)
WO2008058254A3 (fr) * 2006-11-08 2008-12-04 Standard Microsyst Smc Contrôleur de trafic de réseau (ntc)
US9794378B2 (en) 2006-11-08 2017-10-17 Standard Microsystems Corporation Network traffic controller (NTC)
US10749994B2 (en) 2006-11-08 2020-08-18 Standard Microsystems Corporation Network traffic controller (NTC)
CN101206623B (zh) * 2006-12-19 2010-04-21 国际商业机器公司 迁移虚拟端点的***和方法
CN102710813A (zh) * 2012-06-21 2012-10-03 杭州华三通信技术有限公司 一种mac地址表项存取方法和设备

Also Published As

Publication number Publication date
US20060251120A1 (en) 2006-11-09
CN101151851A (zh) 2008-03-26
JP4807861B2 (ja) 2011-11-02
US8291050B2 (en) 2012-10-16
TWI392275B (zh) 2013-04-01
JP2008535343A (ja) 2008-08-28
US7586936B2 (en) 2009-09-08
CN101151851B (zh) 2013-03-06
EP1864444A1 (fr) 2007-12-12
TW200644512A (en) 2006-12-16
US20070283286A1 (en) 2007-12-06

Similar Documents

Publication Publication Date Title
US7586936B2 (en) Host Ethernet adapter for networking offload in server environment
US7492771B2 (en) Method for performing a packet header lookup
JP4898781B2 (ja) オペレーティング・システム・パーティションのためのネットワーク通信
US7397797B2 (en) Method and apparatus for performing network processing functions
US20220400147A1 (en) Network Interface Device
US8094670B1 (en) Method and apparatus for performing network processing functions
US6754222B1 (en) Packet switching apparatus and method in data network
US11394664B2 (en) Network interface device
US20030200315A1 (en) Sharing a network interface card among multiple hosts
WO2006103167A1 (fr) Bornes configurables pour adaptateur ethernet hote
EP1044406A1 (fr) Procede et dispositif de traitement de trames dans un reseau
US8924605B2 (en) Efficient delivery of completion notifications
WO2006063298A1 (fr) Techniques de gestion de la commande de debit
US8959265B2 (en) Reducing size of completion notifications
TWI257790B (en) System for protocol processing engine
US7903687B2 (en) Method for scheduling, writing, and reading data inside the partitioned buffer of a switch, router or packet processing device
CN110995507B (zh) 一种网络加速控制器及方法
US20060120405A1 (en) Method and apparatus for intermediate buffer segmentation and reassembly
Shimizu et al. Achieving wire-speed protocol processing in a multi-Gbps network using BSD UNIX and an OC-48c network interface card
Specialistica Universita di Pisa

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
WWE Wipo information: entry into national phase

Ref document number: 2006708771

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2008503471

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 200680010820.2

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

NENP Non-entry into the national phase

Ref country code: RU

WWW Wipo information: withdrawn in national office

Country of ref document: RU

WWP Wipo information: published in national office

Ref document number: 2006708771

Country of ref document: EP