US20170237672A1 - Network server systems, architectures, components and related methods - Google Patents
- Publication number
- US20170237672A1 (U.S. application Ser. No. 15/396,318)
- Authority
- US
- United States
- Prior art keywords
- network
- server system
- server
- hardware accelerator
- servers
- Prior art date
- Legal status
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/24—Traffic characterised by specific attributes, e.g. priority or QoS
- H04L47/2441—Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/16—Handling requests for interconnection or transfer for access to memory bus
- G06F13/1605—Handling requests for interconnection or transfer for access to memory bus based on arbitration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/16—Handling requests for interconnection or transfer for access to memory bus
- G06F13/1605—Handling requests for interconnection or transfer for access to memory bus based on arbitration
- G06F13/1652—Handling requests for interconnection or transfer for access to memory bus based on arbitration in a multiprocessor architecture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/16—Handling requests for interconnection or transfer for access to memory bus
- G06F13/1668—Details of memory controller
- G06F13/1673—Details of memory controller using buffers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/20—Handling requests for interconnection or transfer for access to input/output bus
- G06F13/28—Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/36—Handling requests for interconnection or transfer for access to common bus or bus system
- G06F13/362—Handling requests for interconnection or transfer for access to common bus or bus system with centralised access control
- G06F13/364—Handling requests for interconnection or transfer for access to common bus or bus system with centralised access control using independent requests or grants, e.g. using separated request and grant lines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/40—Bus structure
- G06F13/4004—Coupling between buses
- G06F13/4022—Coupling between buses using switching circuits, e.g. switching matrix, connection or expansion network
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/40—Bus structure
- G06F13/4004—Coupling between buses
- G06F13/4027—Coupling between buses using bus bridges
- G06F13/404—Coupling between buses using bus bridges with address mapping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/42—Bus transfer protocol, e.g. handshake; Synchronisation
- G06F13/4204—Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus
- G06F13/4234—Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus being a memory bus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/55—Detecting local intrusion or implementing counter-measures
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/70—Virtual switches
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/02—Network architectures or network communication protocols for network security for separating internal from external traffic, e.g. firewalls
- H04L63/0227—Filtering policies
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/40—Network security protocols
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- the present disclosure relates generally to network server systems, and more particularly to systems having servers with hardware accelerator components that can operate independently of server host processors, thus forming a hardware acceleration plane.
- FIG. 1 is a block diagram of a server system according to an embodiment.
- FIG. 2 is a block diagram of a hardware accelerated server that can be included in embodiments.
- FIG. 3 is a block diagram of a hardware accelerator module that can be included in embodiments.
- FIG. 4 is a block diagram of a server system according to another embodiment.
- FIG. 5 is a block diagram of a hardware accelerated server that can be included in embodiments.
- FIG. 6 is a diagram of a server system according to embodiments.
- FIG. 7 is a diagram of a server system according to embodiments.
- FIG. 8 is a diagram showing one particular hardware accelerator module that can be included in embodiments.
- FIG. 9 is a diagram showing one particular hardware accelerated server that can be included in embodiments.
- Embodiments disclosed herein include server systems having servers equipped with hardware accelerator modules.
- Hardware accelerator modules can form a mid-plane and accelerate the processing of network packet data independent of any host processors on the servers.
- Network packet processing can include, but is not limited to, classifying packets, encrypting packets and/or decrypting packets.
- Hardware accelerator modules can be attached to a bus in a server, and can include one or more programmable logic devices, such as field programmable gate array (FPGA) devices.
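The packet-processing tasks named above (classification, encryption/decryption) can be modeled in software, even though an accelerator would implement them in FPGA logic. The following Python sketch is a hypothetical stand-in: the `Packet` fields, the classification rules, and the XOR "cipher" placeholder are all illustrative assumptions, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    proto: int        # transport protocol number (6 = TCP, 17 = UDP)
    dst_port: int
    payload: bytes

def classify(pkt: Packet) -> str:
    """Assign a coarse traffic class from header fields (rules are illustrative)."""
    if pkt.proto == 6 and pkt.dst_port == 443:
        return "tls"
    if pkt.proto == 17:
        return "udp"
    return "other"

def process(pkt: Packet) -> tuple:
    """Classify, then apply a class-specific transform; the XOR here is only
    a placeholder for the encrypt/decrypt stage an accelerator would run."""
    cls = classify(pkt)
    if cls == "tls":
        payload = bytes(b ^ 0x5A for b in pkt.payload)  # toy "decrypt" stage
    else:
        payload = pkt.payload
    return cls, payload
```

In a real module each stage would be a pipelined hardware block; the software dispatch above only illustrates the classify-then-transform structure.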
- Embodiments can also include a server system having servers interconnected to one another by network connections, where each server includes a host processor, a network interface device, and a hardware accelerator module.
- One or more hardware accelerator modules can be mounted in each server, and can include one or more programmable logic devices (e.g., FPGAs).
- the hardware accelerator modules can form a hardware acceleration plane for processing network packet data independent of the host processors. Further, network packet data can be transmitted between hardware acceleration modules independent of the host processors.
- FIG. 1 shows a server system 100 according to an embodiment.
- a server system 100 can include servers equipped with hardware accelerator modules that can process network packet data received by the system 100 .
- a server system 100 can be organized into groups of servers 126 - 0 / 1 .
- groups of servers 126 - 0 / 1 can be a physical organization of servers, such as racks in which the server components are mounted.
- such a grouping can be a logical grouping.
- a server group 126 - 0 can include a switching tier 102 , a mid-tier 104 , and one or more server tiers 110 .
- a switching tier 102 can provide network connections between various components of the system 100 .
- a switching tier 102 can be formed by a top-of-rack (TOR) switch device.
- a mid-tier 104 can be formed by a number of hardware accelerator modules, which are described in more detail below.
- a mid-tier 104 can be conceptualized, architecturally, as being placed near a top-of-rack.
- a mid-tier 104 can perform any number of packet processing tasks, as will also be described in more detail below.
- a server tier 110 can include server components (apart from the hardware accelerator modules), including host processors.
- a system 100 can include various data communication paths for interconnecting the various tiers 102 / 104 / 110 .
- Such communication paths can include: intra-group switch/server connections 131 - 0 , which can provide connections between a switching tier 102 and server tier(s) 110 of the same group 126 - 0 ; inter-group switch/server connections 131 - 1 , which can provide connections between a switching tier 102 and server tiers of different groups 126 - 0 / 1 ; intra-group switch/module connections 133 - 0 , which can provide connections between a switching tier 102 and hardware accelerator modules of the same group 126 - 0 ; inter-group switch/module connections 133 - 1 , which can provide connections between a switching tier 102 and hardware accelerator modules of different groups 126 - 0 / 1 ; and intra-group module/server connections 135 - 0 , which can provide connections between hardware accelerator modules and server components of a same group 126 - 0 .
- FIG. 2 is a diagram of a server that can be included in embodiments, including the embodiment shown in FIG. 1 .
- a server 206 can include one or more host processors 214 and one or more hardware accelerator modules 208 .
- a server 206 can receive network packet data over a first data path 209 from a network data packet source 212 , which can be a TOR switch in the embodiment shown.
- a hardware accelerator module 208 can be connected to a host processor 214 by a second data path 211 .
- a second data path 211 can include a bus formed on the server 206 .
- the second data path 211 can be a memory mapped bus.
- a hardware accelerator module 208 can enable network data processing tasks to be completely offloaded for execution by the hardware accelerator module 208 . In this way, a hardware accelerator module 208 can receive and process network packet data independent of host processor 214 .
- FIG. 3 is a block diagram of a hardware accelerator module 308 that can be included in any of the embodiments shown herein.
- a hardware accelerator module 308 can include one or more programmable logic devices 316 that can be connected to random access memory (RAM) 318 .
- a programmable logic device 316 can be a field programmable gate array (FPGA) device (e.g., an FPGA integrated circuit (IC)), and RAM 318 can include one or more dynamic RAM (DRAM) ICs.
- An FPGA 316 and RAM 318 can be in separate IC packages, or can be integrated in the same IC package.
- Programmable logic device 316 can receive network packet data over a first connection 309 .
- Programmable logic device 316 can be connected to RAM 318 by a bus 320 , which in particular embodiments can be a memory mapped bus.
- a programmable logic device 316 can be connected to another device by a third connection 320 .
- Such another device could include another programmable logic device or processor, as but two of many possible examples.
- FIG. 4 is a block diagram of a server system 400 according to another embodiment.
- a system 400 can be one implementation of that shown in FIG. 1 .
- a system 400 can include multiple racks (one shown as 426 ) each connected through respective TOR switches 402 .
- TOR switches 402 can communicate with each other through an aggregation layer 430 .
- Aggregation layer 430 may include several switches and routers and can act as the interface between an external network and the server racks 426 .
- Server racks 426 can each include a number of servers. All or some of the servers in each rack 426 can be a hardware accelerated server (one shown as 406 ). Each hardware accelerated server 406 can include one or more network interfaces 424 , one or more host processors 414 , and one or more hardware accelerator modules 408 , according to any of the embodiments described herein, or equivalents.
- FIG. 5 is a block diagram of a hardware accelerated server 506 that can be included in embodiments.
- a server 506 can include a network interface 524 , one or more hardware accelerator modules 508 , and one or more host processors 514 .
- Network interface 524 can receive network packet data from a network or another computer or virtual machine.
- a network interface 524 can include a network interface card (NIC).
- Network interface 524 can be connected to a host processor 514 and hardware accelerator module 508 by one or more buses 527 .
- bus(es) 527 can include a peripheral component interconnect (PCI) type bus.
- a network interface 524 can be a NIC PCI and/or PCI express (PCIe) device connected with a host motherboard via PCI or PCIe bus (included in 527 ).
- a host processor 514 can be any suitable processor device.
- a host processor 514 can include processors with “brawny” cores, such as x86-based processors, as but one example.
- a hardware accelerator module 508 can be connected to bus(es) 527 of server 506 .
- hardware accelerator module 508 can be a circuit board that inserts into a bus socket on a main board of a server 506 .
- a hardware accelerator module 508 can include one or more FPGAs 526 .
- FPGA(s) 526 can include circuits capable of receiving network packet data from bus(es) 527 , and can process network packet data in any of various ways described herein.
- FPGA(s) 526 can also include circuits, or be connected to circuits, which can access data stored in buffer memories of the hardware accelerator module 508 .
- hardware accelerator module 508 can serve as part of a switch fabric.
- hardware accelerator modules can include managed output queues. Session flows queued in each such queue can be sent out through an output port to a downstream network element of the system in which the server is employed.
- FIG. 6 is a diagram showing a server system 600 according to another embodiment.
- a server system 600 can include a network packet data source 630 , a mid-plane formed from hardware accelerator modules, hereinafter referred to as a hardware acceleration plane 604 , and a plane formed by host processors, hereinafter referred to as a host processor plane 634 .
- a network packet data source 630 can be a network, including the Internet, and/or can include an aggregation layer, like that shown as 430 in FIG. 4 .
- hardware acceleration plane 604 and host processor plane 634 can be a logical representation of system resources.
- components of the same server can form different planes of the system.
- a system 600 can include hardware accelerated servers (one shown as 606 ) that include one or more hardware acceleration modules 608 - 0 and one or more host processors 614 - 0 .
- Such hardware accelerated servers can take the form of any of those shown herein, or equivalents.
- FIG. 6 shows two of various possible network data processing paths ( 630 , 632 ) that can be executed in a system 600 . It is understood that such processing paths ( 630 , 632 ) are provided by way of example, and should not be construed as limiting.
- Processing path 630 can include processing by two hardware accelerator modules 608 - 1 / 2 . In some embodiments, such processing can be independent of any host processor (i.e., independent of host processor plane 634 ).
- processing path 632 can include processing by a hardware accelerator module 608 - 3 and a host processor 614 - 1 .
- hardware accelerator module 608 - 3 and host processor 614 - 1 can be in the same server (i.e., a same hardware accelerated server), or can be in different servers (e.g., hardware accelerator module 608 - 3 is in one hardware accelerated server, while host processor 614 - 1 is in a different server, which may or may not be a hardware accelerated server).
- FIG. 7 is a diagram of a system 700 according to another embodiment.
- system 700 can be one implementation of that shown in FIG. 6 .
- a system 700 can provide a mid-plane switch architecture.
- One or more server units 706 - 0 / 1 can be equipped with hardware accelerator modules 708 - 0 / 1 , and thus can be considered hardware accelerated servers.
- Each hardware accelerator module 708 - 0 / 1 can act as a virtual switch 736 - 0 / 1 that is capable of receiving and forwarding packets. All the virtual switches 736 - 0 / 1 can be connected to each other, which can form a hardware acceleration plane 704 .
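The virtual-switch behavior described above can be sketched as a toy learning switch in software. The class below and its API are illustrative assumptions, not the patent's design: it floods frames with unknown destinations and learns source addresses per ingress port, which is the classic layer 2 switching behavior such a virtual switch would at minimum provide.

```python
class VirtualSwitch:
    """Toy learning switch: floods unknown destinations and learns
    source MACs per ingress port. Names are illustrative."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}  # MAC address -> port it was learned on

    def forward(self, in_port, src_mac, dst_mac):
        """Return the list of output ports for a frame."""
        self.mac_table[src_mac] = in_port        # learn the source
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]     # known: unicast
        return sorted(self.ports - {in_port})    # unknown: flood
```

A module-based virtual switch could go well beyond this (deep packet inspection, flow-granular forwarding, as the surrounding text describes); the sketch only fixes the baseline forwarding semantics.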
- ingress packets can be examined and classified by the hardware accelerator modules 708 - 0 / 1 .
- Hardware accelerator modules 708 - 0 / 1 can be capable of processing a relatively large number of packets.
- a system 700 can include TOR switches (not shown) configured in conventional tree-like topologies, which can forward packets based on MAC address.
- Hardware accelerator modules 708 - 0 / 1 can perform deep packet inspection and classify packets with much more granularity before they are forwarded to other locations.
- the role of layer 2 TOR switches can be limited to forwarding packets to hardware accelerator modules 708 - 0 / 1 , such that essentially all packet processing can be handled by the hardware accelerator modules 708 - 0 / 1 .
- progressively more server units can be equipped with hardware accelerator modules 708 - 0 / 1 to scale packet handling capabilities, instead of upgrading the TOR switches (which can be more costly).
- FIG. 8 is a diagram of a hardware accelerator module 808 according to one particular embodiment.
- a hardware accelerator module 808 can include a printed circuit board 838 having a physical interface 840 .
- Physical interface 840 can enable hardware accelerator module 808 to be inserted into a slot on a server board.
- Mounted on the hardware accelerator module 808 can be circuit components 826 , which can include programmable logic devices, such as FPGA devices.
- circuit components 826 can include any of: memory, including both volatile and nonvolatile memory; a programmable switch (e.g., network switch); and/or one or more processor cores.
- hardware accelerator module 808 can include one or more network I/Fs 824 .
- a network I/F 824 can enable a physical connection to a network. In some embodiments, this can include a wired network connection compatible with IEEE 802 and related standards. However, in other embodiments, a network I/F 824 can be any other suitable wired connection and/or a wireless connection.
- a hardware accelerated server 906 can include a network I/F 924 , a bus system 927 , a host processor 914 , and a hardware accelerator module 908 .
- a network I/F 924 can receive packet or other I/O data from an external source.
- network I/F 924 can include physical or virtual functions to receive a packet or other I/O data from a network or another computer or virtual machine.
- a network I/F 924 can include, but is not limited to, PCI and/or PCIe devices connecting with a server motherboard via a PCI or PCIe bus (e.g., 927 - 0 ).
- Examples of network I/Fs 924 can include, but are not limited to, a NIC, a host bus adapter, a converged network adapter, or an ATM network interface.
- a hardware accelerated server 906 can employ an abstraction scheme that allows multiple logical entities to access the same network I/F 924 .
- a network I/F 924 can be virtualized to provide for multiple virtual devices, each of which can perform some of the functions of a physical network I/F.
- Such IO virtualization can redirect network packet traffic to different addresses of the hardware accelerated server 906 .
- a network I/F 924 can include a NIC having an input buffer 924 a and, in some embodiments, an I/O virtualization function 924 b . While a network I/F 924 can be configured to trigger host processor interrupts in response to incoming packets, in some embodiments such interrupts can be disabled, thereby reducing processing overhead for a host processor 914 .
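With interrupts disabled, received packets simply accumulate in the NIC input buffer until a consumer drains them. The sketch below models that polled-reception pattern; the class names and the `budget` parameter are assumptions for illustration, not part of the patent.

```python
import collections

class PolledNic:
    """Model of interrupt-free reception: the NIC side enqueues packets
    into an input buffer, and a consumer (e.g., an accelerator) polls
    and drains the buffer on its own schedule. Names are illustrative."""

    def __init__(self):
        self.input_buffer = collections.deque()

    def rx(self, pkt):
        """NIC side: enqueue an incoming packet; no host interrupt is raised."""
        self.input_buffer.append(pkt)

    def poll(self, budget=8):
        """Consumer side: drain up to `budget` packets per polling pass,
        bounding the work done in any one pass."""
        out = []
        while self.input_buffer and len(out) < budget:
            out.append(self.input_buffer.popleft())
        return out
```

The `budget` bound mirrors how polled I/O frameworks cap per-pass work so one busy queue cannot starve others.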
- a hardware accelerated server 906 can also include an I/O management unit 940 , which can translate virtual addresses to corresponding physical addresses of the server 906 . This can enable data to be transferred between various components of the hardware accelerated server 906 .
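The address translation performed by such an I/O management unit can be modeled as a page-table lookup: split a virtual address into page number and offset, map the page number, and reassemble. The class below is a minimal sketch under that assumption; page size and API are illustrative.

```python
PAGE = 4096  # assumed 4 KiB pages, for illustration

class IoMmu:
    """Minimal model of an I/O management unit: a page table mapping
    virtual page numbers to physical page numbers. Illustrative only."""

    def __init__(self):
        self.page_table = {}  # virtual page number -> physical page number

    def map(self, vpage, ppage):
        self.page_table[vpage] = ppage

    def translate(self, vaddr):
        """Translate a virtual address to a physical address.
        Raises KeyError for an unmapped page (a translation fault)."""
        vpage, offset = divmod(vaddr, PAGE)
        ppage = self.page_table[vpage]
        return ppage * PAGE + offset
```

A hardware unit would add permissions and per-device contexts; the sketch only fixes the split/map/reassemble structure of the translation.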
- a host processor 914 can perform certain processing tasks on network packet data; however, as noted herein, other network packet data processing tasks can be performed by hardware accelerator module 908 independent of host processor 914 .
- a host processor 914 can be a “brawny core” type processor (e.g., an x86 or any other processor capable of handling “heavy touch” computational operations).
- a hardware accelerator module 908 can interface with a server bus 927 - 1 via a standard module connection.
- a server bus 927 - 1 can be any suitable bus, including a PCI type bus, but other embodiments can include a memory bus.
- a hardware accelerator module 908 can be implemented with one or more FPGAs 926 - 0 / 1 .
- hardware accelerator module 908 can include FPGA(s) 926 - 0 / 1 in which can be formed any of the following: a host bus interface 942 , an arbiter 944 , a scheduler circuit 948 , a classifier circuit 950 , and/or processing circuits 952 .
- a host bus interface 942 can be connected to server bus 927 - 1 , and can be capable of block data transfers over server bus 927 - 1 . Packets can be queued in a memory 918 .
- Memory 918 can be any suitable memory, including volatile and/or nonvolatile memory devices, separate from and/or integrated with FPGA(s) 926 - 0 / 1 .
- An arbiter 944 can provide access to resources (e.g., processing circuits 952 ) on the hardware accelerator module 908 to one or more requestors. If multiple requestors request access, an arbiter 944 can determine which requestor becomes the accessor and then pass data from the accessor to the resource, and the resource can begin executing processing on the data. After the data has been transferred to a resource, and the resource has completed execution, an arbiter 944 can transfer control to a different requestor, and this cycle can repeat for all available requestors. In the embodiment of FIG. 9 , arbiter 944 can notify other portions of hardware accelerator module 908 of incoming data. Arbiter 944 can input and output data via data ingress path 946 - 0 and data egress path 946 - 1 .
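The cycle-through-all-requestors behavior described above suggests a round-robin policy, though the text does not name one; the sketch below assumes round-robin for illustration. After each grant, the pointer advances past the winner so every asserting requestor is eventually served.

```python
class Arbiter:
    """Round-robin arbiter sketch (policy assumed, not specified by the
    source): grants one asserting requestor per cycle and rotates."""

    def __init__(self, requestors):
        self.requestors = list(requestors)
        self.next_idx = 0  # search starts here each cycle

    def grant(self, requests):
        """`requests` is the set of requestor names currently asserting.
        Returns the granted requestor (or None) and advances the pointer
        just past the winner, so grants rotate fairly."""
        n = len(self.requestors)
        for i in range(n):
            cand = self.requestors[(self.next_idx + i) % n]
            if cand in requests:
                self.next_idx = (self.next_idx + i + 1) % n
                return cand
        return None
```

A hardware arbiter would compute this combinationally in one cycle; the loop is the software equivalent of that priority rotation.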
- a scheduler circuit 948 can perform traffic management on incoming packets by categorizing them according to flow using session metadata. Packets from a certain source, relating to a certain traffic class, pertaining to a specific application, or flowing to a certain socket, are referred to as part of a session flow and can be classified using session metadata. In some embodiments, such classification can be performed by classifier circuit 950 . Packets can be queued for output in memory (e.g., 918 ) based on session priority.
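The flow categorization described above can be sketched by keying packets on session metadata. The classic 5-tuple is used here as the metadata; the patent's actual metadata may be richer (traffic class, application, socket), and the dict-based packet format is an assumption for illustration.

```python
from collections import defaultdict

def flow_key(pkt):
    """Session metadata assumed here is the 5-tuple: protocol number,
    source/destination IPs, source/destination ports."""
    return (pkt["proto"], pkt["src_ip"], pkt["dst_ip"],
            pkt["src_port"], pkt["dst_port"])

def categorize(packets):
    """Group incoming packets into per-session-flow queues, preserving
    arrival order within each flow."""
    queues = defaultdict(list)
    for pkt in packets:
        queues[flow_key(pkt)].append(pkt)
    return queues
```

Keeping per-flow arrival order is what makes the later reordering across queues safe: packets of one session flow never pass each other.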
- a scheduler circuit 948 can allocate a priority to each of many output queues (e.g., in 918 ) and carry out reordering of incoming packets to maintain persistence of session flows in these queues.
- a scheduler circuit 948 can be configured to control the scheduling of each of these persistent sessions in processing circuits 952 . Packets of a particular session flow can belong to a particular queue.
- a scheduler circuit 948 can control the prioritization of these queues such that they are arbitrated for handling by a processing resource (e.g., processing circuits 952 ) located downstream. Processing circuits 952 can be configured to allocate execution resources to a particular queue.
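One simple way to realize the queue prioritization just described is strict priority: always drain the highest-priority non-empty queue, FIFO within each queue. The patent does not commit to a discipline, so the sketch below is an assumed policy for illustration only.

```python
from collections import deque

class Scheduler:
    """Strict-priority scheduler over per-flow output queues (discipline
    assumed; the source does not specify one). Smaller priority number
    means higher priority; order within a queue is FIFO."""

    def __init__(self):
        self.queues = {}    # flow id -> deque of packets
        self.priority = {}  # flow id -> priority number

    def add_queue(self, flow, priority):
        self.queues[flow] = deque()
        self.priority[flow] = priority

    def enqueue(self, flow, pkt):
        self.queues[flow].append(pkt)

    def dequeue(self):
        """Return the next packet from the highest-priority non-empty
        queue, or None if all queues are empty."""
        ready = [f for f, q in self.queues.items() if q]
        if not ready:
            return None
        best = min(ready, key=lambda f: self.priority[f])
        return self.queues[best].popleft()
```

Strict priority can starve low-priority flows under load; a real traffic manager would more likely use a weighted or deficit-based variant, but the queue-selection structure is the same.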
- Embodiments contemplate multiple sessions running on processing circuits 952 , with portions of processing circuits 952 each handling data from a particular session flow resident in a queue established by the scheduler circuit 948 , to tightly integrate the scheduler circuit 948 and its downstream resources (e.g., 952 ). This can bring about persistence of session information across the traffic management and scheduling circuit 948 and processing circuits 952 .
- Processing circuits 952 can be capable of processing packet data.
- processing circuit 952 can be capable of handling packets of different application or transport sessions.
- processing circuits 952 can provide dedicated computing resources for handling, processing and/or terminating session flows.
- Processing circuits 952 can include any suitable circuits of the FPGA(s) 926 - 0 / 1 .
- processing circuits 952 can include processors, including CPU type processors.
- processing circuits 952 can include low power processors capable of executing general purpose instructions, including but not limited to: ARM, ARC, Tensilica, MIPS, StrongARM or any other suitable processor that serves the functions described herein.
- a hardware accelerated server 906 can receive network data packets from an external network. Based on their classification, the packets can be destined for a host processor 914 or processing circuits 952 on hardware accelerator module 908 .
- the network data packets can have certain characteristics, including transport protocol number, source and destination port numbers, source and destination IP addresses, for example.
- the network data packets can further have metadata that helps in their classification and/or management.
- any of multiple devices of the hardware accelerated server 906 can be used to redirect traffic to specific addresses.
- Such network data packets can be transferred to addresses where they can be handled by one or more processing circuits (e.g., 952 ).
- such transfers can be to physical addresses, thus logical entities can be removed from the processing, and a host processor 914 can be free from such packet handling.
- embodiments can be conceptualized as providing a “black box” to which specific network data can be fed for processing.
- session metadata can serve as the criteria by which packets are prioritized and scheduled and as such, incoming packets can be reordered based on their session metadata. This reordering of packets can occur in one or more buffers (e.g., 918 ) and can modify the traffic shape of these flows.
- the scheduling discipline chosen for this prioritization, or traffic management, can affect the traffic shape of flows and micro-flows through delay (buffering), bursting of traffic (buffering and bursting), smoothing of traffic (buffering and rate-limiting flows), dropping traffic (choosing data to discard so as to avoid exhausting the buffer), delay jitter (temporally shifting cells of a flow by different amounts), and by not admitting a connection (e.g., when existing service level agreements (SLAs) cannot be simultaneously guaranteed along with an additional flow's SLA).
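Of the shaping mechanisms listed above, smoothing via rate-limiting is commonly implemented as a token bucket: tokens accrue at a fixed rate up to a burst cap, and a packet is admitted only if enough tokens are available. The sketch below illustrates that mechanism; it is a generic technique, not the patent's specific design, and the API is an assumption.

```python
class TokenBucket:
    """Token-bucket rate limiter (a standard smoothing discipline, used
    here as an illustration): tokens accrue at `rate` units per second
    up to `burst`; admitting a packet of a given size spends that many
    tokens. Callers may buffer or drop packets that are not admitted."""

    def __init__(self, rate, burst):
        self.rate = rate      # token refill rate (units per second)
        self.burst = burst    # bucket capacity (maximum burst size)
        self.tokens = burst   # start full
        self.last = 0.0       # timestamp of the last admission check

    def allow(self, size, now):
        """Return True and spend tokens if `size` tokens are available
        at time `now`; otherwise return False."""
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False
```

The `burst` parameter is what distinguishes smoothing from hard pacing: it permits short bursts up to the bucket depth while bounding the long-run rate to `rate`.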
- a hardware accelerator module 908 can serve as part of a switch fabric, and provide traffic management with output queues (e.g., in 918 ), the access to which is arbitrated by a scheduling circuit 948 .
- Such output queues can be managed using a scheduling discipline that provides traffic management for incoming flows.
- the session flows queued in each of these queues can be sent out through an output port to a downstream network element.
Description
- This application is a continuation of U.S. patent application Ser. No. 13/900,318 filed May 22, 2013, which claims the benefit of U.S. Provisional Patent Application Nos. 61/650,373 filed May 22, 2012, 61/753,892 filed on Jan. 17, 2013, 61/753,895 filed on Jan. 17, 2013, 61/753,899 filed on Jan. 17, 2013, 61/753,901 filed on Jan. 17, 2013, 61/753,903 filed on Jan. 17, 2013, 61/753,904 filed on Jan. 17, 2013, 61/753,906 filed on Jan. 17, 2013, 61/753,907 filed on Jan. 17, 2013, 61/753,910 filed on Jan. 17, 2013, and is a continuation of U.S. patent application Ser. No. 15/283,287 filed Sep. 30, 2016, which is a continuation of International Application no. PCT/US2015/023730, filed Mar. 31, 2015, which claims the benefit of U.S. Provisional Patent Application No. 61/973,205 filed Mar. 31, 2014, and a continuation of International Application no. PCT/US2015/023746, filed Mar. 31, 2015, which claims the benefit of U.S. Provisional Patent Application Nos. 61/973,207 filed Mar. 31, 2014 and 61/976,471 filed Apr. 7, 2014. The contents of all of these applications are incorporated by reference herein.
- The present disclosure relates generally to network server systems, and more particularly to systems having servers with hardware accelerator components that can operate independently of server host processors, thus forming a hardware acceleration plane.
-
FIG. 1 is a block diagram of a server system according to an embodiment. -
FIG. 2 is a block diagram of a hardware accelerated server that can be included in embodiments. -
FIG. 3 is a block diagram of a hardware accelerator module that can be included in embodiments. -
FIG. 4 is a block diagram of a server system according to another embodiment. -
FIG. 5 is a block diagram of a hardware accelerated server that can be included in embodiments. -
FIG. 6 is a diagram of a server system according to embodiments. -
FIG. 7 is a diagram of a server system according to embodiments. -
FIG. 8 is a diagram showing one particular hardware accelerator module that can be included in embodiments. -
FIG. 9 is a diagram showing one particular hardware accelerated server that can be included in embodiments. - Embodiments disclosed herein include server systems having servers equipped with hardware accelerator modules. Hardware accelerator modules can form a mid-plane and accelerate the processing of network packet data independent of any host processors on the servers. Network packet processing can include, but is not limited to, classifying packets, encrypting packets and/or decrypting packets. Hardware accelerator modules can be attached to a bus in a server, and can include one or more programmable logic devices, such as field programmable gate array (FPGA) devices.
- Embodiments can also include a server system having servers interconnected to one another by network connections, where each server includes a host processor, a network interface device, and a hardware accelerator module. One or more hardware accelerator modules can be mounted in each server, and can include one or more programmable logic devices (e.g., FPGAs). The hardware accelerator modules can form a hardware acceleration plane for processing network packet data independent of the host processors. Further, network packet data can be transmitted between hardware acceleration modules independent of the host processors.
-
FIG. 1 shows a server system 100 according to an embodiment. A server system 100 can include servers equipped with hardware accelerator modules that can process network packet data received by the system 100. A server system 100 can be organized into groups of servers 126-0/1. In some embodiments, groups of servers 126-0/1 can be a physical organization of servers, such as racks in which the server components are mounted. However, in other embodiments such a grouping can be a logical grouping. - A server group 126-0 can include a switching
tier 102, a mid-tier 104, and one or more server tiers 110. A switching tier 102 can provide network connections between various components of the system 100. In a particular embodiment, a switching tier 102 can be formed by a top-of-rack (TOR) switch device. - A mid-tier 104 can be formed by a number of hardware accelerator modules, which are described in more detail below. In some embodiments, a mid-tier 104 can be conceptualized, architecturally, as being placed near a top-of-rack. A mid-tier 104 can perform any number of packet processing tasks, as will also be described in more detail below. A
server tier 110 can include server components (apart from the hardware accelerator modules), including host processors. - A
system 100 can include various data communication paths for interconnecting the various tiers 102/104/110. Such communication paths can include intra-group switch/server connections 131-0, which can provide connections between a switching tier 102 and server tier(s) 110 of the same group 126-0; inter-group switch/server connections 131-1, which can provide connections between a switching tier 102 and server tiers of different groups 126-0/1; intra-group switch/module connections 133-0, which can provide connections between a switching tier 102 and hardware accelerator modules of the same group 126-0; inter-group switch/module connections 133-1, which can provide connections between a switching tier 102 and hardware accelerator modules of different groups 126-0/1; intra-group module/server connections 135-0, which can provide connections between hardware accelerator modules and server components of a same group 126-0; and inter-group module/server connections 135-1, which can provide connections between hardware accelerator modules and server components of different groups 126-0/1. -
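The six connection types just enumerated can be seen at a glance by modeling each one as a labeled edge between tiers. The sketch below uses the figure's reference numerals as labels, but the graph model itself is an assumption for illustration, not part of the patent:

```python
# Sketch: the intra-/inter-group connection types of FIG. 1 recorded as
# labeled edges. Tier names ("switching", "module", "server") are
# illustrative shorthand for the tiers described in the text.
CONNECTIONS = {
    "131-0": ("switching", "server", "intra"),
    "131-1": ("switching", "server", "inter"),
    "133-0": ("switching", "module", "intra"),
    "133-1": ("switching", "module", "inter"),
    "135-0": ("module", "server", "intra"),
    "135-1": ("module", "server", "inter"),
}

def links_between(tier_a, tier_b):
    """Return the reference labels of all connections joining two tiers."""
    return sorted(label for label, (a, b, _) in CONNECTIONS.items()
                  if {a, b} == {tier_a, tier_b})

switch_module_links = links_between("switching", "module")
```

Note that every pair of tiers is joined by both an intra-group and an inter-group variant, which is what lets hardware accelerator modules exchange data across racks without involving host processors.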
FIG. 2 is a diagram of a server that can be included in embodiments, including the embodiment shown in FIG. 1. A server 206 can include one or more host processors 214 and one or more hardware accelerator modules 208. A server 206 can receive network packet data over a first data path 209 from a network data packet source 212, which can be a TOR switch in the embodiment shown. - A
hardware accelerator module 208 can be connected to a host processor 214 by a second data path 211. In some embodiments, a second data path 211 can include a bus formed on the server 206. In particular embodiments, second data path 211 can be a memory mapped bus. - A
hardware accelerator module 208 can enable network data processing tasks to be completely offloaded for execution by the hardware accelerator module 208. In this way, a hardware accelerator module 208 can receive and process network packet data independent of host processor 214. -
FIG. 3 is a block diagram of a hardware accelerator module 308 that can be included in any of the embodiments shown herein. A hardware accelerator module 308 can include one or more programmable logic devices 316 that can be connected to random access memory (RAM) 318. In particular embodiments, a programmable logic device 316 can be a field programmable gate array (FPGA) device (e.g., an FPGA integrated circuit (IC)), and RAM 318 can include one or more dynamic RAM (DRAM) ICs. An FPGA 316 and RAM 318 can be in separate IC packages, or can be integrated in the same IC package. -
Programmable logic device 316 can receive network packet data over a first connection 309. Programmable logic device 316 can be connected to RAM 318 by a bus 320, which in particular embodiments can be a memory mapped bus. In some embodiments, a programmable logic device 316 can be connected to another device by a third connection 320. Such another device could include another programmable logic device or processor, as but two of many possible examples. -
FIG. 4 is a block diagram of a server system 400 according to another embodiment. A system 400 can be one implementation of that shown in FIG. 1. A system 400 can include multiple racks (one shown as 426) each connected through respective TOR switches 402. TOR switches 402 can communicate with each other through an aggregation layer 430. Aggregation layer 430 may include several switches and routers and can act as the interface between an external network and the server racks 426. -
Server racks 426 can each include a number of servers. All or some of the servers in each rack 426 can be a hardware accelerated server (one shown as 406). Each hardware accelerated server 406 can include one or more network interfaces 424, one or more host processors 414, and one or more hardware accelerator modules 408, according to any of the embodiments described herein, or equivalents. -
FIG. 5 is a block diagram of a hardware accelerated server 506 that can be included in embodiments. A server 506 can include a network interface 524, one or more hardware accelerator modules 508, and one or more host processors 514. -
Network interface 524 can receive network packet data from a network or another computer or virtual machine. In the very particular embodiment shown, a network interface 524 can include a network interface card (NIC). Network interface 524 can be connected to a host processor 514 and hardware accelerator module 508 by one or more buses 527. In some embodiments, bus(es) 527 can include a peripheral component interconnect (PCI) type bus. In very particular embodiments, a network interface 524 can be a NIC implemented as a PCI and/or PCI express (PCIe) device connected with a host motherboard via a PCI or PCIe bus (included in 527). - A
host processor 514 can be any suitable processor device. In particular embodiments, a host processor 514 can include processors with "brawny" cores, such as x86 based processors, as but one example. - A
hardware accelerator module 508 can be connected to bus(es) 527 of server 506. In particular embodiments, hardware accelerator module 508 can be a circuit board that inserts into a bus socket on a main board of a server 506. As shown in FIG. 5, a hardware accelerator module 508 can include one or more FPGAs 526. FPGA(s) 526 can include circuits capable of receiving network packet data from bus(es) 527, and can process network packet data in any of various ways described herein. FPGA(s) 526 can also include circuits, or be connected to circuits, which can access data stored in buffer memories of the hardware accelerator module 508. - In some embodiments,
hardware accelerator module 508 can serve as part of a switch fabric. In such embodiments, hardware accelerator modules can include managed output queues. Session flows queued in each such queue can be sent out through an output port to a downstream network element of the system in which the server is employed. -
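The managed output queues described here can be pictured with a small sketch. The queue structure and names below are hypothetical, not taken from the patent: packets are grouped by session flow and drained in FIFO order toward a downstream port.

```python
from collections import deque

class OutputQueues:
    """Sketch of per-session-flow output queues: packets are grouped
    by flow and drained in FIFO order toward a downstream network
    element (illustrative only)."""

    def __init__(self):
        self.queues = {}  # flow id -> FIFO of packets

    def enqueue(self, flow_id, packet):
        self.queues.setdefault(flow_id, deque()).append(packet)

    def drain(self, flow_id):
        """Send out all queued packets of one session flow, in order."""
        q = self.queues.get(flow_id, deque())
        sent = list(q)
        q.clear()
        return sent

oq = OutputQueues()
oq.enqueue("flow-a", "p1")
oq.enqueue("flow-b", "p2")
oq.enqueue("flow-a", "p3")
sent_a = oq.drain("flow-a")  # preserves arrival order within the flow
```

Keeping one queue per session flow is what preserves in-order delivery per flow even when many flows are interleaved on the same output port.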
FIG. 6 is a diagram showing a server system 600 according to another embodiment. A server system 600 can include a network packet data source 630, a mid-plane formed from hardware accelerator modules, hereinafter referred to as a hardware acceleration plane 604, and a plane formed by host processors, hereinafter referred to as a host processor plane 634. A network packet data source 630 can be a network, including the Internet, and/or can include an aggregation layer, like that shown as 430 in FIG. 4. - It is understood that
hardware acceleration plane 604 and host processor plane 634 can be a logical representation of system resources. In particular, components of the same server can form different planes of the system. As but one particular example, a system 600 can include hardware accelerated servers (one shown as 606) that include one or more hardware acceleration modules 608-0 and one or more host processors 614-0. Such hardware accelerated servers can take the form of any of those shown herein, or equivalents. -
FIG. 6 shows two of various possible network data processing paths (630, 632) that can be executed in a system 600. It is understood that such processing paths (630, 632) are provided by way of example, and should not be construed as limiting. Processing path 630 can include processing by two hardware accelerator modules 608-1/2. In some embodiments, such processing can be independent of any host processor (i.e., independent of host processor plane 634). - In contrast, processing
path 632 can include processing by a hardware accelerator module 608-3 and a host processor 614-1. It is understood that hardware accelerator module 608-3 and host processor 614-1 can be in the same server (i.e., a same hardware accelerated server), or can be in different servers (e.g., hardware accelerator module 608-3 is in one hardware accelerated server, while host processor 614-1 is in a different server, which may or may not be a hardware accelerated server). -
FIG. 7 is a diagram of a system 700 according to another embodiment. In a particular embodiment, system 700 can be one implementation of that shown in FIG. 6. A system 700 can provide a mid-plane switch architecture. One or more server units 706-0/1 can be equipped with hardware accelerator modules 708-0/1, and thus can be considered hardware accelerated servers. Each hardware accelerator module 708-0/1 can act as a virtual switch 736-0/1 that is capable of receiving and forwarding packets. All the virtual switches 736-0/1 can be connected to each other, which can form a hardware acceleration plane 704. - In some embodiments, ingress packets can be examined and classified by the hardware accelerator modules 708-0/1. Hardware accelerator modules 708-0/1 can be capable of processing a relatively large number of packets. Accordingly, in some embodiments, a
system 700 can include TOR switches (not shown) configured in conventional tree-like topologies, which can forward packets based on MAC address. Hardware accelerator modules 708-0/1 can perform deep packet inspection and classify packets with much more granularity before they are forwarded to other locations. - In certain embodiments, the role of layer 2 TOR switches can be limited to forwarding packets to hardware accelerator modules 708-0/1 such that essentially all the packet processing can be handled by the hardware accelerator modules 708-0/1. In such embodiments, progressively more server units can be equipped with hardware accelerator modules 708-0/1 to scale the packet handling capabilities instead of upgrading the TOR switches (which can be more costly).
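As an illustration of the virtual-switch role described above, the following minimal sketch (hypothetical class and field names, not from the patent) implements the basic receive-and-forward behavior of a MAC-learning switch: source addresses are learned on ingress, known destinations are forwarded to a single port, and unknown destinations are flooded.

```python
class VirtualSwitch:
    """Minimal MAC-learning switch sketch (illustrative only)."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}  # MAC address -> port it was learned on

    def receive(self, in_port, src_mac, dst_mac):
        # Learn where the source MAC lives.
        self.mac_table[src_mac] = in_port
        # Forward to the known port, or flood all other ports.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return sorted(self.ports - {in_port})

sw = VirtualSwitch(ports=[1, 2, 3])
flooded = sw.receive(in_port=1, src_mac="aa", dst_mac="bb")  # unknown: flood
sw.receive(in_port=2, src_mac="bb", dst_mac="aa")            # learn "bb" at port 2
direct = sw.receive(in_port=1, src_mac="aa", dst_mac="bb")   # known: forward
```

A hardware accelerator module performing deep packet inspection would refine this decision with far finer-grained classification than the MAC table shown here.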
- While embodiments herein show hardware accelerator modules having particular components, such arrangements should not be construed as limiting. Based on the descriptions herein, a person skilled in the relevant art will recognize that other hardware components are within the spirit and scope of the embodiments described herein.
-
FIG. 8 is a diagram of a hardware accelerator module 808 according to one particular embodiment. A hardware accelerator module 808 can include a printed circuit board 838 having a physical interface 840. Physical interface 840 can enable hardware accelerator module 808 to be inserted into a slot on a server board. Mounted on the hardware accelerator module 808 can be circuit components 826, which can include programmable logic devices, such as FPGA devices. In addition or alternatively, circuit components 826 can include any of: memory, including both volatile and nonvolatile memory; a programmable switch (e.g., network switch); and/or one or more processor cores. - In addition,
hardware accelerator module 808 can include one or more network I/Fs 824. A network I/F 824 can enable a physical connection to a network. In some embodiments, this can include a wired network connection compatible with IEEE 802 and related standards. However, in other embodiments, a network I/F 824 can be any other suitable wired connection and/or a wireless connection. - Referring now to
FIG. 9, a hardware accelerated server 906, according to one particular embodiment, is shown in a block diagram. A hardware accelerated server 906 can include a network I/F 924, a bus system 927, a host processor 914, and a hardware accelerator module 908. A network I/F 924 can receive packet or other I/O data from an external source. In some embodiments, network I/F 924 can include physical or virtual functions to receive a packet or other I/O data from a network or another computer or virtual machine. A network I/F 924 can include, but is not limited to, PCI and/or PCIe devices connecting with a server motherboard via PCI or PCIe bus (e.g., 927-0). Examples of network I/Fs 924 can include, but are not limited to, a NIC, a host bus adapter, a converged network adapter, or an ATM network interface. - In some embodiments, a hardware accelerated
server 906 can employ an abstraction scheme that allows multiple logical entities to access the same network I/F 924. In such an arrangement, a network I/F 924 can be virtualized to provide for multiple virtual devices, each of which can perform some of the functions of a physical network I/F. Such IO virtualization can redirect network packet traffic to different addresses of the hardware accelerated server 906. - In the very particular embodiment shown, a network I/
F 924 can include a NIC having an input buffer 924 a and, in some embodiments, an I/O virtualization function 924 b. While a network I/F 924 can be configured to trigger host processor interrupts in response to incoming packets, in some embodiments, such interrupts can be disabled, thereby reducing processing overhead for a host processor 914. - In some embodiments, a hardware accelerated
server 906 can also include an I/O management unit 940 which can translate virtual addresses to corresponding physical addresses of the server 906. This can enable data to be transferred between various components of the hardware accelerated server 906. - A
host processor 914 can perform certain processing tasks on network packet data; however, as noted herein, other network packet data processing tasks can be performed by hardware accelerator module 908 independent of host processor 914. In some embodiments, a host processor 914 can be a "brawny core" type processor (e.g., an x86 or any other processor capable of handling "heavy touch" computational operations). - A
hardware accelerator module 908 can interface with a server bus 927-1 via a standard module connection. A server bus 927-1 can be any suitable bus, including a PCI type bus, but other embodiments can include a memory bus. A hardware accelerator module 908 can be implemented with one or more FPGAs 926-0/1. In the embodiment of FIG. 9, hardware accelerator module 908 can include FPGA(s) 926-0/1 in which can be formed any of the following: a host bus interface 942, an arbiter 944, a scheduler circuit 948, a classifier circuit 950, and/or processing circuits 952. - A
host bus interface 942 can be connected to server bus 927-1, and can be capable of block data transfers over server bus 927-1. Packets can be queued in a memory 918. Memory 918 can be any suitable memory, including volatile and/or nonvolatile memory devices, separate from and/or integrated with FPGA(s) 926-0/1. - An
arbiter 944 can provide access to resources (e.g., processing circuits 952) on the hardware accelerator module 908 to one or more requestors. If multiple requestors request access, an arbiter 944 can determine which requestor becomes the accessor and then pass data from the accessor to the resource, and the resource can begin executing processing on the data. After the data has been transferred to a resource, and the resource has completed execution, an arbiter 944 can transfer control to a different requestor, and this cycle can repeat for all available requestors. In the embodiment of FIG. 9, arbiter 944 can notify other portions of hardware accelerator module 908 of incoming data. Arbiter 944 can input and output data via data ingress path 946-0 and data egress path 946-1. - In some embodiments, a
scheduler circuit 948 can perform traffic management on incoming packets by categorizing them according to flow using session metadata. Packets from a certain source, relating to a certain traffic class, pertaining to a specific application, or flowing to a certain socket, are referred to as part of a session flow and can be classified using session metadata. In some embodiments, such classification can be performed by classifier circuit 950. Packets can be queued for output in memory (e.g., 918) based on session priority. - In particular embodiments, a
scheduler circuit 948 can allocate a priority to each of many output queues (e.g., in 918) and carry out reordering of incoming packets to maintain persistence of session flows in these queues. A scheduler circuit 948 can be configured to control the scheduling of each of these persistent sessions in processing circuits 952. Packets of a particular session flow can belong to a particular queue. A scheduler circuit 948 can control the prioritization of these queues such that they are arbitrated for handling by a processing resource (e.g., processing circuits 952) located downstream. Processing circuits 952 can be configured to allocate execution resources to a particular queue. Embodiments contemplate multiple sessions running on processing circuits 952, with portions of processing circuits 952 each handling data from a particular session flow resident in a queue established by the scheduler circuit 948, to tightly integrate the scheduler circuit 948 and its downstream resources (e.g., 952). This can bring about persistence of session information across the traffic management and scheduling circuit 948 and processing circuits 952. Processing circuits 952 can be capable of processing packet data. In particular embodiments, processing circuits 952 can be capable of handling packets of different application or transport sessions. According to some embodiments, processing circuits 952 can provide dedicated computing resources for handling, processing and/or terminating session flows. Processing circuits 952 can include any suitable circuits of the FPGA(s) 926-0/1. However, in some embodiments, processing circuits 952 can include processors, including CPU type processors. In particular embodiments, processing circuits 952 can include low power processors capable of executing general purpose instructions, including but not limited to: ARM, ARC, Tensilica, MIPS, StrongARM or any other suitable processor that serves the functions described herein. - In operation, a hardware accelerated
server 906 can receive network data packets from an external network. Based on their classification, the packets can be destined for a host processor 914 or processing circuits 952 on hardware accelerator module 908. The network data packets can have certain characteristics, including transport protocol number, source and destination port numbers, and source and destination IP addresses, for example. In some embodiments, the network data packets can further have metadata that helps in their classification and/or management. - In some embodiments, any of multiple devices of the hardware accelerated
server 906 can be used to redirect traffic to specific addresses. Such network data packets can be transferred to addresses where they can be handled by one or more processing circuits (e.g., 952). In particular embodiments, such transfers can be to physical addresses, thus logical entities can be removed from the processing, and a host processor 914 can be free from such packet handling. Accordingly, embodiments can be conceptualized as providing a "black box" to which specific network data can be fed for processing. - In some embodiments, session metadata can serve as the criteria by which packets are prioritized and scheduled and, as such, incoming packets can be reordered based on their session metadata. This reordering of packets can occur in one or more buffers (e.g., 918) and can modify the traffic shape of these flows. The scheduling discipline chosen for this prioritization, or traffic management, can affect the traffic shape of flows and micro-flows through delay (buffering), bursting of traffic (buffering and bursting), smoothing of traffic (buffering and rate-limiting flows), dropping traffic (choosing data to discard so as to avoid exhausting the buffer), delay jitter (temporally shifting cells of a flow by different amounts) and by not admitting a connection (e.g., when existing service level agreements (SLAs) cannot be simultaneously guaranteed alongside an additional flow's SLA).
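Two of the disciplines listed above can be made concrete with a short sketch. The names below are hypothetical and this is one of many possible schemes, not the patent's specified implementation: the first class reorders buffered packets by session priority while keeping arrival order within a session, and the second smooths a flow with a token-bucket rate limiter.

```python
import heapq

class SessionScheduler:
    """Reorders buffered packets by session priority while preserving
    arrival order within a session (lower number = more urgent)."""

    def __init__(self):
        self.heap, self.seq = [], 0

    def buffer(self, priority, packet):
        # The arrival sequence number breaks priority ties in FIFO order.
        heapq.heappush(self.heap, (priority, self.seq, packet))
        self.seq += 1

    def next_packet(self):
        return heapq.heappop(self.heap)[2] if self.heap else None

class TokenBucket:
    """Smooths a flow by rate-limiting: a packet passes only if enough
    tokens have accrued; otherwise it stays buffered (or is dropped)."""

    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def allow(self, now, size=1.0):
        # Accrue tokens for elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

sched = SessionScheduler()
sched.buffer(2, "bulk-1")
sched.buffer(0, "voice-1")   # priority 0 = most urgent
sched.buffer(2, "bulk-2")
sched.buffer(0, "voice-2")
order = [sched.next_packet() for _ in range(4)]

tb = TokenBucket(rate=1.0, burst=2.0)
burst_verdicts = [tb.allow(0.0), tb.allow(0.0), tb.allow(0.0)]  # third exceeds the burst
later_ok = tb.allow(1.5)  # tokens replenish with elapsed time
```

Delaying, dropping, or admission control would be built from the same primitives: the verdicts of the rate limiter decide whether a packet is forwarded, held in a buffer such as 918, or discarded.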
- In some embodiments, a
hardware accelerator module 908 can serve as part of a switch fabric, and provide traffic management with output queues (e.g., in 918), the access to which is arbitrated by ascheduling circuit 948. Such output queues can be managed using a scheduling that provides traffic management for incoming flows. The session flows queued in each of these queues can be sent out through an output port to a downstream network element. - It should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
- It is also understood that the embodiments of the invention may be practiced in the absence of an element and/or step not specifically disclosed. That is, an inventive feature of the invention may be elimination of an element.
- Accordingly, while the various aspects of the particular embodiments set forth herein have been described in detail, the present invention could be subject to various changes, substitutions, and alterations without departing from the spirit and scope of the invention.
Claims (23)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/396,318 US20170237672A1 (en) | 2012-05-22 | 2016-12-30 | Network server systems, architectures, components and related methods |
US16/129,762 US11082350B2 (en) | 2012-05-22 | 2018-09-12 | Network server systems, architectures, components and related methods |
US18/085,196 US20230231811A1 (en) | 2012-05-22 | 2022-12-20 | Systems, devices and methods with offload processing devices |
Applications Claiming Priority (18)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261650373P | 2012-05-22 | 2012-05-22 | |
US201361753910P | 2013-01-17 | 2013-01-17 | |
US201361753901P | 2013-01-17 | 2013-01-17 | |
US201361753895P | 2013-01-17 | 2013-01-17 | |
US201361753903P | 2013-01-17 | 2013-01-17 | |
US201361753904P | 2013-01-17 | 2013-01-17 | |
US201361753899P | 2013-01-17 | 2013-01-17 | |
US201361753907P | 2013-01-17 | 2013-01-17 | |
US201361753906P | 2013-01-17 | 2013-01-17 | |
US201361753892P | 2013-01-17 | 2013-01-17 | |
US13/900,318 US9558351B2 (en) | 2012-05-22 | 2013-05-22 | Processing structured and unstructured data using offload processors |
US201461973205P | 2014-03-31 | 2014-03-31 | |
US201461973207P | 2014-03-31 | 2014-03-31 | |
US201461976471P | 2014-04-07 | 2014-04-07 | |
PCT/US2015/023746 WO2015153699A1 (en) | 2014-03-31 | 2015-03-31 | Computing systems, elements and methods for processing unstructured data |
PCT/US2015/023730 WO2015153693A1 (en) | 2014-03-31 | 2015-03-31 | Interface, interface methods, and systems for operating memory bus attached computing elements |
US15/283,287 US20170109299A1 (en) | 2014-03-31 | 2016-09-30 | Network computing elements, memory interfaces and network connections to such elements, and related systems |
US15/396,318 US20170237672A1 (en) | 2012-05-22 | 2016-12-30 | Network server systems, architectures, components and related methods |
Related Parent Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/900,318 Continuation US9558351B2 (en) | 2012-05-22 | 2013-05-22 | Processing structured and unstructured data using offload processors |
US15/283,287 Continuation US20170109299A1 (en) | 2012-05-22 | 2016-09-30 | Network computing elements, memory interfaces and network connections to such elements, and related systems |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/129,762 Continuation-In-Part US11082350B2 (en) | 2012-05-22 | 2018-09-12 | Network server systems, architectures, components and related methods |
US18/085,196 Continuation US20230231811A1 (en) | 2012-05-22 | 2022-12-20 | Systems, devices and methods with offload processing devices |
Publications (1)
Publication Number | Publication Date |
---|---|
US20170237672A1 true US20170237672A1 (en) | 2017-08-17 |
Family
ID=49622482
Family Applications (8)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/900,273 Abandoned US20130318280A1 (en) | 2012-05-22 | 2013-05-22 | Offloading of computation for rack level servers and corresponding methods and systems |
US13/900,367 Abandoned US20140165196A1 (en) | 2012-05-22 | 2013-05-22 | Efficient packet handling, redirection, and inspection using offload processors |
US13/900,333 Abandoned US20130318269A1 (en) | 2012-05-22 | 2013-05-22 | Processing structured and unstructured data using offload processors |
US13/900,359 Expired - Fee Related US9286472B2 (en) | 2012-05-22 | 2013-05-22 | Efficient packet handling, redirection, and inspection using offload processors |
US13/900,262 Abandoned US20130318268A1 (en) | 2012-05-22 | 2013-05-22 | Offloading of computation for rack level servers and corresponding methods and systems |
US13/900,318 Active 2034-01-03 US9558351B2 (en) | 2012-05-22 | 2013-05-22 | Processing structured and unstructured data using offload processors |
US15/396,318 Abandoned US20170237672A1 (en) | 2012-05-22 | 2016-12-30 | Network server systems, architectures, components and related methods |
US15/396,330 Active - Reinstated US10212092B2 (en) | 2012-05-22 | 2016-12-30 | Architectures and methods for processing data in parallel using offload processing modules insertable into servers |
Family Applications Before (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/900,273 Abandoned US20130318280A1 (en) | 2012-05-22 | 2013-05-22 | Offloading of computation for rack level servers and corresponding methods and systems |
US13/900,367 Abandoned US20140165196A1 (en) | 2012-05-22 | 2013-05-22 | Efficient packet handling, redirection, and inspection using offload processors |
US13/900,333 Abandoned US20130318269A1 (en) | 2012-05-22 | 2013-05-22 | Processing structured and unstructured data using offload processors |
US13/900,359 Expired - Fee Related US9286472B2 (en) | 2012-05-22 | 2013-05-22 | Efficient packet handling, redirection, and inspection using offload processors |
US13/900,262 Abandoned US20130318268A1 (en) | 2012-05-22 | 2013-05-22 | Offloading of computation for rack level servers and corresponding methods and systems |
US13/900,318 Active 2034-01-03 US9558351B2 (en) | 2012-05-22 | 2013-05-22 | Processing structured and unstructured data using offload processors |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/396,330 Active - Reinstated US10212092B2 (en) | 2012-05-22 | 2016-12-30 | Architectures and methods for processing data in parallel using offload processing modules insertable into servers |
Country Status (1)
Country | Link |
---|---|
US (8) | US20130318280A1 (en) |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160321113A1 (en) * | 2015-04-30 | 2016-11-03 | Virtual Open Systems | Virtualization manager for reconfigurable hardware accelerators |
US20180083864A1 (en) * | 2015-05-29 | 2018-03-22 | Huawei Technologies Co., Ltd. | Data processing method and apparatus |
WO2019092593A1 (en) * | 2017-11-08 | 2019-05-16 | Mellanox Technologies, Ltd. | Nic with programmable pipeline |
US10320677B2 (en) * | 2017-01-02 | 2019-06-11 | Microsoft Technology Licensing, Llc | Flow control and congestion management for acceleration components configured to accelerate a service |
US10326696B2 (en) | 2017-01-02 | 2019-06-18 | Microsoft Technology Licensing, Llc | Transmission of messages by acceleration components configured to accelerate a service |
US10382350B2 (en) | 2017-09-12 | 2019-08-13 | Mellanox Technologies, Ltd. | Maintaining packet order in offload of packet processing functions |
US10708240B2 (en) | 2017-12-14 | 2020-07-07 | Mellanox Technologies, Ltd. | Offloading communication security operations to a network interface controller |
US10715451B2 (en) | 2015-05-07 | 2020-07-14 | Mellanox Technologies, Ltd. | Efficient transport flow processing on an accelerator |
US10824469B2 (en) | 2018-11-28 | 2020-11-03 | Mellanox Technologies, Ltd. | Reordering avoidance for flows during transition between slow-path handling and fast-path handling |
CN112054971A (en) * | 2019-06-06 | 2020-12-08 | 英业达科技有限公司 | Servo and exchanger system and operation method thereof |
US11005771B2 (en) | 2017-10-16 | 2021-05-11 | Mellanox Technologies, Ltd. | Computational accelerator for packet payload operations |
US20210194831A1 (en) * | 2019-12-20 | 2021-06-24 | Board Of Trustees Of The University Of Illinois | Accelerating distributed reinforcement learning with in-switch computing |
US11184439B2 (en) | 2019-04-01 | 2021-11-23 | Mellanox Technologies, Ltd. | Communication with accelerator via RDMA-based network adapter |
US11502948B2 (en) | 2017-10-16 | 2022-11-15 | Mellanox Technologies, Ltd. | Computational accelerator for storage operations |
US11558175B2 (en) | 2020-08-05 | 2023-01-17 | Mellanox Technologies, Ltd. | Cryptographic data communication apparatus |
US11909856B2 (en) | 2020-08-05 | 2024-02-20 | Mellanox Technologies, Ltd. | Cryptographic data communication apparatus |
US11934333B2 (en) | 2021-03-25 | 2024-03-19 | Mellanox Technologies, Ltd. | Storage protocol emulation in a peripheral device |
US11934658B2 (en) | 2021-03-25 | 2024-03-19 | Mellanox Technologies, Ltd. | Enhanced storage protocol emulation in a peripheral device |
Families Citing this family (117)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7490325B2 (en) | 2004-03-13 | 2009-02-10 | Cluster Resources, Inc. | System and method for providing intelligent pre-staging of data in a compute environment |
US8782654B2 (en) | 2004-03-13 | 2014-07-15 | Adaptive Computing Enterprises, Inc. | Co-allocating a reservation spanning different compute resources types |
US20070266388A1 (en) | 2004-06-18 | 2007-11-15 | Cluster Resources, Inc. | System and method for providing advanced reservations in a compute environment |
US8176490B1 (en) | 2004-08-20 | 2012-05-08 | Adaptive Computing Enterprises, Inc. | System and method of interfacing a workload manager and scheduler with an identity manager |
US8271980B2 (en) | 2004-11-08 | 2012-09-18 | Adaptive Computing Enterprises, Inc. | System and method of providing system jobs within a compute environment |
US8863143B2 (en) | 2006-03-16 | 2014-10-14 | Adaptive Computing Enterprises, Inc. | System and method for managing a hybrid compute environment |
US9075657B2 (en) | 2005-04-07 | 2015-07-07 | Adaptive Computing Enterprises, Inc. | On-demand access to compute resources |
US9231886B2 (en) | 2005-03-16 | 2016-01-05 | Adaptive Computing Enterprises, Inc. | Simple integration of an on-demand compute environment |
US8041773B2 (en) | 2007-09-24 | 2011-10-18 | The Research Foundation Of State University Of New York | Automatic clustering for self-organizing grids |
US9077654B2 (en) | 2009-10-30 | 2015-07-07 | Iii Holdings 2, Llc | System and method for data center security enhancements leveraging managed server SOCs |
US8599863B2 (en) | 2009-10-30 | 2013-12-03 | Calxeda, Inc. | System and method for using a multi-protocol fabric module across a distributed server interconnect fabric |
US9054990B2 (en) | 2009-10-30 | 2015-06-09 | Iii Holdings 2, Llc | System and method for data center security enhancements leveraging server SOCs or server fabrics |
US9465771B2 (en) | 2009-09-24 | 2016-10-11 | Iii Holdings 2, Llc | Server on a chip and node cards comprising one or more of same |
US20110103391A1 (en) | 2009-10-30 | 2011-05-05 | Smooth-Stone, Inc. C/O Barry Evans | System and method for high-performance, low-power data center interconnect fabric |
US9876735B2 (en) | 2009-10-30 | 2018-01-23 | Iii Holdings 2, Llc | Performance and power optimized computer system architectures and methods leveraging power optimized tree fabric interconnect |
US20130107444A1 (en) | 2011-10-28 | 2013-05-02 | Calxeda, Inc. | System and method for flexible storage and networking provisioning in large scalable processor installations |
US9311269B2 (en) | 2009-10-30 | 2016-04-12 | Iii Holdings 2, Llc | Network proxy for high-performance, low-power data center interconnect fabric |
US9680770B2 (en) | 2009-10-30 | 2017-06-13 | Iii Holdings 2, Llc | System and method for using a multi-protocol fabric module across a distributed server interconnect fabric |
US11720290B2 (en) | 2009-10-30 | 2023-08-08 | Iii Holdings 2, Llc | Memcached server functionality in a cluster of data processing nodes |
US9648102B1 (en) * | 2012-12-27 | 2017-05-09 | Iii Holdings 2, Llc | Memcached server functionality in a cluster of data processing nodes |
US10877695B2 (en) | 2009-10-30 | 2020-12-29 | Iii Holdings 2, Llc | Memcached server functionality in a cluster of data processing nodes |
US9092594B2 (en) | 2011-10-31 | 2015-07-28 | Iii Holdings 2, Llc | Node card management in a modular and large scalable server system |
US9495308B2 (en) | 2012-05-22 | 2016-11-15 | Xockets, Inc. | Offloading of computation for rack level servers and corresponding methods and systems |
US20130318280A1 (en) | 2012-05-22 | 2013-11-28 | Xockets IP, LLC | Offloading of computation for rack level servers and corresponding methods and systems |
US10270709B2 (en) | 2015-06-26 | 2019-04-23 | Microsoft Technology Licensing, Llc | Allocating acceleration component functionality for supporting services |
JP5939123B2 (en) * | 2012-10-09 | 2016-06-22 | 富士通株式会社 | Execution control program, execution control method, and information processing apparatus |
US10289418B2 (en) | 2012-12-27 | 2019-05-14 | Nvidia Corporation | Cooperative thread array granularity context switch during trap handling |
US9448837B2 (en) * | 2012-12-27 | 2016-09-20 | Nvidia Corporation | Cooperative thread array granularity context switch during trap handling |
US10311014B2 (en) * | 2012-12-28 | 2019-06-04 | Iii Holdings 2, Llc | System, method and computer readable medium for offloaded computation of distributed application protocols within a cluster of data processing nodes |
US9250954B2 (en) | 2013-01-17 | 2016-02-02 | Xockets, Inc. | Offload processor modules for connection to system memory, and corresponding methods and systems |
WO2015041706A1 (en) * | 2013-09-23 | 2015-03-26 | Mcafee, Inc. | Providing a fast path between two entities |
KR20150033453A (en) * | 2013-09-24 | 2015-04-01 | 주식회사 엘지씨엔에스 | Method of big data processing, apparatus performing the same and storage media storing the same |
US9811467B2 (en) * | 2014-02-03 | 2017-11-07 | Cavium, Inc. | Method and an apparatus for pre-fetching and processing work for processor cores in a network processor |
US10459767B2 (en) * | 2014-03-05 | 2019-10-29 | International Business Machines Corporation | Performing data analytics utilizing a user configurable group of reusable modules |
CN103905337B (en) * | 2014-03-31 | 2018-01-23 | 华为技术有限公司 | Network resource processing apparatus, method, and system |
US9383989B1 (en) | 2014-06-16 | 2016-07-05 | Symantec Corporation | Systems and methods for updating applications |
US20160026605A1 (en) * | 2014-07-28 | 2016-01-28 | Emulex Corporation | Registrationless transmit onload rdma |
US10261817B2 (en) * | 2014-07-29 | 2019-04-16 | Nxp Usa, Inc. | System on a chip and method for a controller supported virtual machine monitor |
US10990288B2 (en) * | 2014-08-01 | 2021-04-27 | Software Ag Usa, Inc. | Systems and/or methods for leveraging in-memory storage in connection with the shuffle phase of MapReduce |
US9697114B2 (en) * | 2014-08-17 | 2017-07-04 | Mikhael Lerman | Netmory |
US9397952B2 (en) * | 2014-09-05 | 2016-07-19 | Futurewei Technologies, Inc. | Segment based switching architecture with hybrid control in SDN |
US9858104B2 (en) * | 2014-09-24 | 2018-01-02 | Pluribus Networks, Inc. | Connecting fabrics via switch-to-switch tunneling transparent to network servers |
US10095654B2 (en) * | 2014-09-30 | 2018-10-09 | International Business Machines Corporation | Mapping and reducing |
US9762508B2 (en) | 2014-10-02 | 2017-09-12 | Microsoft Technology Licensing, Llc | Relay optimization using software defined networking |
US10467569B2 (en) * | 2014-10-03 | 2019-11-05 | Datameer, Inc. | Apparatus and method for scheduling distributed workflow tasks |
US10250464B2 (en) * | 2014-10-15 | 2019-04-02 | Accedian Networks Inc. | Area efficient traffic generator |
US9400674B2 (en) | 2014-12-11 | 2016-07-26 | Amazon Technologies, Inc. | Managing virtual machine instances utilizing a virtual offload device |
US9424067B2 (en) | 2014-12-11 | 2016-08-23 | Amazon Technologies, Inc. | Managing virtual machine instances utilizing an offload device |
US9292332B1 (en) | 2014-12-11 | 2016-03-22 | Amazon Technologies, Inc. | Live updates for virtual machine monitor |
US9886297B2 (en) | 2014-12-11 | 2018-02-06 | Amazon Technologies, Inc. | Systems and methods for loading a virtual machine monitor during a boot process |
US9832876B2 (en) * | 2014-12-18 | 2017-11-28 | Intel Corporation | CPU package substrates with removable memory mechanical interfaces |
US9535798B1 (en) | 2014-12-19 | 2017-01-03 | Amazon Technologies, Inc. | Systems and methods for maintaining virtual component checkpoints on an offload device |
WO2016118559A1 (en) * | 2015-01-20 | 2016-07-28 | Ultrata Llc | Object based memory fabric |
WO2016118630A1 (en) | 2015-01-20 | 2016-07-28 | Ultrata Llc | Utilization of a distributed index to provide object memory fabric coherency |
US11086521B2 (en) * | 2015-01-20 | 2021-08-10 | Ultrata, Llc | Object memory data flow instruction execution |
WO2016122498A1 (en) * | 2015-01-28 | 2016-08-04 | Hewlett-Packard Development Company, L.P. | Supporting different types of memory devices |
US9667414B1 (en) | 2015-03-30 | 2017-05-30 | Amazon Technologies, Inc. | Validating using an offload device security component |
US20160292117A1 (en) * | 2015-03-30 | 2016-10-06 | Integrated Device Technology, Inc. | Methods and Apparatus for Efficient Network Analytics and Computing Card |
US10243739B1 (en) | 2015-03-30 | 2019-03-26 | Amazon Technologies, Inc. | Validating using an offload device security component |
US10211985B1 (en) * | 2015-03-30 | 2019-02-19 | Amazon Technologies, Inc. | Validating using an offload device security component |
US10511478B2 (en) | 2015-04-17 | 2019-12-17 | Microsoft Technology Licensing, Llc | Changing between different roles at acceleration components |
US10198294B2 (en) | 2015-04-17 | 2019-02-05 | Microsoft Technology Licensing, LLC | Handling tenant requests in a system that uses hardware acceleration components |
US10296392B2 (en) | 2015-04-17 | 2019-05-21 | Microsoft Technology Licensing, Llc | Implementing a multi-component service using plural hardware acceleration components |
US9792154B2 (en) | 2015-04-17 | 2017-10-17 | Microsoft Technology Licensing, Llc | Data processing system having a hardware acceleration plane and a software plane |
US9886210B2 (en) | 2015-06-09 | 2018-02-06 | Ultrata, Llc | Infinite memory fabric hardware implementation with router |
US10698628B2 (en) | 2015-06-09 | 2020-06-30 | Ultrata, Llc | Infinite memory fabric hardware implementation with memory |
US9971542B2 (en) | 2015-06-09 | 2018-05-15 | Ultrata, Llc | Infinite memory fabric streams and APIs |
US9959306B2 (en) * | 2015-06-12 | 2018-05-01 | International Business Machines Corporation | Partition-based index management in hadoop-like data stores |
US10216555B2 (en) | 2015-06-26 | 2019-02-26 | Microsoft Technology Licensing, Llc | Partially reconfiguring acceleration components |
US9667657B2 (en) * | 2015-08-04 | 2017-05-30 | AO Kaspersky Lab | System and method of utilizing a dedicated computer security service |
US9934395B2 (en) | 2015-09-11 | 2018-04-03 | International Business Machines Corporation | Enabling secure big data analytics in the cloud |
US10235063B2 (en) | 2015-12-08 | 2019-03-19 | Ultrata, Llc | Memory fabric operations and coherency using fault tolerant objects |
WO2017100292A1 (en) | 2015-12-08 | 2017-06-15 | Ultrata, Llc. | Object memory interfaces across shared links |
US10241676B2 (en) | 2015-12-08 | 2019-03-26 | Ultrata, Llc | Memory fabric software implementation |
WO2017100281A1 (en) | 2015-12-08 | 2017-06-15 | Ultrata, Llc | Memory fabric software implementation |
US10268521B2 (en) * | 2016-01-22 | 2019-04-23 | Samsung Electronics Co., Ltd. | Electronic system with data exchange mechanism and method of operation thereof |
US9984009B2 (en) * | 2016-01-28 | 2018-05-29 | Silicon Laboratories Inc. | Dynamic containerized system memory protection for low-energy MCUs |
US10303646B2 (en) | 2016-03-25 | 2019-05-28 | Microsoft Technology Licensing, Llc | Memory sharing for working data using RDMA |
US20170302438A1 (en) * | 2016-04-15 | 2017-10-19 | The Florida International University Board Of Trustees | Advanced bus architecture for aes-encrypted high-performance internet-of-things (iot) embedded systems |
US11115385B1 (en) * | 2016-07-27 | 2021-09-07 | Cisco Technology, Inc. | Selective offloading of packet flows with flow state management |
DK3358463T3 (en) * | 2016-08-26 | 2020-11-16 | Huawei Tech Co Ltd | Method, device and system for implementing hardware acceleration processing |
US11119813B1 (en) * | 2016-09-30 | 2021-09-14 | Amazon Technologies, Inc. | Mapreduce implementation using an on-demand network code execution system |
CN106776024B (en) * | 2016-12-13 | 2020-07-21 | 苏州浪潮智能科技有限公司 | Resource scheduling device, system and method |
US10317967B2 (en) * | 2017-03-03 | 2019-06-11 | Klas Technologies Limited | Power bracket system |
US10990291B2 (en) * | 2017-06-12 | 2021-04-27 | Dell Products, L.P. | Software assist memory module hardware architecture |
US11153289B2 (en) * | 2017-07-28 | 2021-10-19 | Alibaba Group Holding Limited | Secure communication acceleration using a System-on-Chip (SoC) architecture |
US11474555B1 (en) * | 2017-08-23 | 2022-10-18 | Xilinx, Inc. | Data-driven platform characteristics capture and discovery for hardware accelerators |
CN108055342B (en) * | 2017-12-26 | 2021-05-04 | 北京奇艺世纪科技有限公司 | Data monitoring method and device |
US11112972B2 (en) | 2018-12-05 | 2021-09-07 | Samsung Electronics Co., Ltd. | System and method for accelerated data processing in SSDs |
TWI727607B (en) | 2019-02-14 | 2021-05-11 | 美商萬國商業機器公司 | Method, computer system and computer program product for directed interrupt virtualization with interrupt table |
EP3924819A1 (en) | 2019-02-14 | 2021-12-22 | International Business Machines Corporation | Directed interrupt for multilevel virtualization with interrupt table |
CA3130164A1 (en) * | 2019-02-14 | 2020-08-20 | International Business Machines Corporation | Directed interrupt for multilevel virtualization |
WO2020164820A1 (en) | 2019-02-14 | 2020-08-20 | International Business Machines Corporation | Directed interrupt virtualization |
TWI764082B (en) | 2019-02-14 | 2022-05-11 | 美商萬國商業機器公司 | Method, computer system and computer program product for interrupt signaling for directed interrupt virtualization |
WO2020164935A1 (en) | 2019-02-14 | 2020-08-20 | International Business Machines Corporation | Directed interrupt virtualization with running indicator |
TWI759677B (en) | 2019-02-14 | 2022-04-01 | 美商萬國商業機器公司 | Method, computer system and computer program product for directed interrupt virtualization with fallback |
US11128490B2 (en) * | 2019-04-26 | 2021-09-21 | Microsoft Technology Licensing, Llc | Enabling access to dedicated resources in a virtual network using top of rack switches |
US10999084B2 (en) | 2019-05-31 | 2021-05-04 | Microsoft Technology Licensing, Llc | Leveraging remote direct memory access (RDMA) for packet capture |
CN112115521B (en) * | 2019-06-19 | 2023-02-07 | 华为技术有限公司 | Data access method and device |
CN110278278A (en) * | 2019-06-26 | 2019-09-24 | 深圳市迅雷网络技术有限公司 | Data transmission method, system, device and computer medium |
CN110908600B (en) * | 2019-10-18 | 2021-07-20 | 华为技术有限公司 | Data access method and device and first computing equipment |
WO2021084309A1 (en) * | 2019-10-30 | 2021-05-06 | Telefonaktiebolaget Lm Ericsson (Publ) | In-band protocol-based in-network computation offload framework |
CN111274013B (en) * | 2020-01-16 | 2022-05-03 | 北京思特奇信息技术股份有限公司 | Method and system for optimizing timed task scheduling based on memory database in container |
US11962518B2 (en) | 2020-06-02 | 2024-04-16 | VMware LLC | Hardware acceleration techniques using flow selection |
US11743189B2 (en) * | 2020-09-14 | 2023-08-29 | Microsoft Technology Licensing, Llc | Fault tolerance for SDN gateways using network switches |
US11829793B2 (en) | 2020-09-28 | 2023-11-28 | Vmware, Inc. | Unified management of virtual machines and bare metal computers |
US11736566B2 (en) | 2020-09-28 | 2023-08-22 | Vmware, Inc. | Using a NIC as a network accelerator to allow VM access to an external storage via a PF module, bus, and VF module |
US11636053B2 (en) | 2020-09-28 | 2023-04-25 | Vmware, Inc. | Emulating a local storage by accessing an external storage through a shared port of a NIC |
US11792134B2 (en) | 2020-09-28 | 2023-10-17 | Vmware, Inc. | Configuring PNIC to perform flow processing offload using virtual port identifiers |
US11593278B2 (en) | 2020-09-28 | 2023-02-28 | Vmware, Inc. | Using machine executing on a NIC to access a third party storage not supported by a NIC or host |
US20220103488A1 (en) * | 2020-09-28 | 2022-03-31 | Vmware, Inc. | Packet processing with hardware offload units |
US20220206869A1 (en) * | 2020-12-28 | 2022-06-30 | Advanced Micro Devices, Inc. | Virtualizing resources of a memory-based execution device |
US20220405104A1 (en) * | 2021-06-22 | 2022-12-22 | Vmware, Inc. | Cross platform and platform agnostic accelerator remoting service |
US11863376B2 (en) | 2021-12-22 | 2024-01-02 | Vmware, Inc. | Smart NIC leader election |
US11928367B2 (en) | 2022-06-21 | 2024-03-12 | VMware LLC | Logical memory addressing for network devices |
US11899594B2 (en) | 2022-06-21 | 2024-02-13 | VMware LLC | Maintenance of data message classification cache on smart NIC |
US11928062B2 (en) | 2022-06-21 | 2024-03-12 | VMware LLC | Accelerating data message classification with smart NICs |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7496670B1 (en) * | 1997-11-20 | 2009-02-24 | Amdocs (Israel) Ltd. | Digital asset monitoring system and method |
US20100115174A1 (en) * | 2008-11-05 | 2010-05-06 | Aprius Inc. | PCI Express Load Sharing Network Interface Controller Cluster |
Family Cites Families (147)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS62214464A (en) | 1986-03-17 | 1987-09-21 | Hitachi Ltd | Coprocessor coupling system |
US5446844A (en) * | 1987-10-05 | 1995-08-29 | Unisys Corporation | Peripheral memory interface controller as a cache for a large data processing system |
US5237662A (en) | 1991-06-27 | 1993-08-17 | Digital Equipment Corporation | System and method with a procedure oriented input/output mechanism |
US5247675A (en) | 1991-08-09 | 1993-09-21 | International Business Machines Corporation | Preemptive and non-preemptive scheduling and execution of program threads in a multitasking operating system |
US5577213A (en) | 1994-06-03 | 1996-11-19 | At&T Global Information Solutions Company | Multi-device adapter card for computer |
US6085307A (en) | 1996-11-27 | 2000-07-04 | Vlsi Technology, Inc. | Multiple native instruction set master/slave processor arrangement and method thereof |
US5870350A (en) | 1997-05-21 | 1999-02-09 | International Business Machines Corporation | High performance, high bandwidth memory bus architecture utilizing SDRAMs |
US6092146A (en) | 1997-07-31 | 2000-07-18 | IBM | Dynamically configurable memory adapter using electronic presence detects |
US6157989A (en) * | 1998-06-03 | 2000-12-05 | Motorola, Inc. | Dynamic bus arbitration priority and task switching based on shared memory fullness in a multi-processor system |
US6157955A (en) * | 1998-06-15 | 2000-12-05 | Intel Corporation | Packet processing system including a policy engine having a classification unit |
US20060117274A1 (en) | 1998-08-31 | 2006-06-01 | Tseng Ping-Sheng | Behavior processor system and method |
US6446163B1 (en) | 1999-01-04 | 2002-09-03 | International Business Machines Corporation | Memory card with signal processing element |
US6578110B1 (en) * | 1999-01-21 | 2003-06-10 | Sony Computer Entertainment, Inc. | High-speed processor system and cache memories with processing capabilities |
US6625685B1 (en) | 2000-09-20 | 2003-09-23 | Broadcom Corporation | Memory controller with programmable configuration |
US7120155B2 (en) | 2000-10-03 | 2006-10-10 | Broadcom Corporation | Switch having virtual shared memory |
TWI240864B (en) | 2001-06-13 | 2005-10-01 | Hitachi Ltd | Memory device |
US6751113B2 (en) | 2002-03-07 | 2004-06-15 | Netlist, Inc. | Arrangement of integrated circuits in a memory module |
US7472205B2 (en) | 2002-04-24 | 2008-12-30 | Nec Corporation | Communication control apparatus which has descriptor cache controller that builds list of descriptors |
WO2004027649A1 (en) | 2002-09-18 | 2004-04-01 | Netezza Corporation | Asymmetric streaming record data processor method and apparatus |
US7454749B2 (en) | 2002-11-12 | 2008-11-18 | Engineered Intelligence Corporation | Scalable parallel processing on shared memory computers |
US20040133720A1 (en) | 2002-12-31 | 2004-07-08 | Steven Slupsky | Embeddable single board computer |
US7089412B2 (en) | 2003-01-17 | 2006-08-08 | Wintec Industries, Inc. | Adaptive memory module |
US7421694B2 (en) | 2003-02-18 | 2008-09-02 | Microsoft Corporation | Systems and methods for enhancing performance of a coprocessor |
US7155379B2 (en) | 2003-02-25 | 2006-12-26 | Microsoft Corporation | Simulation of a PCI device's memory-mapped I/O registers |
US7337314B2 (en) * | 2003-04-12 | 2008-02-26 | Cavium Networks, Inc. | Apparatus and method for allocating resources within a security processor |
US6982892B2 (en) | 2003-05-08 | 2006-01-03 | Micron Technology, Inc. | Apparatus and methods for a physical layout of simultaneously sub-accessible memory modules |
US20050038946A1 (en) * | 2003-08-12 | 2005-02-17 | Tadpole Computer, Inc. | System and method using a high speed interface in a system having co-processors |
US8776050B2 (en) | 2003-08-20 | 2014-07-08 | Oracle International Corporation | Distributed virtual machine monitor for managing multiple virtual resources across multiple physical nodes |
US7657706B2 (en) | 2003-12-18 | 2010-02-02 | Cisco Technology, Inc. | High speed memory and input/output processor subsystem for efficiently allocating and using high-speed memory and slower-speed memory |
US20050018495A1 (en) | 2004-01-29 | 2005-01-27 | Netlist, Inc. | Arrangement of integrated circuits in a memory module |
US7916574B1 (en) | 2004-03-05 | 2011-03-29 | Netlist, Inc. | Circuit providing load isolation and memory domain translation for memory module |
US7286436B2 (en) | 2004-03-05 | 2007-10-23 | Netlist, Inc. | High-density memory module utilizing low-density memory components |
US7289386B2 (en) | 2004-03-05 | 2007-10-30 | Netlist, Inc. | Memory module decoder |
US7532537B2 (en) | 2004-03-05 | 2009-05-12 | Netlist, Inc. | Memory module with a circuit providing load isolation and memory domain translation |
US7668165B2 (en) | 2004-03-31 | 2010-02-23 | Intel Corporation | Hardware-based multi-threading for packet processing |
US7254036B2 (en) | 2004-04-09 | 2007-08-07 | Netlist, Inc. | High density memory module using stacked printed circuit boards |
US7480611B2 (en) | 2004-05-13 | 2009-01-20 | International Business Machines Corporation | Method and apparatus to increase the usable memory capacity of a logic simulation hardware emulator/accelerator |
US7436845B1 (en) | 2004-06-08 | 2008-10-14 | Sun Microsystems, Inc. | Input and output buffering |
US20060004965A1 (en) | 2004-06-30 | 2006-01-05 | Tu Steven J | Direct processor cache access within a system having a coherent multi-processor protocol |
US7305574B2 (en) | 2004-10-29 | 2007-12-04 | International Business Machines Corporation | System, method and storage medium for bus calibration in a memory subsystem |
KR100666169B1 (en) | 2004-12-17 | 2007-01-09 | 삼성전자주식회사 | Flash memory data storing device |
US7380038B2 (en) * | 2005-02-04 | 2008-05-27 | Microsoft Corporation | Priority registers for biasing access to shared resources |
US8072887B1 (en) | 2005-02-07 | 2011-12-06 | Extreme Networks, Inc. | Methods, systems, and computer program products for controlling enqueuing of packets in an aggregated queue including a plurality of virtual queues using backpressure messages from downstream queues |
CN101727429B (en) | 2005-04-21 | 2012-11-14 | 提琴存储器公司 | Interconnection system |
US8438328B2 (en) | 2008-02-21 | 2013-05-07 | Google Inc. | Emulation of abstracted DIMMs using abstracted DRAMs |
US8244971B2 (en) | 2006-07-31 | 2012-08-14 | Google Inc. | Memory circuit system and method |
US20080304481A1 (en) | 2005-07-12 | 2008-12-11 | Paul Thomas Gurney | System and Method of Offloading Protocol Functions |
US20070016906A1 (en) | 2005-07-18 | 2007-01-18 | Mistletoe Technologies, Inc. | Efficient hardware allocation of processes to processors |
US7442050B1 (en) | 2005-08-29 | 2008-10-28 | Netlist, Inc. | Circuit card with flexible connection for memory module with heat spreader |
US7650557B2 (en) * | 2005-09-19 | 2010-01-19 | Network Appliance, Inc. | Memory scrubbing of expanded memory |
US8862783B2 (en) | 2005-10-25 | 2014-10-14 | Broadbus Technologies, Inc. | Methods and system to offload data processing tasks |
US7899864B2 (en) | 2005-11-01 | 2011-03-01 | Microsoft Corporation | Multi-user terminal services accelerator |
US8225297B2 (en) | 2005-12-07 | 2012-07-17 | Microsoft Corporation | Cache metadata identifiers for isolation and sharing |
US7904688B1 (en) | 2005-12-21 | 2011-03-08 | Trend Micro Inc | Memory management unit for field programmable gate array boards |
WO2007084422A2 (en) * | 2006-01-13 | 2007-07-26 | Sun Microsystems, Inc. | Modular blade server |
US7619893B1 (en) | 2006-02-17 | 2009-11-17 | Netlist, Inc. | Heat spreader for electronic modules |
US20070226745A1 (en) | 2006-02-28 | 2007-09-27 | International Business Machines Corporation | Method and system for processing a service request |
US7421552B2 (en) | 2006-03-17 | 2008-09-02 | Emc Corporation | Techniques for managing data within a data storage system utilizing a flash-based memory vault |
US7434002B1 (en) | 2006-04-24 | 2008-10-07 | Vmware, Inc. | Utilizing cache information to manage memory access and cache utilization |
US7716411B2 (en) | 2006-06-07 | 2010-05-11 | Microsoft Corporation | Hybrid memory device with single interface |
US8948166B2 (en) * | 2006-06-14 | 2015-02-03 | Hewlett-Packard Development Company, L.P. | System of implementing switch devices in a server system |
US7957280B2 (en) | 2006-06-16 | 2011-06-07 | Bittorrent, Inc. | Classification and verification of static file transfer protocols |
US7636800B2 (en) | 2006-06-27 | 2009-12-22 | International Business Machines Corporation | Method and system for memory address translation and pinning |
US7624118B2 (en) | 2006-07-26 | 2009-11-24 | Microsoft Corporation | Data processing over very large databases |
US8943245B2 (en) | 2006-09-28 | 2015-01-27 | Virident Systems, Inc. | Non-volatile type memory modules for main memory |
US20080082750A1 (en) | 2006-09-28 | 2008-04-03 | Okin Kenneth A | Methods of communicating to, memory modules in a memory channel |
US8074022B2 (en) | 2006-09-28 | 2011-12-06 | Virident Systems, Inc. | Programmable heterogeneous memory controllers for main memory with different memory modules |
WO2008051940A2 (en) | 2006-10-23 | 2008-05-02 | Virident Systems, Inc. | Methods and apparatus of dual inline memory modules for flash memory |
US7913055B2 (en) | 2006-11-04 | 2011-03-22 | Virident Systems Inc. | Seamless application access to hybrid main memory |
US8149834B1 (en) | 2007-01-25 | 2012-04-03 | World Wide Packets, Inc. | Forwarding a packet to a port from which the packet is received and transmitting modified, duplicated packets on a single port |
US20080215996A1 (en) * | 2007-02-22 | 2008-09-04 | Chad Farrell Media, Llc | Website/Web Client System for Presenting Multi-Dimensional Content |
US20080229049A1 (en) | 2007-03-16 | 2008-09-18 | Ashwini Kumar Nanda | Processor card for blade server and process |
WO2008127698A2 (en) | 2007-04-12 | 2008-10-23 | Rambus Inc. | Memory system with point-to-point request interconnect |
US8301833B1 (en) | 2007-06-01 | 2012-10-30 | Netlist, Inc. | Non-volatile memory module |
US8874831B2 (en) | 2007-06-01 | 2014-10-28 | Netlist, Inc. | Flash-DRAM hybrid memory module |
US8904098B2 (en) | 2007-06-01 | 2014-12-02 | Netlist, Inc. | Redundant backup using non-volatile memory |
US8347005B2 (en) | 2007-07-31 | 2013-01-01 | Hewlett-Packard Development Company, L.P. | Memory controller with multi-protocol interface |
US7840748B2 (en) | 2007-08-31 | 2010-11-23 | International Business Machines Corporation | Buffered memory module with multiple memory device data interface ports supporting double the memory capacity |
US7949683B2 (en) | 2007-11-27 | 2011-05-24 | Cavium Networks, Inc. | Method and apparatus for traversing a compressed deterministic finite automata (DFA) graph |
US8862706B2 (en) | 2007-12-14 | 2014-10-14 | Nant Holdings Ip, Llc | Hybrid transport—application network fabric apparatus |
US8990799B1 (en) | 2008-01-30 | 2015-03-24 | Emc Corporation | Direct memory access through virtual switch in device driver |
JP5186982B2 (en) | 2008-04-02 | 2013-04-24 | 富士通株式会社 | Data management method and switch device |
US20110235260A1 (en) | 2008-04-09 | 2011-09-29 | Apacer Technology Inc. | Dram module with solid state disk |
US8417870B2 (en) | 2009-07-16 | 2013-04-09 | Netlist, Inc. | System and method of increasing addressable memory space on a memory board |
US8516185B2 (en) | 2009-07-16 | 2013-08-20 | Netlist, Inc. | System and method utilizing distributed byte-wise buffers on a memory module |
US8001434B1 (en) | 2008-04-14 | 2011-08-16 | Netlist, Inc. | Memory board with self-testing capability |
US8154901B1 (en) | 2008-04-14 | 2012-04-10 | Netlist, Inc. | Circuit providing load isolation and noise reduction |
US8462791B2 (en) | 2008-05-22 | 2013-06-11 | Nokia Siemens Networks Oy | Adaptive scheduler for communication systems apparatus, system and method |
US8190699B2 (en) | 2008-07-28 | 2012-05-29 | Crossfield Technology LLC | System and method of multi-path data communications |
US20100031253A1 (en) * | 2008-07-29 | 2010-02-04 | Electronic Data Systems Corporation | System and method for a virtualization infrastructure management environment |
US20100031235A1 (en) | 2008-08-01 | 2010-02-04 | Modular Mining Systems, Inc. | Resource Double Lookup Framework |
US7886103B2 (en) | 2008-09-08 | 2011-02-08 | Cisco Technology, Inc. | Input-output module, processing platform and method for extending a memory interface for input-output operations |
JP5272265B2 (en) * | 2008-09-29 | 2013-08-28 | 株式会社日立製作所 | PCI device sharing method |
US8054832B1 (en) | 2008-12-30 | 2011-11-08 | Juniper Networks, Inc. | Methods and apparatus for routing between virtual resources based on a routing location policy |
US20100183033A1 (en) | 2009-01-20 | 2010-07-22 | Nokia Corporation | Method and apparatus for encapsulation of scalable media |
US8498349B2 (en) | 2009-03-11 | 2013-07-30 | Texas Instruments Incorporated | Demodulation and decoding for frequency modulation (FM) receivers with radio data system (RDS) or radio broadcast data system (RBDS) |
US8200800B2 (en) * | 2009-03-12 | 2012-06-12 | International Business Machines Corporation | Remotely administering a server |
US8264903B1 (en) | 2009-05-05 | 2012-09-11 | Netlist, Inc. | Systems and methods for refreshing a memory module |
US8489837B1 (en) | 2009-06-12 | 2013-07-16 | Netlist, Inc. | Systems and methods for handshaking with a memory module |
US9128632B2 (en) | 2009-07-16 | 2015-09-08 | Netlist, Inc. | Memory module with distributed data buffers and method of operation |
US9535849B2 (en) | 2009-07-24 | 2017-01-03 | Advanced Micro Devices, Inc. | IOMMU using two-level address translation for I/O and computation offload devices on a peripheral interconnect |
US20110035540A1 (en) * | 2009-08-10 | 2011-02-10 | Adtron, Inc. | Flash blade system architecture and method |
US8848513B2 (en) | 2009-09-02 | 2014-09-30 | Qualcomm Incorporated | Seamless overlay connectivity using multi-homed overlay neighborhoods |
US9876735B2 (en) * | 2009-10-30 | 2018-01-23 | Iii Holdings 2, Llc | Performance and power optimized computer system architectures and methods leveraging power optimized tree fabric interconnect |
US8442048B2 (en) | 2009-11-04 | 2013-05-14 | Juniper Networks, Inc. | Methods and apparatus for configuring a virtual network switch |
US9389895B2 (en) | 2009-12-17 | 2016-07-12 | Microsoft Technology Licensing, Llc | Virtual storage target offload techniques |
US9390035B2 (en) | 2009-12-21 | 2016-07-12 | Sanmina-Sci Corporation | Method and apparatus for supporting storage modules in standard memory and/or hybrid memory bus architectures |
US8473695B2 (en) | 2011-03-31 | 2013-06-25 | Mosys, Inc. | Memory system including variable write command scheduling |
EP2363812B1 (en) | 2010-03-04 | 2018-02-28 | Karlsruher Institut für Technologie | Reconfigurable processor architecture |
EP2553573A4 (en) | 2010-03-26 | 2014-02-19 | Virtualmetrix Inc | Fine grain performance resource management of computer systems |
CN101794271B (en) | 2010-03-31 | 2012-05-23 | 华为技术有限公司 | Implementation method and device of consistency of multi-core internal memory |
US8824492B2 (en) | 2010-05-28 | 2014-09-02 | Drc Computer Corporation | Accelerator system for remote data storage |
US8631271B2 (en) | 2010-06-24 | 2014-01-14 | International Business Machines Corporation | Heterogeneous recovery in a redundant memory system |
US10803066B2 (en) * | 2010-06-29 | 2020-10-13 | Teradata Us, Inc. | Methods and systems for hardware acceleration of database operations and queries for a versioned database based on multiple hardware accelerators |
US9118591B2 (en) | 2010-07-30 | 2015-08-25 | Broadcom Corporation | Distributed switch domain of heterogeneous components |
US8386887B2 (en) | 2010-09-24 | 2013-02-26 | Texas Memory Systems, Inc. | High-speed memory system |
US8483046B2 (en) | 2010-09-29 | 2013-07-09 | International Business Machines Corporation | Virtual switch interconnect for hybrid enterprise servers |
WO2012061633A2 (en) | 2010-11-03 | 2012-05-10 | Netlist, Inc. | Method and apparatus for optimizing driver load in a memory package |
US8405668B2 (en) | 2010-11-19 | 2013-03-26 | Apple Inc. | Streaming translation in display pipe |
US8499222B2 (en) * | 2010-12-14 | 2013-07-30 | Microsoft Corporation | Supporting distributed key-based processes |
US20120239874A1 (en) | 2011-03-02 | 2012-09-20 | Netlist, Inc. | Method and system for resolving interoperability of multiple types of dual in-line memory modules |
US8885334B1 (en) * | 2011-03-10 | 2014-11-11 | Xilinx, Inc. | Computing system with network attached processors |
US8774213B2 (en) * | 2011-03-30 | 2014-07-08 | Amazon Technologies, Inc. | Frameworks and interfaces for offload device-based packet processing |
US8825900B1 (en) | 2011-04-05 | 2014-09-02 | Nicira, Inc. | Method and apparatus for stateless transport layer tunneling |
US8930647B1 (en) | 2011-04-06 | 2015-01-06 | P4tents1, LLC | Multiple class memory systems |
WO2012141694A1 (en) | 2011-04-13 | 2012-10-18 | Hewlett-Packard Development Company, L.P. | Input/output processing |
US8442056B2 (en) | 2011-06-28 | 2013-05-14 | Marvell International Ltd. | Scheduling packets in a packet-processing pipeline |
US20130019057A1 (en) | 2011-07-15 | 2013-01-17 | Violin Memory, Inc. | Flash disk array and controller |
RU2014106859A (en) | 2011-07-25 | 2015-08-27 | Серверджи, Инк. | METHOD AND SYSTEM FOR CONSTRUCTION OF A LOW POWER COMPUTER SYSTEM |
US8767463B2 (en) | 2011-08-11 | 2014-07-01 | Smart Modular Technologies, Inc. | Non-volatile dynamic random access memory system with non-delay-lock-loop mechanism and method of operation thereof |
US9424188B2 (en) | 2011-11-23 | 2016-08-23 | Smart Modular Technologies, Inc. | Non-volatile memory packaging system with caching and method of operation thereof |
EP2798804A4 (en) | 2011-12-26 | 2015-09-23 | Intel Corp | Direct link synchronization communication between co-processors |
US9542437B2 (en) | 2012-01-06 | 2017-01-10 | Sap Se | Layout-driven data selection and reporting |
US8918634B2 (en) * | 2012-02-21 | 2014-12-23 | International Business Machines Corporation | Network node with network-attached stateless security offload device employing out-of-band processing |
US8924606B2 (en) | 2012-03-02 | 2014-12-30 | Hitachi, Ltd. | Storage system and data transfer control method |
US9513845B2 (en) | 2012-03-30 | 2016-12-06 | Violin Memory Inc. | Memory module virtualization |
US10019371B2 (en) * | 2012-04-27 | 2018-07-10 | Hewlett Packard Enterprise Development Lp | Data caching using local and remote memory |
US9495308B2 (en) | 2012-05-22 | 2016-11-15 | Xockets, Inc. | Offloading of computation for rack level servers and corresponding methods and systems |
US20130318280A1 (en) | 2012-05-22 | 2013-11-28 | Xockets IP, LLC | Offloading of computation for rack level servers and corresponding methods and systems |
US9268716B2 (en) | 2012-10-19 | 2016-02-23 | Yahoo! Inc. | Writing data from hadoop to off grid storage |
US20140157287A1 (en) | 2012-11-30 | 2014-06-05 | Advanced Micro Devices, Inc | Optimized Context Switching for Long-Running Processes |
JP6188093B2 (en) | 2012-12-26 | 2017-08-30 | リアルテック シンガポール プライベート リミテッド | Communication traffic processing architecture and method |
US9250954B2 (en) | 2013-01-17 | 2016-02-02 | Xockets, Inc. | Offload processor modules for connection to system memory, and corresponding methods and systems |
US10031820B2 (en) | 2013-01-17 | 2018-07-24 | Lenovo Enterprise Solutions (Singapore) Pte. Ltd. | Mirroring high performance and high availablity applications across server computers |
US9378161B1 (en) * | 2013-01-17 | 2016-06-28 | Xockets, Inc. | Full bandwidth packet handling with server systems including offload processors |
US10372551B2 (en) | 2013-03-15 | 2019-08-06 | Netlist, Inc. | Hybrid memory system with configurable error thresholds and failure analysis capability |
US9792154B2 (en) | 2015-04-17 | 2017-10-17 | Microsoft Technology Licensing, Llc | Data processing system having a hardware acceleration plane and a software plane |
- 2013
- 2013-05-22 US US13/900,273 patent/US20130318280A1/en not_active Abandoned
- 2013-05-22 US US13/900,367 patent/US20140165196A1/en not_active Abandoned
- 2013-05-22 US US13/900,333 patent/US20130318269A1/en not_active Abandoned
- 2013-05-22 US US13/900,359 patent/US9286472B2/en not_active Expired - Fee Related
- 2013-05-22 US US13/900,262 patent/US20130318268A1/en not_active Abandoned
- 2013-05-22 US US13/900,318 patent/US9558351B2/en active Active
- 2016
- 2016-12-30 US US15/396,318 patent/US20170237672A1/en not_active Abandoned
- 2016-12-30 US US15/396,330 patent/US10212092B2/en active Active - Reinstated
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7496670B1 (en) * | 1997-11-20 | 2009-02-24 | Amdocs (Israel) Ltd. | Digital asset monitoring system and method |
US20100115174A1 (en) * | 2008-11-05 | 2010-05-06 | Aprius Inc. | PCI Express Load Sharing Network Interface Controller Cluster |
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10275288B2 (en) * | 2015-04-30 | 2019-04-30 | Virtual Open Systems | Virtualization manager for reconfigurable hardware accelerators |
US20160321113A1 (en) * | 2015-04-30 | 2016-11-03 | Virtual Open Systems | Virtualization manager for reconfigurable hardware accelerators |
US10715451B2 (en) | 2015-05-07 | 2020-07-14 | Mellanox Technologies, Ltd. | Efficient transport flow processing on an accelerator |
US20180083864A1 (en) * | 2015-05-29 | 2018-03-22 | Huawei Technologies Co., Ltd. | Data processing method and apparatus |
US10432506B2 (en) * | 2015-05-29 | 2019-10-01 | Huawei Technologies Co., Ltd. | Data processing method and apparatus |
US10320677B2 (en) * | 2017-01-02 | 2019-06-11 | Microsoft Technology Licensing, Llc | Flow control and congestion management for acceleration components configured to accelerate a service |
US10326696B2 (en) | 2017-01-02 | 2019-06-18 | Microsoft Technology Licensing, Llc | Transmission of messages by acceleration components configured to accelerate a service |
US20190253354A1 (en) * | 2017-01-02 | 2019-08-15 | Microsoft Technology Licensing, Llc | Flow control and congestion management for acceleration components configured to accelerate a service |
US10791054B2 (en) * | 2017-01-02 | 2020-09-29 | Microsoft Technology Licensing, Llc | Flow control and congestion management for acceleration components configured to accelerate a service |
US10382350B2 (en) | 2017-09-12 | 2019-08-13 | Mellanox Technologies, Ltd. | Maintaining packet order in offload of packet processing functions |
US11683266B2 (en) | 2017-10-16 | 2023-06-20 | Mellanox Technologies, Ltd. | Computational accelerator for storage operations |
US11502948B2 (en) | 2017-10-16 | 2022-11-15 | Mellanox Technologies, Ltd. | Computational accelerator for storage operations |
US11765079B2 (en) | 2017-10-16 | 2023-09-19 | Mellanox Technologies, Ltd. | Computational accelerator for storage operations |
US11418454B2 (en) | 2017-10-16 | 2022-08-16 | Mellanox Technologies, Ltd. | Computational accelerator for packet payload operations |
US11005771B2 (en) | 2017-10-16 | 2021-05-11 | Mellanox Technologies, Ltd. | Computational accelerator for packet payload operations |
WO2019092593A1 (en) * | 2017-11-08 | 2019-05-16 | Mellanox Technologies, Ltd. | Nic with programmable pipeline |
US10841243B2 (en) | 2017-11-08 | 2020-11-17 | Mellanox Technologies, Ltd. | NIC with programmable pipeline |
US10708240B2 (en) | 2017-12-14 | 2020-07-07 | Mellanox Technologies, Ltd. | Offloading communication security operations to a network interface controller |
US10824469B2 (en) | 2018-11-28 | 2020-11-03 | Mellanox Technologies, Ltd. | Reordering avoidance for flows during transition between slow-path handling and fast-path handling |
US11184439B2 (en) | 2019-04-01 | 2021-11-23 | Mellanox Technologies, Ltd. | Communication with accelerator via RDMA-based network adapter |
CN112054971A (en) * | 2019-06-06 | 2020-12-08 | 英业达科技有限公司 | Servo and exchanger system and operation method thereof |
US20210194831A1 (en) * | 2019-12-20 | 2021-06-24 | Board Of Trustees Of The University Of Illinois | Accelerating distributed reinforcement learning with in-switch computing |
US11706163B2 (en) * | 2019-12-20 | 2023-07-18 | The Board Of Trustees Of The University Of Illinois | Accelerating distributed reinforcement learning with in-switch computing |
US11558175B2 (en) | 2020-08-05 | 2023-01-17 | Mellanox Technologies, Ltd. | Cryptographic data communication apparatus |
US11909856B2 (en) | 2020-08-05 | 2024-02-20 | Mellanox Technologies, Ltd. | Cryptographic data communication apparatus |
US11909855B2 (en) | 2020-08-05 | 2024-02-20 | Mellanox Technologies, Ltd. | Cryptographic data communication apparatus |
US11934333B2 (en) | 2021-03-25 | 2024-03-19 | Mellanox Technologies, Ltd. | Storage protocol emulation in a peripheral device |
US11934658B2 (en) | 2021-03-25 | 2024-03-19 | Mellanox Technologies, Ltd. | Enhanced storage protocol emulation in a peripheral device |
Also Published As
Publication number | Publication date |
---|---|
US20130318268A1 (en) | 2013-11-28 |
US9558351B2 (en) | 2017-01-31 |
US20130318269A1 (en) | 2013-11-28 |
US10212092B2 (en) | 2019-02-19 |
US20140165196A1 (en) | 2014-06-12 |
US20170235699A1 (en) | 2017-08-17 |
US20130318277A1 (en) | 2013-11-28 |
US20140157397A1 (en) | 2014-06-05 |
US9286472B2 (en) | 2016-03-15 |
US20130318280A1 (en) | 2013-11-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20170237672A1 (en) | Network server systems, architectures, components and related methods | |
US10649924B2 (en) | Network overlay systems and methods using offload processors | |
US11736402B2 (en) | Fast data center congestion response based on QoS of VL | |
US11082350B2 (en) | Network server systems, architectures, components and related methods | |
US20190068509A1 (en) | Technologies for managing a latency-efficient pipeline through a network interface controller | |
US11394649B2 (en) | Non-random flowlet-based routing | |
US10142231B2 (en) | Technologies for network I/O access | |
CN115516832A (en) | Network and edge acceleration tile (NEXT) architecture | |
US20210112002A1 (en) | Receiver-based precision congestion control | |
CN113728315A (en) | System and method for facilitating efficient message matching in a Network Interface Controller (NIC) | |
US9485200B2 (en) | Network switch with external buffering via looparound path | |
US11700209B2 (en) | Multi-path packet descriptor delivery scheme | |
WO2022132278A1 (en) | Network interface device with flow control capability | |
Gao et al. | Gearbox: A hierarchical packet scheduler for approximate weighted fair queuing | |
US10616116B1 (en) | Network traffic load balancing using rotating hash | |
CN117015963A (en) | Server architecture adapter for heterogeneous and accelerated computing system input/output scaling | |
Doo et al. | Multicore Flow Processor with Wire‐Speed Flow Admission Control | |
US20240089219A1 (en) | Packet buffering technologies | |
Zyla et al. | FlexPipe: Fast, Flexible and Scalable Packet Processing for High-Performance SmartNICs | |
US20240080276A1 (en) | Path selection for packet transmission | |
US20240073151A1 (en) | Mice-elephant aware shared buffer schema | |
US20240048489A1 (en) | Dynamic fabric reaction for optimized collective communication | |
Wang et al. | High Performance Network Virtualization Architecture on FPGA SmartNIC | |
Amaro Jr | Improving Bandwidth Allocation in Cloud Computing Environments via" Bandwidth as a Service" Partitioning Scheme |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: XOCKETS, INC, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DALAL, PARIN BHADRIK;REEL/FRAME:043588/0059 Effective date: 20170913 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
|
STCC | Information on status: application revival |
Free format text: WITHDRAWN ABANDONMENT, AWAITING EXAMINER ACTION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STCV | Information on status: appeal procedure |
Free format text: NOTICE OF APPEAL FILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |