US20160124754A1 - Virtual Function Boot In Single-Root and Multi-Root I/O Virtualization Environments - Google Patents
- Publication number
- US20160124754A1
- Authority
- US
- United States
- Prior art keywords
- virtual
- boot
- storage adapter
- iov
- utilizing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/4401—Bootstrapping
- G06F9/4416—Network booting; Remote initial program loading [RIPL]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/40—Bus structure
- G06F13/4004—Coupling between buses
- G06F13/4022—Coupling between buses using switching circuits, e.g. switching matrix, connection or expansion network
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/42—Bus transfer protocol, e.g. handshake; Synchronisation
- G06F13/4282—Bus transfer protocol, e.g. handshake; Synchronisation on a serial bus, e.g. I2C bus, SPI bus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/4401—Bootstrapping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45579—I/O management, e.g. providing access to device drivers or storage
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0683—Plurality of storage devices
- G06F3/0689—Disk arrays, e.g. RAID, JBOD
Definitions
- The present invention generally relates to single-root and multi-root I/O virtualization in computer based systems and more particularly to virtual function boot in single-root and multi-root I/O virtualization environments.
- Single-root input/output virtualization (SR-IOV) and multi-root input/output virtualization (MR-IOV) specifications allow a single PCIe device to appear as multiple separate PCIe devices.
- A physical device having SR-IOV capabilities may be configured to appear in the PCI configuration space as multiple functions.
- SR-IOV operates by introducing the concept of physical functions (PF) and virtual functions (VFs).
- Physical functions are full-featured functions associated with the PCIe device.
- Virtual functions are lightweight functions that lack configuration resources and only process I/O, wherein each virtual function is derived from a physical function. It is further known in the art that virtual functions may be assigned to guest hosts, commonly referred to as virtual machines.
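The PF/VF relationship described above may be sketched as a minimal software model. This is purely illustrative (the class and attribute names are hypothetical); real VFs are instantiated through the device's PCIe SR-IOV extended capability, not in host software like this:

```python
# Illustrative model of the SR-IOV PF/VF relationship described above.
# Names are hypothetical; real VFs are created via the PCIe SR-IOV
# extended capability of the device.

class PhysicalFunction:
    """Full-featured PCIe function; owns the configuration resources."""

    def __init__(self, name, max_vfs=16):
        self.name = name
        self.max_vfs = max_vfs
        self.vfs = []

    def create_vfs(self, count):
        # Each VF is derived from this PF; it lacks its own
        # configuration resources and only processes I/O.
        if count > self.max_vfs:
            raise ValueError("device supports at most %d VFs" % self.max_vfs)
        self.vfs = [VirtualFunction(self, i + 1) for i in range(count)]
        return self.vfs


class VirtualFunction:
    """Lightweight function: no configuration resources, I/O only."""

    def __init__(self, pf, index):
        self.pf = pf
        self.index = index
        self.assigned_vm = None  # later assigned to a guest host (VM)


pf = PhysicalFunction("PF0")
vfs = pf.create_vfs(16)
```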
- the system 100 includes a physical server 102 configured to operate a host OS 106 and Guest Host OS 0 through Guest Host OS 15 , labeled as 108 a and 108 b respectively.
- the system 100 may further include a SAS controller with associated physical function PF 0 110 and virtual functions VF 1 112 a through VF 16 112 b .
- the physical function, the multiple guest hosts, and the multiple virtual functions may include a variety of communication and mapping features as illustrated in FIG. 1 .
- SR-IOV is the virtualization of the PCIe bus, enabling a single physical instance of a controller to appear as 16 to 32 virtual controllers.
- a method may include, but is not limited to, upon interconnection of the storage adapter with the SR-IOV enabled server and boot of the SR-IOV enabled server and storage adapter: loading a PF driver of the PF of the storage adapter onto the SR-IOV enabled server utilizing the virtual machine manager of the SR-IOV enabled server; creating a plurality of virtual functions utilizing the PF driver; detecting each of the plurality of virtual functions on an interconnection bus utilizing the VMM; maintaining a boot list associated with the plurality of virtual functions; querying the storage adapter for the boot list associated with the plurality of virtual functions utilizing a VMBIOS associated with the plurality of VMs, the VMBIOS being configured to detect the boot list associated with the plurality of virtual functions; presenting the detected boot list to a VM boot manager of the VMM utilizing the VMBIOS; and booting each of the plurality of virtual machines utilizing each of the virtual functions, wherein each VF of the plurality of VFs is assigned to a single VM of the plurality of VMs.
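The claimed SR-IOV sequence may be made concrete with a short step-ordered sketch. All names below are hypothetical and serve only to fix the ordering of the recited steps:

```python
# Step-ordered sketch of the SR-IOV VF boot method recited above.
# All names are hypothetical; this only makes the sequence concrete.

def sr_iov_vf_boot(num_vms):
    log = []
    log.append("load PF driver via VMM")                      # step 1
    log.append("create VFs using PF driver")                  # step 2
    vfs = ["VF-%d" % (i + 1) for i in range(num_vms)]
    log.append("VMM detects VFs on PCIe bus")                 # step 3
    # step 4: the adapter maintains a boot list for the VFs
    boot_list = {vf: "VD-%d" % (i + 1) for i, vf in enumerate(vfs)}
    log.append("VMBIOS queries adapter for boot list")        # step 5
    log.append("VMBIOS presents boot list to VM boot manager")  # step 6
    # step 7: each VF is assigned to exactly one VM
    assignments = {"VM-%d" % (i + 1): vf for i, vf in enumerate(vfs)}
    return log, boot_list, assignments
```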
- a method may include, but is not limited to, upon interconnection of the at least one storage adapter with the at least one MR-IOV switch: loading a physical function (PF) driver of the at least one storage adapter onto the MR-IOV switch; creating a plurality of virtual functions (VFs) utilizing the PF driver on the MR-IOV switch; assigning each of the VFs to an MR-IOV server of the plurality of MR-IOV servers; identifying each of the plurality of VFs as a virtual storage adapter by the plurality of MR-IOV servers, wherein each MR-IOV server identifies a VF as a virtual storage adapter; loading a UEFI driver onto each of the VFs; obtaining a boot list associated with the plurality of virtual functions from firmware of the at least one storage adapter utilizing the UEFI driver loaded on each of the VFs, wherein the boot list is configured to associate each virtual function with a corresponding boot disk; and booting a plurality of boot disks utilizing the boot list.
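The MR-IOV variant differs in that the PF driver runs on the switch and each server's UEFI driver fetches its own boot-disk association. A minimal sketch (hypothetical names, assuming one VF per server):

```python
# Sketch of the MR-IOV VF boot flow recited above. Names are
# hypothetical; one VF is assumed per server for simplicity.

def mr_iov_vf_boot(servers):
    # The PF driver, loaded on the MR-IOV switch, creates the VFs and
    # the switch assigns one VF to each MR-IOV server.
    vfs = {s: "VF-%d" % (i + 1) for i, s in enumerate(servers)}
    # Each server sees its VF as a virtual storage adapter, loads a
    # UEFI driver on it, and obtains the boot list from adapter
    # firmware, which associates each VF with a boot disk.
    boot_list = {vf: vf.replace("VF", "VD") for vf in vfs.values()}
    # Each server then boots from the disk associated with its VF.
    return {s: boot_list[vf] for s, vf in vfs.items()}
```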
- a system may include, but is not limited to, a single-root I/O virtualization (SR-IOV) server configured to implement a plurality of virtual machines (VMs) and a virtual machine manager (VMM); and a storage adapter including at least one physical function (PF), the storage adapter configured to implement a plurality of virtual functions, the storage adapter being communicatively couplable to the SR-IOV enabled server via a PCIe slot of the SR-IOV enabled server, wherein, upon interconnection of the storage adapter with the SR-IOV enabled server, the storage adapter and the SR-IOV enabled server are configured to: load a PF driver of the PF of the storage adapter onto the SR-IOV enabled server utilizing the virtual machine manager of the SR-IOV enabled server; create a plurality of virtual functions utilizing the PF driver; detect each of the plurality of virtual functions on an interconnection bus utilizing the VMM;
- a system may include, but is not limited to, at least one MR-IOV switch; a plurality of multi-root I/O virtualization (MR-IOV) servers, each of the plurality of MR-IOV servers being communicatively coupled to the MR-IOV switch via a PCIe link; and at least one storage adapter including at least one physical function (PF), the at least one storage adapter configured to implement a plurality of virtual functions, the at least one storage adapter being communicatively couplable to the at least one MR-IOV switch via a PCIe slot of the MR-IOV switch, wherein, upon interconnection of the at least one storage adapter with the at least one MR-IOV switch, the at least one storage adapter, the MR-IOV switch, and the plurality of MR-IOV servers are configured to: load a physical function (PF) driver of the at least one storage adapter onto the MR-IOV switch
- FIG. 1 illustrates a block diagram view of an SR-IOV virtualization environment.
- FIG. 2A illustrates a block diagram view of a system suitable for virtual function boot in a single-root I/O virtualization (SR-IOV) environment, in accordance with one embodiment of the present invention.
- FIG. 2B illustrates a block diagram view of the kernel view of a system suitable for virtual function boot in a single-root I/O virtualization (SR-IOV) environment, in accordance with one embodiment of the present invention.
- FIG. 3 illustrates a block diagram view of a system suitable for virtual function boot in a multi-root I/O virtualization (MR-IOV) environment, in accordance with one embodiment of the present invention.
- FIG. 4 illustrates a block diagram view of a system suitable for virtual function boot in a MR-IOV environment equipped with multi-node clustering capabilities, in accordance with one embodiment of the present invention.
- FIG. 5 illustrates a block diagram view of a system suitable for virtual function boot in a MR-IOV environment equipped with multi-level HA capabilities, in accordance with one embodiment of the present invention.
- FIG. 6 illustrates a block diagram view of a system suitable for virtual function boot in a SR-IOV environment equipped with diagnostic messaging capabilities, in accordance with a further embodiment of the present invention.
- FIG. 7 illustrates a flow diagram depicting a process for VF function boot in a SR-IOV environment, in accordance with one embodiment of the present invention.
- FIG. 8 illustrates a flow diagram depicting a process for VF function boot in a MR-IOV environment, in accordance with one embodiment of the present invention.
- Referring generally to FIG. 1 through FIG. 8 , systems and methods for physical storage adapter virtual function booting in single-root and multi-root I/O virtualization environments are described in accordance with the present disclosure.
- FIG. 2A illustrates a block diagram view of a system 200 suitable for virtual function boot in a single-root I/O virtualization (SR-IOV) environment, in accordance with one embodiment of the present invention.
- the system may include an SR-IOV enabled server 201 and a storage adapter 202 (e.g., MegaRAID controller).
- the SR-IOV enabled server 201 of the present invention may include any server known in the art capable of implementing SR-IOV.
- the SR-IOV enabled server 201 may include a VT-d enabled Intel® server.
- the SR-IOV enabled server 201 may include, but is not limited to, an Intel® Xeon® 5500 or 5600 series server.
- the SR-IOV enabled server 201 is not limited to Intel® or Xeon® based server technology; rather, the above description should be interpreted merely as an illustration.
- the SR-IOV enabled server 201 and the MegaRAID card 202 are communicatively couplable via an interconnection bus.
- the interconnection bus may include a PCI Express (PCIe) interconnection bus 204 (e.g., PCI Express 2.0).
- a user may insert/connect the MegaRAID card 202 in the PCIe server slot (not shown) of the SR-IOV enabled server 201 , thereby establishing a communication link between the server 201 and physical function 208 of the MegaRAID card 202 .
- the SR-IOV enabled server 201 may be configured to host multiple virtual machines (VMs).
- the SR-IOV enabled server 201 may host a first VM 214 a , a second VM 214 b , a third VM, and up to and including an Nth VM 214 d .
- the server 201 may be configured to host a virtual machine manager (VMM) 206 .
- the server 201 may host a hypervisor (e.g., Xen or KVM) configured to manage the VMs 214 a - 214 d .
- A VMM and a hypervisor are generally known in the art to be equivalent.
- a hypervisor is software installed on a server utilized to run guest operating systems (i.e., virtual machines) on the given server.
- a hypervisor may be installed on the SR-IOV enabled server 201 in order to manage the VMs 214 a - 214 d , wherein virtual functions of the system 200 are assigned and operated by the VMs 214 a - 214 d , as will be discussed in greater detail further herein.
- the MegaRAID controller 202 includes a physical function (PF) 203 .
- the PF 203 may be configured to implement a plurality of virtual functions (VFs) on the MegaRAID controller 202 .
- virtual functions VF- 1 , VF- 2 , VF- 3 , and up to and including VF-N may be implemented on MegaRAID controller 202 .
- FIG. 2B represents a block diagram illustrating the kernel space view of the SR-IOV enabled server 201 following interconnection of the MegaRAID card 202 and the server 201 via the PCIe interconnect 204 .
- the VM Manager 206 includes a virtual disk VD- 0 , a PF driver 223 loaded from the MegaRAID controller 202 , a kernel 228 , and system BIOS and/or UEFI 226 .
- Each of the virtual machines includes a virtual disk, a virtual function driver, a virtual function, a kernel, and a VMBIOS.
- virtual machine 214 a includes virtual disk VD- 1 , VF driver 222 a , virtual function VF 1 218 a , kernel 224 a , and VMBIOS 216 a.
- the system 200 may boot firmware of the SR-IOV enabled server 201 .
- the system 200 may boot the BIOS or UEFI 226 of the SR-IOV server 201 .
- the system 200 may boot firmware of the MegaRAID controller 202 .
- the VM manager 206 may identify the physical function (PF) of the storage adapter 202 as the controller of the SR-IOV enabled server 201 .
- the VM Manager 206 may load a PF driver 208 onto the SR-IOV enabled server 201 .
- FIG. 2B illustrates the kernel level view of the SR-IOV enabled server 201 following this PF driver 208 loading process.
- the system 200 may create a set of virtual functions 210 using the PF driver 208 .
- the PF driver 223 may enumerate the virtual functions of the storage adapter 202 for use by the VM manager 206 .
- As shown in FIG. 2A , the MegaRAID card 202 may host a first virtual function VF- 1 , a second virtual function VF- 2 , a third virtual function VF- 3 , and up to and including an Nth virtual function VF-N. It is contemplated herein that the creation and enumeration of the virtual functions 210 may depend on a variety of factors. These factors may include, but are not limited to, the operational configuration of the VM manager 206 (i.e., the hypervisor) or the hardware capabilities (e.g., SR-IOV enabled server capabilities) of the system 200 .
- each of a set of virtual disks (VDs) 212 may be assigned to a virtual function of the storage adapter 202 .
- VD- 0 may be assigned to VF- 1
- VD- 1 may be assigned to VF- 1
- VD- 2 may be assigned to VF- 2
- VD- 3 may be assigned to VF- 3
- VD-N may be assigned to VF-N, as shown in the logical view of storage adapter 202 of FIG. 2A .
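The one-to-one assignment of virtual disks to virtual functions enumerated above amounts to building a mapping. A minimal sketch (the numbering follows the illustration and, as the disclosure notes, is not a limitation):

```python
# Sketch of the VD-to-VF assignment described above: each virtual
# disk of the set 212 is assigned to one virtual function of the
# storage adapter. Numbering is illustrative only.

def assign_virtual_disks(n):
    # VD-1 -> VF-1, VD-2 -> VF-2, ..., VD-N -> VF-N
    return {"VF-%d" % i: "VD-%d" % i for i in range(1, n + 1)}


mapping = assign_virtual_disks(4)
```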
- the set of virtual disks 212 may create a RAID volume 218 (e.g., DAS RAID).
- an enclosure 220 of the system 200 may host one or more physical disks (e.g., HDDs or SSDs), as illustrated in FIG. 2A .
- the physical disks of the enclosure 220 are not necessarily the same in number as the number of VDs of the RAID volume.
- multiple RAID volumes may be formed from a single disk.
- any number of VDs may be created from any number of physical disks.
- VD- 0 . . . VD-N are illustrated within the enclosure 220 in order to illustrate that the VDs of the RAID volume 218 are hosted on the physical disks of the enclosure 220 .
- the DAS RAID 218 and the storage adapter 202 may be communicatively coupled via a serial-attached-SCSI (SAS) interconnection bus 219 .
- the VM manager 206 of the SR-IOV enabled server 201 may detect each of the set of virtual functions on the PCIe bus 204 .
- the VM manager 206 detects a given VF (e.g., VF- 1 . . . VF-N) as a PCIe device.
- the storage adapter 202 may maintain and track boot data for each of the virtual functions utilizing firmware running on the storage adapter 202 .
- the storage adapter 202 may maintain and track each virtual function boot data separately.
- the storage adapter 202 may maintain a boot list associated with the set of virtual functions 210 .
- the system 200 may be configured to automatically load and execute expansion VMBIOS whenever a user creates a new virtual machine.
- a BIOS emulation module of the VM manager 206 may execute the boot sequence.
- the BIOS emulation module may load the bootstrap from the boot disk via the BIOS.
- a user may add a VF driver into the OS. As such, the VF driver will have full access to the associated disk.
- the VMBIOS may query the storage adapter 202 for the boot list associated with the set of virtual functions 210 .
- the firmware may be configured to return boot data for VF- 1 of 210 .
- the firmware may be configured to return boot data for VF-N of 210 .
- the storage adapter 202 may be configured to maintain boot data in a manner to correlate a first virtual function VF- 1 to VD- 1 , a second virtual function VF- 2 to VD- 2 , and up to an Nth virtual function VF-N to VD-N. Applicant notes that the numbering scheme disclosed above is merely implemented for illustrative purposes and should not be interpreted as a limitation on the present invention.
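The per-VF boot-data query described above can be sketched as a lookup keyed on the requesting virtual function: firmware identifies the querying VF and returns only that VF's boot data. Function and table names are illustrative, not part of the disclosed firmware interface:

```python
# Sketch of the adapter-firmware boot-list query described above.
# The table and function names are illustrative.

BOOT_LIST = {"VF-1": "VD-1", "VF-2": "VD-2", "VF-3": "VD-3"}


def query_boot_data(requesting_vf):
    # Firmware identifies the requesting VF and returns the boot
    # data maintained for that VF alone.
    if requesting_vf not in BOOT_LIST:
        raise KeyError("no boot data maintained for %s" % requesting_vf)
    return BOOT_LIST[requesting_vf]
```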
- the VMBIOS may be utilized to present the detected boot list to a VM boot manager of the VM manager 206 .
- each of the set of virtual disks (e.g., VD- 0 . . . VD-N) may be mapped to a specific virtual function of the set of virtual functions utilizing the VM boot manager of the VM manager 206 .
- the virtual functions may be utilized to boot each of the set of virtual machines 214 a . . . 214 d .
- the virtual functions 218 a . . . 218 d may be utilized to boot each of the set of virtual machines 214 a . . . 214 d respectively.
- each of the virtual functions 218 a . . . 218 d is assigned to a single virtual machine of the group 214 a . . . 214 d via the PCIe passthrough 209 .
- the VM manager 206 may designate the given virtual function for PCIe passthrough. It should be recognized by those skilled in the art that PCIe passthrough may be managed utilizing the VM manager 206 (e.g., hypervisor).
- FIG. 3 illustrates a block diagram view of a system 300 suitable for virtual function boot in a MR-IOV environment, in accordance with a further embodiment of the present invention. Applicant notes that unless otherwise noted the features and components as described previously herein with respect to system 200 should be interpreted to extend through the remainder of the disclosure.
- the system 300 may include a plurality of servers including a first server 314 a , a second server 314 b , and up to and including an Nth server 314 c .
- standard server technology is suitable for implementation in the context of the MR-IOV environment of the present invention. In this sense, any suitable server technology known in the art may be implemented as one of the plurality of servers of the present invention.
- the system 300 may include a storage adapter 302 .
- the storage adapter 302 may include a MegaRAID controller 302 .
- the adapter 302 may include a physical function 308 , a plurality of virtual functions 310 (e.g., VF- 1 . . . VF-N) and a corresponding plurality of virtual disks 312 (e.g., VD- 1 . . . VD-N).
- the storage adapter 302 may be coupled to a RAID volume formed from multiple physical disks (e.g., HDDs) of enclosure 318 via a SAS connection 319 .
- the system 300 may include a MR-IOV switch 304 .
- the MR-IOV switch 304 may include, but is not limited to, a PCIe switch 305 .
- the PCIe switch 305 may include a plurality of ports P- 1 , P- 2 and up to and including P-N.
- MegaRAID card 302 and the MR-IOV switch 304 are communicatively couplable via an interconnection bus.
- the interconnection bus may include a PCI Express (PCIe) interconnection bus (not shown) (e.g., PCI Express 2.0).
- a user may insert/connect the MegaRAID card 302 in the PCIe server slot (not shown) of the MR-IOV switch 304 , thereby establishing a communication link between the MR-IOV switch 304 and physical function 308 of the MegaRAID card 302 .
- each of the MR-IOV servers 314 a . . . 314 c and the MegaRAID card 302 are communicatively couplable via an interconnection link.
- each server 314 a . . . 314 c may individually be coupled to the MR-IOV switch 304 via an interconnection link (e.g., interconnection cables).
- the interconnection link may include a PCI Express cable.
- the MR-IOV switch 304 is configured to assign each virtual function of the system 300 to a server (e.g., 314 a . . . 314 c ) through PCIe communication.
- a physical function driver of the storage adapter 302 may be loaded on the MR-IOV switch 304 .
- the PF driver loaded on the MR-IOV switch may be utilized to create a plurality of virtual functions VF- 1 through VF-N.
- the MR-IOV switch 304 may then assign each of the virtual functions VF- 1 . . . VF-N to an individual MR-IOV server 314 a . . . 314 c.
- each of the MR-IOV servers 314 a . . . 314 c is capable of booting with standard system BIOS/UEFI.
- the UEFI/BIOS of the MR-IOV servers 314 a . . . 314 c may identify each of the virtual functions VF- 1 . . . VF-N as virtual adapters. In this manner, each MR-IOV server identifies a single virtual function as a virtual storage adapter. Then, the system UEFI/BIOS loads UEFI drivers (or the adapter's option ROM) for the storage adapter 302 .
- the UEFI driver may obtain a boot list associated with the plurality of virtual functions from firmware of the storage adapter 302 .
- the UEFI driver loaded on each of the virtual functions may be utilized to obtain a boot list from the firmware of the storage adapter 302 .
- the boot list is configured to associate each virtual function VF- 1 . . . VF-N with a corresponding boot disk VD- 1 . . . VD-N.
- the virtual function may issue a command to the storage adapter 302 .
- the storage adapter 302 may determine the requesting virtual function and provide that virtual function with the associated boot disk information. Further, once a given disk is identified as a boot disk for a given server, this disk is marked as the dedicated boot disk for this server. This information may be utilized in future queries.
- the boot manager of each of the MR-IOV servers 314 a . . . 314 c may utilize the boot list to boot the plurality of boot disks. In this manner, the boot manager may utilize the boot list and the virtual functions VF- 1 . . . VF-N assigned to each MR-IOV server 314 a . . . 314 c to boot each of the plurality of disks VD- 1 . . . VD-N.
- the kernel may prompt for a kernel driver for a given virtual function.
- Once the OS is loaded, the OS will provide direct access to the boot disk information.
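The dedicated-boot-disk behavior described above (mark on first identification, reuse on later queries) can be sketched as a small cache in the adapter firmware. Class and method names are hypothetical:

```python
# Sketch of the dedicated-boot-disk marking described above: once a
# disk is identified as the boot disk for a server, firmware marks it
# as dedicated, and future queries reuse that association.

class AdapterFirmware:
    def __init__(self, boot_list):
        self.boot_list = dict(boot_list)  # VF -> associated boot disk
        self.dedicated = {}               # server -> marked boot disk

    def boot_disk_for(self, server, vf):
        # First query marks the disk as dedicated; subsequent queries
        # for the same server return the marked disk directly.
        if server not in self.dedicated:
            self.dedicated[server] = self.boot_list[vf]
        return self.dedicated[server]


fw = AdapterFirmware({"VF-1": "VD-1", "VF-2": "VD-2"})
```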
- FIG. 4 illustrates a system 400 suitable for virtual function boot in a MR-IOV environment equipped with multi-node clustering capabilities, in accordance with one embodiment of the present invention.
- the system 400 is built on an architecture similar to that described with respect to system 300 . As such, the components and features of system 300 should be interpreted to extend to system 400 .
- the system 400 includes a plurality of servers 414 a . . . 414 c hosting a plurality of virtual machines 416 a . . . , a MR-IOV switch 404 including a PCIe switch 405 with multiple ports 406 , and a storage adapter (e.g., MegaRAID card 402 ) having a plurality of virtual functions 410 associated with a plurality of virtual disks 412 .
- the MR-IOV switch is configured to perform multi-node clustering using the single storage adapter 402 .
- the degree of clustering implemented by system 400 is not limited to two. Rather, it is only limited by the number of available virtual functions VF- 1 . . . VF-N. As such, in a general sense, the system 400 may implement N-node clustering.
- all cluster volumes may represent shared volumes.
- the volumes are only visible to predefined nodes.
- VD- 1 may be visible to server- 1 414 a and server-N 414 c , as shown in FIG. 4 .
- a virtual machine (VM) 416 a may be created and assigned to VD- 1 for storage in server- 1 414 a .
- server- 1 414 a may issue a PERSISTENT RESERVE via the storage adapter 402 firmware and take ownership of this volume.
- All of the operating system and associated data are then stored in VD- 1 from VM 416 a .
- VD- 1 is also available to server-N 414 c ; however, server-N does not have the ability to modify the arrangement, as server- 1 414 a has ownership of the volume.
- a process performed by Live Migration (Hyper-V) or vMotion (VMware) software may carry out the transfer. Since VD- 1 contains the pertinent information, Live Migration or vMotion need only transfer ownership from Server- 1 to Server-N by issuing a RELEASE from Server- 1 and a RESERVE from Server-N. It is noted herein that the process only transfers control from server- 1 to server-N. Migration of actual data from server- 1 to server-N is not required.
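The control-only ownership transfer described above can be sketched as RELEASE followed by RESERVE on a shared volume, with no data movement. This models the SCSI reservation semantics only; the names are illustrative:

```python
# Sketch of the ownership transfer described above: migration issues
# a RELEASE from the source server and a RESERVE from the target
# server; only control moves, never the data stored on the volume.

class SharedVolume:
    def __init__(self, name):
        self.name = name
        self.owner = None
        self.data_moves = 0  # stays 0: migration transfers control only

    def reserve(self, server):
        # Models PERSISTENT RESERVE taking ownership of the volume.
        if self.owner is not None:
            raise RuntimeError("%s already owned by %s" % (self.name, self.owner))
        self.owner = server

    def release(self, server):
        if self.owner != server:
            raise RuntimeError("only the owner may release the volume")
        self.owner = None


def migrate(volume, src, dst):
    volume.release(src)
    volume.reserve(dst)


vd1 = SharedVolume("VD-1")
vd1.reserve("server-1")
migrate(vd1, "server-1", "server-N")
```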
- FIG. 5 illustrates a system 500 suitable for virtual function boot in a MR-IOV environment equipped with multi-level HA capabilities, in accordance with one embodiment of the present invention.
- the system 500 includes a plurality of servers 514 a . . . 514 c hosting a plurality of virtual machines 516 a . . . 516 g .
- the system 500 further includes two or more MR-IOV switches.
- the system 500 may include a first MR-IOV switch including a first PCIe switch 505 a with multiple ports 506 a and a second MR-IOV switch including a second PCIe switch 505 b with multiple ports 506 b .
- the system 500 may include multiple storage adapters.
- the system 500 may include a first storage adapter (e.g., MegaRAID card 502 a ) having a plurality of virtual functions 510 a and a plurality of virtual disks 512 a and a second storage adapter (e.g., MegaRAID card 502 b ) having a plurality of virtual functions 510 b and a plurality of virtual disks 512 b .
- Each adapter 502 a and 502 b may also include a physical function (PF) 508 a and 508 b respectively.
- the multiple PCIe switches are configured to perform N-node clustering utilizing multiple storage adapters (e.g., 502 a and 502 b ).
- For example, a first virtual function (e.g., VF- 1 of 510 a ) of the first storage adapter 502 a and a second virtual function (e.g., VF- 1 of 510 b ) of the second storage adapter 502 b may be assigned to the same server, providing that server with two paths to its volume.
- This concept may be extended to all servers 514 a . . . 514 c with all virtual functions of all of the storage adapters 502 a and 502 b of the system 500 , as illustrated by the dotted lines in FIG. 5 .
- the storage adapter 502 a - 502 b firmware may be configured to provide TPGS/ALUA (SCSI-3) support. Further, one of the two paths available to all servers is the active path, whereas the second of the two paths is the passive path. In this sense, it should be straightforward for the multi-path solution to identify which adapter is active/optimized and which adapter is non-active/optimized.
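The active/passive path selection described above can be sketched as a lookup over reported ALUA states: multipath software picks the adapter reporting active/optimized. Adapter names and state strings are illustrative:

```python
# Sketch of the TPGS/ALUA behavior described above: of the two paths
# a server sees to the same volume, one adapter reports the
# active/optimized state and the other the non-optimized (passive)
# state, so multipath software can select the active path.

PATHS = {
    "adapter-502a": "active/optimized",
    "adapter-502b": "active/non-optimized",  # passive path
}


def pick_active_path(paths):
    for adapter, state in paths.items():
        if state == "active/optimized":
            return adapter
    raise RuntimeError("no active/optimized path available")
```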
- FIG. 6 illustrates a block diagram view of a system 600 suitable for virtual function boot in a SR-IOV environment equipped with diagnostic messaging capabilities, in accordance with a further embodiment of the present invention.
- the system 600 includes, but is not limited to, an SR-IOV enabled server 601 configured to host multiple virtual machines 614 a - 614 b and a storage adapter 602 (e.g., MegaRAID controller) communicatively couplable to the server 601 via a PCIe interconnect 604 .
- system 600 also includes a set of virtual functions 610 and virtual disks 612 of the storage adapter (e.g., MegaRAID card 602 ).
- the system 600 includes a set of virtual machines 614 a - 614 b hosted on the SR-IOV enabled server 601 .
- Each virtual machine may include an application set (e.g., 616 a or 616 b ) and a kernel (e.g., 618 a or 618 b ).
- Each kernel may include a virtual function driver (e.g., 620 a or 620 b ).
- a virtual function (VF) driver (e.g., 620 a or 620 b ) may be configured to issue a status of the VF driver to an interface of the storage adapter 602 .
- this issuance may allow the storage adapter firmware to acknowledge the received status and forward the status to a PF driver 622 in the associated VM manager 606 (coupled to the adapter 602 via PCIe).
- the storage adapter 602 may take action based on status received from the VF drivers 620 a - 620 b .
- the PF driver 622 may further forward the status to a user interface suitable for user notification 628 .
- the PF driver 622 may forward the status to an error handler 624 of the VM manager 606 .
- a VF driver 620 a or 620 b may transmit a status signal 621 a or 621 b from the VF driver 620 a or 620 b to the storage adapter 602 .
- the status signal 621 a or 621 b may be indicative of a status of the VF driver 620 a or 620 b .
- the status signal 621 a or 621 b may be received from a VF driver by the corresponding virtual function.
- a signal 621 a transmitted by a first VF driver 620 a (representing the VF driver of the VM associated with VF- 1 ) may be received by VF- 1 of the storage adapter 602 .
- a signal 621 b transmitted by a second VF driver 620 b (representing the VF driver of the VM associated with VF- 4 ) may be received by VF- 4 of the storage adapter 602 .
- the storage adapter 602 may store information indicative of the status transmitted by the status signal 621 a or 621 b utilizing the storage adapter firmware and a memory of the adapter 602 .
- the storage adapter 602 may relay the original status by transmitting a signal 623 indicative of the status to the PF driver 622 in the VM manager 606 .
- the PF driver 622 may relay the status by transmitting a signal 625 to an error handler 624 of the VM manager 606 .
- the error handler 624 may be pre-programmed by a user to implement a particular course of action based on the information content of the signal 625 received by the error handler 624 .
- the PF driver 622 may relay the status to a management tool 626 of the VM manager 606 via signal 629 .
- the management tool 626 may transmit a user signal 627 to a user interface (not shown), wherein the user signal is configured to trigger a pre-determined message (e.g., textual message, audio message, video message) selected based on one or more characteristics (e.g., information content related to status of VF driver 620 a or 620 b ) of the status signal 629 received by the management tool 626 .
- the diagnostic messaging process described above may be extended to an MR-IOV environment.
- in an MR-IOV environment, the storing of status information, error handling, and transmission of signals to a user interface may be handled by an MR-IOV switch rather than a VM manager.
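- By way of a non-limiting illustration, the status relay described above (VF driver to storage adapter firmware to PF driver to error handler and management tool) may be sketched in software as follows. All class and variable names are hypothetical and are not part of the disclosure; the sketch models only the signal flow, not actual adapter firmware.

```python
# Hypothetical model of the diagnostic status relay of FIG. 6:
# a VF driver posts a status to the adapter, whose firmware stores it
# in adapter memory and relays it to the PF driver, which fans it out
# to an error handler and a management tool in the VM manager.

class PFDriver:
    def __init__(self, error_handler, management_tool):
        self.error_handler = error_handler      # pre-programmed actions (signal 625)
        self.management_tool = management_tool  # user notification path (signal 629)

    def relay(self, vf_id, status):
        if status != "ok":
            self.error_handler.append((vf_id, status))
        self.management_tool.append((vf_id, status))

class StorageAdapterFirmware:
    def __init__(self, pf_driver):
        self.pf_driver = pf_driver
        self.status_log = []                     # adapter memory holding statuses

    def receive_vf_status(self, vf_id, status):
        self.status_log.append((vf_id, status))  # store status via firmware + memory
        self.pf_driver.relay(vf_id, status)      # relay to PF driver (signal 623)

errors, notices = [], []
firmware = StorageAdapterFirmware(PFDriver(errors, notices))
firmware.receive_vf_status("VF-1", "ok")
firmware.receive_vf_status("VF-4", "driver_fault")
```

In this sketch, a fault reported by the VF driver associated with VF-4 reaches both the error handler and the management tool, while a healthy status reaches only the management tool for user notification.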
- FIG. 7 illustrates a flow diagram depicting a process for VF function boot in a SR-IOV environment, in accordance with one embodiment of the present invention.
- Step 702 may load a PF driver of the PF of the storage adapter onto the SR-IOV enabled server utilizing the virtual machine manager of the SR-IOV enabled server.
- Step 704 may create a plurality of virtual functions utilizing the PF driver.
- Step 706 may maintain a boot list associated with the plurality of virtual functions.
- Step 708 may detect each of the plurality of virtual functions on an interconnection bus utilizing the VMM.
- Step 710 may query the storage adapter for the boot list associated with the plurality of virtual functions utilizing a VMBIOS associated with the plurality of VMs, the VMBIOS being configured to detect the boot list associated with the plurality of virtual functions.
- Step 712 may present the detected boot list to a VM boot manager of the VMM utilizing the VMBIOS.
- Step 714 may boot each of the plurality of virtual machines utilizing each of the virtual functions, wherein each VF of the plurality of VFs is assigned to a VM of the plurality of VMs via an interconnect passthrough between the VMM and the plurality of VMs, wherein each of a plurality of virtual disks (VDs) is mapped to a VF of the plurality of virtual functions utilizing the VM boot manager.
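- The flow of steps 702 through 714 may be condensed into the following non-limiting sketch, in which virtual functions, the boot list, and virtual machines are modeled as simple labels and dictionaries. The function name and data shapes are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical condensation of FIG. 7: create VFs, maintain a boot list on
# the adapter, query it via the VMBIOS, and boot each VM from its mapped VD.

def sr_iov_boot(num_vfs):
    # Steps 702-704: PF driver loaded; VFs created (modeled as labels).
    vfs = [f"VF-{i}" for i in range(1, num_vfs + 1)]
    # Step 706: adapter firmware maintains a boot list (VF -> virtual disk).
    boot_list = {vf: f"VD-{i}" for i, vf in enumerate(vfs, 1)}
    # Steps 708-712: the VMM detects the VFs; the VMBIOS queries the boot
    # list and presents it to the VM boot manager.
    presented = {vf: boot_list[vf] for vf in vfs}
    # Step 714: each VM boots from the VD mapped to its passthrough VF.
    return {f"VM-{i}": presented[vf] for i, vf in enumerate(vfs, 1)}

booted = sr_iov_boot(3)   # {'VM-1': 'VD-1', 'VM-2': 'VD-2', 'VM-3': 'VD-3'}
```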
- FIG. 8 illustrates a flow diagram depicting a process for VF function boot in a MR-IOV environment, in accordance with one embodiment of the present invention.
- Step 802 may load a physical function (PF) driver of the at least one storage adapter onto the MR-IOV switch.
- Step 804 may create a plurality of virtual functions (VFs) utilizing the PF driver on the MR-IOV switch.
- Step 806 may assign each of the VFs to an MR-IOV server of the plurality of MR-IOV servers.
- Step 808 may identify each of the plurality of VFs as a virtual storage adapter by the plurality of MR-IOV servers, wherein each MR-IOV server identifies a VF as a virtual storage adapter.
- Step 810 may load a UEFI driver onto each of the VFs.
- Step 812 may obtain a boot list associated with the plurality of virtual functions from firmware of the at least one storage adapter utilizing the UEFI driver loaded on each of the VFs, wherein the boot list is configured to associate each virtual function with a corresponding boot disk.
- Step 814 may boot a plurality of boot disks utilizing each of the VFs assigned to each of the MR-IOV servers utilizing the obtained boot list.
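- Analogously, steps 802 through 814 may be condensed into the following non-limiting sketch, in which the MR-IOV switch assigns one VF per server and each server boots the disk its VF resolves to. The names and the one-VF-per-server assumption are illustrative only.

```python
# Hypothetical condensation of FIG. 8: the PF driver on the MR-IOV switch
# creates VFs, assigns them to servers, and each server's UEFI driver
# obtains its boot-disk entry from the adapter firmware's boot list.

def mr_iov_boot(servers):
    # Steps 802-804: PF driver loaded on the switch; one VF per server created.
    vfs = [f"VF-{i}" for i in range(1, len(servers) + 1)]
    # Step 806: assign each VF to an MR-IOV server.
    assignment = dict(zip(vfs, servers))
    # Steps 810-812: the UEFI driver on each VF obtains its boot-disk entry.
    boot_list = {vf: f"VD-{i}" for i, vf in enumerate(vfs, 1)}
    # Step 814: each server boots the disk associated with its assigned VF.
    return {assignment[vf]: boot_list[vf] for vf in vfs}

result = mr_iov_boot(["server-A", "server-B"])
```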
- an implementer may opt for a mainly hardware and/or firmware vehicle; alternatively, if flexibility is paramount, the implementer may opt for a mainly software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware.
- any vehicle to be utilized is a choice dependent upon the context in which the vehicle will be deployed and the specific concerns (e.g., speed, flexibility, or predictability) of the implementer, any of which may vary.
- Those skilled in the art will recognize that optical aspects of implementations will typically employ optically-oriented hardware, software, and/or firmware.
- a typical data processing system generally includes one or more of a system unit housing, a video display device, a memory such as volatile and non-volatile memory, processors such as microprocessors and digital signal processors, computational entities such as operating systems, drivers, graphical user interfaces, and applications programs, one or more interaction devices, such as a touch pad or screen, and/or control systems including feedback loops and control motors (e.g., feedback for sensing position and/or velocity; control motors for moving and/or adjusting components and/or quantities).
- a typical data processing system may be implemented utilizing any suitable commercially available components, such as those typically found in data computing/communication and/or network computing/communication systems.
- any two components so associated can also be viewed as being “operably connected”, or “operably coupled”, to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being “operably couplable”, to each other to achieve the desired functionality.
- operably couplable include but are not limited to physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components.
Description
- The present application is related to and claims the benefit of the earliest available effective filing date(s) from the following listed application(s) (the “Related Applications”) (e.g., claims earliest available priority dates for other than provisional patent applications or claims benefits under 35 USC §119(e) for provisional patent applications, for any and all parent, grandparent, great-grandparent, etc. applications of the Related Application(s)).
- For purposes of the USPTO extra-statutory requirements, the present application constitutes a divisional patent application of United States Non-Provisional patent application entitled VIRTUAL FUNCTION BOOT IN MULTI-ROOT I/O VIRTUALIZATION ENVIRONMENTS TO ENABLE MULTIPLE SERVERS TO SHARE VIRTUAL FUNCTIONS OF A STORAGE ADAPTER THROUGH A MR-IOV SWITCH, naming Parag R. Maharana as inventor, filed on Oct. 6, 2011, application Ser. No. 13/267,646, which constitutes a regular (non-provisional) application of United States Provisional patent application entitled MEGARAID-SRIOV/MRIOV, naming Parag R. Maharana as inventor, filed Oct. 26, 2010, Application Ser. No. 61/406,601. All of the above-named applications are incorporated herein by reference in their entirety.
- The present invention generally relates to single-root and multi-root I/O virtualization in computer based systems and more particularly to virtual function boot in single-root and multi-root I/O virtualization environments.
- Single-root input/output virtualization (SR-IOV) and multi-root input/output virtualization (MR-IOV) specifications allow a single PCIe device to appear as multiple separate PCIe devices. In this sense, a physical device having SR-IOV capabilities may be configured to appear in the PCI configuration space as multiple functions. For example, SR-IOV operates by introducing the concept of physical functions (PFs) and virtual functions (VFs). In a general sense, physical functions are full-featured functions associated with the PCIe device. Virtual functions, by contrast, are lightweight functions that lack configuration resources and only process I/O, wherein each virtual function is derived from a physical function. It is further known in the art that virtual functions may be assigned to guest hosts, commonly referred to as virtual machines.
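- The PF/VF relationship described above may be illustrated with a minimal data-structure sketch. The class and field names below are hypothetical and chosen only to mirror the description; they do not correspond to any actual SR-IOV implementation.

```python
# Minimal model of SR-IOV: a full-featured physical function derives
# lightweight virtual functions that carry only an I/O path and may be
# assigned to guest hosts (virtual machines).

class PhysicalFunction:
    def __init__(self, name, max_vfs=16):
        self.name = name
        self.max_vfs = max_vfs    # e.g., 16 to 32 virtual controllers
        self.vfs = []

    def create_vf(self):
        # Each VF is derived from the PF and lacks configuration resources.
        if len(self.vfs) >= self.max_vfs:
            raise RuntimeError("VF limit reached")
        vf = {"id": f"{self.name}/VF-{len(self.vfs) + 1}", "assigned_vm": None}
        self.vfs.append(vf)
        return vf

pf = PhysicalFunction("PF-0")
vf = pf.create_vf()
vf["assigned_vm"] = "Guest-OS0"   # a VF may be assigned to a guest host
```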
FIG. 1 represents a block diagram view of an SR-IOV system 100 known in the art. The system 100 includes a physical server 102 configured to operate a host OS 106 and host Guest OS0 through Guest OS15, labeled as 108 a and 108 b respectively. The system 100 may further include a SAS controller with associated physical function PF 0 110 and virtual functions VF1 112 a through VF16 112 b. The physical function, the multiple guest hosts, and the multiple virtual functions may include a variety of communication and mapping features as illustrated in FIG. 1 . In a general sense, SR-IOV is the virtualization of the PCIe bus enabling single physical instances of any controller to appear as 16 to 32 virtual controllers.
- A method for virtual function boot in a system including a single-root I/O virtualization (SR-IOV) enabled server configured to implement a plurality of virtual machines (VMs) and a virtual machine manager (VMM) and a storage adapter including at least one physical function (PF) and configured to implement a plurality of virtual functions, wherein the SR-IOV enabled server and the physical storage adapter are communicatively couplable, is disclosed.
In one aspect, a method may include, but is not limited to, upon interconnection of the storage adapter with the SR-IOV enabled server and boot of the SR-IOV enabled server and storage adapter, loading a PF driver of the PF of the storage adapter onto the SR-IOV enabled server utilizing the virtual machine manager of the SR-IOV enabled server; creating a plurality of virtual functions utilizing the PF driver; detecting each of the plurality of virtual functions on an interconnection bus utilizing the VMM; maintaining a boot list associated with the plurality of virtual functions; querying the storage adapter for the boot list associated with the plurality of virtual functions utilizing a VMBIOS associated with the plurality of VMs, the VMBIOS being configured to detect the boot list associated with the plurality of virtual functions; presenting the detected boot list to a VM boot manager of the VMM utilizing the VMBIOS; and booting each of the plurality of virtual machines utilizing each of the virtual functions, wherein each VF of the plurality of VFs is assigned to a VM of the plurality of VMs via an interconnect passthrough between the VMM and the plurality of VMs, wherein each of a plurality of virtual disks (VDs) is mapped to a VF of the plurality of virtual functions utilizing the VM boot manager.
- A method for virtual function boot in a system including a plurality of multi-root I/O virtualization (MR-IOV) servers, at least one MR-IOV switch, and at least one storage adapter including at least one physical function (PF) and configured to implement a plurality of virtual functions, each of the MR-IOV servers being communicatively coupled to the at least one MR-IOV switch, the at least one storage adapter being communicatively couplable to the at least one MR-IOV switch, is disclosed. In one aspect, a method may include, but is not limited to, upon interconnection of the at least one storage adapter with the at least one MR-IOV switch, loading a physical function (PF) driver of the at least one storage adapter onto the MR-IOV switch; creating a plurality of virtual functions (VFs) utilizing the PF driver on MR-IOV switch; assigning each of the VFs to an MR-IOV server of the plurality of MR-IOV servers; identifying each of the plurality of VFs as a virtual storage adapter by the plurality of MR-IOV servers, wherein each MR-IOV server identifies a VF as a virtual storage adapter; loading a UEFI driver onto each of the VFs; obtaining a boot list associated with the plurality of virtual functions from firmware of the at least one storage adapter utilizing the UEFI driver loaded on each of the VFs, wherein the boot list is configured to associate each virtual function with a corresponding boot disk; and booting a plurality of boot disks utilizing each of the VFs assigned to each of the MR-IOV servers utilizing the obtained boot list.
- A system for virtual function boot in a SR-IOV environment is disclosed. In one aspect, a system may include, but is not limited to, a single-root I/O virtualization (SR-IOV) server configured to implement a plurality of virtual machines (VMs) and a virtual machine manager (VMM); and a storage adapter including at least one physical function (PF), the storage adapter configured to implement a plurality of virtual functions, the storage adapter being communicatively couplable to the SR-IOV enabled server via a PCIe slot of the SR-IOV enabled server, wherein, upon interconnection of the storage adapter with the SR-IOV enabled server, the storage adapter and the SR-IOV enabled server are configured to: load a PF driver of the PF of the storage adapter onto the SR-IOV enabled server utilizing the virtual machine manager of the SR-IOV enabled server; create a plurality of virtual functions utilizing the PF driver; detect each of the plurality of virtual functions on an interconnection bus utilizing the VMM; maintain a boot list associated with the plurality of virtual functions; query the storage adapter for the boot list associated with the plurality of virtual functions utilizing a VMBIOS associated with the plurality of VMs, the VMBIOS being configured to detect the boot list associated with the plurality of virtual functions; present the detected boot list to a VM boot manager of the VMM utilizing the VMBIOS; and boot each of the plurality of virtual machines utilizing each of the virtual functions, wherein each VF of the plurality of VFs is assigned to a VM of the plurality of VMs via an interconnect passthrough between the VMM and the plurality of VMs, wherein each of a plurality of virtual disks (VDs) is mapped to a VF of the plurality of virtual functions utilizing the VM boot manager.
- A system for virtual function boot in a MR-IOV environment is disclosed. In one aspect, a system may include, but is not limited to, at least one MR-IOV switch; a plurality of multi-root I/O virtualization (MR-IOV) servers, each of the plurality of MR-IOV servers being communicatively coupled to the MR-IOV switch via a PCIe link; and at least one storage adapter including at least one physical function (PF), the at least one storage adapter configured to implement a plurality of virtual functions, the at least one storage adapter being communicatively couplable to the at least one MR-IOV switch via a PCIe slot of the MR-IOV switch, wherein, upon interconnection of the at least one storage adapter with the at least one MR-IOV switch, the at least one storage adapter, the MR-IOV switch, and the plurality of MR-IOV servers are configured to: load a physical function (PF) driver of the at least one storage adapter onto the MR-IOV switch; create a plurality of virtual functions (VFs) utilizing the PF driver on the MR-IOV switch; assign each of the VFs to an MR-IOV server of the plurality of MR-IOV servers; identify each of the plurality of VFs as a virtual storage adapter by the plurality of MR-IOV servers, wherein each MR-IOV server identifies a VF as a virtual storage adapter; load a UEFI driver onto each of the VFs; obtain a boot list associated with the plurality of virtual functions from firmware of the at least one storage adapter utilizing the UEFI driver loaded on each of the VFs, wherein the boot list is configured to associate each virtual function with a corresponding boot disk; and boot a plurality of boot disks utilizing each of the VFs assigned to each of the MR-IOV servers utilizing the obtained boot list.
- The numerous advantages of the disclosure may be better understood by those skilled in the art by reference to the accompanying figures in which:
- FIG. 1 illustrates a block diagram view of an SR-IOV virtualization environment.
- FIG. 2A illustrates a block diagram view of a system suitable for virtual function boot in a single-root I/O virtualization (SR-IOV) environment, in accordance with one embodiment of the present invention.
- FIG. 2B illustrates a block diagram view of the kernel view of a system suitable for virtual function boot in a single-root I/O virtualization (SR-IOV) environment, in accordance with one embodiment of the present invention.
- FIG. 3 illustrates a block diagram view of a system suitable for virtual function boot in a multi-root I/O virtualization (MR-IOV) environment, in accordance with one embodiment of the present invention.
- FIG. 4 illustrates a block diagram view of a system suitable for virtual function boot in a MR-IOV environment equipped with multi-node clustering capabilities, in accordance with one embodiment of the present invention.
- FIG. 5 illustrates a block diagram view of a system suitable for virtual function boot in a MR-IOV environment equipped with multi-level HA capabilities, in accordance with one embodiment of the present invention.
- FIG. 6 illustrates a block diagram view of a system suitable for virtual function boot in a SR-IOV environment equipped with diagnostic messaging capabilities, in accordance with a further embodiment of the present invention.
- FIG. 7 illustrates a flow diagram depicting a process for VF function boot in a SR-IOV environment, in accordance with one embodiment of the present invention.
- FIG. 8 illustrates a flow diagram depicting a process for VF function boot in a MR-IOV environment, in accordance with one embodiment of the present invention.
- It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not necessarily restrictive of the invention as claimed. The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention and together with the general description, serve to explain the principles of the invention. Reference will now be made in detail to the subject matter disclosed, which is illustrated in the accompanying drawings.
- Referring generally to FIG. 1 through FIG. 8 , systems and methods for physical storage adapter virtual function booting in single-root and multi-root I/O virtualization environments are described in accordance with the present disclosure. -
FIG. 2A illustrates a block diagram view of a system 200 suitable for virtual function boot in a single-root I/O virtualization (SR-IOV) environment, in accordance with one embodiment of the present invention. The system may include an SR-IOV enabled server 201 and a storage adapter 202 (e.g., MegaRAID controller). The present disclosure will focus on an implementation of a MegaRAID controller 202 . Those skilled in the art, however, should recognize that the concepts described in the present disclosure may be extended to include storage adapters other than MegaRAID controllers. As such, the description of MegaRAID controller 202 should not be interpreted as a limitation but rather merely as an illustration. - The SR-IOV enabled
server 201 of the present invention may include any server known in the art capable of implementing SR-IOV. For instance, the SR-IOV enabled server 201 may include a VT-D enabled Intel® server. For example, the SR-IOV enabled server 201 may include, but is not limited to, an Intel® Xeon® 5500 or 5600 server. Those skilled in the art should recognize that the SR-IOV enabled server 201 is not limited to Intel® or Xeon® based server technology, but, rather, the above description should be interpreted merely as an illustration. - In one aspect, the SR-IOV enabled
server 201 and the MegaRAID card 202 are communicatively couplable via an interconnection bus. For example, the interconnection bus may include a PCI Express (PCIe) interconnection bus 204 (e.g., PCI Express 2.0). In this manner, a user may insert/connect the MegaRAID card 202 in the PCIe server slot (not shown) of the SR-IOV enabled server 201 , thereby establishing a communication link between the server 201 and physical function 208 of the MegaRAID card 202 . - In one aspect, the SR-IOV enabled
server 201 may be configured to host multiple virtual machines (VMs). For example, the SR-IOV enabled server 201 may host a first VM 214 a , a second VM 214 b , a third VM, and up to and including an Nth VM 214 d . Further, the server 201 may be configured to host a virtual machine manager (VMM) 206 . For example, the server 201 may host a hypervisor (e.g., Xen or KVM) configured to manage the VMs 214 a - 214 d . Throughout the present invention the terms “hypervisor” and “virtual machine manager (VMM)” will be used interchangeably. Those skilled in the art should recognize that a VMM and a hypervisor are generally known in the art to be equivalent. In a general sense, those skilled in the art should recognize that a hypervisor is software installed on a server utilized to run guest operating systems (i.e., virtual machines) on the given server. In this manner, a hypervisor may be installed on the SR-IOV enabled server 201 in order to manage the VMs 214 a - 214 d , wherein virtual functions of the system 200 are assigned and operated by the VMs 214 a - 214 d , as will be discussed in greater detail further herein. - In another aspect, the
MegaRAID controller 202 includes a physical function (PF) 203 . The PF 203 may be configured to implement a plurality of virtual functions (VFs) on the MegaRAID controller 202 . For example, virtual functions VF-1, VF-2, VF-3, and up to and including VF-N may be implemented on MegaRAID controller 202 . -
FIG. 2B represents a block diagram view illustrating the kernel space view of the SR-IOV enabled server 201 following interconnection of the MegaRAID card 202 and the server 201 via the PCIe interconnect 204 . As shown in FIG. 2B , the VM Manager 206 includes a virtual disk VD-0, a PF driver 223 loaded from the MegaRAID controller 202 , a kernel 228 , and system BIOS and/or UEFI 226 . Each of the virtual machines includes a virtual disk, a virtual function driver, a virtual function, a kernel, and a VMBIOS. For example, virtual machine 214 a includes virtual disk VD-1, VF driver 222 a , virtual function VF1 218 a , kernel 224 a , and VMBIOS 216 a . - Upon interconnection of the
MegaRAID card 202 with the SR-IOV enabled server 201 via the PCIe slot of the server 201 , the system 200 may boot firmware of the SR-IOV enabled server 201 . For example, the system 200 may boot the BIOS or UEFI 226 of the SR-IOV server 201 . Likewise, the system 200 may boot firmware of the MegaRAID controller 202 . During the boot process of the SR-IOV enabled server 201 and the storage adapter 202 firmware, the VM manager 206 (e.g., hypervisor) may identify the physical function (PF) of the storage adapter 202 as the controller of the SR-IOV enabled server 201 . - Following the firmware boot sequence, the
VM Manager 206 may load a PF driver 208 onto the SR-IOV enabled server 201 . Applicant notes that FIG. 2B illustrates the kernel level view of the SR-IOV enabled server 201 following this PF driver 208 loading process. Further, the system 200 may create a set of virtual functions 210 using the PF driver 208 . In this sense, the PF driver 223 may enumerate the virtual functions of the storage adapter 202 for use by the VM manager 206 . As shown in FIG. 2A , the MegaRAID card 202 may host a first virtual function VF-1, a second virtual function VF-2, a third virtual function VF-3, and up to and including an Nth virtual function VF-N. It is contemplated herein that the creation and enumeration of the virtual functions 210 may depend on a variety of factors. These factors may include, but are not limited to, operational configuration of the VM manager 206 (i.e., the hypervisor) or hardware capabilities (e.g., SR-IOV enabled server capabilities) of the system 200 . - In a further aspect, each of a set of virtual disks (VDs) 212 may be assigned to a virtual function of the
storage adapter 202. For example, VD-0 may be assigned to VF-1, VD-1 may be assigned to VF-1, VD-2 may be assigned to VF-2, VD-3 may be assigned to VF-3, and VD-N may be assigned to VF-N, as shown in the logical view ofstorage adapter 202 ofFIG. 2A . Further, the set ofvirtual disks 212 may create a RAID volume 218 (e.g., DAS RAID). In this sense, anenclosure 220 of thesystem 200 may host one or more physical disks (e.g., HDDs or SSDs), as illustrated inFIG. 2A . Applicant notes that the physical disks of theenclosure 218 are not necessarily the same in number as the number of VDs of the RAID volume. Those skilled in the art should recognize that multiple RAID volumes may be formed from a single disk. In a general sense, any number of VDs may be created from any number of physical disks. As the focus of the present invention is on the VDs of thesystem 200, VD-0 . . . VD-N are illustrated within the enclosure 118 in order to illustrate that the VDs of the RAID volume 118 are hosted on the physical disks of theenclosure 220. It should further be recognized that theDAS RAID 218 and thestorage adapter 202 may be communicatively coupled via a serial-attached-SCSI (SAS)interconnection bus 219. - Next, the
VM manager 206 of the SR-IOV enabled server 201 may detect each of the set of virtual functions on the PCIe bus 204 . In this regard, the VM manager 206 detects a given VF (e.g., VF-0 . . . VF-N) as a PCIe device. The storage adapter 202 may maintain and track boot data for each of the virtual functions utilizing firmware running on the storage adapter 202 . In this regard, the storage adapter 202 may maintain and track each virtual function's boot data separately. As such, the storage adapter 202 may maintain a boot list associated with the set of virtual functions 210 . - It is noted herein that the virtual machines (i.e., guest domains) of the present invention (and in a general sense) do not include system BIOS or UEFI to detect the boot drive. As such, the
system 200 may be configured to automatically load and execute an expansion VMBIOS whenever a user creates a new virtual machine. In this setting, when the boot disk is exposed, a BIOS emulation module of the VM manager 206 may execute the boot sequence. First, the BIOS emulation module may load the bootstrap from the boot disk via the BIOS. Once the OS loader is loaded, a user may add a VF driver into the OS. As such, the VF driver will have full access to the associated disk. - In one aspect, the VMBIOS may query the
storage adapter 202 for the boot list associated with the set of virtual functions 210 . For example, the VMBIOS (e.g., 216 a . . . 216 d ) may query the firmware of the storage adapter 202 for the boot list associated with the set of virtual functions. For example, in the case where the first virtual function VF-1 218 a queries the storage adapter firmware, the firmware may be configured to return boot data for VF-1 of 210 . By way of another example, in the case where the Nth virtual function VF-N 218 d queries the storage adapter firmware, the firmware may be configured to return boot data for VF-N of 210 . - In another aspect, the
storage adapter 202 may be configured to maintain boot data in a manner to correlate a first virtual function VF-1 to VD-1, a second virtual function VF-2 to VD-2, and up to an Nth virtual function VF-N to VD-N. Applicant notes that the numbering scheme disclosed above is merely implemented for illustrative purposes and should not be interpreted as a limitation on the present invention. - Further, the VMBIOS may be utilized to present the detected boot list to a VM boot manager of the
VM manager 206. In turn, each of the set of virtual disks (e.g., VD-0 . . . VD-N) may be mapped to a specific virtual function of the set of virtual functions utilizing the VM boot manager of theVM manager 206. - In another aspect, the virtual functions may be utilized to boot each of the set of virtual machines 214 a . . . 214 d. For example, in terms of the kernel view of
FIG. 2B , the virtual functions 218 a . . . 218 d may be utilized to boot each of the set of virtual machines 214 a . . . 214 d respectively. In this regard, each of the virtual functions 218 a . . . 218 d is assigned to a single virtual machine of the group 214 a . . . 214 d via the PCIe passthrough 209 . In this regard, when a user creates a given virtual machine and assigns a given virtual function as a PCIe resource to the given virtual machine, the VM manager 206 may designate the given virtual function for PCIe passthrough. It should be recognized by those skilled in the art that PCIe passthrough may be managed utilizing the VM manager 206 (e.g., hypervisor). -
FIG. 3 illustrates a block diagram view of a system 300 suitable for virtual function boot in a MR-IOV environment, in accordance with a further embodiment of the present invention. Applicant notes that, unless otherwise noted, the features and components as described previously herein with respect to system 200 should be interpreted to extend through the remainder of the disclosure. - The
system 300 may include a plurality of servers including a first server 314 a , a second server 314 b , and up to and including an Nth server 314 c . It should be recognized that standard server technology is suitable for implementation in the context of the MR-IOV environment of the present invention. In this sense, any suitable server technology known in the art may be implemented as one of the plurality of servers of the present invention. - In another aspect, the
system 300 may include a storage adapter 302 . As in system 200 , the storage adapter 302 may include a MegaRAID controller 302 . The adapter 302 may include a physical function 308 , a plurality of virtual functions 310 (e.g., VF-1 . . . VF-N) and a corresponding plurality of virtual disks 312 (e.g., VD-1 . . . VD-N). In addition, the storage adapter 302 may be coupled to a RAID volume 316 formed from multiple physical disks (e.g., HDDs) of enclosure 318 via a SAS connection 319 . - In another aspect, the
system 300 may include a MR-IOV switch 304 . The MR-IOV switch 304 may include, but is not limited to, a PCIe switch 305 . The PCIe switch 305 may include a plurality of ports P-1, P-2 and up to and including P-N. - In a further aspect,
MegaRAID card 302 and the MR-IOV switch 304 are communicatively couplable via an interconnection bus. For example, the interconnection bus may include a PCI Express (PCIe) interconnection bus (not shown) (e.g., PCI Express 2.0). In this manner, a user may insert/connect the MegaRAID card 302 in the PCIe server slot (not shown) of the MR-IOV switch 304 , thereby establishing a communication link between the MR-IOV switch 304 and physical function 308 of the MegaRAID card 302 . - Further, each of the MR-IOV servers 314 a . . . 314 c and the
MegaRAID card 302 are communicatively couplable via an interconnection link. For example, each server 314 a . . . 314 c may individually be coupled to the MR-IOV switch 304 via an interconnection link (e.g., interconnection cables). For example, the interconnection link may include a PCI Express cable. In this regard, the MR-IOV switch 304 is configured to assign each virtual function of the system 300 to a server (e.g., 314 a . . . 314 c ) through PCIe communication. - Upon interconnection of the
storage adapter 302 with the MR-IOV switch 304, a physical function driver of the storage adapter 302 may be loaded on the MR-IOV switch 304. Then, the PF driver loaded on the MR-IOV switch may be utilized to create a plurality of virtual functions VF-1 through VF-N. The MR-IOV switch 304 may then assign each of the virtual functions VF-1 . . . VF-N to an individual MR-IOV server 314 a . . . 314 c. - It is noted herein that each of the MR-IOV servers 314 a . . . 314 c is capable of booting with standard system BIOS/UEFI. The UEFI/BIOS of the MR-IOV servers 314 a . . . 314 c may identify each of the virtual functions VF-1 . . . VF-N as virtual adapters. In this manner, each MR-IOV server identifies a single virtual function as a virtual storage adapter. Then, the system UEFI/BIOS loads UEFI drivers (or an adapter option ROM) for the
storage adapter 302. - Next, the UEFI driver (or Option ROM) may obtain a boot list associated with the plurality of virtual functions from firmware of the
storage adapter 302. For example, the UEFI driver loaded on each of the virtual functions may be utilized to obtain a boot list from the firmware of the storage adapter 302. The boot list is configured to associate each virtual function VF-1 . . . VF-N with a corresponding boot disk VD-1 . . . VD-N. In this manner, once a UEFI driver or Option ROM has been loaded on a virtual function, the virtual function may issue a command to the storage adapter 302. Upon receiving the command, the storage adapter 302 (via firmware) may determine the requesting virtual function and provide that virtual function with the associated boot disk information. Further, once a given disk is identified as the boot disk for a given server, that disk is marked as the dedicated boot disk for that server. This information may be utilized in future queries. - In a further aspect, the boot manager of each of the MR-IOV servers 314 a . . . 314 c may utilize the boot list to boot the plurality of boot disks. In this manner, the boot manager may utilize the boot list and the virtual functions VF-1 . . . VF-N assigned to each MR-IOV server 314 a . . . 314 c to boot each of the plurality of disks VD-1 . . . VD-N.
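By way of a non-limiting illustration, the boot-list query described above may be sketched as follows. All names (e.g., BootListFirmware, query_boot_disk) are hypothetical and serve only to model the behavior in which the adapter firmware identifies the requesting virtual function, returns its boot disk, and marks that disk as dedicated for future queries:

```python
# Illustrative model of the adapter-firmware boot-list query; not an
# actual firmware interface.

class BootListFirmware:
    """Maps each virtual function (VF) to its dedicated boot disk (VD)."""

    def __init__(self, boot_list):
        # boot_list: e.g. {"VF-1": "VD-1", ...}, maintained by adapter firmware.
        self.boot_list = dict(boot_list)
        self.dedicated = {}  # VF -> VD, fixed once the first query resolves

    def query_boot_disk(self, requesting_vf):
        # Firmware determines the requesting VF and returns its boot disk.
        vd = self.dedicated.get(requesting_vf) or self.boot_list[requesting_vf]
        # Once identified, the disk is marked as the dedicated boot disk
        # for this server and reused for future queries.
        self.dedicated[requesting_vf] = vd
        return vd

fw = BootListFirmware({"VF-1": "VD-1", "VF-2": "VD-2"})
print(fw.query_boot_disk("VF-1"))   # VD-1
print(fw.query_boot_disk("VF-1"))   # VD-1 (answered from the dedicated mapping)
```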
- It is recognized herein that once the kernel is loaded it may prompt for a kernel driver for a given virtual function. Once the OS is loaded, the OS will provide direct access to the boot disk information.
-
FIG. 4 illustrates a system 400 suitable for virtual function boot in a MR-IOV environment equipped with multi-node clustering capabilities, in accordance with one embodiment of the present invention. The system 400 is built on an architecture similar to that described with respect to system 200. As such, the components and features of system 200 should be interpreted to extend to system 400. The system 400 includes a plurality of servers 414 a . . . 414 c hosting a plurality of virtual machines 416 a . . . 416 g, a MR-IOV switch 404 including a PCIe switch 405 with multiple ports 406, and a storage adapter (e.g., MegaRAID card 402) having a plurality of virtual functions 410 associated with a plurality of virtual disks 412. - In a further embodiment, the MR-IOV switch is configured to perform multi-node clustering using the
single storage adapter 402. In this regard, a first virtual function (e.g., VF-1) is assigned to a first MR-IOV server (e.g., 414 a) and a second virtual function (e.g., VF-2) is assigned to a second MR-IOV server (e.g., 414 c) utilizing the MR-IOV switch 404. It should be noted that the degree of clustering implemented by system 400 is not limited to two. Rather, it is only limited by the number of available virtual functions VF-1 . . . VF-N. As such, in a general sense, the system 400 may implement N-node clustering. - In this embodiment, all cluster volumes may represent shared volumes. In this regard, the volumes are only visible to predefined nodes. When a given cluster is enabled, all of the disks (e.g., LUNs) are visible to all nodes of the cluster. For example, VD-1 may be visible to server-1 414 a and server-N 414 c, as shown in
FIG. 4 . Further, a virtual machine (VM) 416 a may be created and assigned to VD-1 for storage in server-1 414 a. Prior to creation of VM 416 a, server-1 414 a may issue a PERSISTENT RESERVE via the storage adapter 402 firmware and take ownership of this volume. All of the operating system and associated data are then stored in VD-1 from VM 416 a. At the same time, VD-1 is also available to server-N 414 c; however, server-N does not have the ability to modify the arrangement, as server-1 414 a has ownership of the volume. In the event a user is required to move VM 416 a from Server-1 to Server-N, then a process performed by Live Migration (Hyper-V) or vMotion (VMware) software may carry out the transfer. Since VD-1 contains the pertinent information, Live Migration or vMotion need only transfer ownership from Server-1 to Server-N by issuing a RELEASE from Server-1 and a RESERVE from Server-N. It is noted herein that the process only transfers control from server-1 to server-N. Migration of actual data from server-1 to server-N is not required.
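The RESERVE/RELEASE ownership transfer described above may be modeled, in simplified form, as follows. The PERSISTENT RESERVE semantics are reduced to a single-owner reservation, and all names (SharedVolume, migrate_vm) are hypothetical:

```python
# Simplified illustration of reservation-based ownership transfer on a
# shared cluster volume; not the actual SCSI persistent reservation protocol.

class SharedVolume:
    def __init__(self, name):
        self.name = name
        self.owner = None  # node currently holding the reservation

    def reserve(self, node):
        # Only one node may hold the reservation at a time.
        if self.owner is not None and self.owner != node:
            raise PermissionError(f"{self.name} is reserved by {self.owner}")
        self.owner = node

    def release(self, node):
        if self.owner == node:
            self.owner = None

def migrate_vm(volume, src, dst):
    # Live Migration / vMotion need only transfer ownership: data already
    # on the shared volume is not copied between nodes.
    volume.release(src)
    volume.reserve(dst)

vd1 = SharedVolume("VD-1")
vd1.reserve("Server-1")              # Server-1 takes ownership at VM creation
migrate_vm(vd1, "Server-1", "Server-N")
print(vd1.owner)                     # Server-N
```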
FIG. 5 illustrates a system 500 suitable for virtual function boot in a MR-IOV environment equipped with multi-level HA capabilities, in accordance with one embodiment of the present invention. The system 500 includes a plurality of servers 514 a . . . 514 c hosting a plurality of virtual machines 516 a . . . 516 g. The system 500 further includes two or more MR-IOV switches. For example, the system 500 may include an MR-IOV switch including a first PCIe switch 505 a with multiple ports 506 a and a second PCIe switch 505 b with multiple ports 506 b. Further, the system 500 may include multiple storage adapters. For example, the system 500 may include a first storage adapter (e.g., MegaRAID card 502 a) having a plurality of virtual functions 510 a and a plurality of virtual disks 512 a and a second storage adapter (e.g., MegaRAID card 502 b) having a plurality of virtual functions 510 b and a plurality of virtual disks 512 b. Each adapter 502 a and 502 b may also include a physical function (PF) 508 a and 508 b, respectively. Applicant notes that the present embodiment is not limited to two storage adapters or two PCIe switches. In a general sense, the system 500 may be extended to N nodes, and the illustration of two adapters operating in conjunction with two PCIe switches has been utilized for purposes of simplicity. - In a further embodiment, the multiple PCIe switches (e.g., 505 a and 505 b) are configured to perform N-node clustering utilizing multiple storage adapters (e.g., 502 a and 502 b). In this manner, a first virtual function (e.g., VF-1 of 510 a) may be assigned to a first MR-
IOV server 514 a utilizing the first PCIe switch 505 a. Further, a second virtual function (e.g., VF-1 of 510 b) may be assigned to the first MR-IOV server 514 a utilizing the second PCIe switch 505 b. This concept may be extended to all servers 514 a . . . 514 c with all virtual functions of all of the storage adapters 502 a and 502 b of the system 500, as illustrated by the dotted lines in FIG. 5 . - This configuration allows the same RAID volume to appear twice in each node via the two assigned virtual functions. In turn, this allows for a multi-path solution providing path redundancy. In this embodiment, the storage adapter 502 a-502 b firmware may be configured to provide TPGS/ALUA (SCSI-3) support. Further, one of the two paths available to all servers is the active path, whereas the second of the two paths is the passive path. In this sense, it should be straightforward for the multi-path solution to identify which adapter is active optimized and which adapter is non-active optimized.
- When a given RAID volume is “owned” by a given storage adapter (e.g., 502 a or 502 b), all of the associated virtual functions belonging to the same controller will have an Active path. In a general sense, when a path is labeled as Active, I/O through that path will be optimized and may deliver faster speeds than the non-active path.
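The active/non-active path determination described above may be sketched as follows. This is an ALUA-style simplification in which all names (path_states, pick_path, the adapter labels) are illustrative: every virtual function on the adapter that owns the volume exposes the active (optimized) path, and a multi-path driver routes I/O accordingly:

```python
# Illustrative active/passive path selection; not a real multi-path driver.

def path_states(volume_owner, adapters):
    # adapters: mapping of storage adapter -> its virtual functions,
    # e.g. {"502a": ["VF-1a"], "502b": ["VF-1b"]}.
    states = {}
    for adapter, vfs in adapters.items():
        # VFs on the owning adapter present the active (optimized) path.
        state = "active-optimized" if adapter == volume_owner else "non-optimized"
        for vf in vfs:
            states[vf] = state
    return states

def pick_path(states):
    # A multi-path solution routes I/O through an active-optimized path.
    return next(vf for vf, s in states.items() if s == "active-optimized")

states = path_states("502a", {"502a": ["VF-1a"], "502b": ["VF-1b"]})
print(pick_path(states))             # VF-1a
```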
-
FIG. 6 illustrates a block diagram view of a system 600 suitable for virtual function boot in a SR-IOV environment equipped with diagnostic messaging capabilities, in accordance with a further embodiment of the present invention. The system 600 includes, but is not limited to, an SR-IOV enabled server 601 configured to host multiple virtual machines 614 a-614 b and a storage adapter 602 (e.g., MegaRAID controller) communicatively couplable to the server 601 via a PCIe interconnect 604. Similarly to system 200 of FIG. 2A, system 600 also includes a set of virtual functions 610 and virtual disks 612 of the storage adapter (e.g., MegaRAID card 602). In addition, the system 600 includes a set of virtual machines 614 a-614 b hosted on the SR-IOV enabled server 601. Each virtual machine may include an application set (e.g., 616 a or 616 b) and a kernel (e.g., 618 a or 618 b). Each kernel may include a virtual function driver (e.g., 620 a or 620 b). - It is recognized herein that a virtual function (VF) driver (e.g., 620 a or 620 b) may be configured to issue a status of the VF driver to an interface of the
storage adapter 602. In turn, this issuance may allow the storage adapter firmware to acknowledge the received status and forward the status to a PF driver 622 in the associated VM manager 606 (coupled to the adapter 602 via PCIe). In addition, it is further contemplated herein that the storage adapter 602 may take action based on the status received from the VF driver 614 a-614 b. The PF driver 622 may further forward the status to a user interface suitable for user notification 628. Alternatively, the PF driver 622 may forward the status to an error handler 624 of the VM manager 606. - In one embodiment, after detecting an event (or lack of an event), a VF driver 614 a or 614 b may transmit a
status signal 621 a or 621 b from the VF driver 614 a or 614 b to the storage adapter 602. For example, the status signal 621 a or 621 b may be indicative of a status of the VF driver 614 a or 614 b. Further, the status signal 621 a or 621 b may be received from a VF driver by a corresponding VF function. For instance, a signal 621 a transmitted by a first VF driver 614 a (representing the VF driver of the VM associated with VF-1) may be received by VF-1 of the storage adapter 602. Similarly, a signal 621 b transmitted by a fourth VF driver 614 b may be received by VF-4 of the storage adapter 602. Then, the storage adapter 602 may store information indicative of the status transmitted by the status signal 621 a or 621 b utilizing the storage adapter firmware and a memory of the adapter 602. - Next, the
storage adapter 602 may relay the original status by transmitting a signal 623 indicative of the status to the PF driver 622 in the VM manager 606. - Then, the
PF driver 622 may relay the status by transmitting a signal 625 to an error handler 624 of the VM manager 606. In this manner, the error handler 624 may be pre-programmed by a user to implement a particular course of action based on the information content of the signal 625 received by the error handler 624. Alternatively, the PF driver 622 may relay the status to a management tool 626 of the VM manager 606 via signal 629. In turn, the management tool 626 may transmit a user signal 627 to a user interface (not shown), wherein the user signal is configured to trigger a pre-determined message (e.g., textual message, audio message, video message) selected based on one or more characteristics (e.g., information content related to the status of VF driver 614 a or 614 b) of the status signal 629 received by the management tool 626. - It is further contemplated herein that the above described diagnostic messaging process may be extended to an MR-IOV environment. In this regard, the storing of status information, error handling, and transmission of signals to a user interface may be handled by an MR-IOV switch rather than a VM manager.
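The diagnostic status relay described above (VF driver to adapter firmware to PF driver to an error handler or management tool) may be sketched, in purely illustrative form, as follows. The class and method names are hypothetical and do not correspond to any actual driver interface:

```python
# Illustrative status relay chain: VF driver -> adapter firmware -> PF
# driver in the VM manager; names are hypothetical.

class PFDriver:
    def __init__(self):
        self.notifications = []

    def forward(self, vf, status):
        # Relay to an error handler / management tool for user notification.
        self.notifications.append(f"{vf}: {status}")

class StorageAdapter:
    def __init__(self, pf_driver):
        self.pf_driver = pf_driver
        self.status_log = []  # firmware stores each received status

    def receive_status(self, vf, status):
        self.status_log.append((vf, status))
        self.pf_driver.forward(vf, status)  # relay to the PF driver in the VMM

pf = PFDriver()
adapter = StorageAdapter(pf)
adapter.receive_status("VF-1", "driver timeout")
print(pf.notifications[0])           # VF-1: driver timeout
```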
-
FIG. 7 illustrates a flow diagram depicting a process for VF boot in a SR-IOV environment, in accordance with one embodiment of the present invention. Step 702 may load a PF driver of the PF of the storage adapter onto the SR-IOV enabled server utilizing the virtual machine manager of the SR-IOV enabled server. Step 704 may create a plurality of virtual functions utilizing the PF driver. Step 706 may maintain a boot list associated with the plurality of virtual functions. Step 708 may detect each of the plurality of virtual functions on an interconnection bus utilizing the VMM. Step 710 may query the storage adapter for the boot list associated with the plurality of virtual functions utilizing a VMBIOS associated with the plurality of VMs, the VMBIOS being configured to detect the boot list associated with the plurality of virtual functions. Step 712 may present the detected boot list to a VM boot manager of the VMM utilizing the VMBIOS. Step 714 may boot each of the plurality of virtual machines utilizing each of the virtual functions, wherein each VF of the plurality of VFs is assigned to a VM of the plurality of VMs via an interconnect passthrough between the VMM and the plurality of VMs, wherein each of a plurality of virtual disks (VDs) is mapped to a VF of the plurality of virtual functions utilizing the VM boot manager.
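The SR-IOV flow of steps 702-714 may be condensed into the following illustrative sketch, which assumes (hypothetically) that virtual disk names mirror virtual function names; the function name sr_iov_boot is not from the disclosure:

```python
# Condensed, illustrative walk-through of the SR-IOV VF boot flow.

def sr_iov_boot(num_vfs):
    # Steps 702-704: load the PF driver and create the virtual functions.
    vfs = [f"VF-{i}" for i in range(1, num_vfs + 1)]
    # Steps 706-710: maintain and query the boot list held by the adapter
    # (here each VF's boot disk simply mirrors its name).
    boot_list = {vf: vf.replace("VF", "VD") for vf in vfs}
    # Steps 712-714: the VM boot manager assigns each VF (and its mapped
    # VD) to a VM via interconnect passthrough, then boots the VMs.
    return {f"VM-{i + 1}": (vf, boot_list[vf]) for i, vf in enumerate(vfs)}

print(sr_iov_boot(2))   # {'VM-1': ('VF-1', 'VD-1'), 'VM-2': ('VF-2', 'VD-2')}
```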
FIG. 8 illustrates a flow diagram depicting a process for VF boot in a MR-IOV environment, in accordance with one embodiment of the present invention. Step 802 may load a physical function (PF) driver of the at least one storage adapter onto the MR-IOV switch. Step 804 may create a plurality of virtual functions (VFs) utilizing the PF driver on the MR-IOV switch. Step 806 may assign each of the VFs to an MR-IOV server of the plurality of MR-IOV servers. Step 808 may identify each of the plurality of VFs as a virtual storage adapter by the plurality of MR-IOV servers, wherein each MR-IOV server identifies a VF as a virtual storage adapter. Step 810 may load a UEFI driver onto each of the VFs. Step 812 may obtain a boot list associated with the plurality of virtual functions from firmware of the at least one storage adapter utilizing the UEFI driver loaded on each of the VFs, wherein the boot list is configured to associate each virtual function with a corresponding boot disk. Step 814 may boot a plurality of boot disks utilizing each of the VFs assigned to each of the MR-IOV servers utilizing the obtained boot list.
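Similarly, the MR-IOV flow of steps 802-814 may be condensed into the following illustrative sketch, the difference from the SR-IOV case being that the switch assigns one VF per physical server; all names (mr_iov_boot, the server labels) are hypothetical:

```python
# Condensed, illustrative walk-through of the MR-IOV VF boot flow.

def mr_iov_boot(servers):
    # Steps 802-804: the MR-IOV switch loads the PF driver and creates
    # one virtual function per server.
    vfs = [f"VF-{i}" for i in range(1, len(servers) + 1)]
    # Step 806: assign each VF to an individual MR-IOV server.
    assignment = dict(zip(servers, vfs))
    # Steps 808-812: each server's UEFI driver obtains the boot list,
    # associating each VF with its boot disk (mirrored names here).
    boot_list = {vf: vf.replace("VF", "VD") for vf in vfs}
    # Step 814: each server boots from the disk its assigned VF maps to.
    return {srv: boot_list[vf] for srv, vf in assignment.items()}

print(mr_iov_boot(["Server-1", "Server-2"]))
```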
For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle; alternatively, if flexibility is paramount, the implementer may opt for a mainly software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware. Hence, there are several possible vehicles by which the processes and/or devices and/or other technologies described herein may be effected, none of which is inherently superior to the other in that any vehicle to be utilized is a choice dependent upon the context in which the vehicle will be deployed and the specific concerns (e.g., speed, flexibility, or predictability) of the implementer, any of which may vary. Those skilled in the art will recognize that optical aspects of implementations will typically employ optically-oriented hardware, software, and/or firmware.
- Those skilled in the art will recognize that it is common within the art to describe devices and/or processes in the fashion set forth herein, and thereafter use engineering practices to integrate such described devices and/or processes into data processing systems. That is, at least a portion of the devices and/or processes described herein can be integrated into a data processing system via a reasonable amount of experimentation. Those having skill in the art will recognize that a typical data processing system generally includes one or more of a system unit housing, a video display device, a memory such as volatile and non-volatile memory, processors such as microprocessors and digital signal processors, computational entities such as operating systems, drivers, graphical user interfaces, and applications programs, one or more interaction devices, such as a touch pad or screen, and/or control systems including feedback loops and control motors (e.g., feedback for sensing position and/or velocity; control motors for moving and/or adjusting components and/or quantities). A typical data processing system may be implemented utilizing any suitable commercially available components, such as those typically found in data computing/communication and/or network computing/communication systems.
- The herein described subject matter sometimes illustrates different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected”, or “operably coupled”, to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being “operably couplable”, to each other to achieve the desired functionality. Specific examples of operably couplable include but are not limited to physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components.
- While particular aspects of the present subject matter described herein have been shown and described, it will be apparent to those skilled in the art that, based upon the teachings herein, changes and modifications may be made without departing from the subject matter described herein and its broader aspects and, therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of the subject matter described herein.
- Furthermore, it is to be understood that the invention is defined by the appended claims. It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to inventions containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should typically be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, typically means at least two recitations, or two or more recitations). 
Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, and C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to “at least one of A, B, or C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, or C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.”
- Although particular embodiments of this invention have been illustrated, it is apparent that various modifications and embodiments of the invention may be made by those skilled in the art without departing from the scope and spirit of the foregoing disclosure. Accordingly, the scope of the invention should be limited only by the claims appended hereto.
- It is believed that the present disclosure and many of its attendant advantages will be understood by the foregoing description, and it will be apparent that various changes may be made in the form, construction and arrangement of the components without departing from the disclosed subject matter or without sacrificing all of its material advantages. The form described is merely explanatory, and it is the intention of the following claims to encompass and include such changes.
Claims (11)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/816,864 US20160124754A1 (en) | 2010-10-26 | 2015-08-03 | Virtual Function Boot In Single-Root and Multi-Root I/O Virtualization Environments |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US40660110P | 2010-10-26 | 2010-10-26 | |
US13/267,646 US9135044B2 (en) | 2010-10-26 | 2011-10-06 | Virtual function boot in multi-root I/O virtualization environments to enable multiple servers to share virtual functions of a storage adapter through a MR-IOV switch |
US14/816,864 US20160124754A1 (en) | 2010-10-26 | 2015-08-03 | Virtual Function Boot In Single-Root and Multi-Root I/O Virtualization Environments |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/267,646 Division US9135044B2 (en) | 2010-10-26 | 2011-10-06 | Virtual function boot in multi-root I/O virtualization environments to enable multiple servers to share virtual functions of a storage adapter through a MR-IOV switch |
Publications (1)
Publication Number | Publication Date |
---|---|
US20160124754A1 true US20160124754A1 (en) | 2016-05-05 |
Family
ID=45974095
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/267,646 Expired - Fee Related US9135044B2 (en) | 2010-10-26 | 2011-10-06 | Virtual function boot in multi-root I/O virtualization environments to enable multiple servers to share virtual functions of a storage adapter through a MR-IOV switch |
US14/816,864 Abandoned US20160124754A1 (en) | 2010-10-26 | 2015-08-03 | Virtual Function Boot In Single-Root and Multi-Root I/O Virtualization Environments |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/267,646 Expired - Fee Related US9135044B2 (en) | 2010-10-26 | 2011-10-06 | Virtual function boot in multi-root I/O virtualization environments to enable multiple servers to share virtual functions of a storage adapter through a MR-IOV switch |
Country Status (1)
Country | Link |
---|---|
US (2) | US9135044B2 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160253276A1 (en) * | 2015-02-27 | 2016-09-01 | Samsung Electronics Co., Ltd. | Method of communicating with peripheral device in electronic device on which plurality of operating systems are driven, and the electronic device |
US20180341419A1 (en) * | 2016-02-03 | 2018-11-29 | Surcloud Corp. | Storage System |
Families Citing this family (58)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9262196B2 (en) * | 2010-11-30 | 2016-02-16 | International Business Machines Corporation | Virtual machine deployment planning method and associated apparatus |
WO2012151392A1 (en) * | 2011-05-04 | 2012-11-08 | Citrix Systems, Inc. | Systems and methods for sr-iov pass-thru via an intermediary device |
US8601473B1 (en) | 2011-08-10 | 2013-12-03 | Nutanix, Inc. | Architecture for managing I/O and storage for a virtualization environment |
US9009106B1 (en) | 2011-08-10 | 2015-04-14 | Nutanix, Inc. | Method and system for implementing writable snapshots in a virtualized storage environment |
US9747287B1 (en) | 2011-08-10 | 2017-08-29 | Nutanix, Inc. | Method and system for managing metadata for a virtualization environment |
US8549518B1 (en) | 2011-08-10 | 2013-10-01 | Nutanix, Inc. | Method and system for implementing a maintenanece service for managing I/O and storage for virtualization environment |
US8850130B1 (en) | 2011-08-10 | 2014-09-30 | Nutanix, Inc. | Metadata for managing I/O and storage for a virtualization |
US9652265B1 (en) * | 2011-08-10 | 2017-05-16 | Nutanix, Inc. | Architecture for managing I/O and storage for a virtualization environment with multiple hypervisor types |
US8863124B1 (en) | 2011-08-10 | 2014-10-14 | Nutanix, Inc. | Architecture for managing I/O and storage for a virtualization environment |
US9390294B2 (en) * | 2011-09-30 | 2016-07-12 | Hewlett-Packard Development Company, L.P. | Virtualized device control in computer systems |
US20130159572A1 (en) * | 2011-12-16 | 2013-06-20 | International Business Machines Corporation | Managing configuration and system operations of a non-shared virtualized input/output adapter as virtual peripheral component interconnect root to multi-function hierarchies |
US9772866B1 (en) | 2012-07-17 | 2017-09-26 | Nutanix, Inc. | Architecture for implementing a virtualization environment and appliance |
US9154451B2 (en) * | 2012-08-21 | 2015-10-06 | Advanced Micro Devices, Inc. | Systems and methods for sharing devices in a virtualization environment |
JP5874879B2 (en) * | 2012-11-26 | 2016-03-02 | 株式会社日立製作所 | I / O device control method and virtual computer system |
US9047208B1 (en) * | 2012-12-10 | 2015-06-02 | Qlogic, Corporation | Method and system of configuring virtual function in peripheral devices |
WO2014100273A1 (en) | 2012-12-18 | 2014-06-26 | Dynavisor, Inc. | Dynamic device virtualization |
US9400704B2 (en) * | 2013-06-12 | 2016-07-26 | Globalfoundries Inc. | Implementing distributed debug data collection and analysis for a shared adapter in a virtualized system |
KR102147629B1 (en) | 2013-11-18 | 2020-08-27 | 삼성전자 주식회사 | Flexible server system |
US20150149995A1 (en) * | 2013-11-22 | 2015-05-28 | International Business Machines Corporation | Implementing dynamic virtualization of an sriov capable sas adapter |
US9910689B2 (en) | 2013-11-26 | 2018-03-06 | Dynavisor, Inc. | Dynamic single root I/O virtualization (SR-IOV) processes system calls request to devices attached to host |
US10031767B2 (en) * | 2014-02-25 | 2018-07-24 | Dynavisor, Inc. | Dynamic information virtualization |
TWI556174B (en) | 2014-03-05 | 2016-11-01 | 威盛電子股份有限公司 | System and method for assigning virtual functions and management host thereof |
US9652421B2 (en) * | 2014-04-25 | 2017-05-16 | Hitachi, Ltd. | Computer system and coupling configuration control method |
TWI502348B (en) * | 2014-05-02 | 2015-10-01 | Via Tech Inc | System and method for managing expansion read-only memory and management host thereof |
US9665309B2 (en) | 2014-06-27 | 2017-05-30 | International Business Machines Corporation | Extending existing storage devices in virtualized environments |
US9692698B2 (en) | 2014-06-30 | 2017-06-27 | Nicira, Inc. | Methods and systems to offload overlay network packet encapsulation to hardware |
US9419897B2 (en) * | 2014-06-30 | 2016-08-16 | Nicira, Inc. | Methods and systems for providing multi-tenancy support for Single Root I/O Virtualization |
KR102308782B1 (en) * | 2014-08-19 | 2021-10-05 | 삼성전자주식회사 | Memory controller, storage device, server virtualization system, and storage device identification in server virtualization system |
CN104461958B (en) * | 2014-10-31 | 2018-08-21 | 华为技术有限公司 | Support storage resource access method, storage control and the storage device of SR-IOV |
US9892037B2 (en) | 2014-12-29 | 2018-02-13 | International Business Machines Corporation | Efficient and secure direct storage device sharing in virtualized environments |
JP6565219B2 (en) * | 2015-03-03 | 2019-08-28 | 株式会社ジェイテクト | Operation board |
CN106293502B (en) * | 2015-06-29 | 2019-09-24 | 联想(北京)有限公司 | A kind of configuration method, method for interchanging data and server system |
EP3306870B1 (en) * | 2015-07-03 | 2019-09-11 | Huawei Technologies Co., Ltd. | Network configuration method, network system and device |
US10191864B1 (en) | 2015-11-12 | 2019-01-29 | Amazon Technologies, Inc. | Standardized interface for storage using an input/output (I/O) adapter device |
US9836421B1 (en) * | 2015-11-12 | 2017-12-05 | Amazon Technologies, Inc. | Standardized interface for network using an input/output (I/O) adapter device |
US9910690B2 (en) | 2015-11-20 | 2018-03-06 | Red Hat, Inc. | PCI slot hot-addition deferral for multi-function devices |
US9846592B2 (en) * | 2015-12-23 | 2017-12-19 | Intel Corporation | Versatile protected input/output device access and isolated servicing for virtual machines |
US10467103B1 (en) | 2016-03-25 | 2019-11-05 | Nutanix, Inc. | Efficient change block training |
TWI616759B (en) * | 2016-08-10 | 2018-03-01 | 創義達科技股份有限公司 | Apparatus assigning controller and apparatus assigning method |
CN107894913B (en) * | 2016-09-30 | 2022-05-13 | 超聚变数字技术有限公司 | Computer system and storage access device |
CN111078353A (en) | 2016-10-28 | 2020-04-28 | 华为技术有限公司 | Operation method of storage equipment and physical server |
CN107229590B (en) * | 2017-06-26 | 2021-06-18 | 郑州云海信息技术有限公司 | Method and system for realizing system stability during plugging and unplugging of physical network card |
US10459751B2 (en) * | 2017-06-30 | 2019-10-29 | ATI Technologies ULC | Varying firmware for virtualized device |
CN109144672A (en) * | 2018-09-07 | 2019-01-04 | Zhengzhou Yunhai Information Technology Co., Ltd. | Method, system, and associated components for allocating PCIe devices |
US10754660B2 (en) * | 2018-10-10 | 2020-08-25 | International Business Machines Corporation | Rack level server boot |
US11093301B2 (en) | 2019-06-07 | 2021-08-17 | International Business Machines Corporation | Input output adapter error recovery concurrent diagnostics |
US10901930B1 (en) * | 2019-10-21 | 2021-01-26 | Lenovo Enterprise Solutions (Singapore) Pte. Ltd. | Shared virtual media in a composed system |
US11962518B2 (en) | 2020-06-02 | 2024-04-16 | VMware LLC | Hardware acceleration techniques using flow selection |
US11875172B2 (en) | 2020-09-28 | 2024-01-16 | VMware LLC | Bare metal computer for booting copies of VM images on multiple computing devices using a smart NIC |
US11593278B2 (en) | 2020-09-28 | 2023-02-28 | Vmware, Inc. | Using machine executing on a NIC to access a third party storage not supported by a NIC or host |
US11636053B2 (en) | 2020-09-28 | 2023-04-25 | Vmware, Inc. | Emulating a local storage by accessing an external storage through a shared port of a NIC |
US11792134B2 (en) | 2020-09-28 | 2023-10-17 | Vmware, Inc. | Configuring PNIC to perform flow processing offload using virtual port identifiers |
US11736565B2 (en) | 2020-09-28 | 2023-08-22 | Vmware, Inc. | Accessing an external storage through a NIC |
US11995024B2 (en) | 2021-12-22 | 2024-05-28 | VMware LLC | State sharing between smart NICs |
US11863376B2 (en) | 2021-12-22 | 2024-01-02 | Vmware, Inc. | Smart NIC leader election |
US11899594B2 (en) | 2022-06-21 | 2024-02-13 | VMware LLC | Maintenance of data message classification cache on smart NIC |
US11928367B2 (en) | 2022-06-21 | 2024-03-12 | VMware LLC | Logical memory addressing for network devices |
US11928062B2 (en) | 2022-06-21 | 2024-03-12 | VMware LLC | Accelerating data message classification with smart NICs |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090133028A1 (en) * | 2007-11-15 | 2009-05-21 | Brown Aaron C | System and method for management of an iov adapter through a virtual intermediary in a hypervisor with functional management in an iov management partition |
US20090144731A1 (en) * | 2007-12-03 | 2009-06-04 | Brown Aaron C | System and method for distribution of resources for an i/o virtualized (iov) adapter and management of the adapter through an iov management partition |
US20090276773A1 (en) * | 2008-05-05 | 2009-11-05 | International Business Machines Corporation | Multi-Root I/O Virtualization Using Separate Management Facilities of Multiple Logical Partitions |
US20100082874A1 (en) * | 2008-09-29 | 2010-04-01 | Hitachi, Ltd. | Computer system and method for sharing pci devices thereof |
US20110179414A1 (en) * | 2010-01-18 | 2011-07-21 | Vmware, Inc. | Configuring vm and io storage adapter vf for virtual target addressing during direct data access |
US20110219164A1 (en) * | 2007-08-23 | 2011-09-08 | Jun Suzuki | I/o system and i/o control method |
US20120166690A1 (en) * | 2010-12-28 | 2012-06-28 | Plx Technology, Inc. | Multi-root sharing of single-root input/output virtualization |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4934642B2 (en) * | 2008-06-11 | 2012-05-16 | 株式会社日立製作所 | Computer system |
JP5232602B2 (en) * | 2008-10-30 | 2013-07-10 | 株式会社日立製作所 | Storage device and storage controller internal network data path failover method |
US8144582B2 (en) * | 2008-12-30 | 2012-03-27 | International Business Machines Corporation | Differentiating blade destination and traffic types in a multi-root PCIe environment |
- 2011-10-06: US application US13/267,646, granted as US9135044B2 (status: Expired - Fee Related)
- 2015-08-03: US application US14/816,864, published as US20160124754A1 (status: Abandoned)
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160253276A1 (en) * | 2015-02-27 | 2016-09-01 | Samsung Electronics Co., Ltd. | Method of communicating with peripheral device in electronic device on which plurality of operating systems are driven, and the electronic device |
US10146712B2 (en) * | 2015-02-27 | 2018-12-04 | Samsung Electronics Co., Ltd. | Method of communicating with peripheral device in electronic device on which plurality of operating systems are driven, and the electronic device |
US20180341419A1 (en) * | 2016-02-03 | 2018-11-29 | Surcloud Corp. | Storage System |
Also Published As
Publication number | Publication date |
---|---|
US20120102491A1 (en) | 2012-04-26 |
US9135044B2 (en) | 2015-09-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9135044B2 (en) | Virtual function boot in multi-root I/O virtualization environments to enable multiple servers to share virtual functions of a storage adapter through a MR-IOV switch | |
US9619308B2 (en) | Executing a kernel device driver as a user space process | |
US9798682B2 (en) | Completion notification for a storage device | |
US9262189B2 (en) | Configuring VM and IO storage adapter VF for virtual target addressing during direct data access | |
US9734096B2 (en) | Method and system for single root input/output virtualization virtual functions sharing on multi-hosts | |
US8239655B2 (en) | Virtual target addressing during direct data access via VF of IO storage adapter | |
US8719817B2 (en) | Virtualization intermediary/virtual machine guest operating system collaborative SCSI path management | |
US8990459B2 (en) | Peripheral device sharing in multi host computing systems | |
US8141092B2 (en) | Management of an IOV adapter through a virtual intermediary in a hypervisor with functional management in an IOV management partition | |
US10599458B2 (en) | Fabric computing system having an embedded software defined network | |
US20120137292A1 (en) | Virtual machine migrating system and method | |
US20150261952A1 (en) | Service partition virtualization system and method having a secure platform | |
US20170277573A1 (en) | Multifunction option virtualization for single root i/o virtualization | |
US9460040B2 (en) | Method, device and system for aggregation of shared address devices | |
US20100100892A1 (en) | Managing hosted virtualized operating system environments | |
US10157074B2 (en) | Systems and methods for multi-root input/output virtualization-based management by single service processor | |
US10990436B2 (en) | System and method to handle I/O page faults in an I/O memory management unit | |
US10853284B1 (en) | Supporting PCI-e message-signaled interrupts in computer system with shared peripheral interrupts | |
US11194606B2 (en) | Managing related devices for virtual machines utilizing shared device data | |
US10754676B2 (en) | Sharing ownership of an input/output device using a device driver partition | |
US11100033B1 (en) | Single-root input/output virtualization-based storage solution for software defined storage |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:037808/0001 Effective date: 20160201 |
|
AS | Assignment |
Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041710/0001 Effective date: 20170119 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |