WO2017095424A1 - Integrated zone storage - Google Patents

Integrated zone storage Download PDF

Info

Publication number
WO2017095424A1
Authority
WO
WIPO (PCT)
Prior art keywords
storage
servers
drive
zoned
drives
Prior art date
Application number
PCT/US2015/063778
Other languages
French (fr)
Inventor
Michael S. Bunker
Troy Anthony Della Fiora
David M. KOONCE
Original Assignee
Hewlett Packard Enterprise Development Lp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Enterprise Development Lp filed Critical Hewlett Packard Enterprise Development Lp
Priority to PCT/US2015/063778 priority Critical patent/WO2017095424A1/en
Publication of WO2017095424A1 publication Critical patent/WO2017095424A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38Information transfer, e.g. on bus
    • G06F13/42Bus transfer protocol, e.g. handshake; Synchronisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/062Securing storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • G06F3/0644Management of space entities, e.g. partitions, extents, pools

Definitions

  • an enclosure, rack, or module may be a physical structure to house a computing device and provide services such as power, cooling, interconnections, and networking, among other services, to the device.
  • a blade server enclosure or rack can house multiple blade servers for a computing network. Multiple slots or bays, for example, in the rack or enclosure may receive the blade servers.
  • Each blade server may have multiple components including a processor(s), system memory, a hard-drive(s), an input/output (I/O) card(s), and interconnects, among other components.
  • the internal hard drives of the blade servers may have limited internal storage and thus may be unable to store large amounts of data.
  • Additional storage such as network-attached storage or a storage area network (SAN), may be coupled to the blade servers as external storage to provide additional storage capacity.
  • SAN storage area network
  • FIG. 1 is a diagram of an integrated zoned storage system in accordance with examples
  • FIG. 2 is a diagram of the integrated zoned storage system of Fig. 1 but depicting additional detail in accordance with examples;
  • FIG. 3A is a diagram of a front portion of an integrated zoned storage system with a storage module extended from the chassis in accordance with examples;
  • FIG. 3B is a diagram of an integrated zoned storage system in accordance with examples
  • FIG. 4 is a diagram of integrated zoned storage systems in accordance with examples.
  • FIG. 5 is a block diagram of a method of manufacturing an integrated zoned storage system in accordance with examples.
  • a blade server may be a relatively small modular server that provides processing power and memory, while forgoing significant storage and I/O components, among other components typically found on stand-alone servers.
  • the blade servers on the market accept up to two 2.5-inch hard drives that provide limited internal storage capacity. Further, the operating system (OS) of the blade server may consume a substantial portion of that capacity. Consequently, the memory (e.g., internal hard drives) in the blade server may not have much available storage space for larger capacity data.
  • OS operating system
  • the memory e.g., internal hard drives
  • Direct Attached Storage and I/O components not of the blade server may be housed in a dedicated blade enclosure (e.g., chassis, rack, case, housing, frame, frame with walls, etc.) along with the blade server.
  • the blade enclosure may house storage modules, cooling modules, management modules, and power supplies, among other components not contained by the blade server.
  • the storage module(s) housed in the blade enclosure or chassis may supplement the internal storage of the blade servers.
  • the storage modules may include twelve (12) 2.5-inch hard drives.
  • the storage modules may have multiple drive bays to receive the hard drives, and the drive bays may be coupled and dedicated to a single server, or to more than one server.
  • a single storage module may include more drive bays (and storage drives) than a single server utilizes or, on the other hand, may not adequately supplement the internal storage capacity of a single server.
  • External storage enclosures or modules such as external JBODs (just a bunch of drives), may provide the servers or blade servers with more storage capacity.
  • the external storage enclosures or storage modules may couple to multiple servers using zoned storage drives.
  • external storage may lead to external cabling without sufficient lanes to support the required performance of multiple servers.
  • the use of external storage may introduce additional expander hops.
  • examples herein may implement a blade enclosure or server enclosure having slots or bays that receive servers (e.g., blade servers) and integrated storage (e.g., a storage module having disk drives). Moreover, the storage memory may be zoned to the blade servers. As described herein, an integrated zoned storage system may include a chassis to house and communicatively couple a storage module, switches, and so on.
  • Each storage module may house a substantial number of storage drives, for example, 40 storage drives.
  • the chassis of the overall integrated system may house multiple servers along with the storage module(s) that are coupled to the servers with a sufficient number of lanes. Zoning technology, via the switch or switches, may separate and group the storage drives of the storage module(s) into zoned storage drive groups to provide traffic management among the components of the chassis. Zoning may isolate one or more of the servers to a group of zoned storage drive groups or to a single storage drive group to provide zoned storage in a unified system.
  • Fig. 1 is an example integrated zoned storage system 100.
  • the system 100 includes servers 102 which each may have a processor(s) and memory.
  • the servers 102 may be blade servers.
  • One or more of the servers 102 may include a software program stored in memory and executable by a processor to provide data and manage resources of the system 100.
  • the servers 102 may be multiple servers stored (housed, received, etc.) in a chassis 104 (e.g., enclosure, rack, frame, walls, supports, etc.) along with other network components and devices.
  • the system 100 and chassis 104 may have server slots or server bays to receive servers 102. Further, the system 100 and the chassis 104 may have a communication plane 116.
  • the system 100 and the chassis 104 may additionally house storage drives, power supplies, input/output (I/O) devices, storage modules (having storage drives and I/O), cooling devices, interconnections, along with other components and services.
  • the servers 102 and the other components housed in the chassis 104 may be hot-swappable components because their removal and replacement from the chassis 104 in certain examples can be carried out generally without excessive reconfiguration or shutdown of the system 100.
  • larger capacity storage via one or more storage modules 106 may be integrated within the chassis 104 to be used by the individual servers 102.
  • the chassis 104 may house the storage module 106 that can include drive bays 108 and one or more input/output (I/O) modules 110.
  • the drive bays 108 collectively may receive multiple storage drives 112, with one storage drive 112 per individual drive bay 108 in certain examples.
  • the storage enclosure 106 may include a 40-drive bay 108 configuration where each storage drive 112 is stored in a single drive bay 108.
  • the 40-drive bay configuration may implement a drive density of about 240 drives within 10 rack units (U).
  • the 40-drive bay configuration may provide four storage drives 112 per individual blade server 102 to provide a 200K IOPs workload using SSD (solid state drive) technology.
  • a storage module 106 is a physical structural module that may have its own chassis, e.g., housing, frame, walls, etc., holding various components, e.g., I/O and other features, and forming drive bays for receiving storage drives 112.
  • the system 100 may implement zoning along with the coupling of the multiple storage drives 112 to the individual blade servers 102.
  • the zoning may reduce physical interference and management complexity, increase security standards, and address other issues. Indeed, to efficiently handle a substantial number of storage drives 112, the system 100 may implement zoning technology to enhance traffic management and security measures, among other features.
  • zoning technology may separate larger physical topologies, such as the storage drives 112, into zoned storage drive groups to facilitate access within and between the zone groups. Zoning may isolate an individual blade server 102 or multiple blade servers 102 to a group of zoned drive bays 108 or to a single drive bay 108, as will be further described.
  • a zone component or switch 114 may configure and group the drive bays 108 containing the storage drives 112 into storage drive zones. More than one switch 114 may be employed in examples. Employing more than one switch 114 may provide for redundancy in certain examples. Moreover, one or more zoned storage drive groups may be zoned directly to and accessed respectively by individual blade servers 102. Indeed, the I/O module 110 of the storage module 106 may communicatively couple the drive bays 108 and the storage drives 112 of a zoned storage drive group to one or more of the individual blade servers 102.
  • the chassis 104 may house a unified zoned storage system including the drive bays 108 that can be zoned directly to individual blade servers 102 to support larger capacity data storage.
  • the system 100 and chassis 104 may include an interconnection plane 116 that communicatively couples various components of the system 100 to each other.
  • the interconnection plane may communicatively couple the servers 102, the storage module 106, and the switches 114.
  • the interconnection plane 116 is a communication plane.
  • the communication plane is disposed as a mid-plane in the system 100.
  • Fig. 2 is an example of an integrated, zoned storage system 200.
  • the integrated system 200 has an enclosure or chassis 202 to house and communicatively couple individual servers 204 (e.g., blade servers), one or more switches 206, one or more storage modules 208, and so on.
  • the servers 204 may be communicatively coupled to the storage modules 208.
  • the chassis 202 may have slots or bays 212 to receive the servers 204.
  • the storage module 208 has drive bays 210 to receive and couple storage drives 214 to the servers 204.
  • the servers 204, the switches 206, the storage module 208, and other system components may implement a Serial Attached Small Computer System Interface (SAS) protocol.
  • SAS Serial Attached Small Computer System Interface
  • the SAS protocol provides interconnect technology to enable multiple components stored in the chassis 202 to be simultaneously connected.
  • the SAS protocol may provide increased signal transmission rates of 12 gigabits/second (Gb/s) and may implement zoning technology.
  • the SAS protocol may implement zoning technology to group resources, such as drive bays 210, and assign those resources to selective users, such as the blade servers 204.
  • the blade servers 204, the switch 206, drive bays 210, and storage drives 214, among other components housed in the chassis 202 may be referred to with an SAS protocol designation.
  • the chassis 202 may include one or more bays 212 to receive an individual blade server 204, where the number of bays 212 may vary based on manufacturing specifications and other factors. As shown in Fig. 2, each bay 212 may be coupled to the switch 206. Further, the switch 206 may be coupled to the storage module 208 to operatively couple the blade servers 204 housed in the bays 212 with the storage drives 214 housed in the drive bays 210. Accordingly, the blade enclosure 202 may include a storage solution to house the blade servers 204, the switch 206, and the storage drives 214 in a single, unified enclosure that integrates computing operations with storage capabilities, among other services.
  • each individual blade server 204 may connect to one or more storage drives 214 to support larger capacity data.
  • the storage drives 214 may include JBODs, or other types of drive architectures.
  • the switch 206 may include a 12-port switch, such as a 12-port SAS switch. The switch 206, as implemented, may allow any of the blade servers 204 to connect to one or more of the storage drives 214. As shown in Fig. 2, two switches 206 can be used in a configuration.
  • the integrated system 200 may include dual-path switches 206 to implement a redundant configuration for enhanced availability, scalability, and increased I/O bandwidth during signal transmission. In this manner, if one of the switches 206 fails, the second one will continue normal operation.
  • the storage drives 214 may include, for example, solid state drives (SSDs) or other non-volatile memory such as hard disk drives (HDDs), Peripheral Component Interconnect Express (PCIe) non-volatile memory express (NVMe) drives, SAS Smart drives, and so on.
  • SSDs solid state drives
  • HDDs hard disk drives
  • PCIe Peripheral Component Interconnect Express
  • NVMe non-volatile memory express
  • SAS Smart drives and so on.
  • a bus (not shown) may extend into a receptacle or slot such as a mezzanine slot 216.
  • the slot 216 may be configured to receive a controller 218, such as a RAID controller, to support storage capacity for the blade server 204.
  • the controller 218 may perform at a 12 Gb/s SAS bandwidth, or similar bandwidth, to provide RAID protection and to support the storage drives 214 that provide storage capacity to the blade servers 204.
  • the controller 218 may be coupled to the switch 206 in order to transmit signals between the individual blade servers 204 and the storage drives 214.
  • one or more blade servers 204 may be operatively connected to the one or more storage drives 214, respectively, via the switch 206, to transmit or receive signals related to storage requests or capacity.
  • Each blade server 204 may include memory 119 which may include volatile memory, nonvolatile memory, system memory, firmware, hard disk(s), and so forth.
  • zoning technology implemented in the chassis 202 may provide enhanced management of the component devices and increased security standards.
  • the zoning technology may permit the blade servers 204 to access one or more particular storage drives 214 while preventing access to other storage drives 214.
  • the zoning technology may enable an administrator of the system 200 to regulate and control what a server 204 may see.
  • a zone manager 220 may be embedded in the firmware of the switch 206 to configure the switch 206 and its components and to implement zoning technology. Specifically, the zone manager 220 may configure an expander 222 of the switch 206 to separate the drive bays 210 into multiple zones to control and manage access to the storage drives 214.
  • the signals transmitted by the controller 218 in the blade server 204 may be routed through the expander 222 located on the switch 206 to be received by the expander located in the storage module.
  • the expander 222 may link the controller 218 of each individual blade server 204 to one or more storage drives 214.
  • two or more expanders 222 may be located in the switch 206 to generate commands for zoning configuration and management.
  • the controller 218 may support up to 200 or more storage drives 214.
  • Expanders 222 and other expanders herein may be SAS expanders that facilitate communication between relatively large numbers of SAS devices.
  • Expanders may contain two or more external expander-ports.
  • the zoning methods used may include the Drive Bay Zoning technique or the Port Zoning technique.
  • the Drive Bay Zoning technique may identify each individual drive bay in the same zone group.
  • the Port Zoning technique may place all drive bays attached to a particular port of the switch 206 in the same zone group.
  • the switch 206 may manage a permission table of the expander 222 to indicate which ports of the bays 212 may have the permission to communicate with the ports of the drive bays 210.
  • the drive bays 210 may be grouped into zone group A 224, zone group B 226, and zone group C 228, as shown in Fig. 2.
  • the number of zone groups and the number of drive bays assigned to a particular zone group may vary based on the needs of the individual blade servers.
  • the expander 222 may assign a zone group 224, 226, 228 to one or more of the individual blade servers 204 based on its requests, access permissions, and limitations. In this manner, the storage drives 214 may not be dedicated to a single, individual blade server 204 but may be zoned and accessed by multiple individual blade servers 204.
  • the expander 222 may be operatively coupled to the storage module 208, and in operation, signals (e.g., communication, data, control, etc.) may be received and transmitted to a re-driver (not shown) which may boost the quality of the signals, if needed.
  • the signal may continue to an I/O module 232 located in the storage enclosure 208.
  • An I/O module 232, or multiple I/O modules 232 for redundancy, may receive the signal into the storage enclosure 208.
  • the I/O module 232 may include a storage expander 234 to route the signal from the controller 218 to one or more of the storage drives 214.
  • each storage drive 214 zoned to a particular zone group 224, 226, 228 may be operatively coupled to one or more of the individual blade servers 204.
  • the capabilities of the controller 218 to meet the requests of the blade servers 204 may be enhanced.
  • zoning technology may be implemented to distribute (e.g., substantially evenly) access to the drives 214.
  • the servers 204 depicted are blade servers 204, other types of servers may be employed.
  • Fig. 3A is an example of a front portion of an integrated zoned storage system 300.
  • the system 300 may have a chassis including a number of slots or bays 302, for example, twelve bays as shown in Fig. 3A.
  • ten of the twelve bays 302 house respective blade servers 310, e.g., half-height blade servers as depicted.
  • the blade servers 310 may include full or quarter-height blade servers, depending on the manufacturing specifications and other considerations.
  • the blade servers 310 may include file servers, servers of an object store, mail servers, and virtual servers, among others.
  • a storage module 304 may include a double-wide pull-out cabinet that can occupy two bays 302, for example, in the chassis of the system 300.
  • the storage module 304 may house a number of storage drives 306, such as 40 small form factor (SFF) drives, and two I/O modules 308, and so forth.
  • SFF small form factor
  • Each individual storage drive 306 may be housed in a drive bay 307.
  • the storage enclosure 304 may support virtual storage appliances (VSA), virtual storage area networks (VSAN), file storage, object storage, and so forth.
  • VSA virtual storage appliances
  • VSAN virtual storage area networks
  • file storage e.g., object storage, and so forth.
  • the storage enclosure 304 in some cases, may also support continuous integration (CI) virtualization with caching.
  • CI continuous integration
  • the CI virtualization workloads may utilize block storage, for example, composable virtual machines (VMs) with integrated compute, storage, fabric, and infrastructure management software, and the like.
  • VMs composable virtual machines
  • using the infrastructure management software to deploy and manage resources, a user may provision hypervisor clusters, a flexible ratio of compute to storage, and scalability by adding additional storage enclosures in a virtual environment.
  • Fig. 3B is an example of the integrated zoned storage system 300 having a fabric 312.
  • the system 300 can house and provide electrical and communication connectivity between the storage module 304, a switch 314, and a controller 316.
  • the storage module 304 may include two installed I/O modules 318, each having multiple SAS lanes, e.g., 8 Gb/s each, 12 Gb/s each, 18 Gb/s each, etc.
  • the number of SAS lanes may be 20, 40, 80, 120 lanes, etc. collectively or per drive bay 307.
  • an I/O module 318 has forty (40) 12 Gb/s SAS lanes to a single drive bay 307.
  • each of the storage drives 306 in the storage enclosure 304 may be plumbed with a dual path 320 so that sixteen (16) 12 Gb/s (i.e., 192 Gb/s) SAS links can be routed to the one or more switches 314.
  • the switch 314 may include a 12-port (i.e., 48-lane), 12 Gb/s SAS switch, as well as other sizes and types of switches.
  • the ports of the switch 314 may be capable of routing 2 × 48 Gb/s (i.e., 96 Gb/s) to the controller 316.
  • the fabric 312, as installed in the system 300, may provide an enhanced controller 316 bandwidth, e.g., 96 Gb/s, and an enhanced storage module 304 bandwidth, e.g., 192 Gb/s, for increased performance.
  • Fig. 4 is a diagram 400 depicting three integrated zoned storage systems 401, 407, and 413.
  • the ellipsis denotes that diagram 400 may represent a range of various configurations of integrated zone storage systems, with three shown. The variations may involve differing numbers of storage drives versus servers.
  • the system and overall chassis may represent twelve slots that could receive servers, or two of the twelve slots could receive a storage module.
  • the system could have 10 servers and one storage module, or the system could be configured with 2 servers and 5 storage modules, and variations there between.
  • an integrated zone storage system may have one or more storage modules, e.g., each having 40 or more drive bays to receive storage drives. Other arrangements are accommodated.
  • the chassis of the system 401 houses two storage modules 402, each having 40 storage drives 404 to give a total of 80 storage drives for the system 401.
  • the system 401 also houses 8 blade servers 406.
  • the system 401 can evenly distribute the 80 storage drives, providing 10 storage drives 404 per individual server device 406 with a 500K IOPs performance measurement.
  • the system 407 supports three storage modules 408 (40 storage drives 410 each) and houses 6 blade servers 412.
  • the system 407 can provide 20 storage drives 410 per server 412 with a 1 million (M) IOPs performance measurement, or variations thereof.
  • the integrated zoned storage system 413 may house five storage modules 414 (e.g., each having 40 drive bays holding 40 storage drives 416) to provide a total of 200 storage drives 416 and 2 servers 418.
  • the system 413 can assign 100 drive bays and 100 storage drives 416 per server 418 to provide a 1 million (M) IOPs performance measurement, and the like.
  • an integrated storage system may have the capabilities of supporting the enhanced demands of system applications, programs, and so forth.
  • Each storage module may house about 40 storage drives, 80 storage drives, 120 storage drives, or 200 storage drives, and so on.
  • an internally-located storage module may implement an increased drive density, e.g., 24 per U, to provide an enhanced power-to-performance ratio and non-blocking solid-state device (SSD) performance, e.g., 2M IOPs, among other features.
  • SSD solid-state device
  • a storage module, servers, and a switch may be communicatively coupled and housed together within a single enclosure or chassis.
  • a communication plane such as a backplane or a mid-plane (e.g., a mid-plane is constructed with slots for connecting to devices on both sides), may be installed in the enclosure and/or as part of the chassis to communicatively couple the various components held in the chassis and/or housed in the enclosure.
  • the communication plane may connect components at a front side, e.g., a side having (1) the storage module(s) that includes storage drives and (2) the server portion of the enclosure that includes servers.
  • the communication plane may connect additional components housed in the enclosure, such as the switch, power supplies, input/output (I/O) devices, cooling devices, at a rear side of the enclosure.
  • the communication plane may serve as a connector to electrically and mechanically couple the storage module, the server slots or server bays, the servers, and the one or more switches to form a single unified fabric within the integrated, zoned storage system.
  • Fig. 5 is an example of method 500 of assembling an integrated zoned storage system.
  • the method 500 includes coupling components of the integrated zoned storage system.
  • the method includes disposing a storage module in a chassis of the integrated zoned storage system and communicatively coupling the storage module to a communication plane of the chassis.
  • the storage module may include drive bays to receive storage drives.
  • the method 500 may include disposing storage drives into the drive bays.
  • the method includes disposing servers (e.g., blade servers) in the chassis and communicatively coupling the servers to the communication plane.
  • the chassis may have slots or bays to receive the servers and to couple the servers to the communication plane and to other devices.
  • Each server may have one or more processors.
  • the method 500 includes communicatively coupling switches to the communication plane, wherein the switches to configure the storage drives into storage drive zones and to assign the storage drive zones to the blade servers.
  • the switches may transmit signals between the storage drive zones and the servers.
  • the storage drives and the switches may comprise a serial attached SCSI (SAS) protocol standard, disposing the storage drives in the drive bays of the storage module.
  • SAS serial attached SCSI
  • the method 500 may include configuring the blade servers to provide redundant array of independent disks (RAID) functionality to the storage drive zones.
  • communicatively coupling the storage module may involve communicatively coupling an I/O module of the storage module to at least one of the switches.
  • the storage module may be a storage enclosure communicatively coupled to a communication plane of the chassis, e.g., the chassis as the overall enclosure of the integrated zoned storage system.
  • the smaller storage enclosure houses multiple storage drives.
  • Each storage drive may be housed in a drive bay of the storage enclosure.
  • the storage enclosure may include zoned storage wherein each drive bay may be zoned to individual server devices housed in the overall enclosure.
  • a server enclosure or server portion of the overall enclosure may be communicatively coupled to the communication plane, wherein the server enclosure houses the individual server devices. Accordingly, the storage enclosure (or storage module) and the server enclosure (or server portion of the overall enclosure, or server bays of the chassis to receive servers) may be communicatively coupled together via the communication plane in the overall enclosure.
  • a switch may be communicatively coupled to the communication plane of the overall enclosure (e.g., chassis or housing). In this manner, the storage enclosure, the server enclosure, and the switch may be communicatively coupled together in the enclosure.
  • the switch may configure the individual storage drives into storage drive zones.
  • the storage drive zones may be assigned or zoned to each individual server device, as opposed to a single server.
  • the switch may transmit a signal from the individual server device to the storage drive zones.
  • an example integrated zoned storage system includes (e.g., houses, contains) (1) servers (e.g., blade servers each having at least one processor) and (2) a storage module having drive bays and input/output (I/O) modules communicatively coupled to the drive bays, the drive bays to receive and house storage drives.
  • the system may include server bays to receive the servers.
  • One or more of the server bays may be implemented to receive the storage module.
  • the system includes switches to zone the storage drives to the servers.
  • the system includes a chassis to house the servers, the storage module, and the switches, wherein the chassis comprises an interconnection plane (e.g., communication plane, backplane, mid-plane, etc.) to communicatively couple the servers, the storage module, and the switches.
  • the storage module may comprise multiple storage modules, each storage module comprising drive bays to receive storage drives, wherein the interconnection plane is a communication plane, wherein the servers comprise blade servers, and wherein each blade server comprises a processor.
  • the switches may include a zone manager to zone the storage drives into zoned storage drive groups and to assign the zoned storage drive groups to the servers, and wherein the zone manager may include serial attached small computer system interface (SAS) firmware.
  • the servers may include a storage controller to provide redundant array of independent disks (RAID) functionality to the zoned storage drive groups.
  • the I/O modules may include an expander to provide a signal from the servers to zoned storage drive groups.
  • the storage drives may be disposed in the drive bays. Moreover, the storage drives may be software-defined block, file, or object storage, or any combination thereof.
  • the storage module may have 20 drive bays. On the other hand the storage module may have at least 40 drive bays, wherein the storage module to house at least 40 storage drives.
  • the storage module includes: drive bays to receive storage drives; and an input/output (I/O) module communicatively coupled to the drive bays, wherein the storage module to be housed with servers in a chassis of the integrated zoned storage system.
  • the storage module, the storage drives, and the servers to be communicatively coupled to a communication plane of the chassis.
  • the storage drives to be zoned to the servers.
  • the storage drives may be disposed in the drive bays, wherein: the storage drives are grouped into zones to create zoned storage drive groups; the zoned storage drive groups implement Serial Attached SCSI (SAS) protocol; and at least one zoned storage drive group is assigned to more than one blade server.
  • SAS Serial Attached SCSI
  • the I/O module includes a zoning expander to transmit a signal from a server to a drive bay, and wherein the signal to terminate at a storage drive received in the drive bay.
  • the I/O module may be at least two I/O modules.
  • the drive bays may be at least 40 drive bays.
  • the communication plane may be disposed as a mid- plane in the chassis.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multi Processors (AREA)

Abstract

A method and system for implementing integrated zoned storage. In examples, a storage module may include drive bays to receive storage drives and an input/output (I/O) module. A chassis may house and communicatively couple the storage module and multiple servers via a communication plane. The storage drives can be zoned and assigned to the servers.

Description

INTEGRATED ZONE STORAGE
BACKGROUND
[0001] In the computing context, an enclosure, rack, or module may be a physical structure to house a computing device and provide services such as power, cooling, interconnections, and networking, among other services, to the device. For example, a blade server enclosure or rack can house multiple blade servers for a computing network. Multiple slots or bays, for example, in the rack or enclosure may receive the blade servers. Each blade server may have multiple components including a processor(s), system memory, a hard-drive(s), an input/output (I/O) card(s), and interconnects, among other components. However, the internal hard drives of the blade servers may have limited internal storage and thus may be unable to store large amounts of data. Additional storage, such as network-attached storage or a storage area network (SAN), may be coupled to the blade servers as external storage to provide additional storage capacity.
DESCRIPTION OF THE DRAWINGS
[0002] The advantages of the present techniques are better understood by referring to the following detailed description and the attached drawings, in which:
[0003] Fig. 1 is a diagram of an integrated zoned storage system in accordance with examples;
[0004] Fig. 2 is a diagram of the integrated zoned storage system of Fig. 1 but depicting additional detail in accordance with examples;
[0005] Fig. 3A is a diagram of a front portion of an integrated zoned storage system with a storage module extended from the chassis in accordance with examples;
[0006] Fig. 3B is a diagram of an integrated zoned storage system in accordance with examples;
[0007] Fig. 4 is a diagram of integrated zoned storage systems in accordance with examples; and
[0008] Fig. 5 is a block diagram of a method of manufacturing an integrated zoned storage system in accordance with examples.
DETAILED DESCRIPTION
[0009] Individual blade servers in a server enclosure or server rack may have a relatively small amount of memory. Therefore, coupling the servers to additional memory external to the server enclosure housing the multiple blade servers may be advantageous. However, doing so may result in increased external cabling and complexity. Therefore, examples herein integrate external storage memory into the server enclosure with the servers. The integrated external memory within the overall enclosure with the servers may be communicatively coupled to the servers, for example, via an interconnection plane without significant cabling. Zoning of the integrated storage memory with the blade servers may improve efficiency and security. The external memory, such as a storage module having disk drives, may be integrated with the servers in an overall enclosure or chassis to give an integrated zoned storage system having servers and zoned storage memory in addition to the memory internal to the servers.
[0010] As indicated, a blade server may be a relatively small modular server that provides processing power and memory, while forgoing significant storage and I/O components, among other components typically found on stand-alone servers.
Often, the blade servers on the market accept up to two 2.5-inch hard drives that provide limited internal storage capacity. Further, the operating system (OS) of the blade server may consume a substantial portion of that capacity. Consequently, the memory (e.g., internal hard drives) in the blade server may not have much available storage space for larger capacity data.
[0011] Direct Attached Storage and I/O components not of the blade server may be housed in a dedicated blade enclosure (e.g., chassis, rack, case, housing, frame, frame with walls, etc.) along with the blade server. For instance, the blade enclosure may house storage modules, cooling modules, management modules, and power supplies, among other components not contained by the blade server. The storage module(s) housed in the blade enclosure or chassis may supplement the internal storage of the blade servers. In some cases, the storage modules may include twelve (12) 2.5-inch hard drives. For example, the storage modules may have multiple drive bays to receive the hard drives, and the drive bays may be coupled and dedicated to a single server, or to more than one server. A single storage module may include more drive bays (and storage drives) than a single server utilizes or, on the other hand, may not adequately supplement the internal storage capacity of a single server.
[0012] Larger capacity or denser storage enclosures or modules, such as external JBODs (just a bunch of drives), may provide the servers or blade servers with more storage capacity. The external storage enclosures or storage modules may couple to multiple servers using zoned storage drives. However, external storage may lead to external cabling without sufficient lanes to support the required performance of multiple servers. Also, the use of external storage may introduce additional expander hops.
[0013] Thus, examples herein may implement a blade enclosure or server enclosure having slots or bays that receive servers (e.g., blade servers) and integrated storage (e.g., a storage module having disk drives). Moreover, the storage memory may be zoned to the blade servers. As described herein, an integrated zoned storage system may include a chassis to house and
communicatively couple a storage module, switches, and so on. More than one storage module may be included. Each storage module may house a substantial number of storage drives, for example, 40 storage drives. The chassis of the overall integrated system may house multiple servers along with the storage module(s) that are coupled to the servers with a sufficient number of lanes. Zoning technology, via the switch or switches, may separate and group the storage drives of the storage module(s) into zoned storage drive groups to provide traffic management among the components of the chassis. Zoning may isolate one or more of the servers to a group of zoned storage drive groups or to a single storage drive group to provide zoned storage in a unified system.
[0014] Fig. 1 is an example integrated zoned storage system 100. The system 100 includes servers 102 which each may have a processor(s) and memory. The servers 102 may be blade servers. One or more of the servers 102 may include a software program stored in memory and executable by a processor to provide data and manage resources of the system 100.
[0015] The servers 102 may be multiple servers stored (housed, received, etc.) in a chassis 104 (e.g., enclosure, rack, frame, walls, supports, etc.) along with other network components and devices. The system 100 and chassis 104 may have server slots or server bays to receive servers 102. Further, the system 100 and the chassis 104 may have a communication plane 116. Furthermore, the system 100 and the chassis 104 may additionally house storage drives, power supplies, input/output (I/O) devices, storage modules (having storage drives and I/O), cooling devices, interconnections, along with other components and services. Moreover, the servers 102 and the other components housed in the chassis 104 may be hot-swappable components because their removal and replacement from the chassis 104 in certain examples can be carried out generally without excessive reconfiguration or shutdown of the system 100.
[0016] In examples, as indicated, larger capacity storage via one or more storage modules 106 may be integrated within the chassis 104 to be used by the individual servers 102. Specifically, the chassis 104 may house the storage module 106 that can include drive bays 108 and one or more input/output (I/O) modules 110. In some examples, the drive bays 108 collectively may receive multiple storage drives 112, with one storage drive 112 per individual drive bay 108 in certain examples. For instance, the storage enclosure 106 may include a 40-drive bay 108 configuration where each storage drive 112 is stored in a single drive bay 108. In a particular example, the 40-drive bay configuration may implement a drive density of about 240 drives within 10 rack units (U). In other examples, the 40-drive bay configuration may provide four storage drives 112 per individual blade server 102 to provide a 200K IOPs workload using SSD (solid state drive) technology. In general, a storage module 106 is a physical structural module that may have its own chassis, e.g., housing, frame, walls, etc., holding various components, e.g., I/O and other features, and forming drive bays for receiving storage drives 112.
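As a rough sanity check of the example figures in paragraph [0016] (the drive counts, rack-unit height, server count, and IOPs value are illustrative numbers taken from the text, not product specifications), the density and per-server allocation work out as follows:

```python
# Illustrative sanity check of the example figures above.
# All numbers are assumptions drawn from the text, not product specifications.

drives_per_module = 40        # 40-drive-bay storage module, one drive per bay
rack_units = 10               # example chassis height in rack units (U)
total_drives = 240            # example drive count for the 10U configuration

drive_density_per_u = total_drives / rack_units
print(f"Drive density: {drive_density_per_u:.0f} drives per U")      # ~24 per U

blade_servers = 10            # example server count sharing one 40-drive module
drives_per_server = drives_per_module // blade_servers
print(f"Drives per blade server: {drives_per_server}")                # 4

workload_iops = 200_000       # example SSD workload per server
print(f"Per-drive share of workload: {workload_iops // drives_per_server:,} IOPs")
```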
[0017] The system 100 may implement zoning along with the coupling of the multiple storage drives 112 to the individual blade servers 102. The zoning may reduce physical interference and management complexity, increase security standards, and address other issues. Indeed, to efficiently handle a substantial number of storage drives 112, the system 100 may implement zoning technology to enhance traffic management and security measures, among other features. In operation, zoning technology may separate larger physical topologies, such as the storage drives 112, into zoned storage drive groups to facilitate access within and between the zone groups. Zoning may isolate an individual blade server 102 or multiple blade servers 102 to a group of zoned drive bays 108 or to a single drive bay 108, as will be further described.
[0018] A zone component or switch 114 may configure and group the drive bays 108 containing the storage drives 112 into storage drive zones. More than one switch 114 may be employed in examples. Employing more than one switch 114 may provide for redundancy in certain examples. Moreover, one or more zoned storage drive groups may be zoned directly to and accessed respectively by individual blade servers 102. Indeed, the I/O module 110 of the storage module 106 may communicatively couple the drive bays 108 and the storage drives 112 of a zoned storage drive group to one or more of the individual blade servers 102. In this case, the chassis 104 may house a unified zoned storage system including the drive bays 108 that can be zoned directly to individual blade servers 102 to support larger capacity data storage.
[0019] Lastly, as mentioned, the system 100 and chassis 104 may include an interconnection plane 116 that communicatively couples various components of the system 100 to each other. For instance, the interconnection plane may communicatively couple the servers 102, storage module 106, and switches 114. In certain examples, the interconnection plane 116 is a communication plane. Moreover, in a particular example, the communication plane is disposed as a mid-plane in the system 100.
[0020] Fig. 2 is an example of an integrated, zoned storage system 200. The integrated system 200 has an enclosure or chassis 202 to house and communicatively couple individual servers 204 (e.g., blade servers), one or more switches 206, one or more storage modules 208, and so on. The servers 204 may be communicatively coupled to the storage modules 208. The chassis 202 may have slots or bays 212 to receive the servers 204. Likewise, the storage module 208 has drive bays 210 to receive and couple storage drives 214 to the servers 204.
[0021] In the present examples, the servers 204, the switches 206, the storage module 208, and other system components may implement a Serial Attached Small Computer System Interface (SAS) protocol. The SAS protocol provides interconnect technology to enable multiple components stored in the chassis 202 to be simultaneously connected. In examples, the SAS protocol may provide increased signal transmission rates of 12 gigabits/second (Gb/s) and may implement zoning technology. The SAS protocol may implement zoning technology to group resources, such as drive bays 210, and assign those resources to selective users, such as the blade servers 204. In examples, the blade servers 204, the switch 206, drive bays 210, and storage drives 214, among other components housed in the chassis 202, may be referred to with an SAS protocol designation.
[0022] The chassis 202 may include one or more bays 212 to receive an individual blade server 204, where the number of bays 212 may vary based on manufacturing specifications and other factors. As shown in Fig. 2, each bay 212 may be coupled to the switch 206. Further, the switch 206 may be coupled to the storage module 208 to operatively couple the blade servers 204 housed in the bays 212 with the storage drives 214 housed in the drive bays 210. Accordingly, the blade enclosure 202 may include a storage solution to house the blade servers 204, the switch 206, and the storage drives 214 in a single, unified enclosure that integrates computing operations with storage capabilities, among other services.
[0023] For additional storage, each individual blade server 204 may connect to one or more storage drives 214 to support larger capacity data. In examples, the storage drives 214 may include JBODs, or other types of drive architectures. In examples, the switch 206 may include a 12-port switch, such as a 12-port SAS switch. The switch 206, as implemented, may allow any of the blade servers 204 to connect to one or more of the storage drives 214. As shown in Fig. 2, two switches 206 can be used in a configuration.
[0024] The integrated system 200 may include dual-path switches 206 to implement a redundant configuration for enhanced availability, scalability, and increased I/O bandwidth during signal transmission. In this manner, if one of the switches 206 fails, the second one will continue normal operation. The storage drives 214 may include, for example, solid state drives (SSDs) or other non-volatile memory such as hard disk drives (HDDs), Peripheral Component Interconnect Express (PCIe) non-volatile memory express (NVMe) drives, SAS Smart drives, and so on. The communication path for the transmission of signals between the individual blade server 204 and the storage drives 214 will be described.
[0025] Within each individual blade server 204, a bus (not shown) may extend into a receptacle or slot such as a mezzanine slot 216. The slot 216 may be configured to receive a controller 218, such as a RAID controller, to support storage capacity for the blade server 204. In some examples, the controller 218 may perform at a 12 Gb/s SAS bandwidth, or similar bandwidth, to provide RAID protection and to support the storage drives 214 that provide storage capacity to the blade servers 204. The controller 218 may be coupled to the switch 206 in order to transmit signals between the individual blade servers 204 and the storage drives 214. For example, one or more blade servers 204 may be operatively connected to the one or more storage drives 214, respectively, via the switch 206, to transmit or receive signals related to storage requests or capacity. Each blade server 204 may include memory 119 which may include volatile memory, nonvolatile memory, system memory, firmware, hard disk(s), and so forth.
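Because the mezzanine controller 218 is described as providing RAID protection over the zoned drives, a brief illustration of how usable capacity varies with RAID level for a zone group may be helpful. This is standard RAID arithmetic applied to a hypothetical zone group of four drives; the text does not specify which RAID levels the controller supports or the drive capacities.

```python
# Usable capacity of a zone group under common RAID levels. Standard RAID
# arithmetic for illustration only; the source does not specify supported
# levels or drive sizes.

def usable_capacity(drive_count, drive_tb, raid_level):
    if raid_level == 0:            # striping, no redundancy
        return drive_count * drive_tb
    if raid_level == 1:            # mirroring
        return drive_count * drive_tb / 2
    if raid_level == 5:            # one drive's worth of parity overhead
        return (drive_count - 1) * drive_tb
    if raid_level == 6:            # two drives' worth of parity overhead
        return (drive_count - 2) * drive_tb
    raise ValueError(f"unsupported RAID level: {raid_level}")

# Example: a hypothetical zone group of four 1.92 TB SSDs zoned to one blade server.
for level in (0, 1, 5, 6):
    print(f"RAID {level}: {usable_capacity(4, 1.92, level):.2f} TB usable")
```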
[0026] As previously discussed, zoning technology implemented in the chassis 202 may provide enhanced management of the component devices and increased security standards. The zoning technology may permit the blade servers 204 to access one or more particular storage drives 214 while preventing access to other storage drives 214. The zoning technology may enable an administrator of the system 200 to regulate and control what a server 204 may see. A zone manager 220 may be embedded in the firmware of the switch 206 to configure the switch 206 and its components and to implement zoning technology. Specifically, the zone manager 220 may configure an expander 222 of the switch 206 to separate the drive bays 210 into multiple zones to control and manage access to the storage drives 214.
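A minimal sketch of the role the zone manager 220 and expander 222 play is given below: drive bays are grouped into named zone groups, and server bays are granted access to particular groups. The class, method, and group names are hypothetical stand-ins; the actual switch-firmware interface is not described in the text.

```python
# Hypothetical model of a zone manager partitioning drive bays into zone groups.
# Names are illustrative; the real firmware interface is not specified in the source.

class ZoneManager:
    def __init__(self):
        self.zone_groups = {}      # zone group name -> list of drive bay IDs
        self.server_access = {}    # server bay ID -> set of zone group names

    def create_zone_group(self, name, drive_bays):
        """Group a set of drive bays into a named zone group."""
        self.zone_groups[name] = list(drive_bays)

    def grant_access(self, server_bay, zone_group):
        """Allow a server bay to see the drives in a zone group."""
        self.server_access.setdefault(server_bay, set()).add(zone_group)

    def visible_drive_bays(self, server_bay):
        """Drive bays a server can reach after zoning is applied."""
        bays = []
        for group in self.server_access.get(server_bay, ()):
            bays.extend(self.zone_groups.get(group, ()))
        return sorted(bays)


zm = ZoneManager()
zm.create_zone_group("zone_A", range(0, 14))      # e.g., zone group A 224
zm.create_zone_group("zone_B", range(14, 27))     # e.g., zone group B 226
zm.create_zone_group("zone_C", range(27, 40))     # e.g., zone group C 228
zm.grant_access(server_bay=1, zone_group="zone_A")
zm.grant_access(server_bay=2, zone_group="zone_A")  # a zone group may be shared
print(zm.visible_drive_bays(1))
```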
[0027] The signals transmitted by the controller 218 in the blade server 204 may be routed through the expander 222 located on the switch 206 to be received by the expander located in the storage module. Specifically, the expander 222 may link the controller 218 of each individual blade server 204 to one or more storage drives 214. In some examples, two or more expanders 222 may be located in the switch 206 to generate commands for zoning configuration and management. In some examples, the controller 218 may support up to 200 or more storage drives 214. Expanders 222 and other expanders herein may be SAS expanders that facilitate communication between relatively large numbers of SAS devices. Expanders may contain two or more external expander-ports.
[0028] The zoning methods used may include the Drive Bay Zoning technique or the Port Zoning technique. The Drive Bay Zoning technique may identify each individual drive bay in the same zone group. The Port Zoning technique may place all drive bays attached to a particular port of the switch 206 in the same zone group. The switch 206 may manage a permission table of the expander 222 to indicate which ports of the bays 212 may have the permission to communicate with the ports of the drive bays 210.
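The two zoning techniques can be pictured as two ways of populating the expander's permission table. The sketch below is a simplified, hypothetical model; real SAS zone permission tables are defined by the SAS standard, and their details are not spelled out in the text.

```python
# Simplified, hypothetical permission-table model for the two zoning techniques
# described above. Real SAS zone permission tables follow the SAS standard;
# this only illustrates the idea.

def drive_bay_zoning(zone_assignments):
    """Drive Bay Zoning: each drive bay is individually placed in a zone group.

    zone_assignments maps drive bay ID -> zone group ID.
    """
    table = {}
    for bay, group in zone_assignments.items():
        table.setdefault(group, set()).add(bay)
    return table

def port_zoning(port_to_bays, port_to_group):
    """Port Zoning: all drive bays behind a switch port share that port's zone group."""
    table = {}
    for port, bays in port_to_bays.items():
        group = port_to_group[port]
        table.setdefault(group, set()).update(bays)
    return table

# Example: four bays assigned individually vs. two ports carrying two bays each.
print(drive_bay_zoning({0: "A", 1: "A", 2: "B", 3: "C"}))
print(port_zoning({"port1": [0, 1], "port2": [2, 3]},
                  {"port1": "A", "port2": "B"}))
```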
[0029] For illustrative purposes, the drive bays 210 may be grouped into zone group A 224, zone group B 226, and zone group C 228, as shown in Fig. 2. The number of zone groups and the number of drive bays assigned to a particular zone group may vary based on the needs of the individual blade servers. The expander 222 may assign a zone group 224, 226, 228 to one or more of the individual blade servers 204 based on its requests, access permissions, and limitations. In this manner, the storage drives 214 may not be dedicated to a single, individual blade server 204 but may be zoned and accessed by multiple individual blade servers 204.
[0030] The expander 222 may be operatively coupled to the storage module 208, and in operation, signals (e.g., communication, data, control, etc.) may be received and transmitted to a re-driver (not shown) which may boost the quality of the signals, if needed. The signal may continue to an I/O module 232 located in the storage enclosure 208. An I/O module 232, or multiple I/O modules 232 for redundancy, may receive the signal into the storage enclosure 208. The I/O module 232 may include a storage expander 234 to route the signal from the controller 218 to one or more of the storage drives 214. In this manner, each storage drive 214 zoned to a particular zone group 224, 226, 228 may be operatively coupled to one or more of the individual blade servers 204. With the blade servers 204 and the storage drives 214 housed and coupled in the chassis 202, the capabilities of the controller 218 to meet the requests of the blade servers 204 may be enhanced. Additionally, to reduce the likelihood of overwhelming any one storage drive 214, zoning technology may be implemented to distribute (e.g., substantially evenly) access to the drives 214. Lastly, while the servers 204 depicted are blade servers 204, other types of servers may be employed.
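One way to picture the substantially even distribution of access mentioned above is a simple round-robin spread of requests across the drives zoned to a server. The policy below is only an illustrative assumption; the text does not prescribe a particular distribution algorithm.

```python
# Illustrative round-robin spread of requests across the drives zoned to a server.
# The distribution policy is an assumption; the text only says access may be
# distributed substantially evenly.

from itertools import cycle

def spread_requests(requests, zoned_drives):
    """Assign each request to the next drive in the server's zone group."""
    drive_cycle = cycle(zoned_drives)
    return [(req, next(drive_cycle)) for req in requests]

assignments = spread_requests(requests=range(8),
                              zoned_drives=["drive0", "drive1", "drive2", "drive3"])
for req, drive in assignments:
    print(f"request {req} -> {drive}")
```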
[0031] Fig. 3A is an example of a front portion of an integrated zoned storage system 300. The system 300 may have a chassis including a number of slots or bays 302, for example, twelve bays as shown in Fig. 3A. In the illustrated example, ten of the twelve bays 302 house respective blade servers 310, e.g., half-height blade servers as depicted. In other examples, the blade servers 310 may include full or quarter-height blade servers, depending on the manufacturing specifications and other considerations. In some examples, the blade servers 310 may include file servers, servers of an object store, mail servers, and virtual servers, among others.
[0032] A storage module 304 may include a double-wide pull-out cabinet that can occupy two bays 302, for example, in the chassis of the system 300. The storage module 304 may house a number of storage drives 306, such as 40 small form factor (SFF) drives, and two I/O modules 308, and so forth. Each individual storage drive 306 may be housed in a drive bay 307. The storage enclosure 304 may support virtual storage appliances (VSA), virtual storage area networks (VSAN), file storage, object storage, and so forth. The storage enclosure 304, in some cases, may also support continuous integration (CI) virtualization with caching. The CI virtualization workloads may utilize block storage, for example, composable virtual machines (VMs) with integrated compute, storage, fabric, and infrastructure management software, and the like. In particular, using the infrastructure management software to deploy and manage resources, a user may provision hypervisor clusters, a flexible ratio of compute to storage, and scalability by adding additional storage enclosures in a virtual environment.
[0033] Fig. 3B is an example of the integrated zoned storage system 300 having a fabric 312. The system 300 can house and provide electrical and communication connectivity between the storage module 304, a switch 314, and a controller 316. The storage module 304 may include two installed I/O modules 318, each having multiple SAS lanes, e.g., 8 Gb/s each, 12 Gb/s each, 18 Gb/s each, etc. The number of SAS lanes may be 20, 40, 80, 120 lanes, etc. collectively or per drive bay 307. In a particular example, an I/O module 318 has forty (40) 12 Gb/s SAS lanes to a single drive bay 307. In another specific example, each of the storage drives 306 in the storage enclosure 304 may be plumbed with a dual path 320 so that sixteen (16) 12 Gb/s (i.e., 192 Gb/s) SAS links can be routed to the one or more switches 314. In present examples, the switch 314 may include a 12-port (i.e., 48-lane), 12 Gb/s SAS switch, as well as other sizes and types of switches. In one example, the ports of the switch 314 may be capable of routing 2 × 48 Gb/s (i.e., 96 Gb/s) to the controller 316. In some examples, the fabric 312, as installed in the system 300, may provide an enhanced controller 316 bandwidth, e.g., 96 Gb/s, and an enhanced storage module 304 bandwidth, e.g., 192 Gb/s, for increased performance.
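The bandwidth figures in paragraph [0033] follow directly from the lane counts and the 12 Gb/s SAS lane rate. A quick check, using those illustrative numbers (which come from the examples above, not from any measured system):

```python
# Quick check of the example fabric bandwidth figures; all lane counts and
# rates are illustrative values taken from the paragraph above.

sas_lane_gbps = 12                       # 12 Gb/s per SAS lane

# Dual-path storage module: 16 lanes routed to the switches
module_lanes = 16
module_bandwidth = module_lanes * sas_lane_gbps
print(f"Storage module bandwidth: {module_bandwidth} Gb/s")    # 192 Gb/s

# 12-port switch with 4 lanes per port -> 48 lanes total
lanes_per_port = 4
port_bandwidth = lanes_per_port * sas_lane_gbps                # 48 Gb/s per port
controller_ports = 2                      # two ports routed to the controller
controller_bandwidth = controller_ports * port_bandwidth
print(f"Controller bandwidth: {controller_bandwidth} Gb/s")    # 96 Gb/s
```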
[0034] Fig. 4 is a diagram 400 depicting three integrated zoned storage systems 401, 407, and 413. The ellipsis denotes that diagram 400 may represent a range of various configurations of integrated zone storage systems, with three shown. The variations may involve differing numbers of storage drives versus servers. In the specific three examples expressly depicted in Fig. 4, the system and overall chassis may represent twelve slots that could receive servers, or two of the twelve slots could receive a storage module. Thus, the system could have 10 servers and one storage module, or the system could be configured with 2 servers and 5 storage modules, and variations there between. As previously described with respect to Figs. 3A and 3B, an integrated zone storage system may have one or more storage modules, e.g., each having 40 or more drive bays to receive storage drives. Other arrangements are accommodated.
[0035] In the illustrated example, the chassis of the system 401 houses two storage modules 402, each having 40 storage drives 404 to give a total of 80 storage drives for the system 401. The system 401 also houses 8 blade servers 406.
Moreover, in a particular example, the system 401 can evenly distribute the 80 storage drives, providing 10 storage drives 404 per individual server device 406 with a 500K IOPs performance measurement.
[0036] As for the integrated zoned storage system 407, the system 407 supports three storage modules 408 (40 storage drives 410 each) and houses 6 blade servers 412. The system 407 can provide 20 storage drives 410 per server 412 with a 1 million (M) IOPs performance measurement, or variations thereof. Lastly, the integrated zoned storage system 413 may house five storage modules 414 (e.g., each having 40 drive bays holding 40 storage drives 416) to provide a total of 200 storage drives 416 and 2 servers 418. In a specific example, the system 413 can assign 100 drive bays and 100 storage drives 416 per server 418 to provide a 1 million (M) IOPs performance measurement, and the like. Other configurations may be implemented depending on the workload capacity desired by a consumer and other issues. In this way, an integrated storage system may have the capability of supporting the enhanced demands of system applications, programs, and so forth. Each storage module may house about 40 storage drives, 80 storage drives, 120 storage drives, or 200 storage drives, and so on.
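The three example systems follow a simple slot budget: each double-wide storage module occupies two of the twelve chassis slots, and the remaining slots hold servers. The Python sketch below reproduces the drives-per-server arithmetic; the constant and helper names are assumptions made for illustration, not terms from the described system.

# Hedged sketch of the slot and drive arithmetic behind the example systems.
CHASSIS_SLOTS = 12             # server bays in the example chassis
SLOTS_PER_STORAGE_MODULE = 2   # a double-wide storage module occupies two bays
DRIVES_PER_MODULE = 40

def drives_per_server(storage_modules):
    servers = CHASSIS_SLOTS - storage_modules * SLOTS_PER_STORAGE_MODULE
    drives = storage_modules * DRIVES_PER_MODULE
    return servers, drives, drives // servers

print(drives_per_server(2))  # (8, 80, 10)   -> system 401
print(drives_per_server(3))  # (6, 120, 20)  -> system 407
print(drives_per_server(5))  # (2, 200, 100) -> system 413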
[0037] Often, storage drives of external storage are dedicated to a single server of a server enclosure. However, the present integrated, zoned storage system may remove or reduce external cabling, for example, external cabling to couple the server enclosure to an external storage system. As presently described, an internally-located storage module (storage enclosure) may implement an increased drive density, e.g., 24 drives per U, to provide an enhanced power-to-performance ratio and non-blocking solid-state device (SSD) performance, e.g., 2M IOPs, among other features.
[0038] In examples, a storage module, servers, and a switch, among other components, may be communicatively coupled and housed together within a single enclosure or chassis. A communication plane, such as a backplane or a mid-plane (e.g., a mid-plane is constructed with slots for connecting to devices on both sides), may be installed in the enclosure and/or as part of the chassis to communicatively couple the various components held in the chassis and/or housed in the enclosure. For example, the communication plane may connect components at a front side, e.g., a side having (1) the storage module(s) that includes storage drives and (2) the server portion of the enclosure that includes servers. The communication plane may connect additional components housed in the enclosure, such as the switch, power supplies, input/output (I/O) devices, and cooling devices, at a rear side of the enclosure. In this manner, the communication plane may serve as a connector to electrically and mechanically couple the storage module, the server slots or server bays, the servers, and the one or more switches to form a single unified fabric within the integrated, zoned storage system.

[0039] Fig. 5 is an example of a method 500 of assembling an integrated zoned storage system. The method 500 includes coupling components of the integrated zoned storage system. At block 502, the method includes disposing a storage module in a chassis of the integrated zoned storage system and communicatively coupling the storage module to a communication plane of the chassis. The storage module may include drive bays to receive storage drives. The method 500 may include disposing storage drives into the drive bays.
[0040] At block 504, the method includes disposing servers (e.g., blade servers) in the chassis and communicatively coupling the servers to the communication plane. The chassis may have slots or bays to receive the servers and to couple the servers to the communication plane and to other devices. Each server may have one or more processors.
[0041] At block 506, the method 500 includes communicatively coupling switches to the communication plane, wherein the switches to configure the storage drives into storage drive zones and to assign the storage drive zones to the blade servers. The switches may transmit signals between the storage drive zones and the servers. Moreover, in some examples, the storage drives and the switches may implement a serial attached SCSI (SAS) protocol standard. The method 500 may include disposing the storage drives in the drive bays of the storage module and configuring the blade servers to provide redundant array of independent disks (RAID) functionality to the storage drive zones. Lastly, communicatively coupling the storage module may involve communicatively coupling an I/O module of the storage module to at least one of the switches.
[0042] The storage module may be a storage enclosure communicatively coupled to a communication plane of the chassis, e.g., the chassis as the overall enclosure of the integrated zoned storage system. The smaller storage enclosure houses multiple storage drives. Each storage drive may be housed in a drive bay of the storage enclosure. In the present examples, the storage enclosure may include zoned storage wherein each drive bay may be zoned to an individual server device housed in the overall enclosure. A server enclosure or server portion of the overall enclosure may be communicatively coupled to the communication plane, wherein the server enclosure houses the individual server devices. Accordingly, the storage enclosure (or storage module) and the server enclosure (or server portion of the overall enclosure, or server bays of the chassis to receive servers) may be communicatively coupled together via the communication plane in the overall enclosure. A switch may be communicatively coupled to the communication plane of the overall enclosure (e.g., chassis or housing). In this manner, the storage enclosure, the server enclosure, and the switch may be communicatively coupled together in the enclosure. In examples, the switch may configure the individual storage drives into storage drive zones. The storage drive zones may be assigned or zoned to individual server devices, as opposed to dedicating all of the drives to a single server. To fulfill a storage request of an individual server device for storage capacity, the switch may transmit a signal from the individual server device to the storage drive zones assigned to it.
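As a toy illustration of the zoning behavior described above, a zone manager can be modeled in Python as a mapping from servers to the drive bays zoned to them. The class and method names below are invented for this sketch; they do not come from the described system or from any SAS firmware interface.

# Minimal, hypothetical sketch of drive-bay zoning: the switch's zone manager
# groups drive bays into zones and assigns each zone to a server.
from collections import defaultdict

class ZoneManager:
    def __init__(self):
        self.zones = defaultdict(list)  # server id -> list of drive-bay ids

    def assign(self, server_id, drive_bays):
        """Zone a group of drive bays to one server."""
        self.zones[server_id].extend(drive_bays)

    def route(self, server_id, bay):
        """A request is routed only to bays zoned to the requesting server."""
        return bay in self.zones[server_id]

zm = ZoneManager()
zm.assign("blade-1", list(range(0, 10)))   # bays 0-9 zoned to blade-1
zm.assign("blade-2", list(range(10, 20)))  # bays 10-19 zoned to blade-2
print(zm.route("blade-1", 5))    # True
print(zm.route("blade-1", 15))   # False: bay 15 is zoned to blade-2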
[0043] In sum, an example integrated zoned storage system includes (e.g., houses, contains) (1) servers (e.g., blade servers each having at least one processor) and (2) a storage module having drive bays and input/output (I/O) modules communicatively coupled to the drive bays, the drive bays to receive and house storage drives. The system may include server bays to receive the servers. One or more of the server bays may be implemented to receive the storage module. The system includes switches to zone the storage drives to the servers. The system includes a chassis to house the servers, the storage module, and the switches, wherein the chassis comprises an interconnection plane (e.g., communication plane, backplane, mid-plane, etc.) to communicatively couple the servers, the storage module, and the switches. The storage module may comprise multiple storage modules, each storage module comprising drive bays to receive storage drives, wherein the interconnection plane is a communication plane, wherein the servers comprise blade servers, and wherein each blade server comprises a processor.
[0044] The switches may include a zone manager to zone the storage drives into zoned storage drive groups and to assign the zoned storage drive groups to the servers, and the zone manager may include serial attached small computer system interface (SAS) firmware. The servers may include a storage controller to provide redundant array of independent disks (RAID) functionality to the zoned storage drive groups. The I/O modules may include an expander to provide a signal from the servers to the zoned storage drive groups. The storage drives may be disposed in the drive bays. Moreover, the storage drives may be software-defined block, file, or object storage, or any combination thereof. The storage module may have 20 drive bays. On the other hand, the storage module may have at least 40 drive bays, wherein the storage module houses at least 40 storage drives.
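The RAID functionality mentioned above applies to whichever drive group a server is zoned to. The Python sketch below gives only a rough capacity rule of thumb under conventional RAID levels; it is not the storage controller described here, and all names are assumptions made for illustration.

# Illustrative capacity rule of thumb for RAID over a zoned drive group;
# the class and function names are assumptions, not a controller API.
from dataclasses import dataclass, field

@dataclass
class ZonedGroup:
    server_id: str
    drive_bays: list = field(default_factory=list)

def usable_drives(group, raid_level):
    """Approximate count of capacity-bearing drives for common RAID levels."""
    n = len(group.drive_bays)
    if raid_level == 1:
        return n // 2          # RAID 1: mirrored pairs
    if raid_level == 5:
        return max(n - 1, 0)   # RAID 5: one drive's worth of parity
    return n                   # RAID 0: all drives contribute capacity

group = ZonedGroup("blade-1", list(range(10)))  # 10 drives zoned to one server
print(usable_drives(group, 5))  # 9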
[0045] In another example of a storage module for an integrated zoned storage system, the storage module includes: drive bays to receive storage drives; and an input/output (I/O) module communicatively coupled to the drive bays, wherein the storage module to be housed with servers in a chassis of the integrated zoned storage system. The storage module, the storage drives, and the servers to be communicatively coupled to a communication plane of the chassis. The storage drives to be zoned to the servers. The storage drives may be disposed in the drive bays, wherein: the storage drives are grouped into zones to create zoned storage drive groups; the zoned storage drive groups implement Serial Attached SCSI (SAS) protocol; and at least one zoned storage drive group is assigned to more than one blade server. The I/O module includes a zoning expander to transmit a signal from a server to a drive bay, and wherein the signal to terminate at a storage drive received in the drive bay. The I/O module may be at least two I/O modules. The drive bays may be at least 40 drive bays. The communication plane may be disposed as a mid-plane in the chassis.
[0046] While the present techniques may be susceptible to various modifications and alternative forms, the embodiments discussed above have been shown only by way of example. However, it should again be understood that the techniques are not intended to be limited to the particular embodiments disclosed herein. Indeed, the present techniques include all alternatives, modifications, and equivalents falling within the true spirit and scope of the appended claims.

Claims

1. An integrated zoned storage system comprising:
servers;
a storage module comprising drive bays and input/output (I/O) modules
communicatively coupled to the drive bays, the drive bays to receive and house storage drives;
switches to zone the storage drives to the servers; and
a chassis to house the servers, the storage module, and the switches,
wherein the chassis comprises an interconnection plane to communicatively couple the servers, the storage module, and the switches.
2. The integrated zoned storage system of claim 1, wherein the storage module comprises multiple storage modules, each storage module comprising drive bays to receive storage drives, wherein the interconnection plane is a
communication plane, wherein the servers comprise blade servers, and wherein each blade server comprises a processor.
3. The integrated zoned storage system of claim 1, wherein the switches comprise a zone manager to zone the storage drives into zoned storage drive groups and to assign the zoned storage drive groups to the servers, and wherein the zone manager comprises serial attached small computer system interface (SAS) firmware.
4. The integrated zoned storage system of claim 3, wherein the servers comprise a storage controller to provide redundant array of independent disks (RAID) functionality to the zoned storage drive groups.
5. The integrated zoned storage system of claim 1, wherein the I/O modules comprise an expander to provide a signal from the servers to zoned storage drive groups.
6. The integrated zoned storage system of claim 1, comprising the storage drives disposed in the drive bays, wherein the storage drives comprise software-defined block, file, or object storage, or any combination thereof.
7. The integrated zoned storage system of claim 1, wherein the storage module comprises at least 40 drive bays, wherein the storage module to house at least 40 storage drives, wherein the I/O modules comprise at least two I/O modules, and wherein the interconnection plane comprises a communication plane disposed as a mid-plane.
8. A storage module for an integrated zoned storage system, the storage module comprising:
drive bays to receive storage drives;
an input/output (I/O) module communicatively coupled to the drive bays, wherein the storage module to be housed with servers in a chassis of the integrated zoned storage system, wherein:
the storage module, the storage drives, and the servers to be communicatively coupled to a communication plane of the chassis; and
the storage drives to be zoned to the servers.
9. The storage module of claim 8, comprising the storage drives disposed in the drive bays, wherein:
the storage drives are grouped into zones to create zoned storage drive groups;
the zoned storage drive groups implement Serial Attached SCSI (SAS) protocol; and
at least one zoned storage drive group is assigned to more than one blade server.
10. The storage module of claim 8, wherein the I/O module comprises a zoning expander to transmit a signal from a server to a drive bay, and wherein the signal to terminate at a storage drive received in the drive bay.
11. The storage module of claim 8, wherein the I/O module comprises at least two I/O modules, wherein the drive bays comprise at least 40 drive bays, and wherein the communication plane is disposed as a mid-plane in the chassis.
12. A method of assembling an integrated zoned storage system, comprising:
disposing a storage module in a chassis of the integrated zoned storage
system and communicatively coupling the storage module to a communication plane of the chassis, wherein the storage module comprises drive bays to receive storage drives;
disposing blade servers in the chassis and communicatively coupling the blade servers to the communication plane;
communicatively coupling switches to the communication plane, wherein the switches to configure the storage drives into storage drive zones and to assign the storage drive zones to the blade servers, and wherein the switches to transmit signals between the storage drive zones and the servers.
13. The method of claim 12, wherein disposing the blade servers comprises disposing the blade servers in slots of the chassis, wherein disposing the storage module comprises disposing the storage module in a slot of the chassis, and wherein the storage drives and the switches comprise a serial attached SCSI (SAS) protocol standard.
14. The method of claim 12, comprising:
disposing the storage drives in the drive bays of the storage module; and
configuring the blade servers to provide redundant array of independent disks (RAID) functionality to the storage drive zones, wherein the blade servers each comprise a processor.
15. The method of claim 12, wherein communicatively coupling the storage module comprises communicatively coupling an I/O module of the storage module to at least one of the switches.

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2015/063778 WO2017095424A1 (en) 2015-12-03 2015-12-03 Integrated zone storage


Publications (1)

Publication Number Publication Date
WO2017095424A1 true WO2017095424A1 (en) 2017-06-08

Family

ID=58797665

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/063778 WO2017095424A1 (en) 2015-12-03 2015-12-03 Integrated zone storage

Country Status (1)

Country Link
WO (1) WO2017095424A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060265449A1 (en) * 2005-04-28 2006-11-23 Satoru Uemura Blade server system
US20070266110A1 (en) * 2006-03-29 2007-11-15 Rohit Chawla System and method for managing switch and information handling system SAS protocol communication
US20080126715A1 (en) * 2006-07-26 2008-05-29 Yoshihiro Fujie Apparatus, system, and method for integrated blade raid controller and storage
US20090193150A1 (en) * 2008-01-24 2009-07-30 International Business Machines Corporation Interfacing devices that include device specific information to a device receiving unit
US20100036948A1 (en) * 2008-08-06 2010-02-11 Sun Microsystems, Inc. Zoning scheme for allocating sas storage within a blade server chassis


Similar Documents

Publication Publication Date Title
US11615044B2 (en) Graphics processing unit peer-to-peer arrangements
US10223315B2 (en) Front end traffic handling in modular switched fabric based data storage systems
US20190095294A1 (en) Storage unit for high performance computing system, storage network and methods
EP3158455B1 (en) Modular switched fabric for data storage systems
US20180027063A1 (en) Techniques to determine and process metric data for physical resources
US9250687B1 (en) High performance flexible storage system architecture
US7787482B2 (en) Independent drive enclosure blades in a blade server system with low cost high speed switch modules
US8788753B2 (en) Systems configured for improved storage system communication for N-way interconnectivity
US20170220506A1 (en) Modular Software Defined Storage Technology
JP6137313B2 (en) High availability computer system
US9582218B2 (en) Serial attached storage drive virtualization
US9940280B1 (en) Provisioning an enclosure with PCIe connectivity and storage devices
Dufrasne et al. IBM DS8870 Architecture and Implementation (release 7.5)
WO2017095424A1 (en) Integrated zone storage
US9489151B2 (en) Systems and methods including an application server in an enclosure with a communication link to an external controller

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15909938

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15909938

Country of ref document: EP

Kind code of ref document: A1