US20080140930A1 - Virtual drive mapping - Google Patents

Virtual drive mapping

Info

Publication number
US20080140930A1
Authority
US
United States
Prior art keywords
drives
physical
virtual
stripe
recited
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/636,108
Inventor
Thomas Richmond Hotchkiss
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Avago Technologies International Sales Pte Ltd
Original Assignee
Emulex Design and Manufacturing Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Emulex Design and Manufacturing Corp filed Critical Emulex Design and Manufacturing Corp
Priority to US 11/636,108
Assigned to EMULEX DESIGN & MANUFACTURING CORPORATION. Assignors: HOTCHKISS, THOMAS RICHMOND
Publication of US20080140930A1
Assigned to EMULEX CORPORATION. Assignors: EMULEX DESIGN AND MANUFACTURING CORPORATION
Priority to US 14/882,590 (US9395932B2)
Assigned to AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD. Assignors: EMULEX CORPORATION
Assigned to BANK OF AMERICA, N.A., AS COLLATERAL AGENT (patent security agreement). Assignors: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.
Assigned to AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD. (termination and release of security interest in patents). Assignors: BANK OF AMERICA, N.A., AS COLLATERAL AGENT

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0629Configuration or reconfiguration of storage systems
    • G06F3/0631Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604Improving or facilitating administration, e.g. storage management
    • G06F3/0605Improving or facilitating administration, e.g. storage management by facilitating the interaction with a user or administrator
    • G06F3/0607Improving or facilitating administration, e.g. storage management by facilitating the process of upgrading existing storage systems, e.g. for improving compatibility between host and storage device
    • G06F3/0662Virtualisation aspects
    • G06F3/0664Virtualisation aspects at device level, e.g. emulation of a storage device or system
    • G06F3/0665Virtualisation aspects at area level, e.g. provisioning of virtual or logical volumes
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G06F3/0671In-line storage system
    • G06F3/0683Plurality of storage devices
    • G06F3/0689Disk arrays, e.g. RAID, JBOD

Definitions

  • This invention relates to the mapping of virtual drives to servers, and more particularly, to the automated mapping of virtual drives to servers in a system that allows for additional servers and physical drives to be subsequently added to the system in a manner that does not require any change to the original mapping.
  • FIG. 1 is an exemplary illustration of a conventional blade server 100 connected to an external switched fabric.
  • Blade servers overcome some of the inefficiencies of individual standalone or rack-mounted one unit (1U) high servers, each of which is self-contained and includes separate power supplies, fans, and the like. Individual servers are therefore inefficient in terms of space, power, cooling, and other characteristics.
  • Blade servers 100 utilize a modular, plug-in approach wherein the housing for each server is eliminated along with self-contained components such as power supplies and fans.
  • Each previously standalone server is therefore reduced to a server “blade” 102 (typically eight to 14 in a blade server chassis 106 ) capable of being plugged into a midplane 104 within the blade server chassis 106 from the front of the chassis.
  • the midplane 104 contains connectors for receiving the server blades 102 and typically contains from one to four “lanes” or paths on a Printed Circuit Board (PCB) for carrying signals.
  • the midplane 104 therefore eliminates much of the cabling that was required with individual servers.
  • the blade server chassis 106 also provides redundant common cooling and power to the server blades 102 through the midplane 104 .
  • blade servers 100 may be connected to redundant external switch fabrics 108 through an “A” side Input/Output (I/O) switch 110 and a “B” side I/O switch 112 , which plug into the midplane 104 from the back of the chassis 106 .
  • I/O Input/Output
  • the redundancy enables one switch to take over if the other fails.
  • the blade server midplane is typically plumbed to allow for multiple independent redundant fabrics or I/O protocols, such as Fibre Channel (FC), Serial Attached SCSI (SAS), SATA, Ethernet or InfiniBand.
  • FC Fibre Channel
  • SAS Serial Attached SCSI
  • SATA Serial ATA
  • each embedded switch 110 and 112 may be a FC Arbitrated Loop (FC_AL) switch or a full fabric switch, with a separate port to receive a FC link 116 from each of the multiple server blades 102 , and output ports for connecting to each of the external switched fabrics 108 .
  • FC_AL FC Arbitrated Loop
  • mezzanine I/O card 114 that performs a Host Bus Adapter (HBA) (a.k.a. I/O Controller (IOC)) function is required in each server blade 102 .
  • HBA Host Bus Adapter
  • IOC I/O Controller
  • mezzanine I/O cards 114 are typically mounted to the server blades 102 as daughter cards. Note that this may also be accomplished by embedding an IOC directly on the server blade. However, this increases complexity for the Original Equipment Manufacturer (OEM), who must now make a different server blade for each type of I/O that will be supported.
  • mezzanine I/O cards 114 include both daughter cards and IOCs mounted directly onto the server blade.
  • the output of a mezzanine I/O card 114 is two I/O links 116 routed to each of the two embedded switches 110 and 112 .
  • the mezzanine I/O cards 114 follow the standard device driver model, so that when a server blade 102 with a mezzanine I/O card 114 is plugged into the midplane 104 and connected to an embedded switch 110 or 112 , it appears to be a standalone server with a Peripheral Component Interconnect (PCI) card communicating with an external switch.
  • PCI Peripheral Component Interconnect
  • Each conventional server blade 102 has traditionally included two disk drives 118 for redundancy.
  • the compact nature of blade servers 100 and the desired small size of the server blades 102 means that the two disk drives 118 normally contained in each server blade take up valuable space.
  • diskless server blades have been developed in which the physical disk drives are located either in another board within the blade server (an “embedded” implementation) or even in an enclosure outside the blade server (e.g. a storage array connected to the blade server).
  • One company that makes diskless server blades for non-FC applications is Engenera.
  • Diskless server blades boot off of virtual drives, which are formed within the physical drives.
  • the mapping of server blades and virtual drives has conventionally been a manual process involving adjusting Basic Input/Output System (BIOS) settings and setting up the storage array with a World-Wide Port Name (WWPN) that maps to the server blades and the blade server.
  • BIOS Basic Input/Output System
  • WWPN World-Wide Port Name
  • Embodiments of the present invention are directed to automatically mapping a set of physical drives to a larger number of virtual drives for use by a set of computer servers. Users of this invention will save costs, space and power by using fewer physical drives than the number of physical servers.
  • embodiments of the present invention define a set of algorithms implemented in firmware to automatically create and map a set of virtual drives, denoted V 1 -V n , to the physical drives D 1 -D m , given the following assumptions: (1) the maximum number of supported servers, n, is fixed and known, (2) the maximum number of supported physical drives, m, is fixed and known, and (3) one virtual drive is created per server (i.e. n total virtual drives are presented).
  • Striping (also known as a Redundant Array of Independent Disks 0 (RAID 0)) is used to map the virtual drives to the physical drives.
  • RAID 0 Redundant Array of Independent Disks 0
  • Physical drives are organized into “Stripe Sets,” with each Stripe Set containing an equal number of physical drives. There are a maximum of p Stripe Sets denoted SS 1 -SS p . Because each Stripe Set has an equal number of drives, the maximum number of physical drives m, must be divisible by the maximum number of Stripe Sets, p, with m/p physical drives per Stripe Set.
  • the number of physical drives currently installed in the system, NUMdrives, and the size (capacity) in bytes of the installed physical drives are first discovered by querying each drive for its capacity.
  • the smallest reported capacity of any of the physical drives, Dsize, is then assumed to be the capacity of all physical drives that are installed, or will be installed, in the system.
  • the number of Stripe Sets must be greater than or equal to 1, and less than or equal to the maximum number of physical drives m, with m being divisible by p. In other words, 1 ≦ p ≦ m, where m is divisible by p.
  • embodiments of the present invention may select the number of Stripe Sets, p, to yield the smallest number of physical drives per Stripe Set greater than 1.
  • Virtual drives are mapped sequentially to Stripe Sets, starting with V 1 mapped to SS 1 . Successive virtual drives are mapped to the Stripe Sets, in order, until all virtual drives have been mapped to a Stripe Set.
  • a validation step may be performed in which computations are made to determine if any of the configuration assumptions are being violated.
  • the number of drives present, NUMdrives, determined above, must be checked to ensure that it maps into an integer number of Stripe Sets.
  • the number of physical drives present, NUMdrives, must be a multiple of the number of drives in a Stripe Set, m/p.
  • The actual number of servers present, NUMservers, must be discovered by querying the interconnect to the servers. Next, to ensure that the number of servers present, NUMservers, can be supported by the number of physical drives present, NUMdrives, the virtual drives are mapped to the physical drives as described above.
  • the user may be notified of the cause of the problem, and provided with instructions to create a valid configuration.
  • a valid configuration can be reached by adding a number of physical drives until all requirements are satisfied.
  • FIG. 1 is an exemplary illustration of a conventional blade server connected to an external switched fabric.
  • FIG. 2 is an illustration of an exemplary blade server employing diskless server blades coupled through a midplane to storage concentrators in a purpose-built embodiment capable of supporting the present invention.
  • FIG. 3 is an illustration of an exemplary blade server employing diskless server blades coupled through a midplane to I/O switches to external storage arrays in an alternative embodiment capable of supporting the present invention.
  • FIG. 4 is a conceptual illustration of the mapping of servers to virtual drives according to embodiments of the present invention.
  • FIG. 5 is an illustration of an exemplary mapping of servers to virtual drives according to embodiments of the present invention.
  • FIG. 6 is a conceptual illustration of the mapping of servers to virtual drives in a mirrored configuration according to embodiments of the present invention.
  • Embodiments of the present invention are directed to automatically mapping a set of physical drives to a larger number of virtual drives for use by a set of computer servers. Users of this invention will likely save costs by using fewer physical drives than the number of physical servers.
  • FIG. 2 is an illustration of an exemplary blade server 200 employing diskless server blades 202 coupled through a midplane 204 to storage concentrators 210 and 212 in a purpose-built embodiment capable of supporting the present invention.
  • the storage concentrators 210 and 212 may include I/O switch functionality and a CPU 222 , and may be connected to physical drives 224 within the blade server 200 , or alternatively may connect to physical drives located outside the blade server.
  • the storage concentrators 210 and 212 connect the diskless server blades 202 to redundant external FC links 220 .
  • the CPU 222 within each storage concentrator 210 and 212 executes the firmware 226 of the present invention.
  • the firmware 226 will create virtual drives 218 that are associated with each of the diskless server blades 202 .
  • FIG. 3 is an illustration of an exemplary blade server 300 employing diskless server blades 302 coupled through a midplane 304 to I/O switches 310 and 312 in an alternative embodiment capable of supporting the present invention.
  • a storage array 326 separate from the blade server 300 may contain the physical drives 324 , a processor 322 , and an Application Programming Interface (API) 328 including firmware for automatically performing virtual drive creation and mapping according to embodiments of the present invention.
  • the firmware will create virtual drives 318 that are associated with each of the diskless server blades 302 .
  • FIGS. 2 and 3 illustrate two exemplary systems capable of employing embodiments of the present invention.
  • the functionality of the present invention may be implemented in firmware, an API, or any other type of computer program that may be executed by a processor or CPU or other instruction-processing device or circuit located in any switching device, storage concentrator or storage array in a system utilizing the concept of virtual drives or devices.
  • FIG. 4 is a conceptual illustration of the mapping of servers to virtual drives according to embodiments of the present invention.
  • FIG. 4 shows a maximum set of n physical servers, denoted S 1 -S n , and a maximum set of m physical drives, denoted D 1 -D m .
  • the servers S 1 -S n in FIG. 4 correlate to the server blades 202 in FIG. 2 or the server blades 302 in FIG. 3
  • the physical drives D 1 -D m in FIG. 4 correlate to the physical drives 224 connected to one of the storage concentrators 210 or 212 in FIG. 2 , or correlate to the physical drives 324 in storage array 326 in FIG. 3 .
  • Embodiments of the present invention define a set of algorithms implemented in firmware to automatically create and map a set of virtual drives (denoted V 1 -V n in FIG. 4 ) given the following assumptions: (1) the maximum number of supported servers, n, is fixed and known, (2) the maximum number of supported physical drives, m, is fixed and known, and (3) one virtual drive is created per server (i.e. n total virtual drives are presented). The n and m values may be provided to the firmware of the present invention. These assumptions enable virtual drives to be created and mapped to servers using current quantities of servers and physical drives, and allows for adding servers and physical drives up to the maximum numbers n and m without having to perform any re-mappings of virtual drives to servers. In alternative embodiments, there may be more than one virtual drive per server. The actual number of virtual drives present at any time is of course limited by the actual number of servers installed in the system.
  • Striping (also known as a Redundant Array of Independent Disks 0 (RAID 0)) is used to map the virtual drives to the physical drives. Striping is a technique to distribute data from a single virtual drive to multiple physical drives. Physical drives are organized into “Stripe Sets,” with each Stripe Set containing an equal number of physical drives. There are a maximum of p Stripe Sets denoted SS 1 -SS p in the example of FIG. 4 . Because each Stripe Set has an equal number of drives, the maximum number of physical drives m, must be divisible by the maximum number of Stripe Sets, p, with m/p physical drives per Stripe Set.
  • FIG. 4 shows two physical drives per Stripe Set as an example only; other numbers of physical drives per Stripe Set are possible.
  • the number of physical drives currently installed in the system, NUMdrives, and the size (capacity) in bytes of the installed physical drives are first discovered by querying each drive for its capacity.
  • Query methods depend on the specific protocol being used, and the invention does not depend on any specific query method.
  • the “Read Capacity” command can be used in the SCSI protocol to determine the block size and total number of blocks on a drive.
  • the smallest reported capacity of any of the physical drives, Dsize, is then assumed to be the capacity of all physical drives that are installed, or will be installed, in the system.
  • any new drives added must have a size greater than or equal to Dsize. If the new drive has a size smaller than Dsize, it results in an unsupported configuration. In this case, the user may be notified of the error, and provided with instructions to replace the drive with a different drive of capacity greater than or equal to Dsize.
  • Stripe Sets must be selected.
  • the use of Stripe Sets allows the flexibility to upgrade the number of physical drives as long as entire Stripe Sets are added at a time. Because physical drives must be added in quantities equal to complete Stripe Sets, the number of drives in a Stripe Set represents a “cost granularity” to the user. However, having more drives in a Stripe Set improves performance because it is faster to access information from multiple physical drives at the same time, so there is a trade off between cost granularity and performance.
  • a default value for the number of Stripe Sets will be used to provide automatic configuration, although in alternative embodiments users can specify a different value to optimize cost granularity vs. performance for a given application.
  • the number of Stripe Sets, p, must be greater than or equal to 1, and less than or equal to the maximum number of physical drives m, with m being divisible by p. In other words, 1 ≦ p ≦ m, where m is divisible by p.
  • embodiments of the present invention may select the number of Stripe Sets, p, to yield the smallest number of physical drives per Stripe Set greater than 1. Thus, if there is only one physical drive, then there will be one Stripe Set with one physical drive per Stripe Set. Note that if the maximum number of physical drives, m, is prime, then by default there will be only one Stripe Set with m physical drives per Stripe Set, resulting in the highest cost granularity. In alternative embodiments, other methods may be used to select the default number of Stripe Sets.
  • Each Stripe Set has a size, SSsize, equal to the size of a single physical drive Dsize multiplied by the number of physical drives in a stripe set, m/p.
  • SSsize = Dsize*m/p (rounded down to the nearest integer).
  • the next step is to map the virtual drives to the physical drives.
  • Physical drives are added to the system a Stripe Set at a time. Each Stripe Set can support a number of physical servers determined by the number of virtual drives that fit within a Stripe Set.
  • Virtual drives are mapped sequentially to Stripe Sets, starting with V 1 mapped to SS 1 . Virtual drives continue to be mapped to SS 1 until SS 1 does not have enough capacity left to support another virtual drive.
  • the number of whole virtual drives mapped to SS 1 is equal to the size of a Stripe Set divided by the size of a virtual drive, rounded down to the nearest integer.
  • Unused capacity in SS 1 is combined with enough capacity from the second Stripe Set SS 2 to support the next sequential virtual drive.
  • Virtual drives continue to be mapped to SS 2 until it no longer has enough capacity to support the next virtual drive.
  • This iterative process continues under control of firmware until all virtual drives have been mapped.
  • the firmware knows the size of each virtual drive, Vsize, and the size of each Stripe Set, SSsize, it can track how much of each Stripe Set is “consumed” as each successive virtual drive is mapped to it, and in this manner iteratively determine which successive whole and partial virtual drives are mapped to successive Stripe Sets using straightforward calculations easily implemented by those skilled in the art.
  • One example of this process is as follows: (1) map first virtual drive to first Stripe Set; (2) compute remaining space in Stripe Set; (3) as long as remaining space in Stripe Set ≧ Vsize, map next virtual drive to Stripe Set, compute new remaining space in Stripe Set, and repeat step (3); (4) when remaining space in Stripe Set < Vsize, map portion of next virtual drive equal to remaining space in Stripe Set to the Stripe Set, and map the remaining space in next virtual drive to next Stripe Set; and (5) repeat steps (2) through (4) until last virtual drive has been mapped to last Stripe Set.
  • FIG. 5 is an illustration of an exemplary mapping of servers to virtual drives according to embodiments of the present invention.
  • FIG. 5 illustrates an example with a maximum of seven servers, six physical drives, and three Stripe Sets.
  • a default of three Stripe Sets is used because it gives two physical drives per Stripe Set (the smallest number of drives per Stripe Set greater than one).
  • seven virtual drives must be mapped, each having a size of ((6 physical drives)*Dsize)/7 virtual drives.
  • Each Stripe Set has a size of 2*Dsize because there are two physical drives per Stripe Set.
  • V 1 and V 2 are mapped to SS 1 .
  • V 3 is mapped to the remaining capacity in SS 1 until all capacity in SS 1 is consumed, and the remainder of V 3 is mapped to SS 2 .
  • V 4 is mapped to SS 2 .
  • V 5 is mapped to the remainder of SS 2 , until all capacity in SS 2 is consumed. The remainder of V 5 is mapped to SS 3 .
  • V 6 and V 7 are mapped to SS 3 . Given this mapping, supported configurations of actual numbers of physical drives and servers are shown in Table 1 below:
  • a validation step is performed in which computations are made to determine if any of the configuration assumptions are being violated. For example, if only D 1 was installed (one physical drive installed) in FIG. 5 , this would be an invalid configuration (see Table 1) because each Stripe Set requires two physical drives.
  • the number of drives present, NUMdrives, determined above, must be checked to ensure that it maps into an integer number of Stripe Sets.
  • the number of drives in each Stripe Set is equal to the maximum number of physical drives m, divided by the number of Stripe Sets p.
  • the number of physical drives present, NUMdrives, must be a multiple of the number of drives in a Stripe Set, m/p.
  • the user may be notified of the cause of the problem, and provided with instructions to create a valid configuration.
  • a valid configuration can be reached by adding a number of physical drives until all requirements are satisfied.
  • the configuration may be re-validated as described above when drives or servers are added or removed.
  • embodiments of the present invention may first verify that the size of all new drives is greater than or equal to Dsize.
  • automated mapping of virtual drives to servers as described above may be used in a configuration that uses RAID 1 (mirroring) on each server to provide high availability.
  • RAID 1 mirroring
  • two independent sets of physical drives D 1 -D 6 and D 7 -D 12 can be mapped using this invention to create two independent sets of virtual drives V 1 -V 7 and V 8 -V 14 .
  • Each server is mapped to two virtual drives of equal size, that in turn map to separate physical drives.
  • RAID 1 on each server mirrors all writes to both virtual drives. Since all data is mirrored (copied) to both drives, no single drive failure will cause the system to fail.
  • each instance of the virtual drive mapping algorithm independently discovers the size of the lowest capacity physical drive present, Dsize. Each instance then communicates the Dsize it discovered, then both instances use the smaller of the two as a common Dsize. Additionally, when validating the configuration, both instances must communicate to ensure that the configuration present is supported.
  • Embodiments of the present invention can support more than one virtual drive per server, given a known maximum number of virtual drives. Such a variation might be useful to have multiple operating system images per server. For example, each server could have one virtual drive loaded with a Windows system image, and another virtual drive loaded with a Linux system image. Users could choose which virtual drive to use at boot time.
  • each physical port on an embedded storage concentrator is connected to a specific physical server slot.
  • a unique name (example: WWNN in Fibre Channel) is identified for each active server in each physical slot.
  • a table can be created that saves the mapping of server names to physical slots.
  • the invention can update the mapping table. If a server is removed from a physical slot and a new server is added with a different name in the same slot, the invention can detect this situation by comparing the server name of the new server with server name saved in the mapping table for that physical slot. Several options are possible for this situation.
  • a user may desire that the existing virtual drive image be mapped to the new physical server to enable rapid restoration of existing applications on a new server when an existing server fails.
  • the invention can be configured to automatically map an existing virtual drive and all of its data to a new physical server replacing a failed server in a specific physical slot. Additionally, if an existing server is moved from one physical slot to another physical slot, the invention can detect this case by searching the mapping table to find the server name of the server that was just inserted into a different slot. Since that server name was previously recorded in the mapping table in a different slot, the invention can detect that the server has been moved from one physical slot to another physical slot. In this case, one option is to map the existing virtual drive and all of its data to the new physical slot. A minimal sketch of such a slot-mapping table appears after this list.
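  • The slot-mapping behavior described in the last few items can be sketched as a small table keyed by physical slot. The Python fragment below is purely illustrative (the table layout, function name, and returned messages are assumptions, not part of the disclosure):
```python
# physical slot number -> unique server name (e.g. a Fibre Channel WWNN)
slot_table: dict[int, str] = {}

def server_inserted(slot: int, name: str) -> str:
    """Classify the event when a server named 'name' appears in 'slot'."""
    previous = slot_table.get(slot)
    if previous == name:
        event = "same server re-inserted; keep its existing virtual drive mapping"
    elif name in slot_table.values():
        # The name was recorded earlier under a different slot: the server moved.
        old_slot = next(s for s, n in slot_table.items() if n == name)
        del slot_table[old_slot]
        event = "server moved between slots; its virtual drive may follow it"
    elif previous is None:
        event = "new server in an empty slot"
    else:
        event = "replacement server; the failed server's virtual drive may be remapped to it"
    slot_table[slot] = name
    return event
```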

Abstract

The automatic mapping of a set of physical drives to virtual drives is disclosed. Given a maximum set of n physical servers, S1-Sn, and a maximum set of m physical drives, D1-Dm, a mapping of a set of virtual drives, V1-Vn, to the physical drives D1-Dm, is created, assuming n and m are fixed and known, and one virtual drive is created per server. Physical drives of size Dsize are organized into a maximum of p “Stripe Sets” SS1-SSp, each Stripe Set containing an equal number of physical drives. Each virtual drive will have a size, Vsize=(m*Dsize)/n (rounded down to the nearest integer). Virtual drives are mapped sequentially to Stripe Sets, starting with V1 mapped to SS1. Successive virtual drives are mapped to Stripe Sets until all virtual drives have been mapped to a Stripe Set.

Description

    FIELD OF THE INVENTION
  • This invention relates to the mapping of virtual drives to servers, and more particularly, to the automated mapping of virtual drives to servers in a system that allows for additional servers and physical drives to be subsequently added to the system in a manner that does not require any change to the original mapping.
  • BACKGROUND OF THE INVENTION
  • FIG. 1 is an exemplary illustration of a conventional blade server 100 connected to an external switched fabric. Blade servers overcome some of the inefficiencies of individual standalone or rack-mounted one unit (1U) high servers, each of which is self-contained and includes separate power supplies, fans, and the like. Individual servers are therefore inefficient in terms of space, power, cooling, and other characteristics. Blade servers 100 utilize a modular, plug-in approach wherein the housing for each server is eliminated along with self-contained components such as power supplies and fans. Each previously standalone server is therefore reduced to a server “blade” 102 (typically eight to 14 in a blade server chassis 106) capable of being plugged into a midplane 104 within the blade server chassis 106 from the front of the chassis. The midplane 104 contains connectors for receiving the server blades 102 and typically contains from one to four “lanes” or paths on a Printed Circuit Board (PCB) for carrying signals. The midplane 104 therefore eliminates much of the cabling that was required with individual servers. The blade server chassis 106 also provides redundant common cooling and power to the server blades 102 through the midplane 104.
  • Conventional blade servers 100 may be connected to redundant external switch fabrics 108 through an “A” side Input/Output (I/O) switch 110 and a “B” side I/O switch 112, which plug into the midplane 104 from the back of the chassis 106. Typically, the redundancy enables one switch to take over if the other fails. In addition, the blade server midplane is typically plumbed to allow for multiple independent redundant fabrics or I/O protocols, such as Fibre Channel (FC), Serial Attached SCSI (SAS), SATA, Ethernet or InfiniBand. In the case of a FC configuration, each embedded switch 110 and 112 may be a FC Arbitrated Loop (FC_AL) switch or a full fabric switch, with a separate port to receive a FC link 116 from each of the multiple server blades 102, and output ports for connecting to each of the external switched fabrics 108.
  • To enable the server blades 102 to communicate with the switch fabric, typically a mezzanine I/O card 114 that performs a Host Bus Adapter (HBA) (a.k.a. I/O Controller (IOC)) function is required in each server blade 102. These mezzanine I/O cards 114 are typically mounted to the server blades 102 as daughter cards. Note that this may also be accomplished by embedding an IOC directly on the server blade. However, this increases complexity for the Original Equipment Manufacturer (OEM), who must now make a different server blade for each type of I/O that will be supported. For purposes of this specification, mezzanine I/O cards 114, referred to herein, include both daughter cards and IOCs mounted directly onto the server blade. The output of a mezzanine I/O card 114 is two I/O links 116 routed to each of the two embedded switches 110 and 112. The mezzanine I/O cards 114 follow the standard device driver model, so that when a server blade 102 with a mezzanine I/O card 114 is plugged into the midplane 104 and connected to an embedded switch 110 or 112, it appears to be a standalone server with a Peripheral Component Interconnect (PCI) card communicating with an external switch.
  • Each conventional server blade 102 has traditionally included two disk drives 118 for redundancy. However, the compact nature of blade servers 100 and the desired small size of the server blades 102 means that the two disk drives 118 normally contained in each server blade take up valuable space.
  • Modern disk drives contain more storage capacity than is typically needed by a server blade, and thus diskless server blades have been developed in which the physical disk drives are located either in another board within the blade server (an “embedded” implementation) or even in an enclosure outside the blade server (e.g. a storage array connected to the blade server). One company that makes diskless server blades for non-FC applications is Engenera.
  • Diskless server blades boot off of virtual drives, which are formed within the physical drives. The mapping of server blades and virtual drives has conventionally been a manual process involving adjusting Basic Input/Output System (BIOS) settings and setting up the storage array with a World-Wide Port Name (WWPN) that maps to the server blades and the blade server.
  • Heretofore, in both blade server and non-blade server applications, there has been no way to automatically create virtual drives and map servers to the virtual drives. However, if the maximum number of allowable servers and drives is known, then a processor executing firmware either within one of the servers or external to the servers can automatically create virtual drives from existing physical drives, map them to existing servers, and allow for servers and drives to be subsequently added to the system (up to the maximum allowable numbers) without disrupting the mapping.
  • Therefore, there is a need to automatically create virtual drives from existing physical drives and map existing servers to the virtual drives when the maximum number of allowable servers and drives is known, and also to allow for additional servers and drives (up to the maximum allowable numbers) to be added and mapped without disrupting the original mapping.
  • SUMMARY OF THE INVENTION
  • Embodiments of the present invention are directed to automatically mapping a set of physical drives to a larger number of virtual drives for use by a set of computer servers. Users of this invention will save costs, space and power by using fewer physical drives than the number of physical servers.
  • Given a maximum set of n physical servers, denoted S1-Sn, and a maximum set of m physical drives, denoted D1-Dm, embodiments of the present invention define a set of algorithms implemented in firmware to automatically create and map a set of virtual drives, denoted V1-Vn, to the physical drives D1-Dm, given the following assumptions: (1) the maximum number of supported servers, n, is fixed and known, (2) the maximum number of supported physical drives, m, is fixed and known, and (3) one virtual drive is created per server (i.e. n total virtual drives are presented).
  • In the virtual drive mapping algorithm, all virtual drives are the same size. Striping (also known as a Redundant Array of Independent Disks 0 (RAID 0)) is used to map the virtual drives to the physical drives. Physical drives are organized into “Stripe Sets,” with each Stripe Set containing an equal number of physical drives. There are a maximum of p Stripe Sets denoted SS1-SSp. Because each Stripe Set has an equal number of drives, the maximum number of physical drives m, must be divisible by the maximum number of Stripe Sets, p, with m/p physical drives per Stripe Set.
  • To automatically configure a maximum set of m physical drives into a maximum set of n virtual drives, the number of physical drives currently installed in the system, NUMdrives, and the size (capacity) in bytes of the installed physical drives is first discovered by querying each drive for its capacity. The smallest reported capacity of any of the physical drives, Dsize, is then assumed to be the capacity of all physical drives that are installed, or will be installed, in the system.
  • Because there are a maximum of n virtual drives supported by a maximum of m physical drives of size Dsize, and all virtual drives are the same size, each virtual drive will have a size, Vsize, equal to the maximum total size of all physical drives, m times Dsize, divided by the maximum number of virtual drives n, rounded down to the nearest integer. In other words, Vsize=(m*Dsize)/n (rounded down to the nearest integer).
  • Next, the number of Stripe Sets must be selected. A default value for the number of Stripe Sets may be used to provide automatic configuration. The number of Stripe Sets, p, must be greater than or equal to 1, and less than or equal to the maximum number of physical drives m, with m being divisible by p. In other words, 1≦p≦m, where m is divisible by p.
  • By default, embodiments of the present invention may select the number of Stripe Sets, p, to yield the smallest number of physical drives per Stripe Set greater than 1. Each Stripe Set has a size, SSsize, equal to the size of a single physical drive Dsize multiplied by the number of physical drives in a stripe set, m/p. In other words, SSsize=Dsize*m/p (rounded down to the nearest integer).
  • Virtual drives are mapped sequentially to Stripe Sets, starting with V1 mapped to SS1. Successive virtual drives are mapped to the Stripe Sets, in order, until all virtual drives have been mapped to a Stripe Set.
  • Next, a validation step may be performed in which computations are made to determine if any of the configuration assumptions are being violated. To validate the configuration, the number of drives present, NUMdrives, determined above, must be checked to ensure that it maps into an integer number of Stripe Sets. In other words, the number of physical drives present, NUMdrives, must be a multiple of the number of drives in a Stripe Set, m/p.
  • The actual number of servers present, NUMservers, must be discovered by querying the interconnect to the servers. Next, to ensure that the number of servers present, NUMservers, can be supported by the number of physical drives present, NUMdrives, the virtual drives are mapped to the physical drives as described above.
  • If the configuration is not valid for any of the above reasons, the user may be notified of the cause of the problem, and provided with instructions to create a valid configuration. Typically, a valid configuration can be reached by adding a number of physical drives until all requirements are satisfied.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is an exemplary illustration of a conventional blade server connected to an external switched fabric.
  • FIG. 2 is an illustration of an exemplary blade server employing diskless server blades coupled through a midplane to storage concentrators in a purpose-built embodiment capable of supporting the present invention.
  • FIG. 3 is an illustration of an exemplary blade server employing diskless server blades coupled through a midplane to I/O switches to external storage arrays in an alternative embodiment capable of supporting the present invention.
  • FIG. 4 is a conceptual illustration of the mapping of servers to virtual drives according to embodiments of the present invention.
  • FIG. 5 is an illustration of an exemplary mapping of servers to virtual drives according to embodiments of the present invention.
  • FIG. 6 is a conceptual illustration of the mapping of servers to virtual drives in a mirrored configuration according to embodiments of the present invention.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
  • In the following description of preferred embodiments, reference is made to the accompanying drawings which form a part hereof, and in which it is shown by way of illustration specific embodiments in which the invention may be practiced. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the preferred embodiments of the present invention.
  • Although embodiments of the present invention are described herein in terms of blade servers and server blades, it should be understood that the present invention is not limited to blade servers and server blades, but is generally applicable to any multiple-server system employing virtual drives. In addition, the present invention is not limited to systems that support FC, but includes InfiniBand, Ethernet, Serial Attached Small Computer System Interface (SAS) signaling and the like. Implementation of these protocols requires that the midplane or other connectivity support the protocols.
  • Embodiments of the present invention are directed to automatically mapping a set of physical drives to a larger number of virtual drives for use by a set of computer servers. Users of this invention will likely save costs by using fewer physical drives than the number of physical servers.
  • FIG. 2 is an illustration of an exemplary blade server 200 employing diskless server blades 202 coupled through a midplane 204 to storage concentrators 210 and 212 in a purpose-built embodiment capable of supporting the present invention. The storage concentrators 210 and 212 may include I/O switch functionality and a CPU 222, and may be connected to physical drives 224 within the blade server 200, or alternatively may connect to physical drives located outside the blade server. The storage concentrators 210 and 212 connect the diskless server blades 202 to redundant external FC links 220. The CPU 222 within each storage concentrator 210 and 212 executes the firmware 226 of the present invention. The firmware 226 will create virtual drives 218 that are associated with each of the diskless server blades 202.
  • FIG. 3 is an illustration of an exemplary blade server 300 employing diskless server blades 302 coupled through a midplane 304 to I/O switches 310 and 312 in an alternative embodiment capable of supporting the present invention. In FIG. 3, a storage array 326 separate from the blade server 300 may contain the physical drives 324, a processor 322, and an Application Programming Interface (API) 328 including firmware for automatically performing virtual drive creation and mapping according to embodiments of the present invention. The firmware will create virtual drives 318 that are associated with each of the diskless server blades 302.
  • It should be noted that FIGS. 2 and 3 illustrate two exemplary systems capable of employing embodiments of the present invention. In general, the functionality of the present invention may be implemented in firmware, an API, or any other type of computer program that may be executed by a processor or CPU or other instruction-processing device or circuit located in any switching device, storage concentrator or storage array in a system utilizing the concept of virtual drives or devices.
  • FIG. 4 is a conceptual illustration of the mapping of servers to virtual drives according to embodiments of the present invention. FIG. 4 shows a maximum set of n physical servers, denoted S1-Sn, and a maximum set of m physical drives, denoted D1-Dm. For purposes of comparison and correlation to an actual physical system, the servers S1-Sn in FIG. 4 correlate to the server blades 202 in FIG. 2 or the server blades 302 in FIG. 3, and the physical drives D1-Dm in FIG. 4 correlate to the physical drives 224 connected to one of the storage concentrators 210 or 212 in FIG. 2, or correlate to the physical drives 324 in storage array 326 in FIG. 3.
  • Embodiments of the present invention define a set of algorithms implemented in firmware to automatically create and map a set of virtual drives (denoted V1-Vn in FIG. 4) given the following assumptions: (1) the maximum number of supported servers, n, is fixed and known, (2) the maximum number of supported physical drives, m, is fixed and known, and (3) one virtual drive is created per server (i.e. n total virtual drives are presented). The n and m values may be provided to the firmware of the present invention. These assumptions enable virtual drives to be created and mapped to servers using current quantities of servers and physical drives, and allows for adding servers and physical drives up to the maximum numbers n and m without having to perform any re-mappings of virtual drives to servers. In alternative embodiments, there may be more than one virtual drive per server. The actual number of virtual drives present at any time is of course limited by the actual number of servers installed in the system.
  • In the virtual drive mapping algorithm, all virtual drives are the same size. Alternative embodiments may support different size virtual drives. (Size means the capacity, in bytes, of a drive, virtual or physical). Any number of servers, from 1 to n, can be supported. In general, a user would start with a smaller number of servers and add servers over time. When adding servers, the size of existing virtual drives remains fixed. Virtual drive sizes are not reduced.
  • Striping (also known as a Redundant Array of Independent Disks 0 (RAID 0)) is used to map the virtual drives to the physical drives. Striping is a technique to distribute data from a single virtual drive to multiple physical drives. Physical drives are organized into “Stripe Sets,” with each Stripe Set containing an equal number of physical drives. There are a maximum of p Stripe Sets denoted SS1-SSp in the example of FIG. 4. Because each Stripe Set has an equal number of drives, the maximum number of physical drives m, must be divisible by the maximum number of Stripe Sets, p, with m/p physical drives per Stripe Set. Any actual number of Stripe Sets, from 1 to p, can be supported, provided that enough actual Stripe Sets are present to support the number of servers present. It should be noted that FIG. 4 shows two physical drives per Stripe Set as an example only; other numbers of physical drives per Stripe Set are possible.
  • To automatically configure a maximum set of m physical drives into a maximum set of n virtual drives, the number of physical drives currently installed in the system, NUMdrives, and the size (capacity) in bytes of the installed physical drives is first discovered by querying each drive for its capacity. Query methods depend on the specific protocol being used, and the invention does not depend on any specific query method. As an example, the “Read Capacity” command can be used in the SCSI protocol to determine the block size and total number of blocks on a drive. The smallest reported capacity of any of the physical drives, Dsize, is then assumed to be the capacity of all physical drives that are installed, or will be installed, in the system. Note that using the smallest size of a single drive as the capacity of all drives leaves unused capacity on drives larger than Dsize, but simplifies the mapping algorithm and allows for expansion of the number of installed servers and physical drives (up to the assumed maximums n and m) without requiring re-mapping. If less than m physical drives are present during the initial discovery, then any new drives added must have a size greater than or equal to Dsize. If the new drive has a size smaller than Dsize, it results in an unsupported configuration. In this case, the user may be notified of the error, and provided with instructions to replace the drive with a different drive of capacity greater than or equal to Dsize.
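  • The discovery step above can be summarized in a short sketch. The following Python fragment is illustrative only: the drive objects and their read_capacity() method are assumptions standing in for whatever protocol-specific query (such as the SCSI “Read Capacity” command mentioned above) a given implementation uses.
```python
def discover_drives(installed_drives):
    """Return (NUMdrives, Dsize) for the drives currently installed.

    NUMdrives is the count of installed physical drives; Dsize is the smallest
    capacity, in bytes, reported by any of them, and is treated as the common
    capacity of every drive that is, or will be, installed.
    """
    capacities = [drive.read_capacity() for drive in installed_drives]
    return len(capacities), min(capacities)
```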
  • Because there are a maximum of n virtual drives supported by a maximum of m physical drives of size Dsize, and all virtual drives are the same size, each virtual drive will have a size, Vsize, equal to the maximum total size of all physical drives, m times Dsize, divided by the maximum number of virtual drives n, rounded down to the nearest integer. In other words, Vsize=(m*Dsize)/n (rounded down to the nearest integer).
  • Next, the number of Stripe Sets must be selected. The use of Stripe Sets, while optional, allows the flexibility to upgrade the number of physical drives as long as entire Stripe Sets are added at a time. Because physical drives must be added in quantities equal to complete Stripe Sets, the number of drives in a Stripe Set represents a “cost granularity” to the user. However, having more drives in a Stripe Set improves performance because it is faster to access information from multiple physical drives at the same time, so there is a trade off between cost granularity and performance. A default value for the number of Stripe Sets will be used to provide automatic configuration, although in alternative embodiments users can specify a different value to optimize cost granularity vs. performance for a given application. The number of Stripe Sets, p, must be greater than or equal to 1, and less than or equal to the maximum number of physical drives m, with m being divisible by p. In other words, 1≦p≦m, where m is divisible by p.
  • By default, embodiments of the present invention may select the number of Stripe Sets, p, to yield the smallest number of physical drives per Stripe Set greater than 1. Thus, if there is only one physical drive, then there will be one Stripe Set with one physical drive per Stripe Set. Note that if the maximum number of physical drives, m, is prime, then by default there will be only one Stripe Set with m physical drives per Stripe Set, resulting in the highest cost granularity. In alternative embodiments, other methods may be used to select the default number of Stripe Sets.
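  • A minimal sketch of this default selection is shown below (the function name and signature are illustrative, not part of the disclosure): it chooses p so that the number of drives per Stripe Set, m/p, is the smallest divisor of m greater than 1, which reduces to a single Stripe Set when m is 1 or prime.
```python
def default_stripe_sets(m: int) -> int:
    """Pick the default number of Stripe Sets, p, for m physical drives."""
    if m <= 1:
        return 1                      # one Stripe Set holding the single drive
    for drives_per_set in range(2, m + 1):
        if m % drives_per_set == 0:   # smallest divisor of m greater than 1
            return m // drives_per_set
    return 1                          # not reached: m always divides itself
```
  • For example, default_stripe_sets(6) returns 3 (two drives per Stripe Set), while default_stripe_sets(7) returns 1 because 7 is prime.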
  • Each Stripe Set has a size, SSsize, equal to the size of a single physical drive Dsize multiplied by the number of physical drives in a stripe set, m/p. In other words, SSsize=Dsize*m/p (rounded down to the nearest integer).
  • The next step is to map the virtual drives to the physical drives. Physical drives are added to the system a Stripe Set at a time. Each Stripe Set can support a number of physical servers determined by the number of virtual drives that fit within a Stripe Set. Virtual drives are mapped sequentially to Stripe Sets, starting with V1 mapped to SS1. Virtual drives continue to be mapped to SS1 until SS1 does not have enough capacity left to support another virtual drive. The number of whole virtual drives mapped to SS1 is equal to the size of a Stripe Set divided by the size of a virtual drive, rounded down to the nearest integer. In other words, the number of whole virtual drives mapped to SS1 is equal to SSsize/Vsize=(Dsize*m/p)/((m*Dsize)/n)=(m/p)/(m/n)=n/p, rounded down to the nearest integer.
  • Unused capacity in SS1 is combined with enough capacity from the second Stripe Set SS2 to support the next sequential virtual drive. Virtual drives continue to be mapped to SS2 until it no longer has enough capacity to support the next virtual drive. This iterative process continues under control of firmware until all virtual drives have been mapped. During this process, because the firmware knows the size of each virtual drive, Vsize, and the size of each Stripe Set, SSsize, it can track how much of each Stripe Set is “consumed” as each successive virtual drive is mapped to it, and in this manner iteratively determine which successive whole and partial virtual drives are mapped to successive Stripe Sets using straightforward calculations easily implemented by those skilled in the art. One example of this process is as follows: (1) map first virtual drive to first Stripe Set; (2) compute remaining space in Stripe Set; (3) as long as remaining space in Stripe Set≧Vsize, map next virtual drive to Stripe Set, compute new remaining space in Stripe Set, and repeat step (3); (4) when remaining space in Stripe Set<Vsize, map portion of next virtual drive equal to remaining space in Stripe Set to the Stripe Set, and map the remaining space in next virtual drive to next Stripe Set; and (5) repeat steps (2) through (4) until last virtual drive has been mapped to last Stripe Set.
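  • The five steps above can be expressed as a short routine. The sketch below is one possible rendering in Python (names, types, and the extent-list representation are assumptions for illustration); it computes Vsize and SSsize from the formulas given earlier and records, for each virtual drive, which Stripe Set (or pair of Stripe Sets) holds it.
```python
def map_virtual_drives(n: int, m: int, p: int, dsize: int):
    """Map n equal-size virtual drives onto p Stripe Sets of m/p drives each.

    Returns a list of (virtual_drive, stripe_set, capacity) extents; a virtual
    drive that straddles two Stripe Sets appears as two extents.
    """
    vsize = (m * dsize) // n            # Vsize  = (m * Dsize) / n, rounded down
    ss_size = dsize * (m // p)          # SSsize = Dsize * (m / p)
    extents = []
    stripe_set, remaining = 1, ss_size  # begin by filling SS1
    for v in range(1, n + 1):
        needed = vsize
        while needed > 0:
            if remaining == 0:          # current Stripe Set is full; move on
                stripe_set += 1
                remaining = ss_size
            chunk = min(needed, remaining)
            extents.append((f"V{v}", f"SS{stripe_set}", chunk))
            needed -= chunk
            remaining -= chunk
    return extents
```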
  • Users may need to know how many total servers can be supported by a given number of Stripe Sets. If the number of installed Stripe Sets is s, the number of supported servers equals the total capacity of all installed Stripe Sets divided by the size of a virtual drive, rounded down to the nearest integer. From the equation above, the number of supported servers=s*n/p (rounded down to the nearest integer).
  • FIG. 5 is an illustration of an exemplary mapping of servers to virtual drives according to embodiments of the present invention. FIG. 5 illustrates an example with a maximum of seven servers, six physical drives, and three Stripe Sets. In this example, it is possible to have one, two, three or six Stripe Sets, given the maximum of six physical drives. A default of three Stripe Sets is used because it gives two physical drives per Stripe Set (the smallest number of drives per Stripe Set greater than one). Given that there is a maximum of seven servers, seven virtual drives must be mapped, each having a size of ((6 physical drives)*Dsize)/7 virtual drives. Each Stripe Set has a size of 2*Dsize because there are two physical drives per Stripe Set. In the example of FIG. 5, V1 and V2 are mapped to SS1. V3 is mapped to the remaining capacity in SS1 until all capacity in SS1 is consumed, and the remainder of V3 is mapped to SS2. V4 is mapped to SS2. V5 is mapped to the remainder of SS2, until all capacity in SS2 is consumed. The remainder of V5 is mapped to SS3. Finally, V6 and V7 are mapped to SS3. Given this mapping, supported configurations of actual numbers of physical drives and servers are shown in the following Table 1:
  • TABLE 1

    Number of Physical    Number of         Number of
    Drives Present        Virtual Drives    Servers Supported
    1                     Not Supported     Not Supported
    2                     2                 1 or 2
    3                     Not Supported     Not Supported
    4                     4                 1, 2, 3 or 4
    5                     Not Supported     Not Supported
    6                     7                 1, 2, 3, 4, 5, 6 or 7
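Using the hypothetical sketches above, the FIG. 5 example and Table 1 can be reproduced: with m = 6, n = 7 and p = 3 there are two drives per Stripe Set, and s installed Stripe Sets support floor(s*n/p) servers. The drive capacity of 70 units below is an arbitrary assumption for illustration.

    # FIG. 5 example: six physical drives, seven virtual drives, three Stripe Sets.
    m, n = 6, 7
    d_size, v_size, p, ss_size = plan_configuration(m, n, [70] * m)
    print(map_virtual_drives(n, v_size, p, ss_size))
    # V1 and V2 fall wholly in SS1, V3 spans SS1/SS2, V5 spans SS2/SS3,
    # and V6 and V7 fall in SS3, as in FIG. 5.

    # Table 1: supported configurations for each number of drives present.
    for num_drives in range(1, m + 1):
        if num_drives % (m // p):
            print(num_drives, "drive(s): Not Supported")
        else:
            s = num_drives // (m // p)            # whole Stripe Sets installed
            print(num_drives, "drive(s): up to", (s * n) // p, "servers")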
  • Next, a validation step is performed to determine whether any of the configuration assumptions are violated. For example, if only D1 were installed (one physical drive installed) in FIG. 5, this would be an invalid configuration (see Table 1) because each Stripe Set requires two physical drives.
  • To validate the configuration, the number of drives present, NUMdrives, determined above, must be checked to ensure that it maps into an integer number of Stripe Sets. The number of drives in each Stripe Set is equal to the maximum number of physical drives m, divided by the number of Stripe Sets p. The number of physical drives present, NUMdrives, must be a multiple of the number of drives in a Stripe Set, m/p. The number of Stripe Sets, s, is equal to the number of drives present, NUMdrives, divided by the number of drives in a stripe set, m/p. In other words, s=NUMdrives/(m/p).
  • The actual number of servers present, NUMservers, must be discovered by querying the interconnect to the servers. Specific query methods vary based on interconnect and protocol, and the invention is not dependent on the specific methods. As an example, in Fibre Channel a PLOGI command can be used to determine whether a server is present and to obtain the WWN of the server if present. Next, to ensure that the number of servers present, NUMservers, can be supported by the number of physical drives present, NUMdrives, the virtual drives are mapped to the physical drives as described above. Of course, the number of servers present, NUMservers, must be less than or equal to the number of supported servers. In other words, NUMservers<=s*n/p (rounded down to the nearest integer).
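A minimal validation sketch along these lines, again hypothetical and assuming the quantities NUMdrives, NUMservers, m, n and p defined above are already known:

    def validate(num_drives, num_servers, m, n, p):
        # Reject configurations that violate the assumptions described above.
        drives_per_set = m // p
        if num_drives % drives_per_set != 0:
            raise ValueError(f"add drives in groups of {drives_per_set} "
                             f"to complete a Stripe Set")
        s = num_drives // drives_per_set          # installed Stripe Sets
        supported = (s * n) // p                  # floor(s * n / p) servers
        if num_servers > supported:
            raise ValueError(f"{num_servers} servers present but only "
                             f"{supported} supported; add Stripe Sets")
        return s, supported

For the FIG. 5 configuration, validate(1, 1, 6, 7, 3) fails because one drive cannot form a two-drive Stripe Set, while validate(4, 3, 6, 7, 3) returns (2, 4), matching Table 1.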
  • If the configuration is not valid for any of the above reasons, the user may be notified of the cause of the problem, and provided with instructions to create a valid configuration. Typically, a valid configuration can be reached by adding a number of physical drives until all requirements are satisfied.
  • The configuration may be re-validated as described above when drives or servers are added or removed. In particular, when drives are added, embodiments of the present invention may first verify that the size of all new drives is greater than or equal to Dsize.
  • In alternative embodiments of the present invention illustrated symbolically in the example of FIG. 6, automated mapping of virtual drives to servers as described above may be used in a configuration that uses RAID 1 (mirroring) on each server to provide high availability. In the example of FIG. 6, two independent sets of physical drives D1-D6 and D7-D12 can be mapped using this invention to create two independent sets of virtual drives V1-V7 and V8-V14. Each server is mapped to two virtual drives of equal size, that in turn map to separate physical drives. RAID 1 on each server mirrors all writes to both virtual drives. Since all data is mirrored (copied) to both drives, no single drive failure will cause the system to fail.
  • In a RAID 1 application, the redundant virtual drives seen by each server should be equal in size. To achieve equal-size virtual drives, each instance of the virtual drive mapping algorithm independently discovers the size of the lowest-capacity physical drive present, Dsize. Each instance then communicates the Dsize it discovered to the other, and both instances use the smaller of the two as the common Dsize. Additionally, when validating the configuration, both instances must communicate to ensure that the configuration present is supported.
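The Dsize agreement between the two mapping instances might look like the following sketch (hypothetical; the exchange callable stands in for whatever interconnect-specific transport carries the value to the peer instance):

    def agree_common_dsize(local_drive_sizes, exchange):
        # Each RAID 1 instance discovers its own smallest drive, sends that
        # value to its peer, and both adopt the smaller of the two as Dsize.
        local_dsize = min(local_drive_sizes)
        peer_dsize = exchange(local_dsize)    # transport-specific, assumed here
        return min(local_dsize, peer_dsize)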
  • Embodiments of the present invention can support more than one virtual drive per server, given a known maximum number of virtual drives. Such a variation might be useful for providing multiple operating system images per server. For example, each server could have one virtual drive loaded with a Windows system image and another virtual drive loaded with a Linux system image. Users could choose which virtual drive to use at boot time.
  • In a blade server environment, each physical port on an embedded storage concentrator is connected to a specific physical server slot. During the discovery process, a unique name (for example, a WWNN in Fibre Channel) is identified for each active server in each physical slot. A table can be created that saves the mapping of server names to physical slots. Each time a server is removed or inserted, the invention can update the mapping table. If a server is removed from a physical slot and a new server with a different name is added in the same slot, the invention can detect this situation by comparing the server name of the new server with the server name saved in the mapping table for that physical slot. Several options are possible for this situation. A user may desire that the existing virtual drive image be mapped to the new physical server to enable rapid restoration of existing applications on a new server when an existing server fails. The invention can be configured to automatically map an existing virtual drive and all of its data to a new physical server replacing a failed server in a specific physical slot. Additionally, if an existing server is moved from one physical slot to another, the invention can detect this case by searching the mapping table to find the server name of the server that was just inserted into a different slot. Since that server name was previously recorded in the mapping table in a different slot, the invention can detect that the server has been moved from one physical slot to another. In this case, one option is to map the existing virtual drive and all of its data to the new physical slot.
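The slot-tracking behavior can be sketched with a small table keyed by physical slot; the data structure and policy below are illustrative assumptions rather than the patent's required implementation.

    def on_server_discovered(slot_table, slot, server_name):
        # slot_table maps physical slot -> (server name, virtual drive).
        if slot in slot_table and slot_table[slot][0] == server_name:
            return "no change"                         # same server, same slot
        # Server name already recorded in a different slot: the server moved.
        moved_from = next((s for s, (name, _) in slot_table.items()
                           if name == server_name and s != slot), None)
        if moved_from is not None:
            _, vdrive = slot_table.pop(moved_from)
            slot_table[slot] = (server_name, vdrive)   # keep its virtual drive and data
            return f"moved from slot {moved_from}; existing virtual drive remapped"
        # Different name in a previously occupied slot: replacement for a failed server.
        if slot in slot_table:
            _, vdrive = slot_table[slot]
            slot_table[slot] = (server_name, vdrive)   # reuse the existing image
            return "replacement server; existing virtual drive image mapped to it"
        slot_table[slot] = (server_name, None)         # brand-new server and slot
        return "new server registered"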
  • Although the present invention has been fully described in connection with embodiments thereof with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of the present invention as defined by the appended claims.

Claims (39)

1. A processor programmed for automatically mapping a set of physical drives to a larger number of virtual drives in a storage system, given a maximum number of physical drives and virtual drives in the system, by performing the steps of:
determining a common physical drive size for all physical drives;
determining a virtual drive size for all virtual drives;
determining a number of Stripe Sets, each Stripe Set having an integer number of physical drives; and
mapping each virtual drive to one or more Stripe Sets.
2. The processor as recited in claim 1, wherein the step of determining a common physical drive size for all physical drives comprises discovering the size of all installed physical drives and utilizing the smallest discovered size as the common physical drive size.
3. The processor as recited in claim 2, wherein the step of determining a common physical drive size for all physical drives comprises querying each installed physical drive for its capacity.
4. The processor as recited in claim 1, wherein the step of determining a common virtual drive size for all virtual drives comprises multiplying the maximum number of physical drives by the common physical drive size and dividing by the maximum number of virtual drives.
5. The processor as recited in claim 1, wherein the step of determining a number of Stripe Sets comprises selecting the number of Stripe Sets to yield the smallest number of physical drives per Stripe Set greater than one.
6. The processor as recited in claim 5, the processor further programmed for computing a Stripe Set size for each Stripe Set by multiplying the common physical drive size by the integer number of physical drives in each Stripe Set.
7. The processor as recited in claim 1, wherein the step of mapping each virtual drive to one or more Stripe Sets comprises:
(1) mapping a first virtual drive to a Stripe Set;
(2) computing a remaining space in the Stripe Set;
(3) as long as the remaining space in the Stripe Set≧Vsize, mapping a next virtual drive to the Stripe Set, computing a new remaining space in Stripe Set, and repeating step (3);
(4) when the remaining space in the Stripe Set<Vsize, mapping the portion of the next virtual drive equal to the remaining space in the Stripe Set to the Stripe Set, and mapping a remaining portion of the next virtual drive to a next Stripe Set; and
(5) repeating steps (2) through (4) until a last virtual drive has been mapped to a last Stripe Set.
8. The processor as recited in claim 1, the processor further programmed for performing the step of validating the mapping to determine if any configuration assumptions have been violated by:
discovering a number of installed physical drives; and
verifying that the number of installed physical drives is a multiple of the number of physical drives in a Stripe Set.
9. The processor as recited in claim 1, wherein the processor is further capable of automatically re-mapping the physical drives to the virtual drives when drives or servers are added or removed.
10. The processor as recited in claim 1, wherein the storage system is a mirrored configuration.
11. The processor as recited in claim 1, wherein when the storage system includes one or more servers, the processor is further programmed for performing the step of assigning one or more virtual drives per server.
12. A storage array comprising the processor of claim 1.
13. A blade server comprising the storage array of claim 12.
14. A Storage Area Network (SAN) comprising the blade server of claim 13.
15. The processor as recited in claim 1, wherein the processor is a component of a storage concentrator.
16. The processor as recited in claim 15, wherein the storage concentrator is a component of a blade server.
17. The processor as recited in claim 16, wherein the blade server is a component of a Storage Area Network (SAN).
18. One or more storage media including a computer program which, when executed by one or more processors, automatically maps a set of physical drives to a larger number of virtual drives in a storage system, given a maximum number of physical drives and virtual drives in the system, by causing the one or more processors to perform the steps of:
determining a common physical drive size for all physical drives;
determining a virtual drive size for all virtual drives;
determining a number of Stripe Sets, each Stripe Set having an integer number of physical drives; and
mapping each virtual drive to one or more Stripe Sets.
19. The one or more storage media as recited in claim 18, wherein the step of determining a common physical drive size for all physical drives comprises discovering the size of all installed physical drives and utilizing the smallest discovered size as the common physical drive size.
20. The one or more storage media as recited in claim 19, wherein the step of determining a common physical drive size for all physical drives comprises querying each installed physical drive for its capacity.
21. The one or more storage media as recited in claim 20, wherein the step of determining a common virtual drive size for all virtual drives comprises multiplying the maximum number of physical drives by the common physical drive size and dividing by the maximum number of virtual drives.
22. The one or more storage media as recited in claim 18, wherein the step of determining a number of Stripe Sets comprises selecting the number of Stripe Sets to yield the smallest number of physical drives per Stripe Set greater than one.
23. The one or more storage media as recited in claim 22, wherein the computer program further causes the one or more processors to perform the step of computing a Stripe Set size for each Stripe Set by multiplying the common physical drive size by the integer number of physical drives in each Stripe Set.
24. The one or more storage media as recited in claim 18, wherein the step of mapping each virtual drive to one or more Stripe Sets comprises:
(1) mapping a first virtual drive to a Stripe Set;
(2) computing a remaining space in the Stripe Set;
(3) as long as the remaining space in the Stripe Set≧Vsize, mapping a next virtual drive to the Stripe Set, computing a new remaining space in Stripe Set, and repeating step (3);
(4) when the remaining space in the Stripe Set<Vsize, mapping the portion of the next virtual drive equal to the remaining space in the Stripe Set to the Stripe Set, and mapping a remaining portion of the next virtual drive to a next Stripe Set; and
(5) repeating steps (2) through (4) until a last virtual drive has been mapped to a last Stripe Set.
25. The one or more storage media as recited in claim 18, wherein the computer program further causes the one or more processors to perform the step of validating the mapping to determine if any configuration assumptions have been violated by:
discovering a number of installed physical drives; and
verifying that the number of installed physical drives is a multiple of the number of physical drives in a Stripe Set.
26. The one or more storage media as recited in claim 18, wherein the computer program further causes the one or more processors to automatically re-map the physical drives to the virtual drives when drives or servers are added or removed.
27. The one or more storage media as recited in claim 18, wherein the storage system is a mirrored configuration.
28. The one or more storage media as recited in claim 18, wherein when the storage system includes one or more servers, the computer program further causes the one or more processors to perform the step of assigning one virtual drive per server.
29. A method for automatically mapping a set of physical drives to a larger number of virtual drives in a storage system, given a maximum number of physical drives and virtual drives in the system, comprising:
determining a common physical drive size for all physical drives;
determining a common virtual drive size for all virtual drives;
determining a number of Stripe Sets, each Stripe Set having an integer number of physical drives; and
mapping each virtual drive to one or more Stripe Sets.
30. The method as recited in claim 29, wherein the step of determining a common physical drive size for all physical drives comprises discovering the size of all installed physical drives and utilizing the smallest discovered size as the common physical drive size.
31. The method as recited in claim 30, wherein the step of determining a common physical drive size for all physical drives comprises querying each installed physical drive for its capacity.
32. The method as recited in claim 29, wherein the step of determining a common virtual drive size for all virtual drives comprises multiplying the maximum number of physical drives by the common physical drive size and dividing by the maximum number of virtual drives.
33. The method as recited in claim 29, wherein the step of determining a number of Stripe Sets comprises selecting the number of Stripe Sets to yield the smallest number of physical drives per Stripe Set greater than one.
34. The method as recited in claim 33, further comprising computing a Stripe Set size for each Stripe Set by multiplying the common physical drive size by the integer number of physical drives in each Stripe Set.
35. The method as recited in claim 29, wherein the step of mapping each virtual drive to one or more Stripe Sets comprises:
(1) mapping a first virtual drive to a Stripe Set;
(2) computing a remaining space in the Stripe Set;
(3) as long as the remaining space in the Stripe Set≧Vsize, mapping a next virtual drive to the Stripe Set, computing a new remaining space in Stripe Set, and repeating step (3);
(4) when the remaining space in the Stripe Set<Vsize, mapping the portion of the next virtual drive equal to the remaining space in the Stripe Set to the Stripe Set, and mapping a remaining portion of the next virtual drive to a next Stripe Set; and
(5) repeating steps (2) through (4) until a last virtual drive has been mapped to a last Stripe Set.
36. The method as recited in claim 29, further comprising performing the step of validating the mapping to determine if any configuration assumptions have been violated by:
discovering a number of installed physical drives; and
verifying that the number of installed physical drives is a multiple of the number of physical drives in a Stripe Set.
37. The method as recited in claim 29, further comprising automatically re-mapping the physical drives to the virtual drives when drives or servers are added or removed.
38. The method as recited in claim 29, wherein the storage system is a mirrored configuration.
39. The method as recited in claim 29, wherein when the storage system includes one or more servers, the method further comprises performing the step of assigning one virtual drive per server.
US11/636,108 2006-12-08 2006-12-08 Virtual drive mapping Abandoned US20080140930A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US11/636,108 US20080140930A1 (en) 2006-12-08 2006-12-08 Virtual drive mapping
US14/882,590 US9395932B2 (en) 2006-12-08 2015-10-14 Virtual drive mapping

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US11/636,108 US20080140930A1 (en) 2006-12-08 2006-12-08 Virtual drive mapping

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/882,590 Continuation US9395932B2 (en) 2006-12-08 2015-10-14 Virtual drive mapping

Publications (1)

Publication Number Publication Date
US20080140930A1 true US20080140930A1 (en) 2008-06-12

Family

ID=39499670

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/636,108 Abandoned US20080140930A1 (en) 2006-12-08 2006-12-08 Virtual drive mapping
US14/882,590 Active US9395932B2 (en) 2006-12-08 2015-10-14 Virtual drive mapping

Family Applications After (1)

Application Number Title Priority Date Filing Date
US14/882,590 Active US9395932B2 (en) 2006-12-08 2015-10-14 Virtual drive mapping

Country Status (1)

Country Link
US (2) US20080140930A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6311251B1 (en) * 1998-11-23 2001-10-30 Storage Technology Corporation System for optimizing data storage in a RAID system
US6487633B1 (en) * 1999-05-03 2002-11-26 3Ware, Inc. Methods and systems for accessing disks using forward and reverse seeks
US6591339B1 (en) * 1999-05-03 2003-07-08 3Ware, Inc. Methods and systems for selecting block sizes for use with disk arrays
US6795895B2 (en) * 2001-03-07 2004-09-21 Canopy Group Dual axis RAID systems for enhanced bandwidth and reliability

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5598549A (en) * 1993-06-11 1997-01-28 At&T Global Information Solutions Company Array storage system for returning an I/O complete signal to a virtual I/O daemon that is separated from software array driver and physical device driver
US6581135B2 (en) * 1998-05-27 2003-06-17 Fujitsu Limited Information storage system for redistributing information to information storage devices when a structure of the information storage devices is changed
US6266784B1 (en) * 1998-09-15 2001-07-24 International Business Machines Corporation Direct storage of recovery plan file on remote server for disaster recovery and storage management thereof
US6728831B1 (en) * 1998-10-23 2004-04-27 Oracle International Corporation Method and system for managing storage systems containing multiple data storage devices
US6834326B1 (en) * 2000-02-04 2004-12-21 3Com Corporation RAID method and device with network protocol between controller and storage devices
US6895467B2 (en) * 2001-10-22 2005-05-17 Hewlett-Packard Development Company, L.P. System and method for atomizing storage
US20030097487A1 (en) * 2001-11-20 2003-05-22 Rietze Paul D. Common boot environment for a modular server system
US20030188114A1 (en) * 2002-03-26 2003-10-02 Clark Lubbers Data replication with virtualized volumes
US6848034B2 (en) * 2002-04-04 2005-01-25 International Business Machines Corporation Dense server environment that shares an IDE drive
US20040111559A1 (en) * 2002-12-10 2004-06-10 Thomas Heil Apparatus and method for sharing boot volume among server blades

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Paul Massiglia, "The RAID Book: A Storage System Technology Handbook", 6th Edition, 1997, Pages 82 - 91 *
The PC Guide, "RAID Level 0", December 5, 2005, Pages 1 - 2,http://web.archive.org/web/20051205023703/http://www.pcguide.com/ref/hdd/perf/raid/levels/singleLevel0-c.html *

Cited By (70)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11467883B2 (en) 2004-03-13 2022-10-11 Iii Holdings 12, Llc Co-allocating a reservation spanning different compute resources types
US11652706B2 (en) 2004-06-18 2023-05-16 Iii Holdings 12, Llc System and method for providing dynamic provisioning within a compute environment
US11630704B2 (en) 2004-08-20 2023-04-18 Iii Holdings 12, Llc System and method for a workload management and scheduling module to manage access to a compute environment according to local and non-local user identity information
US11656907B2 (en) 2004-11-08 2023-05-23 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11537434B2 (en) 2004-11-08 2022-12-27 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11494235B2 (en) 2004-11-08 2022-11-08 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11886915B2 (en) 2004-11-08 2024-01-30 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11861404B2 (en) 2004-11-08 2024-01-02 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11537435B2 (en) 2004-11-08 2022-12-27 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11709709B2 (en) 2004-11-08 2023-07-25 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11762694B2 (en) 2004-11-08 2023-09-19 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11658916B2 (en) 2005-03-16 2023-05-23 Iii Holdings 12, Llc Simple integration of an on-demand compute environment
US11533274B2 (en) 2005-04-07 2022-12-20 Iii Holdings 12, Llc On-demand access to compute resources
US11831564B2 (en) 2005-04-07 2023-11-28 Iii Holdings 12, Llc On-demand access to compute resources
US11496415B2 (en) 2005-04-07 2022-11-08 Iii Holdings 12, Llc On-demand access to compute resources
US11522811B2 (en) 2005-04-07 2022-12-06 Iii Holdings 12, Llc On-demand access to compute resources
US11765101B2 (en) 2005-04-07 2023-09-19 Iii Holdings 12, Llc On-demand access to compute resources
US11650857B2 (en) 2006-03-16 2023-05-16 Iii Holdings 12, Llc System and method for managing a hybrid computer environment
US20090055599A1 (en) * 2007-08-13 2009-02-26 Linda Van Patten Benhase Consistent data storage subsystem configuration replication
US7716309B2 (en) * 2007-08-13 2010-05-11 International Business Machines Corporation Consistent data storage subsystem configuration replication
US11522952B2 (en) 2007-09-24 2022-12-06 The Research Foundation For The State University Of New York Automatic clustering for self-organizing grids
US20090210619A1 (en) * 2008-02-19 2009-08-20 Atul Mukker Method for handling more than a maximum number of supported drives in a raid configuration
US8015439B2 (en) * 2008-02-19 2011-09-06 Lsi Corporation Method for handling more than a maximum number of supported drives in a raid configuration
US8126864B2 (en) * 2008-02-26 2012-02-28 Buffalo Inc. Method and apparatus for managing folder
US20090216818A1 (en) * 2008-02-26 2009-08-27 Buffalo Inc. Method and apparatus for managing folder
US9229703B1 (en) * 2009-03-04 2016-01-05 Amazon Technologies, Inc. User controlled environment updates in server cluster
US10079716B2 (en) 2009-03-04 2018-09-18 Amazon Technologies, Inc. User controlled environment updates in server cluster
US8595714B1 (en) * 2009-03-04 2013-11-26 Amazon Technologies, Inc. User controlled environment updates in server cluster
US11095505B1 (en) 2009-03-04 2021-08-17 Amazon Technologies, Inc. User controlled environment updates in server cluster
US10735256B2 (en) 2009-03-04 2020-08-04 Amazon Technologies, Inc. User controlled environment updates in server cluster
US9465771B2 (en) 2009-09-24 2016-10-11 Iii Holdings 2, Llc Server on a chip and node cards comprising one or more of same
US9311269B2 (en) 2009-10-30 2016-04-12 Iii Holdings 2, Llc Network proxy for high-performance, low-power data center interconnect fabric
US9262225B2 (en) 2009-10-30 2016-02-16 Iii Holdings 2, Llc Remote memory access functionality in a cluster of data processing nodes
US9876735B2 (en) * 2009-10-30 2018-01-23 Iii Holdings 2, Llc Performance and power optimized computer system architectures and methods leveraging power optimized tree fabric interconnect
US9929976B2 (en) 2009-10-30 2018-03-27 Iii Holdings 2, Llc System and method for data center security enhancements leveraging managed server SOCs
US20120096211A1 (en) * 2009-10-30 2012-04-19 Calxeda, Inc. Performance and power optimized computer system architectures and methods leveraging power optimized tree fabric interconnect
US9977763B2 (en) 2009-10-30 2018-05-22 Iii Holdings 2, Llc Network proxy for high-performance, low-power data center interconnect fabric
US9008079B2 (en) 2009-10-30 2015-04-14 Iii Holdings 2, Llc System and method for high-performance, low-power data center interconnect fabric
US10050970B2 (en) 2009-10-30 2018-08-14 Iii Holdings 2, Llc System and method for data center security enhancements leveraging server SOCs or server fabrics
US9054990B2 (en) 2009-10-30 2015-06-09 Iii Holdings 2, Llc System and method for data center security enhancements leveraging server SOCs or server fabrics
US10135731B2 (en) 2009-10-30 2018-11-20 Iii Holdings 2, Llc Remote memory access functionality in a cluster of data processing nodes
US10140245B2 (en) 2009-10-30 2018-11-27 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US11720290B2 (en) 2009-10-30 2023-08-08 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US9077654B2 (en) 2009-10-30 2015-07-07 Iii Holdings 2, Llc System and method for data center security enhancements leveraging managed server SOCs
US10877695B2 (en) 2009-10-30 2020-12-29 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US9749326B2 (en) 2009-10-30 2017-08-29 Iii Holdings 2, Llc System and method for data center security enhancements leveraging server SOCs or server fabrics
US9075655B2 (en) 2009-10-30 2015-07-07 Iii Holdings 2, Llc System and method for high-performance, low-power data center interconnect fabric with broadcast or multicast addressing
US9866477B2 (en) 2009-10-30 2018-01-09 Iii Holdings 2, Llc System and method for high-performance, low-power data center interconnect fabric
US9680770B2 (en) 2009-10-30 2017-06-13 Iii Holdings 2, Llc System and method for using a multi-protocol fabric module across a distributed server interconnect fabric
US9405584B2 (en) 2009-10-30 2016-08-02 Iii Holdings 2, Llc System and method for high-performance, low-power data center interconnect fabric with addressing and unicast routing
US9454403B2 (en) 2009-10-30 2016-09-27 Iii Holdings 2, Llc System and method for high-performance, low-power data center interconnect fabric
US9479463B2 (en) 2009-10-30 2016-10-25 Iii Holdings 2, Llc System and method for data center security enhancements leveraging managed server SOCs
US9509552B2 (en) 2009-10-30 2016-11-29 Iii Holdings 2, Llc System and method for data center security enhancements leveraging server SOCs or server fabrics
US11526304B2 (en) 2009-10-30 2022-12-13 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US10021806B2 (en) 2011-10-28 2018-07-10 Iii Holdings 2, Llc System and method for flexible storage and networking provisioning in large scalable processor installations
US9585281B2 (en) 2011-10-28 2017-02-28 Iii Holdings 2, Llc System and method for flexible storage and networking provisioning in large scalable processor installations
US9792249B2 (en) 2011-10-31 2017-10-17 Iii Holdings 2, Llc Node card utilizing a same connector to communicate pluralities of signals
US9965442B2 (en) 2011-10-31 2018-05-08 Iii Holdings 2, Llc Node card management in a modular and large scalable server system
US9092594B2 (en) 2011-10-31 2015-07-28 Iii Holdings 2, Llc Node card management in a modular and large scalable server system
US9069929B2 (en) 2011-10-31 2015-06-30 Iii Holdings 2, Llc Arbitrating usage of serial port in node card of scalable and modular servers
US20130282944A1 (en) * 2012-04-23 2013-10-24 Microsoft Corporation Sas integration with tray and midplane server architecture
US9829935B2 (en) * 2012-04-23 2017-11-28 Microsoft Technology Licensing, Llc SAS integration with tray and midplane server architecture
US9648102B1 (en) 2012-12-27 2017-05-09 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US10289467B2 (en) * 2013-03-28 2019-05-14 Hewlett Packard Enterprise Development Lp Error coordination message for a blade device having a logical processor in another system firmware domain
US20160188394A1 (en) * 2013-03-28 2016-06-30 Hewlett-Packard Development Company, L.P. Error coordination message for a blade device having a logical processor in another system firmware domain
US20160335008A1 (en) * 2015-05-15 2016-11-17 Sundar Dasar Systems And Methods For RAID Storage Configuration Using Hetereogenous Physical Disk (PD) Set Up
US9612759B2 (en) * 2015-05-15 2017-04-04 Dell Products Lp Systems and methods for RAID storage configuration using hetereogenous physical disk (PD) set up
US11194487B2 (en) * 2019-10-31 2021-12-07 EMC IP Holding Company LLC Method, electronic device and computer program product of allocating storage disks
US11385818B2 (en) * 2019-10-31 2022-07-12 EMC IP Holding Company LLC Method, electronic device and computer program product for managing disks
US11960937B2 (en) 2022-03-17 2024-04-16 Iii Holdings 12, Llc System and method for an optimizing reservation in time of compute resources based on prioritization function and reservation policy parameter

Also Published As

Publication number Publication date
US9395932B2 (en) 2016-07-19
US20160034222A1 (en) 2016-02-04

Similar Documents

Publication Publication Date Title
US9395932B2 (en) Virtual drive mapping
CA2671333C (en) Method and apparatus for identifying enclosures and devices
US7525957B2 (en) Input/output router for storage networks
KR101506368B1 (en) Active-active failover for a direct-attached storage system
US8948000B2 (en) Switch fabric management
US8103902B2 (en) Disk array including plural exchangeable magnetic disk unit
US10394573B2 (en) Host bus adapter with built-in storage for local boot-up
US8205016B2 (en) Controller receiving a configuration command while receiving an auxiliary supply voltage
US7599392B2 (en) Devices and methods for matching link speeds between controllers and controlled devices
US10162786B2 (en) Storage node based on PCI express interface
US20080034067A1 (en) Configurable blade enclosure
US7610418B2 (en) Maximizing blade slot utilization in a storage blade enclosure
GB2490591A (en) Storage Area Network (SAN) multi-pathing
US10229085B2 (en) Fibre channel hardware card port assignment and management method for port names
US6829658B2 (en) Compatible signal-to-pin connector assignments for usage with fibre channel and advanced technology attachment disk drives
US7793089B2 (en) Configurable backplane connectivity for an electrical device
US20080306991A1 (en) Apparatus and method to configure, format, and test, a data storage subsystem product
KR100992033B1 (en) Reconfigurable fc-al storage loops in a data storage system
US7065661B2 (en) Using request and grant signals to read revision information from an adapter board that interfaces a disk drive
US7917660B2 (en) Consistent data storage subsystem configuration replication in accordance with port enablement sequencing of a zoneable switch
EP1349070A2 (en) Method of locating a storage device
US20130081012A1 (en) Storage drive virtualization
CN101252593B (en) Data storage enclosure management system and providing method thereof
US7486083B2 (en) Managing system stability
JP6777722B2 (en) Route selection policy setting system and route selection policy setting method

Legal Events

Date Code Title Description
AS Assignment

Owner name: EMULEX DESIGN & MANUFACTURING CORPORATION, CALIFOR

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HOTCHKISS, THOMAS RICHMOND;REEL/FRAME:018670/0675

Effective date: 20061130

AS Assignment

Owner name: EMULEX CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:EMULEX DESIGN AND MANUFACTURING CORPORATION;REEL/FRAME:032087/0842

Effective date: 20131205

AS Assignment

Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:EMULEX CORPORATION;REEL/FRAME:036942/0213

Effective date: 20150831

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION

AS Assignment

Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:037808/0001

Effective date: 20160201

AS Assignment

Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE

Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041710/0001

Effective date: 20170119