WO2002037282A2 - A method for provisioning complex data storage devices - Google Patents

A method for provisioning complex data storage devices

Info

Publication number
WO2002037282A2
WO2002037282A2 PCT/US2001/042860
Authority
WO
WIPO (PCT)
Prior art keywords
host
switch
storage device
port
commands
Prior art date
Application number
PCT/US2001/042860
Other languages
French (fr)
Other versions
WO2002037282A3 (en)
Inventor
Jay Soffian
Gerardo Lopez-Fernandez
Original Assignee
Loudcloud, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Loudcloud, Inc. filed Critical Loudcloud, Inc.
Priority to AU2002214680A priority Critical patent/AU2002214680A1/en
Publication of WO2002037282A2 publication Critical patent/WO2002037282A2/en
Publication of WO2002037282A3 publication Critical patent/WO2002037282A3/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 - Interfaces specially adapted for storage systems
    • G06F 3/0628 - Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0629 - Configuration or reconfiguration of storage systems
    • G06F 3/0637 - Permissions
    • G06F 3/0602 - Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/062 - Securing storage systems
    • G06F 3/0622 - Securing storage systems in relation to access
    • G06F 3/0668 - Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/067 - Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]


Abstract

Techniques for provisioning complex storage devices are described. Storage space in the storage device is selected for allocation. The central storage device is then informed of the host's authorization to access this space by way of changes to a volume database. A path is created to enable the host to access the allocated storage through a switching matrix by performing a zoning process in the switch. Finally, the host is configured to access the allocated storage.

Description

A METHOD FOR PROVISIONING COMPLEX DATA STORAGE DEVICES
BACKGROUND OF THE INVENTION
The present invention relates generally to centralized storage devices, and more particularly to techniques for provisioning centralized storage devices to be used in network computing applications.
Many modern day computer systems rely heavily on networking technology. As computer networks become more and more prevalent, a common practice of using centralized storage devices is gaining popularity. This technique ensures that all devices on the computer network have access to the same data, and allows for better maintenance and monitoring by a system administrator.
There are multiple configurations in which centralized data storage devices may be used in a computer networking environment. Some of the more commonly used configurations are implemented in what are known as storage networks. Storage networks are used to tie multiple hosts to a single storage system, and may be either a storage area network (SAN) or a network attached storage system (NAS). These two types of storage networks differ primarily in the manner in which the devices on the network are attached to the storage system. In a SAN configuration, the devices are attached to the centralized storage device by way of channels, or direct connections from the devices to the centralized storage device. In a NAS configuration, the devices of the computer network are attached to the centralized storage device by way of a network, or virtual, connection. Thus, storage devices in SANs are considered to be channel attached devices, while storage devices in NASs are known as network attached devices.
In storage networks, many different types of storage devices may be used. One common type of storage device that may be used in a central location is a redundant array of independent disks (RAID). A RAID uses a controller and two or more disk drives to store data. RAID systems have different configuration levels, such as RAID0, RAID1, RAID2, RAID3, RAID4, RAID5, RAID6, RAID7, RAID10, and RAID53. Depending upon the specific configuration of the RAID device used for centralized data storage, various advantages may be obtained. Some of the advantages generally obtained through use of a RAID device include increased input/output (I/O) performance, increased fault tolerance, and data redundancy. The degree to which each of these advantages is obtained depends upon the specific RAID configuration. For more information regarding RAID technology in general, a general description of each of the RAID configurations can be found on the Internet at the following URL: http://www.raid5.com.
One of the elements involved in RAID storage is data striping. Data striping is a technique whereby data elements are broken into specific blocks and written to different disks within the disk array of the RAID storage device. This improves access time as the controller, which accesses each of the disks within the disk array, may spread the load of I/O requests across many channels and many disk drives. Additionally, data may be backed up using various data redundancy algorithms, such as parity storage algorithms, or the like.
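To make the striping concept concrete, the following sketch distributes fixed-size blocks of a buffer across a set of disks in round-robin fashion. It is illustrative only; the block size, disk count and subroutine name are arbitrary choices, not details taken from any particular RAID implementation.

use strict;
use warnings;

# Split a buffer into fixed-size blocks and assign each block to a disk in
# round-robin order, which is conceptually what a striping controller does.
sub stripe_blocks {
    my ($data, $block_size, $num_disks) = @_;
    my @layout;                     # one list of blocks per disk
    my $i = 0;
    while (length $data) {
        my $block = substr($data, 0, $block_size, '');   # take the next block
        push @{ $layout[ $i++ % $num_disks ] }, $block;
    }
    return \@layout;
}

my $layout = stripe_blocks("ABCDEFGHIJKLMNOP", 4, 3);
for my $disk (0 .. $#{$layout}) {
    print "disk $disk: @{ $layout->[$disk] }\n";
}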
One of the leading manufacturers of RAID storage devices is EMC Corporation of Hopkinton, Massachusetts. EMC Corporation manufactures a variety of RAID storage devices that may be used in storage networks such as SANs and NASs. Enterprise Storage Systems, or EMC devices manufactured by EMC Corporation, are generally highly scalable and provide high availability of information to network clients. While EMC devices are among some of the more commonly used RAID devices for large storage network applications, other systems using RAID technology are also used in many storage network applications. While the advantages of using RAID storage devices, such as EMC devices for example, are desirable, there are also some difficulties associated with using these types of storage devices. For example, because data is written in blocks to multiple disk drives, rather than a single disk drive, it is difficult to ascertain the location of a single file, as it is distributed among the various disk drives of the RAID storage device. Nonetheless, knowledge of where data files are stored on a central storage device is important, particularly for applications wherein the hosts which store their data on the same RAID storage device are not permitted to access each other's data. This situation may occur, for example, in an Internet service hosting network environment, wherein a RAID storage device is used to centrally store data corresponding to multiple clients' accounts. In such a situation, it would be crucial that client A have access to only client A's data, and that no other clients, such as clients B and C, have access to client A's data, e.g., financial transaction information, such as credit card account information, or proprietary business operations information.
Another complicating factor involves the connective topology between the central storage device and the various hosts whose data reside on the central storage device. As shown in Figure 1, typically a central storage device 8, e.g., an EMC frame, has a number 1-N of different ports (some of which are illustrated using reference numerals 10, 12, 14, 16) through which data can be passed via direct connections between hosts 1-N and the central storage device 8. The number of ports provided to the central storage device 8 may be very limited, e.g., twelve. Thus, using the direct attachment scheme depicted in Figure 1 would only permit the central storage device to be connected to N hosts, where N is defined by the number of ports provided to the central storage device 8. This result will be inefficient in an environment wherein the central storage device 8 is able to provide a greater data throughput to each port 10-16 than will be used by an individual host 1-N. Since the central storage device 8 is a very expensive piece of equipment, it is highly desirable to make more efficient use of its limited number of ports.
Accordingly, a switching matrix 18 can be introduced between the central storage device 8 and the hosts which store their data on the central storage device as shown in Figure 2. The switching matrix 18 may, for example, be comprised of a number of fiber switches (not shown). By introducing the switching matrix 18 between the central storage device 8 and the hosts, a larger number of hosts 1-Y can share the limited number of ports 1-N available at the central storage device 8. This provides a more efficient usage of the capabilities of the central storage device 8, but at the expense of introducing additional complexity into the pathway between any given host and its data. This complexity makes it particularly challenging to accomplish the task of provisioning these types of central storage devices, i.e., to identify and allocate storage space to any of the hosts 1-Y on an ongoing basis. Accordingly, it would be desirable to provide a system and method for provisioning a central storage device, which provides a consistent, and easy to follow, set of techniques for performing this task.
SUMMARY
According to exemplary embodiments of the present invention, these and other problems associated with provisioning complex storage devices are addressed by providing automated techniques for performing the tasks associated with provisioning.
According to exemplary embodiments of the present invention, unallocated storage space in the storage device is selected for allocation. The central storage device is then informed of the host's authorization to access this space by way of changes to a volume database. A path is created to enable the host to access the allocated storage through a switching matrix by performing a zoning process in the switch. Finally, the host is configured to access the allocated storage. Each of these steps can be performed, at least in part, by use of various software routines described herein to reduce the time associated with these tasks, as well as reducing errors associated therewith. According to another exemplary embodiment of the present invention, instructions stored on a computer-readable medium can be executed to determine at least one of: (a) a first set of commands for modifying a database associated with access rights on a central storage device; (b) a second set of commands for modifying a switch configuration to provide a path between a host device and the central storage device; and (c) a third set of commands for modifying a configuration file associated with the host device to inform the host device that a predetermined storage area has been allocated thereto.
Further features of the invention, and the advantages offered thereby, are explained in greater detail hereinafter with reference to specific embodiments illustrated in the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 depicts a conventional direct attachment topology between a central storage device and a plurality of hosts.
Figure 2 depicts a conventional attachment topology wherein a switching matrix has been inserted between the central storage device and a plurality of hosts.
Figure 3 depicts an attachment topology according to the present invention.
Figure 4 is a diagram illustrating details of storage media used in connection with the present invention.
Figure 5 is a diagram depicting ports which are available at the central storage device.
Figure 6 illustrates an exemplary zoning of the ports of a switch.
Figure 7 is a flow diagram depicting an overall method according to the present invention.
Figures 8 and 9 are flow diagrams which illustrate in more detail some of the steps set forth in Figure 7 according to exemplary embodiments of the present invention.
DETAILED DESCRIPTION
To facilitate an understanding of the principles and features of the present invention, it is explained hereinafter with reference to its implementation in an illustrative embodiment. In particular, the invention is described in the context of a RAID network storage device, such as an EMC storage device, used to store data, which is accessible via a plurality of hosts. It will be appreciated, however, that this is not the only embodiment in which the invention can be implemented. Rather, it can find utility in a variety of computer network configurations, using a variety of suitable storage devices, as will become apparent from an understanding of the principles that underscore the invention.
An exemplary storage network of the type in which the present invention can be employed is illustrated in block diagram form in Figure 3. Therein, a central storage device 30, e.g., an EMC frame, is connected via a plurality of optical links to a switching matrix 32. In this exemplary embodiment, four fiber switches 34, 36, 38 and 40 are employed in the switching matrix 32, each of which is connected to the central storage device 30 using three ports and three optical links. The switches 34, 36, 38 and 40 are each connected to two of the other switches 34, 36, 38 and 40 as shown to enable a single switch configuration to propagate throughout the switching matrix 32 rather than requiring that each switch in the matrix be separately configured. Those skilled in the art will appreciate that the exemplary topology of Figure 3 is only one exemplary manner in which a switching matrix 32 can be configured for interfacing between central storage device 30 and a plurality of hosts and that the present invention is applicable to any such configurations. Moreover, although only one host 42 is illustrated in Figure 3, to permit the exemplary configuration of the switching matrix 32 to stand out more clearly, those skilled in the art will further appreciate that a large number of hosts may be connected to the switching matrix 32 so as to access their data stored in the central data storage device 30. An access terminal 43 is also shown which permits an operator to have access to each of the switches 34, 36, 38 and 40, as well as the central storage device 30 and the host 42, for configuration purposes associated with the provisioning of the central storage device 30, which will be described in more detail below.
Figure 4 is a diagram of the storage media contained within an exemplary central storage device 30. It will be recognized by those skilled in the art that similar diagrams may be devised for specific network storage devices, such as EMC devices and other devices, by way of modifications to the elements of this exemplary diagram. Therein, multiple segments of the storage medium are defined as logical volumes (sometimes referred to as "hypers") 202, 204, 206, 208, 210, 212, 214, 216, and 218. Each of these volumes is defined by a beginning and ending address, and has a field indicating the size of the volume. Additionally, each volume contains a volume identification field that uniquely identifies that particular volume within the central storage device 30. Volumes may be of different sizes, as illustrated by the varying widths of the volumes in Figure 4. For example, the first volume 202 is approximately one half of the size of the third volume 206, and approximately one third of the size of the fifth volume 210. That is, if the first volume 202 has a size of one gigabyte, the third volume 206 would have a size of approximately two gigabytes, and the fifth volume 210 would have a size of approximately three gigabytes. The storage medium illustrated in the volume diagram 200 of Figure 4 is also characterized by larger segments of memory called groups (sometimes referred to as "metas"). A group is made up of multiple, contiguous volumes within the storage medium. For example, group A 220 is made up of the first five volumes 202, 204, 206, 208, 210. Group B 222 is made up of the next four contiguous volumes 212, 214, 216, 218. The storage medium 200 of Figure 4 may include a larger number of groups; however, only two are illustrated therein for the sake of convenience. The groups of the storage medium 200 are defined by the beginning address of the first volume contained within that group, and the ending address of the last volume of the contiguous set of volumes contained within the group. For example, group A 220 is defined by the beginning address of volume 202, which is the first volume contained within group A 220, and the ending address of volume 210, which is the last contiguous volume contained within group A 220. Similarly, group B 222 is defined by the beginning address of volume 212, which is the first volume contained within group B 222, and the ending address of volume 218, which is the last contiguous volume contained within group B 222.
It will be appreciated by those skilled in the art that the number of groups and volumes defined within a central storage device 30 may vary according to the various data characteristics intended to be stored thereon. Thus, it will be appreciated that the definition of groups and volumes need not be limited to those illustrated in Figure 4, which are provided for illustration purposes only. Additionally, the number and size of volumes contained within each group may vary widely depending upon the specific application for which a storage device containing such a storage medium configuration is used, and need not be limited by the figures or illustrations associated with the volumes contained in Figure 4.
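The relationship between hypers and metas described above can be captured in a few lines of code. The sketch below is purely illustrative: the field names, addresses and sizes are invented, and only the rule that a group spans from the beginning address of its first volume to the ending address of its last contiguous volume is taken from the description.

use strict;
use warnings;

# Illustrative model of Figure 4: each logical volume ("hyper") has an
# identifier, a beginning address, an ending address and a size; a group
# ("meta") is an ordered set of contiguous hypers.
my @group_a = (
    { id => 202, begin => 0x0000, end => 0x03ff, size_gb => 1 },
    { id => 204, begin => 0x0400, end => 0x07ff, size_gb => 1 },
    { id => 206, begin => 0x0800, end => 0x0fff, size_gb => 2 },
);

# A meta is defined by the beginning address of its first hyper and the
# ending address of its last contiguous hyper.
sub meta_range {
    my @hypers = @_;
    return ( $hypers[0]{begin}, $hypers[-1]{end} );
}

my ($begin, $end) = meta_range(@group_a);
printf "group A spans addresses 0x%04x-0x%04x\n", $begin, $end;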
In Figure 5, an access panel 300 is provided for allowing direct connections to the central storage device for accessing data contained on the storage mediums thereof. This access is provided by way of multiple adaptors 302, 304, 306, 308, 310, 312. Each of these adaptors may be an individual computer interface card, which may be interchangeable, or more permanent. Upon each of these adaptors 302, 304, 306, 308, 310, 312 are one or more adaptor ports 314, 316, 318, 320, 322, 324, 326, 328, 330, 332, 334, 336, which correspond to the ports 1-N described above with respect to Figures 1 and 2. By way of these adaptor ports of the adaptor cards, access may be gained to the storage medium of the storage device. For example, group A 220 of the storage medium 200 in Figure 4 may be displayed, or accessed, through one or more ports contained within the access panel 300. Which group is seen through which port or ports may be a dynamic definition that changes as data contained within the storage device changes.
It will be appreciated by those skilled in the art that actual physical adaptor ports, such as the ones shown in Figure 5, are to be used in SAN storage networks, as they provide a channel attached storage network by allowing devices to be connected directly via cable to the storage device. However, it should be recognized that the ports illustrated in Figure 5 are only one embodiment of the ports that may be used in storage devices associated with the present invention. It is contemplated that virtual adaptor ports, which allow for the storage device to provide access to portions of data stored therein in a NAS storage network, which is network attached, may also be used with an embodiment of the present invention. Accordingly, any such network attached devices making use of virtual ports or network ports are intended to be encompassed within the scope of the present invention.
Figure 6 is a block diagram which depicts an exemplary set of ports associated with fiber switch unit 34. Although sixteen ports are illustrated by the dots set forth therein, those skilled in the art will appreciate that any number of ports may be built into fiber switch 34. Three of the ports 400, 402 and 404, in the upper left-hand portion of Figure 6, are connected to the central storage device 30. Two of the ports are connected to switches 36 and 40 in switching matrix 32. Some or all of the remaining ports are connected to host devices (not shown in this Figure). These connections may be accomplished by way of any communication channel, such as direct cable wiring, infrared line-of-sight connections, or the like. The switch ports shown in Figure 6 may be grouped into zones, which, by way of wiring or other connections, provide common access to the ports within each zone. Thus, for example, the six ports in zone 1 (in addition to port 400) share common access to the portions of the media within central storage device 30 that are made available by device 30's internal controller (not shown) to the adaptor port to which port 400 is connected. Likewise, those ports zoned in zones 2 and zone 3 share common access with respective portions of the media of central storage device 30 to which ports 402 and 404 are connected via adaptor ports, respectively. Note that although only contiguous ports are zoned together in the example of Figure 6, such is not required and is portrayed therein to more clearly describe this zoning concept. Moreover, while some of the ports are depicted as not belonging to any zones, such will only be the case for ports that are not connected to a host. Lastly, it will be seen that ports can belong to more than one zone, e.g., port 408, such that those ports can transact data through multiple adaptor ports on the central storage device 30.
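The zone membership just described is, in effect, a mapping from zone names to switch ports, and a port that appears in more than one zone (such as port 408) can reach more than one adaptor port on the central storage device. A minimal sketch of that bookkeeping follows; the zone names and port numbers are invented for illustration.

use strict;
use warnings;

# Each zone groups host-facing switch ports with one storage-facing port
# (400, 402 or 404 in Figure 6). Zone names and memberships are made up.
my %zones = (
    zone1 => [ 400, 405, 406, 407, 408, 409, 410 ],
    zone2 => [ 402, 408, 411, 412 ],    # port 408 belongs to two zones
    zone3 => [ 404, 413, 414 ],
);

# List every zone a given switch port belongs to, i.e. every path it has
# toward the central storage device.
sub zones_for_port {
    my ($port) = @_;
    my @found;
    for my $zone (sort keys %zones) {
        push @found, $zone if grep { $_ == $port } @{ $zones{$zone} };
    }
    return @found;
}

printf "port 408 is in zones: %s\n", join(', ', zones_for_port(408));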
Switch port zoning provides the mechanism whereby a plurality of hosts can be aggregated into the three data pipes available to switch 34 for connection with the central storage device 30. However, as mentioned above, this feature also adds to the complexity associated with allocating storage on the central storage device and defining the pathway thereto. Accordingly, exemplary embodiments of the present invention provide techniques for handling these issues.
As an overview, provisioning of central storage devices like those described above comprises three general steps which are depicted by the flow diagram of Figure 7. First, at step 700, the controller (not shown) in the central storage device 30 needs to be informed that a particular host is authorized to access a particular range of storage within the central storage device. Then, a path is created from the host through the switching matrix to the allocated storage at step 702. Note that steps 700 and 702 can be performed in the reverse of the order in which they are described herein. Finally, the host is informed of how to find the allocated storage via the path that has been created at step 704. A more detailed example of how each of these steps can be performed according to an exemplary embodiment of the present invention will now be discussed.
Consider, solely for the sake of illustration, that it is desired to provision the central storage device 30 by allowing a host to access storage as follows:
[Table omitted in the source text: exemplary allocation parameters for the central storage device 30]
wherein the headings refer to the following parameters:
Symm - is an identifier for the central storage device 30;
Meta - is a group (meta) storage space identifier;
T/L - identifies the device target/LUN within the central storage device to be allocated;
Hyper Range - identifies the specific volumes of storage within the identified Meta to be allocated;
Range - provides the high and low address of the hypers being allocated; and
Host - identifies the host which is to be allowed to access the identified storage space.
The storage space identified in the table can, for the purpose of this exemplary embodiment, be identified by manually referring to a spreadsheet which contains the current allocation of storage space on the central storage device 30.
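For concreteness, a single allocation request using these headings might be represented as a record like the one below. The values shown are placeholders chosen for illustration (the table's actual values are not reproduced in this text); only the field meanings follow the definitions above.

use strict;
use warnings;

# One provisioning request, using the table headings described above.
# All values are illustrative placeholders.
my %request = (
    symm        => 'sym2621',              # identifier of the central storage device 30
    meta        => 'meta_130',             # group (meta) storage space identifier
    target_lun  => '0/3',                  # device target/lun within the storage device
    hyper_range => '130-134',              # specific hypers within the identified meta
    range       => [ '0x0130', '0x0134' ], # low and high addresses of those hypers
    host        => 'm0042sc11',            # host to be allowed access to this space
);

printf "allocate %s hypers %s (lun %s) to host %s\n",
    $request{symm}, $request{hyper_range}, $request{target_lun}, $request{host};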
Alternatively, the available storage space can be determined automatically according to an exemplary embodiment of the present invention by running a software routine that probes the central storage device and returns, as output, the unallocated storage space on the device. According to an exemplary tool which performs this function (which is attached hereto as Appendix A), the unallocated storage space can be identified in the output of this software tool by sets of parameters which include, for example, a serial number associated with the central storage device, a range of unallocated volumes, a logical unit number (LUN) associated with the identified range, a size (e.g., in gigabytes) associated with the identified range and the port(s) on the central storage device via which the identified range can be accessed. In either case, once the available storage space on the central storage device 30 is identified, the provisioning process can be performed as follows.
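A minimal sketch of how such free-space output could be consumed is shown below. The column layout follows the sample listing reproduced in Appendix A (frame identifier, hyper range, LUN, size in gigabytes, and the FA ports); the parsing code itself is illustrative rather than part of the tool.

use strict;
use warnings;

# Parse free-space lines such as "sym1360 033-03c 12 43.11 3b,5b,13b" into
# frame id, unallocated hyper range, LUN, size in GB and the FA ports through
# which the range can be reached.
sub parse_free_space {
    my @ranges;
    for my $line (@_) {
        my ($sym, $range, $lun, $size_gb, $ports) = split ' ', $line;
        next unless defined $ports;
        push @ranges, {
            sym     => $sym,
            range   => $range,
            lun     => $lun,
            size_gb => $size_gb,
            ports   => [ split /,/, $ports ],
        };
    }
    return @ranges;
}

my @free = parse_free_space('sym1360 033-03c 12 43.11 3b,5b,13b');
printf "%s %s: %s GB via ports %s\n",
    $free[0]{sym}, $free[0]{range}, $free[0]{size_gb}, join(',', @{ $free[0]{ports} });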
Step 700 - Informing the Central Data Storage Device of the Host's Authorization
As mentioned above, employing a switching matrix enables the data pipes between each fiber switch and the central storage device to aggregate data transactions associated with multiple host devices. Since multiple host devices are, therefore, using each port on the central data storage device, it is necessary for the central data storage device to provide access to different storage areas via each of its ports. To accomplish this function, each central storage device 30 will typically include a volume on which is stored a database that masks the data based on the authorization granted to each host device. This database can be edited for the purpose of granting new rights, e.g., those described in table above, as part of the provisioning process.
For example, if the central storage device 30 is an EMC frame, then this editing process can be performed using a utility provided by the EMC Corporation known as fpath. More specifically, the fpath utility includes functions for backing up the database, adding a new host device, changing the name of a host, listing the contents of the database and refreshing the configuration of the database which are used in this process. These functions can be used as described in the flow diagram of Figure 8 to inform the central storage device 30 of the new host's access authorization.
Therein, at step 800, the database is backed up to preserve the original records. Next, at step 802, an entry is added to the database using the appropriate fpath command for adding a device. This command will include, as arguments, a unique identifier of the host (sometimes referred to as a worldwide name (wwn)), the port on the central storage device through which this host will obtain access and the range of storage (e.g., as in the table) to which this host will be permitted access. Next, optionally, an alias of the entry created in step 802 may be added to the database at step 804 to make it easier to visually locate the entry of step 802 in the database. Finally, the entries are verified using the fpath listing function (step 806) and the database can be saved (step 808) to complete the task of informing the central storage device 30 of the new host's access authorization.
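The sequence of steps 800-808 lends itself to a small command generator such as the sketch below. The fpath subcommand names shown (backupdb, adddev, chgname, lsdb, refresh) are assumptions standing in for the backup, add-device, alias, listing and refresh functions mentioned above, and no flags are given; the vendor documentation governs the exact syntax.

use strict;
use warnings;

# Emit a command outline mirroring steps 800-808 of Figure 8 for one host.
# Subcommand names are assumptions for the functions described in the text,
# not verified fpath syntax; treat each line as a placeholder.
sub fpath_steps {
    my ($host_wwn, $fa_port, $hyper_range, $alias) = @_;
    return (
        "fpath backupdb   # step 800: back up the masking database",
        "fpath adddev     # step 802: grant wwn $host_wwn access to hypers $hyper_range via fa $fa_port",
        "fpath chgname    # step 804 (optional): alias the new entry as $alias",
        "fpath lsdb       # step 806: list the database and verify the new entry",
        "fpath refresh    # step 808: save/refresh the configuration",
    );
}

print "$_\n" for fpath_steps('200000e069c0183c', '1a', '130-134', 'AAACORP_m0042');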
Step 702 - Creating a Path Through the Switching Matrix by Zoning
The next step in the provisioning process of Figure 7 is to establish the zones on the switch which aggregate multiple host ports on the switch to a single central storage device port on the switch, as depicted conceptually in Figure 6. Each time a host is added, and occasionally when a reconfiguration is performed, the switch zoning will be changed. This can be accomplished as set forth in the flow diagram of Figure 9. Therein, at step 900, the switch port to which the host of interest is connected is first determined. This can be accomplished by establishing a Telnet session with the switch using access terminal 43, which can establish a communication link with a processor (not shown) embedded in the switch. Those skilled in the art will be aware of various Telnet emulation programs which can be used to establish these types of communication sessions. Once the Telnet session is established, the port connections of the switch can be revealed using a "Switchshow" command, which command might, for example, reveal:
port 11  20:00:00:e0:69:c0:18:3c  m0042sc11 fca-pci0
port 12  20:00:00:e0:69:c0:17:59  m0043sc11 fca-pci0
port 13  50:06:04:82:bc:01:be:00  sym2584 fa1a
port 15  50:06:04:82:bc:01:c7:40  sym2621 fa1a
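Step 900 amounts to scanning output of this form for the host's worldwide name. A small sketch of that lookup, assuming only the simple column layout shown above:

use strict;
use warnings;

# Find the switch port to which a given WWN is attached by scanning lines of
# the form "port 11  20:00:00:e0:69:c0:18:3c  m0042sc11 fca-pci0" (step 900).
sub port_for_wwn {
    my ($wwn, @lines) = @_;
    for my $line (@lines) {
        if ($line =~ /^port\s+(\d+)\s+(\S+)/ and lc($2) eq lc($wwn)) {
            return $1;
        }
    }
    return;
}

my @switchshow = (
    'port 11  20:00:00:e0:69:c0:18:3c  m0042sc11 fca-pci0',
    'port 15  50:06:04:82:bc:01:c7:40  sym2621 fa1a',
);
my $port = port_for_wwn('20:00:00:e0:69:c0:18:3c', @switchshow);
print "host is on switch port $port\n";    # prints 11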
For this example, therefore, zoning requires the addition of port 11 to the zone which includes port 15. This is performed in step 902 by executing a "zoneCreate" command via the Telnet session with the switch as:
zoneCreate "AAACORP_m0042_fcaO_sym2621_falA", "1,15; 1,11"
Then, at step 904, the zone is added to a selected configuration file to be stored on the switch, again using a suitable Telnet command. Each fiber switch 34, 36, 38, and 40 may have multiple configuration files which accumulate the matchings between host ports and central storage device ports on the switch. At step 906, the new configuration is enabled via the terminal 43, which performs compilation and verification of the new configuration. If errors are detected by the switch processor, then the switch will indicate the error to the terminal 43 and the configuration will not be modified. Otherwise, the new configuration is saved and the zoning step is complete.
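A sketch of how the commands for steps 902-906 could be generated from the port information gathered above follows. The zoneCreate line and the zone-naming convention reproduce the example just given; the cfgAdd, cfgEnable and cfgSave lines are assumed names for the 'add to a configuration', 'enable' and 'save' operations, and the configuration name is invented.

use strict;
use warnings;

# Build the switch commands for steps 902-906: create a zone containing the
# storage-device port and the host port, add it to a configuration file on
# the switch, then enable and save that configuration. Only zoneCreate is
# taken from the text; the remaining command names are assumptions.
sub zoning_commands {
    my %arg = @_;    # customer, host, hba, sym, fa, domain, sym_port, host_port, config
    my $zone = join '_', @arg{qw(customer host hba sym fa)};
    return (
        qq(zoneCreate "$zone", "$arg{domain},$arg{sym_port}; $arg{domain},$arg{host_port}"),
        qq(cfgAdd "$arg{config}", "$zone"),   # step 904: add the zone to a configuration
        qq(cfgEnable "$arg{config}"),         # step 906: compile, verify and enable
        q(cfgSave),                           #           then save the configuration
    );
}

print "$_\n" for zoning_commands(
    customer => 'AAACORP', host => 'm0042', hba => 'fca0',
    sym => 'sym2621', fa => 'fa1A', domain => 1,
    sym_port => 15, host_port => 11, config => 'switch1_config',
);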
Step 704 - Informing the Host How to Find the Allocated Storage
Once the central storage device 30 and the switching matrix 32 are ready to present the specified storage space to the new host, the next step is to configure the host so that it can access the specified storage space. This is accomplished by modifying a configuration file (sd.conf) stored in the host and rebooting the host. The syntax for this entry is:
name= "sd" class="scsi" target= <t> lun= <d> hba=" <hba> " wwn=" <fa-wwn> "; where <t> is the target number < d > is the lun number <hba> is the hba
<fa-wwn> is the symm's fa wwn
Thus, for host m0042sc11:
# switch 1 name="sd" class="scsi" target=0 lun=3 hba="fca-pciO" wwn="50060482bc01c740"; name="sd" class="scsi" target=0 lun=4 hba="fca-pciO" wwn="50060482bc01c740"; name="sd" class="scsi" target=0 lun=5 hba="fca-pciO" wwn="50060482bc01c740";
A process referred to as persistent binding can then be applied to tie devices to a specific wwn in the central storage device.
According to exemplary embodiments of the present invention, each of the steps described above with respect to Figure 7 can be further automated by providing software code to perform the configuration steps for the central storage device 30, switching matrix 32 and host 42. Exemplary code for automating these processes is provided below in Appendix B. This software tool takes, as input, the switch port assignments provided by the Switchshow command described above, the host identifying names (i.e., worldwide names (WWNs)) and the free space available on the central storage device 30, e.g., as output by the software tool described above and provided as Appendix A. Given this information, the software tool of Appendix B will determine (1) to which port(s) the central data storage device 30 is connected, (2) to which port the host 42 is connected and (3) which unallocated space to assign to this particular host as its storage area. As output, the software tool of Appendix B generates (1) the fpath commands used in step 700 (as well as steps 800-808) to modify the central storage device's masking database, (2) the switch commands needed to create the path of step 702 (as well as steps 900-906) and (3) the configuration file information to be added to the host per step 704. This software tool provides a valuable alternative to attempting to manually generate the necessary commands to perform the steps of Figure 7, which can reduce the time associated with provisioning a central storage device by several fold, as well as reducing the possibility of making errors during that process. Moreover, automating this process enforces consistency, e.g., in naming conventions, in the provisioning process. Still further, this automation makes it much easier to train people to provision such storage devices.
The presently disclosed embodiments are, therefore, considered in all respects to be illustrative and not restrictive. The scope of the invention is indicated by the appended claims, rather than the foregoing description, and all changes that come within the meaning and range of equivalence thereof are intended to be embraced therein.
APPENDIX A
=head1 SAMPLE INPUT

scl1_switch3:admin> switchshow
switchName: scl1_switch3
switchType: 2.4
switchState: Online
switchRole: Principal
switchDomain: 3
switchId: fffc03
switchWwn: 10:00:00:60:69:10:28:98
port 0: sw Online F-Port 20:00:00:e0:69:c0:22:6f
port 1: sw Online F-Port 20:00:00:e0:69:c0:1f:fb
port 2: sw No_Light
port 3: sw No_Light
port 4: sw No_Light
port 5: sw Online L-Port
port 6: sw Online Loopback->6
port 7: sw Online F-Port 20:00:00:e0:69:c0:22:3b
port 8: sw Online F-Port 20:00:00:e0:69:c0:18:e1
port 9: sw No_Light
port 10: sw Online F-Port 20:00:00:e0:69:c0:19:60
port 11: sw Online F-Port 20:00:00:e0:69:c0:20:a6
port 12: sw Online F-Port 20:00:00:e0:69:c0:20:8e
port 13: sw Online F-Port 50:06:04:82:bc:01:be:00
port 14: sw Online F-Port 20:00:00:e0:69:c0:15:21
port 15: sw Online F-Port 50:06:04:82:bc:01:c7:50
> dmesg | grep 'Fibre Channel WWN'
Jul 28 03:58:22 m0322sc11.cust.loudcloud.com unix: fca-pci0: Fibre Channel WWN: 200000e069c0224c
Jul 28 03:58:34 m0322sc11.cust.loudcloud.com unix: fca-pci1: Fibre Channel WWN: 200000e069c018e1
Jul 31 23:19:45 m0322sc11.cust.loudcloud.com unix: fca-pci0: Fibre Channel WWN: 200000e069c0224c
Jul 31 23:19:57 m0322sc11.cust.loudcloud.com unix: fca-pci1: Fibre Channel WWN: 200000e069c018e1
> metafind --metas=free
sym1360 033-03c  12  43.11 3b,5b,13b
sym1360 130-134   8  21.55 3b,5b,13b
sym1360 135-139   9  21.55 3b,5b,13b
sym1424 033-03c  12  43.11 3b,5b,13b
sym1424 130-134   8  21.55 3b,5b,13b
sym1424 135-139   9  21.55 3b,5b,13b
=cut

use strict;

# note: populate from 'fpath lssymmfas' for each sym frame.
my %wwn_sym = (
'50060482bcc3dc02' => 'sym1360_fa3a',  '50060482bcc3dc03' => 'sym1360_fa4a',  '50060482bcc3dc04' => 'sym1360_fa5a',
'50060482bcc3dc0c' => 'sym1360_fa13a', '50060482bcc3dc0d' => 'sym1360_fa14a', '50060482bcc3dc12' => 'sym1360_fa3b',  '50060482bcc3dc13' => 'sym1360_fa4b',  '50060482bcc3dc14' => 'sym1360_fa5b',
'50060482bcc3dc1c' => 'sym1360_fa13b', '50060482bcc3dc1d' => 'sym1360_fa14b', '50060482bcc3ec02' => 'sym1424_fa3a',  '50060482bcc3ec03' => 'sym1424_fa4a',  '50060482bcc3ec04' => 'sym1424_fa5a',
'50060482bcc3ec0c' => 'sym1424_fa13a', '50060482bcc3ec0d' => 'sym1424_fa14a', '50060482bcc3ec12' => 'sym1424_fa3b',  '50060482bcc3ec13' => 'sym1424_fa4b',  '50060482bcc3ec14' => 'sym1424_fa5b',
'50060482bcc3ec1c' => 'sym1424_fa13b', '50060482bcc3ec1d' => 'sym1424_fa14b', '50060482bc01a680' => 'sym2490_fa1a',  '50060482bc01a683' => 'sym2490_fa4a',  '50060482bc01a684' => 'sym2490_fa5a',
'50060482bc01a68b' => 'sym2490_fa12a', '50060482bc01a68c' => 'sym2490_fa13a', '50060482bc01a68f' => 'sym2490_fa16a', '50060482bc01a690' => 'sym2490_fa1b',  '50060482bc01a693' => 'sym2490_fa4b',
'50060482bc01a694' => 'sym2490_fa5b',  '50060482bc01a69b' => 'sym2490_fa12b', '50060482bc01a69c' => 'sym2490_fa13b', '50060482bc01a69f' => 'sym2490_fa16b', '50060482bc01ab40' => 'sym2509_fa1a',
'50060482bc01ab43' => 'sym2509_fa4a',  '50060482bc01ab44' => 'sym2509_fa5a',  '50060482bc01ab4b' => 'sym2509_fa12a', '50060482bc01ab4c' => 'sym2509_fa13a', '50060482bc01ab4f' => 'sym2509_fa16a',
'50060482bc01ab50' => 'sym2509_fa1b',  '50060482bc01ab53' => 'sym2509_fa4b',  '50060482bc01ab54' => 'sym2509_fa5b',  '50060482bc01ab5b' => 'sym2509_fa12b', '50060482bc01ab5c' => 'sym2509_fa13b',
'50060482bc01ab5f' => 'sym2509_fa16b', '50060482bc01be00' => 'sym2584_fa1a',  '50060482bc01be03' => 'sym2584_fa4a',  '50060482bc01be04' => 'sym2584_fa5a',  '50060482bc01be0b' => 'sym2584_fa12a',
'50060482bc01be0c' => 'sym2584_fa13a', '50060482bc01be0f' => 'sym2584_fa16a', '50060482bc01be10' => 'sym2584_fa1b',  '50060482bc01be13' => 'sym2584_fa4b',  '50060482bc01be14' => 'sym2584_fa5b',
'50060482bc01be1b' => 'sym2584_fa12b', '50060482bc01be1c' => 'sym2584_fa13b', '50060482bc01be1f' => 'sym2584_fa16b', '50060482bc01c740' => 'sym2621_fa1a',  '50060482bc01c743' => 'sym2621_fa4a',
'50060482bc01c744' => 'sym2621_fa5a',  '50060482bc01c74b' => 'sym2621_fa12a', '50060482bc01c74c' => 'sym2621_fa13a', '50060482bc01c74f' => 'sym2621_fa16a', '50060482bc01c750' => 'sym2621_fa1b',
'50060482bc01c753' => 'sym2621_fa4b',  '50060482bc01c754' => 'sym2621_fa5b',  '50060482bc01c75b' => 'sym2621_fa12b', '50060482bc01c75c' => 'sym2621_fa13b', '50060482bc01c75f' => 'sym2621_fa16b',
'50060482bc012740' => 'sym1981_fa1a',  '50060482bc012743' => 'sym1981_fa4a',  '50060482bc012744' => 'sym1981_fa5a',  '50060482bc01274b' => 'sym1981_fa12a', '50060482bc01274c' => 'sym1981_fa13a',
'50060482bc01274f' => 'sym1981_fa16a', '50060482bc012750' => 'sym1981_fa1b',  '50060482bc012753' => 'sym1981_fa4b',  '50060482bc012754' => 'sym1981_fa5b',  '50060482bc01275b' => 'sym1981_fa12b',
'50060482bc01275c' => 'sym1981_fa13b', '50060482bc01275f' => 'sym1981_fa16b', '50060482bc01e980' => 'sym2758_fa1a',  '50060482bc01e983' => 'sym2758_fa4a',  '50060482bc01e984' => 'sym2758_fa5a',
'50060482bc01e98b' => 'sym2758_fa12a', '50060482bc01e98c' => 'sym2758_fa13a', '50060482bc01e98f' => 'sym2758_fa16a', '50060482bc01e990' => 'sym2758_fa1b',  '50060482bc01e993' => 'sym2758_fa4b',
'50060482bc01e994' => 'sym2758_fa5b',  '50060482bc01e99b' => 'sym2758_fa12b', '50060482bc01e99c' => 'sym2758_fa13b', '50060482bc01e99f' => 'sym2758_fa16b', '50060482bfd0c782' => 'sym0960_fa3a',
'50060482bfd0c783' => 'sym0960_fa4a',  '50060482bfd0c784' => 'sym0960_fa5a',  '50060482bfd0c78b' => 'sym0960_fa12a', '50060482bfd0c78c' => 'sym0960_fa13a', '50060482bfd0c78d' => 'sym0960_fa14a',
'50060482bfd0c792' => 'sym0960_fa3b',  '50060482bfd0c793' => 'sym0960_fa4b',  '50060482bfd0c794' => 'sym0960_fa5b',  '50060482bfd0c79b' => 'sym0960_fa12b', '50060482bfd0c79c' => 'sym0960_fa13b',
'50060482bfd0c79d' => 'sym0960_fa14b', '50060482bfd0b802' => 'sym1064_fa3a',  '50060482bfd0b803' => 'sym1064_fa4a',  '50060482bfd0b804' => 'sym1064_fa5a',  '50060482bfd0b80b' => 'sym1064_fa12a',
'50060482bfd0b80c' => 'sym1064_fa13a', '50060482bfd0b80d' => 'sym1064_fa14a', '50060482bfd0b812' => 'sym1064_fa3b',  '50060482bfd0b813' => 'sym1064_fa4b',  '50060482bfd0b814' => 'sym1064_fa5b',
'50060482bfd0b81b' => 'sym1064_fa12b', '50060482bfd0b81c' => 'sym1064_fa13b', '50060482bfd0b81d' => 'sym1064_fa14b',
);

my (%sym_wwn); # key: $sym; value: {$sym} -> {$fa} = $wwn
# create the %sym_wwn hash from the %wwn_sym hash above
while (my ($wwn, $sym_fa) = each %wwn_sym) {
unless ($wwn =~ /^[a-z0-9]{16}$/) { die "invalid key in \%wwn: $wwn => $sym_fa\n"; }
unless ($sym_fa =~ /^sym\d+_fa\d+[ab]$/) { die "invalid value in \%wwn: $wwn => $sym_fa\n"; }
my ($sym, $fa) = split(/_/, $sym_fa);
$sym_wwn{$sym}->{$fa} = $wwn;
}
my (%w); # key: $wwn; value [$switchName, $port]
my (%s); # key: $switchName; value {switch_loc} = $switch_loc
# {switch_num} = $switch_num # {switchDomain} = $switchDomain
# {switchWwn} = $switchWwn
# {ports} -> [$port] = $wwn
# {lowestName} = $lowestName my (%h); # key $hostName; value {fca} -> {$num} = $wwn; # {sym} -> $sym; my (%sym); # key $sym; value {$sym} -> {$meta} -> {hypers} = "$hypers"
# -> {fas} -> {$fa} = 1
# -> {lun} = $lun
my (%san);           # key $switchWwn; value {$partnerWwn} = $count;
my (%san_wwn);       # key $switchWwn; value $switchName
my (%switch_to_sym); # key $switchName; value {$sym} -> {$fa} = $port;
my ($switchName, $hostName, $clustered);
if (@ARGV[0] eq "-clustered") { $clustered = 1; shift @ARGV; }
unless (@ARGV) { usage(); }
while (<>) {
if (/^switchName:\s+(\w+)/) {
$switchName = $1;
unless ($switchName =~ /^([^_]+)_(?:switch|pfs)(\d+)/) { die "invalid switch name: $switchName\n";
}
$s{$switchName}->{switch_loc} = $1;
$s{$switchName}->{switch_num} = $2; next;
}
if (/^switchDomain:\s+(\d+)/) {
unless ($1 == $s{$switchName}->{switch_num}) { warn "switchDomain ($1) is not consistent with switch name ($switchName)\n";
}
$s{$switchName}->{switchDomain} = $1; next;
}
if (/^switchWwn:\s+([0-9a-f:]+)/) { $s{$switchName}->{switchWwn} = $1; $san_wwn{$1} = $switchName; next; }
if (/^port\s+(\d+):\s+(?:sw|id)\s+Online\s+F\-Port\s+((?:[0-9a-f]{2}:){7}[0-9a-f]{2})/) {
my ($port, $wwn) = ($1, $2); $wwn =~ s/://g;
$s{$switchName}->{ports}->[$port] = $wwn; $w{$wwn} = [$switchName, $port];
if ($wwn =~ /^500604/) {
my $sym_fa = $wwn_sym{$wwn}; die "unknown sym WWN: $wwn\n" unless $sym_fa;
my ($sym, $fa) = split(/_/, $sym_fa);
$switch_to_sym{$switchName}->{$sym}->{$fa} = $port;
} next;
}
if (/^port\s+(\d+):\s+(?:sw|id)\s+Online\s+E\-Port\s+((?:[0-9a-f]{2}:){7}[0-9a-f]{2})/) {
my ($port, $partnerWwn) = ($1, $2);
my $myWwn = $s{$switchName}->{switchWwn};
die "can't determine wwn for switch named '$switchName'\n" unless defined $myWwn;
$san{$myWwn}->{$partnerWwn}++; next;
}
if (/(\w+)[\.\w]* unix: fca-pci(\d+): Fibre Channel WWN: ([a-z0-9]{16})/) {
$hostName = $1; my ($fca_pci, $wwn) = ($2,$3);
$h{$hostName}->{fca}->{$fca_pci} = $wwn; next;
}
if (m%^([^/]+)/fca-pci(\d+)\s+([a-z0-9]{16})%) {
$hostName = $1; my ($fca_pci, $wwn) = ($2,$3);
$h{$hostName}->{fca}->{$fca_pci} = $wwn; next;
}
if (/^(sym\d+)\s+(([0-9a-f]+)-[0-9a-f]+)\s+([0-9a-f]+)\s+[\d\.]+\s+([0-9a-f,]+)/) {
my ($sym, $hypers, $meta, $lun, $fas) = ($1,$2,$3,$4,$5);
$sym{$sym}->{$meta}->{lun} = $lun;
$sym{$sym}->{$meta}->{fas} = [split(/,/,$fas)];
$sym{$sym}->{$meta}->{hypers} = $hypers;
next;
}
}
find_lowest_san_members();
my %switch_config; my %fpath_config; my %sdconf_config; my %sym_to_host;
foreach $hostName (sort keys %h) { # foreach host
my %fca_wwn = %{$h{$hostName}->{fca}};
foreach my $fca_n (sort keys %fca_wwn) { # foreach fca
my $target = 0; my $wwn = $fca_wwn{$fca_n};
# which switch and port does this fca attach to?
unless (exists $w{$wwn}) { die "can't find fca with wwn $wwn on any switch\n";
}
my ($switchName, $fca_port) = @{$w{$wwn}};
my $sym;
if ($clustered) {
my @syms = keys %{$switch_to_sym{$switchName}};
if (@syms > 1) { die "switch $switchName connects to multiple symms. This is not\n" . "compatible with -clustered\n"; }
$sym = @syms[0];
} else {
# if we haven't already mapped this host to a specific sym, do so by
# choosing the first sym (by name lexicographically) on this switch that
# we haven't already allocated to the other host.
unless ($h{$hostName}->{sym}) {
foreach my $sym (sort keys %{$switch_to_sym{$switchName}}) {
next if $sym_to_host{$sym};
$h{$hostName}->{sym} = $sym;
$sym_to_host{$sym} = $hostName;
last;
} die "all frames allocated, I don't know what to do with $hostName\n" unless $h{$hostName}->{sym};
}
# lookup which sym we've mapped this host to. $sym = $h{$hostName}->{sym}; }
# lookup all FA's for this sym on this switch.
foreach my $fa (keys %{$switch_to_sym{$switchName}->{$sym}}) {
# create fpath commands and sd.conf entries
my ($fpath_fa) = lc $fa;
$fpath_fa =~ s/^fa//; # argh, fpath has its own variation on the fa name
push(@{$sdconf_config{$hostName}}, "\n# $switchName ($sym fa$fpath_fa)\n");
foreach my $meta (sort keys %{$sym{$sym}}) {
unless (grep(/^$fpath_fa$/, @{$sym{$sym}->{$meta}->{fas}})) { die "meta $meta not accessible via fa $fpath_fa\n"; }
push(@{$sdconf_config{$hostName}}, sprintf
"name=\"sd\" class=\"scsi\" target=%d lun=%s hba=\"fca-pci%s\" wwn=\"%s\";\n", $target, $sym{$sym}->{$meta}->{lun},
$fca_n, $sym_wwn{$sym}->{$fa}
);
push(@{$fpath_config{$sym}}, sprintf("fpath adddev -w %s -f %s -r \"%s\"\n",
$wwn, $fpath_fa, $sym{$sym}->{$meta}->{hypers}
) );
}
$target++;
push(@{$fpath_config{$sym}}, sprintf("fpath chgname -w %s -f %s -n \"%s/fca-pci%s\"\n", $wwn,
$fpath_fa,
$hostName,$fca_n
)
);
# create switch commands
my $fa_port = $switch_to_sym{$switchName}->{$sym}->{$fa};
my $zone_name = (sprintf "%s_fca%d_%s_%s", $hostName, $fca_n, $sym,
$fa); my $lowestName = $s{$switchName}->{lowestName}; push (@{$switch_config{$lowestName}->{zonecreate}}, sprintf "\"%s\",\"%d,%d;%d,%d\"", $zone_name,
$s{$switchName}->{switchDomain}, $fca_port, $s{$switchName}->{switchDomain}, $fa_port); my $cfg_name = sprintf("%s_pfs%d",
$s{$lowestName}->{switch_loc}, $s{$lowestName}->{switch_num});
push (@{$switch_config{$lowestName}->{cfgadd}}, sprintf "\"%s\",\"%s\"", $cfg_name, $zone_name);
push (@{$switch_config{$lowestName}->{cfgenable}}, "\"$cfg_name\"") unless $switch_config{$lowestName}->{cfgenable}; # need only one
push (@{$switch_config{$lowestName}->{cfgsave}}, '') unless $switch_config{$lowestName}->{cfgsave}; # need only one
}
}
# emit switch config foreach $switchName (sort keys %switch_config) { printf "%s\n",$switchName; print " -\n"; foreach my $command (sort {&switch_cmd_order} keys %{$switch_config{$switchName}}) { foreach my $line (@{$switch_config{$switchName}->{$command}}) { printf "%s %s\n", $command, $line;
} } print "\n"; }
# emit fpath config
print "sym fpath commands\n";
print " -\n";
foreach my $sym (sort keys %fpath_config) {
print "$sym\n";
print q(fpath backupdb -o /root/vcm/$MYSYM.`date +%d%m%Y_%H%M%S`), "\n";
print @{$fpath_config{$sym}};
# print "fpath lsdb -s on\n";
# print "[verify]\n";
print q(fpath backupdb -o /root/vcm/$MYSYM.`date +%d%m%Y_%H%M%S`), "\n";
print "fpath refresh\n\n";
}
# emit sd.conf config foreach my $host (sort keys %sdconf_config) { printf "%s sd.conf\n",$host; print " "; print @{$sdconf_config{$host}}; print "\n"; } sub switch_cmd_order { my %cmd_order = (zonecreate => 0, cfgadd => 1 , cfgenable => 2, cfgsave => 3); $cmd_order{$a} <=> $cmd_order{$b};
} sub lookup_switchName { my ($switchWwn) = @_; my $switchName = $san_wwn{$switchWwn}; die "can't determine switch name for WWN $switchWwn\n" unless defined $switchName; return $switchName;
}
sub lookup_switchDomain {
my ($switchWwn) = @_;
my $switchName = lookup_switchName($switchWwn);
my $switchDomain = $s{$switchName}->{switchDomain};
die "can't determine switch domain for $switchName\n" unless defined $switchDomain;
return $switchDomain;
}
sub find_lowest_san_member {
my ($switchWwn, $visited, $lowestDomain, $lowestWwn) = @_;
$visited = {} unless defined $visited; my $switchDomain = lookup_switchDomain($switchWwn); if (not defined $lowestDomain or $switchDomain < $lowestDomain) { $lowestDomain = $switchDomain;
$lowestWwn = $switchWwn;
}
$visited->{$switchWwn} = 1 ; foreach my $partner_wwn (keys %{$san{$switchWwn}}) { next if $visited->{$partner_wwn};
($lowestWwn, $lowestDomain) =
(find_lowest_san_member($partner_wwn, $visited, $lowestDomain, $lowestWwn))[1,2];
}
return ($visited, $lowestWwn, $lowestDomain);
}
sub find_lowest_san_members {
foreach my $wwn (keys %san) {
my $switchName = lookup_switchName($wwn);
next if defined $s{$switchName}->{lowestName};
my ($visted, $lowestWwn) = find_lowest_san_member($wwn);
my $lowestName = lookup_switchName($lowestWwn);
foreach my $vwwn (keys %$visted) {
my $vname = lookup_switchName($vwwn);
$s{$vname}->{lowestName} = $lowestName;
} } foreach my $switchName (keys %s) { $s{$switchName}->{lowestName} ||= $switchName; }
}
sub usage {
warn "usage: emc_config_gen <input_file1> <input_file2> ...\n\n";
warn "As many input files may be specified as desired. Collectively, they must\n";
warn "present the following input:\n\n";
warn "1) 'switchshow' from each switch to which the customer's DB hosts connect;\n";
warn "2) 'dmesg | grep \"Fibre Channel WWN\"' from the DB hosts\n";
warn "3) the appropriate metas from 'metafind --metas=free'\n\n";
warn "Note: if a switch connects to a sym frame more than once, multiple paths\n";
warn "will be provided for the HBA which connects to that switch. If this is\n";
warn "not desired, you may comment out the ports in the 'switchshow' output\n";
warn "input file for which paths are _not_ desired.\n\n";
exit 1;
}
APPENDIX B
use strict;
use Getopt::Long;
my ($opt_format, $opt_metas, $fmt_string);
GetOptions('format=s' => \$opt_format,
'metas=s' => \$opt_metas);
if ($opt_format eq 'extended') {
$fmt_string = $opt_metas eq 'free' ? "%s %s %3d %6.2f %s%s%s%s\n" :
"%s %s %3d %6.2f %16s %33s %3s %s\n"; } else { $fmt_string = $opt_metas eq 'free' ? "%s %s %3d %6.2f %s%s%s%s\n" :
"%s %s %3d %6.2f %16s %33s %3s\n"; }
$ENV{PATH}="/root/bin:/usr/bin:/usr/sbin:/lc/bin:/sbin:/usr/symmapps/vcm:/usr/symmapps/vcm"; my %sym_dev = ('snv2' => { 'sym 1360' => '/dev/rdsk/c2t0d255s2\
'sym1424' => Vdev/rdsk/c1 t0d255s2',
}■ 'scl1' => {
'sym2584' => 7dev/rdsk/c1 t0d0s2', 'sym2621 ' => 7dev/rdsk/c2t0d0s2',
}■ 'str1' => {
'sym2490' => Vdev/rdsk/c2t1d0s2', 'sym2509' => Vdev/rdsk/c1 t0d0s2', },
'sjc1' => {
'sym1981' => 7dev/rdsk/c2t1 d0s2', 'sym2758' => Vdev/rdsk/c1t0d0s2',
}, ); my $hostname = 'hostname'; my $loc; foreach my $testJoc (keys %sym_dev) { if ($hostname =~ Λ.$testJoc\./) { $loc = $testJoc; last;
} } die "cannot determine locationΛn" unless $loc; foreach my $sym (sort keys %{$sym_dev{$loc}}) { my (%hyperJo_meta); # key: $meta; val: $hyper my (%meta); # key: $meta; val: {hypers} = [hyperl , hyper2, ...]
# val: {fas} = [fa1, fa2, ...]
# val: {wwns} -> {$wwn} -> {$fa} = 1 # val: {address} = $address
# val: {size} = $size my (%faJo_meta); # key: $meta; val: {$fa} -> {$meta} = $address my (%wwnJo_awwn); # key: $wwn; val: $awwn $ENV{VCMDBDEVICE} = $sym_dev{$loc}->{$sym}; open(IN, "fpath lssymmdev|") or die "fpath Issymmdev: $!\n"; my ($fa,$start_device,$address,$end_device, $size); my ($wwn, $awwn); while(<IN>){ chomp; if (/ΛDevices Available on FA\s+(\d+[ab])/) {
$fa = $1 ; next;
}
if (/^\s*([0-9a-f]{3})\s+([0-9A-F]{4})\s+([0-9\.]+)\s+FBA\s+meta head/) {
($start_device, $address, $size) = ($1,$2,$3); next;
}
if (/^\s*([0-9a-f]{3})\s+[0-9\.]+\s+FBA\s+meta tail/) {
($end_device) = $1;
unless ($meta{$start_device}) {
$meta{$start_device}->{address} = $address;
# populate %hyper_to_meta reverse index and %meta->{hypers}
for (my $i = hex $start_device; $i <= hex $end_device; $i++) {
$hyper_to_meta{sprintf("%03x",$i)} = $start_device;
push (@{$meta{$start_device}->{hypers}}, sprintf("%03x",$i));
$meta{$start_device}->{size} += $size;
}
}
push(@{$meta{$start_device}->{fas}}, $fa); # list of fas this meta is mapped to
$fa_to_meta{$fa}->{$start_device} = $address; # mark this meta available on this fa
next;
}
}
close IN;
open(IN, "fpath lsdb|") or die "fpath lsdb: $!\n";
while (<IN>) {
chomp;
if (/^listing VCM Database for FA\s+(\S+)/) {
$fa = $1; next;
}
if (/^WWN\s+=\s+([0-9a-f]{16})(?:\s+AWWN\s+=\s+(\S+))?$/) {
($wwn, $awwn) = ($1 ,$2);
$wwn_to_awwn{$wwn} = $awwn; next;
}
if (/^\s*([0-9a-f]{3})\s*$/) {
my ($hyper) = $1;
my ($meta) = $hyper_to_meta{$hyper};
if ($meta) {
$meta{$meta}->{wwns}->{$wwn}->{$fa} = 1;
} next;
}
}
close IN;
foreach my $meta (sort keys %meta) {
if ($opt_metas eq 'free') { next if $meta{$meta}->{wwns}; }
if ($opt_metas eq 'allocated') { next unless $meta{$meta}->{wwns};
}
my @wwns = $meta{$meta}->{wwns} ? sort keys %{$meta{$meta}->{wwns}} : ('');
$wwn = shift @wwns;
printf $fmt_string,
$sym,
@{$meta{$meta}->{hypers}}[0] . '-' . @{$meta{$meta}->{hypers}}[-1], hex $meta{$meta}->{address}, $meta{$meta}->{size},
$wwn,
$wwn_to_awwn{$wwn},
$wwn ? join(",", sort keys %{$meta{$meta}->{wwns}->{$wwn}}) : ", join (',', @{$meta{$meta}->{fas}}); foreach $wwn (@wwns) { printf "%26s %16s %33s %3s\n",
$wwn,
$wwn_to_awwn{$wwn},
join(",", sort keys %{$meta{$meta}->{wwns}->{$wwn}});
} } }
=head1

Claims

WHAT IS CLAIMED IS:
1. A method for allocating storage to a host within a central data storage device having at least one switch disposed therebetween, said method comprising the steps of: running a software routine that automatically determines unallocated storage space within said central data storage device; informing the central storage device that the host is authorized to access a predetermined storage area, said predetermined storage area being a subset of said unallocated storage space; creating a path through the switch between said central data storage device and said host; and informing the host that said predetermined storage area has been allocated thereto.
2. The method of claim 1, wherein said step of informing the central storage device further comprises the step of: modifying a database which is used to coordinate access to said central storage device.
3. The method of claim 2, wherein said step of informing further comprises the steps of: backing up said database; adding a new entry to said database which includes an identifier of said host and said predetermined storage area; verifying said new entry; and saving said database with said new entry.
4. The method of claim 3, further comprising the step of: adding an alias entry to said database.
5. The method of claim 1, wherein said step of creating a path through said switch further comprises the step of: determining an identity of a port in said switch that is connected to said host.
6. The method of claim 5, wherein said step of determining further comprises the steps of: establishing a communication session with said switch using an access terminal; and executing a command which reveals a correspondence between switch ports and host identifiers.
7. The method of claim 5, wherein said step of creating a path further comprises the step of: creating a zone between a port within said switch that is connected to said host and a port within said switch that is connected to a port on said central storage device through which said host can access said predetermined storage area.
8. The method of claim 7, wherein said step of creating a path further comprises the steps of: adding said zone to a configure file stored in said switch; and enabling and saving said configure file within said switch.
9. The method of claim 1, wherein said step of informing the host that said predetermined storage area has been allocated thereto further comprises the step of: modifying a configure file stored in said host to provide information regarding said predetermined storage area.
10. The method of claim 9, further comprising the step of: running another software routine which identifies how said configure file needs to be modified.
11. The method of claim 1, wherein said step of informing the central storage device further comprises the step of: running another software routine that determines a plurality of commands for execution to inform said central storage device of said predetermined storage area.
12. The method of claim 1, wherein said step of creating a path further comprises the step of: running another software routine that determines a plurality of commands for execution which create said path through said switch.
13. The method of claim 1, wherein said step of informing the host further comprises the step of: running another software routine that determines modifications to a configure file which inform said host that said predetermined storage area has been allocated thereto.
14. A computer-readable medium which stores machine readable program code for executing the steps of: determining at least one of:
(a) a first set of commands for modifying a database associated with access rights on a central storage device;
(b) a second set of commands for modifying a switch configuration to provide a path between a host device and said central storage device; and
(c) a third set of commands for modifying a configure file associated with said host device to inform said host device that a predetermined storage area has been allocated thereto.
15. The computer-readable medium of claim 14, wherein said program code is further executable to perform the steps of: determining unallocated storage space on said central storage device; and using said determined unallocated storage space in the at least one of steps (a), (b) and (c).
16. The computer-readable medium of claim 14, wherein said step of determining (a) a first set of commands for modifying a database, further comprises determining commands usable to perform the steps of: backing up said database; adding a new entry to said database which includes an identifier of said host device and said predetermined storage area; verifying said new entry; and saving said database with said new entry.
17. The computer-readable medium of claim 16, wherein said first set of commands further comprises a command for performing the step of: adding an alias entry to said database.
18. The computer-readable medium of claim 14, wherein said second set of commands for modifying said switch configuration includes a command for performing the step of: determining an identity of a port in said switch that is connected to said host device.
19. The computer-readable medium of claim 14, wherein said second set of commands for modifying said switch configuration includes a command for performing the step of: creating a zone between a port within said switch that is connected to said host device and a port within said switch that is connected to a port on said central storage device through which said host can access said predetermined storage area.
20. The computer-readable medium of claim 19, wherein said second set of commands for modifying said switch configuration includes a command for performing the steps of: adding said zone to a configure file stored in said switch; and enabling and saving said configure file within said switch.
21. The computer-readable medium of claim 14, wherein said third set of commands for modifying a configure file associated with said host device includes a command for performing the step of: modifying a configure file stored in said host device to provide information regarding said predetermined storage area.
PCT/US2001/042860 2000-10-31 2001-10-31 A method for provisioning complex data storage devices WO2002037282A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2002214680A AU2002214680A1 (en) 2000-10-31 2001-10-31 A method for provisioning complex data storage devices

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US69935200A 2000-10-31 2000-10-31
US09/699,352 2000-10-31

Publications (2)

Publication Number Publication Date
WO2002037282A2 true WO2002037282A2 (en) 2002-05-10
WO2002037282A3 WO2002037282A3 (en) 2003-07-24

Family

ID=24808942

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/042860 WO2002037282A2 (en) 2000-10-31 2001-10-31 A method for provisioning complex data storage devices

Country Status (2)

Country Link
AU (1) AU2002214680A1 (en)
WO (1) WO2002037282A2 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0935186A1 (en) * 1998-02-06 1999-08-11 NCR International, Inc. Volume set configuration using a single operational view
WO2000029954A1 (en) * 1998-11-14 2000-05-25 Mti Technology Corporation Logical unit mapping in a storage area network (san) environment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TOM CLARK: "Zoning for fibre channel optic" SNIA, [Online] 4 July 2000 (2000-07-04), XP002240326 Retrieved from the Internet: <URL:http://flexobuyer.co.uk/snia/educatio n/whitepapers.php?code=WP10> [retrieved on 2003-05-07] *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9100283B2 (en) 2002-06-12 2015-08-04 Bladelogic, Inc. Method and system for simplifying distributed server management
US9794110B2 (en) 2002-06-12 2017-10-17 Bladlogic, Inc. Method and system for simplifying distributed server management
US10659286B2 (en) 2002-06-12 2020-05-19 Bladelogic, Inc. Method and system for simplifying distributed server management

Also Published As

Publication number Publication date
WO2002037282A3 (en) 2003-07-24
AU2002214680A1 (en) 2002-05-15

Similar Documents

Publication Publication Date Title
EP3244297B1 (en) Configuring object storage system for input/output operations
EP2715511B1 (en) Data storage system exporting logical volumes as storage objects
EP2712438B1 (en) Computer system accessing object storage system
EP3125103B1 (en) Data storage system and data storage control method
US8650566B2 (en) Virtual machine provisioning in object storage system
US7536491B2 (en) System, method and apparatus for multiple-protocol-accessible OSD storage subsystem
US8769174B2 (en) Method of balancing workloads in object storage system
US6751702B1 (en) Method for automated provisioning of central data storage devices using a data model
US20030126225A1 (en) System and method for peripheral device virtual functionality overlay
US20070079060A1 (en) Method, apparatus and program storage device for providing virtual disk service (VDS) hints based storage
US8898418B2 (en) Method, apparatus and computer program for provisioning a storage volume to a virtual server
US10852980B1 (en) Data migration techniques
US20050138316A1 (en) Method and system for assigning a resource
WO2002037282A2 (en) A method for provisioning complex data storage devices
US8904141B2 (en) Merging a storage cluster into another storage cluster
US11922043B2 (en) Data migration between storage systems
WO2002037212A2 (en) A data model for use in the automatic provisioning of central data storage devices

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase in:

Ref country code: JP