US20050235132A1 - System and method for dynamic LUN mapping - Google Patents
- Publication number
- US20050235132A1 (application Ser. No. 11/156,821)
- Authority
- US
- United States
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0604—Improving or facilitating administration, e.g. storage management
- G06F3/0607—Improving or facilitating administration, e.g. storage management by facilitating the process of upgrading existing storage systems, e.g. for improving compatibility between host and storage device
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0662—Virtualisation aspects
- G06F3/0664—Virtualisation aspects at device level, e.g. emulation of a storage device or system
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
Description
- This invention relates to computer systems and, more particularly, to off-host virtualization within storage environments.
- specialized storage management software and hardware may be used to provide a more uniform storage model to storage consumers.
- Such software and hardware may also be configured to present physical storage devices as virtual storage devices (e.g., virtual SCSI disks) to computer hosts, and to add storage features not present in individual storage devices to the storage model.
- Features to increase fault tolerance, such as data mirroring, snapshot/fixed image creation, or data parity, as well as features to increase data access performance, such as disk striping, may be implemented in the storage model via hardware or software.
- the added storage features may be referred to as storage virtualization features, and the software and/or hardware providing the virtual storage devices and the added storage features may be termed “virtualizers” or “virtualization controllers”.
- Virtualization may be performed within computer hosts, such as within a volume manager layer of a storage software stack at the host, and/or in devices external to the host, such as virtualization switches or virtualization appliances.
- Such external devices providing virtualization may be termed “off-host” virtualizers, and may be utilized in order to offload processing required for virtualization from the host.
- Off-host virtualizers may be connected to the external physical storage devices for which they provide virtualization functions via a variety of interconnects, such as Fiber Channel links, Internet Protocol (IP) networks, and the like.
- storage software within a computer host consists of a number of layers, such as a file system layer, a disk driver layer, etc. Some of the storage software layers may form part of the operating system in use at the host, and may differ from one operating system to another.
- a layer such as the disk driver layer for a given operating system may be configured to expect certain types of configuration information for the disk to be laid out in a specific format, for example in a header (located at the first few blocks of the disk) containing disk partition layout information.
- the storage stack software layers used to access local physical disks may also be utilized to access external storage devices presented as virtual storage devices by off-host virtualizers.
- an off-host virtualizer may provide configuration information for the virtual storage devices in a format expected by the storage stack software layers.
- the off-host virtualizer may implement a technique to flexibly and dynamically map storage within external physical storage devices to the virtual storage devices presented to the host storage software layers, e.g., without requiring a reboot of the host.
- a system may include a first host and an off-host virtualizer, such as a virtualization switch or a virtualization appliance.
- the off-host virtualizer may be configured to present a virtual storage device, such as a virtual LUN, that comprises one or more regions that are initially unmapped to physical storage, and make the virtual storage device accessible to the first host.
- the first host may include a storage software stack including a first layer, such as a disk driver layer, configured to detect and access the virtual storage device as if the virtual storage device were mapped to physical storage.
- the off-host virtualizer may be configured to generate metadata formatted according to a requirement of an operating system in use at the host and map a portion of the virtual storage device to the metadata, where the metadata makes the virtual storage device appear to be mapped to physical storage.
- the recognition of the virtual storage device as a “normal” storage device that is backed by physical storage may occur during a system initialization stage prior to an initiation of production I/O operations. In this way, an unmapped or “blank” virtual LUN may be prepared for subsequent dynamic mapping by the off-host virtualizer.
- the unmapped LUN may be given an initial size equal to the maximum allowed LUN size supported by the operating system in use at the host, so that the size of the virtual LUN may not require modification after initialization.
- multiple virtual LUNs may be pre-generated for use at a single host, for example in order to isolate storage for different applications, or to accommodate limits on maximum LUN sizes.
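The pre-generation step described above can be sketched as follows. The class, the naming scheme, and the size constant are illustrative assumptions rather than details taken from the patent.

```python
# Sketch of pre-generating unmapped virtual LUNs (VLUNs) during host
# initialization, sized to the operating system's maximum so that later
# dynamic mappings never require resizing. All names and the size limit
# below are assumptions for illustration.

MAX_LUN_BLOCKS = 2 ** 32  # hypothetical OS-imposed maximum LUN size, in blocks

class VirtualLUN:
    def __init__(self, vlun_id, size_blocks=MAX_LUN_BLOCKS):
        self.vlun_id = vlun_id
        self.size_blocks = size_blocks
        # Each mapping: (vlun_start, length, backing_device, physical_start)
        self.mappings = []

    def is_mapped(self, block):
        """True if `block` is currently backed by physical storage."""
        return any(s <= block < s + n for s, n, _, _ in self.mappings)

def pre_generate_vluns(host_id, count):
    """Create `count` unmapped VLUNs for a host, e.g., one per application."""
    return [VirtualLUN(f"{host_id}-vlun{i}") for i in range(count)]
```

A newly created VLUN reports every block as unmapped until the off-host virtualizer records a mapping for it.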
- the system may also include two or more physical storage devices, and the off-host virtualizer may be configured to dynamically map physical storage from a first and a second physical storage device to a respective range of addresses within the first virtual storage device.
- the off-host virtualizer may be configured to perform an N-to-1 mapping between the physical storage devices (which may be called physical LUNs) and virtual LUNs, allowing storage in the physical storage devices to be accessed from the host via the pre-generated virtual LUNs.
- Configuration information regarding the location of the first and/or the second address ranges within the virtual LUN may be passed from the off-host virtualizer to a second layer of the storage stack at the host (e.g., an intermediate driver layer above a disk driver layer) using a variety of different mechanisms.
- Such mechanisms may include, for example, the off-host virtualizer writing the configuration information to certain special blocks within the virtual LUN, sending messages to the host over a network, or special extended SCSI mode pages.
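The first of these mechanisms, writing configuration information to designated blocks of the virtual LUN, might look like the following sketch; the record layout and field widths are invented for illustration and are not an on-disk format from the patent.

```python
import struct

BLOCK_SIZE = 512  # bytes per block (a common, but not universal, choice)

def encode_config_block(extents):
    """Pack (name, vlun_start, length) records describing mapped address
    ranges into a single config block written by the off-host virtualizer."""
    data = struct.pack("<I", len(extents))
    for name, start, length in extents:
        data += struct.pack("<16sQQ", name.encode("ascii"), start, length)
    assert len(data) <= BLOCK_SIZE, "too many records for one block"
    return data.ljust(BLOCK_SIZE, b"\x00")

def decode_config_block(block):
    """Host-side intermediate driver: recover the address-range records."""
    (count,) = struct.unpack_from("<I", block, 0)
    records, offset = [], 4
    for _ in range(count):
        name, start, length = struct.unpack_from("<16sQQ", block, offset)
        records.append((name.rstrip(b"\x00").decode("ascii"), start, length))
        offset += struct.calcsize("<16sQQ")
    return records
```

The same records could just as well travel over a network message or an extended SCSI mode page; only the transport changes, not the content.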
- two or more different ranges of physical storage within a single physical storage device may be mapped to corresponding pre-generated virtual storage devices such as virtual LUNs and presented to corresponding hosts.
- the off-host virtualizer may allow each host of a plurality of hosts to access a respective portion of a physical storage device through a respective virtual LUN.
- the off-host virtualizer may also be configured to implement a security policy isolating the ranges of physical storage within the shared physical storage device; i.e., to allow a host to access only those regions to which the host has been granted access, and to prevent unauthorized accesses.
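Such a security policy amounts to an extent-level access check. A minimal sketch, with invented host names and block ranges, could be:

```python
# Per-host grants over a shared physical storage device; each grant is a
# (start_block, num_blocks) extent the host may access. Hosts and ranges
# here are illustrative.
GRANTS = {
    "hostA": [(0, 1000)],
    "hostB": [(1000, 2000)],
}

def access_allowed(host, start, length):
    """Permit an I/O only if [start, start+length) lies entirely within
    one of the extents granted to `host`; everything else is rejected."""
    return any(g_start <= start and start + length <= g_start + g_len
               for g_start, g_len in GRANTS.get(host, []))
```

Note that a request straddling a grant boundary is rejected outright rather than partially served, which keeps the isolation guarantee simple.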
- the off-host virtualizer may be further configured to aggregate storage within one or more physical storage device into a logical volume, map the logical volume to a range of addresses within a pre-generated virtual storage device, and make the logical volume accessible to the second layer of the storage stack (e.g., by providing logical volume metadata to the second layer), allowing I/O operations to be performed on the logical volume.
- Storage from a single physical storage device may be aggregated into any desired number of different logical volumes, and any desired number of logical volumes may be mapped to a single virtual storage device or virtual LUN.
- the off-host virtualizer may be further configured to provide volume-level security, i.e., to prevent unauthorized access from a host to a logical volume, even when the physical storage corresponding to the logical volume is part of a shared physical storage device.
- physical storage from any desired number of physical storage devices may be aggregated into a logical volume using a virtual LUN, thereby allowing a single volume to extend over a larger address range than the maximum allowed size of a single physical LUN.
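Extending a volume beyond a single physical LUN is, at bottom, an extent-concatenation mapping. The sketch below, with made-up device names and sizes, translates a volume-relative block number to its backing physical LUN and offset.

```python
# A logical volume concatenated from extents on two physical LUNs (PLUNs),
# allowing the volume to exceed the maximum size of any single PLUN.
EXTENTS = [
    # (plun, physical_start, length): volume blocks 0..4095, then 4096..8191
    ("plun0", 0, 4096),
    ("plun1", 0, 4096),
]

def volume_to_physical(block):
    """Map a volume-relative block number to (plun, physical block)."""
    base = 0
    for plun, phys_start, length in EXTENTS:
        if block < base + length:
            return plun, phys_start + (block - base)
        base += length
    raise ValueError("block beyond end of volume")
```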
- the virtual storage devices or virtual LUNs may be distributed among a number of independent front-end storage networks, such as fiber channel fabrics, and the physical storage devices backing the logical volumes may be distributed among a number of independent back-end storage networks.
- a first host may access its virtual storage devices through a first storage network
- a second host may access its virtual storage devices through a second storage network independent from the first (that is, reconfigurations and/or failures in the first storage network may not affect the second storage network).
- the off-host virtualizer may access a first physical storage device through a third storage network, and a second physical storage device through a fourth storage network.
- the ability of the off-host virtualizer to dynamically map storage across pre-generated virtual storage devices distributed among independent storage networks may support a robust and flexible storage environment.
- FIG. 1 a is a block diagram illustrating one embodiment of a computer system.
- FIG. 1 b is a block diagram illustrating an embodiment of a system configured to utilize off-host block virtualization.
- FIG. 2 a is a block diagram illustrating the addition of operating-system specific metadata to a virtual logical unit (LUN) encapsulating a source volume, according to one embodiment.
- FIG. 2 b is a block diagram illustrating an example of an unmapped virtual LUN according to one embodiment.
- FIG. 3 is a block diagram illustrating an embodiment including an off-host virtualizer configured to create a plurality of unmapped virtual LUNs.
- FIG. 4 is a block diagram illustrating an embodiment where an off-host virtualizer is configured to map physical storage from within two different physical storage devices to a single virtual LUN.
- FIG. 5 is a block diagram illustrating an embodiment where an off-host virtualizer is configured to map physical storage from within a single physical storage device to two virtual LUNs assigned to different hosts.
- FIG. 6 is a block diagram illustrating an embodiment where an off-host virtualizer is configured to aggregate storage of a physical storage device into a logical volume and map the logical volume to a range of blocks of a virtual LUN.
- FIG. 7 is a block diagram illustrating an embodiment where an off-host virtualizer is configured to map multiple logical volumes to a single virtual LUN.
- FIG. 8 is a block diagram illustrating an embodiment where an off-host virtualizer is configured to aggregate storage from a physical storage device into two logical volumes, and to map each of the two logical volumes to a different virtual LUN.
- FIG. 9 is a block diagram illustrating an embodiment employing multiple storage networks.
- FIG. 10 is a block diagram illustrating an embodiment where an off-host virtualizer is configured to aggregate storage from two physical storage devices into a single logical volume.
- FIG. 11 is a flow diagram illustrating aspects of the operation of a system according to one embodiment where an off-host virtualizer is configured to support physical LUN tunneling.
- FIG. 12 is a flow diagram illustrating aspects of the operation of a system according to one embodiment where an off-host virtualizer is configured to support volume tunneling.
- FIG. 13 is a block diagram illustrating a computer-accessible medium.
- FIG. 1 a is a block diagram illustrating a computer system 100 according to one embodiment.
- System 100 includes a host 110 coupled to a physical block device 120 via an interconnect 130 .
- Host 110 includes a traditional block storage software stack 140 A that may be used to perform I/O operations on a physical block device 120 via interconnect 130 .
- a physical block device 120 may comprise any hardware entity that provides a collection of linearly addressed data blocks that can be read or written.
- a physical block device may be a single disk drive configured to present all of its sectors as an indexed array of blocks.
- the physical block device may be a disk array device, or a disk configured as part of a disk array device.
- any suitable type of storage device may be configured as a block device, such as fixed or removable magnetic media drives (e.g., hard drives, floppy or Zip-based drives), writable or read-only optical media drives (e.g., CD or DVD), tape drives, solid-state mass storage devices, or any other type of storage device.
- The interconnect 130 may utilize any desired storage connection technology, such as variants of the Small Computer System Interface (SCSI) protocol, Fiber Channel, Internet Protocol (IP), Internet SCSI (iSCSI), or a combination of such storage networking technologies.
- the block storage software stack 140 A may comprise layers of software within an operating system at host 110 , and may be accessed by a client application to perform I/O (input/output) on a desired physical block device 120 .
- a client application may initiate an I/O request, for example as a request to read a block of data at a specified offset within a file.
- The request may be received (e.g., in the form of a read() system call) at the file system layer 112 , translated into a request to read a block within a particular device object (i.e., a software entity representing a storage device), and passed to the disk driver layer 114 .
- The disk driver layer 114 may then select the targeted physical block device 120 corresponding to the disk device object, and send a request to an address at the targeted physical block device over the interconnect 130 using the interconnect-dependent I/O driver layer 116 .
- A host bus adapter (such as a SCSI HBA) may transfer the request over a physical link of the interconnect (e.g., a SCSI bus) to the targeted physical block device.
- an interconnect-dependent firmware layer 122 may receive the request, perform the desired physical I/O operation at the physical storage layer 124 , and send the results back to the host over the interconnect.
- the results (e.g., the desired blocks of the file) may then be transferred through the various layers of storage stack 140 A in reverse order (i.e., from the interconnect-dependent I/O driver to the file system) before being passed to the requesting client application.
- the storage devices addressable from a host 110 may be detected only during system initialization, e.g., during boot.
- An operating system may employ a four-level hierarchical addressing scheme of the form <“hba”, “bus”, “target”, “lun”> for SCSI devices, including a SCSI HBA identifier (“hba”), a SCSI bus identifier (“bus”), a SCSI target identifier (“target”), and a logical unit identifier (“lun”), and may be configured to populate a device database with addresses for available SCSI devices during boot.
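The four-level addressing scheme can be illustrated with a small sketch that enumerates a device database the way an operating system might at boot; the nested-dictionary input format is an assumption made for this example.

```python
from collections import namedtuple

# Four-level SCSI address: <hba, bus, target, lun>
ScsiAddr = namedtuple("ScsiAddr", ["hba", "bus", "target", "lun"])

def build_device_database(topology):
    """Enumerate all addressable devices from a nested description:
    topology = {hba_id: {bus_id: {target_id: [lun_id, ...]}}}."""
    db = []
    for hba, buses in sorted(topology.items()):
        for bus, targets in sorted(buses.items()):
            for target, luns in sorted(targets.items()):
                for lun in luns:
                    db.append(ScsiAddr(hba, bus, target, lun))
    return db
```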
- Host 110 may include multiple SCSI HBAs, and a different SCSI adapter identifier may be used for each HBA.
- the SCSI adapter identifiers may be numbers issued by the operating system kernel, for example based on the physical placement of the HBA cards relative to each other (i.e., based on slot numbers used for the adapter cards).
- Each HBA may control one or more SCSI buses, and a unique SCSI bus number may be used to identify each SCSI bus within an HBA.
- the HBA may be configured to probe each bus to identify the SCSI devices currently attached to the bus.
- the number of devices (such as disks or disk arrays) that may be attached on a SCSI bus may be limited, e.g., to 15 devices excluding the HBA itself.
- SCSI devices that may initiate I/O operations, such as the HBA, are termed SCSI initiators, while devices where the physical I/O may be performed are called SCSI targets.
- Each target on the SCSI bus may identify itself to the HBA in response to the probe.
- each target device may also accommodate up to a protocol-specific maximum number of “logical units” (LUNs) representing independently addressable units of physical storage within the target device, and may inform the HBA of the logical unit identifiers.
- a target device may contain a single LUN (e.g., a LUN may represent an entire disk or even a disk array) in some embodiments.
- the SCSI device configuration information such as the target device identifiers and LUN identifiers may be passed to the disk driver layer 114 by the HBAs.
- disk driver layer 114 may utilize the hierarchical SCSI address described above.
- disk driver layer 114 may expect to see OS-specific metadata at certain specific locations within the LUN.
- the disk driver layer 114 may be responsible for implementing logical partitioning (i.e., subdividing the space within a physical disk into partitions, where each partition may be used for a smaller file system).
- Metadata describing the layout of a partition (e.g., a starting block offset for the partition within the LUN, and the length of the partition) may be stored in an operating system-dependent format, and in an operating system-dependent location, such as in a header or a trailer, within a LUN.
- In some operating systems, this metadata may take the form of a virtual table of contents (VTOC).
- the operating system metadata may include cylinder alignment and/or cylinder size information, as well as boot code if the volume is bootable.
- Operating system metadata for various versions of Microsoft Windows™ may include a “magic number” (a special number or numbers that the operating system expects to find, usually at or near the start of a disk), subdisk layout information, etc. If the disk driver layer 114 does not find the metadata in the expected location and in the expected format, the disk driver layer may not be able to perform I/O operations at the LUN.
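The metadata-emulation idea can be sketched as below. The two-byte signature at offset 510 mirrors the classic PC boot-sector magic number; treating it as the only thing the driver checks is a deliberate simplification, since real drivers validate considerably more.

```python
BLOCK_SIZE = 512
MAGIC = b"\x55\xaa"  # boot-sector-style signature; real drivers check more

def emulate_first_block():
    """Off-host virtualizer: fabricate the first block of a virtual LUN so
    that a driver looking for the magic number will accept the device."""
    block = bytearray(BLOCK_SIZE)
    block[510:512] = MAGIC
    return bytes(block)

def driver_accepts(first_block):
    """Disk-driver-side check: reject the device if the signature is absent."""
    return len(first_block) == BLOCK_SIZE and first_block[510:512] == MAGIC
```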
- block virtualization refers to a process of creating or aggregating logical or virtual block devices out of one or more underlying physical or logical block devices, and making the virtual block devices accessible to block device consumers for storage operations.
- Storage within multiple physical block devices (e.g., in a Fiber Channel storage area network (SAN)) may be aggregated and presented to a host as a single virtual storage device such as a virtual LUN (VLUN), as described below in further detail.
- one or more layers of software may rearrange blocks from one or more block devices, such as disks, and add various kinds of functions.
- the resulting rearranged collection of blocks may then be presented to a storage consumer, such as an application or a file system, as one or more aggregated devices with the appearance of one or more basic disk drives. That is, the more complex structure resulting from rearranging blocks and adding functionality may be presented as if it were one or more simple arrays of blocks, or logical block devices.
- multiple layers of virtualization may be implemented. That is, one or more block devices may be mapped into a particular virtualized block device, which may be in turn mapped into still another virtualized block device, allowing complex storage functions to be implemented with simple block devices. Further details on block virtualization, and advanced storage features supported by block virtualization, are provided below.
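Multiple mapping layers compose in the obvious way. This sketch, with invented device names and offsets, resolves a block address through two stacked layers until it reaches a physical device.

```python
# Each layer maps a virtual device to (lower device, block offset); a block
# address is resolved layer by layer until it names a physical disk.
LAYERS = [
    {"vdev1": ("vdev0", 100)},    # top layer:  vdev1 block b -> vdev0 block b+100
    {"vdev0": ("disk0", 5000)},   # next layer: vdev0 block b -> disk0 block b+5000
]

def resolve(device, block):
    """Follow the mapping chain down to a physical device and block."""
    for layer in LAYERS:
        if device in layer:
            device, base = layer[device]
            block += base
    return device, block
```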
- Block virtualization may be implemented at various places within a storage stack and the associated storage environment, in both hardware and software.
- A block virtualization layer in the form of a volume manager, such as the VERITAS Volume Manager™ from VERITAS Software Corporation, may be added between the disk driver layer 114 and the file system layer 112 .
- virtualization functionality may be added to host bus adapters, i.e., in a layer between the interconnect-dependent I/O driver layer 116 and interconnect 130 .
- Block virtualization may also be performed outside the host 110 , e.g., in a virtualization appliance or a virtualizing switch, which may form part of the interconnect 130 .
- block virtualization functionality may be implemented by an off-host virtualizer in cooperation with a host-based virtualizer. That is, some block virtualization functionality may be performed off-host, and other block virtualization features may be implemented at the host.
- off-host virtualizers may typically be implemented in a manner that allows the existing storage software layers to continue to operate, even when the storage devices being presented to the operating system are virtual rather than physical, and remote rather than local.
- an off-host virtualizer may present a virtualized storage device to the disk driver layer as a virtual LUN.
- An off-host virtualizer may encapsulate, or emulate the metadata for, a LUN when providing a host 110 access to a virtualized storage device.
- one or more software modules or layers may be added to storage stack 140 A to support additional forms of virtualization using virtual LUNs.
- FIG. 1 b is a block diagram illustrating an embodiment of system 100 configured to utilize off-host block virtualization.
- the system may include an off-host virtualizer 180 , such as a virtualization switch or a virtualization appliance, which may be included within interconnect 130 linking host 110 to physical block device 120 .
- Host 110 may comprise an enhanced storage software stack 140 B, which may include an intermediate driver layer 113 between the disk driver layer 114 and file system layer 112 .
- off-host virtualizer 180 may be configured to present a virtual storage device (e.g., a virtual LUN or VLUN) that includes one or more regions that are not initially mapped to physical storage to disk driver layer 114 , using a technique (such as metadata emulation) that allows the disk driver layer to detect and access the virtual storage device as if it were mapped to physical storage.
- off-host virtualizer 180 may map storage within physical block device 120 , or multiple physical block devices 120 , into the virtual storage device.
- the back-end storage within a physical block device 120 that is mapped to a virtual LUN may be termed a “physical LUN (PLUN)” in the subsequent description.
- off-host virtualizer 180 may be configured to aggregate storage within one or more physical block devices 120 as one or more logical volumes, and map the logical volumes within the address space of a virtual LUN presented to host 110 . Off-host virtualizer 180 may further be configured to make the portions of the virtual LUN that are mapped to the logical volumes accessible to intermediate driver layer 113 . For example, in some embodiments, off-host virtualizer 180 may be configured to provide metadata or configuration information on the logical volumes to intermediate driver layer 113 , allowing intermediate driver layer 113 to locate the blocks of the logical volumes and perform desired I/O operations on the logical volumes located within the virtual LUN on behalf of clients such as file system layer 112 or other applications.
- File system layer 112 and applications (such as database management systems) configured to utilize intermediate driver layer 113 and lower layers of storage stack 140 B may be termed “virtual storage clients” or “virtual storage consumers” herein.
- off-host virtualizer 180 is shown within interconnect 130 in the embodiment depicted in FIG. 1 b, it is noted that in other embodiments, off-host virtualization may also be provided within physical block device 120 (e.g., by a virtualization layer between physical storage layer 124 and firmware layer 122 ), or at another device outside interconnect 130 .
- FIG. 2 a is a block diagram illustrating the addition of operating-system specific metadata to a virtual LUN 210 encapsulating a source volume 205 , according to one embodiment.
- the source volume 205 consists of N blocks, numbered 0 through (N-1).
- the virtual LUN 210 may include two regions of inserted metadata: a header 215 containing H blocks of metadata, and a trailer 225 including T blocks of metadata.
- The mapped blocks 220 of the virtual LUN 210 may correspond to the source volume 205 , thereby making the virtual LUN 210 a total of (H+N+T) blocks long (i.e., the virtual LUN may contain blocks numbered 0 through (H+N+T−1)).
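The block-number arithmetic implied by this layout is simple; assuming illustrative values for H, N, and T:

```python
H, N, T = 16, 1024, 16  # header, mapped source volume, trailer sizes (blocks)

def vlun_to_source(block):
    """Translate a virtual-LUN block number to a source-volume block number,
    or return None for blocks that fall in the emulated header or trailer."""
    if not 0 <= block < H + N + T:
        raise ValueError("block outside the virtual LUN")
    if block < H or block >= H + N:
        return None  # emulated metadata, not backed by the source volume
    return block - H
```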
- Operating-system specific metadata included in header 215 and/or trailer 225 may be used by disk driver layer 114 to recognize the virtual LUN 210 as a “normal” storage device (i.e. a storage device that is mapped to physical storage).
- additional configuration information or logical volume metadata may also be included within header 215 and/or trailer 225 .
- header 215 and trailer 225 may vary with the operating system in use at host 110 . It is noted that in some embodiments, the metadata may require only a header 215 , or only a trailer 225 , rather than both a header and a trailer; and that in other embodiments, the metadata may be stored at any arbitrary offset within the LUN.
- The metadata may include a partition table with one or more partition entries, where at least one partition may correspond to a region that is unmapped to physical storage. The location (e.g., the offset of the metadata within the virtual storage device) and contents of the metadata generated by off-host virtualizer 180 may indicate to the disk driver layer 114 , in one embodiment, that a corresponding storage device has been successfully initialized according to the operating system in use.
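A minimal emulated partition table consistent with this description might be generated as follows; the entry format is invented for illustration, not an on-disk layout from the patent.

```python
def make_partition_table(total_blocks, header_blocks, trailer_blocks):
    """Build one partition entry spanning the region between the emulated
    header and trailer; that region may be entirely unmapped to physical
    storage, yet the driver sees an ordinary initialized device."""
    data_start = header_blocks
    data_len = total_blocks - header_blocks - trailer_blocks
    assert data_len > 0, "virtual LUN too small for its metadata"
    return [{"index": 0, "start": data_start, "length": data_len}]
```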
- the metadata inserted within virtual LUN 210 may be stored in persistent storage, e.g., within some blocks of physical block device 120 or at off-host virtualizer 180 , in some embodiments, and logically concatenated with the mapped blocks 220 .
- the metadata may be generated dynamically, whenever a host 110 accesses the virtual LUN 210 .
- the metadata may be generated by an external agent other than off-host virtualizer 180 .
- the external agent may be capable of emulating metadata in a variety of formats for different operating systems, including operating systems that may not have been known when the off-host virtualizer 180 was deployed.
- off-host virtualizer 180 may be configured to support more than one operating system; i.e., off-host virtualizer may logically insert metadata blocks corresponding to any one of a number of different operating systems when presenting virtual LUN 210 to a host 110 , thereby allowing hosts with different operating systems to share access to a storage device 120 .
- LUN reconfiguration operations may typically be fairly slow. Some LUN reconfiguration operations may be at least partially asynchronous, and may have unbounded completion times and/or ambiguous failure states. On many operating systems, LUN reconfiguration may only be completed after a system reboot; for example, a newly created physical or virtual LUN may not be detected by the operating system without a reboot.
- To avoid such reconfiguration costs, off-host virtualizer 180 may be configured to pre-generate unmapped virtual LUNs (e.g., to create operating system metadata for virtual LUNs that are not initially mapped to any physical LUNs or logical volumes) and to pre-assign the unmapped virtual LUNs to hosts 110 as part of an initialization process.
- the initialization process may be completed prior to performing storage operations on the virtual LUNs on behalf of applications.
- the layers of the software storage stack 140 B may be configured to detect the existence of the virtual LUNs as addressable storage devices.
- off-host virtualizer 180 may dynamically map physical LUNs and/or logical volumes to the virtual LUNs (e.g., by modifying portions of the operating system metadata), as described below in further detail.
- dynamic mapping refers to a mapping of a virtual storage device (such as a VLUN) that is performed by modifying one or more blocks of metadata, and/or by communicating via one or more messages to a host 110 , without requiring a reboot of the host 110 to which the virtual storage device is presented.
- FIG. 2 b is a block diagram illustrating an example of an unmapped virtual LUN 230 according to one embodiment.
- the unmapped virtual LUN 230 may include an operating system metadata header 215 and an operating system metadata trailer 225 , as well as a region of unmapped blocks 235 .
- the size of the region of unmapped blocks (X blocks in the depicted example) may be set to a maximum permissible LUN or volume size supported by an operating system, so that any subsequent mapping of a volume or physical LUN to the virtual LUN does not require an expansion of the size of the virtual LUN.
- the unmapped virtual LUN may consist of only the emulated metadata (e.g., header 215 and/or trailer 225 ), and the size of the virtual LUN may be increased dynamically when the volume or physical LUN is mapped.
- disk driver layer 114 may have to modify some of its internal data structures when the virtual LUN is expanded, and may have to re-read the emulated metadata in order to do so.
- Off-host virtualizer 180 may be configured to send a metadata change notification message to disk driver layer 114 in order to trigger the re-reading of the metadata.
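The expand-and-notify interaction might look like this sketch; the class names and the notification callback are assumptions about one possible implementation, not structures defined by the patent.

```python
class DiskDriverLayer:
    """Host-side driver that caches the VLUN geometry it last read."""
    def __init__(self, vlun_state):
        self.vlun_state = vlun_state
        self.cached_size = vlun_state["size_blocks"]

    def on_metadata_change(self):
        # Re-read the emulated metadata instead of requiring a host reboot.
        self.cached_size = self.vlun_state["size_blocks"]

class OffHostVirtualizer:
    def __init__(self, vlun_state, driver):
        self.vlun_state = vlun_state
        self.driver = driver

    def expand_vlun(self, new_size_blocks):
        """Grow the VLUN, then notify the host so it refreshes its view."""
        self.vlun_state["size_blocks"] = new_size_blocks
        self.driver.on_metadata_change()
```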
- FIG. 3 is a block diagram illustrating an embodiment including an off-host virtualizer 180 configured to create a plurality of unmapped virtual LUNs (VLUNs) 230 .
- more than one VLUN may be associated with a single host 110 .
- off-host virtualizer 180 may assign unmapped VLUNs 230 A and 230 B to host 110 A, and unmapped VLUNs 230 C, 230 D and 230 E to host 110 B.
- multiple VLUNs may be associated with a given host to allow for isolation of storage used for different applications, or to allow access to storage beyond the maximum allowable LUN size supported in the system.
- hosts 110 A and 110 B may be coupled to off-host virtualizer 180 via interconnect 130 A, and off-host virtualizer 180 may be coupled to storage devices 340 A, 340 B and 340 C (collectively, storage devices 340 ) via interconnect 130 B.
- Storage devices 340 may include physical block devices 120 as well as virtual block devices (e.g., in embodiments employing multiple layers of virtualization, as described below).
- Off-host virtualizer 180 may be configured to dynamically map physical and/or virtual storage from storage devices 340 to the unmapped virtual LUNs.
- Hosts 110 A and 110 B may be configured to use different operating systems in some embodiments, and may utilize the same operating system in other embodiments.
- After VLUN 230 has been recognized by disk driver layer 114 (e.g., as a result of the generation of operating system metadata such as a partition table in an expected format and location), a block at any offset within the VLUN address space may be accessed by the disk driver layer 114 , and thus by any other layer above the disk driver layer.
- intermediate driver layer 113 may be configured to communicate with off-host virtualizer 180 by reading from, and/or writing to, a designated set of blocks emulated within VLUN 230 .
- Such designated blocks may provide a mechanism for off-host virtualizer 180 to provide intermediate driver layer 113 with configuration information associated with logical volumes or physical LUNs mapped to VLUN 230 in some embodiments.
- off-host virtualizer 180 may be configured to map storage from a back-end physical LUN directly to a VLUN 230 , without any additional virtualization (i.e., without creating a logical volume). Such a technique of mapping a PLUN to a VLUN 230 may be termed “PLUN tunneling”. Each PLUN may be mapped to a corresponding VLUN 230 (i.e., a 1-to-1 mapping of PLUNs to VLUNs may be implemented by off-host virtualizer 180 ) in some embodiments. In other embodiments, as described below in conjunction with the description of FIG. 4 , storage from multiple PLUNs may be mapped into subranges of a given VLUN 230 .
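PLUN tunneling as described above can be illustrated with a minimal address-translation sketch. The class name, device names, and mapping-table format are assumptions for illustration, not part of the patent; the sketch covers both the 1-to-1 case and the FIG. 4 case of several PLUNs mapped into subranges of one VLUN:

```python
# Illustrative sketch of PLUN tunneling: the off-host virtualizer resolves
# a block address on a VLUN to a block on a backing physical LUN (PLUN).

class Tunnel:
    def __init__(self):
        # vlun -> list of (vlun_start, length, plun, plun_start)
        self.map = {}

    def add_mapping(self, vlun, vlun_start, length, plun, plun_start=0):
        self.map.setdefault(vlun, []).append((vlun_start, length, plun, plun_start))

    def resolve(self, vlun, block):
        """Translate a VLUN block address to (plun, plun_block)."""
        for vstart, length, plun, pstart in self.map.get(vlun, []):
            if vstart <= block < vstart + length:
                return plun, pstart + (block - vstart)
        raise ValueError("block not mapped")

t = Tunnel()
t.add_mapping("VLUN-230A", 0, 1000, "PLUN-1")    # 1-to-1 tunneling
t.add_mapping("VLUN-230B", 0, 500, "PLUN-2")     # two PLUNs mapped into
t.add_mapping("VLUN-230B", 500, 500, "PLUN-3")   # subranges of one VLUN (FIG. 4)
```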
- PLUN tunneling may allow the off-host virtualizer 180 to act as an isolation layer between VLUNs 230 (the storage entities directly accessible to hosts 110 ) and back-end PLUNs, allowing the off-host virtualizer to hide details related to physical storage protocol implementation from the hosts.
- the back-end PLUNs may implement a different version of a storage protocol (e.g., SCSI-3) than the version seen by hosts 110 (e.g., SCSI-2), and the off-host virtualizer may provide any needed translation between the two versions.
- off-host virtualizer 180 may be configured to implement a cooperative access control mechanism for the back-end PLUNs, and the details of the mechanism may remain hidden from the hosts 110 .
- off-host virtualizer 180 may also be configured to increase the level of data sharing using PLUN tunneling. Disk array devices often impose limits on the total number of concurrent “logins”, i.e., the total number of entities that may access a given disk array device. In a storage environment employing PLUN tunneling for disk arrays (i.e., where the PLUNs are disk array devices), off-host virtualizers 180 may allow multiple hosts to access the disk arrays through a single login. That is, for example, multiple hosts 110 may log in to the off-host virtualizer 180 , while the off-host virtualizer may log in to a disk array PLUN once on behalf of the multiple hosts 110 .
- Off-host virtualizer 180 may then pass on I/O requests from the multiple hosts 110 to the disk array PLUN using a single login.
- the number of hosts 110 as seen by a disk array PLUN may thereby be reduced as a result of PLUN tunneling, without reducing the number of hosts 110 from which I/O operations targeted at the disk array PLUN may be initiated.
- the total number of hosts 110 that may access storage at a single disk array PLUN with login count restrictions may thereby be increased, thus increasing the overall level of data sharing.
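The login-multiplexing effect described above may be sketched as follows. The class names and the login limit of one are illustrative assumptions; the essential behavior is that several hosts log in to the off-host virtualizer, which consumes only a single login at the disk array PLUN on their behalf:

```python
# Sketch of login multiplexing via PLUN tunneling: a disk array that
# permits one login serves I/O originating at three hosts.

class DiskArrayPlun:
    def __init__(self, max_logins):
        self.max_logins = max_logins
        self.logins = set()

    def login(self, entity):
        if entity in self.logins:
            return                      # already logged in; reuse the session
        if len(self.logins) >= self.max_logins:
            raise RuntimeError("login limit reached")
        self.logins.add(entity)

class OffHostVirtualizer:
    def __init__(self, plun):
        self.plun = plun
        self.hosts = set()

    def host_login(self, host):
        self.hosts.add(host)
        self.plun.login("virtualizer")  # one shared back-end login

    def forward_io(self, host, request):
        assert host in self.hosts
        return ("virtualizer", request)  # I/O reaches the PLUN via the single login

plun = DiskArrayPlun(max_logins=1)       # the array permits only one login
v = OffHostVirtualizer(plun)
for h in ("host-110A", "host-110B", "host-110C"):
    v.host_login(h)                      # three hosts share the one login
```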
- FIG. 4 is a block diagram illustrating an embodiment where an off-host virtualizer 180 is configured to map physical storage from within two different physical storage devices 340 A and 340 B to a single VLUN 230 B. That is, off-host virtualizer 180 may be configured to map a first range of physical storage from device 340 A into a first region of mapped blocks 321 A within VLUN 230 B, and map a second range of physical storage from device 340 B into a second region of mapped blocks 321 B within VLUN 230 B.
- the first and second ranges of physical storage may each represent a respective PLUN, such as a disk array, or a respective subset of a PLUN.
- Configuration information indicating the offsets within VLUN 230 B at which mapped blocks 321 A and 321 B are located may be provided by off-host virtualizer 180 to intermediate driver layer 113 using a variety of mechanisms in different embodiments.
- off-host virtualizer 180 may write the configuration information to a designated set of blocks within VLUN 230 , and intermediate driver layer 113 may be configured to read the designated set of blocks, as described above.
- off-host virtualizer 180 may send a message containing the configuration information to host 110 A, either directly (over interconnect 130 A or another network) or through an intermediate coordination server.
- the configuration information may be supplied within a special SCSI mode page (i.e., intermediate driver layer 113 may be configured to read a special SCSI mode page containing configuration information updated by off-host virtualizer 180 ). Combinations of these techniques may be used in some embodiments: for example, in one embodiment off-host virtualizer 180 may send a message to intermediate driver layer 113 requesting that intermediate driver layer read a special SCSI mode page containing the configuration information.
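The first of these mechanisms, writing configuration information to a designated set of emulated blocks within the VLUN, may be sketched as follows. The block index, block size, and JSON encoding are assumptions made for illustration; the patent does not specify an encoding:

```python
# Sketch of publishing mapping configuration through a designated emulated
# block: the off-host virtualizer writes it, the intermediate driver layer
# reads it back.

import json

CONFIG_BLOCK = 8          # hypothetical designated block index
BLOCK_SIZE = 512

blocks = {}               # stands in for the VLUN's emulated block store

def virtualizer_publish_config(mappings):
    """Off-host virtualizer side: serialize mapping offsets into the block."""
    raw = json.dumps(mappings).encode()
    assert len(raw) <= BLOCK_SIZE
    blocks[CONFIG_BLOCK] = raw.ljust(BLOCK_SIZE, b"\0")

def driver_read_config():
    """Intermediate driver layer side: read and decode the designated block."""
    raw = blocks[CONFIG_BLOCK].rstrip(b"\0")
    return json.loads(raw)

virtualizer_publish_config([
    {"vlun_offset": 100, "length": 500, "backing": "340A"},
    {"vlun_offset": 600, "length": 500, "backing": "340B"},
])
cfg = driver_read_config()
```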
- FIG. 5 is a block diagram illustrating an embodiment where off-host virtualizer 180 is configured to map physical storage from within a single physical storage device 340 A to two VLUNs assigned to different hosts 110 A and 110 B.
- a first range of physical storage 555 A of physical storage device 340 A may be mapped to a first range of mapped blocks 321 A within VLUN 230 B assigned to host 110 A.
- a second range of physical storage 555 B of the same physical storage device 340 A may be mapped to a second range of mapped blocks 321 C of VLUN 230 E assigned to host 110 B.
- off-host virtualizer 180 may be configured to prevent unauthorized access to physical storage range 555 A from host 110 B, and to prevent unauthorized access to physical storage 555 B from host 110 A.
- off-host virtualizer 180 may also be configured to provide security for each range of physical storage 555 A and 555 B, e.g., in accordance with a specified security protocol.
- the security protocol may allow I/O operations to a given VLUN 230 (and to its backing physical storage) from only a single host 110 .
- Off-host virtualizer 180 may be configured to maintain access rights information for the hosts 110 and VLUNs 230 in some embodiments, while in other embodiments security tokens may be provided to each host 110 indicating the specific VLUNs to which access from the host is allowed, and the security tokens may be included with I/O requests.
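The security-token variant above can be sketched with a simple keyed-token scheme. The use of HMAC, the secret key, and all names are illustrative assumptions; the patent specifies only that tokens indicate which VLUNs a host may access and accompany I/O requests:

```python
# Sketch of token-based access control: a token binds a (host, VLUN) pair,
# and the virtualizer validates the token carried by each I/O request.

import hashlib
import hmac

SECRET = b"virtualizer-secret"   # hypothetical key held by the virtualizer

def issue_token(host, vlun):
    msg = f"{host}:{vlun}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def authorize_io(host, vlun, token):
    """Reject I/O whose token does not match the (host, VLUN) pair."""
    return hmac.compare_digest(token, issue_token(host, vlun))

tok = issue_token("host-110A", "VLUN-230B")
```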
- off-host virtualizer 180 may be configured to aggregate physical storage into a logical volume, and map the logical volume to an address range within a VLUN 230 .
- a set of two or more physical storage regions may be aggregated into a logical volume.
- a logical volume may also be created from a single contiguous region of physical storage (i.e., the set of physical storage regions being aggregated may minimally consist of a single region).
- Mapping a logical volume through a VLUN may also be termed “volume tunneling” or “logical volume tunneling”.
- in the embodiment depicted in FIG. 6 , off-host virtualizer 180 is configured to aggregate a set of storage regions 655 A of physical storage device 340 A into a logical volume 660 A, and map logical volume 660 A to a range of blocks (designated as mapped volume 365 A in FIG. 6 ) of VLUN 230 B.
- configuration information or metadata associated with the tunneled logical volume 660 A may be provided to intermediate driver layer 113 using any of a variety of mechanisms, such as an extended SCSI mode page, emulated virtual blocks within VLUN 230 A, and/or direct or indirect messages sent from off-host virtualizer 180 to host 110 A.
- logical volume 660 A is shown as being backed by a portion of a single physical storage device 340 A in the depicted embodiment, in other embodiments logical volume 660 A may be aggregated from all the storage within a single physical storage device, or from storage of two or more physical devices. In some embodiments employing multiple layers of virtualization, logical volume 660 A may itself be aggregated from other logical storage devices rather than directly from physical storage devices. In one embodiment, each host 110 (i.e., host 110 B in addition to host 110 A) may be provided access to logical volume 660 A via a separate VLUN, while in another embodiment different sets of logical volumes may be presented to different hosts 110 .
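Volume tunneling as described above may be sketched by composing two translations: volume block to physical block, then VLUN block to volume block. Concatenation of regions is an illustrative aggregation choice (a volume could equally be striped or mirrored), and all names are hypothetical:

```python
# Sketch of volume tunneling: physical regions are aggregated into a
# logical volume, which is mapped at an offset within a VLUN.

class LogicalVolume:
    def __init__(self, regions):
        # regions: list of (device, start_block, length), concatenated in order
        self.regions = regions
        self.size = sum(length for _, _, length in regions)

    def to_physical(self, vol_block):
        """Translate a volume block address to (device, physical_block)."""
        off = vol_block
        for device, start, length in self.regions:
            if off < length:
                return device, start + off
            off -= length
        raise ValueError("beyond end of volume")

class TunneledVolume:
    """A logical volume mapped at an offset within a VLUN (mapped volume 365A)."""
    def __init__(self, volume, vlun_offset):
        self.volume, self.vlun_offset = volume, vlun_offset

    def vlun_to_physical(self, vlun_block):
        return self.volume.to_physical(vlun_block - self.vlun_offset)

vol = LogicalVolume([("340A", 0, 100), ("340A", 500, 100)])  # regions 655A
tun = TunneledVolume(vol, vlun_offset=64)
```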
- FIG. 7 is a block diagram illustrating an embodiment where off-host virtualizer 180 is configured to map multiple logical volumes to a single VLUN 230 .
- off-host virtualizer 180 may be configured to aggregate storage region 755 A from physical storage device 340 A, and physical storage region 755 C from physical storage device 340 C, into a logical volume 760 A, and map logical volume 760 A to a first mapped volume region 765 A of VLUN 230 B.
- off-host virtualizer 180 may also aggregate physical storage region 755 B from physical storage device 340 A into a second logical volume 760 B, and map logical volume 760 B to a second mapped volume region 765 B of VLUN 230 B.
- off-host virtualizer 180 may aggregate any suitable selection of physical storage blocks from one or more physical storage devices 340 into one or more logical volumes, and map the logical volumes to one or more of the pre-generated unmapped VLUNs 230 .
- FIG. 8 is a block diagram illustrating another embodiment, where off-host virtualizer 180 is configured to aggregate storage regions 855 A and 855 B from physical storage device 340 A into logical volumes 860 A and 860 B respectively, and to map each of the two logical volumes to a different VLUN 230 .
- logical volume 860 A may be mapped to a first address range within VLUN 230 B, accessible from host 110 A
- logical volume 860 B may be mapped to a second address range within VLUN 230 E, accessible from host 110 B.
- Off-host virtualizer 180 may further be configured to implement a security protocol to prevent unauthorized access and/or data corruption, similar to the security protocol described above for PLUN tunneling.
- Off-host virtualizer 180 may implement the security protocol at the logical volume level: that is, off-host virtualizer 180 may prevent unauthorized access to logical volumes 860 A (e.g., from host 110 B) and 860 B (e.g., from host 110 A) whose data may be stored within a single physical storage device 340 A.
- off-host virtualizer 180 may be configured to maintain access rights information for logical volumes 860 to which each host 110 has been granted access.
- security tokens may be provided to each host 110 (e.g., by off-host virtualizer 180 , or by an external security server) indicating the specific logical volumes 860 to which access from the host is allowed, and the security tokens may be included with I/O requests.
- in storage environments employing storage area networks (SANs), SAN fabric reconfiguration (e.g., to provide access to a particular PLUN or logical volume from a particular host that did not previously have access to the desired PLUN or logical volume) may require relatively complex operations, such as switch reconfigurations, recabling, rebooting, etc.
- the techniques of PLUN tunneling and volume tunneling, described above, may allow a simplification of SAN reconfiguration operations.
- Storage devices may be more easily shared across multiple hosts 110 , or logically transferred from one host to another, using PLUN tunneling and/or volume tunneling. Allocation and/or provisioning of storage, e.g., from a pool maintained by a coordinating storage allocator, may also be simplified.
- FIG. 9 is a block diagram illustrating an embodiment employing multiple storage networks.
- off-host virtualizer 180 may be configured to access physical storage device 340 A via a first storage network 910 A, and to access physical storage device 340 B via a second storage network 910 B.
- Off-host virtualizer 180 may aggregate storage region 355 A from physical storage device 340 A into logical volume 860 A, and map logical volume 860 A to VLUN 230 B.
- off-host virtualizer 180 may aggregate storage region 355 B from physical storage device 340 B into logical volume 860 B, and map logical volume 860 B to VLUN 230 E.
- Host 110 A may be configured to access VLUN 230 A via a third storage network 910 C, and to access VLUN 230 B via a fourth storage network 910 D.
- Each storage network 910 may be independently configurable: that is, a reconfiguration operation performed within a given storage network 910 may not affect any other storage network 910 .
- a failure or a misconfiguration within a given storage network 910 may also not affect any other independent storage network 910 .
- hosts 110 may include multiple HBAs, allowing each host to access multiple independent storage networks.
- host 110 A may include two HBAs in the embodiment depicted in FIG. 9 , with the first HBA allowing access to storage network 910 C, and the second HBA to storage network 910 D.
- host 110 A may be provided full connectivity to back-end physical storage devices 340 , while still maintaining the advantages of configuration isolation.
- FIG. 9 depicts the use of multiple independent storage networks in conjunction with volume tunneling, in other embodiments multiple independent storage networks may also be used with PLUN tunneling, or with a combination of PLUN and volume tunneling.
- the use of independent storage networks 910 may be asymmetric: e.g., in one embodiment, multiple independent storage networks 910 may be used for front-end connections (i.e., between off-host virtualizer 180 and hosts 110 ), while only a single storage network may be used for back-end connections (i.e., between off-host virtualizer 180 and physical storage devices 340 ).
- any desired interconnection technology and/or protocol may be used to implement storage networks 910 , such as fiber channel, IP-based protocols, etc.
- the interconnect technology or protocol used within a first storage network 910 may differ from the interconnect technology or protocol used within a second storage network 910 .
- volume tunneling may also allow maximum LUN size limitations to be overcome.
- the SCSI protocol may use a 32-bit unsigned integer as a LUN block address, thereby limiting the maximum amount of storage that can be accessed at a single LUN to 2 terabytes (for 512-byte blocks) or 32 terabytes (for 8-kilobyte blocks).
- Volume tunneling may allow an intermediate driver layer 113 to access storage from multiple physical LUNs as a volume mapped to a single VLUN, thereby overcoming the maximum LUN size limitation.
- off-host virtualizer 180 may be configured to aggregate storage regions 1055 A and 1055 B from two physical storage devices 340 A and 340 B into a single logical volume 1060 A, where the size of the volume 1060 A exceeds the allowed maximum LUN size supported by the storage protocol in use at storage devices 340 .
- Off-host virtualizer 180 may further be configured to map logical volume 1060 A to VLUN 230 B, and to make the logical volume accessible to intermediate driver layer 113 at host 110 A.
- off-host virtualizer 180 may provide logical volume metadata to intermediate driver layer 113 , including sufficient information for intermediate driver layer 113 to access a larger address space within VLUN 230 B than the maximum allowed LUN size.
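The size arithmetic and routing involved can be sketched as follows. The 2-terabyte figure follows the text's example of 32-bit block addresses with 512-byte blocks; the device names and concatenation layout are illustrative assumptions:

```python
# Sketch of exceeding a per-LUN size limit via volume tunneling: a volume
# spanning two physical devices presents an address space larger than any
# single back-end LUN is permitted to be.

BLOCK = 512
MAX_LUN_BLOCKS = 2**32                  # 32-bit block addresses, per the text
MAX_LUN_BYTES = MAX_LUN_BLOCKS * BLOCK  # 2 terabytes at 512-byte blocks

# Two back-end regions, each at the per-LUN limit, concatenated into one
# logical volume mapped through a single VLUN:
regions = [("340A", MAX_LUN_BLOCKS), ("340B", MAX_LUN_BLOCKS)]
volume_blocks = sum(n for _, n in regions)

def locate(vol_block):
    """Route a volume block to the backing device and local block address."""
    off = vol_block
    for device, n in regions:
        if off < n:
            return device, off
        off -= n
    raise ValueError("out of range")
```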
- FIG. 11 is a flow diagram illustrating aspects of the operation of system 100 according to one embodiment, where off-host virtualizer 180 is configured to support PLUN tunneling.
- Off-host virtualizer 180 may be configured to present a virtual storage device (e.g., a VLUN) that comprises one or more regions that are initially not mapped to physical storage (block 1110 ), and make the virtual storage device accessible to a host 110 (block 1115 ).
- a first layer of a storage software stack at host 110 such as disk driver layer 114 of FIG. 1 b, may be configured to detect and access the virtual storage device as if the virtual storage device were mapped to physical storage (block 1120 ).
- the off-host virtualizer may be configured to generate operating system metadata indicating the presence of a normal or mapped storage device.
- the metadata may be formatted according to the requirements of the operating system in use at the host 110 , and may be mapped to a region of the virtual storage device.
- the metadata may include a partition table including entries for one or more partitions, where at least one partition corresponds to or maps to one of the regions that are unmapped to physical storage.
- off-host virtualizer 180 may be configured to dynamically map physical storage from one or more back-end physical storage devices 340 (e.g., PLUNs) to an address range within the virtual storage device.
- FIG. 12 is a flow diagram illustrating aspects of the operation of system 100 according to one embodiment, where off-host virtualizer 180 is configured to support volume tunneling.
- the first three blocks depicted in FIG. 12 may represent functionality similar to the first three blocks shown in FIG. 11 . That is, off-host virtualizer 180 may be configured to present a virtual storage device (e.g., a VLUN) comprising one or more regions unmapped to physical storage (block 1210 ) and make the virtual storage device accessible to a host 110 (block 1215 ).
- a first layer of a storage software stack at host 110 , such as disk driver layer 114 of FIG. 1 b, may be configured to detect and access the virtual storage device as if the virtual storage device were mapped to physical storage (e.g., as a LUN) (block 1220 ).
- off-host virtualizer 180 may be configured to aggregate storage at one or more physical storage devices 340 into a logical volume (block 1225 ), and to dynamically map the logical volume to an address range within the previously unmapped virtual storage device (block 1230 ).
- Off-host virtualizer 180 may further be configured to make the mapped portion of the virtual storage device accessible to a second layer of the storage software stack at host 110 (e.g., intermediate driver layer 113 ) (block 1235 ), allowing the second layer to locate the blocks of the logical volume and to perform desired I/O operations on the logical volume.
- off-host virtualizer 180 may be configured to provide logical volume metadata to the second layer to support the I/O operations.
- off-host virtualizer 180 may implement numerous different types of storage functions using block virtualization.
- a virtual block device such as a logical volume may implement device striping, where data blocks may be distributed among multiple physical or logical block devices, and/or device spanning, in which multiple physical or logical block devices may be joined to appear as a single large logical block device.
- virtualized block devices may provide mirroring and other forms of redundant data storage, the ability to create a snapshot or static image of a particular block device at a point in time, and/or the ability to replicate data blocks among storage systems connected through a network such as a local area network (LAN) or a wide area network (WAN), for example.
- virtualized block devices may implement certain performance optimizations, such as load distribution, and/or various capabilities for online reorganization of virtual device structure, such as online data migration between devices.
- one or more block devices may be mapped into a particular virtualized block device, which may be in turn mapped into still another virtualized block device, allowing complex storage functions to be implemented with simple block devices.
- More than one virtualization feature, such as striping and mirroring, may thus be combined within a single virtual block device in some embodiments, creating a logically hierarchical virtual storage device.
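The combination of striping and mirroring in one hierarchical virtual device can be sketched with a two-level block mapping. The two-plex structure, stripe unit size, and all names are illustrative assumptions rather than a layout prescribed by the patent:

```python
# Sketch of a hierarchical virtual device: a mirror of two plexes, each of
# which is itself a striped device over two disks.

STRIPE_BLOCKS = 4   # stripe unit size, in blocks

def stripe(block, columns):
    """Map a block of a striped device to (column, block_within_column)."""
    unit, off = divmod(block, STRIPE_BLOCKS)
    col = unit % len(columns)
    local = (unit // len(columns)) * STRIPE_BLOCKS + off
    return columns[col], local

def mirror_targets(block, plexes, columns_per_plex):
    """A mirrored write goes to the same address on every plex; each plex
    then applies its own striping layer."""
    return [stripe(block, columns_per_plex[p]) for p in plexes]

plexes = ["plex-0", "plex-1"]
columns_per_plex = {
    "plex-0": ["disk-0", "disk-1"],
    "plex-1": ["disk-2", "disk-3"],
}
targets = mirror_targets(5, plexes, columns_per_plex)
```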
- the off-host virtualizer 180 may provide functions such as configuration management of virtualized block devices and distributed coordination of block device virtualization. For example, after a reconfiguration of a logical volume shared by two hosts 110 (e.g., when the logical volume is expanded, or when a new mirror is added to the logical volume), the off-host virtualizer 180 may be configured to distribute metadata or a volume description indicating the reconfiguration to the two hosts 110 .
- the storage stacks at the hosts may be configured to interact directly with various storage devices 340 according to the volume description (i.e., to transform logical I/O requests into physical I/O requests using the volume description).
- Distribution of a virtualized block device as a volume to one or more virtual device clients, such as hosts 110 may be referred to as distributed block virtualization.
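The distribution of a volume description, and its use by a host's storage stack to transform logical I/O into physical I/O, may be sketched as follows. The description format (a dict with a generation number) and the push-style update are illustrative assumptions:

```python
# Sketch of distributed block virtualization: after a reconfiguration (here
# a volume expansion), the off-host virtualizer distributes an updated
# volume description to each host, which uses it to address storage devices.

class Virtualizer:
    def __init__(self):
        self.description = {"generation": 1,
                            "extents": [("340A", 0, 100)]}
        self.subscribers = []

    def reconfigure(self, extents):
        """E.g., the volume is expanded; push the new description to hosts."""
        self.description = {"generation": self.description["generation"] + 1,
                            "extents": extents}
        for host in self.subscribers:
            host.description = self.description

class Host:
    def __init__(self, virtualizer):
        self.description = virtualizer.description
        virtualizer.subscribers.append(self)

    def logical_to_physical(self, block):
        """Transform a logical volume block into a (device, block) target."""
        off = block
        for device, start, length in self.description["extents"]:
            if off < length:
                return device, start + off
            off -= length
        raise ValueError("beyond volume end")

v = Virtualizer()
h1, h2 = Host(v), Host(v)
v.reconfigure([("340A", 0, 100), ("340B", 0, 100)])   # volume expanded
```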
- multiple layers of virtualization may be employed, for example at the host level as well as at an off-host level, such as at a virtualization switch or at a virtualization appliance.
- some aspects of virtualization may be visible to a virtual device consumer such as file system layer 112 , while other aspects may be implemented transparently by the off-host level.
- the virtualization details of one block device (e.g., one volume) may be fully visible to a virtual device consumer, while the virtualization details of another block device may be partially or entirely transparent to the virtual device consumer.
- a virtualizer, such as off-host virtualizer 180 , may be configured to distribute all defined logical volumes to each virtual device consumer present within the system; such embodiments may be referred to as symmetric distributed block virtualization systems.
- specific volumes may be distributed only to respective virtual device consumers or hosts, such that at least one volume is not common to two virtual device consumers.
- Such embodiments may be referred to as asymmetric distributed block virtualization systems.
- off-host virtualizer 180 may be any type of device, external to host 110 , that is capable of providing the virtualization functionality, including PLUN and volume tunneling, described above.
- off-host virtualizer 180 may include a virtualization switch, a virtualization appliance, a special additional host dedicated to providing block virtualization, or an embedded system configured to use application specific integrated circuit (ASIC) or field-programmable gate array (FPGA) technology to provide block virtualization functionality.
- off-host block virtualization may be provided by a collection of cooperating devices, such as two or more virtualizing switches, instead of a single device.
- An off-host virtualizer 180 may incorporate one or more processors, as well as volatile and/or non-volatile memory. In some embodiments, configuration information associated with virtualization may be maintained at a database separate from the off-host virtualizer 180 , and may be accessed by off-host virtualizer over a network. In one embodiment, an off-host virtualizer may be programmable and/or configurable. Numerous other configurations of off-host virtualizer 180 are possible and contemplated.
- a host 110 may be any computer system, such as a server comprising one or more processors and one or more memories, capable of supporting the storage software stack described above. Any desired operating system may be used at a host 110 , including various versions of Microsoft Windows™, Solaris™ from Sun Microsystems, various versions of Linux, other operating systems based on UNIX, and the like.
- the intermediate driver layer 113 may be included within a volume manager in some embodiments.
- FIG. 13 is a block diagram illustrating a computer-accessible medium 1300 comprising virtualization software 1310 capable of providing the functionality of off-host virtualizer 180 and block storage software stack 140 B described above.
- Virtualization software 1310 may be provided to a computer system using a variety of computer-accessible media including electronic media (e.g., flash memory), magnetic media (e.g., disk or tape), optical storage media such as CD-ROM, volatile or non-volatile memory media such as RAM (e.g., SDRAM, RDRAM, SRAM, etc.), as well as transmission media or signals such as electrical, electromagnetic or digital signals, conveyed via a communication medium such as a network and/or a wireless link.
Abstract
Description
- This application is a continuation-in-part of U.S. patent application Ser. No. 10/722,614, entitled “SYSTEM AND METHOD FOR EMULATING OPERATING SYSTEM METADATA TO PROVIDE CROSS-PLATFORM ACCESS TO STORAGE VOLUMES”, filed Nov. 26, 2003.
- 1. Field of the Invention
- This invention relates to computer systems and, more particularly, to off-host virtualization within storage environments.
- 2. Description of the Related Art
- Many business organizations and governmental entities rely upon applications that access large amounts of data, often exceeding a terabyte of data, for mission-critical applications. Often such data is stored on many different storage devices, which may be heterogeneous in nature, including many different types of devices from many different manufacturers.
- Configuring individual applications that consume data, or application server systems that host such applications, to recognize and directly interact with each different storage device that may possibly be encountered in a heterogeneous storage environment would be increasingly difficult as the environment scaled in size and complexity. Therefore, in some storage environments, specialized storage management software and hardware may be used to provide a more uniform storage model to storage consumers. Such software and hardware may also be configured to present physical storage devices as virtual storage devices (e.g., virtual SCSI disks) to computer hosts, and to add storage features not present in individual storage devices to the storage model. For example, features to increase fault tolerance, such as data mirroring, snapshot/fixed image creation, or data parity, as well as features to increase data access performance, such as disk striping, may be implemented in the storage model via hardware or software. The added storage features may be referred to as storage virtualization features, and the software and/or hardware providing the virtual storage devices and the added storage features may be termed “virtualizers” or “virtualization controllers”. Virtualization may be performed within computer hosts, such as within a volume manager layer of a storage software stack at the host, and/or in devices external to the host, such as virtualization switches or virtualization appliances. Such external devices providing virtualization may be termed “off-host” virtualizers, and may be utilized in order to offload processing required for virtualization from the host. Off-host virtualizers may be connected to the external physical storage devices for which they provide virtualization functions via a variety of interconnects, such as Fiber Channel links, Internet Protocol (IP) networks, and the like.
- Traditionally, storage software within a computer host consists of a number of layers, such as a file system layer, a disk driver layer, etc. Some of the storage software layers may form part of the operating system in use at the host, and may differ from one operating system to another. When accessing a physical disk, a layer such as the disk driver layer for a given operating system may be configured to expect certain types of configuration information for the disk to be laid out in a specific format, for example in a header (located at the first few blocks of the disk) containing disk partition layout information. The storage stack software layers used to access local physical disks may also be utilized to access external storage devices presented as virtual storage devices by off-host virtualizers. Therefore, it may be desirable for an off-host virtualizer to provide configuration information for the virtual storage devices in a format expected by the storage stack software layers. In addition, it may be desirable for the off-host virtualizer to implement a technique to flexibly and dynamically map storage within external physical storage devices to the virtual storage devices presented to the host storage software layers, e.g., without requiring a reboot of the host.
- Various embodiments of a system and method for dynamic logical unit (LUN) mapping are disclosed. According to a first embodiment, a system may include a first host and an off-host virtualizer, such as a virtualization switch or a virtualization appliance. The off-host virtualizer may be configured to present a virtual storage device, such as a virtual LUN, that comprises one or more regions that are initially unmapped to physical storage, and make the virtual storage device accessible to the first host. The first host may include a storage software stack including a first layer, such as a disk driver layer, configured to detect and access the virtual storage device as if the virtual storage device were mapped to physical storage. A number of different techniques may be used by the off-host virtualizer in various embodiments to present the virtual storage device as if it were mapped to physical storage. For example, in one embodiment, the off-host virtualizer may be configured to generate metadata formatted according to a requirement of an operating system in use at the host and map a portion of the virtual storage device to the metadata, where the metadata makes the virtual storage device appear to be mapped to physical storage. The recognition of the virtual storage device as a “normal” storage device that is backed by physical storage may occur during a system initialization stage prior to an initiation of production I/O operations. In this way, an unmapped or “blank” virtual LUN may be prepared for subsequent dynamic mapping by the off-host virtualizer. The unmapped LUN may be given an initial size equal to the maximum allowed LUN size supported by the operating system in use at the host, so that the size of the virtual LUN may not require modification after initialization. 
In some embodiments, multiple virtual LUNs may be pre-generated for use at a single host, for example in order to isolate storage for different applications, or to accommodate limits on maximum LUN sizes.
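As a concrete illustration of the metadata-generation step described above, the following sketch builds an emulated header that makes an otherwise unmapped virtual LUN appear to be an initialized disk. The byte layout, magic value, and block size here are invented for the example; real formats are operating-system specific (e.g., a partition table or VTOC):

```python
import struct

# Hypothetical on-disk layout: a 4-byte magic number followed by one
# partition entry (start block, length in blocks). Real formats are
# operating-system specific; this is only an illustration.
MAGIC = 0x1DEA        # illustrative magic number, not any real OS's value
BLOCK_SIZE = 512

def make_emulated_header(total_blocks, header_blocks=1):
    """Build a metadata header that makes an unmapped virtual LUN look
    like an initialized disk: it advertises one partition spanning the
    (not yet mapped) remainder of the device."""
    part_start = header_blocks
    part_len = total_blocks - header_blocks
    payload = struct.pack(">IQQ", MAGIC, part_start, part_len)
    # Pad to whole blocks so the header occupies header_blocks blocks.
    return payload.ljust(header_blocks * BLOCK_SIZE, b"\x00")

def parse_emulated_header(header):
    """What a driver-side consumer of this hypothetical format would do."""
    magic, start, length = struct.unpack_from(">IQQ", header)
    if magic != MAGIC:
        raise ValueError("metadata not in expected format")
    return start, length

header = make_emulated_header(total_blocks=1 << 21)  # 1 GiB at 512 B/block
start, length = parse_emulated_header(header)
```

A host-side layer reading this header would see a device with one partition covering the whole LUN, even though no physical storage is mapped behind it yet.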
- In one embodiment, the system may also include two or more physical storage devices, and the off-host virtualizer may be configured to dynamically map physical storage from a first and a second physical storage device to a respective range of addresses within the first virtual storage device. For example, the off-host virtualizer may be configured to perform an N-to-1 mapping between the physical storage devices (which may be called physical LUNs) and virtual LUNs, allowing storage in the physical storage devices to be accessed from the host via the pre-generated virtual LUNs. Configuration information regarding the location of the first and/or the second address ranges within the virtual LUN (i.e., the regions of the virtual LUN that are mapped to the physical storage devices) may be passed from the off-host virtualizer to a second layer of the storage stack at the host (e.g., an intermediate driver layer above a disk driver layer) using a variety of different mechanisms. Such mechanisms may include, for example, the off-host virtualizer writing the configuration information to certain special blocks within the virtual LUN, sending messages to the host over a network, or special extended SCSI mode pages. In one embodiment, two or more different ranges of physical storage within a single physical storage device may be mapped to corresponding pre-generated virtual storage devices such as virtual LUNs and presented to corresponding hosts. That is, the off-host virtualizer may allow each host of a plurality of hosts to access a respective portion of a physical storage device through a respective virtual LUN. In such embodiments, the off-host virtualizer may also be configured to implement a security policy isolating the ranges of physical storage within the shared physical storage device; i.e., to allow a host to access only those regions to which the host has been granted access, and to prevent unauthorized accesses.
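The dynamic mapping described above amounts to maintaining, per virtual LUN, a table of extents that translate virtual block addresses to (physical LUN, offset) pairs at runtime. The sketch below shows one possible in-memory form of such a table; the class and device names are assumptions for illustration, not the patent's implementation:

```python
import bisect

class VirtualLUN:
    """Minimal sketch of dynamic LUN mapping: address ranges of a
    pre-generated virtual LUN are bound to (physical LUN, offset) pairs
    at runtime, without recreating the virtual LUN."""
    def __init__(self, size_blocks):
        self.size = size_blocks
        self.extents = []  # sorted list of (vlun_start, length, plun, plun_start)

    def map_range(self, vlun_start, length, plun, plun_start):
        # A real implementation would reject overlaps with existing extents.
        bisect.insort(self.extents, (vlun_start, length, plun, plun_start))

    def translate(self, block):
        """Translate a virtual block address to (plun, physical block)."""
        i = bisect.bisect_right(self.extents, (block, float("inf"))) - 1
        if i >= 0:
            start, length, plun, pstart = self.extents[i]
            if start <= block < start + length:
                return plun, pstart + (block - start)
        raise IOError("block %d is not mapped to physical storage" % block)

vlun = VirtualLUN(size_blocks=1 << 30)
vlun.map_range(0, 1000, "PLUN-A", 5000)   # first physical device
vlun.map_range(1000, 2000, "PLUN-B", 0)   # second physical device
```

An I/O to an unmapped region fails, while I/Os to mapped regions are redirected to the backing physical LUN; new extents can be added later without disturbing the virtual LUN seen by the host.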
- In another embodiment, the off-host virtualizer may be further configured to aggregate storage within one or more physical storage devices into a logical volume, map the logical volume to a range of addresses within a pre-generated virtual storage device, and make the logical volume accessible to the second layer of the storage stack (e.g., by providing logical volume metadata to the second layer), allowing I/O operations to be performed on the logical volume. Storage from a single physical storage device may be aggregated into any desired number of different logical volumes, and any desired number of logical volumes may be mapped to a single virtual storage device or virtual LUN. The off-host virtualizer may be further configured to provide volume-level security, i.e., to prevent unauthorized access from a host to a logical volume, even when the physical storage corresponding to the logical volume is part of a shared physical storage device. In addition, physical storage from any desired number of physical storage devices may be aggregated into a logical volume using a virtual LUN, thereby allowing a single volume to extend over a larger address range than the maximum allowed size of a single physical LUN. The virtual storage devices or virtual LUNs may be distributed among a number of independent front-end storage networks, such as fiber channel fabrics, and the physical storage devices backing the logical volumes may be distributed among a number of independent back-end storage networks. For example, a first host may access its virtual storage devices through a first storage network, and a second host may access its virtual storage devices through a second storage network independent from the first (that is, reconfigurations and/or failures in the first storage network may not affect the second storage network). 
Similarly, the off-host virtualizer may access a first physical storage device through a third storage network, and a second physical storage device through a fourth storage network. The ability of the off-host virtualizer to dynamically map storage across pre-generated virtual storage devices distributed among independent storage networks may support a robust and flexible storage environment.
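The security policies mentioned above (restricting each host to the physical ranges or volumes it has been granted, even on a shared device) can be illustrated with a simple grant table keyed by host; the names and granularity below are illustrative assumptions:

```python
# Sketch of the access-isolation policy described above: the off-host
# virtualizer tracks which host may touch which range of a shared
# physical device, and rejects accesses outside granted ranges.
class RangeSecurityPolicy:
    def __init__(self):
        self.grants = {}  # host -> list of (device, start, length)

    def grant(self, host, device, start, length):
        self.grants.setdefault(host, []).append((device, start, length))

    def check(self, host, device, block):
        """True only if this host was granted a range covering the block."""
        return any(d == device and s <= block < s + l
                   for d, s, l in self.grants.get(host, []))

policy = RangeSecurityPolicy()
policy.grant("hostA", "dev0", 0, 1000)      # first host's range (illustrative)
policy.grant("hostB", "dev0", 1000, 1000)   # second host's range (illustrative)
```

Each host sees only its own virtual LUN; the policy check is what prevents one host's I/O from reaching another host's portion of the shared device.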
-
FIG. 1 a is a block diagram illustrating one embodiment of a computer system. -
FIG. 1 b is a block diagram illustrating an embodiment of a system configured to utilize off-host block virtualization. -
FIG. 2 a is a block diagram illustrating the addition of operating-system specific metadata to a virtual logical unit (LUN) encapsulating a source volume, according to one embodiment. -
FIG. 2 b is a block diagram illustrating an example of an unmapped virtual LUN according to one embodiment. -
FIG. 3 is a block diagram illustrating an embodiment including an off-host virtualizer configured to create a plurality of unmapped virtual LUNs. -
FIG. 4 is a block diagram illustrating an embodiment where an off-host virtualizer is configured to map physical storage from within two different physical storage devices to a single virtual LUN. -
FIG. 5 is a block diagram illustrating an embodiment where an off-host virtualizer is configured to map physical storage from within a single physical storage device to two virtual LUNs assigned to different hosts. -
FIG. 6 is a block diagram illustrating an embodiment where an off-host virtualizer is configured to aggregate storage of a physical storage device into a logical volume and map the logical volume to a range of blocks of a virtual LUN. -
FIG. 7 is a block diagram illustrating an embodiment where an off-host virtualizer is configured to map multiple logical volumes to a single virtual LUN. -
FIG. 8 is a block diagram illustrating an embodiment where an off-host virtualizer is configured to aggregate storage from a physical storage device into two logical volumes, and to map each of the two logical volumes to a different virtual LUN. -
FIG. 9 is a block diagram illustrating an embodiment employing multiple storage networks. -
FIG. 10 is a block diagram illustrating an embodiment where an off-host virtualizer is configured to aggregate storage from two physical storage devices into a single logical volume. -
FIG. 11 is a flow diagram illustrating aspects of the operation of a system according to one embodiment where an off-host virtualizer is configured to support physical LUN tunneling. -
FIG. 12 is a flow diagram illustrating aspects of the operation of a system according to one embodiment where an off-host virtualizer is configured to support volume tunneling. -
FIG. 13 is a block diagram illustrating a computer-accessible medium. - While the invention is susceptible to various modifications and alternative forms, specific embodiments are shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims.
-
FIG. 1 a is a block diagram illustrating a computer system 100 according to one embodiment. System 100 includes a host 110 coupled to a physical block device 120 via an interconnect 130. Host 110 includes a traditional block storage software stack 140A that may be used to perform I/O operations on a physical block device 120 via interconnect 130. - Generally speaking, a
physical block device 120 may comprise any hardware entity that provides a collection of linearly addressed data blocks that can be read or written. For example, in one embodiment a physical block device may be a single disk drive configured to present all of its sectors as an indexed array of blocks. In another embodiment the physical block device may be a disk array device, or a disk configured as part of a disk array device. It is contemplated that any suitable type of storage device may be configured as a block device, such as fixed or removable magnetic media drives (e.g., hard drives, floppy or Zip-based drives), writable or read-only optical media drives (e.g., CD or DVD), tape drives, solid-state mass storage devices, or any other type of storage device. The interconnect 130 may utilize any desired storage connection technology, such as various variants of the Small Computer System Interface (SCSI) protocol, Fiber Channel, Internet Protocol (IP), Internet SCSI (iSCSI), or a combination of such storage networking technologies. The block storage software stack 140A may comprise layers of software within an operating system at host 110, and may be accessed by a client application to perform I/O (input/output) on a desired physical block device 120. - In the traditional block storage stack, a client application may initiate an I/O request, for example as a request to read a block of data at a specified offset within a file. The request may be received (e.g., in the form of a read() system call) at the
file system layer 112, translated into a request to read a block within a particular device object (i.e., a software entity representing a storage device), and passed to the disk driver layer 114. The disk driver layer 114 may then select the targeted physical block device 120 corresponding to the disk device object, and send a request to an address at the targeted physical block device over the interconnect 130 using the interconnect-dependent I/O driver layer 116. For example, a host bus adapter (such as a SCSI HBA) may be used to transfer the I/O request, formatted according to the appropriate storage protocol (e.g., SCSI), to a physical link of the interconnect (e.g., a SCSI bus). At the physical block device 120, an interconnect-dependent firmware layer 122 may receive the request, perform the desired physical I/O operation at the physical storage layer 124, and send the results back to the host over the interconnect. The results (e.g., the desired blocks of the file) may then be transferred through the various layers of storage stack 140A in reverse order (i.e., from the interconnect-dependent I/O driver to the file system) before being passed to the requesting client application. - In some operating systems, the storage devices addressable from a
host 110 may be detected only during system initialization, e.g., during boot. For example, an operating system may employ a four-level hierarchical addressing scheme of the form <“hba”, “bus”, “target”, “lun”> for SCSI devices, including a SCSI HBA identifier (“hba”), a SCSI bus identifier (“bus”), a SCSI target identifier (“target”), and a logical unit identifier (“lun”), and may be configured to populate a device database with addresses for available SCSI devices during boot. Host 110 may include multiple SCSI HBAs, and a different SCSI adapter identifier may be used for each HBA. The SCSI adapter identifiers may be numbers issued by the operating system kernel, for example based on the physical placement of the HBA cards relative to each other (i.e., based on slot numbers used for the adapter cards). Each HBA may control one or more SCSI buses, and a unique SCSI bus number may be used to identify each SCSI bus within an HBA. During system initialization, or in response to special configuration commands, the HBA may be configured to probe each bus to identify the SCSI devices currently attached to the bus. Depending on the version of the SCSI protocol in use, the number of devices (such as disks or disk arrays) that may be attached on a SCSI bus may be limited, e.g., to 15 devices excluding the HBA itself. SCSI devices that may initiate I/O operations, such as the HBA, are termed SCSI initiators, while devices where the physical I/O may be performed are called SCSI targets. Each target on the SCSI bus may identify itself to the HBA in response to the probe. In addition, each target device may also accommodate up to a protocol-specific maximum number of “logical units” (LUNs) representing independently addressable units of physical storage within the target device, and may inform the HBA of the logical unit identifiers. A target device may contain a single LUN (e.g., a LUN may represent an entire disk or even a disk array) in some embodiments. 
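The four-level <hba, bus, target, lun> addressing scheme and boot-time probing described above might be modeled as follows; the probe results below are invented for the example:

```python
from collections import namedtuple

# Illustrative model of the four-level <hba, bus, target, lun> addressing
# scheme described above; not any specific operating system's code.
ScsiAddress = namedtuple("ScsiAddress", "hba bus target lun")

def build_device_database(hba_count, buses_per_hba, probe_fn):
    """Populate a device database the way an OS might at boot: for each
    HBA and bus, ask which targets respond and which LUNs each reports."""
    db = []
    for hba in range(hba_count):
        for bus in range(buses_per_hba):
            for target, luns in probe_fn(hba, bus).items():
                for lun in luns:
                    db.append(ScsiAddress(hba, bus, target, lun))
    return db

# Hypothetical probe results: one bus with two responding targets, where
# target 3 exposes two logical units. (A SCSI-2 style bus would allow up
# to 15 devices besides the initiating HBA itself.)
def fake_probe(hba, bus):
    return {0: [0], 3: [0, 1]}

db = build_device_database(hba_count=1, buses_per_hba=1, probe_fn=fake_probe)
```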
The SCSI device configuration information, such as the target device identifiers and LUN identifiers, may be passed to the disk driver layer 114 by the HBAs. When issuing an I/O request, disk driver layer 114 may utilize the hierarchical SCSI address described above. - When accessing a LUN,
disk driver layer 114 may expect to see OS-specific metadata at certain specific locations within the LUN. For example, in many operating systems, the disk driver layer 114 may be responsible for implementing logical partitioning (i.e., subdividing the space within a physical disk into partitions, where each partition may be used for a smaller file system). Metadata describing the layout of a partition (e.g., a starting block offset for the partition within the LUN, and the length of a partition) may be stored in an operating-system dependent format, and in an operating system-dependent location, such as in a header or a trailer, within a LUN. In the Solaris™ operating system from Sun Microsystems, for example, a virtual table of contents (VTOC) structure may be located in the first partition of a disk volume, and a copy of the VTOC may also be located in the last two cylinders of the volume. In addition, the operating system metadata may include cylinder alignment and/or cylinder size information, as well as boot code if the volume is bootable. Operating system metadata for various versions of Microsoft Windows™ may include a “magic number” (a special number or numbers that the operating system expects to find, usually at or near the start of a disk), subdisk layout information, etc. If the disk driver layer 114 does not find the metadata in the expected location and in the expected format, the disk driver layer may not be able to perform I/O operations at the LUN. - The relatively simple traditional storage software stack 140A has been enhanced over time to help provide advanced storage features, most significantly by introducing block virtualization layers. In general, block virtualization refers to a process of creating or aggregating logical or virtual block devices out of one or more underlying physical or logical block devices, and making the virtual block devices accessible to block device consumers for storage operations. 
For example, in one embodiment of block virtualization, storage within multiple physical block devices, e.g. in a fiber channel storage area network (SAN), may be aggregated and presented to a host as a single virtual storage device such as a virtual LUN (VLUN), as described below in further detail. In another embodiment, one or more layers of software may rearrange blocks from one or more block devices, such as disks, and add various kinds of functions. The resulting rearranged collection of blocks may then be presented to a storage consumer, such as an application or a file system, as one or more aggregated devices with the appearance of one or more basic disk drives. That is, the more complex structure resulting from rearranging blocks and adding functionality may be presented as if it were one or more simple arrays of blocks, or logical block devices. In some embodiments, multiple layers of virtualization may be implemented. That is, one or more block devices may be mapped into a particular virtualized block device, which may be in turn mapped into still another virtualized block device, allowing complex storage functions to be implemented with simple block devices. Further details on block virtualization, and advanced storage features supported by block virtualization, are provided below.
- Block virtualization may be implemented at various places within a storage stack and the associated storage environment, in both hardware and software. For example, a block virtualization layer in the form of a volume manager, such as the VERITAS Volume Manager™ from VERITAS Software Corporation, may be added between the
disk driver layer 114 and the file system layer 112. In some storage environments, virtualization functionality may be added to host bus adapters, i.e., in a layer between the interconnect-dependent I/O driver layer 116 and interconnect 130. Block virtualization may also be performed outside the host 110, e.g., in a virtualization appliance or a virtualizing switch, which may form part of the interconnect 130. Such external devices providing block virtualization (i.e., devices that are not incorporated within host 110) may be termed off-host virtualizers or off-host virtualization controllers. In some storage environments, block virtualization functionality may be implemented by an off-host virtualizer in cooperation with a host-based virtualizer. That is, some block virtualization functionality may be performed off-host, and other block virtualization features may be implemented at the host. - While additional layers may be added to the storage software stack 140A, it is generally difficult to remove or completely bypass existing storage software layers of operating systems. Therefore, off-host virtualizers may typically be implemented in a manner that allows the existing storage software layers to continue to operate, even when the storage devices being presented to the operating system are virtual rather than physical, and remote rather than local. For example, because
disk driver layer 114 expects to deal with SCSI LUNs when performing I/O operations, an off-host virtualizer may present a virtualized storage device to the disk driver layer as a virtual LUN. In some embodiments, as described below in further detail, an off-host virtualizer may encapsulate, or emulate the metadata for, a LUN when providing a host 110 access to a virtualized storage device. In addition, as also described below, one or more software modules or layers may be added to storage stack 140A to support additional forms of virtualization using virtual LUNs. -
FIG. 1 b is a block diagram illustrating an embodiment of system 100 configured to utilize off-host block virtualization. As shown, the system may include an off-host virtualizer 180, such as a virtualization switch or a virtualization appliance, which may be included within interconnect 130 linking host 110 to physical block device 120. Host 110 may comprise an enhanced storage software stack 140B, which may include an intermediate driver layer 113 between the disk driver layer 114 and file system layer 112. In one embodiment, off-host virtualizer 180 may be configured to present a virtual storage device (e.g., a virtual LUN or VLUN) that includes one or more regions that are not initially mapped to physical storage to disk driver layer 114, using a technique (such as metadata emulation) that allows the disk driver layer to detect and access the virtual storage device as if it were mapped to physical storage. After the virtual storage device has been detected, off-host virtualizer 180 may map storage within physical block device 120, or multiple physical block devices 120, into the virtual storage device. The back-end storage within a physical block device 120 that is mapped to a virtual LUN may be termed a “physical LUN (PLUN)” in the subsequent description. In another embodiment, off-host virtualizer 180 may be configured to aggregate storage within one or more physical block devices 120 as one or more logical volumes, and map the logical volumes within the address space of a virtual LUN presented to host 110. Off-host virtualizer 180 may further be configured to make the portions of the virtual LUN that are mapped to the logical volumes accessible to intermediate driver layer 113. 
For example, in some embodiments, off-host virtualizer 180 may be configured to provide metadata or configuration information on the logical volumes to intermediate driver layer 113, allowing intermediate driver layer 113 to locate the blocks of the logical volumes and perform desired I/O operations on the logical volumes located within the virtual LUN on behalf of clients such as file system layer 112 or other applications. File system layer 112 and applications (such as database management systems) configured to utilize intermediate driver layer 113 and lower layers of storage stack 140B may be termed “virtual storage clients” or “virtual storage consumers” herein. While off-host virtualizer 180 is shown within interconnect 130 in the embodiment depicted in FIG. 1 b, it is noted that in other embodiments, off-host virtualization may also be provided within physical block device 120 (e.g., by a virtualization layer between physical storage layer 124 and firmware layer 122), or at another device outside interconnect 130. - As described above, in some embodiments,
disk driver layer 114 may expect certain operating system-specific metadata to be present at operating-system specific locations or offsets within a LUN. When presenting a virtual LUN to a host 110, therefore, in such embodiments off-host virtualizer 180 may logically insert the expected metadata at the expected locations. FIG. 2 a is a block diagram illustrating the addition of operating-system specific metadata to a virtual LUN 210 encapsulating a source volume 205, according to one embodiment. As shown, the source volume 205 consists of N blocks, numbered 0 through (N-1). The virtual LUN 210 may include two regions of inserted metadata: a header 215 containing H blocks of metadata, and a trailer 225 including T blocks of metadata. Between the header 215 and the trailer 225, blocks 220 of the virtual LUN 210 may be mapped to the source volume 205, thereby making the virtual LUN 210 a total of (H+N+T) blocks long (i.e., the virtual LUN may contain blocks numbered 0 through (H+N+T−1)). Operating-system specific metadata included in header 215 and/or trailer 225 may be used by disk driver layer 114 to recognize the virtual LUN 210 as a “normal” storage device (i.e., a storage device that is mapped to physical storage). In some embodiments, additional configuration information or logical volume metadata may also be included within header 215 and/or trailer 225. The lengths of header 215 and trailer 225, as well as the format and content of the metadata, may vary with the operating system in use at host 110. It is noted that in some embodiments, the metadata may require only a header 215, or only a trailer 225, rather than both a header and a trailer; and that in other embodiments, the metadata may be stored at any arbitrary offset within the LUN. In some embodiments, the metadata may include a partition table with one or more partition entries, where at least one partition may correspond to a region that is unmapped to physical storage. 
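The block arithmetic implied by FIG. 2 a can be stated directly: virtual LUN block b falls in the header for b < H, maps to source volume block b − H for H ≤ b < H + N, and falls in the trailer otherwise. A sketch, with illustrative sizes:

```python
# Translation implied by the (H + N + T)-block layout described above.
# The sizes are illustrative; real header/trailer lengths are OS-specific.
H, N, T = 16, 1 << 20, 16   # header, mapped source volume, trailer (blocks)

def vlun_to_volume(b):
    """Classify virtual LUN block b and return its region-relative offset."""
    if b < 0 or b >= H + N + T:
        raise IndexError("block outside virtual LUN of %d blocks" % (H + N + T))
    if b < H:
        return ("header", b)           # emulated metadata header
    if b < H + N:
        return ("volume", b - H)       # mapped to source volume block b - H
    return ("trailer", b - (H + N))    # emulated metadata trailer
```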
The location (e.g., the offset of the metadata within the virtual storage device) and the contents of the metadata generated by off-host virtualizer 180 may, in one embodiment, indicate to the disk driver layer that a corresponding storage device has been successfully initialized according to the operating system in use. - The metadata inserted within
virtual LUN 210 may be stored in persistent storage, e.g., within some blocks of physical block device 120 or at off-host virtualizer 180, in some embodiments, and logically concatenated with the mapped blocks 220. In other embodiments, the metadata may be generated dynamically, whenever a host 110 accesses the virtual LUN 210. In some embodiments, the metadata may be generated by an external agent other than off-host virtualizer 180. The external agent may be capable of emulating metadata in a variety of formats for different operating systems, including operating systems that may not have been known when the off-host virtualizer 180 was deployed. In one embodiment, off-host virtualizer 180 may be configured to support more than one operating system; i.e., off-host virtualizer may logically insert metadata blocks corresponding to any one of a number of different operating systems when presenting virtual LUN 210 to a host 110, thereby allowing hosts with different operating systems to share access to a storage device 120. - While logical volumes such as
source volume 205 may typically be created and dynamically reconfigured (e.g., grown or shrunk, imported to hosts 110 or exported from hosts 110) efficiently, similar configuration operations on LUNs may typically be fairly slow. Some LUN reconfiguration operations may be at least partially asynchronous, and may have unbounded completion times and/or ambiguous failure states. On many operating systems, LUN reconfiguration may only be completed after a system reboot; for example, a newly created physical or virtual LUN may not be detected by the operating system without a reboot. In order to be able to flexibly map logical volumes to virtual LUNs, while avoiding the problems associated with LUN reconfigurations, therefore, it may be advisable to generate unmapped virtual LUNs (e.g., to create operating system metadata for virtual LUNs that are not initially mapped to any physical LUNs or logical volumes) and pre-assign the unmapped virtual LUNs to hosts 110 as part of an initialization process. The initialization process may be completed prior to performing storage operations on the virtual LUNs on behalf of applications. During the initialization process (which may include a reboot of the system in some embodiments) the layers of the software storage stack 140B may be configured to detect the existence of the virtual LUNs as addressable storage devices. Subsequent to the initialization, off-host virtualizer 180 may dynamically map physical LUNs and/or logical volumes to the virtual LUNs (e.g., by modifying portions of the operating system metadata), as described below in further detail. The term “dynamic mapping”, as used herein, refers to a mapping of a virtual storage device (such as a VLUN) that is performed by modifying one or more blocks of metadata, and/or by communicating via one or more messages to a host 110, without requiring a reboot of the host 110 to which the virtual storage device is presented. -
FIG. 2 b is a block diagram illustrating an example of an unmapped virtual LUN 230 according to one embodiment. As shown, the unmapped virtual LUN 230 may include an operating system metadata header 215 and an operating system metadata trailer 225, as well as a region of unmapped blocks 235. In some embodiments, the size of the region of unmapped blocks (X blocks in the depicted example) may be set to a maximum permissible LUN or volume size supported by an operating system, so that any subsequent mapping of a volume or physical LUN to the virtual LUN does not require an expansion of the size of the virtual LUN. In one alternative embodiment, the unmapped virtual LUN may consist of only the emulated metadata (e.g., header 215 and/or trailer 225), and the size of the virtual LUN may be increased dynamically when the volume or physical LUN is mapped. In such embodiments, disk driver layer 114 may have to modify some of its internal data structures when the virtual LUN is expanded, and may have to re-read the emulated metadata in order to do so. Off-host virtualizer 180 may be configured to send a metadata change notification message to disk driver layer 114 in order to trigger the re-reading of the metadata. -
FIG. 3 is a block diagram illustrating an embodiment including an off-host virtualizer 180 configured to create a plurality of unmapped virtual LUNs (VLUNs) 230. As shown, more than one VLUN may be associated with a single host 110. For example, off-host virtualizer 180 may assign one or more unmapped VLUNs to each host 110. Hosts 110 may be coupled to off-host virtualizer 180 via interconnect 130A, and off-host virtualizer 180 may be coupled to storage devices 340 via interconnect 130B. Storage devices 340 may include physical block devices 120 as well as virtual block devices (e.g., in embodiments employing multiple layers of virtualization, as described below). Off-host virtualizer 180 may be configured to dynamically map physical and/or virtual storage from storage devices 340 to the unmapped virtual LUNs.
- After VLUN 230 has been recognized by disk driver layer 114 (e.g., as a result of the generation of operating system metadata such as a partition table in an expected format and location), a block at any offset within the VLUN address space may be accessed by the
disk driver layer 114, and thus by any other layer above the disk driver layer. For example, intermediate driver layer 113 may be configured to communicate with off-host virtualizer 180 by reading from, and/or writing to, a designated set of blocks emulated within VLUN 230. Such designated blocks may provide a mechanism for off-host virtualizer 180 to provide intermediate driver layer 113 with configuration information associated with logical volumes or physical LUNs mapped to VLUN 230 in some embodiments. - In one embodiment, off-
host virtualizer 180 may be configured to map storage from a back-end physical LUN directly to a VLUN 230, without any additional virtualization (i.e., without creating a logical volume). Such a technique of mapping a PLUN to a VLUN 230 may be termed “PLUN tunneling”. Each PLUN may be mapped to a corresponding VLUN 230 (i.e., a 1-to-1 mapping of PLUNs to VLUNs may be implemented by off-host virtualizer 180) in some embodiments. In other embodiments, as described below in conjunction with the description of FIG. 4, storage from multiple PLUNs may be mapped into subranges of a given VLUN 230. PLUN tunneling may allow the off-host virtualizer 180 to act as an isolation layer between VLUNs 230 (the storage entities directly accessible to hosts 110) and back-end PLUNs, allowing the off-host virtualizer to hide details related to physical storage protocol implementation from the hosts. In one implementation, for example, the back-end PLUNs may implement a different version of a storage protocol (e.g., SCSI-3) than the version seen by hosts 110 (e.g., SCSI-2), and the off-host virtualizer may provide any needed translation between the two versions. In another implementation, off-host virtualizer 180 may be configured to implement a cooperative access control mechanism for the back-end PLUNs, and the details of the mechanism may remain hidden from the hosts 110. - In addition, off-
host virtualizer 180 may also be configured to increase the level of data sharing using PLUN tunneling. Disk array devices often impose limits on the total number of concurrent “logins”, i.e., the total number of entities that may access a given disk array device. In a storage environment employing PLUN tunneling for disk arrays (i.e., where the PLUNs are disk array devices), off-host virtualizers 180 may allow multiple hosts to access the disk arrays through a single login. That is, for example, multiple hosts 110 may log in to the off-host virtualizer 180, while the off-host virtualizer may log in to a disk array PLUN once on behalf of the multiple hosts 110. Off-host virtualizer 180 may then pass on I/O requests from the multiple hosts 110 to the disk array PLUN using a single login. The number of logins (i.e., distinct entities logged in) as seen by a disk array PLUN may thereby be reduced as a result of PLUN tunneling, without reducing the number of hosts 110 from which I/O operations targeted at the disk array PLUN may be initiated. The total number of hosts 110 that may access storage at a single disk array PLUN with login count restrictions may thereby be increased, thus increasing the overall level of data sharing. -
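The login-multiplexing benefit described above can be sketched as follows: each host logs in to the off-host virtualizer, which holds a single login to the disk array PLUN, so the array's login limit is not consumed per host. The class names and the limit of one login are illustrative assumptions:

```python
# Sketch of PLUN-tunneling login multiplexing: the array sees exactly one
# initiator (the virtualizer) regardless of how many hosts are attached.
class DiskArrayPlun:
    def __init__(self, max_logins):
        self.max_logins = max_logins
        self.logins = set()

    def login(self, who):
        if who not in self.logins and len(self.logins) >= self.max_logins:
            raise RuntimeError("login limit reached")
        self.logins.add(who)

class OffHostVirtualizer:
    def __init__(self, plun):
        self.plun = plun
        self.hosts = set()

    def host_login(self, host):
        if not self.hosts:
            self.plun.login("virtualizer")  # single login on behalf of all hosts
        self.hosts.add(host)

array = DiskArrayPlun(max_logins=1)
vz = OffHostVirtualizer(array)
for h in ["host%d" % i for i in range(8)]:
    vz.host_login(h)   # eight hosts share the array's single login slot
```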
FIG. 4 is a block diagram illustrating an embodiment where an off-host virtualizer 180 is configured to map physical storage from within two different physical storage devices 340A and 340B to a single VLUN 230B. That is, off-host virtualizer 180 may be configured to map a first range of physical storage from device 340A into a first region of mapped blocks 321A within VLUN 230B, and map a second range of physical storage from device 340B into a second region of mapped blocks 321B within VLUN 230B. The first and second ranges of physical storage may each represent a respective PLUN, such as a disk array, or a respective subset of a PLUN. Configuration information indicating the offsets within VLUN 230B at which mapped blocks 321A and 321B are located may be passed from off-host virtualizer 180 to intermediate driver layer 113 using a variety of mechanisms in different embodiments. For example, in one embodiment, off-host virtualizer 180 may write the configuration information to a designated set of blocks within VLUN 230, and intermediate driver layer 113 may be configured to read the designated set of blocks, as described above. In another embodiment, off-host virtualizer 180 may send a message containing the configuration information to host 110A, either directly (over interconnect 130A or another network) or through an intermediate coordination server. In yet another embodiment, the configuration information may be supplied within a special SCSI mode page (i.e., intermediate driver layer 113 may be configured to read a special SCSI mode page containing configuration information updated by off-host virtualizer 180). Combinations of these techniques may be used in some embodiments: for example, in one embodiment off-host virtualizer 180 may send a message to intermediate driver layer 113 requesting that the intermediate driver layer read a special SCSI mode page containing the configuration information. -
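Of the mechanisms above, the designated-block channel can be sketched as a reserved block of the emulated VLUN through which configuration records flow: the off-host virtualizer writes a record, and the host's intermediate driver layer reads it back. The block number and record format (JSON) are assumptions made for the example:

```python
import json

BLOCK_SIZE = 512
CONFIG_BLOCK = 2   # hypothetical reserved block within the emulated VLUN

class EmulatedVlun:
    """Toy stand-in for the emulated block space of a virtual LUN."""
    def __init__(self, nblocks):
        self.nblocks = nblocks
        self.blocks = {}  # sparse store of emulated blocks

    def write_block(self, n, data):
        self.blocks[n] = data.ljust(BLOCK_SIZE, b"\x00")

    def read_block(self, n):
        return self.blocks.get(n, b"\x00" * BLOCK_SIZE)

def publish_config(vlun, config):            # off-host virtualizer side
    raw = json.dumps(config).encode()
    assert len(raw) <= BLOCK_SIZE            # keep the sketch to one block
    vlun.write_block(CONFIG_BLOCK, raw)

def read_config(vlun):                       # intermediate driver layer side
    raw = vlun.read_block(CONFIG_BLOCK).rstrip(b"\x00")
    return json.loads(raw)

vlun = EmulatedVlun(1 << 20)
publish_config(vlun, {"volumes": [{"name": "vol0", "start": 16, "len": 4096}]})
```

The same record could equally be delivered by a network message or a vendor-specific SCSI mode page, as the surrounding text notes; only the transport differs.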
FIG. 5 is a block diagram illustrating an embodiment where off-host virtualizer 180 is configured to map physical storage from within a single physical storage device 340A to two VLUNs assigned to different hosts 110A and 110B. As shown, a first range of physical storage 555A of physical storage device 340A may be mapped to a first range of mapped blocks 321A within VLUN 230B assigned to host 110A. A second range of physical storage 555B of the same physical storage device 340A may be mapped to a second range of mapped blocks 321C of VLUN 230E assigned to host 110B. In addition, in some embodiments, off-host virtualizer 180 may be configured to prevent unauthorized access to physical storage range 555A from host 110B, and to prevent unauthorized access to physical storage 555B from host 110A. Thus, in addition to allowing access to a single physical storage device 340A from multiple hosts 110, off-host virtualizer 180 may also be configured to provide security for each range of physical storage 555A and 555B mapped to a VLUN assigned to a single host 110. Off-host virtualizer 180 may be configured to maintain access rights information for the hosts 110 and VLUNs 230 in some embodiments, while in other embodiments security tokens may be provided to each host 110 indicating the specific VLUNs to which access from the host is allowed, and the security tokens may be included with I/O requests.

As described earlier, in addition to mapping physical storage directly to VLUNs 230, in some embodiments off-
host virtualizer 180 may be configured to aggregate physical storage into a logical volume, and map the logical volume to an address range within a VLUN 230. For example, in some implementations a set of two or more physical storage regions, either within a single physical storage device or from multiple storage devices, may be aggregated into a logical volume. (It is noted that a logical volume may also be created from a single contiguous region of physical storage; i.e., the set of physical storage regions being aggregated may minimally consist of a single region.) Mapping a logical volume through a VLUN may also be termed "volume tunneling" or "logical volume tunneling". FIG. 6 is a block diagram illustrating an embodiment where off-host virtualizer 180 is configured to aggregate a set of storage regions 655A of physical storage device 340A into a logical volume 660A, and map logical volume 660A to a range of blocks (designated as mapped volume 365A in FIG. 6) of VLUN 230B. In some embodiments, configuration information or metadata associated with the tunneled logical volume 660A may be provided to intermediate driver layer 113 using any of a variety of mechanisms, such as an extended SCSI mode page, emulated virtual blocks within VLUN 230A, and/or direct or indirect messages sent from off-host virtualizer 180 to host 110A. While logical volume 660A is shown as being backed by a portion of a single physical storage device 340A in the depicted embodiment, in other embodiments logical volume 660A may be aggregated from all the storage within a single physical storage device, or from storage of two or more physical devices. In some embodiments employing multiple layers of virtualization, logical volume 660A may itself be aggregated from other logical storage devices rather than directly from physical storage devices.
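Volume tunneling adds one level of indirection over plain PLUN tunneling: discontiguous physical extents are first concatenated into a logical volume, and the volume is then placed at an offset inside the VLUN. A minimal sketch, with illustrative extent lists and offsets (the numbers are assumptions, not taken from FIG. 6):

```python
# Sketch of volume tunneling: VLUN block -> volume block -> physical block.

# Extents of a logical volume like 660A: (device, start_block, length).
VOLUME_EXTENTS = [
    ("340A", 100, 50),    # first region of the aggregated set
    ("340A", 900, 50),    # second, discontiguous region
]
VOLUME_OFFSET_IN_VLUN = 2048   # where the mapped volume begins in the VLUN

def vlun_to_physical(vlun_block):
    """Resolve a VLUN block through the tunneled volume to physical storage."""
    vol_block = vlun_block - VOLUME_OFFSET_IN_VLUN
    if vol_block < 0:
        raise ValueError("block precedes the mapped volume region")
    for device, start, length in VOLUME_EXTENTS:
        if vol_block < length:
            return device, start + vol_block
        vol_block -= length          # fall through to the next extent
    raise ValueError("block beyond the end of the logical volume")

print(vlun_to_physical(2048))   # ('340A', 100): first block of the volume
print(vlun_to_physical(2110))   # ('340A', 912): lands in the second extent
```

The metadata the intermediate driver layer needs (however delivered — mode page, emulated blocks, or messages) is essentially the extent list and the volume's offset within the VLUN.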
In one embodiment, each host 110 (i.e., host 110B in addition to host 110A) may be provided access to logical volume 660A via a separate VLUN, while in another embodiment different sets of logical volumes may be presented to different hosts 110.
FIG. 7 is a block diagram illustrating an embodiment where off-host virtualizer 180 is configured to map multiple logical volumes to a single VLUN 230. As shown, off-host virtualizer 180 may be configured to aggregate storage region 755A from physical storage device 340A, and physical storage region 755C from physical storage device 340C, into a logical volume 760A, and map logical volume 760A to a first mapped volume region 765A of VLUN 230B. In addition, off-host virtualizer 180 may also aggregate physical storage region 755B from physical storage device 340A into a second logical volume 760B, and map logical volume 760B to a second mapped volume region 765B of VLUN 230B. In general, off-host virtualizer 180 may aggregate any suitable selection of physical storage blocks from one or more physical storage devices 340 into one or more logical volumes, and map the logical volumes to one or more of the pre-generated unmapped VLUNs 230.
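With multiple volumes tunneled through one VLUN as in FIG. 7, resolution becomes two lookups: first which mapped-volume region a VLUN block falls in, then where that block lies within the volume's extents. A sketch, with illustrative sizes and offsets (assumptions, not figure data):

```python
# Sketch of FIG. 7: one VLUN tunnels two logical volumes.

# Mapped-volume regions of the VLUN: (vlun_offset, length, volume name).
REGIONS = [
    (0,   500, "volume-760A"),   # mapped volume region 765A
    (500, 300, "volume-760B"),   # mapped volume region 765B
]

# Extents per volume: (device, start_block, length).
VOLUMES = {
    "volume-760A": [("340A", 0, 250), ("340C", 1000, 250)],
    "volume-760B": [("340A", 4000, 300)],
}

def resolve(vlun_block):
    """VLUN block -> (volume, device, physical block) via two lookups."""
    for offset, length, volume in REGIONS:
        if offset <= vlun_block < offset + length:
            rel = vlun_block - offset
            for device, start, extent_len in VOLUMES[volume]:
                if rel < extent_len:
                    return volume, device, start + rel
                rel -= extent_len
    raise ValueError("block not mapped")

print(resolve(300))   # ('volume-760A', '340C', 1050): second extent of 760A
print(resolve(600))   # ('volume-760B', '340A', 4100)
```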
FIG. 8 is a block diagram illustrating another embodiment, where off-host virtualizer 180 is configured to aggregate storage regions of physical storage device 340A into logical volumes 860A and 860B. Logical volume 860A may be mapped to a first address range within VLUN 230B, accessible from host 110A, while logical volume 860B may be mapped to a second address range within VLUN 230E, accessible from host 110B. Off-host virtualizer 180 may further be configured to implement a security protocol to prevent unauthorized access and/or data corruption, similar to the security protocol described above for PLUN tunneling. Off-host virtualizer 180 may implement the security protocol at the logical volume level: that is, off-host virtualizer 180 may prevent unauthorized access to logical volumes 860A (e.g., from host 110B) and 860B (e.g., from host 110A) whose data may be stored within a single physical storage device 340A. In one embodiment, off-host virtualizer 180 may be configured to maintain access rights information for the logical volumes 860 to which each host 110 has been granted access. In other embodiments security tokens may be provided to each host 110 (e.g., by off-host virtualizer 180, or by an external security server) indicating the specific logical volumes 860 to which access from the host is allowed, and the security tokens may be included with I/O requests.

Many storage environments utilize storage area networks (SANs), such as fibre channel fabrics, to access physical storage devices. SAN fabric reconfiguration (e.g., to provide access to a particular PLUN or logical volume from a particular host that did not previously have access to the desired PLUN or logical volume) may require switch reconfiguration, recabling, rebooting, etc., and may typically be fairly complex and error-prone. The techniques of PLUN tunneling and volume tunneling, described above, may allow a simplification of SAN reconfiguration operations.
By associating pre-generated, unmapped VLUNs with hosts, and mapping PLUNs and logical volumes to VLUNs dynamically as needed, many reconfiguration operations may require only a change of a mapping table at a switch, and a recognition of new metadata by
intermediate driver layer 113. Storage devices may be more easily shared across multiple hosts 110, or logically transferred from one host to another, using PLUN tunneling and/or volume tunneling. Allocation and/or provisioning of storage, e.g., from a pool maintained by a coordinating storage allocator, may also be simplified.

In addition to simplifying SAN configuration changes, PLUN tunneling and volume tunneling may also support storage interconnection across independently configured storage networks (e.g., interconnection across multiple fibre channel fabrics).
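The reconfiguration-by-remapping idea above can be sketched concretely: "transferring" a tunneled volume from one host to another becomes a mapping-table update at the virtualizer rather than fabric rezoning or recabling. All names and the table shape here are illustrative assumptions.

```python
# Which logical volume (if any) each pre-generated VLUN currently tunnels.
vlun_map = {
    "VLUN-230B": "volume-860A",   # VLUN assigned to host 110A
    "VLUN-230E": None,            # VLUN assigned to host 110B, still unmapped
}

def transfer_volume(volume, src_vlun, dst_vlun):
    """Remap a tunneled volume from one host's VLUN to another's."""
    if vlun_map.get(src_vlun) != volume:
        raise ValueError(f"{volume} is not currently mapped to {src_vlun}")
    if vlun_map.get(dst_vlun) is not None:
        raise ValueError(f"{dst_vlun} already tunnels another volume")
    vlun_map[src_vlun] = None       # the source host sees the region unmapped
    vlun_map[dst_vlun] = volume     # the target host's driver picks up metadata

transfer_volume("volume-860A", "VLUN-230B", "VLUN-230E")
print(vlun_map)   # {'VLUN-230B': None, 'VLUN-230E': 'volume-860A'}
```

In the patent's terms, the only remaining host-side step is the intermediate driver layer recognizing the new metadata in its VLUN.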
FIG. 9 is a block diagram illustrating an embodiment employing multiple storage networks. As shown, off-host virtualizer 180 may be configured to access physical storage device 340A via a first storage network 910A, and to access physical storage device 340B via a second storage network 910B. Off-host virtualizer 180 may aggregate storage region 355A from physical storage device 340A into logical volume 860A, and map logical volume 860A to VLUN 230B. Similarly, off-host virtualizer 180 may aggregate storage region 355B from physical storage device 340B into logical volume 860B, and map logical volume 860B to VLUN 230E. Host 110A may be configured to access VLUN 230A via a third storage network 910C, and to access VLUN 230B via a fourth storage network 910D.

Each storage network 910 (i.e., each of storage networks 910A-910D) may be configured independently of the others. For example, host 110A may include two HBAs in the embodiment depicted in FIG. 9, with the first HBA allowing access to storage network 910C, and the second HBA to storage network 910D. In such an embodiment, host 110A may be provided full connectivity to back-end physical storage devices 340, while still maintaining the advantages of configuration isolation. While FIG. 9 depicts the use of multiple independent storage networks in conjunction with volume tunneling, in other embodiments multiple independent storage networks may also be used with PLUN tunneling, or with a combination of PLUN and volume tunneling. In addition, it is noted that in some embodiments, the use of independent storage networks 910 may be asymmetric: e.g., in one embodiment, multiple independent storage networks 910 may be used for front-end connections (i.e., between off-host virtualizer 180 and hosts 110), while only a single storage network may be used for back-end connections (i.e., between off-host virtualizer 180 and physical storage devices 340). Any desired interconnection technology and/or protocol may be used to implement storage networks 910, such as fibre channel, IP-based protocols, etc. In another embodiment, the interconnect technology or protocol used within a first storage network 910 may differ from the interconnect technology or protocol used within a second storage network 910.

In one embodiment, volume tunneling may also allow maximum LUN size limitations to be overcome. For example, the SCSI protocol may use a 32-bit unsigned integer as a LUN block address, thereby limiting the maximum amount of storage that can be accessed at a single LUN to 2 terabytes (for 512-byte blocks) or 32 terabytes (for 8-kilobyte blocks). Volume tunneling may allow an
intermediate driver layer 113 to access storage from multiple physical LUNs as a volume mapped to a single VLUN, thereby overcoming the maximum LUN size limitation. FIG. 10 is a block diagram illustrating an embodiment where off-host virtualizer 180 may be configured to aggregate storage regions from multiple physical storage devices 340 into a logical volume 1060A, where the size of the volume 1060A exceeds the allowed maximum LUN size supported by the storage protocol in use at storage devices 340. Off-host virtualizer 180 may further be configured to map logical volume 1060A to VLUN 230B, and to make the logical volume accessible to intermediate driver layer 113 at host 110A. In one embodiment, off-host virtualizer 180 may provide logical volume metadata to intermediate driver layer 113, including sufficient information for intermediate driver layer 113 to access a larger address space within VLUN 230B than the maximum allowed LUN size.
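The size limits quoted above follow directly from the 32-bit block address, and the workaround is a wider logical address split across physical LUNs. A worked check, with an illustrative two-PLUN concatenation (the helper names are assumptions):

```python
# Worked check of the LUN size limits, plus a sketch of spanning two LUNs.

MAX_BLOCKS = 2**32                       # 32-bit unsigned block address

tib = 1024**4
print(MAX_BLOCKS * 512 // tib)           # 2  -> 2 TB with 512-byte blocks
print(MAX_BLOCKS * 8192 // tib)          # 32 -> 32 TB with 8-KB blocks

# A volume larger than one LUN: concatenate two physical LUNs, each holding
# MAX_BLOCKS blocks, and address the volume with a wider (Python int) index.
PHYSICAL_LUNS = ["PLUN-0", "PLUN-1"]

def volume_to_plun(volume_block):
    """Map a volume block (which may exceed 2**32) to (PLUN, local block)."""
    lun_index, local_block = divmod(volume_block, MAX_BLOCKS)
    return PHYSICAL_LUNS[lun_index], local_block

# A block address just past the single-LUN limit lands on the second PLUN.
print(volume_to_plun(MAX_BLOCKS + 10))   # ('PLUN-1', 10)
```

Each per-PLUN block address stays within 32 bits, so the storage protocol's limit is respected while the tunneled volume's address space exceeds it.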
FIG. 11 is a flow diagram illustrating aspects of the operation of system 100 according to one embodiment, where off-host virtualizer 180 is configured to support PLUN tunneling. Off-host virtualizer 180 may be configured to present a virtual storage device (e.g., a VLUN) that comprises one or more regions that are initially not mapped to physical storage (block 1110), and to make the virtual storage device accessible to a host 110 (block 1115). A first layer of a storage software stack at host 110, such as disk driver layer 114 of FIG. 1b, may be configured to detect and access the virtual storage device as if the virtual storage device were mapped to physical storage (block 1120). A number of different techniques may be used, in different embodiments, to present the virtual storage device in such a way that the first layer of the storage software stack may detect it. For example, in one embodiment, the off-host virtualizer may be configured to generate operating system metadata indicating the presence of a normal or mapped storage device. In such an embodiment, the metadata may be formatted according to the requirements of the operating system in use at the host 110, and may be mapped to a region of the virtual storage device. In one specific embodiment, the metadata may include a partition table including entries for one or more partitions, where at least one partition corresponds to or maps to one of the regions that are unmapped to physical storage. After the unmapped virtual storage device is detected, off-host virtualizer 180 may be configured to dynamically map physical storage from one or more back-end physical storage devices 340 (e.g., PLUNs) to an address range within the virtual storage device.
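One concrete instance of the "operating system metadata" step is an MBR-style partition table: a 512-byte sector 0 with a valid boot signature and a partition entry covering the (still unmapped) region, which is enough for many disk drivers to recognize a device. This is an illustrative assumption about the metadata format, not the patent's required one; the function name and parameters are hypothetical.

```python
# Sketch: build a minimal MBR so a disk driver sees one partition
# covering an otherwise unmapped VLUN region.

import struct

def make_mbr(start_lba, num_sectors, part_type=0x83):
    """Return a 512-byte sector 0 with a single MBR partition entry."""
    sector = bytearray(512)
    # One 16-byte entry at offset 446: status, CHS start (unused here),
    # type, CHS end (unused), then little-endian start LBA and length.
    entry = struct.pack("<B3sB3sII",
                        0x00, b"\xff\xff\xff",      # not bootable; CHS ignored
                        part_type, b"\xff\xff\xff",
                        start_lba, num_sectors)
    sector[446:462] = entry
    sector[510:512] = b"\x55\xaa"                   # MBR boot signature
    return bytes(sector)

mbr = make_mbr(start_lba=2048, num_sectors=1_000_000)
# The driver-visible facts: a valid signature and one partition descriptor.
print(mbr[510:512] == b"\x55\xaa")                  # True
print(struct.unpack_from("<I", mbr, 446 + 8)[0])    # 2048 (partition start)
```

The virtualizer would emulate reads of this sector for the VLUN's block 0, so the first stack layer (e.g., the disk driver) detects a "normal" device before any physical storage is mapped.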
FIG. 12 is a flow diagram illustrating aspects of the operation of system 100 according to one embodiment, where off-host virtualizer 180 is configured to support volume tunneling. The first three blocks depicted in FIG. 12 may represent functionality similar to the first three blocks shown in FIG. 11. That is, off-host virtualizer 180 may be configured to present a virtual storage device (e.g., a VLUN) comprising one or more regions unmapped to physical storage (block 1210) and make the virtual storage device accessible to a host 110 (block 1215). A first layer of a storage software stack, such as disk driver layer 114 of FIG. 1b, may be configured to detect and access the virtual storage device as if the virtual storage device were mapped to physical storage (e.g., as a LUN) (block 1220). In addition, off-host virtualizer 180 may be configured to aggregate storage at one or more physical storage devices 340 into a logical volume (block 1225), and to dynamically map the logical volume to an address range within the previously unmapped virtual storage device (block 1230). Off-host virtualizer 180 may further be configured to make the mapped portion of the virtual storage device accessible to a second layer of the storage software stack at host 110 (e.g., intermediate driver layer 113) (block 1235), allowing the second layer to locate the blocks of the logical volume and to perform desired I/O operations on the logical volume. In some embodiments, off-host virtualizer 180 may be configured to provide logical volume metadata to the second layer to support the I/O operations.

In various embodiments, off-
host virtualizer 180 may implement numerous different types of storage functions using block virtualization. For example, in one embodiment a virtual block device such as a logical volume may implement device striping, where data blocks may be distributed among multiple physical or logical block devices, and/or device spanning, in which multiple physical or logical block devices may be joined to appear as a single large logical block device. In some embodiments, virtualized block devices may provide mirroring and other forms of redundant data storage, the ability to create a snapshot or static image of a particular block device at a point in time, and/or the ability to replicate data blocks among storage systems connected through a network such as a local area network (LAN) or a wide area network (WAN), for example. Additionally, in some embodiments virtualized block devices may implement certain performance optimizations, such as load distribution, and/or various capabilities for online reorganization of virtual device structure, such as online data migration between devices. In other embodiments, one or more block devices may be mapped into a particular virtualized block device, which may in turn be mapped into still another virtualized block device, allowing complex storage functions to be implemented with simple block devices. More than one virtualization feature, such as striping and mirroring, may thus be combined within a single virtual block device in some embodiments, creating a logically hierarchical virtual storage device.

The off-
host virtualizer 180, either alone or in cooperation with one or more other virtualizers such as a volume manager at host 110 or other off-host virtualizers, may provide functions such as configuration management of virtualized block devices and distributed coordination of block device virtualization. For example, after a reconfiguration of a logical volume shared by two hosts 110 (e.g., when the logical volume is expanded, or when a new mirror is added to the logical volume), the off-host virtualizer 180 may be configured to distribute metadata or a volume description indicating the reconfiguration to the two hosts 110. In one embodiment, once the volume description has been provided to the hosts, the storage stacks at the hosts may be configured to interact directly with various storage devices 340 according to the volume description (i.e., to transform logical I/O requests into physical I/O requests using the volume description). Distribution of a virtualized block device as a volume to one or more virtual device clients, such as hosts 110, may be referred to as distributed block virtualization.

As noted previously, in some embodiments, multiple layers of virtualization may be employed, for example at the host level as well as at an off-host level, such as at a virtualization switch or at a virtualization appliance. In such embodiments, some aspects of virtualization may be visible to a virtual device consumer such as
file system layer 112, while other aspects may be implemented transparently by the off-host level. Further, in some multilayer embodiments, the virtualization details of one block device (e.g., one volume) may be fully defined to a virtual device consumer (i.e., without further virtualization at an off-host level), while the virtualization details of another block device (e.g., another volume) may be partially or entirely transparent to the virtual device consumer.

In some embodiments, a virtualizer, such as off-
host virtualizer 180, may be configured to distribute all defined logical volumes to each virtual device client, such as host 110, present within a system. Such embodiments may be referred to as symmetric distributed block virtualization systems. In other embodiments, specific volumes may be distributed only to respective virtual device consumers or hosts, such that at least one volume is not common to two virtual device consumers. Such embodiments may be referred to as asymmetric distributed block virtualization systems.

It is noted that off-
host virtualizer 180 may be any type of device, external to host 110, that is capable of providing the virtualization functionality, including PLUN and volume tunneling, described above. For example, off-host virtualizer 180 may include a virtualization switch, a virtualization appliance, a special additional host dedicated to providing block virtualization, or an embedded system configured to use application specific integrated circuit (ASIC) or field-programmable gate array (FPGA) technology to provide block virtualization functionality. In some embodiments, off-host block virtualization may be provided by a collection of cooperating devices, such as two or more virtualizing switches, instead of a single device. Such a collection of cooperating devices may be configured for failover, i.e., a standby cooperating device may be configured to take over the virtualization functions supported by a failed cooperating device. An off-host virtualizer 180 may incorporate one or more processors, as well as volatile and/or non-volatile memory. In some embodiments, configuration information associated with virtualization may be maintained in a database separate from the off-host virtualizer 180, and may be accessed by the off-host virtualizer over a network. In one embodiment, an off-host virtualizer may be programmable and/or configurable. Numerous other configurations of off-host virtualizer 180 are possible and contemplated. A host 110 may be any computer system, such as a server comprising one or more processors and one or more memories, capable of supporting the storage software stack described above. Any desired operating system may be used at a host 110, including various versions of Microsoft Windows™, Solaris™ from Sun Microsystems, various versions of Linux, other operating systems based on UNIX, and the like. The intermediate driver layer 113 may be included within a volume manager in some embodiments.
FIG. 13 is a block diagram illustrating a computer-accessible medium 1300 comprising virtualization software 1310 capable of providing the functionality of off-host virtualizer 180 and block storage software stack 140B described above. Virtualization software 1310 may be provided to a computer system using a variety of computer-accessible media, including electronic media (e.g., flash memory), magnetic storage media, optical storage media such as CD-ROM, volatile or non-volatile memory media such as RAM (e.g., SDRAM, RDRAM, SRAM, etc.), as well as transmission media or signals such as electrical, electromagnetic or digital signals, conveyed via a communication medium such as a network and/or a wireless link.

Although the embodiments above have been described in considerable detail, numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.
Claims (24)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/156,821 US20050235132A1 (en) | 2003-11-26 | 2005-06-20 | System and method for dynamic LUN mapping |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/722,614 US20050114595A1 (en) | 2003-11-26 | 2003-11-26 | System and method for emulating operating system metadata to provide cross-platform access to storage volumes |
WOPCT/US04/39306 | 2004-11-22 | ||
PCT/US2004/039306 WO2005055043A1 (en) | 2003-11-26 | 2004-11-22 | System and method for emulating operating system metadata to provide cross-platform access to storage volumes |
US11/156,821 US20050235132A1 (en) | 2003-11-26 | 2005-06-20 | System and method for dynamic LUN mapping |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/722,614 Continuation-In-Part US20050114595A1 (en) | 2003-11-26 | 2003-11-26 | System and method for emulating operating system metadata to provide cross-platform access to storage volumes |
Publications (1)
Publication Number | Publication Date |
---|---|
US20050235132A1 (en) | 2005-10-20 |
Family
ID=34592023
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/722,614 Abandoned US20050114595A1 (en) | 2003-11-26 | 2003-11-26 | System and method for emulating operating system metadata to provide cross-platform access to storage volumes |
US11/156,635 Expired - Fee Related US7689803B2 (en) | 2003-11-26 | 2005-06-20 | System and method for communication using emulated LUN blocks in storage virtualization environments |
US11/156,636 Abandoned US20050228950A1 (en) | 2003-11-26 | 2005-06-20 | External encapsulation of a volume into a LUN to allow booting and installation on a complex volume |
US11/156,821 Abandoned US20050235132A1 (en) | 2003-11-26 | 2005-06-20 | System and method for dynamic LUN mapping |
Family Applications Before (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/722,614 Abandoned US20050114595A1 (en) | 2003-11-26 | 2003-11-26 | System and method for emulating operating system metadata to provide cross-platform access to storage volumes |
US11/156,635 Expired - Fee Related US7689803B2 (en) | 2003-11-26 | 2005-06-20 | System and method for communication using emulated LUN blocks in storage virtualization environments |
US11/156,636 Abandoned US20050228950A1 (en) | 2003-11-26 | 2005-06-20 | External encapsulation of a volume into a LUN to allow booting and installation on a complex volume |
Country Status (5)
Country | Link |
---|---|
US (4) | US20050114595A1 (en) |
EP (1) | EP1687706A1 (en) |
JP (1) | JP4750040B2 (en) |
CN (1) | CN100552611C (en) |
WO (1) | WO2005055043A1 (en) |
US10430306B2 (en) | 2014-06-04 | 2019-10-01 | Pure Storage, Inc. | Mechanism for persisting messages in a storage system |
US10454498B1 (en) | 2018-10-18 | 2019-10-22 | Pure Storage, Inc. | Fully pipelined hardware engine design for fast and efficient inline lossless data compression |
US10467527B1 (en) | 2018-01-31 | 2019-11-05 | Pure Storage, Inc. | Method and apparatus for artificial intelligence acceleration |
US20190362075A1 (en) * | 2018-05-22 | 2019-11-28 | Fortinet, Inc. | Preventing users from accessing infected files by using multiple file storage repositories and a secure data transfer agent logically interposed therebetween |
US10498580B1 (en) | 2014-08-20 | 2019-12-03 | Pure Storage, Inc. | Assigning addresses in a storage system |
US10496330B1 (en) | 2017-10-31 | 2019-12-03 | Pure Storage, Inc. | Using flash storage devices with different sized erase blocks |
US10515701B1 (en) | 2017-10-31 | 2019-12-24 | Pure Storage, Inc. | Overlapping raid groups |
US10524022B2 (en) * | 2017-05-02 | 2019-12-31 | Seagate Technology Llc | Data storage system with adaptive data path routing |
US10528488B1 (en) | 2017-03-30 | 2020-01-07 | Pure Storage, Inc. | Efficient name coding |
US10528419B2 (en) | 2014-08-07 | 2020-01-07 | Pure Storage, Inc. | Mapping around defective flash memory of a storage array |
US10545687B1 (en) | 2017-10-31 | 2020-01-28 | Pure Storage, Inc. | Data rebuild when changing erase block sizes during drive replacement |
US10574754B1 (en) | 2014-06-04 | 2020-02-25 | Pure Storage, Inc. | Multi-chassis array with multi-level load balancing |
US10572176B2 (en) | 2014-07-02 | 2020-02-25 | Pure Storage, Inc. | Storage cluster operation using erasure coded data |
US10579474B2 (en) | 2014-08-07 | 2020-03-03 | Pure Storage, Inc. | Die-level monitoring in a storage cluster |
US10650902B2 (en) | 2017-01-13 | 2020-05-12 | Pure Storage, Inc. | Method for processing blocks of flash memory |
US10671480B2 (en) | 2014-06-04 | 2020-06-02 | Pure Storage, Inc. | Utilization of erasure codes in a storage system |
US10678452B2 (en) | 2016-09-15 | 2020-06-09 | Pure Storage, Inc. | Distributed deletion of a file and directory hierarchy |
US10691812B2 (en) | 2014-07-03 | 2020-06-23 | Pure Storage, Inc. | Secure data replication in a storage grid |
US10705732B1 (en) | 2017-12-08 | 2020-07-07 | Pure Storage, Inc. | Multiple-apartment aware offlining of devices for disruptive and destructive operations |
US10733053B1 (en) | 2018-01-31 | 2020-08-04 | Pure Storage, Inc. | Disaster recovery for high-bandwidth distributed archives |
US10768819B2 (en) | 2016-07-22 | 2020-09-08 | Pure Storage, Inc. | Hardware support for non-disruptive upgrades |
US10817202B2 (en) | 2012-05-29 | 2020-10-27 | International Business Machines Corporation | Application-controlled sub-LUN level data migration |
US10831594B2 (en) | 2016-07-22 | 2020-11-10 | Pure Storage, Inc. | Optimize data protection layouts based on distributed flash wear leveling |
US10831727B2 (en) | 2012-05-29 | 2020-11-10 | International Business Machines Corporation | Application-controlled sub-LUN level data migration |
US10831728B2 (en) | 2012-05-29 | 2020-11-10 | International Business Machines Corporation | Application-controlled sub-LUN level data migration |
US10853146B1 (en) | 2018-04-27 | 2020-12-01 | Pure Storage, Inc. | Efficient data forwarding in a networked device |
US10853266B2 (en) | 2015-09-30 | 2020-12-01 | Pure Storage, Inc. | Hardware assisted data lookup methods |
US10860475B1 (en) | 2017-11-17 | 2020-12-08 | Pure Storage, Inc. | Hybrid flash translation layer |
US10877827B2 (en) | 2017-09-15 | 2020-12-29 | Pure Storage, Inc. | Read voltage optimization |
US10877861B2 (en) | 2014-07-02 | 2020-12-29 | Pure Storage, Inc. | Remote procedure call cache for distributed system |
US10884919B2 (en) | 2017-10-31 | 2021-01-05 | Pure Storage, Inc. | Memory management in a storage system |
US10929053B2 (en) | 2017-12-08 | 2021-02-23 | Pure Storage, Inc. | Safe destructive actions on drives |
US10929031B2 (en) | 2017-12-21 | 2021-02-23 | Pure Storage, Inc. | Maximizing data reduction in a partially encrypted volume |
US10931450B1 (en) | 2018-04-27 | 2021-02-23 | Pure Storage, Inc. | Distributed, lock-free 2-phase commit of secret shares using multiple stateless controllers |
US10944671B2 (en) | 2017-04-27 | 2021-03-09 | Pure Storage, Inc. | Efficient data forwarding in a networked device |
US10976948B1 (en) | 2018-01-31 | 2021-04-13 | Pure Storage, Inc. | Cluster expansion mechanism |
US10979223B2 (en) | 2017-01-31 | 2021-04-13 | Pure Storage, Inc. | Separate encryption for a solid-state drive |
US10976947B2 (en) | 2018-10-26 | 2021-04-13 | Pure Storage, Inc. | Dynamically selecting segment heights in a heterogeneous RAID group |
US10983732B2 (en) | 2015-07-13 | 2021-04-20 | Pure Storage, Inc. | Method and system for accessing a file |
US10983866B2 (en) | 2014-08-07 | 2021-04-20 | Pure Storage, Inc. | Mapping defective memory in a storage system |
US10990537B1 (en) | 2020-01-07 | 2021-04-27 | International Business Machines Corporation | Logical to virtual and virtual to physical translation in storage class memory |
US10990566B1 (en) | 2017-11-20 | 2021-04-27 | Pure Storage, Inc. | Persistent file locks in a storage system |
US11016667B1 (en) | 2017-04-05 | 2021-05-25 | Pure Storage, Inc. | Efficient mapping for LUNs in storage memory with holes in address space |
US11024390B1 (en) | 2017-10-31 | 2021-06-01 | Pure Storage, Inc. | Overlapping RAID groups |
US11036856B2 (en) | 2018-09-16 | 2021-06-15 | Fortinet, Inc. | Natively mounting storage for inspection and sandboxing in the cloud |
US11068389B2 (en) | 2017-06-11 | 2021-07-20 | Pure Storage, Inc. | Data resiliency with heterogeneous storage |
US11080155B2 (en) | 2016-07-24 | 2021-08-03 | Pure Storage, Inc. | Identifying error types among flash memory |
US11099986B2 (en) | 2019-04-12 | 2021-08-24 | Pure Storage, Inc. | Efficient transfer of memory contents |
US11119703B2 (en) * | 2019-10-29 | 2021-09-14 | EMC IP Holding Company LLC | Utilizing a set of virtual storage units distributed across physical storage units |
US11188432B2 (en) | 2020-02-28 | 2021-11-30 | Pure Storage, Inc. | Data resiliency by partially deallocating data blocks of a storage device |
US11190580B2 (en) | 2017-07-03 | 2021-11-30 | Pure Storage, Inc. | Stateful connection resets |
US11232079B2 (en) | 2015-07-16 | 2022-01-25 | Pure Storage, Inc. | Efficient distribution of large directories |
US11256587B2 (en) | 2020-04-17 | 2022-02-22 | Pure Storage, Inc. | Intelligent access to a storage device |
US11281394B2 (en) | 2019-06-24 | 2022-03-22 | Pure Storage, Inc. | Replication across partitioning schemes in a distributed storage system |
US11294893B2 (en) | 2015-03-20 | 2022-04-05 | Pure Storage, Inc. | Aggregation of queries |
US11334254B2 (en) | 2019-03-29 | 2022-05-17 | Pure Storage, Inc. | Reliability based flash page sizing |
US11354058B2 (en) | 2018-09-06 | 2022-06-07 | Pure Storage, Inc. | Local relocation of data stored at a storage device of a storage system |
US11399063B2 (en) | 2014-06-04 | 2022-07-26 | Pure Storage, Inc. | Network authentication for a storage system |
US11416338B2 (en) | 2020-04-24 | 2022-08-16 | Pure Storage, Inc. | Resiliency scheme to enhance storage performance |
US11416144B2 (en) | 2019-12-12 | 2022-08-16 | Pure Storage, Inc. | Dynamic use of segment or zone power loss protection in a flash device |
US11436023B2 (en) | 2018-05-31 | 2022-09-06 | Pure Storage, Inc. | Mechanism for updating host file system and flash translation layer based on underlying NAND technology |
US11438279B2 (en) | 2018-07-23 | 2022-09-06 | Pure Storage, Inc. | Non-disruptive conversion of a clustered service from single-chassis to multi-chassis |
US11449232B1 (en) | 2016-07-22 | 2022-09-20 | Pure Storage, Inc. | Optimal scheduling of flash operations |
US11467913B1 (en) | 2017-06-07 | 2022-10-11 | Pure Storage, Inc. | Snapshots with crash consistency in a storage system |
US11474986B2 (en) | 2020-04-24 | 2022-10-18 | Pure Storage, Inc. | Utilizing machine learning to streamline telemetry processing of storage media |
US11487455B2 (en) | 2020-12-17 | 2022-11-01 | Pure Storage, Inc. | Dynamic block allocation to optimize storage system performance |
US11494109B1 (en) | 2018-02-22 | 2022-11-08 | Pure Storage, Inc. | Erase block trimming for heterogenous flash memory storage devices |
US11500570B2 (en) | 2018-09-06 | 2022-11-15 | Pure Storage, Inc. | Efficient relocation of data utilizing different programming modes |
US11507597B2 (en) | 2021-03-31 | 2022-11-22 | Pure Storage, Inc. | Data replication to meet a recovery point objective |
US11507297B2 (en) | 2020-04-15 | 2022-11-22 | Pure Storage, Inc. | Efficient management of optimal read levels for flash storage systems |
US11513974B2 (en) | 2020-09-08 | 2022-11-29 | Pure Storage, Inc. | Using nonce to control erasure of data blocks of a multi-controller storage system |
US11520514B2 (en) | 2018-09-06 | 2022-12-06 | Pure Storage, Inc. | Optimized relocation of data based on data characteristics |
US11544143B2 (en) | 2014-08-07 | 2023-01-03 | Pure Storage, Inc. | Increased data reliability |
US11550752B2 (en) | 2014-07-03 | 2023-01-10 | Pure Storage, Inc. | Administrative actions via a reserved filename |
US11567917B2 (en) | 2015-09-30 | 2023-01-31 | Pure Storage, Inc. | Writing data and metadata into storage |
US11581943B2 (en) | 2016-10-04 | 2023-02-14 | Pure Storage, Inc. | Queues reserved for direct access via a user application |
US11604690B2 (en) | 2016-07-24 | 2023-03-14 | Pure Storage, Inc. | Online failure span determination |
US11604598B2 (en) | 2014-07-02 | 2023-03-14 | Pure Storage, Inc. | Storage cluster with zoned drives |
US11614893B2 (en) | 2010-09-15 | 2023-03-28 | Pure Storage, Inc. | Optimizing storage device access based on latency |
US11614880B2 (en) | 2020-12-31 | 2023-03-28 | Pure Storage, Inc. | Storage system with selectable write paths |
US11630593B2 (en) | 2021-03-12 | 2023-04-18 | Pure Storage, Inc. | Inline flash memory qualification in a storage system |
US11650976B2 (en) | 2011-10-14 | 2023-05-16 | Pure Storage, Inc. | Pattern matching using hash tables in storage system |
US11652884B2 (en) | 2014-06-04 | 2023-05-16 | Pure Storage, Inc. | Customized hash algorithms |
US11675762B2 (en) | 2015-06-26 | 2023-06-13 | Pure Storage, Inc. | Data structures for key management |
US11681448B2 (en) | 2020-09-08 | 2023-06-20 | Pure Storage, Inc. | Multiple device IDs in a multi-fabric module storage system |
US11704192B2 (en) | 2019-12-12 | 2023-07-18 | Pure Storage, Inc. | Budgeting open blocks based on power loss protection |
US11714708B2 (en) | 2017-07-31 | 2023-08-01 | Pure Storage, Inc. | Intra-device redundancy scheme |
US11714572B2 (en) | 2019-06-19 | 2023-08-01 | Pure Storage, Inc. | Optimized data resiliency in a modular storage system |
US11722455B2 (en) | 2017-04-27 | 2023-08-08 | Pure Storage, Inc. | Storage cluster address resolution |
US11734169B2 (en) | 2016-07-26 | 2023-08-22 | Pure Storage, Inc. | Optimizing spool and memory space management |
US11768763B2 (en) | 2020-07-08 | 2023-09-26 | Pure Storage, Inc. | Flash secure erase |
US11775189B2 (en) | 2019-04-03 | 2023-10-03 | Pure Storage, Inc. | Segment level heterogeneity |
US11782625B2 (en) | 2017-06-11 | 2023-10-10 | Pure Storage, Inc. | Heterogeneity supportive resiliency groups |
US11797212B2 (en) | 2016-07-26 | 2023-10-24 | Pure Storage, Inc. | Data migration for zoned drives |
US11822444B2 (en) | 2014-06-04 | 2023-11-21 | Pure Storage, Inc. | Data rebuild independent of error detection |
US11832410B2 (en) | 2021-09-14 | 2023-11-28 | Pure Storage, Inc. | Mechanical energy absorbing bracket apparatus |
US11836348B2 (en) | 2018-04-27 | 2023-12-05 | Pure Storage, Inc. | Upgrade for system with differing capacities |
US11842053B2 (en) | 2016-12-19 | 2023-12-12 | Pure Storage, Inc. | Zone namespace |
US11847013B2 (en) | 2018-02-18 | 2023-12-19 | Pure Storage, Inc. | Readable data determination |
US11847324B2 (en) | 2020-12-31 | 2023-12-19 | Pure Storage, Inc. | Optimizing resiliency groups for data regions of a storage system |
US11847331B2 (en) | 2019-12-12 | 2023-12-19 | Pure Storage, Inc. | Budgeting open blocks of a storage unit based on power loss prevention |
US11861188B2 (en) | 2016-07-19 | 2024-01-02 | Pure Storage, Inc. | System having modular accelerators |
US11868309B2 (en) | 2018-09-06 | 2024-01-09 | Pure Storage, Inc. | Queue management for data relocation |
US11886308B2 (en) | 2014-07-02 | 2024-01-30 | Pure Storage, Inc. | Dual class of service for unified file and object messaging |
US11886334B2 (en) | 2016-07-26 | 2024-01-30 | Pure Storage, Inc. | Optimizing spool and memory space management |
US11893126B2 (en) | 2019-10-14 | 2024-02-06 | Pure Storage, Inc. | Data deletion for a multi-tenant environment |
US11893023B2 (en) | 2015-09-04 | 2024-02-06 | Pure Storage, Inc. | Deterministic searching using compressed indexes |
US11922070B2 (en) | 2016-10-04 | 2024-03-05 | Pure Storage, Inc. | Granting access to a storage device based on reservations |
US11947814B2 (en) | 2017-06-11 | 2024-04-02 | Pure Storage, Inc. | Optimizing resiliency group formation stability |
US11955187B2 (en) | 2017-01-13 | 2024-04-09 | Pure Storage, Inc. | Refresh of differing capacity NAND |
US11960371B2 (en) | 2014-06-04 | 2024-04-16 | Pure Storage, Inc. | Message persistence in a zoned system |
US11971828B2 (en) | 2020-11-19 | 2024-04-30 | Pure Storage, Inc. | Logic module for use with encoded instructions |
Families Citing this family (123)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB9603582D0 (en) | 1996-02-20 | 1996-04-17 | Hewlett Packard Co | Method of accessing service resource items that are for use in a telecommunications system |
US7024427B2 (en) * | 2001-12-19 | 2006-04-04 | Emc Corporation | Virtual file system |
US7769722B1 (en) | 2006-12-08 | 2010-08-03 | Emc Corporation | Replication and restoration of multiple data storage object types in a data network |
US7461141B2 (en) * | 2004-01-30 | 2008-12-02 | Applied Micro Circuits Corporation | System and method for performing driver configuration operations without a system reboot |
US20050216680A1 (en) * | 2004-03-25 | 2005-09-29 | Itzhak Levy | Device to allow multiple data processing channels to share a single disk drive |
US7945657B1 (en) * | 2005-03-30 | 2011-05-17 | Oracle America, Inc. | System and method for emulating input/output performance of an application |
EP1769395A2 (en) * | 2004-05-21 | 2007-04-04 | Computer Associates Think, Inc. | Object-based storage |
US9264384B1 (en) | 2004-07-22 | 2016-02-16 | Oracle International Corporation | Resource virtualization mechanism including virtual host bus adapters |
US7409495B1 (en) * | 2004-12-22 | 2008-08-05 | Symantec Operating Corporation | Method and apparatus for providing a temporal storage appliance with block virtualization in storage networks |
US7493462B2 (en) * | 2005-01-20 | 2009-02-17 | International Business Machines Corporation | Apparatus, system, and method for validating logical volume configuration |
US8161318B2 (en) * | 2005-02-07 | 2012-04-17 | Mimosa Systems, Inc. | Enterprise service availability through identity preservation |
US8543542B2 (en) * | 2005-02-07 | 2013-09-24 | Mimosa Systems, Inc. | Synthetic full copies of data and dynamic bulk-to-brick transformation |
US7778976B2 (en) * | 2005-02-07 | 2010-08-17 | Mimosa, Inc. | Multi-dimensional surrogates for data management |
US8799206B2 (en) * | 2005-02-07 | 2014-08-05 | Mimosa Systems, Inc. | Dynamic bulk-to-brick transformation of data |
US7657780B2 (en) * | 2005-02-07 | 2010-02-02 | Mimosa Systems, Inc. | Enterprise service availability through identity preservation |
US8271436B2 (en) * | 2005-02-07 | 2012-09-18 | Mimosa Systems, Inc. | Retro-fitting synthetic full copies of data |
US8918366B2 (en) * | 2005-02-07 | 2014-12-23 | Mimosa Systems, Inc. | Synthetic full copies of data and dynamic bulk-to-brick transformation |
US7870416B2 (en) * | 2005-02-07 | 2011-01-11 | Mimosa Systems, Inc. | Enterprise service availability through identity preservation |
US8275749B2 (en) * | 2005-02-07 | 2012-09-25 | Mimosa Systems, Inc. | Enterprise server version migration through identity preservation |
US8812433B2 (en) * | 2005-02-07 | 2014-08-19 | Mimosa Systems, Inc. | Dynamic bulk-to-brick transformation of data |
US7917475B2 (en) * | 2005-02-07 | 2011-03-29 | Mimosa Systems, Inc. | Enterprise server version migration through identity preservation |
US7519851B2 (en) * | 2005-02-08 | 2009-04-14 | Hitachi, Ltd. | Apparatus for replicating volumes between heterogenous storage systems |
US7630998B2 (en) * | 2005-06-10 | 2009-12-08 | Microsoft Corporation | Performing a deletion of a node in a tree data storage structure |
US8433770B2 (en) * | 2005-07-29 | 2013-04-30 | Broadcom Corporation | Combined local and network storage interface |
US20070038749A1 (en) * | 2005-07-29 | 2007-02-15 | Broadcom Corporation | Combined local and network storage interface |
US7802000B1 (en) * | 2005-08-01 | 2010-09-21 | Vmware | Virtual network in server farm |
US9813283B2 (en) | 2005-08-09 | 2017-11-07 | Oracle International Corporation | Efficient data transfer between servers and remote peripherals |
KR101340176B1 (en) * | 2005-08-25 | 2013-12-10 | 실리콘 이미지, 인크. | Smart scalable storage switch architecture |
US20070083653A1 (en) * | 2005-09-16 | 2007-04-12 | Balasubramanian Chandrasekaran | System and method for deploying information handling system images through fibre channel |
US7765187B2 (en) * | 2005-11-29 | 2010-07-27 | Emc Corporation | Replication of a consistency group of data storage objects from servers in a data network |
JP4797636B2 (en) * | 2006-01-16 | 2011-10-19 | 株式会社日立製作所 | Complex information platform apparatus and information processing apparatus configuration method thereof |
US8533409B2 (en) * | 2006-01-26 | 2013-09-10 | Infortrend Technology, Inc. | Method of managing data snapshot images in a storage system |
US20070180287A1 (en) * | 2006-01-31 | 2007-08-02 | Dell Products L. P. | System and method for managing node resets in a cluster |
US7904492B2 (en) * | 2006-03-23 | 2011-03-08 | Network Appliance, Inc. | Method and apparatus for concurrent read-only access to filesystem |
US7617373B2 (en) * | 2006-05-23 | 2009-11-10 | International Business Machines Corporation | Apparatus, system, and method for presenting a storage volume as a virtual volume |
US20080140888A1 (en) * | 2006-05-30 | 2008-06-12 | Schneider Automation Inc. | Virtual Placeholder Configuration for Distributed Input/Output Modules |
US7904681B1 (en) * | 2006-06-30 | 2011-03-08 | Emc Corporation | Methods and systems for migrating data with minimal disruption |
US7610483B2 (en) * | 2006-07-25 | 2009-10-27 | Nvidia Corporation | System and method to accelerate identification of hardware platform classes |
US8909746B2 (en) * | 2006-07-25 | 2014-12-09 | Nvidia Corporation | System and method for operating system installation on a diskless computing platform |
US9003000B2 (en) * | 2006-07-25 | 2015-04-07 | Nvidia Corporation | System and method for operating system installation on a diskless computing platform |
US10013268B2 (en) * | 2006-08-29 | 2018-07-03 | Prometric Inc. | Performance-based testing system and method employing emulation and virtualization |
US7761738B2 (en) | 2006-09-07 | 2010-07-20 | International Business Machines Corporation | Establishing communications across virtual enclosure boundaries |
US7584378B2 (en) | 2006-09-07 | 2009-09-01 | International Business Machines Corporation | Reconfigurable FC-AL storage loops in a data storage system |
JP2008090657A (en) * | 2006-10-03 | 2008-04-17 | Hitachi Ltd | Storage system and control method |
JP2008112399A (en) * | 2006-10-31 | 2008-05-15 | Fujitsu Ltd | Storage virtualization switch and computer system |
US7975135B2 (en) * | 2006-11-23 | 2011-07-05 | Dell Products L.P. | Apparatus, method and product for selecting an iSCSI target for automated initiator booting |
US8706833B1 (en) * | 2006-12-08 | 2014-04-22 | Emc Corporation | Data storage server having common replication architecture for multiple storage object types |
CN100547566C (en) * | 2007-06-28 | 2009-10-07 | 忆正存储技术(深圳)有限公司 | Control method based on multi-passage flash memory apparatus logic strip |
US8635429B1 (en) | 2007-06-29 | 2014-01-21 | Symantec Corporation | Method and apparatus for mapping virtual drives |
US8738871B1 (en) * | 2007-06-29 | 2014-05-27 | Symantec Corporation | Method and apparatus for mapping virtual drives |
US8176405B2 (en) * | 2007-09-24 | 2012-05-08 | International Business Machines Corporation | Data integrity validation in a computing environment |
WO2009070898A1 (en) * | 2007-12-07 | 2009-06-11 | Scl Elements Inc. | Auto-configuring multi-layer network |
US8032689B2 (en) * | 2007-12-18 | 2011-10-04 | Hitachi Global Storage Technologies Netherlands, B.V. | Techniques for data storage device virtualization |
US8055867B2 (en) * | 2008-01-11 | 2011-11-08 | International Business Machines Corporation | Methods, apparatuses, and computer program products for protecting pre-staged provisioned data in a storage system |
US8074020B2 (en) * | 2008-02-13 | 2011-12-06 | International Business Machines Corporation | On-line volume coalesce operation to enable on-line storage subsystem volume consolidation |
US20090216944A1 (en) * | 2008-02-22 | 2009-08-27 | International Business Machines Corporation | Efficient validation of writes for protection against dropped writes |
GB2473356B (en) * | 2008-03-27 | 2012-08-29 | Hewlett Packard Development Co | Raid array access by a raid array-unaware operating system |
US7979260B1 (en) * | 2008-03-31 | 2011-07-12 | Symantec Corporation | Simulating PXE booting for virtualized machines |
US8745336B2 (en) * | 2008-05-29 | 2014-06-03 | Vmware, Inc. | Offloading storage operations to storage hardware |
US8893160B2 (en) * | 2008-06-09 | 2014-11-18 | International Business Machines Corporation | Block storage interface for virtual memory |
US8725688B2 (en) * | 2008-09-05 | 2014-05-13 | Commvault Systems, Inc. | Image level copy or restore, such as image level restore without knowledge of data object metadata |
US8073674B2 (en) * | 2008-09-23 | 2011-12-06 | Oracle America, Inc. | SCSI device emulation in user space facilitating storage virtualization |
US8055842B1 (en) | 2008-09-26 | 2011-11-08 | Nvidia Corporation | Using raid with large sector size ATA mass storage devices |
US8516190B1 (en) * | 2008-09-26 | 2013-08-20 | Nvidia Corporation | Reporting logical sector alignment for ATA mass storage devices |
US20100082715A1 (en) * | 2008-09-30 | 2010-04-01 | Karl Dohm | Reduced-Resource Block Thin Provisioning |
US8510352B2 (en) * | 2008-10-24 | 2013-08-13 | Microsoft Corporation | Virtualized boot block with discovery volume |
US8417969B2 (en) * | 2009-02-19 | 2013-04-09 | Microsoft Corporation | Storage volume protection supporting legacy systems |
US8073886B2 (en) * | 2009-02-20 | 2011-12-06 | Microsoft Corporation | Non-privileged access to data independent of filesystem implementation |
US8238538B2 (en) | 2009-05-28 | 2012-08-07 | Comcast Cable Communications, Llc | Stateful home phone service |
US9973446B2 (en) | 2009-08-20 | 2018-05-15 | Oracle International Corporation | Remote shared server peripherals over an Ethernet network for resource virtualization |
US8495289B2 (en) * | 2010-02-24 | 2013-07-23 | Red Hat, Inc. | Automatically detecting discrepancies between storage subsystem alignments |
US8539124B1 (en) * | 2010-03-31 | 2013-09-17 | Emc Corporation | Storage integration plugin for virtual servers |
US8756338B1 (en) * | 2010-04-29 | 2014-06-17 | Netapp, Inc. | Storage server with embedded communication agent |
US8560825B2 (en) * | 2010-06-30 | 2013-10-15 | International Business Machines Corporation | Streaming virtual machine boot services over a network |
US9331963B2 (en) | 2010-09-24 | 2016-05-03 | Oracle International Corporation | Wireless host I/O using virtualized I/O controllers |
CN101986655A (en) * | 2010-10-21 | 2011-03-16 | 浪潮(北京)电子信息产业有限公司 | Storage network and data reading and writing method thereof |
US8458145B2 (en) * | 2011-01-20 | 2013-06-04 | Infinidat Ltd. | System and method of storage optimization |
US9606747B2 (en) | 2011-05-04 | 2017-03-28 | International Business Machines Corporation | Importing pre-existing data of a prior storage solution into a storage pool for use with a new storage solution |
US8996800B2 (en) | 2011-07-07 | 2015-03-31 | Atlantis Computing, Inc. | Deduplication of virtual machine files in a virtualized desktop environment |
US9152404B2 (en) | 2011-07-13 | 2015-10-06 | Z124 | Remote device filter |
US20130268559A1 (en) | 2011-07-13 | 2013-10-10 | Z124 | Virtual file system remote search |
US20130268703A1 (en) * | 2011-09-27 | 2013-10-10 | Z124 | Rules based hierarchical data virtualization |
CN102567217B (en) * | 2012-01-04 | 2014-12-24 | 北京航空航天大学 | MIPS platform-oriented memory virtualization method |
US9626284B2 (en) | 2012-02-09 | 2017-04-18 | Vmware, Inc. | Systems and methods to test programs |
US9946559B1 (en) * | 2012-02-13 | 2018-04-17 | Veritas Technologies Llc | Techniques for managing virtual machine backups |
US8856484B2 (en) * | 2012-08-14 | 2014-10-07 | Infinidat Ltd. | Mass storage system and methods of controlling resources thereof |
US9116623B2 (en) * | 2012-08-14 | 2015-08-25 | International Business Machines Corporation | Optimizing storage system behavior in virtualized cloud computing environments by tagging input/output operation data to indicate storage policy |
US9083550B2 (en) | 2012-10-29 | 2015-07-14 | Oracle International Corporation | Network virtualization over infiniband |
US9454670B2 (en) | 2012-12-03 | 2016-09-27 | International Business Machines Corporation | Hybrid file systems |
US9280359B2 (en) * | 2012-12-11 | 2016-03-08 | Cisco Technology, Inc. | System and method for selecting a least cost path for performing a network boot in a data center network environment |
US9912713B1 (en) | 2012-12-17 | 2018-03-06 | MiMedia LLC | Systems and methods for providing dynamically updated image sets for applications |
US9277010B2 (en) | 2012-12-21 | 2016-03-01 | Atlantis Computing, Inc. | Systems and apparatuses for aggregating nodes to form an aggregated virtual storage for a virtualized desktop environment |
US9069472B2 (en) | 2012-12-21 | 2015-06-30 | Atlantis Computing, Inc. | Method for dispersing and collating I/O's from virtual machines for parallelization of I/O access and redundancy of storing virtual machine data |
US9633216B2 (en) | 2012-12-27 | 2017-04-25 | Commvault Systems, Inc. | Application of information management policies based on operation with a geographic entity |
US10445229B1 (en) * | 2013-01-28 | 2019-10-15 | Radian Memory Systems, Inc. | Memory controller with at least one address segment defined for which data is striped across flash memory dies, with a common address offset being used to obtain physical addresses for the data in each of the dies |
US9250946B2 (en) | 2013-02-12 | 2016-02-02 | Atlantis Computing, Inc. | Efficient provisioning of cloned virtual machine images using deduplication metadata |
US9471590B2 (en) | 2013-02-12 | 2016-10-18 | Atlantis Computing, Inc. | Method and apparatus for replicating virtual machine images using deduplication metadata |
US9372865B2 (en) | 2013-02-12 | 2016-06-21 | Atlantis Computing, Inc. | Deduplication metadata access in deduplication file system |
US9459968B2 (en) | 2013-03-11 | 2016-10-04 | Commvault Systems, Inc. | Single index to query multiple backup formats |
US9298758B1 (en) | 2013-03-13 | 2016-03-29 | MiMedia, Inc. | Systems and methods providing media-to-media connection |
US9465521B1 (en) | 2013-03-13 | 2016-10-11 | MiMedia, Inc. | Event based media interface |
US9183232B1 (en) | 2013-03-15 | 2015-11-10 | MiMedia, Inc. | Systems and methods for organizing content using content organization rules and robust content information |
US10257301B1 (en) | 2013-03-15 | 2019-04-09 | MiMedia, Inc. | Systems and methods providing a drive interface for content delivery |
US20140359612A1 (en) * | 2013-06-03 | 2014-12-04 | Microsoft Corporation | Sharing a Virtual Hard Disk Across Multiple Virtual Machines |
US9176890B2 (en) | 2013-06-07 | 2015-11-03 | Globalfoundries Inc. | Non-disruptive modification of a device mapper stack |
US9798596B2 (en) | 2014-02-27 | 2017-10-24 | Commvault Systems, Inc. | Automatic alert escalation for an information management system |
US9871889B1 (en) * | 2014-03-18 | 2018-01-16 | EMC IP Holding Company LLC | Techniques for automated capture of configuration data for simulation |
US10001927B1 (en) * | 2014-09-30 | 2018-06-19 | EMC IP Holding Company LLC | Techniques for optimizing I/O operations |
US9389789B2 (en) | 2014-12-15 | 2016-07-12 | International Business Machines Corporation | Migration of executing applications and associated stored data |
JP6435842B2 (en) | 2014-12-17 | 2018-12-12 | 富士通株式会社 | Storage control device and storage control program |
CN107710160B (en) * | 2015-07-08 | 2021-06-22 | 株式会社日立制作所 | Computer and storage area management method |
US10579275B2 (en) * | 2015-07-27 | 2020-03-03 | Hitachi, Ltd. | Storage system and storage control method |
US9965184B2 (en) | 2015-10-19 | 2018-05-08 | International Business Machines Corporation | Multiple storage subpools of a virtual storage pool in a multiple processor environment |
US10296250B2 (en) * | 2016-06-08 | 2019-05-21 | Intel Corporation | Method and apparatus for improving performance of sequential logging in a storage device |
EP3308316B1 (en) * | 2016-07-05 | 2020-09-02 | Viirii, LLC | Operating system independent, secure data storage subsystem |
US10620835B2 (en) * | 2017-01-27 | 2020-04-14 | Wyse Technology L.L.C. | Attaching a windows file system to a remote non-windows disk stack |
US10838821B2 (en) | 2017-02-08 | 2020-11-17 | Commvault Systems, Inc. | Migrating content and metadata from a backup system |
US10776329B2 (en) | 2017-03-28 | 2020-09-15 | Commvault Systems, Inc. | Migration of a database management system to cloud storage |
US10754829B2 (en) | 2017-04-04 | 2020-08-25 | Oracle International Corporation | Virtual configuration systems and methods |
US11409608B2 (en) * | 2020-12-29 | 2022-08-09 | Advanced Micro Devices, Inc. | Providing host-based error detection capabilities in a remote execution device |
EP4281879A1 (en) | 2021-01-25 | 2023-11-29 | Volumez Technologies Ltd. | Remote online volume cloning method and system |
US11816363B2 (en) * | 2021-11-04 | 2023-11-14 | International Business Machines Corporation | File based virtual disk management |
US11907551B2 (en) * | 2022-07-01 | 2024-02-20 | Dell Products, L.P. | Performance efficient and resilient creation of network attached storage objects |
Family Cites Families (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5829053A (en) * | 1996-05-10 | 1998-10-27 | Apple Computer, Inc. | Block storage memory management system and method utilizing independent partition managers and device drivers |
US6044367A (en) * | 1996-08-02 | 2000-03-28 | Hewlett-Packard Company | Distributed I/O store |
US6493811B1 (en) * | 1998-01-26 | 2002-12-10 | Computer Associates Think, Inc. | Intelligent controller accessed through addressable virtual space |
US6240416B1 (en) * | 1998-09-11 | 2001-05-29 | Ambeo, Inc. | Distributed metadata system and method |
US6311213B2 (en) * | 1998-10-27 | 2001-10-30 | International Business Machines Corporation | System and method for server-to-server data storage in a network environment |
US6434637B1 (en) * | 1998-12-31 | 2002-08-13 | Emc Corporation | Method and apparatus for balancing workloads among paths in a multi-path computer system based on the state of previous I/O operations |
US6347371B1 (en) * | 1999-01-25 | 2002-02-12 | Dell Usa, L.P. | System and method for initiating operation of a computer system |
US6370605B1 (en) | 1999-03-04 | 2002-04-09 | Sun Microsystems, Inc. | Switch based scalable performance storage architecture |
US6779016B1 (en) * | 1999-08-23 | 2004-08-17 | Terraspring, Inc. | Extensible computing system |
WO2001029647A1 (en) * | 1999-10-22 | 2001-04-26 | Hitachi, Ltd. | Storage area network system |
US20020103889A1 (en) * | 2000-02-11 | 2002-08-01 | Thomas Markson | Virtual storage layer approach for dynamically associating computer storage with processing hosts |
US6658563B1 (en) * | 2000-05-18 | 2003-12-02 | International Business Machines Corporation | Virtual floppy diskette image within a primary partition in a hard disk drive and method for booting system with virtual diskette |
US6532527B2 (en) * | 2000-06-19 | 2003-03-11 | Storage Technology Corporation | Using current recovery mechanisms to implement dynamic mapping operations |
US6912537B2 (en) * | 2000-06-20 | 2005-06-28 | Storage Technology Corporation | Dynamically changeable virtual mapping scheme |
AU2002230585A1 (en) * | 2000-11-02 | 2002-05-15 | Pirus Networks | Switching system |
US6871245B2 (en) * | 2000-11-29 | 2005-03-22 | Radiant Data Corporation | File system translators and methods for implementing the same |
JP4187403B2 (en) * | 2000-12-20 | 2008-11-26 | インターナショナル・ビジネス・マシーンズ・コーポレーション | Data recording system, data recording method, and network system |
WO2002065309A1 (en) * | 2001-02-13 | 2002-08-22 | Candera, Inc. | System and method for policy based storage provisioning and management |
JP4105398B2 (en) * | 2001-02-28 | 2008-06-25 | 株式会社日立製作所 | Information processing system |
US6779063B2 (en) * | 2001-04-09 | 2004-08-17 | Hitachi, Ltd. | Direct access storage system having plural interfaces which permit receipt of block and file I/O requests |
US6782401B2 (en) * | 2001-07-02 | 2004-08-24 | Sepaton, Inc. | Method and apparatus for implementing a reliable open file system |
US7433948B2 (en) * | 2002-01-23 | 2008-10-07 | Cisco Technology, Inc. | Methods and apparatus for implementing virtualization of storage within a storage area network |
US7548975B2 (en) * | 2002-01-09 | 2009-06-16 | Cisco Technology, Inc. | Methods and apparatus for implementing virtualization of storage within a storage area network through a virtual enclosure |
US6934799B2 (en) * | 2002-01-18 | 2005-08-23 | International Business Machines Corporation | Virtualization of iSCSI storage |
US6954839B2 (en) * | 2002-03-13 | 2005-10-11 | Hitachi, Ltd. | Computer system |
US6889309B1 (en) * | 2002-04-15 | 2005-05-03 | Emc Corporation | Method and apparatus for implementing an enterprise virtual storage system |
US6954852B2 (en) * | 2002-04-18 | 2005-10-11 | Ardence, Inc. | System for and method of network booting of an operating system to a client computer using hibernation |
US6973587B1 (en) * | 2002-05-03 | 2005-12-06 | American Megatrends, Inc. | Systems and methods for out-of-band booting of a computer |
US7100089B1 (en) | 2002-09-06 | 2006-08-29 | 3Pardata, Inc. | Determining differences between snapshots |
US7263593B2 (en) * | 2002-11-25 | 2007-08-28 | Hitachi, Ltd. | Virtualization controller and data transfer control method |
US7797392B2 (en) * | 2002-11-26 | 2010-09-14 | International Business Machines Corporation | System and method for efficiently supporting multiple native network protocol implementations in a single system |
US7020760B2 (en) * | 2002-12-16 | 2006-03-28 | International Business Machines Corporation | Hybrid logical block virtualization system for a storage area network |
US7606239B2 (en) * | 2003-01-31 | 2009-10-20 | Brocade Communications Systems, Inc. | Method and apparatus for providing virtual ports with attached virtual devices in a storage area network |
US6990573B2 (en) * | 2003-02-05 | 2006-01-24 | Dell Products L.P. | System and method for sharing storage to boot multiple servers |
US7984108B2 (en) * | 2003-10-08 | 2011-07-19 | Unisys Corporation | Computer system para-virtualization using a hypervisor that is implemented in a partition of the host system |
US20050125538A1 (en) * | 2003-12-03 | 2005-06-09 | Dell Products L.P. | Assigning logical storage units to host computers |
US8190714B2 (en) * | 2004-04-15 | 2012-05-29 | Raytheon Company | System and method for computer cluster virtualization using dynamic boot images and virtual disk |
2003
- 2003-11-26 US US10/722,614 patent/US20050114595A1/en not_active Abandoned

2004
- 2004-11-22 CN CNB2004800405835A patent/CN100552611C/en active Active
- 2004-11-22 WO PCT/US2004/039306 patent/WO2005055043A1/en active Application Filing
- 2004-11-22 JP JP2006541649A patent/JP4750040B2/en active Active
- 2004-11-22 EP EP04811936A patent/EP1687706A1/en not_active Withdrawn

2005
- 2005-06-20 US US11/156,635 patent/US7689803B2/en not_active Expired - Fee Related
- 2005-06-20 US US11/156,636 patent/US20050228950A1/en not_active Abandoned
- 2005-06-20 US US11/156,821 patent/US20050235132A1/en not_active Abandoned
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5193184A (en) * | 1990-06-18 | 1993-03-09 | Storage Technology Corporation | Deleted data file space release system for a dynamically mapped virtual data storage subsystem |
US6467023B1 (en) * | 1999-03-23 | 2002-10-15 | Lsi Logic Corporation | Method for logical unit creation with immediate availability in a raid storage environment |
US20020156984A1 (en) * | 2001-02-20 | 2002-10-24 | Storageapps Inc. | System and method for accessing a storage area network as network attached storage |
US20040015864A1 (en) * | 2001-06-05 | 2004-01-22 | Boucher Michael L. | Method and system for testing memory operations of computer program |
US7082503B2 (en) * | 2001-07-13 | 2006-07-25 | Hitachi, Ltd. | Security for logical unit in storage system |
US7188194B1 (en) * | 2002-04-22 | 2007-03-06 | Cisco Technology, Inc. | Session-based target/LUN mapping for a storage area network and associated method |
US20040030822A1 (en) * | 2002-08-09 | 2004-02-12 | Vijayan Rajan | Storage virtualization by layering virtual disk objects on a file system |
US6816917B2 (en) * | 2003-01-15 | 2004-11-09 | Hewlett-Packard Development Company, L.P. | Storage system with LUN virtualization |
US20050228950A1 (en) * | 2003-11-26 | 2005-10-13 | Veritas Operating Corporation | External encapsulation of a volume into a LUN to allow booting and installation on a complex volume |
US20050228937A1 (en) * | 2003-11-26 | 2005-10-13 | Veritas Operating Corporation | System and method for emulating operating system metadata to provide cross-platform access to storage volumes |
US20060112251A1 (en) * | 2003-11-26 | 2006-05-25 | Veritas Operating Corporation | Host-based virtualization optimizations in storage environments employing off-host storage virtualization |
Cited By (308)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050228950A1 (en) * | 2003-11-26 | 2005-10-13 | Veritas Operating Corporation | External encapsulation of a volume into a LUN to allow booting and installation on a complex volume |
US20050228937A1 (en) * | 2003-11-26 | 2005-10-13 | Veritas Operating Corporation | System and method for emulating operating system metadata to provide cross-platform access to storage volumes |
US20060112251A1 (en) * | 2003-11-26 | 2006-05-25 | Veritas Operating Corporation | Host-based virtualization optimizations in storage environments employing off-host storage virtualization |
US7689803B2 (en) | 2003-11-26 | 2010-03-30 | Symantec Operating Corporation | System and method for communication using emulated LUN blocks in storage virtualization environments |
US7669032B2 (en) | 2003-11-26 | 2010-02-23 | Symantec Operating Corporation | Host-based virtualization optimizations in storage environments employing off-host storage virtualization |
US8032701B1 (en) * | 2004-03-26 | 2011-10-04 | Emc Corporation | System and method for managing provisioning of storage resources in a network with virtualization of resources in such a network |
US20060259650A1 (en) * | 2005-05-16 | 2006-11-16 | Infortrend Technology, Inc. | Method of transmitting data between storage virtualization controllers and storage virtualization controller designed to implement the method |
US7774514B2 (en) * | 2005-05-16 | 2010-08-10 | Infortrend Technology, Inc. | Method of transmitting data between storage virtualization controllers and storage virtualization controller designed to implement the method |
US20070070535A1 (en) * | 2005-09-27 | 2007-03-29 | Fujitsu Limited | Storage system and component replacement processing method thereof |
US20160277499A1 (en) * | 2005-12-19 | 2016-09-22 | Commvault Systems, Inc. | Systems and methods for granular resource management in a storage network |
US9930118B2 (en) * | 2005-12-19 | 2018-03-27 | Commvault Systems, Inc. | Systems and methods for granular resource management in a storage network |
US20180278689A1 (en) * | 2005-12-19 | 2018-09-27 | Commvault Systems, Inc. | Systems and methods for granular resource management in a storage network |
US7441009B2 (en) * | 2005-12-27 | 2008-10-21 | Fujitsu Limited | Computer system and storage virtualizer |
US20070180167A1 (en) * | 2006-02-02 | 2007-08-02 | Seagate Technology Llc | Dynamic partition mapping in a hot-pluggable data storage apparatus |
US20070233992A1 (en) * | 2006-03-28 | 2007-10-04 | Hitachi, Ltd. | Storage apparatus |
US8195913B2 (en) * | 2006-04-18 | 2012-06-05 | Hitachi, Ltd. | Data storage control on storage devices |
US8635427B2 (en) | 2006-04-18 | 2014-01-21 | Hitachi, Ltd. | Data storage control on storage devices |
US20110208924A1 (en) * | 2006-04-18 | 2011-08-25 | Hitachi, Ltd. | Data storage control on storage devices |
US7536503B1 (en) * | 2006-06-30 | 2009-05-19 | Emc Corporation | Methods and systems for preserving disk geometry when migrating existing data volumes |
US8095715B1 (en) * | 2006-09-05 | 2012-01-10 | Nvidia Corporation | SCSI HBA management using logical units |
US8332613B1 (en) * | 2006-09-29 | 2012-12-11 | Emc Corporation | Methods and systems for managing I/O requests to minimize disruption required for data encapsulation and de-encapsulation |
US9734086B2 (en) | 2006-12-06 | 2017-08-15 | Sandisk Technologies Llc | Apparatus, system, and method for a device shared between multiple independent hosts |
US11847066B2 (en) | 2006-12-06 | 2023-12-19 | Unification Technologies Llc | Apparatus, system, and method for managing commands of solid-state storage using bank interleave |
US11573909B2 (en) | 2006-12-06 | 2023-02-07 | Unification Technologies Llc | Apparatus, system, and method for managing commands of solid-state storage using bank interleave |
US11960412B2 (en) | 2006-12-06 | 2024-04-16 | Unification Technologies Llc | Systems and methods for identifying storage resources that are not in use |
US8443134B2 (en) | 2006-12-06 | 2013-05-14 | Fusion-Io, Inc. | Apparatus, system, and method for graceful cache device degradation |
US11640359B2 (en) | 2006-12-06 | 2023-05-02 | Unification Technologies Llc | Systems and methods for identifying storage resources that are not in use |
US8756375B2 (en) | 2006-12-06 | 2014-06-17 | Fusion-Io, Inc. | Non-volatile cache |
US20080183965A1 (en) * | 2007-01-29 | 2008-07-31 | Kenta Shiga | Controller for controlling a plurality of logical resources of a storage system |
US8161487B2 (en) * | 2007-01-29 | 2012-04-17 | Hitachi, Ltd. | Controller for controlling a plurality of logical resources of a storage system |
US7840790B1 (en) * | 2007-02-16 | 2010-11-23 | Vmware, Inc. | Method and system for providing device drivers in a virtualization system |
US20090320041A1 (en) * | 2007-03-23 | 2009-12-24 | Fujitsu Limited | Computer program and method for balancing processing load in storage system, and apparatus for managing storage devices |
US8516070B2 (en) * | 2007-03-23 | 2013-08-20 | Fujitsu Limited | Computer program and method for balancing processing load in storage system, and apparatus for managing storage devices |
US7568051B1 (en) * | 2007-06-29 | 2009-07-28 | Emc Corporation | Flexible UCB |
US20090089498A1 (en) * | 2007-10-02 | 2009-04-02 | Michael Cameron Hay | Transparently migrating ongoing I/O to virtualized storage |
US20090119452A1 (en) * | 2007-11-02 | 2009-05-07 | Crossroads Systems, Inc. | Method and system for a sharable storage device |
US8489817B2 (en) | 2007-12-06 | 2013-07-16 | Fusion-Io, Inc. | Apparatus, system, and method for caching data |
US9519540B2 (en) | 2007-12-06 | 2016-12-13 | Sandisk Technologies Llc | Apparatus, system, and method for destaging cached data |
US8706968B2 (en) | 2007-12-06 | 2014-04-22 | Fusion-Io, Inc. | Apparatus, system, and method for redundant write caching |
US9104599B2 (en) | 2007-12-06 | 2015-08-11 | Intelligent Intellectual Property Holdings 2 Llc | Apparatus, system, and method for destaging cached data |
US9600184B2 (en) | 2007-12-06 | 2017-03-21 | Sandisk Technologies Llc | Apparatus, system, and method for coordinating storage requests in a multi-processor/multi-thread environment |
US8028062B1 (en) * | 2007-12-26 | 2011-09-27 | Emc Corporation | Non-disruptive data mobility using virtual storage area networks with split-path virtualization |
US20090249018A1 (en) * | 2008-03-28 | 2009-10-01 | Hitachi Ltd. | Storage management method, storage management program, storage management apparatus, and storage management system |
GB2460841A (en) * | 2008-06-10 | 2009-12-16 | Virtensys Ltd | Identifier mapping in a storage network switch |
US8966135B2 (en) | 2008-06-10 | 2015-02-24 | Micron Technology, Inc. | Methods of providing access to I/O devices |
GB2460841B (en) * | 2008-06-10 | 2012-01-11 | Virtensys Ltd | Methods of providing access to I/O devices |
US8560742B2 (en) | 2008-06-10 | 2013-10-15 | Virtensys Limited | Methods of providing access to I/O devices |
US8261068B1 (en) | 2008-09-30 | 2012-09-04 | Emc Corporation | Systems and methods for selective encryption of operating system metadata for host-based encryption of data at rest on a logical unit |
US8416954B1 (en) | 2008-09-30 | 2013-04-09 | Emc Corporation | Systems and methods for accessing storage or network based replicas of encrypted volumes with no additional key management |
US8166314B1 (en) | 2008-12-30 | 2012-04-24 | Emc Corporation | Selective I/O to logical unit when encrypted, but key is not available or when encryption status is unknown |
US8473698B2 (en) | 2009-05-12 | 2013-06-25 | Microsoft Corporation | Converting LUNs into files or files into LUNs in real time |
US8074038B2 (en) | 2009-05-12 | 2011-12-06 | Microsoft Corporation | Converting luns into files or files into luns in real time |
US20100293349A1 (en) * | 2009-05-12 | 2010-11-18 | Microsoft Corporation | Converting luns into files or files into luns in real time |
US20100306269A1 (en) * | 2009-05-26 | 2010-12-02 | Roger Frederick Osmond | Method and apparatus for large scale data storage |
US9015198B2 (en) * | 2009-05-26 | 2015-04-21 | Pi-Coral, Inc. | Method and apparatus for large scale data storage |
US20120042114A1 (en) * | 2010-08-11 | 2012-02-16 | Lsi Corporation | Apparatus and methods for managing expanded capacity of virtual volumes in a storage system |
US8261003B2 (en) * | 2010-08-11 | 2012-09-04 | Lsi Corporation | Apparatus and methods for managing expanded capacity of virtual volumes in a storage system |
US20120060203A1 (en) * | 2010-09-07 | 2012-03-08 | Susumu Aikawa | Logical unit number management device, logical unit number management method, and program therefor |
US8799996B2 (en) * | 2010-09-07 | 2014-08-05 | Nec Corporation | Logical unit number management device, logical unit number management method, and program therefor |
US11614893B2 (en) | 2010-09-15 | 2023-03-28 | Pure Storage, Inc. | Optimizing storage device access based on latency |
US8966184B2 (en) | 2011-01-31 | 2015-02-24 | Intelligent Intellectual Property Holdings 2, LLC. | Apparatus, system, and method for managing eviction of data |
US9092337B2 (en) | 2011-01-31 | 2015-07-28 | Intelligent Intellectual Property Holdings 2 Llc | Apparatus, system, and method for managing eviction of data |
US9141527B2 (en) | 2011-02-25 | 2015-09-22 | Intelligent Intellectual Property Holdings 2 Llc | Managing cache pools |
US8825937B2 (en) | 2011-02-25 | 2014-09-02 | Fusion-Io, Inc. | Writing cached data forward on read |
US8909893B2 (en) * | 2011-07-21 | 2014-12-09 | International Business Machines Corporation | Virtual logical volume for overflow storage of special data sets |
US20130179660A1 (en) * | 2011-07-21 | 2013-07-11 | International Business Machines Corporation | Virtual Logical Volume for Overflow Storage of Special Data Sets |
US8909891B2 (en) | 2011-07-21 | 2014-12-09 | International Business Machines Corporation | Virtual logical volume for overflow storage of special data sets |
US11650976B2 (en) | 2011-10-14 | 2023-05-16 | Pure Storage, Inc. | Pattern matching using hash tables in storage system |
US9767032B2 (en) | 2012-01-12 | 2017-09-19 | Sandisk Technologies Llc | Systems and methods for cache endurance |
US9251086B2 (en) | 2012-01-24 | 2016-02-02 | SanDisk Technologies, Inc. | Apparatus, system, and method for managing a cache |
US9223609B2 (en) | 2012-01-30 | 2015-12-29 | Hewlett Packard Enterprise Development Lp | Input/output operations at a virtual block device of a storage server |
US9158568B2 (en) | 2012-01-30 | 2015-10-13 | Hewlett-Packard Development Company, L.P. | Input/output operations at a virtual block device of a storage server |
US9098325B2 (en) | 2012-02-28 | 2015-08-04 | Hewlett-Packard Development Company, L.P. | Persistent volume at an offset of a virtual block device of a storage server |
US8838931B1 (en) * | 2012-03-30 | 2014-09-16 | Emc Corporation | Techniques for automated discovery and performing storage optimizations on a component external to a data storage system |
US10831727B2 (en) | 2012-05-29 | 2020-11-10 | International Business Machines Corporation | Application-controlled sub-LUN level data migration |
US10831728B2 (en) | 2012-05-29 | 2020-11-10 | International Business Machines Corporation | Application-controlled sub-LUN level data migration |
US10838929B2 (en) | 2012-05-29 | 2020-11-17 | International Business Machines Corporation | Application-controlled sub-LUN level data migration |
US10817202B2 (en) | 2012-05-29 | 2020-10-27 | International Business Machines Corporation | Application-controlled sub-LUN level data migration |
US10831390B2 (en) | 2012-05-29 | 2020-11-10 | International Business Machines Corporation | Application-controlled sub-lun level data migration |
US10831729B2 (en) | 2012-05-29 | 2020-11-10 | International Business Machines Corporation | Application-controlled sub-LUN level data migration |
US20140164581A1 (en) * | 2012-12-10 | 2014-06-12 | Transparent Io, Inc. | Dispersed Storage System with Firewall |
US11671496B2 (en) | 2014-06-04 | 2023-06-06 | Pure Storage, Inc. | Load balancing for distributed computing |
US11310317B1 (en) | 2014-06-04 | 2022-04-19 | Pure Storage, Inc. | Efficient load balancing |
US11057468B1 (en) | 2014-06-04 | 2021-07-06 | Pure Storage, Inc. | Vast data storage system |
US10671480B2 (en) | 2014-06-04 | 2020-06-02 | Pure Storage, Inc. | Utilization of erasure codes in a storage system |
US9967342B2 (en) | 2014-06-04 | 2018-05-08 | Pure Storage, Inc. | Storage system architecture |
US11652884B2 (en) | 2014-06-04 | 2023-05-16 | Pure Storage, Inc. | Customized hash algorithms |
US11822444B2 (en) | 2014-06-04 | 2023-11-21 | Pure Storage, Inc. | Data rebuild independent of error detection |
US11138082B2 (en) | 2014-06-04 | 2021-10-05 | Pure Storage, Inc. | Action determination based on redundancy level |
US11036583B2 (en) | 2014-06-04 | 2021-06-15 | Pure Storage, Inc. | Rebuilding data across storage nodes |
US10574754B1 (en) | 2014-06-04 | 2020-02-25 | Pure Storage, Inc. | Multi-chassis array with multi-level load balancing |
US9798477B2 (en) | 2014-06-04 | 2017-10-24 | Pure Storage, Inc. | Scalable non-uniform storage sizes |
US11960371B2 (en) | 2014-06-04 | 2024-04-16 | Pure Storage, Inc. | Message persistence in a zoned system |
US11714715B2 (en) | 2014-06-04 | 2023-08-01 | Pure Storage, Inc. | Storage system accommodating varying storage capacities |
US11385799B2 (en) | 2014-06-04 | 2022-07-12 | Pure Storage, Inc. | Storage nodes supporting multiple erasure coding schemes |
US11593203B2 (en) | 2014-06-04 | 2023-02-28 | Pure Storage, Inc. | Coexisting differing erasure codes |
US11399063B2 (en) | 2014-06-04 | 2022-07-26 | Pure Storage, Inc. | Network authentication for a storage system |
US10303547B2 (en) | 2014-06-04 | 2019-05-28 | Pure Storage, Inc. | Rebuilding data across storage nodes |
US10838633B2 (en) | 2014-06-04 | 2020-11-17 | Pure Storage, Inc. | Configurable hyperconverged multi-tenant storage system |
US10809919B2 (en) | 2014-06-04 | 2020-10-20 | Pure Storage, Inc. | Scalable storage capacities |
US9525738B2 (en) | 2014-06-04 | 2016-12-20 | Pure Storage, Inc. | Storage system architecture |
US11500552B2 (en) | 2014-06-04 | 2022-11-15 | Pure Storage, Inc. | Configurable hyperconverged multi-tenant storage system |
US10379763B2 (en) | 2014-06-04 | 2019-08-13 | Pure Storage, Inc. | Hyperconverged storage system with distributable processing power |
US10430306B2 (en) | 2014-06-04 | 2019-10-01 | Pure Storage, Inc. | Mechanism for persisting messages in a storage system |
US11886308B2 (en) | 2014-07-02 | 2024-01-30 | Pure Storage, Inc. | Dual class of service for unified file and object messaging |
US11079962B2 (en) | 2014-07-02 | 2021-08-03 | Pure Storage, Inc. | Addressable non-volatile random access memory |
US10817431B2 (en) | 2014-07-02 | 2020-10-27 | Pure Storage, Inc. | Distributed storage addressing |
US10114757B2 (en) | 2014-07-02 | 2018-10-30 | Pure Storage, Inc. | Nonrepeating identifiers in an address space of a non-volatile solid-state storage |
US10372617B2 (en) | 2014-07-02 | 2019-08-06 | Pure Storage, Inc. | Nonrepeating identifiers in an address space of a non-volatile solid-state storage |
US10877861B2 (en) | 2014-07-02 | 2020-12-29 | Pure Storage, Inc. | Remote procedure call cache for distributed system |
US11385979B2 (en) | 2014-07-02 | 2022-07-12 | Pure Storage, Inc. | Mirrored remote procedure call cache |
US11604598B2 (en) | 2014-07-02 | 2023-03-14 | Pure Storage, Inc. | Storage cluster with zoned drives |
US10572176B2 (en) | 2014-07-02 | 2020-02-25 | Pure Storage, Inc. | Storage cluster operation using erasure coded data |
US11922046B2 (en) | 2014-07-02 | 2024-03-05 | Pure Storage, Inc. | Erasure coded data within zoned drives |
US11928076B2 (en) | 2014-07-03 | 2024-03-12 | Pure Storage, Inc. | Actions for reserved filenames |
US10853285B2 (en) | 2014-07-03 | 2020-12-01 | Pure Storage, Inc. | Direct memory access data format |
US9747229B1 (en) | 2014-07-03 | 2017-08-29 | Pure Storage, Inc. | Self-describing data format for DMA in a non-volatile solid-state storage |
US11392522B2 (en) | 2014-07-03 | 2022-07-19 | Pure Storage, Inc. | Transfer of segmented data |
US10198380B1 (en) | 2014-07-03 | 2019-02-05 | Pure Storage, Inc. | Direct memory access data movement |
US10185506B2 (en) | 2014-07-03 | 2019-01-22 | Pure Storage, Inc. | Scheduling policy for queues in a non-volatile solid-state storage |
US10691812B2 (en) | 2014-07-03 | 2020-06-23 | Pure Storage, Inc. | Secure data replication in a storage grid |
US11550752B2 (en) | 2014-07-03 | 2023-01-10 | Pure Storage, Inc. | Administrative actions via a reserved filename |
US11494498B2 (en) | 2014-07-03 | 2022-11-08 | Pure Storage, Inc. | Storage data decryption |
US11620197B2 (en) | 2014-08-07 | 2023-04-04 | Pure Storage, Inc. | Recovering error corrected data |
US10528419B2 (en) | 2014-08-07 | 2020-01-07 | Pure Storage, Inc. | Mapping around defective flash memory of a storage array |
US11656939B2 (en) | 2014-08-07 | 2023-05-23 | Pure Storage, Inc. | Storage cluster memory characterization |
US10990283B2 (en) | 2014-08-07 | 2021-04-27 | Pure Storage, Inc. | Proactive data rebuild based on queue feedback |
US10983866B2 (en) | 2014-08-07 | 2021-04-20 | Pure Storage, Inc. | Mapping defective memory in a storage system |
US10579474B2 (en) | 2014-08-07 | 2020-03-03 | Pure Storage, Inc. | Die-level monitoring in a storage cluster |
US11204830B2 (en) | 2014-08-07 | 2021-12-21 | Pure Storage, Inc. | Die-level monitoring in a storage cluster |
US11080154B2 (en) | 2014-08-07 | 2021-08-03 | Pure Storage, Inc. | Recovering error corrected data |
US10216411B2 (en) | 2014-08-07 | 2019-02-26 | Pure Storage, Inc. | Data rebuild on feedback from a queue in a non-volatile solid-state storage |
US11442625B2 (en) | 2014-08-07 | 2022-09-13 | Pure Storage, Inc. | Multiple read data paths in a storage system |
US11544143B2 (en) | 2014-08-07 | 2023-01-03 | Pure Storage, Inc. | Increased data reliability |
US10324812B2 (en) | 2014-08-07 | 2019-06-18 | Pure Storage, Inc. | Error recovery in a storage cluster |
US10498580B1 (en) | 2014-08-20 | 2019-12-03 | Pure Storage, Inc. | Assigning addresses in a storage system |
US11188476B1 (en) | 2014-08-20 | 2021-11-30 | Pure Storage, Inc. | Virtual addressing in a storage system |
US11734186B2 (en) | 2014-08-20 | 2023-08-22 | Pure Storage, Inc. | Heterogeneous storage with preserved addressing |
US9948615B1 (en) | 2015-03-16 | 2018-04-17 | Pure Storage, Inc. | Increased storage unit encryption based on loss of trust |
US11294893B2 (en) | 2015-03-20 | 2022-04-05 | Pure Storage, Inc. | Aggregation of queries |
US11775428B2 (en) | 2015-03-26 | 2023-10-03 | Pure Storage, Inc. | Deletion immunity for unreferenced data |
US10853243B2 (en) | 2015-03-26 | 2020-12-01 | Pure Storage, Inc. | Aggressive data deduplication using lazy garbage collection |
US9940234B2 (en) | 2015-03-26 | 2018-04-10 | Pure Storage, Inc. | Aggressive data deduplication using lazy garbage collection |
US11188269B2 (en) | 2015-03-27 | 2021-11-30 | Pure Storage, Inc. | Configuration for multiple logical storage arrays |
US10082985B2 (en) | 2015-03-27 | 2018-09-25 | Pure Storage, Inc. | Data striping across storage nodes that are assigned to multiple logical arrays |
US10353635B2 (en) | 2015-03-27 | 2019-07-16 | Pure Storage, Inc. | Data control across multiple logical arrays |
US10178169B2 (en) * | 2015-04-09 | 2019-01-08 | Pure Storage, Inc. | Point to point based backend communication layer for storage processing |
US10693964B2 (en) * | 2015-04-09 | 2020-06-23 | Pure Storage, Inc. | Storage unit communication within a storage system |
US20160301752A1 (en) * | 2015-04-09 | 2016-10-13 | Pure Storage, Inc. | Point to point based backend communication layer for storage processing |
US11722567B2 (en) | 2015-04-09 | 2023-08-08 | Pure Storage, Inc. | Communication paths for storage devices having differing capacities |
US11240307B2 (en) | 2015-04-09 | 2022-02-01 | Pure Storage, Inc. | Multiple communication paths in a storage system |
WO2016164646A1 (en) * | 2015-04-09 | 2016-10-13 | Pure Storage, Inc. | Point to point based backend communication layer for storage processing |
US11144212B2 (en) | 2015-04-10 | 2021-10-12 | Pure Storage, Inc. | Independent partitions within an array |
US9672125B2 (en) | 2015-04-10 | 2017-06-06 | Pure Storage, Inc. | Ability to partition an array into two or more logical arrays with independently running software |
US10496295B2 (en) | 2015-04-10 | 2019-12-03 | Pure Storage, Inc. | Representing a storage array as two or more logical arrays with respective virtual local area networks (VLANS) |
US11231956B2 (en) | 2015-05-19 | 2022-01-25 | Pure Storage, Inc. | Committed transactions in a storage system |
US10140149B1 (en) | 2015-05-19 | 2018-11-27 | Pure Storage, Inc. | Transactional commits with hardware assists in remote memory |
US10712942B2 (en) | 2015-05-27 | 2020-07-14 | Pure Storage, Inc. | Parallel update to maintain coherency |
US9817576B2 (en) | 2015-05-27 | 2017-11-14 | Pure Storage, Inc. | Parallel update to NVRAM |
US11675762B2 (en) | 2015-06-26 | 2023-06-13 | Pure Storage, Inc. | Data structures for key management |
US10983732B2 (en) | 2015-07-13 | 2021-04-20 | Pure Storage, Inc. | Method and system for accessing a file |
US11704073B2 (en) | 2015-07-13 | 2023-07-18 | Pure Storage, Inc. | Ownership determination for accessing a file |
US11232079B2 (en) | 2015-07-16 | 2022-01-25 | Pure Storage, Inc. | Efficient distribution of large directories |
US11740802B2 (en) | 2015-09-01 | 2023-08-29 | Pure Storage, Inc. | Error correction bypass for erased pages |
US11099749B2 (en) | 2015-09-01 | 2021-08-24 | Pure Storage, Inc. | Erase detection logic for a storage system |
US10108355B2 (en) | 2015-09-01 | 2018-10-23 | Pure Storage, Inc. | Erase block state detection |
US11893023B2 (en) | 2015-09-04 | 2024-02-06 | Pure Storage, Inc. | Deterministic searching using compressed indexes |
US11567917B2 (en) | 2015-09-30 | 2023-01-31 | Pure Storage, Inc. | Writing data and metadata into storage |
US11489668B2 (en) | 2015-09-30 | 2022-11-01 | Pure Storage, Inc. | Secret regeneration in a storage system |
US10853266B2 (en) | 2015-09-30 | 2020-12-01 | Pure Storage, Inc. | Hardware assisted data lookup methods |
US10211983B2 (en) | 2015-09-30 | 2019-02-19 | Pure Storage, Inc. | Resharing of a split secret |
US11838412B2 (en) | 2015-09-30 | 2023-12-05 | Pure Storage, Inc. | Secret regeneration from distributed shares |
US9768953B2 (en) | 2015-09-30 | 2017-09-19 | Pure Storage, Inc. | Resharing of a split secret |
US10887099B2 (en) | 2015-09-30 | 2021-01-05 | Pure Storage, Inc. | Data encryption in a distributed system |
US11070382B2 (en) | 2015-10-23 | 2021-07-20 | Pure Storage, Inc. | Communication in a distributed architecture |
US11582046B2 (en) | 2015-10-23 | 2023-02-14 | Pure Storage, Inc. | Storage system communication |
US10277408B2 (en) | 2015-10-23 | 2019-04-30 | Pure Storage, Inc. | Token based communication |
US9843453B2 (en) | 2015-10-23 | 2017-12-12 | Pure Storage, Inc. | Authorizing I/O commands with I/O tokens |
US11204701B2 (en) | 2015-12-22 | 2021-12-21 | Pure Storage, Inc. | Token based transactions |
US10599348B2 (en) | 2015-12-22 | 2020-03-24 | Pure Storage, Inc. | Distributed transactions with token-associated execution |
US10007457B2 (en) | 2015-12-22 | 2018-06-26 | Pure Storage, Inc. | Distributed transactions with token-associated execution |
US10649659B2 (en) | 2016-05-03 | 2020-05-12 | Pure Storage, Inc. | Scaleable storage array |
US11550473B2 (en) | 2016-05-03 | 2023-01-10 | Pure Storage, Inc. | High-availability storage array |
US11847320B2 (en) | 2016-05-03 | 2023-12-19 | Pure Storage, Inc. | Reassignment of requests for high availability |
US10261690B1 (en) | 2016-05-03 | 2019-04-16 | Pure Storage, Inc. | Systems and methods for operating a storage system |
US11861188B2 (en) | 2016-07-19 | 2024-01-02 | Pure Storage, Inc. | System having modular accelerators |
US11886288B2 (en) | 2016-07-22 | 2024-01-30 | Pure Storage, Inc. | Optimize data protection layouts based on distributed flash wear leveling |
US11449232B1 (en) | 2016-07-22 | 2022-09-20 | Pure Storage, Inc. | Optimal scheduling of flash operations |
US10831594B2 (en) | 2016-07-22 | 2020-11-10 | Pure Storage, Inc. | Optimize data protection layouts based on distributed flash wear leveling |
US11409437B2 (en) | 2016-07-22 | 2022-08-09 | Pure Storage, Inc. | Persisting configuration information |
US10768819B2 (en) | 2016-07-22 | 2020-09-08 | Pure Storage, Inc. | Hardware support for non-disruptive upgrades |
US11604690B2 (en) | 2016-07-24 | 2023-03-14 | Pure Storage, Inc. | Online failure span determination |
US11080155B2 (en) | 2016-07-24 | 2021-08-03 | Pure Storage, Inc. | Identifying error types among flash memory |
US10216420B1 (en) | 2016-07-24 | 2019-02-26 | Pure Storage, Inc. | Calibration of flash channels in SSD |
US11734169B2 (en) | 2016-07-26 | 2023-08-22 | Pure Storage, Inc. | Optimizing spool and memory space management |
US11886334B2 (en) | 2016-07-26 | 2024-01-30 | Pure Storage, Inc. | Optimizing spool and memory space management |
US10203903B2 (en) | 2016-07-26 | 2019-02-12 | Pure Storage, Inc. | Geometry based, space aware shelf/writegroup evacuation |
US11797212B2 (en) | 2016-07-26 | 2023-10-24 | Pure Storage, Inc. | Data migration for zoned drives |
US11030090B2 (en) | 2016-07-26 | 2021-06-08 | Pure Storage, Inc. | Adaptive data migration |
US10776034B2 (en) | 2016-07-26 | 2020-09-15 | Pure Storage, Inc. | Adaptive data migration |
US11340821B2 (en) | 2016-07-26 | 2022-05-24 | Pure Storage, Inc. | Adjustable migration utilization |
US10366004B2 (en) | 2016-07-26 | 2019-07-30 | Pure Storage, Inc. | Storage system with elective garbage collection to reduce flash contention |
US11422719B2 (en) | 2016-09-15 | 2022-08-23 | Pure Storage, Inc. | Distributed file deletion and truncation |
US11656768B2 (en) | 2016-09-15 | 2023-05-23 | Pure Storage, Inc. | File deletion in a distributed system |
US10678452B2 (en) | 2016-09-15 | 2020-06-09 | Pure Storage, Inc. | Distributed deletion of a file and directory hierarchy |
US11301147B2 (en) | 2016-09-15 | 2022-04-12 | Pure Storage, Inc. | Adaptive concurrency for write persistence |
US11922033B2 (en) | 2016-09-15 | 2024-03-05 | Pure Storage, Inc. | Batch data deletion |
US11581943B2 (en) | 2016-10-04 | 2023-02-14 | Pure Storage, Inc. | Queues reserved for direct access via a user application |
US11922070B2 (en) | 2016-10-04 | 2024-03-05 | Pure Storage, Inc. | Granting access to a storage device based on reservations |
US11842053B2 (en) | 2016-12-19 | 2023-12-12 | Pure Storage, Inc. | Zone namespace |
US11955187B2 (en) | 2017-01-13 | 2024-04-09 | Pure Storage, Inc. | Refresh of differing capacity NAND |
US11289169B2 (en) | 2017-01-13 | 2022-03-29 | Pure Storage, Inc. | Cycled background reads |
US10650902B2 (en) | 2017-01-13 | 2020-05-12 | Pure Storage, Inc. | Method for processing blocks of flash memory |
US10979223B2 (en) | 2017-01-31 | 2021-04-13 | Pure Storage, Inc. | Separate encryption for a solid-state drive |
US10942869B2 (en) | 2017-03-30 | 2021-03-09 | Pure Storage, Inc. | Efficient coding in a storage system |
US11449485B1 (en) | 2017-03-30 | 2022-09-20 | Pure Storage, Inc. | Sequence invalidation consolidation in a storage system |
US10528488B1 (en) | 2017-03-30 | 2020-01-07 | Pure Storage, Inc. | Efficient name coding |
US11016667B1 (en) | 2017-04-05 | 2021-05-25 | Pure Storage, Inc. | Efficient mapping for LUNs in storage memory with holes in address space |
US11592985B2 (en) | 2017-04-05 | 2023-02-28 | Pure Storage, Inc. | Mapping LUNs in a storage memory |
US11869583B2 (en) | 2017-04-27 | 2024-01-09 | Pure Storage, Inc. | Page write requirements for differing types of flash memory |
US11722455B2 (en) | 2017-04-27 | 2023-08-08 | Pure Storage, Inc. | Storage cluster address resolution |
US10141050B1 (en) | 2017-04-27 | 2018-11-27 | Pure Storage, Inc. | Page writes for triple level cell flash memory |
US10944671B2 (en) | 2017-04-27 | 2021-03-09 | Pure Storage, Inc. | Efficient data forwarding in a networked device |
US10524022B2 (en) * | 2017-05-02 | 2019-12-31 | Seagate Technology Llc | Data storage system with adaptive data path routing |
US11467913B1 (en) | 2017-06-07 | 2022-10-11 | Pure Storage, Inc. | Snapshots with crash consistency in a storage system |
US11068389B2 (en) | 2017-06-11 | 2021-07-20 | Pure Storage, Inc. | Data resiliency with heterogeneous storage |
US11782625B2 (en) | 2017-06-11 | 2023-10-10 | Pure Storage, Inc. | Heterogeneity supportive resiliency groups |
US11138103B1 (en) | 2017-06-11 | 2021-10-05 | Pure Storage, Inc. | Resiliency groups |
US11947814B2 (en) | 2017-06-11 | 2024-04-02 | Pure Storage, Inc. | Optimizing resiliency group formation stability |
US11190580B2 (en) | 2017-07-03 | 2021-11-30 | Pure Storage, Inc. | Stateful connection resets |
US11689610B2 (en) | 2017-07-03 | 2023-06-27 | Pure Storage, Inc. | Load balancing reset packets |
US11714708B2 (en) | 2017-07-31 | 2023-08-01 | Pure Storage, Inc. | Intra-device redundancy scheme |
US10877827B2 (en) | 2017-09-15 | 2020-12-29 | Pure Storage, Inc. | Read voltage optimization |
US10210926B1 (en) | 2017-09-15 | 2019-02-19 | Pure Storage, Inc. | Tracking of optimum read voltage thresholds in nand flash devices |
US11086532B2 (en) | 2017-10-31 | 2021-08-10 | Pure Storage, Inc. | Data rebuild with changing erase block sizes |
US11704066B2 (en) | 2017-10-31 | 2023-07-18 | Pure Storage, Inc. | Heterogeneous erase blocks |
US10545687B1 (en) | 2017-10-31 | 2020-01-28 | Pure Storage, Inc. | Data rebuild when changing erase block sizes during drive replacement |
US11024390B1 (en) | 2017-10-31 | 2021-06-01 | Pure Storage, Inc. | Overlapping RAID groups |
US11604585B2 (en) | 2017-10-31 | 2023-03-14 | Pure Storage, Inc. | Data rebuild when changing erase block sizes during drive replacement |
US11074016B2 (en) | 2017-10-31 | 2021-07-27 | Pure Storage, Inc. | Using flash storage devices with different sized erase blocks |
US10884919B2 (en) | 2017-10-31 | 2021-01-05 | Pure Storage, Inc. | Memory management in a storage system |
US10496330B1 (en) | 2017-10-31 | 2019-12-03 | Pure Storage, Inc. | Using flash storage devices with different sized erase blocks |
US10515701B1 (en) | 2017-10-31 | 2019-12-24 | Pure Storage, Inc. | Overlapping raid groups |
US11741003B2 (en) | 2017-11-17 | 2023-08-29 | Pure Storage, Inc. | Write granularity for storage system |
US10860475B1 (en) | 2017-11-17 | 2020-12-08 | Pure Storage, Inc. | Hybrid flash translation layer |
US11275681B1 (en) | 2017-11-17 | 2022-03-15 | Pure Storage, Inc. | Segmented write requests |
US10990566B1 (en) | 2017-11-20 | 2021-04-27 | Pure Storage, Inc. | Persistent file locks in a storage system |
US10705732B1 (en) | 2017-12-08 | 2020-07-07 | Pure Storage, Inc. | Multiple-apartment aware offlining of devices for disruptive and destructive operations |
US10929053B2 (en) | 2017-12-08 | 2021-02-23 | Pure Storage, Inc. | Safe destructive actions on drives |
US10719265B1 (en) | 2017-12-08 | 2020-07-21 | Pure Storage, Inc. | Centralized, quorum-aware handling of device reservation requests in a storage system |
US11782614B1 (en) | 2017-12-21 | 2023-10-10 | Pure Storage, Inc. | Encrypting data to optimize data reduction |
US10929031B2 (en) | 2017-12-21 | 2021-02-23 | Pure Storage, Inc. | Maximizing data reduction in a partially encrypted volume |
US11442645B2 (en) | 2018-01-31 | 2022-09-13 | Pure Storage, Inc. | Distributed storage system expansion mechanism |
US10976948B1 (en) | 2018-01-31 | 2021-04-13 | Pure Storage, Inc. | Cluster expansion mechanism |
US10915813B2 (en) | 2018-01-31 | 2021-02-09 | Pure Storage, Inc. | Search acceleration for artificial intelligence |
US10467527B1 (en) | 2018-01-31 | 2019-11-05 | Pure Storage, Inc. | Method and apparatus for artificial intelligence acceleration |
US11966841B2 (en) | 2018-01-31 | 2024-04-23 | Pure Storage, Inc. | Search acceleration for artificial intelligence |
US11797211B2 (en) | 2018-01-31 | 2023-10-24 | Pure Storage, Inc. | Expanding data structures in a storage system |
US10733053B1 (en) | 2018-01-31 | 2020-08-04 | Pure Storage, Inc. | Disaster recovery for high-bandwidth distributed archives |
US11847013B2 (en) | 2018-02-18 | 2023-12-19 | Pure Storage, Inc. | Readable data determination |
US11494109B1 (en) | 2018-02-22 | 2022-11-08 | Pure Storage, Inc. | Erase block trimming for heterogenous flash memory storage devices |
US11836348B2 (en) | 2018-04-27 | 2023-12-05 | Pure Storage, Inc. | Upgrade for system with differing capacities |
US10931450B1 (en) | 2018-04-27 | 2021-02-23 | Pure Storage, Inc. | Distributed, lock-free 2-phase commit of secret shares using multiple stateless controllers |
US10853146B1 (en) | 2018-04-27 | 2020-12-01 | Pure Storage, Inc. | Efficient data forwarding in a networked device |
US20190362075A1 (en) * | 2018-05-22 | 2019-11-28 | Fortinet, Inc. | Preventing users from accessing infected files by using multiple file storage repositories and a secure data transfer agent logically interposed therebetween |
US11436023B2 (en) | 2018-05-31 | 2022-09-06 | Pure Storage, Inc. | Mechanism for updating host file system and flash translation layer based on underlying NAND technology |
US11438279B2 (en) | 2018-07-23 | 2022-09-06 | Pure Storage, Inc. | Non-disruptive conversion of a clustered service from single-chassis to multi-chassis |
US11846968B2 (en) | 2018-09-06 | 2023-12-19 | Pure Storage, Inc. | Relocation of data for heterogeneous storage systems |
US11868309B2 (en) | 2018-09-06 | 2024-01-09 | Pure Storage, Inc. | Queue management for data relocation |
US11520514B2 (en) | 2018-09-06 | 2022-12-06 | Pure Storage, Inc. | Optimized relocation of data based on data characteristics |
US11500570B2 (en) | 2018-09-06 | 2022-11-15 | Pure Storage, Inc. | Efficient relocation of data utilizing different programming modes |
US11354058B2 (en) | 2018-09-06 | 2022-06-07 | Pure Storage, Inc. | Local relocation of data stored at a storage device of a storage system |
US11036856B2 (en) | 2018-09-16 | 2021-06-15 | Fortinet, Inc. | Natively mounting storage for inspection and sandboxing in the cloud |
US10454498B1 (en) | 2018-10-18 | 2019-10-22 | Pure Storage, Inc. | Fully pipelined hardware engine design for fast and efficient inline lossless data compression |
US10976947B2 (en) | 2018-10-26 | 2021-04-13 | Pure Storage, Inc. | Dynamically selecting segment heights in a heterogeneous RAID group |
US11334254B2 (en) | 2019-03-29 | 2022-05-17 | Pure Storage, Inc. | Reliability based flash page sizing |
US11775189B2 (en) | 2019-04-03 | 2023-10-03 | Pure Storage, Inc. | Segment level heterogeneity |
US11099986B2 (en) | 2019-04-12 | 2021-08-24 | Pure Storage, Inc. | Efficient transfer of memory contents |
US11899582B2 (en) | 2019-04-12 | 2024-02-13 | Pure Storage, Inc. | Efficient memory dump |
US11714572B2 (en) | 2019-06-19 | 2023-08-01 | Pure Storage, Inc. | Optimized data resiliency in a modular storage system |
US11281394B2 (en) | 2019-06-24 | 2022-03-22 | Pure Storage, Inc. | Replication across partitioning schemes in a distributed storage system |
US11822807B2 (en) | 2019-06-24 | 2023-11-21 | Pure Storage, Inc. | Data replication in a storage system |
US11893126B2 (en) | 2019-10-14 | 2024-02-06 | Pure Storage, Inc. | Data deletion for a multi-tenant environment |
US11119703B2 (en) * | 2019-10-29 | 2021-09-14 | EMC IP Holding Company LLC | Utilizing a set of virtual storage units distributed across physical storage units |
US11416144B2 (en) | 2019-12-12 | 2022-08-16 | Pure Storage, Inc. | Dynamic use of segment or zone power loss protection in a flash device |
US11947795B2 (en) | 2019-12-12 | 2024-04-02 | Pure Storage, Inc. | Power loss protection based on write requirements |
US11847331B2 (en) | 2019-12-12 | 2023-12-19 | Pure Storage, Inc. | Budgeting open blocks of a storage unit based on power loss prevention |
US11704192B2 (en) | 2019-12-12 | 2023-07-18 | Pure Storage, Inc. | Budgeting open blocks based on power loss protection |
US11675707B2 (en) | 2020-01-07 | 2023-06-13 | International Business Machines Corporation | Logical to virtual and virtual to physical translation in storage class memory |
US10990537B1 (en) | 2020-01-07 | 2021-04-27 | International Business Machines Corporation | Logical to virtual and virtual to physical translation in storage class memory |
US11656961B2 (en) | 2020-02-28 | 2023-05-23 | Pure Storage, Inc. | Deallocation within a storage system |
US11188432B2 (en) | 2020-02-28 | 2021-11-30 | Pure Storage, Inc. | Data resiliency by partially deallocating data blocks of a storage device |
US11507297B2 (en) | 2020-04-15 | 2022-11-22 | Pure Storage, Inc. | Efficient management of optimal read levels for flash storage systems |
US11256587B2 (en) | 2020-04-17 | 2022-02-22 | Pure Storage, Inc. | Intelligent access to a storage device |
US11416338B2 (en) | 2020-04-24 | 2022-08-16 | Pure Storage, Inc. | Resiliency scheme to enhance storage performance |
US11474986B2 (en) | 2020-04-24 | 2022-10-18 | Pure Storage, Inc. | Utilizing machine learning to streamline telemetry processing of storage media |
US11775491B2 (en) | 2020-04-24 | 2023-10-03 | Pure Storage, Inc. | Machine learning model for storage system |
US11768763B2 (en) | 2020-07-08 | 2023-09-26 | Pure Storage, Inc. | Flash secure erase |
US11513974B2 (en) | 2020-09-08 | 2022-11-29 | Pure Storage, Inc. | Using nonce to control erasure of data blocks of a multi-controller storage system |
US11681448B2 (en) | 2020-09-08 | 2023-06-20 | Pure Storage, Inc. | Multiple device IDs in a multi-fabric module storage system |
US11971828B2 (en) | 2020-11-19 | 2024-04-30 | Pure Storage, Inc. | Logic module for use with encoded instructions |
US11487455B2 (en) | 2020-12-17 | 2022-11-01 | Pure Storage, Inc. | Dynamic block allocation to optimize storage system performance |
US11789626B2 (en) | 2020-12-17 | 2023-10-17 | Pure Storage, Inc. | Optimizing block allocation in a data storage system |
US11847324B2 (en) | 2020-12-31 | 2023-12-19 | Pure Storage, Inc. | Optimizing resiliency groups for data regions of a storage system |
US11614880B2 (en) | 2020-12-31 | 2023-03-28 | Pure Storage, Inc. | Storage system with selectable write paths |
US11630593B2 (en) | 2021-03-12 | 2023-04-18 | Pure Storage, Inc. | Inline flash memory qualification in a storage system |
US11507597B2 (en) | 2021-03-31 | 2022-11-22 | Pure Storage, Inc. | Data replication to meet a recovery point objective |
US11832410B2 (en) | 2021-09-14 | 2023-11-28 | Pure Storage, Inc. | Mechanical energy absorbing bracket apparatus |
Also Published As
Publication number | Publication date |
---|---|
US20050228937A1 (en) | 2005-10-13 |
JP4750040B2 (en) | 2011-08-17 |
US20050114595A1 (en) | 2005-05-26 |
US20050228950A1 (en) | 2005-10-13 |
CN1906569A (en) | 2007-01-31 |
JP2007516523A (en) | 2007-06-21 |
US7689803B2 (en) | 2010-03-30 |
WO2005055043A1 (en) | 2005-06-16 |
EP1687706A1 (en) | 2006-08-09 |
CN100552611C (en) | 2009-10-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20050235132A1 (en) | System and method for dynamic LUN mapping | |
US8819383B1 (en) | Non-disruptive realignment of virtual data | |
US7669032B2 (en) | Host-based virtualization optimizations in storage environments employing off-host storage virtualization | |
US9563469B2 (en) | System and method for storage and deployment of virtual machines in a virtual server environment | |
RU2302034C9 (en) | Multi-protocol data storage device realizing integrated support of file access and block access protocols | |
US9262087B2 (en) | Non-disruptive configuration of a virtualization controller in a data storage system | |
US20090049160A1 (en) | System and Method for Deployment of a Software Image | |
US11860791B2 (en) | Methods for managing input-output operations in zone translation layer architecture and devices thereof | |
US8972657B1 (en) | Managing active—active mapped logical volumes | |
US20100146039A1 (en) | System and Method for Providing Access to a Shared System Image | |
US8984224B2 (en) | Multiple instances of mapping configurations in a storage system or storage appliance | |
US10620843B2 (en) | Methods for managing distributed snapshot for low latency storage and devices thereof | |
US11853234B2 (en) | Techniques for providing access of host-local storage to a programmable network interface component while preventing direct host CPU access | |
US20220012208A1 (en) | Configuring a file server | |
US9641613B2 (en) | Volume hierarchy download in a storage area network | |
US11334441B2 (en) | Distribution of snaps for load balancing data node clusters | |
US11899534B2 (en) | Techniques for providing direct host-based access to backup data using a proxy file system | |
US20230418500A1 (en) | Migration processes utilizing mapping entry timestamps for selection of target logical storage devices | |
US20220188012A1 (en) | Reservation handling in conjunction with switching between storage access protocols |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: VERITAS OPERATING CORPORATION, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KARR, RONALD S.;KONG, CHIO FAI AGLAIA;REEL/FRAME:016714/0663 Effective date: 20050606 |
|
AS | Assignment |
Owner name: SYMANTEC CORPORATION, CALIFORNIA Free format text: CHANGE OF NAME;ASSIGNOR:VERITAS OPERATING CORPORATION;REEL/FRAME:019872/0979 Effective date: 20061030 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
|
AS | Assignment |
Owner name: SYMANTEC OPERATING CORPORATION, CALIFORNIA Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNEE PREVIOUSLY RECORDED ON REEL 019872 FRAME 979. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNEE IS SYMANTEC OPERATING CORPORATION;ASSIGNOR:VERITAS OPERATING CORPORATION;REEL/FRAME:027819/0462 Effective date: 20061030 |