US20150363126A1 - Logical zone mapping - Google Patents
- Publication number
- US20150363126A1 (application US 14/540,721)
- Authority
- US
- United States
- Prior art keywords
- storage
- data
- logical
- zone
- resources
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06F3/0689—Disk arrays, e.g. RAID, JBOD
- G06F3/0619—Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
- G06F1/28—Supervision of power supply, e.g. detecting power-supply failure by out of limits supervision
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3221—Monitoring of peripheral devices of disk drive devices
- G06F1/3287—Power saving characterised by switching off individual functional units in the computer system
- G06F11/2015—Redundant power supplies
- G06F3/0604—Improving or facilitating administration, e.g. storage management
- G06F3/0625—Power saving in storage systems
- G06F3/0634—Configuration or reconfiguration of storage systems by changing the state or mode of one or more devices
- G06F3/0635—Configuration or reconfiguration of storage systems by changing the path, e.g. traffic rerouting, path reconfiguration
- G06F3/0658—Controller construction arrangements
- G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
- G06F3/0665—Virtualisation aspects at area level, e.g. provisioning of virtual or logical volumes
- G06F3/0685—Hybrid storage combining heterogeneous device types, e.g. hierarchical storage, hybrid arrays
- G06F11/2094—Redundant storage or storage space
- G06F2201/805—Real-time
- Y02B70/10—Technologies improving the efficiency by using switched-mode power supplies [SMPS]
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- FIG. 1 illustrates an example mass data storage system with storage resources allocated between multiple logical zones.
- FIG. 2 illustrates another example mass data storage system that implements logical zoning of storage resources.
- FIG. 3 illustrates aspects of another example mass storage system that implements logical zoning of storage resources.
- FIG. 4 illustrates another mass data storage system including an example control system for selecting one or more logical zones to receive data of a write request.
- FIG. 5 illustrates example operations for mapping and selecting logical zones in a mass data storage system.
- FIG. 6 illustrates additional example operations for mapping and selecting logical zones in a mass data storage system.
- Implementations disclosed herein provide for mapping a plurality of storage resources to one or more of multiple logical zones in a storage system. Each of the logical zones is associated with a different storage condition and defines a group of storage resources applying the associated storage condition to data stored therein.
- On-line mass data storage (sometimes referred to as secondary storage) refers to one or more interconnected data storage units that are actively running and available for read/write operations.
- Example on-line mass data storage units include hard disk drives (HDDs), optical drives, and flash memory drives.
- Typically, time-to-data (TTD) for on-line mass data storage units is less than 2 milliseconds.
- On-line mass data storage benefits from very high TTD capabilities, but is expensive to build and operate. More specifically, individual on-line mass data storage units are of high-quality, driving build costs up, and they consume significant power in an on-line state, driving operating costs up.
- Near-line (or near on-line) mass data storage refers to one or more interconnected data storage units that are powered on, but in a low power consumption state and are brought to an on-line state before running read/write operations.
- Hard disk drives, optical drives, and/or flash memory drives may also be used for near-line storage, with the difference being an added mechanism to bring a selected storage unit to an on-line state for read/write operations.
- Example mechanisms include robotic near-line storage (i.e., the system is aware of where a desired data chunk resides on a physical volume and utilizes a robotic mechanism to retrieve the physical volume for read/write operations) and hard drive near-line storage (e.g., a massive array of idle discs (MAID)).
- MAID systems archive data in an array of disc drives that are operating in a standby power state, but most of which are not spinning.
- The MAID system spins up each disc drive on demand when needed to read or write data on a disc within that drive.
- TTD for MAID-type near-line mass data storage units is less than 4 milliseconds.
- Near-line mass data storage systems have lower operating costs than on-line mass data storage systems due to the reduced power demand, but have similar build costs.
- Off-line (or cold) mass data storage refers to one or more interconnected data storage units that are kept in a power off state and/or utilize remotely located storage media to store data.
- Off-line mass data storage utilizes one or more interconnected tape drives, each with numerous tapes associated with the drive.
- For read/write operations, a desired tape is retrieved from its storage location and loaded into its associated drive.
- TTD for off-line tape mass data storage units can be greater than 24 hours. While the build and operating costs of off-line tape mass data storage are low, some applications require a faster access time than 24 hours, but not as fast as on-line or near-line mass data storage systems.
- The disclosed off-line HDD mass data storage systems can achieve a TTD greater than 4 ms but typically faster than that of off-line tape mass data storage, while maintaining build and operating costs competitive with off-line tape mass data storage.
- Storage resources in the disclosed off-line HDD mass data storage systems are classified into logical zones based on a storage condition applied to data stored in each respective zone. The use of logical zones enhances system performance in a variety of ways and also provides a diverse array of storage options for an end user.
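The classification of storage resources into logical zones can be sketched in a few lines. This is a hedged illustration only; the resource records, the condition names, and the `map_to_zones` helper are hypothetical, not taken from the patent.

```python
from collections import defaultdict

def map_to_zones(resources):
    """Group storage resources into logical zones keyed by the storage
    condition each zone applies to data stored in it."""
    zones = defaultdict(list)
    for resource in resources:
        zones[resource["condition"]].append(resource["id"])
    return dict(zones)

# Drives that apply the same storage condition land in the same zone
# (hypothetical ids and condition names for illustration).
resources = [
    {"id": "drive-114", "condition": "high_redundancy"},
    {"id": "drive-116", "condition": "low_latency"},
    {"id": "drive-214", "condition": "high_redundancy"},
]
zones = map_to_zones(resources)
```

Note that the zones here are purely logical: membership is a mapping decision, independent of where a drive physically sits.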
- FIG. 1 illustrates an example mass data storage system 100 with storage resources allocated between multiple logical zones (e.g., logical zones 126 , 136 ) that store data according to a common storage condition.
- The storage system 100 (e.g., a server cluster or farm) is composed of a number of storage racks (e.g., storage racks 102, 104) oriented in adjacent or separate physical locations or facilities (e.g., data rooms or centers).
- For example, a first quantity of storage racks may be located in a first server facility, a second quantity in a second server facility, and so on.
- The server facilities may be separated by any distance (e.g., several feet or many miles).
- The storage system 100 may accommodate any number of storage racks, and each rack may be located in any of a number of server facilities.
- The storage system 100 may accommodate any use of mass data storage (e.g., content delivery, backup, archiving, running scientific simulations such as computational fluid dynamics, and rendering computer-generated imagery, such as a render farm).
- The individual storage racks are interconnected to one another via a computer network 106 (e.g., Gigabit Ethernet or a custom interconnect network). Further, the interconnected storage racks may be connected to one or more external data source(s)/destination(s) 108 via the same computer network 106 or an additional interconnected network (e.g., a local area network or a wide area network, not shown) using a variety of communication protocols (e.g., TCP/IP, packet over SONET/SDH, multiprotocol label switching (MPLS), asynchronous transfer mode (ATM), Ethernet, and frame relay). As a result, data may be moved between the individual storage racks and the external data source(s)/destination(s) 108 as desired.
- Each individual storage rack includes an array of storage media units, each powered by a power supply (e.g., a power supply 164 ) and configured to receive data transfer requests (e.g., read/write requests) from a rack controller (alternatively referred to as a storage rack server or a storage system server).
- Storage rack 102 includes 12 individual storage media units (e.g., storage media unit 110) and power supply 164, all controlled by rack controller 118.
- Storage rack 104 includes 6 individual storage media units (e.g., storage media unit 112 ) and power supply 166 controlled by rack controller 120 .
- Individual storage racks may include greater or fewer storage media units than the depicted 12 and 6 media units per rack.
- Further, some racks may not include a rack controller, and/or an individual rack controller may control multiple racks.
- Each media unit within a storage rack comprises an array of individual storage drives controlled by a same media unit controller.
- For example, the media unit 110 includes 6 individual storage drives (e.g., storage drive 114) that are each read and written to by a media unit controller 122.
- The media unit 112 includes 4 individual storage drives (e.g., storage drive 116) that are each read and written to by a media unit controller 124.
- Individual storage media units may include greater or fewer storage drives than the depicted 6 and 4 drives per media unit.
- In some implementations, the power supply units 164, 166 each power multiple media units (e.g., an entire associated rack, either rack 102 or rack 104).
- In other implementations, each power supply unit powers a single associated media unit.
- An upper end power capability of each individual power supply may determine how many storage drives may be operated simultaneously by that power supply, which may range from a single media unit to multiple media units.
- The individual media units are selectively installed in and uninstalled from the storage rack (e.g., configured as a blade that corresponds to the storage rack's physical configuration).
- In one implementation, the individual storage racks are each subdivided into individual rack units (e.g., 42 rack units), where each media unit is physically dimensioned to fill one rack unit (i.e., 19 inches wide by 1.75 inches tall), and thus each storage rack can accommodate a total of 42 media units.
- In other implementations, the storage rack is physically dimensioned to accommodate any desired number of media units.
- Each storage drive is a distinct storage medium or set of storage media with some or all of the read/write control functions of the storage drive removed to the corresponding media unit controller and/or rack controller of the mass data storage system 100.
- As a result, the media unit controller and/or rack controller can selectively power (e.g., power on, power off, spin up, or spin down) an individual storage drive as desired to read/write its data without having to supply power to that drive continuously.
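The selective powering of individual drives might look like the following sketch, in which a media unit controller powers a drive only for the duration of a request. The class, method, and drive names are hypothetical stand-ins, not the patent's implementation.

```python
class MediaUnitController:
    """Sketch of a controller that holds the read/write control functions
    for its nest of drives and powers each drive only while servicing a
    request (illustrative; names are assumptions)."""

    def __init__(self, drive_ids):
        # Track power state per drive; all drives start powered off.
        self.powered = {d: False for d in drive_ids}

    def execute(self, drive_id, operation):
        """Power on a drive, run a read/write operation, power it off."""
        self.powered[drive_id] = True       # spin up / power on
        try:
            return operation(drive_id)      # perform the read or write
        finally:
            self.powered[drive_id] = False  # power off when done

controller = MediaUnitController(["drive-114", "drive-115"])
result = controller.execute("drive-114", lambda d: f"read:{d}")
```

The `try`/`finally` ensures the drive returns to its unpowered state even if the operation fails, mirroring the goal of never leaving drives drawing power unnecessarily.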
- The individual storage drives in each of the media units have the characteristics of existing state-of-the-art storage drives, except that some or all of the control hardware and software is removed to the corresponding media unit controller and/or rack controller, thereby centralizing control functions of the individual storage drives at the media unit and/or rack level.
- The individual storage drives may utilize any available storage technology (e.g., magnetic storage, optical storage, or semiconductor storage such as flash-based solid state).
- Each of the storage resources in an individual media unit is managed by the same controller. For example, the media unit controller 122 manages and directs read and write operations to each of the six storage resources (e.g., the disk drive 114) in the media unit 110.
- The individual storage drives may have disparate characteristics, and the operation of the mass data storage system 100 may be optimized based on the performance characteristics of the storage drives available within the system 100.
- In one implementation, the individual storage drives within a media unit have disparate performance characteristics, but each media unit has the same performance characteristics (i.e., similar within industry-acceptable tolerances).
- Storage resources in the system 100 are partitioned into a number of logical zones (e.g., logical zones 126, 136) that are each configured to store data according to a same storage condition.
- The storage resources included in each of the illustrated logical zones 126 and 136 are shown in physical proximity to other storage resources in the same logical zone. However, physical proximity is not a requirement of resources in a shared logical zone.
- A logical zone can include storage resources from different media units, racks, and even facilities (e.g., different geographical locations). For example, a zone may include storage resources communicatively coupled via the computer network 106 but located at computer farms in different geographical regions.
- When a data transfer request (e.g., a read or write command) is received by a controller (e.g., one of the rack controllers 118, 120) in the system 100, the controller selects a logical zone to receive data or act as a data source for execution of the data transfer request.
- Some or all of the individual logical zones (e.g., the logical zones 126 and 136) store data according to a storage condition that is different from the storage condition of one or more other logical zones in the system 100.
- a “storage condition” of a logical zone may refer to, for example, a performance characteristic satisfying an operational threshold common to storage resources in a logical zone; a method of data storage (e.g., level of integrity or data security) provided by storage resource(s) in a same logical zone; and/or a degree of data integrity attributable to a structural arrangement of storage resource(s) in a same logical zone. Examples of these and other storage conditions utilized in defining logical zones (e.g., the logical zones 126 , 136 , 140 ) are discussed in greater detail below with respect to FIGS. 2-3 .
- Using logical zones to sort and store incoming data can provide a number of benefits. For example, the likelihood of data loss due to power supply failure can be diminished if storage resources are assigned to logical zones based on a distribution of power resources (e.g., power supply units 164, 166). Further, read and write latencies can be decreased by zoning the storage resources according to common performance characteristics, such as storage capacity, rotational speed, or time-to-data (TTD). Further still, logical zoning can provide a diverse selection of storage options to an end user. For example, a system implementing logical zoning according to the disclosed implementations may allow a user to select a desired type of storage, security protocol, degree of compression, degree of redundancy, speed of accessibility, and more.
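The power-distribution benefit can be illustrated with a small sketch: assign media units to zones so that units sharing a power supply land in different zones. The round-robin policy and all names below are assumptions for illustration, not a method specified in the patent.

```python
def zone_by_power_supply(units, n_zones):
    """Deal media units out to zones so that units sharing a power supply
    unit end up in different zones, spreading each zone across as many
    distinct supplies as possible.

    units: list of (unit_id, power_supply_id) pairs.
    """
    zones = {z: [] for z in range(n_zones)}
    # Sort so units on the same supply are adjacent, then distribute
    # round-robin; adjacent (same-supply) units go to different zones.
    for i, (unit_id, _psu) in enumerate(sorted(units, key=lambda u: u[1])):
        zones[i % n_zones].append(unit_id)
    return zones

# Hypothetical media units on two power supplies (ids echo FIG. 1's
# power supplies 164 and 166 purely for readability).
units = [("unit-1", "psu-164"), ("unit-2", "psu-164"),
         ("unit-3", "psu-166"), ("unit-4", "psu-166")]
zones = zone_by_power_supply(units, 2)
# Each zone now spans both power supplies, so a single supply failure
# takes only part of each zone offline.
```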
- FIG. 2 illustrates another example mass data storage system 200 that implements logical zoning of storage resources for data management.
- the mass data storage system 200 includes multiple racks (e.g., racks 202 , 204 , and 206 ) that each includes a rack controller (e.g., rack controllers 218 , 219 , 220 ) and at least one power supply unit (e.g., power supply units 264 , 265 , 266 ) powering the storage resources on the associated rack.
- The mass data storage system 200 may include any number of racks and media units at one or more physical storage facilities.
- Each of the racks 202 and 204 further includes an array of media units (e.g., media units 210 , 212 ), and each of the media units includes a media unit controller (e.g., a media unit controller 222 ).
- Each of the rack controllers 218 , 219 , and 220 is communicatively coupled to the media unit controllers within the corresponding rack (e.g., the racks 202 , 204 , and 206 respectively), and each of the media unit controllers are communicatively coupled to an associated nest of storage drives (e.g., via compute nodes, serial attached SCSI (SAS) connections, etc.)
- Controllers (e.g., rack controllers and/or media unit controllers) in the mass data storage system 200 can communicate with one another. For example, the rack controller 218 may be able to send and receive data transfer requests to and from the other rack controllers 219 and 220, and to other racks in the system 200, such as those located at different storage facilities.
- Each storage resource (e.g., a storage drive 214 ) in the mass data storage system 200 is assigned to an associated logical zone (e.g., logical zones A-H).
- The storage resources included in each of the illustrated logical zones A-H are shown in physical proximity to other storage resources in the same logical zone (e.g., each zone includes storage resources spanning a same row of media units across the racks 202, 204, and 206). However, physical proximity is not a requirement of resources in a same logical zone.
- For example, one logical zone may include one or more drives within the media unit 210 in a top row of the rack 206, one or more drives within the media unit 212 in a bottom row of the rack 206, and any number of storage resources from other racks in any physical location relative to the media units 210 and 212.
- When a data transfer request is received by a controller (e.g., a rack controller and/or a media unit controller), the controller selects a logical zone and also selects one or more specific storage resources in the selected logical zone on which to execute the data transfer request. The controller then transmits the request along available channels within the identified logical zone to the appropriate media unit controller(s) tasked with managing reads/writes of each of the selected resource(s).
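The two-step selection (pick a zone, then pick resources within it) can be sketched as follows. This is a hedged illustration; the request shape, the zone/resource names, and the `route_request` helper are assumptions, not the patent's API.

```python
def route_request(zones, condition, n_resources):
    """Select the logical zone whose storage condition matches the
    request, then pick specific resources within it to execute on."""
    zone = zones[condition]                  # step 1: select a logical zone
    return zone["resources"][:n_resources]   # step 2: select resources in it

# Hypothetical zone table mapping each zone's storage condition to the
# resources assigned to that zone.
zones = {
    "high_integrity": {"resources": ["drive-A1", "drive-B1", "drive-C1"]},
    "low_latency": {"resources": ["drive-A2"]},
}
# Route a write needing two resources in the high-integrity zone; the
# request would then be forwarded along available channels to the media
# unit controllers managing those resources.
targets = route_request(zones, "high_integrity", 2)
```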
- Each of the different logical zones A-H is configured to store data according to one or more common storage conditions.
- By way of example and not limitation, a number of potential storage conditions are discussed below.
- One example logical zone “storage condition” is a degree of data integrity provided by a structural arrangement of resources in the logical zone. For example, logical zones may be defined so as to distribute available power supply units evenly between the storage resources in each logical zone. This concept is illustrated by the configuration of FIG. 2, wherein logical zone A applies a storage condition in which no more than two media units share a common power supply unit (e.g., power supply units 264, 265, and 266, respectively).
- In this configuration, a write of data to logical zone A entails writing error correction code (ECC) for the data on multiple different resources (e.g., media units on each of the racks 202, 204, 206) within the logical zone A. If one power supply unit of logical zone A fails, some of the storage resources go offline; however, data of the offline resource(s) may be recoverable using the ECC on the other resources in the logical zone that are unaffected by the power failure.
- Another example storage condition applied by a logical zone is a maximum number of storage resources sharing a common field replaceable unit (FRU) (e.g., a media unit).
- ‘FRU’ refers to a group of data resources that are collectively taken “offline” in order to replace an individual storage resource in the group. For example, all six storage drives within the media unit 210 may be taken temporarily offline in order to replace an individual storage drive 216 housed within the media unit 210 . If an individual logical zone includes storage resources that are spread across a relatively large number of FRUs, the system 200 is less likely to be disrupted for data transfer requests occurring while one or more FRUs are offline.
- ECC can be spread across multiple storage resources in a logical zone. If one media unit of logical zone A is ejected to replace a drive, data stored on the offline storage resource(s) may still be recoverable using the ECC stored on the other resources in the logical zone.
- a storage condition applied by a logical zone is a maximum number of storage resources sharing some physical component other than an FRU or power supply (e.g., a fan). This storage condition mitigates system performance degradation in the event that the shared component fails.
- the storage condition(s) applied within each of the logical zones A-H are not necessarily conditions attributable to structure or distribution of storage resources in a zone.
- different data management modules may be associated with the different logical zones (e.g., logical zones A-H), and the different data management modules may execute a same command type (e.g., a write command, a read command, an erase command) according to a different set of processing operations.
- a data management module of logical zone A may write data according to a first set of processing operations while a data management module of logical zone B writes data according to a second, different set of processing operations.
- the storage condition of a logical zone is an outcome of a processing operation applied to data stored within the logical zone. A few examples of this are provided below.
- the storage condition of a logical zone is a degree of data redundancy with which data is saved in the logical zone.
- a data management module of a first logical zone may implement a non-existent or a low level of redundancy (e.g., redundant array of independent discs (RAID) 0-3), while a data management module of a second logical zone implements a medium level of redundancy (e.g., RAID 4-6), and a data management module of a third logical zone implements a high degree of redundancy (e.g., RAID 7-10).
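- The three-tier grouping above can be expressed as a small lookup. This is a minimal sketch assuming the example groupings stated in the text (RAID 0-3 low, RAID 4-6 medium, RAID 7-10 high); the tier names and function are illustrative, not part of the disclosure.

```python
# Redundancy tier (one per logical zone) -> RAID levels applying it.
REDUNDANCY_TIERS = {
    "low":    {0, 1, 2, 3},     # first zone: little or no redundancy
    "medium": {4, 5, 6},        # second zone: parity-based redundancy
    "high":   {7, 8, 9, 10},    # third zone: high redundancy
}

def tier_for_raid_level(raid_level):
    """Map a RAID level to the redundancy tier (logical zone) applying it."""
    for tier, levels in REDUNDANCY_TIERS.items():
        if raid_level in levels:
            return tier
    raise ValueError(f"unknown RAID level: {raid_level}")

assert tier_for_raid_level(5) == "medium"
assert tier_for_raid_level(10) == "high"
```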
- a storage condition of a logical zone is an error correction code (ECC) rate utilized for writing data in each logical zone.
- the data management module of logical zone A may write data according to a different ECC rate than that utilized by a data management module of zones B, C, D, etc.
- a storage condition of a logical zone is a degree or type of encryption applied to data stored in the zone.
- data management modules of various logical zones may each write data with a varying degree of encryption.
- a high-security encryption code is applied to data stored in a first logical zone; a medium security encryption code is applied to data stored in a second logical zone; a low security encryption code is applied to data stored in a third logical zone, etc.
- Yet another example storage condition of a logical zone is a read/write priority that is associated with each read/write request to the logical zone. For example, read/write operations in a low quality of service (QOS) logical zone may only occur during idle time, while read/write operations in a high QOS logical zone may interrupt any current read/write operations being performed.
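- The QOS scheduling behavior described above can be sketched as a small priority queue. This is an illustrative sketch under the stated assumptions (low-QOS work runs only at idle; high-QOS work runs anytime); the class and names are hypothetical, not from the disclosure.

```python
import heapq

HIGH, LOW = 0, 1  # lower number = served first

class QosQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tiebreaker preserving arrival order

    def submit(self, priority, request):
        heapq.heappush(self._heap, (priority, self._seq, request))
        self._seq += 1

    def next_request(self, idle):
        """High-QOS work is dispatched anytime; low-QOS work only when idle."""
        if not self._heap:
            return None
        priority, _, request = self._heap[0]
        if priority == LOW and not idle:
            return None
        heapq.heappop(self._heap)
        return request

q = QosQueue()
q.submit(LOW, "scrub zone H")
q.submit(HIGH, "read zone G")
assert q.next_request(idle=False) == "read zone G"   # high QOS served first
assert q.next_request(idle=False) is None            # low QOS waits for idle
assert q.next_request(idle=True) == "scrub zone H"
```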
- a “storage condition” is a storage resource performance characteristic satisfying an operation threshold.
- Example performance characteristics include without limitation storage capacity, rotational speed of a storage resource (e.g., a disk), time-to-data (TTD), and storage resource cost. These, and other, performance characteristics are discussed in greater detail below:
- a storage condition of a logical zone is a same or similar storage capacity shared by storage resources in the logical zone.
- 4 terabyte drives have the capability of storing at least 4 terabytes of data and are formatted to store 4 terabytes of data. Drives that meet this threshold are referred to herein as having the same or similar storage capacity. Drives that do not have the capability of storing 4 terabytes of data and/or drives that are formatted to store a different quantity of data are referred to herein as having disparate storage capacity.
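- The capacity condition above amounts to a two-part check, sketched below. This is a minimal, hypothetical sketch: the 4 terabyte figure comes from the example in the text, while the field names are illustrative.

```python
TB = 10**12

def same_capacity(drive, zone_capacity=4 * TB):
    """True if the drive both can store the zone capacity and is
    formatted to exactly that capacity (the "same or similar" condition)."""
    return (drive["raw_capacity"] >= zone_capacity
            and drive["formatted_capacity"] == zone_capacity)

assert same_capacity({"raw_capacity": 4 * TB, "formatted_capacity": 4 * TB})
# Capable of more, but formatted to a different quantity -> disparate:
assert not same_capacity({"raw_capacity": 5 * TB, "formatted_capacity": 5 * TB})
```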
- media units with a high storage capacity are grouped into a high capacity logical zone, while media units with a particularly low storage capacity (or low data compression) are grouped into a low capacity logical zone.
- Zoning storage resources according to storage capacity allows for a uniform organization of metadata and increased disc capacity utilization.
- all storage resources in a same logical zone share the same or similar rotational speeds (e.g., another example performance characteristic).
- a 7,200 RPM storage drive varies from 7,200 RPM by no more than 1% during read/write operations.
- Drives that meet this operating limitation are referred to herein as having the same or similar rotational speeds.
- Drives that fail to meet this operating limitation are referred to herein as having disparate rotational speeds.
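- The rotational-speed condition is a simple tolerance check, sketched below using the 1% figure from the example above. The function name and defaults are illustrative.

```python
def similar_speed(observed_rpm, nominal_rpm=7200, tolerance=0.01):
    """True if the drive varies from its nominal RPM by no more than
    `tolerance` (1% in the example) during read/write operations."""
    return abs(observed_rpm - nominal_rpm) <= nominal_rpm * tolerance

assert similar_speed(7250)       # within 1% of 7,200 RPM
assert not similar_speed(7000)   # disparate rotational speed
```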
- media units with a particularly high read/write speed are grouped into a high speed zone.
- media units with a particularly low read/write speed may be grouped into a low speed zone.
- storage resources in a same logical zone are of the same or similar cost. For example, higher-cost (e.g., more reliable) storage resources may be assigned to a first zone, while lower-cost (e.g., less reliable) storage resources may be assigned to another zone.
- different logical zones include randomly writeable data units of a same or similar capacity.
- the storage drive 216 may be a shingled magnetic recording disk drive including multiple shingled data bands that are each assigned to one of multiple different logical zones.
- shingled data band refers to a grouping of adjacent data tracks written as a single unit.
- An update to one or more cells within a shingled data band includes re-writing the entire data band, including both changed and unchanged cells.
- logical zones are used to partition different types of data.
- logical zone A may store medical records from the state of Colorado;
- logical zone B may store uploaded user media files, etc.
- selection of a logical zone for execution of a data write request is based on a storage characteristic of incoming data, or based on a storage characteristic specified in association with the data. Example storage characteristics are discussed in greater detail with respect to FIG. 3 .
- FIG. 3 illustrates aspects of another example storage system 300 that implements logical zoning of storage resources.
- the storage system 300 includes a zone manager 302 that receives data transfer requests from a computer network 306 . Responsive to each data transfer request, the zone manager 302 selects a logical zone of the mass data storage system 300 on which to execute the data transfer request.
- the zone manager 302 may include hardware and/or software implemented via any tangible computer-readable storage media within or communicatively coupled to the mass data storage system.
- tangible computer-readable storage media includes, but is not limited to, random access memory (“RAM”), ROM, EEPROM, flash memory or other memory technology, CDROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other tangible medium which can be used to store the desired information and which can be accessed by a mobile device or computer.
- intangible computer-readable communication signals may embody computer readable instructions, data structures, program modules or other data resident in a modulated data signal, such as a carrier wave or other signal transport mechanism.
- some or all functionality of the zone manager 302 may be implemented in a rack controller (e.g., a rack controller 118, 120 of FIG. 1) of a mass storage system, one or more media unit controllers (e.g., a media unit controller 124 of FIG. 1), or other computing node(s) communicatively coupled to processing resources capable of initiating reads and writes within the storage system 300.
- the storage system 300 includes a massive network of controllers (e.g., rack controllers, media unit controllers) that manage read and write operations to multiple different storage resources with disparate performance characteristics.
- Storage resources in the storage system 300 are divided into a number of logical zones. Each logical zone stores data according to a storage condition that is different from a storage condition of one or more other logical zones in the storage system 300 .
- FIG. 3 generally illustrates eight logical zones (A-H), but any number of such zones is contemplated.
- the zone manager 302 selects an associated logical zone for execution of the write request.
- the zone manager 302 selects a logical zone and one or more resources within the selected logical zone to receive the data.
- the zone manager 302 then directs the read/write request to one or more appropriate controllers with read/write authority to the selected resources.
- the zone manager 302 selects a logical zone but does not specifically select which resources in the logical zone may receive data of the write request. In this case, the zone manager 302 may simply forward the data write request to another controller, such as a zone-level controller specifically designated for selecting resources to receive incoming data.
- the selection of a logical zone for execution of a data write request is based on a storage characteristic associated with the request. More specifically, the storage characteristic may be identified: (1) based on information implicit in the request, such as the size of a data write; (2) based on information specified by a user initiating the write request (e.g., within the request or in association with the request); (3) based on settings of a user subscription to storage on the storage system 300 ; or (4) based on a source of a particular data transfer request (e.g., an IP address, geographical region, etc.).
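- The four identification sources enumerated above can be sketched as a lookup cascade. This is a hypothetical sketch only: the dictionary field names, the precedence among the sources, and the fallback are all illustrative assumptions, not specified by the disclosure.

```python
def storage_characteristic(request, subscriptions, source_policies):
    """Derive the storage characteristic used for zone selection."""
    # (2) specified by the user within or in association with the request
    if "characteristic" in request:
        return request["characteristic"]
    # (3) settings of the user's subscription to storage on the system
    plan = subscriptions.get(request.get("user"))
    if plan:
        return plan
    # (4) source of the request (e.g., an IP address or region)
    policy = source_policies.get(request.get("source_ip"))
    if policy:
        return policy
    # (1) information implicit in the request, such as write size
    return {"write_size": len(request.get("data", b""))}

subs = {"alice": {"redundancy": "high"}}
policies = {"10.0.0.1": {"zone": "B"}}
assert storage_characteristic({"user": "alice"}, subs, policies) == {"redundancy": "high"}
assert storage_characteristic({"data": b"abc"}, {}, {}) == {"write_size": 3}
```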
- a user may subscribe to storage on the system 300 (e.g., cloud-based storage) and elect, either by indicating preference or selecting a particular subscription plan, one or more storage characteristics that the zone manager 302 associates with data from that user.
- a group of users such as those affiliated with a business (e.g., a shared IP address) may share a subscription.
- certain logical zones may not be available for requests arriving from certain users, IP addresses, etc.
- the zone manager 302 selects a logical zone for a write request based on a size of data to be written (e.g., one example storage characteristic). If, for example, the various logical zones group together shingled data bands of the same or similar size, the zone manager 302 may select a logical zone to receive data that includes shingled data bands of capacity comparable to (e.g., equal to or just slightly larger than) the size of the data.
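- This best-fit selection can be sketched as follows. The zone names and band sizes are hypothetical; the rule implemented is the one stated above (smallest band capacity equal to or just larger than the data size).

```python
# Hypothetical zones grouping shingled data bands of like capacity.
BAND_SIZE_BY_ZONE = {"A": 64 * 2**20, "B": 256 * 2**20, "C": 1 * 2**30}

def zone_for_write(data_size):
    """Pick the zone whose band capacity is the smallest that still
    holds the incoming write."""
    candidates = [(size, zone) for zone, size in BAND_SIZE_BY_ZONE.items()
                  if size >= data_size]
    if not candidates:
        raise ValueError("write larger than any zone's band size")
    return min(candidates)[1]

assert zone_for_write(50 * 2**20) == "A"    # fits a 64 MiB band
assert zone_for_write(300 * 2**20) == "C"   # needs a 1 GiB band
```

Sizing writes to comparable bands matters for shingled media because updating any part of a band entails rewriting the whole band.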
- the zone manager 302 selects a logical zone for a write operation based on a specified storage characteristic.
- the storage characteristic may be specified by a user or in association with a user's subscription, profile, location, etc.
- a user may specify or subscribe to a desired level of data integrity (e.g., redundancy) with which to save the data of the write request.
- the data is then directed to a logical zone that applies a corresponding degree of redundancy to the data stored therein.
- high integrity data is directed to a “high redundancy” logical zone (e.g., RAID 7-10); medium integrity data is directed to a “medium redundancy” logical zone (e.g., RAID 4-6); and low integrity data is directed to a “low redundancy” logical zone (e.g., RAID 0-3).
- the storage characteristic used to select a logical zone is a specified type of error correction code (e.g., LDPC, modulation codes, etc.).
- the data management system 300 directs the write request to a logical zone applying the requested type of error correction code to data stored therein.
- the zone manager 302 selects a logical zone for a write operation based on a specified frequency or a priority with which associated data is to be accessed. For example, data that needs to be frequently and/or quickly accessed may be directed to storage resources in a high-speed logical zone, while data that is infrequently accessed or not urgent when accessed is directed to storage resources in a low speed zone. Similarly, data considered high priority may be written to a higher QOS zone while data considered lower priority may be written to a lower QOS zone.
- the zone manager 302 selects a logical zone for a write operation based on a specified degree or type of security (e.g., data encryption).
- the write request may specify a level of desired security (e.g., high security, low security).
- Data that is particularly sensitive or confidential may be specified as “high security” and stored in a high security logical zone (e.g., a zone applying a sophisticated encryption scheme), while data that is neither particularly sensitive nor confidential can be specified as “low security” and stored in a low security logical zone.
- a “high-security” logical zone may be, for example, a zone that utilizes advanced encryption techniques, such as techniques that allow for secure erasure and removal of the data.
- the logical zones may overlap, providing different logical zone assignments for storage resources based on one or more performance characteristics associated with a particular data set to be stored therein.
- Logical Zone A may be a “fast-access” zone;
- Logical Zone G may be a “high QOS priority” zone; and
- Logical Zone H may be a “low QOS priority” zone.
- a storage resource with a short TTD could belong to Logical Zone A and also belong to one of Logical Zones G or H.
- Higher-performing logical zones may have a higher cost associated with them that may be passed along to a user of the mass data storage system 300 in the form of a surcharge.
- lower-performing logical zones may have a lower cost associated with them that is also passed along to the user of the mass data storage system in the form of a discount.
- incoming data is assigned to a logical zone based on an attribute of the data identified by the zone manager 302 .
- the zone manager 302 may be programmed to recognize that incoming data is a medical record, or that the medical record geographically originates from Colorado. Colorado medical records may all be directed to a same logical zone.
- the zone manager 302 assigns data to a logical zone by identifying a data type, geographic source, author or originator, security, confidentiality level, character encoding, a checksum, a cryptographic hash, a digital signature, etc.
- the zone manager 302 appends the incoming data with an extended file attribute describing characteristics of the data.
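- The attribute-based routing described above (e.g., Colorado medical records all landing in one zone) can be sketched as a rule table. The rules, attribute names, and default zone here are illustrative assumptions, not part of the disclosure.

```python
ROUTING_RULES = [
    # (predicate over identified data attributes, target logical zone)
    (lambda a: a.get("type") == "medical_record" and a.get("origin") == "CO", "A"),
    (lambda a: a.get("type") == "user_media", "B"),
]

def assign_zone(attributes, default_zone="H"):
    """Route data to the first zone whose rule matches its attributes."""
    for predicate, zone in ROUTING_RULES:
        if predicate(attributes):
            return zone
    return default_zone

record = {"type": "medical_record", "origin": "CO"}
assert assign_zone(record) == "A"               # Colorado medical records
assert assign_zone({"type": "user_media"}) == "B"
```

The matched attributes could then be appended to the data as an extended file attribute, as the text above describes.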
- FIG. 4 illustrates another mass data storage system 400 including an example control system 402 for selecting one or more logical zones to receive data of a write request.
- aspects of the control system 402 may be contained within a rack controller (e.g., rack controller 118 , 120 of FIG. 1 ) and/or a media unit controller (e.g., media unit controller 124 of FIG. 1 ) associated with each individual storage drive (e.g., a storage drive 214 of FIG. 2 ) of the mass data storage system 400 .
- aspects of the mass data storage system 400 may span multiple racks and/or geographic locations.
- the control system 402 includes a storage interface 440 that allows the mass data storage system to receive incoming data from external data source(s) over a computer network 406 and send outgoing data to external data destination(s) (see, e.g., external data source(s) and destination(s) 108 of FIG. 1) that may have disparate operating systems operating over one or more computer networks (see, e.g., computer network 106 of FIG. 1).
- the control system 402 further includes a power manager 452 that queries computing nodes (e.g., controllers) to discover power information and create a power map 456 detailing physical distribution of storage resources and power availability within the storage system 400 .
- a zone manager 442 discovers performance characteristics of various system storage resources by querying system controllers (e.g., other rack controllers and/or media unit controllers) for such information. Additionally, the zone manager 442 works in conjunction with a read/write manager 446 to identify available types of data management schemes (e.g., encryption, compression, etc.). Using the discovered performance characteristics, power map 456 , and/or information from the read/write manager 446 , the zone manager 442 generates a zone map 444 that groups the various storage resources of the data storage system 400 into logical zones.
- zone manager 442 selects one or more logical zones for each storage resource based on performance features that are common between different storage resources (e.g., storage capacity, access speed, drive reliability, etc.). In another implementation, the zone manager 442 selects one or more logical zones for each storage resource according to a scheme that ensures a particular distribution of storage drives between various power resources and/or across various FRUs in the storage system 400 . In still other implementations, the zone manager 442 groups one or more storage resources into logical zones based on instructions from the data read/write manager 446 .
- the read/write manager 446 may identify a number of resources to apply a first data management scheme (e.g., a sophisticated encryption scheme) and a number of resources to apply a different data management scheme (e.g., a low-level encryption scheme).
- the zone manager 442 may re-generate the zone map 444 to take away or add storage capacity to one or more logical zones in the system. If, for example, storage resources in one logical zone are taken offline (e.g., to replace a drive or other component), the zone manager may elastically re-allocate additional storage resources to that zone to keep zone capacity the same. This reallocation may occur on a time schedule for re-evaluating the storage needs of the mass data storage system or on-demand as the storage needs of the mass data storage system change.
- the zone manager 442 may consult an elastic capacity library 450 including a master listing of storage resources that are either temporarily allocated to a particular zone or available for reallocation. Using such information, the zone manager 442 can dynamically reallocate storage resources between logical zones and dynamically increase logical zone capacity as needed.
- the zone manager 442 may determine that incoming data is to be stored in a first logical zone that does not have sufficient available storage space due to the fact that one or more storage drives are currently offline (e.g., for maintenance). In such case, the zone manager 442 may consult the elastic capacity library 450 to identify a storage location where the incoming data can be temporarily stored according to the storage condition of the first logical zone. Once the offline storage resources become available in the first logical zone, the data can be moved from the temporary storage location to a more permanent storage location within the first logical zone.
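- The elastic reallocation described above can be sketched as lending spares from a capacity pool to a zone whose resources are offline. This is a minimal sketch; the function and the "spare pool" standing in for the elastic capacity library 450 are illustrative assumptions.

```python
def rebalance(zone_resources, offline, spare_pool, target_count):
    """Lend spares to a zone so its online resource count stays constant
    while some of its resources are offline (e.g., for maintenance)."""
    online = [r for r in zone_resources if r not in offline]
    while len(online) < target_count and spare_pool:
        online.append(spare_pool.pop())
    return online

zone = ["d1", "d2", "d3", "d4"]
spares = ["s1", "s2"]
online = rebalance(zone, offline={"d2"}, spare_pool=spares, target_count=4)
assert len(online) == 4 and "s2" in online   # zone capacity held constant
```

When the offline resource returns, data written to the borrowed spare would be migrated back and the spare released to the pool.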
- the zone manager 442 works in conjunction with the elastic capacity library 450 to permanently increase a storage capacity of a given logical zone.
- the demand for storage capacity in a logical zone may, at some point, exceed the available storage capacity in that logical zone.
- the zone manager 442 can simply re-allocate unused storage capacity of one logical zone to another logical zone that is in need of increased storage capacity.
- the zone manager 442 tracks health and wear-leveling of various storage resources within each zone, anticipates the failures of the various storage resources, adds capacity to each zone as needed (e.g., via the elastic capacity library 450 ), and automatically moves data between the old (e.g., failing) storage resources and the storage resources newly allocated to the logical zone.
- a consumer leases components of a storage system implementing the control system 402.
- the user may not initially have write access to all of the storage resources in the storage system. For example, some storage resources may be in an offline (powered off) state until remotely activated. If the user upgrades a subscription plan, the zone manager 442 may be instructed to elastically allocate some of the previously inaccessible resources to one or more of the logical zones.
- logical zones in the storage system 400 overlap, and a storage resource may be assigned to two or more logical zones.
- a solid state device (SSD) may belong to a “high access speed” logical zone exclusively including high speed devices and also belong to a “high redundancy” logical zone including both high-speed and low-speed devices.
- the zone manager 442 may be an object manager or a file system manager and may incorporate a proprietary interface, such as a block layer interface 454.
- the zone manager 442 is communicatively coupled, through the block layer interface 454 , to a plurality of storage nodes (e.g., rack controllers, media unit controllers, etc.) within the mass storage system 400 .
- Communication channels may allow for bidirectional data flow between all storage nodes in the mass data storage system 400 .
- the zone manager 442 may be communicatively coupled to a plurality of different rack controllers; each rack controller may be communicatively coupled to media unit controllers within the corresponding rack; and each of the media unit controllers may be communicatively coupled to an associated nest of internal storage drives.
- the zone manager 442 consults with a shingled magnetic recording (SMR) manager 448 to write the incoming data to an appropriate shingled logical zone. More specifically, the zone manager 442 consults the zone map 444 to select appropriate available storage space for the incoming data. The zone manager 442 instructs the SMR manager 448 to write the incoming data to the selected logical zone (and/or specific selected storage resources) within the storage system 400.
- FIG. 5 illustrates example operations 500 for mapping and selecting logical zones in a mass data storage system.
- a mapping operation 505 maps different storage resources to different logical zones in the storage system.
- Each of the logical zones is defined based on a storage condition applied to data stored therein.
- the storage condition may be, for example, a performance characteristic satisfying an operational threshold common to storage resources in a logical zone; a method of data storage provided by storage resource(s) in a same logical zone; and/or a degree of data integrity attributable to a structural arrangement of storage resource(s) in a same logical zone.
- a receiving operation 510 receives an incoming data transfer request (e.g., a read or write request).
- a determination operation 515 determines whether the data transfer request is a write request. If the data transfer request is a write request, a selection operation 520 selects one of the logical zones to receive the data based on a storage characteristic associated with the write request that satisfies the storage condition of the selected logical zone. For example, a logical zone may have a storage condition prohibiting writes of new data in excess of a threshold data size. If the incoming data has a size less than or equal to the threshold data size, the storage condition is satisfied.
- a storage condition is an outcome of a processing operation performed on data stored within a specific zone.
- the storage condition may be a level of security, redundancy, compression, type of encryption, etc.
- Such storage condition(s) may be satisfied, for example, if a user specifies a storage characteristic that matches the storage condition.
- the user may select the storage characteristic (e.g., a desired redundancy) specifically within a read/write request, via a subscription service, a profile election, etc.
- a storage condition is a performance characteristic of storage resources in a given zone.
- the storage resources may have a set capacity, be of a set type (disk, SSD, etc.), have a same or similar TTD, etc.
- Such storage condition(s) may be satisfied, for example, if a user specifies a storage characteristic that matches or otherwise satisfies the storage condition. For example, the user may select a desired TTD that falls within a range of TTDs provided by a particular logical zone.
- a selection and determination operation 525 selects a logical zone that includes the requested data and determines read parameters for the read operation based on the storage condition(s) applied within the selected logical zone. For example, selecting the logical zone including the requested data may entail accessing one or more mapping tables that associate parameters of the incoming read command with physical storage location(s) within the storage system.
- Storage conditions of the selected logical zone may influence read parameters of the read command. For example, a particular ECC rate may be utilized if the data is stored in a logical zone applying a specific ECC rate. Similarly, the read operation may be postponed until an idle time if the data is stored in a logical zone associated with a low QOS storage condition. Further still, the number of resources from which the data is retrieved may vary based on a redundancy associated with the logical zone where the data is stored. These read parameters are meant to be explanatory and do not by any means contemplate all storage conditions that may affect read parameters related to execution of the read command.
- FIG. 6 illustrates additional example operations 600 for mapping and selecting logical zones in a mass data storage system.
- a mapping operation 605 maps different storage resources to different logical zones in the storage system. Each of the different logical zones is defined to have an independent power source for a set number of storage resources.
- a selection operation 610 selects multiple storage resources within a logical zone to receive data of a write request. For example, an independent power source may power no more than 1/4 of the storage resources in any one zone. If ECC for incoming data of a write request is written to 1/2 of the storage resources in a logical zone, then the ECC is guaranteed to be accessible on at least one storage resource even if one of the power sources fails. This decreases a risk of system performance degradation in the event of a power source failure.
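- The arithmetic behind this guarantee can be checked directly: a single power source failure can take at most the fraction of resources it powers offline, so ECC survives whenever the fraction of resources holding ECC exceeds that. The sketch below is illustrative, restating the 1/4 and 1/2 figures from the example above.

```python
from fractions import Fraction

def ecc_survives_failure(psu_fraction, ecc_fraction):
    """ECC survives any single power source failure if ECC copies span
    more resources than one power source can take offline."""
    return ecc_fraction > psu_fraction

# One supply powers at most 1/4 of the zone; ECC spans 1/2 of the zone.
assert ecc_survives_failure(Fraction(1, 4), Fraction(1, 2))
# If ECC spanned only 1/4, one failure could take every copy offline:
assert not ecc_survives_failure(Fraction(1, 4), Fraction(1, 4))
```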
- the embodiments of the disclosed technology described herein are implemented as logical steps in one or more computer systems.
- the logical operations of the presently disclosed technology are implemented (1) as a sequence of processor-implemented steps executing in one or more computer systems and (2) as interconnected machine or circuit modules within one or more computer systems.
- the implementation is a matter of choice, dependent on the performance requirements of the computer system implementing the disclosed technology. Accordingly, the logical operations making up the embodiments of the disclosed technology described herein are referred to variously as operations, steps, objects, or modules.
- logical operations may be performed in any order, adding and omitting as desired, unless explicitly claimed otherwise or a specific order is inherently necessitated by the claim language.
Abstract
A mass data storage system includes a plurality of communicatively coupled storage resources that are each controlled by one of a plurality of distinct storage controllers. Each of the storage resources is mapped to one or more logical zones, and the logical zones each apply an associated storage condition to data stored therein.
Description
- The present application claims benefit of priority to U.S. Provisional Patent Application No. 62/012,205 entitled “Off-line/On-line Mass Data Storage Management” and filed on Jun. 13, 2014, and also claims benefit of priority to U.S. Provisional Patent Application No. 62/012,219 entitled “Off-line/On-line Mass Data Storage System” and filed on Jun. 13, 2014. Both of these applications are specifically incorporated by reference for all that they disclose or teach.
FIG. 1 illustrates an example mass data storage system with storage resources allocated between multiple logical zones.
FIG. 2 illustrates another example mass data storage system that implements logical zoning of storage resources.
FIG. 3 illustrates aspects of another example mass storage system that implements logical zoning of storage resources.
FIG. 4 illustrates another mass data storage system including an example control system for selecting one or more logical zones to receive data of a write request.
FIG. 5 illustrates example operations for mapping and selecting logical zones in a mass data storage system.
FIG. 6 illustrates additional example operations for mapping and selecting logical zones in a mass data storage system.
- Implementations disclosed herein provide for mapping a plurality of storage resources to one or more of multiple logical zones in a storage system. Each of the logical zones is associated with a different storage condition and defines a group of storage resources applying the associated storage condition to data stored therein.
- This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. These and various other features and advantages will be apparent from a reading of the following Detailed Description.
- On-line mass data storage (sometimes referred to as secondary storage) refers to one or more interconnected data storage units that are actively running and available for read/write operations. Example on-line mass data storage units include hard disk drives (HDDs), optical drives, and flash memory drives. Typically, time-to-data (TTD) for on-line mass data storage units is less than 2 milliseconds. On-line mass data storage benefits from very fast TTD, but is expensive to build and operate. More specifically, individual on-line mass data storage units are of high quality, driving build costs up, and they consume significant power in an on-line state, driving operating costs up.
- Near-line (or near on-line) mass data storage refers to one or more interconnected data storage units that are powered on, but in a low power consumption state and are brought to an on-line state before running read/write operations. Hard disk drives, optical drives, and/or flash memory drives may also be used for near-line storage, with the difference being an added mechanism to bring a selected storage unit to an on-line state for read/write operations. Such example mechanisms are robotic near-line storage (i.e., the system is aware of where a desired data chunk resides on a physical volume and utilizes a robotic mechanism to retrieve the physical volume for read/write operations) and hard drive near-line storage (e.g., massive array of idle discs (MAID)). MAID systems archive data in an array of disc drives that are operating in a standby power state, but most of which are not spinning. The MAID system spins up each disc drive on demand when desired to perform read/write data on a disc within that drive. Typically, TTD for MAID-type near-line mass data storage units is less than 4 milliseconds. Near-line mass data storage systems have lower operating costs than on-line mass data storage systems due to the reduced power demand, but have similar build costs.
- Off-line (or cold) mass data storage refers to one or more interconnected data storage units that are kept in a power off state and/or utilize remotely located storage media to store data. Typically, off-line mass data storage utilizes one or more interconnected tape drives, each with numerous tapes associated with the drive. As discussed above with regard to robotic near-line storage, a desired tape is retrieved from its storage location and loaded into its associated drive for read/write operations. In off-line tape mass data storage units, the desired tape is often manually retrieved and loaded, and as a result TTD for off-line tape mass data storage units can be greater than 24 hours. While the build and operating costs of off-line tape mass data storage are low, some applications require a faster access time than 24 hours, but not as fast as on-line or near-line mass data storage systems.
- The disclosed off-line HDD mass data storage systems can achieve a TTD greater than 4 ms yet substantially faster than that of off-line tape mass data storage, while maintaining build and operating costs competitive with off-line tape mass data storage. According to one implementation, storage resources in the disclosed off-line HDD mass data storage systems are classified into logical zones based on a storage condition applied to data stored in each respective zone. The use of logical zones enhances system performance in a variety of ways and also provides a diverse array of storage options for an end user.
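As a rough illustration, the representative TTD figures quoted above can be expressed as a simple tier classifier. This is a sketch only; the function name and thresholds are illustrative values drawn from the examples in this section, not limits defined by the disclosure.

```python
def storage_tier(ttd_ms):
    """Map a time-to-data requirement (in milliseconds) to a storage tier,
    using the representative figures quoted in this section."""
    if ttd_ms < 2:
        return "on-line"                 # typically < 2 ms
    if ttd_ms < 4:
        return "near-line"               # typically < 4 ms (e.g., MAID)
    if ttd_ms < 24 * 60 * 60 * 1000:
        return "off-line HDD"            # > 4 ms, well under 24 hours
    return "off-line tape"               # retrieval can exceed 24 hours
```

For example, a 1 ms access requirement maps to on-line storage, while a tolerance of several hours maps to the disclosed off-line HDD tier.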
- FIG. 1 illustrates an example mass data storage system 100 with storage resources allocated between multiple logical zones (e.g., logical zones 126, 136) that store data according to a common storage condition. The storage system 100 (e.g., a server cluster or farm) is comprised of a number of storage racks (e.g., storage racks 102, 104) oriented in adjacent or separate physical locations or facilities (e.g., data rooms or centers). In some implementations, a first quantity of storage racks is located in a first server facility, a second quantity of storage racks is located in a second server facility, and so on. The server facilities may be separated by any distance (e.g., several feet or many miles). The storage system 100 may accommodate any number of storage racks and each rack is located in one of any number of server facilities. The storage system 100 may accommodate any use of mass data storage (e.g., content delivery, backup, archiving, running scientific simulations such as computational fluid dynamics, and rendering computer generated imagery, such as a render farm). - The individual storage racks are interconnected to one another via a computer network 106 (e.g., Gigabit Ethernet or a custom interconnect network). Further, the interconnected storage racks may be connected to one or more external data source(s)/destination(s) 108 via the
same computer network 106 or an additional interconnected network (e.g., a local area network or a wide area network, not shown) using a variety of communication protocols (e.g., TCP/IP, packet over SONET/SDH, multiprotocol label switching (MPLS), asynchronous transfer mode (ATM), Ethernet, and frame relay). As a result, data may be moved between the individual storage racks and the external data source(s)/destination(s) 108 as desired. - Each individual storage rack includes an array of storage media units, each powered by a power supply (e.g., a power supply 164) and configured to receive data transfer requests (e.g., read/write requests) from a rack controller (alternatively referred to as a storage rack server or a storage system server). For example,
storage rack 102 includes 12 individual storage media units (e.g., storage media unit 110) and power supply 164 controlled by rack controller 118. Storage rack 104 includes 6 individual storage media units (e.g., storage media unit 112) and power supply 166 controlled by rack controller 120. In some implementations, individual storage racks may include greater or fewer individual storage media units than the depicted 12 and 6 storage media units per storage rack. In other implementations, some racks may not include a rack controller and/or an individual rack controller may control multiple racks. - Each media unit within a storage rack comprises an array of individual storage drives controlled by a same media unit controller. For example, the
media unit 110 includes 6 individual storage drives (e.g., storage drive 114) that are each read and written to by a media unit controller 122. The media unit 112 includes 4 individual storage drives (e.g., storage drive 116) that are each read and written to by a media unit controller 124. In other implementations, individual storage media units may include greater or fewer storage drives than the depicted 6 and 4 storage drives per media unit. - As shown, the
power supply units 164, 166 each power the multiple media units of an associated storage rack (e.g., rack 102 or rack 104). However, in another implementation, there exist multiple power supply units per rack and/or each power supply unit powers a single associated media unit. An upper end power capability of each individual power supply may determine how many storage drives may be operated simultaneously by that power supply, which may range from a single media unit to multiple media units. - In some implementations, the individual media units are selectively installed and uninstalled from the storage rack (e.g., configured as a blade, which corresponds to the storage rack physical configuration). In an example standard server-rack configuration, the individual storage racks are each subdivided into individual rack units (e.g., 42 rack units), where each media unit is physically dimensioned to fill one rack unit (i.e., 19 inches wide by 1.75 inches tall) and thus each storage rack can accommodate a total of 42 media units. In other implementations, the storage rack is physically dimensioned to accommodate any desired number of media units.
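The upper-end power capability described above can be sketched as a simple budget calculation. The function and wattage figures below are hypothetical and only illustrate how a supply's capacity bounds simultaneous drive operation.

```python
def drives_operable(psu_capacity_w, active_draw_w, drive_count):
    """Number of a supply's attached drives that may be active at once,
    bounded by the supply's upper power capability (illustrative model)."""
    return min(drive_count, psu_capacity_w // active_draw_w)

# Assumed figures: a 200 W supply, drives drawing ~25 W when active,
# and 12 drives attached to the supply.
simultaneous = drives_operable(200, 25, 12)
```

Under these assumed figures, only 8 of the 12 attached drives could operate concurrently; the remaining drives would stay powered down until capacity frees up.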
- In one implementation, each storage drive is a distinct storage medium or set of storage media with some or all of the read/write control functions of the storage drive removed to the corresponding media unit controller and/or rack controller of the mass
data storage system 100. As a result, one or both of the media unit controller and/or rack controller of the mass data storage system can selectively power (e.g., power-on, power-off, spin-up, spin-down, etc.) an individual storage drive as desired to read/write data from the individual storage drive without having to supply power to the individual storage drive continuously. - In various implementations, the individual storage drives in each of the media units have characteristics present in existing state of the art storage drives with the exception that some or all of the control hardware and software is removed to the corresponding media unit controller and/or rack controller, thereby centralizing control functions of the individual storage drives to a media unit level and/or a rack level. Further, the individual storage drives may utilize any available storage technology (e.g., magnetic storage, optical storage, semiconductor storage (e.g., flash-based solid state)). In the illustrated implementation, each of the storage resources in an individual media unit is managed by a same controller. For example, the
media unit controller 122 manages and directs read and write operations to each of the six storage resources (e.g., a disk drive 114) in the media unit 110. - Further, by moving some or all of the control hardware/software of the individual storage drives out of the individual storage drives and into the corresponding media unit controller and/or rack controller, the individual storage drives may have disparate characteristics and the operation of the mass
data storage system 100 may be optimized based on the performance characteristics of the storage drives available within thesystem 100. In one example implementation, each of the individual storage drives within a media unit has disparate performance characteristics, but each media unit has the same performance characteristics (i.e., similar within industry acceptable tolerances). - Storage resources in the
system 100 are partitioned into a number of logical zones (e.g., logical zones 126, 136) that are each configured to store data according to a same storage condition. For simplicity, the storage resources included in each of the illustrated logical zones 126, 136 are shown in physical proximity to one another; however, a logical zone may alternatively include storage resources connected via the computer network 106 but located at computer farms in different geographical regions. - When a data transfer request (e.g., read or write command) is received by a controller (e.g., one of the
rack controllers 118, 120) in the system 100, the controller selects a logical zone to receive data or act as a data source for execution of the data transfer request. Some or all of the individual logical zones (e.g., the logical zones 126 and 136) store data according to a storage condition that is different from a storage condition of one or more other logical zones in the system 100. - A "storage condition" of a logical zone may refer to, for example, a performance characteristic satisfying an operational threshold common to storage resources in a logical zone; a method of data storage (e.g., level of integrity or data security) provided by storage resource(s) in a same logical zone; and/or a degree of data integrity attributable to a structural arrangement of storage resource(s) in a same logical zone. Examples of these and other storage conditions utilized in defining logical zones (e.g., the
logical zones 126, 136) are discussed in greater detail with respect to FIGS. 2-3. - Using logical zones to sort and store incoming data can provide a number of benefits. For example, a likelihood of data loss due to power supply failure can be diminished if storage resources are assigned to logical zones based on a distribution of power resources (e.g.,
power supply units 164, 166). Further, read and write latencies can be decreased by zoning the storage resources according to common performance characteristics, such as by storage capacity, rotational speed, time-to-data (TTD), etc. Further still, the use of logical zoning can provide a diverse selection of storage options to an end user. For example, a system implementing logical zoning according to the disclosed implementations may allow a user to select a desired type of storage, a desired security protocol, a desired degree of compression, a desired degree of redundancy, desired speeds of accessibility, and more. -
FIG. 2 illustrates another example mass data storage system 200 that implements logical zoning of storage resources for data management. The mass data storage system 200 includes multiple racks (e.g., racks 202, 204, and 206) that each include a rack controller (e.g., a rack controller 218) and one or more power supply units. Although a limited number of racks and media units are depicted in FIG. 2, the mass data storage system 200 may include any number of racks and media units at one or more physical storage facilities. - Each of the
racks 202, 204, and 206 includes a plurality of media units (e.g., media units 210, 212), and each of the media units includes a media unit controller (e.g., a media unit controller 222). Each of the rack controllers directs data transfer requests to the media units within an associated one of the racks 202, 204, and 206. - Controllers (e.g., rack controllers and/or media unit controllers) of each individual rack may be communicatively coupled to the controllers of other racks in the system. For example, the
rack controller 218 may be able to send and receive data transfer requests to and from the other rack controllers in the mass data storage system 200, such as those located at different storage facilities. - Each storage resource (e.g., a storage drive 214) in the mass
data storage system 200 is assigned to an associated logical zone (e.g., logical zones A-H). For simplicity, the storage resources included in each of the illustrated logical zones A-H are shown to be in physical proximity to other storage resources in the same logical zone (e.g., each zone includes storage resources spanning a same row of media units across the racks 202, 204, and 206). However, a logical zone may alternatively include storage resources in any physical distribution, such as one or more drives within the media unit 210 in a top row of the rack 206, one or more drives within the media unit 212 in a bottom row of the rack 206, and any number of storage resources from other racks in any physical location relative to the media units 210 and 212. - When a data transfer request is received by a controller (e.g., a rack controller and/or a media unit controller), the controller selects a logical zone, and also selects one or more specific storage resources in the selected logical zone on which to execute the data transfer request. The controller transmits the request along available channels within the identified logical zone to the appropriate media unit controller(s) tasked with managing read/writes of each of the selected resource(s).
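A minimal sketch of this selection-and-dispatch step follows, assuming a hypothetical zone map that lists (media unit, drive) pairs per zone; all identifiers are illustrative, not taken from the figures.

```python
# Hypothetical zone map: logical zone id -> (media unit, drive) pairs.
zone_map = {
    "A": [("mu-0", "drive-0"), ("mu-1", "drive-3")],
    "B": [("mu-2", "drive-1"), ("mu-2", "drive-4")],
}

def route_request(zone_map, zone_id, n_resources):
    """Select up to n_resources within the chosen logical zone and group
    them by the media unit controller responsible for each resource."""
    selected = zone_map[zone_id][:n_resources]
    by_controller = {}
    for media_unit, drive in selected:
        by_controller.setdefault(media_unit, []).append(drive)
    return by_controller

# A request routed to zone B resolves to one media unit controller.
dispatch = route_request(zone_map, "B", 2)
```

The grouping step mirrors the text: one transfer request may fan out into per-controller sub-requests within the selected zone.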
- Each of the different logical zones A-H is configured to store data according to one or more common storage conditions. By example and not limitation, a number of potential “storage conditions” are discussed below.
- In one implementation, a logical zone “storage condition” is a degree of data integrity provided by a structural arrangement of resources in the logical zone. For example, logical zones may be defined so as to distribute available power supply units evenly between the storage resources in each logical zone. This concept is illustrated by the configuration of
FIG. 2, wherein logical zone A applies a storage condition in which no more than two media units share a common power supply unit. - In one implementation, a write of data to logical zone 'A' entails writing error correction code (ECC) for the data on multiple different resources (e.g., media units on each of the racks 202, 204, and 206). If one power supply unit fails, data stored on the affected media units may still be recoverable using the ECC stored on the other resources in the logical zone. - In another implementation, a storage condition applied by a logical zone is a maximum number of storage resources sharing a common field replaceable unit (FRU) (e.g., a media unit). As used herein, 'FRU' refers to a group of data resources that are collectively taken "offline" in order to replace an individual storage resource in the group. For example, all six storage drives within the
media unit 210 may be taken temporarily offline in order to replace an individual storage drive 216 housed within the media unit 210. If an individual logical zone includes storage resources that are spread across a relatively large number of FRUs, the system 200 is less likely to be disrupted for data transfer requests occurring while one or more FRUs are offline. The reason for this is similar to that described above: ECC can be spread across multiple storage resources in a logical zone. If one media unit of logical zone A is ejected to replace a drive, data stored on the offline storage resource(s) may still be recoverable using the ECC stored on the other resources in the logical zone. - In other implementations, a storage condition applied by a logical zone is a maximum number of storage resources sharing some physical component other than an FRU or power supply (e.g., a fan). This storage condition mitigates system performance degradation in the event that the shared component fails.
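The shared-component storage conditions above (power supplies, FRUs, fans) can all be checked with one counting rule. The following sketch is illustrative; the function name and the example power-supply assignments are assumptions, not part of the disclosure.

```python
from collections import Counter

def satisfies_sharing_condition(zone_resources, component_of, max_shared=2):
    """Return True if no more than max_shared resources in the zone depend
    on the same shared component (power supply, FRU, fan, ...)."""
    counts = Counter(component_of[r] for r in zone_resources)
    return all(n <= max_shared for n in counts.values())

# Assumed mapping of media units to the power supply each draws from.
psu_of = {"mu-0": "psu-0", "mu-1": "psu-0", "mu-2": "psu-1", "mu-3": "psu-0"}

ok = satisfies_sharing_condition(["mu-0", "mu-1", "mu-2"], psu_of)   # 2 share psu-0
bad = satisfies_sharing_condition(["mu-0", "mu-1", "mu-3"], psu_of)  # 3 share psu-0
```

The same check applies unchanged when `component_of` maps resources to FRUs or fans rather than power supplies.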
- In contrast to the above examples, the storage condition(s) applied within each of the logical zones A-H are not necessarily conditions attributable to structure or distribution of storage resources in a zone. In other implementations, the different logical zones (e.g., logical zones A-H) are managed by different data management modules. The different data management modules may execute a same command type (e.g., a write command, a read command, an erase command) according to a different set of processing operations. For example, a data management module of logical zone A may write data according to a first set of processing operations while a data management module of logical zone B writes data according to a second, different set of processing operations. In these implementations, the storage condition of a logical zone is an outcome of a processing operation applied to data stored within the logical zone. A few examples of this are provided below.
- In one implementation, the storage condition of a logical zone is a degree of data redundancy with which data is saved in the logical zone. For example, a data management module of a first logical zone may implement a non-existent or a low level of redundancy (e.g., redundant array of independent discs (RAID) levels 0-3), while a data management module of a second logical zone implements a medium level of redundancy (e.g., RAID 4-6), and a data management module of a third logical zone implements a high degree of redundancy (e.g., RAID 7-10).
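This tiering might be represented as a simple lookup from integrity level to zone. The zone letters and RAID groupings below merely mirror the example tiers in this paragraph and are not prescribed by the disclosure.

```python
# Illustrative mapping from integrity tier to the logical zone whose data
# management module applies the corresponding redundancy scheme.
ZONE_REDUNDANCY = {
    "low":    {"zone": "C", "raid": "RAID 0-3"},
    "medium": {"zone": "B", "raid": "RAID 4-6"},
    "high":   {"zone": "A", "raid": "RAID 7-10"},
}

def zone_for_integrity(level):
    """Map a requested integrity level to the zone applying that tier."""
    return ZONE_REDUNDANCY[level]["zone"]
```

A write request tagged "high" integrity would thus be routed to zone A in this sketch.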
- In another implementation, a storage condition of a logical zone is an error correction code (ECC) rate utilized for writing data in each logical zone. For example, the data management module of logical zone A may write data according to a different ECC rate than that utilized by a data management module of zones B, C, D, etc.
- In yet another implementation, a storage condition of a logical zone is a degree or type of encryption applied to data stored in the zone. For example, data management modules of various logical zones may each write data with a varying degree of encryption. A high-security encryption code is applied to data stored in a first logical zone; a medium security encryption code is applied to data stored in a second logical zone; a low security encryption code is applied to data stored in a third logical zone, etc.
- Yet another example storage condition of a logical zone is a read/write priority that is associated with each read/write request to the logical zone. For example, read/write operations in a low quality of service (QOS) logical zone may only occur during idle time, while read/write operations in a high QOS logical zone may interrupt any current read/write operations being performed.
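One way to sketch this QOS-based ordering is a priority queue in which queued high-QOS requests are always served before queued low-QOS requests. The ranking scheme and names below are assumptions for illustration only.

```python
import heapq

# Lower rank is served first; requests within a rank are FIFO via seq.
QOS_RANK = {"high": 0, "low": 1}

def push_request(queue, qos, seq, request):
    """Queue a request with its QOS rank and arrival sequence number."""
    heapq.heappush(queue, (QOS_RANK[qos], seq, request))

def next_request(queue):
    """Serve the highest-priority (then oldest) queued request."""
    return heapq.heappop(queue)[2]

q = []
push_request(q, "low", 0, "archive-write")   # arrived first, low QOS
push_request(q, "high", 1, "urgent-read")    # arrived later, high QOS
first = next_request(q)
```

Even though the low-QOS write arrived first, the high-QOS read is served ahead of it, matching the preemptive behavior described above.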
- In still other implementations, a “storage condition” is a storage resource performance characteristic satisfying an operation threshold. Example performance characteristics include without limitation storage capacity, rotational speed of a storage resource (e.g., a disk), time-to-data (TTD), and storage resource cost. These, and other, performance characteristics are discussed in greater detail below:
- In one implementation, a storage condition of a logical zone is a same or similar storage capacity shared by storage resources in the logical zone. For example, 4 terabyte drives have the capability of storing at least 4 terabytes of data and are formatted to store 4 terabytes of data. Drives that meet this threshold are referred to herein as having the same or similar storage capacity. Drives that do not have the capability of storing 4 terabytes of data and/or drives that are formatted to store a different quantity of data are referred to herein as having disparate storage capacity. According to one implementation, media units with a high storage capacity (or high data compression) are grouped into a high capacity logical zone, while media units with a particularly low storage capacity (or low data compression) are grouped into a low capacity logical zone. Zoning storage resources according to storage capacity allows for a uniform organization of metadata and increased disc capacity utilization.
- In another implementation, all storage resources in a same logical zone share the same or similar rotational speeds (e.g., another example performance characteristic). For example, a 7,200 RPM storage drive varies from 7,200 RPM by no more than 1% during read/write operations. Drives that meet this operating limitation are referred to herein as having the same or similar rotational speeds. Drives that fail to meet this operating limitation are referred to herein as having disparate rotational speeds. According to one implementation, media units with a particularly high read/write speed are grouped into a high speed zone. Similarly, media units with a particularly low read/write speed may be grouped into a low speed zone.
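Zoning by a shared performance characteristic, as in the capacity and rotational-speed examples above, amounts to grouping resources on a key. The following is a hedged sketch with an invented drive inventory; field names and values are assumptions.

```python
from collections import defaultdict

def zone_by_characteristic(drives, key):
    """Group storage resources into logical zones keyed on a shared
    performance characteristic (capacity, rotational speed, ...)."""
    zones = defaultdict(list)
    for drive in drives:
        zones[key(drive)].append(drive["id"])
    return dict(zones)

# Illustrative inventory of discovered drives.
drives = [
    {"id": "d0", "capacity_tb": 4, "rpm": 7200},
    {"id": "d1", "capacity_tb": 4, "rpm": 5400},
    {"id": "d2", "capacity_tb": 8, "rpm": 7200},
]

capacity_zones = zone_by_characteristic(drives, key=lambda d: d["capacity_tb"])
speed_zones = zone_by_characteristic(drives, key=lambda d: d["rpm"])
```

The same grouping function serves either storage condition; only the key changes.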
- In still another implementation, storage resources in a same logical zone are of the same or similar cost. For example, higher-cost (e.g., more reliable) storage resources may be assigned to a first zone, while lower-cost (e.g., less reliable) storage resources may be assigned to another zone.
- In still another implementation, different logical zones include randomly writeable data units of a same or similar capacity. For example, the
storage drive 216 may be a shingled magnetic recording disk drive including multiple shingled data bands that are each assigned to one of multiple different logical zones. In shingled magnetic recording, the term “shingled data band” refers to a grouping of adjacent data tracks written as a single unit. An update to one or more cells within a shingled data band includes re-writing the entire data band, including both changed and unchanged cells. By assigning shingled data bands to different logical zones based on storage capacity, a data write can be intelligently directed to a shingled band of an appropriate (comparable) size, reducing unnecessary read/write overhead. - In still other implementations, logical zones are used to partition different types of data. For example, logical zone A may store medical records from the state of Colorado; logical zone B may store uploaded user media files, etc.
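The band-matching step described above can be sketched as choosing the zone whose shingled bands are the smallest that still fit the write, which minimizes how much unchanged data must be rewritten. The zone names and band capacities below are illustrative assumptions.

```python
def select_band_zone(band_zones, write_size_mb):
    """Pick the zone whose shingled bands are the smallest that still fit
    the write; fall back to the zone with the largest bands if none fits."""
    fitting = [z for z, cap in band_zones.items() if cap >= write_size_mb]
    if fitting:
        return min(fitting, key=lambda z: band_zones[z])
    return max(band_zones, key=lambda z: band_zones[z])

# Hypothetical zones grouping shingled data bands by band capacity (MB).
band_zones = {"small": 64, "medium": 256, "large": 1024}
```

A 100 MB write would land in the "medium" zone rather than forcing a rewrite of a 1024 MB band.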
- In some implementations, selection of a logical zone for execution of a data write request is based on a storage characteristic of incoming data, or based on a storage characteristic specified in association with the data. Example storage characteristics are discussed in greater detail with respect to
FIG. 3 . -
FIG. 3 illustrates aspects of another example storage system 300 that implements logical zoning of storage resources. The storage system 300 includes a zone manager 302 that receives data transfer requests from a computer network 306. Responsive to each data transfer request, the zone manager 302 selects a logical zone of the mass data storage system 300 on which to execute the data transfer request. - The
zone manager 302 may include hardware and/or software implemented via any tangible computer-readable storage media within or communicatively coupled to the mass data storage system. The term "tangible computer-readable storage media" includes, but is not limited to, random access memory ("RAM"), ROM, EEPROM, flash memory or other memory technology, CDROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other tangible medium which can be used to store the desired information and which can be accessed by a mobile device or computer. In contrast to tangible computer-readable storage media, intangible computer-readable communication signals may embody computer readable instructions, data structures, program modules or other data resident in a modulated data signal, such as a carrier wave or other signal transport mechanism. - In various implementations, some or all functionality of the zone manager 302 (described below) may be implemented in a rack controller (e.g., a
rack controller 118 or 120 of FIG. 1) of a mass storage system, one or more media unit controllers (e.g., a media unit controller 124 of FIG. 1), or other computing node(s) communicatively coupled to processing resources capable of initiating reads and writes within the storage system 300. In one implementation, the storage system 300 includes a massive network of controllers (e.g., rack controllers, media unit controllers) that manage read and write operations to multiple different storage resources with disparate performance characteristics. - Storage resources in the
storage system 300 are divided into a number of logical zones. Each logical zone stores data according to a storage condition that is different from a storage condition of one or more other logical zones in the storage system 300. FIG. 3 generally illustrates eight logical zones (A-H), but any number of such zones is contemplated. - When the
zone manager 302 receives a write request, the zone manager 302 selects an associated logical zone for execution of the write request. In one implementation, the zone manager 302 selects a logical zone and one or more resources within the selected logical zone to receive the data. The zone manager 302 then directs the read/write request to one or more appropriate controllers with read/write authority to the selected resources. In another implementation, the zone manager 302 selects a logical zone but does not specifically select which resources in the logical zone may receive data of the write request. In this case, the zone manager 302 may simply forward the data write request to another controller, such as a zone-level controller specifically designated for selecting resources to receive incoming data. - The selection of a logical zone for execution of a data write request is based on a storage characteristic associated with the request. More specifically, the storage characteristic may be identified: (1) based on information implicit in the request, such as the size of a data write; (2) based on information specified by a user initiating the write request (e.g., within the request or in association with the request); (3) based on settings of a user subscription to storage on the
storage system 300; or (4) based on a source of a particular data transfer request (e.g., an IP address, geographical region, etc.). - For example, a user may subscribe to storage on the system 300 (e.g., cloud-based storage) and elect, either by indicating preference or selecting a particular subscription plan, one or more storage characteristics that the
zone manager 302 associates with data from that user. A group of users, such as those affiliated with a business (e.g., a shared IP address), may share a subscription. Further, certain logical zones may not be available for requests arriving from certain users, IP addresses, etc. - In one implementation, the
zone manager 302 selects a logical zone for a write request based on a size of data to be written (e.g., one example storage characteristic). If, for example, the various logical zones group together shingled data bands of the same or similar size, the zone manager 302 may select a logical zone to receive data that includes shingled data bands of capacity comparable to (e.g., equal to or just slightly larger than) the size of the data. - In another implementation, the
zone manager 302 selects a logical zone for a write operation based on a specified storage characteristic. For example, the storage characteristic may be specified by a user or in association with a user's subscription, profile, location, etc. For instance, a user may specify or subscribe to a desired level of data integrity (e.g., redundancy) with which to save the data of the write request. The data is then directed to a logical zone that applies a corresponding degree of redundancy to the data stored therein. For example, high integrity data is directed to a high redundancy logical zone (e.g., RAID 7-10); medium integrity data is directed to a "medium redundancy" logical zone (e.g., RAID 4-6); and "low integrity data" is directed to a "low redundancy" logical zone (e.g., RAID 0-3). - In other implementations, the storage characteristic used to select a logical zone is a specified type of error correction code (e.g., LDPC, modulation codes, etc.). In this case, the
storage system 300 directs the write request to a logical zone applying the requested type of error correction code to data stored therein. - In another implementation, the
zone manager 302 selects a logical zone for a write operation based on a specified frequency or priority with which associated data is to be accessed. For example, data that needs to be frequently and/or quickly accessed may be directed to storage resources in a high-speed logical zone, while data that is infrequently accessed or not urgent when accessed is directed to storage resources in a low speed zone. Similarly, data considered high priority may be written to a higher QOS zone while data considered lower priority may be written to a lower QOS zone. - In yet another implementation, the
zone manager 302 selects a logical zone for a write operation based on a specified degree or type of security (e.g., data encryption). For example, the write request may specify a level of desired security (e.g., high security, low security). Data that is particularly sensitive or confidential may be specified as "high security" and stored in a high security logical zone (e.g., a zone applying a sophisticated encryption scheme), while data that is neither particularly sensitive nor confidential can be specified as "low security" and stored in a low security logical zone. A "high-security" logical zone may be, for example, a zone that utilizes advanced encryption techniques, such as techniques that allow for secure erasure and removal of the data. - In some implementations, the logical zones may overlap, providing different logical zone assignments for storage resources based on one or more performance characteristics associated with a particular data set to be stored therein. For example, Logical Zone A may be a "fast-access" zone, Logical Zone G may be a "high QOS priority" zone, and Logical Zone H may be a "low QOS priority" zone. A storage resource with a short TTD could belong to Logical Zone A and also belong to one of Logical Zones G or H.
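Overlapping zone membership can be modeled as a set of zone labels per resource, with selection taking the intersection of the zones a request requires. The memberships below are invented for illustration.

```python
def resources_matching(memberships, required_zones):
    """Return the resources that belong to every required logical zone."""
    return sorted(r for r, zones in memberships.items()
                  if required_zones <= zones)

# Hypothetical overlapping memberships: each resource may sit in several
# logical zones at once (e.g., a fast-access zone plus a QOS tier).
memberships = {
    "drive-0": {"fast_access", "high_qos"},
    "drive-1": {"fast_access", "low_qos"},
    "drive-2": {"high_qos"},
}

hits = resources_matching(memberships, {"fast_access", "high_qos"})
```

A request needing both fast access and high QOS priority thus resolves only to resources present in both zones.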
- Higher-performing logical zones may have a higher cost associated with them that may be passed along to a user of the mass
data storage system 300 in the form of a surcharge. Similarly, lower-performing logical zones may have a lower cost associated with them that is also passed along to the user of the mass data storage system in the form of a discount. - In some implementations, incoming data is assigned to a logical zone based on an attribute of the data identified by the
zone manager 302. For example, the zone manager 302 may be programmed to recognize that incoming data is a medical record, or that the medical record geographically originates from Colorado. Colorado medical records may all be directed to a same logical zone. In various other implementations, the zone manager 302 assigns data to a logical zone by identifying a data type, geographic source, author or originator, security or confidentiality level, character encoding, a checksum, a cryptographic hash, a digital signature, etc. In some implementations, the zone manager 302 appends to the incoming data an extended file attribute describing characteristics of the data. -
FIG. 4 illustrates another mass data storage system 400 including an example control system 402 for selecting one or more logical zones to receive data of a write request. Aspects of the control system 402 may be contained within a rack controller (e.g., a rack controller 118 or 120 of FIG. 1) and/or a media unit controller (e.g., the media unit controller 124 of FIG. 1) associated with each individual storage drive (e.g., a storage drive 214 of FIG. 2) of the mass data storage system 400. In some implementations, aspects of the mass data storage system 400 may span multiple racks and/or geographic locations. - The
control system 402 includes a storage interface 440 that allows the mass data storage system to receive incoming data from external data source(s) via a computer network 406 and send outgoing data to external data destination(s) (see, e.g., external data source(s) and destination(s) 108 of FIG. 1) that may have disparate operating systems operating over one or more computer networks (see, e.g., computer network 106 of FIG. 1). - The
control system 402 further includes a power manager 452 that queries computing nodes (e.g., controllers) to discover power information and create a power map 456 detailing physical distribution of storage resources and power availability within the storage system 400. A zone manager 442 discovers performance characteristics of various system storage resources by querying system controllers (e.g., other rack controllers and/or media unit controllers) for such information. Additionally, the zone manager 442 works in conjunction with a read/write manager 446 to identify available types of data management schemes (e.g., encryption, compression, etc.). Using the discovered performance characteristics, the power map 456, and/or information from the read/write manager 446, the zone manager 442 generates a zone map 444 that groups the various storage resources of the data storage system 400 into logical zones. - In one implementation,
zone manager 442 selects one or more logical zones for each storage resource based on performance features that are common between different storage resources (e.g., storage capacity, access speed, drive reliability, etc.). In another implementation, the zone manager 442 selects one or more logical zones for each storage resource according to a scheme that ensures a particular distribution of storage drives between various power resources and/or across various FRUs in the storage system 400. In still other implementations, the zone manager 442 groups one or more storage resources into logical zones based on instructions from the data read/write manager 446. For example, the read/write manager 446 may identify a number of resources to apply a first data management scheme (e.g., a sophisticated encryption scheme) and a number of resources to apply a different data management scheme (e.g., a low-level encryption scheme). - Periodically, the
zone manager 442 may re-generate the zone map 444 to add storage capacity to or remove it from one or more logical zones in the system. If, for example, storage resources in one logical zone are taken offline (e.g., to replace a drive or other component), the zone manager may elastically re-allocate additional storage resources to that zone to keep zone capacity the same. This reallocation may occur on a time schedule for re-evaluating the storage needs of the mass data storage system or on-demand as the storage needs of the mass data storage system change. - To elastically re-allocate resources, the
zone manager 442 may consult an elastic capacity library 450 including a master listing of storage resources that are either temporarily allocated to a particular zone or available for reallocation. Using such information, the zone manager 442 can dynamically reallocate storage resources between logical zones and dynamically increase logical zone capacity as needed. - For example, the
zone manager 442 may determine that incoming data is to be stored in a first logical zone that does not have sufficient available storage space because one or more storage drives are currently offline (e.g., for maintenance). In such a case, the zone manager 442 may consult the elastic capacity library 450 to identify a storage location where the incoming data can be temporarily stored according to the storage condition of the first logical zone. Once the offline storage resources become available in the first logical zone, the data can be moved from the temporary storage location to a more permanent storage location within the first logical zone. - In another implementation, the
zone manager 442 works in conjunction with the elastic capacity library 450 to permanently increase a storage capacity of a given logical zone. - For example, the demand for storage capacity in a logical zone may, at some point, exceed the available storage capacity in that logical zone. Rather than query a user to replace disks or media units, the
zone manager 442 can simply re-allocate unused storage capacity of one logical zone to another logical zone that is in need of increased storage capacity. - In yet another implementation, the
zone manager 442 tracks health and wear-leveling of various storage resources within each zone, anticipates failures of the various storage resources, adds capacity to each zone as needed (e.g., via the elastic capacity library 450 ), and automatically moves data from the old (e.g., failing) storage resources to the storage resources newly allocated to the logical zone. - In still another implementation, a consumer leases components of a storage system implementing the
control system 402. Depending on the terms of the licensing agreement, the user may not initially have write access to all of the storage resources in the storage system. For example, some storage resources may be in an offline (powered off) state until remotely activated. If the user upgrades a subscription plan, the zone manager 442 may be instructed to elastically allocate some of the previously inaccessible resources to one or more of the logical zones. - In some implementations, logical zones in the
storage system 400 overlap, and a storage resource may be assigned to two or more logical zones. For example, a solid state device (SSD) may belong to a ‘high access speed logical zone’ exclusively including high speed devices and also belong to a ‘high redundancy logical zone’ including both high-speed and low-speed devices. - In various implementations, the
zone manager 442 is an object manager or a file system manager, and may incorporate a proprietary interface, such as a block layer interface 454. The zone manager 442 is communicatively coupled, through the block layer interface 454, to a plurality of storage nodes (e.g., rack controllers, media unit controllers, etc.) within the mass storage system 400. Communication channels may allow for bidirectional data flow between all storage nodes in the mass data storage system 400. For example, the zone manager 442 may be communicatively coupled to a plurality of different rack controllers; each rack controller may be communicatively coupled to media unit controllers within the corresponding rack; and each of the media unit controllers may be communicatively coupled to an associated nest of internal storage drives. - In implementations where incoming data is to be written to a logical zone that utilizes shingled magnetic recording (SMR) technology, the
zone manager 442 consults with an SMR manager 448 to write the incoming data to an appropriate shingled logical zone. More specifically, the zone manager 442 consults the zone map 444 to select appropriate available storage space for the incoming data. The zone manager 442 instructs the SMR manager 448 to write the incoming data to the selected logical zone (and/or specific selected storage resources) within the storage system 400. -
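The SMR write path just described (consult the zone map for space, then hand the write to the SMR manager) can be sketched as follows; the data structures and names are assumptions for illustration, not the patented design.

```python
# Illustrative sketch: routing a write through a zone map, dispatching
# shingled zones to a separate SMR manager.

class SMRManager:
    def __init__(self):
        self.writes = []

    def write(self, zone_name, data):
        # Shingled zones are written sequentially; modeled as an append.
        self.writes.append((zone_name, data))

class ZoneMap:
    def __init__(self):
        self.zones = {}  # name -> {"recording": ..., "free": ...}

    def add_zone(self, name, recording, free):
        self.zones[name] = {"recording": recording, "free": free}

    def select(self, size):
        # Pick any zone with sufficient available storage space.
        for name, zone in self.zones.items():
            if zone["free"] >= size:
                return name
        return None

def route_write(zone_map, smr_manager, data):
    zone = zone_map.select(len(data))
    if zone is None:
        raise RuntimeError("no zone with sufficient space")
    zone_map.zones[zone]["free"] -= len(data)
    if zone_map.zones[zone]["recording"] == "SMR":
        smr_manager.write(zone, data)  # shingled path via SMR manager
    return zone
```

A real SMR manager would also track write pointers and band boundaries; only the dispatch decision is modeled here.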
FIG. 5 illustrates example operations 500 for mapping and selecting logical zones in a mass data storage system. A mapping operation 505 maps different storage resources to different logical zones in the storage system. Each of the logical zones is defined based on a storage condition applied to data stored therein. The storage condition may be, for example, a performance characteristic satisfying an operational threshold common to storage resources in a logical zone; a method of data storage provided by storage resource(s) in a same logical zone; and/or a degree of data integrity attributable to a structural arrangement of storage resource(s) in a same logical zone. - A receiving
operation 510 receives an incoming data transfer request (e.g., a read or write request). A determination operation 515 determines whether the data transfer request is a write request. If the data transfer request is a write request, a selection operation 520 selects one of the logical zones to receive the data based on a storage characteristic associated with the write request that satisfies the storage condition of the selected logical zone. For example, a logical zone may have a storage condition prohibiting writes of new data in excess of a threshold data size. If the incoming data has a size less than or equal to the threshold data size, the storage condition is satisfied. - In another implementation, a storage condition is an outcome of a processing operation performed on data stored within a specific zone. For example, the storage condition may be a level of security, redundancy, compression, type of encryption, etc. Such storage condition(s) may be satisfied, for example, if a user specifies a storage characteristic that matches the storage condition. For example, the user may select the storage characteristic (e.g., a desired redundancy) specifically within a read/write request, via a subscription service, a profile election, etc.
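The size-threshold example for selection operation 520 can be sketched as follows; the zone names and threshold values are assumptions chosen for illustration.

```python
# Illustrative sketch: select the first logical zone whose storage
# condition (here, a maximum write size) is satisfied by the storage
# characteristic of the write request (here, the incoming data size).

def select_zone(zones, write_size):
    """zones: list of (name, max_write_size) storage conditions."""
    for name, max_write_size in zones:
        # Condition prohibiting writes of new data above a threshold size:
        if write_size <= max_write_size:
            return name
    return None  # no zone's storage condition is satisfied

# Hypothetical zone map: a small-object zone and a bulk zone.
zones = [("small-object-zone", 4096), ("bulk-zone", 2**30)]
```

In the mass data storage system the predicate would be one of many (redundancy, encryption level, TTD, etc.), but the satisfied-condition test has this shape.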
- In still other implementations, a storage condition is a performance characteristic of storage resources in a given zone. For example, the storage resources may have a set capacity, be of a set type (disk, SSD, etc.), have a same or similar TTD, etc. Such storage condition(s) may be satisfied, for example, if a user specifies a storage characteristic that matches or otherwise satisfies the storage condition. For example, the user may select a desired TTD that falls within a range of TTDs provided by a particular logical zone.
- If the
determination operation 515 determines that the data transfer request is a read request rather than a write request, a selection and determination operation 525 selects a logical zone that includes the requested data and determines read parameters for the read operation based on the storage condition(s) applied within the selected logical zone. For example, selecting the logical zone including the requested data may entail accessing one or more mapping tables that associate parameters of the incoming read command with physical storage location(s) within the storage system. - Storage conditions of the selected logical zone may influence read parameters of the read command. For example, a particular ECC rate may be utilized if the data is stored in a logical zone applying a specific ECC rate. Similarly, the read operation may be postponed until an idle time if the data is stored in a logical zone associated with a low QOS storage condition. Further still, the number of resources from which the data is retrieved may vary based on a redundancy associated with the logical zone where the data is stored. These read parameters are illustrative and do not contemplate all storage conditions that may affect read parameters related to execution of the read command.
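The derivation of read parameters from a zone's storage conditions can be sketched as follows; the field names and the archive-zone values are assumptions, not part of the disclosure.

```python
# Illustrative sketch: read parameters for operation 525 derived from
# the storage conditions of the logical zone holding the requested data.

def read_parameters(zone):
    """zone: dict of storage conditions applied within the logical zone."""
    return {
        # Use the ECC rate that the zone applies to stored data.
        "ecc_rate": zone.get("ecc_rate", 0.0),
        # Low-QOS zones may defer the read until an idle time.
        "defer_until_idle": zone.get("qos") == "low",
        # Higher redundancy lets the read fan out over more resources.
        "sources": zone.get("redundancy", 1),
    }

# Hypothetical low-QOS, high-redundancy archive zone.
archive_zone = {"ecc_rate": 0.25, "qos": "low", "redundancy": 3}
```

Only the three conditions named in the text are modeled; a real system would carry many more.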
-
FIG. 6 illustrates additional example operations 600 for mapping and selecting logical zones in a mass data storage system. A mapping operation 605 maps different storage resources to different logical zones in the storage system. Each of the different logical zones is defined to have an independent power source for a set number of storage resources. - A
selection operation 610 selects multiple storage resources within a logical zone to receive data of a write request. For example, an independent power source may power no more than ¼ of the storage resources in any one zone. If ECC for incoming data of a write request is written to ½ of the storage resources in a logical zone, then the ECC is guaranteed to be accessible on at least one storage resource even if one of the power sources fails. This decreases a risk of system performance degradation in the event of a power source failure. - The embodiments of the disclosed technology described herein are implemented as logical steps in one or more computer systems. The logical operations of the presently disclosed technology are implemented (1) as a sequence of processor-implemented steps executing in one or more computer systems and (2) as interconnected machine or circuit modules within one or more computer systems. The implementation is a matter of choice, dependent on the performance requirements of the computer system implementing the disclosed technology. Accordingly, the logical operations making up the embodiments of the disclosed technology described herein are referred to variously as operations, steps, objects, or modules. Furthermore, it should be understood that logical operations may be performed in any order, adding and omitting as desired, unless explicitly claimed otherwise or a specific order is inherently necessitated by the claim language.
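The power-distribution guarantee described for FIG. 6 (each independent power source powering no more than ¼ of a zone's resources, ECC written to ½ of them) can be checked with a short sketch; the eight-resource layout below is an assumption for illustration.

```python
# Illustrative check: with ECC copies spread so that each power source
# backs at most 1/4 of a zone's resources, any single power-source
# failure leaves at least one ECC copy accessible.

def surviving_ecc_copies(resources, failed_source):
    """resources: list of (power_source, holds_ecc) pairs."""
    return sum(1 for source, holds_ecc in resources
               if holds_ecc and source != failed_source)

# 8 resources across 4 power sources (2 each = 1/4 of the zone);
# ECC is written to half the resources, one copy per power domain.
zone = [(s, i < 4) for i, s in enumerate([0, 1, 2, 3, 0, 1, 2, 3])]

# Any single power-source failure leaves ECC reachable elsewhere.
assert all(surviving_ecc_copies(zone, f) >= 1 for f in range(4))
```

With this layout a single failure removes exactly one of the four ECC copies, leaving three accessible.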
- The above specification, examples, and data provide a complete description of the structure and use of exemplary embodiments of the disclosed technology. Since many embodiments of the disclosed technology can be made without departing from the spirit and scope of the disclosed technology, the disclosed technology resides in the claims hereinafter appended. Furthermore, structural features of the different embodiments may be combined in yet another embodiment without departing from the recited claims.
Claims (20)
1. A method comprising:
mapping a plurality of storage resources to one or more of multiple logical zones in a storage system, each of the storage resources having a storage controller and each of the logical zones associated with a different storage condition and defining a group of storage resources applying the associated storage condition to data stored therein.
2. The method of claim 1 further comprising:
selecting one of the logical zones to receive data of a write request based on a storage characteristic of the write request satisfying the storage condition of the logical zone that was selected.
3. The method of claim 2 wherein the storage characteristic is specified in the write request.
4. The method of claim 2 wherein the storage characteristic is determined based on a user's subscription to a storage service.
5. The method of claim 2 wherein the storage condition and the storage characteristic indicate a degree of data integrity.
6. The method of claim 2 wherein the storage condition is a size of a shingled magnetic recording band and the storage characteristic is a size of the data of the write request.
7. The method of claim 2 wherein the storage condition and the storage characteristic indicate a level of encryption for data of the write request.
8. The method of claim 2 wherein the storage condition is an access speed and the storage characteristic refers to a time-to-data (TTD) in which data of the write request can be accessed.
9. The method of claim 1 wherein the storage resources have disparate physical characteristics.
10. A storage system comprising:
a zone manager configured to map a plurality of storage devices to one of multiple logical zones in the storage system, each of the storage devices having a distinct storage controller and each of the logical zones associated with a different storage condition and defining a group of storage devices applying the associated storage condition to data stored therein.
11. The storage system of claim 10 further comprising:
a write manager configured to select one of the logical zones to receive data of a write request based on a storage characteristic of the write request satisfying the storage condition of the selected logical zone.
12. The storage system of claim 11 wherein the storage condition and the storage characteristic indicate a degree of data integrity.
13. The storage system of claim 11 wherein the storage condition and the storage characteristic indicate a level of encryption for data of the write request.
14. The storage system of claim 11 wherein the storage condition is an access speed and the storage characteristic refers to a time-to-data (TTD) in which data of the write request can be accessed from the storage system.
15. The storage system of claim 11 wherein each of the logical zones is defined to have an independent power source for each of a predetermined number of storage devices.
16. The storage system of claim 10 wherein each of the different logical zones is defined to include storage devices of substantially the same capacity.
17. The storage system of claim 10 wherein the zone manager is configured to dynamically add or remove storage devices from each of the logical zones.
18. A method comprising:
mapping storage resources to different logical zones in a computer system, each of the logical zones defined to have an independent power source for each of a predetermined number of storage resources; and
selecting multiple storage resources in a logical zone to receive data of a write request.
19. The method of claim 18 wherein each of the different logical zones is defined to have a maximum number of storage resources in a same field replaceable unit.
20. The method of claim 18 wherein at least two of the storage resources are managed by different storage controllers.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/540,721 US20150363126A1 (en) | 2014-06-13 | 2014-11-13 | Logical zone mapping |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201462012205P | 2014-06-13 | 2014-06-13 | |
US201462012219P | 2014-06-13 | 2014-06-13 | |
US14/540,721 US20150363126A1 (en) | 2014-06-13 | 2014-11-13 | Logical zone mapping |
Publications (1)
Publication Number | Publication Date |
---|---|
US20150363126A1 true US20150363126A1 (en) | 2015-12-17 |
Family
ID=54836119
Family Applications (7)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/506,101 Active 2034-12-18 US9939865B2 (en) | 2014-06-13 | 2014-10-03 | Selective storage resource powering for data transfer management |
US14/540,721 Abandoned US20150363126A1 (en) | 2014-06-13 | 2014-11-13 | Logical zone mapping |
US14/676,612 Active 2036-01-29 US10152105B2 (en) | 2014-06-13 | 2015-04-01 | Common controller operating multiple storage drives |
US14/720,503 Active 2035-06-20 US9880602B2 (en) | 2014-06-13 | 2015-05-22 | Power characteristics in a system of disparate storage drives |
US14/720,031 Active US9541978B2 (en) | 2014-06-13 | 2015-05-22 | Redundancies for reconstruction in mass data storage systems |
US14/726,146 Active 2035-10-14 US9965011B2 (en) | 2014-06-13 | 2015-05-29 | Controller interface for operation of multiple storage drives |
US14/740,031 Active 2035-10-15 US9874915B2 (en) | 2014-06-13 | 2015-06-15 | Extended file attributes for redundant data storage |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/506,101 Active 2034-12-18 US9939865B2 (en) | 2014-06-13 | 2014-10-03 | Selective storage resource powering for data transfer management |
Family Applications After (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/676,612 Active 2036-01-29 US10152105B2 (en) | 2014-06-13 | 2015-04-01 | Common controller operating multiple storage drives |
US14/720,503 Active 2035-06-20 US9880602B2 (en) | 2014-06-13 | 2015-05-22 | Power characteristics in a system of disparate storage drives |
US14/720,031 Active US9541978B2 (en) | 2014-06-13 | 2015-05-22 | Redundancies for reconstruction in mass data storage systems |
US14/726,146 Active 2035-10-14 US9965011B2 (en) | 2014-06-13 | 2015-05-29 | Controller interface for operation of multiple storage drives |
US14/740,031 Active 2035-10-15 US9874915B2 (en) | 2014-06-13 | 2015-06-15 | Extended file attributes for redundant data storage |
Country Status (1)
Country | Link |
---|---|
US (7) | US9939865B2 (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160054921A1 (en) * | 2014-08-25 | 2016-02-25 | Phison Electronics Corp. | Memory management method, memory storage device and memory controlling circuit unit |
US9541978B2 (en) | 2014-06-13 | 2017-01-10 | Seagate Technology Llc | Redundancies for reconstruction in mass data storage systems |
US9977613B2 (en) * | 2015-12-30 | 2018-05-22 | Dell Products L.P. | Systems and methods for zone page allocation for shingled media recording disks |
US10055317B2 (en) | 2016-03-22 | 2018-08-21 | Netapp, Inc. | Deferred, bulk maintenance in a distributed storage system |
US20180246670A1 (en) * | 2017-02-28 | 2018-08-30 | International Business Machines Corporation | Storing data sequentially in zones in a dispersed storage network |
US10379742B2 (en) * | 2015-12-28 | 2019-08-13 | Netapp, Inc. | Storage zone set membership |
US10467172B2 (en) | 2016-06-01 | 2019-11-05 | Seagate Technology Llc | Interconnect for shared control electronics |
US10514984B2 (en) | 2016-02-26 | 2019-12-24 | Netapp, Inc. | Risk based rebuild of data objects in an erasure coded storage system |
US11074937B1 (en) * | 2020-03-17 | 2021-07-27 | Kabushiki Kaisha Toshiba | Magnetic disk device and depop processing method |
US20220113879A1 (en) * | 2020-10-14 | 2022-04-14 | Microchip Technology Incorporated | System with Increasing Protected Storage Area and Erase Protection |
US20230266897A1 (en) * | 2022-02-24 | 2023-08-24 | Micron Technology, Inc. | Dynamic zone group configuration at a memory sub-system |
Families Citing this family (68)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10839321B2 (en) | 1997-01-06 | 2020-11-17 | Jeffrey Eder | Automated data storage system |
US9181441B2 (en) | 2010-04-12 | 2015-11-10 | Jindal Films Americas Llc | Coating or polymeric labels |
US9367562B2 (en) * | 2013-12-05 | 2016-06-14 | Google Inc. | Distributing data on distributed storage systems |
US10114692B2 (en) * | 2015-01-27 | 2018-10-30 | Quantum Corporation | High/low energy zone data storage |
US10404523B2 (en) | 2015-03-09 | 2019-09-03 | Vapor IO Inc. | Data center management with rack-controllers |
US10257268B2 (en) | 2015-03-09 | 2019-04-09 | Vapor IO Inc. | Distributed peer-to-peer data center management |
US11349701B2 (en) | 2015-03-09 | 2022-05-31 | Vapor IO Inc. | Data center management with rack-controllers |
US10817398B2 (en) | 2015-03-09 | 2020-10-27 | Vapor IO Inc. | Data center management via out-of-band, low-pin count, external access to local motherboard monitoring and control |
US10833940B2 (en) | 2015-03-09 | 2020-11-10 | Vapor IO Inc. | Autonomous distributed workload and infrastructure scheduling |
US10156987B1 (en) * | 2015-06-26 | 2018-12-18 | Amazon Technologies, Inc. | Temperature management in a data storage system |
US9983959B2 (en) * | 2015-06-29 | 2018-05-29 | Microsoft Technology Licensing, Llc | Erasure coding of data within a group of storage units based on connection characteristics |
US9880772B2 (en) * | 2015-09-21 | 2018-01-30 | Micron Technology, Inc. | Systems and methods for providing file information in a memory system protocol |
US10467163B1 (en) * | 2015-10-27 | 2019-11-05 | Pavilion Data Systems, Inc. | Solid state drive multiplexer |
US10841148B2 (en) * | 2015-12-13 | 2020-11-17 | Microsoft Technology Licensing, Llc. | Disaster recovery of cloud resources |
US9836238B2 (en) | 2015-12-31 | 2017-12-05 | International Business Machines Corporation | Hybrid compression for large history compressors |
US10067705B2 (en) * | 2015-12-31 | 2018-09-04 | International Business Machines Corporation | Hybrid compression for large history compressors |
WO2017120502A1 (en) * | 2016-01-07 | 2017-07-13 | Vapor IO Inc. | Data center management |
WO2017120498A1 (en) * | 2016-01-07 | 2017-07-13 | Vapor IO Inc. | Data center management via out-of-band, low-pin count, external access to local motherboard monitoring and control |
US20170242613A1 (en) * | 2016-02-24 | 2017-08-24 | Seagate Technology Llc | Processing Circuit Controlled Data Storage Unit Selection |
US9971606B2 (en) * | 2016-04-18 | 2018-05-15 | Super Micro Computer, Inc. | Technique for reordering hard drive activation reports to achieve sequential hard drive ordering |
US10372364B2 (en) * | 2016-04-18 | 2019-08-06 | Super Micro Computer, Inc. | Storage enclosure with daisy-chained sideband signal routing and distributed logic devices |
US11231858B2 (en) | 2016-05-19 | 2022-01-25 | Pure Storage, Inc. | Dynamically configuring a storage system to facilitate independent scaling of resources |
US10198216B2 (en) * | 2016-05-28 | 2019-02-05 | Advanced Micro Devices, Inc. | Low power memory throttling |
US20180046409A1 (en) * | 2016-08-10 | 2018-02-15 | International Business Machines Corporation | Mass storage devices packages and software-defined arrays of such packages |
US10530703B2 (en) * | 2016-08-15 | 2020-01-07 | At&T Intellectual Property I, L.P. | Dynamic provisioning of storage in the cloud |
CN107783726B (en) * | 2016-08-31 | 2019-11-12 | 华为技术有限公司 | The method of signal is transmitted in storage system and storage system |
US10802853B2 (en) | 2016-10-14 | 2020-10-13 | Seagate Technology Llc | Active drive |
US11301144B2 (en) | 2016-12-28 | 2022-04-12 | Amazon Technologies, Inc. | Data storage system |
US10484015B2 (en) | 2016-12-28 | 2019-11-19 | Amazon Technologies, Inc. | Data storage system with enforced fencing |
US10514847B2 (en) * | 2016-12-28 | 2019-12-24 | Amazon Technologies, Inc. | Data storage system with multiple durability levels |
CN106656712A (en) * | 2016-12-30 | 2017-05-10 | 深圳市优必选科技有限公司 | Bus abnormality processing method and robot controller |
US10827433B2 (en) * | 2017-09-29 | 2020-11-03 | Hewlett Packard Enterprise Development Lp | Managing power consumption of a network |
US10664405B2 (en) * | 2017-11-03 | 2020-05-26 | Google Llc | In-memory distributed cache |
US10809926B2 (en) * | 2018-02-05 | 2020-10-20 | Microsoft Technology Licensing, Llc | Server system |
US20190245923A1 (en) * | 2018-02-05 | 2019-08-08 | Microsoft Technology Licensing, Llc | Server system |
US10764180B1 (en) * | 2018-02-20 | 2020-09-01 | Toshiba Memory Corporation | System and method for storing data using software defined networks |
TWI682273B (en) * | 2018-09-13 | 2020-01-11 | 緯創資通股份有限公司 | Power control method for storage devices and electronic system using the same |
US10446186B1 (en) | 2018-09-19 | 2019-10-15 | Seagate Technology Llc | Data storage cartridge with magnetic head-disc interface (HDI) |
US10831592B1 (en) * | 2018-09-27 | 2020-11-10 | Juniper Networks, Inc | Apparatus, system, and method for correcting slow field-replaceable units in network devices |
US11188142B1 (en) * | 2018-12-11 | 2021-11-30 | Amazon Technologies, Inc. | Power management network for communication between racks in a data center |
CN109814804A (en) * | 2018-12-21 | 2019-05-28 | 创新科存储技术(深圳)有限公司 | A kind of method and apparatus reducing distributed memory system energy consumption |
US10996900B2 (en) | 2019-02-25 | 2021-05-04 | Seagate Technology Llc | Multi-cartridge control board with cartridge-external voice coil motor actuator components |
US10818318B2 (en) | 2019-03-19 | 2020-10-27 | Seagate Technology Llc | Storage system with actuated media player |
US10902879B2 (en) | 2019-03-19 | 2021-01-26 | Seagate Technology Llc | Storage system with actuated media player |
US11367464B2 (en) | 2019-03-19 | 2022-06-21 | Seagate Technology, Llc | Data storage system including movable carriage |
TWI708145B (en) | 2019-04-30 | 2020-10-21 | 威聯通科技股份有限公司 | Multi-controller storage system and storage apparatus |
PL3963844T3 (en) | 2019-05-03 | 2022-12-27 | Telefonaktiebolaget Lm Ericsson (Publ) | Providing communication services using sets of i/o user devices |
US11169723B2 (en) | 2019-06-28 | 2021-11-09 | Amazon Technologies, Inc. | Data storage system with metadata check-pointing |
US11294734B2 (en) | 2019-08-01 | 2022-04-05 | EMC IP Holding Company LLC | Method and system optimizing the use of sub-data confidence fabrics |
US11310272B2 (en) | 2019-08-01 | 2022-04-19 | EMC IP Holding Company LLC | Method and system creating and using data confidence fabric processing paths |
US10977055B2 (en) | 2019-08-01 | 2021-04-13 | EMC IP Holding Company LLC | Method and system creating and using sub-data confidence fabrics |
US11102009B2 (en) | 2019-08-01 | 2021-08-24 | EMC IP Holding Company LLC | Method and system transacting data using verifiable claims |
US11475073B2 (en) | 2019-08-02 | 2022-10-18 | EMC IP Holding Company LLC | System and method for management of data from deployments |
US11360703B2 (en) * | 2019-10-22 | 2022-06-14 | EMC IP Holding Company LLC | Method and system for a trusted actuation via data fabric metadata |
US11262830B2 (en) * | 2019-11-11 | 2022-03-01 | Microsoft Technology Licensing, Llc | Managing ephemeral storage on a computing node |
US20210157515A1 (en) * | 2019-11-26 | 2021-05-27 | Seagate Technology Llc | Cartridge-external preamplifier for read/write control of media library |
TWI711361B (en) * | 2019-12-20 | 2020-11-21 | 宜鼎國際股份有限公司 | Stacked structure of circuit boards |
KR20210094773A (en) * | 2020-01-22 | 2021-07-30 | 에스케이하이닉스 주식회사 | Memory system and data processing system including the same |
US11388147B2 (en) | 2020-01-31 | 2022-07-12 | EMC IP Holding Company LLC | System and method for redirecting data access to local trust managers via an indirection logic service |
US11182096B1 (en) | 2020-05-18 | 2021-11-23 | Amazon Technologies, Inc. | Data storage system with configurable durability |
US11681443B1 (en) | 2020-08-28 | 2023-06-20 | Amazon Technologies, Inc. | Durable data storage with snapshot storage space optimization |
US11308990B1 (en) | 2020-10-16 | 2022-04-19 | Seagate Technology Llc | Selectively routing signals to data storage devices within a data storage system that includes a movable carriage |
US11436073B2 (en) * | 2020-11-18 | 2022-09-06 | Hewlett Packard Enterprise Development Lp | Fault indications for storage system commands |
US11570919B2 (en) | 2020-12-21 | 2023-01-31 | Seagate Technology, Llc | Data storage system including movable carriage and physical locking mechanism |
US20220283721A1 (en) * | 2021-03-02 | 2022-09-08 | Seagate Technology Llc | Operating multiple storage devices using nvm interface |
US20220335166A1 (en) * | 2021-04-16 | 2022-10-20 | Seagate Technology Llc | Wireless data storage devices and systems |
US11640270B2 (en) | 2021-07-27 | 2023-05-02 | Beijing Tenafe Electronic Technology Co., Ltd. | Firmware-controlled and table-based conditioning for flexible storage controller |
GB2611575A (en) * | 2021-10-11 | 2023-04-12 | The Sec Dep For Business Energy And Industrial Strategy | Connection of solid-state storage devices |
Citations (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6332139B1 (en) * | 1998-11-09 | 2001-12-18 | Mega Chips Corporation | Information communication system |
US20050141184A1 (en) * | 2003-12-25 | 2005-06-30 | Hiroshi Suzuki | Storage system |
US20050168934A1 (en) * | 2003-12-29 | 2005-08-04 | Wendel Eric J. | System and method for mass storage using multiple-hard-disk-drive enclosure |
Family Cites Families (58)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3958088A (en) | 1974-03-29 | 1976-05-18 | Xerox Corporation | Communications systems having a selective facsimile output |
US5396635A (en) | 1990-06-01 | 1995-03-07 | Vadem Corporation | Power conservation apparatus having multiple power reduction levels dependent upon the activity of the computer system |
EP0507503A3 (en) | 1991-04-05 | 1993-09-29 | International Business Machines Corporation | Method and apparatus for directly and automatically accessing a bank of data storage devices with a computer |
US5504882A (en) | 1994-06-20 | 1996-04-02 | International Business Machines Corporation | Fault tolerant data storage subsystem employing hierarchically arranged controllers |
US5778374A (en) | 1995-08-03 | 1998-07-07 | International Business Machines Corporation | Compressed common file directory for mass storage systems |
JPH11195766A (en) | 1997-10-31 | 1999-07-21 | Mitsubishi Electric Corp | Semiconductor integrated circuit device |
US6986075B2 (en) | 2001-02-23 | 2006-01-10 | Hewlett-Packard Development Company, L.P. | Storage-device activation control for a high-availability storage system |
US6959399B2 (en) | 2001-09-24 | 2005-10-25 | International Business Machines Corporation | Selective automated power cycling of faulty disk in intelligent disk array enclosure for error recovery |
US6766420B2 (en) | 2001-09-27 | 2004-07-20 | International Business Machines Corporation | Selectively powering portions of system memory in a network server to conserve energy |
US6560155B1 (en) | 2001-10-24 | 2003-05-06 | Micron Technology, Inc. | System and method for power saving memory refresh for dynamic random access memory devices after an extended interval |
US7780785B2 (en) | 2001-10-26 | 2010-08-24 | Applied Materials, Inc. | Gas delivery apparatus for atomic layer deposition |
US7318164B2 (en) | 2001-12-13 | 2008-01-08 | International Business Machines Corporation | Conserving energy in a data processing system by selectively powering down processors |
WO2003081416A2 (en) * | 2002-03-21 | 2003-10-02 | Tempest Microsystems | A lower power disk array as a replacement for robotic tape storage |
US7210005B2 (en) | 2002-09-03 | 2007-04-24 | Copan Systems, Inc. | Method and apparatus for power-efficient high-capacity scalable storage system |
JP4230189B2 (en) | 2002-10-03 | 2009-02-25 | 株式会社日立製作所 | Disk array device and power supply method thereof |
US20050228943A1 (en) | 2004-04-02 | 2005-10-13 | Decenzo David P | Multipath redundant storage system architecture and method |
US7587042B2 (en) | 2004-07-12 | 2009-09-08 | Phylogy, Inc. | High performance ADSL line conditioner system and method |
US7434090B2 (en) | 2004-09-30 | 2008-10-07 | Copan System, Inc. | Method and apparatus for just in time RAID spare drive pool management |
US7334140B2 (en) | 2005-03-03 | 2008-02-19 | International Business Machines Corporation | Apparatus and method to selectively provide power to one or more components disposed in an information storage and retrieval system |
US7467306B2 (en) | 2005-03-08 | 2008-12-16 | Hewlett-Packard Development Company, L.P. | Methods and systems for allocating power to an electronic device |
US7568122B2 (en) | 2005-03-16 | 2009-07-28 | Dot Hill Systems Corporation | Method and apparatus for identifying a faulty component on a multiple component field replaceable unit |
JP2007066480A (en) | 2005-09-02 | 2007-03-15 | Hitachi Ltd | Disk array device |
US7443759B1 (en) | 2006-04-30 | 2008-10-28 | Sun Microsystems, Inc. | Reduced-power memory with per-sector ground control |
JP2007213721A (en) * | 2006-02-10 | 2007-08-23 | Hitachi Ltd | Storage system and control method thereof |
US7516348B1 (en) | 2006-02-24 | 2009-04-07 | Emc Corporation | Selective power management of disk drives during semi-idle time in order to save power and increase drive life span |
US7661005B2 (en) * | 2006-06-30 | 2010-02-09 | Seagate Technology Llc | Individual storage device power control in a multi-device array |
US7673167B2 (en) | 2007-02-06 | 2010-03-02 | International Business Machines Corporation | RAID array data member copy offload in high density packaging |
US7702853B2 (en) | 2007-05-04 | 2010-04-20 | International Business Machines Corporation | Data storage system with power management control and method |
US7669023B2 (en) | 2007-07-10 | 2010-02-23 | Hitachi, Ltd. | Power efficient storage with data de-duplication |
US8707070B2 (en) | 2007-08-28 | 2014-04-22 | Commvault Systems, Inc. | Power management of data processing resources, such as power adaptive management of data storage operations |
JP2009080603A (en) * | 2007-09-26 | 2009-04-16 | Hitachi Ltd | Storage device, and electric power saving method thereof |
US8495276B2 (en) | 2007-10-12 | 2013-07-23 | HGST Netherlands B.V. | Power saving optimization for disk drives with external cache |
JP2009140357A (en) | 2007-12-07 | 2009-06-25 | Hitachi Ltd | Storage apparatus with power usage control function and power usage control method in storage apparatus |
US20090198928A1 (en) * | 2008-02-04 | 2009-08-06 | General Electric Company | Method and system for providing data backup to multiple storage media |
US8473779B2 (en) | 2008-02-29 | 2013-06-25 | Assurance Software And Hardware Solutions, Llc | Systems and methods for error correction and detection, isolation, and recovery of faults in a fail-in-place storage array |
US20090249003A1 (en) | 2008-03-26 | 2009-10-01 | Allen Keith Bates | Method and system for multiplexing concatenated storage disk arrays to form a rules-based array of disks |
JP5055192B2 (en) * | 2008-04-24 | 2012-10-24 | 株式会社日立製作所 | Management apparatus and storage apparatus control method |
US20100138677A1 (en) | 2008-12-01 | 2010-06-03 | International Business Machines Corporation | Optimization of data distribution and power consumption in a data center |
US8127165B2 (en) * | 2009-02-05 | 2012-02-28 | Lsi Corporation | Multipath power management |
TWI431464B (en) * | 2009-04-29 | 2014-03-21 | Micro Star Int Co Ltd | Computer system with power control and power control method |
US8286015B2 (en) | 2009-06-03 | 2012-10-09 | Microsoft Corporation | Storage array power management using lifecycle information |
US8239701B2 (en) | 2009-07-28 | 2012-08-07 | Lsi Corporation | Methods and apparatus for power allocation in a storage system |
KR101603099B1 (en) | 2009-10-01 | 2016-03-28 | 삼성전자주식회사 | A memory system detecting the distribution of unstable memory cells and the method for detecting the distribution of unstable memory cells |
US20110302224A1 (en) | 2010-06-08 | 2011-12-08 | Rahav Yairi | Data storage device with preloaded content |
JP5388976B2 (en) | 2010-09-22 | 2014-01-15 | 株式会社東芝 | Semiconductor memory control device |
EP2652623B1 (en) | 2010-12-13 | 2018-08-01 | SanDisk Technologies LLC | Apparatus, system, and method for auto-commit memory |
US9594421B2 (en) * | 2011-03-08 | 2017-03-14 | Xyratex Technology Limited | Power management in a multi-device storage array |
US20120297114A1 (en) | 2011-05-19 | 2012-11-22 | Hitachi, Ltd. | Storage control apparatus and management method for semiconductor-type storage device |
JP5998677B2 (en) | 2012-06-29 | 2016-09-28 | 富士通株式会社 | Storage device and connection device |
US9354683B2 (en) * | 2012-08-08 | 2016-05-31 | Amazon Technologies, Inc. | Data storage power management |
US9766676B2 (en) | 2012-10-26 | 2017-09-19 | Intel Corporation | Computing subsystem hardware recovery via automated selective power cycling |
US9403539B2 (en) | 2013-03-15 | 2016-08-02 | Bright Energy Storage Technologies, Llp | Apparatus and method for controlling a locomotive consist |
US9727577B2 (en) | 2013-03-28 | 2017-08-08 | Google Inc. | System and method to store third-party metadata in a cloud storage system |
US8947816B1 (en) | 2013-05-01 | 2015-02-03 | Western Digital Technologies, Inc. | Data storage assembly for archive cold storage |
US9904486B2 (en) * | 2013-07-17 | 2018-02-27 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Selectively powering a storage device over a data network |
US9235486B1 (en) | 2013-09-30 | 2016-01-12 | Symantec Corporation | Techniques for spare storage pool management |
US9360925B2 (en) * | 2014-05-13 | 2016-06-07 | Netapp, Inc. | Power switching technique for archival data storage enclosure |
US9436524B2 (en) | 2014-05-13 | 2016-09-06 | Netapp, Inc. | Managing archival storage |
- 2014
  - 2014-10-03 US US14/506,101 patent/US9939865B2/en active Active
  - 2014-11-13 US US14/540,721 patent/US20150363126A1/en not_active Abandoned
- 2015
  - 2015-04-01 US US14/676,612 patent/US10152105B2/en active Active
  - 2015-05-22 US US14/720,503 patent/US9880602B2/en active Active
  - 2015-05-22 US US14/720,031 patent/US9541978B2/en active Active
  - 2015-05-29 US US14/726,146 patent/US9965011B2/en active Active
  - 2015-06-15 US US14/740,031 patent/US9874915B2/en active Active
Patent Citations (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6332139B1 (en) * | 1998-11-09 | 2001-12-18 | Mega Chips Corporation | Information communication system |
US6973455B1 (en) * | 1999-03-03 | 2005-12-06 | Emc Corporation | File server system providing direct data sharing between clients with a server acting as an arbiter and coordinator |
US20090106255A1 (en) * | 2001-01-11 | 2009-04-23 | Attune Systems, Inc. | File Aggregation in a Switched File System |
US7124272B1 (en) * | 2003-04-18 | 2006-10-17 | Symantec Corporation | File usage history log for improved placement of files in differential rate memory according to frequency of utilizations and volatility of allocation space |
US20090089504A1 (en) * | 2003-08-14 | 2009-04-02 | Soran Philip E | Virtual Disk Drive System and Method |
US20050141184A1 (en) * | 2003-12-25 | 2005-06-30 | Hiroshi Suzuki | Storage system |
US20050168934A1 (en) * | 2003-12-29 | 2005-08-04 | Wendel Eric J. | System and method for mass storage using multiple-hard-disk-drive enclosure |
US20060004868A1 (en) * | 2004-07-01 | 2006-01-05 | Claudatos Christopher H | Policy-based information management |
US20060062383A1 (en) * | 2004-09-21 | 2006-03-23 | Yasunori Kaneda | Encryption/decryption management method in computer system having storage hierarchy |
US20060085608A1 (en) * | 2004-10-19 | 2006-04-20 | Nobuyuki Saika | System and method for controlling the updating of storage device |
US20060206603A1 (en) * | 2005-03-08 | 2006-09-14 | Vijayan Rajan | Integrated storage virtualization and switch system |
US20070079156A1 (en) * | 2005-09-30 | 2007-04-05 | Kazuhisa Fujimoto | Computer apparatus, storage apparatus, system management apparatus, and hard disk unit power supply controlling method |
US20070143542A1 (en) * | 2005-12-16 | 2007-06-21 | Hitachi, Ltd. | Storage controller, and method of controlling storage controller |
US20070205664A1 (en) * | 2006-03-01 | 2007-09-06 | Hitachi, Ltd. | Power supply device and power supply method |
US20100011229A1 (en) * | 2006-07-17 | 2010-01-14 | Xyratex Technology Limited | Methods of powering up a disk drive storage enclosure and storage enclosures |
US20080256307A1 (en) * | 2006-10-17 | 2008-10-16 | Kazuhisa Fujimoto | Storage subsystem, storage system, and method of controlling power supply to the storage subsystem |
US20080093926A1 (en) * | 2006-10-20 | 2008-04-24 | Hiroshi Suzuki | Power device and power device power supply method |
US20080141332A1 (en) * | 2006-12-11 | 2008-06-12 | International Business Machines Corporation | System, method and program product for identifying network-attack profiles and blocking network intrusions |
US20090089343A1 (en) * | 2007-09-27 | 2009-04-02 | Sun Microsystems, Inc. | Method and system for block allocation for hybrid drives |
US20090135698A1 (en) * | 2007-11-28 | 2009-05-28 | Akira Fujibayashi | Disk Controller and Storage System |
US20090150593A1 (en) * | 2007-12-11 | 2009-06-11 | Microsoft Corporation | Dynamic storage hierarchy management |
US20090222643A1 (en) * | 2008-02-29 | 2009-09-03 | Phison Electronics Corp. | Block management method for flash memory and controller and storage system using the same |
US7984151B1 (en) * | 2008-10-09 | 2011-07-19 | Google Inc. | Determining placement of user data to optimize resource utilization for distributed systems |
US20110191601A1 (en) * | 2009-02-27 | 2011-08-04 | Yosuke Tsuyuki | Storage System |
US20110161556A1 (en) * | 2009-12-31 | 2011-06-30 | Seagate Technology Llc | Systems and methods for storing data in a multi-level cell solid state storage device |
US20110191604A1 (en) * | 2010-01-29 | 2011-08-04 | Hitachi, Ltd. | Storage system |
US20110314325A1 (en) * | 2010-06-17 | 2011-12-22 | Hitachi, Ltd. | Storage apparatus and method of detecting power failure in storage apparatus |
US8583838B1 (en) * | 2010-09-27 | 2013-11-12 | Emc Corporation | Techniques for statistics collection in connection with data storage performance |
US20120151262A1 (en) * | 2010-12-13 | 2012-06-14 | Hitachi, Ltd. | Storage apparatus and method of detecting power failure in storage apparatus |
US20120210169A1 (en) * | 2011-02-15 | 2012-08-16 | Coraid, Inc. | Power failure management in components of storage area network |
US20120272038A1 (en) * | 2011-04-20 | 2012-10-25 | Seagate Technology Llc | Logical block address mapping |
US20130031317A1 (en) * | 2011-04-27 | 2013-01-31 | Seagate Technology Llc | Method and apparatus for redirecting data writes |
US20140310441A1 (en) * | 2011-09-21 | 2014-10-16 | Kevin Mark Klughart | Data Storage Architecture Extension System and Method |
US20130304963A1 (en) * | 2012-05-10 | 2013-11-14 | Sony Corporation | Memory managing device and method and electronic apparatus |
US20140082117A1 (en) * | 2012-09-14 | 2014-03-20 | ConnectEDU Inc. | Client device lockdown and control system |
US20140279876A1 (en) * | 2013-03-15 | 2014-09-18 | Tactile, Inc. | Storing and processing data organized as flexible records |
US20140281194A1 (en) * | 2013-03-15 | 2014-09-18 | Seagate Technology Llc | Dynamically-sizeable granule storage |
US20140281819A1 (en) * | 2013-03-15 | 2014-09-18 | Fusion-Io, Inc. | Managing Data Reliability |
US20140289451A1 (en) * | 2013-03-20 | 2014-09-25 | Phison Electronics Corp. | Method of recording mapping information, and memory controller and memory storage apparatus using the same |
US20150067295A1 (en) * | 2013-08-29 | 2015-03-05 | Cleversafe, Inc. | Storage pools for a dispersed storage network |
US20150197330A1 (en) * | 2014-01-14 | 2015-07-16 | Austin Digital Inc. | Methods for matching flight data |
US20150363288A1 (en) * | 2014-06-13 | 2015-12-17 | Seagate Technology Llc | Redundancies for reconstruction in mass data storage systems |
US9541978B2 (en) * | 2014-06-13 | 2017-01-10 | Seagate Technology Llc | Redundancies for reconstruction in mass data storage systems |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10152105B2 (en) | 2014-06-13 | 2018-12-11 | Seagate Technology Llc | Common controller operating multiple storage drives |
US9541978B2 (en) | 2014-06-13 | 2017-01-10 | Seagate Technology Llc | Redundancies for reconstruction in mass data storage systems |
US9874915B2 (en) | 2014-06-13 | 2018-01-23 | Seagate Technology Llc | Extended file attributes for redundant data storage |
US9880602B2 (en) | 2014-06-13 | 2018-01-30 | Seagate Technology Llc | Power characteristics in a system of disparate storage drives |
US9939865B2 (en) | 2014-06-13 | 2018-04-10 | Seagate Technology Llc | Selective storage resource powering for data transfer management |
US9965011B2 (en) | 2014-06-13 | 2018-05-08 | Seagate Technology Llc | Controller interface for operation of multiple storage drives |
US10459630B2 (en) * | 2014-08-25 | 2019-10-29 | Phison Electronics Corp. | Memory management method, memory storage device and memory controlling circuit unit |
US20160054921A1 (en) * | 2014-08-25 | 2016-02-25 | Phison Electronics Corp. | Memory management method, memory storage device and memory controlling circuit unit |
US10379742B2 (en) * | 2015-12-28 | 2019-08-13 | Netapp, Inc. | Storage zone set membership |
US9977613B2 (en) * | 2015-12-30 | 2018-05-22 | Dell Products L.P. | Systems and methods for zone page allocation for shingled media recording disks |
US10514984B2 (en) | 2016-02-26 | 2019-12-24 | Netapp, Inc. | Risk based rebuild of data objects in an erasure coded storage system |
US10055317B2 (en) | 2016-03-22 | 2018-08-21 | Netapp, Inc. | Deferred, bulk maintenance in a distributed storage system |
US10467172B2 (en) | 2016-06-01 | 2019-11-05 | Seagate Technology Llc | Interconnect for shared control electronics |
US20180246670A1 (en) * | 2017-02-28 | 2018-08-30 | International Business Machines Corporation | Storing data sequentially in zones in a dispersed storage network |
US10642532B2 (en) * | 2017-02-28 | 2020-05-05 | International Business Machines Corporation | Storing data sequentially in zones in a dispersed storage network |
US11550501B2 (en) * | 2017-02-28 | 2023-01-10 | International Business Machines Corporation | Storing data sequentially in zones in a dispersed storage network |
US20230080824A1 (en) * | 2017-02-28 | 2023-03-16 | International Business Machines Corporation | Storing data sequentially in zones in a dispersed storage network |
US11907585B2 (en) * | 2017-02-28 | 2024-02-20 | International Business Machines Corporation | Storing data sequentially in zones in a dispersed storage network |
US11074937B1 (en) * | 2020-03-17 | 2021-07-27 | Kabushiki Kaisha Toshiba | Magnetic disk device and depop processing method |
US20220113879A1 (en) * | 2020-10-14 | 2022-04-14 | Microchip Technology Incorporated | System with Increasing Protected Storage Area and Erase Protection |
US20230266897A1 (en) * | 2022-02-24 | 2023-08-24 | Micron Technology, Inc. | Dynamic zone group configuration at a memory sub-system |
Also Published As
Publication number | Publication date |
---|---|
US10152105B2 (en) | 2018-12-11 |
US20150363127A1 (en) | 2015-12-17 |
US20150363109A1 (en) | 2015-12-17 |
US20150363288A1 (en) | 2015-12-17 |
US20150362983A1 (en) | 2015-12-17 |
US9939865B2 (en) | 2018-04-10 |
US9541978B2 (en) | 2017-01-10 |
US9965011B2 (en) | 2018-05-08 |
US20150362968A1 (en) | 2015-12-17 |
US9874915B2 (en) | 2018-01-23 |
US9880602B2 (en) | 2018-01-30 |
US20150362972A1 (en) | 2015-12-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20150363126A1 (en) | Logical zone mapping | |
US11392307B2 (en) | Data-protection-aware capacity provisioning of shared external volume | |
US11137940B2 (en) | Storage system and control method thereof | |
US7971025B2 (en) | Method and apparatus for chunk allocation in a thin provisioning storage system | |
TWI575377B (en) | Storage system with load balancing mechanism and method of operation thereof | |
US7814351B2 (en) | Power management in a storage array | |
US7266668B2 (en) | Method and system for accessing a plurality of storage devices | |
US8271718B2 (en) | Storage system and control method for the same, and program | |
US9348724B2 (en) | Method and apparatus for maintaining a workload service level on a converged platform | |
US8645750B2 (en) | Computer system and control method for allocation of logical resources to virtual storage areas | |
US8977834B2 (en) | Dynamic storage regions | |
US7797487B2 (en) | Command queue loading | |
EP4137924A1 (en) | Fragment management method and fragment management apparatus | |
JP2004013547A (en) | Data allocation method and information processing system | |
US10365845B1 (en) | Mapped raid restripe for improved drive utilization | |
US20170329533A1 (en) | Controlling Operation of a Data Storage System | |
US8447947B2 (en) | Method and interface for allocating storage capacities to plural pools | |
JP2003280950A (en) | File management system | |
US20090006741A1 (en) | Preferred zone scheduling | |
US8898514B2 (en) | SAS storage device drive system with failure information table | |
US11201788B2 (en) | Distributed computing system and resource allocation method | |
CN116917873A (en) | Data access method, memory controller and memory device | |
US20140281300A1 (en) | Opportunistic Tier in Hierarchical Storage | |
US11880589B2 (en) | Storage system and control method | |
WO2024045879A1 (en) | Data access method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: SEAGATE TECHNOLOGY LLC, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:FRICK, GUY DAVID;REEL/FRAME:034166/0844 Effective date: 20141113 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |