US20050149562A1 - Method and system for managing data access requests utilizing storage meta data processing - Google Patents

Method and system for managing data access requests utilizing storage meta data processing

Info

Publication number
US20050149562A1
US20050149562A1 (application US10/749,879; US74987903A)
Authority
US
United States
Prior art keywords
data
manager
request
preparing
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/749,879
Inventor
Michael Browne
Glenn Wightwick
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp
Priority to US10/749,879 (US20050149562A1)
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION (assignment of assignors interest; see document for details). Assignors: WIGHTWICK, GLENN R.; BROWNE, MICHAEL E.
Priority to CNB2004100909773A (CN1292352C)
Publication of US20050149562A1
Priority to US11/854,002 (US7818309B2)
Legal status: Abandoned

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 - Querying
    • G06F16/245 - Query processing
    • G06F16/2453 - Query optimisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 - Querying
    • G06F16/245 - Query processing
    • G06F16/2455 - Query execution
    • G06F16/24552 - Database cache management


Abstract

A method and system are provided for managing requests to access data in a computer environment. In one aspect of the present invention, a request manager receives a request associated with meta data corresponding to data that is maintained separately from the meta data. In another aspect, the request manager informs a data object manager of an anticipated request that will be received by the data object manager to enable it to prepare for the anticipated request. The data object manager commences preparing for the anticipated request in response to being informed of the anticipated request to facilitate a reduction in data access time. In one example of a computing environment utilizing one or more aspects of the invention, a storage subsystem comprising a data object manager prepares for the anticipated request by pre-fetching data blocks from storage media into a cache.

Description

    TECHNICAL FIELD
  • This invention relates, in general, to the management of requests to access data in communications environments, and more particularly, to informing a data object manager of an anticipated request to access the data from storage media based on a received request associated with meta data which corresponds to the data.
  • BACKGROUND OF THE INVENTION
  • Storage subsystems generally consist of a number of disk drives that can be aggregated and made to appear as virtual disk drives to one or more client computers. To improve performance, storage subsystems usually deploy a cache which is used to hold frequently accessed disk blocks. The choice of which disk blocks to cache can have a significant impact on overall system performance. Some storage subsystems attempt to anticipate which disk blocks may be required by client computers by examining historical patterns of access to disk blocks. The nature of such cache management algorithms is predictive.
  • Although there are techniques today for the management of requests to access data in communications environments, these techniques can cause a storage subsystem to load data into its cache that is not accessed within the expected time because of their predictive nature. Thus, there is still a need for further techniques to facilitate the management of requests to access data in computer environments.
  • SUMMARY OF THE INVENTION
  • The shortcomings of the prior art are overcome and additional advantages are provided through the provision of a method of managing requests. In one aspect, a manager receives a request associated with meta data corresponding to data maintained separately from the meta data. In another aspect of the present invention, the manager informs another manager of an anticipated request that will be received by the another manager to enable it to prepare for the anticipated request.
  • Systems and computer program products corresponding to the above-summarized methods are also described and claimed herein.
  • Additional features and advantages are realized through the techniques of the present invention. Other embodiments and aspects of the invention are described in detail herein and are considered a part of the claimed invention.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The subject matter which is regarded as the invention is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other objects, features, and advantages of the invention are apparent from the following detailed description taken in conjunction with the accompanying drawings in which:
  • FIG. 1 illustrates a flowchart of one embodiment of a technique for managing requests associated with data in a computer environment, in accordance with an aspect of the present invention;
  • FIG. 2 illustrates a flowchart of another embodiment of a technique for managing requests associated with data in a computer environment, in accordance with an aspect of the present invention;
  • FIG. 3 illustrates one embodiment of a technique for managing requests for access to data in an environment in which data and meta data associated with the data are stored separately, in accordance with an aspect of the present invention;
  • FIG. 4 illustrates an example of an environment in which a technique for managing requests for access to data is utilized, in accordance with an aspect of the present invention;
  • FIG. 5 illustrates another example of an environment in which a technique for managing requests for access to data is utilized, in accordance with an aspect of the present invention; and
  • FIG. 6 illustrates a third example of an environment in which a technique for managing requests for access to data is utilized, in accordance with an aspect of the present invention.
  • BEST MODE FOR CARRYING OUT THE INVENTION
  • In one aspect of the present invention, a manager receives a request associated with meta data. The manager informs another manager of an anticipated request to be received by the another manager to enable the another manager to prepare for the anticipated request.
  • A technique for managing requests associated with data in a computer environment in accordance with an aspect of the present invention is described below with reference to request management flowchart 60 illustrated in FIG. 1. First, step 61 comprises a request manager receiving a request associated with meta data. Then, in step 62, the request manager informs a data object manager of a change in a data object's meta data. The data object manager makes a data object management decision in step 63 and, if necessary, acts to implement the data object management decision in step 64.
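  • The flow of FIG. 1 can be pictured with the short, hedged sketch below. All class and method names (RequestManager, DataObjectManager, MetaDataChange, and so on) are assumptions introduced only for illustration; they are not terminology from the claims or from any real product. The sketch shows a request manager forwarding a meta data change to a data object manager, which then makes and, if needed, implements a management decision.

```python
# Illustrative sketch of steps 61-64 of FIG. 1; all names are hypothetical.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class MetaDataChange:
    object_id: str        # data object whose meta data changed
    change: str           # e.g. "lock-granted" or "file-closed"
    blocks: range         # disk blocks the change refers to


class DataObjectManager:
    def on_meta_data_change(self, event: MetaDataChange) -> None:
        decision = self.decide(event)          # step 63: make a management decision
        if decision is not None:               # step 64: implement it if necessary
            decision()

    def decide(self, event: MetaDataChange) -> Optional[Callable[[], None]]:
        # e.g. return a callable that pre-fetches or releases cache blocks
        return None


class RequestManager:
    def __init__(self, dom: DataObjectManager) -> None:
        self.dom = dom

    def handle(self, event: MetaDataChange) -> None:
        # step 61: receive a request associated with meta data
        # step 62: inform the data object manager of the change
        self.dom.on_meta_data_change(event)
```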
  • Further aspects of a technique for managing requests associated with data objects in a computer environment in accordance with the present invention are described below with reference to flowchart 50 illustrated in FIG. 2. First, step 51 comprises a request manager receiving a usage request. Then, if the communications unit is granted permission to use a data object as determined in step 52, a request manager sends a request-management message to a data object manager and responds, substantially simultaneously, to a communications unit with a usage-request response in steps 53 and 55, respectively. Subsequent to steps 53 and 55, respectively, the data object manager prepares for an anticipated request in step 54, and data-usage communications are transmitted between the communications unit and data object manager in step 56. In one example, the step of transmitting data-usage communications includes transmitting by a communications unit requests for data blocks and transmitting by a data object manager data comprising the requested data blocks of the data object. Alternatively, if the communications unit is not granted permission to use a data object, step 57 comprises the request manager responding to a communications unit with a usage-request response.
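  • One way to picture flowchart 50 is the following sketch. The permission table, the use of a thread, and the method names are assumptions made for the example; the point being illustrated is only that the request-management message (step 53) and the usage-request response (step 55) can be issued substantially simultaneously, so the data object manager can begin preparing (step 54) before the data-usage traffic of step 56 arrives.

```python
# Hypothetical sketch of steps 51-57 of FIG. 2; names and APIs are assumptions.
import threading


class UsageRequestManager:
    def __init__(self, data_object_manager, permissions):
        self.dom = data_object_manager
        self.permissions = permissions          # {(unit_id, object_id): bool}

    def handle_usage_request(self, unit, object_id, blocks):
        # step 52: is the communications unit permitted to use the data object?
        if not self.permissions.get((unit.id, object_id), False):
            unit.receive_response(granted=False)        # step 57: deny
            return
        # step 53: request-management message to the data object manager,
        # sent on its own thread so that ...
        notifier = threading.Thread(
            target=self.dom.prepare_for_anticipated_request,  # step 54
            args=(object_id, blocks))
        notifier.start()
        # ... step 55: the usage-request response goes out substantially
        # simultaneously rather than after the preparation completes.
        unit.receive_response(granted=True)
        notifier.join()
        # step 56: data-usage traffic then flows directly between the
        # communications unit and the data object manager.
```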
  • One example of a communications environment in which a technique for managing requests associated with data objects is utilized in accordance with an aspect of the present invention is described below with reference to FIG. 3. In an environment in which the meta data associated with data objects may be stored separately from the data objects, a request manager 10 receives usage requests from a communications unit 20 via request management network 12. Request manager 10 transmits a request-management message to a data object manager 30 via a private network 14 and responds to communications unit 20 with a usage-request response via request management network 12. If communications unit 20 is granted permission to use a data object 40, the communications to support use of data object 40 are transmitted via data network 16 between communications unit 20 and data object manager 30.
  • Generally, both meta data and user data are associated with data objects in a computer communications environment. User data is the information that has meaning to a user or to a program that may process that data. Examples of user data are the contents of a Freelance Graphics® presentation, or employee information stored within a relational database. Meta data is information about user data. Examples of meta data associated with data objects include the identity of client computers that have access permission, data object type, names of files associated with a set of disk blocks, the length of a file, the list of blocks that constitute a file, information about user access permissions, and the date and time a file has been created or updated. Data objects comprise data. Data object types include data files, checkpoint files, file systems, logical volumes, and journaled file system (JFS) logical volume logs.
  • Features facilitated by use of the technique in the computer environment illustrated in FIG. 3 include: improved security with respect to access to data objects, regulation of the speed of access to data objects, arbitration of access priorities to data objects, and increased speed of access to data objects.
  • An emerging class of storage environments separates the storage of user data and meta data and provides separate networks over which the user data and meta data traverse. An example of such a storage environment is IBM's Storage Tank™ file system wherein a Storage Tank™ client (a computer) accesses user data from a storage subsystem (over a storage area network (SAN) using block transfer protocols) and accesses meta data from a centralized Storage Tank™ meta data controller (over Ethernet using TCP/IP protocols). The separation of user data and meta data can be either logical or physical. Storage subsystems, which generally comprise a number of disk drives that can be aggregated and made to appear as virtual disk drives to one or more client computers, usually deploy a cache, which is used to hold frequently accessed disk blocks, to improve input-output performance.
  • One or more aspects of the present invention take advantage of the fact that where user data and meta data are separated, the processing of meta data in conjunction with file access provides additional information which can be used to inform a storage subsystem of future input/output (I/O) access requests. This information can be utilized by a storage subsystem to facilitate the management of its internal caches.
  • An example of the management of the contents of a cache in a data storage subsystem which utilizes information obtained by processing file meta data in accordance with an aspect of the present invention is described as follows with reference to FIG. 4. When a client computer 210 wants to read and update some disk blocks associated with a file located in a storage subsystem 230, client computer 210 must be granted an exclusive lock on the associated disk blocks. Client computer 210 initiates a transaction to meta data controller 220, requesting the lock. This lock request indicates that client computer 210 intends to perform I/O operations on a certain range of disk blocks in the future. If meta data controller 220 can grant the requested lock, meta data controller 220 passes a “hint” to storage subsystem 230, which stores those blocks, indicating that the storage subsystem can expect to receive an I/O request from a client computer for a particular range of disk blocks. Meta data controller (MDC) 220 communicates this “hint” to storage subsystem 230 via private network 222. Essentially concurrently, meta data controller 220 grants the lock by signaling to client computer 210 via meta data network 212. In this exemplary embodiment, MDC 220 is an example of a request manager, and storage subsystem controller 236 of storage subsystem 230 is an example of a data object manager.
  • If storage subsystem 230 determines that the requested disk blocks are not in cache 232, it pre-fetches the requested blocks from storage disks 234 into cache 232. After receiving the requested lock, client computer 210 initiates an I/O operation with storage subsystem 230 via data network 214 to access at least some of the disk blocks on which a lock was received. When the client-initiated I/O request is received by storage subsystem 230, the storage subsystem may have the requested disk blocks in its cache already as a result of pre-fetching. If not, storage subsystem 230 has already commenced the necessary physical I/O to load the requested blocks into cache 232 as a result of previously receiving a hint from meta data controller 220. When the requested disk blocks are available in cache 232, they are sent to client computer 210 from cache 232 via data network 214. The result of storage subsystem 230 initiating disk input/output in order to store disk blocks that are subject to a future access request by a client computer in cache 232 in advance of receiving a request from client computer 210 is that data access latency is reduced.
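  • A minimal cache-manager sketch of this FIG. 4 read path appears below. The block-level disk API (read_block), the dictionary used as the cache, and the method names are assumptions for the example; a real storage subsystem controller would also handle eviction, concurrency, and error cases.

```python
# Illustrative only: hint-driven pre-fetching into a storage subsystem cache.
class StorageSubsystemController:
    def __init__(self, disks):
        self.disks = disks        # backing store exposing read_block(n)
        self.cache = {}           # block number -> block contents

    # Hint path: called when the meta data controller signals an anticipated read.
    def expect_read(self, blocks):
        for n in blocks:
            if n not in self.cache:
                self.cache[n] = self.disks.read_block(n)   # pre-fetch

    # Data path: called when the client's I/O request actually arrives.
    def read_blocks(self, blocks):
        result = []
        for n in blocks:
            if n not in self.cache:                        # hint missed or evicted
                self.cache[n] = self.disks.read_block(n)
            result.append(self.cache[n])                   # served from cache
        return result
```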
  • The method of the present invention is also utilized in the operation of the example illustrated in FIG. 4 when a client computer writes disk blocks to the storage subsystem. If client computer 210 sends a transaction to MDC 220, indicating that the client computer has closed a file and has finished writing blocks to the storage subsystem, meta data controller 220 communicates this information to storage subsystem 230 via private network 222. Storage subsystem 230 determines whether to free the storage locations in cache 232 in which the disk blocks comprising the closed file are stored based on this file-closed message and possibly “hints” received regarding other future data access requests. Freeing storage locations in cache 232 permits storage of other disk blocks in cache for expedited access.
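  • The file-closed handling just described might look roughly like the following standalone helper. The cache-as-dictionary representation and the notion of a set of still-pending hints are assumptions made for illustration.

```python
# Release cache slots for a closed file, keeping any blocks that other
# outstanding hints still cover (hypothetical helper, illustrative only).
def free_closed_file_blocks(cache, closed_blocks, pending_hints=()):
    still_needed = {n for hint in pending_hints for n in hint}
    for n in closed_blocks:
        if n not in still_needed:
            cache.pop(n, None)    # freed slot can now cache other disk blocks
```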
  • Another example of managing the contents of a cache in a data storage subsystem which utilizes information obtained by processing file meta data in accordance with an aspect of the present invention relates to a computer writing a large file which is not likely to be read. An example of such a file is a checkpoint/restart file, created by a long-running computational job, or a database log file. These files are typically used to recover the state of a computational workload after a computer crash. Since computer crashes are very rare, checkpoint/restart files are typically written regularly, but rarely read. Knowledge of this information can be used to inform a storage subsystem not to cache a checkpoint/restart file once it has been written to disk.
  • This example is described further with reference to the exemplary environment of FIG. 4. When computer 210 wishes to write a checkpoint/restart file, it requests write permission from meta data controller 220 via the meta data network 212 and informs meta data controller 220 that the file should not be actively cached. Alternatively, meta data controller 220 could automatically recognize that the file is of a particular type (such as a checkpoint/restart file). Meta data controller 220 grants permission to computer 210 to write the file, providing a list of blocks where the file should be written, and simultaneously, via private network 222, informs storage controller 236 of storage subsystem 230 that computer 210 is about to write a large file that should not be actively cached in storage subsystem cache 232.
  • When computer 210 writes the file via data network 214 to storage subsystem 230, storage subsystem controller 236 decides how much of cache 232 to allocate to storing all or part of the large file and, as quickly as possible, writes the contents of the large file to storage disks 234 within the storage subsystem. As soon as the contents (or partial contents) of the file are written to storage disks 234 of the storage subsystem, the associated file data within the cache 232 can be discarded immediately, since it is highly unlikely that this file will need to be read again. Thus, utilization of the cache based on knowledge of the type of data being stored, which is gained through processing the file's meta data, facilitates optimization of the use of the cache resource.
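  • As a rough sketch of that write path, a controller could stage a do-not-cache file through a small bounded buffer and discard each slice as soon as it reaches disk. The function below is an illustration under assumptions, not the patent's implementation; the disk API and the staging limit are invented for the example.

```python
# Illustrative write path for a large, rarely-read ("do not cache") file.
def write_do_not_cache(disks, blocks, staging_limit=64):
    staged = []
    for n, data in blocks:                    # (block number, contents) pairs
        staged.append((n, data))
        if len(staged) >= staging_limit:      # only a bounded slice of cache is used
            for m, d in staged:
                disks.write_block(m, d)       # write through as quickly as possible
            staged.clear()                    # discard once the data is on disk
    for m, d in staged:                       # flush any remainder
        disks.write_block(m, d)
```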
  • An example related to the previous example involves a computer reading a checkpoint/restart file described above to recover the state of a computational workload after the computer has crashed. Knowledge that this type of file is rarely read can be used to inform the storage subsystem to not cache the file when it is being read from disk. It should be noted that the management of the contents of a cache in a data storage subsystem described below with respect to this example applies to reading any large file which is only likely to be accessed infrequently.
  • With reference to FIG. 4, computer 210 intends to read a checkpoint/restart file so computer 210 requests permission from meta data controller 220 via meta data network 212 to read the file and informs meta data controller 220 that the file should not be actively cached. Alternatively, meta data controller 220 automatically recognizes that the file is of a particular type (such as a checkpoint/restart file), rather than having to be informed by computer 210. Meta data controller 220 grants permission to computer 210 to read the file, providing a list of blocks where the file is located, and simultaneously, via the private network 222, informs storage subsystem controller 236 of storage subsystem 230 that computer 210 is about to read a large file that should not be actively cached in the storage subsystem cache 232. When computer 210 reads the file via data network 214 from storage subsystem 230, storage subsystem controller 236 decides how much of cache 232 to allocate to reading the file from storage disks 234. As soon as the contents (or partial contents) of the file are transmitted to computer 210, the associated file data within the cache 232 can be discarded immediately, since it is highly unlikely that this file will need to be read again. Thus, utilization of the cache based on the type of data being accessed, which is determined by processing the file's meta data, frees cache resources to facilitate faster access to other files.
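  • A corresponding read path could stream the file to the computer through a small read-ahead buffer without retaining it in the shared cache. Again, the sketch below is illustrative only and its disk API and buffer size are assumptions.

```python
# Illustrative read path for a large file that should not be actively cached.
def read_do_not_cache(disks, blocks, read_ahead=64):
    buffered = []
    for n in blocks:
        buffered.append(disks.read_block(n))  # bounded read-ahead only
        if len(buffered) >= read_ahead:
            yield from buffered               # transmit to the requesting computer ...
            buffered.clear()                  # ... then discard immediately
    yield from buffered                       # send whatever remains
```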
  • Another example of an environment using a technique for managing requests for access to data objects to effectuate management of the contents of a cache in a data storage subsystem by utilizing information obtained from processing file meta data in accordance with an aspect of the present invention is described below with reference to FIG. 5. In this example, a number of computers (310, 311, and 312) are members of a database cluster. Each of computers 310, 311, and 312 hosts an instance of the cluster's database. When a computer 310 wants to read and update some disk blocks associated with a database table located in a storage subsystem 330, computer 310 must be granted an exclusive lock on the associated disk blocks. Computer 310 requests a lock on those disk blocks from database lock manager 320 by sending a request on external network 314. If database lock manager 320 can grant the requested lock, database lock manager 320 grants the lock via a message sent to computer 310 on external network 314 and essentially simultaneously sends a message to storage subsystem 330, indicating that the storage subsystem can expect to receive an I/O request from a client computer for a particular range of disk blocks. Database lock manager 320 communicates this message to storage subsystem 330 via private network 322. Analogously to the previous example, database lock manager 320 is an example of a request manager, and storage subsystem 330 comprises a data object manager.
  • If storage subsystem 330 determines that the disk blocks for which a lock was granted are not in cache 332, storage subsystem 330 initiates an I/O operation to storage disks 334. Computer 310 initiates an I/O operation with storage subsystem 330 since it has been granted a lock on the requested disk blocks. When the I/O request initiated by computer 310 is received by storage subsystem 330 via data network 316, the storage subsystem may have already pre-fetched the requested disk blocks and stored them in cache 332. Even if all requested disk blocks have not yet been loaded in the cache, the storage subsystem has initiated the physical I/O from storage disks 334 prior to receiving the request from computer 310. As a result, the latency in providing the requested disk blocks from cache 332 is less than it would be without pre-fetching prompted by the “hint” from the database lock manager.
  • In another example of a computer environment embodying the present invention, which is described with reference to FIG. 6, a centralized storage meta data controller 434 is co-hosted with a storage subsystem 420. In this environment, one or more computers are connected to the data storage subsystem via a data network. By way of example, as shown in FIG. 6, a computer 410 exchanges data with a storage subsystem 420 via data network 412. Storage subsystem 420 comprises a server 430 connected to a cache 422 and storage disks 424. Server 430 comprises logical partitions 431 and 432.
  • In this example, the functions of meta data controller 434 and storage subsystem controller 433 are executed by software running in logical partitions (LPARs) 432 and 431, respectively, of server 430. Using virtual input/output bus 436 between the meta data controller LPAR 432 and the storage subsystem controller LPAR 431, “hints” regarding anticipated future I/O requests directed to storage subsystem 420 are passed at very high speed and low latency from meta data controller 434 to storage subsystem controller 433. The benefit of pre-fetching disk blocks into the storage subsystem cache is enhanced by the use of high-speed, low-latency communications between the meta data controller and storage subsystem controller. In this example, meta data controller 434 and storage subsystem controller 433 are examples of a request manager and a data object manager, respectively.
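  • Very loosely, the co-hosted arrangement can be pictured with a worker thread and the main thread standing in for the two LPARs and an in-memory queue standing in for virtual input/output bus 436. Everything in the sketch (the queue, the fake disk class, the block range) is an assumption chosen only to show low-latency, in-memory hint delivery between the two controllers.

```python
# Loose analogy for FIG. 6: hint delivery between co-hosted controllers.
import queue
import threading


class FakeDisks:                                  # stand-in backing store
    def read_block(self, n):
        return bytes(512)


hint_bus = queue.Queue()                          # stands in for virtual I/O bus 436
cache, disks = {}, FakeDisks()


def meta_data_controller():                       # role of the "LPAR 432" controller
    hint_bus.put(range(1000, 1064))               # anticipated read of 64 blocks


def storage_subsystem_controller():               # role of the "LPAR 431" controller
    blocks = hint_bus.get()                       # in-memory delivery: low latency
    for n in blocks:
        cache.setdefault(n, disks.read_block(n))  # pre-fetch before the client I/O


consumer = threading.Thread(target=storage_subsystem_controller)
consumer.start()
meta_data_controller()
consumer.join()
assert 1000 in cache                              # the hinted blocks are now cached
```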
  • The present invention can be included in an article of manufacture (e.g., one or more computer program products) having, for instance, computer usable media. The media has therein, for instance, computer readable program code means or logic (e.g., instructions, code, commands, etc.) to provide and facilitate the capabilities of the present invention. The article of manufacture can be included as a part of a computer system or sold separately.
  • Additionally, at least one program storage device readable by a machine embodying at least one program of instructions executable by the machine to perform the capabilities of the present invention can be provided.
  • The flow diagrams depicted herein are just examples. There may be many variations to these diagrams or the steps (or operations) described therein without departing from the spirit of the invention. For instance, the steps may be performed in a differing order, or steps may be added, deleted or modified. All of these variations are considered a part of the claimed invention.
  • Although preferred embodiments have been depicted and described in detail herein, it will be apparent to those skilled in the relevant art that various modifications, additions, substitutions and the like can be made without departing from the spirit of the invention and these are therefore considered to be within the scope of the invention as defined in the following claims.

Claims (20)

1. A method of managing requests in a communications environment, said method comprising:
receiving by a manager a request associated with meta data, said meta data corresponding to data maintained separately from the meta data; and
informing, by the manager, another manager of an anticipated request to be received by the another manager to enable the another manager to prepare for the anticipated request.
2. The method of claim 1, further comprising preparing by the another manager for the anticipated request, said preparing responsive to said informing.
3. The method of claim 2, wherein said preparing comprises managing contents of a cache in a data storage subsystem.
4. The method of claim 2, wherein said preparing comprises managing a user's or a client computer's access to the data.
5. The method of claim 2, further comprising:
sending, by the manager, a reply to a communications unit in response to the request substantially simultaneously with said informing; and
receiving, by the another manager, the anticipated request, wherein said preparing begins before the receiving by the another manager.
6. The method of claim 3, wherein said managing contents comprises pre-fetching one or more data blocks from one or more storage media of the data storage subsystem whereby the data blocks are stored in the cache, the data blocks comprising at least some of the data.
7. The method of claim 3, wherein said managing contents comprises releasing storage locations of the cache whereby the storage locations become available for storing other data, the storage locations storing data blocks comprising at least some of the data.
8. A request management system for a communications environment, said system comprising:
means for receiving by a manager a request associated with meta data, said meta data corresponding to data maintained separately from the meta data; and
means for informing, by the manager, another manager of an anticipated request to be received by the another manager to enable the another manager to prepare for the anticipated request.
9. The system of claim 8, further comprising means for preparing by the another manager for the anticipated request, said means for preparing responsive to said means for informing.
10. The system of claim 9, wherein said means for preparing comprises means for managing contents of a cache in a data storage subsystem.
11. The system of claim 9, wherein said means for preparing comprises means for managing a user's or a client computer's access to the data.
12. The system of claim 9, further comprising:
means for sending, by the manager, a reply to a communications unit in response to the request substantially simultaneously with informing the another manager of the anticipated request to be received; and
means for receiving, by the another manager, the anticipated request, wherein said means for preparing begins to prepare for the anticipated request before the means for receiving receives the anticipated request.
13. The system of claim 10, wherein said means for managing contents comprises means for pre-fetching one or more data blocks from one or more storage media of the data storage subsystem whereby the data blocks are stored in the cache, the data blocks comprising at least some of the data.
14. The system of claim 10, wherein said means for managing contents comprises means for releasing storage locations of the cache whereby the storage locations become available for storing other data, the storage locations storing data blocks comprising at least some of the data.
15. At least one program storage device readable by a machine embodying at least one program of instructions executable by the machine to perform a method of managing requests in a communications environment, said method comprising:
receiving by a manager a request associated with meta data, said meta data corresponding to data maintained separately from the meta data; and
informing, by the manager, another manager of an anticipated request to be received by the another manager to enable the another manager to prepare for the anticipated request.
16. The at least one program storage device of claim 15, wherein said method further comprises preparing by the another manager for the anticipated request, said preparing responsive to said informing.
17. The at least one program storage device of claim 16, wherein said preparing comprises managing contents of a cache in a data storage subsystem.
18. The at least one program storage device of claim 16, wherein said preparing comprises managing a user's or a client computer's access to the data.
19. The at least one program storage device of claim 16, wherein said method further comprises:
sending, by the manager, a reply to a communications unit in response to the request substantially simultaneously with said informing; and
receiving, by the another manager, the anticipated request, wherein said preparing begins before the receiving by the another manager.
20. The at least one program storage device of claim 17, wherein said managing contents comprises pre-fetching one or more data blocks from one or more storage media of the data storage subsystem whereby the data blocks are stored in the cache, the data blocks comprising at least some of the data.
US10/749,879 2003-12-31 2003-12-31 Method and system for managing data access requests utilizing storage meta data processing Abandoned US20050149562A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US10/749,879 US20050149562A1 (en) 2003-12-31 2003-12-31 Method and system for managing data access requests utilizing storage meta data processing
CNB2004100909773A CN1292352C (en) 2003-12-31 2004-11-11 Method and system for managing data access requests utilizing storage meta data processing
US11/854,002 US7818309B2 (en) 2003-12-31 2007-09-12 Method for managing data access requests utilizing storage meta data processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/749,879 US20050149562A1 (en) 2003-12-31 2003-12-31 Method and system for managing data access requests utilizing storage meta data processing

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US11/854,002 Continuation US7818309B2 (en) 2003-12-31 2007-09-12 Method for managing data access requests utilizing storage meta data processing

Publications (1)

Publication Number Publication Date
US20050149562A1 (en) 2005-07-07

Family

ID=34711153

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/749,879 Abandoned US20050149562A1 (en) 2003-12-31 2003-12-31 Method and system for managing data access requests utilizing storage meta data processing
US11/854,002 Expired - Fee Related US7818309B2 (en) 2003-12-31 2007-09-12 Method for managing data access requests utilizing storage meta data processing

Family Applications After (1)

Application Number Title Priority Date Filing Date
US11/854,002 Expired - Fee Related US7818309B2 (en) 2003-12-31 2007-09-12 Method for managing data access requests utilizing storage meta data processing

Country Status (2)

Country Link
US (2) US20050149562A1 (en)
CN (1) CN1292352C (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070265855A1 (en) * 2006-05-09 2007-11-15 Nokia Corporation mCARD USED FOR SHARING MEDIA-RELATED INFORMATION
US20080120487A1 (en) * 2006-11-21 2008-05-22 Ramakrishna Saripalli Address translation performance in virtualized environments
US20080201549A1 (en) * 2007-02-20 2008-08-21 Raytheon Company System and Method for Improving Data Caching
US20080263259A1 (en) * 2007-04-23 2008-10-23 Microsoft Corporation Hints model for optimization of storage devices connected to host and write optimization schema for storage devices
WO2009045884A2 (en) * 2007-09-28 2009-04-09 Intel Corporation Address translation caching and i/o cache performance improvement in virtualized environments
US20100138613A1 (en) * 2008-06-20 2010-06-03 Nokia Corporation Data Caching
US9632557B2 (en) 2011-09-30 2017-04-25 Intel Corporation Active state power management (ASPM) to reduce power consumption by PCI express components
US11520769B1 (en) * 2021-06-25 2022-12-06 International Business Machines Corporation Block level lock on data table

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070179854A1 (en) * 2006-01-30 2007-08-02 M-Systems Media predictive consignment
US8539124B1 (en) * 2010-03-31 2013-09-17 Emc Corporation Storage integration plugin for virtual servers
US8767334B2 (en) 2010-09-30 2014-07-01 International Business Machines Corporation System, method, and computer program product for creating a single library image from multiple independent tape libraries
US9645950B2 (en) * 2013-01-31 2017-05-09 Vmware, Inc. Low-cost backup and edge caching using unused disk blocks
US10489852B2 (en) * 2013-07-02 2019-11-26 Yodlee, Inc. Financial account authentication
US10534714B2 (en) * 2014-12-18 2020-01-14 Hewlett Packard Enterprise Development Lp Allocating cache memory on a per data object basis
US10831398B2 (en) 2016-09-19 2020-11-10 International Business Machines Corporation Storage device efficiency during data replication

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0773085A (en) 1993-07-02 1995-03-17 Hitachi Ltd Data processing system and look-ahead method for meta data
US5852743A (en) * 1996-07-12 1998-12-22 Twinhead International Corp. Method and apparatus for connecting a plug-and-play peripheral device to a computer
US7580972B2 (en) * 2001-12-12 2009-08-25 Valve Corporation Method and system for controlling bandwidth on client and server
US7113945B1 (en) * 2002-04-10 2006-09-26 Emc Corporation Virtual storage device that uses volatile memory
US7035854B2 (en) * 2002-04-23 2006-04-25 International Business Machines Corporation Content management system and methodology employing non-transferable access tokens to control data access

Patent Citations (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5852724A (en) * 1996-06-18 1998-12-22 Veritas Software Corp. System and method for "N" primary servers to fail over to "1" secondary server
US5761678A (en) * 1996-06-26 1998-06-02 International Business Machines Corporation Creation of clone storage area with identification of base storage area and deferred cloning of metadata
US5872931A (en) * 1996-08-13 1999-02-16 Veritas Software Corp. Management agent automatically executes corrective scripts in accordance with occurrences of specified events regardless of conditions of management interface and management engine
US5832515A (en) * 1996-09-12 1998-11-03 Veritas Software Log device layered transparently within a filesystem paradigm
US5996054A (en) * 1996-09-12 1999-11-30 Veritas Software Corp. Efficient virtualized mapping space for log device data storage system
US6021408A (en) * 1996-09-12 2000-02-01 Veritas Software Corp. Methods for operating a log device
US5944782A (en) * 1996-10-16 1999-08-31 Veritas Software Corporation Event management system for distributed computing environment
US5819296A (en) * 1996-10-31 1998-10-06 Veritas Software Corporation Method and apparatus for moving large numbers of data files between computer systems using import and export processes employing a directory of file handles
US6487644B1 (en) * 1996-11-22 2002-11-26 Veritas Operating Corporation System and method for multiplexed data back-up to a storage tape and restore operations using client identification tags
US6119222A (en) * 1996-12-23 2000-09-12 Texas Instruments Incorporated Combined branch prediction and cache prefetch in a microprocessor
US6044373A (en) * 1997-09-29 2000-03-28 International Business Machines Corporation Object-oriented access control method and system for military and commercial file systems
US6256645B1 (en) * 1998-02-14 2001-07-03 International Business Machines Corporation Storage manager which sets the size of an initial-free area assigned to a requesting application according to statistical data
US6163773A (en) * 1998-05-05 2000-12-19 International Business Machines Corporation Data storage system with trained predictive cache management engine
US6145012A (en) * 1998-10-14 2000-11-07 Veritas Software Corporation Apparatus and method for efficiently updating files in computer networks
US6401193B1 (en) * 1998-10-26 2002-06-04 Infineon Technologies North America Corp. Dynamic data prefetching based on program counter and addressing mode
US6282710B1 (en) * 1998-10-28 2001-08-28 Veritas Software Corp. Apparatus and method for externally initiating automatic execution of media placed in basic removable disc drives
US6381602B1 (en) * 1999-01-26 2002-04-30 Microsoft Corporation Enforcing access control on resources at a location other than the source location
US6502174B1 (en) * 1999-03-03 2002-12-31 International Business Machines Corporation Method and system for managing meta data
US6389420B1 (en) * 1999-09-30 2002-05-14 Emc Corporation File manager providing distributed locking and metadata management for shared data access by clients relinquishing locks after time period expiration
US6952737B1 (en) * 2000-03-03 2005-10-04 Intel Corporation Method and apparatus for accessing remote storage in a distributed storage cluster architecture
US6654766B1 (en) * 2000-04-04 2003-11-25 International Business Machines Corporation System and method for caching sets of objects
US20020078239A1 (en) * 2000-12-18 2002-06-20 Howard John H. Direct access from client to storage device
US6982960B2 (en) * 2001-03-09 2006-01-03 Motorola, Inc. Protocol for self-organizing network using a logical spanning tree backbone
US20030005219A1 (en) * 2001-06-29 2003-01-02 Royer Robert J. Partitioning cache metadata state
US20030131097A1 (en) * 2002-01-09 2003-07-10 Stephane Kasriel Interactive path analysis
US20030149694A1 (en) * 2002-02-05 2003-08-07 IBM Corporation Path-based ranking of unvisited web pages
US20030187860A1 (en) * 2002-03-29 2003-10-02 Panasas, Inc. Using whole-file and dual-mode locks to reduce locking traffic in data storage systems
US20050086206A1 (en) * 2003-10-15 2005-04-21 International Business Machines Corporation System, method, and service for collaborative focused crawling of documents on a network
US20060085427A1 (en) * 2004-10-15 2006-04-20 Microsoft Corporation Method and apparatus for intranet searching
US20060085397A1 (en) * 2004-10-15 2006-04-20 Microsoft Corporation Method and apparatus for intranet searching

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007132302A2 (en) * 2006-05-09 2007-11-22 Nokia Corporation mCARD USED FOR SHARING MEDIA-RELATED INFORMATION
WO2007132302A3 (en) * 2006-05-09 2008-01-31 Nokia Corp mCARD USED FOR SHARING MEDIA-RELATED INFORMATION
US20070265855A1 (en) * 2006-05-09 2007-11-15 Nokia Corporation mCARD USED FOR SHARING MEDIA-RELATED INFORMATION
US7707383B2 (en) 2006-11-21 2010-04-27 Intel Corporation Address translation performance in virtualized environments
US20080120487A1 (en) * 2006-11-21 2008-05-22 Ramakrishna Saripalli Address translation performance in virtualized environments
US20080201549A1 (en) * 2007-02-20 2008-08-21 Raytheon Company System and Method for Improving Data Caching
US20080263259A1 (en) * 2007-04-23 2008-10-23 Microsoft Corporation Hints model for optimization of storage devices connected to host and write optimization schema for storage devices
US7853759B2 (en) * 2007-04-23 2010-12-14 Microsoft Corporation Hints model for optimization of storage devices connected to host and write optimization schema for storage devices
WO2009045884A2 (en) * 2007-09-28 2009-04-09 Intel Corporation Address translation caching and i/o cache performance improvement in virtualized environments
WO2009045884A3 (en) * 2007-09-28 2009-06-25 Intel Corp Address translation caching and i/o cache performance improvement in virtualized environments
US8161243B1 (en) 2007-09-28 2012-04-17 Intel Corporation Address translation caching and I/O cache performance improvement in virtualized environments
US8407422B2 (en) 2007-09-28 2013-03-26 Intel Corporation Address translation caching and I/O cache performance improvement in virtualized environments
US20100138613A1 (en) * 2008-06-20 2010-06-03 Nokia Corporation Data Caching
US9632557B2 (en) 2011-09-30 2017-04-25 Intel Corporation Active state power management (ASPM) to reduce power consumption by PCI express components
US11520769B1 (en) * 2021-06-25 2022-12-06 International Business Machines Corporation Block level lock on data table

Also Published As

Publication number Publication date
US7818309B2 (en) 2010-10-19
US20070299809A1 (en) 2007-12-27
CN1637722A (en) 2005-07-13
CN1292352C (en) 2006-12-27

Similar Documents

Publication Publication Date Title
US7818309B2 (en) Method for managing data access requests utilizing storage meta data processing
US10176057B2 (en) Multi-lock caches
CN110998557B (en) High availability database system and method via distributed storage
US8176233B1 (en) Using non-volatile memory resources to enable a virtual buffer pool for a database application
US8868610B2 (en) File system with optimistic I/O operations on shared storage
US20040049636A1 (en) Technique for data transfer
US8990954B2 (en) Distributed lock manager for file system objects in a shared file system
EP2352090B1 (en) System accessing shared data by a plurality of application servers
EP3262512B1 (en) Application cache replication to secondary application(s)
CN101556559A (en) Transactional memory execution utilizing virtual memory
US9229869B1 (en) Multi-lock caches
US7752386B1 (en) Application performance acceleration
US8898357B1 (en) Storage integration plugin for virtual servers
CN113703672A (en) Hyper-converged system, IO request issuing method thereof and physical server
US20130103922A9 (en) Method, computer program product and apparatus for accelerating responses to requests for transactions involving data operations
CN117480500A (en) Historical information in a director-based database system for transaction consistency
CN117461029A (en) Director-based database system for transaction consistency
US10642745B2 (en) Key invalidation in cache systems
US11366594B2 (en) In-band extent locking
CN110569112A (en) Log data writing method and object storage daemon device
US20190102288A1 (en) Control modules, multi-level data storage devices, multi-level data storage methods, and computer readable media
US11216439B2 (en) Auto-expiring locks based on object stamping
US11341163B1 (en) Multi-level replication filtering for a distributed database
US11886439B1 (en) Asynchronous change data capture for direct external transmission
US20210132801A1 (en) Optimized access to high-speed storage device

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BROWNE, MICHAEL E.;WIGHTWICK, GLENN R.;REEL/FRAME:014860/0534;SIGNING DATES FROM 20031223 TO 20031224

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION