WO1999012101A2 - Method, software and apparatus for saving, using and recovering data - Google Patents
- Publication number: WO1999012101A2 (PCT/US1998/018863)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- disk
- location
- time
- buffer
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/1451—Management of the data involved in backup or backup restore by selection of backup contents
- G06F11/1435—Saving, restoring, recovering or retrying at system level using file system or storage system metadata
- G06F11/1469—Backup restoration techniques
- G06F11/1471—Saving, restoring, recovering or retrying involving logging of persistent data for recovery
- G06F16/1873—Versioning file systems, temporal file systems, e.g. file system supporting different historic versions of files
- G06F11/1456—Hardware arrangements for backup
- G06F11/1464—Management of the backup or restore process for networked environments
Definitions
- the present invention pertains generally to the storage of digital data, and more particularly to a method and apparatus for the backup and recovery of data stored by a digital computer.
- the applications that run on computers typically operate under an operating system (OS) that has the responsibility, among other things, to save and recall information from a hard disk.
- the information is typically organized in files.
- the OS maintains a method of mapping between a file and the associated locations on a hard disk at which the file's information is kept.
- computers are generally operated in a manner where information (data) is read from and written to a disk for permanent storage.
- a backup (copy) is typically made of the disk to address two types of problems. First, the disk itself may physically fail, making the information it had contained inaccessible. Second, if the information on disk changes and it is determined that the original state is desired, a user uses the backup to recover this original state. Backups can be made to the same disk or to alternate media (disk, tape drive, etc.).
- the present invention provides a method and apparatus for information recovery focusing, in one example embodiment, on the second situation not involving a physical disk failure, but where information is altered and access to its original state may be desired.
- Some typical examples would be: a computer system "crashing" during an update of a piece of information, thus leaving it in neither the original nor the new state; the user changing information only later to desire to restore (or just reference) the original state; a computer virus altering information; or a file being deleted accidentally.
- Tape backup traditionally involves duplicating a disk's contents, either organized as files or a disk sector image, onto a magnetic tape. Such a tape is typically removable and therefore can be stored off-site to provide recovery due to a disk drive malfunction or even to an entire site (including the disk drive) being destroyed, for example, in a fire.
- Tape backup focuses on backing up an entire disk or specific files at a given moment in time. Typically the process will take a long time and is thus done infrequently (e.g., in the evening). Incremental backups involve only saving data that has changed since the last backup, thus reducing the amount of tape and backup time required. However, a full system recovery requires that the initial full system backup and all subsequent incremental backups be read and combined in order to restore to the time of the last incremental backup. The key shortcoming of tape backup is that you may not have performed a recent backup and therefore may lose the information or work that was subsequently generated. The present invention addresses this problem by employing a new method of saving changing disk information states providing for a continuously running disk backup system. This method could be implemented on a tape drive, as a tape drive does share the basic random read and write abilities of a disk drive.
- WORM drives cannot provide continuous backup of changing disk information because eventually they will fill.
- a RAID system is a collection of drives which collectively act as a single storage system, which can tolerate the failure of a drive without losing data, and which can operate independently of each other.
- the two key techniques involved in RAID are striping and mirroring. Striping has data split across drives, resulting in higher data throughput. Mirroring provides redundancy by duplicating all data from one drive on another drive.
- RAID systems are concerned with speed and data redundancy as a form of backup against physical drive failures. They do not address reverting back in time to retrieve information that has since changed. Therefore RAID is not relevant to the present invention other than being an option to use in conjunction with the present invention to provide means for recovery from both physical disk drive failures as well as undesired changes.
- the Tilios Operating System was developed several years ago by the assignee hereof. It provided for securing a disk's state and then allowing the user to continue on and modify it.
- the operating system maintained both the secured and current states. Logging of keystrokes was performed so that in the event of a crash, where the current state is lost or becomes invalid, the disk could easily revert to its secured state and the log replayed. This would recover all disk information up to the time of the crash by, for example, simulating a user editing a file.
- the secured disk image was always available along with the current, so that information could be copied forward in time; i.e., information saved at the time of the securing backup could be copied to the current state.
- the Tilios Operating System could perform a more rapid backup because all the work was performed on the disk (e.g., there was no transfer to tape) and techniques were used to take advantage of the incremental nature of change (i.e., the current and secured states typically only had minor differences). Nonetheless, the user was still faced with selecting specific times at which to secure (backup) and the replay method for keystrokes was not entirely reliable for recreating states subsequent to the backup.
- the keystrokes may have been commands copying data from a floppy disk or the Internet, both of whose interactions are beyond the scope of the CPU and disk to recreate.
- a RAID system only deals with backup in the context of physical drive failures.
- Tape, WORM, Tilios, and file copies also address backup in the context of recovering changed (lost) information.
- the traditional backup process involves stopping at a specific time and making a duplicate copy of the disk's information. This involves looking at the entire disk and making a copy such that the entire disk can be recreated or specific information recalled. This process typically involves writing to a tape. Alternatively, a user may backup a specific set of files by creating duplicates that represent frozen copies from a specific time. It is assumed the originals will go on to be altered. This process typically involves creating a backup file on the same disk drive with the original. Note that a "disk" may actually be one or more disk drives or devices acting in the manner of a disk drive (storage means).
- the technology of the present invention seeks to eliminate the need to pause and make backups or decide which files should be backed up in the context of short term information recovery. That is, recovering information that was known reasonably recently as opposed, for example, to recovering information that has been lost for a long period of time.
- a final example of why a user would want to revert to a backup is when the operating system gets corrupted (the executable or data files that are essential to run a computer) due, for example, to installing new software or device drivers that don't work.
- U.S. Patent No. 5,325,579 entitled “Fault Tolerant Computer with Archival Rollback Capabilities", to Long et al.
- the '579 patent discloses a storage device which includes processing circuitry for detecting access requests to alter data in respective locations of a storage device, and, prior to executing such requests, storing the data in such locations in an audit partition region of the storage device. The device of the '579 patent can subsequently restore the data retained in the audit partition region to its previous location on the device, and thereby return the storage device to a previous state.
- the present invention is a method and apparatus for disk based information recovery in computer systems. This applies to all types of computer systems that utilize one or more hard disks (or equivalent), where the disks represent a nonvolatile storage system or systems. Such types of computers may be, but are not limited to, personal computers, network servers, file servers, or mainframes.
- the invention stipulates using the otherwise unused pages or special dedicated pages on a hard disk in a circular fashion to store the recent original states of information on the disk that is altered. Collectively these extra pages represent a history buffer. These history pages can be intermixed with the OS's data and thus the present invention relies on re-mapping of disk locations between the OS and the actual hard disk.
- the history buffer contains the saved original states of altered information; the saved information may be disk sectors, files, or portions of files.
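As a minimal sketch of the circular history buffer described above: before a location is overwritten, its original value and a timestamp are appended; when the buffer is full, the oldest saved states are recycled. The class and method names here are illustrative assumptions, not the patent's implementation.

```python
import time
from collections import deque

class HistoryBuffer:
    """Circular store of (timestamp, location, original data) entries."""

    def __init__(self, capacity):
        self.capacity = capacity   # max number of saved original states
        self.entries = deque()     # oldest entry at the left

    def save_original(self, location, data, timestamp=None):
        if len(self.entries) == self.capacity:
            self.entries.popleft()  # recycle the oldest original state
        self.entries.append((timestamp or time.time(), location, data))

    def original_at(self, location, as_of):
        # The oldest entry for `location` recorded at or after `as_of`
        # holds the value the location had at time `as_of`.
        for ts, loc, data in self.entries:
            if loc == location and ts >= as_of:
                return data
        return None  # no saved state: the current data is still valid
```

Note how recycling bounds the buffer: once an original state is discarded, reversion to times before that state is no longer possible, which is the "limit" the patent refers to.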
- the invention provides a method, and corresponding apparatus, of protecting the resources on a computer necessary to operate a data storage device, wherein the computer has a processor for executing program code.
- the method disallows the processor from altering the resources unless program code execution passes through a gate which validates that the code executed by the processor is trusted code and is authorized to alter the resources.
- the trusted code re-enables the protection of the resources prior to the processor returning to execution of non-trusted code.
- the invention provides a method, and corresponding apparatus, comprising recording original states of altered data on a disk, over some period of time, sufficient to recreate the disk's image at various points within the period of time, and writing the recorded data as well as the current operating system (OS) visible image of the disk to another secondary storage medium, such that the medium can be used to recreate the disk's OS visible state at various points in time.
- Figure 1 illustrates the operation of a history buffer according to the present invention
- Figure 2 illustrates the operation of the history buffer to restore a virtual drive that reflects the state of another drive at a previous point in time.
- Figure 3 illustrates the reversion of a simulated or virtual drive to a selected point in time.
- Figure 4 illustrates the structure of a history buffer according to the present invention.
- Figures 5A and 5B illustrate the current drive read/write algorithm.
- Figures 6A and 6B illustrate the simulated drive read/write algorithm.
- Figure 7 illustrates the main area and extra pages of a storage disk.
- Figure 8 illustrates how two maps can be used to represent the main area and history buffer of a disk.
- Figure 9 illustrates short burst write activity to a disk.
- Figure 10 illustrates an extended period of reasonably continuous write activity to a disk.
- Figure 11 illustrates a case of frequent write activity to a disk, but with sufficient gaps to establish safe points.
- Figure 12 illustrates two maps referencing pages in both the main and extra areas.
- Figure 13 illustrates the effect of swapping so that the history map only references pages in the extra page area and the main map only references pages in the main area.
- Figure 14 shows the main area map's links removed.
- Figure 15 illustrates a three-way swap.
- Figures 16 -23 illustrate a write example, wherein the disk has multiple page locations and some page locations are assigned to the main area and the other for extra pages.
- Figures 24-25 illustrate allocation of the history buffer.
- Figures 26-31 illustrate reverting a disk to a prior state.
- Figures 32-33 illustrate how a disk read access moves from the operating system through the engine to the disk drive.
- Figure 34 illustrates the blocking of a disk.
- Figures 35-40 illustrate writing to a disk.
- Figure 47 illustrates the relationship between maps of a disk.
- Figure 48 illustrates a sequence of writing to a file.
- Figure 49 illustrates a normal write operation.
- Figure 50 illustrates the Move Method of writing data to a disk.
- Figure 51 illustrates the Temp Method of writing data to a disk.
- Figure 52 illustrates a single frame for the Always and File Methods of writing data to a disk.
- Figure 53 illustrates an external backup procedure.
- Figures 54-64 illustrate low-level swapping.
- Figures 65-60 illustrate processing a read during a swap.
- Figures 61-62 illustrate example embodiments of the invention.
- Figure 63 illustrates a conventional computer architecture.
- Figure 64 illustrates an embodiment of the invention wherein resources are protected.
- Figure 65 illustrates alternate embodiments where the present invention can be implemented.
- the present invention provides methods of returning to any prior state in time of a disk, up to a limit. By allowing return to any time (within the current limit) the user is relieved of having to specifically call out points at which to make backups and having to decide what information is backed up. Because there is a limit in time as to how far one can go back and retrieve information, the technology focuses on short term information recovery.
- in the invention, information is maintained for a reasonable period of time and then is automatically discarded. What is included in the backup information is substantially all of the activity to the disk. This allows a user to return to any disk state at any time, up to a limit. This limit is determined by the amount of available backup storage and the rate at which information is written (user activity).
- the present invention is de-coupled from the operating system. Since this embodiment of the present invention can revert a disk back to an earlier state it can recover from bugs in the operating system that might otherwise cause catastrophic information loss by a single improper disk write. Backup techniques that are tightly coupled with an operating system and its filing system are less able to recover from bugs in themselves.
- the present invention can also be implemented as part of the operating system.
- the saving process: maintaining the original states of disk-based information prior to permanent changes being made to the disk.
- the recovery process: on one hand, the ability to simulate a time-reverted disk while at the same time allowing the user to continue using the current disk (thus, for example, allowing information from the past to be copied forward into the current state); on the other hand, the ability to completely revert a disk to a prior state in time.
- the management process: providing utilities that operate on the saved information to determine available versions of a file, look for virus activity, and perform other useful history-enabled operations.
- disk may actually be a portion of a physical disk drive, may be one disk drive, or more than one disk drive or device, whose storage is identified and used as an independent storage means by the operating system from other storage means.
- a PC might have a floppy disk as drive A, a hard disk with one partition C, another hard disk with two partitions D and E, and a RAID disk array set up as drive F.
- the processes described herein are applied individually to these independently identified disk drives regardless of whether they are physically mapped to part of a hard disk, an entire hard disk, or multiple hard disk drives or other storage means.
- the history buffer works by recording the original state of sectors on a disk prior to their being changed. The time of change is also recorded, although it is not essential; in some cases it is necessary only to know the order in which changes have been made, not the time at which they were made.
- the process is illustrated in Figure 1.
- a sector 10 contains value A, and a request to write a value B occurs. Our method involves intercepting the write, reading the sector location 10 (picking up the A value), writing this original value into the circular history buffer, and then returning to complete the write of the new B value.
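The intercepted-write sequence of Figure 1 can be sketched as follows, modeling the disk as a simple mapping of sector numbers to values; `intercepted_write` and the dict-based disk are illustrative assumptions, not the patent's actual driver code.

```python
def intercepted_write(disk, history, sector, new_value, timestamp):
    """Copy-before-write: log the original state, then complete the write.

    disk:    dict mapping sector number -> current value
    history: list of (timestamp, sector, original_value), oldest first
    """
    original = disk.get(sector)                    # pick up the original (e.g., "A")
    history.append((timestamp, sector, original))  # save it in the history buffer
    disk[sector] = new_value                       # complete the write (e.g., "B")
```

Note that the extra cost falls entirely on the write path (one read plus one extra write), which is why, as the next point observes, read performance is unaffected.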
- the technology has no impact on read performance, which is visible to the user since an application cannot continue executing until all desired reads are complete.
- Another approach to saving original states when information changes is to re-direct the write to an alternate location.
- a note is made in a map about this re-direction. For example, assume there is some old data at disk location X that gets overwritten with a new current state. The current data that is expected at disk location X really gets stored at Y. The original "old" data at X is left at this location. Later, if the "current" data is read at the original location X, the system knows through consulting the map that the data is really stored at Y and so re-directs the read. Eventually the old data at X would become very old, and as new locations are needed to map changes, the location would be recycled.
- mapping is required for both read and write disk transfers. This adds overhead during the crucial read accesses where added processing is noticeable to the user. Further, although it may seem optimal to simply re-direct a write instead of actually moving data, the re-direction involves updating a map. Since this map must be maintained constantly on disk in order to recover from an unexpected crash, and since a map update would likely involve a read and a write access, the total overhead in this approach may be similar to simply moving the data (two writes and a read).
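The redirect-on-write alternative, and the read-path mapping overhead it introduces, can be sketched as follows. The `RedirectingDisk` class and its field names are hypothetical, and the map here lives in memory, glossing over the on-disk map persistence that the paragraph above identifies as the real cost.

```python
class RedirectingDisk:
    """Redirect-on-write sketch: writes go to alternate locations,
    a map records OS-visible location -> actual location, and every
    read must consult the map."""

    def __init__(self, sectors):
        self.sectors = dict(sectors)   # physical location -> data
        self.map = {}                  # OS-visible location X -> actual location Y
        self.next_free = max(sectors) + 1

    def write(self, location, data):
        alt = self.next_free           # re-direct the write to a new location Y
        self.next_free += 1
        self.sectors[alt] = data
        self.map[location] = alt       # note the re-direction in the map

    def read(self, location):
        # Mapping overhead on every read: no entry means X is unchanged.
        actual = self.map.get(location, location)
        return self.sectors[actual]

    def read_old(self, location):
        return self.sectors[location]  # the original data is left at X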
- the Tilios operating system kept a current and a secured state of the disk.
- the secured state was a form of a backup that was frozen at a particular time.
- the current state was defined in terms of differences (recent changes combined with the original data). In the event of a crash the system reverted to the secured state. One could not count on the current version stored on disk since much of it may never have been actually written out to the disk — the changes were made in RAM and had not been flushed from the cache (written) at the time of the crash.
- notes can be made about what task requested the writes. This provides an audit trail: by looking through the history buffer, corrupted information can not only be located and the proper information restored, but the task (e.g., a virus) responsible for the damage can also be determined.
- entries in the history buffer can indicate when the system was booted, which typically provides a good reversion point if needed. Also, by monitoring the operating system's cache status, a note can be made about when the operating system felt it had flushed all information to disk. Again, this would be an excellent reversion point.
- Keystrokes and other user interactions can also be logged in the history buffer. Such information is useful in helping to identify what a user was doing at a given time. For example, as the user moves the time selector back and forth in establishing a reversion time, the system can present a summary of the user's interaction around that time based on keystrokes and other user interaction information saved at that time. Examples of other information that could be presented as a user looks back in time are the names of files that were being accessed. Searches could be performed for specific file names or keystroke patterns to assist in locating reversion times of interest. Another example would be screen shots. The computer could periodically take snapshots of the user's screen, perhaps every five minutes, and save these in the history buffer.
- the present invention provides for two basic forms of recovery.
- the system can support another drive D, which instead of being a real disk drive, is a simulated or virtual drive whose image is created by combining information on the original drive C and the history buffer (which is typically part of drive C).
- the process of looking back in time involves setting a reference time for drive D and then simply accessing it as if it really was another physical disk drive whose contents had been copied from drive C at the specified time.
- the second form of recovery involves reverting the main drive C back to an earlier state. In this situation there is no simulated D drive from which old information is brought forward into the present, but simply the main drive is entirely brought back in time. This recovery mode is particularly useful when the current state has become unusable (cannot boot or access files) or undesirable (an installation of new software or hardware drivers does not work as expected).
- the implementation can be as simple as copying the appropriate saved original data back into place, updating the history buffer to reflect the reversion, and restarting the system.
- the process of reverting the main drive C entirely back can also be done by first using the simulated drive D to get back to a desirable point in time. This gives the user a chance to confirm and possibly correct some information, and then request the software of the present invention to "copy" drive D to C. It should be noted that this entire reversion is still logged in the history buffer, thus making the reversion itself reversible. In other words, assuming sufficient space in the history buffer, a user could create a disk state S1 at time T1, continue on to a new state S2 at a later time T2, then S3 at T3, and then realize there is a problem. So, the user at time T4 could entirely revert the disk to state S1.
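A minimal sketch of how the simulated drive D's image at a reference time could be derived from drive C plus the history buffer: for each sector, the oldest history entry recorded at or after the reference time holds the value the sector had at that time; sectors with no such entry are unchanged on C. The function name and the dict-based model are illustrative assumptions.

```python
def simulate_drive(current, history, ref_time):
    """Return drive D's image as of ref_time.

    current: dict of sector -> value on drive C.
    history: list of (timestamp, sector, original_value), oldest first,
             where original_value is the state the sector held *before*
             the write at `timestamp`.
    """
    image = dict(current)
    # Walk newest-to-oldest so that for each sector the OLDEST entry at
    # or after ref_time is the one that finally lands in the image.
    for ts, sector, original in reversed(history):
        if ts >= ref_time:
            image[sector] = original
    return image
```

For example, if sector 0 held "A", was overwritten at t=5 (saving "A") and again at t=9 (saving "B"), reverting to t=3 yields "A", reverting to t=7 yields "B", and reverting to t=10 yields the current value.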
- the present invention does not offer recovery in the case where the disk drive physically fails. However, it does further enable the use of standard full drive backup (such as to tape) by allowing the user to back up the simulated drive D instead of C. This means the user can come to a point in their work where they would like the system backed up, set the time for drive D to the current time (thus freezing it), start a backup based on D, and then continue working without having to wait for the backup to complete, which may save a substantial amount of time.
- the Bemis patent U.S. Patent No. 5,553,160, teaches a related method where during a backup, a write request to a disk is trapped. If the disk location being written has not yet been backed up, the original contents are copied to an alternate temporary storage device in order to allow the backup to proceed.
- while the present invention offers the same results in this limited situation (that is, a backup of a disk or system of disks at a specific frozen time), it does so without specifically being aware that a backup, as opposed to any other application, is being performed. Therefore, unlike Bemis, there is no impact on the format of the backup (extra original state information is not appended to its end) or on the backup and restore algorithms. Further, the present invention and Bemis differ in process: the present invention is concerned with simulating a disk frozen and reverted to a specific time, whereas Bemis focuses on the flow of disk information during a specific backup.
- the present invention approach differs from one where real copies are frequently made by allowing a relatively small amount of disk space to effectively represent all possible backups made during the use of the disk, limited by the size of the history buffer.
- By tracking only differences in the history buffer the amount of information that is transferred in order to create a backup is reduced, as compared to backing up the entire system every time any information was changed.
- Incremental backups are typically designed to start from some reference point and re-log only files that have changed since the last backup. A full system recovery requires one to go back to the reference point and merge in all the incremental changes from that point.
- the present invention starts with the current "bad" state and works backwards through a history of incremental changes.
- the present invention is not designed to replace traditional tape backup approaches, as they offer recovery from physical drive failures and do guarantee that information is available from a specific time. Such a backup is totally guaranteed in terms of what it provides.
- the present invention on the other hand, can only go back in time as far as the history buffer's size permits. The amount of time depends on the user's write activity. Writing heavily to a disk reduces the distance back in time that original states are available. However, with reasonably large history buffers and average usage there will be an excellent chance any desired backup state within a predictable period will be available.
- the system monitors the average rate at which data is written to (and thus discarded from) the circular history buffer. Any sudden increase in usage, or the look-back time nearing a user-specified minimum, will generate an alert to the user. What is meant by look-back time is the amount of time that the system and method of the present invention can go back from the current time in recreating a backup.
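The look-back estimate described above reduces to simple arithmetic: buffer capacity divided by the average rate at which original states are pushed into (and thus expired from) the buffer. These function names and the byte-based units are illustrative assumptions.

```python
def look_back_seconds(buffer_bytes, avg_write_bytes_per_sec):
    """Estimate how far back in time the history buffer can reach."""
    if avg_write_bytes_per_sec <= 0:
        return float("inf")  # no write activity: history never recycles
    return buffer_bytes / avg_write_bytes_per_sec

def should_alert(buffer_bytes, avg_write_bytes_per_sec, min_seconds):
    """Alert when the estimated look-back time nears the user's minimum."""
    return look_back_seconds(buffer_bytes, avg_write_bytes_per_sec) < min_seconds
```

For instance, a 600 MB history buffer consumed at an average of 10 KB/s of logged original states reaches back roughly 60,000 seconds (about 16.7 hours); heavier write activity shortens that window, as the text notes.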
- a feature of information recovery under the sector based implementation of the present invention is utilizing the time stamps associated with all the saved original data in the history buffer to locate periods during which the operating system is likely to have flushed everything to disk. Such a point is identified by a sufficiently large gap in time of disk activity.
- the recovery interface would normally automatically snap to these "safe" points as, for example, one moves a recovery time slider in the same way a graphics program will snap to grid points.
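The safe-point detection and slider snapping just described can be sketched as a scan of the history buffer's timestamps for quiet gaps; `safe_points`, `snap`, and the choice of placing the safe point just after the gap opens are illustrative assumptions.

```python
def safe_points(timestamps, min_gap):
    """Given sorted write timestamps from the history buffer, return a
    reversion time inside every gap of disk inactivity >= min_gap,
    where the OS has likely flushed everything to disk."""
    return [timestamps[i] + min_gap
            for i in range(len(timestamps) - 1)
            if timestamps[i + 1] - timestamps[i] >= min_gap]

def snap(requested, points):
    """Snap a requested reversion time to the nearest safe point,
    like a graphics program snapping to grid points."""
    return min(points, key=lambda p: abs(p - requested)) if points else requested
```

With writes at t = 0, 1, 2, 30, 31, 60 and a 10-second threshold, the quiet stretches after t=2 and t=31 yield safe points at t=12 and t=41, and a slider dragged to t=15 snaps to t=12.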
- Figure 3 illustrates a time selection interface utilizing a sliding bar. The darkest area at the left in Figure 3 (arrow 20) indicates time to which a reversion is possible but which may disappear should disk (history buffer) space no longer be available.
- Figure 3 illustrates a user interface containing a slider, represented by arrow 22.
- the software of the present invention may scan the directory structure of the reverted image and adjust it to make it valid.
- This functionality is also provided, for example, by the standard ScanDisk software provided as part of the Windows 3.x and 95 operating systems. This adjustment constitutes altering the reverted version. The changes are held in a temporary area and get discarded as soon as the user terminates the reversion.
- although a reverted simulated disk D might be thought of as read-only, since it is initially a view looking back in time, it is in fact fully changeable.
- the present invention can be implemented in either the main CPU (software solution) or as substantially part of a disk controller (hardware solution) thus providing true isolation from the operating system.
- consider a disk having the present invention embedded therein, including the disk controller.
- when installed in a computer, the disk would appear to be two disks, drives C and D.
- the disk C might report as having 1000 megabytes of storage. However, the disk actually has 1560 megabytes of storage where the extra 560 megabytes are reserved for the history buffer.
- a small independent interface to the disk would be provided that would indicate the maximum reversion time and the current reversion for drive D. Physically this interface might be about the size of a clock with two time/date displays and other indicators and selectors. Adjusting the current reversion time informs the disk as to how it should present its simulated drive D.
- This type of hardware solution could work with any operating system as it is truly transparent (i.e., it would not know that drive D is other than a normal disk drive).
- additional software could be added in the operating system boot process that would allow a reversion to occur on request by the user. This means would be similar to the common practice of pressing F2 during boot to divert to editing the CMOS (PC settings). Once requested, this pre-operating-system boot reversion could be accomplished either by booting code to perform a reversion under the present invention that would be specially located on the disk, or by booting from other alternate non-volatile storage (like ROM or FLASH). Of course, one could also simply boot from a "recovery" floppy disk, thus avoiding the need for any special hardware or operating system boot process changes.
- mapping tables could be maintained in battery backed up RAM or FLASH on the disk drive (controller).
- the tables could be quickly modified without disk access and yet the mapping information would not be lost in the event of the main CPU crashing. Since the mapping information is part of the disk drive, there is no issue regarding keeping the disk actually organized as the operating system expects it. See the Information Recovery Process section hereof for more discussion on how a hardware implementation could work.
- the other case of a simulated drive is more complex.
- D simulated disk
- C main disk
- the simulated drive D is "created" by trapping all read and write disk transfers. For any read accesses a map is consulted that indicates the true location of desired disk sectors.
- This map is initially constructed when the reversion time is selected. Typically it will be a form of tree or table that maps an original location to an actual location, for some number of contiguous sectors. If there is no entry in the map one can assume the original location is still valid (no mapping required). However, the map must be continually maintained, as writes to drive C could cause the moving of "frozen" data that can be accessed through drive D.
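The read-side consultation of this map can be sketched as follows; the dictionary-based map and all names here are illustrative assumptions, not the patent's own structures.

```python
def resolve_read(sector, reversion_map):
    """Return the true location of a sector requested from the
    simulated drive D. An absent entry means the original location
    is still valid and no mapping is required."""
    return reversion_map.get(sector, sector)

# Example: sectors 10-11 were displaced when later writes to drive C
# moved their "frozen" data elsewhere (locations are made up).
rmap = {10: 5000, 11: 5001}
```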
- the present invention's file implementation involves trapping high level (e.g., open and close) requests.
- the technology of the present invention can be implemented either at the sector (disk) level or as part of an operating system's filing system at the file level.
- the concept of reverting the sectors on a disk, either through a virtual disk or on the real disk, back in time to a prior state is easy to understand. For example, if you reverted back to 12:19, you would expect to see the data on the disk in its form as of that time.
- in a file level implementation of the present invention with tight integration into the operating system, the method and understanding by the user of retrieving information may significantly differ, although it is still based on the principle of saving original states of recent changes. For example, an operating system may simply present the user with a list of prior states for a given file, and allow any to be copied forward. This is quite different from reverting a disk.
- the file level implementation of the present invention automatically keeps backup copies of files, prior to their being changed, in a circular system. As new backup files are added and the amount of disk storage dedicated to these backup files approaches a preset limit, older backup files would be automatically discarded. Of course, if there is available free space on disk (as is known to the filing system), this limit can be ignored.
- Discarding would occur only if the free space must be recovered, at which point the preset space limit for the history buffer is again maintained.
- although the present invention at the file level is possible, it may be more difficult to implement, prove correct, and isolate as a subsystem. Such an implementation could save an entire prior file state just before a file is opened and the intention to modify is clear, or it could attempt to save only the prior states of the portion of the file being modified. The latter starts to resemble the sector based method but differs in where in the computer system the technology of the present invention is inserted: at a high level in the operating system's filing system or lower down in the disk I/O path.
- a file level implementation must also keep directory structure information in the history system, if the user is to be allowed to view a reverted disk's directory structure. For example, an entire subdirectory of files may be moved to another location in the filing system hierarchy. This case is handled if the directory and mapping information is assumed to be kept in system files that would be processed by the present invention.
- a limited implementation of the present invention at the file level only seeks to allow access to the saved backup file copies and not simulate an entirely reverted disk. Thus the user might be allowed to simply view what is in the circular history system and open/retrieve desired files.
- The History Buffer: in talking about drive C the intention is to refer to a portion of a disk that contains information organized under a single filing system. Often a disk will have only one such filing system, and therefore the terms drive C and disk C become interchangeable.
- a real hard disk drive can be organized into independent partitions where each has its own filing system and organizational method.
- Under the DOS operating system, in order to de-couple the present invention's software from the operating system, the history buffer is allocated in its own partition and managed directly and only by the software of the present invention (as opposed to the operating system). Any attempt to represent the history buffer as a hidden or other file type accessible on drive C opens the history buffer to interaction with the operating system. For example, the operating system may choose to move the file.
- Partitions represent an established method of subdividing a hard disk under the DOS operating system. Note that the Windows 3.x and 95 operating systems essentially run on top of DOS and its partitioning of a disk. Different partitions can be managed by different operating systems without interfering with each other. Thus, in essence, the present invention software becomes the operating system for its dedicated history buffer partition.
- the technology of the present invention involves logging disk sector writes and other activity to a circular history buffer.
- a circular history buffer is a fixed size buffer where when writing and reaching the end of the buffer, one wraps back to its beginning. Thus new data will overwrite the oldest.
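The wrap-around behavior of such a circular buffer reduces to simple modular arithmetic; this is a minimal illustration, not the patent's implementation:

```python
def advance(cursor, count, buffer_size):
    """Advance the write cursor of a fixed-size circular history
    buffer; on reaching the end, wrap back to the beginning so that
    new data overwrites the oldest."""
    return (cursor + count) % buffer_size
```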
- the recovery process involves scanning the buffer starting from the most recently written data in the backwards order in which data was written. Although many users are willing to pay dearly to recover lost information, such must be done in a reasonable amount of time. It is entirely possible to have several gigabytes dedicated for a history buffer. The process of creating the required maps to initiate recovery to a certain point in time can involve scanning these several gigabytes of data. Disks may be fast but this still could take many minutes, if reaching far back in time. Note that looking back shorter distances in time, of course, will take much less time.
- each block 32 contains a header 34 that contains a table of mapping entries, where each table entry contains: 1) a type,
- After the mapping header, the original-state sector images (data pages) are kept in a second table.
- a write process therefore involves first reading the old data about to be overwritten, writing at the end of the current history buffer block, updating the corresponding mapping table entry, and then finally writing the new data into place.
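The four-step write sequence just described might be sketched as follows, with Python dictionaries and a list standing in for the disk, history block, and mapping table (all illustrative assumptions):

```python
def logged_write(disk, history, mapping, loc, new_data):
    """One normal write: read the old data, append it to the end of
    the current history-buffer block, update the corresponding
    mapping entry, then write the new data into place."""
    old = disk[loc]                  # 1. read data about to be overwritten
    history.append((loc, old))       # 2. write at end of history block
    mapping[loc] = len(history) - 1  # 3. update mapping table entry
    disk[loc] = new_data             # 4. finally write the new data in place

disk = {7: b"old"}
history, mapping = [], {}
logged_write(disk, history, mapping, 7, b"new")
```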
- Since the mapping table is always extended, it is likely already loaded in a cache and so is read only once before the first update. Therefore this read is not counted in the average overhead per normal write (i.e., a small percentage of normal writes will become two reads and three writes).
- the type field in the mapping table allows for using some data pages for saving data other than original sectors states.
- the pages can be used for a mapping tree required during a recovery process or for data written to a reverted disk (i.e., you can revert a simulated drive D back in time and actually write to it, although these changes are lost when the reversion ends).
- the time stamp field in the mapping entry advances in time each time an entry is re-used.
- the field is used when the system starts up. By looking for a backwards break in the time stamps between adjacent entries, the system can determine where writing last stopped.
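Detecting the backwards break in time stamps on startup might look like this sketch (the flat list of stamps is an assumed decoding of the mapping entries):

```python
def find_break(stamps):
    """Scan adjacent mapping-entry time stamps for the backwards
    break that marks where writing last stopped in the circular
    buffer; returns the index of the oldest entry."""
    for i in range(1, len(stamps)):
        if stamps[i] < stamps[i - 1]:
            return i
    return 0  # monotonic: the buffer has not yet wrapped
```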
- the recovery process can simply scan through the mapping tables at the front of each block (cylinder) when constructing an overall recovery system. This greatly reduces the time required to build a tree, table, or do any other analysis where only the mapping tables need be scanned (as opposed to the data pages which require much more time to read due to their large size).
- Although separating the information regarding data pages into a mapping table improves the scan rate of the history buffer, it introduces a challenge in updating the mapping table as entries are modified. If the system crashes during the update of the mapping table, the portion of the table being written to disk may become invalid. In order to insure the mapping table is always available, two copies are written. It is assumed a crash can, at worst, generally corrupt only one of the copies. Though this adds yet another write step to a normal write process, the writes of the two mapping table sectors and the data page are all located in the same area (cylinder). Thus little overhead is added compared with the disk seeks that move back and forth between the original sector location and the block.
- This blocking scheme can also be implemented with the present invention at the file level, where the data pages would correspond to file information instead of disk sectors. Other details would also change, as is obvious to anyone skilled in the art of programming.
- the process of the present invention has generally been one where old disk states are maintained as new data is written.
- the history buffer need only record the original state, as of the prior group boundary, of data that is changed one or more times during the disk activity within a group. This can reduce the amount of data stored in the history buffer by dropping duplicate writes to a given location within a group.
- old states need not be continuously recorded for every write to a given location, but only once at the time of the first write to a given location within a group.
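A minimal sketch of this first-write-only filtering, using a set that would be cleared at each group boundary (names and structures are illustrative):

```python
def save_original_once(loc, disk, history, saved_in_group):
    """Record a location's original state only on the first write to
    it within the current group; subsequent writes are skipped."""
    if loc not in saved_in_group:
        history.append((loc, disk[loc]))
        saved_in_group.add(loc)

disk = {3: b"v0"}
history, saved = [], set()
save_original_once(3, disk, history, saved)
disk[3] = b"v1"
save_original_once(3, disk, history, saved)  # duplicate: not re-saved
# at a group boundary the filter resets: saved.clear()
```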
- a simple form of the present invention would be to save initial states of any changed data to a non-circular history buffer. Assuming the buffer does not overflow, the user could revert to the point prior to starting the process. If the buffer did overflow, the user could be queried as to whether they would like to discard all changes or proceed knowing reversion is no longer possible. This approach would be useful, for example, when installing new software on a computer that one may want to back out of (because the new software interferes with the computer's operation or is simply not desired, in hindsight).
- the present invention would provide to the user an ability to mark a point in time. The user would expect to be queried as to whether to revert or continue logging should it appear that the history buffer will no longer be able to revert to the marked point in time.
- the technology of the present invention is based on intercepting and modifying transfers to and from a disk.
- the method of doing such is operating system dependent, but typically involves writing a device driver for a hard disk that is inserted in front of that normally used.
- the techniques to accomplish this effect are well known. See, for example, the INT 13 disk request.
- a partial hardware implementation could involve putting logic of the present invention right in the disk controller.
- a read disk request 40 is processed normally (47).
- a request 56 to write a disk location includes steps 57-63, which provide that the old data is read from the disk and written to the history buffer (58, 59), mapped (60), and the new data written to the location (61). If a simulated reversion is in progress (62), the simulated disk drive's map is updated (63).
- The simulated drive read/write algorithm is illustrated in Figures 6A and 6B.
- a request 60 to read a disk location is followed by looking up the location in the mapping tree (61). If the location is found, the location found in the map is used for the read (62), otherwise the originally requested location is used (63).
- a write request (70) is followed by looking up the location in the mapping tree (71), and if found it is checked to see if it is from the original state (72), and if yes, a new page is allocated in the history buffer (73). If no, the new data is written to a mapped location (74). If there is no location found in the mapping tree at step 71, a new data page is allocated in the history buffer, a map entry is added, and the new data is written to the new location (75).
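The branches of this simulated-drive write algorithm can be sketched as follows; the tuple-valued map and the counter-based page allocator are illustrative assumptions, not the patent's structures:

```python
import itertools

new_pages = itertools.count(1000)  # stand-in for history-buffer page allocation

def simulated_write(loc, data, dmap, pages):
    """Write to the reverted simulated drive. dmap maps a location to
    (page, is_original): is_original marks pages holding frozen
    original-state data that must not be overwritten."""
    entry = dmap.get(loc)
    if entry is None:
        page = next(new_pages)        # 75: allocate a page, add a map entry
        dmap[loc] = (page, False)
    else:
        page, is_original = entry
        if is_original:               # 72: entry points at original-state data
            page = next(new_pages)    # 73: allocate a fresh page instead
            dmap[loc] = (page, False)
        # 74: otherwise reuse the already-remapped page
    pages[page] = data                # write the new data to the mapped location

pages = {}
dmap = {3: (500, True)}               # location 3 maps to frozen original data
simulated_write(3, b"x", dmap, pages)
```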
- because the history buffer stores a continuous running log of recent disk changes and other information, this information can be used in ways other than for disk reversion. Specifically, the history buffer can be searched, effectively allowing one to walk back through time looking for specific events or conditions. Typically the results of a search are used to guide the user in identifying good reversion points at which desired information may exist for recovery.
- search scenarios include:
- An operating system's filing system information (directory) is kept on disk and thus any changes have been recorded in the history buffer. Therefore, it is possible to scan backwards through the history buffer looking only for certain changes to said filing system information in order to produce a list of likely available states for one or more files. Generally, one would be watching for the last modified date of a file to change. It would be expected that around the time the directory received a change in last modified date, the corresponding file would exist in a new state should a reversion be made to or near this time. Further specialized access to the history buffer would be able to locate and recover versions only of desired files, instead of reverting an entire disk in order to recover each.
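A backwards scan for changes to a file's last-modified date might be sketched as follows; the flat list of (time, file, modified-date) records is an illustrative stand-in for decoded directory changes in the history buffer:

```python
def likely_version_points(records, name):
    """Walk the history buffer backwards through time and report the
    times at which the named file's last-modified date changed; each
    is a likely reversion point holding a distinct file state."""
    points, last = [], None
    for t, fname, mtime in reversed(records):
        if fname == name and mtime != last:
            points.append(t)
            last = mtime
    return points

log = [(1, "a.doc", 100), (2, "b.doc", 105), (3, "a.doc", 110)]
```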
- a goal in implementing the present invention is to minimize the performance impact on reading data and to rely on writes being a background activity. If writes are done in the background then overhead can be added without impacting response to the user. Realizing that eventually some additional work (overhead) must be performed to maintain a history buffer, previously we stated that we relied on an operating system's cache to allow the writing of data to complete and the application to continue. Thus the actual disk writes would be done in the background, at which time overhead may be added without forcing the user to wait for the additional overhead to complete.
- the end result is to allow a significant number of writes to immediately be processed at a time cost near equal to writing the data to its originally intended location. Therefore, many applications will be able to complete their write cycle and allow continued activity by the user. Later, in the background, the history buffer overhead is performed without causing delay to the user.
- the amount of new data that can be written to this cache is limited by two factors: first, the available history buffer space; and second, the amount of RAM available to note the temporary redirection. In the event information in the cache is read before the background processing has had a chance to move it into place, the notes made in this RAM insure the read is properly redirected.
- This caching technique is simply the write re-direction and mapping-on-read implementation of the present invention, used in a limited way with the actual moving of prior states into a history buffer implementation.
- the use of both implementations yields fast read and write accesses, up to a limit, as well as keeping the current disk substantially organized as the operating system expects. Since the read mapping is only for data temporarily placed in a disk cache, its overhead is likely less than mapping an entire disk (i.e., the hybrid approach is faster in processing current disk reads).
- a computer's operating system typically stores information on a hard disk.
- the example embodiments of the present invention present five fundamental methods of recording the original state of information prior to its being altered.
- the first four methods work substantially outside of the OS's method of organizing and assigning its files to disk pages. They substantially differ in performance and in how they utilize the disk.
- the last method calls for integrating the process of saving and retrieving original states of altered information directly into the OS's filing system.
- a reasonable objective for all the methods is providing transparent near-term backup services to a user.
- the aspect of transparency means the user is not required to specifically call out for backups, nor is their daily routine otherwise impacted. This is accomplished by automatically saving the prior states of altered data on their hard disk, thus providing a means to restore to earlier times. However, in order to avoid impacting the user's routine, this saving process must not substantially reduce the disk access throughput to which the user is accustomed.
- the Move Method involves first reading data about to be overwritten and saving it in a disk-based history buffer. It has the drawback of fundamentally being slow.
- the Divert Method uses a relatively small area on disk to save newly written data, thus attempting to move the work of saving prior states into the background. It has the drawback that a fixed-size buffer eventually overflows and then degrades into the Move Method.
- the Temp Method utilizes mapping to allow the history buffer and the area accessed by the OS (main area) to exchange roles.
- the Always Method attempts to place newly written data directly over the oldest historic data, and so often entirely avoids the problem of moving data. It has the drawback of requiring permanent re-mapping of the OS's page assignments.
- the File Method assumes integration with the operating system and uses the OS's file mapping to eliminate one of the maps from the Always Method.
- the current disk image refers to the non-historic view of the disk. It consists of the data last written by the user. If no historic logging was in place on a disk, its current image is the data the disk now contains.
- the simulated disk is to the user and OS a completely independent disk. However, the engine at a level below the OS creates this disk on the fly from the current image and saved historic data.
- the actual hard disk is generally divided into two basic areas consisting of main and extra pages. The main area holds the pages belonging to the current image. In the extra page area the historic data is kept.
- the main area map re-routes accesses to the current image to possible alternate locations assigned by the engine.
- a file consists of data that may be overwritten by an application.
- the present invention is concerned with saving the data's original state. This is accomplished by either copying (moving) the data before it is physically overwritten, or re-directing the write and thus avoiding a true overwrite.
- the expression is referring to the file's data that existed prior to the OS overwriting it, and which is now being preserved as historic data by the engine.
- Disk management responsibilities may be segregated out of an operating system into a filing system (e.g., NTFS in Windows NT).
- NTFS in Windows NT
- engine refers to the logic implementing the method currently under discussion. Various methods are discussed and each has its own engine.
- extra in the term 'extra page area' is conceptually founded in the idea that what is not visible to a user is extra.
- a disk physically has a given capacity.
- some of this disk, in the Move, Divert, and Temp Methods, is set aside and hidden from the user.
- main area which is that reported by the OS
- the storage that is not visible to the user is "extra," which the engine utilizes.
- the OS assigns disk locations to various structures under its control (e.g., files). However, because some of the engines re-map the OS's disk locations to other locations, in order to distinguish between the use of "disk locations" in the context of the OS and the engine, the OS disk locations are called location keys.
- the basic elements of the Move Method are described in the '579 patent.
- a portion of the hard disk is reserved to store historic information (history buffer).
- the OS writes to the hard disk
- the information about to be overwritten is read and saved in the history buffer, and then the original write is performed.
- Reasonable optimization of this process addresses the relatively extreme time cost of moving disk heads.
- a sequence of nearby writes might be delayed and combined so that the affected data can be read as a block, moved to the history buffer, and then the original writes performed.
- a single write typically involves positioning a disk head at a specific location on disk where the data is to be written.
- the Move Method increases this to a disk read and two disk writes. This involves the positioning of the disk head three times: once to the target area about to be overwritten so that its data can be read, once to the history buffer to save this original data, and finally back to the target area to overwrite the new data.
- Caching writes in memory and committing them to disk during free time can reduce or eliminate the impact on the user, even though there is a tripling of time in the actual writing of the data.
- the OS really stores the data in RAM, allowing the user to continue as if the writes had actually occurred. Then some time later the filing system performs the actual disk writes.
- although the Move Method of saving original states triples the duration of this background write process, in theory the user is free to continue working and so should not notice the performance degradation.
- the flaw in this process is that a RAM cache is often insufficient to hold the amount of data typically written. For example, word processing documents can easily be a megabyte in size. Graphic image files are even larger.
- new data is written to the end of the history buffer and later, during free time, swapped, along with the historic data, into place. This increases the amount of new data that can be written without falling back to having to move data before overwriting.
- the limiting factors are the size of the history buffer and the mapping process required to redirect reads to the history buffer, should the desired data that was recently written not yet have been swapped into place. In other words, one must deal with read and write accesses to data that has moved out of place.
- the Temp Method yields, even under circumstances where a large amount of data is overwritten, similar disk access performance compared to no method (not saving prior states).
- the Temp Method builds on the Divert Method, in which newly written data is diverted to the end of the history buffer and later swapped into place. However, the Temp Method does not focus on diverting writes to an alternate buffer. Rather, the Temp Method avoids the inherent size limitation of a buffer and thus the possibility of it overflowing. If an overflow occurs, the Divert Method is forced to fall back into the slow Move Method.
- the Temp Method, on the other hand, is not collecting up changes in a fixed-size buffer, but immediately writing the changes out to a re-mapped location. Thus, with enough writes, the Divert Method's buffering can overflow, whereas the Temp Method always has some alternate location to which to write new data.
- Prior states of a disk are maintained by reserving on the disk an "extra" area in which old copies of altered information are saved. (See Figure 7.)
- the main area is the area of the disk of which the OS is aware
- the pages about to be overwritten are, at least eventually, moved into a circular history buffer (extra pages). Therefore, a prior state of the disk can be reconstructed by combining the current image with the appropriate data in the history buffer. (Of course, you can only go back in time as far as prior states have been saved in the history buffer.)
- One solution according to the present invention is to utilize maps that allow re-direction of a write to an alternate location, with the old location becoming "part" of the history buffer by a note made in a map.
- the maps are adjusted.
- the location originally associated with X now becomes historic data that is part of the history buffer.
- the location associated with Y which had contained very old historic data, now becomes part of the main image that is visible to the OS.
- Figure 8 shows how two maps could be used to represent the main area and history buffer.
- the mapping scheme allows this method to operate continuously and maintain old states of altered data, without ever having to pause and move data around.
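The role exchange between X and Y described above can be sketched as follows; the dictionary main map, list-based history queue, and all names are illustrative stand-ins, not the patent's structures:

```python
def diverted_write(key, data, main_map, history_locs, disk):
    """Temp Method write: divert the OS's write for location-key X to
    the disk location Y holding the oldest historic data, then adjust
    the maps so that X's old location becomes part of the history
    buffer and Y becomes part of the main image."""
    old_loc = main_map[key]        # where the OS's data currently lives
    y = history_locs.pop(0)        # oldest historic page's location
    disk[y] = data                 # write without overwriting anything
    main_map[key] = y              # Y now belongs to the main image
    history_locs.append(old_loc)   # old location now holds historic data

disk = {0: b"old", 9: b"very-old"}
main_map, history_locs = {"X": 0}, [9]
diverted_write("X", b"new", main_map, history_locs, disk)
```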
- the problem that arises over time is that what were continuous areas in the main area in effect become fragmented over the entire disk. This significantly reduces disk access performance.
- Most operating systems and associated utilities take care to manage the organization of data on disk to minimize fragmentation — that is, data likely to be read as a block (like a file) is located in adjacent locations.
- By re-mapping the OS's allocations, the engine re-introduces fragmentation.
- the engine employs the maps to allow for heavy write access to the disk, but at the same time, knowledge of where the main and extra page areas are is retained. Thus, in the background the pages are moved back into place, restoring the main and extra page areas to their independent and non-mapped states.
- the mapping system is cached and efficient so that it introduces little overhead. Since data is likely written in large blocks (as when a user saves a word processing document) the initial diversion to the extra pages area does not cause fragmentation. In fact, write performance is enhanced, since writes to different areas of the disk, which would normally involve time-intensive seeks, are instead redirected to the continuous extra pages area. Fragmentation arises during subsequent passes through the history buffer, where its pages, after the initial pass, have now been sprinkled about the main area. As more passes are made, the problem worsens. This is the case where the system's performance degrades because of re-mapping.
- FIG. 11 illustrates the situation leading to deep fragmentation. It involves a long sequence of writes. However, time gaps or other clues provide for many safe points thus making logging useful. A user may not be able to restore to the starting point of the long sequence that has fallen off the end of the buffer, but there are plenty of safe points further ahead. Figure 11 shows this case of frequent write activity, but with sufficient gaps to establish safe points. The gaps are not sufficient for background swapping, thus preventing de-fragmenting.
- Fragmentation therefore becomes increasingly a problem: the engine, due to re-mapping, breaks up what the OS thought were continuous areas on disk, and therefore access to these areas is slower. The slowdown occurs because the disk head must move to many different positions on the disk surface in order to read what the OS thought was a large continuous block of data.
- OS accesses typically involve a sequence of pages allocated in sequential locations, and so the engine is not constantly hopping from one low-level node to another.
- the upper portion of the tree indicates whether a low-level node fetch is required. If the entire OS visible disk (main area) was written (900 megabytes), 11% of the time you will go through a low-level node. Thus, as the mapping boundaries of the low-level nodes are crossed, one of every 1,000 accesses requires the fetch of another node. This is a negligible overhead. In the other 89% of accesses the upper two levels of the tree are cached and immediately indicate direct (unmapped) access, adding negligible overhead.
- Figure 12 illustrates the two maps referencing pages in both the main and extra areas.
- pages belonging to one area are temporarily swapped with pages from the other area.
- Figure 13 shows the effect of the swapping so that the history map only references pages in the extra page area and the main map only references pages in the main area.
- a basic purpose of the engine is to provide means for rolling back the state of a disk to a previous time. This involves maintaining original and current states and a mapping system to guide how these should be combined to create a given state corresponding to some specific time in the past.
- it is not useful to restore a disk to a transitional state where information was in the process of being updated. For example, if you were to save a word processing document, you would like to see the disk either before or after the save. Restoring to a time during the write process should be avoided since there is no guarantee as to what the user would see. Therefore, the concept of a safe point is introduced, which corresponds to times at which the disk is reasonably usable. These times are identified from large gaps in disk activity, which are assumed to indicate the OS has flushed its caches, or from specific signals from the OS indicating such, when available.
- the user is allowed to select only a safe point in time to which to revert. This implies the engine need only flush its own information to disk at these times. It also implies that the process of logging is not one of recording each write and its original data in a time-ordered sequence, but of recording changes from the state at a given safe point to the state at the next safe point. Therefore, the stable (non-transitional) information maintained on disk by the engine switches at distinct points in time, the safe points, to include the next disk representation. Note that logging the prior state for every change provides the necessary information for transitioning at safe points, but is overkill.
- the engine's switching to a new stable state of its internal data is generally an independent process from any flushing of data from within the OS. It is possible at some random point in time for the engine to pause and flush out all its maps and other data required to represent the data thus far written by the OS. However, it has just been pointed out that if the OS's data is incomplete (transitional) there is no point in providing recovery to this time. Therefore, synchronization of the engine to the OS avoids useless stable transitions in the engine.
- the time between safe points during which the disk is in transition is referred to as a write session.
- if, during a given write session, more than one write occurs to a given location, then only the data's initial state before the first write is saved. Thus, subsequent writes directly overwrite the page. There is no need to save intermediate states during a given write session. Failure to filter out subsequent writes from the history buffer causes no harm other than needlessly taking space.
- One technique of detecting subsequent writes is keeping a session index along with the re-mapping information. If only a small portion of the disk is remapped, then the additional disk overhead is minimal. However, it is possible to map a large portion of the disk. This total mapping is the rule in the upcoming Always Method.
- a bit map is maintained in RAM. Each bit indicates whether a corresponding page has been overwritten in the current write session. Given a page size of 512 bytes, 100k of RAM indicates the status for 400 megabytes of disk. If the bit map is blocked so that the 400 megabytes can be spread across the disk, mapping only the currently active areas, then this 100k can handle the overwriting of 400 megabytes of data within a given write session. This ratio is reasonable given RAM and disk costs, and the likely amount of data to change during a write session. When the next safe point begins, this bit map is simply cleared.
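The per-session bit map can be sketched as a test-and-set over a bytearray; at 512-byte pages, 100 KB of RAM (819,200 bits) covers roughly 400 megabytes of disk, matching the ratio above. Details and names are illustrative:

```python
def first_write_in_session(page, bitmap):
    """Test-and-set the page's bit: True means this is the first
    write to the page this session, so its original state must be
    saved; False means a subsequent write that proceeds directly."""
    byte, bit = divmod(page, 8)
    mask = 1 << bit
    first = not (bitmap[byte] & mask)
    bitmap[byte] |= mask
    return first

bitmap = bytearray(100 * 1024)  # cleared wholesale when a new session begins
```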
- In addition to historic data, the engine must keep a variety of other "overhead" information on disk; for example, the maps. The general question arises as to how to modify this overhead information without introducing points in time at which, if the system crashed and restarted, the information would be corrupted. Since the engine is expected to revert only back to safe points, in the event of a crash it is assumed the disk would come back up in its state as of the last safe point.
- a method of maintaining the engine's overhead information in such a way as to insure that the last safe point's data is always available is to doubly allocate space for all such information.
- Two bit maps are used to indicate which of the copies corresponds to the last safe point and which copy, if any, corresponds to the transitional data. Any changes since the time of the last safe point are considered transitional and are written to the "other" allocation.
- the stable bit map indicates which allocations make up the overhead information corresponding to the last safe point. Should a crash occur, on restart the stable version is loaded.
- the transitional bit map indicates either the same allocation as that in the stable bit map or the other allocation, which would contain altered transitional data.
- the current transitional bit map becomes the new stable bit map.
- the In Use Bit Maps facilitate the duplication of altered internal engine data during transitions.
- a switch page is used to indicate which of the two In Use bit maps are playing the stable and transitional roles.
- the switch page is the root to all the engine's internal data. It is allocated at a predefined location with space for two copies. Whenever the page is updated, both copies are written. If for some reason the first copy is not successfully written (for example, the system crashes) it is assumed the second copy will be valid. Thus, when booting up and reading the switch page, the first copy is read, where if the read fails (e.g., disk crashed during its write), then the second copy is read.
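The two-copy switch page protocol above might be modeled as follows. The `Disk` class, slot numbering, and function names are assumptions for illustration; a torn write is modeled by leaving the copy unreadable.

```python
class Disk:
    """Toy disk exposing the two fixed slots reserved for the switch page."""
    def __init__(self):
        self.slots = [None, None]

    def write(self, slot, data, fail=False):
        if fail:
            self.slots[slot] = None    # torn write: this copy is unreadable
            raise IOError("crash during write")
        self.slots[slot] = data

    def read(self, slot):
        if self.slots[slot] is None:
            raise IOError("unreadable copy")
        return self.slots[slot]

def update_switch_page(disk, data):
    # Whenever the page is updated, both copies are written.
    disk.write(0, data)
    disk.write(1, data)

def read_switch_page(disk):
    # On boot, the first copy is read; if the read fails (e.g., the
    # system crashed during its write), the second copy is read.
    try:
        return disk.read(0)
    except IOError:
        return disk.read(1)
```

If a crash corrupts the first copy mid-update, the second copy still holds the last consistent switch page, so boot-up recovers a valid root to the engine's internal data.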
- Information in addition to that relating to the In Use bit maps can also be kept in the switch page.
- the limiting factor of what to keep in the switch page is insuring its update is relatively efficient (e.g., not too much data to write).
- the other information typically found in the switch page is: a version number, the next write area, root links for the current and simulated image maps, low-level swap information, and parameters for tracking the general logged data pages.
- Trees are used to implement the main area and simulated maps. Given sufficient background swap time the main area map is reduced to nothing, which indicates re-mapping is not active.
- the entries in the main area map contain the following fields:
- the visiting page location (corresponds to the data actually stored at this location).
- the history map, in which there is one entry for each extra page, should be implemented as a table. These entries are typically always active, indicating the original locations of their associated extra pages.
- the "history buffer" is the collection of pages indicated by either following the temporary swap links, when active, or referencing the associated extra pages.
- the fields in a historic page descriptor (HPD) that make up the history map are:
- Page Type (not in use, historic, special).
- the swap link indicates the page that really has the data that normally is associated with the HPD's extra page. This link indicates either a main or extra page. If null then no re-mapping is in effect.
- the return link is used only when the swap link indicates an extra page. In this case the HPD associated with the referenced extra page has its return link set to indicate the HPD with the referencing swap link. In other words, the swap link is like a "next" link and the return link is a "last" link, as in the context of a doubly linked list.
- 1. The next available "logical" location to receive data (HP) is determined by looking to the next location in the history buffer (map) to write.
- 2. The swap link for this logical location in the history buffer is checked to see if it should in fact use the extra page directly, or instead go to where its contents have temporarily been placed. This is the effective write location (EW).
- EW effective write location
- 3. The new data is written to EW.
- 4. A note is made of the real location (OL) of the data that would have been overwritten by the write under normal circumstances. In other words, determine where the main area map entry currently indicates the data for SL (the OS-specified location) is located.
- 5. The main area map entry for SL is updated to indicate its data is at EW.
- 6. The swap link for the logical extra page location is updated. It is changed to OL, which indicates the actual location that had contained the data for SL.
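The diversion-write steps above can be sketched in a simplified model: flat dictionaries stand in for the on-disk trees, and the class and field names (`Engine`, `main_map`, `swap_link`) are illustrative assumptions, not the patent's structures.

```python
class Engine:
    """Toy model of the diversion write into a circular history buffer."""
    def __init__(self, n_main, n_extra):
        self.disk = {}                                     # location -> data
        self.main_map = {i: i for i in range(n_main)}      # SL -> real location
        self.extra = [("x", i) for i in range(n_extra)]    # extra page ids
        self.swap_link = {p: None for p in self.extra}     # HPD swap links
        self.hp = 0                                        # next history page

    def write(self, sl, data):
        # 1. Next available logical history location (HP), circular.
        hp = self.extra[self.hp]
        self.hp = (self.hp + 1) % len(self.extra)
        # 2. Follow the swap link, if any, to the effective write location (EW).
        ew = self.swap_link[hp] if self.swap_link[hp] is not None else hp
        # 3. Write the new data to EW.
        self.disk[ew] = data
        # 4. Note the real location (OL) currently holding SL's data.
        ol = self.main_map[sl]
        # 5. Update the main area map: SL's data is now at EW.
        self.main_map[sl] = ew
        # 6. The swap link for the logical history page now indicates OL,
        #    which holds SL's prior (now historic) state.
        self.swap_link[hp] = ol

    def read(self, sl):
        return self.disk[self.main_map[sl]]
```

After a write, the OS reads its new data through the main area map, while the prior state remains on disk at OL, reachable through the history page's swap link.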
- the example starts by illustrating how five writes are handled, to locations 1, 2, 3, and then 3, and 2. The example then continues on into the Swap Section.
- the main area map has two links for each page location: one indicates where the data for the associated location really is found, and the other indicates the page whose contents have temporarily been placed at a given location.
- the main area map for location #1 indicates that the data "D1b" for this location is really in the first history page. However, if location #1 was actually read, the visitor link indicates the data "d3a" that belongs in location #3 would be returned.
- Data is represented by three characters: the first is normally "d" but is changed to "D" when the location corresponds to that last written in the extra pages area. This implies that the next location, wrapping around to the top of the area, represents the next location in which to save historic data.
- the second character is a number that indicates the true location to which the data belongs. For example,
- the data is historic (a saved copy of previously overwritten data); otherwise it is part of the main (current) disk image. Only historic data can be tossed, as one never discards parts of the main disk image that are visible to the OS.
- the swap link is updated for the first extra page.
- This swap link indicates the location whose real data is now the newest historic data. This is the data that was just overwritten: the write request was to location #3 and so its prior state is now referenced as that associated with the extra (historic) page.
- In Figure 19C it is seen that no mapping is done and so the data "d3a" is normally what would be overwritten.
- the swap link is set to indicate this location, and the data in this location gets underlined, as it is now historic.
- Swapping is performed in the background (while the system is otherwise idling). The process is divided into two phases. First, all main area pages are swapped into place. Second, the extra pages are swapped among themselves so that no redirection is in effect. This insures that as one walks sequentially through the history map, the corresponding extra pages are also in sequential order. This is optimal when diverting a sequence of writes to the history buffer.
- the preceding example has shown how to write data to the main image. Now page swapping will be discussed. In Figure 19G it is assumed some free time is detected and the engine starts to reorganize the main area. The approach is to generally walk through the map, swapping pages back where they really belong.
- the map entry processed in this figure is for location #1.
- the map indicates location #1's data is found in the first extra page. This data is swapped with that which is really in location #1. Following the map's visitor link, it is seen (from Figure 19F) that it is the data from the second extra page that is really in location #1.
- the first extra page contains location #1's data. However, if it had been in the main area, which it wasn't, then one would set its visitor link to the second extra page (location #1's original visitor, which is being moved to the first extra page). Of course, if the visitor link update results in linking to itself then the link is simply cleared. However, this latter case would already have been handled in the prior step, so the update can be skipped.
- the maps have now been updated, noting it is the transitional maps and not the stable versions that are changed. The actual data "D1b" and "d3b" is now swapped and the transitional maps eventually made stable. In order to optimize the flushing of map data and disk access, the swap algorithm should buffer up a reasonably large series of swaps and optimize the disk access.
- Figure 19K shows the results of executing a swap on location #1. Continuing from Figure 19K, in Figure 19L location #2 is swapped back in place. The results of swapping location #3 back in place are much like Figure 19I except the first and second extra pages contain "D1b" and "d3b" respectively.
- In Figure 19P a situation is set up in which a swap will involve only main area pages. All examples so far have involved both a main area page and an extra area page.
- Figure 19Q shows a swap of location #1 into place. Up to this point, the write and swap main page algorithms have been discussed. The swapping was used to reorganize the main area. In doing so, the temporary exchanges of pages between the two areas, the main and extra page areas, are resolved. The two areas become independent. That is, the main area only contains pages that are current and directly visible by the OS (no re-mapping). The extra page area contains all the historic saved pages and none from the main area. An example of this state is shown in Figure 19H.
- the flaw in this approach is that more than two pages may be involved in a swap operation. In other words, it may be a set of three or more pages that are involved in a cross-linked system. This is demonstrated with reference to Figure 19R.
- In Figure 19R there are seen three main pages and three extra pages.
- Continuing from Figure 19R, there is a write to locations #1, #2, and #3, in this order. This leads to Figure 19S.
- In Figures 19T, 19U, and 19V there is a write to #3, #1, and #2.
- the extra page area is left with a three way swap required to restore a direct mapping between the HPDs and their respective extra pages. This is shown in Figure 20.
- the solution is to add the return links that create a double link list system, which is one that can be easily edited.
- the extra page area swap algorithm is much like that used for the main area except that it is known that only one area is involved — the algorithm is a double link list deletion. Keep in mind that the linking in the extra page area is only complete when the two areas have been made independent (by first re-organizing the main area).
- next write area is a scheme that allows a single update of the switch page to set aside a whole area in which allocations can freely be made.
- the allocable pages in this area are all treated as unused (not in use) regardless of their corresponding page types in the stable HPDs.
- the stable version can be trimmed of blocks of allocable storage. This is done during transitional processing, reducing the disk flushing required to process a series of allocations to simply a single update of the switch page.
- Figure 25 illustrates the concept of a next write area.
- the size of the write area is chosen as a trade-off: the larger the area, the more historic information is discarded in one step even though only a few allocations may have been required; on the other hand, a larger area avoids frequently advancing the area during a given transition.
- General Logged Data In addition to tracking the original states of changed pages, the engine must also track various other data: for example, file activity (opens and closes), program activity (launches), system boots, keystrokes and mouse activity, as well as other information. At a minimum the engine must track the location of safe points in the history buffer. General logged data pages support this need. These are pages that get mixed into the stream of normally allocated history buffer (historic) pages. As with historic pages, they are de-allocated as the circular system wraps around and reuses the pages.
- This method of saving miscellaneous data in general logged data pages that are mixed in with the historic pages is a good way to save information that is to come and go in much the same way as historic data. Other methods are certainly possible. Note that care should be taken to avoid prematurely losing "notes" about historic pages before the pages themselves are discarded. For example, discarding information about the oldest safe point's location before discarding all the historic data after the safe point makes the saving of all this historic data pointless. Without the safe point marker it cannot be used.
- the process of selecting a time is often made based on information such as file modification times stored in the general log (described in the prior section).
- the entire retrieval operation may hide the process of establishing a simulated disk.
- the act of selecting a file to retrieve from a list, wherein the list is constructed from information in the general log can automatically lead to the steps of creating the appropriate simulated disk, copying the file, and closing (de-activating) the simulated disk.
- the user may come to access historic information based on a selection other than directly choosing a time.
- For example, consider a user who has the ability to access their historic disk states over the last month. Sometime during this period the user created a file, used it for an hour, and then deleted it. Although the user can establish a simulated disk to any point in the last month, knowing precisely to what time to go in order to retrieve the file generally requires the use of the file activity information stored in the general log. Presenting the contents of the general log correlated with time, along with a search ability, provides the user an efficient method for retrieving the file in the current example.
- the present invention provides an extension to Explorer wherein the user can right click on a specific file and have the option to view a list of old versions of the file. This list is constructed by scanning the general log. However, the approach does not handle the case where the file has been deleted, renamed, or moved and so cannot be selected.
- the additional method is to create a new type of special "disk" that can be examined through Explorer, where this disk does not correspond to any standard physical hard disk, but instead whose contents are generated based on file activity entries in the general log.
- the file hierarchy for this special disk is formed by combining all relevant file entries currently found in the general log and sorting them. Duplicates are removed, but their associated reference times (that is, when the file existed in time) are noted and used to present a list of old versions, should such be requested.
- This special disk appears much like the real disk on which it is based, except that if a file ever existed at some location in the hierarchy, provided the file can still be retrieved using saved historic disk states, the file will remain present regardless of whether it was subsequently deleted, renamed, or moved.
- this special disk shows all available old versions of files and directories for another disk in the form of a hierarchy, as presented by Explorer.
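Building the special disk's hierarchy from the general log might be sketched as follows. The log format here (a list of `(path, timestamp)` file-activity entries) and the function name are assumptions for illustration.

```python
def build_special_disk(log_entries):
    """Combine all file entries found in the general log, remove
    duplicates, and note each path's reference times so that a list of
    old versions can be presented on request."""
    versions = {}
    for path, when in log_entries:
        # A file stays present in the hierarchy even if it was later
        # deleted, renamed, or moved; only its version times accumulate.
        versions.setdefault(path, []).append(when)
    # Sort each path's reference times for presentation.
    return {path: sorted(times) for path, times in versions.items()}

log = [("/docs/a.txt", 10), ("/docs/a.txt", 25), ("/docs/b.txt", 12)]
tree = build_special_disk(log)
assert sorted(tree) == ["/docs/a.txt", "/docs/b.txt"]
assert tree["/docs/a.txt"] == [10, 25]   # old versions of a.txt
```

Selecting a version from such a list would then drive the retrieval steps described earlier (establish the simulated disk at that time, copy the file, close the simulated disk).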
- the simulated disk image is one that initially corresponds to OS visible disk data from an earlier time.
- the simulated image is typically viewed through the OS by the user as simply another disk drive. Once established, the user may write to the simulated image, and by altering it effectively creates a fork in time. Eventually, when the simulated image is discarded, any changes will be lost.
- the method of establishing the simulated disk image is to run through the HPDs starting with the current time and go backwards, up to and including the desired reversion time (safe point). For each HPD a corresponding entry is added to the simulated map, thus mapping a current location to an original state. Effectively, each HPD processed is undoing a change. If an entry already exists in the simulated map, it gets overwritten. This case indicates a given location has been altered multiple times since the desired reversion point. As the map is initially built, all its entries are flagged as associated with original data. Subsequently, if data is written to the simulated disk then entries of a second type are added to the map. These point to the pages that hold the differences from the initial state.
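The backward walk just described can be sketched with simplified stand-ins: each HPD is a `(time, location, original_data_page)` tuple, newest last, and the "original" flag is modeled as a tag in the map entry. Names are illustrative assumptions.

```python
def build_simulated_map(hpds, revert_time):
    """Walk the HPDs from the current time backwards, up to and
    including the reversion time; each HPD processed undoes a change.
    Because the walk goes newest to oldest, older entries for the same
    location overwrite newer ones, leaving the state at revert_time."""
    sim_map = {}
    for time, loc, orig_page in reversed(hpds):
        if time < revert_time:
            break                       # reached the desired safe point
        # Flag the entry as original data (not yet written by the user).
        sim_map[loc] = ("original", orig_page)
    return sim_map
```

A write to the simulated disk would then add entries of a second type (e.g., tagged `"written"`) pointing at pages holding the differences from this initial state.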
- if a second request to establish a simulated disk image specifies an earlier time than the present simulated disk image, and nothing has been written to the present simulated disk image, then one can start the walk back from the present simulated image (map). This avoids having to start from the current time and building up to the present simulated image time when this work is already readily at hand.
- the normal method of reverting a disk to a prior state involves establishing the prior state on the simulated drive, making any further desired adjustments, and then "copying" the simulated drive to the current (which effectively saves the original current state). In some cases there is not sufficient space in the history buffer to allow the straightforward saving of the original current state prior to the reversion, and so another method is used. This special case is discussed later.
- Figures 26A through 26H illustrate activity to a disk in which there is one location in the main area and four extra pages to save historic states.
- Figure 26A shows the initial state where location #1 maps to and contains value H1.
- In Figure 26B a new value N1 has been written to location #1 and the swapping process performed to put everything in its desired location.
- In Figure 26C a reversion back to H1 occurs, which basically involves copying H1 to location #1.
- the new copy of H1 is designated H2 even though its value is identical.
- Frames D through H show this process repeated, thus effectively creating two additional copies of H1, namely H2 and H3, both of which are highlighted.
- this new map provides for moving data on the disk without actually having to do the move.
- the reversion involves both duplicating and an eventual swap.
- Use of the delayed move map incorporates the duplicating process into the swap process. For example, instead of moving A to B and then swapping B with C, this swap can simply read from A instead of B. Further, the process becomes a background process, thus yielding faster response to the user.
- For each mapped location, a delayed move map entry has two fields. An entry is classified either as a read-side or a write-side type. In the read-side case the source location indicates, for a read, the true location of the data. The link field associates all locations that logically have the value of the source location (though the actual duplication has not yet been performed). If a write occurs to a read-side entry, then it is discarded. This involves unlinking it: using its source location field as a key into the map, the list header located in the redirected page is found, then the entry referencing this is identified, and finally the mapping entry is unlinked and discarded. See Figure 27.
- the write-side case represents a page whose contents are being referenced in the handling of reads for other pages. If a read is done to such a page, the mapping has no effect. However, if a write is about to be performed to a write-side page, then the page's contents must first be written to all the linked pages. After the duplication has been done, the read-side and write-side entries are discarded.
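The read-side and write-side semantics above can be modeled in a small sketch. Dictionaries replace the on-disk map and link lists, and all names (`DelayedMoveMap`, `note_move`) are illustrative assumptions.

```python
class DelayedMoveMap:
    """Toy delayed move map: reads redirect to a source page; a write
    to the source first materializes the pending duplications."""
    def __init__(self, disk):
        self.disk = disk        # location -> data
        self.reads = {}         # read-side entries: loc -> source loc
        self.links = {}         # write-side entries: source -> [linked locs]

    def note_move(self, src, dst):
        """Logically copy src to dst without touching the data yet."""
        self.reads[dst] = src
        self.links.setdefault(src, []).append(dst)

    def read(self, loc):
        # Read-side entries give the true location; write-side entries
        # have no effect on reads of the source itself.
        return self.disk[self.reads.get(loc, loc)]

    def write(self, loc, data):
        if loc in self.links:
            # Write-side page: first duplicate its contents to all
            # linked pages, then discard both entry types.
            for dst in self.links.pop(loc):
                self.disk[dst] = self.disk[loc]
                self.reads.pop(dst, None)
        elif loc in self.reads:
            # A write to a read-side entry simply discards (unlinks) it.
            src = self.reads.pop(loc)
            self.links[src].remove(loc)
        self.disk[loc] = data
```

In the real engine this deferral lets the reversion's duplication fold into the background swap process instead of blocking the user.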
- the intent of the delayed move map is that it is gradually eliminated as part of the normal swap process after a reversion.
- the duplication overhead associated with a reversion can be reduced and delayed.
- Background reorganization typically reduces the delayed move map to nothing or near nothing.
- a final background flush process insures that any mapping is eventually eliminated. This is further discussed shortly.
- Figures 26I through 26M continue after Figure 26C and illustrate the situation where multiple reversions without any swap processing (or other resolution of the delayed move map) result in stacked (more than one) redirections to a page by way of the map.
- the progression past Figure 26C to Figure 26D and beyond involves the swap process at which point use of the delayed move map is resolved.
- the delayed move map linking is represented by dashed lines and arrows in the Figure 26 sequence.
- a reversion performed only in the maps should be at least one order of magnitude faster than actually duplicating the data.
- the reasoning is that each delayed map low-level node maps about 1,000 pages and so, given clustering of at least 10 pages accessed per low-level node, the duplicating process should be about 10 times faster.
- a swap must be performed and so the overall impact is less than a doubling of performance (swap is more intense than a copy).
- the map allows all the work to be performed in the background, which is perhaps a more important feature.
- a given link list never grows by more than one entry per reversion. In essence this is because a redirection for a given location is to a page that represented the same location at a prior time. A location is never redirected to a page that represented another location as seen by the user.
- the specific core algorithm for performing a reversion is to cycle through the simulated map and "copy" each entry to the current image. Since this is effectively writing to the main image, the normal processes allow for an undo of the reversion, should one be desired.
- the copying process is normally done using the delayed move map.
- Figure 30 illustrates the more typical situation where the amount of data involved in a reversion is a relatively small part of the extra page area.
- a reversion is a process of duplication involving normal writes into the historic area. In the prior case where the extra page area was too small to allow duplication then special case processing is required.
- the reversion process must take care to process pages chronologically in the history buffer, as opposed to any other order such as, for example, sequentially by location. This insures that HPDs are not re-used until their contents have been processed. Care must be taken to make this process crash-proof. Since the initial state prior to reversion is being discarded as part of the reversion, recovery after a crash must complete the reversion. One cannot return to the pre-reversion state, as required data is gone.
- the reversion can simply recognize two states: the original current and the desired, as represented by the main and simulated maps.
- the reversion process would involve switching roles.
- the downside to this approach is that all states before the current are lost. However, this is inherent in the situation where most of the history buffer is required to do the desired reversion. Doing all the work in the maps allows the process to be crash-proof: one would either return to before or after the reversion, since the maps are duplicated whereas the extra page area is not.
- the second approach is to carefully cycle through the HPDs and do the "copy" in such a way as to never overwrite data not yet processed. Since most of the extra page area is involved, and the part that is not involved is the first utilized for the copying process, this approach yields results that are effectively identical to the first approach. However, this process actually moves the user's data and therefore can require a large amount of time. On the other hand, adjusting maps and allowing the actual moves to occur in the background (swap) yields faster user response. Therefore there is no advantage to the second approach. In both cases the current and simulated (reverted) images are exchanged. A subsequent reversion can "undo" this process but can go no further back in time. Therefore the first approach is recommended, as it is faster.
- Figures 31A through 31D illustrate a map-based reversion where the current and simulated images are "exchanged" and all other historic data tossed (6 and 8). Note that the current image map is not maintained but can be rebuilt should another reversion be requested.
- the current image map represents to the user a disk image of 1, 3, 5, and 4.
- the simulated image represents 2, 7, 5, and 4.
- the "n" represents a link to a page that was written to the simulated map.
- a trivial mapping is one in which there is no re-mapping.
- Figure 31B shows a newly established current image map representing the original simulated image.
- the linking shown in Figure 31B indicates how the pages must be exchanged in order to accomplish the normal "swap" processing.
- Figure 31C shows the results of the swapping, and finally, Figure 31D shows the historic data packed in the extra page area.
- the method to determine how to do a reversion is to first evaluate how much data would need to be copied forward under the normal situation. This is effectively the number of pages actively represented by the simulated map. Next one must determine the size of the extra page area that is available for writing before one would reach data involved in representing the simulated map. If there is sufficient space to save the original states of overwritten pages, then a normal reversion is performed, otherwise the special case logic is used.
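The decision rule above reduces to a simple comparison; the sketch below uses assumed parameter names for the two quantities the text describes (pages actively represented by the simulated map, and the writable run of the extra page area before reaching data that backs the simulated map).

```python
def choose_reversion(simulated_map_pages, free_extra_pages):
    """Pick the reversion path: "normal" when there is sufficient space
    to save the original states of the pages the reversion will
    overwrite, else the "special" case logic."""
    need = simulated_map_pages   # data to be "copied" forward
    have = free_extra_pages      # writable space before needed data
    return "normal" if have >= need else "special"

assert choose_reversion(100, 500) == "normal"   # small reversion
assert choose_reversion(900, 200) == "special"  # history buffer too full
```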
- the Always Method deviates from the prior three in that it assumes that some basic knowledge is provided by the OS regarding the organization of data on disk. With this knowledge the Always Method's engine takes over the role of really determining where data is placed on disk.
- Information regarding the disk locations that should physically be nearby as well as those that are de-allocated is periodically provided by the OS.
- the information may come indirectly from the OS by way of an intermediate program.
- This intermediate program might, for example, scan the OS's directory and disk allocation structures, compare them with notes it made on the last scan, and forward the differences appropriately.
- the set of adjacent locations {loc_id, ..}
- the set of de-allocated locations {loc_id, ..}
- the information builds upon that last specified as well as what is inferred from disk accesses (e.g., previously de-allocated pages that are overwritten by the OS are now assumed to be in use). Initially all disk locations are assumed available (de-allocated by the OS). Under some conditions the engine may request that all adjacency and de-allocation information be re-supplied, instead of an incremental update from the known state.
- the adjacency information becomes dated and may not reflect the optimal organization. Since this information is used to optimize the disk, incorrect adjacency information at worst leads to non-optimal performance. As long as the percentage of incorrect adjacency information is relatively small, the impact on performance is typically small.
- This engine takes a leap from the other methods by treating the disk locations supplied by the OS as simply lookup keys into the engine's own mapping systems. There is no attempt to place data written by the OS to some specified location, either immediately or eventually, at this location. An exception is the case where the engine is removed and the OS resumes direct control of the disk. OS-generated disk locations are referred to as location keys.
- this method squarely addresses the first two. It employs caching to minimize read-access overhead due to re-mapping. The responsibility for optimally organizing a disk is moved to the engine, with the OS providing guiding information.
- the benefits of this engine are five-fold: First, the engine often writes data directly to its relatively final resting spot on disk, thus avoiding any swapping. Even though the Temp Method manages to avoid a user-visible performance degradation, the swapping significantly adds to the total amount of disk access. Second, de-fragmenting is automatically performed. Third, all the OS's unallocated disk space is used to hold historic states. Although the engine has a minimum amount of disk space to store historic information, the ability to use unallocated storage may greatly enhance a user's reach back in time. Most users have a significant amount of free space on their disk, if for no other reason than that it is unwise to substantially fill a disk (as it is easy to overflow).
- the fourth benefit is that the engine has few interfaces with the OS and so it more easily adapts to and is isolated from the various operating systems. And fifth, the engine is more likely to hold up under more constant disk write activity without falling into a state of deep fragmentation. If, relative to a file's size, large continuous sections of it are overwritten, then the engine typically allocates these optimally on the disk. If small random sections of a file are modified, then the nature of access is already non-sequential and so fragmenting the file has less of an impact on performance. See the Temp Method and its discussion of deep fragmentation concerns.
- Figure 32 illustrates in general how a disk read access moves from the OS through the engine to the disk drive.
- the OS initiates a read of a location associated with a file. Without the engine this would be the location on disk of the desired data. However, when using the engine, this location is simply a lookup key. The engine looks up this location and determines where it has really been assigned. This desired location is then run through a current image map that indicates if it has a temporary re-mapping. The disk is then finally accessed.
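The double-mapped read path just described can be sketched as two chained lookups. Dictionaries stand in for the desired location and current image maps, and the X/Y/Z naming mirrors the logical-move example given later for Figure 33; function and variable names are assumptions.

```python
def engine_read(disk, desired_map, current_map, os_location):
    """Resolve an OS location key through both maps, then access disk."""
    # The OS location is only a lookup key: find its assigned spot.
    desired = desired_map.get(os_location, os_location)
    # The current image map may apply a further temporary re-mapping.
    actual = current_map.get(desired, desired)
    return disk[actual]

disk = {"Z": "payload", "Y": "stale", "X": "unused"}
desired_map = {"X": "Y"}   # OS key X has been assigned location Y ...
current_map = {"Y": "Z"}   # ... but Y's data currently sits at Z
assert engine_read(disk, desired_map, current_map, "X") == "payload"
```

Absent entries fall through unchanged, so a trivially mapped location reads directly from its own address; this is what lets background reorganization reduce either map "to nothing."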
- the role of the desired location map in the engine is to map a location as specified by the OS to where it has really been assigned (desired location).
- the engine borrows from the Temp Method in providing for a current image map that allows yet another re-direction. This re-direction occurs when, for various reasons, the desired location is not available and so the data is stored in an alternate location.
- the desired location map reflects where data should optimally be located, given de-fragmenting and other concerns, and the current image map reflects the needs and actual organization of the moment.
- the engine's use of a double mapping system is very powerful. It allows for quick major re-organizations of data on disk and thus minimizes interference with the user's ability to continue working. Immediacy is achieved by initially only logically "moving" data using the desired location map. The move is accomplished by adjusting the map, rather than actually going to disk and moving the data. Changing a map is many times faster than actually moving disk data. Granted, the user does not realize any performance gains by the logical move. The disk head must still travel far and wide to pick up non-optimally organized data. However, the framework is laid to move to the more optimal organization incrementally and in the background.
- Double mapping is what allows changes to the desired location map without actually moving data on disk.
- the second (current image) map is adjusted many times faster than data can actually be moved, and this second adjustment can compensate for a change to the desired location map.
- the OS would present a location key X, which correlates to data at disk location Y (Figure 33A). It is determined that overall access to this data is better achieved if it is at location Z.
- the desired location map is adjusted to indicate that any reference by the OS to location key X is really at Z.
- the disk basically contains data visible by the OS and historic data representing the original states of data overwritten by the OS. Consistent with the Temp Method, data that is visible by the OS is called the current image and generally is located in the main page area. The historic data is located generally in the extra page area. It is visible to the OS through a simulated disk along with any appropriate data from the current image. These "areas," as a result of the engine's mapping, are typically intermixed and spread across the physical disk.
- the goal of the engine is, in general and for the main area, to physically organize it so that sequential page allocations corresponding to a given file are, after all mapping, sequentially allocated on disk. To a lesser degree it is desirable to locate small files within a given directory near each other. In other words, the engine seeks to keep the main area de-fragmented, based on adjacency recommendations from the OS. Thus, when sequentially reading a file the corresponding pages are fetched physically from consecutive locations on disk. This minimizes the need to move the disk head.
- the goal, in general, for the extra page area, is to physically organize the historic pages in chronological order, within a circular system.
- the allocations are sequential.
- the number of pages in a block is selected by weighing the disk transfer speed against disk head seek (positioning) time. When a block is sufficiently large, the amount of added time to jump from reading one block to another is relatively small compared to the time it takes to read the data from the two blocks. On the other hand, it is best to use the smallest reasonable block size to minimize the amount of data that must be shifted around when manipulating the pages within a block. Further, a small block size facilitates caching blocks in RAM.
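The tradeoff just described can be illustrated with rough arithmetic; the seek time and transfer rate figures below are assumptions for illustration only:

```python
# Illustrative block-size tradeoff: total read time is one head seek per
# block plus the raw transfer time. The figures below are assumed values.
SEEK_MS = 10.0            # assumed head repositioning time per block jump
RATE_KB_PER_MS = 10.0     # assumed sustained transfer rate (10 MB/s)

def time_to_read(total_kb, block_kb):
    """Time to read total_kb of data organized into block_kb-sized blocks."""
    blocks = total_kb / block_kb
    return blocks * SEEK_MS + total_kb / RATE_KB_PER_MS

# Reading one megabyte in 100k blocks costs only ~10 seeks; in 4k blocks
# the 256 seeks dominate the transfer time entirely.
print(time_to_read(1024, 100))  # roughly 205 ms: seeks comparable to transfer
print(time_to_read(1024, 4))    # roughly 2660 ms: dominated by seeking
```

With sufficiently large blocks the seek overhead becomes a small fraction of the total, which is the "sufficiently large" condition in the text.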
- the engine has four primary block types.
- a main area block contains only pages that are currently visible to the OS.
- An extra page area block contains only historic pages.
- a CTEX block is one that had been a main area block but is now in the process of becoming an extra page area block. CTEX stands for converting to extra pages.
- a CTMA block is the opposite of a CTEX block. Its pages are in the process of converting from extra to main area pages.
- An unused type deals with storage before it is ever written.
- An overhead type addresses allocations that hold data internal (overhead) to the engine.
- There is a special main area direct block whose pages require no mapping. Thus a read access in such a block requires no checking of the desired location, current image, or delayed-move maps.
- a special CTEX block with unused pages supports the situation where unused pages are exchanged into a CTEX block as part of a consolidation at a safe point.
- Allocations of the engine's various internal data structures that are stored on disk are made from different sets of overhead blocks, each set corresponding to a given fixed-size data structure.
- each set of overhead blocks is managed like an array of fixed size entries.
- a bit map indicates whether an entry is available or in use.
- the segregation of sizes avoids fragmenting issues. At most two blocks within a given set should be combined when both fall below half full, thereby returning a block for use in holding historic data.
- the maximum number of overhead blocks required should be computed and a corresponding minimum number of blocks should be set aside for extra page area blocks. It is from these that overhead blocks are taken and by having a minimum properly established, it is known that an overhead block is always available when needed.
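One way to sketch the fixed-size entry management described above, with a bit map marking entries available or in use (the class and method names are hypothetical):

```python
# Minimal sketch of one overhead-block "set" managed as an array of
# fixed-size entries with an in-use bit map. Names are illustrative.
class OverheadSet:
    def __init__(self, entry_count):
        self.in_use = [False] * entry_count  # one bit per fixed-size entry

    def alloc(self):
        """Return the index of a free entry, marking it in use."""
        for i, used in enumerate(self.in_use):
            if not used:
                self.in_use[i] = True
                return i
        # In the engine, a full set would take another reserved block.
        raise MemoryError("set full: take another extra page area block")

    def free(self, i):
        self.in_use[i] = False

s = OverheadSet(4)
a, b = s.alloc(), s.alloc()
s.free(a)
print(s.alloc())  # -> 0: the freed slot is reused
```

Because every entry in a set is the same size, a freed slot is always reusable, which is the fragmentation avoidance the text refers to.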
- Figure 34 illustrates the relationship between the blocks as they rotate through the four primary roles. Note that the block types are collectively shown grouped together but in reality the block types are intermixed on disk. The grouping is established through non-physical means such as a table of pointers. An "M" in a block's page indicates main area data (OS visible), an "X" indicates historic data, and "-" an unused page.
- the new data is placed in a CTMA block. Since the new data is placed in unused pages in a CTMA block, diverting the writes here inherently saves the overwritten data, from the file's viewpoint. How this saved (historic) data is tracked is discussed shortly. For now this description will focus on writing the new data.
- the OS, when writing, can also supply a file identifier. If specified, this identifier allows the engine to direct new data from different files to different CTMA blocks.
- the engine allows a limited number of CTMA blocks to co-exist in order to support the OS simultaneously writing to a limited number of files. By sending new data for each file to a different CTMA block, the engine de-fragments the files. As more CTMA blocks are supported at one time, the historic data is more rapidly discarded. In other words, the CTMA blocks reduce the number of extra page blocks, which reduces the distance the user can see into the past. Of course, this is all relative. If the blocks are 56k bytes and writing up to 20 simultaneous files is supported, one megabyte of disk is used. This is a small percentage compared to perhaps the gigabyte of extra pages that might exist.
- if the OS does not supply a file identifier with each write request, and there is no other way to associate location keys with different files, then new data is simply written page after page into a single CTMA block. However, it is common that files are written one at a time, in which case there are no fragmenting problems. In the long term, the OS supplies file layout information that facilitates de-fragmenting, should it be required.
- CTMA blocks are created by taking the extra page blocks containing the oldest historic data, discarding the data, and filling them with newly written data. Once a CTMA block is entirely filled it becomes a main area block. See Figure 35. However, in the beginning a disk consists of unused blocks and it is from these that CTMA blocks are allocated until there are no more.
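The rotation of a block through the primary roles can be sketched as a simple cycle; the transition rules below are a simplified reading of the text, not the patent's exact state machine:

```python
# Simplified sketch of the four primary block roles rotating, per the text:
# the oldest extra page block is recycled into a CTMA block, a filled CTMA
# block becomes MAIN, overwrites turn a MAIN block into CTEX, and a CTEX
# block with no main pages left is purely historic (EXTR).
ROTATION = {
    "EXTR": "CTMA",  # oldest historic data discarded, refilled with new writes
    "CTMA": "MAIN",  # once entirely filled, all pages are OS-visible
    "MAIN": "CTEX",  # overwrites make its pages historic
    "CTEX": "EXTR",  # when only historic pages remain
}

def next_role(block_type):
    return ROTATION[block_type]

role = "EXTR"
for _ in range(4):        # one full trip around the cycle
    role = next_role(role)
print(role)  # -> EXTR
```

The cycle has no terminal state: storage continuously rotates between holding current data and holding historic data.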
- the desired location map is adjusted to associate the OS's location keys with the pages in the CTMA block. Note that the current image map for these locations may indicate a temporary re-mapping, even as the data is written for the first time.
- a page swap is performed between the main area block and one of the CTEX blocks.
- every CTEX block contains at least one main page, for otherwise the block would become an extra page block. Therefore, a main area page in a CTEX block can be identified and swapped with the newly historic page in its main area block. If a data swap on disk were actually done, this would take considerable time. Instead, the swap is initially accomplished by updating the maps. This situation borrows from the techniques in the Temp Method. It is possible that the OS will overwrite data it has recently written, but not so quickly as to be in the same write session (safe point).
- the data to be overwritten may be in a CTMA page, which cannot have historic data.
- the solution is to swap the data into a CTEX page, taking main area data from the CTEX page and putting it in the CTMA page.
- the OS overwrites data (making it historic) in: 1. a main area block, then a transition to a CTEX page occurs (Figure 36) or a swap occurs (Figure 37),
- This process may yield main area blocks, given a sufficient number of main area pages. Likewise, extra page blocks are also produced, given a sufficient number of extra pages. What is left over, if there are any pages, establishes the single CTEX block that is carried over into the next write session. Between write sessions the CTEX blocks are consolidated into one so that a single point in the set of extra pages and last CTEX block marks the session's end. The actual moving and re-arranging of pages is left for the background by initially doing the consolidation in the maps. See Figure 40.
- Alternatively, the scraps can be moved to a CTMA page or pages, establishing one if required. Moving them here still leads to the desired transformation of CTEX blocks into extra page blocks, but the moved data is not so susceptible to re-moving in subsequent consolidations.
- Figures 40A through 40H illustrate the effects of moving scraps to the final CTEX block
- Figures 40I through 40N move the pages to a CTMA block.
- the important difference between the sequences occurs in the moving of page "A" twice when a CTEX block is the destination. This example involves an unusually small number of pages making up a block, and so one should realize that in practice, the multiple moving of "A" would be multiplied many times.
- Figure 40A illustrates a starting point.
- the two circled "A" pages are overwritten with "a" data.
- the result is shown in 40B.
- Another two "A" pages are overwritten (circled), with 40C showing the result.
- In 40D is seen the first moving of "A".
- Now four "B" pages are overwritten with "b" data.
- the results are in 40E.
- Figure 40F shows another consolidation, with two "C" pages getting overwritten and the results shown in 40G.
- One last consolidation shown in 40H yields the second moving of "A". This writing process is now repeated, only with scraps going to a CTMA page.
- Figure 40I is identical to 40C and picks up at the first consolidation. The results are shown in 40J.
- the "B" overwrite occurs and yields 40K, whose consolidation is shown in 40L.
- the "C" pages are overwritten, yielding 40M, which is consolidated in 40N.
- Compare the write processes of the Temp and Always Methods: in both cases the new data is written to some alternate location other than that specified by the OS. In the Temp Method the diversion preserves the original state of overwritten data. Its focus is maintaining past states. However, the scope of the Always Method includes attempting to place newly written data in likely unfragmented locations. This is a location from which near optimal disk access occurs when accessing the data in its most likely context, that is, with the rest of the data associated with its file.
- if the OS does not inform the engine of de-allocated pages, the engine cannot recycle these pages for use in holding historic states. This needlessly reduces a user's recovery reach, as the contents of de-allocated pages should never be required. Therefore, the storage can be put to better purposes.
- the Move, Divert, and the Temp Methods do not make use of de-allocated storage. They require a fixed area be set aside for holding historic information.
- the Always Method makes use of unused (de-allocated) space on a disk. This allows for a dynamically sized history buffer. The user automatically has greater recovery reach when utilizing less of the disk, and at the same time, when the user requires more storage, the history buffer yields it back. A minimum history buffer size can be provided, forcing upon the user a disk overflow condition as opposed to giving up the option to revert to some minimal distance back in time.
- the engine generally assumes that writes are passed along to the engine, without re-ordering.
- an application writes A, B, and C to the pages of a file
- the engine eventually gets these three writes in the same order.
- an operating system is likely to use a cache that has the potential of re-ordering the writes.
- the prior writes of A, B, and C go into a cache.
- the cache is flushed, the pages are passed to the engine, but their order could be altered.
- the pages could come to the engine in the order B, C, and A.
- This ordering would not reflect the likely order of future read accesses, which is contrary to what is assumed by the engine. Therefore, when integrating the engine with an OS, the effects of its cache on write ordering should be understood. Appropriate steps should be taken to ensure that the order of writes reasonably predicts the future order of reads.
- the OS gets an unallocated page, puts new data in the page, then writes it to disk.
- the various directories and maps used by the OS may not even reflect, on disk, the page's change in status before the page gets written.
- the OS informs the engine that a page is allocated not simply by writing to it but also by including it in a set of allocations that should physically be mapped nearby each other. However, since this information is provided only periodically and in the background, it is likely that the data written to files is flushed before the update arrives. The act of writing to a de-allocated page is therefore not a problem, but rather the norm.
- the engine detects when a write occurs to this previously de-allocated page. Since the engine does not associate physical disk locations with these location keys specified by the OS, the engine does not interpret the write as overwriting any data at all. It simply fetches a new disk location that had contained very old historic data (from a CTMA block) and assigns it to the OS's location key.
- the second case relating to de-allocation is when the engine believes a given location key is not de-allocated when in fact it is. This situation by itself simply leads to the inability of the engine to make use of the page for storing historic data. Thus the user's reach back in time is reduced. However, this condition is resolved in the next update.
- a special monitoring program running under the OS looks for rapid de-allocations of significant disk space. If such is detected, the program can trigger an update, thus keeping the engine more closely synchronized.
- a delay in expanding the history buffer should not normally be of much concern.
- the next step in this scenario occurs when the page is allocated to a file and written.
- the engine thinks the page belongs to a certain file, when in fact it has been de-allocated, but then is re-allocated to perhaps a different file and written. Since the file identifier supplied (if any) along with the write is current, the engine will not incorrectly associate the newly written data with the old file (this is only important if writes are also occurring simultaneously to the old file). In fact, during the write process, the engine is not referring to any of the overall file information supplied during the last update. What the engine sees is that some data is being overwritten.
- the overwrite of a page that has been, without the engine's knowledge, deallocated and re-allocated to another file is handled much like the case where the page was simply modified within the same file.
- the overwritten page is made historic and the newly assigned location from the appropriate CTMA block takes over its role.
- the engine may choose to leave the data in the CTMA block, therefore adjusting the desired location map accordingly.
- the engine can seek to put the data back in the existing overwritten location.
- the desired location map would not change, the new location is considered only temporary (through re-mapping), and eventually a swap puts it back in its location as specified by the desired location map. This scenario is similar to what occurs in the Temp Method.
- if an overwrite's new data diversion is considered temporary, with a swap pending, waiting for the next OS update may yield an optimization.
- if an OS update occurs before background swapping, an adjustment to the swap can be made to avoid a double move: a first move placing the page in with the old file's data and a second move de-fragmenting the page, moving it near the new file's pages.
- if the engine learns before processing pending swaps that a page really belongs to a different file, it adjusts the pending swap to place the page with the new file.
- it is recommended that overwrites not be treated as a temporary diversion, as in the Temp Method, but as an attempt at placing the newly written data in an optimal location.
- the engine relies upon long-term de-fragmenting (based on the OS's updates) so that it can correct the situations where its adjacency assumptions are in error.
- the correction takes the form of setting up to swap the data back to its originally assigned location. Thus, at worst, establishing the swap and performing it are delayed. What is avoided is moving large blocks of overwritten data around when such does not lead to more optimized conditions.
- the engine modifies the desired location map to reflect what it hopes is a new optimal placement.
- the swapping mechanism borrowed from the Temp Method is thus utilized differently than with the Temp Method: it is not used to swap pages back to their original overwritten location. It is used, for example, in re-arranging the contents of blocks, facilitating their transition from one block type to another.
- the desired location map is a table of dmap entries, one for each location key.
- a dmap entry consists of a disk location field packed with a 3-bit type field, in typically four bytes. Since the desired location map is allocated twice so that changes can be made to a transitional version, each location key really requires eight bytes of desired location map support. If the disk's page size is 512 bytes, then the map is using 8 bytes per 512 or about 1.6% of the disk, which is reasonable.
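The packing of a dmap entry into four bytes might be sketched as follows, assuming a 29-bit location field alongside the 3-bit type (the exact field layout is not specified in the text):

```python
# Hypothetical dmap entry packing: a 3-bit type in the low bits and a
# 29-bit disk location above it, together fitting in one 32-bit value.
TYPE_BITS = 3
TYPE_MASK = (1 << TYPE_BITS) - 1
MAX_LOCATION = (1 << (32 - TYPE_BITS)) - 1   # 29 bits of location

def pack_dmap(location, dtype):
    """Pack a disk location and type into a single four-byte entry."""
    assert 0 <= dtype <= TYPE_MASK and 0 <= location <= MAX_LOCATION
    return (location << TYPE_BITS) | dtype

def unpack_dmap(entry):
    """Recover (location, type) from a packed entry."""
    return entry >> TYPE_BITS, entry & TYPE_MASK

entry = pack_dmap(123456, 5)
print(unpack_dmap(entry))  # -> (123456, 5)
```

With 512-byte pages, 29 bits of page locations address up to 256 gigabytes, comfortably covering the multi-gigabyte disks discussed in the text.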
- One dmap type indicates that the corresponding key location is de-allocated.
- One scenario might be that a swap was being done and the engine could not read some data. As the swap progresses the trouble spot gets re-written with new data and thus cures the condition.
- the dmap type can indicate it is re-mapping the location key in the main area. Note the main area map may again re-map this location. Also, incorporated into the type is adjacency information, which is discussed shortly.
- Caching of the desired location map will certainly cut down on the overhead. It has a density 64 times that of the data. In other words, an 8-byte dmap entry maps to 512 bytes of data, which are typical sizes. Thus 100k of cached mapping is covering 6.4 megabytes of disk. Access may tend to be in regions of the "disk" as viewed by the OS's allocations. This occurs because related files are allocated and de-allocated around the same time. Fragmentation may not be totally random and spread across the whole disk. Thus, in the prior example, if the required sections of the desired location map were cached, there would be a fivefold improvement in accessing the file. However, it takes time to build up caching and so initial accesses still are slow.
- a solution to the problem of having location keys that correspond to what should be nearby data spread throughout the desired location map is the use of an adjacency map.
- This map is built and saved in its own area at the time of an OS update.
- the map is simply a table that correlates location keys with their re-mapped locations.
- the corresponding entries in the desired location map cease to indicate remapped locations but instead link to the adjacency map.
- the engine scans the desired location map and the adjacency maps to ensure that allocations flagged to be adjacent still remain so.
- Overwriting data, which results in the overwritten data being placed (allocated by the engine) elsewhere, can alter what was a good situation.
- the desired adjacency may be lost. If a small amount of data is overwritten, then a file whose contents were actually allocated together may now physically be placed in different areas. This is corrected with some limited swapping. On the other hand, if an entire file is overwritten, then likely its new locations have maintained reasonable adjacency. In this case no swapping is required, which is the desired goal of the engine.
- an adjacency map adds even more to the disk space overhead of the engine. Eight bytes are typically required for each entry in the map (location key and re-mapped location). This is in addition to the corresponding eight bytes in the desired location map for each entry. Therefore each page has an overhead of 16 bytes, which must be doubled to 32 to account for the stable and transitional versions. Assuming a typical page size is 512 bytes, 6.25% of the disk could be used just in re-mapping. Selective use of adjacency maps, a different scheme to handle transitions, as well as possible packing, can lower the percentage. An alternative approach to adjacency maps is to have a means of re-sequencing a file's location keys.
- FIG 41 illustrates the general relationships between the maps.
- the Blocking Map is a table of pointers. Each entry in the table corresponds to a block of disk storage. A block is typically 100k bytes. It takes, for example, about 48,000 entries or 168k of RAM to map a four-gigabyte disk. Reserved values indicate main area (normal and direct), CTEX (normal and with unused pages), CTMA, unused, and overhead block types. Otherwise, one is dealing with an extra page area block.
- Its map value is a link to a header containing the block's historic page descriptors (HPD) and a link to the next such block in chronological order. An extra entry at the end of the table serves as the list header for the extra page blocks.
- HPD: historic page descriptors
- Note in Figure 41 the chronological linking is shown on top of the Blocking Map. This is an abstraction, as the links are, as just stated, in the headers.
- the Desired Location Map is a simple table of dmap entries. At eight bytes per 512 bytes of disk, a four-gigabyte disk's map is 64 megabytes, including the double allocation to facilitate safely transitioning to new stable versions. Portions of the map are read and cached on an as-needed basis.
- the map translates the OS's location keys (its version of disk locations) into the engine's re-mapped locations as well as directly or indirectly stores adjacency information supplied by the OS.
- An entry in the map indicates if a given location key is de-allocated by the OS, in which case it has no re-mapped location.
- the map may also indicate a page's mapping is found in another level of mapping, an adjacency map.
- if the map is implemented as a tree, with an implied no re-mapping for the areas covered by nodes that do not exist, the amount of disk space used for the map is likely reduced. It is perhaps not so important to save disk space as it is to improve performance.
- a special "main area direct" block type indicates that no re-mapping of its pages is required. Detecting this block type in the Block Map, which is in RAM, implies that large portions of the Desired Location Map never need to be loaded. Not only does this save time in reading the map, it also keeps these sections of the map out of the cache. The recovered cache space can then be used to map other areas. This enhancement is recommended.
- the downside to using a tree for the map is that one loses adjacency information.
- the Write Session Overwrite Map is a bit map that exists only in RAM. Each bit corresponds to a page on disk and indicates whether or not the page has been written during the current write session. It is used to avoid logging a page's original state prior to overwrite after the initial write. This implies that after the initial logging, subsequent writes in the same write session are directed to simply overwrite the existing location. It is recommended the map be blocked into sections that can be mapped anywhere on disk, so that a map in a limited amount of RAM can represent the disk's active areas. Should the map be of insufficient size to cover all active areas, information can be dropped, as it is not essential. This results in needless logging of original states, which, though harmless, reduces the user's reach back into the past. Completely mapping a four-gigabyte disk in RAM requires a megabyte.
- the In Use Map is a bit map that distinguishes between transitional and stable data. Its general concept is presented in the Temp Method section. All allocations subject to transitional processing are allocated in adjacent pairs. If a given data structure that is written as a single unit occupies more than one page, then all the pages for the first copy are grouped together followed by the pages for the second copy. The in-use status bit corresponding to the first page controls which of the two copies is indicated. Because of the double allocation, only one bit exists in the map for every two pages. To find a page's corresponding bit, simply divide the page location by two and use the result as a bit offset into the map.
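The bit lookup just described (divide the page location by two and use the result as a bit offset) can be sketched as:

```python
# Locating a page's in-use bit: pages are doubly allocated in adjacent
# pairs, so one bit covers each pair, at offset page_location // 2.
def in_use_bit(bitmap, page_location):
    """Return the in-use bit controlling the given page's pair."""
    bit = page_location // 2          # one bit per adjacent pair of pages
    byte, shift = bit // 8, bit % 8   # locate it within the byte array
    return (bitmap[byte] >> shift) & 1

bitmap = bytearray(1)
bitmap[0] |= 1 << 1                   # mark the pair at bit offset 1
print(in_use_bit(bitmap, 2))          # pages 2 and 3 share bit 1 -> 1
print(in_use_bit(bitmap, 3))          # -> 1 (same pair)
print(in_use_bit(bitmap, 0))          # -> 0
```

This also shows why the map needs only one bit per two pages: a four-gigabyte disk of 512-byte pages has eight million pairs, or one megabyte of bits, matching the figure in the Write Session Overwrite Map discussion.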
- the Adjacency Map addresses the problem of location keys that correspond to consecutive pages in a file being themselves spread across their numeric range. This results from the OS generating fragmented allocations and normally leads to the accessing of many desired location mapping pages when translating the spread-apart location key values to their associated physical disk locations. However, on the first access to the file, instead of the desired location map producing a re-map, it directs one to an adjacency map. This map is cached and first consulted upon subsequent accesses before returning to the desired location map. The adjacency map correlates location keys to their re-mapped disk locations, but is organized not by location key index but by the adjacency information provided by the OS.
- the adjacency map clusters re-mapping information according to file association, which is a good predictor of subsequent location key references. This minimizes the amount of mapping information actually read in order to process a series of accesses within a given file.
- the adjacency map consists of its table size and the table of location keys and re-mapped locations. The table size should be limited, as there is no substantial gain in having a very large table as compared to two independent tables. Adjacency maps can be discarded, with their mapping information re-incorporated into the desired location map, if space is scarce. In this case the OS can re-supply the information, should conditions change.
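The two-stage lookup through an adjacency map might be sketched as follows; the table structures, keys, and values are illustrative assumptions:

```python
# Hypothetical adjacency map lookup: a desired location map entry either
# holds a direct re-mapped location or links to a per-file adjacency map
# that clusters the re-mappings for one file together.
desired_location = {
    10: ("adjacency", "fileA"),   # entries for fileA link to its map
    11: ("adjacency", "fileA"),
    12: ("direct", 700),          # an ordinary re-mapped location
}
adjacency_maps = {"fileA": {10: 500, 11: 501}}  # clustered by file

def translate(key):
    """Translate an OS location key into a physical disk location."""
    kind, value = desired_location[key]
    if kind == "adjacency":               # second level: one cached table
        return adjacency_maps[value][key] # serves the whole file
    return value

print([translate(k) for k in (10, 11, 12)])  # -> [500, 501, 700]
```

Because all of a file's re-mappings live in one small table, a series of accesses within the file touches one cached map rather than many scattered desired location map pages.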
- the maps are of varying length and therefore a special overhead block "size" set is used for their allocation and management. If a new map is being formed and it references a location key that belongs to another, then it is assumed that this prior reference is obsolete, it is removed from the old map, and it is added to the new.
- the map would require 222 entries plus a length, or 1780 bytes. The map must be doubly allocated to deal with transitions.
- the Main Area Map addresses short-term re-mapping of pages. This re-mapping is below the level of the Desired Location Map.
- the workings of the Main Area Map are similar to that in the Temp Method. It is a tree, where if no remapping information is found for a given location, then no mapping is assumed. Background swapping resolves the mappings and thus the map is often empty.
- a mapping entry for a given location key (owner) consists of its actual location and the location whose contents are currently visiting the owner's spot on disk.
- Main area pages can be swapped with other main area pages or historic pages. In the case of swapping with another main area page, the Main Area Map contains the links supporting the swap. If a swap involves a historic page, the associated Historic Page Descriptor contains the links. If you consider all extra page area blocks collectively, then there is a Historic Page Map for all the pages in these blocks.
- This map consists of Historic Page Descriptors that indicate the original physical disk locations of associated historic pages. It also contains swap and return links that are utilized for short-term re-mappings. These links, along with those in the Main Area Map, generally work as described in the Temp Method. These three fields typically make for a descriptor size of 12 bytes (four bytes per field).
- since Historic Page Descriptors are only required for historic pages, and these are generally only found in an extra page block, a set of descriptors is allocated for its pages from the appropriate overhead block size set. These allocations are called Historic Page Map Segments and they exist in proportion to the amount of historic data in the system.
- Historic pages are also found in the transitional CTMA and CTEX block types, and thus these types also have associated map segments.
- the Delayed Move Map allows the engine to defer copying a page from one location to another. It is used, for example, to quickly effect a reversion.
- the map consists of entries each having a source field and a next link. See the Temp Method for more details. The map could grow, at 16 bytes per 512 bytes of disk data, to 128 megabytes for a four-gigabyte disk, but this is unlikely and in time the map is eliminated.
- the Figure 42 sequence illustrates writing to a file.
- the file is ten pages long and is progressively overwritten. Under the "operating system" heading are shown the contents of the file. They are in boxes with their corresponding location keys to the side.
- the example shows a somewhat fragmented file, as allocated by the OS.
- the desired location and main area maps are shown. Links in Figure 42A show the desired location map de-fragmenting the location keys. No temporary mapping is in effect for the main area.
- Under the "actual pages on disk” heading are the contents of the disk. Off to the left side are the associated physical disk locations. The contents are blocked and labeled.
- XUSE indicates an unused block
- EXTR is an extra page area block
- MAIN, CTMA, and CTEX indicate their respective block types.
- Figure 42A shows the initial state of the example.
- an overwrite of the file's first page occurs.
- the new data is routed to the current CTMA block.
- the block just filled with main area pages changes to a MAIN block type.
- a HPD notes the location of the overwritten data.
- the overwriting continues in Figure 42C in which a new CTMA block is started.
- CTMA blocks are allocated from the oldest extra page area blocks, but in this case there are some never-used blocks available.
- overwrites lead to two CTEX blocks.
- the File Method is one in which the functionality of saving prior states, such that one can restore or view data from the past, is incorporated into the OS.
- One way to accomplish this functionality in the OS is to merge the Always Method into the OS.
- the desired location and adjacency maps disappear, as they are incorporated into the OS's method of mapping its files.
- the engine's adjacency processing, which under the Always Method includes the periodic OS updates to the engine, evolves into the OS re-sequencing the disk locations assigned to a file. This de-fragmenting, with the associated page swapping, is accomplished through the background mechanisms in the engine.
- FIG. 43: The outer boxes are numbered frames where each frame corresponds to one or more major disk accesses. Inside are two columns of boxes. The column on the left represents a file. Each box contains a value for a page in the file. Off to the column's left are the disk locations (location keys) assigned by the OS. Notice that the locations fall into two groups, and thus the file is slightly fragmented in its allocation. The right column represents the physical disk, with the disk locations to the side. In the examples here, the file's contents are overwritten with the new values shown in the left column. This column corresponds to data in RAM. The arrows represent a major disk transfer with the source or destination on disk circled. A major disk transfer is one in which re-positioning of the disk head is likely.
- In Frame 1 the first part of the file is written to disk.
- Frame 2 shows the second part written. At this point the user is free to continue in their activities. Upcoming processes involve background work, in which case frames occur after the user continues working.
- Figure 44 illustrates the Move Method.
- each frame is added on the right side, making for two columns. These columns reflect the contents of the hard disk.
- the first of the two (left) represents the OS-visible area.
- the second (right) column is a history buffer visible only to the engine.
- Frame 1 the file is overwritten, in RAM at least, but before the hard disk is modified, the affected pages are moved into the history buffer.
- Frame 1 shows the reading of the data about to be overwritten and where it is eventually placed. However, for the moment the data goes into a buffer.
- Frame 2 shows the second area read and now both areas, having been loaded into a buffer, are written to the disk-based history buffer.
- Frames 3 and 4 then show the actual overwrites, after which the user can continue.
- the Temp Method is illustrated in Figure 45. Another column in each frame, associated with the hard disk's data, is added to represent a swap area on disk. As pages are exchanged on disk under the Temp Method, the data is stored in the swap area as a backup in case the system crashes before completing a swap. This ensures that it is not possible for the system to crash at some transition point where original states are lost. In Frame 1, all the newly written data is re-directed to the history buffer, leaving the original states unchanged. Updating various maps allows the user to continue after this point. Later on, in the background, the engine collects up all the data and exchanges it.
- the Temp Method has temporarily put the new data in the history buffer and left the now historic data in the normally OS-visible main area.
- Frame 2 shows the new data read into memory, which is eventually written to the swap area.
- Frames 3 and 4 show the file's original contents read. Having collected all the data involved in the swap, a backup of the data is written in Frame 4. The data are now written into their appropriate locations.
- Frame 5 shows the overwriting of the first part of the file, Frame 6 the second part, and Frame 7 the historic data. The maps at this point would also be updated, indicating that everything is in its place.
- the Divert Method can be thought of as the Temp Method where new data is written directly to the swap area. This would involve less total disk access than the Temp Method but has the unacceptable drawback that if more data is written than fits in the swap area, the method reverts to the Move Method. No figure is presented for it.
- In Figure 46 it is seen that a single frame suffices for the Always and File Methods.
- the file's new data is simply written to a single area on disk.
- the file's original data is located elsewhere and therefore remains available for recreating the past.
- the writes overwrite very old historic data whose tracking is no longer possible.
- Various updates to maps are also performed, but not shown.
- the File Method should be a bit more efficient than the Always Method, as the desired location map folds into the OS's normal mapping for its files.
- the Always and File Methods yield the best overall performance by sacrificing some disk space in mapping overhead.
- their read and write access throughput is similar to that when the OS directly accesses the disk.
- the Temp Method, from a user responsiveness viewpoint, performs just as well as the Always and File Methods.
- the Temp Method requires substantial background swapping. The swapping increases the overall total amount of disk access associated with a given write. But for the average user, as long as the added accesses are hidden, they are likely of no concern. Recall that there are other benefits and drawbacks to these methods outside the scope of disk access performance.
- the Temp, Always, and File Methods provide backup services without generally impacting the user-visible disk performance. This is measured by the time it takes a user to read and write data (listed in the "continue” column).
- the Move Method is straightforward but in its simplicity, it sacrifices the disk performance to which users are accustomed.
- a simulated disk allows a user to access data from the past, while at the same time continuing to run off their main disk (image).
- the expression "to run off a disk" commonly refers to the process of booting (starting up the OS) from disk. It is also the disk that applications are generally configured to use (e.g., an application may note that a file is at "C:\windows\example"). Note that the terms "disk" and "drive" are herein interchangeable.
- the simulated disk is typically accessed through its own drive identifier or letter.
- the simulated disk might just as well be another hard disk to which a backup was made at the desired time in the past.
- changes can be made to the simulated disk after its initial starting point time is set. Note there is no reason why more than one simulated disk cannot be in use at one time, each with its own map.
- a useful example of running off of a simulated disk is to provide the user with in effect two disks that share a common origin. This allows a parent to establish a drive for their child's use. Initially the drive starts as a copy of the main drive. However, the parent can then delete desired files, making them inaccessible to the children. Placing a cap on disk space allocable to the simulated drive limits any impact a child could have on the main disk and historic information. A password system protects the main disk.
- a problem in creating a long-term simulated disk is that changes to the main disk often require updates to the simulated map. This reduces throughput during the parent's use of the computer.
- One solution is to establish and release the simulated image each time a child wishes to use the computer. The parent specifies a list of private files and directories. These are automatically deleted during creation of the child's simulated image.
- the process of generating an external backup is enhanced by use of a simulated disk image.
- a user can establish a simulated image corresponding to the current time, start backing it up, and continue working.
- An entirely different approach to achieving an external backup is to have an external disk drive that, like the main disk, employs a method of saving original disk states.
- the information on the backup includes the historic information, allowing the backup to re-create a range of "backup" times.
- the external disk generally mirrors the main internal disk. This is how a RAID system generally works.
- an external drive that receives changes chronologically from the main drive is capable of restoring to any number of points in time.
- the external drive likely contains a safe point followed by the transitional changes just preceding the crash. Since the transitional changes are useless, as they are incomplete, one reverts to the safe point.
- the external backup process of the present invention differs from one in which the internal disk drive is simply copied onto another medium (e.g., disk or tape drive). Such a duplication is very time-consuming. Instead, the states of the external and internal drives are compared, and the appropriate historic and current image data is transferred, until both are synchronized. This transfer process is asynchronous to and can lag substantially behind recent changes to the current image. Therefore, it can be implemented on an inexpensive and relatively slow bus, for example a parallel printer port or USB port.
- the external disk can be removable. In the case of a portable computer, one may leave the external unit at work and bring the portable home. When it is re-attached to the external disk, the transfer of information begins. Thus, the removal of the portable for a period of time is simply introducing a "delay" in what is already a lagged transfer.
- the engine's ability to redirect disk activity, to reference back in time to prior states of a disk, and to perform work in the background all contribute to providing enhanced backup service.
- When a blank external disk is initially connected to operate under the management of the engine, the engine establishes a simulated disk set to the most recent safe point. This image is then transferred to the external drive. Next, all historic data from the period before the time to which the simulated disk is set is sent over. Both these processes are special in that they are setting up the external disk, and therefore writes are not re-directed and prior states are not saved. Once the external disk contains a current image (though likely out of date compared to the internal disk) and historic data, the external disk is ready for normal use.
- the engine seeks to synchronize it with the internal disk. This involves identifying the last point in the internal disk's history that corresponds to the most recently transferred information. If such a point does not exist, in that it has rolled off the end of the internal disk's history buffer, then the external disk is treated as blank and completely re-initialized. Otherwise, the engine walks forward through the internal disk's history, starting at the time associated with the simulated disk. The new state of each historic page is transferred down as basically a normal write to the external disk. Normal engine management of the external disk saves the data about to be overwritten and accepts the page's new value.
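The walk-forward synchronization just described can be sketched as follows. This is an illustrative model only; the names (`SimpleDisk`, `engine_write`, `synchronize`) are assumptions, not code from the patent:

```python
class SimpleDisk:
    """Toy engine-managed disk: each write saves the displaced value to history."""
    def __init__(self, image):
        self.image = dict(image)   # location -> current value
        self.history = []          # (location, displaced value), oldest first

    def engine_write(self, loc, value):
        self.history.append((loc, self.image.get(loc)))
        self.image[loc] = value

def synchronize(internal_log, last_synced, external):
    """Replay internal-disk writes made after index `last_synced` onto the
    external disk. Each replayed write goes through the external disk's
    normal engine management, which saves the overwritten state.
    `internal_log` is a chronological list of (location, new_value) writes."""
    for loc, value in internal_log[last_synced:]:
        external.engine_write(loc, value)
    return len(internal_log)       # new synchronization point

ext = SimpleDisk({1: "A", 2: "B"})
log = [(1, "a"), (2, "b")]
point = synchronize(log, 0, ext)
# ext.image is now {1: "a", 2: "b"}; ext.history holds the displaced "A", "B"
```

In the full scheme, a `last_synced` point that has rolled off the internal history buffer would instead trigger complete re-initialization of the external disk, as the text notes.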
- a page's new state is found either ahead in the history buffer or as part of the current image.
- the former case, involving the history buffer, arises when a given location is overwritten multiple times; thus its "new" state at some time in the past may not be the current state, but one in between.
- the engine is writing to the external disk in generally chronological order.
- FIG 47A illustrates disconnected internal and external drives. Each drive contains a current image and historic data. Initially the internal drive's four pages contain the values "A", "B", "C", and "D". The external drive is blank. In Figure 47B the values "a" and "b" are overwritten on "A" and "B", respectively. Thus, the original states move to the history buffer and the current image reflects the change.
- the external drive is then connected in Figure 47C.
- the engine responds by establishing a simulated disk based on the internal drive's current state (each write is assumed to also be a safe point).
- a dashed line in Figure 47C represents this time.
- the user has overwritten "C" with "c", thus displacing "C" to the history buffer. Note that this change occurred after the simulated disk was established, so it is not part of what initially gets sent over.
- Figure 47C also shows the simulated disk's image being transferred and written to the external disk.
- Frame 47E the user overwrites "D". Having gotten the simulated image across, the historic data prior to the simulated disk's reference time is sent. Notice that the result of the user's continuing activity during the synchronization process has led to a lesser amount of available historic data (i.e., "A" has rolled off the end of the buffer).
- Figure 47F shows the engine attempting to keep the two disks synchronized.
- the changes occurring after the simulated disk was established are sent over. This occurs in Frame 47G as normal writes under the engine, with the overwritten data moving to the external disk's history buffer. At this point the two disks have been synchronized. However, in Frame 47H, "E" is overwritten. The internal disk immediately reflects the change while the change's transfer to the external disk just begins. Some time later, Frame 47I shows the disks synchronized again.
- the concept of an external disk from the prior section can certainly be extended to include a disk interfaced to a target computer through a network.
- the network is simply a high-speed bus.
- the access to the external disk from the network generally requires an associated server controlling and actually performing the transfers to and from the disk.
- Because a server on a network can communicate with more than one PC, it follows that the server can independently maintain the OS visible disk image and historic states for a set of PCs. For example, a server with a 10 gigabyte disk could back up, over a network, four PCs having internal disks of 2, 3, 3, and 1 gigabytes in size (totaling 9 gigabytes; thus the server has at least as much storage as, and in this case more than, all the PCs together).
- each PC has an internal disk for which a portion represents OS visible data and the rest generally is historic (original states of overwritten OS visible data).
- the OS visible portion is typically bounded by the size of the PC's internal disk minus a minimum that is set aside for historic data (which could be zero).
- the server needs, for each PC, to have at least sufficient space for the OS visible portion of the PC's internal disk.
- the amount of additional disk allocated on the server to a given PC is used to hold historic data. If one views the external disk as simply a second copy of the PC's internal disk which lags behind in being updated, the two disks should be the same size. However, there is no reason the external disk cannot have more or less additional storage used for historic states as compared to that reserved on the internal disk. This implies the external disk may be able to reach further back in time in re-creating prior states, if it has more historic information, or not as far back if it has less.
- the server maps to its available disk storage (which may be one or more disks) areas to represent the OS visible portions of the PC disks to which it is backing up. It further assigns areas to save historic states for each backed up PC, whose sizes are independent of the storage committed to maintaining historic data on their respective PC's. Provisions in the PC's software would divert to and take advantage of an external disk that had more historic information than available on the internal disk, and whose access is desired.
- FIG 47G illustrates a set of PCs being backed up by a server. Note the figure shows data flowing from the PCs to the server, but data actually flows in both directions (e.g., when the "external disk” effectively represented on and by the server takes over the role of a PC's internal disk).
- Disk Controller or Server Based Firewall Protection relies on an engine running in a target computer to implement one of the described methods. Even in the case of using an external backup, in addition to the target computer's internal disk, read and write accesses to the external disk are still controlled by the engine (that runs in the target computer).
- the engine affords virus protection by allowing the user to restore all or part of the disk (main image) to an earlier time. However, this assumes the virus cannot get in between the engine and the disk. Should a virus directly access either the internal or external disks, the engine's data may be irreversibly corrupted.
- a method of protecting the disk and engine is to move appropriate portions of the engine's logic into the "disk,” as part of the disk controller.
- the read and write accesses that are passed to the disk (controller) correspond to what is generated by the OS (i.e., there is no engine doing re-mapping between the OS and disk controller).
- Mapping and re-direction occurs within the disk controller, with only the disk controller able to access the engine's internal data.
- a virus would then be unable to access and corrupt the historic data or the engine's internal data stored on the disk. Therefore, in this mode the user is truly provided security against a virus on the target computer.
- the only path left for a virus to attack a user's disk involves the virus overwriting so much data that the engine's ability to track changes over time is effectively lost. In other words, the virus writes so much data over and over again that the historic log fills with these changes, pushing out the memory of the pre-virus disk states.
- This window of vulnerability is addressed by allowing the engine to shut down a disk, should it appear that the disk is being excessively altered. This protects the historic data and therefore the ability of the user to revert a reasonable distance back in time. In the event the engine believes a shut down condition is forthcoming, it alerts the user and allows for a safe means of defeating or adjusting the conditions that force a shut down.
- a "safe means" is one in which a virus cannot pretend to be the user and defeat the shut down.
- the user could be required to press a button that directly interfaces to the engine, which is especially useful when the appropriate parts of the engine run inside the disk controller.
- Another "safe means” involves the user entering a password that is unknown to the target computer (before it is entered).
- Moving parts of the engine into the disk controller can be done on either or both the internal or external disk drives.
- If the external disk is implemented using a server on a network, such that parts of the engine execute on the server's local processor (the server does not allow the PC to directly alter the engine's internal data), firewall protection is achieved. Therefore, firewall protection can be achieved using commonly available PCs and servers, without hardware modification, by adding the appropriate engine software to both.
- the firewall does not prevent a virus from getting into a PC and interfering with the nature of the data written to, and through, the firewall and then onto the disk. It is hoped that a user detects the presence of a virus and has sufficient ability to revert a disk back in time to before the virus struck. The firewall is protecting the user's ability to revert. Should a virus infect and corrupt data over an extended period of time, beyond the ability of saved historic data to revert, then the virus will have succeeded.
- a general solution is to build on the engine's ability to revert the disk back in time. If snapshots of the RAM used by the application are periodically taken at moments in time after a safe point is established but before any further disk modifications, then it is possible to restore both the disk and application (RAM) to a synchronized and earlier time. These snapshots may also include the OS's RAM (or a portion of it), at which point the entire computer, OS and all, can be reverted. Some care must be taken when restarting from an earlier time to ensure that devices other than the disk and RAM are reasonably re-started — for example, a printer, the video card, or a network connection.
- RAM snapshots may be taken at either fixed intervals and/or after a certain amount of user activity (e.g., keystrokes or mouse activity). Compression of a snapshot reduces memory requirements.
- the intention of performing work in the background is to not interfere with the user.
- the best method involves detecting user activity and ceasing all background activity until a reasonable period elapses after the last user activity. Thus, while the user is even slightly active, no background processing occurs.
- the engine can temporarily divert writes to alternate locations. It can also delay copying various pages by using pointers. In the background the engine works out the swaps, putting the data in their desired locations, as well as the delayed moves. It is the job of low-level swap processing to queue up a sequence of swap and move submissions and execute them as a block, in a time-optimized and crash-proof manner.
- the low-level swap and delayed move map processing in the swap handler is the gatekeeper to the user's data. Since any exchange of data must be appropriately reflected in the maps, the swap handler effectively performs two steps simultaneously: moving data and updating the maps. This is important because there is always the chance of a crash mid-process.
- Prior to calling the swap handler, all desired map changes are made to the transitional version. The associated user data moves are queued up. All of this is then passed to the swap handler, which completes the operation. The user data is moved and then the transitional version is made stable in a final single write to the switch page. Once the swap handler has processed a request up to the point of altering user data, the request becomes irrevocable.
- the sequence in Figure 48 illustrates a simple case of swapping two sets of three pages.
- Figure 48A shows the state just before the swap handler goes to work.
- the pages to swap have been submitted as well as the corresponding map changes implemented in the transitional copy of the engine's internal data.
- a second submission to swap B and C modifies where the data from the first submission really winds up. In this particular case, if you read A, B, and C into memory, you would write A to C's old location, B goes to A's old location, and C goes to B's old location.
- Figure 49 illustrates three swap submissions, each involving three specific page swaps. It shows the simple approach of making a list of all the locations involved in a swap handler request, and sorting them into read and write passes.
- the algorithm to form the sorted read list is straightforward. Take all page locations and sort them, tossing any duplicates. Of course, the write locations are the same as the read locations.
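The sorted read-list construction just described can be sketched in a few lines. This is an illustrative rendering, not the patent's code; the function name and the page-pair representation are assumptions:

```python
def sorted_read_list(swaps):
    """Collect every page location named in the queued swaps, drop
    duplicates, and sort. swaps: list of (loc_a, loc_b) pairs.
    The write pass visits the same sorted locations."""
    locations = set()
    for a, b in swaps:
        locations.add(a)
        locations.add(b)
    return sorted(locations)

print(sorted_read_list([(30, 5), (12, 5), (7, 30)]))  # [5, 7, 12, 30]
```

Sorting both passes means the disk head sweeps each set of locations in one direction, which is the point of forming the list at all.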
- the issue is to reorder the pages in memory so as to correspond to where they are being swapped. Basically you walk down the list of swaps and process the left and then right side, as long as their locations have not already been processed. For each side you initially assume its corresponding swap location is that specified on the other side.
- Figure 50 shows how this algorithm carries out the swap in the second column of Figure 49.
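The walk-down described above, where each side's destination starts as the other side and is then chased through later swaps, can be sketched as follows. This is a hypothetical rendering (the function name and pair representation are assumptions), shown against the example of swapping A with B and then B with C:

```python
def final_destinations(swaps):
    """For each location appearing in the swap list, compute where its
    original data finally lands after the swaps are applied in order."""
    result = {}
    processed = set()
    for i, (a, b) in enumerate(swaps):
        for side, other in ((a, b), (b, a)):
            if side in processed:
                continue                    # location already handled earlier
            processed.add(side)
            dest = other                    # initial guess: the other side
            for x, y in swaps[i + 1:]:      # chase through later swaps
                if dest == x:
                    dest = y
                elif dest == y:
                    dest = x
            result[side] = dest
    return result

# Example from the text: swap A<->B, then swap B<->C.
print(final_destinations([("A", "B"), ("B", "C")]))
# {'A': 'C', 'B': 'A', 'C': 'B'}
```

This matches the worked case above: A's data winds up in C's old location, B's in A's, and C's in B's.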
- Swap and move submissions are submitted to a pre-swap setup routine. Here they are run through the delayed move map, the map is adjusted, and any associated move operations are added. The operations are accumulated until a limit has been passed or they are flushed if a timeout occurs. There are two limiting factors as to the total number of pages that can be swapped in one operation. They are a function of the swap area's size (and RAM buffer) and the number of different faraway areas accessed on disk.
- the area limit arises in order to control the worst case duration of a swap request. If a disk seek takes 10ms and two large areas of 100 pages each are swapped, the seek time is on the order of 2 visits (read+write) * 2 areas * 10ms, or 40ms. The transfer time at one megabyte per second is on the order of 100ms. With everything accounted for, the total time is easily under a second. However, if each page required a seek to a different area on disk, the seek time by itself is on the order of 2 visits * 200 areas * 10ms, or 4 seconds. This is a long time to wait for a background operation to complete. The time is controlled by limiting the number of different areas that are visited in a given swap handler request.
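The seek-budget arithmetic above can be captured in a tiny model. The constants and function names here are assumptions for illustration, using the figures stated in the text (10ms per seek, one read visit and one write visit per area):

```python
SEEK_MS = 10   # assumed cost of one head re-positioning
VISITS = 2     # each area is visited once for the read pass, once for the write

def seek_time_ms(areas):
    """Worst-case seek time for a swap request touching `areas` distinct areas."""
    return VISITS * areas * SEEK_MS

print(seek_time_ms(2))    # 40  -> two large areas, as in the text
print(seek_time_ms(200))  # 4000 -> every page in its own area

def max_areas(budget_ms):
    """Largest area count that keeps total seeking within a time budget."""
    return budget_ms // (VISITS * SEEK_MS)

print(max_areas(1000))    # 50 areas for a one-second seek budget
```

Capping the area count per request, rather than the page count alone, is what bounds the background operation's duration regardless of how fragmented the submissions happen to be.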
- a swap (or move) submission has the form: do_swap A location, B location, A_to_B_only
- transitional state is made stable. However, it is also understood that this step may be delayed in order to allow multiple submissions to accumulate and be processed together. In other words, small transitional steps are accumulated into a larger transitional step. Although this increases the chance of losing the larger transitional step (more time available to crash) all the work is cleanup and does not involve any user information — i.e., the work can be re-created.
- each new do_swap submission has its two swap locations run through the delayed move map. If one is found to have a read-side mapping then the true location from which to fetch the data is updated. As part of processing a read-side mapping, the mapping entry itself is deleted (from the delayed move map) since as part of the swap, the location gets overwritten. On the other hand, if it is a write-side mapping that is found then the other pages whose reads are being diverted to this page must have the page's data put in place. Therefore, one cycles through the write-side entry's link list and adds the appropriate moves to the swap request. Note that they all share a common source: A to B, A to C, A to D, etc.
- the write-side and associated read-side entries are then deleted from the map.
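The handling of a do_swap submission against the delayed move map, as described in the preceding two items, can be sketched as follows. This is an assumed model, not the patent's code: `read_side` maps a location to where its data should actually be fetched, and `write_side` maps a source location to the list of locations whose reads are diverted to it:

```python
def preprocess_swap(a_loc, b_loc, read_side, write_side):
    """Run one do_swap submission's two locations through the delayed move
    map. Returns the swap pair with true read locations, plus any move
    operations (common source) that must join the swap request."""
    moves = []
    true_locs = []
    for loc in (a_loc, b_loc):
        if loc in read_side:
            # read-side mapping: fetch from the mapped location; the entry
            # dies because this location is about to be overwritten
            true_locs.append(read_side.pop(loc))
        else:
            true_locs.append(loc)
        if loc in write_side:
            # write-side mapping: pages whose reads divert here need this
            # page's data put in place -- add moves A->B, A->C, ...
            for dest in write_side.pop(loc):
                moves.append((loc, dest))
                read_side.pop(dest, None)   # delete the matching read-side entry
    return (true_locs[0], true_locs[1]), moves

# Example: reads of C and E are diverted to B, so swapping B forces moves.
rs = {"C": "B", "E": "B"}
ws = {"B": ["C", "E"]}
pair, extra_moves = preprocess_swap("B", "X", rs, ws)
# pair == ("B", "X"); extra_moves == [("B", "C"), ("B", "E")]; maps now empty
```

Note the added moves all share a common source, exactly the A-to-B, A-to-C, A-to-D pattern the text describes.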
- the same page may be read as a source for different writes.
- there can be more than one "read" of a given page, although in practice a single read gets routed to multiple places.
- the locations in the read table reflect any possible delayed move map processing. In other words, they are the actual versus the original stated locations. Note that only locations being read are redirected. The delayed move map does not redirect write locations.
- A_loc and B_loc are added to both the read and write tables. Although one cannot say much at this time about what data is actually going to be read and written, one can identify the locations affected (areas) by essentially ORing all the locations.
- A_loc is added to the read table and B_loc to the write. An addition to the read table is ignored if the specified location has previously been written as the destination of a move. If this write was part of a swap, then an associated read would also have been processed and the addition ignored, as it is already present in the table.
- If the write was the destination of a previous move, then the location does not need to be read. For example, if A is moved to B, and then B is swapped with C, the original value of B is not part of what gets written and so does not need to be read. Thus only the right side of move submissions need be checked.
- processing then advances toward setting up the swap handler request.
- the next major step is reading the indicated data into memory and establishing a mapping table that takes a read index into the collective data read and produces the associated write page index.
- the write index indicates where the page belongs in the collective data represented by the write area table.
- the total size of the read data may be smaller than that which gets written. This is because some pages that are read should be duplicated in the write data.
- the method for creating the read-to-write index map is to essentially use the previously discussed final destination algorithm that cycles through all the stated read locations. Some changes are required to deal with move submissions and duplication.
- the example in Figure 53 demonstrates the process of determining what is read and where it eventually gets written.
- the <--> symbol indicates a swap and --> indicates a move.
- the final read and write data patterns are shown, as worked out by hand, with only bold letters part of the read and write set.
- the final destination algorithm creates the read-to-write index map.
- the algorithm cycles through all the swap and move submissions and determines where each read location will finally be written.
- the read and write locations are then converted to page indices in the read and write areas, and the read-to-write map updated. Tracking information is updated in the source (left) side of move submission when such is encountered.
- a move submission represents a forking of the source. Since the algorithm cycles through all submissions, and for each cycles through the remaining submissions, its performance is modeled as n+(n-1)+(n-2)+...+(n-(n-1)), or of the nature n². This is not particularly good. There can easily be 100 submissions. The algorithm's performance is greatly improved by linking all like locations together to eliminate much of the scanning. The algorithm is then on the order of n.
- Figure 54 illustrates the building of the read-to-write map. Notice that all locations get updated once in the map, as well as in the read data and the write data arrays. The end result matches that previously determined by hand in Figure 53.
- the read-to-write map provides the means for reordering the extended read data into write data.
- the write data is written to the swap area.
- the switch page is updated to reflect where data will be written in case the system crashes before the operation's completion, so that the operation can be re-started.
- the algorithm shown in Figure 55A reorders the read data. It involves the use of two temporary page buffers through which a displaced page shifts.
- a write_data_order array indicates for each page whether it is in read data or write data order. Initially the array is false.
- the algorithm starts at the top of the write_data_order array and searches for a page not yet in write order. When found, the read-to-write map is consulted to determine where the page really belongs.
- the reorder algorithm can be optimized to eliminate shifting pages through a temporary page. Basically the presented algorithm is run backwards. The data for the initial page that would be written is held in a temporary buffer. The moves are then performed until cycling back to the final location, conesponding to the temporary buffer's data. After moving out the final location's data the temporary buffer is moved in.
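The optimized, cycle-following variant just described can be sketched as below. This is an illustrative version under the simplifying assumption that the read-to-write map is a true permutation (no duplicated pages); names are hypothetical:

```python
def reorder(pages, read_to_write):
    """Reorder `pages` (a list of page buffers in read order) in place into
    write order. read_to_write[i] gives the write-order slot for read page i.
    Each closed loop is chased with one displaced page held in a temp buffer."""
    done = [False] * len(pages)
    for start in range(len(pages)):
        if done[start]:
            continue
        temp = pages[start]          # hold the first displaced page
        i = start
        while True:
            dest = read_to_write[i]
            done[i] = True
            if dest == start:        # cycled back to the final location
                pages[dest] = temp   # move the temp buffer in
                break
            temp, pages[dest] = pages[dest], temp
            i = dest
    return pages

print(reorder(["A", "B", "C"], [2, 0, 1]))  # ['B', 'C', 'A']
```

Each page is written exactly once per closed loop, which is the saving over shifting every displaced page through a temporary page.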
- Figure 56 illustrates the execution of the reorder algorithm on the current example (started in Figure 53).
- Two closed loops are processed.
- the processing of the second closed loop shows a write of "H" occurring over an existing "H” (circled).
- the overwritten location is a duplicate page and its location assignment is arbitrary. This is an unnecessary overwrite that arises because pages are duplicated yet treated as independent. Optimization could look for such overwrites and adjust the read-to-write map to eliminate them, but the effort is not likely worthwhile. Duplications occur from move submissions that originate from reverting the disk, but this does not occur often.
- An example of where the delayed move map and swap processes combine is the situation involving two swaps where two of the locations are mapped elsewhere to a common location. More specifically, take the case where A is swapped with B and C with D, but where A and C are both mapped to R for the purposes of reading (via the delayed move map).
- the read areas are R, B, and D.
- the location R is duplicated in the swap area and then A, B, C, and D written.
- Figure 57 is based on Figure 26J taken from an example in the Reversion and the Delayed Move Map Section. In this other section the swaps are shown one at a time. Figure 57 illustrates the same outcome as in Figure 26M, except that all the swaps are done in a single swap handler request (note HI, H2, and H3 are all the same). The delayed move map before the swap redirects reads of locations C and E to B. The swap submissions in Figure 57 are constructed by following the swaps from Figure 26J onward (everything is swapping through location A).
- the final destination algorithm is of the order n².
- the algorithm needs to know whether a given side in a new submission has been the destination of a move.
- the resulting scans are also of the order n². Both algorithms are reduced to the order of n by use of indices and linking.
- Every disk location is run through a hash header table and a list of collisions followed until a match is found (or new entry is added).
- the located entry identifies an index for the location. This index identifies a table entry in a table of headers.
- the index's table entry identifies the first occurrence in the submission table of the associated location. It also contains a flag that is set if the location is the destination of a move. This flag replaces scanning, and the read-to-write index map algorithm can follow relatively short lists. Left and right link fields are added to the submission table to support the linking. See Figure 58.
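The indexing structure described above can be sketched as follows. This is a hypothetical model: a Python dict stands in for the hash header table and its collision lists, and the header record carries the first-occurrence number and the destination-of-move flag that replace the O(n²) scans:

```python
class LocationIndex:
    def __init__(self):
        self.by_location = {}   # location -> index (dict plays the role of
                                # the hash header table + collision lists)
        self.headers = []       # index -> first occurrence + moved-to flag

    def lookup(self, location, submission_no):
        """Return the index for a location, adding a new entry if needed."""
        idx = self.by_location.get(location)
        if idx is None:
            idx = len(self.headers)
            self.by_location[location] = idx
            self.headers.append({"first": submission_no, "moved_to": False})
        return idx

    def mark_move_destination(self, location, submission_no):
        self.headers[self.lookup(location, submission_no)]["moved_to"] = True

    def is_move_destination(self, location):
        """O(1) flag check, replacing a scan of all prior submissions."""
        idx = self.by_location.get(location)
        return idx is not None and self.headers[idx]["moved_to"]
```

With left and right link fields threaded through the submission table (not modeled here), the read-to-write map algorithm follows these short per-location lists instead of rescanning every submission.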
- a user's read request is immediately handled while in the middle of a swap request.
- Although the engine must complete the swap request, which may take some time, it can pause to process a few of the user's reads.
- the effective locations for the reads are determined using the transitional maps and then a check is made to see if the page is affected by the current swap request. If not, the read is passed along; otherwise it is redirected appropriately.
- a read request of a page involved in the swap is handled differently. If the read comes while the handler is collecting up (reading) the data involved in the swap then the read is directed to the pre-swapped location. The read location is based on the transitional maps that assume the swap is complete. However, since none of the data being swapped is in its proper place, the read location is re-directed to its pre-swap location. The other stage to handle is after all the data is gathered and written to the swap area. At this point the swap handler begins writing data to their appropriate locations. However, until this process is complete, the affected locations are basically in transition.
- a read location is re-directed to a location in the swap area that holds a copy of the page that will eventually be written to the read location.
- If the swap area is held in memory, one could also simply pass back the data and skip the actual disk read.
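The two redirection stages above can be sketched as a small dispatch routine. This is an illustrative sketch, not the patent's code; the stage names, map arguments, and return convention are all assumptions.

```python
# Stages of an in-flight swap request: collecting the swapped data,
# then writing it back to its new locations.
GATHERING, WRITING_BACK = "gathering", "writing_back"

def effective_read_location(page, swap_pages, stage, pre_swap, swap_area):
    """Decide where a user's read of 'page' should be served from.

    swap_pages: pages touched by the current swap request
    pre_swap:   swapped page -> its pre-swap disk location
    swap_area:  swapped page -> swap-area copy of its eventual contents
    """
    if page not in swap_pages:
        return ("disk", page)              # unaffected: pass the read along
    if stage == GATHERING:
        return ("disk", pre_swap[page])    # data has not actually moved yet
    return ("swap_area", swap_area[page])  # locations are in transition
```

During the gather stage the transitional maps assume the swap is complete, so reads of involved pages must be bounced back to their pre-swap locations; during write-back they are served from the swap-area copies.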
- Although the engine attempts to immediately process any user's read, it does not allow a continuous stream of reads to hold off the completion of the swap request. This would cause an indefinite delay of the transition to the new stable image. After a maximum delay is exceeded, the swap request takes precedence. If a write request occurs then the operating system waits until the swap request completes. This should not have a serious effect on user response.
- the reasoning is that foreground activity is detected during the initial period when new writes are going to the operating system's cache (but not yet to the engine). Thus the engine gets some advance notice of the actual write (when the cache is flushed or overflows) during which time it completes the current swap handler request. Swapping is in general an optimization that is performed in the background.
- the engine may pause (stop accepting requests) so that it can complete the current swap request.
- the act of the user writing data prevents the engine from rapidly responding to any read request that follows. For example, take the situation where an application writes a small amount of data, pauses, and then reads some data. During the pause the operating system flushes the write, passing it to the engine. If the write were to immediately complete, the application's read would follow. However, the engine is busy finishing up background work (swap request) before working on the write. The write must complete before the read is processed. The user waits as shown in Figure 59.
- This response delay is avoided by either of two techniques.
- the OS can query the state of the engine before starting to flush its cache, and delay if the engine is in the middle of a swap handler request. During this wait the OS informs the engine that there is pending foreground activity so that the engine quickly wraps up its background work and allows the processing of writes. While waiting for the engine to become ready, the OS allows the application to generate read requests that are immediately passed along to the engine (before the flushing). Since the engine can interrupt its background processing to handle a read, the user response is optimal. This solution assumes a modification to the operating system's cache flushing process. See Figure 60.
- the second technique is to simply have the time period before the engine begins its background work longer than that which the operating system waits before flushing its cache; in other words, make sure the engine's background activity occurs after the OS's flush.
- the advantage of the first technique is that it could use the time before the flushing of the cache for engine background activity.
- the second technique is implemented without OS modifications. In the end this raises the question of how long and why should the OS delay before flushing its cache. The general reason would seem to be that it improves user responsiveness. By waiting there is no process to complete, even if called off early (i.e., only part of the entire cache is flushed), and so response improves. See the "A Nice Background" Section.
- a user may be unable to boot their computer due to corruption of the disk's data. For example, a virus could have corrupted files needed in order to start, or the user installed a new software driver that interferes with normal operation. Assuming one of the engines had been in use, it is easy to revert the disk to an earlier time — for example, to a day ago. (One may wonder how it is possible to start a computer in order to request its disk be reverted, when the problem is that the computer will not start. The answer is, although it is not possible to fully start the computer from the hard disk, the engine has been protecting its own ability to boot into the computer's memory.
- the engine can intervene before attempting to fully start the OS and revert to a time at which the system could fully start.)
- the user is faced with a new problem.
- the computer has returned to its state as of a day ago.
- the work performed since that time no longer appears on the disk (main area).
- all the differences between a day ago and when the computer ceased to boot were generally saved in the history buffer as part of the reversion. Therefore, the recent work is not really lost.
- the problem is that a user does not want to bring all the historic information forward to the present, as this is what led to the computer's being unable to start (crash). Instead, selective retrieval is desired.
- the engine logs the names, directory locations, and time-of-access of all files that are altered. Therefore, after recovering from a crash, the engine can establish a list of the files altered during the period between the reversion and crash (recovery period). The user can then select from this list specific files to recover. In response the engine, through a simulated drive, goes back to the appropriate time and copies forward the specified files to the current image. In this way files are rescued. The presented files are sorted with only the most recent version listed. This reduces the volume of information presented to the user. Filtering of non-user files can further reduce the list.
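The list-building step above can be sketched in a few lines. This is a minimal illustration under assumed formats: the access log is taken to be a list of (time, path) records, and the function and parameter names are hypothetical.

```python
def rescue_list(log, revert_time, crash_time, is_user_file=lambda path: True):
    """Build the list of files altered during the recovery period
    (after revert_time, up to crash_time), keeping only the most
    recent version of each file and filtering out non-user files."""
    latest = {}
    for t, path in log:
        if revert_time < t <= crash_time and is_user_file(path):
            if path not in latest or t > latest[path]:
                latest[path] = t           # keep only the newest version
    # sort most recent first for presentation to the user
    return sorted(latest.items(), key=lambda kv: kv[1], reverse=True)
```

The same dictionary of paths could instead be grouped by directory to build the browsable tree presentation mentioned next.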
- An alternative form of presentation creates a directory tree containing directory and file entries corresponding only to files that were altered during the recovery period. The user can browse the tree and select files for recovery in a manner similar to that done using the Microsoft Windows Explorer.
- the start and end times of the recovery period do not change.
- the associated list of files is also stable, for as long as the referenced historic information is available. This is important, in that the user expects any files recovered through this mechanism to reference only files altered during the recovery period. For example, assume the user has re-started their computer, read in a certain word processing document, made and saved a few changes, but then realized that they wanted the version "lost" in the recovery period. When viewing the files that can be "recovered," it would be confusing to include versions created after the reversion.
- the file rescue process involves identifying a set of files that were altered prior to a reversion, but after the time to which the reversion is done.
- This list remains generally stable and provides the means for the user to select (for recovery) files that were altered during this period.
- Presentation of the list can involve sorting, filtering, and tree structures (hierarchies).
- Embodiments of the present invention are applicable to all types of computer systems that utilize one or more hard disks, where the disks represent a non- volatile storage system or systems.
- Such types of computers may be, but are not limited to, personal computers, network servers, file servers, or mainframes.
- Figure 61 illustrates an exemplary personal computer 100 on which the present invention can be implemented.
- the exemplary personal computer as shown in Figure 61, includes a monitor 110, a keyboard 112, a central processing unit 113, and a hard disk 114.
- Figure 62 further illustrates the various embodiments of the invention.
- the invention, and in particular the "engines" described herein, can be implemented in software and stored in computer readable form on various carrier media such as floppy disks 116, CD-ROM 118, permanent or temporary memory 120 or as an electronic data transmission 122, in addition to being stored on hard disk 114.
- the software of the present invention for implementing the various computer-implemented embodiments described above is, in one exemplary form, distributed on a carrier media such as a floppy disk 116, CD-ROM 118 or by data transmission 122, and installed on the hard drive of a computer, such as, but not by way of limitation, an IBM-compatible personal computer.
- the hard drive of the IBM-compatible computer also has installed on it a copy of the Windows™ Operating System (Version 3.1 or later, including Windows 95™), available from Microsoft Corporation, for performing the operating system functions for the computer.
- the software of the various embodiments of the invention may be adapted for use on the Macintosh™ computer system, available from Apple Computer, Inc.
- these example embodiments in no way should be taken as limiting the computer platforms on which the invention may be applied.
- Most personal computers at their core consist of a main processing unit (e.g., an Intel Pentium), RAM, and a hard disk.
- a key concern is protecting the integrity of the data stored on the hard disk.
- the conventional method is to make backups, copying all or key data from the hard disk to another medium.
- Various reverting methods have been described above that provide for the ability to recover altered information. These provide an enhanced means of protecting against data loss wherein the user is not required to stop and make a backup at some predetermined time. By themselves, these reverting methods store their recovery information along with the current user's data on the same disk.
- a method of establishing a second external disk in which changes to the main disk are duplicated has also been described above. This adds a level of hardware redundancy.
- the main processing unit already has sufficient RAM, processing horsepower, and time to perform the activities of a reverting method.
- it is susceptible to bugs and viruses. Therefore, a method is described of how to establish a firewall between the key elements of a reverting method and the rest of the system, without requiring significant new hardware.
- the key technique is to isolate through foolproof means a portion of the main processor's RAM as well as the interface to the hard disk from what is normally accessible by the main processor. There is no need to control access to ROM (read-only memory) since it cannot be changed. Access by the main processing unit to protected resources is generally disabled.
- when the main processor executes a certain sequence of instructions, access to the protected resources is enabled and the main processor begins executing code at a predetermined location in the protected RAM or ROM.
- interrupts are generally disabled to prevent the main processor from diverting to unknown code.
- the concept of transferring program control to a predetermined location is a form of a gate. Before passing through the gate, access to protected resources is disabled. Once through the gate, access to the protected resources is enabled. The transfer of program control through a gate (or gates) is detected by hardware ("Gate Monitor") which then enables access to the protected resources.
- a malicious or out of control program may jump into the middle of code (ROM) that is part of the code that normally executes after passing through a gate. This can lead to attempts to access protected resources from code that normally does such accesses, but that was entered improperly (i.e., in an uncontrolled manner). Since control did not flow to this code through a gate, the Gate Monitor did not enable access to the protected resources. Thus no harm results: the disk interface cannot be accessed or the reverting method's RAM altered. Presumably, the operating system eventually aborts the offending task.
- Control passes to the core reverting method's code ("Driver") by setting various parameters in the main processor's registers (or RAM) and triggering an external interrupt (for example, by writing to an I/O port or certain memory location).
- When the Driver completes its operation, it disables access to the protected resources and allows the main processor to resume normal unprotected execution. Such cases arise both in servicing requests to access the disk as well as from within the Driver when allowing the servicing of interrupts.
- the latter case could be implemented by, from within the Driver, periodically branching to code that closes the gate (disables access to protected resources), enables interrupts (allowing their servicing), then falls back through a re-entry gate. This gate disables interrupts again and returns to processing the current request.
- encryption and validation of any new software (code) that is to replace all or part of the current Driver prevents the Driver's corruption.
- the hard disk or disks under the control of the Driver may be either internal or external to the computer. Interfacing from the main processor to a disk is typically done using a bus, of which some examples are IDE, SCSI, and USB. Adding a physical switch that is accessible to the user of a computer provides a means for the user to signal to the Driver that it is OK to perform an unrecoverable operation. Examples of such operations are the total clearing of historic information and the discarding of historic information required to restore back to some minimum distance in time. In the latter example, a virus might attempt to write so much new data that the ability to restore to, say, a day ago, is going to be lost.
- If the Driver simply queries the user (through the OS) as to whether this is acceptable, the virus could intercept the query and respond positively without ever informing the user.
- the Driver can validate that the response to its query is in fact from the user. This switch can take the form of a key press as long as the Driver has direct access to the keyboard controller (i.e., a virus cannot fake the response).
- Figure 63 illustrates a typical personal computer's internal architecture. Notice that accessing the disk is possible by any software that is appropriately loaded into main memory. In Figure 64, access to the disk is only possible by passing through a gate. Once the main processor passes through this gate, it is presumably executing an uncorrupted version of an engine which provides access to the disk.
- the Driver's RAM and the general RAM are typically implemented using the same system of memory chips. However, access to the locations reserved for the Driver's RAM is made conditionally depending on whether the Gate Monitor is allowing access to protected resources. Should an access occur to the Driver's RAM (or other protected resource) when such is not allowed, the access is ignored. A system fault may also be generated.
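The Gate Monitor's behavior can be modeled as a toy sketch. The real mechanism is hardware, so this Python model is purely illustrative; the class and method names are assumptions.

```python
class GateMonitor:
    """Toy software model of the Gate Monitor: protected resources are
    only accessible after control passes through the gate, and access
    is dropped again when the gate closes (e.g., before enabling
    interrupts from within the Driver)."""

    def __init__(self):
        self.protected_access = False

    def enter_gate(self):
        self.protected_access = True    # control reached the gate entry point

    def close_gate(self):
        self.protected_access = False   # e.g., before servicing interrupts

    def access_protected_ram(self, op):
        if not self.protected_access:
            return None  # access is ignored (a system fault may also be raised)
        return op()
```

Code entered improperly (without passing through the gate) finds every protected access ignored, which is the firewall property described above.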
- the Driver could execute in the main processor with the external disk on a similar bus to the internal disk. In this case the Driver directly controls the transferring of information to and from the disk.
- An alternate implementation incorporates the Driver into the external disk controller. Here, the Driver receives requests through the disk interface. The difference between these two cases is on which side of the disk interface the Driver lies. This is illustrated in Figure 65.
- A firewall provides a clean separation between the computer and the external disk so that malicious or otherwise badly executing code cannot corrupt the Driver's working and non-volatile storage. Firewall protection allows the Driver to validate requests from the computer.
- Placing a Driver, which maintains and protects historic disk sector states, in a disk controller creates a firewall.
- This Driver records all or portions of altered files (instead of disk sectors).
- the protocol to a file level Driver would be similar to that of a network file server. However, this "server" only services one computer and also maintains historic states.
- the external disk can also be substantially implemented as or supplemented by a tape drive.
- a tape drive has the same basic properties as a disk drive, except that access to non-sequential storage blocks is impractical on a frequent basis. If the data sent to the external "disk" is, instead or in addition, written sequentially to a tape, it is possible to use such a tape to recover data from a given state associated with a given time that was captured on the tape.
- the base image is restored and all the time ordered changes are read and applied to this image up to a desired point in time.
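This first recovery mode can be sketched as a simple replay. The record format, a time-sorted list of (time, location, data) changes, and the function name are assumptions for illustration.

```python
def restore_to_time(base_image, change_log, target_time):
    """Restore the base image and apply the time-ordered changes read
    from tape, up to the desired point in time.

    base_image: location -> data, the full image at the start of the window
    change_log: time-sorted list of (time, location, data) change records
    """
    image = dict(base_image)   # start from a copy of the restored base image
    for t, location, data in change_log:
        if t > target_time:
            break              # stop at the desired point in time
        image[location] = data # apply each change in time order
    return image
```

Because the log is time ordered, replaying to different target times recovers the disk as of different moments within the backed-up window.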
- A second recovery mode involves restoring both the base and all or some amount of changes together to disk.
- the Driver is used to write to a disk the information read from tape, and so the tape, as representing a series of states over some time period, is restored.
- the tape can also represent an exact image of the disk under a Driver's control, and thus its restoration to a sufficiently large disk also recovers states of the user's disk over a period of time.
- the tape contains both user data as well as the internal data structures of the Driver.
- Such a tape is quickly made since essentially both the disk and tape are processed sequentially.
- it has the disadvantage of requiring cessation or the diverting of modifications to the source disk while the backup is written.
- the data written to the tape must correspond to a disk at a single point in time.
- This advance in providing a redundant backup on a tape facilitates tape based recovery of data over a range of time, as opposed to a single point in time. It generally differs from a traditional 'base image plus incremental backup' in that it is disk sector based and contains the synchronization (safe points) information and
- the Driver creates the backup tape while at the same time allowing the user to continue modifying their data.
- the basic process is identical to maintaining a redundant external disk. Note that if too much modification occurs, the tape backup process must re-start (the same situation occurs when an external disk's tracking of changes falls behind).
- the tape generated by the Driver is created in one recording session and covers a window of time that goes backward from the time the tape gets written. This is possible because the Driver has stored incremental change information on the source disk.
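The single-session, backward-looking write can be sketched as one sequential pass over the incremental change records the Driver already stored on disk. The record format and names below are assumptions for illustration.

```python
def write_backward_window_tape(change_records, now, window):
    """Emit, in one recording session, every stored change record that
    falls inside the window of time going backward from 'now'.

    change_records: iterable of (time, location, data) records kept by
                    the Driver on the source disk
    """
    tape = []
    for t, location, data in change_records:
        if now - window <= t <= now:           # inside the backed-up window
            tape.append((t, location, data))   # sequential, tape-friendly write
    return tape
```

A single sequential pass is what makes one recording session sufficient: no earlier tape is needed, yet the result still covers a window of time.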
- Creating an incremental tape backup in one recording session reduces the complexity of the backup process.
- the reason for creating a traditional incremental backup was to reduce backup time, in that saving differences generally takes less time than a "full backup", and to reduce the amount of physical tape used (recording less takes less space).
- these benefits came at the cost of added handling and restoration complexity.
- the reason for the Driver making a backup tape that spans a window of time is in fact to get this feature.
- the resulting tape has the benefit of being both a full backup, in that it is not dependent on another earlier tape, and providing restoration ability over a window of time. Further, unlike a traditional incremental backup from which restoration is only possible to a time at which the user had made an incremental backup run, the Driver's backup tape allows for restoration from virtually any usable point in the backed up window of time. The difference between these approaches is similar to the difference between constantly copying data to tape throughout the workday or simply making one backup tape at the end of the day.
3) A Directory for a Backward Looking Incremental Tape Backup
- the prior paragraph discusses a new process for creating an incremental backup tape.
- Although the tape contains all the necessary information to restore data from various points within a window of time, the organization of the data on the tape is such that selective restoration (e.g., of a single file) is complicated.
- restoration of the entire tape to a disk and the subsequent use of the normal Driver software for recovery is the most natural and simplest means of accessing the tape's data.
- the directory can map all the various versions of files throughout the backed up window of time, or just at one time. In the latter case, the tape must be restored to disk in order to access files across the window of time.
- the present invention is a method and apparatus for disk based information recovery in computer systems. This applies to all types of computer systems that utilize one or more hard disks, where the disks represent a non-volatile storage system or systems. Such types of computers may be, but are not limited to, personal computers, network servers, file servers, or mainframes. Thus, the various embodiments of the present invention provide that a disk or other storage device can be backed up incrementally and continuously. Some of the features of the invention that make this possible include saving original states of disk information in a size-bounded circular history buffer system such that older information is discarded in favor of newer. These operations are summarized below. 1. How is data saved? The "saving" or "copying" of data does not necessarily imply that data is read and duplicated in another location.
- the space available to the history buffer is, for example, fixed in size, pre-allocated, or limited to only a certain area on the disk.
- An implementation of the history buffer may, for example, dynamically allocate space, move its contents around, exist independently or under an operating system's filing system, or manage space in any other way that achieves the effect of the above Statement.
- the focus on bounding simply reflects the fact that storage is not infinite and yet the present invention provides for recent information recovery for an unbounded amount of time and write activity.
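The size-bounded circular history buffer can be sketched as follows. This is a minimal illustration under assumed names and record formats, not the engine's actual on-disk structure.

```python
from collections import deque

class HistoryBuffer:
    """Size-bounded circular history buffer: original disk states are
    saved before being overwritten, and the oldest entries are
    discarded in favor of newer ones once the bound is reached."""

    def __init__(self, capacity):
        # deque with maxlen drops the oldest entry automatically
        self.entries = deque(maxlen=capacity)

    def save_original(self, time, location, old_data):
        """Record a location's original data before it is overwritten."""
        self.entries.append((time, location, old_data))

    def states_since(self, time):
        """Historic states still recoverable back to 'time'."""
        return [e for e in self.entries if e[0] >= time]
```

The bound means recovery reaches back only as far as the retained entries, which is exactly the trade-off described above: unbounded write activity, bounded recovery window.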
- this establishes the information by which backups to any point in time, as limited by the history buffer size, can be generated without having requested such to be made in advance.
- the present invention addresses the history buffer's use for recent information recovery by:
- the present invention provides a method and apparatus to recover a disk drive (partition) to a prior recent state in time.
- the invention provides that "old" files or data may be recovered without having specifically backed up a disk drive.
- Although the invention has been described with respect to its preferred forms, many other implementations are possible and within the skill of the art.
- the invention is not limited to disk based storage mediums, but may be applied to any storage device such as random access memory.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2000509037A JP3878412B2 (en) | 1997-09-05 | 1998-09-04 | How to save, use and recover data |
DE19882659T DE19882659T1 (en) | 1997-09-05 | 1998-09-04 | Process, software and device for storing, using and retrieving data |
AU93832/98A AU9383298A (en) | 1997-09-05 | 1998-09-04 | Method, software and apparatus for saving, using and recovering data |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US92419897A | 1997-09-05 | 1997-09-05 | |
US08/924,198 | 1997-09-05 | ||
US3965098A | 1998-03-16 | 1998-03-16 | |
US09/039,650 | 1998-03-16 | ||
US09/105,733 | 1998-06-26 | ||
US09/105,733 US6016553A (en) | 1997-09-05 | 1998-06-26 | Method, software and apparatus for saving, using and recovering data |
Publications (2)
Publication Number | Publication Date |
---|---|
WO1999012101A2 true WO1999012101A2 (en) | 1999-03-11 |
WO1999012101A3 WO1999012101A3 (en) | 1999-08-19 |
Family
ID=27365582
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US1998/018863 WO1999012101A2 (en) | 1997-09-05 | 1998-09-04 | Method, software and apparatus for saving, using and recovering data |
Country Status (5)
Country | Link |
---|---|
US (3) | US6016553A (en) |
JP (1) | JP3878412B2 (en) |
AU (1) | AU9383298A (en) |
DE (1) | DE19882659T1 (en) |
WO (1) | WO1999012101A2 (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2000065447A1 (en) * | 1999-04-23 | 2000-11-02 | Wild File, Inc. | Method and apparatus for dealing with data corruption and shared disks in the context of saving, using and recovering data |
WO2001001251A1 (en) * | 1999-06-30 | 2001-01-04 | Microsoft Corporation | Restoration of a computer to a previous state |
US6594780B1 (en) | 1999-10-19 | 2003-07-15 | Inasoft, Inc. | Operating system and data protection |
US6732293B1 (en) | 1998-03-16 | 2004-05-04 | Symantec Corporation | Method, software and apparatus for recovering and recycling data in conjunction with an operating system |
DE10334815A1 (en) * | 2003-07-30 | 2005-03-10 | Siemens Ag | Reading and saving of data method e.g. for communication bus system, involves storing and selection of data with time information and validity information within two ranges to be stored alternately |
US7051055B1 (en) | 1999-07-09 | 2006-05-23 | Symantec Corporation | Optimized disk storage defragmentation with swapping capabilities |
US7055055B1 (en) | 1999-04-23 | 2006-05-30 | Symantec Corporation | Write cache flushing method for reducing data corruption |
US7337360B2 (en) | 1999-10-19 | 2008-02-26 | Idocrase Investments Llc | Stored memory recovery system |
US7440398B2 (en) | 2004-11-29 | 2008-10-21 | Honeywell International Inc. | Fault tolerant communication apparatus |
US7506013B2 (en) | 1999-07-09 | 2009-03-17 | Symantec Corporation | Disk storage defragmentation |
US7730031B2 (en) | 2000-03-01 | 2010-06-01 | Computer Associates Think, Inc. | Method and system for updating an archive of a computer file |
US7949665B1 (en) | 2004-11-19 | 2011-05-24 | Symantec Corporation | Rapidly traversing disc volumes during file content examination |
US8358567B2 (en) | 2010-02-04 | 2013-01-22 | Panasonic Corporation | Information reproduction device and information reproduction method |
US10110572B2 (en) | 2015-01-21 | 2018-10-23 | Oracle International Corporation | Tape drive encryption in the data path |
Families Citing this family (298)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB9605338D0 (en) | 1996-03-13 | 1996-05-15 | Arendee Ltd | Improvements in or relating to computer systems |
US6434663B1 (en) * | 1996-09-06 | 2002-08-13 | Intel Corporation | Disk block allocation optimization methodology with accommodation for file system cluster size greater than operating system memory page size |
US6016553A (en) * | 1997-09-05 | 2000-01-18 | Wild File, Inc. | Method, software and apparatus for saving, using and recovering data |
US7389442B1 (en) * | 1997-12-26 | 2008-06-17 | Samsung Electronics Co., Ltd. | Apparatus and method for self diagnosis, repair, removal by reversion of computer problems from desktop and recovery from booting or loading of operating system errors by removable media |
US6363487B1 (en) * | 1998-03-16 | 2002-03-26 | Roxio, Inc. | Apparatus and method of creating a firewall data protection |
US6202124B1 (en) * | 1998-05-05 | 2001-03-13 | International Business Machines Corporation | Data storage system with outboard physical data transfer operation utilizing data path distinct from host |
GB9809885D0 (en) | 1998-05-09 | 1998-07-08 | Vircon Limited | Protected storage device for computer system |
US6163856A (en) * | 1998-05-29 | 2000-12-19 | Sun Microsystems, Inc. | Method and apparatus for file system disaster recovery |
US6289355B1 (en) * | 1998-09-16 | 2001-09-11 | International Business Machines Corp. | Fast log apply |
US6714935B1 (en) * | 1998-09-21 | 2004-03-30 | Microsoft Corporation | Management of non-persistent data in a persistent database |
US6314567B1 (en) * | 1998-11-13 | 2001-11-06 | Hewlett-Packard Company | Apparatus and method for transferring state data when performing on-line replacement of a running program code and data |
US7181486B1 (en) | 1998-12-07 | 2007-02-20 | Network Ice Corporation | Method and apparatus for remote installation of network drivers and software |
JP2002532784A (en) * | 1998-12-07 | 2002-10-02 | ネットワーク アイス コーポレイション | Method and apparatus for remote installation of network drivers and software |
WO2000034867A1 (en) | 1998-12-09 | 2000-06-15 | Network Ice Corporation | A method and apparatus for providing network and computer system security |
US6751604B2 (en) * | 1999-01-06 | 2004-06-15 | Hewlett-Packard Development Company, L.P. | Method of displaying temporal and storage media relationships of file names protected on removable storage media |
US6317875B1 (en) * | 1999-01-15 | 2001-11-13 | Intel Corporation | Application execution performance through disk block relocation |
US7774315B1 (en) | 1999-03-12 | 2010-08-10 | Eldad Galker | Backup system |
JP2000284987A (en) * | 1999-03-31 | 2000-10-13 | Fujitsu Ltd | Computer, computer network system and recording medium |
US6148354A (en) | 1999-04-05 | 2000-11-14 | M-Systems Flash Disk Pioneers Ltd. | Architecture for a universal serial bus-based PC flash disk |
US7035880B1 (en) * | 1999-07-14 | 2006-04-25 | Commvault Systems, Inc. | Modular backup and retrieval system used in conjunction with a storage area network |
US7395282B1 (en) * | 1999-07-15 | 2008-07-01 | Commvault Systems, Inc. | Hierarchical backup and retrieval system |
US7389311B1 (en) * | 1999-07-15 | 2008-06-17 | Commvault Systems, Inc. | Modular backup and retrieval system |
US7346929B1 (en) * | 1999-07-29 | 2008-03-18 | International Business Machines Corporation | Method and apparatus for auditing network security |
US6948099B1 (en) * | 1999-07-30 | 2005-09-20 | Intel Corporation | Re-loading operating systems |
US6742137B1 (en) * | 1999-08-17 | 2004-05-25 | Adaptec, Inc. | Object oriented fault tolerance |
US6775339B1 (en) | 1999-08-27 | 2004-08-10 | Silicon Graphics, Inc. | Circuit design for high-speed digital communication |
US6591377B1 (en) * | 1999-11-24 | 2003-07-08 | Unisys Corporation | Method for comparing system states at different points in time |
US6883120B1 (en) * | 1999-12-03 | 2005-04-19 | Network Appliance, Inc. | Computer assisted automatic error detection and diagnosis of file servers |
US8006243B2 (en) * | 1999-12-07 | 2011-08-23 | International Business Machines Corporation | Method and apparatus for remote installation of network drivers and software |
US6526418B1 (en) * | 1999-12-16 | 2003-02-25 | Livevault Corporation | Systems and methods for backing up data files |
US6779003B1 (en) | 1999-12-16 | 2004-08-17 | Livevault Corporation | Systems and methods for backing up data files |
US6625623B1 (en) * | 1999-12-16 | 2003-09-23 | Livevault Corporation | Systems and methods for backing up data files |
US6847984B1 (en) | 1999-12-16 | 2005-01-25 | Livevault Corporation | Systems and methods for backing up data files |
US6460055B1 (en) * | 1999-12-16 | 2002-10-01 | Livevault Corporation | Systems and methods for backing up data files |
US7031420B1 (en) * | 1999-12-30 | 2006-04-18 | Silicon Graphics, Inc. | System and method for adaptively deskewing parallel data signals relative to a clock |
US7155481B2 (en) | 2000-01-31 | 2006-12-26 | Commvault Systems, Inc. | Email attachment management in a computer system |
US6658436B2 (en) | 2000-01-31 | 2003-12-02 | Commvault Systems, Inc. | Logical view and access to data managed by a modular data and storage management system |
US7003641B2 (en) | 2000-01-31 | 2006-02-21 | Commvault Systems, Inc. | Logical view with granular access to exchange data managed by a modular data and storage management system |
GB2359386B (en) * | 2000-02-16 | 2004-08-04 | Data Connection Ltd | Replicated control block handles for fault-tolerant computer systems |
US6704730B2 (en) | 2000-02-18 | 2004-03-09 | Avamar Technologies, Inc. | Hash file system and method for use in a commonality factoring system |
US7062648B2 (en) * | 2000-02-18 | 2006-06-13 | Avamar Technologies, Inc. | System and method for redundant array network storage |
US7194504B2 (en) * | 2000-02-18 | 2007-03-20 | Avamar Technologies, Inc. | System and method for representing and maintaining redundant data sets utilizing DNA transmission and transcription techniques |
US6826711B2 (en) | 2000-02-18 | 2004-11-30 | Avamar Technologies, Inc. | System and method for data protection with multidimensional parity |
KR100380250B1 (en) * | 2000-02-21 | 2003-04-18 | Trek 2000 International Ltd. | A Portable Data Storage Device |
US6714720B1 (en) * | 2000-03-06 | 2004-03-30 | Ati International Srl | Method and apparatus for storing multi-media data |
US20050257827A1 (en) * | 2000-04-27 | 2005-11-24 | Russell Gaudiana | Rotational photovoltaic cells, systems and methods |
IL152502A0 (en) * | 2000-04-28 | 2003-05-29 | Internet Security Systems Inc | Method and system for managing computer security information |
US6484187B1 (en) * | 2000-04-28 | 2002-11-19 | International Business Machines Corporation | Coordinating remote copy status changes across multiple logical sessions to maintain consistency |
AU2001257400A1 (en) * | 2000-04-28 | 2001-11-12 | Internet Security Systems, Inc. | System and method for managing security events on a network |
US7111201B2 (en) * | 2000-05-19 | 2006-09-19 | Self Repairing Computers, Inc. | Self repairing computer detecting need for repair and having switched protected storage |
IL152936A0 (en) | 2000-05-19 | 2003-06-24 | Self Repairing Computers Inc | A computer with switchable components |
US7100075B2 (en) * | 2000-05-19 | 2006-08-29 | Self Repairing Computers, Inc. | Computer system having data store protected from internet contamination by virus or malicious code and method for protecting |
US7137034B2 (en) * | 2000-05-19 | 2006-11-14 | Vir2Us, Inc. | Self repairing computer having user accessible switch for modifying bootable storage device configuration to initiate repair |
US7096381B2 (en) * | 2001-05-21 | 2006-08-22 | Self Repairing Computers, Inc. | On-the-fly repair of a computer |
US20060277433A1 (en) * | 2000-05-19 | 2006-12-07 | Self Repairing Computers, Inc. | Computer having special purpose subsystems and cyber-terror and virus immunity and protection features |
US6496840B1 (en) * | 2000-05-31 | 2002-12-17 | International Business Machines Corporation | Method, system and program products for atomically and persistently swapping resource groups |
US6907531B1 (en) | 2000-06-30 | 2005-06-14 | Internet Security Systems, Inc. | Method and system for identifying, fixing, and updating security vulnerabilities |
US6754682B1 (en) * | 2000-07-10 | 2004-06-22 | Emc Corporation | Method and apparatus for enabling consistent ancillary disk array storage device operations with respect to a main application |
US7093239B1 (en) | 2000-07-14 | 2006-08-15 | Internet Security Systems, Inc. | Computer immune system and method for detecting unwanted code in a computer system |
US6779072B1 (en) | 2000-07-20 | 2004-08-17 | Silicon Graphics, Inc. | Method and apparatus for accessing MMR registers distributed across a large asic |
US7333516B1 (en) | 2000-07-20 | 2008-02-19 | Silicon Graphics, Inc. | Interface for synchronous data transfer between domains clocked at different frequencies |
US6703908B1 (en) | 2000-07-20 | 2004-03-09 | Silicon Graphics, Inc. | I/O impedance controller |
US7248635B1 (en) | 2000-07-20 | 2007-07-24 | Silicon Graphics, Inc. | Method and apparatus for communicating computer data from one point to another over a communications medium |
US6839856B1 (en) | 2000-07-20 | 2005-01-04 | Silicon Graphics, Inc. | Method and circuit for reliable data capture in the presence of bus-master changeovers |
US6831924B1 (en) | 2000-07-20 | 2004-12-14 | Silicon Graphics, Inc. | Variable mode bi-directional and uni-directional computer communication system |
US6763428B1 (en) | 2000-08-02 | 2004-07-13 | Symantec Corporation | Methods and systems for performing push-pull optimization of files while file storage allocations are actively changing |
US7539828B2 (en) * | 2000-08-08 | 2009-05-26 | Faronics Corporation | Method and system for automatically preserving persistent storage |
US6681293B1 (en) | 2000-08-25 | 2004-01-20 | Silicon Graphics, Inc. | Method and cache-coherence system allowing purging of mid-level cache entries without purging lower-level cache entries |
US6879996B1 (en) * | 2000-09-13 | 2005-04-12 | Edward W. Laves | Method and apparatus for displaying personal digital assistant synchronization data using primary and subordinate data fields |
EP1195679A1 (en) * | 2000-10-06 | 2002-04-10 | Hewlett-Packard Company, A Delaware Corporation | Performing operating system recovery from external back-up media in a headless computer entity |
GB2367656A (en) * | 2000-10-06 | 2002-04-10 | Hewlett Packard Co | Self-repairing operating system for computer entities |
US9027121B2 (en) | 2000-10-10 | 2015-05-05 | International Business Machines Corporation | Method and system for creating a record for one or more computer security incidents |
US7146305B2 (en) * | 2000-10-24 | 2006-12-05 | Vcis, Inc. | Analytical virtual machine |
US6618794B1 (en) * | 2000-10-31 | 2003-09-09 | Hewlett-Packard Development Company, L.P. | System for generating a point-in-time copy of data in a data storage system |
US6810398B2 (en) * | 2000-11-06 | 2004-10-26 | Avamar Technologies, Inc. | System and method for unorchestrated determination of data sequences using sticky byte factoring to determine breakpoints in digital sequences |
US7089449B1 (en) * | 2000-11-06 | 2006-08-08 | Micron Technology, Inc. | Recovering a system that has experienced a fault |
US6968462B2 (en) | 2000-12-11 | 2005-11-22 | International Business Machines Corporation | Verifying physical universal serial bus keystrokes |
US7130466B2 (en) * | 2000-12-21 | 2006-10-31 | Cobion Ag | System and method for compiling images from a database and comparing the compiled images with known images |
AU2002243763A1 (en) * | 2001-01-31 | 2002-08-12 | Internet Security Systems, Inc. | Method and system for configuring and scheduling security audits of a computer network |
CA2438481A1 (en) * | 2001-03-27 | 2002-10-03 | British Telecommunications Public Limited Company | File synchronisation |
US7010696B1 (en) | 2001-03-30 | 2006-03-07 | Mcafee, Inc. | Method and apparatus for predicting the incidence of a virus |
US20020169880A1 (en) * | 2001-04-19 | 2002-11-14 | Koninklijke Philips Electronics N.V. | Method and device for robust real-time estimation of the bottleneck bandwidth in the internet |
US7440972B2 (en) * | 2001-04-26 | 2008-10-21 | Sonic Solutions | Interactive media authoring without access to original source material |
US7392541B2 (en) * | 2001-05-17 | 2008-06-24 | Vir2Us, Inc. | Computer system architecture and method providing operating-system independent virus-, hacker-, and cyber-terror-immune processing environments |
US7849360B2 (en) * | 2001-05-21 | 2010-12-07 | Vir2Us, Inc. | Computer system and method of controlling communication port to prevent computer contamination by virus or malicious code |
US7526811B1 (en) * | 2001-05-22 | 2009-04-28 | Novell, Inc. | Methods for detecting executable code which has been altered |
WO2002097587A2 (en) * | 2001-05-31 | 2002-12-05 | Internet Security Systems, Inc. | Method and system for implementing security devices in a network |
US20040139125A1 (en) * | 2001-06-05 | 2004-07-15 | Roger Strassburg | Snapshot copy of data volume during data access |
US7640582B2 (en) | 2003-04-16 | 2009-12-29 | Silicon Graphics International | Clustered filesystem for mix of trusted and untrusted nodes |
US7657419B2 (en) * | 2001-06-19 | 2010-02-02 | International Business Machines Corporation | Analytical virtual machine |
WO2003003278A1 (en) * | 2001-06-28 | 2003-01-09 | Trek 2000 International Ltd. | A portable device having biometrics-based authentication capabilities |
WO2003003295A1 (en) * | 2001-06-28 | 2003-01-09 | Trek 2000 International Ltd. | A portable device having biometrics-based authentication capabilities |
ATE335236T1 (en) * | 2001-06-28 | 2006-08-15 | Trek 2000 Int Ltd | DATA TRANSFER PROCEDURES AND FACILITIES |
US7673343B1 (en) | 2001-07-26 | 2010-03-02 | Mcafee, Inc. | Anti-virus scanning co-processor |
US6773083B2 (en) | 2001-08-29 | 2004-08-10 | Lexmark International, Inc. | Method and apparatus for non-volatile memory usage in an ink jet printer |
US20030046605A1 (en) * | 2001-09-03 | 2003-03-06 | Farstone Technology Inc. | Data protection system and method regarding the same |
US6832301B2 (en) * | 2001-09-11 | 2004-12-14 | International Business Machines Corporation | Method for recovering memory |
US6880101B2 (en) * | 2001-10-12 | 2005-04-12 | Dell Products L.P. | System and method for providing automatic data restoration after a storage device failure |
US6668336B2 (en) | 2001-11-08 | 2003-12-23 | M-Systems Flash Disk Pioneers Ltd. | Ruggedized block device driver |
US7324983B1 (en) * | 2001-11-08 | 2008-01-29 | I2 Technologies Us, Inc. | Reproducible selection of members in a hierarchy |
US6883114B2 (en) * | 2001-11-08 | 2005-04-19 | M-Systems Flash Disk Pioneers Ltd. | Block device driver enabling a ruggedized file system |
US7536598B2 (en) * | 2001-11-19 | 2009-05-19 | Vir2Us, Inc. | Computer system capable of supporting a plurality of independent computing environments |
GB2382889A (en) * | 2001-12-05 | 2003-06-11 | Cambridge Consultants | microprocessor design system |
US7181560B1 (en) | 2001-12-21 | 2007-02-20 | Joseph Grand | Method and apparatus for preserving computer memory using expansion card |
US7007152B2 (en) * | 2001-12-28 | 2006-02-28 | Storage Technology Corporation | Volume translation apparatus and method |
US7467274B2 (en) * | 2001-12-31 | 2008-12-16 | Hewlett-Packard Development Company, L.P. | Method to increase the life span of limited cycle read/write media |
WO2003058451A1 (en) * | 2002-01-04 | 2003-07-17 | Internet Security Systems, Inc. | System and method for the managed security control of processes on a computer system |
JP2003248596A (en) * | 2002-02-26 | 2003-09-05 | Hitachi Ltd | Method for taking over processing in multicomputer system |
US7788699B2 (en) | 2002-03-06 | 2010-08-31 | Vir2Us, Inc. | Computer and method for safe usage of documents, email attachments and other content that may contain virus, spy-ware, or malicious code |
US6993539B2 (en) | 2002-03-19 | 2006-01-31 | Network Appliance, Inc. | System and method for determining changes in two snapshots and for transmitting changes to destination snapshot |
TW200401267A (en) * | 2002-04-04 | 2004-01-16 | Sonic Solutions | Optimizing the recording on a rewritable interactive medium of revisions to an existing project on that medium |
EP1454240B1 (en) * | 2002-05-13 | 2006-02-08 | Trek 2000 International Ltd | System and apparatus for compressing and decompressing data stored to a portable data storage device |
US7370360B2 (en) * | 2002-05-13 | 2008-05-06 | International Business Machines Corporation | Computer immune system and method for detecting unwanted code in a P-code or partially compiled native-code program executing within a virtual machine |
US7844577B2 (en) * | 2002-07-15 | 2010-11-30 | Symantec Corporation | System and method for maintaining a backup storage system for a computer system |
US6980698B2 (en) * | 2002-07-22 | 2005-12-27 | Xerox Corporation | Image finder method and apparatus for pixography and other photo-related reproduction applications |
US6907504B2 (en) * | 2002-07-29 | 2005-06-14 | International Business Machines Corporation | Method and system for upgrading drive firmware in a non-disruptive manner |
TW588243B (en) * | 2002-07-31 | 2004-05-21 | Trek 2000 Int Ltd | System and method for authentication |
US6957362B2 (en) * | 2002-08-06 | 2005-10-18 | Emc Corporation | Instantaneous restoration of a production copy from a snapshot copy in a data storage system |
US7124322B1 (en) * | 2002-09-24 | 2006-10-17 | Novell, Inc. | System and method for disaster recovery for a computer network |
US7206961B1 (en) * | 2002-09-30 | 2007-04-17 | Emc Corporation | Preserving snapshots during disk-based restore |
US7340486B1 (en) * | 2002-10-10 | 2008-03-04 | Network Appliance, Inc. | System and method for file system snapshot of a virtual logical disk |
US20040083357A1 (en) * | 2002-10-29 | 2004-04-29 | Sun Microsystems, Inc. | Method, system, and program for executing a boot routine on a computer system |
US7010645B2 (en) * | 2002-12-27 | 2006-03-07 | International Business Machines Corporation | System and method for sequentially staging received data to a write cache in advance of storing the received data |
US7913303B1 (en) | 2003-01-21 | 2011-03-22 | International Business Machines Corporation | Method and system for dynamically protecting a computer system from attack |
GB2399188B (en) * | 2003-03-04 | 2005-11-30 | Fujitsu Serv Ltd | Reducing the boot-up time of a computer system |
US7454569B2 (en) | 2003-06-25 | 2008-11-18 | Commvault Systems, Inc. | Hierarchical system and method for performing storage operations in a computer network |
US7401092B2 (en) * | 2003-06-26 | 2008-07-15 | Standbysoft Llc | Method and apparatus for exchanging sub-hierarchical structures within a hierarchical file system |
US20050033625A1 (en) * | 2003-08-06 | 2005-02-10 | International Business Machines Corporation | Method, apparatus and program storage device for scheduling the performance of maintenance tasks to maintain a system environment |
US7506198B2 (en) * | 2003-08-11 | 2009-03-17 | Radix Israel, Ltd. | Protection and recovery system and automatic hard disk drive (HDD) instant recovery |
US7565382B1 (en) | 2003-08-14 | 2009-07-21 | Symantec Corporation | Safely rolling back a computer image |
US7577806B2 (en) * | 2003-09-23 | 2009-08-18 | Symantec Operating Corporation | Systems and methods for time dependent data storage and recovery |
US7730222B2 (en) * | 2004-08-24 | 2010-06-01 | Symantec Operating Corporation | Processing storage-related I/O requests using binary tree data structures |
US7239581B2 (en) * | 2004-08-24 | 2007-07-03 | Symantec Operating Corporation | Systems and methods for synchronizing the internal clocks of a plurality of processor modules |
US7296008B2 (en) * | 2004-08-24 | 2007-11-13 | Symantec Operating Corporation | Generation and use of a time map for accessing a prior image of a storage device |
US7577807B2 (en) * | 2003-09-23 | 2009-08-18 | Symantec Operating Corporation | Methods and devices for restoring a portion of a data store |
US7827362B2 (en) * | 2004-08-24 | 2010-11-02 | Symantec Corporation | Systems, apparatus, and methods for processing I/O requests |
US7409587B2 (en) * | 2004-08-24 | 2008-08-05 | Symantec Operating Corporation | Recovering from storage transaction failures using checkpoints |
US7287133B2 (en) * | 2004-08-24 | 2007-10-23 | Symantec Operating Corporation | Systems and methods for providing a modification history for a location within a data store |
US7725760B2 (en) * | 2003-09-23 | 2010-05-25 | Symantec Operating Corporation | Data storage system |
US7904428B2 (en) * | 2003-09-23 | 2011-03-08 | Symantec Corporation | Methods and apparatus for recording write requests directed to a data store |
US7991748B2 (en) * | 2003-09-23 | 2011-08-02 | Symantec Corporation | Virtual data store creation and use |
US7631120B2 (en) * | 2004-08-24 | 2009-12-08 | Symantec Operating Corporation | Methods and apparatus for optimally selecting a storage buffer for the storage of data |
US7225208B2 (en) * | 2003-09-30 | 2007-05-29 | Iron Mountain Incorporated | Systems and methods for backing up data files |
US7171538B2 (en) | 2003-10-22 | 2007-01-30 | International Business Machines Corporation | Incremental data storage method, apparatus, interface, and system |
US7707374B2 (en) * | 2003-10-22 | 2010-04-27 | International Business Machines Corporation | Incremental data storage method, apparatus, interface, and system |
US7657938B2 (en) * | 2003-10-28 | 2010-02-02 | International Business Machines Corporation | Method and system for protecting computer networks by altering unwanted network data traffic |
US7721062B1 (en) | 2003-11-10 | 2010-05-18 | Netapp, Inc. | Method for detecting leaked buffer writes across file system consistency points |
US7401093B1 (en) | 2003-11-10 | 2008-07-15 | Network Appliance, Inc. | System and method for managing file data during consistency points |
WO2005050381A2 (en) * | 2003-11-13 | 2005-06-02 | Commvault Systems, Inc. | Systems and methods for performing storage operations using network attached storage |
US7412583B2 (en) * | 2003-11-14 | 2008-08-12 | International Business Machines Corporation | Virtual incremental storage method |
US7206973B2 (en) * | 2003-12-11 | 2007-04-17 | Lsi Logic Corporation | PCI validation |
JP4477370B2 (en) * | 2004-01-30 | 2010-06-09 | 株式会社日立製作所 | Data processing system |
US20050204186A1 (en) * | 2004-03-09 | 2005-09-15 | Rothman Michael A. | System and method to implement a rollback mechanism for a data storage unit |
US7779463B2 (en) | 2004-05-11 | 2010-08-17 | The Trustees Of Columbia University In The City Of New York | Systems and methods for correlating and distributing intrusion alert information among collaborating computer systems |
US7257684B1 (en) | 2004-05-25 | 2007-08-14 | Storage Technology Corporation | Method and apparatus for dynamically altering accessing of storage drives based on the technology limits of the drives |
US7353242B2 (en) | 2004-07-09 | 2008-04-01 | Hitachi, Ltd. | File server for long term data archive |
US20060047714A1 (en) * | 2004-08-30 | 2006-03-02 | Mendocino Software, Inc. | Systems and methods for rapid presentation of historical views of stored data |
US7664983B2 (en) * | 2004-08-30 | 2010-02-16 | Symantec Corporation | Systems and methods for event driven recovery management |
US8495023B1 (en) * | 2004-09-01 | 2013-07-23 | Symantec Operating Corporation | Delta catalogs in a backup system |
JP2006085279A (en) * | 2004-09-14 | 2006-03-30 | Konica Minolta Business Technologies Inc | File management program and file management method |
US7591018B1 (en) | 2004-09-14 | 2009-09-15 | Trend Micro Incorporated | Portable antivirus device with solid state memory |
US7873782B2 (en) * | 2004-11-05 | 2011-01-18 | Data Robotics, Inc. | Filesystem-aware block storage system, apparatus, and method |
AU2005304792B2 (en) | 2004-11-05 | 2010-07-08 | Drobo, Inc. | Storage system condition indicator and method |
US7814367B1 (en) * | 2004-11-12 | 2010-10-12 | Double-Take Software Canada, Inc. | Method and system for time addressable storage |
US7784097B1 (en) * | 2004-11-24 | 2010-08-24 | The Trustees Of Columbia University In The City Of New York | Systems and methods for correlating and distributing intrusion alert information among collaborating computer systems |
US20060130144A1 (en) * | 2004-12-14 | 2006-06-15 | Delta Insights, Llc | Protecting computing systems from unauthorized programs |
US7509530B2 (en) * | 2005-01-19 | 2009-03-24 | Sonic Solutions | Method and system for use in restoring an active partition |
US20060200589A1 (en) * | 2005-02-18 | 2006-09-07 | Collins Mark A | Automated driver reset for an information handling system |
US7440979B2 (en) * | 2005-03-30 | 2008-10-21 | Sap Ag | Snapshots for instant backup in a database management system |
US7694088B1 (en) * | 2005-03-31 | 2010-04-06 | Symantec Operating Corporation | System and method for efficient creation of aggregate backup images |
US20090132613A1 (en) * | 2005-04-25 | 2009-05-21 | Koninklijke Philips Electronics, N.V. | Apparatus, Method and System For Restoring Files |
US7327167B2 (en) * | 2005-04-28 | 2008-02-05 | Silicon Graphics, Inc. | Anticipatory programmable interface pre-driver |
US20060265756A1 (en) * | 2005-05-11 | 2006-11-23 | Microsoft Corporation | Disk protection using enhanced write filter |
US8335768B1 (en) * | 2005-05-25 | 2012-12-18 | Emc Corporation | Selecting data in backup data sets for grooming and transferring |
US8521752B2 (en) * | 2005-06-03 | 2013-08-27 | Osr Open Systems Resources, Inc. | Systems and methods for arbitrary data transformations |
US9378099B2 (en) | 2005-06-24 | 2016-06-28 | Catalogic Software, Inc. | Instant data center recovery |
AU2006262045B2 (en) * | 2005-06-24 | 2011-11-03 | Catalogic Software, Inc. | System and method for high performance enterprise data protection |
US7653682B2 (en) * | 2005-07-22 | 2010-01-26 | Netapp, Inc. | Client failure fencing mechanism for fencing network file system data in a host-cluster environment |
WO2007047346A2 (en) * | 2005-10-14 | 2007-04-26 | Symantec Operating Corporation | Technique for timeline compression in a data store |
WO2007047348A2 (en) * | 2005-10-14 | 2007-04-26 | Revivio, Inc. | Technique for remapping data in a storage management system |
US20070106993A1 (en) * | 2005-10-21 | 2007-05-10 | Kenneth Largman | Computer security method having operating system virtualization allowing multiple operating system instances to securely share single machine resources |
US7756834B2 (en) * | 2005-11-03 | 2010-07-13 | I365 Inc. | Malware and spyware attack recovery system and method |
US8677087B2 (en) | 2006-01-03 | 2014-03-18 | Emc Corporation | Continuous backup of a storage device |
US20070174664A1 (en) * | 2006-01-04 | 2007-07-26 | Ess Data Recovery, Inc. | Data recovery application |
US7693883B2 (en) * | 2006-01-30 | 2010-04-06 | Sap Ag | Online data volume deletion |
US8341127B1 (en) * | 2006-02-02 | 2012-12-25 | Emc Corporation | Client initiated restore |
US8042172B1 (en) * | 2006-02-02 | 2011-10-18 | Emc Corporation | Remote access architecture enabling a client to perform an operation |
US8886902B1 (en) | 2006-02-02 | 2014-11-11 | Emc Corporation | Disk backup set access |
US7574621B2 (en) * | 2006-03-14 | 2009-08-11 | Lenovo (Singapore) Pte Ltd. | Method and system for identifying and recovering a file damaged by a hard drive failure |
US7975304B2 (en) * | 2006-04-28 | 2011-07-05 | Trend Micro Incorporated | Portable storage device with stand-alone antivirus capability |
AU2007247939B2 (en) | 2006-05-05 | 2012-02-09 | Hybir Inc. | Group based complete and incremental computer file backup system, process and apparatus |
US7793110B2 (en) * | 2006-05-24 | 2010-09-07 | Palo Alto Research Center Incorporated | Posture-based data protection |
DE602006008597D1 (en) * | 2006-06-29 | 2009-10-01 | Incard Sa | Compression method for managing the storage of persistent data of a nonvolatile memory in a backup buffer |
US7512748B1 (en) | 2006-08-17 | 2009-03-31 | Osr Open Systems Resources, Inc. | Managing lock rankings |
US8539228B1 (en) | 2006-08-24 | 2013-09-17 | Osr Open Systems Resources, Inc. | Managing access to a resource |
US8301673B2 (en) * | 2006-12-29 | 2012-10-30 | Netapp, Inc. | System and method for performing distributed consistency verification of a clustered file system |
US8775369B2 (en) * | 2007-01-24 | 2014-07-08 | Vir2Us, Inc. | Computer system architecture and method having isolated file system management for secure and reliable data processing |
US8219821B2 (en) | 2007-03-27 | 2012-07-10 | Netapp, Inc. | System and method for signature based data container recognition |
US8260748B1 (en) * | 2007-03-27 | 2012-09-04 | Symantec Corporation | Method and apparatus for capturing data from a backup image |
US8024433B2 (en) * | 2007-04-24 | 2011-09-20 | Osr Open Systems Resources, Inc. | Managing application resources |
US8219749B2 (en) * | 2007-04-27 | 2012-07-10 | Netapp, Inc. | System and method for efficient updates of sequential block storage |
US7882304B2 (en) * | 2007-04-27 | 2011-02-01 | Netapp, Inc. | System and method for efficient updates of sequential block storage |
US7827350B1 (en) | 2007-04-27 | 2010-11-02 | Netapp, Inc. | Method and system for promoting a snapshot in a distributed file system |
US7844853B2 (en) * | 2007-08-07 | 2010-11-30 | International Business Machines Corporation | Methods and apparatus for restoring a node state |
US7949693B1 (en) | 2007-08-23 | 2011-05-24 | Osr Open Systems Resources, Inc. | Log-structured host data storage |
US7996636B1 (en) | 2007-11-06 | 2011-08-09 | Netapp, Inc. | Uniquely identifying block context signatures in a storage volume hierarchy |
CN101459679A (en) * | 2007-12-12 | 2009-06-17 | 华为技术有限公司 | Network storage device and data read-write control method |
US8271454B2 (en) * | 2007-12-18 | 2012-09-18 | Microsoft Corporation | Circular log amnesia detection |
US7857420B2 (en) * | 2007-12-27 | 2010-12-28 | Infoprint Solutions Company, Llc | Methods and apparatus to identify pages to be discarded in a print system |
US7831861B1 (en) | 2008-02-07 | 2010-11-09 | Symantec Corporation | Techniques for efficient restoration of granular application data |
US8180744B2 (en) * | 2008-03-05 | 2012-05-15 | Hewlett-Packard Development Company, L.P. | Managing storage of data in a data structure |
US8725986B1 (en) | 2008-04-18 | 2014-05-13 | Netapp, Inc. | System and method for volume block number to disk block number mapping |
US8271751B2 (en) * | 2008-04-24 | 2012-09-18 | Echostar Technologies L.L.C. | Systems and methods for reliably managing files in a computer system |
US20100061207A1 (en) * | 2008-09-09 | 2010-03-11 | Seagate Technology Llc | Data storage device including self-test features |
JP2010123233A (en) * | 2008-11-21 | 2010-06-03 | Hitachi Global Storage Technologies Netherlands Bv | Magnetic disk drive and method for recording data to magnetic disk |
US8738621B2 (en) * | 2009-01-27 | 2014-05-27 | EchoStar Technologies, L.L.C. | Systems and methods for managing files on a storage device |
US8307154B2 (en) * | 2009-03-03 | 2012-11-06 | Kove Corporation | System and method for performing rapid data snapshots |
US8131928B2 (en) * | 2009-05-01 | 2012-03-06 | Computer Associates Think, Inc. | Restoring striped volumes of data |
US8788751B2 (en) * | 2009-05-01 | 2014-07-22 | Ca, Inc. | Restoring spanned volumes of data |
US8731190B2 (en) * | 2009-06-09 | 2014-05-20 | Emc Corporation | Segment deduplication system with encryption and compression of segments |
US8918779B2 (en) * | 2009-08-27 | 2014-12-23 | Microsoft Corporation | Logical migration of applications and data |
US8386731B2 (en) * | 2009-09-14 | 2013-02-26 | Vmware, Inc. | Method and system for optimizing live migration of persistent data of virtual machine using disk I/O heuristics |
US9244716B2 (en) | 2009-10-30 | 2016-01-26 | Avaya Inc. | Generation of open virtualization framework package for solution installations and upgrades |
US8793288B2 (en) * | 2009-12-16 | 2014-07-29 | Sap Ag | Online access to database snapshots |
JP2011170680A (en) * | 2010-02-19 | 2011-09-01 | Nec Corp | Fault tolerant server |
US8612398B2 (en) * | 2010-03-11 | 2013-12-17 | Microsoft Corporation | Clean store for operating system and software recovery |
US8412899B2 (en) | 2010-04-01 | 2013-04-02 | Autonomy, Inc. | Real time backup storage node assignment |
WO2021099839A1 (en) | 2019-11-18 | 2021-05-27 | Roy Mann | Collaborative networking systems, methods, and devices |
WO2021161104A1 (en) | 2020-02-12 | 2021-08-19 | Monday.Com | Enhanced display features in collaborative network systems, methods, and devices |
US11410129B2 (en) | 2010-05-01 | 2022-08-09 | Monday.com Ltd. | Digital processing systems and methods for two-way syncing with third party applications in collaborative work systems |
US8818962B2 (en) | 2010-05-26 | 2014-08-26 | International Business Machines Corporation | Proactive detection of data inconsistencies in a storage system point-in-time copy of data |
US8452735B2 (en) | 2010-05-26 | 2013-05-28 | International Business Machines Corporation | Selecting a data restore point with an optimal recovery time and recovery point |
US8468365B2 (en) | 2010-09-24 | 2013-06-18 | Intel Corporation | Tweakable encryption mode for memory encryption with protection against replay attacks |
US8954664B1 (en) * | 2010-10-01 | 2015-02-10 | Western Digital Technologies, Inc. | Writing metadata files on a disk |
US8756361B1 (en) | 2010-10-01 | 2014-06-17 | Western Digital Technologies, Inc. | Disk drive modifying metadata cached in a circular buffer when a write operation is aborted |
JP5534024B2 (en) * | 2010-10-21 | 2014-06-25 | 富士通株式会社 | Storage control apparatus and storage control method |
US10073844B1 (en) * | 2010-11-24 | 2018-09-11 | Federal Home Loan Mortgage Corporation (Freddie Mac) | Accelerated system and method for providing data correction |
US9021198B1 (en) | 2011-01-20 | 2015-04-28 | Commvault Systems, Inc. | System and method for sharing SAN storage |
TW201239612A (en) * | 2011-03-31 | 2012-10-01 | Hon Hai Prec Ind Co Ltd | Multimedia storage device |
US20120284474A1 (en) * | 2011-05-06 | 2012-11-08 | International Business Machines Corporation | Enabling recovery during data defragmentation |
US8756382B1 (en) | 2011-06-30 | 2014-06-17 | Western Digital Technologies, Inc. | Method for file based shingled data storage utilizing multiple media types |
US11016702B2 (en) * | 2011-07-27 | 2021-05-25 | Pure Storage, Inc. | Hierarchical event tree |
US8903874B2 (en) | 2011-11-03 | 2014-12-02 | Osr Open Systems Resources, Inc. | File system directory attribute correction |
US9372910B2 (en) * | 2012-01-04 | 2016-06-21 | International Business Machines Corporation | Managing remote data replication |
KR102050732B1 (en) * | 2012-09-28 | 2019-12-02 | 삼성전자 주식회사 | Computing system and method for managing data in the system |
US9633035B2 (en) * | 2013-01-13 | 2017-04-25 | Reduxio Systems Ltd. | Storage system and methods for time continuum data retrieval |
US9189409B2 (en) * | 2013-02-19 | 2015-11-17 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Reducing writes to solid state drive cache memories of storage controllers |
TWI505090B (en) * | 2013-03-12 | 2015-10-21 | Macronix Int Co Ltd | Difference l2p method |
US9110847B2 (en) | 2013-06-24 | 2015-08-18 | Sap Se | N to M host system copy |
US9342419B2 (en) * | 2013-11-11 | 2016-05-17 | Globalfoundries Inc. | Persistent messaging mechanism |
JP5953295B2 (en) * | 2013-12-12 | 2016-07-20 | 京セラドキュメントソリューションズ株式会社 | Facsimile machine |
KR20150081810A (en) * | 2014-01-07 | 2015-07-15 | 한국전자통신연구원 | Method and device for multiple snapshot management of data storage media |
US9830329B2 (en) | 2014-01-15 | 2017-11-28 | W. Anthony Mason | Methods and systems for data storage |
US9423961B2 (en) * | 2014-09-08 | 2016-08-23 | Apple Inc. | Method to enhance programming performance in multilevel NVM devices |
US10089481B2 (en) | 2015-09-23 | 2018-10-02 | International Business Machines Corporation | Securing recorded data |
US10083096B1 (en) * | 2015-12-15 | 2018-09-25 | Workday, Inc. | Managing data with restoring from purging |
US10296418B2 (en) * | 2016-01-19 | 2019-05-21 | Microsoft Technology Licensing, Llc | Versioned records management using restart era |
US10382436B2 (en) * | 2016-11-22 | 2019-08-13 | Daniel Chien | Network security based on device identifiers and network addresses |
US10761743B1 (en) | 2017-07-17 | 2020-09-01 | EMC IP Holding Company LLC | Establishing data reliability groups within a geographically distributed data storage environment |
JP6940768B2 (en) * | 2017-11-01 | 2021-09-29 | 富士通株式会社 | Test control program, test control device and test control method |
KR101975880B1 (en) * | 2017-11-30 | 2019-08-28 | 부경대학교 산학협력단 | Real time collaborative editing method for memory saving |
US11698890B2 (en) | 2018-07-04 | 2023-07-11 | Monday.com Ltd. | System and method for generating a column-oriented data structure repository for columns of single data types |
US11436359B2 (en) | 2018-07-04 | 2022-09-06 | Monday.com Ltd. | System and method for managing permissions of users for a single data type column-oriented data structure |
US11126505B1 (en) * | 2018-08-10 | 2021-09-21 | Amazon Technologies, Inc. | Past-state backup generator and interface for database systems |
US11188622B2 (en) | 2018-09-28 | 2021-11-30 | Daniel Chien | Systems and methods for computer security |
US11436203B2 (en) | 2018-11-02 | 2022-09-06 | EMC IP Holding Company LLC | Scaling out geographically diverse storage |
US10826912B2 (en) | 2018-12-14 | 2020-11-03 | Daniel Chien | Timestamp-based authentication |
US10848489B2 (en) | 2018-12-14 | 2020-11-24 | Daniel Chien | Timestamp-based authentication with redirection |
US11748004B2 (en) | 2019-05-03 | 2023-09-05 | EMC IP Holding Company LLC | Data replication using active and passive data storage modes |
KR102070885B1 (en) * | 2019-07-24 | 2020-03-02 | 주식회사 이글루시스템즈 | Synchronization System Of Distributed Data on Block Unit |
US11449399B2 (en) * | 2019-07-30 | 2022-09-20 | EMC IP Holding Company LLC | Mitigating real node failure of a doubly mapped redundant array of independent nodes |
US11228322B2 (en) | 2019-09-13 | 2022-01-18 | EMC IP Holding Company LLC | Rebalancing in a geographically diverse storage system employing erasure coding |
US11449248B2 (en) | 2019-09-26 | 2022-09-20 | EMC IP Holding Company LLC | Mapped redundant array of independent data storage regions |
US11435910B2 (en) | 2019-10-31 | 2022-09-06 | EMC IP Holding Company LLC | Heterogeneous mapped redundant array of independent nodes for data storage |
US11288139B2 (en) | 2019-10-31 | 2022-03-29 | EMC IP Holding Company LLC | Two-step recovery employing erasure coding in a geographically diverse data storage system |
US20210150135A1 (en) | 2019-11-18 | 2021-05-20 | Monday.Com | Digital processing systems and methods for integrated graphs in cells of collaborative work system tables |
US11435957B2 (en) | 2019-11-27 | 2022-09-06 | EMC IP Holding Company LLC | Selective instantiation of a storage service for a doubly mapped redundant array of independent nodes |
US11677754B2 (en) | 2019-12-09 | 2023-06-13 | Daniel Chien | Access control systems and methods |
US11231860B2 (en) | 2020-01-17 | 2022-01-25 | EMC IP Holding Company LLC | Doubly mapped redundant array of independent nodes for data storage with high performance |
CN111352617B (en) * | 2020-03-16 | 2023-03-31 | 山东省物化探勘查院 | Magnetic method data auxiliary arrangement method based on Fortran language |
US11507308B2 (en) | 2020-03-30 | 2022-11-22 | EMC IP Holding Company LLC | Disk access event control for mapped nodes supported by a real cluster storage system |
US11829953B1 (en) | 2020-05-01 | 2023-11-28 | Monday.com Ltd. | Digital processing systems and methods for managing sprints using linked electronic boards |
US11501255B2 (en) | 2020-05-01 | 2022-11-15 | Monday.com Ltd. | Digital processing systems and methods for virtual file-based electronic white board in collaborative work systems |
US11277361B2 (en) | 2020-05-03 | 2022-03-15 | Monday.com Ltd. | Digital processing systems and methods for variable hang-time for social layer messages in collaborative work systems |
US11288229B2 (en) | 2020-05-29 | 2022-03-29 | EMC IP Holding Company LLC | Verifiable intra-cluster migration for a chunk storage system |
US11438145B2 (en) | 2020-05-31 | 2022-09-06 | Daniel Chien | Shared key generation based on dual clocks |
US11509463B2 (en) | 2020-05-31 | 2022-11-22 | Daniel Chien | Timestamp-based shared key generation |
US11693983B2 (en) | 2020-10-28 | 2023-07-04 | EMC IP Holding Company LLC | Data protection via commutative erasure coding in a geographically diverse data storage system |
US11687216B2 (en) | 2021-01-14 | 2023-06-27 | Monday.com Ltd. | Digital processing systems and methods for dynamically updating documents with data from linked files in collaborative work systems |
US11847141B2 (en) | 2021-01-19 | 2023-12-19 | EMC IP Holding Company LLC | Mapped redundant array of independent nodes employing mapped reliability groups for data storage |
US11625174B2 (en) | 2021-01-20 | 2023-04-11 | EMC IP Holding Company LLC | Parity allocation for a virtual redundant array of independent disks |
US11354191B1 (en) | 2021-05-28 | 2022-06-07 | EMC IP Holding Company LLC | Erasure coding in a large geographically diverse data storage system |
US11449234B1 (en) | 2021-05-28 | 2022-09-20 | EMC IP Holding Company LLC | Efficient data access operations via a mapping layer instance for a doubly mapped redundant array of independent nodes |
US11775399B1 (en) * | 2022-03-28 | 2023-10-03 | International Business Machines Corporation | Efficient recovery in continuous data protection environments |
US11797394B1 (en) * | 2022-05-30 | 2023-10-24 | Vast Data Ltd. | Snapshot restore |
US11741071B1 (en) | 2022-12-28 | 2023-08-29 | Monday.com Ltd. | Digital processing systems and methods for navigating and viewing displayed content |
US11886683B1 (en) | 2022-12-30 | 2024-01-30 | Monday.com Ltd | Digital processing systems and methods for presenting board graphics |
US11893381B1 (en) | 2023-02-21 | 2024-02-06 | Monday.com Ltd | Digital processing systems and methods for reducing file bundle sizes |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1991001026A2 (en) * | 1989-07-11 | 1991-01-24 | Intelligence Quotient International Limited | A method of operating a data processing system |
US5089958A (en) * | 1989-01-23 | 1992-02-18 | Vortex Systems, Inc. | Fault tolerant computer backup system |
US5325519A (en) * | 1991-10-18 | 1994-06-28 | Texas Microsystems, Inc. | Fault tolerant computer with archival rollback capabilities |
US5339406A (en) * | 1992-04-03 | 1994-08-16 | Sun Microsystems, Inc. | Reconstructing symbol definitions of a dynamically configurable operating system defined at the time of a system crash |
WO1996012232A1 (en) * | 1994-10-13 | 1996-04-25 | Vinca Corporation | Snapshot of data stored on a mass storage system |
EP0751462A1 (en) * | 1995-06-19 | 1997-01-02 | Kabushiki Kaisha Toshiba | A recoverable disk control system with a non-volatile memory |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE69126066T2 (en) * | 1990-06-29 | 1997-09-25 | Oracle Corp | Method and device for optimizing logbook usage |
JP2993528B2 (en) * | 1991-05-18 | 1999-12-20 | 富士通株式会社 | Text management and restoration method |
DE69119222T2 (en) | 1991-06-04 | 1996-11-21 | Ibm | Data backup and elimination in a data processing system |
US5369758A (en) * | 1991-11-15 | 1994-11-29 | Fujitsu Limited | Checking for proper locations of storage devices in a storage array |
US5802264A (en) * | 1991-11-15 | 1998-09-01 | Fujitsu Limited | Background data reconstruction in a storage device array system |
US5297258A (en) * | 1991-11-21 | 1994-03-22 | Ast Research, Inc. | Data logging for hard disk data storage systems |
US5331646A (en) * | 1992-05-08 | 1994-07-19 | Compaq Computer Corporation | Error correcting code technique for improving reliability of a disk array |
US5404361A (en) * | 1992-07-27 | 1995-04-04 | Storage Technology Corporation | Method and apparatus for ensuring data integrity in a dynamically mapped data storage subsystem |
US5530855A (en) * | 1992-10-13 | 1996-06-25 | International Business Machines Corporation | Replicating a database by the sequential application of hierarchically sorted log records |
US5487160A (en) * | 1992-12-04 | 1996-01-23 | At&T Global Information Solutions Company | Concurrent image backup for disk storage system |
US5557770A (en) * | 1993-03-24 | 1996-09-17 | International Business Machines Corporation | Disk storage apparatus and method for converting random writes to sequential writes while retaining physical clustering on disk |
US5659747A (en) * | 1993-04-22 | 1997-08-19 | Microsoft Corporation | Multiple level undo/redo mechanism |
US5835953A (en) | 1994-10-13 | 1998-11-10 | Vinca Corporation | Backup system that takes a snapshot of the locations in a mass storage device that has been identified for updating prior to updating |
US5604862A (en) * | 1995-03-14 | 1997-02-18 | Network Integrity, Inc. | Continuously-snapshotted protection of computer files |
US6016553A (en) * | 1997-09-05 | 2000-01-18 | Wild File, Inc. | Method, software and apparatus for saving, using and recovering data |
-
1998
- 1998-06-26 US US09/105,733 patent/US6016553A/en not_active Expired - Lifetime
- 1998-09-04 DE DE19882659T patent/DE19882659T1/en not_active Ceased
- 1998-09-04 AU AU93832/98A patent/AU9383298A/en not_active Abandoned
- 1998-09-04 JP JP2000509037A patent/JP3878412B2/en not_active Expired - Fee Related
- 1998-09-04 WO PCT/US1998/018863 patent/WO1999012101A2/en active Application Filing
-
1999
- 1999-07-15 US US09/354,250 patent/US6199178B1/en not_active Expired - Lifetime
- 1999-11-29 US US09/450,266 patent/US6240527B1/en not_active Expired - Lifetime
Non-Patent Citations (3)
Title |
---|
GREEN R J ET AL: "Designing a fast, on-line backup system for a log-structured file system" DIGITAL TECHNICAL JOURNAL, vol. 8, no. 2, 1996, pages 32-45, XP002088807 * |
HULTGREN C D: "FAULT-TOLERANT PERSONAL COMPUTERS SAFEGUARD CRITICAL APPLICATIONS FAIL-SOFT PCS PROTECT CRITICAL DATA WITH INTERNAL MONITORING AND CONTROL, ULTRA-RELIABLE COMPONENTS, AND ENHANCED OPERATING SOFTWARE" I & CS - INDUSTRIAL AND PROCESS CONTROL MAGAZINE, vol. 65, no. 9, 1 September 1992, pages 23-28, XP000316168 * |
ROBINSON J T: "ANALYSIS OF STEADY-STATE SEGMENT STORAGE UTILIZATIONS IN A LOG-STRUCTURED FILE SYSTEM WITH LEAST-UTILIZED SEGMENT CLEANING" OPERATING SYSTEMS REVIEW (SIGOPS), vol. 30, no. 4, October 1996, pages 29-32, XP000639698 *
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6732293B1 (en) | 1998-03-16 | 2004-05-04 | Symantec Corporation | Method, software and apparatus for recovering and recycling data in conjunction with an operating system |
WO2000065447A1 (en) * | 1999-04-23 | 2000-11-02 | Wild File, Inc. | Method and apparatus for dealing with data corruption and shared disks in the context of saving, using and recovering data |
US7055055B1 (en) | 1999-04-23 | 2006-05-30 | Symantec Corporation | Write cache flushing method for reducing data corruption |
US7506257B1 (en) | 1999-06-30 | 2009-03-17 | Microsoft Corporation | System and method for providing help contents for components of a computer system |
US6618735B1 (en) | 1999-06-30 | 2003-09-09 | Microsoft Corporation | System and method for protecting shared system files |
US6802025B1 (en) | 1999-06-30 | 2004-10-05 | Microsoft Corporation | Restoration of a computer to a previous working state |
US6687749B1 (en) | 1999-06-30 | 2004-02-03 | Microsoft Corporation | Methods and systems for reporting and resolving support incidents |
WO2001001251A1 (en) * | 1999-06-30 | 2001-01-04 | Microsoft Corporation | Restoration of a computer to a previous state |
US7506013B2 (en) | 1999-07-09 | 2009-03-17 | Symantec Corporation | Disk storage defragmentation |
US7051055B1 (en) | 1999-07-09 | 2006-05-23 | Symantec Corporation | Optimized disk storage defragmentation with swapping capabilities |
US6802029B2 (en) | 1999-10-19 | 2004-10-05 | Inasoft, Inc. | Operating system and data protection |
US6594780B1 (en) | 1999-10-19 | 2003-07-15 | Inasoft, Inc. | Operating system and data protection |
US7337360B2 (en) | 1999-10-19 | 2008-02-26 | Idocrase Investments Llc | Stored memory recovery system |
US7730031B2 (en) | 2000-03-01 | 2010-06-01 | Computer Associates Think, Inc. | Method and system for updating an archive of a computer file |
DE10334815B4 (en) * | 2003-07-30 | 2005-12-01 | Siemens Ag | Method for storing and reading data |
DE10334815A1 (en) * | 2003-07-30 | 2005-03-10 | Siemens Ag | Method for reading and saving data, e.g. for a communication bus system, involving storage and selection of data with time and validity information in two alternately used storage areas |
US7949665B1 (en) | 2004-11-19 | 2011-05-24 | Symantec Corporation | Rapidly traversing disc volumes during file content examination |
US7440398B2 (en) | 2004-11-29 | 2008-10-21 | Honeywell International Inc. | Fault tolerant communication apparatus |
US8358567B2 (en) | 2010-02-04 | 2013-01-22 | Panasonic Corporation | Information reproduction device and information reproduction method |
US10110572B2 (en) | 2015-01-21 | 2018-10-23 | Oracle International Corporation | Tape drive encryption in the data path |
Also Published As
Publication number | Publication date |
---|---|
JP2004504645A (en) | 2004-02-12 |
WO1999012101A3 (en) | 1999-08-19 |
US6240527B1 (en) | 2001-05-29 |
JP3878412B2 (en) | 2007-02-07 |
AU9383298A (en) | 1999-03-22 |
US6199178B1 (en) | 2001-03-06 |
DE19882659T1 (en) | 2000-10-12 |
US6016553A (en) | 2000-01-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6199178B1 (en) | Method, software and apparatus for saving, using and recovering data | |
US20020049883A1 (en) | System and method for restoring a computer system after a failure | |
US6732293B1 (en) | Method, software and apparatus for recovering and recycling data in conjunction with an operating system | |
US6311193B1 (en) | Computer system | |
US9699255B2 (en) | Device driver | |
US6038639A (en) | Data file storage management system for snapshot copy operations | |
KR101137299B1 (en) | Hierarchical storage management for a file system providing snapshots | |
US8005797B1 (en) | File-level continuous data protection with access to previous versions | |
US5956745A (en) | System and method for automatically resizing a disk drive volume | |
US8296264B1 (en) | Method and system for file-level continuous data protection | |
JP4181044B2 (en) | A backup method for saving a snapshot of selected data in a mass storage system | |
US5086502A (en) | Method of operating a data processing system | |
US7752402B2 (en) | Systems and methods for allowing incremental journaling | |
US7822932B2 (en) | Systems and methods for providing nonlinear journaling | |
US6738863B2 (en) | Method for rebuilding meta-data in a data storage system and a data storage system | |
CN1311358C (en) | Efficient search for migration and purge candidates | |
US20110153569A1 (en) | Systems and methods for providing nonlinear journaling | |
CA2504322A1 (en) | Apparatus and method for hardware-based file system | |
JPH07104808B2 (en) | Method and apparatus for dynamic volume tracking in an installable file system | |
KR20000022716A (en) | Efficient volume copy using pre-configuration of log structured target storage | |
US6629203B1 (en) | Alternating shadow directories in pairs of storage spaces for data storage | |
EP1091299A2 (en) | Method, software and apparatus for recovering data in conjunction with an operating system | |
JPH09152983A (en) | Reentrant garbage collection processing in file system immanent in flash memory | |
Schonhorst | Evolution of the Unix File System |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A2 Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH GM HR HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): GH GM KE LS MW SD SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) |
AK | Designated states |
Kind code of ref document: A3 Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH GM HR HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A3 Designated state(s): GH GM KE LS MW SD SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG |
|
NENP | Non-entry into the national phase |
Ref country code: KR |
|
ENP | Entry into the national phase |
Ref document number: 2000 509037 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 09530919 Country of ref document: US |
|
RET | De translation (de og part 6b) |
Ref document number: 19882659 Country of ref document: DE Date of ref document: 20001012 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 19882659 Country of ref document: DE |
|
122 | Ep: pct application non-entry in european phase |
NENP | Non-entry into the national phase |
Ref country code: CA |
|
REG | Reference to national code |
Ref country code: DE Ref legal event code: 8607 |