US9158630B1 - Testing integrity of replicated storage - Google Patents

Testing integrity of replicated storage

Info

Publication number
US9158630B1
Authority
US
United States
Prior art keywords
volume
replica
site
hash signatures
production
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US14/133,945
Inventor
Assaf Natanzon
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
EMC Corp
Original Assignee
EMC Corp
Application filed by EMC Corp
Priority to US14/133,945
Assigned to EMC CORPORATION. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: NATANZON, ASSAF
Application granted
Publication of US9158630B1
Assigned to CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT. SECURITY AGREEMENT. Assignors: ASAP SOFTWARE EXPRESS, INC., AVENTAIL LLC, CREDANT TECHNOLOGIES, INC., DELL INTERNATIONAL L.L.C., DELL MARKETING L.P., DELL PRODUCTS L.P., DELL SOFTWARE INC., DELL SYSTEMS CORPORATION, DELL USA L.P., EMC CORPORATION, EMC IP Holding Company LLC, FORCE10 NETWORKS, INC., MAGINATICS LLC, MOZY, INC., SCALEIO LLC, SPANNING CLOUD APPS LLC, WYSE TECHNOLOGY L.L.C.
Assigned to THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT. SECURITY AGREEMENT. Assignors: ASAP SOFTWARE EXPRESS, INC., AVENTAIL LLC, CREDANT TECHNOLOGIES, INC., DELL INTERNATIONAL L.L.C., DELL MARKETING L.P., DELL PRODUCTS L.P., DELL SOFTWARE INC., DELL SYSTEMS CORPORATION, DELL USA L.P., EMC CORPORATION, EMC IP Holding Company LLC, FORCE10 NETWORKS, INC., MAGINATICS LLC, MOZY, INC., SCALEIO LLC, SPANNING CLOUD APPS LLC, WYSE TECHNOLOGY L.L.C.
Assigned to EMC IP Holding Company LLC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: EMC CORPORATION
Assigned to THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A. SECURITY AGREEMENT. Assignors: CREDANT TECHNOLOGIES, INC., DELL INTERNATIONAL L.L.C., DELL MARKETING L.P., DELL PRODUCTS L.P., DELL USA L.P., EMC CORPORATION, EMC IP Holding Company LLC, FORCE10 NETWORKS, INC., WYSE TECHNOLOGY L.L.C.
Assigned to THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A. SECURITY AGREEMENT. Assignors: CREDANT TECHNOLOGIES INC., DELL INTERNATIONAL L.L.C., DELL MARKETING L.P., DELL PRODUCTS L.P., DELL USA L.P., EMC CORPORATION, EMC IP Holding Company LLC, FORCE10 NETWORKS, INC., WYSE TECHNOLOGY L.L.C.
Assigned to AVENTAIL LLC, FORCE10 NETWORKS, INC., DELL SYSTEMS CORPORATION, DELL PRODUCTS L.P., DELL INTERNATIONAL, L.L.C., DELL SOFTWARE INC., EMC CORPORATION, ASAP SOFTWARE EXPRESS, INC., MOZY, INC., SCALEIO LLC, CREDANT TECHNOLOGIES, INC., DELL USA L.P., WYSE TECHNOLOGY L.L.C., MAGINATICS LLC, DELL MARKETING L.P., EMC IP Holding Company LLC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH
Assigned to DELL PRODUCTS L.P., SCALEIO LLC, DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO ASAP SOFTWARE EXPRESS, INC.), DELL USA L.P., DELL INTERNATIONAL L.L.C., DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO FORCE10 NETWORKS, INC. AND WYSE TECHNOLOGY L.L.C.), EMC IP HOLDING COMPANY LLC (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MOZY, INC.), EMC CORPORATION (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MAGINATICS LLC), DELL MARKETING L.P. (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO CREDANT TECHNOLOGIES, INC.). RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001). Assignors: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT
Assigned to DELL USA L.P., SCALEIO LLC, DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO FORCE10 NETWORKS, INC. AND WYSE TECHNOLOGY L.L.C.), DELL INTERNATIONAL L.L.C., EMC CORPORATION (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MAGINATICS LLC), DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO ASAP SOFTWARE EXPRESS, INC.), EMC IP HOLDING COMPANY LLC (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MOZY, INC.), DELL PRODUCTS L.P., DELL MARKETING L.P. (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO CREDANT TECHNOLOGIES, INC.). RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001). Assignors: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/14 Error detection or correction of the data by redundancy in operation
    • G06F 11/1402 Saving, restoring, recovering or retrying
    • G06F 11/1446 Point-in-time backing up or restoration of persistent data
    • G06F 11/1448 Management of the data involved in backup or backup restore
    • G06F 11/1451 Management of the data involved in backup or backup restore by selection of backup contents
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16 Error detection or correction of the data by redundancy in hardware
    • G06F 11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/2053 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F 11/2094 Redundant storage or storage space
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16 Error detection or correction of the data by redundancy in hardware
    • G06F 11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/2097 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements maintaining the standby controller/processing unit updated
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0614 Improving the reliability of storage systems
    • G06F 3/0619 Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0646 Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F 3/065 Replication mechanisms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2201/00 Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F 2201/83 Indexing scheme relating to error detection, to error correction, and to monitoring the solution involving signatures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2201/00 Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F 2201/84 Using snapshots, i.e. a logical point-in-time copy of the data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2201/00 Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F 2201/855 Details of asynchronous mirroring using a journal to transfer not-yet-mirrored changes

Definitions

  • journaling and some techniques associated with journaling are described in the patent titled “METHODS AND APPARATUS FOR OPTIMAL JOURNALING FOR CONTINUOUS DATA REPLICATION,” U.S. Pat. No. 7,516,287, which is hereby incorporated by reference.
  • a data protection system 100 includes two sites; Site I, which is a production site, and Site II, which is a backup site or replica site.
  • the backup site is responsible for replicating production site data. Additionally, the backup site enables roll back of Site I data to an earlier point in time, which may be used in the event of data corruption or a disaster, or alternatively in order to view or to access data from an earlier point in time.
  • FIG. 1 is an overview of a system for data replication of either physical or virtual logical units.
  • in one example, a hypervisor consumes logical units and generates a distributed file system on them, such as VMFS, which creates files in the file system and exposes the files as logical units to the virtual machines (each VMDK is seen as a SCSI device by virtual hosts).
  • the hypervisor consumes a network based file system and exposes files in the NFS as SCSI devices to virtual hosts.
  • a failover may be performed in the event of a disaster at the production site, or for other reasons.
  • Site I or Site II behaves as a production site for a portion of stored data, and behaves simultaneously as a backup site for another portion of stored data.
  • a portion of stored data is replicated to a backup site, and another portion is not.
  • the production site and the backup site may be remote from one another, or they may both be situated at a common site, local to one another.
  • Local data protection has the advantage of minimizing data lag between target and source, and remote data protection has the advantage of being robust in the event that a disaster occurs at the source side.
  • the source and target sides communicate via a wide area network (WAN) 128 , although other types of networks may be used.
  • Each side of system 100 includes three major components coupled via a storage area network (SAN); namely, (i) a storage system, (ii) a host computer, and (iii) a data protection appliance (DPA).
  • the source side SAN includes a source host computer 104 , a source storage system 108 , and a source DPA 112 .
  • the target side SAN includes a target host computer 116 , a target storage system 120 , and a target DPA 124 .
  • the protection agent (sometimes referred to as a splitter) may run on the host, on the storage, in the network, or at a hypervisor level. DPAs are optional, and DPA code may run on the storage array too, or the DPA 124 may run as a virtual machine.
  • a SAN includes one or more devices, referred to as “nodes”.
  • a node in a SAN may be an “initiator” or a “target”, or both.
  • An initiator node is a device that is able to initiate requests to one or more other devices; and a target node is a device that is able to reply to requests, such as SCSI commands, sent by an initiator node.
  • a SAN may also include network switches, such as fiber channel switches.
  • the communication links between each host computer and its corresponding storage system may be any appropriate medium suitable for data transfer, such as fiber communication channel links.
  • the host communicates with its corresponding storage system using small computer system interface (SCSI) commands.
  • System 100 includes source storage system 108 and target storage system 120 .
  • Each storage system includes physical storage units for storing data, such as disks or arrays of disks.
  • storage systems 108 and 120 are target nodes.
  • storage system 108 exposes one or more logical units (LU) to which commands are issued.
  • storage systems 108 and 120 are SAN entities that provide multiple logical units for access by multiple SAN initiators.
  • Logical units are a logical entity provided by a storage system, for accessing data stored in the storage system.
  • the logical unit may be a physical logical unit or a virtual logical unit.
  • a logical unit is identified by a unique logical unit number (LUN).
  • Storage system 108 exposes a logical unit 136 , designated as LU A, and storage system 120 exposes a logical unit 156 , designated as LU B.
  • LU B is used for replicating LU A. As such, LU B is generated as a copy of LU A. In one embodiment, LU B is configured so that its size is identical to the size of LU A.
  • storage system 120 serves as a backup for source side storage system 108 .
  • some logical units of storage system 120 may be used to back up logical units of storage system 108 , and other logical units of storage system 120 may be used for other purposes.
  • there is symmetric replication whereby some logical units of storage system 108 are used for replicating logical units of storage system 120 , and other logical units of storage system 120 are used for replicating other logical units of storage system 108 .
  • System 100 includes a source side host computer 104 and a target side host computer 116 .
  • a host computer may be one computer, or a plurality of computers, or a network of distributed computers; each computer may include, inter alia, a conventional CPU, volatile and non-volatile memory, a data bus, an I/O interface, a display interface and a network interface.
  • a host computer runs at least one data processing application, such as a database application and an e-mail server.
  • an operating system of a host computer creates a host device for each logical unit exposed by a storage system in the host computer SAN.
  • a host device is a logical entity in a host computer, through which a host computer may access a logical unit.
  • Host computer 104 identifies LU A and generates a corresponding host device 140 , designated as Device A, through which it can access LU A.
  • host computer 116 identifies LU B and generates a corresponding device 160 , designated as Device B.
  • host computer 104 is a SAN initiator that issues I/O requests (write/read operations) through host device 140 to LU A using, for example, SCSI commands. Such requests are generally transmitted to LU A with an address that includes a specific device identifier, an offset within the device, and a data size. Offsets are generally aligned to 512 byte blocks.
  • the average size of a write operation issued by host computer 104 may be, for example, 10 kilobytes (KB); i.e., 20 blocks. For an I/O rate of 50 megabytes (MB) per second, this corresponds to approximately 5,000 write transactions per second.
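  • In one example, the arithmetic above can be checked directly. A minimal Python sketch follows; the block size, write size, and I/O rate are the example's own figures:

        # 512-byte blocks, 10 KB average writes, 50 MB/s aggregate write rate
        BLOCK_SIZE = 512
        avg_write = 10 * 1024                         # 10 KB per write
        blocks_per_write = avg_write // BLOCK_SIZE    # -> 20 blocks
        io_rate = 50 * 1024 * 1024                    # 50 MB per second
        writes_per_second = io_rate // avg_write      # -> 5120, i.e., approximately 5,000
        print(blocks_per_write, writes_per_second)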
  • System 100 includes two data protection appliances, a source side DPA 112 and a target side DPA 124 .
  • a DPA performs various data protection services, such as data replication of a storage system, and journaling of I/O requests issued by a host computer to source side storage system data.
  • a DPA may also enable roll back of data to an earlier point in time, and processing of rolled back data at the target site.
  • Each DPA 112 and 124 is a computer that includes inter alia one or more conventional CPUs and internal memory.
  • each DPA is a cluster of such computers.
  • Use of a cluster ensures that if a DPA computer is down, then the DPA functionality switches over to another computer.
  • the DPA computers within a DPA cluster communicate with one another using at least one communication link suitable for data transfer via fiber channel or IP based protocols, or such other transfer protocol.
  • One computer from the DPA cluster serves as the DPA leader.
  • the DPA cluster leader coordinates between the computers in the cluster, and may also perform other tasks that require coordination between the computers, such as load balancing.
  • DPA 112 and DPA 124 are standalone devices integrated within a SAN.
  • each of DPA 112 and DPA 124 may be integrated into storage system 108 and storage system 120 , respectively, or integrated into host computer 104 and host computer 116 , respectively.
  • Both DPAs communicate with their respective host computers through communication lines such as fiber channels using, for example, SCSI commands or any other protocol.
  • DPAs 112 and 124 are configured to act as initiators in the SAN; i.e., they can issue I/O requests using, for example, SCSI commands, to access logical units on their respective storage systems. DPA 112 and DPA 124 are also configured with the necessary functionality to act as targets; i.e., to reply to I/O requests, such as SCSI commands, issued by other initiators in the SAN, including inter alia their respective host computers 104 and 116 . Being target nodes, DPA 112 and DPA 124 may dynamically expose or remove one or more logical units.
  • Site I and Site II may each behave simultaneously as a production site and a backup site for different logical units.
  • DPA 112 and DPA 124 may each behave as a source DPA for some logical units, and as a target DPA for other logical units, at the same time.
  • Host computer 104 and host computer 116 include protection agents 144 and 164 , respectively.
  • Protection agents 144 and 164 intercept SCSI commands issued by their respective host computers, via host devices to logical units that are accessible to the host computers.
  • a data protection agent may act on an intercepted SCSI command issued to a logical unit in one of the following ways: send the SCSI command to its intended logical unit; redirect the SCSI command to another logical unit; split the SCSI command by sending it first to the respective DPA and, after the DPA returns an acknowledgement, sending the SCSI command to its intended logical unit; fail a SCSI command by returning an error return code; and delay a SCSI command by not returning an acknowledgement to the respective host computer.
  • a protection agent may handle different SCSI commands, differently, according to the type of the command. For example, a SCSI command inquiring about the size of a certain logical unit may be sent directly to that logical unit, while a SCSI write command may be split and sent first to a DPA associated with the agent.
  • a protection agent may also change its behavior for handling SCSI commands, for example as a result of an instruction received from the DPA.
  • the behavior of a protection agent for a certain host device generally corresponds to the behavior of its associated DPA with respect to the logical unit of the host device.
  • When a DPA behaves as a source site DPA for a certain logical unit, the associated protection agent splits I/O requests issued by a host computer to the host device corresponding to that logical unit.
  • When a DPA behaves as a target site DPA for a certain logical unit, the associated protection agent fails I/O requests issued by the host computer to the host device corresponding to that logical unit.
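  • The dispatch choices listed above can be summarized in code. The following Python sketch is illustrative only; the names (SplitterAction, dpa.send, lu.send and so forth) are assumptions, not part of the patent or of any real splitter API:

        from enum import Enum, auto

        class SplitterAction(Enum):
            SEND = auto()      # pass the command straight to its intended logical unit
            REDIRECT = auto()  # redirect it to another logical unit
            SPLIT = auto()     # send it to the DPA first, then to the logical unit
            FAIL = auto()      # return an error code to the host
            DELAY = auto()     # withhold the acknowledgement

        def handle(cmd, action, dpa, lu):
            # Hypothetical dispatch mirroring the behaviors listed above.
            if action is SplitterAction.SPLIT:
                dpa.send(cmd)            # first to the DPA...
                dpa.wait_for_ack(cmd)    # ...wait for its acknowledgement...
                lu.send(cmd)             # ...then to the intended logical unit
            elif action is SplitterAction.SEND:
                lu.send(cmd)
            elif action is SplitterAction.FAIL:
                return "error"           # fail by returning an error return code
            # REDIRECT and DELAY are analogous and omitted for brevity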
  • Protection agents may use any protocol suitable for data transfer within a SAN, such as fiber channel, or SCSI over fiber channel.
  • the communication may be direct, or via a logical unit exposed by the DPA.
  • Protection agents communicate with their respective DPAs by sending SCSI commands over fiber channel.
  • Protection agents 144 and 164 are drivers located in their respective host computers 104 and 116 .
  • a protection agent may also be located in a fiber channel switch, or in any other device situated in a data path between a host computer and a storage system or on the storage system itself.
  • the protection agent may run at the hypervisor layer or in a virtual machine providing a virtualization layer.
  • DPA 112 acts as a source site DPA for LU A.
  • protection agent 144 is configured to act as a source side protection agent; i.e., as a splitter for host device A. Specifically, protection agent 144 replicates SCSI I/O write requests. A replicated SCSI I/O write request is sent to DPA 112 . After receiving an acknowledgement from DPA 112 , protection agent 144 then sends the SCSI I/O write request to LU A. After receiving a second acknowledgement from storage system 108 , host computer 104 acknowledges that the I/O command is complete.
  • DPA 112 When DPA 112 receives a replicated SCSI write request from data protection agent 144 , DPA 112 transmits certain I/O information characterizing the write request, packaged as a “write transaction”, over WAN 128 to DPA 124 on the target side, for journaling and for incorporation within target storage system 120 .
  • DPA 112 may send its write transactions to DPA 124 using a variety of modes of transmission, including inter alia (i) a synchronous mode, (ii) an asynchronous mode, and (iii) a snapshot mode.
  • In synchronous mode, DPA 112 sends each write transaction to DPA 124 , receives back an acknowledgement from DPA 124 , and in turn sends an acknowledgement back to protection agent 144 .
  • Protection agent 144 waits until receipt of such acknowledgement before sending the SCSI write request to LU A.
  • In asynchronous mode, DPA 112 sends an acknowledgement to protection agent 144 upon receipt of each I/O request, before receiving an acknowledgement back from DPA 124 .
  • In snapshot mode, DPA 112 receives several I/O requests and combines them into an aggregate “snapshot” of all write activity performed in the multiple I/O requests, and sends the snapshot to DPA 124 , for journaling and for incorporation in target storage system 120 .
  • DPA 112 also sends an acknowledgement to protection agent 144 upon receipt of each I/O request, before receiving an acknowledgement back from DPA 124 .
  • DPA 124 While in production mode, DPA 124 receives replicated data of LU A from DPA 112 , and performs journaling and writing to storage system 120 . When applying write operations to storage system 120 , DPA 124 acts as an initiator, and sends SCSI commands to LU B.
  • DPA 124 undoes the write transactions in the journal, so as to restore storage system 120 to the state it was in at an earlier time.
  • LU B is used as a backup of LU A.
  • host computer 116 should not be sending I/O requests to LU B.
  • protection agent 164 acts as a target site protection agent for host Device B and fails I/O requests sent from host computer 116 to LU B through host Device B.
  • Target storage system 120 exposes a logical unit 176 , referred to as a “journal LU”, for maintaining a history of write transactions made to LU B, referred to as a “journal”.
  • journal LU 176 may be striped over several logical units, or may reside within all of or a portion of another logical unit.
  • DPA 124 includes a journal processor 180 for managing the journal.
  • Journal processor 180 functions generally to manage the journal entries of LU B. Specifically, journal processor 180 enters write transactions received by DPA 124 from DPA 112 into the journal, by writing them into the journal LU, reads the undo information for the transaction from LU B, updates the journal entries in the journal LU with undo information, applies the journal transactions to LU B, and removes already-applied transactions from the journal.
  • Reference is now made to FIG. 2 , which is an illustration of a write transaction 200 for a journal.
  • the journal may be used to provide an adaptor for access to storage 120 at the state it was in at any specified point in time. Since the journal contains the “undo” information necessary to roll back storage system 120 , data that was stored in specific memory locations at the specified point in time may be obtained by undoing write transactions that occurred subsequent to such point in time.
  • Write transaction 200 generally includes the following fields: one or more identifiers; a time stamp, which is the date & time at which the transaction was received by source side DPA 112 ; a write size, which is the size of the data block; a location in journal LU 176 where the data is entered; a location in LU B where the data is to be written; and the data itself.
  • Write transaction 200 is transmitted from source side DPA 112 to target side DPA 124 .
  • DPA 124 records the write transaction 200 in the journal that includes four streams.
  • a first stream, referred to as a DO stream, includes new data for writing in LU B.
  • a second stream, referred to as a DO METADATA stream, includes metadata for the write transaction, such as an identifier, a date & time, a write size, a beginning address in LU B for writing the new data in, and a pointer to the offset in the DO stream where the corresponding data is located.
  • a third stream, referred to as an UNDO stream, includes old data that was overwritten in LU B.
  • a fourth stream, referred to as an UNDO METADATA stream, includes an identifier, a date & time, a write size, a beginning address in LU B where data was to be overwritten, and a pointer to the offset in the UNDO stream where the corresponding old data is located.
  • each of the four streams holds a plurality of write transaction data.
  • As write transactions are received dynamically by target DPA 124 , they are recorded at the end of the DO stream and the end of the DO METADATA stream, prior to committing the transaction.
  • When the various write transactions are applied to LU B, prior to writing the new DO data into addresses within the storage system, the older data currently located in such addresses is recorded into the UNDO stream.
  • the metadata stream (e.g., the UNDO METADATA stream or the DO METADATA stream) and the data stream (e.g., the UNDO stream or the DO stream) may each be kept in a single stream by interleaving metadata into the data stream.
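  • A toy model may make the bookkeeping of the four streams described above concrete. This Python sketch assumes an in-memory bytearray volume and list-backed streams; it is illustrative, not the patent's implementation:

        class Journal:
            def __init__(self):
                self.do_data, self.do_meta = [], []      # new data + its metadata
                self.undo_data, self.undo_meta = [], []  # overwritten data + its metadata

        def record(journal, txn_id, timestamp, address, data):
            # As a write transaction arrives, append to the end of the DO streams.
            journal.do_meta.append((txn_id, timestamp, len(data), address, len(journal.do_data)))
            journal.do_data.append(data)

        def apply_oldest(journal, volume):
            # Before committing new DO data, save the data being overwritten to UNDO.
            txn_id, timestamp, size, address, offset = journal.do_meta.pop(0)
            old = bytes(volume[address:address + size])
            journal.undo_meta.append((txn_id, timestamp, size, address, len(journal.undo_data)))
            journal.undo_data.append(old)
            volume[address:address + size] = journal.do_data[offset]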
  • a data protection system 300 includes a data protection appliance (DPA) cluster 302 a and a storage array 306 a at a production site and a DPA cluster 302 b and a storage array 306 b at a replication site.
  • the DPA clusters 302 a , 302 b are connected by a network 304 (e.g., a WAN, a Fibre Channel and so forth).
  • the storage array 306 a includes a primary storage volume 312 a , a journal 316 a , a first snapshot 322 a , a second snapshot 322 b , APIs 350 a and a delta marking stream 360 .
  • the storage array 306 b includes a replica storage volume 312 b which replicates the primary storage volume 312 a , a journal 316 b , a list of snapshots 340 , and APIs 350 b.
  • Process 400 generates a first snapshot of a production storage array ( 402 ).
  • the DPA cluster 302 a generates a first snapshot 322 a of the primary storage volume 312 a .
  • the DPA cluster 302 a generates the first snapshot 322 a using the API 350 a .
  • the first snapshot is sent to the replica site.
  • Process 400 generates a second snapshot of a production storage array ( 406 ).
  • the DPA cluster 302 a generates a second snapshot 322 b of the primary storage volume 312 a .
  • the DPA cluster 302 a generates the second snapshot 322 b using the API 350 a.
  • Process 400 obtains differences between the first snapshot and the second snapshot ( 408 ).
  • the DPA cluster 302 a obtains the differences between the first snapshot 322 a and the second snapshot 322 b by using the API 350 a.
  • obtaining the differences between the first snapshot and the second snapshot may be performed by executing a vendor specific read difference command.
  • the read difference command is a vendor specific command which returns locations of the difference and data in the locations.
  • the result of executing the read difference command is a sequence of (location, length); (location, length) and so forth.
  • the read difference command returns a change bitmap.
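  • Although the read difference command itself is vendor specific, the two output shapes described above are easy to emulate. A Python sketch, assuming two equal-sized snapshots held as bytes:

        def diff_extents(snap1, snap2, block=512):
            # Returns a sequence of (location, length) runs where the snapshots differ.
            extents, start = [], None
            for i in range(0, len(snap1), block):
                same = snap1[i:i + block] == snap2[i:i + block]
                if not same and start is None:
                    start = i
                elif same and start is not None:
                    extents.append((start, i - start))
                    start = None
            if start is not None:
                extents.append((start, len(snap1) - start))
            return extents

        def diff_bitmap(snap1, snap2, block=512):
            # One entry per block: True where the block changed.
            return [snap1[i:i + block] != snap2[i:i + block]
                    for i in range(0, len(snap1), block)]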
  • Process 400 adds the differences to a delta marking stream ( 410 ).
  • the DPA cluster 302 a adds the differences obtained in processing block 408 to the delta marking stream 360 using the API 350 a.
  • Process 400 deletes the first snapshot ( 418 ).
  • the DPA cluster 302 a deletes the first snapshot 322 a.
  • Process 400 reads the data that changed in the second snapshot ( 422 ) and sends the data to the replication site ( 440 ).
  • the DPA cluster 302 a reads the data that changed in the second snapshot 322 b and sends the data to the DPA cluster 302 b.
  • Process 400 renames the second snapshot to the first snapshot ( 446 ) and performs processing block 406 .
  • the DPA cluster 302 a renames the second snapshot to the first snapshot.
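  • Putting processing blocks 402 to 446 together, the snapshot shipping loop can be sketched as follows. The helper names (create_snapshot, read_difference and so forth) are hypothetical stand-ins for the array APIs 350 a , 350 b , not real API calls:

        def ship_snapshots(array, remote_dpa, delta_marking_stream):
            snap1 = array.create_snapshot()                   # 402
            while True:
                snap2 = array.create_snapshot()               # 406
                diffs = array.read_difference(snap1, snap2)   # 408 (vendor specific)
                delta_marking_stream.extend(diffs)            # 410
                array.delete_snapshot(snap1)                  # 418
                for location, length in diffs:                # 422 and 440
                    remote_dpa.send(location, snap2.read(location, length))
                snap1 = snap2                                 # 446: second snapshot becomes the first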
  • FIGS. 5 to 7 depict an example of processes used to test the integrity of a replicated storage.
  • FIGS. 5 and 7 depict processes (e.g., a process 500 and a process 700 , respectively) that occur at the production site, while FIG. 6 depicts a process (e.g., a process 600 ) that occurs at the replication site.
  • the issue with the above approach is that it requires the snapshots to remain in existence for a long time. In particular, the snapshots cannot be erased during the period of checking for integrity. If the replicated volume is large, the scrubbing can take a significant amount of time, and keeping the snapshots for that long requires a significant amount of storage space.
  • the integrity check is started at the beginning of the volume and continues to the end of the volume.
  • the integrity check is performed on a portion of the volume at a time. Since the system is in the middle of snapshot shipping, some of the portion may change during the shipping of the snapshot.
  • the snapshots do not need to be kept for long periods of time, consuming significant amounts of storage.
  • process 500 generates a snapshot of the production volume to form a snapshot volume ( 502 ).
  • Process 500 goes to a first portion of the snapshot volume ( 508 ).
  • the integrity check is performed on a portion of the volume at a time.
  • the snapshot volume may be a terabyte and a portion of the snapshot volume may be 10 gigabytes.
  • Process 500 reads a first area of the portion of the snapshot of the volume ( 514 ).
  • a portion is 10 gigabytes and the first area is 10 megabytes.
  • Process 500 generates a coarse granularity hash signature ( 518 ).
  • a coarse granularity hash signature is a hash signature of the data in an area.
  • Process 500 generates fine granularity hash signatures ( 522 ).
  • the area is further broken down into subareas.
  • a fine granularity hash signature is a hash signature of the data in a subarea.
  • a fine granularity hash signature is generated for each subarea that makes up the area.
  • a subarea may be the size of the smallest block used for snapshot shipping.
  • Process 500 determines if the area of memory intersects with a portion of the volume that is undergoing changes due to differences occurring in the production volume relative to the current snapshot volume ( 528 ). While the integrity check is occurring, portions of the production volume may change. Process 500 determines if those differences occurred within the area of memory currently being processed.
  • If so, process 500 sends the fine granularity hash signatures to the replica site ( 530 ).
  • Otherwise, process 500 sends the coarse granularity hash signature to the replica site ( 536 ).
  • Process 500 determines if there are any more areas left to process in the current portion ( 538 ). If there are more areas left, process 500 reads the next area ( 540 ) and repeats processing block 518 .
  • process 500 determines if there are any more portions left in the snapshot volume ( 542 ). If there are more portions, process 500 goes to the next portion ( 546 ) and reads the first area ( 514 ). If there are no more portions, process 500 ends. In some embodiments, if the check of the portion is not complete, the system will not move on to ship the next snapshot (i.e., going from processing block 446 to processing block 406 ).
  • the integrity check may not delay the snapshot shipping process.
  • the integrity tool is configured to start reading from the second snapshot, but the differences are added to a special data structure of areas to be ignored.
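  • The production-side walk of processing blocks 502 to 546 can be sketched as follows, assuming SHA-1 signatures, an in-memory snapshot, and illustrative sizes (10 MB areas, with subareas taken here as 64 KB; neither size is fixed by the patent). The changed_locations argument plays the role of the marked list of in-flight locations:

        import hashlib

        AREA = 10 * 1024 * 1024      # illustrative area size
        SUBAREA = 64 * 1024          # illustrative smallest shipping block

        def sha1(data):
            return hashlib.sha1(data).digest()

        def check_portion(snapshot, start, length, changed_locations, send):
            for area_start in range(start, start + length, AREA):     # 514, 540
                area = snapshot[area_start:area_start + AREA]
                coarse = sha1(area)                                    # 518
                fine = {area_start + off: sha1(area[off:off + SUBAREA])
                        for off in range(0, len(area), SUBAREA)}       # 522
                changing = any(area_start <= loc < area_start + AREA
                               for loc in changed_locations)           # 528
                if changing:
                    send("fine", area_start, fine)                     # 530
                else:
                    send("coarse", area_start, coarse)                 # 536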
  • process 600 goes to the first area of the portion of the replica volume ( 602 ).
  • Process 600 determines, for the current area, whether a coarse granularity hash signature was received from the production site for the corresponding area in the snapshot volume ( 608 ). If a coarse granularity hash signature was received from the production site, process 600 determines a coarse granularity hash signature for the corresponding area in the replica volume ( 614 ).
  • Process 600 determines if the coarse granularity hash signature for an area from the snapshot volume at the production site is the same as the coarse granularity hash signature for the corresponding area in the replica volume ( 618 ). If it is not the same, process 600 generates and sends fine granularity hash signatures of the subareas of the area in the replica volume to the production site ( 622 ).
  • process 600 determines if fine granularity hash signatures were received from the production site for the corresponding area ( 626 ). If fine granularity hash signatures were received from the production site, process 600 determines fine granularity hash signatures for the corresponding subareas in the replica volume ( 632 ).
  • Process 600 determines if the fine granularity hash signatures for an area from the snapshot volume at the production site are the same as the fine granularity hash signatures for the corresponding subareas in the replica volume ( 636 ). If they are not the same, process 600 sends the locations of those differences to the production site ( 642 ).
  • Process 600 determines if there are any more areas left ( 628 ) and, if there are more areas left, goes to the next area ( 630 ) and repeats processing block 608 .
  • process 600 determines if there are any portions left ( 632 ); if there are portions left, process 600 goes to the next portion ( 634 ) and repeats processing block 602 . If there are no portions left, process 600 ends.
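  • The replica-side comparison of processing blocks 602 to 642 is the mirror image, reusing the sha1, AREA and SUBAREA helpers assumed in the production-side sketch above:

        def verify_area(kind, area_start, payload, replica, send_back):
            area = replica[area_start:area_start + AREA]
            if kind == "coarse":                                       # 608, 614, 618
                if sha1(area) != payload:
                    # Mismatch: reply with fine signatures so the production
                    # site can narrow the error down to subareas (622).
                    fine = {area_start + off: sha1(area[off:off + SUBAREA])
                            for off in range(0, len(area), SUBAREA)}
                    send_back("fine", area_start, fine)
            elif kind == "fine":                                       # 626, 632, 636
                bad = [loc for loc, sig in payload.items()
                       if sha1(replica[loc:loc + SUBAREA]) != sig]
                if bad:
                    send_back("bad_locations", area_start, bad)        # 642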
  • process 700 determines if fine granularity hash signatures were received from the replica site ( 702 ), for example, the fine granularity hash signatures sent by the replica site in processing block 622 ( FIG. 6 ).
  • process 700 determines if there are any differences between the fine granularity hash signatures from the replica site and the fine granularity hash signatures at the production site for the corresponding subareas ( 704 ). If there are differences, process 700 marks the areas as suspected dirty in a dirty list ( 710 ).
  • Process 700 determines if bad locations were received from the replica site ( 714 ). For example, those locations sent by the replica site in processing block 642 ( FIG. 6 ). If there are bad locations received, process 700 marks the locations in the dirty list ( 716 ).
  • Process 700 determines if the system is configured to stop or pause shipping new snapshots of the production volume to the replica site until the integrity testing of a portion is complete ( 722 ) and if the system is configured as such, process 700 reports the dirty list as integrity errors ( 730 ).
  • process 700 removes entries from the dirty list that are being changed in the production volume ( 734 ). If process 700 determines that the dirty list is not empty ( 738 ), process 700 reports the dirty list as a list of integrity errors ( 742 ).
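  • The production-side reconciliation of process 700 can be sketched as follows. This is a simplification: the fine-signature comparison of processing block 704 is elided, and any fine-signature response simply marks the area suspected dirty before locations known to be changing are pruned:

        def reconcile(messages, changing_locations, pause_shipping):
            dirty = set()
            for kind, area_start, payload in messages:
                if kind == "fine":                    # 702, 704, 710
                    dirty.add(area_start)             # mark area suspected dirty
                elif kind == "bad_locations":         # 714, 716
                    dirty.update(payload)
            if pause_shipping:                        # 722, 730
                return sorted(dirty)                  # report as integrity errors
            dirty -= set(changing_locations)          # 734
            return sorted(dirty)                      # 738, 742 (empty list: no errors)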
  • Processes 800 and 900 are examples of processes to determine which portions of the volume to check first.
  • process 800 checks for regions in a volume with higher recent activity ( 810 ). For example, locations which were more write active in the last few days or since the last check are checked first, as these locations are probably more important, and if there was corruption due to a replication error it is more likely to have happened within these blocks.
  • storage tiering statistics of the storage (e.g., EMC® Fully Automated Storage Tiering (FAST®)) are used to find the most active areas.
  • process 900 checks regions which are not write active ( 902 ). For example, storage statistics are used to determine locations which are not write active. A defect may be caused if the spindles of a device are not functioning well (i.e., some sectors are corrupted). If the corrupted sectors are active in the system, these errors would be discovered. However, if an area of storage is not written to, or its sectors are only read from, then those sectors may never be verified at the replica site, since the replication system does not read from them.
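  • Both orderings reduce to a sort over per-region activity statistics. A Python sketch, where write_counts is an assumed mapping from region to recent write count (e.g., derived from storage tiering statistics):

        def order_regions(regions, write_counts, hot_first=True):
            # hot_first=True checks the most write-active regions first (process 800);
            # hot_first=False checks the quietest regions first (process 900).
            return sorted(regions, key=lambda r: write_counts.get(r, 0), reverse=hot_first)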
  • a computer 1000 includes a processor 1002 , a volatile memory 1004 , a non-volatile memory 1006 (e.g., hard disk) and a user interface (UI) 1008 (e.g., a graphical user interface, a mouse, a keyboard, a display, a touch screen and so forth).
  • the non-volatile memory 1006 stores computer instructions 1012 , an operating system 1016 and data 1018 .
  • the computer instructions 1012 are executed by the processor 1002 out of volatile memory 1004 to perform all or part of the processes described herein (e.g., processes 500 , 600 , 700 , 800 and 900 ).
  • the processes described herein are not limited to use with the hardware and software of FIG. 10 ; they may find applicability in any computing or processing environment and with any type of machine or set of machines that is capable of running a computer program.
  • the processes described herein may be implemented in hardware, software, or a combination of the two.
  • the processes described herein may be implemented in computer programs executed on programmable computers/machines that each includes a processor, a non-transitory machine-readable medium or other article of manufacture that is readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and one or more output devices.
  • Program code may be applied to data entered using an input device to perform any of the processes described herein and to generate output information.
  • the system may be implemented, at least in part, via a computer program product (e.g., in a non-transitory machine-readable storage medium such as, for example, a non-transitory computer-readable medium), for execution by, or to control the operation of, data processing apparatus (e.g., a programmable processor, a computer, or multiple computers).
  • Each such program may be implemented in a high level procedural or object-oriented programming language to communicate with a computer system.
  • the programs may be implemented in assembly or machine language.
  • the language may be a compiled or an interpreted language and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program may be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
  • a computer program may be stored on a non-transitory machine-readable medium that is readable by a general or special purpose programmable computer for configuring and operating the computer when the non-transitory machine-readable medium is read by the computer to perform the processes described herein.
  • the processes described herein may also be implemented as a non-transitory machine-readable storage medium, configured with a computer program, where upon execution, instructions in the computer program cause the computer to operate in accordance with the processes.
  • a non-transitory machine-readable medium may include but is not limited to a hard drive, compact disc, flash memory, non-volatile memory, volatile memory, magnetic diskette and so forth but does not include a transitory signal per se.
  • the processes described herein are not limited to the specific examples described.
  • the processes 500 , 600 , 700 , 800 and 900 are not limited to the specific processing order of FIGS. 5 to 9 , respectively. Rather, any of the processing blocks of FIGS. 5 to 9 may be re-ordered, combined or removed, performed in parallel or in serial, as necessary, to achieve the results set forth above.
  • one of ordinary skill in the art would recognize that increasing and decreasing reference counts may be done in the opposite order to that described. For example, the reference count can be decreased and then increased.
  • One of ordinary skill in the art would also recognize that a value is changed from a first state to a second state when the signature data is needed to avoid erasure of the data, and that when the data is no longer needed the value returns to the first state.
  • the processing blocks (for example, in the processes 500 , 600 , 700 , 800 and 900 ) associated with implementing the system may be performed by one or more programmable processors executing one or more computer programs to perform the functions of the system. All or part of the system may be implemented as special purpose logic circuitry (e.g., an FPGA (field-programmable gate array) and/or an ASIC (application-specific integrated circuit)). All or part of the system may be implemented using electronic hardware circuitry that includes electronic devices such as, for example, at least one of a processor, a memory, a programmable logic device or a logic gate.

Abstract

In one aspect, a method includes marking a list of locations currently being shipped to a replica site, generating coarse granularity hash signatures of data for each area of memory in the snapshot volume, generating fine granularity hash signatures of data for each subarea of memory in the snapshot volume, sending coarse granularity hash signatures to the replica site for each area not being changed in a production volume and sending fine granularity hash signatures to the replica site for each area being changed in the production volume. The snapshot volume is a snapshot of the production volume.

Description

BACKGROUND
Computer data is vital to today's organizations and a significant part of protection against disasters is focused on data protection. As solid-state memory has advanced to the point where cost of memory has become a relatively insignificant factor, organizations can afford to operate with systems that store and process terabytes of data.
Conventional data protection systems include tape backup drives, for storing organizational production site data on a periodic basis. Another conventional data protection system uses data replication, by creating a copy of production site data of an organization on a secondary backup storage system, and updating the backup with changes. The backup storage system may be situated in the same physical location as the production storage system, or in a physically remote location. Data replication systems generally operate either at the application level, at the file system level, or at the data block level.
SUMMARY
In one aspect, a method includes marking a list of locations currently being shipped to a replica site, generating coarse granularity hash signatures of data for each area of memory in the snapshot volume, generating fine granularity hash signatures of data for each subarea of memory in the snapshot volume, sending coarse granularity hash signatures to the replica site for each area not being changed in a production volume and sending fine granularity hash signatures to the replica site for each area being changed in the production volume. The snapshot volume is a snapshot of the production volume.
In another aspect, an apparatus includes electronic hardware circuitry configured to mark a list of locations currently being shipped to a replica site, generate coarse granularity hash signatures of data for each area of memory in the snapshot volume, generate fine granularity hash signatures of data for each subarea of memory in the snapshot volume, send coarse granularity hash signatures to the replica site for each area not being changed in a production volume and send fine granularity hash signatures to the replica site for each area being changed in the production volume. The snapshot volume is a snapshot of the production volume.
In a further aspect, an article includes a non-transitory computer-readable medium that stores computer-executable instructions. The instructions cause a machine to mark a list of locations currently being shipped to a replica site, generate coarse granularity hash signatures of data for each area of memory in the snapshot volume, the snapshot volume being a snapshot of a production volume, generate fine granularity hash signatures of data for each subarea of memory in the snapshot volume, send coarse granularity hash signatures to the replica site for each area not being changed in the production volume and send fine granularity hash signatures to the replica site for each area being changed in the production volume.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of an example of a data protection system used with a continuous replication mode.
FIG. 2 is an illustration of an example of a journal history of write transactions for a storage system.
FIG. 3 is a block diagram of an example of a data protection system used with a snapshot shipping mode.
FIG. 4 is a flowchart of an example of a process to replicate data from a production site to a replication site using the snapshot shipping mode.
FIGS. 5 to 7 are flowcharts of an example of processes used to test the integrity of a replicated storage.
FIGS. 8 and 9 are flowcharts of processes used to focus integrity testing within a replicated storage.
FIG. 10 is a computer on which any of the processes of FIGS. 5 to 9 may be implemented.
DETAILED DESCRIPTION
Described herein are techniques to test the integrity of a replicated storage.
The following definitions may be useful in understanding the specification and claims.
BACKUP SITE—a facility where replicated production site data is stored; the backup site may be located in a remote site or at the same location as the production site;
BOOKMARK—a bookmark is metadata information stored in a replication journal which indicates a point in time.
DATA PROTECTION APPLIANCE (DPA)—a computer or a cluster of computers responsible for data protection services including inter alia data replication of a storage system, and journaling of I/O requests issued by a host computer to the storage system;
HASH SIGNATURE—a hash signature is generated using an algorithm such as a cryptographic hash function (e.g., SHA-1 or SHA-2) and sometimes referred to herein as a signature;
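As an illustration of this definition, a hash signature can be computed with a standard library call; the 4 KB input below is an arbitrary choice for the example, not something the definitions prescribe (a minimal sketch in Python):

    import hashlib

    def hash_signature(data: bytes) -> str:
        # A strong hash signature over a piece of volume data; SHA-1 is used
        # only because the definition names it, and SHA-2 works identically.
        return hashlib.sha1(data).hexdigest()

    print(hash_signature(b"\x00" * 4096))  # 40-character hex digest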
HOST—at least one computer or networks of computers that runs at least one data processing application that issues I/O requests to one or more storage systems; a host is an initiator within a SAN;
HOST DEVICE—an internal interface in a host, to a logical storage unit;
IMAGE—a copy of a logical storage unit at a specific point in time;
INITIATOR—a node in a SAN that issues I/O requests;
I/O DATA—Data that will be or is written to a volume by, for example, an application, sometimes called write transaction data or write data;
I/O REQUEST—an input/output request (sometimes referred to as an I/O), which may be a read I/O request (sometimes referred to as a read request or a read) or a write I/O request (sometimes referred to as a write request or a write);
JOURNAL—a record of write transactions issued to a storage system; used to maintain a duplicate storage system, and to roll back the duplicate storage system to a previous point in time;
LOGICAL UNIT—a logical entity provided by a storage system for accessing data from the storage system. The logical unit may be a physical logical unit or a virtual logical unit;
LUN—a logical unit number for identifying a logical unit;
PHYSICAL LOGICAL UNIT—a physical entity, such as a disk or an array of disks, for storing data in storage locations that can be accessed by address;
PRODUCTION SITE—a facility where one or more host computers run data processing applications that write data to a storage system and read data from the storage system;
REMOTE ACKNOWLEDGEMENTS—an acknowledgement from the remote DPA to the local DPA that data has arrived at the remote DPA (either at the appliance or in the journal);
SIGNATURE—a signature is a hash signature;
SPLITTER ACKNOWLEDGEMENT—an acknowledgement from a DPA to the protection agent (splitter) that data has been received at the DPA; this may be achieved by an SCSI status command;
SAN—a storage area network of nodes that send and receive an I/O and other requests, each node in the network being an initiator or a target, or both an initiator and a target;
SOURCE SIDE—a transmitter of data within a data replication workflow, during normal operation a production site is the source side; and during data recovery a backup site is the source side, sometimes called a primary side;
STORAGE SYSTEM—a SAN entity that provides multiple logical units for access by multiple SAN initiators;
TARGET—a node in a SAN that replies to I/O requests;
TARGET SIDE—a receiver of data within a data replication workflow; during normal operation a backup site is the target side, and during data recovery a production site is the target side, sometimes called a secondary side;
VIRTUAL LOGICAL UNIT—a virtual storage entity which is treated as a logical unit by virtual machines;
WAN—a wide area network that connects local networks and enables them to communicate with one another, such as the Internet.
A description of journaling and some techniques associated with journaling may be found in U.S. Pat. No. 7,516,287, titled "METHODS AND APPARATUS FOR OPTIMAL JOURNALING FOR CONTINUOUS DATA REPLICATION," which is hereby incorporated by reference.
AN EXAMPLE OF A REPLICATION SYSTEM USED WITH A CONTINUOUS REPLICATION MODE (FIGS. 1 AND 2)
Referring to FIG. 1, a data protection system 100 includes two sites; Site I, which is a production site, and Site II, which is a backup site or replica site. Under normal operation the production site is the source side of system 100, and the backup site is the target side of the system. The backup site is responsible for replicating production site data. Additionally, the backup site enables roll back of Site I data to an earlier point in time, which may be used in the event of data corruption or a disaster, or alternatively in order to view or to access data from an earlier point in time.
FIG. 1 is an overview of a system for data replication of either physical or virtual logical units. Thus, one of ordinary skill in the art would appreciate that, in a virtual environment, a hypervisor, in one example, would consume logical units and generate a distributed file system on them, such as VMFS, which creates files in the file system and exposes the files as logical units to the virtual machines (each VMDK is seen as a SCSI device by virtual hosts). In another example, the hypervisor consumes a network based file system and exposes files in the NFS as SCSI devices to virtual hosts.
During normal operations, the direction of replicate data flow goes from source side to target side. It is possible, however, for a user to reverse the direction of replicate data flow, in which case Site I starts to behave as a target backup site, and Site II starts to behave as a source production site. Such change of replication direction is referred to as a “failover”. A failover may be performed in the event of a disaster at the production site, or for other reasons. In some data architectures, Site I or Site II behaves as a production site for a portion of stored data, and behaves simultaneously as a backup site for another portion of stored data. In some data architectures, a portion of stored data is replicated to a backup site, and another portion is not.
The production site and the backup site may be remote from one another, or they may both be situated at a common site, local to one another. Local data protection has the advantage of minimizing data lag between target and source, and remote data protection has the advantage of being robust in the event that a disaster occurs at the source side.
The source and target sides communicate via a wide area network (WAN) 128, although other types of networks may be used.
Each side of system 100 includes three major components coupled via a storage area network (SAN); namely, (i) a storage system, (ii) a host computer, and (iii) a data protection appliance (DPA). Specifically with reference to FIG. 1, the source side SAN includes a source host computer 104, a source storage system 108, and a source DPA 112. Similarly, the target side SAN includes a target host computer 116, a target storage system 120, and a target DPA 124. As well, the protection agent (sometimes referred to as a splitter) may run on the host, on the storage, in the network, or at a hypervisor level; DPAs are optional, and DPA code may run on the storage array too, or the DPA 124 may run as a virtual machine.
Generally, a SAN includes one or more devices, referred to as “nodes”. A node in a SAN may be an “initiator” or a “target”, or both. An initiator node is a device that is able to initiate requests to one or more other devices; and a target node is a device that is able to reply to requests, such as SCSI commands, sent by an initiator node. A SAN may also include network switches, such as fiber channel switches. The communication links between each host computer and its corresponding storage system may be any appropriate medium suitable for data transfer, such as fiber communication channel links.
The host communicates with its corresponding storage system using small computer system interface (SCSI) commands.
System 100 includes source storage system 108 and target storage system 120. Each storage system includes physical storage units for storing data, such as disks or arrays of disks. Typically, storage systems 108 and 120 are target nodes. In order to enable initiators to send requests to storage system 108, storage system 108 exposes one or more logical units (LU) to which commands are issued. Thus, storage systems 108 and 120 are SAN entities that provide multiple logical units for access by multiple SAN initiators.
A logical unit is a logical entity provided by a storage system for accessing data stored in the storage system. The logical unit may be a physical logical unit or a virtual logical unit. A logical unit is identified by a unique logical unit number (LUN). Storage system 108 exposes a logical unit 136, designated as LU A, and storage system 120 exposes a logical unit 156, designated as LU B.
LU B is used for replicating LU A. As such, LU B is generated as a copy of LU A. In one embodiment, LU B is configured so that its size is identical to the size of LU A. Thus, for LU A, storage system 120 serves as a backup for source side storage system 108. Alternatively, as mentioned hereinabove, some logical units of storage system 120 may be used to back up logical units of storage system 108, and other logical units of storage system 120 may be used for other purposes. Moreover, there is symmetric replication whereby some logical units of storage system 108 are used for replicating logical units of storage system 120, and other logical units of storage system 120 are used for replicating other logical units of storage system 108.
System 100 includes a source side host computer 104 and a target side host computer 116. A host computer may be one computer, or a plurality of computers, or a network of distributed computers, each computer may include inter alia a conventional CPU, volatile and non-volatile memory, a data bus, an I/O interface, a display interface and a network interface. Generally a host computer runs at least one data processing application, such as a database application and an e-mail server.
Generally, an operating system of a host computer creates a host device for each logical unit exposed by a storage system in the host computer SAN. A host device is a logical entity in a host computer, through which a host computer may access a logical unit. Host computer 104 identifies LU A and generates a corresponding host device 140, designated as Device A, through which it can access LU A. Similarly, host computer 116 identifies LU B and generates a corresponding device 160, designated as Device B.
In the course of continuous operation, host computer 104 is a SAN initiator that issues I/O requests (write/read operations) through host device 140 to LU A using, for example, SCSI commands. Such requests are generally transmitted to LU A with an address that includes a specific device identifier, an offset within the device, and a data size. Offsets are generally aligned to 512 byte blocks. The average size of a write operation issued by host computer 104 may be, for example, 10 kilobytes (KB); i.e., 20 blocks. For an I/O rate of 50 megabytes (MB) per second, this corresponds to approximately 5,000 write transactions per second.
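The figures above follow from simple arithmetic, restated here for clarity (decimal megabytes are assumed for the I/O rate):

    BLOCK = 512                           # offsets align to 512-byte blocks
    avg_write = 10 * 1024                 # 10 KB average write
    print(avg_write // BLOCK)             # 20 blocks per write
    print((50 * 1000**2) // (10 * 1000))  # ~5,000 write transactions/second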
System 100 includes two data protection appliances, a source side DPA 112 and a target side DPA 124. A DPA performs various data protection services, such as data replication of a storage system, and journaling of I/O requests issued by a host computer to source side storage system data. As explained in detail herein, when acting as a target side DPA, a DPA may also enable roll back of data to an earlier point in time, and processing of rolled back data at the target site. Each DPA 112 and 124 is a computer that includes inter alia one or more conventional CPUs and internal memory.
As an additional safety precaution, each DPA is a cluster of such computers. Use of a cluster ensures that if a DPA computer is down, the DPA functionality switches over to another computer. The DPA computers within a DPA cluster communicate with one another using at least one communication link suitable for data transfer, for example, fiber channel or IP based protocols, or another transfer protocol. One computer from the DPA cluster serves as the DPA leader. The DPA cluster leader coordinates between the computers in the cluster, and may also perform other tasks that require coordination between the computers, such as load balancing.
In the architecture illustrated in FIG. 1, DPA 112 and DPA 124 are standalone devices integrated within a SAN. Alternatively, each of DPA 112 and DPA 124 may be integrated into storage system 108 and storage system 120, respectively, or integrated into host computer 104 and host computer 116, respectively. Both DPAs communicate with their respective host computers through communication lines such as fiber channels using, for example, SCSI commands or any other protocol.
DPAs 112 and 124 are configured to act as initiators in the SAN; i.e., they can issue I/O requests using, for example, SCSI commands, to access logical units on their respective storage systems. DPA 112 and DPA 124 are also configured with the necessary functionality to act as targets; i.e., to reply to I/O requests, such as SCSI commands, issued by other initiators in the SAN, including inter alia their respective host computers 104 and 116. Being target nodes, DPA 112 and DPA 124 may dynamically expose or remove one or more logical units.
As described hereinabove, Site I and Site II may each behave simultaneously as a production site and a backup site for different logical units. As such, DPA 112 and DPA 124 may each behave as a source DPA for some logical units, and as a target DPA for other logical units, at the same time.
Host computer 104 and host computer 116 include protection agents 144 and 164, respectively. Protection agents 144 and 164 intercept SCSI commands issued by their respective host computers, via host devices, to logical units that are accessible to the host computers. A data protection agent may act on an intercepted SCSI command issued to a logical unit in one of the following ways: send the SCSI command to its intended logical unit; redirect the SCSI command to another logical unit; split the SCSI command by sending it first to the respective DPA and, after the DPA returns an acknowledgement, sending the SCSI command to its intended logical unit; fail a SCSI command by returning an error return code; or delay a SCSI command by not returning an acknowledgement to the respective host computer.
A protection agent may handle different SCSI commands, differently, according to the type of the command. For example, a SCSI command inquiring about the size of a certain logical unit may be sent directly to that logical unit, while a SCSI write command may be split and sent first to a DPA associated with the agent. A protection agent may also change its behavior for handling SCSI commands, for example as a result of an instruction received from the DPA.
Specifically, the behavior of a protection agent for a certain host device generally corresponds to the behavior of its associated DPA with respect to the logical unit of the host device. When a DPA behaves as a source site DPA for a certain logical unit, then during the normal course of operation, the associated protection agent splits I/O requests issued by a host computer to the host device corresponding to that logical unit. Similarly, when a DPA behaves as a target device for a certain logical unit, then during the normal course of operation, the associated protection agent fails I/O requests issued by the host computer to the host device corresponding to that logical unit.
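As a rough illustration of the splitter behavior just described, the dispatch below maps an intercepted command to one of the agent's possible actions; the names and the reduction to three cases are simplifying assumptions made for brevity:

    def splitter_action(cmd_kind, role):
        # Dispatch an intercepted SCSI command according to the agent's role.
        if cmd_kind == "inquiry":
            return "send"   # e.g., size inquiries pass straight to the logical unit
        if role == "source":
            return "split"  # writes go to the DPA first, then to the logical unit
        return "fail"       # target-side agents fail host writes to the replica

    print(splitter_action("write", "source"), splitter_action("write", "target"))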
Communication between protection agents and their respective DPAs may use any protocol suitable for data transfer within a SAN, such as fiber channel, or SCSI over fiber channel. The communication may be direct, or via a logical unit exposed by the DPA. Protection agents communicate with their respective DPAs by sending SCSI commands over fiber channel.
Protection agents 144 and 164 are drivers located in their respective host computers 104 and 116. Alternatively, a protection agent may also be located in a fiber channel switch, or in any other device situated in a data path between a host computer and a storage system or on the storage system itself. In a virtualized environment, the protection agent may run at the hypervisor layer or in a virtual machine providing a virtualization layer.
What follows is a detailed description of system behavior under normal production mode, and under recovery mode.
In production mode DPA 112 acts as a source site DPA for LU A. Thus, protection agent 144 is configured to act as a source side protection agent; i.e., as a splitter for host device A. Specifically, protection agent 144 replicates SCSI I/O write requests. A replicated SCSI I/O write request is sent to DPA 112. After receiving an acknowledgement from DPA 112, protection agent 144 then sends the SCSI I/O write request to LU A. After receiving a second acknowledgement from storage system 108, host computer 104 acknowledges that the I/O command is complete.
When DPA 112 receives a replicated SCSI write request from data protection agent 144, DPA 112 transmits certain I/O information characterizing the write request, packaged as a “write transaction”, over WAN 128 to DPA 124 on the target side, for journaling and for incorporation within target storage system 120.
DPA 112 may send its write transactions to DPA 124 using a variety of modes of transmission, including inter alia (i) a synchronous mode, (ii) an asynchronous mode, and (iii) a snapshot mode. In synchronous mode, DPA 112 sends each write transaction to DPA 124, receives back an acknowledgement from DPA 124, and in turn sends an acknowledgement back to protection agent 144. Protection agent 144 waits until receipt of such acknowledgement before sending the SCSI write request to LU A.
In asynchronous mode, DPA 112 sends an acknowledgement to protection agent 144 upon receipt of each I/O request, before receiving an acknowledgement back from DPA 124.
In snapshot mode, DPA 112 receives several I/O requests and combines them into an aggregate “snapshot” of all write activity performed in the multiple I/O requests, and sends the snapshot to DPA 124, for journaling and for incorporation in target storage system 120. In snapshot mode DPA 112 also sends an acknowledgement to protection agent 144 upon receipt of each I/O request, before receiving an acknowledgement back from DPA 124.
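The difference between these modes is essentially the ordering of acknowledgements. The toy sketch below illustrates that ordering only; it is not the patent's implementation:

    def handle_write(mode, send_to_remote, ack_splitter):
        # Toy illustration of the acknowledgement ordering that separates the
        # synchronous mode from the asynchronous and snapshot modes.
        if mode == "synchronous":
            send_to_remote()  # wait for the remote DPA's acknowledgement first
            ack_splitter()    # only then release the protection agent
        else:
            ack_splitter()    # acknowledge the splitter immediately
            send_to_remote()  # ship the write (or aggregated snapshot) later

    log = []
    handle_write("synchronous", lambda: log.append("remote"), lambda: log.append("ack"))
    handle_write("asynchronous", lambda: log.append("remote"), lambda: log.append("ack"))
    print(log)  # ['remote', 'ack', 'ack', 'remote']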
For the sake of clarity, the ensuing discussion assumes that information is transmitted at write-by-write granularity.
While in production mode, DPA 124 receives replicated data of LU A from DPA 112, and performs journaling and writing to storage system 120. When applying write operations to storage system 120, DPA 124 acts as an initiator, and sends SCSI commands to LU B.
During a recovery mode, DPA 124 undoes the write transactions in the journal, so as to restore storage system 120 to the state it was at, at an earlier time.
As described hereinabove, LU B is used as a backup of LU A. As such, during normal production mode, while data written to LU A by host computer 104 is replicated from LU A to LU B, host computer 116 should not be sending I/O requests to LU B. To prevent such I/O requests from being sent, protection agent 164 acts as a target site protection agent for host Device B and fails I/O requests sent from host computer 116 to LU B through host Device B.
Target storage system 120 exposes a logical unit 176, referred to as a “journal LU”, for maintaining a history of write transactions made to LU B, referred to as a “journal”. Alternatively, journal LU 176 may be striped over several logical units, or may reside within all of or a portion of another logical unit. DPA 124 includes a journal processor 180 for managing the journal.
Journal processor 180 functions generally to manage the journal entries of LU B. Specifically, journal processor 180 enters write transactions received by DPA 124 from DPA 112 into the journal, by writing them into the journal LU, reads the undo information for the transaction from LU B, updates the journal entries in the journal LU with undo information, applies the journal transactions to LU B, and removes already-applied transactions from the journal.
Referring to FIG. 2, which is an illustration of a write transaction 200 for a journal. The journal may be used to provide an adaptor for access to storage 120 at the state it was in at any specified point in time. Since the journal contains the “undo” information necessary to roll back storage system 120, data that was stored in specific memory locations at the specified point in time may be obtained by undoing write transactions that occurred subsequent to such point in time.
Write transaction 200 generally includes the following fields: one or more identifiers; a time stamp, which is the date & time at which the transaction was received by source side DPA 112; a write size, which is the size of the data block; a location in journal LU 176 where the data is entered; a location in LU B where the data is to be written; and the data itself.
Write transaction 200 is transmitted from source side DPA 112 to target side DPA 124. As shown in FIG. 2, DPA 124 records the write transaction 200 in a journal that includes four streams. A first stream, referred to as a DO stream, includes new data for writing in LU B. A second stream, referred to as a DO METADATA stream, includes metadata for the write transaction, such as an identifier, a date & time, a write size, a beginning address in LU B for writing the new data in, and a pointer to the offset in the DO stream where the corresponding data is located. Similarly, a third stream, referred to as an UNDO stream, includes old data that was overwritten in LU B; and a fourth stream, referred to as an UNDO METADATA stream, includes an identifier, a date & time, a write size, a beginning address in LU B where data was to be overwritten, and a pointer to the offset in the UNDO stream where the corresponding old data is located.
In practice each of the four streams holds a plurality of write transaction data. As write transactions are received dynamically by target DPA 124, they are recorded at the end of the DO stream and the end of the DO METADATA stream, prior to committing the transaction. During transaction application, when the various write transactions are applied to LU B, prior to writing the new DO data into addresses within the storage system, the older data currently located in such addresses is recorded into the UNDO stream. In some examples, the metadata stream (e.g., UNDO METADATA stream or the DO METADATA stream) and the data stream (e.g., UNDO stream or DO stream) may be kept in a single stream each (i.e., one UNDO data and UNDO METADATA stream and one DO data and DO METADATA stream) by interleaving the metadata into the data stream.
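To make the four-stream structure concrete, the following is a minimal in-memory sketch; an actual journal resides on the journal LU, and the class and field names here are illustrative assumptions:

    from collections import deque

    class Journal:
        # Minimal in-memory model of the four streams described above.
        def __init__(self):
            self.do_data = deque()    # new data to be written to LU B
            self.do_meta = deque()    # (tx_id, timestamp, size, address)
            self.undo_data = deque()  # old data overwritten in LU B
            self.undo_meta = deque()  # (tx_id, timestamp, size, address)

        def record(self, tx_id, ts, address, data):
            # New transactions are appended to the DO streams on receipt.
            self.do_data.append(data)
            self.do_meta.append((tx_id, ts, len(data), address))

        def apply(self, volume):
            # Before overwriting, the old data is captured in the UNDO
            # streams, which is what allows rolling back in time.
            tx_id, ts, size, addr = self.do_meta.popleft()
            new = self.do_data.popleft()
            self.undo_data.append(bytes(volume[addr:addr + size]))
            self.undo_meta.append((tx_id, ts, size, addr))
            volume[addr:addr + size] = new

    vol = bytearray(16)
    j = Journal()
    j.record(1, "t0", 4, b"ABCD")
    j.apply(vol)
    print(vol[4:8], j.undo_meta[0])  # the applied data and its undo record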
AN EXAMPLE OF A REPLICATION SYSTEM USED WITH A SNAPSHOT SHIPPING MODE (FIGS. 3 AND 4)
Referring to FIG. 3, a data protection system 300 includes a data protection appliance (DPA) cluster 302 a and a storage array 306 a at a production site and a DPA cluster 302 b and a storage array 306 b at a replication site. The DPA clusters 302 a, 302 b are connected by a network 304 (e.g., a WAN, a Fibre Channel and so forth).
The storage array 306 a includes a primary storage volume 312 a, a journal 316 a, a first snapshot 322 a, a second snapshot 322 b, APIs 350 a and a delta marking stream 360. The storage array 306 b includes a replica storage volume 312 b, which replicates the primary storage volume 312 a, a journal 316 b, a list of snapshots 340, and APIs 350 b.
Referring to FIG. 4, an example of a process to send data from the production site to the replication site using a snapshot shipping mode is a process 400. Process 400 generates a first snapshot of a production storage array (402). For example, the DPA cluster 302 a generates a first snapshot 322 a of the primary storage volume 312 a. In one example, the DPA cluster 302 a generates the first snapshot 322 a using the API 350 a. At first time initialization, the first snapshot is sent to the replica site.
Process 400 generates a second snapshot of a production storage array (406). For example, the DPA cluster 302 a generates a second snapshot 322 b of the primary storage volume 312 a. In one example, the DPA cluster 302 a generates the second snapshot 322 b using the API 350 a.
Process 400 obtains differences between the first snapshot and the second snapshot (408). For example, the DPA cluster 302 a obtains the differences between the first snapshot 322 a and the second snapshot 322 b by using the API 350 a.
In one example, obtaining the differences between the first snapshot and the second snapshot may be performed by executing a vendor specific read difference command. The read difference command is a vendor specific command which returns locations of the difference and data in the locations. In one example, the result of executing the read difference command is a sequence of (location, length); (location, length) and so forth. In other examples, the read difference command returns a change bitmap.
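As an illustration of these two result forms, the sketch below converts a change bitmap into (location, length) extents; the block size and function name are assumptions for the example:

    BLOCK = 512

    def bitmap_to_extents(bitmap):
        # Collapse a per-block change bitmap into (location, length) extents.
        extents, start = [], None
        for i, changed in enumerate(bitmap):
            if changed and start is None:
                start = i
            elif not changed and start is not None:
                extents.append((start * BLOCK, (i - start) * BLOCK))
                start = None
        if start is not None:
            extents.append((start * BLOCK, (len(bitmap) - start) * BLOCK))
        return extents

    print(bitmap_to_extents([0, 1, 1, 0, 1]))  # [(512, 1024), (2048, 512)]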
Process 400 adds the differences to a delta marking stream (410). For example, the DPA cluster 302 a adds the differences obtained in processing block 408 to the delta marking stream 360 using the API 350 a.
Process 400 deletes the first snapshot (418). For example, the DPA cluster 302 a deletes the first snapshot 322 a.
Process 400 reads the data that changed in the second snapshot (422) and sends the data to the replication site (440). For example, the DPA cluster 302 a reads the data that changed in the second snapshot 322 b and sends the data to the DPA cluster 302 b.
Process 400 renames the second snapshot to the first snapshot (446) and performs processing block 406. For example, the DPA cluster 302 a renames the second snapshot to the first snapshot.
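A condensed simulation of this loop, with stand-in objects for the storage-array APIs (the class and method names are assumptions, not the actual APIs 350 a/350 b), might look like this:

    class FakeArray:
        # Stand-in for the production array and its snapshot API.
        def __init__(self, blocks):
            self.volume = dict(blocks)  # block offset -> data
            self.snaps = {}
            self.next_id = 0
            self.delta_marking_stream = []

        def take_snapshot(self):
            self.next_id += 1
            self.snaps[self.next_id] = dict(self.volume)
            return self.next_id

        def read_difference(self, a, b):
            # Stand-in for the vendor-specific read difference command.
            sa, sb = self.snaps[a], self.snaps[b]
            return {off: sb[off] for off in sb if sa.get(off) != sb[off]}

        def delete_snapshot(self, snap_id):
            del self.snaps[snap_id]

    def ship_once(array, replica, first):
        second = array.take_snapshot()                    # 406
        diffs = array.read_difference(first, second)      # 408
        array.delta_marking_stream.append(sorted(diffs))  # 410
        array.delete_snapshot(first)                      # 418
        replica.update(diffs)                             # 422, 440
        return second                                     # 446: rename

    array = FakeArray({0: b"A", 512: b"B"})
    replica = {}
    first = array.take_snapshot()
    replica.update(array.snaps[first])  # first-time initialization
    array.volume[512] = b"C"            # a production write after the snapshot
    first = ship_once(array, replica, first)
    print(replica[512], array.delta_marking_stream)  # b'C' [[512]]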
FIGS. 5 to 7 depict an example of processes used to test the integrity of a replicated storage. FIGS. 5 and 7 depict processes (e.g., a process 500 and a process 700, respectively) that occur at the production site, while FIG. 6 depicts a process (e.g., a process 600) that occurs at the replication site.
Theoretically, checking the integrity of replication in a snapshot shipping mode is relatively easy. Once a snapshot is shipped to the replica site, both the production and replica storage have the same snapshot, and the system can simply start scrubbing the devices by creating strong hash signatures (say, SHA-1 or SHA-2) for the whole disk, or for smaller parts of the disk if errors need to be found at a finer granularity. Once the comparison is complete, the system can decide whether the production and replica volumes are identical.
The issue with the above approach is that it requires the snapshots to remain in existence for a long time. In particular, the snapshots cannot be erased during the period of checking for integrity. If the replicated volume is large, the scrubbing can take a significant amount of time, and keeping the snapshots for that long requires a significant amount of storage space.
As will be described herein, the integrity check starts at the beginning of the volume and continues to the end of the volume, one portion of the volume at a time. Since the system is in the middle of snapshot shipping, some portions may change while a snapshot is being shipped. By performing processes 500, 600 and 700, the snapshots do not need to be kept for long periods of time, consuming significant amounts of storage space.
Referring to FIG. 5, process 500 generates a snapshot of the production volume to form a snapshot volume (502).
Process 500 goes to a first portion of the snapshot volume (508). The integrity check is performed a portion of the volume at a time. In one particular example, the snapshot volume may be a terabyte and a portion of the snapshot volume may be 10 gigabytes.
Process 500 reads a first area of the portion of the snapshot volume (514). In one example, a portion is 10 gigabytes and the first area is 10 megabytes.
Process 500 generates a coarse granularity hash signature (518). A coarse granularity hash signature is a hash signature of the data in an area.
Process 500 generates fine granularity hash signatures (522). The area is further broken down into subareas. A fine granularity hash signature is a hash signature of the data in a subarea. A fine granularity hash signature is generated for each subarea that makes up the area. In one example, a subarea may be the size of the smallest block used for snapshot shipping.
Process 500 determines if the area of memory intersects with a portion of the volume that is undergoing changes due to differences occurring in the production volume since the current snapshot volume was generated (528). While the integrity check is occurring, portions of the production volume may change. Process 500 determines if those differences occurred within the area of memory currently being processed.
If the area of memory does intersect with a portion of the volume that is undergoing changes, process 500 sends the fine granularity hash signatures to the replica site (530).
If the area of memory does not intersect with a portion of the volume that is undergoing changes, process 500 sends the coarse granularity hash signature to the replica site (536).
Process 500 determines if there are any more areas left to process in the current portion (538). If there are more areas left, process 500 reads the next area (540) and repeats processing block 518.
If there are no more areas left, process 500 determines if there are any more portions left in the snapshot volume (542). If there are more portions, process 500 goes to the next portion (546) and reads the first area (514). If there are no more portions, process 500 ends. In some embodiments, if the check of the portion is not complete, the system will not move on to ship the next snapshot (i.e., going from processing block 446 to processing block 406).
In other embodiments, the integrity check may not delay the snapshot shipping process. In this case when a second snapshot is generated (406), the integrity tool is configured to start reading from the second snapshot, but the differences are added to a special data structure of areas to be ignored.
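Before turning to the replica side, a compact sketch of the production-side decision in process 500 follows; the area and subarea sizes, the message format, and the function names are illustrative assumptions:

    import hashlib

    AREA = 10 * 1024 * 1024  # illustrative area size (10 MB)
    SUB = 64 * 1024          # illustrative subarea size (64 KB)

    def scrub_area(snapshot, offset, shipping_extents, send):
        # One pass of process 500's inner loop: hash one area and decide
        # whether to ship one coarse signature or many fine ones (528-536).
        area = snapshot[offset:offset + AREA]
        intersects = any(o < offset + AREA and o + length > offset
                         for o, length in shipping_extents)
        if intersects:
            # The area overlaps locations currently being shipped, so send
            # fine granularity signatures for each of its subareas (530).
            for s in range(0, len(area), SUB):
                send("fine", offset + s, hashlib.sha1(area[s:s + SUB]).hexdigest())
        else:
            # The area is stable, so a single coarse signature suffices (536).
            send("coarse", offset, hashlib.sha1(area).hexdigest())

    msgs = []
    snap = bytes(2 * AREA)
    changed = [(4096, 512)]  # one extent currently being shipped
    scrub_area(snap, 0, changed, lambda *m: msgs.append(m))     # intersects
    scrub_area(snap, AREA, changed, lambda *m: msgs.append(m))  # does not
    print(msgs[0][0], msgs[-1][0])  # fine coarse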
Referring to FIG. 6, process 600 goes to the first area of the portion of the replica volume (602). Process 600 determines, for the current area, whether a coarse granularity hash signature was received from the production site for the corresponding area in the snapshot volume (608). If a coarse granularity hash signature was received from the production site, process 600 determines a coarse granularity hash signature for the corresponding area in the replica volume (614).
Process 600 determines if the coarse granularity hash signature for an area from the snapshot volume at the production site is the same as the coarse granularity hash signature for the corresponding area in the replica volume (618). If the coarse granularity hash signature for an area from the snapshot volume at the production site is not the same as the coarse granularity hash signature for the corresponding area in the replica volume, process 600 generates and sends fine granularity hash signatures of the subareas of the area in the replica volume to the production site (622).
If a coarse granularity hash signature was not received from the production site, process 600 determines if fine granularity hash signatures for the corresponding area in the replica volume were received (626). If fine granularity hash signatures were received from the production site, process 600 determines fine granularity hash signatures for the corresponding subareas in the replica volume (632).
Process 600 determines if the fine granularity hash signatures for an area from the snapshot volume at the production site are the same as the fine granularity hash signatures for the corresponding subareas in the replica volume (636). If the fine granularity hash signatures for an area from the snapshot volume at the production site are not the same as the fine granularity hash signatures for the corresponding subareas in the replica volume, process 600 sends the locations of those differences to the production site (642).
Process 600 determines if there are any more areas left (628) and, if there are more areas left, goes to the next area (630) and repeats processing block 608.
If there are no more areas left, process 600 determines if there are any portions left (632); and if there are portions left, process 600 goes to the next portion (634) and repeats processing block 602. If there are no portions left, process 600 ends.
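The replica-side handling in process 600 can be sketched in the same spirit; the message shapes and the subarea size (which must match the production side) are assumptions:

    import hashlib

    SUB = 64 * 1024  # illustrative subarea size; must match the production side

    def check_area(replica_area, received):
        # Replica-side handling of one area in process 600.
        kind, payload = received
        if kind == "coarse":
            if hashlib.sha1(replica_area).hexdigest() == payload:
                return ("ok", None)  # 618: signatures match
            # 622: mismatch -- answer with fine signatures so the production
            # site can narrow the suspected corruption down to subareas.
            return ("fine-signatures",
                    [hashlib.sha1(replica_area[i:i + SUB]).hexdigest()
                     for i in range(0, len(replica_area), SUB)])
        # 632-642: fine signatures received -- report the offsets that differ.
        bad = [i * SUB for i, sig in enumerate(payload)
               if hashlib.sha1(replica_area[i * SUB:(i + 1) * SUB]).hexdigest() != sig]
        return ("bad-locations", bad)

    area = bytes(2 * SUB)
    good = hashlib.sha1(area).hexdigest()
    print(check_area(area, ("coarse", good)))           # ('ok', None)
    print(check_area(area, ("fine", [good, good]))[0])  # 'bad-locations'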
Referring to FIG. 7, process 700 determines if fine granularity hash signatures were received from the replica site (702), for example, the fine granularity hash signatures sent by the replica site in processing block 622 (FIG. 6).
If there were fine granularity hash signatures received from the replica site, process 700 determines if there are any differences between the fine granularity hash signatures from the replica site and the fine granularity hash signatures at the production site for the corresponding subareas (704). If there are differences, process 700 marks the areas as suspected dirty in a dirty list (710).
Process 700 determines if bad locations were received from the replica site (714). For example, those locations sent by the replica site in processing block 642 (FIG. 6). If there are bad locations received, process 700 marks the locations in the dirty list (716).
Process 700 determines if the system is configured to stop or pause shipping new snapshots of the production volume to the replica site until the integrity testing of a portion is complete (722) and if the system is configured as such, process 700 reports the dirty list as integrity errors (730).
Otherwise, process 700 removes entries from the dirty list that are being changed in the production volume (734). If process 700 determines that the dirty list is not empty (738), process 700 reports the dirty list as a list of integrity errors (742).
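The disposition of the dirty list in process 700 reduces to the small decision sketched below; representing locations as a set is an assumption of the example:

    def reconcile_dirty(dirty, in_flight, shipping_paused):
        # Process-700-style disposition of the suspected-dirty list (a sketch).
        if shipping_paused:
            # 722-730: shipping was paused for the check, so every remaining
            # entry is a genuine integrity error.
            return sorted(dirty)
        # 734-742: shipping continued, so drop entries that are legitimately
        # changing in the production volume; whatever remains is reported.
        return sorted(dirty - in_flight)

    print(reconcile_dirty({1024, 4096}, {4096}, shipping_paused=False))  # [1024]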
Referring to FIGS. 8 and 9, as described above, the portions chosen for processing were not chosen in any particular order. However, if portions are chosen where there is a greater chance of finding errors, then those errors will be found faster. Processes 800 and 900 are examples of processes to determine which portions of the volume to check first.
Referring to FIG. 8, process 800 checks for regions in a volume with higher recent activity (810). For example, locations which were more write active in the last few days, or since the last check, are checked first, as these locations are probably more important, and if there was corruption due to a replication error it is more likely to have happened within these blocks. In one particular example, storage tiering statistics of the storage (e.g., EMC® fully automated storage tiering (FAST®)) are used to find the most active areas.
Referring to FIG. 9, process 900 checks regions which are not write active (902). For example, storage statistics are used to determine locations which are not write active. A defect may be caused if the spindles of a device themselves are not functioning well (i.e., some sectors are corrupted). If the sectors are corrupted and active in the system, these errors would be discovered. However, if an area of storage is not accessed, or the sectors are only read from, then the sectors may not be verified at the replica site, since the system does not read from them.
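Both orderings can be driven by the same per-region write statistics; in the sketch below the statistics source and names are assumptions (tiering statistics such as FAST® are one possible source, per FIG. 8's example):

    def scrub_order(regions, write_stats, hot_first=True):
        # Order regions for integrity checking using per-region write counts.
        # hot_first=True follows FIG. 8 (recently written regions first);
        # hot_first=False follows FIG. 9 (cold, rarely-exercised regions first).
        return sorted(regions, key=lambda r: write_stats.get(r, 0),
                      reverse=hot_first)

    stats = {"r0": 120, "r1": 0, "r2": 37}
    print(scrub_order(["r0", "r1", "r2"], stats))         # ['r0', 'r2', 'r1']
    print(scrub_order(["r0", "r1", "r2"], stats, False))  # ['r1', 'r2', 'r0']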
Referring to FIG. 10, in one example, a computer 1000 includes a processor 1002, a volatile memory 1004, a non-volatile memory 1006 (e.g., hard disk) and a user interface (UI) 1008 (e.g., a graphical user interface, a mouse, a keyboard, a display, touch screen and so forth). The non-volatile memory 1006 stores computer instructions 1012, an operating system 1016 and data 1018. In one example, the computer instructions 1012 are executed by the processor 1002 out of volatile memory 1004 to perform all or part of the processes described herein (e.g., processes 500, 600, 700, 800 and 900).
The processes described herein (e.g., processes 500, 600, 700, 800 and 900) are not limited to use with the hardware and software of FIG. 10; they may find applicability in any computing or processing environment and with any type of machine or set of machines that is capable of running a computer program. The processes described herein may be implemented in hardware, software, or a combination of the two. The processes described herein may be implemented in computer programs executed on programmable computers/machines that each includes a processor, a non-transitory machine-readable medium or other article of manufacture that is readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and one or more output devices. Program code may be applied to data entered using an input device to perform any of the processes described herein and to generate output information.
The system may be implemented, at least in part, via a computer program product, (e.g., in a non-transitory machine-readable storage medium such as, for example, a non-transitory computer-readable medium), for execution by, or to control the operation of, data processing apparatus (e.g., a programmable processor, a computer, or multiple computers)). Each such program may be implemented in a high level procedural or object-oriented programming language to communicate with a computer system. However, the programs may be implemented in assembly or machine language. The language may be a compiled or an interpreted language and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network. A computer program may be stored on a non-transitory machine-readable medium that is readable by a general or special purpose programmable computer for configuring and operating the computer when the non-transitory machine-readable medium is read by the computer to perform the processes described herein. For example, the processes described herein may also be implemented as a non-transitory machine-readable storage medium, configured with a computer program, where upon execution, instructions in the computer program cause the computer to operate in accordance with the processes. A non-transitory machine-readable medium may include but is not limited to a hard drive, compact disc, flash memory, non-volatile memory, volatile memory, magnetic diskette and so forth but does not include a transitory signal per se.
The processes described herein are not limited to the specific examples described. For example, the processes 500, 600, 700, 800 and 900 are not limited to the specific processing order of FIGS. 5 to 9, respectively. Rather, any of the processing blocks of FIGS. 5 to 9 may be re-ordered, combined or removed, performed in parallel or in serial, as necessary, to achieve the results set forth above.
In other examples, one of ordinary skill in the art would recognize that increasing and decreasing reference counts may be done in the opposite order from that described. For example, the reference count can be decreased and then increased. One of ordinary skill in the art would also recognize that a value is changed from a first state to a second state when the signature data is needed to avoid erasure of the data, and that when the data is no longer needed the value returns to the first state.
The processing blocks (for example, in the processes 500, 600, 700, 800 and 900) associated with implementing the system may be performed by one or more programmable processors executing one or more computer programs to perform the functions of the system. All or part of the system may be implemented as special purpose logic circuitry (e.g., an FPGA (field-programmable gate array) and/or an ASIC (application-specific integrated circuit)). All or part of the system may be implemented using electronic hardware circuitry that includes electronic devices such as, for example, at least one of a processor, a memory, a programmable logic device or a logic gate.
Elements of different embodiments described herein may be combined to form other embodiments not specifically set forth above. Other embodiments not specifically described herein are also within the scope of the following claims.

Claims (20)

What is claimed is:
1. A method comprising:
marking a list of locations currently being shipped to a replica site;
generating coarse granularity hash signatures of data for each area of memory in a snapshot volume, the snapshot volume being a snapshot of a production volume;
generating fine granularity hash signatures of data for each subarea of memory in the snapshot volume;
sending coarse granularity hash signatures to the replica site for each area not being changed in the production volume; and
sending fine granularity hash signatures to the replica site for each area being changed in the production volume.
2. The method of claim 1, further comprising:
receiving from the replica site fine granularity hash signatures of data for subareas of memory in a replica volume; and
comparing the fine granularity hash signatures from the replica site with the fine granularity hash signatures of the production site for corresponding subareas of memory.
3. The method of claim 2, further comprising:
comparing the fine granularity hash signatures received from the production site with the fine granularity hash signatures of the replica volume for corresponding subareas of memory; and
sending from the replica site the potential locations of corruption in the replica volume for the fine granularity hash signatures received from the production site that do not match with the fine granularity hash signatures of the replica volume.
4. The method of claim 3, further comprising:
comparing the coarse granularity hash signatures received from the production site with the coarse granularity hash signatures of the replica volume for corresponding areas of memory; and
sending, from the replica site to the production site, fine granularity hash signatures for those subareas within an area of memory where the coarse granularity hash signature received from the production site does not match the coarse granularity hash signature of the corresponding area in the replica volume.
5. The method of claim 1, wherein a subarea is equal to a smallest block used in snapshot shipping.
6. The method of claim 1, further comprising checking first at least one of areas within the replica volume with a higher recent activity during an integrity test or areas within the replica volume which are not write active during an integrity test.
7. The method of claim 1, further comprising, if a new snapshot of the production volume is configured to be shipped to the replica site, adding differences between fine granularity hash signatures from the replica site and the fine granularity hash signatures at the production site to a suspected difference list.
8. The method of claim 7, further comprising:
removing entries from the suspected difference list for locations being updated in the production volume; and
reporting the suspected list as errors if the suspected list is not empty.
9. The method of claim 1, further comprising pausing shipping a snapshot of the production volume from the production site to the replica site if an integrity check of a portion of a volume is not complete.
10. The method of claim 9, further comprising reporting as errors differences between fine granularity hash signatures from the replica site and the fine granularity hash signatures at the production site.
11. An apparatus, comprising:
electronic hardware circuitry configured to:
mark a list of locations currently being shipped to a replica site;
generate coarse granularity hash signatures of data for each area of memory in a snapshot volume, the snapshot volume being a snapshot of a production volume;
generate fine granularity hash signatures of data for each subarea of memory in the snapshot volume;
send coarse granularity hash signatures to the replica site for each area not being changed in the production volume; and
send fine granularity hash signatures to the replica site for each area being changed in the production volume;
wherein the circuitry comprises at least one of a processor, a memory, a programmable logic device or a logic gate.
12. The apparatus of claim 11, further comprising circuitry configured to:
receive from the replica site fine granularity hash signatures of data for subareas of memory in a replica volume;
compare the fine granularity hash signatures from the replica site with the fine granularity hash signatures of the production site for corresponding subareas of memory;
compare the fine granularity hash signatures received from the production site with the fine granularity hash signatures of the replica volume for corresponding subareas of memory;
send from the replica site the potential locations of corruption in the replica volume for the fine granularity hash signatures received from the production site that do not match with the fine granularity hash signatures of the replica volume;
compare the coarse granularity hash signatures received from the production site with the coarse granularity hash signatures of the replica volume for corresponding areas of memory; and
send, from the replica site to the production site, fine granularity hash signatures for those subareas within an area of memory where the coarse granularity hash signature received from the production site does not match the coarse granularity hash signature of the corresponding area in the replica volume.
13. The apparatus of claim 11, further comprising circuitry configured to check first at least one of areas within the replica volume with a higher recent activity during an integrity test or areas within the replica volume which are not write active during an integrity test.
14. The apparatus of claim 11, further comprising circuitry configured to, if a new snapshot of the production volume is configured to be shipped to the replica site:
add differences between fine granularity hash signatures from the replica site and the fine granularity hash signatures at the production site to a suspected difference list;
remove entries from the suspected difference list for locations being updated in the production volume; and
report the suspected list as errors if the suspected list is not empty.
15. The apparatus of claim 11, further comprising circuitry configured to:
pause shipping a snapshot of the production volume from the production site to the replica site if an integrity check of a portion of a volume is not complete; and
report as errors differences between fine granularity hash signatures from the replica site and the fine granularity hash signatures at the production site.
16. An article comprising:
a non-transitory computer-readable medium that stores computer-executable instructions, the instructions causing a machine to:
mark a list of locations currently being shipped to a replica site;
generate coarse granularity hash signatures of data for each area of memory in a snapshot volume, the snapshot volume being a snapshot of a production volume;
generate fine granularity hash signatures of data for each subarea of memory in the snapshot volume;
send coarse granularity hash signatures to the replica site for each area not being changed in the production volume; and
send fine granularity hash signatures to the replica site for each area being changed in the production volume.
17. The article of claim 16, further comprising instructions causing the machine to:
receive from the replica site fine granularity hash signatures of data for subareas of memory in a replica volume;
compare the fine granularity hash signatures from the replica site with the fine granularity hash signatures of the production site for corresponding subareas of memory;
compare the fine granularity hash signatures received from the production site with the fine granularity hash signatures of the replica volume for corresponding subareas of memory;
send from the replica site the potential locations of corruption in the replica volume for the fine granularity hash signatures received from the production site that do not match with the fine granularity hash signatures of the replica volume;
compare the coarse granularity hash signatures received from the production site with the coarse granularity hash signatures of the replica volume for corresponding areas of memory; and
send, from the replica site to the production site, fine granularity hash signatures for those subareas within an area of memory where the coarse granularity hash signature received from the production site does not match the coarse granularity hash signature of the corresponding area in the replica volume.
18. The article of claim 16, further comprising instructions causing the machine to check first at least one of areas within the replica volume with a higher recent activity during an integrity test or areas within the replica volume which are not write active during an integrity test.
19. The article of claim 16, further comprising instructions causing the machine to, if a new snapshot of the production volume is configured to be shipped to the replica site:
add differences between fine granularity hash signatures from the replica site and the fine granularity hash signatures at the production site to a suspected difference list;
remove entries from the suspected difference list for locations being updated in the production volume; and
report the suspected list as errors if the suspected list is not empty.
20. The article of claim 16, further comprising instructions causing the machine to:
pause shipping a snapshot of the production volume from the production site to the replica site if an integrity check of a portion of a volume is not complete; and
report as errors differences between fine granularity hash signatures from the replica site and the fine granularity hash signatures at the production site.
US10747606B1 (en) 2016-12-21 2020-08-18 EMC IP Holding Company LLC Risk based analysis of adverse event impact on system availability
US10776211B1 (en) 2016-12-27 2020-09-15 EMC IP Holding Company LLC Methods, systems, and apparatuses to update point in time journal using map reduce to create a highly parallel update
US10853181B1 (en) 2015-06-29 2020-12-01 EMC IP Holding Company LLC Backing up volumes using fragment files
US11093158B2 (en) 2019-01-29 2021-08-17 EMC IP Holding Company LLC Sub-lun non-deduplicated tier in a CAS storage to reduce mapping information and improve memory efficiency
US11455277B2 (en) 2019-03-27 2022-09-27 Nutanix Inc. Verifying snapshot integrity

Citations (147)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5170480A (en) 1989-09-25 1992-12-08 International Business Machines Corporation Concurrently applying redo records to backup database in a log sequence using single queue server per queue at a time
US5249053A (en) 1991-02-05 1993-09-28 Dycam Inc. Filmless digital camera with selective image compression
US5388254A (en) 1992-03-27 1995-02-07 International Business Machines Corporation Method and means for limiting duration of input/output (I/O) requests
US5499367A (en) 1991-11-15 1996-03-12 Oracle Corporation System for database integrity with multiple logs assigned to client subsets
US5526397A (en) 1992-04-20 1996-06-11 Hughes Electronics Switching transcoder
US5864837A (en) 1996-06-12 1999-01-26 Unisys Corporation Methods and apparatus for efficient caching in a distributed environment
US5879459A (en) 1997-08-29 1999-03-09 Genus, Inc. Vertically-stacked process reactor and cluster tool system for atomic layer deposition
US5990899A (en) 1995-10-27 1999-11-23 Microsoft Corporation Method for compressing journal streams
US5990810A (en) * 1995-02-17 1999-11-23 Williams; Ross Neil Method for partitioning a block of data into subblocks and for storing and communicating such subblocks
US6042652A (en) 1999-05-01 2000-03-28 P.K. Ltd Atomic layer deposition apparatus for depositing atomic layer on multiple substrates
US6065018A (en) 1998-03-04 2000-05-16 International Business Machines Corporation Synchronizing recovery log having time stamp to a remote site for disaster recovery of a primary database having related hierarchical and relational databases
US6143659A (en) 1997-11-18 2000-11-07 Samsung Electronics, Co., Ltd. Method for manufacturing aluminum metal interconnection layer by atomic layer deposition method
US6148340A (en) 1998-04-30 2000-11-14 International Business Machines Corporation Method and system for differencing container files
US6174377B1 (en) 1997-03-03 2001-01-16 Genus, Inc. Processing chamber for atomic layer deposition processes
US6174809B1 (en) 1997-12-31 2001-01-16 Samsung Electronics, Co., Ltd. Method for forming metal layer using atomic layer deposition
WO2000045581A3 (en) 1999-01-29 2001-01-25 Data Race Inc Modem transfer mechanism which prioritized data transfers
US6203613B1 (en) 1999-10-19 2001-03-20 International Business Machines Corporation Atomic layer deposition with nitrate containing precursors
US6260125B1 (en) 1998-12-09 2001-07-10 Ncr Corporation Asynchronous write queues, reconstruction and check-pointing in disk-mirroring applications
US6272534B1 (en) 1998-03-04 2001-08-07 Storage Technology Corporation Method and system for efficiently storing web pages for quick downloading at a remote device
US6270572B1 (en) 1998-08-07 2001-08-07 Samsung Electronics Co., Ltd. Method for manufacturing thin film using atomic layer deposition
US6287965B1 (en) 1997-07-28 2001-09-11 Samsung Electronics Co., Ltd. Method of forming metal layer using atomic layer deposition and semiconductor device having the metal layer as barrier metal layer or upper or lower electrode of capacitor
EP1154356A1 (en) 2000-05-09 2001-11-14 Alcatel Caching of files during loading from a distributed file system
US20020129168A1 (en) 2001-03-12 2002-09-12 Kabushiki Kaisha Toshiba Data transfer scheme using caching and differential compression techniques for reducing network load
US6467023B1 (en) 1999-03-23 2002-10-15 Lsi Logic Corporation Method for logical unit creation with immediate availability in a raid storage environment
US20030048842A1 (en) 2001-09-07 2003-03-13 Alcatel Method of compressing animation images
US20030061537A1 (en) 2001-07-16 2003-03-27 Cha Sang K. Parallelized redo-only logging and recovery for highly available main memory database systems
US6574657B1 (en) 1999-05-03 2003-06-03 Symantec Corporation Methods and apparatuses for file synchronization and updating using a signature list
US20030110278A1 (en) 2001-12-11 2003-06-12 Eric Anderson Technique for reducing network bandwidth for delivery of dynamic and mixed content
US20030145317A1 (en) 1998-09-21 2003-07-31 Microsoft Corporation On demand patching of applications via software implementation installer mechanism
US20030196147A1 (en) 1998-11-12 2003-10-16 Hitachi, Ltd. Storage apparatus and control method thereof
US20040030852A1 (en) * 2002-03-18 2004-02-12 Coombs David Lawrence System and method for data backup
US6804676B1 (en) 1999-08-31 2004-10-12 International Business Machines Corporation System and method in a data processing system for generating compressed affinity records from data records
US20040205092A1 (en) 2003-03-27 2004-10-14 Alan Longo Data storage and caching architecture
US20040250032A1 (en) 2003-06-06 2004-12-09 Minwen Ji State machine and system for data redundancy
US20040254964A1 (en) 2003-06-12 2004-12-16 Shoji Kodama Data replication with rollback
US20050015663A1 (en) 2003-06-25 2005-01-20 Philippe Armangau Data recovery with internet protocol replication with or without full resync
US20050028022A1 (en) 2003-06-26 2005-02-03 Hitachi, Ltd. Method and apparatus for data recovery system using storage based journaling
US20050049924A1 (en) 2003-08-27 2005-03-03 Debettencourt Jason Techniques for use with application monitoring to obtain transaction data
US20050172092A1 (en) 2004-02-04 2005-08-04 Lam Wai T. Method and system for storing data
US6947981B2 (en) 2002-03-26 2005-09-20 Hewlett-Packard Development Company, L.P. Flexible data replication mechanism
US20050273655A1 (en) 2004-05-21 2005-12-08 Bea Systems, Inc. Systems and methods for automatic retry of transactions
US6981151B1 (en) * 1999-04-08 2005-12-27 Battelle Energy Alliance, Llc Digital data storage systems, computers, and data verification methods
US20060031647A1 (en) 2004-08-04 2006-02-09 Hitachi, Ltd. Storage system and data processing system
US20060047996A1 (en) 2004-08-30 2006-03-02 Mendocino Software, Inc. Systems and methods for optimizing restoration of stored data
US20060064416A1 (en) 2004-09-17 2006-03-23 Sim-Tang Siew Y Method and system for data reduction
US7043610B2 (en) 2002-08-19 2006-05-09 Aristos Logic Corporation System and method for maintaining cache coherency without external controller intervention
US20060107007A1 (en) 2004-04-28 2006-05-18 Yusuke Hirakawa Data processing system
US7051126B1 (en) 2003-08-19 2006-05-23 F5 Networks, Inc. Hardware accelerated compression
US20060117211A1 (en) 2002-01-16 2006-06-01 Hitachi, Ltd. Fail-over storage system
US7076620B2 (en) 2003-02-27 2006-07-11 Hitachi, Ltd. Data processing system including storage systems
US20060161810A1 (en) 2004-08-25 2006-07-20 Bao Bill Q Remote replication
US20060179343A1 (en) 2005-02-08 2006-08-10 Hitachi, Ltd. Method and apparatus for replicating volumes between heterogeneous storage systems
US20060195670A1 (en) 2003-08-11 2006-08-31 Takashige Iwamura Multi-site remote-copy system
US7111197B2 (en) 2001-09-21 2006-09-19 Polyserve, Inc. System and method for journal recovery for multinode environments
US20060212462A1 (en) 2002-04-25 2006-09-21 Kashya Israel Ltd. Apparatus for continuous compression of large volumes of data
US7120768B2 (en) 2002-03-22 2006-10-10 Hitachi, Ltd. Snapshot acquisition method, storage system and disk apparatus
US7130975B2 (en) 2003-06-27 2006-10-31 Hitachi, Ltd. Data processing system
US7139927B2 (en) 2002-03-21 2006-11-21 Electronics And Telecommunications Research Institute Journaling and recovery method of shared disk file system
US20070055833A1 (en) 2005-09-06 2007-03-08 Dot Hill Systems Corp. Snapshot restore method and apparatus
US7203741B2 (en) 2000-10-12 2007-04-10 Peerapp Ltd. Method and system for accelerating receipt of data in a client-to-client network
US7222136B1 (en) 2002-05-23 2007-05-22 Oracle International Corporation Communicating data dictionary information of database objects through a redo stream
US20070162513A1 (en) 2005-12-21 2007-07-12 Michael Lewin Methods and apparatus for point in time data access and recovery
US20070180304A1 (en) 2003-09-16 2007-08-02 Hitachi, Ltd. Mapping apparatus for backup and restoration of multi-generation recovered snapshots
US20070198602A1 (en) 2005-12-19 2007-08-23 David Ngo Systems and methods for resynchronizing information
US20070220311A1 (en) 2006-02-17 2007-09-20 Michael Lewin Cross tagging of data for consistent recovery
US7296008B2 (en) 2004-08-24 2007-11-13 Symantec Operating Corporation Generation and use of a time map for accessing a prior image of a storage device
US20070266053A1 (en) 2005-12-22 2007-11-15 Shlomo Ahal Methods and apparatus for multiple point in time data access
US7328373B2 (en) 2004-01-30 2008-02-05 Hitachi, Ltd. Data processing system
US20080066054A1 (en) * 2003-02-28 2008-03-13 Bea Systems, Inc. System and method for determining when an ejb compiler needs to be executed
US7353335B2 (en) 2006-02-03 2008-04-01 Hitachi, Ltd. Storage control method for database recovery in logless mode
US20080082592A1 (en) 2006-09-28 2008-04-03 Shlomo Ahal Methods and apparatus for optimal journaling for continuous data replication
US20080082770A1 (en) 2006-09-28 2008-04-03 Shlomo Ahal Methods and apparatus for optimal journaling for continuous data replication
US20080082591A1 (en) 2006-09-28 2008-04-03 Shlomo Ahal Methods and apparatus for managing data flow in a continuous data replication system having journaling
US7360113B2 (en) 2004-08-30 2008-04-15 Mendocino Software, Inc. Protocol for communicating data block copies in an error recovery environment
US20080243769A1 (en) * 2007-03-30 2008-10-02 Symantec Corporation System and method for exporting data directly from deduplication storage to non-deduplication storage
US7519628B1 (en) 2004-06-01 2009-04-14 Network Appliance, Inc. Technique for accelerating log replay with partial cache flush
US7519625B2 (en) 2005-09-27 2009-04-14 Hitachi, Ltd. Snapshot management apparatus and method, and storage system
US7546485B2 (en) 2006-08-15 2009-06-09 Hewlett-Packard Development Company, L.P. Method and system for efficient journal-based resynchronization
US7606940B2 (en) 2003-06-23 2009-10-20 Hitachi, Ltd. Remote copy system
US20090307430A1 (en) * 2008-06-06 2009-12-10 Vmware, Inc. Sharing and persisting code caches
US7685171B1 (en) * 2006-09-22 2010-03-23 Emc Corporation Techniques for performing a restoration operation using device scanning
US7719443B1 (en) 2008-06-27 2010-05-18 Emc Corporation Compressing data in a continuous data protection environment
US7757057B2 (en) 2006-11-27 2010-07-13 Lsi Corporation Optimized rollback of copy-on-write snapshot volumes
US7797358B1 (en) 2007-12-26 2010-09-14 Emc (Benelux) B.V., S.A.R.L. Methods and apparatus for continuous data protection system having journal compression
US7840662B1 (en) 2008-03-28 2010-11-23 EMC(Benelux) B.V., S.A.R.L. Dynamically managing a network cluster
US7840536B1 (en) 2007-12-26 2010-11-23 Emc (Benelux) B.V., S.A.R.L. Methods and apparatus for dynamic journal expansion
US7844856B1 (en) 2007-12-26 2010-11-30 Emc (Benelux) B.V., S.A.R.L. Methods and apparatus for bottleneck processing in a continuous data protection system having journaling
US7860836B1 (en) 2007-12-26 2010-12-28 Emc (Benelux) B.V., S.A.R.L. Method and apparatus to recover data in a continuous data protection environment using a journal
US7882286B1 (en) 2008-09-26 2011-02-01 EMC (Benelux)B.V., S.A.R.L. Synchronizing volumes for replication
US7934262B1 (en) 2007-12-26 2011-04-26 Emc (Benelux) B.V., S.A.R.L. Methods and apparatus for virus detection using journal data
US7958372B1 (en) 2007-12-26 2011-06-07 Emc (Benelux) B.V., S.A.R.L. Method and apparatus to convert a logical unit from a first encryption state to a second encryption state using a journal in a continuous data protection environment
US8041940B1 (en) 2007-12-26 2011-10-18 Emc Corporation Offloading encryption processing in a storage area network
US8060714B1 (en) 2008-09-26 2011-11-15 Emc (Benelux) B.V., S.A.R.L. Initializing volumes in a replication system
US8060713B1 (en) 2005-12-21 2011-11-15 Emc (Benelux) B.V., S.A.R.L. Consolidating snapshots in a continuous data protection system using journaling
US8103937B1 (en) 2010-03-31 2012-01-24 Emc Corporation Cas command network replication
US8108634B1 (en) 2008-06-27 2012-01-31 Emc B.V., S.A.R.L. Replicating a thin logical unit
US20120124307A1 (en) * 2010-11-16 2012-05-17 Actifio, Inc. System and method for performing a plurality of prescribed data management functions in a manner that reduces redundant access operations to primary storage
US20120166448A1 (en) * 2010-12-28 2012-06-28 Microsoft Corporation Adaptive Index for Data Deduplication
US8214612B1 (en) 2009-09-28 2012-07-03 Emc Corporation Ensuring consistency of replicated volumes
US8271447B1 (en) 2010-06-18 2012-09-18 Emc International Company Mirroring metadata in a continuous data protection environment
US8271441B1 (en) 2009-12-26 2012-09-18 Emc Corporation Virtualized CG
US8332687B1 (en) 2010-06-23 2012-12-11 Emc Corporation Splitter used in a continuous data protection environment
US8335761B1 (en) 2010-12-02 2012-12-18 Emc International Company Replicating in a multi-copy environment
US8335771B1 (en) 2010-09-29 2012-12-18 Emc Corporation Storage array snapshots for logged access replication in a continuous data protection system
US8341115B1 (en) 2009-12-26 2012-12-25 Emc Corporation Dynamically switching between synchronous and asynchronous replication
US8370648B1 (en) 2010-03-15 2013-02-05 Emc International Company Writing and reading encrypted data using time-based encryption keys
US8380885B1 (en) 2011-06-30 2013-02-19 Emc Corporation Handling abort commands in replication
US8392680B1 (en) 2010-03-30 2013-03-05 Emc International Company Accessing a volume in a distributed environment
US8429362B1 (en) 2011-03-31 2013-04-23 Emc Corporation Journal based replication with a virtual service layer
US8433869B1 (en) 2010-09-27 2013-04-30 Emc International Company Virtualized consistency group using an enhanced splitter
US8478955B1 (en) 2010-09-27 2013-07-02 Emc International Company Virtualized consistency group using more than one data protection appliance
US8495304B1 (en) 2010-12-23 2013-07-23 Emc Corporation Multi source wire deduplication
US8510279B1 (en) 2012-03-15 2013-08-13 Emc International Company Using read signature command in file system to backup data
US8521694B1 (en) 2011-06-24 2013-08-27 Emc Corporation Leveraging array snapshots for immediate continuous data protection
US8521691B1 (en) 2011-06-24 2013-08-27 Emc Corporation Seamless migration between replication technologies
US8543609B1 (en) 2011-09-29 2013-09-24 Emc Corporation Snapshots in deduplication
US8583885B1 (en) 2009-12-01 2013-11-12 Emc Corporation Energy efficient sync and async replication
US8600945B1 (en) 2012-03-29 2013-12-03 Emc Corporation Continuous data replication
US8601085B1 (en) 2011-03-28 2013-12-03 Emc Corporation Techniques for preferred path determination
US8627012B1 (en) 2011-12-30 2014-01-07 Emc Corporation System and method for improving cache performance
US8683592B1 (en) 2011-12-30 2014-03-25 Emc Corporation Associating network and storage activities for forensic analysis
US8694700B1 (en) 2010-09-29 2014-04-08 Emc Corporation Using I/O track information for continuous push with splitter for storage device
US8706700B1 (en) 2010-12-23 2014-04-22 Emc Corporation Creating consistent snapshots across several storage arrays or file systems
US8712962B1 (en) 2011-12-01 2014-04-29 Emc Corporation Snapshots in de-duplication
US8719497B1 (en) 2011-09-21 2014-05-06 Emc Corporation Using device spoofing to improve recovery time in a continuous data protection environment
US8725691B1 (en) 2010-12-16 2014-05-13 Emc Corporation Dynamic LUN resizing in a replication environment
US8726066B1 (en) 2011-03-31 2014-05-13 Emc Corporation Journal based replication with enhanced failover
US8725692B1 (en) 2010-12-16 2014-05-13 Emc Corporation Replication of xcopy command
US8738813B1 (en) 2011-12-27 2014-05-27 Emc Corporation Method and apparatus for round trip synchronous replication using SCSI reads
US8745004B1 (en) 2011-06-24 2014-06-03 Emc Corporation Reverting an old snapshot on a production volume without a full sweep
US8751828B1 (en) 2010-12-23 2014-06-10 Emc Corporation Sharing encryption-related metadata between multiple layers in a storage I/O stack
US8769336B1 (en) 2011-12-27 2014-07-01 Emc Corporation Method and apparatus for preventing journal loss on failover in symmetric continuous data protection replication
US8805786B1 (en) 2011-06-24 2014-08-12 Emc Corporation Replicating selected snapshots from one storage array to another, with minimal data transmission
US8806161B1 (en) 2011-09-29 2014-08-12 Emc Corporation Mirroring splitter meta data
US8825848B1 (en) 2012-03-20 2014-09-02 Emc Corporation Ordering of event records in an electronic system for forensic analysis
US8850144B1 (en) 2012-03-29 2014-09-30 Emc Corporation Active replication switch
US8850143B1 (en) 2010-12-16 2014-09-30 Emc Corporation Point in time access in a replication environment with LUN resizing
US8862546B1 (en) 2011-06-30 2014-10-14 Emc Corporation Virtual access roll
US8892835B1 (en) 2012-06-07 2014-11-18 Emc Corporation Insertion of a virtualization layer into a replication environment
US8898515B1 (en) 2012-06-28 2014-11-25 Emc International Company Synchronous replication using multiple data protection appliances across multiple storage arrays
US8898409B1 (en) 2012-06-27 2014-11-25 Emc International Company Journal-based replication without journal loss
US8898112B1 (en) 2011-09-07 2014-11-25 Emc Corporation Write signature command
US8898519B1 (en) 2012-03-30 2014-11-25 Emc Corporation Method and apparatus for an asynchronous splitter
US20140365746A1 (en) * 2013-06-11 2014-12-11 Lsi Corporation I/O path selection
US8914595B1 (en) 2011-09-29 2014-12-16 Emc Corporation Snapshots in deduplication
US8924668B1 (en) 2011-12-23 2014-12-30 Emc Corporation Method and apparatus for an application- and object-level I/O splitter
US8930947B1 (en) 2011-12-30 2015-01-06 Emc Corporation System and method for live migration of a virtual machine with dedicated cache

Patent Citations (167)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5170480A (en) 1989-09-25 1992-12-08 International Business Machines Corporation Concurrently applying redo records to backup database in a log sequence using single queue server per queue at a time
US5249053A (en) 1991-02-05 1993-09-28 Dycam Inc. Filmless digital camera with selective image compression
US5499367A (en) 1991-11-15 1996-03-12 Oracle Corporation System for database integrity with multiple logs assigned to client subsets
US5388254A (en) 1992-03-27 1995-02-07 International Business Machines Corporation Method and means for limiting duration of input/output (I/O) requests
US5526397A (en) 1992-04-20 1996-06-11 Hughes Electronics Switching transcoder
US5990810A (en) * 1995-02-17 1999-11-23 Williams; Ross Neil Method for partitioning a block of data into subblocks and for storing and communicating such subblocks
US6621493B1 (en) 1995-10-27 2003-09-16 Microsoft Corporation Metafile compression
US5990899A (en) 1995-10-27 1999-11-23 Microsoft Corporation Method for compressing journal streams
US5864837A (en) 1996-06-12 1999-01-26 Unisys Corporation Methods and apparatus for efficient caching in a distributed environment
US6174377B1 (en) 1997-03-03 2001-01-16 Genus, Inc. Processing chamber for atomic layer deposition processes
US6287965B1 (en) 1997-07-28 2001-09-11 Samsung Electronics Co., Ltd. Method of forming metal layer using atomic layer deposition and semiconductor device having the metal layer as barrier metal layer or upper or lower electrode of capacitor
US5879459A (en) 1997-08-29 1999-03-09 Genus, Inc. Vertically-stacked process reactor and cluster tool system for atomic layer deposition
US6143659A (en) 1997-11-18 2000-11-07 Samsung Electronics, Co., Ltd. Method for manufacturing aluminum metal interconnection layer by atomic layer deposition method
US6174809B1 (en) 1997-12-31 2001-01-16 Samsung Electronics, Co., Ltd. Method for forming metal layer using atomic layer deposition
US6272534B1 (en) 1998-03-04 2001-08-07 Storage Technology Corporation Method and system for efficiently storing web pages for quick downloading at a remote device
US6065018A (en) 1998-03-04 2000-05-16 International Business Machines Corporation Synchronizing recovery log having time stamp to a remote site for disaster recovery of a primary database having related hierarchical and relational databases
US6148340A (en) 1998-04-30 2000-11-14 International Business Machines Corporation Method and system for differencing container files
US6270572B1 (en) 1998-08-07 2001-08-07 Samsung Electronics Co., Ltd. Method for manufacturing thin film using atomic layer deposition
US20030145317A1 (en) 1998-09-21 2003-07-31 Microsoft Corporation On demand patching of applications via software implementation installer mechanism
US20030196147A1 (en) 1998-11-12 2003-10-16 Hitachi, Ltd. Storage apparatus and control method thereof
US6260125B1 (en) 1998-12-09 2001-07-10 Ncr Corporation Asynchronous write queues, reconstruction and check-pointing in disk-mirroring applications
WO2000045581A3 (en) 1999-01-29 2001-01-25 Data Race Inc Modem transfer mechanism which prioritized data transfers
US6467023B1 (en) 1999-03-23 2002-10-15 Lsi Logic Corporation Method for logical unit creation with immediate availability in a raid storage environment
US6981151B1 (en) * 1999-04-08 2005-12-27 Battelle Energy Alliance, Llc Digital data storage systems, computers, and data verification methods
US6042652A (en) 1999-05-01 2000-03-28 P.K. Ltd Atomic layer deposition apparatus for depositing atomic layer on multiple substrates
US6574657B1 (en) 1999-05-03 2003-06-03 Symantec Corporation Methods and apparatuses for file synchronization and updating using a signature list
US6804676B1 (en) 1999-08-31 2004-10-12 International Business Machines Corporation System and method in a data processing system for generating compressed affinity records from data records
US6203613B1 (en) 1999-10-19 2001-03-20 International Business Machines Corporation Atomic layer deposition with nitrate containing precursors
EP1154356A1 (en) 2000-05-09 2001-11-14 Alcatel Caching of files during loading from a distributed file system
US8250149B2 (en) 2000-10-12 2012-08-21 Peerapp Ltd. Method and system for accelerating receipt of data in a client to client network
US8930500B2 (en) 2000-10-12 2015-01-06 Peerapp Ltd. Method and system for accelerating receipt of data in a client to client network
US7203741B2 (en) 2000-10-12 2007-04-10 Peerapp Ltd. Method and system for accelerating receipt of data in a client-to-client network
US8037162B2 (en) 2000-10-12 2011-10-11 Peerapp Ltd. Method and system for accelerating receipt of data in a client to client network
US20020129168A1 (en) 2001-03-12 2002-09-12 Kabushiki Kaisha Toshiba Data transfer scheme using caching and differential compression techniques for reducing network load
US20030061537A1 (en) 2001-07-16 2003-03-27 Cha Sang K. Parallelized redo-only logging and recovery for highly available main memory database systems
US20030048842A1 (en) 2001-09-07 2003-03-13 Alcatel Method of compressing animation images
US7111197B2 (en) 2001-09-21 2006-09-19 Polyserve, Inc. System and method for journal recovery for multinode environments
US20030110278A1 (en) 2001-12-11 2003-06-12 Eric Anderson Technique for reducing network bandwidth for delivery of dynamic and mixed content
US20060117211A1 (en) 2002-01-16 2006-06-01 Hitachi, Ltd. Fail-over storage system
US20040030852A1 (en) * 2002-03-18 2004-02-12 Coombs David Lawrence System and method for data backup
US7139927B2 (en) 2002-03-21 2006-11-21 Electronics And Telecommunications Research Institute Journaling and recovery method of shared disk file system
US7120768B2 (en) 2002-03-22 2006-10-10 Hitachi, Ltd. Snapshot acquisition method, storage system and disk apparatus
US6947981B2 (en) 2002-03-26 2005-09-20 Hewlett-Packard Development Company, L.P. Flexible data replication mechanism
US8205009B2 (en) 2002-04-25 2012-06-19 Emc Israel Development Center, Ltd. Apparatus for continuous compression of large volumes of data
US20060212462A1 (en) 2002-04-25 2006-09-21 Kashya Israel Ltd. Apparatus for continuous compression of large volumes of data
US7222136B1 (en) 2002-05-23 2007-05-22 Oracle International Corporation Communicating data dictionary information of database objects through a redo stream
US7043610B2 (en) 2002-08-19 2006-05-09 Aristos Logic Corporation System and method for maintaining cache coherency without external controller intervention
US7076620B2 (en) 2003-02-27 2006-07-11 Hitachi, Ltd. Data processing system including storage systems
US20080066054A1 (en) * 2003-02-28 2008-03-13 Bea Systems, Inc. System and method for determining when an ejb compiler needs to be executed
US20040205092A1 (en) 2003-03-27 2004-10-14 Alan Longo Data storage and caching architecture
US20040250032A1 (en) 2003-06-06 2004-12-09 Minwen Ji State machine and system for data redundancy
US20040254964A1 (en) 2003-06-12 2004-12-16 Shoji Kodama Data replication with rollback
US7606940B2 (en) 2003-06-23 2009-10-20 Hitachi, Ltd. Remote copy system
US20050015663A1 (en) 2003-06-25 2005-01-20 Philippe Armangau Data recovery with internet protocol replication with or without full resync
US20050028022A1 (en) 2003-06-26 2005-02-03 Hitachi, Ltd. Method and apparatus for data recovery system using storage based journaling
US7130975B2 (en) 2003-06-27 2006-10-31 Hitachi, Ltd. Data processing system
US20060195670A1 (en) 2003-08-11 2006-08-31 Takashige Iwamura Multi-site remote-copy system
US20070198791A1 (en) 2003-08-11 2007-08-23 Takashige Iwamura Multi-site remote-copy system
US7051126B1 (en) 2003-08-19 2006-05-23 F5 Networks, Inc. Hardware accelerated compression
US20050049924A1 (en) 2003-08-27 2005-03-03 Debettencourt Jason Techniques for use with application monitoring to obtain transaction data
US7590887B2 (en) 2003-09-16 2009-09-15 Hitachi, Ltd. Mapping apparatus for backup and restoration of multi-generation recovered snapshots
US20070180304A1 (en) 2003-09-16 2007-08-02 Hitachi, Ltd. Mapping apparatus for backup and restoration of multi-generation recovered snapshots
US7328373B2 (en) 2004-01-30 2008-02-05 Hitachi, Ltd. Data processing system
US20050172092A1 (en) 2004-02-04 2005-08-04 Lam Wai T. Method and system for storing data
US7167963B2 (en) 2004-04-28 2007-01-23 Hitachi, Ltd. Storage system with multiple remote site copying capability
US7117327B2 (en) 2004-04-28 2006-10-03 Hitachi, Ltd. Data processing system
US20060107007A1 (en) 2004-04-28 2006-05-18 Yusuke Hirakawa Data processing system
US20050273655A1 (en) 2004-05-21 2005-12-08 Bea Systems, Inc. Systems and methods for automatic retry of transactions
US7519628B1 (en) 2004-06-01 2009-04-14 Network Appliance, Inc. Technique for accelerating log replay with partial cache flush
US7159088B2 (en) 2004-08-04 2007-01-02 Hitachi, Ltd. Storage system and data processing system
US20060031647A1 (en) 2004-08-04 2006-02-09 Hitachi, Ltd. Storage system and data processing system
US7296008B2 (en) 2004-08-24 2007-11-13 Symantec Operating Corporation Generation and use of a time map for accessing a prior image of a storage device
US20060161810A1 (en) 2004-08-25 2006-07-20 Bao Bill Q Remote replication
US20060047996A1 (en) 2004-08-30 2006-03-02 Mendocino Software, Inc. Systems and methods for optimizing restoration of stored data
US7360113B2 (en) 2004-08-30 2008-04-15 Mendocino Software, Inc. Protocol for communicating data block copies in an error recovery environment
US20060064416A1 (en) 2004-09-17 2006-03-23 Sim-Tang Siew Y Method and system for data reduction
US20060179343A1 (en) 2005-02-08 2006-08-10 Hitachi, Ltd. Method and apparatus for replicating volumes between heterogeneous storage systems
US20070055833A1 (en) 2005-09-06 2007-03-08 Dot Hill Systems Corp. Snapshot restore method and apparatus
US7426618B2 (en) 2005-09-06 2008-09-16 Dot Hill Systems Corp. Snapshot restore method and apparatus
US7519625B2 (en) 2005-09-27 2009-04-14 Hitachi, Ltd. Snapshot management apparatus and method, and storage system
US20070198602A1 (en) 2005-12-19 2007-08-23 David Ngo Systems and methods for resynchronizing information
US20070162513A1 (en) 2005-12-21 2007-07-12 Michael Lewin Methods and apparatus for point in time data access and recovery
US8060713B1 (en) 2005-12-21 2011-11-15 Emc (Benelux) B.V., S.A.R.L. Consolidating snapshots in a continuous data protection system using journaling
US7774565B2 (en) 2005-12-21 2010-08-10 Emc Israel Development Center, Ltd. Methods and apparatus for point in time data access and recovery
US20070266053A1 (en) 2005-12-22 2007-11-15 Shlomo Ahal Methods and apparatus for multiple point in time data access
US7849361B2 (en) 2005-12-22 2010-12-07 Emc Corporation Methods and apparatus for multiple point in time data access
US7353335B2 (en) 2006-02-03 2008-04-01 Hitachi, Ltd. Storage control method for database recovery in logless mode
US20070220311A1 (en) 2006-02-17 2007-09-20 Michael Lewin Cross tagging of data for consistent recovery
US7577867B2 (en) 2006-02-17 2009-08-18 Emc Corporation Cross tagging to data for consistent recovery
US7546485B2 (en) 2006-08-15 2009-06-09 Hewlett-Packard Development Company, L.P. Method and system for efficient journal-based resynchronization
US7685171B1 (en) * 2006-09-22 2010-03-23 Emc Corporation Techniques for performing a restoration operation using device scanning
US7627612B2 (en) 2006-09-28 2009-12-01 Emc Israel Development Center, Ltd. Methods and apparatus for optimal journaling for continuous data replication
US7627687B2 (en) 2006-09-28 2009-12-01 Emc Israel Development Center, Ltd. Methods and apparatus for managing data flow in a continuous data replication system having journaling
US7516287B2 (en) 2006-09-28 2009-04-07 Emc Israel Development Center, Ltd. Methods and apparatus for optimal journaling for continuous data replication
US20080082591A1 (en) 2006-09-28 2008-04-03 Shlomo Ahal Methods and apparatus for managing data flow in a continuous data replication system having journaling
US20080082770A1 (en) 2006-09-28 2008-04-03 Shlomo Ahal Methods and apparatus for optimal journaling for continuous data replication
US20080082592A1 (en) 2006-09-28 2008-04-03 Shlomo Ahal Methods and apparatus for optimal journaling for continuous data replication
US7757057B2 (en) 2006-11-27 2010-07-13 Lsi Corporation Optimized rollback of copy-on-write snapshot volumes
US20080243769A1 (en) * 2007-03-30 2008-10-02 Symantec Corporation System and method for exporting data directly from deduplication storage to non-deduplication storage
US7844856B1 (en) 2007-12-26 2010-11-30 Emc (Benelux) B.V., S.A.R.L. Methods and apparatus for bottleneck processing in a continuous data protection system having journaling
US7797358B1 (en) 2007-12-26 2010-09-14 Emc (Benelux) B.V., S.A.R.L. Methods and apparatus for continuous data protection system having journal compression
US7860836B1 (en) 2007-12-26 2010-12-28 Emc (Benelux) B.V., S.A.R.L. Method and apparatus to recover data in a continuous data protection environment using a journal
US7934262B1 (en) 2007-12-26 2011-04-26 Emc (Benelux) B.V., S.A.R.L. Methods and apparatus for virus detection using journal data
US7958372B1 (en) 2007-12-26 2011-06-07 Emc (Benelux) B.V., S.A.R.L. Method and apparatus to convert a logical unit from a first encryption state to a second encryption state using a journal in a continuous data protection environment
US7840536B1 (en) 2007-12-26 2010-11-23 Emc (Benelux) B.V., S.A.R.L. Methods and apparatus for dynamic journal expansion
US8041940B1 (en) 2007-12-26 2011-10-18 Emc Corporation Offloading encryption processing in a storage area network
US7840662B1 (en) 2008-03-28 2010-11-23 EMC(Benelux) B.V., S.A.R.L. Dynamically managing a network cluster
US20090307430A1 (en) * 2008-06-06 2009-12-10 Vmware, Inc. Sharing and persisting code caches
US7719443B1 (en) 2008-06-27 2010-05-18 Emc Corporation Compressing data in a continuous data protection environment
US8108634B1 (en) 2008-06-27 2012-01-31 Emc B.V., S.A.R.L. Replicating a thin logical unit
US8060714B1 (en) 2008-09-26 2011-11-15 Emc (Benelux) B.V., S.A.R.L. Initializing volumes in a replication system
US7882286B1 (en) 2008-09-26 2011-02-01 EMC (Benelux)B.V., S.A.R.L. Synchronizing volumes for replication
US8214612B1 (en) 2009-09-28 2012-07-03 Emc Corporation Ensuring consistency of replicated volumes
US8583885B1 (en) 2009-12-01 2013-11-12 Emc Corporation Energy efficient sync and async replication
US8341115B1 (en) 2009-12-26 2012-12-25 Emc Corporation Dynamically switching between synchronous and asynchronous replication
US8271441B1 (en) 2009-12-26 2012-09-18 Emc Corporation Virtualized CG
US8370648B1 (en) 2010-03-15 2013-02-05 Emc International Company Writing and reading encrypted data using time-based encryption keys
US8392680B1 (en) 2010-03-30 2013-03-05 Emc International Company Accessing a volume in a distributed environment
US8464101B1 (en) 2010-03-31 2013-06-11 Emc Corporation CAS command network replication
US8103937B1 (en) 2010-03-31 2012-01-24 Emc Corporation Cas command network replication
US8271447B1 (en) 2010-06-18 2012-09-18 Emc International Company Mirroring metadata in a continuous data protection environment
US8438135B1 (en) 2010-06-18 2013-05-07 Emc International Company Mirroring metadata in a continuous data protection environment
US8332687B1 (en) 2010-06-23 2012-12-11 Emc Corporation Splitter used in a continuous data protection environment
US8832399B1 (en) 2010-09-27 2014-09-09 Emc International Company Virtualized consistency group using an enhanced splitter
US8433869B1 (en) 2010-09-27 2013-04-30 Emc International Company Virtualized consistency group using an enhanced splitter
US8478955B1 (en) 2010-09-27 2013-07-02 Emc International Company Virtualized consistency group using more than one data protection appliance
US8694700B1 (en) 2010-09-29 2014-04-08 Emc Corporation Using I/O track information for continuous push with splitter for storage device
US8335771B1 (en) 2010-09-29 2012-12-18 Emc Corporation Storage array snapshots for logged access replication in a continuous data protection system
US20120124307A1 (en) * 2010-11-16 2012-05-17 Actifio, Inc. System and method for performing a plurality of prescribed data management functions in a manner that reduces redundant access operations to primary storage
US8335761B1 (en) 2010-12-02 2012-12-18 Emc International Company Replicating in a multi-copy environment
US8850143B1 (en) 2010-12-16 2014-09-30 Emc Corporation Point in time access in a replication environment with LUN resizing
US8725692B1 (en) 2010-12-16 2014-05-13 Emc Corporation Replication of xcopy command
US8725691B1 (en) 2010-12-16 2014-05-13 Emc Corporation Dynamic LUN resizing in a replication environment
US8751828B1 (en) 2010-12-23 2014-06-10 Emc Corporation Sharing encryption-related metadata between multiple layers in a storage I/O stack
US8495304B1 (en) 2010-12-23 2013-07-23 Emc Corporation Multi source wire deduplication
US8706700B1 (en) 2010-12-23 2014-04-22 Emc Corporation Creating consistent snapshots across several storage arrays or file systems
US20120166448A1 (en) * 2010-12-28 2012-06-28 Microsoft Corporation Adaptive Index for Data Deduplication
US8601085B1 (en) 2011-03-28 2013-12-03 Emc Corporation Techniques for preferred path determination
US8726066B1 (en) 2011-03-31 2014-05-13 Emc Corporation Journal based replication with enhanced failover
US8429362B1 (en) 2011-03-31 2013-04-23 Emc Corporation Journal based replication with a virtual service layer
US8805786B1 (en) 2011-06-24 2014-08-12 Emc Corporation Replicating selected snapshots from one storage array to another, with minimal data transmission
US8521694B1 (en) 2011-06-24 2013-08-27 Emc Corporation Leveraging array snapshots for immediate continuous data protection
US8521691B1 (en) 2011-06-24 2013-08-27 Emc Corporation Seamless migration between replication technologies
US8745004B1 (en) 2011-06-24 2014-06-03 Emc Corporation Reverting an old snapshot on a production volume without a full sweep
US8862546B1 (en) 2011-06-30 2014-10-14 Emc Corporation Virtual access roll
US8380885B1 (en) 2011-06-30 2013-02-19 Emc Corporation Handling abort commands in replication
US8898112B1 (en) 2011-09-07 2014-11-25 Emc Corporation Write signature command
US8719497B1 (en) 2011-09-21 2014-05-06 Emc Corporation Using device spoofing to improve recovery time in a continuous data protection environment
US8543609B1 (en) 2011-09-29 2013-09-24 Emc Corporation Snapshots in deduplication
US8914595B1 (en) 2011-09-29 2014-12-16 Emc Corporation Snapshots in deduplication
US8806161B1 (en) 2011-09-29 2014-08-12 Emc Corporation Mirroring splitter meta data
US8712962B1 (en) 2011-12-01 2014-04-29 Emc Corporation Snapshots in de-duplication
US8924668B1 (en) 2011-12-23 2014-12-30 Emc Corporation Method and apparatus for an application- and object-level I/O splitter
US8738813B1 (en) 2011-12-27 2014-05-27 Emc Corporation Method and apparatus for round trip synchronous replication using SCSI reads
US8769336B1 (en) 2011-12-27 2014-07-01 Emc Corporation Method and apparatus for preventing journal loss on failover in symmetric continuous data protection replication
US8930947B1 (en) 2011-12-30 2015-01-06 Emc Corporation System and method for live migration of a virtual machine with dedicated cache
US8683592B1 (en) 2011-12-30 2014-03-25 Emc Corporation Associating network and storage activities for forensic analysis
US8627012B1 (en) 2011-12-30 2014-01-07 Emc Corporation System and method for improving cache performance
US8510279B1 (en) 2012-03-15 2013-08-13 Emc International Company Using read signature command in file system to backup data
US8825848B1 (en) 2012-03-20 2014-09-02 Emc Corporation Ordering of event records in an electronic system for forensic analysis
US8850144B1 (en) 2012-03-29 2014-09-30 Emc Corporation Active replication switch
US8600945B1 (en) 2012-03-29 2013-12-03 Emc Corporation Continuous data replication
US8898519B1 (en) 2012-03-30 2014-11-25 Emc Corporation Method and apparatus for an asynchronous splitter
US8892835B1 (en) 2012-06-07 2014-11-18 Emc Corporation Insertion of a virtualization layer into a replication environment
US8898409B1 (en) 2012-06-27 2014-11-25 Emc International Company Journal-based replication without journal loss
US8898515B1 (en) 2012-06-28 2014-11-25 Emc International Company Synchronous replication using multiple data protection appliances across multiple storage arrays
US20140365746A1 (en) * 2013-06-11 2014-12-11 Lsi Corporation I/O path selection

Non-Patent Citations (22)

* Cited by examiner, † Cited by third party
Title
AIX System Management Concepts: Operating Systems and Devices; May 2000; pp. 1-280.
Bunyan, "Multiplexing in a BrightStor® ARCserve® Backup Release 11;" Mar. 2004; pp. 1-4.
Gibson, "Five Point Plan Lies at the Heart of Compression Technology:" Apr. 29, 1991; p. 1.
Hill, "Network Computing;" Jun. 8, 2006; pp. 1-9.
Linux Filesystems; Sams Publishing; 2002; pp. 17-22 and 67-71.
Marks, "Network Computing;" Feb. 2, 2006; pp. 1-8.
Microsoft Computer Dictionary; Microsoft Press; Fifth Edition; 2002; 2 pages.
Retrieved from http://en.wikipedia.org/wiki/DEFLATE; DEFLATE; Jun. 19, 2008; pp. 1-6.
Retrieved from http://en.wikipedia.org/wiki/Huffman-coding; Huffman Coding; Jun. 8, 2008; pp. 1-11.
Retrieved from http://en.wikipedia.org/wiki/LZ77; LZ77 and LZ78; Jun. 17, 2008; pp. 1-2.
Soules et al.; "Metadata Efficiency in a Comprehensive Versioning File System;" May 2002; CMU-CS-02-145; School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213; 33 pages.
Soules, "Metadata Efficiency in Versioning File Systems;" 2003; pp. 1-16.
U.S. Appl. No. 10/512,687 downloaded Jan. 7, 2015 Part 1 of 2; 300 pages.
U.S. Appl. No. 10/512,687 downloaded Jan. 7, 2015 Part 2 of 2; 254 pages.
U.S. Appl. No. 11/356,920 downloaded Jan. 7, 2015; 272 pages.
U.S. Appl. No. 11/536,160 downloaded Jan. 7, 2015; 230 pages.
U.S. Appl. No. 11/536,215 downloaded Jan. 7, 2015; 172 pages.
U.S. Appl. No. 11/536,233 downloaded Jan. 7, 2015; 256 pages.
U.S. Appl. No. 11/609,560 downloaded Jan. 7, 2015; 265 pages.
U.S. Appl. No. 11/609,561 downloaded Jan. 7, 2015; 219 pages.
U.S. Appl. No. 11/964,168 downloaded Jan. 7, 2015; 222 pages.
U.S. Appl. No. 12/057,652 downloaded Jan. 7, 2015; 296 pages.

Cited By (80)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9501542B1 (en) 2008-03-11 2016-11-22 Emc Corporation Methods and apparatus for volume synchronization
US9286052B1 (en) 2011-09-15 2016-03-15 Emc Corporation Upgrading software on a pair of nodes in a clustered environment
US10061666B1 (en) 2011-12-30 2018-08-28 Emc International Company Method and apparatus for adding a director to storage with network-based replication without data resynchronization
US10235145B1 (en) 2012-09-13 2019-03-19 Emc International Company Distributed scale-out replication
US9336094B1 (en) 2012-09-13 2016-05-10 Emc International Company Scaleout replication of an application
US9383937B1 (en) 2013-03-14 2016-07-05 Emc Corporation Journal tiering in a continuous data protection system using deduplication-based storage
US9696939B1 (en) 2013-03-14 2017-07-04 EMC IP Holding Company LLC Replicating data using deduplication-based arrays using network-based replication
US9244997B1 (en) 2013-03-15 2016-01-26 Emc Corporation Asymmetric active-active access of asynchronously-protected data storage
US9367260B1 (en) 2013-12-13 2016-06-14 Emc Corporation Dynamic replication system
US9405765B1 (en) 2013-12-17 2016-08-02 Emc Corporation Replication of virtual machines
US10031703B1 (en) 2013-12-31 2018-07-24 Emc Corporation Extent-based tiering for virtual storage using full LUNs
US10082980B1 (en) 2014-06-20 2018-09-25 EMC IP Holding Company LLC Migration of snapshot in replication system using a log
US9619543B1 (en) 2014-06-23 2017-04-11 EMC IP Holding Company LLC Replicating in virtual desktop infrastructure
US10324798B1 (en) 2014-09-25 2019-06-18 EMC IP Holding Company LLC Restoring active areas of a logical unit
US10437783B1 (en) 2014-09-25 2019-10-08 EMC IP Holding Company LLC Recover storage array using remote deduplication device
US10101943B1 (en) 2014-09-25 2018-10-16 EMC IP Holding Company LLC Realigning data in replication system
US9529885B1 (en) 2014-09-29 2016-12-27 EMC IP Holding Company LLC Maintaining consistent point-in-time in asynchronous replication during virtual machine relocation
US9910621B1 (en) 2014-09-29 2018-03-06 EMC IP Holding Company LLC Backlogging I/O metadata utilizing counters to monitor write acknowledgements and no acknowledgements
US10496487B1 (en) 2014-12-03 2019-12-03 EMC IP Holding Company LLC Storing snapshot changes with snapshots
US9600377B1 (en) 2014-12-03 2017-03-21 EMC IP Holding Company LLC Providing data protection using point-in-time images from multiple types of storage devices
US9405481B1 (en) 2014-12-17 2016-08-02 Emc Corporation Replicating using volume multiplexing with consistency group file
US9632881B1 (en) 2015-03-24 2017-04-25 EMC IP Holding Company LLC Replication of a virtual distributed volume
US9557921B1 (en) 2015-03-26 2017-01-31 EMC IP Holding Company LLC Virtual volume converter
US9411535B1 (en) 2015-03-27 2016-08-09 Emc Corporation Accessing multiple virtual devices
US10296419B1 (en) 2015-03-27 2019-05-21 EMC IP Holding Company LLC Accessing a virtual device using a kernel
US9678680B1 (en) 2015-03-30 2017-06-13 EMC IP Holding Company LLC Forming a protection domain in a storage architecture
US9665305B1 (en) 2015-06-26 2017-05-30 EMC IP Holding Company LLC Tiering data between two deduplication devices
US10853181B1 (en) 2015-06-29 2020-12-01 EMC IP Holding Company LLC Backing up volumes using fragment files
US10042751B1 (en) 2015-09-30 2018-08-07 EMC IP Holding Company LLC Method and system for multi-tier all-flash array
US11153335B1 (en) 2015-11-09 2021-10-19 8X8, Inc. Delayed replication for protection of replicated databases
US10440039B1 (en) * 2015-11-09 2019-10-08 8X8, Inc. Delayed replication for protection of replicated databases
US9684576B1 (en) 2015-12-21 2017-06-20 EMC IP Holding Company LLC Replication using a virtual distributed volume
US10055148B1 (en) 2015-12-22 2018-08-21 EMC IP Holding Company LLC Storing application data as an enhanced copy
US10409787B1 (en) 2015-12-22 2019-09-10 EMC IP Holding Company LLC Database migration
US10133874B1 (en) 2015-12-28 2018-11-20 EMC IP Holding Company LLC Performing snapshot replication on a storage system not configured to support snapshot replication
US10229006B1 (en) 2015-12-28 2019-03-12 EMC IP Holding Company LLC Providing continuous data protection on a storage array configured to generate snapshots
US10235196B1 (en) 2015-12-28 2019-03-19 EMC IP Holding Company LLC Virtual machine joining or separating
US10067837B1 (en) 2015-12-28 2018-09-04 EMC IP Holding Company LLC Continuous data protection with cloud resources
US10108356B1 (en) 2016-03-25 2018-10-23 EMC IP Holding Company LLC Determining data to store in retention storage
US9910735B1 (en) 2016-03-30 2018-03-06 EMC IP Holding Company LLC Generating an application-consistent snapshot
US10579282B1 (en) 2016-03-30 2020-03-03 EMC IP Holding Company LLC Distributed copy in multi-copy replication where offset and size of I/O requests to replication site is half offset and size of I/O request to production volume
US10235088B1 (en) 2016-03-30 2019-03-19 EMC IP Holding Company LLC Global replication policy for multi-copy replication
US10235087B1 (en) 2016-03-30 2019-03-19 EMC IP Holding Company LLC Distributing journal data over multiple journals
US10152267B1 (en) 2016-03-30 2018-12-11 Emc Corporation Replication data pull
US10235060B1 (en) 2016-04-14 2019-03-19 EMC IP Holding Company, LLC Multilevel snapshot replication for hot and cold regions of a storage system
US10235090B1 (en) 2016-09-23 2019-03-19 EMC IP Holding Company LLC Validating replication copy consistency using a hash function in a storage system
US10210073B1 (en) * 2016-09-23 2019-02-19 EMC IP Holding Company, LLC Real time debugging of production replicated data with data obfuscation in a storage system
US10019194B1 (en) 2016-09-23 2018-07-10 EMC IP Holding Company LLC Eventually consistent synchronous data replication in a storage system
US10235091B1 (en) 2016-09-23 2019-03-19 EMC IP Holding Company LLC Full sweep disk synchronization in a storage system
US10146961B1 (en) 2016-09-23 2018-12-04 EMC IP Holding Company LLC Encrypting replication journals in a storage system
US10078459B1 (en) 2016-09-26 2018-09-18 EMC IP Holding Company LLC Ransomware detection using I/O patterns
US10235061B1 (en) 2016-09-26 2019-03-19 EMC IP Holding Company LLC Granular virtual machine snapshots
US10235247B1 (en) 2016-09-26 2019-03-19 EMC IP Holding Company LLC Compressing memory snapshots
US10223023B1 (en) 2016-09-26 2019-03-05 EMC IP Holding Company LLC Bandwidth reduction for multi-level data replication
US10409986B1 (en) 2016-09-26 2019-09-10 EMC IP Holding Company LLC Ransomware detection in a continuous data protection environment
US10409629B1 (en) 2016-09-26 2019-09-10 EMC IP Holding Company LLC Automated host data protection configuration
US11016677B2 (en) 2016-12-13 2021-05-25 EMC IP Holding Company LLC Dual-splitter for high performance replication
US10324637B1 (en) 2016-12-13 2019-06-18 EMC IP Holding Company LLC Dual-splitter for high performance replication
US10235092B1 (en) 2016-12-15 2019-03-19 EMC IP Holding Company LLC Independent parallel on demand recovery of data replicas in a storage system
US10628268B1 (en) 2016-12-15 2020-04-21 EMC IP Holding Company LLC Proof of data replication consistency using blockchain
US10140039B1 (en) 2016-12-15 2018-11-27 EMC IP Holding Company LLC I/O alignment for continuous replication in a storage system
US10191687B1 (en) 2016-12-15 2019-01-29 EMC IP Holding Company LLC Adaptive snap-based replication in a storage system
US10467102B1 (en) 2016-12-15 2019-11-05 EMC IP Holding Company LLC I/O score-based hybrid replication in a storage system
US10747606B1 (en) 2016-12-21 2020-08-18 EMC IP Holding Company LLC Risk based analysis of adverse event impact on system availability
US10423634B1 (en) 2016-12-27 2019-09-24 EMC IP Holding Company LLC Temporal queries on secondary storage
US10353603B1 (en) 2016-12-27 2019-07-16 EMC IP Holding Company LLC Storage container based replication services
US10114581B1 (en) 2016-12-27 2018-10-30 EMC IP Holding Company LLC Creating a virtual access point in time on an object based journal replication
US10235064B1 (en) 2016-12-27 2019-03-19 EMC IP Holding Company LLC Optimized data replication using special NVME protocol and running in a friendly zone of storage array
US10776211B1 (en) 2016-12-27 2020-09-15 EMC IP Holding Company LLC Methods, systems, and apparatuses to update point in time journal using map reduce to create a highly parallel update
US10255314B2 (en) 2017-03-16 2019-04-09 International Business Machines Corporation Comparison of block based volumes with ongoing inputs and outputs
CN107203345A (en) * 2017-06-01 2017-09-26 深圳市云舒网络技术有限公司 Method and device for rapid consistency verification of multi-copy storage
US10366011B1 (en) 2018-05-03 2019-07-30 EMC IP Holding Company LLC Content-based deduplicated storage having multilevel data cache
US10713221B2 (en) 2018-07-30 2020-07-14 EMC IP Holding Company LLC Dual layer deduplication for a file system running over a deduplicated block storage
US10853286B2 (en) 2018-07-31 2020-12-01 EMC IP Holding Company LLC Performance improvement for an active-active distributed non-ALUA system with address ownerships
US10489321B1 (en) 2018-07-31 2019-11-26 EMC IP Holding Company LLC Performance improvement for an active-active distributed non-ALUA system with address ownerships
US11144247B2 (en) 2018-08-01 2021-10-12 EMC IP Holding Company LLC Fast input/output in a content-addressable storage architecture with paged metadata
US10592166B2 (en) 2018-08-01 2020-03-17 EMC IP Holding Company LLC Fast input/output in a content-addressable storage architecture with paged metadata
US10747667B2 (en) 2018-11-02 2020-08-18 EMC IP Holding Company LLC Memory management of multi-level metadata cache for content-based deduplicated storage
US11093158B2 (en) 2019-01-29 2021-08-17 EMC IP Holding Company LLC Sub-lun non-deduplicated tier in a CAS storage to reduce mapping information and improve memory efficiency
US11455277B2 (en) 2019-03-27 2022-09-27 Nutanix Inc. Verifying snapshot integrity

Similar Documents

Publication Publication Date Title
US9158630B1 (en) Testing integrity of replicated storage
US9367260B1 (en) Dynamic replication system
US9069709B1 (en) Dynamic granularity in data replication
US9910621B1 (en) Backlogging I/O metadata utilizing counters to monitor write acknowledgements and no acknowledgements
US9383937B1 (en) Journal tiering in a continuous data protection system using deduplication-based storage
US9189339B1 (en) Replication of a virtual distributed volume with virtual machine granularity
US9632881B1 (en) Replication of a virtual distributed volume
US8954796B1 (en) Recovery of a logical unit in a consistency group while replicating other logical units in the consistency group
US9110914B1 (en) Continuous data protection using deduplication-based storage
US9087112B1 (en) Consistency across snapshot shipping and continuous replication
US10133874B1 (en) Performing snapshot replication on a storage system not configured to support snapshot replication
US8996460B1 (en) Accessing an image in a continuous data protection using deduplication-based storage
US8898409B1 (en) Journal-based replication without journal loss
US9146878B1 (en) Storage recovery from total cache loss using journal-based replication
US9274718B1 (en) Migration in replication system
US9037822B1 (en) Hierarchical volume tree
US10082980B1 (en) Migration of snapshot in replication system using a log
US10496487B1 (en) Storing snapshot changes with snapshots
US9336094B1 (en) Scaleout replication of an application
US8949180B1 (en) Replicating key-value pairs in a continuous data protection system
US9529885B1 (en) Maintaining consistent point-in-time in asynchronous replication during virtual machine relocation
US10191687B1 (en) Adaptive snap-based replication in a storage system
US9619543B1 (en) Replicating in virtual desktop infrastructure
US10078459B1 (en) Ransomware detection using I/O patterns
US10067837B1 (en) Continuous data protection with cloud resources

Legal Events

Date Code Title Description
AS Assignment

Owner name: EMC CORPORATION, MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NATANZON, ASSAF;REEL/FRAME:031820/0108

Effective date: 20131219

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: SECURITY AGREEMENT;ASSIGNORS:ASAP SOFTWARE EXPRESS, INC.;AVENTAIL LLC;CREDANT TECHNOLOGIES, INC.;AND OTHERS;REEL/FRAME:040134/0001

Effective date: 20160907

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT, TEXAS

Free format text: SECURITY AGREEMENT;ASSIGNORS:ASAP SOFTWARE EXPRESS, INC.;AVENTAIL LLC;CREDANT TECHNOLOGIES, INC.;AND OTHERS;REEL/FRAME:040136/0001

Effective date: 20160907

AS Assignment

Owner name: EMC IP HOLDING COMPANY LLC, MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:EMC CORPORATION;REEL/FRAME:040203/0001

Effective date: 20160906

AS Assignment

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., TEXAS

Free format text: SECURITY AGREEMENT;ASSIGNORS:CREDANT TECHNOLOGIES, INC.;DELL INTERNATIONAL L.L.C.;DELL MARKETING L.P.;AND OTHERS;REEL/FRAME:049452/0223

Effective date: 20190320

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

AS Assignment

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., TEXAS

Free format text: SECURITY AGREEMENT;ASSIGNORS:CREDANT TECHNOLOGIES INC.;DELL INTERNATIONAL L.L.C.;DELL MARKETING L.P.;AND OTHERS;REEL/FRAME:053546/0001

Effective date: 20200409

AS Assignment

Owner name: WYSE TECHNOLOGY L.L.C., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: SCALEIO LLC, MASSACHUSETTS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: MOZY, INC., WASHINGTON

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: MAGINATICS LLC, CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: FORCE10 NETWORKS, INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: EMC IP HOLDING COMPANY LLC, TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: EMC CORPORATION, MASSACHUSETTS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: DELL SYSTEMS CORPORATION, TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: DELL SOFTWARE INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: DELL MARKETING L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: DELL INTERNATIONAL, L.L.C., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: DELL USA L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: CREDANT TECHNOLOGIES, INC., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: AVENTAIL LLC, CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: ASAP SOFTWARE EXPRESS, INC., ILLINOIS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

AS Assignment

Owner name: SCALEIO LLC, MASSACHUSETTS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

Owner name: EMC IP HOLDING COMPANY LLC (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MOZY, INC.), TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

Owner name: EMC CORPORATION (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MAGINATICS LLC), MASSACHUSETTS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

Owner name: DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO FORCE10 NETWORKS, INC. AND WYSE TECHNOLOGY L.L.C.), TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

Owner name: DELL INTERNATIONAL L.L.C., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

Owner name: DELL USA L.P., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

Owner name: DELL MARKETING L.P. (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO CREDANT TECHNOLOGIES, INC.), TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

Owner name: DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO ASAP SOFTWARE EXPRESS, INC.), TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

AS Assignment

Owner name: SCALEIO LLC, MASSACHUSETTS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

Owner name: EMC IP HOLDING COMPANY LLC (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MOZY, INC.), TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

Owner name: EMC CORPORATION (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MAGINATICS LLC), MASSACHUSETTS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

Owner name: DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO FORCE10 NETWORKS, INC. AND WYSE TECHNOLOGY L.L.C.), TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

Owner name: DELL INTERNATIONAL L.L.C., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

Owner name: DELL USA L.P., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

Owner name: DELL MARKETING L.P. (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO CREDANT TECHNOLOGIES, INC.), TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

Owner name: DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO ASAP SOFTWARE EXPRESS, INC.), TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8