US20120159106A1 - Data manipulation during memory backup - Google Patents

Data manipulation during memory backup

Info

Publication number
US20120159106A1
Authority
US
United States
Prior art keywords
data
source
sdram
signature
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US13/083,407
Other versions
US8738843B2
Inventor
Gary J. Piccirillo
Peter B. Chon
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Avago Technologies International Sales Pte Ltd
Original Assignee
LSI Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LSI Corp
Priority to US13/083,407 (granted as US8738843B2)
Assigned to LSI CORPORATION. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CHON, PETER B., PICCIRILLO, GARY J.
Priority to TW100119843A (TW201227746A)
Priority to KR1020110062097A (KR20120069517A)
Priority to JP2011143809A (JP2012133746A)
Priority to CN201110328441.0A (CN102567139B)
Priority to EP11194645A (EP2466473A1)
Publication of US20120159106A1
Publication of US8738843B2
Application granted
Assigned to AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LSI CORPORATION
Assigned to BANK OF AMERICA, N.A., AS COLLATERAL AGENT. PATENT SECURITY AGREEMENT. Assignors: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.
Assigned to AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD. TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS. Assignors: BANK OF AMERICA, N.A., AS COLLATERAL AGENT
Assigned to AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED. MERGER (SEE DOCUMENT FOR DETAILS). Assignors: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.
Assigned to AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED. CORRECTIVE ASSIGNMENT TO CORRECT THE EFFECTIVE DATE OF THE MERGER PREVIOUSLY RECORDED AT REEL: 047230 FRAME: 0910. ASSIGNOR(S) HEREBY CONFIRMS THE MERGER. Assignors: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.
Assigned to AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED. CORRECTIVE ASSIGNMENT TO CORRECT THE ERROR IN RECORDING THE MERGER IN THE INCORRECT US PATENT NO. 8,876,094 PREVIOUSLY RECORDED ON REEL 047351 FRAME 0384. ASSIGNOR(S) HEREBY CONFIRMS THE MERGER. Assignors: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.
Legal status: Active
Expiration: Adjusted

Classifications

    • All classifications fall under G06F (Physics; Computing; Electric digital data processing):
    • G06F 13/38: Information transfer, e.g. on bus
    • G06F 11/1441: Resetting or repowering
    • G06F 11/08: Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F 11/1052: Bypassing or disabling error detection or correction
    • G06F 11/1446: Point-in-time backing up or restoration of persistent data
    • G06F 12/16: Protection against loss of memory contents
    • G06F 13/1668: Details of memory controller
    • G06F 13/1694: Configuration of memory controller to different memory types

Definitions

  • All or most of the components of a computer or other electronic system may be integrated into a single integrated circuit (chip).
  • The chip may contain various combinations of digital, analog, mixed-signal, and radio-frequency functions. These integrated circuits may be referred to as a system-on-a-chip (SoC or SOC).
  • A typical application is in the area of embedded systems.
  • A variant of a system on a chip is the integration of many RAID functions on a single chip. This may be referred to as RAID on a chip (ROC).
  • RAID arrays may be configured in ways that provide redundancy and error recovery without any loss of data. RAID arrays may also be configured to increase read and write performance by allowing data to be read or written simultaneously to multiple disk drives. RAID arrays may also be configured to allow “hot-swapping” which allows a failed disk to be replaced without interrupting the storage services of the array.
  • The 1987 publication by David A. Patterson, et al., from the University of California at Berkeley titled “A Case for Redundant Arrays of Inexpensive Disks (RAID)” discusses the fundamental concepts and levels of RAID technology.
  • RAID storage systems typically utilize a controller that shields the user or host system from the details of managing the storage array.
  • The controller makes the storage array appear as one or more disk drives (or volumes). This is accomplished in spite of the fact that the data (or redundant data) for a particular volume may be spread across multiple disk drives.
  • An embodiment of the invention may therefore comprise a method of transferring data between a volatile memory and a nonvolatile memory, comprising: receiving a command data block having a memory address field, the memory address field having a first plurality of bits that indicate a location in the volatile memory and a second plurality of bits that indicate a data manipulation; based on the indicated data manipulation, selecting a source for data to be sent to the nonvolatile memory; transferring data from the volatile memory to the source; receiving manipulated data from the source; and, transferring the manipulated data to the nonvolatile memory. An illustrative C sketch of this backup flow follows these summary paragraphs.
  • An embodiment of the invention may therefore further comprise a method of transferring data between a nonvolatile memory and a volatile memory, comprising: receiving a command data block having a memory address field, the memory address field having a first plurality of bits that indicate a location in the volatile memory and a second plurality of bits that indicate a data manipulation; based on the indicated data manipulation, selecting a source for data to be sent to the volatile memory; transferring data from the nonvolatile memory to the source; receiving manipulated data from the source; and, transferring the manipulated data to the volatile memory.
  • An embodiment of the invention may therefore further comprise an integrated circuit, comprising: a data manipulation controller that receives a memory address field, the memory address field having a first plurality of bits that indicate a location in a volatile memory and a second plurality of bits that indicate a data manipulation, based on the indicated data manipulation, the data manipulation controller selects a source for data to be sent to a nonvolatile memory; a volatile memory controller to be coupled to a volatile memory, the volatile memory controller to facilitate the transfer of data from the volatile memory to the source; a nonvolatile memory controller to be coupled to the nonvolatile memory, the nonvolatile memory controller to receive manipulated data from the source and transfer the manipulated data to the nonvolatile memory.
  • An embodiment of the invention may therefore further comprise an integrated circuit, comprising: a data manipulation controller that receives a memory address field, the memory address field having a first plurality of bits that indicate a location in a volatile memory and a second plurality of bits that indicate a data manipulation, based on the indicated data manipulation, the data manipulation controller selects a source for data to be sent to the volatile memory; a nonvolatile memory controller to be coupled to a nonvolatile memory, the nonvolatile memory controller to facilitate the transfer of data from the nonvolatile memory to the source; a volatile memory controller to be coupled to the volatile memory, the volatile memory controller to receive manipulated data from the source and transfer the manipulated data to the volatile memory.
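  • The backup-direction method summarized above can be illustrated in C. The sketch below is illustrative only: the function and type names are hypothetical stand-ins rather than the patent's API, and the 40-bit location field follows the address-bit encoding detailed later in this document.

        #include <stdint.h>
        #include <stddef.h>

        /* Hypothetical stand-ins for the hardware blocks; none of these
           names come from the patent. Bit positions follow the encoding
           described later (A[0:39] location, manipulation bits above). */
        typedef enum { SRC_DIRECT, SRC_ENCRYPT, SRC_SIGNATURE } source_t;

        static source_t select_source(uint64_t manip)
        {
            if ((manip >> 4) & 1) return SRC_SIGNATURE;  /* SO: signature offload */
            if ((manip >> 6) & 1) return SRC_ENCRYPT;    /* E/D: encrypt data     */
            return SRC_DIRECT;                           /* unmodified SDRAM data */
        }

        static void   sdram_read_to_source(source_t s, uint64_t a, size_t n) { (void)s; (void)a; (void)n; }
        static size_t receive_from_source(source_t s, uint8_t *out, size_t n) { (void)s; (void)out; return n; }
        static void   flash_write(const uint8_t *buf, size_t n) { (void)buf; (void)n; }

        /* Receive a CDB address field, select a source based on the indicated
           manipulation, move data from volatile memory through that source,
           and transfer the manipulated data to nonvolatile memory. */
        void process_backup_cdb(uint64_t addr_field, size_t len)
        {
            uint64_t location = addr_field & ((1ULL << 40) - 1); /* first plurality of bits  */
            uint64_t manip    = addr_field >> 40;                /* second plurality of bits */

            source_t src = select_source(manip);
            sdram_read_to_source(src, location, len);

            uint8_t buf[512];
            size_t n = receive_from_source(src, buf, len < sizeof buf ? len : sizeof buf);
            flash_write(buf, n);
        }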
  • FIG. 1 is a block diagram of a power isolation and backup system.
  • FIG. 2 is a flowchart of a method of power isolation.
  • FIGS. 3A and 3B are block diagrams of data manipulation system configurations.
  • FIG. 4 is an illustration of a command data block (CDB).
  • FIG. 5 is a block diagram of a power isolation and backup system.
  • FIG. 6 is a block diagram of a computer system.
  • FIG. 1 is a block diagram of a power isolation and backup system.
  • In FIG. 1, isolation and backup system 100 comprises integrated circuit 110, power control 150, SDRAM 125, and nonvolatile memory (e.g., flash) 135.
  • Integrated circuit (IC) 110 includes SDRAM subsystem 115, control 140, clock generator 141, and other circuitry 111.
  • SDRAM subsystem 115 includes SDRAM controller 120 and nonvolatile memory controller 130.
  • Other circuitry 111 may include temporary storage 112 (e.g., cache memory, buffers, etc.).
  • SDRAM controller 120 interfaces with and controls SDRAM 125 via interface 121.
  • Nonvolatile memory controller 130 interfaces with and controls nonvolatile memory 135 via interface 131.
  • SDRAM subsystem 115 (and thus SDRAM controller 120 and nonvolatile memory controller 130) is operatively coupled to control 140, clock generator 141, other circuitry 111, and temporary storage 112.
  • Clock generator 141 is operatively coupled to control 140 and other circuitry 111.
  • Power control 150 provides power supply A (PWRA) 160 to IC 110.
  • Power control 150 provides power supply B (PWRB) 161 to SDRAM subsystem 115.
  • Power control 150 provides power supply C (PWRC) 162 to SDRAM 125.
  • Power control 150 provides power supply D (PWRD) 163 to nonvolatile memory 135.
  • Power control 150 provides a power fail signal 165 to control 140.
  • Power control 150 is also operatively coupled to SDRAM subsystem 115 by signals 166.
  • It should be understood that, as used in this application, SDRAM (Synchronous Dynamic Random Access Memory) is intended to include all volatile memory technologies.
  • SDRAM subsystem 115 may, in an embodiment, comprise a Static Random Access Memory (SRAM) controller and SDRAM 125 may comprise an SRAM device.
  • In an embodiment, when power control 150 detects a power failure condition (either an impending or an existing power failure), power control 150 notifies IC 110 of the condition via power fail signal 165. This starts a power isolation sequence to isolate SDRAM subsystem 115 from the rest of IC 110, and other circuitry 111 in particular. In an embodiment, the entire power isolation sequence is controlled by hardware (e.g., control 140, SDRAM subsystem 115, or both) with no interaction from software.
  • Upon receiving notification of a power fail condition, all of the interfaces (e.g., interfaces to other circuitry 111) connected to SDRAM subsystem 115 will be halted. On-chip temporary storage 112 will be flushed. It should be understood that although, in FIG. 1, temporary storage 112 is shown outside of SDRAM subsystem 115, temporary storage 112 may be part of SDRAM subsystem 115. In an example, temporary storage 112 may be a cache (e.g., a level 1, level 2, or level 3 cache), a posting buffer, or the like.
  • Once temporary storage 112 has been flushed, logic connected to SDRAM subsystem 115 indicates when the interfaces used for the flushes have halted. Once halted, these interfaces are not accepting any new cycles. Once all of the interfaces are halted, inputs that are required for external devices and internal core logic (i.e., other circuitry 111) are latched so that their state will not be lost when isolation occurs. Clocks that are not needed after the inputs are latched are gated off. The SDRAM subsystem will switch to internally generated clocks, or to clocks generated by a clock generator that shares power with SDRAM subsystem 115 (e.g., clock generator 141). Following this, inputs to SDRAM subsystem 115 not required for memory backup are isolated. In an embodiment, these inputs are driven to an inactive state.
  • After isolation of the inputs completes, SDRAM subsystem 115 (or control 140) signals (for example, using signals 166) power control 150 to remove PWRA 160. This results in power being turned off to all of IC 110 other than SDRAM subsystem 115.
  • SDRAM subsystem 115 is on a separate power plane from at least other circuitry 111. This allows power to be maintained (i.e., by PWRB 161) to the SDRAM subsystem until power is totally lost to isolation and backup system 100.
  • In addition to controlling the isolation and removal of power to all but SDRAM subsystem 115 (and any other logic needed by SDRAM subsystem 115), once the interfaces have halted and temporary storage 112 has been flushed, internal memory backup logic will start moving data from SDRAM 125 to nonvolatile memory 135. In an embodiment, these are the only cycles running on the entire chip once PWRA has been removed.
  • FIG. 1 illustrates connections between IC 110 and external logic, along with some of the internal connections that may be used for power isolation and subsequent memory backup.
  • When power control 150 detects a power failure, it notifies IC 110 via power fail signal 165.
  • Control 140 monitors power fail signal 165.
  • When control 140 sees power fail signal 165 asserted, and power isolation is enabled, control 140 notifies SDRAM subsystem 115 to begin an isolation sequence by asserting a power_iso_begin signal (not explicitly shown in FIG. 1). SDRAM subsystem 115 then performs the steps required for the power isolation sequence. The steps included in the power isolation sequence are explained in greater detail later in this specification.
  • Once the power isolation sequence has completed, a MSS_core_iso_ready signal (not explicitly shown in FIG. 1) is asserted to indicate that at least PWRA 160 can be removed.
  • Power control 150 disables PWRA 160, but keeps PWRB 161, PWRC 162, and PWRD 163 enabled. Disabling PWRA 160 removes power from portions of IC 110 other than circuitry that is connected to PWRB 161.
  • SDRAM subsystem 115, along with associated phase-locked loops (e.g., internal to clock generator 141) and I/Os (e.g., interfaces 121 and 131), is on a different power plane than the rest of IC 110. This plane is powered by PWRB 161 and will remain enabled.
  • In an example, the functional blocks that have at least a portion of their circuitry on this separate power plane are control 140, clock generator 141, and SDRAM subsystem 115.
  • In an embodiment, external SDRAM 125 remains powered by PWRC 162 and external nonvolatile memory 135 remains powered by PWRD 163. This reduces the amount of logic that must remain powered in order for the memory backup to be performed.
  • During the power isolation sequence, SDRAM subsystem 115 begins an SDRAM 125 memory backup at an appropriate time. This backup moves required (or requested) data from SDRAM 125 to nonvolatile memory 135. In an embodiment, the entire memory backup is performed without software intervention.
  • It should be understood that the methods discussed above, and illustrated in part by FIG. 1, for supplying power supplies 160-163 are exemplary ways of supplying (and removing) power to one or more components of isolation and backup system 100.
  • In the illustrated examples, all of the power supplies 160-163, and control of the various power domains/planes, are provided externally to IC 110. However, there are other methods for supplying (and removing) power to one or more components of isolation and backup system 100.
  • One method may use a single external power source per voltage and then create the different power domains/planes using switches internal to IC 110.
  • Another method may reduce the number of external voltages and use regulators internal to one or more components of isolation and backup system 100 (e.g., IC 110) to derive the various voltages, along with switches internal to IC 110 to control the different power domains/planes.
  • With these methods, power isolation is done in approximately the same way.
  • A difference is that the power control logic 150 that needs to be notified to keep power supplies 161-163 enabled may be located internally or externally.
  • FIG. 2 is a flowchart of a method of power isolation. The steps illustrated in FIG. 2 may be performed by one or more elements of isolation and backup system 100 .
  • Power is received for a first on-chip subsystem (202).
  • For example, PWRA 160, which powers other circuitry 111, may be received by IC 110.
  • An indicator of a power fail condition is received (204).
  • For example, power fail signal 165 may be received by IC 110. This may result in the power isolation sequence beginning when a power_iso_begin signal is asserted.
  • Interfaces to an SDRAM subsystem are halted (206).
  • Temporary storage is flushed to SDRAM (208).
  • For example, a level-3 cache, level-2 cache, posting buffer, or any other type of memory storage that is used to temporarily store a copy of the data to/from SDRAM 125 may be flushed.
  • Logic connected to each of the interfaces may return a halt indication when it has completed all outstanding cycles and stopped accepting any new ones.
  • Under hardware control, an on-chip SDRAM subsystem is isolated (210). For example, when the SDRAM interface (or temporary storage 112) has indicated that it has halted accepting cycles, its inputs will be isolated by setting them to inactive states. Once halts from the other interfaces are received, inputs that need to be preserved for external core devices and internal logic are latched. These inputs include such things as resets, signals for the PLL, and strap inputs. At this point in time, any clocks that are no longer needed by the SDRAM subsystem may be gated off to assist in reducing power consumption. Some amount of time later, a signal (e.g., MSS_core_iso_enable) may be asserted to indicate that all of the inputs to the SDRAM subsystem should be isolated and set to their inactive state.
  • A clock and power used by a first on-chip subsystem are gated off (212). For example, the clock going to temporary storage 112 may be switched to an internally generated clock.
  • Once the inputs have been isolated, a signal (e.g., MSS_core_iso_ready) may be asserted. This indicates, to power control logic 150, for example, that PWRA 160 connected to IC 110 can now be disabled.
  • A clock for use by the SDRAM subsystem is generated (214).
  • For example, clock generator 141 may generate a clock for the SDRAM subsystem to use when PWRA 160 is off.
  • Data is copied from SDRAM to nonvolatile memory (216).
  • For example, the memory backup from SDRAM 125 to nonvolatile memory 135 may be started by asserting a signal (e.g., flash_offload_begin).
  • Power is removed from the SDRAM subsystem, SDRAM, and nonvolatile memory (218). For example, either under the control of power control 150 upon the completion of the memory backup, or simply because power to the entire isolation and backup system 100 has failed, power is removed from SDRAM subsystem 115, SDRAM 125, and nonvolatile memory 135.
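  • The sequence above can be condensed into firmware-style pseudocode. In the C sketch below, the signal names (power_iso_begin, MSS_core_iso_enable, MSS_core_iso_ready, flash_offload_begin) come from this description, while the helper functions are hypothetical; the patent describes the sequence as hardware-controlled, so this rendering is purely illustrative.

        #include <stdbool.h>

        /* Hypothetical helpers; in the patent, this sequence is performed by
           hardware (e.g., control 140 and SDRAM subsystem 115), not software. */
        static bool power_fail_asserted(void)          { return true; } /* power fail signal 165 */
        static void assert_signal(const char *name)    { (void)name; }
        static void halt_interfaces(void)              { }              /* step 206 */
        static void flush_temporary_storage(void)      { }              /* step 208: caches, posting buffers */
        static bool all_interfaces_halted(void)        { return true; }
        static void latch_inputs_and_gate_clocks(void) { }              /* step 210: resets, PLL, straps */
        static void switch_to_internal_clocks(void)    { }              /* steps 212/214 */
        static void copy_sdram_to_flash(void)          { }              /* step 216 */

        void power_isolation_sequence(void)
        {
            if (!power_fail_asserted())            /* step 204 */
                return;
            assert_signal("power_iso_begin");      /* control 140 starts the sequence */

            halt_interfaces();                     /* step 206 */
            flush_temporary_storage();             /* step 208 */
            while (!all_interfaces_halted())       /* wait for halt indications */
                ;

            latch_inputs_and_gate_clocks();        /* step 210 */
            assert_signal("MSS_core_iso_enable");  /* isolate remaining inputs */
            switch_to_internal_clocks();           /* steps 212/214 */

            assert_signal("MSS_core_iso_ready");   /* PWRA 160 may now be removed */
            assert_signal("flash_offload_begin");  /* step 216: begin memory backup */
            copy_sdram_to_flash();
        }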
  • An advantage of isolating the power of SDRAM subsystem 115 during backup is reduced power consumption. Only the logic inside IC 110 that handles the memory backup, external SDRAM 125, and nonvolatile memory 135 remain powered. Reducing the power consumption increases the amount of time available to perform the memory backup before all of the remaining power is consumed. Having more time allows more memory to be backed up, and requires less external logic to maintain power until the backup is completed. Because the power isolation is being done, it may be advantageous to move the flash controller on-chip to reduce the power consumption and overall system cost required to do the memory backup.
  • In an embodiment, additional data protection is provided for the data that is backed up by performing encryption and/or a data integrity signature calculation as the data in SDRAM 125 is moved to nonvolatile memory 135.
  • Encryption of data provides a secure method of storing the data.
  • Data integrity signature calculation protects against most data errors likely to occur.
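  • The document does not specify which CRC or checksum the signature engines implement. Purely as an illustration of a data integrity signature calculation, the following C fragment shows a conventional bitwise CRC-32 running update of the kind that could be applied to the data stream during backup; the polynomial and width are assumptions, not taken from the patent.

        #include <stdint.h>
        #include <stddef.h>

        /* Illustrative running CRC-32 update (reflected form, polynomial
           0xEDB88320). Initialize with crc = 0; feed each chunk of the data
           stream as it moves between SDRAM and nonvolatile memory. */
        uint32_t crc32_update(uint32_t crc, const uint8_t *buf, size_t len)
        {
            crc = ~crc;
            for (size_t i = 0; i < len; i++) {
                crc ^= buf[i];
                for (int k = 0; k < 8; k++)
                    crc = (crc & 1) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
            }
            return ~crc;
        }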
  • SDRAM subsystem 115 moves data between SDRAM 125 and nonvolatile memory 135 when a memory backup or restore is required.
  • SDRAM subsystem 115 may use a list of CDBs (Command Descriptor Blocks) to indicate the data movement that is requested.
  • The format of these CDBs is typically pre-defined.
  • One of the fields in a CDB is a memory address field that indicates where in SDRAM 125 to read or write data. In an embodiment, the number of address bits provided in this field exceeds the number required to address all of SDRAM 125. Some of the address bits that are not required may be used to encode information on how the data should be manipulated as it is moved to or from SDRAM 125. This movement may occur when a memory backup or restore is performed, or at other times.
  • The encoding of the unused address bits may indicate whether the data should be encrypted/decrypted, whether signature generation is required, whether the signature should be offloaded or reset, and which signature engine to use.
  • When a request from nonvolatile memory controller 130 is received to read or write SDRAM 125, the aforementioned unused address bits may be interpreted to determine what data manipulation to perform as the data moves between SDRAM 125 and nonvolatile memory 135, via SDRAM subsystem 115.
  • FIGS. 3A and 3B are block diagrams of data manipulation system configurations.
  • In FIG. 3A, data manipulation system 300 comprises: SDRAM controller 310, flash controller 320, control 330, signature engines 340, encrypt/decrypt engine 350, and multiplexer (MUX) 360.
  • Control 330 is operatively coupled to SDRAM controller 310, flash controller 320, signature engines 340, encrypt/decrypt engine 350, and MUX 360.
  • Thus, control 330 may receive commands, signals, CDBs, etc., from flash controller 320, perform arbitration, and otherwise manage the data flows and configuration of data manipulation system 300.
  • SDRAM controller 310 is configured, via coupling 371, to send data read from an SDRAM (not shown in FIG. 3A) to signature engines 340, encrypt/decrypt engine 350, and a first input of MUX 360.
  • Encrypt/decrypt engine 350 is configured, via coupling 372, to send encrypted data to a second input of MUX 360.
  • Signature engines 340 are configured, via coupling 373, to send data integrity signatures to a third input of MUX 360.
  • MUX 360 is controlled by control 330 to send one of the following to flash controller 320: unmodified data read from the SDRAM, encrypted data, or data integrity signatures.
  • Flash controller 320 may store the unmodified data read from the SDRAM, the encrypted data, or the data integrity signatures in a flash memory (not shown in FIG. 3A).
  • FIG. 3A illustrates the configuration for data flow and control when a read-from-SDRAM (e.g., SDRAM 125) request is received from flash controller 320 by control 330.
  • This configuration and flow are used when a backup of SDRAM memory is required.
  • Signature engines 340 and encrypt/decrypt engine 350 are used for both read and write requests.
  • The data connections and flow for flash write requests (which correspond to SDRAM reads) are illustrated in FIG. 3A.
  • The data connections for flash read requests (which correspond to SDRAM writes) are illustrated in FIG. 3B.
  • Flash controller 320 sends a read request to control 330.
  • Encoded address lines (or a dedicated field) of the request are examined by control 330 to determine where to route the read data being returned from SDRAM controller 310 and what data manipulation, if any, is required.
  • In an embodiment, address bits [46:40] contain an encoding with the following mapping: bits 40-42 (SES[0:2]) specify which of eight signature engines 340 should take the action (if any) specified by the other bits of the encoding; bit 43 (SG) determines whether the specified signature engine should generate a data integrity signature using the read data as input; bit 44 (SO) tells the specified signature engine to output a data integrity signature (which, depending on the state of MUX 360, may be sent to flash controller 320 for storage); bit 45 (SR) resets the data integrity signature of the specified signature engine; and bit 46 (E/D) determines whether encrypted data from the output of encryption/decryption engine 350 should be sent to flash controller 320.
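  • This encoding reduces to a straightforward bit-field decode. A minimal C sketch follows, with the struct and function names invented for illustration; the bit positions are exactly those listed above.

        #include <stdint.h>
        #include <stdbool.h>

        /* Decoded view of the CDB memory address field: A[0:39] addresses
           SDRAM; bits [46:40] select and control the data manipulation. */
        typedef struct {
            uint64_t sdram_addr;  /* A[0:39]: location in SDRAM                     */
            unsigned ses;         /* bits 40-42 (SES[0:2]): signature engine select */
            bool     sg;          /* bit 43 (SG): generate signature from the data  */
            bool     so;          /* bit 44 (SO): output (offload) the signature    */
            bool     sr;          /* bit 45 (SR): reset the signature               */
            bool     ed;          /* bit 46 (E/D): encrypt/decrypt the data         */
        } cdb_addr_t;

        static cdb_addr_t decode_cdb_addr(uint64_t a)
        {
            cdb_addr_t d;
            d.sdram_addr = a & ((1ULL << 40) - 1);
            d.ses        = (unsigned)((a >> 40) & 0x7);  /* one of 8 engines */
            d.sg         = (a >> 43) & 1;
            d.so         = (a >> 44) & 1;
            d.sr         = (a >> 45) & 1;
            d.ed         = (a >> 46) & 1;
            return d;
        }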
  • FIG. 4 is an illustration of a command data block (CDB).
  • An address field for address bits 0-46 is illustrated.
  • Also illustrated are a field specifying the SDRAM address bits that are used (A[0:39]) and a field containing the encoded address bits (A[40:46]).
  • The individual bit fields (SES[0:2], SG, SO, SR, and E/D) of the encoded address bits are also illustrated.
  • Based on the encoding, an indication will be sent to MUX 360, which results in one of three different sources being used by flash controller 320.
  • The data will come directly from SDRAM controller 310, from encryption/decryption engine 350, or, in the case of a signature offload, from one of signature engines 340. If the encoding indicates that encryption should be performed, encryption/decryption engine 350 will be controlled by control 330 to receive the read data from SDRAM controller 310. Once encryption/decryption engine 350 receives the data from SDRAM controller 310, it performs the data encryption, sends the result to MUX 360 for routing to flash controller 320, and waits for flash controller 320 to accept the data.
  • The encoding also indicates whether signature generation should be done on the data being transferred to flash memory.
  • One of the eight signature engines 340, as indicated by the signature engine select (SES[0:2]) field of the encoding, will be notified that its CRC/checksum signature value should be updated.
  • The data is also sent to at least the specified signature engine 340.
  • Once the selected signature engine 340 sees the SDRAM data being accepted by either of those blocks, the current CRC/checksum signature is updated using that data.
  • The encoding also indicates whether a signature offload should be output. If a signature offload is required, a read command will not be issued by control 330 to SDRAM controller 310. Instead, control 330 will instruct the selected signature engine 340 to send its data integrity signature to flash controller 320.
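  • Putting the FIG. 3A behavior together, the control decisions for one flash write request might look like the following C sketch. It reuses cdb_addr_t from the previous fragment; the helper functions are hypothetical stand-ins for actions that control 330, the engines, and MUX 360 perform in hardware, and the buffer-at-a-time flow is a simplification.

        #include <stdint.h>
        #include <stddef.h>
        /* Reuses cdb_addr_t from the previous fragment. */

        extern void     signature_reset(unsigned ses);
        extern void     signature_update(unsigned ses, const uint8_t *d, size_t n);
        extern uint32_t signature_value(unsigned ses);
        extern size_t   sdram_read(uint64_t addr, uint8_t *buf, size_t n);
        extern void     encrypt_in_place(uint8_t *buf, size_t n);
        extern void     flash_accept(const uint8_t *buf, size_t n);

        void handle_flash_write(cdb_addr_t d, size_t len)
        {
            if (d.sr)
                signature_reset(d.ses);           /* SR: reset engine signature */

            if (d.so) {
                /* SO: signature offload; no SDRAM read is issued. The selected
                   engine's signature is routed through the MUX to the flash. */
                uint32_t sig = signature_value(d.ses);
                flash_accept((const uint8_t *)&sig, sizeof sig);
                return;
            }

            uint8_t buf[512];
            size_t n = sdram_read(d.sdram_addr, buf, len < sizeof buf ? len : sizeof buf);

            if (d.sg)
                signature_update(d.ses, buf, n);  /* SG: update CRC/checksum  */
            if (d.ed)
                encrypt_in_place(buf, n);         /* E/D: take encrypted path */

            flash_accept(buf, n);                 /* direct or encrypted data */
        }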
  • In FIG. 3B, flash controller 320 is configured, via coupling 381, to send data read from a flash memory (not shown in FIG. 3B) to signature engines 340, encrypt/decrypt engine 350, and a first input of MUX 361.
  • Encrypt/decrypt engine 350 is configured, via coupling 382, to send decrypted data to a second input of MUX 361.
  • Signature engines 340 are configured, via coupling 383, to indicate a current value of a selected data integrity signature.
  • MUX 361 is controlled by control 330 to send either unmodified data read from the flash memory (via flash controller 320) or decrypted data to SDRAM controller 310.
  • SDRAM controller 310 may store the unmodified data read from the flash memory, or the decrypted data, in an SDRAM (not shown in FIG. 3B).
  • Control 330 may receive a write command from flash controller 320.
  • Control 330 may issue a write request to SDRAM controller 310.
  • The encoded address lines of the request are examined to determine where to route the write data being sent to SDRAM controller 310 (from flash controller 320) and what data manipulation, if any, is required. The same encoding described in the discussion of FIG. 3A may be used.
  • Either unmodified data from flash controller 320 or decrypted data from encryption/decryption engine 350 will be selected by MUX 361 to send to SDRAM controller 310. If the encoding indicates that decryption should be performed, encryption/decryption engine 350 will be controlled to accept the data from flash controller 320. Once encryption/decryption engine 350 accepts the data from flash controller 320, it performs the data decryption, sends the result to SDRAM controller 310, and waits for SDRAM controller 310 to accept the data. The encoding will also indicate whether signature generation needs to be done for the data being transferred to SDRAM.
  • One of the eight signature engines 340 is controlled to update its CRC/checksum signature value.
  • In an embodiment, the signature generation is always done on decrypted data. The selected signature engine 340 is therefore controlled to select between data from flash controller 320 and the decrypted results from encryption/decryption engine 350 to update the data integrity signature value.
  • The data will also be sent to a selected signature engine 340. Once the selected signature engine 340 sees the data being accepted by SDRAM controller 310, the current CRC/checksum signature is updated using that data.
  • The current value of one of the eight data integrity signatures can be selected and read by software via coupling 383.
  • This value may be compared by software with a backup signature that is restored from flash memory to SDRAM. This may be done to verify that no data errors occurred while the data was backed up or restored.
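  • Because software can read a selected signature value via coupling 383, the post-restore check described above reduces to a comparison. A minimal C sketch follows; both accessor functions are invented for illustration, and the 32-bit signature width is an assumption rather than something the patent specifies.

        #include <stdint.h>
        #include <stdbool.h>

        /* Hypothetical accessors: the first reads the currently selected
           engine's signature (via coupling 383); the second fetches the
           signature that was stored during backup and restored from flash. */
        extern uint32_t read_signature_engine(unsigned ses);
        extern uint32_t restored_backup_signature(void);

        /* Returns true if no data errors occurred while the data was backed
           up or restored, per the comparison described above. */
        bool verify_restore(unsigned ses)
        {
            return read_signature_engine(ses) == restored_backup_signature();
        }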
  • FIG. 5 is a block diagram of a power isolation and backup system.
  • In FIG. 5, isolation and backup system 500 comprises integrated circuit 510, power control 550, SDRAM 525, and nonvolatile memory (e.g., flash) 535.
  • Integrated circuit (IC) 510 includes SDRAM subsystem 515, control 540, clock generator 541, and other circuitry 511.
  • SDRAM subsystem 515 includes SDRAM controller 520, nonvolatile memory controller 530, and data manipulation 570.
  • Other circuitry 511 may include temporary storage 512 (e.g., cache memory, buffers, etc.).
  • SDRAM controller 520 interfaces with and controls SDRAM 525 via interface 521.
  • Nonvolatile memory controller 530 interfaces with and controls nonvolatile memory 535 via interface 531.
  • SDRAM subsystem 515 (and thus SDRAM controller 520, nonvolatile memory controller 530, and data manipulation 570) is operatively coupled to control 540, clock generator 541, other circuitry 511, and temporary storage 512.
  • Clock generator 541 is operatively coupled to control 540 and other circuitry 511.
  • Power control 550 provides power supply A (PWRA) 560 to IC 510.
  • Power control 550 provides power supply B (PWRB) 561 to SDRAM subsystem 515.
  • Power control 550 provides power supply C (PWRC) 562 to SDRAM 525.
  • Power control 550 provides power supply D (PWRD) 563 to nonvolatile memory 535.
  • Power control 550 provides a power fail signal 565 to control 540.
  • Power control 550 is also operatively coupled to SDRAM subsystem 515 by signals 566.
  • In an embodiment, when power control 550 detects a power failure condition (either an impending or an existing power failure), power control 550 notifies IC 510 of the condition via power fail signal 565. This starts a power isolation sequence to isolate SDRAM subsystem 515 from the rest of IC 510, and other circuitry 511 in particular. In an embodiment, the entire power isolation sequence is controlled by hardware (e.g., control 540, SDRAM subsystem 515, or both) with no interaction from software.
  • Upon receiving notification of a power fail condition, all of the interfaces (e.g., interfaces to other circuitry 511) connected to SDRAM subsystem 515 will be halted. On-chip temporary storage 512 will be flushed. It should be understood that although, in FIG. 5, temporary storage 512 is shown outside of SDRAM subsystem 515, temporary storage 512 may be part of SDRAM subsystem 515. In an example, temporary storage 512 may be a cache (e.g., a level 1, level 2, or level 3 cache), a posting buffer, or the like.
  • Once temporary storage 512 has been flushed, logic connected to SDRAM subsystem 515 indicates when the interfaces used for the flushes have halted. Once halted, these interfaces are not accepting any new cycles. Once all of the interfaces are halted, inputs that are required for external devices and internal core logic (i.e., other circuitry 511) are latched so that their state will not be lost when isolation occurs. Clocks that are not needed after the inputs are latched are gated off. The SDRAM subsystem will switch to internally generated clocks, or to clocks generated by a clock generator that shares power with SDRAM subsystem 515 (e.g., clock generator 541). Following this, inputs to SDRAM subsystem 515 not required for memory backup are isolated. In an embodiment, these inputs are driven to an inactive state.
  • After isolation of the inputs completes, SDRAM subsystem 515 (or control 540) signals (for example, using signals 566) power control 550 to remove PWRA 560. This results in power being turned off to all of IC 510 other than SDRAM subsystem 515.
  • SDRAM subsystem 515 is on a separate power plane from at least other circuitry 511. This allows power to be maintained (i.e., by PWRB 561) to the SDRAM subsystem until power is totally lost to isolation and backup system 500.
  • As data is moved between SDRAM 525 and nonvolatile memory 535, in either direction, it may be manipulated by data manipulation 570.
  • Data manipulation 570 is configured, operates, and functions in the same manner as described previously with reference to data manipulation system 300 of FIGS. 3A and 3B.
  • For example, data manipulation 570 can be configured to encrypt/decrypt data and/or to calculate/check data integrity signatures.
  • In an embodiment, the functions, data flow, and configuration of data manipulation 570 may be performed while PWRA 560 is off (for example, to save encrypted data and/or to calculate and store a data integrity signature). In another embodiment, the functions, data flow, and configuration of data manipulation 570 may be performed while PWRA 560 is on (for example, to restore encrypted data and/or to calculate and store a data integrity signature).
  • The methods, systems, and devices described above may be implemented in computer systems, or stored by computer systems. The methods described above may also be stored on a computer readable medium. Devices, circuits, and systems described herein may be implemented using computer-aided design tools available in the art, and embodied by computer-readable files containing software descriptions of such circuits. This includes, but is not limited to, isolation and backup systems 100 and 500, ICs 110 and 510, power controls 150 and 550, SDRAM subsystems 115 and 515, and their components. These software descriptions may be behavioral, register transfer, logic component, transistor, or layout geometry-level descriptions. Moreover, the software descriptions may be stored on storage media or communicated by carrier waves.
  • Data formats in which such descriptions may be implemented include, but are not limited to: formats supporting behavioral languages like C, formats supporting register transfer level (RTL) languages like Verilog and VHDL, formats supporting geometry description languages (such as GDSII, GDSIII, GDSIV, CIF, and MEBES), and other suitable formats and languages.
  • Data transfers of such files on machine-readable media may be done electronically over the diverse media on the Internet or, for example, via email.
  • Physical files may be implemented on machine-readable media such as 4 mm magnetic tape, 8 mm magnetic tape, 3½-inch floppy media, CDs, DVDs, and so on.
  • FIG. 6 illustrates a block diagram of a computer system.
  • Computer system 600 includes communication interface 620, processing system 630, storage system 640, and user interface 660.
  • Processing system 630 is operatively coupled to storage system 640.
  • Storage system 640 stores software 650 and data 670.
  • Processing system 630 is operatively coupled to communication interface 620 and user interface 660.
  • Computer system 600 may comprise a programmed general-purpose computer.
  • Computer system 600 may include a microprocessor.
  • Computer system 600 may comprise programmable or special purpose circuitry.
  • Computer system 600 may be distributed among multiple devices, processors, storage, and/or interfaces that together comprise elements 620-670.
  • Communication interface 620 may comprise a network interface, modem, port, bus, link, transceiver, or other communication device. Communication interface 620 may be distributed among multiple communication devices.
  • Processing system 630 may comprise a microprocessor, microcontroller, logic circuit, or other processing device. Processing system 630 may be distributed among multiple processing devices.
  • User interface 660 may comprise a keyboard, mouse, voice recognition interface, microphone and speakers, graphical display, touch screen, or other type of user interface device. User interface 660 may be distributed among multiple interface devices.
  • Storage system 640 may comprise a disk, tape, integrated circuit, RAM, ROM, network storage, server, or other memory function. Storage system 640 may be a computer readable medium. Storage system 640 may be distributed among multiple memory devices.
  • Processing system 630 retrieves and executes software 650 from storage system 640.
  • Processing system 630 may retrieve and store data 670.
  • Processing system 630 may also retrieve and store data via communication interface 620.
  • Processing system 630 may create or modify software 650 or data 670 to achieve a tangible result.
  • Processing system 630 may control communication interface 620 or user interface 660 to achieve a tangible result.
  • Processing system 630 may retrieve and execute remotely stored software via communication interface 620.
  • Software 650 and remotely stored software may comprise an operating system, utilities, drivers, networking software, and other software typically executed by a computer system.
  • Software 650 may comprise an application program, applet, firmware, or other form of machine-readable processing instructions typically executed by a computer system.
  • Software 650 or remotely stored software may direct computer system 600 to operate as described herein.

Abstract

Disclosed is a power isolation and backup system. When a power fail condition is detected, temporary storage is flushed to an SDRAM. After the flush, interfaces are halted, and power is removed from most of the chip except the SDRAM subsystem. The SDRAM subsystem copies data from an SDRAM to a flash memory. On the way, the data may be encrypted, and/or a data integrity signature calculated. To restore data, the SDRAM subsystem copies data from the flash memory to the SDRAM. On the way, the data being restored may be decrypted, and/or a data integrity signature checked.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • The present patent application is based upon and claims the benefit of U.S. Provisional Patent Application Ser. No. 61/424,701, filed on Dec. 20, 2010, by Peter B. Chon, entitled “Low Power Hardware Controlled Memory Backup that includes Encryption and Signature Generation,” which application is hereby specifically incorporated herein by reference for all that it discloses and teaches.
  • BACKGROUND OF THE INVENTION
  • All or most of the components of a computer or other electronic system may be integrated into a single integrated circuit (chip). The chip may contain various combinations of digital, analog, mixed-signal, and radio-frequency functions. These integrated circuits may be referred to as a system-on-a-chip (SoC or SOC). A typical application is in the area of embedded systems. A variant of a system on a chip is the integration of many RAID functions on a single chip. This may be referred to as RAID on a chip (ROC).
  • RAID arrays may be configured in ways that provide redundancy and error recovery without any loss of data. RAID arrays may also be configured to increase read and write performance by allowing data to be read or written simultaneously to multiple disk drives. RAID arrays may also be configured to allow “hot-swapping” which allows a failed disk to be replaced without interrupting the storage services of the array. The 1987 publication by David A. Patterson, et al., from the University of California at Berkeley titled “A Case for Redundant Arrays of Inexpensive Disks (RAID)” discusses the fundamental concepts and levels of RAID technology.
  • RAID storage systems typically utilize a controller that shields the user or host system from the details of managing the storage array. The controller makes the storage array appear as one or more disk drives (or volumes). This is accomplished in spite of the fact that the data (or redundant data) for a particular volume may be spread across multiple disk drives.
  • SUMMARY OF THE INVENTION
  • An embodiment of the invention may therefore comprise a method of transferring data between a volatile memory and a nonvolatile memory, comprising: receiving a command data block having a memory address field, the memory address field having a first plurality of bits that indicate a location in the volatile memory and a second plurality of bits that indicate a data manipulation; based on the indicated data manipulation, selecting a source for data to be sent to the nonvolatile memory; transferring data from the volatile memory to the source; receiving manipulated data from the source; and, transferring the manipulated data to the nonvolatile memory.
  • An embodiment of the invention may therefore further comprise a method of transferring data between a nonvolatile memory and a volatile memory, comprising: receiving a command data block having a memory address field, the memory address field having a first plurality of bits that indicate a location in the volatile memory and a second plurality of bits that indicate a data manipulation; based on the indicated data manipulation, selecting a source for data to be sent to the volatile memory ; transferring data from the nonvolatile memory to the source; receiving manipulated data from the source; and, transferring the manipulated data to the volatile memory.
  • An embodiment of the invention may therefore further comprise an integrated circuit, comprising: a data manipulation controller that receives a memory address field, the memory address field having a first plurality of bits that indicate a location in a volatile memory and a second plurality of bits that indicate a data manipulation, based on the indicated data manipulation, the data manipulation controller selects a source for data to be sent to a nonvolatile memory; a volatile memory controller to be coupled to a volatile memory, the volatile memory controller to facilitate the transfer of data from the volatile memory to the source; a nonvolatile memory controller to be coupled to the nonvolatile memory, the nonvolatile memory controller to receive manipulated data from the source and transfer the manipulated data to the nonvolatile memory.
  • An embodiment of the invention may therefore further comprise an integrated circuit, comprising: a data manipulation controller that receives a memory address field, the memory address field having a first plurality of bits that indicate a location in a volatile memory and a second plurality of bits that indicate a data manipulation, based on the indicated data manipulation, the data manipulation controller selects a source for data to be sent to the volatile memory; an nonvolatile memory controller to be coupled to a nonvolatile memory, the nonvolatile memory controller to facilitate the transfer of data from the nonvolatile memory to the source; a volatile memory controller to be coupled to the volatile memory, the volatile memory controller to receive manipulated data from the source and transfer the manipulated data to the volatile memory.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of a power isolation and backup system.
  • FIG. 2 is a flowchart of a method of power isolation.
  • FIGS. 3A and 3B are block diagrams of data manipulation system configurations.
  • FIG. 4 is an illustration of a command data block (CDB).
  • FIG. 5 a block diagram of a power isolation and backup system.
  • FIG. 6 is a block diagram of a computer system.
  • DETAILED DESCRIPTION OF THE EMBODIMENTS
  • FIG. 1 is a block diagram of a power isolation and backup system. In FIG. 1, isolation and backup system 100 comprises: integrated circuit 110, power control 150, SDRAM 125, and nonvolatile memory (e.g., flash) 135. Integrated circuit (IC) 110 includes SDRAM subsystem 115, control 140, clock generator 141 and other circuitry 111. SDRAM subsystem 115 includes SDRAM controller 120 and nonvolatile memory controller 130. Other circuitry 112 may include temporary storage 112 (e.g., cache memory, buffers, etc.). SDRAM controller 120 interfaces with and controls SDRAM 125 via interface 121. Nonvolatile memory controller 130 interfaces with and controls nonvolatile memory 135 via interface 131. SDRAM subsystem 115 (and thus SDRAM controller 120 and nonvolatile memory controller 130) is operatively coupled to control 140, clock generator 141, other circuitry 111, and temporary storage 112. Clock generator 141 is operatively coupled to control 140 and other circuitry 111.
  • Power control 150 provides power supply A (PWRA) 160 to IC 110. Power control 150 provides power supply B (PWRB) 161 to SDRAM subsystem 115. Power control 150 provides power supply C (PWRC) 162 to SDRAM 125. Power control 150 provides power supply D (PWRD) 163 to nonvolatile memory 135. Power control 150 provides a power fail signal 165 to control 140. Power control 150 is also operatively coupled to SDRAM subsystem by signals 166.
  • It should be understood that as used in this application SDRAM (Synchronous Dynamic Random Access Memory) is intended to include all volatile memory technologies. Thus, SDRAM subsystem 115 may, in an embodiment, comprise a Static Random Access Memory (SRAM) controller and SDRAM 125 may comprise a SRAM device.
  • In an embodiment, when power control 150 detects a power failure condition (either impending power failure or existing power failure) power control 150 notifies IC 110 of the condition via a power fail signal 165. This will starts a power isolation sequence to isolate SDRAM subsystem 115 from the rest of IC 110, and other circuitry 111, in particular. In an embodiment, the entire power isolation sequence is controlled by hardware (e.g., control 140, SDRAM subsystem 115, or both) with no interaction from software.
  • Upon receiving notification of a power fail condition, all of the interfaces (e.g., interfaces to other circuitry 111) connected to SDRAM subsystem 115 will be halted. On-chip temporary storage 112 will be flushed. It should be understood that although, in FIG. 1, temporary storage 112 is shown outside of SDRAM subsystem 115, temporary storage 112 may be part of SDRAM subsystem 115. In an example, temporary storage 112 may be a cache (e.g., level 1 cache, level 2 cache, level 3 cache), a posting buffer, or the like.
  • Once temporary storage 112 has been flushed, logic connected to SDRAM subsystem 115 indicates when the interfaces used for the flushes have halted. Once halted, these interfaces are not accepting any new cycles. Once all of the interfaces are halted, inputs that are required for external devices and internal core logic (i.e., other circuitry 111) are latched so that their state will not be lost when isolation occurs. Clocks that are not needed after the inputs are latched are gated off. The SDRAM subsystem will switch to internally generated clocks, or to clocks generated by a clock generator that shares power with SDRAM subsystem 115 (e.g., clock generator 141). Following this, inputs to SDRAM subsystem 115 not required for memory backup are isolated. In an embodiment, these inputs are driven to an inactive state.
  • After isolation of the inputs completes, SDRAM subsystem 115 (or control 140) signals (for example, using signals 166) power control 150 to remove PWRA 160. This results in power being turned off to all of IC 110 other than SDRAM subsystem 115. SDRAM subsystem 115 is on a separate power plane from at least other circuitry 111. This allows power to be maintained (i.e., by PWRB 161) to the SDRAM subsystem until power is totally lost to isolation and backup system 100.
  • In addition to controlling the isolation and removal of power to all but the SDRAM subsystem 115 (and any other logic needed by SDRAM subsystem 115), once the interfaces have halted and temporary storage 112 been flushed, internal memory backup logic will start moving data from SDRAM 125 to nonvolatile memory 135. In an embodiment, these are the only cycles running on the entire chip once PWRA has been removed.
  • FIG. 1 illustrates connections between IC 110 chip and external logic along with some of the internal connections that may be used for power isolation and subsequent memory backup. When the power control 150 detects a power failure, it notifies IC 110 via power fail signal 165. Control 140 monitors power fail signal 165. When control 140 sees power fail signal 165 asserted, and the power isolation is enabled to be done, control 140 notifies SDRAM subsystem 115 to begin an isolation sequence by asserting a power_iso_begin signal (not explicitly shown in FIG. 1). SDRAM subsystem 115 then performs steps required for the power isolation sequence. The steps included in the power isolation sequence are explained in greater detail later in this specification.
  • Once the power isolation sequence has completed, a MSS_core_iso_ready signal (not explicitly shown in FIG. 1) is asserted to indicate that at least PWRA 160 can be removed. Power control 150 disables PWRA 160, but will keeps PWRB 161, PWRC 162, and PWRD 163 enabled. Disabling PWRA 160 removes power from portions of IC 110 other than circuitry that is connected to PWRB 161. SDRAM subsystem 115 along with associated phase locked loops (e.g., internal to clock generator 141) and IO's (e.g., interfaces 121 and 131) are on a different power plane than the rest of IC 110. This plane is powered by PWRB 161 and will remain enabled. In an example, the functional blocks that have at least a portion of their circuitry on this separate power plane are control 140, clock generator 141, and SDRAM subsystem 115. In an embodiment, an external SDRAM 125 remains powered by PWRC 162 and an external nonvolatile memory remains powered by PWRD 163. This is a reduced amount of logic that must remain powered in order for the memory backup to be performed.
  • During the power isolation sequence, SDRAM subsystem 115 begins an SDRAM 125 memory backup at an appropriate time. This backup moves required (or requested) data from SDRAM 125 to nonvolatile memory 135. In an embodiment, the entire memory backup is performed without software intervention.
  • It should be understood that the methods discussed above and illustrated in part by FIG. 1 for supplying power supplies 160-163 are exemplary ways for supplying (and removing) power to one or more components of isolation and backup system 100. In the illustrated examples, all the power supplies 160-163 and control of the various power domains/planes is done externally to IC 110. However, there are other methods for supplying (and removing) power to one or more components of isolation and backup system 100. One method may use a single external power source per voltage and then create the different power domains/planes using switches internal to IC 110. Another method may reduce the number of external voltages, and use regulators internal to one or more components of isolation and backup system 100 (e.g., IC 110) to get various voltages along with switches internal to IC 110 to control the different power domains/planes. With these methods, power isolation is done approximately the same way. A difference is that power control logic 150 that needs to be notified to keep power supplies 161-163 enabled may be located internally or externally.
  • FIG. 2 is a flowchart of a method of power isolation. The steps illustrated in FIG. 2 may be performed by one or more elements of isolation and backup system 100. Power is received for a first on-chip subsystem (202). For example, PWRA 160, which powers other circuitry 111, may be received by IC 100. An indicator of a power fail condition is received (204). For example, power fail signal 165 may be received by IC 110. This may result in the power isolation sequence beginning when a power_iso_begin signal is asserted.
  • Interfaces to an SDRAM subsystem are halted (206). Temporary storage is flushed to SDRAM (208). For example, a level-3 cache, level-2 cache, posting buffer, or any other type of memory storage that is used to temporarily store a copy of the data to/from SDRAM 125 may be flushed. Logic connected to each of the interfaces may return a halt indication when they have completed all outstanding cycles and stopped accepting any new ones.
  • Under hardware control, an on-chip SDRAM subsystem is isolated (210). For example, when the SDRAM interface (or temporary storage 112) has indicated it has halted accepting cycles, its inputs will be isolated by setting them to inactive states. Once halts from the other interfaces are received, inputs that need to be preserved for external core devices and internal logic are latched. These inputs include such things as resets, signals for the PLL and strap inputs. At this point in time, any clocks that are not needed by the SDRAM subsystem anymore may be gated off to assist in reducing power consumption. Some amount of time later, a signal (e.g., MSS_core iso_enable) may be asserted which will indicate to isolate all of the inputs to the SDRAM subsystem and set them to their inactive state.
  • A clock and power used by a first on-chip subsystem are gated off (212). For example, the clock going to temporary storage 112 may be switched to an internally generated clock. Once the inputs have been isolated, a signal (e.g., MSS_core_iso_ready) may be asserted. This indicates, to power control logic 150, for example, that PWRA 160 connected to IC 110 can now be disabled.
  • A clock for use by the SDRAM subsystem is generated (214). For example, clock generator 141 may generate a clock for use by the SDRAM subsystem to be used when PWRA 160 is off. Data is copied from SDRAM to nonvolatile memory (216). For example, the memory backup from SDRAM 125 to nonvolatile memory 135 may start by asserting a signal (e.g., flash_offload_begin). Power is removed from the SDRAM subsystem, SDRAM, and nonvolatile memory (218). For example, either under the control of power control 150 upon the completion of memory backup, or simply because power to the entire isolation and backup system 100 has failed, power is removed from SDRAM subsystem 115, SDRAM 125, and nonvolatile memory 135.
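  • Purely as an illustration of the ordering of steps (202)-(218), the sequence can be modeled in C as below. The function name and printed labels are hypothetical stand-ins; in the described system the sequence is carried out by hardware with no software intervention.

    /* Hypothetical C model of the FIG. 2 power isolation and backup
       sequence. The real sequence is implemented in hardware; these
       printf stubs only mirror the step order described above. */
    #include <stdio.h>

    static void step(const char *what) { printf("%s\n", what); }

    void power_isolation_sequence(void)
    {
        step("(204) power fail received; power_iso_begin asserted");
        step("(206) halt interfaces to the SDRAM subsystem");
        step("(208) flush temporary storage (caches, posting buffers) to SDRAM");
        step("latch resets, PLL signals, and strap inputs; gate unneeded clocks");
        step("(210) assert MSS_core_iso_enable; drive isolated inputs inactive");
        step("(212) gate off first-subsystem clock and power; switch clocks");
        step("assert MSS_core_iso_ready; PWRA may now be disabled");
        step("(214) clock generator supplies the SDRAM subsystem clock");
        step("(216) assert flash_offload_begin; copy SDRAM to nonvolatile memory");
        step("(218) remove power from SDRAM subsystem, SDRAM, and flash");
    }

    int main(void) { power_isolation_sequence(); return 0; }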
  • An advantage of isolating the power of SDRAM subsystem 115 during backup is that less power is consumed. Only the logic inside IC 110 that handles the memory backup, external SDRAM 125, and nonvolatile memory 135 are powered. Reducing the power consumption increases the amount of time available to perform the memory backup before all of the remaining power is consumed. Having more time allows more memory to be backed up, and less external logic is required to maintain power until the backup completes. Because power isolation is performed, it may also be advantageous to move the flash controller on-chip to reduce the power consumption and overall system cost required to perform the memory backup.
  • In an embodiment, additional data protection is provided for the data that is backed up by performing encryption and/or a data integrity signature calculation as the data in SDRAM 125 is moved to nonvolatile memory 135. Encryption of data provides a secure method of storing the data. Data integrity signature calculation protects against most data errors likely to occur.
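  • As a concrete illustration of the data integrity signature calculation mentioned above, a signature engine can be thought of as maintaining a running CRC that is updated as each byte of backup data passes through it. The sketch below assumes the common CRC-32 polynomial (0xEDB88320) purely for the example; the description specifies only a CRC/checksum signature, not a particular polynomial or width. Starting from a reset value of zero and chaining calls over successive data beats gives the same result as a single pass over all of the data, matching the incremental update behavior described for the signature engines.

    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative running CRC-32 update (reflected polynomial 0xEDB88320).
       The actual CRC/checksum used by the signature engines is not
       specified; this choice is an assumption for the example. */
    uint32_t signature_update(uint32_t crc, const uint8_t *data, size_t len)
    {
        crc = ~crc;
        for (size_t i = 0; i < len; i++) {
            crc ^= data[i];
            for (int bit = 0; bit < 8; bit++)
                crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1u));
        }
        return ~crc;
    }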
  • SDRAM subsystem 115 moves data between SDRAM 125 and nonvolatile memory 135 when a memory backup or restore is required. SDRAM subsystem 115 may use a list of CDBs (Command Descriptor Blocks) for indicating the data movement that is requested. The format of these CDBs is typically pre-defined. One of the fields in a CDB is a memory address field that indicates where in SDRAM 125 to read or write data. In an embodiment, the number of address bits provided in this field exceeds the number that is required to address all of SDRAM 125. Some of these address bits that are not required may be used to encode information on how the data should be manipulated as it is moved from/to SDRAM 125. This movement may occur when a memory backup or restore is performed, or at other times. The encoding of the unused address bits may indicate if the data should be encrypted/decrypted, if signature generation is required, if the signature should be offloaded or reset, and which signature engine to use.
  • When a request from nonvolatile memory controller 130 is received to read/write SDRAM 125, the aforementioned unused address bits may be interpreted to determine what data manipulation to perform as the data moves between SDRAM 125 and nonvolatile memory 135, via SDRAM subsystem 115.
  • FIGS. 3A and 3B are block diagrams of data manipulation system configurations, according to an embodiment. In FIG. 3A, data manipulation system 300 comprises: SDRAM controller 310, flash controller 320, control 330, signature engines 340, encrypt/decrypt engine 350, and multiplexer (MUX) 360. Control 330 is operatively coupled to SDRAM controller 310, flash controller 320, signature engines 340, encrypt/decrypt engine 350, and MUX 360. Thus, control 330 may receive commands, signals, CDBs, etc. from flash controller 320, perform arbitration, and otherwise manage the data flows and configuration of data manipulation system 300.
  • In FIG. 3A, SDRAM controller 310 is configured, via coupling 371, to send data read from an SDRAM (not shown in FIG. 3A) to signature engines 340, encrypt/decrypt engine 350, and a first input of MUX 360. Encrypt/decrypt engine 350 is configured, via coupling 372, to send encrypted data to a second input of MUX 360. Signature engines 340 are configured, via coupling 373, to send data integrity signatures to a third input of MUX 360. MUX 360 is controlled by control 330 to send one of the unmodified data read from the SDRAM, the encrypted data, or the data integrity signatures to flash controller 320. Flash controller 320 may store the unmodified data read from the SDRAM, the encrypted data, or the data integrity signatures in a flash memory (not shown in FIG. 3A).
  • FIG. 3A illustrates the configuration for data flow and control when a read-from-SDRAM request (e.g., for SDRAM 125) is received from flash controller 320 by control 330. In an embodiment, this configuration and flow are used when a backup of SDRAM memory is required. In an embodiment, signature engines 340 and encrypt/decrypt engine 350 are used for both read and write requests. The data connections and flow for flash write requests (which correspond to SDRAM reads) are illustrated in FIG. 3A. The data connections for flash read requests (which correspond to SDRAM writes) are illustrated in FIG. 3B.
  • Flash controller 320 sends a read request to control 330. Encoded address lines (or a dedicated field) of the request are examined by control 330 to determine where to route the read data being returned from SDRAM controller 310 and what data manipulation, if any, is required. In an embodiment, address bits [46:40] contain an encoding whose mapping is as follows: bits 40-42 (SES[0:2]) specify which of the 8 signature engines 340 should take the action specified (if any) by the other bits of the encoding; bit 43 (SG) determines whether the specified signature engine should generate a data integrity signature using the read data as input; bit 44 (SO) tells the specified signature engine to output a data integrity signature (which, depending on the state of MUX 360, may be sent to flash controller 320 for storage); bit 45 (SR) resets the data integrity signature of the specified signature engine; and bit 46 (E/D) determines whether encrypted data from the output of encryption/decryption engine 350 should be sent to flash controller 320.
  • FIG. 4 is an illustration of a command data block (CDB). In FIG. 4, an address field for address bits 0-46 is illustrated. Also illustrated are a field containing the SDRAM address bits that are used (A[0:39]) and a field containing the encoded address bits (A[40:46]). The individual bit fields (SES[0:2], SG, SO, SR, and E/D) of the encoded address bits are also illustrated.
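  • The bit mapping described above can be made concrete with a small decode routine. The sketch below follows the FIG. 4 layout (A[0:39] SDRAM address; SES at bits 40-42; SG, SO, SR, and E/D at bits 43-46); the struct and function names are illustrative only and are not part of the described hardware.

    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative decode of the CDB memory address field: A[0:39] holds
       the SDRAM address, A[40:46] the data manipulation encoding. */
    typedef struct {
        uint64_t sdram_addr;  /* A[0:39]  SDRAM address bits            */
        unsigned ses;         /* A[40:42] signature engine select (0-7) */
        bool     sg;          /* A[43]    generate signature            */
        bool     so;          /* A[44]    offload (output) signature    */
        bool     sr;          /* A[45]    reset signature               */
        bool     ed;          /* A[46]    encrypt/decrypt the data      */
    } cdb_addr_t;

    cdb_addr_t decode_cdb_address(uint64_t addr_field)
    {
        cdb_addr_t d;
        d.sdram_addr = addr_field & ((1ULL << 40) - 1);
        d.ses        = (unsigned)((addr_field >> 40) & 0x7);
        d.sg         = (addr_field >> 43) & 1;
        d.so         = (addr_field >> 44) & 1;
        d.sr         = (addr_field >> 45) & 1;
        d.ed         = (addr_field >> 46) & 1;
        return d;
    }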
  • As can be understood, based on the encoding of address bits 40-46, an indication will be sent to MUX 360 that results in one of three different sources being used by flash controller 320. The data will come directly from SDRAM controller 310, from encryption/decryption engine 350, or, in the case of a signature offload, from one of signature engines 340. If the encoding indicates that encryption should be performed, encryption/decryption engine 350 will be controlled by control 330 to receive the read data from SDRAM controller 310. Once encryption/decryption engine 350 receives the data from SDRAM controller 310, it performs the data encryption, sends the result to MUX 360 for routing to flash controller 320, and waits for flash controller 320 to accept the data.
  • The encoding also indicates whether signature generation should be done on the data being transferred to flash memory. One of the eight signature engines 340, as indicated by the signature engine select (SES[0:2]) field of the encoding, will be notified that its CRC/checksum signature value should be updated. In parallel with the data being sent directly to flash controller 320, or to encryption/decryption engine 350, the data is also sent to at least the specified signature engine 340. Once the selected signature engine 340 sees the SDRAM data being accepted by either of those blocks, the current CRC/checksum signature is updated using that data. Finally, the encoding indicates whether a signature offload should be output. If a signature offload is required, a read command will not be issued by control 330 to SDRAM controller 310. Instead, control 330 will instruct the selected signature engine 340 to send the data integrity signature data to flash controller 320.
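  • Taken together, the routing just described reduces to a three-way select on the decoded SO and E/D bits, corresponding to the three inputs of MUX 360. A hypothetical sketch (enum and function names are illustrative):

    #include <stdbool.h>

    /* Illustrative source select for MUX 360 on the backup (SDRAM read /
       flash write) path. 'so' and 'ed' are the decoded SO and E/D bits. */
    typedef enum { SRC_SDRAM, SRC_ENCRYPT, SRC_SIGNATURE } flash_src_t;

    flash_src_t select_flash_source(bool so, bool ed)
    {
        if (so)
            return SRC_SIGNATURE;  /* signature offload: no SDRAM read is
                                      issued; the engine's value is stored */
        if (ed)
            return SRC_ENCRYPT;    /* data routed via encrypt/decrypt engine */
        return SRC_SDRAM;          /* unmodified SDRAM read data             */
    }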
  • In FIG. 3B, flash controller 320 is configured, via coupling 381, to send data read from a flash memory (not shown in FIG. 3B) to signature engines 340, encrypt/decrypt engine 350, and a first input of MUX 361. Encrypt/decrypt engine 350 is configured, via coupling 382, to send decrypted data to a second input of MUX 361. Signature engines 340 are configured, via coupling 383, to indicate a current value of a selected data integrity signature. MUX 361 is controlled by control 330 to send either unmodified data read from the flash memory (via flash controller 320) or decrypted data to SDRAM controller 310. SDRAM controller 310 may store the unmodified data read from the flash memory, or the decrypted data, in an SDRAM (not shown in FIG. 3B).
  • The data connections for flash read requests (which correspond to SDRAM writes) are illustrated in FIG. 3B. In an embodiment, this flow is used when a restore of data back to SDRAM memory is required. Control 330 may receive a write command from flash controller 320. Control 330 may issue a write request to SDRAM controller 310. The encoded address lines of the request are examined to determine where to route the write data being sent to SDRAM controller 310 (from flash controller 320) and what data manipulation, if any, is required. The same encoding described in the discussion of FIG. 3A may be used. Based on the encoding, either unmodified data from flash controller 320, or decrypted data from encryption/decryption engine 350, will be selected by MUX 361 to send to SDRAM controller 310. If the encoding indicates that decryption should be performed, encryption/decryption engine 350 will be controlled to accept the data from flash controller 320. Once encryption/decryption engine 350 accepts the data from flash controller 320, it performs the data decryption, sends the result to SDRAM controller 310, and waits for SDRAM controller 310 to accept the data. The encoding also indicates whether signature generation needs to be done for the data being transferred to SDRAM. One of the eight signature engines 340, as indicated by the SES[0:2] field of the encoding, is controlled to update its CRC/checksum signature value. Signature generation is always done on decrypted data, so the selected signature engine 340 is controlled to select between data from flash controller 320 and the decrypted results from encryption/decryption engine 350 to update the data integrity signature value. In parallel with the data being sent to SDRAM controller 310 from either flash controller 320 or encryption/decryption engine 350, the data is also sent to the selected signature engine 340. Once the selected signature engine 340 sees the data being accepted by SDRAM controller 310, the current CRC/checksum signature is updated using that data. Finally, the current value of one of the eight data integrity signatures can be selected and read by software via coupling 383. This value may be compared by software with a backup signature that is restored from flash memory to SDRAM, to verify that no data errors occurred while the data was backed up or restored.
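  • The software comparison at the end of the restore flow could look like the following sketch, where signature_update() is the running CRC example shown earlier and all names are illustrative. A mismatch indicates that a data error occurred during backup or restore.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdbool.h>

    /* Illustrative post-restore check: recompute the signature over the
       restored (decrypted) data and compare it with the backup signature
       read back from flash. */
    uint32_t signature_update(uint32_t crc, const uint8_t *data, size_t len);

    bool verify_restore(const uint8_t *restored, size_t len,
                        uint32_t backup_signature)
    {
        uint32_t sig = signature_update(0, restored, len);
        return sig == backup_signature;
    }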
  • FIG. 5 is a block diagram of a power isolation and backup system. In FIG. 5, isolation and backup system 500 comprises: integrated circuit 510, power control 550, SDRAM 525, and nonvolatile memory (e.g., flash) 535. Integrated circuit (IC) 510 includes SDRAM subsystem 515, control 540, clock generator 541, and other circuitry 511. SDRAM subsystem 515 includes SDRAM controller 520, nonvolatile memory controller 530, and data manipulation 570. Other circuitry 511 may include temporary storage 512 (e.g., cache memory, buffers, etc.). SDRAM controller 520 interfaces with and controls SDRAM 525 via interface 521. Nonvolatile memory controller 530 interfaces with and controls nonvolatile memory 535 via interface 531. SDRAM subsystem 515 (and thus SDRAM controller 520, nonvolatile memory controller 530, and data manipulation 570) is operatively coupled to control 540, clock generator 541, other circuitry 511, and temporary storage 512. Clock generator 541 is operatively coupled to control 540 and other circuitry 511.
  • Power control 550 provides power supply A (PWRA) 560 to IC 510. Power control 550 provides power supply B (PWRB) 561 to SDRAM subsystem 515. Power control 550 provides power supply C (PWRC) 562 to SDRAM 525. Power control 550 provides power supply D (PWRD) 563 to nonvolatile memory 535. Power control 550 provides a power fail signal 565 to control 540. Power control 550 is also operatively coupled to SDRAM subsystem 515 by signals 566.
  • In an embodiment, when power control 550 detects a power failure condition (either an impending power fail or an existing power fail), power control 550 notifies IC 510 of the condition via power fail signal 565. This starts a power isolation sequence to isolate SDRAM subsystem 515 from the rest of IC 510, and from other circuitry 511 in particular. In an embodiment, the entire power isolation sequence is controlled by hardware (e.g., control 540, SDRAM subsystem 515, or both) with no interaction from software.
  • Upon receiving notification of a power fail condition, all of the interfaces (e.g., interfaces to other circuitry 511) connected to SDRAM subsystem 515 will be halted. On-chip temporary storage 512 will be flushed. It should be understood that although, in FIG. 5, temporary storage 512 is shown outside of SDRAM subsystem 515, temporary storage 512 may be part of SDRAM subsystem 515. In an example, temporary storage 512 may be a cache (e.g., level 1 cache, level 2 cache, level 3 cache), a posting buffer, or the like.
  • Once temporary storage 512 has been flushed, logic connected to SDRAM subsystem 515 indicates when the interfaces used for the flushes have halted. Once halted, these interfaces are not accepting any new cycles. Once all of the interfaces are halted, inputs that are required for external devices and internal core logic (i.e., other circuitry 511) are latched so that their state will not be lost when isolation occurs. Clocks that are not needed after the inputs are latched are gated off. The SDRAM subsystem will switch to internally generated clocks, or to clocks generated by a clock generator that shares power with SDRAM subsystem 515 (e.g., clock generator 541). Following this, inputs to SDRAM subsystem 515 not required for memory backup are isolated. In an embodiment, these inputs are driven to an inactive state.
  • After isolation of the inputs completes, SDRAM subsystem 515 (or control 540) signals (for example, using signals 566) power control 550 to remove PWRA 560. This results in power being turned off to all of IC 510 other than SDRAM subsystem 515. SDRAM subsystem 515 is on a separate power plane from at least other circuitry 511. This allows power to be maintained (i.e., by PWRB 561) to the SDRAM subsystem until power is totally lost to isolation and backup system 500.
  • In addition to controlling the isolation and removal of power to all but SDRAM subsystem 515 (and any other logic needed by SDRAM subsystem 515), once the interfaces have halted and temporary storage 512 has been flushed, internal memory backup logic will start moving data from SDRAM 525 to nonvolatile memory 535. In an embodiment, these are the only cycles running on the entire chip once PWRA has been removed.
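  • As a purely illustrative software model of this hardware offload, the backup logic can be viewed as walking a list of CDBs and moving each described SDRAM region to nonvolatile memory. The cdb_t layout and the flash_write() helper below are hypothetical; the actual CDB format is the pre-defined one discussed with reference to FIGS. 3A, 3B, and 4.

    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative model of the memory backup offload. */
    typedef struct cdb {
        uint64_t addr_field;  /* A[0:46]: SDRAM address + manipulation bits */
        uint32_t length;      /* bytes to move                              */
        struct cdb *next;
    } cdb_t;

    /* Hypothetical helper: writes 'len' bytes to flash, applying whatever
       encryption/signature handling the encoded address bits request. */
    void flash_write(const uint8_t *src, uint32_t len, uint64_t addr_field);

    void backup_offload(const cdb_t *list, const uint8_t *sdram_base)
    {
        for (const cdb_t *c = list; c != NULL; c = c->next) {
            uint64_t sdram_addr = c->addr_field & ((1ULL << 40) - 1);
            flash_write(sdram_base + sdram_addr, c->length, c->addr_field);
        }
    }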
  • In an embodiment, as data is moved to or from SDRAM 525, from or to, respectively, nonvolatile memory 535, it may be manipulated by data manipulation 570. Data manipulation 570 is configured, operates, and functions in the same manner described previously with reference to data manipulation system 300 of FIGS. 3A and 3B. Thus, in short, data manipulation 570 can be configured to encrypt/decrypt data and/or calculate/check data integrity signatures. In an embodiment, the functions, data flow, and configuration of data manipulation 570 may be performed while PWRA 560 is off (for example, to save encrypted data and/or calculate and store a data integrity signature). In another embodiment, the functions, data flow, and configuration of data manipulation 570 may be performed while PWRA 560 is on (for example, to restore encrypted data and/or calculate and store a data integrity signature).
  • The methods, systems, and devices described above may be implemented in, or stored by, computer systems. The methods described above may also be stored on a computer readable medium. Devices, circuits, and systems described herein may be implemented using computer-aided design tools available in the art, and embodied by computer-readable files containing software descriptions of such circuits. This includes, but is not limited to, isolation and backup systems 100 and 500, ICs 110 and 510, power controls 150 and 550, SDRAM subsystems 115 and 515, and their components. These software descriptions may be behavioral, register transfer, logic component, transistor, or layout geometry-level descriptions. Moreover, the software descriptions may be stored on storage media or communicated by carrier waves.
  • Data formats in which such descriptions may be implemented include, but are not limited to: formats supporting behavioral languages like C, formats supporting register transfer level (RTL) languages like Verilog and VHDL, formats supporting geometry description languages (such as GDSII, GDSIII, GDSIV, CIF, and MEBES), and other suitable formats and languages. Moreover, data transfers of such files on machine-readable media may be done electronically over diverse media on the Internet or, for example, via email. Note that physical files may be implemented on machine-readable media such as 4 mm magnetic tape, 8 mm magnetic tape, 3½ inch floppy media, CDs, DVDs, and so on.
  • FIG. 6 illustrates a block diagram of a computer system. Computer system 600 includes communication interface 620, processing system 630, storage system 640, and user interface 660. Processing system 630 is operatively coupled to storage system 640. Storage system 640 stores software 650 and data 670. Processing system 630 is operatively coupled to communication interface 620 and user interface 660. Computer system 600 may comprise a programmed general-purpose computer. Computer system 600 may include a microprocessor. Computer system 600 may comprise programmable or special purpose circuitry. Computer system 600 may be distributed among multiple devices, processors, storage, and/or interfaces that together comprise elements 620-670.
  • Communication interface 620 may comprise a network interface, modem, port, bus, link, transceiver, or other communication device. Communication interface 620 may be distributed among multiple communication devices. Processing system 630 may comprise a microprocessor, microcontroller, logic circuit, or other processing device. Processing system 630 may be distributed among multiple processing devices. User interface 660 may comprise a keyboard, mouse, voice recognition interface, microphone and speakers, graphical display, touch screen, or other type of user interface device. User interface 660 may be distributed among multiple interface devices. Storage system 640 may comprise a disk, tape, integrated circuit, RAM, ROM, network storage, server, or other memory function. Storage system 640 may be a computer readable medium. Storage system 640 may be distributed among multiple memory devices.
  • Processing system 630 retrieves and executes software 650 from storage system 640. Processing system 630 may retrieve and store data 670. Processing system 630 may also retrieve and store data via communication interface 620. Processing system 630 may create or modify software 650 or data 670 to achieve a tangible result. Processing system 630 may control communication interface 620 or user interface 660 to achieve a tangible result. Processing system 630 may retrieve and execute remotely stored software via communication interface 620.
  • Software 650 and remotely stored software may comprise an operating system, utilities, drivers, networking software, and other software typically executed by a computer system. Software 650 may comprise an application program, applet, firmware, or other form of machine-readable processing instructions typically executed by a computer system. When executed by processing system 630, software 650 or remotely stored software may direct computer system 600 to operate as described herein.
  • The foregoing description of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed, and other modifications and variations may be possible in light of the above teachings. The embodiment was chosen and described in order to best explain the principles of the invention and its practical application to thereby enable others skilled in the art to best utilize the invention in various embodiments and various modifications as are suited to the particular use contemplated. It is intended that the appended claims be construed to include other alternative embodiments of the invention except insofar as limited by the prior art.

Claims (20)

1. A method of transferring data between a volatile memory and a nonvolatile memory, comprising:
receiving a command data block having a memory address field, said memory address field having a first plurality of bits that indicate a location in said volatile memory and a second plurality of bits that indicate a data manipulation;
based on said indicated data manipulation, selecting a source for data to be sent to said nonvolatile memory;
transferring data from said volatile memory to said source;
receiving manipulated data from said source; and,
transferring said manipulated data to said nonvolatile memory.
2. The method of claim 1, wherein said source is a data encryption engine.
3. The method of claim 1, wherein said source is a signature engine.
4. The method of claim 3, wherein said second plurality of bits indicate said signature engine is to send a data integrity signature.
5. The method of claim 1, wherein said source can be one of a plurality of signature engines and said second plurality of bits indicate a one of said plurality of signature engines to be said source.
6. A method of transferring data between a nonvolatile memory and a volatile memory, comprising:
receiving a command data block having a memory address field, said memory address field having a first plurality of bits that indicate a location in said volatile memory and a second plurality of bits that indicate a data manipulation;
based on said indicated data manipulation, selecting a source for data to be sent to said volatile memory;
transferring data from said nonvolatile memory to said source;
receiving manipulated data from said source; and,
transferring said manipulated data to said volatile memory.
7. The method of claim 6, wherein said source is a data decryption engine.
8. The method of claim 6, wherein said source is a signature engine.
9. The method of claim 8, wherein said second plurality of bits indicate said signature engine is to send a data integrity signature.
10. The method of claim 6, wherein said source can be one of a plurality of signature engines and said second plurality of bits indicate a one of said plurality of signature engines to be said source.
11. An integrated circuit, comprising:
a data manipulation controller that receives a memory address field, said memory address field having a first plurality of bits that indicate a location in a volatile memory and a second plurality of bits that indicate a data manipulation, based on said indicated data manipulation, said data manipulation controller selects a source for data to be sent to a nonvolatile memory;
a volatile memory controller to be coupled to a volatile memory, said volatile memory controller to facilitate the transfer of data from said volatile memory to said source;
a nonvolatile memory controller to be coupled to said nonvolatile memory, the nonvolatile memory controller to receive manipulated data from said source and transfer said manipulated data to said nonvolatile memory.
12. The integrated circuit of claim 11, wherein said source is a data encryption engine.
13. The integrated circuit of claim 11, wherein said source is a signature engine.
14. The integrated circuit of claim 13, wherein said second plurality of bits indicate said signature engine is to send a data integrity signature.
15. The integrated circuit of claim 11, wherein said source can be one of a plurality of signature engines and said second plurality of bits indicate a one of said plurality of signature engines to be said source.
16. An integrated circuit, comprising:
a data manipulation controller that receives a memory address field, said memory address field having a first plurality of bits that indicate a location in a volatile memory and a second plurality of bits that indicate a data manipulation, based on said indicated data manipulation, said data manipulation controller selects a source for data to be sent to said volatile memory;
a nonvolatile memory controller to be coupled to a nonvolatile memory, said nonvolatile memory controller to facilitate the transfer of data from said nonvolatile memory to said source;
a volatile memory controller to be coupled to said volatile memory, the volatile memory controller to receive manipulated data from said source and transfer said manipulated data to said volatile memory.
17. The integrated circuit of claim 16, wherein said source is a data encryption engine.
18. The integrated circuit of claim 16, wherein said source is a signature engine.
19. The integrated circuit of claim 18, wherein said second plurality of bits indicate said signature engine is to send a data integrity signature.
20. The integrated circuit of claim 16, wherein said source can be one of a plurality of signature engines and said second plurality of bits indicate a one of said plurality of signature engines to be said source.
US13/083,407 2010-12-20 2011-04-08 Data manipulation during memory backup Active 2032-03-08 US8738843B2 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
US13/083,407 US8738843B2 (en) 2010-12-20 2011-04-08 Data manipulation during memory backup
TW100119843A TW201227746A (en) 2010-12-20 2011-06-07 Data manipulation during memory backup
KR1020110062097A KR20120069517A (en) 2010-12-20 2011-06-27 Data manipulation during memory backup
JP2011143809A JP2012133746A (en) 2010-12-20 2011-06-29 Data manipulation during memory backup
CN201110328441.0A CN102567139B (en) 2010-12-20 2011-10-26 The data manipulation of memory backup
EP11194645A EP2466473A1 (en) 2010-12-20 2011-12-20 Data manipulation during memory backup

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201061424701P 2010-12-20 2010-12-20
US13/083,407 US8738843B2 (en) 2010-12-20 2011-04-08 Data manipulation during memory backup

Publications (2)

Publication Number Publication Date
US20120159106A1 true US20120159106A1 (en) 2012-06-21
US8738843B2 US8738843B2 (en) 2014-05-27

Family

ID=45491268

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/083,407 Active 2032-03-08 US8738843B2 (en) 2010-12-20 2011-04-08 Data manipulation during memory backup

Country Status (6)

Country Link
US (1) US8738843B2 (en)
EP (1) EP2466473A1 (en)
JP (1) JP2012133746A (en)
KR (1) KR20120069517A (en)
CN (1) CN102567139B (en)
TW (1) TW201227746A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014179333A1 (en) * 2013-04-29 2014-11-06 Amazon Technologies, Inc. Selective backup of program data to non-volatile memory

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014003764A1 (en) * 2012-06-28 2014-01-03 Hewlett-Packard Development Company, L.P. Memory module with a dual-port buffer
US9207749B2 (en) * 2012-08-28 2015-12-08 Intel Corporation Mechanism for facilitating efficient operations paths for storage devices in computing systems
US10102889B2 (en) * 2012-09-10 2018-10-16 Texas Instruments Incorporated Processing device with nonvolatile logic array backup
WO2014120140A1 (en) * 2013-01-30 2014-08-07 Hewlett-Packard Development Company, L.P. Runtime backup of data in a memory module
US20190129865A1 (en) * 2017-11-02 2019-05-02 Kaminario Technologies Ltd. Encryption and decryption of data persisted by non-volatile memory

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3657705A (en) * 1969-11-12 1972-04-18 Honeywell Inc Instruction translation control with extended address prefix decoding
US20040246151A1 (en) * 2002-05-28 2004-12-09 Broadcom Corporation Methods and systems for data manipulation
US20060015683A1 (en) * 2004-06-21 2006-01-19 Dot Hill Systems Corporation Raid controller using capacitor energy source to flush volatile cache data to non-volatile memory during main power outage
US20060072369A1 (en) * 2004-10-04 2006-04-06 Research In Motion Limited System and method for automatically saving memory contents of a data processing device on power failure
US20060212605A1 (en) * 2005-02-17 2006-09-21 Low Yun S Serial host interface and method for operating the same
US20120159239A1 (en) * 2010-12-20 2012-06-21 Chon Peter B Data manipulation of power fail
US20120159289A1 (en) * 2010-12-20 2012-06-21 Piccirillo Gary J Data signatures to determine sucessful completion of memory backup

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0442496A (en) * 1990-06-08 1992-02-13 New Japan Radio Co Ltd Nonvolatile ram
JPH0675866A (en) * 1992-08-25 1994-03-18 Matsushita Electric Ind Co Ltd Memory control circuit
US6145068A (en) 1997-09-16 2000-11-07 Phoenix Technologies Ltd. Data transfer to a non-volatile storage medium
US6496949B1 (en) * 1999-08-06 2002-12-17 International Business Machines Corp. Emergency backup system, method and program product therefor
US20030005385A1 (en) 2001-06-27 2003-01-02 Stieger Ronald D. Optical communication system with variable error correction coding
JP4252301B2 (en) * 2002-12-26 2009-04-08 株式会社日立製作所 Storage system and data backup method thereof
EP1711896B1 (en) 2004-02-05 2015-11-18 BlackBerry Limited Memory controller interface
JP2007525771A (en) 2004-02-27 2007-09-06 ティギ・コーポレイション System and method for data manipulation
JP4957577B2 (en) * 2008-02-14 2012-06-20 日本電気株式会社 Disk controller
US8171205B2 (en) * 2008-05-05 2012-05-01 Intel Corporation Wrap-around sequence numbers for recovering from power-fall in non-volatile memory
JP5278441B2 (en) * 2008-12-04 2013-09-04 富士通株式会社 Storage device and failure diagnosis method
US8386723B2 (en) * 2009-02-11 2013-02-26 Sandisk Il Ltd. System and method of host request mapping
FR2945393B1 (en) 2009-05-07 2015-09-25 Commissariat Energie Atomique METHOD FOR PROTECTING ELECTRONIC CIRCUITS, DEVICE AND SYSTEM IMPLEMENTING THE METHOD

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3657705A (en) * 1969-11-12 1972-04-18 Honeywell Inc Instruction translation control with extended address prefix decoding
US20040246151A1 (en) * 2002-05-28 2004-12-09 Broadcom Corporation Methods and systems for data manipulation
US20060015683A1 (en) * 2004-06-21 2006-01-19 Dot Hill Systems Corporation Raid controller using capacitor energy source to flush volatile cache data to non-volatile memory during main power outage
US20060072369A1 (en) * 2004-10-04 2006-04-06 Research In Motion Limited System and method for automatically saving memory contents of a data processing device on power failure
US20060212605A1 (en) * 2005-02-17 2006-09-21 Low Yun S Serial host interface and method for operating the same
US20120159239A1 (en) * 2010-12-20 2012-06-21 Chon Peter B Data manipulation of power fail
US20120159289A1 (en) * 2010-12-20 2012-06-21 Piccirillo Gary J Data signatures to determine sucessful completion of memory backup

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014179333A1 (en) * 2013-04-29 2014-11-06 Amazon Technologies, Inc. Selective backup of program data to non-volatile memory
US9195542B2 (en) 2013-04-29 2015-11-24 Amazon Technologies, Inc. Selectively persisting application program data from system memory to non-volatile data storage
JP2016517122A (en) * 2013-04-29 2016-06-09 アマゾン・テクノロジーズ・インコーポレーテッド Selective retention of application program data migrated from system memory to non-volatile data storage
US10089191B2 (en) 2013-04-29 2018-10-02 Amazon Technologies, Inc. Selectively persisting application program data from system memory to non-volatile data storage

Also Published As

Publication number Publication date
EP2466473A1 (en) 2012-06-20
KR20120069517A (en) 2012-06-28
CN102567139B (en) 2016-04-13
US8738843B2 (en) 2014-05-27
JP2012133746A (en) 2012-07-12
TW201227746A (en) 2012-07-01
CN102567139A (en) 2012-07-11

Similar Documents

Publication Publication Date Title
US9251005B2 (en) Power isolation for memory backup
US8826098B2 (en) Data signatures to determine successful completion of memory backup
US8738843B2 (en) Data manipulation during memory backup
US9043642B2 (en) Data manipulation on power fail
US9081691B1 (en) Techniques for caching data using a volatile memory cache and solid state drive
KR101316918B1 (en) Backup and restoration for a semiconductor storage device and method
US10915448B2 (en) Storage device initiated copy back operation
WO2014170984A1 (en) Storage system and method for controlling storage
US9223658B2 (en) Managing errors in a raid
US20080133695A1 (en) Information processing system and backing up data method
US9311018B2 (en) Hybrid storage system for a multi-level RAID architecture
US11385815B2 (en) Storage system
US8316244B1 (en) Power failure system and method for storing request information
US20120254502A1 (en) Adaptive cache for a semiconductor storage device-based system
CN102841821A (en) Data operation during memory backup period
CN102841820A (en) Data operation of power failure
US8839024B2 (en) Semiconductor storage device-based data restoration

Legal Events

Date Code Title Description
AS Assignment

Owner name: LSI CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PICCIRILLO, GARY J.;CHO, PETER B.;REEL/FRAME:026199/0868

Effective date: 20110330

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LSI CORPORATION;REEL/FRAME:035390/0388

Effective date: 20140814

AS Assignment

Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:037808/0001

Effective date: 20160201

AS Assignment

Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE

Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041710/0001

Effective date: 20170119

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551)

Year of fee payment: 4

AS Assignment

Owner name: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED

Free format text: MERGER;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:047230/0910

Effective date: 20180509

AS Assignment

Owner name: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE EFFECTIVE DATE OF THE MERGER PREVIOUSLY RECORDED AT REEL: 047230 FRAME: 0910. ASSIGNOR(S) HEREBY CONFIRMS THE MERGER;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:047351/0384

Effective date: 20180905

AS Assignment

Owner name: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ERROR IN RECORDING THE MERGER IN THE INCORRECT US PATENT NO. 8,876,094 PREVIOUSLY RECORDED ON REEL 047351 FRAME 0384. ASSIGNOR(S) HEREBY CONFIRMS THE MERGER;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:049248/0558

Effective date: 20180905

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8