WO2009052525A1 - Managing memory systems containing components with asymmetric characteristics - Google Patents

Managing memory systems containing components with asymmetric characteristics

Info

Publication number
WO2009052525A1
WO2009052525A1
Authority
WO
WIPO (PCT)
Prior art keywords
block
disruption region
region
disruption
update
Application number
PCT/US2008/080557
Other languages
French (fr)
Inventor
Kenneth A. Okin
Vijay Karamcheti
Original Assignee
Virident Systems, Inc.
Application filed by Virident Systems, Inc.
Publication of WO2009052525A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16 Error detection or correction of the data by redundancy in hardware
    • G06F 11/1658 Data re-synchronization of a redundant component, or initial sync of replacement, additional or spare unit
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/06 Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16 Error detection or correction of the data by redundancy in hardware
    • G06F 11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F 12/023 Free address space management
    • G06F 12/0238 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F 12/0246 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/10 Address translation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/10 Providing a specific technical effect
    • G06F 2212/1032 Reliability improvement, data loss prevention, degraded operation etc
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/20 Employing a main memory using a specific memory technology
    • G06F 2212/205 Hybrid memory, e.g. using both volatile and non-volatile memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/72 Details relating to flash memory management
    • G06F 2212/7201 Logical to physical mapping or translation of blocks or pages
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/72 Details relating to flash memory management
    • G06F 2212/7202 Allocation control and policies
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C 16/00 Erasable programmable read-only memories
    • G11C 16/02 Erasable programmable read-only memories electrically programmable
    • G11C 16/06 Auxiliary circuits, e.g. for writing into memory
    • G11C 16/34 Determination of programming status, e.g. threshold voltage, overprogramming or underprogramming, retention
    • G11C 16/3418 Disturbance prevention or evaluation; Refreshing of disturbed memory data
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C 16/00 Erasable programmable read-only memories
    • G11C 16/02 Erasable programmable read-only memories electrically programmable
    • G11C 16/06 Auxiliary circuits, e.g. for writing into memory
    • G11C 16/34 Determination of programming status, e.g. threshold voltage, overprogramming or underprogramming, retention
    • G11C 16/3418 Disturbance prevention or evaluation; Refreshing of disturbed memory data
    • G11C 16/3427 Circuits or methods to prevent or reduce disturbance of the state of a memory cell when neighbouring cells are read or written

Definitions

  • Computer systems may move objects into memory for ready availability to one or more processors. Similarly, objects may be moved out of memory to enable other objects to re-use the space in memory.
  • Fig. 1 is a block diagram of a central processing unit (CPU) that interfaces with a memory system through a memory controller.
  • Fig. 2 is a block diagram of disruption regions in asymmetric memory.
  • Fig. 3 is a block diagram of a remapping (remap) table associated with a memory controller.
  • Fig. 4 is an illustration of an addressing format used by a memory controller.
  • Fig. 5 is an illustration of banks and sectors within asymmetric memory.
  • Fig. 6 is a flow chart of a process by which a memory controller enables access to a memory system that includes asymmetric memory.
  • Fig. 7A is a flow chart of a process for managing storage of data in a memory system that includes asymmetric memory.
  • Figs. 7B-7E are diagrams of different configurations for a memory system that aligns banks enabling asymmetric memory to be written in an efficient manner.
  • Fig. 8A is a flow chart of a process by which data is queued in order to write the data to a memory system that includes asymmetric memory and symmetric memory.
  • Figs. 8B-8H are diagrams of different configurations for a memory system that copies data from a first region to a second region as an update to the data is being written to the second region.
  • data that is stored in a memory system that includes asymmetric memory is managed.
  • The asymmetric memory is configured to enable block write operations in which an address within a block of one of the asymmetric memory components affects the availability of other addresses within the block of the asymmetric memory component.
  • a first disruption region that is associated with problematic read access of first addresses within the first disruption region during the block write operations to the first disruption region is identified within asymmetric storage.
  • A second disruption region that is separate from the first disruption region, the second disruption region associated with problematic read access of second addresses within the second disruption region during the block write operations to the second disruption region, is identified within the asymmetric storage.
  • Blocks within the first disruption region of asymmetric storage are associated with the second disruption region of asymmetric storage, where a first block, a second block, and a third block in the first disruption region are configured to include data that mirrors data within a first block, a second block, and a third block of the second disruption region.
  • a first update to be written to the memory system is received.
  • the first update is written to a fourth block of the second disruption region that follows a block of the second disruption region corresponding to the third block of the first disruption region.
  • Read access to the first block, the second block, and the third block of the second disruption region is disabled while writing the first update to the fourth block to the second disruption region.
  • Data is enabled to be read from the first block, the second block, and the third block of the first disruption region while writing the first update to the fourth block to the second disruption region.
  • The first update is written to the fourth block of the first disruption region and read access is disabled to the first block, the second block, and the third block of the first disruption region while writing the fourth block to the first disruption region.
  • Data is enabled to be read from the first block, the second block, and the third block of the second disruption region while writing the update to the fourth block to the first disruption region.
  • Implementations may include one or more of the following features. For example, after configuring a memory controller to enable access to the first block, the second block, the third block, and the fourth block from the second disruption region, a second update to a portion of data in the first block of the second disruption region may be received. A third update to a portion of data in the third block of the second disruption region may be received and the second update and third update may be written to the first block and the third block of the first disruption region.
  • the memory management unit may be enabled to read the first block and the third block from the second disruption region and the data in the second block of the first disruption region may be preserved for subsequent access as the second update and the third update are being written to the first disruption region.
  • The second disruption region may include a previous instance of mirrored content prior to writing the first update to the second disruption region.
  • the first disruption region may be in use when the first update is received.
  • the first block, the second block, the third block, and the fourth block in the first disruption region may be sequentially arranged.
  • the fourth block in the second disruption region may be unutilized prior to writing the first update.
  • the fourth block in the first disruption region and the second disruption region may use an identical offset for a memory controller-specified physical address offset.
  • a memory controller may be enabled to read the first update from the second disruption region after the first update has been written to the fourth block of the second disruption region.
  • a memory controller may be enabled to read the first block, the second block, and the third block from the second disruption region after the first update has been written to the fourth block of the second disruption region.
  • A memory controller may be used that is configured to associate a memory management unit-specified physical address from a CPU with a first memory controller-specified physical address for the first disruption region and a second memory controller-specified physical address for the second disruption region.
  • An identical block subaddress may be used for the first block in the first disruption region and the first block in the second disruption region.
  • the first block, the second block and the third block in the second disruption region may be sequentially arranged.
  • Associating the blocks may include associating the blocks using a memory controller configured to interface with symmetric storage and the asymmetric storage.
  • Associating the blocks may include associating the blocks using a memory controller configured to interface only with the asymmetric storage.
  • Writing the first update to the fourth block of the first disruption region, disabling read access to the first block, the second block, and the third block of the first disruption region while writing the fourth block to the first disruption region, and enabling data to be read from the first block, the second block, and the third block of the second disruption region while writing the update to the fourth block to the first disruption region may be performed before the second update is received.
  • Data that is stored in a memory system that includes asymmetric memory may be managed.
  • The asymmetric memory is configured to enable block write operations in which an address within a block of one of the asymmetric memory components affects the availability of other addresses within the block of the asymmetric memory component.
  • a first disruption region that is associated with problematic read access of first addresses within the first disruption region during the block write operations to the first disruption region is identified within asymmetric storage.
  • a second disruption region that is separate from the first disruption region and that is associated with problematic read access of second addresses within the second disruption region during the block write operations to the second disruption region is identified within the asymmetric storage.
  • Blocks within the first disruption region are associated with the second disruption region, where a first block in the first disruption region is configured to include data that mirrors data within a first block of the second disruption region.
  • A first update to be written to the memory system is received and the first update to a second block of the second disruption region is written.
  • Read access to the first block of the second disruption region is disabled while writing the second block to the second disruption region.
  • Data is enabled to be read from the first block of the first disruption region while writing the first update to the second block to the second disruption region.
  • the first update to the second block of the first bank is written.
  • Read access to the first block of the first disruption region is disabled while writing the second block to the first disruption region and data is enabled to be read from the first block of the second disruption region while writing the update to the second block to the first disruption region.
  • Implementations may include one or more of the following features.
  • the second block in the second disruption region may be unutilized prior to writing the first update.
  • The second disruption region may include a previous instance of mirrored content prior to writing the first update to the second disruption region.
  • the first disruption region may be in use when the first update is received.
  • the first block and the second block in the first disruption region may be sequentially arranged.
  • the first block and the second block in the second disruption region may be sequentially arranged.
  • The second block in the first disruption region and the second disruption region may use an identical offset for a memory controller-specified physical address offset.
  • A memory controller may be enabled to read the first update from the second disruption region after the first update has been written to the second block of the second disruption region.
  • a memory controller may be enabled to read the first block and the second block from the second disruption region after the first update has been written to the second block of the second disruption region.
  • the data in the first block of the second disruption region may be preserved during the writing of the first update to the second block of the second disruption region.
  • A memory controller may be used that is configured to associate a memory management unit-specified physical address from a CPU with a first memory controller-specified physical address for the first disruption region and a second memory controller-specified physical address for the second disruption region.
  • An identical block subaddress may be used for the first block in the first disruption region and the first block in the second disruption region.
  • The various aspects, implementations, and features may be implemented using, for example, one or more of a method; an apparatus; a system, tool, or processing device for performing a method; a program or other set of instructions; an apparatus that includes a program or a set of instructions; and a computer program embodied in a tangible computer readable medium.
  • the tangible computer readable medium may include, for example, instructions, software, images, and other data.
  • NOR flash memory may be used in the main memory of a server acting as a search engine.
  • Asymmetric memory has pronounced access characteristics that give rise to the label "asymmetric."
  • a system using asymmetric memory must account for the different access characteristics in order to maximize performance.
  • For NOR flash, the access characteristics for read operations can resemble access characteristics for DRAM (Dynamic Random Access Memory), where read operations for both NOR flash and DRAM require roughly comparable time to retrieve content requested in an address.
  • Write operations for some types of NOR flash differ dramatically from write characteristics for DRAM.
  • some types of NOR flash cannot perform write operations to data in a random access manner initiated by a single CPU-level store instruction.
  • NOR flash requires an additional long-running erase operation prior to a write.
  • writing to NOR flash memory typically requires more time (several orders of magnitude longer) than writing to DRAM.
  • a disruption region represents the area proximate to a particular address on a NOR flash, which area is actually (or deemed likely to be) associated with undesirable behavior when a memory controller writes or erases to the particular address.
  • writing to a particular address causes the NOR flash to corrupt data at other addresses in the disruption region.
  • The disruption region does not return timely results in response to requesting data from other addresses in the disruption region. In still another instance, accesses to other data in the disruption region return the correct
  • NOR flash memory as part of a system's main memory uses a memory controller that protects other components in the system from experiencing performance challenges resulting from the access characteristics of NOR flash (or other asymmetric memory systems).
  • the memory controller defines or observes disruption regions within NOR flash, enabling data residing in a disruption region to remain available for reading from an original location with consistent read times, amidst writing of data associated with that same location to another memory region that will later serve read requests.
  • a memory controller may be associated with a remapping table to enable access to content in a memory system that includes asymmetric memory and symmetric memory.
  • The MC receives a request for a memory read or an Input/Output (I/O) write from a central processing unit (CPU) for a physical address specified by the system's memory management unit (MMU).
  • The CPU uses the MMU to manage memory operations for the CPU, by translating the virtual addresses associated with CPU instructions into physical addresses representing system memory or I/O locations.
  • the MC for asymmetric memories is configured to process the MMU-specified physical addresses as an additional type of virtual addresses, creating a layer of abstraction between the physical address specified by the MMU and the physical memory address with which that address is associated by the MC. In this sense, the MC provides a uniform memory structure to the MMU and shields the CPU from the computational complexities required to implement a memory system that may include symmetric and asymmetric components.
  • a remapping table is used by the MC to enable its abstraction (or virtualization).
  • The remapping table associates MMU-specified physical addresses (bus addresses) with the actual physical address for the particular memory location in the memory system.
  • the remapping table is used to identify a first MC- specified physical address associated with the MMU-specified physical address, where the first MC-specified physical address is said to correspond to a first location within an asymmetric memory storage.
  • the MC may associate bus address A from the MMU with physical address D for a NOR flash memory system.
  • the first bank of storage is associated with characteristics that include corrupted content or nondeterministic read latency as a result of attempting to read data from the first bank of storage at the same time as an I/O (block) write is being performed that involves the first bank of storage.
  • The MC may determine that NOR flash physical addresses beginning with 01000 and lasting until 09000 represent the disruption region.
  • Physical addresses at location D ("physical address D") may be associated with one or more of these specific addresses.
  • the MC men executes the write instruction directed to physical address D by writing content from the first bank of storage to a second bank of storage which is currently unused by any CPU, and adding or modifying content, per the instruction, leaving the first bank unchanged and available for read access while doing so.
  • data in physical address range 01000-09000 remains available for read access while it, and updates to it, are written to physical address range 11000-19000.
  • The MC is configured to process read instructions from the MMU for content residing in the first bank of storage by reading from the first bank of storage.
  • the MC may process a request from the MMU to read physical address A by reading data from physical address 05000 instead of the duplicated and identical data residing in physical address 15000, which would represent a read to an impacted disruption region.
  • The remapping table is configured to associate the first MMU-specified bus address with a second MC-specified physical address related to the second bank of storage.
  • the MC may configure the remapping table to associate bus address A with physical addresses 11000-19000 in response to determining that the data from physical addresses 01000-09000 has been written. This would have the effect of processing subsequent reads to bus address A by reading data from physical address 15000.
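  • A minimal sketch of this range-level behavior follows. The structure, variable names, and the decision to treat the addresses from the example as plain integers (with leading zeros dropped) are assumptions for illustration only; the sketch only shows that a read of bus address A resolves into the 1000-9000 range before the copy completes and into the 11000-19000 range afterwards.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical range-level remap state for bus address A: reads stay in the
 * 1000-9000 range until the block write into 11000-19000 has completed, after
 * which reads are redirected to the new range. */
struct range_map {
    uint64_t read_base;    /* MC-specified base currently serving reads   */
    uint64_t write_base;   /* MC-specified base receiving the block write */
};

static uint64_t resolve_read(const struct range_map *m, uint64_t offset)
{
    return m->read_base + offset;   /* reads follow the read translation */
}

static void copy_complete(struct range_map *m)
{
    m->read_base = m->write_base;   /* remap: subsequent reads hit the new range */
}

int main(void)
{
    struct range_map bus_a = { 1000, 11000 };
    printf("during copy: %llu\n", (unsigned long long)resolve_read(&bus_a, 4000));
    copy_complete(&bus_a);
    printf("after copy:  %llu\n", (unsigned long long)resolve_read(&bus_a, 4000));
    return 0;
}
```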
  • A mirror disruption region is created or maintained, which is proactively kept up to date with an original disruption region.
  • application performance may be increased because less data may need to be transferred at the time an update is received corresponding to the address range of the original disruption region.
  • Fig. 1 is a block diagram of a system 100 with a CPU 110 that interfaces with a memory system 150 through a MC 130. More precisely, the MC 130 is positioned, oriented, and configured to enable it to be leveraged in processing read and write requests from the MMU 120 to the memory system 150 while shielding the CPU 110 from challenges resulting from particular access characteristics associated with asymmetric memory 170.
  • The CPU 110 includes logic configured to implement one or more processing operations. Examples of a CPU 110 may include x86-based processors from Intel Inc. and Advanced Micro Devices Inc. (AMD), and/or ARM (TM) processors.
  • the CPU 110 may include one or more processor cores.
  • Each CPU 110 interfaces with an operating system.
  • the operating system may interface with one or more applications.
  • These applications may include a search engine, business analytics, data mining, or a database management application.
  • Each of these applications may be compiled (or interpreted) into different machine-level instructions for execution on the CPU 110.
  • Each CPU 110 supports an instruction set with a variety of instructions. These instructions may include, among other operations, register shifts, arithmetic operations, and memory operations. Examples of the memory operations may include random access read and write operations and block (I/O) read and write operations.
  • the CPU 110 includes a MMU 120.
  • the MMU 120 is configured to manage a physical address space for the CPU 110.
  • the CPU 110 may request to read data from a particular address and write data to a particular address.
  • The MMU 120 may be configured to receive a virtual address from the CPU 110 and translate the virtual address into a physical address (i.e., a MMU-specified physical address).
  • The CPU 110 may be configured to instruct the MMU 120 to retrieve data from a particular MMU-specified physical address.
  • The MMU may be configured to obtain the physical address associated with the virtual address.
  • The MMU or another entity in the CPU then may be instructed to retrieve data from the MMU-specified physical address.
  • The MC 130 is logically oriented as an intermediary system to enable the CPU 110 to interface with a memory system 150. More precisely, the MC 130 is a system configured to receive read and write instructions with a MMU-specified physical address from the MMU 120, look up a MC-specified physical address associated with the MMU-specified physical address, and perform the requested read and/or write instruction on the block(s) of physical memory corresponding to the MC-specified physical address.
  • The MC 130 includes logic (hardware and/or software) configured to manage the placement of and movement of data within a memory system 150.
  • the MC 130 may be configured to dynamically process instructions based on a determination of whether the data and/or address is associated with symmetric memory or asymmetric memory.
  • the MC 130 may be configured to process read instructions in a first way, and process write instructions in a second way.
  • the MC 130 may be configured to permit read operations from the MMU that specify a particular address to leverage or otherwise operate on a first address within asymmetric memory, while concurrently processing write operations from the MMU that specify a particular address to leverage or otherwise operate on a second address within symmetric memory.
  • the MC 130 includes a physical interface to the memory system 150.
  • The physical interface is configured to automatically exchange physical address information as the MC 130 interfaces with the memory system 150.
  • the MC 130 includes a remapping table 140 that stores an association between a MMU-specified physical address and a MC-specified physical address.
  • the remapping table 140 may associate multiple MC-specified physical addresses with a MMU-specified physical address.
  • the remapping table 140 may be configured to indicate that writes to the disruption region should be directed to a "new" physical address range for the MMU, while reads from the disruption region should be directed to the "old" physical address range.
  • the remapping table 140 may indicate that a particular MMU-specified physical address is associated with a disruption region.
  • The memory system 150 may include symmetric memory 160 and asymmetric memory 170. In one configuration, the memory system 150 includes only asymmetric memory 170 that is behind the MC 130. For instance, symmetric memory may be configured to interface with a first MC and asymmetric memory may be configured to interface with a second MC.
  • the first MC may be configured to not include the remapping table and other components that are adapted to support the operations associated with the asymmetric memory 170.
  • the MC 130 may be configured to support both symmetric memory 160 and the asymmetric memory 170.
  • The memory system 150 may be packaged as one or more DIMMs (Dual Inline Memory Modules) and configured to support DIMM protocols, signaling, interconnects, and physical interfaces.
  • the symmetric memory 160 includes one or more memory systems where read characteristics have similar properties to the write characteristics. Examples of symmetric memory include DRAM, and other random access memory technologies.
  • The asymmetric memory 170 includes one or more memory systems where read characteristics and write characteristics are dissimilar. For example, some types of asymmetric memory have write latencies that are orders of magnitude greater than the read latencies. Examples of asymmetric memory may include NOR flash. With some types of asymmetric memory, an interface is used to (among other things) require an entire region of content to be rendered inaccessible to read requests whenever any constituent block of the region is written, for example, using an I/O block write, instead of a smaller, random access write to the specific update location.
  • the DIMM may be configured to dynamically process read and write instructions. For example, in a read mode, one or more DIMMs may be configured to receive instructions to access content residing at a particular physical address, and provide requested content to the processor through the MC. Generally, in a write mode, the DIMM is configured to receive data to be written with a physical address. Depending on whether the physical address represents flash or DRAM, the data may be written in a random access manner (e.g., a word or byte) or in a block (e.g., a 4 Megabyte or a 4 kilobyte block).
  • the DIMM is entirely made up of one type of memory (e.g., DRAM or NOR flash).
  • the DIMM includes a hybrid of both DRAM and flash and relies on an internal or external MC to perform the different operations required to implement a hybrid memory system.
  • Although a hybrid and/or dedicated configuration was described, a different configuration may be used.
  • Fig. 2 is a block diagram of the disruption regions in asymmetric memory 200.
  • disruption regions 210-240 represent four areas of memory where a write to one address in the region causes the proximate memory addresses to demonstrate disruptive behavior.
  • The disruption region represents other memory cells that rely on a common internal write controller within the asymmetric memory.
  • the disruption region represents a range of addresses, logically or physically grouped.
  • Fig. 3 is a block diagram of a remapping (remap) table 300 associated with a MC.
  • The remapping table 300 includes a list of MMU-specified physical addresses 310 and a corresponding list of MC-specified physical addresses 320.
  • The MMU-specified physical addresses refer to those addresses referenced by the MMU within the CPU.
  • the list of MC-specified physical addresses represents the physical addresses within a memory system.
  • the MC is configured to process the MMU-specified physical address as an additional type of virtual address by mapping that address to the corresponding MC-specified physical address shown at reference numeral 320.
  • The remapping table 300 illustrates how different physical addresses may be associated with a MMU-specified physical address, for example, based on whether the MMU provides a read or write instruction.
  • MMU-specified physical address 100 is associated with a read address 1100 and write address 2100.
  • MMU-specified physical address 200 is associated with read address 1110 and write address 3100.
  • The remapping table 300 may be configured to associate different physical addresses based on whether the MMU (bus) address is associated with a disruption region.
  • the MC is configured to associate identically sized regions with both the read and write translations.
  • Selection of a region size may have an impact on the "alignment" of addresses (e.g., contents of the lower bits in a physical address).
  • using regions of a consistent size may permit the use of reduced complexity addressing operations as remapping operations are performed.
  • the addressing operations may be limited to modifications to certain portions of the address.
  • The remapping table may be implemented using a variety of structures. In one configuration, both the read/write translations are found in the same remapping table residing in a single, intermediary device. In another configuration, both the read/write translations are found in the same remapping table residing in a device residing on a DIMM. In yet another configuration, several integrated read/write translations are found within a single device, such as a configuration where a single remapping table is configured to support several banks within a DIMM and several remapping tables are used.
  • The read translation may be implemented as part of the memory logic while the table used for write translations is implemented as a software-managed table residing, for example, with an application, a hypervisor, an operating system, a CPU, or a MC.
  • Multiple remapping tables for read and/or write translations may be used that reside in different locations, such as a MC, a DIMM or on multiple devices within a DIMM.
  • A hierarchical structure of remapping tables may be used so that a MC includes a first remapping table that interfaces with a second remapping table that acts as the interface to several devices within a DIMM.
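  • The read/write split illustrated in Fig. 3 can be sketched in a few lines of C. The entry layout, field names, and helper names are assumptions for illustration; the patent does not prescribe an implementation, and the example values are simply the ones quoted from Fig. 3 above.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical remap-table entry: one row of the table in Fig. 3. Each
 * MMU-specified (bus) address maps to two MC-specified physical addresses,
 * one used to service reads and one used to receive writes. */
struct remap_entry {
    uint64_t mmu_phys;  /* MMU-specified physical (bus) address             */
    uint64_t mc_read;   /* MC-specified address that reads are served from  */
    uint64_t mc_write;  /* MC-specified address that writes are directed to */
};

/* Return the MC-specified address for a bus address; is_write selects the
 * write translation rather than the read translation. */
static int64_t remap_lookup(const struct remap_entry *t, size_t n,
                            uint64_t mmu_phys, int is_write)
{
    for (size_t i = 0; i < n; i++)
        if (t[i].mmu_phys == mmu_phys)
            return (int64_t)(is_write ? t[i].mc_write : t[i].mc_read);
    return -1;  /* address not managed by this MC */
}

int main(void)
{
    /* Example entries taken from the Fig. 3 discussion above. */
    struct remap_entry table[] = {
        { 100, 1100, 2100 },
        { 200, 1110, 3100 },
    };
    printf("bus 100: read -> %lld, write -> %lld\n",
           (long long)remap_lookup(table, 2, 100, 0),
           (long long)remap_lookup(table, 2, 100, 1));
    return 0;
}
```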
  • Fig. 4 is an illustration of the addressing format 400 used by a MC.
  • the physical address has 3 components.
  • The low bits <n:0> are the offset of the data within a given bank and sector.
  • Bits <m:n+1> address a specific sector within a bank, and bits <p:m+1> address a specific bank within the memory subsystem.
  • sectors may be allocated with 128K bytes, and there may be anywhere from 256 to 4096 sectors per bank and 1 to 16 banks per chip.
  • The memory subsystem may additionally be designed such that a plurality of chips/modules are accessed in parallel with a single address, permitting a logical sector to be constructed from the aggregation of corresponding physical sectors in the plurality of chips/modules.
  • The plurality of chips/modules may be used to increase <n>: the number of bits within a sector, because each logical sector has additional storage for each address. Additional flash chips may be also aggregated within the memory subsystem but given different addresses. This also increases the number of banks <p:m+1> within the memory subsystem.
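  • The <p:m+1>/<m:n+1>/<n:0> decomposition can be expressed as simple shifts and masks. The concrete geometry below (128 KB sectors and 4096 sectors per bank) is only one of the configurations the text allows and is assumed for illustration, as is the example address.

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed example geometry: 128 KB sectors (offset bits <16:0>) and 4096
 * sectors per bank (sector bits <28:17>); the remaining high bits select
 * the bank, matching the <p:m+1>/<m:n+1>/<n:0> split described above. */
#define OFFSET_BITS 17u
#define SECTOR_BITS 12u

static uint64_t addr_offset(uint64_t a) { return a & ((1ull << OFFSET_BITS) - 1); }
static uint64_t addr_sector(uint64_t a) { return (a >> OFFSET_BITS) & ((1ull << SECTOR_BITS) - 1); }
static uint64_t addr_bank(uint64_t a)   { return a >> (OFFSET_BITS + SECTOR_BITS); }

int main(void)
{
    uint64_t a = 0x0840001000ull;   /* hypothetical MC-specified physical address */
    printf("bank %llu, sector %llu, offset 0x%llx\n",
           (unsigned long long)addr_bank(a),
           (unsigned long long)addr_sector(a),
           (unsigned long long)addr_offset(a));
    return 0;
}
```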
  • Fig. 5 is an illustration of banks and sectors within asymmetric memory 500. More precisely, Fig. 5 shows the diagram of an example of a NOR flash chip.
  • The NOR flash receives commands from the MC over a chip interface (not shown).
  • A NOR flash chip includes a number of banks, as illustrated by banks 510a ... bank 510n. Each bank includes a number of sectors <1...n>.
  • Each sector includes some number of words.
  • the MC may be configured to directly address and read a specified word from any sector in any bank using, for example, random access instructions.
  • Writing data into NOR flash may be more problematic. That is, writing to NOR flash may require special operations to accommodate the particular access characteristics of NOR flash.
  • Before new data is written to a specified physical address, the specific sector (for example, sector 2 in bank 510a) first is erased.
  • the "new" data may be written into the specified physical address within the sector.
  • Other new data then may be added to a proximate location within the specified sector without requiring additional erases because the previous data residing at the proximate location was previously erased.
  • The bank 510a represents an example of the disruption region.
  • Fig. 6 is a flow chart 600 of a process by which a MC enables access to a memory system that includes symmetric and asymmetric memory. More precisely, flow chart 600 illustrates a process by which a MC uses a remapping table in order to provide access to content in a memory system that includes asymmetric memory, where the remapping table includes physical addresses for storage in both symmetric and asymmetric memory. Alternatively, the remapping table in the MC may be configured to only interface with asymmetric memory and physical addresses associated with the asymmetric memory.
  • The operations described in flow chart 600 are implemented on a system, such as the MC 130 described with respect to system 100. However, the operations may be performed by other systems in other contexts where a different architecture is being used.
  • a MC receives, from a MMU within a central processing unit (CPU), a request for an Input/Output (I/O) write to a first MMU-specified physical address (610).
  • The MMU may instruct the MC to write VALUE A to MMU-specified physical address 1.
  • the MC accesses a remapping table associated with a MC (620). Accessing a memory table may include referencing a memory structure that maintains an association between a list of MMU-specified physical addresses and a list of MC- specified physical addresses.
  • The MC uses the remapping table to identify a first MC-specified physical address associated with the first MMU-specified physical address, the first MC-specified physical address corresponding to a first location within asymmetric memory storage (630). For example, the MC may look up MMU-specified physical address 1 and determine that MMU-specified physical address 1 refers to MC-specified physical address 101, an address that maps to asymmetric storage.
  • The MC may be configured to look up the remapping table using 38-bit addressing offsets derived from 40-bit MMU physical addresses (in the range 0 - 256 GB). As a result of the lookup, 40-bit MMU physical addresses are translated into 39-bit NOR flash physical addresses.
  • An example of the translation may include a configuration where bits 37-29 of the addressing offset represent a 9-bit index in the remapping table.
  • a lookup in the remapping table returns a 10-bit address.
  • The 10-bit address becomes the high-order bits of the MC physical address. For example, a MMU physical address 0x0400001000 may get translated into memory-controller physical address 0x0840001000. An alternate memory-controller physical address with which the same range may be associated is 0x0880001000.
  • A translation that combines a subset of the high-order bits with the low-order bits may be used to generate multiple offsets into a collection of DIMM modules. For example, bits 37-31 of the addressing offset may be concatenated with bits 7-6 to obtain the 9-bit index into the remap table. In this configuration, MMU physical address 0x0400000080 may get translated into memory-controller physical address 0x0840000000. An alternate memory-controller physical address for the same range may be associated with address 0x0880000000.
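  • The first of the configurations described above (a 9-bit index taken from bits 37-29 of the addressing offset, returning a 10-bit value that forms the high-order bits of a 39-bit address) might be sketched as follows. The remap-table contents are invented so that the worked example 0x0400001000 -> 0x0840001000 is reproduced; nothing else about the table is taken from the text.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical remap table: 512 entries (9-bit index), each holding a 10-bit
 * value that becomes the high-order bits of the 39-bit MC physical address. */
static uint16_t remap[512];

static uint64_t translate(uint64_t mmu_phys)
{
    uint64_t offset = mmu_phys & ((1ull << 38) - 1);    /* 38-bit addressing offset */
    uint32_t index  = (uint32_t)(offset >> 29) & 0x1ff; /* bits 37-29: 9-bit index  */
    uint64_t high   = remap[index] & 0x3ff;             /* 10-bit lookup result     */
    /* Keep the low 29 bits and replace the high-order bits with the result. */
    return (high << 29) | (offset & ((1ull << 29) - 1));
}

int main(void)
{
    /* With remap[index] chosen as 0x42, MMU address 0x0400001000 maps to
     * 0x0840001000, matching the worked example in the text. */
    uint64_t mmu = 0x0400001000ull;
    remap[(mmu >> 29) & 0x1ff] = 0x42;
    printf("0x%010llx -> 0x%010llx\n",
           (unsigned long long)mmu, (unsigned long long)translate(mmu));
    return 0;
}
```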
  • the MC identifies a first bank of storage within the asymmetric memory storage that includes the first location, the first bank of storage including a disruption region that is associated with characteristics that include corrupted content or nondeterministic read latency as a result of attempting to read data from the first bank of storage as an I/O write is being performed that involves the first bank of storage (640).
  • Identifying the first bank of storage includes analyzing the MC-specified physical address to identify the bank. For example, in Fig. 5, bits <p:m+1> identify the bank. And in keeping with the example earlier introduced, when the MC-specified physical address of 0x0840001000 was identified, 0x84 is identified as the bank.
  • The access characteristics for the disruption region need not necessarily cause a problem, nor must the access characteristics be inconsistent. Rather, the disruption region relates to the potential for problematic access characteristics.
  • the MC then writes contents from the first bank of storage to a second bank of storage (650).
  • the MC may load 4MB of data from a first bank (or block) on a chip (or collection of chips) to a second bank of chips (or a collection of chips).
  • The second bank represents a different disruption region than the first bank. In addition to identifying a different disruption region, other criteria may be used to identify a different bank.
  • the MC may use a policy that minimizes a number of erases that are required, e.g., using a ping-pong scheme that returns to the same bank until storage in the bank is exhausted.
  • the policy may be configured to identify another bank in the same physical device in order to minimize the cost of the data copy.
  • Where a write to block 1 caused the first bank to be written to the second bank, a read of a location in a second block (e.g., block 2), which is immediately adjacent to block 1 and which is within the same disruption region, continues to be serviced by bank 1 until the block 1 related write is completed to bank 2.
  • reads to such locations of block 2 are serviced by the updated block 2 now available in bank 2.
  • read instructions from the MMU for requested content associated with a second MMU-specicled physical address may be serviced by reading the requested content from the first bank of storage (660).
  • the MC may be configured to operate in two parallel modes in response to determining that the CPU is accessing content in the first bank.
  • In a "read" mode, the MC is configured to process read instructions addressed to the first bank by reading from the first bank.
  • In a "write" mode, the MC is configured to place the write instruction in a queue until the previous block write instruction has been initiated and/or completed.
  • The adverse impact on application performance may be reduced by configuring read instructions to be performed from a first bank while configuring write instructions to be queued.
  • The MC configures the remapping table to associate the first MMU-specified physical address with a second MC-specified physical address corresponding to the second bank of storage after determining that the contents have been written to the second bank of storage (670). That is, in response to determining that the data in the first bank has been written to the second bank, the MC now configures the remapping table to associate the MMU-specified physical address with the MC-specified physical addresses for the second bank.
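  • Steps (640)-(670) might be modeled as in the sketch below. The bank size, the in-memory model, and the helper names are assumptions; the sketch only illustrates the ordering in which reads keep being served from the first bank until the copy and the update have landed in the second bank and the remapping is switched.

```c
#include <stdint.h>
#include <string.h>

#define BANK_BYTES (4u * 1024 * 1024)   /* illustrative 4 MB bank, as in the text */

/* Hypothetical in-memory model of two banks and a one-entry remap table. */
static uint8_t bank[2][BANK_BYTES];
static int read_bank = 0;               /* bank currently serving reads */

/* Read path (660): requests keep hitting the bank that is not being written. */
static uint8_t mc_read(uint32_t offset)
{
    return bank[read_bank][offset];
}

/* Write path (640)-(670): copy the disruption region to the idle bank,
 * apply the update there, then flip the remap so reads follow the data. */
static void mc_write(uint32_t offset, uint8_t value)
{
    int dst = 1 - read_bank;
    memcpy(bank[dst], bank[read_bank], BANK_BYTES); /* (650) copy the contents   */
    bank[dst][offset] = value;                      /* apply the requested write */
    read_bank = dst;                                /* (670) update the remap    */
}

int main(void)
{
    bank[0][0] = 7;                     /* content already present in the first bank */
    mc_write(1, 9);                     /* triggers the copy and the remap switch    */
    return mc_read(0) == 7 ? 0 : 1;     /* old content is still readable afterwards  */
}
```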
  • The second write operation results in VALUE A being mapped to <p'''...m+1'''> in the third bank (bank 510x).
  • the third bank may be the first bank.
  • The system is configured to switch back-and-forth between a first bank and a second bank.
  • The first bank and the second bank share all or nearly all of their data.
  • The data added into the second bank may be copied back into the first bank, as a result of receiving a second write instruction or otherwise. Because the first bank retains the previously-stored data, the first bank then is reconfigured into a copy of the second bank with minimal updating.
  • After the data added to the second bank is copied into the first bank, the data being written via the second write instruction is added into sectors within the first bank, and the remapping table is changed to map the MMU-specified physical address back into the first address (e.g., <p'...m+1'>, where <p'...m+1'> points back to the first bank).
  • Once the new data (i.e., the data from the second write instruction) also is copied into the second bank, the second bank again becomes a copy of the first bank.
  • the copies and writes between the first and second banks may be repeated.
  • the use of coordinated writes between aligned banks may be used to reduce the computational burden in transferring data between two different banks. That is, the amount of processing and burden involved with a block write from a first bank to a second bank may be reduced if the MC only must coordinate the update to a sector within a bank.
  • Fig. 7A is a flow chart 700A of a process by which data that is stored in a memory system that includes asymmetric memory is managed. More particularly, flow chart 700A illustrates a process by which the magnitude of write instructions is reduced by associating a first bank of asymmetric storage with the second bank of asymmetric storage, where at least some of the sectors in the first and second bank mirror one another.
  • Generally, the operations are performed by a MC, such as the MC 130 described with respect to system 100.
  • However, the operations shown in flow chart 700A may be performed on other systems. For example, the operations may be performed by other systems in other contexts where a different architecture is being used.
  • the MC identifies a first bank of asymmetric storage, the first bank representing a first disruption region (710A).
  • Identifying a first bank of storage may include managing MC-specified physical addresses for a MC that includes NOR flash memory systems.
  • a bank representing a disruption region may be identified by configuring the MC to use certain portions of the MC-specified physical addresses as an indication of the bank.
  • the MC identifies a second bank of asymmetric storage, the second bank representing a second disruption region that is separate from the first disruption region (720A). That is, either of the first or the second bank remains accessible for reads while the other bank its being written to.
  • a bank may represent a logical or physical region. For example, in one configuration, the bank represents a physical module. In another configuration, the bank represents a region within a chip. In still other configurations, the bank represents multiple regions across multiple chips.
  • The MC associates the first bank of asymmetric storage with the second bank of asymmetric storage, with a first sector, a second sector, and a third sector in the first bank including data that mirrors data within a first sector, a second sector, and a third sector of the second bank (730A).
  • The MC may be configured to coordinate reads and write operations so that writes always occur to one of the two banks, and so that one of the banks is available to support at least some read operations while the other bank is unavailable due to a write operation being performed.
  • the MC receives a first update to be written to the memory system (740A).
  • Receiving an update may include receiving a command to update a data structure associated with a MMU-specified physical address (acting as a virtual address) with another entry and/or replacing a particular entry in the data structure at the MMU-specified physical address with an updated entry.
  • Updating the data structure may include preparing a batch of writes to the data structure. For example, insofar as different and independent addresses are being updated, the MC may queue several operations.
  • the writes (e.g., updates) need not be of the same size.
  • a first write may relate to a record of a first size while a second write may relate to a record of a second size.
  • The MC writes the first update to a fourth sector of the second bank that follows the third sector of the first bank (750A). For example, the MC may determine that addresses 001-003 are being used in a first bank, and that the next available free space begins at address 004 (the fourth sector). Therefore, the MC instructs the second bank, fourth sector, to store the first update. In another example, the MC may determine that the fourth sector in a ten sector (001-010) search index needs to be updated with the first update. The MC instructs the second bank, fourth sector, to store the update to the search index.
  • The MC disables read access to the first sector, the second sector, and the third sector of the second bank while writing the fourth sector to the second bank (760A). That is, the MC may be configured to protect an application from experiencing difficulties by disabling read access to the disruption region caused by writing the first update to the second bank.
  • The MC enables data to be read from the first sector, the second sector, and the third sector of the first bank while writing the update to the fourth sector to the second bank (770A). Because the disruption region can be quite large (e.g., 1 Gigabyte), and the impact on an application of disrupting read access to a large amount of data may be severe, the MC may continue to provide read access to content otherwise unavailable due to the write operation being performed to the second bank. Therefore, since the first, second, and third sectors have not been changed, the MC may continue to provide application read access to those sectors using the first bank.
  • the MC writes the first update to the fourth sector of the first bank (780A).
  • the MC disables read access to the first sector, the second sector, and the third sector of the first bank while writing the fourth sector to the first bank (790A). That is, because reading the first, second, and third sectors from the first bank may cause the previously-alluded to disruption, the potential for undesirable performance is reduced by disabling read access to those sectors from the first bank.
  • the MC enables data to be read from the first sector, the second sector, the third sector, and the fourth sector of the second bank while writing the update to the fourth sector to the first bank (795A).
  • The first and second banks are now mirrored with identical (or substantially similar) data in each bank.
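  • A compact sketch of the mirrored-bank update of steps (740A)-(795A) follows. The sector count, the 'readable' flag, and the helper names are assumptions; the sketch only shows the update being written once to each bank in turn while the other bank keeps its mirrored sectors readable.

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define SECTORS      8
#define SECTOR_BYTES (128u * 1024)      /* 128 KB sectors, as mentioned for Fig. 4 */

/* Hypothetical model of two aligned banks whose sectors mirror each other. */
struct bank {
    uint8_t sector[SECTORS][SECTOR_BYTES];
    bool    readable;                   /* false while the bank is a disruption region */
};

static struct bank bank_a, bank_b;      /* first and second disruption regions */

static void write_sector(struct bank *dst, struct bank *src,
                         int sector, const uint8_t *update)
{
    dst->readable = false;              /* (760A)/(790A): disable reads of dst   */
    src->readable = true;               /* (770A)/(795A): serve reads from src   */
    memcpy(dst->sector[sector], update, SECTOR_BYTES);
    dst->readable = true;               /* dst mirrors src again once written    */
}

/* Apply one update to both banks in turn; afterwards the banks are mirrored. */
static void apply_update(int sector, const uint8_t *update)
{
    write_sector(&bank_b, &bank_a, sector, update);   /* (750A)-(770A) */
    write_sector(&bank_a, &bank_b, sector, update);   /* (780A)-(795A) */
}

int main(void)
{
    static uint8_t update[SECTOR_BYTES];              /* the "first update" (740A)   */
    update[0] = 0x2a;
    apply_update(3, update);                          /* fourth sector in both banks */
    return bank_a.sector[3][0] == bank_b.sector[3][0] ? 0 : 1;
}
```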
  • Figs. 7B-7E are diagrams of different configurations 700B-700E for a memory system that aligns banks enabling asymmetric memory to be written in a more efficient manner. More precisely, memory systems 700B-700E illustrate how two regions (e.g., banks or disruption regions) may be aligned so that as content is written from a first bank to a second bank, the first bank enables read access to content within the first bank.
  • Memory system 700B includes first region 710B, second region 715B, first region 720B, and second region 725B. Generally, first region 710B is aligned with second region 715B and first region 720B is aligned with second region 725B.
  • Aligning a first region (e.g., a read region) with a second region (e.g., a write region) enables the MC to support read instructions while a write instruction is being performed to the write region.
  • the designation of read and write labels to a region then is reversed once the data has been written from the read region to the write region.
  • Configuration 700C illustrates how a MC processes an instruction from the MMU relative to an internal remapping table and the configuration of first region 710B and second region 715B.
  • Memory Controller Actions indicates that the MC has received an instruction to store an update for MMU-specified physical address D.
  • First region 710B is active with three sectors (1, 2, 3), and second region 715B is not active, but mirrors the first region 710B with sectors 1, 2, and 3.
  • the MC allocates the next available memory, which the MC indicates is MC-specified physical address, sector 4.
  • the MC then allocates the update from MMU-specified physical address D to MC-specified physical address, sector 4.
  • Configuration 700D illustrates how the first region remains active to support requests for MMU-specified addresses A, B, and C from MC-specified physical addresses in first region 710B, sectors 1, 2, and 3.
  • The second region 715B is now a disruption region as MMU-specified address D is being written to MC-specified physical addresses in second region 715B, sector 4.
  • Configuration 700E then illustrates a subsequent configuration where the second region becomes active to support requests for MMU-specified addresses A, B, C, and D from MC-specified physical addresses in second region 715B, sectors 1, 2, 3, and 4.
  • the first region 710B is now a disruption region as MMU-specified address D is being written to MC-specified physical addresses in first region 710B, sector 4.
  • Fig. 8A is a flow chart 800A of a process by which data is queued in order to write the data to a memory system that includes asymmetric memory.
  • a MC receives an instruction to write a value to a first MMU-specified physical address (810A).
  • the MC determines that the first MMU-specified physical address is associated with a MC-specified physical address for asymmetric memory and queues the write instruction (820A).
  • The MC copies the data from a first region proximate to the MC-specified physical address that is an active region to a second region (830A).
  • the MC modifies the remapping table so that read access to the content of the first region is still enabled, and writes the value to the second region (840A).
  • the first region may already be active for reading.
  • The MC reconfigures the remapping table to identify the second region as the active region for the content previously stored in the first region and the value associated with the first MMU-specified physical address (850A).
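  • The queuing behavior of steps (810A)-(850A) could be sketched as below. The queue depth, the two-region model, and the helper names are illustrative assumptions rather than part of the described system; the sketch only shows that the first region stays readable while queued writes are applied to a copy, after which the remap is repointed.

```c
#include <stdint.h>
#include <string.h>

#define REGION_BYTES 4096u
#define QUEUE_DEPTH  16

struct pending_write { uint32_t offset; uint8_t value; };

static uint8_t region[2][REGION_BYTES];
static int active = 0;                            /* region currently serving reads */
static struct pending_write queue[QUEUE_DEPTH];
static int queued = 0;

/* (810A)-(820A): a write aimed at asymmetric memory is queued, not applied in place. */
static void mc_enqueue_write(uint32_t offset, uint8_t value)
{
    if (queued < QUEUE_DEPTH)
        queue[queued++] = (struct pending_write){ offset, value };
}

/* (830A)-(850A): copy the active region to the idle one, apply the queued writes
 * there, then repoint the remap so the idle region becomes the active one.
 * Reads of region[active] remain possible throughout the copy. */
static void mc_flush_writes(void)
{
    int dst = 1 - active;
    memcpy(region[dst], region[active], REGION_BYTES);
    for (int i = 0; i < queued; i++)
        region[dst][queue[i].offset] = queue[i].value;
    queued = 0;
    active = dst;                                 /* (850A) remapping table update */
}

int main(void)
{
    region[0][2] = '2';
    mc_enqueue_write(2, 'A');    /* value received while region 0 is active */
    mc_flush_writes();
    return region[active][2] == 'A' ? 0 : 1;
}
```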
  • Figs. 8B-8H are diagrams of different configurations 800B-800H for a memory system that copies data from a first region to a second region as an update to the data is being written to the second region. More precisely, configurations 800B-800H illustrate how a MC is configured to write the contents from an active region to a second region as a result of receiving a write instruction. In configuration 800B, four regions (810B-840B) are shown with region 810B being active and regions 820B-840B being empty. Configuration 800B illustrates that '1', '2', and '3' are already stored in region 810B and that the MC has received a write instruction to write 2A to storage.
  • Labels such as '1', '2', and 2A refer to data that is stored or data that will be written into asymmetric memory. In one configuration, the label represents the data itself. In another configuration, the label represents a MC-specified address or a MMU-specified physical address.
  • Configuration 800C illustrates a progression from configuration 800B where region 810C remains active to support access to '1', '2', and '3'. Configuration 800C illustrates that '1' has been copied from region 810C to region 820C and that region 820C is now identified as a disruption region (e.g., bank).
  • the MC writes 2A to region 820D.
  • An instruction to write '4' is received. Because configuration 800D indicates that a write is being performed, the MC queues '4' in configuration 800E.
  • Configuration 800E illustrates that the write has not yet been completed, and region 820E remains inactive as '3' is being copied to region 820E. As a result, region 810E remains active to support access to '1' and '3'.
  • Fig. 8F indicates that region 820F is active after '3' has been copied to region 820F.
  • Configuration 800F indicates that '1', 2A, and '3' are copied from region 820F to region 830F, where region 830F becomes a disruption region.
  • The MC writes '4' to region 830G. Once the write is complete, configuration 800H indicates that region 830H becomes active.
  • the characteristics of a disruption region may vary between different read operations performed that follows or is simultaneous with a write operation. For example, during a first simultaneous read/write operation to a disruption region, the content may be corrupted. The content may be delayed during a second simultaneous read/write operation, and a third simultaneous read operation may see the content corrupted and delayed.
  • a fourth simultaneous read/write operation may include unco ⁇ upted data received in a timely manner. However, notwithstanding uncoirupted Mid timely data, the memory controller may be configured to still operate as though an asymmetric memory component will operate in an adverse manner (e.g., corrupted data and latency).
  • a remapping table may be configured to manage the access between different banks within a memory subsystem.
  • the remapping granularity may be finer or coarser either remapping chips (coarser) or even remapping sectors within banks (finer).
  • the address remapping table previously described remaps bits

Abstract

A memory controller (MC) is associated with a remapping table to enable access to content in a memory system that includes asymmetric memory. The MC receives a request for a memory read or an Input/Output (I/O) write from a central processing unit (CPU) for a physical address specified by the system's memory management unit (MMU). The CPU uses the MMU to manage memory operations for the CPU, by translating the virtual addresses associated with CPU instructions into physical addresses representing system memory or I/O locations. The MC for asymmetric memories is configured to process the MMU-specified physical addresses as an additional type of virtual addresses, creating a layer of abstraction between the physical address specified by the MMU and the physical memory address with which that address is associated by the MC.

Description

MANAGING MEMORY SYSTEMS CONTAINING COMPONENTS WITH ASYMMETRIC CHARACTERISTICS
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to U.S. Application Serial No. 60/981,284, entitled "MANAGING MEMORY SYSTEMS CONTAINING COMPONENTS WITH ASYMMETRIC CHARACTERISTICS", and filed on October 19, 2007.
BACKGROUND
Computer systems may move objects into memory for ready availability to one or more processors. Similarly, objects may be moved out of memory to enable other objects to re-use the space in memory.
BRIEF DESCRIPTIONS OF THE DRAWINGS
Fig. 1 is a block diagram of a central processing unit (CPU) that interfaces with a memory system through a memory controller.
Fig. 2 is a block diagram of disruption regions in asymmetric memory.
Fig. 3 is a block diagram of a remapping (remap) table associated with a memory controller.
Fig. 4 is an illustration of an addressing format used by a memory controller.
Fig. 5 is an illustration of banks and sectors within asymmetric memory.
Fig. 6 is a flow chart of a process by which a memory controller enables access to a memory system that includes asymmetric memory.
Fig. 7A is a flow chart of a process for managing storage of data in a memory system that includes asymmetric memory.
Figs. 7B-7E are diagrams of different configurations for a memory system that aligns banks enabling asymmetric memory to be written in an efficient manner.
Fig. 8A is a flow chart of a process by which data is queued in order to write the data to a memory system that includes asymmetric memory and symmetric memory.
Figs. 8B-8H are diagrams of different configurations for a memory system that copies data from a first region to a second region as an update to the data is being written to the second region.
SUMMARY
In one general sense, data that is stored in a memory system that includes asymmetric memory is managed. The asymmetric memory is configured to enable block write operations in which an address within a block of one of the asymmetric memory components affects the availability of other addresses within the block of the asymmetric memory component. A first disruption region that is associated with problematic read access of first addresses within the first disruption region during the block write operations to the first disruption region is identified within asymmetric storage. A second disruption region that is separate from the first disruption region, the second disruption region associated with problematic read access of second addresses within the second disruption region during the block write operations to the second disruption region, is identified within the asymmetric storage. Blocks within the first disruption region of asymmetric storage are associated with the second disruption region of asymmetric storage, where a first block, a second block, and a third block in the first disruption region are configured to include data that mirrors data within a first block, a second block, and a third block of the second disruption region. A first update to be written to the memory system is received. The first update is written to a fourth block of the second disruption region that follows a block of the second disruption region corresponding to the third block of the first disruption region. Read access to the first block, the second block, and the third block of the second disruption region is disabled while writing the first update to the fourth block of the second disruption region. Data is enabled to be read from the first block, the second block, and the third block of the first disruption region while writing the first update to the fourth block of the second disruption region. The first update is written to the fourth block of the first disruption region and read access is disabled to the first block, the second block, and the third block of the first disruption region while writing the fourth block to the first disruption region. Data is enabled to be read from the first block, the second block, and the third block of the second disruption region while writing the update to the fourth block of the first disruption region.
Implementations may include one or more of the following features. For example, after configuring a memory controller to enable access to the first block, the second block, the third block, and the fourth block from the second disruption region, a second update to a portion of data in the first block of the second disruption region may be received. A third update to a portion of data in the third block of the second disruption region may be received and the second update and third update may be written to the first block and the third block of the first disruption region. The memory management unit may be enabled to read the first block and the third block from the second disruption region and the data in the second block of the first disruption region may be preserved for subsequent access as the second update and the third update are being written to the first disruption region. The second disruption region may include a previous instance of mirrored content prior to writing the first update to the second disruption region. The first disruption region may be in use when the first update is received. The first block, the second block, the third block, and the fourth block in the first disruption region may be sequentially arranged. The fourth block in the second disruption region may be unutilized prior to writing the first update. The fourth block in the first disruption region and the second disruption region may use an identical offset for a memory controller-specified physical address offset. A memory controller may be enabled to read the first update from the second disruption region after the first update has been written to the fourth block of the second disruption region. A memory controller may be enabled to read the first block, the second block, and the third block from the second disruption region after the first update has been written to the fourth block of the second disruption region.
The data in the first block, the second block, and the third block of the second disruption region may be preserved during the writing of the first update to the fourth block of the second disruption region. A memory controller may be used that is configured to associate a memory management unit-specified physical address from a CPU with a first memory controller-specified physical address for the first disruption region and a second memory controller-specified physical address for the second disruption region. An identical block subaddress may be used for the first block in the first disruption region and the first block in the second disruption region. The first block, the second block, and the third block in the second disruption region may be sequentially arranged. Associating the blocks may include associating the blocks using a memory controller configured to interface with symmetric storage and the asymmetric storage. Associating the blocks may include associating the blocks using a memory controller configured to interface only with the asymmetric storage.
Writing the first update to the fourth block of the first disruption region, disabling read access to the first block, the second block, and the third block of the first disruption region while writing the fourth block to the first disruption region, and enabling data to be read from the first block, the second block, and the third block of the second disruption region while writing the update to the fourth block of the first disruption region may be performed before the second update is received.
In another general sense, data that is stored in a memory system that includes asymmetric memory may be managed. The asymmetric memory is configured to enable block write operations in which an address within a block of one of the asymmetric memory components affects the availability of other addresses within the block of the asymmetric memory component. A first disruption region that is associated with problematic read access of first addresses within the first disruption region during the block write operations to the first disruption region is identified within asymmetric storage. A second disruption region that is separate from the first disruption region and that is associated with problematic read access of second addresses within the second disruption region during the block write operations to the second disruption region is identified within the asymmetric storage. Blocks within the first disruption region are associated with the second disruption region, where a first block in the first disruption region is configured to include data that mirrors data within a first block of the second disruption region. A first update to be written to the memory system is received and the first update is written to a second block of the second disruption region. Read access to the first block of the second disruption region is disabled while writing the second block to the second disruption region. Data is enabled to be read from the first block of the first disruption region while writing the first update to the second block of the second disruption region. The first update is then written to the second block of the first disruption region. Read access to the first block of the first disruption region is disabled while writing the second block to the first disruption region and data is enabled to be read from the first block of the second disruption region while writing the update to the second block to the first disruption region.
Implementations may include one or more of the following features. For example, the second block in the second disruption region may be unutilized prior to writing the first update. The second disruption region may include a previous instance of mirrored content prior to writing the first update to the second disruption region. The first disruption region may be in use when the first update is received. The first block and the second block in the first disruption region may be sequentially arranged. The first block and the second block in the second disruption region may be sequentially arranged. The second block in the first disruption region and the second disruption region may use an identical offset for a memory controller-specified physical address offset. A memory controller may be enabled to read the first update from the second disruption region after the first update has been written to the second block of the second disruption region. A memory controller may be enabled to read the first block and the second block from the second disruption region after the first update has been written to the second block of the second disruption region. The data in the first block of the second disruption region may be preserved during the writing of the first update to the second block of the second disruption region. A memory controller may be used that is configured to associate a memory management unit-specified physical address from a CPU with a first memory controller-specified physical address for the first disruption region and a second memory controller-specified physical address for the second disruption region. An identical block subaddress may be used for the first block in the first disruption region and the first block in the second disruption region.
The various aspects, implementations, and features may be implemented using, for example, one or more of a method, an apparatus, a system, a tool or processing device for performing a method, a program or other set of instructions, an apparatus that includes a program or a set of instructions, and a computer program embodied in a tangible computer readable medium. The tangible computer readable medium may include, for example, instructions, software, images, and other data.
DETAILED DESCRIPTION
A recent increase in mobile telecommunications device usage has made asymmetric memory systems (e.g., NOR flash memory) more readily available. Leveraging this increased availability, our engineers are able to design systems that use asymmetric memory as a portion of a CPU's main memory. For example, NOR flash memory may be used in the main memory of a server acting as a search engine. However, asymmetric memory has pronounced access characteristics that give rise to the label "asymmetric." A system using asymmetric memory must account for the different access characteristics in order to maximize performance. For example, in the case of some types of NOR flash, the access characteristics for read operations can resemble access characteristics for DRAM (Dynamic Random Access Memory), where read operations for both NOR flash and DRAM require roughly comparable time to retrieve content requested in an address. In contrast, write operations for some types of NOR flash differ dramatically from write characteristics for DRAM. Unlike DRAM, some types of NOR flash cannot perform write operations to data in a random access manner initiated by a single CPU-level store instruction. In addition, NOR flash requires an additional long-running erase operation prior to a write. Thus, writing to NOR flash memory typically requires more time (several orders of magnitude longer) than writing to DRAM.
Another aspect of NOR flash-like asymmetric memories is that write operations are associated with a disruption region. A disruption region represents the area proximate to a particular address on a NOR flash, which area is actually (or deemed likely to be) associated with undesirable behavior when a memory controller writes or erases to the particular address. In one instance, writing to a particular address causes the NOR flash to corrupt data at other addresses in the disruption region. In another instance, the disruption region does not return timely results in response to requesting data from other addresses in the disruption region. In still another instance, accesses to other data in the disruption region return the correct content, but only after a nondeterministic delay. As a result, an application for which the system was not specifically designed cannot rely on access to the NOR flash memory system without experiencing difficulties. Thus, a system using NOR flash memory as part of a system's main memory uses a memory controller that protects other components in the system from experiencing performance challenges resulting from the access characteristics of NOR flash (or other asymmetric memory systems). For example, the memory controller defines or observes disruption regions within NOR flash, enabling data residing in a disruption region to remain available for reading from an original location with consistent read times, amidst writing of data associated with that same location to another memory region that will later serve read requests.
A memory controller (MC) may be associated with a remapping table to enable access to content in a memory system that includes asymmetric memory and symmetric memory. The MC receives a request for a memory read or an Input/Output (I/O) write from a central processing unit (CPU) for a physical address specified by the system's memory management unit (MMU). The CPU uses the MMU to manage memory operations for the CPU, by translating the virtual addresses associated with CPU instructions into physical addresses representing system memory or I/O locations. However, the MC for asymmetric memories is configured to process the MMU-specified physical addresses as an additional type of virtual addresses, creating a layer of abstraction between the physical address specified by the MMU and the physical memory address with which that address is associated by the MC. In this sense, the MC provides a uniform memory structure to the MMU and shields the CPU from the computational complexities required to implement a memory system that may include symmetric and asymmetric components.
A remapping table is used by the MC to enable its abstraction (or virtualization). Generally, the remapping table associates MMU-specified physical addresses (bus addresses) with the actual physical address for the particular memory location in the memory system. The remapping table is used to identify a first MC-specified physical address associated with the MMU-specified physical address, where the first MC-specified physical address is said to correspond to a first location within an asymmetric memory storage. For example, the MC may associate bus address A from the MMU with physical address D for a NOR flash memory system. With this structure, when data is to be written, an instruction from the CPU that implicates a physical address (e.g., physical address A) specified by the MMU is translated by the remap table of the MC as an address within a memory system. Assuming that NOR flash is used in the memory system, the address is used to identify a disruption region corresponding thereto. And, in keeping with the above example, the MMU-specified physical address, i.e., physical address A, is used by the MC to identify a first disruption region (e.g., a bank in NOR flash memory) within the asymmetric memory storage that includes location D. In representing a disruption region for NOR flash devices, the first bank of storage is associated with characteristics that include corrupted content or nondeterministic read latency as a result of attempting to read data from the first bank of storage at the same time as an I/O (block) write is being performed that involves the first bank of storage.
To further illustrate, the MC may determine that NOR flash physical addresses beginning with 01000 and lasting until 09000 represent the disruption region. For example, physical addresses at location D ("physical address D") may be associated with one or more of these specific addresses. The MC then executes the write instruction directed to physical address D by writing content from the first bank of storage to a second bank of storage which is currently unused by any CPU, and adding or modifying content, per the instruction, leaving the first bank unchanged and available for read access while doing so. For example, and again in keeping with the above example, data in physical address range 01000-09000 remains available for read access while it, and updates to it, are written to physical address range 11000-19000. In greater detail, as the contents from the first bank of storage are being written to the second bank of storage and until the contents have been written to the second bank of storage, the MC is configured to process read instructions from the MMU for content residing in the first bank of storage by reading from the first bank of storage. Returning to the example, while the data from physical addresses 01000-09000 are being copied to 11000-19000, the MC may process a request from the MMU to read physical address A by reading data from physical address 05000 instead of the duplicated and identical data residing in physical address 15000, which would represent a read to an impacted disruption region.
Yet, thereafter, in response to determining that the contents have been written to the second bank of storage, the remapping table is configured to associate the first MMU-specified bus address with a second MC-specified physical address related to the second bank of storage. For example, the MC may configure the remapping table to associate bus address A with physical addresses 11000-19000 in response to determining that the data from physical addresses 01000-09000 has been written. This would have the effect of processing subsequent reads to bus address A by reading data from physical address 15000.
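By way of a non-limiting illustration, the copy-then-switch behavior just described might be sketched in C as follows. The structure, field names, and the choice to treat the address ranges as hexadecimal values are assumptions made only for the sketch; they do not describe a particular implementation of the remapping table.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct remap_entry {
        uint64_t mmu_addr;       /* MMU-specified (bus) physical address        */
        uint64_t read_base;      /* MC-specified base currently serving reads   */
        uint64_t write_base;     /* MC-specified base receiving the block write */
        bool     copy_complete;  /* set once the contents have been written     */
    };

    /* Reads keep using the original bank until the copy has finished. */
    static uint64_t remap_for_read(const struct remap_entry *e, uint64_t offset)
    {
        return (e->copy_complete ? e->write_base : e->read_base) + offset;
    }

    /* Once the copy is done, the write bank becomes the bank that serves reads. */
    static void activate_new_bank(struct remap_entry *e)
    {
        e->copy_complete = true;
        e->read_base = e->write_base;
    }

    int main(void)
    {
        /* Bus address A maps to the 01000-09000 range; its copy goes to 11000-19000. */
        struct remap_entry a = { 0xA000, 0x01000, 0x11000, false };

        printf("read during copy: 0x%05llx\n",
               (unsigned long long)remap_for_read(&a, 0x4000));   /* 0x05000 */
        activate_new_bank(&a);
        printf("read after copy:  0x%05llx\n",
               (unsigned long long)remap_for_read(&a, 0x4000));   /* 0x15000 */
        return 0;
    }

In this sketch the only state change visible to readers is the single flag flip in activate_new_bank, which mirrors the idea that reads are redirected to the second bank only after the contents have been fully written.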
In another implementation, a mirror disruption region is created or maintained, which is proactively kept up to date with an original disruption region. As a result, application performance may be increased because less data may need to be transferred at the time an update is received corresponding to the address range of the original disruption region. When a write instruction for a MMU-specified physical address corresponding to a constituent data block within the original disruption region is received, one or more updates are made to a corresponding block within the region containing the mirrored copy, thus effecting the write instruction. Meanwhile, while the updates are being made to the mirrored copy, read access for blocks of the disruption region remains available. Once the updates to the mirrored copy are completed, the mirrored image is made available for read access. More specifically, the MC remap table replaces the constituent data block address with the address of the corresponding block, such that an ensuing read to the implicated MMU-specified physical address is directed to the mirrored, but now updated, data block.

Fig. 1 is a block diagram of a system 100 with a CPU 110 that interfaces with a memory system 150 through a MC 130. More precisely, the MC 130 is positioned, oriented, and configured to enable it to be leveraged in processing read and write requests from the MMU 120 to the memory system 150 while shielding the CPU 110 from challenges resulting from particular access characteristics associated with asymmetric memory 170. The CPU 110 includes logic configured to implement one or more processing operations. Examples of a CPU 110 may include x86-based processors from Intel Inc. and Advanced Micro Devices Inc. (AMD), and/or ARM(TM) processors. The CPU 110 may include one or more processor cores.
Generally, each CPU 110 interfaces with an operating system. The operating system, in turn, may interface with one or more applications. These applications may include a search engine, business analytics, data mining, or a database management application. Each of these applications may be compiled (or interpreted) into different machine-level instructions for execution on the CPU 110. Although only one CPU 110 is shown, multiple CPUs may be used. Each CPU 110 supports an instruction set with a variety of instructions. These instructions may include, among other operations, register shifts, arithmetic operations, and memory operations. Examples of the memory operations may include random access read and write operations and block (I/O) read and write operations.
As shown, the CPU 110 includes a MMU 120. Generally, the MMU 120 is configured to manage a physical address space for the CPU 110. As the CPU 110 is executing a program, the CPU 110 may request to read data from a particular address and write data to a particular address. More precisely, the MMU 120 may be configured to receive a virtual address from the CPU 110 and translate the virtual address into a physical address (i.e., a MMU-specified physical address). In the case of a read, the CPU 110 may be configured to instruct the MMU 120 to retrieve data from a particular MMU-specified physical address. Other systems (e.g., the MC 130) may, in turn, process the MMU-specified physical address as an additional type of virtual address. In another configuration where a translation element of an MMU is separated out from the portion of the CPU that is responsible for interacting with the memory subsystem, the MMU may be configured to obtain the physical address associated with the virtual address. The MMU (or another entity in the CPU) then may be instructed to retrieve data from the MMU-specified physical address.
The MC 130 is logically oriented as an intermediary system to enable the CPU 110 to interface with a memory system 150. More precisely, the MC 130 is a system configured to receive read and write instructions with a MMU-specified physical address from the MMU 120, look up a MC-specified physical address associated with the MMU-specified physical address, and perform the requested read and/or write instruction on the block(s) of physical memory corresponding to the MC-specified physical address.
The MC 130 includes logic (hardware and/or software) configured to manage the placement of and movement of data within a memory system 150. The MC 130 may be configured to dynamically process instructions based on a determination of whether the data and/or address is associated with symmetric memory or asymmetric memory. Similarly, the MC 130 may be configured to process read instructions in a first way, and process write instructions in a second way. For example, the MC 130 may be configured to permit read operations from the MMU that specify a particular address to leverage or otherwise operate on a first address within asymmetric memory, while concurrently processing write operations from the MMU that specify a particular address to leverage or otherwise operate on a second address within symmetric memory.
The MC 130 includes a physical interface to the memory system 150. The physical interface is configured to automatically exchange physical address information as the MC 130 interfaces with the memory system 150.
The MC 130 includes a remapping table 140 that stores an association between a MMU-specified physical address and a MC-specified physical address. The remapping table 140 may associate multiple MC-specified physical addresses with a MMU-specified physical address. For example, the remapping table 140 may be configured to indicate that writes to the disruption region should be directed to a "new" physical address range for the MMU, while reads from the disruption region should be directed to the "old" physical address range. Moreover, the remapping table 140 may indicate that a particular MMU-specified physical address is associated with a disruption region. For example, high-order address bits and some low-order bits in the MMU-supplied physical address may be analyzed to indicate which bank is being used. Alternatively, the MC may include logic (or include another column in a software table) that identifies the bank being used. Configuring the MC to have a bank identifier readily available may be used to reduce the processing burden of identifying a bank, for example, when performing write operations.

The memory system 150 may include symmetric memory 160 and asymmetric memory 170. In one configuration, the memory system 150 includes only asymmetric memory 170 that is behind the MC 130. For instance, symmetric memory may be configured to interface with a first MC and asymmetric memory may be configured to interface with a second MC. The first MC may be configured to not include the remapping table and other components that are adapted to support the operations associated with the asymmetric memory 170. Alternatively or in addition, the MC 130 may be configured to support both symmetric memory 160 and the asymmetric memory 170. The memory system 150 may be packaged as one or more DIMMs (Dual Inline Memory Modules) and configured to support DIMM protocols, signaling, interconnects, and physical interfaces.
The symmetric memory 160 includes one or more memory systems where read characteristics have similar properties to the write characteristics. Examples of symmetric memory include DRAM, and other random access memory technologies.
The asymmetric memory 170 includes one or more memory systems where read characteristics and write characteristics are dissimilar. For example, some types of asymmetric memory have write latencies that are orders of magnitude greater than the read latencies. Examples of asymmetric memory may include NOR flash. With some types of asymmetric memory, an interface is used to (among other things) require an entire region of content to be rendered inaccessible to read requests whenever any constituent block of the region is written, for example, using an I/O block write, instead of a smaller, random access write to the specific update location.
In configurations where the memory system 150 resides in DIMM packaging, the DIMM may be configured to dynamically process read and write instructions. For example, in a read mode, one or more DIMMs may be configured to receive instructions to access content residing at a particular physical address, and provide requested content to the processor through the MC. Generally, in a write mode, the DIMM is configured to receive data to be written with a physical address. Depending on whether the physical address represents flash or DRAM, the data may be written in a random access manner (e.g., a word or byte) or in a block (e.g., a 4 Megabyte or a 4 kilobyte block). In one configuration, the DIMM is entirely made up of one type of memory (e.g., DRAM or NOR flash). In another implementation, the DIMM includes a hybrid of both DRAM and flash and relies on an internal or external MC to perform the different operations required to implement a hybrid memory system. And, although one or more configurations were described where a hybrid and/or dedicated configuration was used, a different configuration may be used.
Fig. 2 is a block diagram of the disruption regions in asymmetric memory 200. In asymmetric memory, disruption regions 210-240 represent four areas of memory where a write to one address in the region causes the proximate memory addresses to demonstrate disruptive behavior. In one instance, the disruption region represents other memory cells that rely on a common internal write controller within the asymmetric memory. In another instance, the disruption region represents a range of addresses, logically or physically grouped.

Fig. 3 is a block diagram of a remapping (remap) table 300 associated with a MC. As shown, the remapping table 300 includes a list of MMU-specified physical addresses 310 and a corresponding list of MC-specified physical addresses 320. Generally, the MMU-specified physical addresses refer to those addresses referenced by the MMU within the CPU. The list of MC-specified physical addresses represents the physical addresses within a memory system. As such, the MC is configured to process the MMU-specified physical address as an additional type of virtual address by mapping that address to the corresponding MC-specified physical address shown at reference numeral 320.
More specifically, the remapping table 300 illustrates how different physical addresses may be associated with a MMU-specified physical address, for example, based on whether the MMU provides a read or write instruction. As shown in the first entry, MMU-specified physical address 100 is associated with a read address 1100 and write address 2100. Similarly, MMU-specified physical address 200 is associated with read address 1110 and write address 3100. The remapping table 300 may be configured to associate different physical addresses based on whether the MMU (bus) address is associated with a disruption region. In one configuration, the MC is configured to associate identically sized regions with both the read and write translations. Selection of a region size may have an impact on the "alignment" of addresses (e.g., contents of the lower bits in a physical address). Thus, using regions of a consistent size may permit the use of reduced complexity addressing operations as remapping operations are performed. For example, the addressing operations may be limited to modifications to certain portions of the address.
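For purposes of illustration only, the two example entries of remapping table 300 might be represented in C as shown below; the array layout, the fixed table size, and the convention of returning 0 for a missing entry are assumptions made for the sketch rather than features of the table itself.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    enum op { OP_READ, OP_WRITE };

    struct remap_row {
        uint32_t mmu_addr;   /* MMU-specified physical address       */
        uint32_t read_addr;  /* MC-specified address used for reads  */
        uint32_t write_addr; /* MC-specified address used for writes */
    };

    /* The two example entries described for remapping table 300. */
    static const struct remap_row table300[] = {
        { 100, 1100, 2100 },
        { 200, 1110, 3100 },
    };

    /* Returns the MC-specified address for the requested operation, or 0
     * when the MMU address has no entry (the 0 convention is assumed). */
    static uint32_t remap300(uint32_t mmu_addr, enum op kind)
    {
        for (size_t i = 0; i < sizeof table300 / sizeof table300[0]; i++) {
            if (table300[i].mmu_addr == mmu_addr)
                return kind == OP_READ ? table300[i].read_addr
                                       : table300[i].write_addr;
        }
        return 0;
    }

    int main(void)
    {
        printf("read  of 100 -> %u\n", remap300(100, OP_READ));   /* 1100 */
        printf("write to 200 -> %u\n", remap300(200, OP_WRITE));  /* 3100 */
        return 0;
    }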
The remapping table may be implemented using a variety of structures. In one configuration, both the read/write translations are found in the same remapping table residing in a single, intermediary device. In another configuration, both the read/write translations are found in the same remapping table residing in a device residing on a DIMM. In yet another configuration, several integrated read/write translations are found within a single device, such as a configuration where a single remapping table is configured to support several banks within a DIMM and several remapping tables are used.
Alternatively, the read translation may be implemented as part of the memory logic while the table used for write translations is implemented as a software-managed table residing, for example, with an application, a hypervisor, an operating system, a CPU, or a MC. Multiple remapping tables for read and/or write translations may be used that reside in different locations, such as a MC, a DIMM, or on multiple devices within a DIMM. A hierarchical structure of remapping tables may be used so that a MC includes a first remapping table that interfaces with a second remapping table that acts as the interface to several devices within a DIMM.
Fig. 4 is an illustration of the addressing format 400 used by a MC. As shown in format 400, the physical address has three components. The low bits <n...0> are the offset of the data within a given bank and sector. Bits <m...n+1> address a specific sector within a bank, and bits <p...m+1> address a specific bank within the memory subsystem. In one configuration of an individual flash chip, sectors may be allocated with 128K bytes, and there may be anywhere from 256 to 4096 sectors per bank and 1 to 16 banks per chip. The memory subsystem may additionally be designed such that a plurality of chips/modules are accessed in parallel with a single address, permitting a logical sector to be constructed from the aggregation of corresponding physical sectors in the plurality of chips/modules. The plurality of chips/modules may be used to increase <n>, the number of bits within a sector, because each logical sector has additional storage for each address. Additional flash chips may also be aggregated within the memory subsystem but given different addresses. This also increases the number of banks <p...m+1> within the memory subsystem.
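As a minimal sketch of decoding the <bank | sector | offset> format of Fig. 4, the field widths below assume 128K-byte sectors (17 offset bits), 4096 sectors per bank (12 sector bits), and 16 banks per chip (4 bank bits); these values pick one point in the ranges given above, and actual devices may use different widths.

    #include <stdint.h>
    #include <stdio.h>

    #define OFFSET_BITS 17   /* bits <n...0>:   offset within a 128 KB sector */
    #define SECTOR_BITS 12   /* bits <m...n+1>: sector within a bank (4096)   */
    #define BANK_BITS    4   /* bits <p...m+1>: bank within the subsystem     */

    struct decoded_addr { uint32_t bank, sector, offset; };

    /* Split a MC-specified physical address into its bank, sector, and offset. */
    static struct decoded_addr decode(uint64_t mc_phys)
    {
        struct decoded_addr d;
        d.offset = (uint32_t)(mc_phys & ((1u << OFFSET_BITS) - 1));
        d.sector = (uint32_t)((mc_phys >> OFFSET_BITS) & ((1u << SECTOR_BITS) - 1));
        d.bank   = (uint32_t)((mc_phys >> (OFFSET_BITS + SECTOR_BITS)) &
                              ((1u << BANK_BITS) - 1));
        return d;
    }

    int main(void)
    {
        struct decoded_addr d = decode(0x12345678ULL);  /* arbitrary example address */
        printf("bank %u, sector %u, offset 0x%05x\n", d.bank, d.sector, d.offset);
        return 0;
    }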
Fig. 5 is an illustration of banks and sectors within asymmetric memory 500. More precisely, Fig. 5 shows the diagram of an example of a NOR flash chip. The NOR flash receives commands from the MC over a chip interface (not shown). A NOR flash chip includes a number of banks, as illustrated by banks 510a ... 510n. Each bank includes a number of sectors <1...n>. Each sector includes some number of words. The MC may be configured to directly address and read a specified word from any sector in any bank using, for example, random access instructions. As indicated above, writing data into NOR flash may be more problematic. That is, writing to NOR flash may require special operations to accommodate the particular access characteristics of NOR flash. First, in one configuration, the specific sector (for example, sector 2 in bank 510a) with the word to be written must be erased. Then, the "new" data may be written into the specified physical address within the sector. Other new data then may be added to a proximate location within the specified sector without requiring additional erases because the previous data residing at the proximate location was previously erased. Alternatively, due to some limitations in some configurations of NOR flash chips, if one sector (for example, sector 2 within a bank 510a) is being erased or written, data from the other sectors (1, 3...n) within bank 510a are rendered contemporaneously unavailable for reading during the erase/write operation involving sector 2 of that bank 510a, while the other banks within the memory subsystem remain available for reading. In this case, the bank 510a represents an example of the disruption region.
Fig. 6 is a flow chart 600 of a process by which a MC enables access to a memory system that includes symmetric and asymmetric memory. More precisely, flow chart 600 illustrates a process by which a MC uses a remapping table in order to provide access to content in a memory system that includes asymmetric memory, where the remapping table includes physical addresses for storage in both symmetric and asymmetric memory. Alternatively, the remapping table in the MC may be configured to only interface with asymmetric memory and physical addresses associated with the asymmetric memory. Generally, the operations described in flow chart 600 are implemented on a system, such as the MC 130 described with respect to system 100. However, the operations may be performed by other systems in other contexts where a different architecture is being used.
Initially, a MC receives, from a MMU within a central processing unit (CPU), a request for an Input/Output (I/O) write to a first MMU-specified physical address (610). For example, the MMU may instruct the MC to write VALUE A to MMU-specified physical address 1.
The MC accesses a remapping table associated with a MC (620). Accessing a memory table may include referencing a memory structure that maintains an association between a list of MMU-specified physical addresses and a list of MC-specified physical addresses. The MC uses the remapping table to identify a first MC-specified physical address associated with the first MMU-specified physical address, the first MC-specified physical address corresponding to a first location within asymmetric memory storage (630). For example, the MC may look up MMU-specified physical address 1 and determine that MMU-specified physical address 1 refers to MC-specified physical address 101, an address that maps to asymmetric storage.
The MC may be configured to look up the remapping table using 38-bit addressing offsets derived from 40-bit MMU physical addresses (in the range 0 - 256 GB). As a result of the lookup, the 40-bit MMU physical addresses are translated into 39-bit NOR flash physical addresses.
An example of the translation may include a configuration where bits 37-29 of the addressing offset represent a 9-bit index in the remapping table. A lookup in the remapping table returns a 10-bit address. The 10-bit address becomes the high-order bits of the MC physical address. For example, a MMU physical address 0x0400001000 may get translated into memory-controller physical address
0x0840001000. An alternate memory-controller physical address with which the same range may be associated is 0x0880001000.
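A hedged sketch of the example translation just given follows; the table contents are an assumption chosen solely so the code reproduces the sample result, and the helper names are illustrative.

    #include <stdint.h>
    #include <stdio.h>

    #define INDEX_SHIFT 29
    #define INDEX_MASK  0x1FFu                       /* 9-bit index from bits 37-29 */
    #define LOW_MASK    ((1ULL << INDEX_SHIFT) - 1)

    static uint64_t remap_high[512];                 /* one entry per 9-bit index */

    /* The looked-up entry supplies the high-order bits of the memory-controller
     * physical address; the low 29 bits pass through unchanged. */
    static uint64_t translate(uint64_t mmu_offset)
    {
        uint32_t index = (uint32_t)((mmu_offset >> INDEX_SHIFT) & INDEX_MASK);
        return (remap_high[index] << INDEX_SHIFT) | (mmu_offset & LOW_MASK);
    }

    int main(void)
    {
        /* Populate the one entry needed to reproduce the example above: offset
         * 0x0400001000 selects entry 32, and the entry value is chosen so the
         * result is 0x0840001000. */
        remap_high[(0x0400001000ULL >> INDEX_SHIFT) & INDEX_MASK] =
            0x0840001000ULL >> INDEX_SHIFT;

        printf("0x%010llx -> 0x%010llx\n", 0x0400001000ULL,
               (unsigned long long)translate(0x0400001000ULL));
        return 0;
    }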
More sophisticated translations are also possible. Specifically, a translation that combines a subset of the high-order bits with the low-order bits may be used to generate multiple offsets into a collection of DIMM modules. For example, bits 37-31 of the addressing offset may be concatenated with bits 7-6 to obtain the 9-bit index into the remap table. In this configuration, MMU physical address 0x0400000080 may get translated into memory-controller physical address 0x0840000000. An alternate memory-controller physical address for the same range may be associated with address 0x0880000000.

The MC identifies a first bank of storage within the asymmetric memory storage that includes the first location, the first bank of storage including a disruption region that is associated with characteristics that include corrupted content or nondeterministic read latency as a result of attempting to read data from the first bank of storage as an I/O write is being performed that involves the first bank of storage (640). In one configuration (and as described in Figs. 4 and 5), identifying the first bank of storage includes analyzing the MC-specified physical address to identify the bank. For example, in Fig. 5, bits <p...m+1> identify the bank. And in keeping with the example earlier introduced, when the MC-specified physical address of 0x0840001000 was identified, 0x84 is identified as the bank. The access characteristics for the disruption region need not necessarily cause a problem, nor must the access characteristics be inconsistent. Rather, the disruption region relates to the potential for problematic access characteristics.
The MC then writes contents from the first bank of storage to a second bank of storage (650). For example, the MC may load 4MB of data from a first bank (or block) on a chip (or collection of chips) to a second bank of chips (or a collection of chips). The second bank represents a different disruption region than the first bank. In addition to identifying a different disruption region, other criteria may be used to identify a different bank. For example, the MC may use a policy that minimizes a number of erases that are required, e.g., using a ping-pong scheme that returns to the same bank until storage in the bank is exhausted. Alternatively, the policy may be configured to identify another bank in the same physical device in order to minimize the cost of the data copy. If a write to block 1 caused the first bank to be written to the second bank, a read of a location in a second block (e.g., block 2), which is immediately adjacent to block 1 and which is within the same disruption region, continues to be serviced by bank 1 until the block 1 related write is completed to bank 2. Thereupon, reads to such locations of block 2 are serviced by the updated block 2 now available in bank 2.

In other words, between a time when the contents from the first bank of storage begin to be written to the second bank of storage and before a time when the contents to be written have been written to the second bank of storage, read instructions from the MMU for requested content associated with a second MMU-specified physical address may be serviced by reading the requested content from the first bank of storage (660). Therefore, the MC may be configured to operate in two parallel modes in response to determining that the CPU is accessing content in the first bank. In a "read" mode, the MC is configured to process read instructions addressed to the first bank by reading from the first bank. In a "write" mode, the MC is configured to place the write instruction in a queue until the previous block write instruction has been initiated and/or completed. Particularly in applications where read instructions occur far more frequently than write instructions, the adverse impact on application performance may be reduced by configuring read instructions to be performed from a first bank while configuring write instructions to be queued.

The MC configures the remapping table to associate the first MMU-specified physical address with a second MC-specified physical address corresponding to the second bank of storage after determining that the contents have been written to the second bank of storage (670). That is, in response to determining that the data in the first bank has been written to the second bank, the MC now configures the remapping table to associate the MMU-specified physical address with the MC-specified physical addresses for the second bank.
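The two parallel modes described above might be sketched as follows; the migration structure, the fixed queue depth, and the bank base addresses are all assumptions made for the illustration, and the sketch omits the actual copy and erase operations.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define QUEUE_DEPTH 16

    struct pending_write { uint64_t mmu_addr; uint64_t value; };

    struct migration {
        uint64_t read_bank;     /* bank currently serving reads (660)       */
        uint64_t write_bank;    /* bank receiving the copied contents (650) */
        bool     in_progress;   /* a block write is in flight               */
        struct pending_write queue[QUEUE_DEPTH];
        int      queued;
    };

    /* "Read" mode: reads for the disruption region stay on the original bank. */
    static uint64_t route_read(const struct migration *m, uint64_t offset)
    {
        return m->read_bank + offset;
    }

    /* "Write" mode: writes arriving during the block write are queued. */
    static bool submit_write(struct migration *m, uint64_t addr, uint64_t value)
    {
        if (m->in_progress) {
            if (m->queued == QUEUE_DEPTH)
                return false;                  /* caller retries later */
            m->queue[m->queued].mmu_addr = addr;
            m->queue[m->queued].value = value;
            m->queued++;
            return true;
        }
        m->in_progress = true;                 /* start copying the bank */
        return true;
    }

    /* Step 670: after the copy completes, the second bank serves reads. */
    static void copy_complete(struct migration *m)
    {
        m->read_bank = m->write_bank;
        m->in_progress = false;
    }

    int main(void)
    {
        struct migration m = { .read_bank = 0x01000, .write_bank = 0x11000 };
        submit_write(&m, 0x100, 42);           /* starts the migration */
        submit_write(&m, 0x200, 43);           /* queued behind it     */
        printf("read during copy: 0x%llx\n",
               (unsigned long long)route_read(&m, 0x4000));   /* 0x05000 */
        copy_complete(&m);
        printf("read after copy:  0x%llx, queued writes: %d\n",
               (unsigned long long)route_read(&m, 0x4000), m.queued);
        return 0;
    }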
Other operations that are not shown may be performed in conjunction with the operations shown in flow chart 600. For example, if the MC received instructions to write additional data into the second bank, the MC identifies another unused bank (e.g., bank 510x in Fig. 5) and copies the data from the second bank (e.g., bank 510n) into the third bank (e.g., bank 510x). The MC then may be configured to write the additional data into the third bank (e.g., bank 510x). Thus, in the original example where VALUE A is mapped first into <p'...m+1'> and the remapping table is reconfigured to map VALUE A into <p''...m+1''> (writing VALUE A to the second bank), the second write operation results in VALUE A being mapped to <p'''...m+1'''> in the third bank (bank 510x). Here, the third bank may be the first bank.
In one configuration, the system is configured to switch back-and-forth between a first bank and a second bank. As content is being written from a first bank to a second bank, the first bank and the second bank share all or nearly all of their data. Once an update is performed to one of the banks, however, or as the second bank is activated into service via changing the remapping table in the above-described configuration, the data added into the second bank may be copied back into the first bank, as a result of receiving a second write instruction or otherwise. Because the first bank retains the previously-stored data, the first bank then is reconfigured into a copy of the second bank with minimal updating. After the data added to the second bank is copied into the first bank, the data being written via the second write instruction is added into sectors within the first bank, and the remapping table is changed to map the MMU-specified physical address back into the first address (e.g., <p'...m+1'>, where <p'...m+1'> points back to the first bank). Thereupon, the new data (i.e., the data from the second write instruction) is added to the second bank, and the second bank again becomes a copy of the first bank. The copies and writes between the first and second banks may be repeated.
Insofar as the write operations only involve the writing of new data (e.g., an update) to a portion of a sector within the bank, the use of coordinated writes between aligned banks may be used to reduce the computational burden in transferring data between two different banks. That is, the amount of processing and burden involved with a block write from a first bank to a second bank may be reduced if the MC only must coordinate the update to a sector within a bank.
Fig. 7A is a flow chart 700A of a process by which data that is stored in a memory system that includes asymmetric memory is managed. More particularly, flow chart 700A illustrates a process by which the magnitude of write instructions is reduced by associating a first bank of asymmetric storage with a second bank of asymmetric storage, where at least some of the sectors in the first and second bank mirror one another. Although the operations described in flow chart 700A are described as being performed on a MC, such as the MC 130 described with respect to system 100, the operations shown in flow chart 700A may be performed on other systems. For example, the operations may be performed by other systems in other contexts where a different architecture is being used.
In any event, the MC identifies a first bank of asymmetric storage, the first bank representing a first disruption region (710A). For example, identifying a first bank of storage may include managing MC-specified physical addresses for a MC that includes NOR flash memory systems. A bank representing a disruption region may be identified by configuring the MC to use certain portions of the MC-specified physical addresses as an indication of the bank.
The MC identifies a second bank of asymmetric storage, the second bank representing a second disruption region that is separate from the first disruption region (720A). That is, either of the first or the second bank remains accessible for reads while the other bank is being written to. Note that a bank may represent a logical or physical region. For example, in one configuration, the bank represents a physical module. In another configuration, the bank represents a region within a chip. In still other configurations, the bank represents multiple regions across multiple chips.

The MC associates the first bank of asymmetric storage with the second bank of asymmetric storage, with a first sector, a second sector, and a third sector in the first bank including data that mirrors data within a first sector, a second sector, and a third sector of the second bank (730A). The MC may be configured to coordinate read and write operations so that writes always occur to one of the two banks, and so that one of the banks is available to support at least some read operations while the other bank is unavailable due to a write operation being performed.
The MC receives a first update to be written to the memory system (740A). Receiving an update may include receiving a command to update a data structure associated with a MMU-specified physical address (acting as a virtual address) with another entry and/or replacing a particular entry in the data structure at the MMU-specified physical address with an updated entry. Updating the data structure may include preparing a batch of writes to the data structure. For example, insofar as different and independent addresses are being updated, the MC may queue several operations. The writes (e.g., updates) need not be of the same size. A first write may relate to a record of a first size while a second write may relate to a record of a second size.
The MC writes the first update to a fourth sector of the second bank that follows the third sector of the first bank (750A). For example, the MC may determine that addresses 001-003 are being used in a first bank, and that the next available free space begins at address 004 (the fourth sector). Therefore, the MC instructs the second bank, fourth sector, to store the first update. In another example, the MC may determine that the fourth sector in a ten sector (001-010) search index needs to be updated with the first update. The MC instructs the second bank, fourth sector, to store the update to the search index.

The MC disables read access to the first sector, the second sector, and the third sector of the second bank while writing the fourth sector to the second bank (760A). That is, the MC may be configured to protect an application from experiencing difficulties by disabling read access to the disruption region caused by writing the first update to the second bank.
The MC enables data to be read from the first sector, the second sector, and the third sector of the first bank while writing the update to the fourth sector of the second bank (770A). Because the disruption region can be quite large (e.g., 1 Gigabyte), and the impact on an application of disrupting read access to a large amount of data may be severe, the MC may continue to provide read access to content otherwise unavailable due to the write operation being performed to the second bank. Therefore, since the first, second, and third sectors have not been changed, the MC may continue to provide application read access to those sectors using the first bank.
Thereafter, the MC writes the first update to the fourth sector of the first bank (780A). The MC disables read access to the first sector, the second sector, and the third sector of the first bank while writing the fourth sector to the first bank (790A). That is, because reading the first, second, and third sectors from the first bank may cause the previously-alluded to disruption, the potential for undesirable performance is reduced by disabling read access to those sectors from the first bank. The MC enables data to be read from the first sector, the second sector, the third sector, and the fourth sector of the second bank while writing the update to the fourth sector to the first bank (795A). Thus, the first and second banks are now mirrored with identical (or substantially similar) data in each bank.
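Purely as an illustrative sequence, not a description of the disclosed controller, the mirrored-bank update of flow chart 700A might be modeled as below: the update lands in the idle bank first while reads stay on the other bank, the roles are then reversed, and both banks end up identical. The tiny sector sizes and the single readable flag per bank are assumptions made to keep the sketch short.

    #include <stdbool.h>
    #include <string.h>
    #include <stdio.h>

    #define SECTORS      4
    #define SECTOR_BYTES 8          /* tiny sectors, purely for illustration */

    struct bank {
        char sectors[SECTORS][SECTOR_BYTES];
        bool readable;              /* false while the bank is a disruption region */
    };

    static void write_sector(struct bank *target, struct bank *other,
                             int sector, const char *data)
    {
        target->readable = false;   /* 760A/790A: disable reads to this region     */
        other->readable  = true;    /* 770A/795A: the mirror keeps serving reads   */
        memcpy(target->sectors[sector], data, SECTOR_BYTES);
        target->readable = true;    /* block write finished */
    }

    int main(void)
    {
        struct bank first = { .readable = true }, second = { .readable = true };

        /* 750A-770A: the update goes to the fourth sector of the second bank. */
        write_sector(&second, &first, 3, "update ");
        /* 780A-795A: the same update is then applied to the first bank. */
        write_sector(&first, &second, 3, "update ");

        printf("banks mirrored: %s\n",
               memcmp(first.sectors, second.sectors, sizeof first.sectors) == 0
                   ? "yes" : "no");
        return 0;
    }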
Figs. 7B-7E are diagrams of different configurations 700B-700E for a memory system that aligns banks enabling asymmetric memory to be written in a more efficient manner. More precisely, memory systems 700B-700E illustrate how two regions (e.g., banks or disruption regions) may be aligned so that as content is written from a first bank to a second bank, the first bank enables read access to content within the first bank. As shown in configuration 700B, memory system 700B includes first region 710B, second region 715B, first region 720B, and second region 725B. Generally, first region 710B is aligned with second region 715B and first region 720B is aligned with second region 725B. Aligning a first region (e.g., a read region) with a second region (e.g., a write region) enables the MC to support read instructions while a write instruction is being performed to the write region. The designation of read and write labels to a region then is reversed once the data has been written from the read region to the write region.
Configuration 700C illustrates how a MC processes an instruction from the MMU relative to an internal remapping table and the configuration of first region 710B and second region 715B. In particular, Memory Controller Actions indicates that the MC has received an instruction to store an update for MMU-specified physical address D. As the instruction to store the update is received, first region 710B is active with three sectors (1, 2, 3), and second region 715B is not active, but mirrors the first region 710B with sectors 1, 2, and 3. As a result of receiving the instruction to store the update, the MC allocates the next available memory, which the MC indicates is MC-specified physical address, sector 4. The MC then allocates the update from MMU-specified physical address D to MC-specified physical address, sector 4.
Configuration 700D illustrates how the first region remains active to support requests for MMU-specified addresses A, B, and C from MC-specified physical addresses in first region 710B, sectors 1, 2, and 3. The second region 715B is now a disruption region as MMU-specified address D is being written to MC-specified physical addresses in second region 715B, sector 4.
Configuration 700E then illustrates a subsequent configuration where the second region becomes active to support requests for MMU-specified addresses A, B, C, and D from MC-specified physical addresses in second region 715B, sectors 1, 2, 3, and 4. The first region 710B is now a disruption region as MMU-specified address D is being written to MC-specified physical addresses in first region 710B, sector 4.
Fig. 8A is a flow chart 800A of a process by which data is queued in order to write the data to a memory system that includes asymmetric memory. Initially, a MC receives an instruction to write a value to a first MMU-specified physical address (810A). The MC determines that the first MMU-specified physical address is associated with a MC-specified physical address for asymmetric memory and queues the write instruction (820A). The MC then copies the data from a first region proximate to the MC-specified physical address that is an active region to a second region (830A). The MC then modifies the remapping table so that read access to the content of the first region is still enabled, and writes the value to the second region (840A). Alternatively, the first region may already be active for reading. Once the writing is completed, the MC reconfigures the remapping table to identify the second region as the active region for the content previously stored in the first region and the value associated with the first MMU-specified physical address (850A).
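One possible handling of the queueing step, sketched under assumptions, is a small FIFO in which writes that arrive while a region copy is in progress wait until the remapping table has been switched to the newly active region; the fixed depth, the global state, and the function names are all illustrative rather than features of the process of Fig. 8A.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_PENDING 8

    struct write_req { uint64_t mmu_addr; uint64_t value; };

    static struct write_req pending[MAX_PENDING];
    static int head, tail, count;
    static bool copy_in_progress;

    /* 820A: the write is queued when it targets asymmetric memory. */
    static bool enqueue_write(struct write_req req)
    {
        if (count == MAX_PENDING)
            return false;
        pending[tail] = req;
        tail = (tail + 1) % MAX_PENDING;
        count++;
        return true;
    }

    /* 850A: called once the remapping table identifies the second region as
     * active; the next queued write (if any) can then start a new copy. */
    static bool start_next_write(struct write_req *out)
    {
        copy_in_progress = false;
        if (count == 0)
            return false;
        *out = pending[head];
        head = (head + 1) % MAX_PENDING;
        count--;
        copy_in_progress = true;
        return true;
    }

    int main(void)
    {
        struct write_req r = { 0x100, 7 }, next;
        copy_in_progress = true;             /* a region copy is already running */
        enqueue_write(r);                    /* 820A: queue the new write        */
        if (start_next_write(&next))         /* 850A: previous copy finished     */
            printf("starting queued write to 0x%llx\n",
                   (unsigned long long)next.mmu_addr);
        return 0;
    }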
Figs. 8B-8H are diagrams of different configurations 800B-800H for a memory system that copies data from a first region to a second region as an update to the data is being written to the second region. More precisely, configurations 800B-800H illustrate how a MC is configured to write the contents from an active region to a second region as a result of receiving a write instruction. In configuration 800B, four regions (810B-840B) are shown with region 810B being active and regions 820B-840B being empty. Configuration 800B illustrates that '1', '2', and '3' are already stored in region 810B and that the MC has received a write instruction to write 2A to storage. Generally, labels such as '1', '2', and 2A refer to data that is stored or data that will be written into asymmetric memory. In one configuration, the label represents the data itself. In another configuration, the label represents a MC-specified address or a MMU-specified physical address.
As a result of receiving the write instruction, configuration 800C illustrates a progression from configuration 800B where region 810C remains active to support access to '1', '2', and '3'. Configuration 800C illustrates that '1' has been copied from region 810C to region 820C and that region 820C is now identified as a disruption region (e.g., bank). In the next operation, in addition to writing '1' to region 820D, the MC writes '2A' to region 820D. As the write is being performed, an instruction to write '4' is received. Because configuration 800D indicates that a write is being performed, the MC queues '4' in configuration 800E. Configuration 800E illustrates that the write has not yet been completed, and region 820E remains inactive as '3' is being copied to region 820E. As a result, region 810E remains active to support access to '1' and '3'. Fig. 8F indicates that region 820F is active after '3' has been copied to region 820F. In order to write '4', configuration 800F indicates that '1', '2A', and '3' are copied from region 820F to region 830F, where region 830F becomes a disruption region. In addition to copying '1', '2A', and '3' to region 830G, the MC writes '4' to region 830G. Once the write is complete, configuration 800H indicates that region 830H becomes active.
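The sequence of configurations 800B-800H can be traced with a self-contained simulation. The bank array, the pending-write queue, and the label strings below are illustrative assumptions; the sketch only demonstrates that the previously active bank keeps serving reads while the next bank absorbs the copy and the queued update.

```c
/*
 * Self-contained simulation of the trace in configurations 800B-800H.
 * Bank numbering (810..840), the pending-write queue, and the label
 * strings are illustrative assumptions, not structures from the patent.
 */
#include <stdio.h>
#include <string.h>

#define BANKS 4
#define SLOTS 8

static const char *bank[BANKS][SLOTS];   /* four NOR banks (810B..840B)   */
static int count[BANKS];                 /* valid entries per bank         */
static int active = 0;                   /* bank currently serving reads   */

static const char *pending[4];           /* writes queued during a copy    */
static int npending;

static void show(const char *when)
{
    printf("%-24s active=%d:", when, active);
    for (int i = 0; i < count[active]; i++)
        printf(" %s", bank[active][i]);
    printf("\n");
}

/* Copy the active bank into the next bank, applying one update (either a
 * replacement such as "2A" for slot 1, or an append such as "4"), then
 * make the destination bank active.  The source bank stays readable for
 * the whole copy, which is the point of the scheme.                       */
static void apply(const char *label, int replace_slot)
{
    int dst = (active + 1) % BANKS;      /* the new disruption region      */
    count[dst] = 0;
    for (int i = 0; i < count[active]; i++)          /* copy '1', '2', '3' */
        bank[dst][count[dst]++] =
            (i == replace_slot) ? label : bank[active][i];
    if (replace_slot < 0)
        bank[dst][count[dst]++] = label;             /* append new value   */
    active = dst;                                    /* flip (800F/800H)   */
}

int main(void)
{
    /* 800B: '1', '2', '3' already stored in bank 810B                     */
    bank[0][0] = "1"; bank[0][1] = "2"; bank[0][2] = "3"; count[0] = 3;
    show("800B initial");

    /* 800C-800E: '2A' is being written; '4' arrives and is queued         */
    pending[npending++] = "4";
    apply("2A", 1);
    show("800F after writing 2A");

    /* 800F-800H: drain the queue; '1', '2A', '3' are copied again and
     * '4' is appended in the next bank                                     */
    while (npending) {
        apply(pending[0], -1);
        memmove(pending, pending + 1, --npending * sizeof pending[0]);
    }
    show("800H after writing 4");
    return 0;
}
```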
Other implementations are within the scope of the following claims. For example, although aspects of the system were described in the context of NOR flash with respect to using banks and sectors within the banks, the operations may be performed using other types of asymmetric memory with disruption regions (e.g., a NOR flash bank) and blocks (e.g., a NOR flash sector).
The characteristics of a disruption region may vary between different read operations performed following or simultaneous with a write operation. For example, during a first simultaneous read/write operation to a disruption region, the content may be corrupted. The content may be delayed during a second simultaneous read/write operation, and a third simultaneous read operation may see the content both corrupted and delayed. A fourth simultaneous read/write operation may return uncorrupted data in a timely manner. However, notwithstanding uncorrupted and timely data, the memory controller may be configured to still operate as though an asymmetric memory component will operate in an adverse manner (e.g., corrupted data and latency).
Furthermore, although aspects of the flash memory subsystem were previously described with flash chips being mounted on DIMMs, other implementations may be used where the flash chips and the controller are mounted on the same circuit board. While the examples described configurations with flash memory DIMMs, other configurations may use other non-volatile memory devices where the non-volatile memory device disables access to some data during the writing of other data within the device. In another example, a remapping table may be configured to manage the access between different banks within a memory subsystem. The remapping granularity may be finer or coarser, for example remapping chips (coarser) or remapping sectors within banks (finer). The address remapping table previously described remaps bits
<p_m ... p_1> into the same number of bits <p'_m ... p'_1>. Other configurations may remap into a larger or smaller number of bits. Although many of the configurations were described with respect to NOR flash memory systems, the operations and configurations may be performed on other types of asymmetric memory.
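A hedged sketch of such bit-field remapping is shown below. The field positions, widths, and table contents are assumptions chosen for illustration (the specification does not fix a sector or bank size), and the sketch only models remapping into the same number of bits; a wider or narrower target field would change the mask arithmetic accordingly.

```c
/*
 * Sketch of bit-field remapping at different granularities.  The field
 * positions (BANK_SHIFT, SECTOR_SHIFT) and table sizes are assumptions
 * chosen for illustration; real NOR parts differ.
 */
#include <stdint.h>
#include <stdio.h>

#define SECTOR_SHIFT 17u   /* assumed 128 KiB sectors                      */
#define BANK_SHIFT   23u   /* assumed 8 MiB banks (64 sectors per bank)    */

/* Replace the bit-field [shift, shift+width) of an MMU-specified physical
 * address with a value looked up in a remapping table, leaving the other
 * bits unchanged.  Coarser remapping uses the bank (or chip-select) bits;
 * finer remapping uses the sector bits within a bank.                      */
static uint64_t remap(uint64_t mmu_addr, unsigned shift, unsigned width,
                      const uint32_t *table)
{
    uint64_t mask  = ((1ull << width) - 1) << shift;
    uint64_t index = (mmu_addr & mask) >> shift;
    return (mmu_addr & ~mask) | ((uint64_t)table[index] << shift);
}

int main(void)
{
    uint32_t bank_table[8] = { 1, 0, 2, 3, 4, 5, 6, 7 };  /* swap banks 0 and 1 */
    uint32_t sector_table[64];
    for (uint32_t s = 0; s < 64; s++)
        sector_table[s] = (s + 1) % 64;                   /* rotate sectors */

    uint64_t a = 0x00A23450ull;       /* an arbitrary MMU-specified address */
    printf("coarse (bank) remap:  0x%llx -> 0x%llx\n", (unsigned long long)a,
           (unsigned long long)remap(a, BANK_SHIFT, 3, bank_table));
    printf("fine (sector) remap:  0x%llx -> 0x%llx\n", (unsigned long long)a,
           (unsigned long long)remap(a, SECTOR_SHIFT, 6, sector_table));
    return 0;
}
```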

Claims

What is claimed is:
1. A method of managing data that is stored in a memory system that includes asymmetric memory, where the asymmetric memory is configured to enable block write operations in which an address within a block of one of the asymmetric memory components affects the availability of other addresses within the block of the asymmetric memory component, the method comprising: identifying, within asymmetric storage, a first disruption region that is associated with problematic read access of first addresses within the first disruption region during the block write operations to the first disruption region; identifying, within the asymmetric storage, a second disruption region that is separate from the first disruption region, the second disruption region associated with problematic read access of second addresses within the second disruption region during the block write operations to the second disruption region; associating blocks within the first disruption region of asymmetric storage with the second disruption region of asymmetric storage, where a first block, a second block, and a third block in the first disruption region are configured to include data that mirrors data within a first block, a second block, and a third block of the second disruption region; receiving a first update to be written to the memory system; writing the first update to a fourth block of the second disruption region that follows a block of the second disruption region corresponding to the third block of the first disruption region; disabling read access to the first block, the second block, and the third block of the second disruption region while writing the first update to the fourth block to the second disruption region; enabling data to be read from the first block, the second block, and the third block of the first disruption region while writing the first update to the fourth block to the second disruption region; writing the first update to the fourth block of the first disruption region; disabling read access to the first block, the second block, and the third block of the first disruption region while writing the fourth block to the first disruption region; and enabling data to be read from the first block, the second block, and the third block of the second disruption region while writing the update to the fourth block to the first disruption region.
2. The method of claim 1 further comprising: after configuring a memory controller to enable access to the first block, the second block, the third block, and the fourth block from the second disruption region, receiving a second update to a portion of data in the first block of the second disruption region; receiving a third update to a portion of data in the third block of the second disruption region; writing the second update and third update to the first block and the third block of the first disruption region; enabling the memory management unit to read the first block and the third block from the second disruption region; and preserving the data in the second block of the first disruption region for subsequent access as the second update and the third update are being written to the first disruption region.
3. The method of claim 1 wherein the second disruption region includes a previous instance of mirrored content prior to writing the first update to the second disruption region.
4. The method of claim 1 wherein the first disruption region is in use when the first update is received.
5. The method of claim 1 wherein the first block, the second block, the third block, and the fourth block in the first disruption region are sequentially arranged.
6. The method of claim 1 wherein the fourth block in the second disruption region is unutilized prior to writing the first update.
7. The method of claim 1 wherein the fourth block in the first disruption region and the second disruption region uses an identical offset for a memory controller-specified physical address offset.
8. The method of claim 1 further comprising enabling a memory controller to read the first update from the second disruption region after the first update has been written to the fourth block of the second disruption region.
9. The method of claim 1 further comprising enabling a memory controller to read the first block, the second block, and the third block from the second disruption region after the first update has been written to the fourth block of the second disruption region.
10. The method of claim 1 further comprising preserving the data in the first block, the second block, and the third block of the second disruption region during the writing of the first update to the fourth block of the second disruption region.
11. The method of claim 1 further comprising using a memory controller configured to associate a memory management unit-specified physical address from a CPU with a first memory controller-specified physical address for the first disruption region and a second memory controller-specified physical address for the second disruption region.
12. The method of claim 1 further comprising using an identical block subaddress for the first block in the first disruption region and the first block in the second disruption region.
13. The method of claim 1 wherein the first block, the second block and the third block in the second disruption region are sequentially arranged.
14. The method of claim 1 wherein associating the blocks includes associating the blocks using a memory controller configured to interface with symmetric storage and the asymmetric storage.
15. The method of claim 1 wherein associating the blocks includes associating the blocks using a memory controller configured to interface only with the asymmetric storage.
16. The method of claim 1 wherein writing the first update to the fourth block of the first disruption region, disabling read access to the first block, the second block, and the third block of the first disruption region while writing the fourth block to the first disruption region, and enabling data to be read from the first block, the second block, and the third block of the second disruption region while writing the update to the fourth block to the first disruption region are performed before the second update is received.
17. A method of managing data that is stored in a memory system that includes asymmetric memory, where the asymmetric memory is configured to enable block write operations in which an address within a block of one of the asymmetric memory components affects the availability of other addresses within the block of the asymmetric memory component, the method comprising: identifying, within asymmetric storage, a first disruption region that is associated with problematic read access of first addresses within the first disruption region during the block write operations to the first disruption region; identifying, within the asymmetric storage, a second disruption region that is separate from the first disruption region and that is associated with problematic read access of second addresses within the second disruption region during the block write operations to the second disruption region; associating blocks within the first disruption region with the second disruption region, where a first block in the first disruption region is configured to include data that mirrors data within a first block of the second disruption region; receiving a first update to be written to the memory system; writing the first update to a second block of the second disruption region; disabling read access to the first block of the second disruption region while writing the second block to the second disruption region; enabling data to be read from the first block of the first disruption region while writing the first update to the second block to the second disruption region; writing the first update to the second block of the first bank; disabling read access to the first block of the first disruption region while writing the second block to the first disruption region; and enabling data to be read from the first block of the second disruption region while writing the update to the second block to the first disruption region.
18. The method of claim 17 wherein the second block in the second disruption region is unutilized prior to writing the first update.
19. The method of claim 17 wherein the second disruption region includes a previous instance of mirrored content prior to writing the first update to the second disruption region.
20. The method of claim 17 wherein the first disruption region is in use when the first update is received.
21. The method of claim 17 wherein the first block and the second block in the first disruption region are sequentially arranged.
22. The method of claim 17 wherein the first block and the second block in the second disruption region are sequentially arranged.
23. The method of claim 17 wherein the second block in the first disruption region and the second disruption region uses an identical offset for a memory controller-specified physical address offset.
24. The method of claim 17 further comprising enabling a memory controller to read the first update from the second disruption region after the first update has been written to the second block of the second disruption region.
25. The method of claim 17 further comprising enabling a memory controller to read the first block and the second block from the second disruption region after the first update has been written to the second block of the second disruption region.
26. The method of claim 17 further comprising preserving the data in the first block of the second disruption region during the writing of the first update to the second block of the second disruption region.
27. The method of claim 17 further comprising using a memory controller configured to associate a memory management unit-specified physical address from a CPU with a first memory controller-specified physical address for the first disruption region and a second memory controller-specified physical address for the second disruption region.
28. The method of claim 17 further comprising using an identical block subaddress for the first block in the first disruption region and the first block in the second disruption region.
29. A system that enables a memory controller associated with a remapping table to enable access to content in a memory system that includes asymmetric memory, the system comprising: means for identifying, within asymmetric storage, a first disruption region that is associated with problematic read access of first addresses within the first disruption region during the block write operations to the first disruption region; means for identifying, within the asymmetric storage, a second disruption region that is separate from the first disruption region and that is associated with problematic read access of second addresses within the second disruption region during the block write operations to the second disruption region; means for associating blocks within the first disruption region with the second disruption region, where a first block in the first disruption region is configured to include data that mirrors data within a first block of the second disruption region; means for receiving a first update to be written to the memory system; means for writing the first update to a second block of the second disruption region; means for disabling read access to the first block of the second disruption region while writing the second block to the second disruption region; means for enabling data to be read from the first block of the first disruption region while writing the first update to the second block to the second disruption region; means for writing the first update to the second block of the first bank; means for disabling read access to the first block of the first disruption region while writing the second block to the first disruption region; and means for enabling data to be read from the first block of the second disruption region while writing the update to the second block to the first disruption region.
30. A system that manages data that is stored in a memory system that includes asymmetric memory, where the asymmetric memory is configured to enable block write operations in which an address within a block of one of the asymmetric memory components affects the availability of other addresses within the block of the asymmetric memory component, the system comprising: a processor; and a memory controller that is configured to: identify, within asymmetric storage, a first disruption region that is associated with problematic read access of first addresses within the first disruption region during the block write operations to the first disruption region; identify, within the asymmetric storage, a second disruption region that is separate from the first disruption region and that is associated with problematic read access of second addresses within the second disruption region during the block write operations to the second disruption region; associate blocks within the first disruption region with the second disruption region, where a first block in the first disruption region is configured to include data that mirrors data within a first block of the second disruption region; receive a first update to be written to the memory system; write the first update to a second block of the second disruption region; disable read access to the first block of the second disruption region while writing the second block to the second disruption region; enable data to be read from the first block of the first disruption region while writing the first update to the second block to the second disruption region; write the first update to the second block of the first bank; disable read access to the first block of the first disruption region while writing the second block to the first disruption region; and enable data to be read from the first block of the second disruption region while writing the update to the second block to the first disruption region.
31. A computer program product on a computer readable medium, the computer program managing data that is stored in a memory system that includes asymmetric memory, where the asymmetric memory is configured to enable block write operations in which an address within a block of one of the asymmetric memory components affects the availability of other addresses within the block of the asymmetric memory component and comprising instructions that when executed on a processor cause the processor to perform operations that include: identifying, within asymmetric storage, a first disruption region that is associated with problematic read access of first addresses within the first disruption region during the block write operations to the first disruption region; identifying, within the asymmetric storage, a second disruption region that is separate from the first disruption region and that is associated with problematic read access of second addresses within the second disruption region during the block write operations to the second disruption region; associating blocks within the first disruption region with the second disruption region, where a first block in the first disruption region is configured to include data that mirrors data within a first block of the second disruption region; receiving a first update to be written to the memory system; writing the first update to a second block of the second disruption region; disabling read access to the first block of the second disruption region while writing the second block to the second disruption region; enabling data to be read from the first block of the first disruption region while writing the first update to the second block to the second disruption region; writing the first update to the second block of the first bank; disabling read access to the first block of the first disruption region while writing the second block to the first disruption region; and enabling data to be read from the first block of the second disruption region while writing the update to the second block to the first disruption region.
32. A computer program product on a computer readable medium, the computer program managing data that is stored in a memory system that includes asymmetric memory, where the asymmetric memory is configured to enable block write operations in which an address within a block of one of the asymmetric memory components affects the availability of other addresses within the block of the asymmetric memory component and comprising instructions that when executed on a processor cause the processor to perform operations that include: identifying, within asymmetric storage, a first disruption region that is associated with problematic read access of first addresses within the first disruption region during the block write operations to the first disruption region; identifying, within the asymmetric storage, a second disruption region that is separate from the first disruption region, the second disruption region associated with problematic read access of second addresses within the second disruption region during the block write operations to the second disruption region; associating blocks within the first disruption region of asymmetric storage with the second disruption region of asymmetric storage, where a first block, a second block, and a third block in the first disruption region are configured to include data that mirrors data within a first block, a second block, and a third block of the second disruption region; receiving a first update to be written to the memory system; writing the first update to a fourth block of the second disruption region that follows a block of the second disruption region corresponding to the third block of the first disruption region; disabling read access to the first block, the second block, and the third block of the second disruption region while writing the first update to the fourth block to the second disruption region; enabling data to be read from the first block, the second block, and the third block of the first disruption region while writing the first update to the fourth block to the second disruption region; writing the first update to the fourth block of the first disruption region; disabling read access to the first block, the second block, and the third block of the first disruption region while writing the fourth block to the first disruption region; and enabling data to be read from the first block, the second block, and the third block of the second disruption region while writing the update to the fourth block to the first disruption region.
PCT/US2008/080557 2007-10-19 2008-10-20 Managing memory systems containing components with asymmetric characteristics WO2009052525A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US98128407P 2007-10-19 2007-10-19
US60/981,284 2007-10-19

Publications (1)

Publication Number Publication Date
WO2009052525A1 true WO2009052525A1 (en) 2009-04-23

Family

ID=40564634

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/US2008/080557 WO2009052525A1 (en) 2007-10-19 2008-10-20 Managing memory systems containing components with asymmetric characteristics
PCT/US2008/080559 WO2009052527A1 (en) 2007-10-19 2008-10-20 Managing memory systems containing components with asymmetric characteristics

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/US2008/080559 WO2009052527A1 (en) 2007-10-19 2008-10-20 Managing memory systems containing components with asymmetric characteristics

Country Status (2)

Country Link
US (5) US8156299B2 (en)
WO (2) WO2009052525A1 (en)

Families Citing this family (171)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7913055B2 (en) * 2006-11-04 2011-03-22 Virident Systems Inc. Seamless application access to hybrid main memory
US8495292B2 (en) 2006-12-06 2013-07-23 Fusion-Io, Inc. Apparatus, system, and method for an in-server storage area network
WO2009052525A1 (en) 2007-10-19 2009-04-23 Virident Systems, Inc. Managing memory systems containing components with asymmetric characteristics
US8473691B2 (en) * 2009-02-27 2013-06-25 Ryosuke Ohgishi Memory management device, image forming apparatus, and image forming method
US8429436B2 (en) 2009-09-09 2013-04-23 Fusion-Io, Inc. Apparatus, system, and method for power reduction in a storage device
US9223514B2 (en) 2009-09-09 2015-12-29 SanDisk Technologies, Inc. Erase suspend/resume for memory
US8984216B2 (en) 2010-09-09 2015-03-17 Fusion-Io, Llc Apparatus, system, and method for managing lifetime of a storage device
US9047178B2 (en) 2010-12-13 2015-06-02 SanDisk Technologies, Inc. Auto-commit memory synchronization
EP2652623B1 (en) 2010-12-13 2018-08-01 SanDisk Technologies LLC Apparatus, system, and method for auto-commit memory
US9218278B2 (en) 2010-12-13 2015-12-22 SanDisk Technologies, Inc. Auto-commit memory
US10817421B2 (en) 2010-12-13 2020-10-27 Sandisk Technologies Llc Persistent data structures
US10817502B2 (en) 2010-12-13 2020-10-27 Sandisk Technologies Llc Persistent memory management
US9208071B2 (en) 2010-12-13 2015-12-08 SanDisk Technologies, Inc. Apparatus, system, and method for accessing memory
US9104690B2 (en) * 2011-01-27 2015-08-11 Micron Technology, Inc. Transactional memory
DE112011105984T5 (en) 2011-12-20 2014-09-18 Intel Corporation Dynamic partial shutdown of a memory-side buffer in a two-level memory hierarchy
US8805952B2 (en) 2012-01-04 2014-08-12 International Business Machines Corporation Administering globally accessible memory space in a distributed computing system
US8554963B1 (en) 2012-03-23 2013-10-08 DSSD, Inc. Storage system with multicast DMA and unified address space
US9678863B2 (en) 2012-06-12 2017-06-13 Sandisk Technologies, Llc Hybrid checkpointed memory
US9047090B2 (en) 2012-08-07 2015-06-02 Qualcomm Incorporated Methods, systems and devices for hybrid memory management
US10303618B2 (en) * 2012-09-25 2019-05-28 International Business Machines Corporation Power savings via dynamic page type selection
US9842660B1 (en) 2012-12-28 2017-12-12 Virident Systems, Llc System and method to improve enterprise reliability through tracking I/O performance metrics in non-volatile random access memory
US9286002B1 (en) 2012-12-28 2016-03-15 Virident Systems Inc. Dynamic restriping in nonvolatile memory systems
US9158667B2 (en) 2013-03-04 2015-10-13 Micron Technology, Inc. Apparatuses and methods for performing logical operations using sensing circuitry
US10073626B2 (en) 2013-03-15 2018-09-11 Virident Systems, Llc Managing the write performance of an asymmetric memory system
US8996796B1 (en) 2013-03-15 2015-03-31 Virident Systems Inc. Small block write operations in non-volatile memory systems
US9135164B2 (en) 2013-03-15 2015-09-15 Virident Systems Inc. Synchronous mirroring in non-volatile memory systems
US8964496B2 (en) 2013-07-26 2015-02-24 Micron Technology, Inc. Apparatuses and methods for performing compare operations using sensing circuitry
US8971124B1 (en) 2013-08-08 2015-03-03 Micron Technology, Inc. Apparatuses and methods for performing logical operations using sensing circuitry
US9153305B2 (en) 2013-08-30 2015-10-06 Micron Technology, Inc. Independently addressable memory array address spaces
US9019785B2 (en) 2013-09-19 2015-04-28 Micron Technology, Inc. Data shifting via a number of isolation devices
US9449675B2 (en) 2013-10-31 2016-09-20 Micron Technology, Inc. Apparatuses and methods for identifying an extremum value stored in an array of memory cells
US9430191B2 (en) 2013-11-08 2016-08-30 Micron Technology, Inc. Division operations for memory
US9934856B2 (en) 2014-03-31 2018-04-03 Micron Technology, Inc. Apparatuses and methods for comparing data patterns in memory
KR102254099B1 (en) 2014-05-19 2021-05-20 삼성전자주식회사 Method for processing memory swapping operation, and host device, storage device and data processing system adopting the same
US9779019B2 (en) 2014-06-05 2017-10-03 Micron Technology, Inc. Data storage layout
US9704540B2 (en) 2014-06-05 2017-07-11 Micron Technology, Inc. Apparatuses and methods for parity determination using sensing circuitry
US9455020B2 (en) 2014-06-05 2016-09-27 Micron Technology, Inc. Apparatuses and methods for performing an exclusive or operation using sensing circuitry
US9496023B2 (en) 2014-06-05 2016-11-15 Micron Technology, Inc. Comparison operations on logical representations of values in memory
US10074407B2 (en) 2014-06-05 2018-09-11 Micron Technology, Inc. Apparatuses and methods for performing invert operations using sensing circuitry
US9449674B2 (en) 2014-06-05 2016-09-20 Micron Technology, Inc. Performing logical operations using sensing circuitry
US9711206B2 (en) 2014-06-05 2017-07-18 Micron Technology, Inc. Performing logical operations using sensing circuitry
US9786335B2 (en) 2014-06-05 2017-10-10 Micron Technology, Inc. Apparatuses and methods for performing logical operations using sensing circuitry
US9910787B2 (en) 2014-06-05 2018-03-06 Micron Technology, Inc. Virtual address table
US9711207B2 (en) 2014-06-05 2017-07-18 Micron Technology, Inc. Performing logical operations using sensing circuitry
US9830999B2 (en) 2014-06-05 2017-11-28 Micron Technology, Inc. Comparison operations in memory
KR102249416B1 (en) * 2014-06-11 2021-05-07 삼성전자주식회사 Memory system and method of operating memory system
US9653184B2 (en) * 2014-06-16 2017-05-16 Sandisk Technologies Llc Non-volatile memory module with physical-to-physical address remapping
US9898252B2 (en) 2014-09-03 2018-02-20 Micron Technology, Inc. Multiplication operations in memory
US9589602B2 (en) 2014-09-03 2017-03-07 Micron Technology, Inc. Comparison operations in memory
US9847110B2 (en) 2014-09-03 2017-12-19 Micron Technology, Inc. Apparatuses and methods for storing a data value in multiple columns of an array corresponding to digits of a vector
US10068652B2 (en) 2014-09-03 2018-09-04 Micron Technology, Inc. Apparatuses and methods for determining population count
US9904515B2 (en) 2014-09-03 2018-02-27 Micron Technology, Inc. Multiplication operations in memory
US9740607B2 (en) 2014-09-03 2017-08-22 Micron Technology, Inc. Swap operations in memory
US9747961B2 (en) 2014-09-03 2017-08-29 Micron Technology, Inc. Division operations in memory
US9836218B2 (en) 2014-10-03 2017-12-05 Micron Technology, Inc. Computing reduction and prefix sum operations in memory
US9940026B2 (en) 2014-10-03 2018-04-10 Micron Technology, Inc. Multidimensional contiguous memory allocation
US10163467B2 (en) 2014-10-16 2018-12-25 Micron Technology, Inc. Multiple endianness compatibility
US10147480B2 (en) 2014-10-24 2018-12-04 Micron Technology, Inc. Sort operation in memory
US9779784B2 (en) 2014-10-29 2017-10-03 Micron Technology, Inc. Apparatuses and methods for performing logical operations using sensing circuitry
US10073635B2 (en) 2014-12-01 2018-09-11 Micron Technology, Inc. Multiple endianness compatibility
US9747960B2 (en) 2014-12-01 2017-08-29 Micron Technology, Inc. Apparatuses and methods for converting a mask to an index
US10032493B2 (en) 2015-01-07 2018-07-24 Micron Technology, Inc. Longest element length determination in memory
US10061590B2 (en) 2015-01-07 2018-08-28 Micron Technology, Inc. Generating and executing a control flow
US9583163B2 (en) 2015-02-03 2017-02-28 Micron Technology, Inc. Loop structure for operations in memory
EP3254286B1 (en) 2015-02-06 2019-09-11 Micron Technology, INC. Apparatuses and methods for parallel writing to multiple memory device locations
EP3254287A4 (en) 2015-02-06 2018-08-08 Micron Technology, INC. Apparatuses and methods for memory device as a store for program instructions
WO2016126472A1 (en) 2015-02-06 2016-08-11 Micron Technology, Inc. Apparatuses and methods for scatter and gather
WO2016144724A1 (en) 2015-03-10 2016-09-15 Micron Technology, Inc. Apparatuses and methods for shift decisions
US9741399B2 (en) 2015-03-11 2017-08-22 Micron Technology, Inc. Data shift by elements of a vector in memory
US9898253B2 (en) 2015-03-11 2018-02-20 Micron Technology, Inc. Division operations on variable length elements in memory
US10365851B2 (en) 2015-03-12 2019-07-30 Micron Technology, Inc. Apparatuses and methods for data movement
US10146537B2 (en) 2015-03-13 2018-12-04 Micron Technology, Inc. Vector population count determination in memory
US10049054B2 (en) * 2015-04-01 2018-08-14 Micron Technology, Inc. Virtual register file
US10140104B2 (en) 2015-04-14 2018-11-27 Micron Technology, Inc. Target architecture determination
US9959923B2 (en) 2015-04-16 2018-05-01 Micron Technology, Inc. Apparatuses and methods to reverse data stored in memory
US10073786B2 (en) 2015-05-28 2018-09-11 Micron Technology, Inc. Apparatuses and methods for compute enabled cache
US9704541B2 (en) 2015-06-12 2017-07-11 Micron Technology, Inc. Simulating access lines
US9921777B2 (en) 2015-06-22 2018-03-20 Micron Technology, Inc. Apparatuses and methods for data transfer from sensing circuitry to a controller
US9996479B2 (en) 2015-08-17 2018-06-12 Micron Technology, Inc. Encryption of executables in computational memory
US9905276B2 (en) 2015-12-21 2018-02-27 Micron Technology, Inc. Control of sensing components in association with performing operations
US9952925B2 (en) 2016-01-06 2018-04-24 Micron Technology, Inc. Error code calculation on sensing circuitry
US10048888B2 (en) 2016-02-10 2018-08-14 Micron Technology, Inc. Apparatuses and methods for partitioned parallel data movement
US9892767B2 (en) 2016-02-12 2018-02-13 Micron Technology, Inc. Data gathering in memory
US9971541B2 (en) 2016-02-17 2018-05-15 Micron Technology, Inc. Apparatuses and methods for data movement
US9899070B2 (en) 2016-02-19 2018-02-20 Micron Technology, Inc. Modified decode for corner turn
US10956439B2 (en) 2016-02-19 2021-03-23 Micron Technology, Inc. Data transfer with a bit vector operation device
US9697876B1 (en) 2016-03-01 2017-07-04 Micron Technology, Inc. Vertical bit vector shift in memory
US9997232B2 (en) 2016-03-10 2018-06-12 Micron Technology, Inc. Processing in memory (PIM) capable memory device having sensing circuitry performing logic operations
US10262721B2 (en) 2016-03-10 2019-04-16 Micron Technology, Inc. Apparatuses and methods for cache invalidate
US10558570B2 (en) * 2016-03-14 2020-02-11 Intel Corporation Concurrent accesses of asymmetrical memory sources
US10379772B2 (en) 2016-03-16 2019-08-13 Micron Technology, Inc. Apparatuses and methods for operations using compressed and decompressed data
US9910637B2 (en) 2016-03-17 2018-03-06 Micron Technology, Inc. Signed division in memory
US11074988B2 (en) 2016-03-22 2021-07-27 Micron Technology, Inc. Apparatus and methods for debugging on a host and memory device
US10388393B2 (en) 2016-03-22 2019-08-20 Micron Technology, Inc. Apparatus and methods for debugging on a host and memory device
US10120740B2 (en) 2016-03-22 2018-11-06 Micron Technology, Inc. Apparatus and methods for debugging on a memory device
US10977033B2 (en) 2016-03-25 2021-04-13 Micron Technology, Inc. Mask patterns generated in memory from seed vectors
US10474581B2 (en) 2016-03-25 2019-11-12 Micron Technology, Inc. Apparatuses and methods for cache operations
US10430244B2 (en) 2016-03-28 2019-10-01 Micron Technology, Inc. Apparatuses and methods to determine timing of operations
US10074416B2 (en) 2016-03-28 2018-09-11 Micron Technology, Inc. Apparatuses and methods for data movement
US10453502B2 (en) 2016-04-04 2019-10-22 Micron Technology, Inc. Memory bank power coordination including concurrently performing a memory operation in a selected number of memory regions
US10607665B2 (en) 2016-04-07 2020-03-31 Micron Technology, Inc. Span mask generation
US9818459B2 (en) 2016-04-19 2017-11-14 Micron Technology, Inc. Invert operations using sensing circuitry
US9659605B1 (en) 2016-04-20 2017-05-23 Micron Technology, Inc. Apparatuses and methods for performing corner turn operations using sensing circuitry
US10153008B2 (en) 2016-04-20 2018-12-11 Micron Technology, Inc. Apparatuses and methods for performing corner turn operations using sensing circuitry
US10042608B2 (en) 2016-05-11 2018-08-07 Micron Technology, Inc. Signed division in memory
US9659610B1 (en) 2016-05-18 2017-05-23 Micron Technology, Inc. Apparatuses and methods for shifting data
US10049707B2 (en) 2016-06-03 2018-08-14 Micron Technology, Inc. Shifting data
US10387046B2 (en) 2016-06-22 2019-08-20 Micron Technology, Inc. Bank to bank data transfer
US10037785B2 (en) 2016-07-08 2018-07-31 Micron Technology, Inc. Scan chain operation in sensing circuitry
US10388360B2 (en) 2016-07-19 2019-08-20 Micron Technology, Inc. Utilization of data stored in an edge section of an array
US10733089B2 (en) 2016-07-20 2020-08-04 Micron Technology, Inc. Apparatuses and methods for write address tracking
US10387299B2 (en) 2016-07-20 2019-08-20 Micron Technology, Inc. Apparatuses and methods for transferring data
US9767864B1 (en) 2016-07-21 2017-09-19 Micron Technology, Inc. Apparatuses and methods for storing a data value in a sensing circuitry element
US9972367B2 (en) 2016-07-21 2018-05-15 Micron Technology, Inc. Shifting data in sensing circuitry
US10303632B2 (en) 2016-07-26 2019-05-28 Micron Technology, Inc. Accessing status information
US10468087B2 (en) 2016-07-28 2019-11-05 Micron Technology, Inc. Apparatuses and methods for operations in a self-refresh state
US9990181B2 (en) 2016-08-03 2018-06-05 Micron Technology, Inc. Apparatuses and methods for random number generation
US11029951B2 (en) 2016-08-15 2021-06-08 Micron Technology, Inc. Smallest or largest value element determination
US10606587B2 (en) 2016-08-24 2020-03-31 Micron Technology, Inc. Apparatus and methods related to microcode instructions indicating instruction types
US10466928B2 (en) 2016-09-15 2019-11-05 Micron Technology, Inc. Updating a register in memory
US10387058B2 (en) 2016-09-29 2019-08-20 Micron Technology, Inc. Apparatuses and methods to change data category values
US10014034B2 (en) 2016-10-06 2018-07-03 Micron Technology, Inc. Shifting data in sensing circuitry
US10529409B2 (en) 2016-10-13 2020-01-07 Micron Technology, Inc. Apparatuses and methods to perform logical operations using sensing circuitry
US9805772B1 (en) 2016-10-20 2017-10-31 Micron Technology, Inc. Apparatuses and methods to selectively perform logical operations
US10373666B2 (en) 2016-11-08 2019-08-06 Micron Technology, Inc. Apparatuses and methods for compute components formed over an array of memory cells
US10423353B2 (en) 2016-11-11 2019-09-24 Micron Technology, Inc. Apparatuses and methods for memory alignment
US9761300B1 (en) 2016-11-22 2017-09-12 Micron Technology, Inc. Data shift apparatuses and methods
US10402340B2 (en) 2017-02-21 2019-09-03 Micron Technology, Inc. Memory array page table walk
US10403352B2 (en) 2017-02-22 2019-09-03 Micron Technology, Inc. Apparatuses and methods for compute in data path
US10268389B2 (en) 2017-02-22 2019-04-23 Micron Technology, Inc. Apparatuses and methods for in-memory operations
US10838899B2 (en) 2017-03-21 2020-11-17 Micron Technology, Inc. Apparatuses and methods for in-memory data switching networks
US10185674B2 (en) 2017-03-22 2019-01-22 Micron Technology, Inc. Apparatus and methods for in data path compute operations
US11222260B2 (en) 2017-03-22 2022-01-11 Micron Technology, Inc. Apparatuses and methods for operating neural networks
US10049721B1 (en) 2017-03-27 2018-08-14 Micron Technology, Inc. Apparatuses and methods for in-memory operations
US10147467B2 (en) 2017-04-17 2018-12-04 Micron Technology, Inc. Element value comparison in memory
US10043570B1 (en) 2017-04-17 2018-08-07 Micron Technology, Inc. Signed element compare in memory
US9997212B1 (en) * 2017-04-24 2018-06-12 Micron Technology, Inc. Accessing data in memory
US10942843B2 (en) 2017-04-25 2021-03-09 Micron Technology, Inc. Storing data elements of different lengths in respective adjacent rows or columns according to memory shapes
US10236038B2 (en) 2017-05-15 2019-03-19 Micron Technology, Inc. Bank to bank data transfer
US10620879B2 (en) 2017-05-17 2020-04-14 Macronix International Co., Ltd. Write-while-read access method for a memory device
US10068664B1 (en) 2017-05-19 2018-09-04 Micron Technology, Inc. Column repair in memory
US10013197B1 (en) 2017-06-01 2018-07-03 Micron Technology, Inc. Shift skip
US10262701B2 (en) 2017-06-07 2019-04-16 Micron Technology, Inc. Data transfer between subarrays in memory
US10152271B1 (en) 2017-06-07 2018-12-11 Micron Technology, Inc. Data replication
US10318168B2 (en) 2017-06-19 2019-06-11 Micron Technology, Inc. Apparatuses and methods for simultaneous in data path compute operations
US10162005B1 (en) 2017-08-09 2018-12-25 Micron Technology, Inc. Scan chain operations
US10534553B2 (en) 2017-08-30 2020-01-14 Micron Technology, Inc. Memory array accessibility
US10346092B2 (en) 2017-08-31 2019-07-09 Micron Technology, Inc. Apparatuses and methods for in-memory operations using timing circuitry
US10741239B2 (en) 2017-08-31 2020-08-11 Micron Technology, Inc. Processing in memory device including a row address strobe manager
US10416927B2 (en) 2017-08-31 2019-09-17 Micron Technology, Inc. Processing in memory
US10409739B2 (en) 2017-10-24 2019-09-10 Micron Technology, Inc. Command selection policy
US10522210B2 (en) 2017-12-14 2019-12-31 Micron Technology, Inc. Apparatuses and methods for subarray addressing
US10332586B1 (en) 2017-12-19 2019-06-25 Micron Technology, Inc. Apparatuses and methods for subrow addressing
US10445088B2 (en) 2018-01-11 2019-10-15 Macronix International Co., Ltd. System boot code clone
US10614875B2 (en) 2018-01-30 2020-04-07 Micron Technology, Inc. Logical operations using memory cells
US11194477B2 (en) 2018-01-31 2021-12-07 Micron Technology, Inc. Determination of a match between data values stored by three or more arrays
US10437557B2 (en) 2018-01-31 2019-10-08 Micron Technology, Inc. Determination of a match between data values stored by several arrays
US10725696B2 (en) 2018-04-12 2020-07-28 Micron Technology, Inc. Command selection policy with read priority
US10440341B1 (en) 2018-06-07 2019-10-08 Micron Technology, Inc. Image processor formed in an array of memory cells
US10769071B2 (en) 2018-10-10 2020-09-08 Micron Technology, Inc. Coherent memory access
US11175915B2 (en) 2018-10-10 2021-11-16 Micron Technology, Inc. Vector registers implemented in memory
US10483978B1 (en) 2018-10-16 2019-11-19 Micron Technology, Inc. Memory device processing
US11184446B2 (en) 2018-12-05 2021-11-23 Micron Technology, Inc. Methods and apparatus for incentivizing participation in fog networks
US11335430B2 (en) * 2019-03-26 2022-05-17 Rambus Inc. Error remapping
US10867655B1 (en) 2019-07-08 2020-12-15 Micron Technology, Inc. Methods and apparatus for dynamically adjusting performance of partitioned memory
US11360768B2 (en) 2019-08-14 2022-06-14 Micron Technology, Inc. Bit string operations in memory
US11194515B2 (en) * 2019-09-16 2021-12-07 Macronix International Co., Ltd. Memory system, method of operating memory, and non-transitory computer readable storage medium
US11449577B2 (en) 2019-11-20 2022-09-20 Micron Technology, Inc. Methods and apparatus for performing video processing matrix operations within a memory array
US11853385B2 (en) 2019-12-05 2023-12-26 Micron Technology, Inc. Methods and apparatus for performing diversity matrix operations within a memory array
US11227641B1 (en) 2020-07-21 2022-01-18 Micron Technology, Inc. Arithmetic operations in memory
CN115442268B (en) * 2022-08-30 2023-06-09 烽火通信科技股份有限公司 Network element port state detection and positioning system and method

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5402428A (en) * 1989-12-25 1995-03-28 Hitachi, Ltd. Array disk subsystem
EP0745995B1 (en) * 1995-05-05 2001-04-11 STMicroelectronics S.r.l. Nonvolatile, in particular flash-EEPROM, memory device
GB9618051D0 (en) * 1996-08-29 1996-10-09 Sls Wales Ltd Wrinkle removal
US20020016891A1 (en) * 1998-06-10 2002-02-07 Karen L. Noel Method and apparatus for reconfiguring memory in a multiprcessor system with shared memory
DE69836437T2 (en) * 1997-12-05 2007-09-27 Intel Corporation, Santa Clara STORAGE SYSTEM WITH MEMORY MODULE WITH A MEMORY MODULE CONTROL UNIT
US6260103B1 (en) * 1998-01-05 2001-07-10 Intel Corporation Read-while-write memory including fewer verify sense amplifiers than read sense amplifiers
US6851026B1 (en) * 2000-07-28 2005-02-01 Micron Technology, Inc. Synchronous flash memory with concurrent write and read operation
JP3778774B2 (en) * 2000-05-12 2006-05-24 松下電器産業株式会社 Data processing device
US6772273B1 (en) * 2000-06-29 2004-08-03 Intel Corporation Block-level read while write method and apparatus
US6883044B1 (en) * 2000-07-28 2005-04-19 Micron Technology, Inc. Synchronous flash memory with simultaneous access to one or more banks
JP3998118B2 (en) 2000-10-10 2007-10-24 本田技研工業株式会社 Electric vehicle
JP4049297B2 (en) * 2001-06-11 2008-02-20 株式会社ルネサステクノロジ Semiconductor memory device
JP3802411B2 (en) * 2001-12-20 2006-07-26 株式会社東芝 Data copy method for nonvolatile semiconductor memory device
JP2003257189A (en) * 2002-03-01 2003-09-12 Toshiba Corp Semiconductor memory device
EP1376608A1 (en) * 2002-06-28 2004-01-02 Cp8 Programming method in a nonvolatile memory and system for realisation of such a method
US7085909B2 (en) * 2003-04-29 2006-08-01 International Business Machines Corporation Method, system and computer program product for implementing copy-on-write of a file
US7269708B2 (en) * 2004-04-20 2007-09-11 Rambus Inc. Memory controller for non-homogenous memory system
KR100816748B1 (en) * 2006-03-16 2008-03-27 삼성전자주식회사 Phase change memory device supporting program suspend/resume mode and program method thereof
US20070255889A1 (en) * 2006-03-22 2007-11-01 Yoav Yogev Non-volatile memory device and method of operating the device
US7562180B2 (en) * 2006-03-28 2009-07-14 Nokia Corporation Method and device for reduced read latency of non-volatile memory
US7471588B2 (en) * 2006-05-05 2008-12-30 Altera Corporation Dual port random-access-memory circuitry
WO2009052525A1 (en) * 2007-10-19 2009-04-23 Virident Systems, Inc. Managing memory systems containing components with asymmetric characteristics

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6170047B1 (en) * 1994-11-16 2001-01-02 Interactive Silicon, Inc. System and method for managing system memory and/or non-volatile memory using a memory controller with integrated compression and decompression capabilities
US6370631B1 (en) * 1994-11-16 2002-04-09 Interactive Silicon, Inc. Memory controller including compression/decompression capabilities for improved data access
US6240501B1 (en) * 1997-09-05 2001-05-29 Sun Microsystems, Inc. Cache-less address translation

Also Published As

Publication number Publication date
WO2009052527A1 (en) 2009-04-23
US20090106479A1 (en) 2009-04-23
US8639901B2 (en) 2014-01-28
US8156299B2 (en) 2012-04-10
US9514038B2 (en) 2016-12-06
US20130219112A1 (en) 2013-08-22
US8407439B2 (en) 2013-03-26
US20090106478A1 (en) 2009-04-23
US20120254577A1 (en) 2012-10-04
US8200932B2 (en) 2012-06-12
US20120198138A1 (en) 2012-08-02

Similar Documents

Publication Publication Date Title
US8200932B2 (en) Managing memory systems containing components with asymmetric characteristics
US8627040B2 (en) Processor-bus-connected flash storage paging device using a virtual memory mapping table and page faults
KR100992034B1 (en) Managing computer memory in a computing environment with dynamic logical partitioning
US9471507B2 (en) System and device for page replacement control between virtual and real memory spaces
US8190839B2 (en) Using domains for physical address management in a multiprocessor system
US8453015B2 (en) Memory allocation for crash dump
US20100325374A1 (en) Dynamically configuring memory interleaving for locality and performance isolation
KR100515229B1 (en) Method and system of managing virtualized physical memory in a multi-processor system
US9146879B1 (en) Virtual memory management for real-time embedded devices
WO2009140631A2 (en) Distributed computing system with universal address system and method
US9208088B2 (en) Shared virtual memory management apparatus for providing cache-coherence
US9916256B1 (en) DDR storage adapter
US10949356B2 (en) Fast page fault handling process implemented on persistent memory
KR101893966B1 (en) Memory management method and device, and memory controller
US11126573B1 (en) Systems and methods for managing variable size load units
US20230161714A1 (en) Method and system for direct memory access
US20230090341A1 (en) Method and system for direct memory access
US20190310783A1 (en) Memory systems having semiconductor memory modules therein that support page fault processing
US11009841B2 (en) Initialising control data for a device
EP4116829A1 (en) Systems and methods for managing variable size load units

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08840393

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08840393

Country of ref document: EP

Kind code of ref document: A1