CA2574756C - Systems, methods, computer readable medium and apparatus for memory management using nvram - Google Patents

Systems, methods, computer readable medium and apparatus for memory management using nvram

Info

Publication number
CA2574756C
CA2574756C (application CA2574756A)
Authority
CA
Canada
Prior art keywords
data
memory
file
block size
last
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CA2574756A
Other languages
French (fr)
Other versions
CA2574756A1 (en)
Inventor
David Potteiger
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
United Parcel Service of America Inc
Original Assignee
United Parcel Service of America Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by United Parcel Service of America Inc
Publication of CA2574756A1
Application granted
Publication of CA2574756C
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061Improving I/O performance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0804Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with main memory updating
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0614Improving the reliability of storage systems
    • G06F3/0616Improving the reliability of storage systems in relation to life time, e.g. increasing Mean Time Between Failures [MTBF]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656Data buffering arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0673Single storage device
    • G06F3/0679Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/04Addressing variable-length words or parts of words
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/20Employing a main memory using a specific memory technology
    • G06F2212/202Non-volatile memory
    • G06F2212/2022Flash memory

Abstract

A system and method are disclosed for improving data integrity and the efficiency of data storage in separate memories of a computing device. In particular, the present invention introduces a combination of two types of memory, namely an NVRAM and a Flash memory, as persistent memory for storing file data. By constantly caching the last data portion of a data file in the NVRAM, the approach avoids sector erasing for individual bits in the Flash memory.
Such an approach increases the data storage efficiency and life expectancy of a Flash memory. The present invention has very broad application in almost all computing devices, including any PC (desktop or laptop) and server computers.
It demonstrates particularly advantageous performance in portable electronic devices implementing Windows™ CE operating systems.

Description

SYSTEMS, METHODS, COMPUTER READABLE MEDIUM AND
APPARATUS FOR MEMORY MANAGEMENT USING NVRAM
FIELD OF THE INVENTION
The present invention relates generally to the field of memory storage topologies and, more particularly, to a system and method for improving data integrity and the efficiency of data storage in separate memories of a computing device. It yields especially good memory performance in hand held computing devices implementing Windows™ CE operating systems.

BACKGROUND OF THE INVENTION
Electronic memory comes in a variety of forms to serve a variety of purposes. Typically, a single electronic computing device includes several tiers of different memories. Such tiering philosophy in memory design helps maximize data storage for quick and easy access by powerful CPUs, while minimizing the memory cost.
Specific to handheld computing devices, many of them include two kinds of memory, namely a Random Access Memory (RAM or DRAM) and programmable permanent memory. Generally, software applications are loaded, executed, and run in RAM. RAM is also used to receive data input by the user, as well as to display the application output or results to the user. The tasks of receiving data and displaying results are generally performed quickly in the RAM, allowing the user to input data freely, without the delay of storing the data in a more permanent memory. The amount of RAM available generally contributes to the perceived speed of the device. The speed of most RAM configurations, however, must be balanced with the risk of losing data or results. That is, RAM is sometimes called volatile memory because it requires a constant supply of electrical energy to maintain its data. As such, if the supply of electrical power is lost, the data in the RAM will also be lost.
Most types of permanent memory are non-volatile; that is, the permanent memory retains the data even if electrical power is lost. Most permanent memory is programmable, and thus suitable for storing software applications, and erasable, so that the memory can be re-programmed. Generally, selected data can be purposely stored in the permanent memory for later use. For example, the user might make ten quick data entries into the RAM, and then later store the data entries in the permanent memory.
In use, many portable electronic devices are subject to environmental forces, electronic failure, loss of power, and/or other catastrophic events that can automatically and abruptly erase the contents of the RAM. Once the input data stored in the RAM is lost, it cannot be recovered for storage in the permanent memory. Thus, there exists a need for a non-volatile memory to quickly receive and store data, even in the event of a total failure of the device from a catastrophic event, and to provide long-term storage of the data.
Currently, a block-accessed Flash memory is considered to be an improved non-volatile memory implemented in portable computing devices. As a type of EEPROM, the Flash memory provides a non-volatile, low-power, low-cost, and high-density storage device for programmable code and data. These characteristics make the Flash memory an optimal non-volatile memory device for embedded systems. However, the Flash memory also has a number of undesirable features when implemented in computing devices.
One problem is erase sectors. Unlike a RAM or ROM device, the individual bits of the Flash memory device (e.g. NAND Flash memory) can only be programmed in one direction and cannot be re-programmed without an erase operation. An erase operation for the Flash memory requires that a large section of bits, an erase sector, be "flashed" or erased at the same time. Such an erase sector is typically 64KB, but can range from 512 bytes to 512KB, depending on the type of the Flash memory and how it is wired into the system.
Additionally, the erase operations are quite slow, typically one half second or so, while a single byte can usually be programmed in about ten microseconds.
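To make that asymmetry concrete, the following C sketch shows what re-writing a single byte in place implies on a sector-erased device. The 64KB sector size is the typical figure mentioned above, and the driver helper names (sector_read, sector_erase, sector_program) are illustrative assumptions rather than part of this disclosure:

```c
#include <stdint.h>

#define ERASE_SECTOR_SIZE 65536u            /* typical 64KB erase sector */

static uint8_t sector_image[ERASE_SECTOR_SIZE];   /* RAM copy of one sector */

/* Hypothetical low-level driver operations (stubs for illustration). */
static void sector_read(uint32_t sector, uint8_t *dst)          { (void)sector; (void)dst; }
static void sector_erase(uint32_t sector)                       { (void)sector; /* ~0.5 s */ }
static void sector_program(uint32_t sector, const uint8_t *src) { (void)sector; (void)src; }

/* Re-writing a single byte in place forces an erase of the whole sector:
 * read it out, modify the RAM copy, erase, then re-program every byte. */
void rewrite_one_byte(uint32_t sector, uint32_t offset, uint8_t value)
{
    sector_read(sector, sector_image);
    sector_image[offset] = value;
    sector_erase(sector);        /* slow, and consumes one of the sector's
                                    limited erase cycles */
    sector_program(sector, sector_image);
}
```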
This sector-erasing feature of the Flash memory also makes it difficult to maintain data integrity. When using RAM or a conventional disk for storage, data of virtually any size can be written and re-written into the same location without any special handling. Since the Flash memory is not capable of re-writing individual bits of data, all data must be initially written, or re-written, into an unused area of the Flash memory. The original data must then be tracked to free up space in the memory for purposes of sector erasing. When data write and re-write operations are performed on the Flash device, the controlling software must protect the data at every state of the operation to ensure that the original and new data remain valid in the event of an interruption. Interruptions can be caused by several common conditions, such as unexpected power loss due to low battery or a user request to shut down.
Another aspect of the Flash memory that must be considered is its limited life expectancy. For any given Flash memory device, there is a limit to the total number of erase operations that may be performed on a particular erase sector before it becomes unreliable or damaged. Flash memory device lifetimes range from 10,000 write-erase cycles to 1,000,000 cycles, with most rated around 100,000. When an erase sector approaches its rated limit, it may take longer to perform certain operations or even begin to fail.
To combat the above-identified problems, special types of software are provided to manage the Flash memory. One example currently on the market is called a Flash media manager. To maximize the life cycle of a Flash device, the media manager introduces a process called wear leveling, which consists of ensuring that all erase sectors within the Flash disk are used with the same frequency.
Another process called garbage collection is deployed to reclaim space occupied by discarded data. This process selects an erase sector that has mostly discarded data, copies the valid data from that erase sector into the spare sector, and erases the previously valid erase sector making it the new spare sector. However, inclusion of any of the above-mentioned special software requires overhead space allocated in the Flash memory for storing data identifying/recording the status of the Flash memory, maintaining a file allocation table to track the location and status of stored data, and reserving spare space for garbage collection. Such overhead not only decreases the actual space for data storage, but also results in speed degradation in the Flash memory.
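As a rough illustration of the garbage-collection process just described, the following C sketch picks the erase sector with the most discarded data, copies its valid data into the spare sector, and erases it so that it becomes the new spare. The bookkeeping structure, and the assumption that valid data sits at the start of a sector, are simplifications not taken from the patent:

```c
#include <stdint.h>
#include <string.h>

#define NUM_SECTORS  16
#define SECTOR_SIZE  65536u                /* typical 64KB erase sector */

/* Hypothetical per-sector bookkeeping kept by a Flash media manager. */
typedef struct {
    uint8_t  data[SECTOR_SIZE];
    uint32_t valid_bytes;                  /* bytes still in use            */
    uint32_t discarded_bytes;              /* bytes superseded by re-writes */
} sector_t;

static sector_t flash_disk[NUM_SECTORS];
static int spare = 0;                      /* index of the current spare sector */

/* Garbage collection: pick the erase sector with the most discarded data,
 * copy its valid data into the spare sector, then erase it so that it
 * becomes the new spare sector. */
void garbage_collect(void)
{
    int victim = -1;
    uint32_t worst = 0;

    for (int i = 0; i < NUM_SECTORS; i++) {
        if (i != spare && flash_disk[i].discarded_bytes > worst) {
            worst = flash_disk[i].discarded_bytes;
            victim = i;
        }
    }
    if (victim < 0)
        return;                            /* nothing worth reclaiming */

    /* Copy only the still-valid data into the spare sector. */
    memcpy(flash_disk[spare].data, flash_disk[victim].data,
           flash_disk[victim].valid_bytes);
    flash_disk[spare].valid_bytes = flash_disk[victim].valid_bytes;
    flash_disk[spare].discarded_bytes = 0;

    /* "Erase" the victim; it becomes the new spare sector. */
    memset(flash_disk[victim].data, 0xFF, SECTOR_SIZE);
    flash_disk[victim].valid_bytes = 0;
    flash_disk[victim].discarded_bytes = 0;
    spare = victim;
}
```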
Therefore, there exists a need for a non-volatile memory that overcomes the deficiencies of a Flash memory and provides fast data storage. The emerging Non-Volatile Random Access Memory (NVRAM) appears to meet this need. An NVRAM is a special kind of RAM that retains data when the computer is turned off or there is a power failure. Similar to the computer's Read Only Memory (ROM), NVRAM is powered by a battery within the computer. When the power is turned on, the NVRAM operates just like any other RAM. When the power is turned off, the NVRAM draws enough power from the battery to retain its data.
NVRAM is fairly common in embedded systems. However, NVRAM is much more expensive than other RAM because of the battery. Also, NVRAM is generally less dense than other RAM, particularly DRAM. Thus, its applications are typically limited to the storage of a few hundred bytes of system-critical information that cannot be stored in a better way.
BRIEF SUMMARY OF THE INVENTION
In light of the above, the invention seeks to provide non-volatile, reliable and fast data storage devices at reasonable costs. Specifically, the invention seeks to make a combined use of various existing memory devices, such as an NVRAM and a Flash memory, in one computing device (especially a portable electronic device) so as to obtain optimal performance of data storage and deployment. To that end, the invention further seeks to provide underlying mechanisms in support of the combined use, which would facilitate data transfer between different memories without compromising data integrity.
In accordance with the present invention, the above aspects and other aspects, features, and advantages will be carried out by a method and system for memory management. The method comprises the steps of: receiving file data in a computing device; processing the file data to determine that the file data is composed of one or more data blocks, each having a pre-determined block size, and a last data portion having a smaller size than the pre-determined block size; writing the data blocks into a first memory that is configured to store data blocks having said pre-determined block size; and caching the last data portion in a second memory prior to transmitting said last data portion to said first memory. In one preferred embodiment, the second memory is a non-volatile random access memory, and the last data portion is cached through the steps of: allocating part of the second memory as a file buffer to store file data; writing data bytes of the last data portion into the file buffer until the file buffer includes more data than the pre-determined block size; and responsive to the determination that the file buffer includes more data than the pre-determined block size, moving data from the file buffer to the first memory prior to continuing to write data bytes of the last data portion into the file buffer.
Also provided in the present invention is a computer readable medium comprising executable instructions to perform the above-described steps.
According to another aspect of the present invention, a system is provided, comprising: a processor configured to receive and process data; a memory comprising at least a first memory and a second memory; and a data manager executed by said processor. The data manager is configured to define within the data a number of data blocks, each having a pre-determined block size, and a last data portion that has a smaller size than said pre-determined block size. It is further configured to allocate space in the first memory for storing the data blocks and assign space in the second memory for caching the last data portion. In a preferred embodiment, the last data portion is cached into the second memory through the steps of: allocating part of the second memory as a file buffer that is configured to store file data; writing data bytes of the last data portion to said file buffer until data in the file buffer is determined to have said pre-determined block size;
and responsive to the determination that the data in the file buffer has such pre-determined block size, moving data from the file buffer to the first memory prior to continuing to write data bytes of the last data portion to the file buffer. In a preferred embodiment, the first memory is a Flash memory, whereas the second memory is a non-volatile random access memory.
Another embodiment of the present invention is particularly directed to a hand held apparatus, which includes: a data input device; a data output device; and a computing device configured to communicate with the data input device, data output device, and a data source via a communications network. The computing device comprises a processor for data processing, a memory comprising at least a non-volatile random access memory and a block-accessed memory, and a data manager. The data manager is executed by the processor to perform instructions comprising the steps of: (A) identifying, from a file data input, one or more data blocks, each having a threshold block size, and a last data portion that has a smaller size than the threshold block size; (B) storing the data blocks into the block-accessed memory; (C) writing the last data portion by byte into the non-volatile random access memory until the non-volatile random access memory is determined to include file data of a threshold block size; and (D) responsive to the determination that the non-volatile random access memory includes file data of a threshold block size, moving the file data from the non-volatile random access memory to the block-accessed memory prior to continuing Step (C).
In other embodiments of any of the foregoing aspects of the invention, any type of computer memory may be utilized for the first and the second memories.
In preferred embodiments, the first memory is a non-volatile memory, such as Flash memory, or a hard drive, or a CD-ROM drive, or the like; and the second memory is also a non-volatile memory, preferably NVRAM.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING(S)
FIG. 1 shows a hand held computing device according to one embodiment of the present invention;
FIG. 2 shows components of a computing device according to one embodiment of the present invention;
FIG. 3 shows components of a memory in the computing device in FIG. 2 according to one embodiment of the present invention;
FIG. 4 illustrates a data flow in data processing for memory management according to one embodiment of the present invention;
FIG. 5 is a work flow chart of data processing and memory management according to one embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
The present inventions now will be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all embodiments of the invention are shown. Indeed, these inventions may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Like numbers refer to like elements throughout.
Turning to Fig. 1, a handheld device 1 is shown according to one embodiment of the present invention. Such handheld device 1 can be any portable computing device, including, but not limited to, a Personal Digital Assistant (PDA), a Smart Phone (SP), a notebook computer, a tablet computer and a Delivery Information Acquisition Device (DIAD). As illustrated in Fig. 1, this hand held device 1 is mainly composed of, besides other well-known necessary components not shown in the figure, an input device 12, an output or display device 14 and a computing device 20. In operation, the handheld device can receive data from a user 10 through user input (e.g., keyboard entry, bar-code scanning), or, over a communications network 16, from a data source 18 (e.g., electronic data transfer). The data source 18 can be any database on a server computer. In the package delivery context, for example, it can be a carrier's shipping database or a customer's database that is accessible to the computing device 20 by either a wired or wireless connection over the communications network 16, which can be the Internet, a LAN or WAN. Then shipping data or package information can be downloaded from a database and captured into the handheld device 1, which can be a DIAD in this context.
Fig. 2 provides a block diagram of various components inside the computing device 20. It includes at least (1) a processor 22 that executes a data manager 24, which comprises programmed instructions for memory management, (2) an interface 26 to interact with the input device 12 or the output device 14, (3) a network interface 28 to communicate with the communications network 16, and (4) a memory 30. The memory 30 comprises at least two kinds of memory: a DRAM 30A and a ROM 30B. In a preferred configuration of the present invention, a Non-volatile RAM (NVRAM) 32 is employed in addition to the regular DRAM 30A. As to the ROM 30B, typically, a Flash memory 34 is used in the invention.
Fig. 3 is a detailed illustration of respective data storage in each particular type of memory in the memory 30. As seen in this figure, there is data exchange between the DRAM 30A, the NVRAM 32 and the Flash memory 34. In particular, the Flash memory 34 stores file data blocks 35. Each data block is defined to include a pre-determined block size of data bytes. Such pre-determined block size is typically 512 bytes, but may be varied depending on the particular size of the erase sectors of the Flash memory 34. As noted above, an erase sector is typically 64KB, but can range from 512 bytes to 512KB. One use of the NVRAM 32 is to store non-file data 31, which is typically a File Allocation Table (FAT) 31.
The FAT 31 is maintained in the NVRAM 32 to track the location and status of file data stored in both the NVRAM 32 and the Flash memory 34. The NVRAM 32 is also used to store file data bytes, called a last data portion 33, in a buffer Z 37. The DRAM 30A contains another buffer Y 36. Both buffer Y 36 and buffer Z 37 are configured to include sufficient memory space for holding one data block.
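For illustration only, the memory layout of Fig. 3 could be modelled with C structures along the following lines. This is a minimal sketch using the typical 512-byte block size; the type and field names are assumptions rather than anything prescribed by the patent:

```c
#include <stdint.h>

#define BLOCK_SIZE 512u        /* pre-determined block size (typical value) */

/* Buffer Y lives in the DRAM and stages file data one block at a time;
 * buffer Z lives in the NVRAM and caches the last data portion 33. */
typedef struct {
    uint8_t  data[BLOCK_SIZE];
    uint32_t count;            /* number of bytes currently held */
} block_buffer_t;

/* Contents of the NVRAM 32: non-file data (the FAT 31) plus buffer Z 37. */
typedef struct {
    uint32_t       fat[256];   /* simplified stand-in for the FAT */
    block_buffer_t buffer_z;
} nvram_t;

/* The Flash memory 34 holds whole file data blocks 35 only. */
typedef struct {
    uint8_t  blocks[1024][BLOCK_SIZE];
    uint32_t next_free;        /* index of the next unused block */
} flash_t;

static block_buffer_t buffer_y;    /* buffer Y 36, in DRAM 30A   */
static nvram_t        nvram;       /* battery-backed NVRAM 32    */
static flash_t        flash_mem;   /* block-accessed Flash 34    */
```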
In Fig. 4, a data input 40 is received by the processor 22. The processor 22, coupled with the DRAM containing the buffer Y 36, processes the data input 40 to determine in which memory it should be stored pursuant to the programmed instructions of the data manager 24. As illustrated in Fig. 4, the data input eventually results in three streams of data: (1) file data blocks 35, (2) the last data portion (file data bytes) 33, and (3) non-file data (FAT) 31 that is generated or updated for identifying or recording the status of the data input 40. The file data blocks 35 will be written directly into the Flash memory 34, while the last data portion 33 and the FAT 31 will be stored in the NVRAM 32. The process of writing the last data portion 33 into the NVRAM 32 prior to transferring it to the Flash memory 34 is called caching, which will be described in detail below. As additional data is continuously received into the DRAM 30A, another last data portion will be written into the NVRAM 32 until the buffer Z 37 is determined to be full, which means that the file data already stored in the buffer Z 37, plus the additional data, reaches the pre-determined block size. At that point, a file data block 35 will be transmitted into the Flash memory 34 from the buffer Z, and any remaining part of the last data portion is retained in the buffer Z. A complete sector of the Flash memory 34 can thus be written, thereby avoiding erasing and re-writing for individual bits. The writing efficiency and life expectancy of the Flash memory are increased as a result of the caching process.
The concrete workflow of the caching process performed by the data manager 24 is demonstrated in Fig. 5. The data manager 24 starts with Step 50 by receiving file data containing X bytes into DRAM 30A. Whenever the file data is to be stored it is sent to the buffer Y 36 in DRAM 30A, and the processor 22 determines whether the buffer Y 36 is full in Step 52. In other words, the processor 22 determines whether data in the buffer Y 36 has reached a pre-defined block size. If the buffer Y 36 is full, one data block of the file data will be written into the Flash memory 34 directly in accordance with Step 54. Meanwhile, the non-file data or FAT 31 in the NVRAM 32 will be updated accordingly, as shown in Step 56.
The same process starting from Step 50 will be repeated against the remaining data bytes (i.e. (X - C) bytes) of the file data until the remaining data is determined to be insufficient to fill the buffer Y 36. In that instance, Step 58 comprises instructions that the remaining file data be written into the buffer Z 37 of the NVRAM 32 one byte at a time. Specifically, the count of data bytes in the buffer Z 37 increases by one byte, and meanwhile the remaining file data decreases by one byte in count. After each byte, Step 60 determines whether the buffer Z 37 is full. If the buffer Z 37 is full, the processor at Step 66 writes one data block from the buffer Z 37 to the Flash memory 34. Once the data block is moved from the buffer Z 37 to the Flash memory 34, the buffer Z 37 is reset as containing zero data bytes. In Step 68, the non-file data or FAT 31 in the NVRAM 32 will be updated accordingly.
Moving to Step 62, the process checks to determine if the last data portion has been completely transferred. If not, the process returns to Step 58 and writes another byte to the buffer Z. Now the buffer Z 37 cannot become full as determined at Step 60, because the last data portion was less than one block in size to begin with. When all the remaining bytes have been transferred and X = 0 as determined at Step 62, the process moves to Step 64, where it updates the FAT 31 in the NVRAM 32, and the process ends. If, immediately following a block transfer at Step 68, it is determined at Step 62 that all of the data bytes of the remaining file data have been stored in the buffer Z 37, then again Step 64 will update the FAT 31 in the NVRAM 32 to reflect the data storage in the NVRAM 32 and the Flash memory 34 and complete the process of caching the last data portion 33.
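The workflow of Figs. 4 and 5 can be summarized in a C sketch such as the one below. It is a simplified illustration only, using in-memory stand-ins for the DRAM, NVRAM and Flash memory and hypothetical helper names (flash_write_block, update_fat); the real data manager 24 would also maintain the FAT entries and block addresses that these stubs omit:

```c
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 512u

static uint8_t  buffer_y[BLOCK_SIZE];      /* buffer Y 36, in DRAM  */
static uint8_t  buffer_z[BLOCK_SIZE];      /* buffer Z 37, in NVRAM */
static uint32_t z_count = 0;               /* bytes cached in buffer Z */

/* Hypothetical helpers standing in for the Flash driver and FAT code. */
static void flash_write_block(const uint8_t *block) { (void)block; } /* Steps 54/66 */
static void update_fat(void)                        { }              /* Steps 56/64/68 */

/* Store X bytes of file data: whole blocks go through buffer Y straight
 * into the Flash memory, while the last data portion is cached
 * byte-by-byte in buffer Z of the NVRAM. */
void store_file_data(const uint8_t *data, uint32_t x)
{
    /* If buffer Z already caches part of this file (the append case of
     * Fig. 4), keep filling it first, flushing a block when it is full. */
    while (x > 0 && z_count > 0) {
        buffer_z[z_count++] = *data++;     /* Step 58                */
        x--;
        if (z_count == BLOCK_SIZE) {       /* Step 60: buffer Z full */
            flash_write_block(buffer_z);   /* Step 66                */
            z_count = 0;                   /* reset buffer Z         */
            update_fat();                  /* Step 68                */
        }
    }

    /* Steps 50-56: whole blocks are staged in buffer Y and written
     * directly into the Flash memory. */
    while (x >= BLOCK_SIZE) {
        memcpy(buffer_y, data, BLOCK_SIZE);  /* buffer Y is full (Step 52) */
        flash_write_block(buffer_y);         /* Step 54                    */
        update_fat();                        /* Step 56                    */
        data += BLOCK_SIZE;
        x    -= BLOCK_SIZE;
    }

    /* Steps 58-64: the remaining bytes (less than one block) form the
     * last data portion and are cached in buffer Z one byte at a time. */
    while (x > 0) {
        buffer_z[z_count++] = *data++;     /* Step 58; Step 60 cannot
                                              trigger here, as noted above */
        x--;
    }
    update_fat();                          /* Steps 62/64: done, update FAT */
}
```

In this sketch the bytes left in buffer Z after the call are exactly the cached last data portion 33, which is what spares the Flash memory from bit-level re-writes.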
Many modifications and other embodiments of the inventions set forth herein will come to mind to one skilled in the art to which these inventions pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the inventions are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims.
Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.

Claims (23)

1. A method of memory management in computing devices, said method comprising the steps of:
receiving file data in a computing device;
processing said file data to determine that said file data is composed of one or more data blocks and a last data portion, each of said one or more data blocks having a pre-determined block size, said last data portion having a smaller size than said pre-determined block size;
writing said one or more data blocks into a first memory, said first memory configured to store data blocks having said pre-determined block size; and caching said last data portion in a second memory prior to transmitting said last data portion to said first memory, wherein the step of caching said last data portion comprises the steps of:
(A) allocating part of said second memory as a file buffer, said file buffer configured to store file data;
(B) writing data bytes of said last data portion to said file buffer until data in said file buffer is determined to have said pre-determined block size; and (C) responsive to said determination that said data in said file buffer has said pre-determined block size, moving said data from said file buffer to said first memory prior to continuing step (B).
2. The method of Claim 1 comprising the steps of:
generating non-file data for said file data; and storing said non-file data in said second memory.
3. The method of Claim 2, wherein said non-file data comprises a File Allocation Table (FAT).
4. The method of Claim 1, wherein said first memory is a Flash memory.
5. The method of Claim 2, wherein said second memory is a non-volatile random access memory.
6. A computer readable medium comprising instructions, said instructions, when executed in a processor of a computing device, comprising the steps of:
(A) receiving a file data input;
(B) processing said file data input to separate said file data input into one or more data blocks and a last data portion, each of said one or more data blocks containing a threshold number of data bytes, said last data portion including data bytes below said threshold number;
(C) storing said one or more data blocks into a first memory;
(D) writing said last data portion by byte into a second memory until said second memory is determined to include file data bytes equal to said threshold number;
(E) responsive to the determination that said second memory includes file data bytes equal to said threshold number, moving said file data bytes from said second memory to said first memory prior to continuing Step (D).
7. The computer readable medium of Claim 6, wherein said instructions further comprise the steps of:
generating a non-file data for said file data input; and storing said non-file data in said second memory.
8. The computer readable medium of Claim 7, wherein said non-file data comprises a File Allocation Table (FAT).
9. The computer readable medium of Claim 6, wherein said first memory is a block-accessed memory.
10. The computer readable medium of Claim 9, wherein said first memory is a Flash memory.
11. The computer readable medium of Claim 6, wherein said second memory is a non-volatile random access memory.
12. The computer readable medium of Claim 6, wherein said threshold number of data bytes is configurable.
13. A memory management system comprising at least an apparatus comprising:
a processor configured to receive and process data; and a memory comprising at least a first memory and a second memory, wherein the processor is configured to execute computer program instructions which cause the apparatus to:
define within said data a number of data blocks, each having a pre-determined block size, and a last data portion that has a smaller size than said pre-determined block size, allocate space in said first memory for storing said number of data blocks, and assign space in said second memory for caching said last data portion, wherein said last data portion is cached into said second memory through the steps of:
(A) allocating part of said second memory as a file buffer, said file buffer configured to store file data;
(B) writing data bytes of said last data portion to said file buffer until data in said file buffer is determined to have said pre-determined block size; and (C) responsive to said determination that said data in said file buffer has said pre-determined block size, moving said data from said file buffer to said first memory prior to continuing step (B).
14. The system of Claim 13, wherein said first memory is a block-accessed memory.
15. The system of Claim 13, wherein said first memory is a Flash memory.
16. The system of Claim 13, wherein said second memory is a non-volatile random access memory.
17. The system of Claim 13, wherein the apparatus further comprises an output interface for displaying said data to a user.
18. The system of Claim 13, wherein the apparatus further comprises an input interface for receiving said data.
19. The system of Claim 13, wherein the apparatus further comprises an interface for transmitting said data via a communication network.
20. The system of Claim 13, wherein said processor upon execution of the computer program instructions is further configured to cause the apparatus to generate a File Allocation Table (FAT) for said data and store said FAT in said second memory.
21. A hand held apparatus comprising:
a data input device;
a data output device; and a computing device configured to communicate with said data input device, said output device, and a data source via a communications network, said computing device comprising:
a processor for data processing, a memory comprising at least a non-volatile random access memory and a block-accessed memory, wherein the processor is configured to execute computer program instructions which cause the computing device to:

(A) identify, from a file data input, one or more data blocks and a last data portion, each of said one or more data blocks having a threshold block size, said last data portion having a smaller size than said threshold block size;
(B) store said one or more data blocks into said block-accessed memory;
(C) write said last data portion by byte into said non-volatile random access memory until said non-volatile random access memory is determined to include file data of said threshold block size; and (D) responsive to the determination that said non-volatile random access memory is determined to include file data of said threshold block size, move said file data from said non-volatile random access memory to said block-accessed memory prior to continuing Step (C).
22. The hand held apparatus of Claim 21, wherein said block-accessed memory is a Flash memory.
23. The hand held apparatus of Claim 21, wherein said threshold block size is configurable.
CA2574756A 2004-07-30 2005-07-12 Systems, methods, computer readable medium and apparatus for memory management using nvram Active CA2574756C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US10/903,020 2004-07-30
US10/903,020 US7562202B2 (en) 2004-07-30 2004-07-30 Systems, methods, computer readable medium and apparatus for memory management using NVRAM
PCT/US2005/024586 WO2006019700A2 (en) 2004-07-30 2005-07-12 Systems, methods, computer readable medium and apparatus for memory management using nvram

Publications (2)

Publication Number Publication Date
CA2574756A1 CA2574756A1 (en) 2006-02-23
CA2574756C true CA2574756C (en) 2013-02-12

Family

ID=35159742

Family Applications (1)

Application Number Title Priority Date Filing Date
CA2574756A Active CA2574756C (en) 2004-07-30 2005-07-12 Systems, methods, computer readable medium and apparatus for memory management using nvram

Country Status (9)

Country Link
US (1) US7562202B2 (en)
EP (1) EP1782176B1 (en)
JP (1) JP2008508596A (en)
CN (1) CN101014929B (en)
AT (1) ATE479934T1 (en)
CA (1) CA2574756C (en)
DE (1) DE602005023317D1 (en)
MX (1) MX2007001185A (en)
WO (1) WO2006019700A2 (en)

Families Citing this family (83)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050251617A1 (en) * 2004-05-07 2005-11-10 Sinclair Alan W Hybrid non-volatile memory system
US20060190425A1 (en) * 2005-02-24 2006-08-24 Yuan-Chi Chang Method for merging multiple ranked lists with bounded memory
EP1855453A1 (en) * 2006-05-11 2007-11-14 Axalto SA Management of power consumption of a chip card in a mobile device
JP2008033788A (en) * 2006-07-31 2008-02-14 Matsushita Electric Ind Co Ltd Nonvolatile storage device, data storage system, and data storage method
US7996680B2 (en) * 2006-09-27 2011-08-09 Hewlett-Packard Development Company, L.P. Secure data log management
KR100904758B1 (en) * 2007-02-08 2009-06-29 삼성전자주식회사 Flash memory device and system including buffer memory, and data updating method of the flash memory device
JP4897524B2 (en) * 2007-03-15 2012-03-14 株式会社日立製作所 Storage system and storage system write performance deterioration prevention method
JP4710056B2 (en) * 2007-10-04 2011-06-29 Necインフロンティア株式会社 Information processing apparatus, flash memory management method, and flash memory management program
JP2009199199A (en) * 2008-02-20 2009-09-03 Hitachi Ltd Storage system and its data write method
JP4675985B2 (en) * 2008-03-01 2011-04-27 株式会社東芝 Memory system
TWI385669B (en) * 2008-07-23 2013-02-11 Phison Electronics Corp Wear leveling method and storage system and controller using the same
US7719876B2 (en) 2008-07-31 2010-05-18 Unity Semiconductor Corporation Preservation circuit and methods to maintain values representing data in one or more layers of memory
US8397016B2 (en) * 2008-12-31 2013-03-12 Violin Memory, Inc. Efficient use of hybrid media in cache architectures
KR101573047B1 (en) * 2009-01-23 2015-12-02 삼성전자주식회사 Complex memory device and I/O processing method using these
US20110167197A1 (en) * 2010-01-05 2011-07-07 Mark Leinwander Nonvolatile Storage with Disparate Memory Types
WO2011096046A1 (en) * 2010-02-02 2011-08-11 株式会社 東芝 Communication device having storage function
WO2011096045A1 (en) 2010-02-02 2011-08-11 株式会社 東芝 Communication device having storage function
JP5520747B2 (en) * 2010-08-25 2014-06-11 株式会社日立製作所 Information device equipped with cache and computer-readable storage medium
KR101572403B1 (en) * 2011-12-22 2015-11-26 인텔 코포레이션 Power conservation by way of memory channel shutdown
CN102567216B (en) * 2011-12-29 2015-07-29 北京交控科技有限公司 Improve the reliable storage method of service life of flash memory
CN102801768A (en) * 2012-01-20 2012-11-28 华为技术有限公司 Data processing method and system and network device
JP5687648B2 (en) * 2012-03-15 2015-03-18 株式会社東芝 Semiconductor memory device and program
US8996768B2 (en) * 2012-05-31 2015-03-31 Sandisk Technologies Inc. Method and storage device for assessing execution of trim commands
CN104615546B (en) * 2015-02-13 2018-04-27 小米科技有限责任公司 file data management method and device
US10019331B2 (en) * 2015-06-22 2018-07-10 Sap Se Memory allocation and recovery strategies for byte-addressable non-volatile RAM (NVRAM)
CN105117167B (en) * 2015-08-10 2019-03-01 北京联想核芯科技有限公司 A kind of information processing method and device, electronic equipment
CN105159839B (en) * 2015-09-28 2018-05-29 北京联想核芯科技有限公司 A kind of collocation method and device and solid state disk
CN106598473B (en) * 2015-10-15 2020-09-04 南京中兴新软件有限责任公司 Message persistence method and device
TWI615770B (en) 2015-11-17 2018-02-21 群聯電子股份有限公司 Data access method, memory control circuit unit and memory storage device
CN105426779B (en) * 2015-11-19 2018-06-05 浪潮(北京)电子信息产业有限公司 A kind of method for ensureing file system data safety using NVRAM
RU2661280C2 (en) * 2015-12-03 2018-07-13 Хуавэй Текнолоджиз Ко., Лтд. Massive controller, solid state disk and data recording solid state disk control method
US10175891B1 (en) * 2016-03-15 2019-01-08 Pavilion Data Systems, Inc. Minimizing read latency for solid state drives
US11860940B1 (en) 2016-09-26 2024-01-02 Splunk Inc. Identifying buckets for query execution using a catalog of buckets
US11126632B2 (en) 2016-09-26 2021-09-21 Splunk Inc. Subquery generation based on search configuration data from an external data system
US10956415B2 (en) 2016-09-26 2021-03-23 Splunk Inc. Generating a subquery for an external data system using a configuration file
US11003714B1 (en) 2016-09-26 2021-05-11 Splunk Inc. Search node and bucket identification using a search node catalog and a data store catalog
US11461334B2 (en) 2016-09-26 2022-10-04 Splunk Inc. Data conditioning for dataset destination
US11663227B2 (en) 2016-09-26 2023-05-30 Splunk Inc. Generating a subquery for a distinct data intake and query system
US11599541B2 (en) 2016-09-26 2023-03-07 Splunk Inc. Determining records generated by a processing task of a query
US11580107B2 (en) 2016-09-26 2023-02-14 Splunk Inc. Bucket data distribution for exporting data to worker nodes
US11874691B1 (en) 2016-09-26 2024-01-16 Splunk Inc. Managing efficient query execution including mapping of buckets to search nodes
US11562023B1 (en) * 2016-09-26 2023-01-24 Splunk Inc. Merging buckets in a data intake and query system
US11586627B2 (en) 2016-09-26 2023-02-21 Splunk Inc. Partitioning and reducing records at ingest of a worker node
US11281706B2 (en) 2016-09-26 2022-03-22 Splunk Inc. Multi-layer partition allocation for query execution
US11314753B2 (en) 2016-09-26 2022-04-26 Splunk Inc. Execution of a query received from a data intake and query system
US11567993B1 (en) 2016-09-26 2023-01-31 Splunk Inc. Copying buckets from a remote shared storage system to memory associated with a search node for query execution
US11416528B2 (en) 2016-09-26 2022-08-16 Splunk Inc. Query acceleration data store
US11442935B2 (en) 2016-09-26 2022-09-13 Splunk Inc. Determining a record generation estimate of a processing task
US11294941B1 (en) 2016-09-26 2022-04-05 Splunk Inc. Message-based data ingestion to a data intake and query system
US11106734B1 (en) 2016-09-26 2021-08-31 Splunk Inc. Query execution using containerized state-free search nodes in a containerized scalable environment
US11269939B1 (en) 2016-09-26 2022-03-08 Splunk Inc. Iterative message-based data processing including streaming analytics
US11620336B1 (en) * 2016-09-26 2023-04-04 Splunk Inc. Managing and storing buckets to a remote shared storage system based on a collective bucket size
US11604795B2 (en) 2016-09-26 2023-03-14 Splunk Inc. Distributing partial results from an external data system between worker nodes
US10776355B1 (en) 2016-09-26 2020-09-15 Splunk Inc. Managing, storing, and caching query results and partial query results for combination with additional query results
US10977260B2 (en) 2016-09-26 2021-04-13 Splunk Inc. Task distribution in an execution node of a distributed execution environment
US11243963B2 (en) 2016-09-26 2022-02-08 Splunk Inc. Distributing partial results to worker nodes from an external data system
US20180089324A1 (en) 2016-09-26 2018-03-29 Splunk Inc. Dynamic resource allocation for real-time search
US11023463B2 (en) 2016-09-26 2021-06-01 Splunk Inc. Converting and modifying a subquery for an external data system
US10726009B2 (en) 2016-09-26 2020-07-28 Splunk Inc. Query processing using query-resource usage and node utilization data
US11250056B1 (en) * 2016-09-26 2022-02-15 Splunk Inc. Updating a location marker of an ingestion buffer based on storing buckets in a shared storage system
US11232100B2 (en) 2016-09-26 2022-01-25 Splunk Inc. Resource allocation for multiple datasets
US11615104B2 (en) 2016-09-26 2023-03-28 Splunk Inc. Subquery generation based on a data ingest estimate of an external data system
US11550847B1 (en) 2016-09-26 2023-01-10 Splunk Inc. Hashing bucket identifiers to identify search nodes for efficient query execution
US11593377B2 (en) 2016-09-26 2023-02-28 Splunk Inc. Assigning processing tasks in a data intake and query system
US10353965B2 (en) 2016-09-26 2019-07-16 Splunk Inc. Data fabric service system architecture
US11321321B2 (en) 2016-09-26 2022-05-03 Splunk Inc. Record expansion and reduction based on a processing task in a data intake and query system
US11222066B1 (en) 2016-09-26 2022-01-11 Splunk Inc. Processing data using containerized state-free indexing nodes in a containerized scalable environment
US11163758B2 (en) 2016-09-26 2021-11-02 Splunk Inc. External dataset capability compensation
US10984044B1 (en) 2016-09-26 2021-04-20 Splunk Inc. Identifying buckets for query execution using a catalog of buckets stored in a remote shared storage system
US10795884B2 (en) 2016-09-26 2020-10-06 Splunk Inc. Dynamic resource allocation for common storage query
CN107329694B (en) * 2017-06-22 2021-05-18 苏州奈特力智能科技有限公司 Data storage control method, control device and storage equipment
US11921672B2 (en) 2017-07-31 2024-03-05 Splunk Inc. Query execution at a remote heterogeneous data store of a data fabric service
US10896182B2 (en) 2017-09-25 2021-01-19 Splunk Inc. Multi-partitioning determination for combination operations
US11151137B2 (en) 2017-09-25 2021-10-19 Splunk Inc. Multi-partition operation in combination operations
JP2019164712A (en) * 2018-03-20 2019-09-26 東芝メモリ株式会社 Storage device, information processing system and program
US11334543B1 (en) 2018-04-30 2022-05-17 Splunk Inc. Scalable bucket merging for a data intake and query system
CN109710181A (en) * 2018-12-11 2019-05-03 成都嘉泰华力科技有限责任公司 A kind of method and system improving NAND FLASH device file access speed
WO2020220216A1 (en) 2019-04-29 2020-11-05 Splunk Inc. Search time estimate in data intake and query system
US11715051B1 (en) 2019-04-30 2023-08-01 Splunk Inc. Service provider instance recommendations using machine-learned classifications and reconciliation
US11494380B2 (en) 2019-10-18 2022-11-08 Splunk Inc. Management of distributed computing framework components in a data fabric service system
US11922222B1 (en) 2020-01-30 2024-03-05 Splunk Inc. Generating a modified component for a data intake and query system using an isolated execution environment image
KR20220014212A (en) * 2020-07-28 2022-02-04 에스케이하이닉스 주식회사 Storage device and operating method thereof
US11704313B1 (en) 2020-10-19 2023-07-18 Splunk Inc. Parallel branch operation using intermediary nodes

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4571674A (en) 1982-09-27 1986-02-18 International Business Machines Corporation Peripheral storage system having multiple data transfer rates
US5276840A (en) 1991-03-22 1994-01-04 Acer Incorporated Disk caching method for writing data from computer memory including a step of writing a plurality of physically adjacent blocks in a single I/O operation
US5481701A (en) 1991-09-13 1996-01-02 Salient Software, Inc. Method and apparatus for performing direct read of compressed data file
US5694570A (en) 1992-09-23 1997-12-02 International Business Machines Corporation Method and system of buffering data written to direct access storage devices in data processing systems
KR970008188B1 (en) 1993-04-08 1997-05-21 가부시끼가이샤 히다찌세이사꾸쇼 Control method of flash memory and information processing apparatus using the same
US5729767A (en) 1994-10-07 1998-03-17 Dell Usa, L.P. System and method for accessing peripheral devices on a non-functional controller
US6321293B1 (en) 1995-11-14 2001-11-20 Networks Associates, Inc. Method for caching virtual memory paging and disk input/output requests
JPH09319645A (en) * 1996-05-24 1997-12-12 Nec Corp Non-volatile semiconductor memory device
EP1095373A2 (en) 1998-05-15 2001-05-02 Storage Technology Corporation Caching method for data blocks of variable size
US6249841B1 (en) * 1998-12-03 2001-06-19 Ramtron International Corporation Integrated circuit memory device and method incorporating flash and ferroelectric random access memory arrays
US6651142B1 (en) 2000-05-05 2003-11-18 Sagent Technology Method and apparatus for processing data using multi-tier caching
US20030120841A1 (en) * 2001-12-21 2003-06-26 Chang Matthew C.T. System and method of data logging
AU2002353406A1 (en) 2002-12-27 2004-07-22 Solid State System Co., Ltd. Nonvolatile memory unit with specific cache
US20040193782A1 (en) * 2003-03-26 2004-09-30 David Bordui Nonvolatile intelligent flash cache memory
US20050050261A1 (en) * 2003-08-27 2005-03-03 Thomas Roehr High density flash memory with high speed cache data interface
US20050132128A1 (en) * 2003-12-15 2005-06-16 Jin-Yub Lee Flash memory device and flash memory system including buffer memory

Also Published As

Publication number Publication date
CN101014929A (en) 2007-08-08
JP2008508596A (en) 2008-03-21
CN101014929B (en) 2010-05-05
US20060026211A1 (en) 2006-02-02
WO2006019700A3 (en) 2006-10-12
MX2007001185A (en) 2007-03-21
CA2574756A1 (en) 2006-02-23
EP1782176A2 (en) 2007-05-09
US7562202B2 (en) 2009-07-14
ATE479934T1 (en) 2010-09-15
WO2006019700A2 (en) 2006-02-23
DE602005023317D1 (en) 2010-10-14
EP1782176B1 (en) 2010-09-01

Similar Documents

Publication Publication Date Title
CA2574756C (en) Systems, methods, computer readable medium and apparatus for memory management using nvram
US6571326B2 (en) Space allocation for data in a nonvolatile memory
US7191306B2 (en) Flash memory, and flash memory access method and apparatus
US6587915B1 (en) Flash memory having data blocks, spare blocks, a map block and a header block and a method for controlling the same
US7962687B2 (en) Flash memory allocation for improved performance and endurance
US7734862B2 (en) Block management for mass storage
US8738882B2 (en) Pre-organization of data
US7769945B2 (en) Method and system for facilitating fast wake-up of a flash memory system
KR100453053B1 (en) Flash memory file system
US9367451B2 (en) Storage device management device and method for managing storage device
US6621746B1 (en) Monitoring entropic conditions of a flash memory device as an indicator for invoking erasure operations
US20050015557A1 (en) Nonvolatile memory unit with specific cache
US20070094445A1 (en) Method to enable fast disk caching and efficient operations on solid state disks
US8402202B2 (en) Input/output control method and apparatus optimized for flash memory
EP0852765A1 (en) Memory management
KR20070060070A (en) Fat analysis for optimized sequential cluster management
US7802072B2 (en) Data storage device, memory management method and program for updating data recorded in each of a plurality of physically partitioned memory areas
KR20090024971A (en) Method and apparatus for cache using sector set
US20070005929A1 (en) Method, system, and article of manufacture for sector mapping in a flash device
KR101026634B1 (en) A method of data storage for a hybrid flash memory
WO2010145967A1 (en) Memory device for managing the recovery of a non volatile memory
KR101020781B1 (en) A method for log management in flash memory-based database systems
KR100982440B1 (en) System for managing data in single flash memory
US11941246B2 (en) Memory system, data processing system including the same, and operating method thereof
JP2002222120A (en) Memory access management device and management method

Legal Events

Date Code Title Description
EEER Examination request