WO1999048005A1 - Selective data transfer between storage mediums using dynamic memory allocation


Info

Publication number: WO1999048005A1
Authority: WIPO (PCT)
Application number: PCT/US1999/004261
Other languages: French (fr)
Inventor: Britto Vincent
Original Assignee: Phoenix Technologies Ltd.
Application filed by Phoenix Technologies Ltd. filed Critical Phoenix Technologies Ltd.
Priority to AU31808/99A
Publication of WO1999048005A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02: Addressing or allocation; Relocation
    • G06F12/08: Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems

Abstract

The present invention is directed to a device and method for selectively transferring or filtering data between storage mediums using the application data itself with dynamic memory allocation to make the selection or create the filter criteria. A preferred application of the present invention is a logic simulation device that utilizes a dynamic memory allocation algorithm to simulate hard memory in software, which improves the efficiency of data transfers between memory devices and dramatically increases performance of the logic simulation device.

Description

SELECTIVE DATA TRANSFER BETWEEN STORAGE MEDIUMS USING DYNAMIC MEMORY ALLOCATION
Field of the Invention
The present invention is directed to a device and method for selectively transferring or filtering data between storage mediums using the application data itself with dynamic memory allocation to make the selection or create the filter criteria. More particularly, a logic simulation device utilizes a dynamic memory allocation algorithm to simulate hard memory in software, which improves the efficiency of data transfers between memory devices and dramatically increases performance of the logic simulation device.
Background of the Invention
The electronics field is becoming more competitive. New entrants to the market are generating tremendous pressure to increase the efficiency of logic design, to reduce design cost, and, most importantly, to reduce time to market. Extensive simulation can detect design errors before the design is manufactured, thus reducing the number of design iterations. An efficient hardware description language (HDL) such as VERILOG® and the host simulation system have become invaluable in minimizing the number of design errors and have made it possible to have functional chips in the first silicon.
Computers used as the host simulation device usually use a hierarchical information retrieval method for transferring information or data between at least a first and second storage device. One of the storage devices is often a mass storage device such as a hard disk. An intermediate storage device such as a cache on the microprocessor provides rapid, but usually very small, data transfers between the hard disk and RAM. The second storage device is normally smaller than a hard disk, usually a faster and more accessible storage device such as RAM. The size relationship between the cache and the RAM varies. On a personal computer, the cache is usually smaller than the RAM. On workstations and microcomputers, the cache is usually larger than the RAM. Although multiple levels of hierarchy are commonly used, the principle remains the same.
Common retrieval methods are called paging, swapping, or caching. Their purpose is to improve the utilization of the storage devices by transferring large blocks of information reversibly between the storage devices. While these retrieval methods normally improve the efficiency of a logic simulation device, that is not their purpose, but merely a side effect.
In the prior art, the host simulation system uses a static memory allocation algorithm in the hardware description language, i.e., VERILOG, which requires a memory array or RAM of a size at least equal to the entire simulated memory size. Thus, the size of the information or data transfers is determined largely by the physical size of the memory array. If, for example, a 128 megabit memory device (elsewhere referred to as a hard memory) is called for by the logic simulation program, and the memory array has enough capacity to hold it, the entire memory device is transferred into the second storage device. This can create substantial inefficiencies, especially with large hard memory devices.
The prior art is limited to using either expensive, large memory hardware devices or the time-consuming operation of transferring smaller increments of memory regardless of the usefulness of any particular memory increment. The need exists for a logic simulation device which does not use the full memory transfer utilized by the static memory allocation algorithm. To overcome these limitations, the present invention provides a dynamic memory allocation algorithm for the hardware description language, i.e., VERILOG, which changes the size of the transfer to the memory array and can transfer less than the full dump memory. Fewer transfers are made to the memory array. The total size of the memory array eventually transferred by the present invention is a much smaller subset than the entire simulated memory size.
Summary Of The Invention
By its nature, the HDL, i.e., VERILOG, does not have a dynamic memory allocation scheme in its native format. This forces designers to use the static memory allocation scheme when they choose to use the standard HDL. The present invention provides a logic simulation device in which circuit information described with an HDL such as VERILOG is used both as input to the circuit simulation and as addressing information. The addressing information is then used to provide hierarchical information retrieval within the simulation data, thereby reducing the data that is to be transferred between a second storage device, or dump memory, and a third storage device, or memory array. Since the amount of data that needs to be transferred is reduced, the time required for transferring this data is consequently reduced, which increases the efficiency of the logic simulation device. The addressing information also reduces the size required by the dump memory and memory array for comparable results.
The present invention also uses the circuit information for partitioning structures such as hard memory devices into more convenient sizes for simulation by a logic simulation device. The addressing information may also be used for generating tables of information about the structure of the objects, such as hard memory, which is to be partitioned or paged into the third storage device or memory array. This further enhances the speed and reduces the size of the transfers of the information.

The present invention provides a utility for selectively transferring application data between storage means in a computer system which includes first data storage means for storing the application data and non-application data as contents therein. Second data storage means receives a dump of at least a part of the application and non-application data from the first data storage means. The utility includes means for examining the data in the second storage means to identify the application data and derive a secondary address for the application data. Third data storage means receives and parses only the application data from the second storage means indicated by the secondary address. The utility further includes means for parsing the application data and transferring the parsed application data from the third data storage means to the second data storage means. The parsed application data and the remaining non-application data are transferred from the second data storage means to the first data storage means. The remaining unexamined data is dumped from the first data storage means to the second storage means.
A more specific application of the present invention is a simulation device for dynamically simulating hard memory in the software of a computer system. The device includes a mass memory device for storing the application data and non-application data as a data block therein and a dump memory for receiving a dump of at least a part of the data block containing application and non-application data from the mass memory device. A memory controller examines the data in the dump memory to identify the application data and derive a secondary address for the application data. A memory array receives only the application data from the second storage means indicated by the secondary address. The memory controller parses only the application data. The parsed application data is transferred from the memory array to the dump memory. The parsed application data and
the remaining non-application data is transferred from the dump memory to the mass memory device. The remaining unexamined data is dumped from the mass memory device to the dump memory.
A method of selectively transferring application data between storage means of a computer system is also provided by the present invention. The method includes the steps of: dumping at least a portion of the application data and non-application data from a first data storage device to a second data storage device; examining the dumped application and non-application data in the second storage means to identify the application data and deriving a secondary address for the application data; transferring only the application data in the second storage means indicated by the secondary address to a third storage device; parsing only the application data in the third storage device; transferring the parsed application data in the third storage device to the second storage device; and transferring the parsed application data and remaining non-application data from the second data storage device to the first data storage device and dumping the remaining unexamined application and non-application data from the first data storage device to the second storage device.
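The six steps of the method above can be sketched as a small Python model. This is only an illustration of the flow, not an implementation from the patent: tagged tuples stand in for the three storage devices, `is`-tag matching stands in for examining the dump, and a trivial uppercase transform stands in for the parsing step; all names here are hypothetical.

```python
def selective_transfer(mass_storage, app_tag):
    """Model of the selective transfer method: dump, examine, filter, parse, write back."""
    # Step 1: dump a block of application and non-application data
    # from the first storage device into the second (dump memory).
    dump_memory = list(mass_storage)

    # Step 2: examine the dump to identify application data and
    # derive secondary addresses for it.
    secondary_addrs = [i for i, (tag, _) in enumerate(dump_memory) if tag == app_tag]

    # Step 3: transfer only the addressed application data to the
    # third storage device (memory array).
    memory_array = [dump_memory[i] for i in secondary_addrs]

    # Step 4: parse only the application data (a stand-in transform here).
    parsed = [(tag, payload.upper()) for tag, payload in memory_array]

    # Step 5: transfer the parsed application data back to the dump memory.
    for addr, item in zip(secondary_addrs, parsed):
        dump_memory[addr] = item

    # Step 6: parsed and remaining non-application data go back to the
    # first storage device; the non-application data was never parsed.
    return dump_memory

blocks = [("app", "abc"), ("sys", "xyz"), ("app", "def")]
result = selective_transfer(blocks, "app")
```

Note that the non-application entry passes through untouched: only the data indicated by the secondary addresses ever reaches the memory array.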
It is an object of the present invention to provide an improved logic simulation device which requires less time and uses less expensive hardware to perform a comparable simulation with logic simulation devices of the prior art.
Another object of the present invention is to perform a logic simulation with an algorithm which dynamically changes the size of the data transfer to the memory array so that the size of the total data transfer is much smaller than the entire simulated memory.
A further object of the present invention is to perform a logic simulation with an algorithm which transfers less than the entire simulated memory size to a memory array.
Other and further advantages, embodiments, variations and the like will be apparent to those skilled in the art from the present specification taken with the accompanying drawings and appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate presently preferred embodiments of the present invention. Together with the general description given above and the detailed description of the preferred
embodiments given below, they serve to explain the principles of the present invention in which:
Figure 1 is a diagram showing the conventional arrangement of the circuit information in a logic simulation device;
Figure 2 is a flowchart for a typical logic simulation device of the prior art utilizing a static memory allocation algorithm;
Figure 3 is a flowchart illustrating a logic simulation device of the present invention utilizing dynamic memory allocation;
Figure 4 is a flow chart illustrating a WRITE routine for the logic simulation device of Figure 3 utilizing dynamic memory allocation;
Figure 5 is a flow chart illustrating a READ routine for the logic simulation device of Figure 3 utilizing dynamic memory allocation;
Figure 6 is a diagram illustrating a computer or network system having at least three varied data storage devices utilizing the dynamic memory allocation of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
Generally, computers and network systems use data storage space or memory space allocated among at least three different types of storage devices wherein the type of memory in each area may be different in function or size or both. First, mass or bulk memory, hereafter referred to as memory, generally has the characteristics that make it preferred for bulk storage even though it is often too slow and unwieldy for other uses. Data are normally stored in memory in large segments, often called blocks, which are well-suited for bulk storage but not convenient for interfacing with a computer directly, and may not be optimally organized or sorted. For example, data relating to an application may be interspersed with other data, a condition known as fragmenting. When a block is transferred out of a bulk memory, pieces of unwanted data, e.g., relating to another application, may also be transferred.
Since the ability to bring in such memory blocks is limited, the data is usually mass transferred, a process often called dumping, using a second type of memory called cache memory or swap space, hereafter referred to as dump memory. Dump memory is organized so that it can optimally receive blocks of data from bulk memory. However, the dump memory is also capable of transferring the minimum data sizes required by the memory array, with data sizes that are small enough to fit into the memory array optimally for a given application such as a logic simulation device. In the prior art, usually all the data in dump memory is flagged if it is part of the application and then transferred without any examination or discrimination, making the circuit simulation enormously slow for the entire static memory array size. The present invention examines the data in the dump memory to flag only those data units that apply to the application being processed. In this way, data that does not need to be processed can be easily avoided by the application. Thus, the present invention makes the circuit simulation about 15 to about 20 times faster compared to the prior art.
The dump memory might be considered as a storing space with formatting, since it can communicate efficiently with memory in large blocks of data, yet communicate with the memory array selectively and in smaller pieces, such as memory words, which are much smaller than mass memory blocks. It is this formatting that allows the memory array to organize the data for efficient utilization by applications, such as in the preferred logic simulation apparatus discussed before.
The dump memory communicates with one or more mass memories in blocks, where the data often is distributed or fragmented between applications. However, it can also communicate with the memory array, also called the physical memory, by marking and making available words or similar packets of information needed for the memory array. This allows the data to be efficiently organized for use by an application, such as the logic simulation apparatus of the preferred mode.
The memory array collects or loads the data in convenient packets, often called words, and organizes the data for efficient utilization, based on the dump memory flags and global information from the operating system of the computer or from the user, etc. It may also have some guidance from the application, e.g., in the form of start/stop addresses, and in some cases in the form of packet size requirements. It is important to realize that these operations are always in the form of commands for manipulating the object, such as a simulated memory, as a unit. For example, if a hard memory or a portion thereof is required by the task, as defined by the memory controlling circuitry of the computer operating system, that is what is loaded. This might be the entire hard memory, or as much of it as can be conveniently loaded in the space available.
If the application calls for data that is too large for the available space, such as in the memory array, the memory controller function, however implemented, transfers in only so much data as can be stored. This is not a function of the task being performed by the application, only of the available space. When another part of the data is required, it is dumped into the dump memory. Of course, if the previous data in the dump memory was changed, it will be transferred back to the memory, or mass storing device, first. This process is often called swapping or paging, and the overall process was historically referred to as virtual memory management. Note that the dumping process may also be controlled by an application, though even then the dumping is done on the data as an object rather than on smaller pieces of the data; that is, on the data hierarchy. The function of transferring and manipulating memory objects is usually performed by the memory controller, often called direct memory access logic.

Figure 1 illustrates the general arrangement of a logic simulation device 10 for simulating the operation of a memory controller 12. The device 10 includes a storage device 14 which has a plurality of memory modules 16. Examples of hard memory devices include, but are not limited to, DRAM, SDRAM, RAMBUS DRAM and EDO DRAM memory devices. A simulation microprocessor 18, or CPU, is connected to the memory controller 12 for controlling the transfer of data to the memory modules 16.
A flowchart 200 illustrating a typical simulation device of the prior art using a static memory allocation algorithm and VERILOG® HDL for simulating a hardware device such as a memory controller is shown in Figure 2. The prior art flowchart 200 begins with the step 210 of initializing a dump file for controlling the operations on the dump memory by opening a file dump-mem.dat for saving a trace of the dumps in a file (also sometimes called a "history" file). In this prior art example, initializing the dump file is the same as initializing the dump memory, since the dump file is the method used to control the dump memory. In the dump file initializing step 210, if the DUMP_OPTION is '1', then the DUMP_ROW_START_ADDR, which is the row address from which dumping is started, the DUMP_COL_START_ADDR, which is the column address in the starting row address from which dumping is started, and the DUMP_BYTES, which is the number of bytes to be dumped from the starting column address in the starting row address, are all logged into the dump-mem.dat file. Otherwise, if the DUMP_OPTION is '0', then "dumping all written locations" is written into the dump-mem.dat file.
Then the memory is initialized 220 by assigning the operation code OPCODE to the initiate routine (INIT). In the memory initializing step 220, if the LOAD_OPTION is '0', the memory initialization is disabled, which stops the loading of data from the load.mem file. The load.mem file includes row addresses and corresponding data to be written into the memarray for initialization. The memarray is the register array equal in size to the number of pages, with each register equal in size to the number of columns multiplied by the word length. If the LOAD_OPTION is '1', the memory initialization is enabled to load the data rows from the load.mem file into the corresponding address locations of memarray as specified in the load.mem file. The elements in the dumparray corresponding to the loaded addresses are set to high. The dumparray is the array of single-bit registers equal in size to the number of pages in a bank. For example, each entry in this file is a 20-bit hex value with a format having a 4-bit bank select value followed by a 16-bit row address. The step 230 checks the OPCODE and executes or parses the commands therein.
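The static initialization above can be modeled as follows. The sizes are illustrative assumptions, not figures from the patent, and a plain function call stands in for reading the load.mem file:

```python
NUM_PAGES = 4   # pages per bank (illustrative)
NUM_COLS = 8    # columns per row (illustrative)
WORD_LEN = 16   # bits per word (illustrative)

# memarray: one register per page, each NUM_COLS * WORD_LEN bits wide.
memarray = [0] * NUM_PAGES
# dumparray: single-bit registers, one per page; high marks a loaded address.
dumparray = [0] * NUM_PAGES

def init_load(row_addr, data, load_option=1):
    """Model of the INIT step: load one row from load.mem if enabled."""
    if load_option == 0:
        return  # memory initialization disabled; nothing is loaded
    mask = (1 << (NUM_COLS * WORD_LEN)) - 1
    memarray[row_addr] = data & mask
    dumparray[row_addr] = 1  # element corresponding to the loaded address set high

init_load(2, 0xABCD)
```

The point of the model is the cost structure: memarray is allocated at its full static size up front, regardless of how many rows are ever loaded.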
As a result, all data which is flagged as part of the device is transferred from the dump memory to the memory array, whether or not required by the device. The flags or references from the dump memory are not parsed by the memory array to determine whether the data fragments that relate to the application or device will be loaded into the memory array.
Upon parsing, if the OPCODE is WRITE 240, then the address to where the data is to be written is retrieved and is written and indexed into the memarray location. At this step 230, the entire array, whose size is equal to the number of pages multiplied by the page size and the wordlength, is held in dump memory. Subsequently, the data to be written is masked 250 as per the OPCODE as a MWRITE/MWRITEHi/MWRITELo file.
If the OPCODE is READ 260, then a refarray, which is the array of single-bit registers equal in size to the number of pages in the bank, is checked for a refresh status. Subsequently, the address from where the data is to be read is retrieved and is read from and indexed into the memarray location. At this step 260, the entire array, whose size is equal to the number of pages multiplied by the page size and the wordlength, is held in dump memory. If the OPCODE is DUMP 270, then valid data, which is every location in memory which has a corresponding element in the dumparray set to high, is dumped from the memory. The dump step 270 has the option of dumping the entire memory or part of the memory. If the DUMP_OPTION is '0', for example, the dump memory is written back into memory at the address from which it was fetched. Otherwise, a portion of the memory defined by the DUMP_ROW_START_ADDR, the DUMP_COL_START_ADDR, or the DUMP_BYTES data is stored in the dump-mem.dat file.
A number of housekeeping instructions are also performed by parsing the OPCODE. For example, if the OPCODE is PRECHARGE, the element in the prearray, corresponding to the accessed memarray location, is set 280 to high. The prearray is an array of single-bit registers equal in size to the number of pages in the bank. If the OPCODE is ACTIVATE, the element in the prearray, corresponding to the accessed memarray location, is reset 282 to low. If the OPCODE is REFRESH, the element in the refarray corresponding to the accessed memarray location is set 284 to high. If the OPCODE is NOP, then the data is set 286 to high impedance by a placeholding instruction. After the above specific examples are performed from continuing to parse the OPCODE, the elapsed simulation time is checked 290. Loop 293 is repeated until the parsing of the OPCODE is complete.
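The housekeeping commands can be modeled as a simple dispatch over the single-bit register arrays. Array sizes are illustrative, and the string "Z" is a stand-in for a high-impedance value:

```python
NUM_PAGES = 4
prearray = [0] * NUM_PAGES   # precharge status, one bit per page
refarray = [0] * NUM_PAGES   # refresh status, one bit per page
data_bus = None

def housekeeping(opcode, page):
    """Model of the PRECHARGE/ACTIVATE/REFRESH/NOP handling."""
    global data_bus
    if opcode == "PRECHARGE":
        prearray[page] = 1   # set 280 the element to high
    elif opcode == "ACTIVATE":
        prearray[page] = 0   # reset 282 the element to low
    elif opcode == "REFRESH":
        refarray[page] = 1   # set 284 the element to high
    elif opcode == "NOP":
        data_bus = "Z"       # set 286 to high impedance (placeholder)

housekeeping("PRECHARGE", 1)
housekeeping("REFRESH", 3)
housekeeping("NOP", 0)
```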
The prior art flowchart 200 in Figure 2 demonstrates an important limitation of the prior art wherein data is allocated to the simulator based on the readily accessible space on the storage device, and only that amount of data as would fit in the storage device is loaded into the simulation. The result is a lengthy time period for transferring chunks of memory into the storage device. Performance could be improved only by dramatically increasing the size of the storage device.

A flowchart 300 illustrating one embodiment of the present invention for transferring selected contents of memory to a memory array utilizing a dynamic memory allocation algorithm is shown in Figure 3. The dynamic memory flowchart 300 begins with the step 310 which prepares an area in the readily accessible storing device for new data by initializing the memory as previously discussed above with regard to Figure 2. There are differences, however, that will now be detailed. Specifically, an array page_allocated, which is an array of single-bit registers equal in size to the number of pages in the bank, is initialized and set to zeroes. The page_allocated array specifies which page has valid data.
An array address_array is also initialized and set to indicate unknown. The address_array is a register array equal in size to the number of pages to be initialized.
Then the memory is initialized 320 by assigning the operation code OPCODE to the initiate routine (INIT) and checking the LOAD_OPTION. If the LOAD_OPTION is '1', the bank row addresses given in the loadp.lst file are initialized 322 with user-specified data. The row_cum_bank addresses are also read from the loadp.lst file into the address_array. Then the element corresponding to each address specified in loadp.lst is set 324 to high. At this step 324, the entire array is not loaded. The loadp.lst file is only used if the LOAD_OPTION is '1'. In the memory initializing step 320, if the LOAD_OPTION is '0', the memory initialization is disabled, which stops the loading of the user-specified data. The bypass 326 proceeds directly to continue parsing the OPCODE.
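The dynamic initialization might be sketched as follows, with a list of (page, address) tuples standing in for the loadp.lst file; the entries themselves are hypothetical:

```python
NUM_PAGES = 4
page_allocated = [0] * NUM_PAGES    # single-bit registers: which pages hold valid data
address_array = [None] * NUM_PAGES  # None stands in for "unknown"

def init_memory(load_option, loadp_entries):
    """Model of step 320: read row_cum_bank addresses from loadp.lst if enabled."""
    if load_option == 0:
        return  # initialization disabled; nothing is loaded
    for page, row_cum_bank_addr in loadp_entries:
        address_array[page] = row_cum_bank_addr
        page_allocated[page] = 1    # element for each listed address set high

# Hypothetical loadp.lst contents: only two of the four pages are listed.
init_memory(1, [(0, 0x1A2B), (3, 0x3C4D)])
```

In contrast to the static scheme, only the pages actually named in loadp.lst are marked allocated; the full array is never loaded at this step.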
The step 330 checks the OPCODE and executes or parses the commands therein.
As a result, the flags or references from the dump memory are parsed to determine whether the data fragments that relate to the application or device will be loaded into the memory array.
Upon parsing, if the OPCODE is DUMP 370, then the data is saved from the memarray into the file ldpxxxx.mem. If the OPCODE is REFRESH, the element in the refarray corresponding to the accessed pagenumber is set 384 to high. If the OPCODE is NOP, then the data is set 386 to high impedance by a placeholding instruction. After these specific examples are performed from continuing to parse the OPCODE, the elapsed simulation time is checked 390. Loop 392 is repeated until the parsing of the OPCODE is complete and the simulation stops 394.
A flowchart 400 for the WRITE 340 routine is illustrated in Figure 4. If the OPCODE is WRITE 340, then the WRITE flowchart 400 checks 410 the pagenumber, which is the currently accessed row/page in the bank, against the curr_page_index, which is the pagenumber that was last accessed.
If the pagenumber and curr_page_index are the same, then the data is written 420 into memarray, which can hold one page/row. The element of page_allocated corresponding to the pagenumber, i.e., page_allocated[pagenumber], is also set to high.
Subsequently, the OPCODE proceeds directly 422 to the step of masking 450 the data to be written as a MWRITE/MWRITEHi/MWRITELo file, and the data is written at the offset within the memarray. At this step, only one page/row is held in memory. Then the lapse of the simulation time is checked 390 and either returns 392 to the step 330 of checking the OPCODE for additional instructions to be executed or stops 394.
If the pagenumber and curr_page_index are different, then the data in the memarray is saved 430 into the file ldpxxxx.mem, which consists of the user-defined data for any one row. In the file ldpxxxx.mem, the bank and pagenumber are specified by xxxx. The file name is formed from the curr_page_index and bank addresses within the application or device and allows the transfer of reduced information into the memory array.
Then the element of the page_allocated array corresponding to the pagenumber is checked 440. If the page_allocated[pagenumber] is '1', the data from the file ldpxxxx.mem is read 442 into memarray. The file name is formed from the pagenumber and bank addresses. If the page_allocated[pagenumber] is '0', the page_allocated[pagenumber] is initialized 444 to '1' and the memarray is also initialized with unknown values.
After the step 440 of checking the element of the page_allocated array corresponding to the pagenumber, the curr_page_index is updated 446 to the accessed pagenumber. The memarray is also assigned to output the data.
Subsequently, the OPCODE performs the previously described step of masking 450 the data to be written as a MWRITE/MWRITEHi/MWRITELo file and the data is written at the offset within the memarray. At this step, only one page/row is held in memory. Then the lapse of the simulation time is checked 390 and either returns 392 to the step 330 of checking the OPCODE for additional instructions to be executed or stops 394.
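The WRITE routine of Figure 4 can be modeled as below. A dictionary stands in for the per-page ldpxxxx.mem files, `None` stands in for unknown values, and masking is reduced to a plain assignment; all sizes are illustrative:

```python
PAGE_SIZE = 4          # words per page (illustrative)
page_files = {}        # stand-in for the ldpxxxx.mem files on disk
page_allocated = {}    # which pagenumbers hold valid data
memarray = None        # holds only one page/row at a time
curr_page_index = None # pagenumber that was last accessed

def write(pagenumber, offset, value):
    """Swap in the addressed page if needed, then write at the offset."""
    global memarray, curr_page_index
    if pagenumber != curr_page_index:
        # Save 430 the current page back to its file before switching.
        if curr_page_index is not None:
            page_files[curr_page_index] = memarray
        if page_allocated.get(pagenumber):
            # Page was written before: read 442 it back from its file.
            memarray = page_files[pagenumber]
        else:
            # First access: initialize 444 the page with unknown values.
            memarray = [None] * PAGE_SIZE
            page_allocated[pagenumber] = 1
        curr_page_index = pagenumber  # update 446
    memarray[offset] = value  # masked write 450, simplified

write(0, 1, 0xAA)   # page 0 allocated and written
write(2, 0, 0xBB)   # page 0 saved to its file; page 2 swapped in
write(0, 3, 0xCC)   # page 0 reloaded; earlier data still present
```

Only one page ever resides in memarray, which is the point of the dynamic scheme: the resident working set is a single row, not the entire simulated memory.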
A flowchart 500 for the READ 360 routine is illustrated in Figure 5. If the OPCODE is READ 360, then the READ flowchart 500 checks 510 the pagenumber, which is the currently accessed row/page in the bank, against the curr_page_index, which is the pagenumber that was last accessed.
If the pagenumber and curr_page_index are the same, then the data at the offset within the memarray is read 520. The offset is the currently accessed column address. The memory holds only one page/row. Subsequently, the OPCODE proceeds directly 522 to the step 390 of checking the lapse of the simulation time and either returns 392 to the step 330 of checking the OPCODE for additional instructions to be executed or stops 394.
If the pagenumber and curr_page_index are different, then the element of the page_allocated array corresponding to the pagenumber is checked 540. If the page_allocated[pagenumber] is '1', the data from the memarray is saved 542 into the file ldpxxxx.mem. The file name is formed from the curr_page_index and bank addresses. The data from the file ldpxxxx.mem is read 544 into memarray, and the data word at the offset in memarray is read. The file name is formed from the pagenumber and bank addresses. If the page_allocated[pagenumber] is '0', the page_allocated[pagenumber] is initialized 548 to '1' and the memarray is also initialized by indicating unknown values. After the step 540 of checking the element of the page_allocated array corresponding to the pagenumber, the curr_page_index is updated 546 to the accessed pagenumber. The memarray is also assigned to output the data. Subsequently, the OPCODE performs the previously described step of masking 450 the data to be written as a MWRITE/MWRITEHi/MWRITELo file, and the data is written at the offset within the memarray. At this step, only one page/row is held in memory. Then the lapse of the simulation time is checked 390 and either returns 392 to the step 330 of checking the OPCODE for additional instructions to be executed or stops 394.

The flowcharts in Figures 3-5 demonstrate how the present invention uses dynamic memory, or additional parsing with the data inside the dump memory, so that steps to be performed by the device can be used to reduce the amount of data that must be transferred from the dump memory to the memory array. By contrast, the static memory utilized by the prior art flowchart 200 flagged all data as part of the device, and it was transferred from the dump memory to the memory array, whether the device required it or not.
Since transfers between dump memory and memory array are generally slow, reducing the data to be transferred can significantly reduce the time required to process an application or apparatus. Also, since very large temporary files are required by some applications, such as the preferred logic simulation apparatus, and since these temporary files often reside in the memory array, reducing the size of transfers to and from the memory array by the teachings of the invention can be very beneficial.
The above description of a preferred embodiment of the present invention using dynamic memory to improve a logic simulation device is only one application. The present invention is not so limited. The present invention can be used in any computer or network system utilizing three or more memory storage devices to transfer data to and from user applications.
Figure 6 describes the present invention generally. A computer system or network system 600 includes a first data storage device 610, which is typically a bulk or mass memory; a second data storage device 620, which is a dump memory; and a third data storage device 630, which is usually a memory array. The dump memory forms an interface or translator between the mass memory and the memory array. Like the mass memory, the dump memory can accept large blocks of data in a mass transfer mode, sometimes referred to as a dump. It can also efficiently transfer data to and from the memory array in smaller, more versatile units such as words. Word is a known term in the art, and refers to data in a form convenient for use in computations and general data handling operations, e.g. in a computer.
For general reference, the second storage device 620, also called swap space or virtual memory, is normally several times larger than the third storage device 630, but is itself generally much smaller than the first storage device 610, often many hundreds or thousands of times smaller. The first storage device 610, such as a tape, disk array, CD-ROM, etc., can be both read and written for the purposes of the present invention, although memory which cannot be written to is also suitable. The first storage device 610 is interfaced to the second storage device 620, which is generally selected from the fastest devices normally used for mass memory, though these are still much slower than memory array devices. The second storage device is interfaced to the third storage device, which is generally the smallest and fastest memory space available in the computer system 600. As a result, the second storage device 620 is generally much smaller and faster than bulk or mass memory, but much larger and slower than the memory array.
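The two transfer granularities in this three-tier arrangement, bulk block dumps from the first storage device into the dump memory and word-sized transfers between the dump memory and the memory array, can be sketched as below. The block and word sizes are illustrative assumptions, not values from the specification.

```python
BLOCK_SIZE = 4096   # bytes per bulk dump from mass storage (assumed)
WORD_SIZE = 4       # bytes per word moved toward the memory array (assumed)

class ThreeTierStore:
    def __init__(self, mass_storage):
        self.mass = mass_storage           # first storage device 610: largest, slowest
        self.dump = bytearray()            # second storage device 620: swap space
        self.array = bytearray(WORD_SIZE)  # third storage device 630: smallest, fastest

    def dump_block(self, block_index):
        # Bulk ("dump") transfer of one block from mass storage into the dump memory.
        start = block_index * BLOCK_SIZE
        self.dump[:] = self.mass[start:start + BLOCK_SIZE]

    def fetch_word(self, offset):
        # Word-granular transfer from the dump memory into the memory array.
        self.array[:] = self.dump[offset:offset + WORD_SIZE]
        return bytes(self.array)
```

The point of the arrangement is that the slow bulk transfer happens rarely, in large units, while the frequent accesses are small word transfers against the faster tiers.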
The second storage device 620 is given an address in memory from which the data is to be transferred. The number of bytes to be transferred, or the length of the data unit, is generally known or user specified. The second storage device 620 also examines or parses the data units 640 to determine which pieces of the data to be transferred are applicable to the application or device, since data is often broken up across blocks into smaller data units or fragments. While the information about how the data is fragmented is often a part of the block of data being transferred, the second storage device 620 allocates an area where flags or indicators of the data validity with respect to the system are kept. A flag buffer can be used to hold the flags.
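A minimal sketch of this examining and flagging step might look like the following. The record layout, a tag value followed by a fixed-length payload, is purely an assumption for illustration; the flag buffer records a 1 for each data unit applicable to the application, and only flagged units move on.

```python
RECORD_LEN = 4   # payload words per data unit (assumed)
APP_TAG = 0x01   # tag value marking application data (assumed)

def build_flag_buffer(dump, tag=APP_TAG, record_len=RECORD_LEN):
    """Scan the dump once; return (flags, units), one flag per data unit."""
    flags, units = [], []
    for i in range(0, len(dump), record_len + 1):
        unit_tag = dump[i]
        payload = dump[i + 1:i + 1 + record_len]
        flags.append(1 if unit_tag == tag else 0)  # validity flag for this unit
        units.append(payload)
    return flags, units

def transfer_selected(flags, units):
    # Only flagged (application) units are transferred toward the memory array.
    return [u for f, u in zip(flags, units) if f]
```

Because the flags are built while the data already sits in the dump memory, the selection costs one pass over data that had to be read anyway, while the expensive transfers onward carry only the flagged units.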
The addressing information is then used to provide hierarchical information retrieval means within the simulation data, thereby reducing the data that is to be transferred from and to the first storing means and to and from the second storing means. The amount of data that needs to be transferred is thereby reduced and the time required for transferring this data is consequently reduced, increasing the efficiency of the logic simulation device. The addressing information also reduces the size required in the second storing means.
Although the above description contains specific examples, these should not be construed as limiting the scope of the invention, but as merely providing illustrations of some of the preferred embodiments of this invention. Many variations are possible and are to be considered within the scope of the present invention. Thus the scope of the invention should be determined not by the examples given herein, but rather by the appended claims and their legal equivalents.

Claims

WE CLAIM:
1. A utility for selectively transferring application data between storage means in a computer system, the utility comprising: first data storage means for storing the application data and non-application data as contents therein; second data storage means for receiving a dump of at least a part of the application and non-application data from the first data storage means; means for examining the data in the second storage means to identify the application data and deriving a secondary address for the application data; third data storage means for receiving and parsing only the application data from the second storage means indicated by the secondary address; and means for transferring the parsed application data from the third data storage means to the second data storage means, for transferring the parsed application data and the remaining non-application data from the second data storage means to the first data storage means, and for dumping the remaining unexamined data from the first data storage means to the second storage means.
2. The utility of claim 1 wherein the second storage means further: initializes a page_allocated register in the second storage device equal in size to the number of pages in the application data; and initializes an address array register in the second storage device equal in size to the number of pages in the application data to be initialized.
3. The utility of claim 1 wherein the second storage means further: initializes an address file with the application data; reads the addresses from the address file to the address_array register; and sets the page_allocated register corresponding to each address in the address file to a known high value.
4. The utility of claim 1 wherein the second storage means further: writes the application data into a memarray register, if a currently accessed pagenumber is the same as the last accessed pagenumber; sets the element of the page_allocated array corresponding to the pagenumber to a high known value; and masks the written application data and writes the data at an offset within the memarray register.
5. The utility of claim 1 wherein the second storage means further: writes the application data in a memarray register to an Idpxxxx.mem file, if a currently accessed pagenumber is different from the last accessed pagenumber; reads the application data from the Idpxxxx.mem file into the memarray register, if the element of the page_allocated array corresponding to the pagenumber is a first predetermined value; initializes the element of the page_allocated array corresponding to the pagenumber to the first predetermined value and initializes the memarray register to indicate a known value, if the element of the page_allocated array corresponding to the pagenumber is a second predetermined value; sets the last accessed pagenumber to the currently accessed pagenumber; and masks the written application data and writes the data at an offset within the memarray register.
6. The utility of claim 1 wherein the second storage means further: reads the application data at an offset within the memarray register, if a currently accessed pagenumber is the same as the last accessed pagenumber.
7. The utility of claim 1 wherein the second storage means further: writes the application data to an Idpxxxx.mem file, if an element of the page_allocated array corresponding to a currently accessed pagenumber is a first predetermined value and the currently accessed pagenumber is different from the last accessed pagenumber; reads the application data in a memarray register to the Idpxxxx.mem file and reads an application data word at an offset in the memarray register; and sets the last accessed pagenumber to the currently accessed pagenumber.
8. A simulation device for dynamically simulating hard memory in the software of a computer system, the device comprising:
a mass memory device for storing the application data and non-application data as a data block therein; a dump memory for receiving a dump of at least a part of the data block containing application and non-application data from the mass memory device; a memory controller for examining the data in the dump memory to identify the application data and deriving a secondary address for the application data; a memory array for receiving only the application data from the second storage means indicated by the secondary address, the memory controller parsing only the application data; and means for transferring the parsed application data from the memory array to the dump memory, for transferring the parsed application data and the remaining non-application data from the dump memory to the mass memory device, and for dumping the remaining unexamined data from the mass memory device to the dump memory.
9. The device of claim 8 wherein: the first storage means is a mass storage device selected from the group consisting of a hard disk, CD-ROM, tape, or other mass storage devices; the second storage device is selected from the group consisting of random-access memory or other read-write memory devices; and the third storage device is selected from a group consisting of DRAM, SRAM, SDRAM, or EDO DRAM.
10. The device of claim 8 wherein: the application data is a circuit to be simulated.
11. The device of claim 10 wherein: the dump memory contains addresses partly determined by the application data.
12. The device of claim 10 wherein: the circuit is described in a hardware description language.
13. The device of claim 12 wherein: the hardware description language is VERILOG.
14. A method of selectively transferring application data between storage means of a computer system, the method comprising the steps of: dumping at least a portion of the application data and non-application data from a first data storage device to a second data storage device; examining the dumped application and non-application data in the second storage means to identify the application data and deriving a secondary address for the application data; transferring only the application data in the second storage means indicated by the secondary address to a third storage device; parsing only the application data in the third storage device; transferring the parsed application data in the third storage device to the second storage device; and transferring the parsed application data and remaining non-application data from the second data storage device to the first data storage device and dumping the remaining unexamined application and non-application data from the first data storage device to the second storage device.
15. The method of claim 14 wherein, prior to the dumping step, the method includes the steps of: initializing a page_allocated register in the second storage device equal in size to the number of pages in the application data; and initializing an address_array register in the second storage device equal in size to the number of pages in the application data to be initialized.
16. The method of claim 15 wherein, prior to the dumping step, the method includes the steps of: initializing an address file with the application data; reading the addresses from the address file to the address_array register; and setting the page_allocated register corresponding to each address in the address file to a known high value.
17. The method of claim 14 wherein the examining step includes the steps of:
writing the application data into a memarray register, if a currently accessed pagenumber is the same as the last accessed pagenumber; setting the element of the page_allocated array corresponding to the pagenumber to a high known value; and masking the written application data and writing the data at an offset within the memarray register.
18. The method of claim 14 wherein the examining step includes the steps of: writing the application data in a memarray register to an Idpxxxx.mem file, if a currently accessed pagenumber is different from the last accessed pagenumber; reading the application data from the Idpxxxx.mem file into the memarray register, if the element of the page_allocated array corresponding to the pagenumber is a first predetermined value; initializing the element of the page_allocated array corresponding to the pagenumber to the first predetermined value and initializing the memarray register to indicate a known value, if the element of the page_allocated array corresponding to the pagenumber is a second predetermined value; setting the last accessed pagenumber to the currently accessed pagenumber; and masking the written application data and writing the data at an offset within the memarray register.
19. The method of claim 14 wherein the examining step includes the step of: reading the application data at an offset within the memarray register, if a currently accessed pagenumber is the same as the last accessed pagenumber.
20. The method of claim 14 wherein the examining step includes the steps of: writing the application data to an Idpxxxx.mem file, if an element of the page_allocated array corresponding to a currently accessed pagenumber is a first predetermined value and the currently accessed pagenumber is different from the last accessed pagenumber; reading the application data in a memarray register to the Idpxxxx.mem file and reading an application data word at an offset in the memarray register; and setting the last accessed pagenumber to the currently accessed pagenumber.
PCT/US1999/004261 1998-03-19 1999-03-19 Selective data transfer between storage mediums using dynamic memory allocation WO1999048005A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU31808/99A AU3180899A (en) 1998-03-19 1999-03-19 Selective data transfer between storage mediums using dynamic memory allocation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/045,055 US6321295B1 (en) 1998-03-19 1998-03-19 System and method for selective transfer of application data between storage devices of a computer system through utilization of dynamic memory allocation
US09/045,055 1998-03-19

Publications (1)

Publication Number Publication Date
WO1999048005A1 true WO1999048005A1 (en) 1999-09-23

Family

ID=21935771

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1999/004261 WO1999048005A1 (en) 1998-03-19 1999-03-19 Selective data transfer between storage mediums using dynamic memory allocation

Country Status (4)

Country Link
US (2) US6321295B1 (en)
AU (1) AU3180899A (en)
TW (1) TW514778B (en)
WO (1) WO1999048005A1 (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6321295B1 (en) * 1998-03-19 2001-11-20 Insilicon Corporation System and method for selective transfer of application data between storage devices of a computer system through utilization of dynamic memory allocation
US6745310B2 (en) * 2000-12-01 2004-06-01 Yan Chiew Chow Real time local and remote management of data files and directories and method of operating the same
JP2002259201A (en) * 2001-03-02 2002-09-13 Hitachi Ltd Method for starting computer system
US6730883B2 (en) * 2002-10-02 2004-05-04 Stratagene Flexible heating cover assembly for thermal cycling of samples of biological material
US7047447B2 (en) * 2003-01-16 2006-05-16 Sun Microsystems, Inc. Method and system for postmortem object type identification
US7418568B2 (en) * 2005-01-05 2008-08-26 Sap Ag Memory management technique
US7459984B2 (en) * 2005-05-26 2008-12-02 Sirf Technology Holdings, Inc. Method and apparatus for self-calibration and adaptive temperature compensation in GPS receivers
US7386636B2 (en) * 2005-08-19 2008-06-10 International Business Machines Corporation System and method for communicating command parameters between a processor and a memory flow controller
US7500039B2 (en) * 2005-08-19 2009-03-03 International Business Machines Corporation Method for communicating with a processor event facility
US7516291B2 (en) * 2005-11-21 2009-04-07 Red Hat, Inc. Cooperative mechanism for efficient application memory allocation
US8438003B2 (en) * 2007-04-12 2013-05-07 Cadence Design Systems, Inc. Methods for improved simulation of integrated circuit designs
TWI391911B (en) * 2008-04-15 2013-04-01 Novatek Microelectronics Corp Memory access apparatus and display using the same
US8645763B2 (en) 2011-09-12 2014-02-04 Microsoft Corporation Memory dump with expanded data and user privacy protection
US8510523B2 (en) 2011-09-12 2013-08-13 Microsoft Corporation Memory dump with expanded data and user privacy protection
US10158715B2 (en) 2015-03-10 2018-12-18 Honeywell International Inc. Method and apparatus for optimizing memory utilization in gateway of process control and automation system
US10657068B2 (en) * 2018-03-22 2020-05-19 Intel Corporation Techniques for an all persistent memory file system
US10713158B2 (en) 2018-06-28 2020-07-14 Western Digital Technologies, Inc. Non-volatile storage system with dynamic allocation of applications to memory based on usage monitoring

Family Cites Families (15)

Publication number Priority date Publication date Assignee Title
US4926322A (en) * 1987-08-03 1990-05-15 Compaq Computer Corporation Software emulation of bank-switched memory using a virtual DOS monitor and paged memory management
US5222030A (en) 1990-04-06 1993-06-22 Lsi Logic Corporation Methodology for deriving executable low-level structural descriptions and valid physical implementations of circuits and systems from high-level semantic specifications and descriptions thereof
US5598344A (en) 1990-04-06 1997-01-28 Lsi Logic Corporation Method and system for creating, validating, and scaling structural description of electronic device
US5572436A (en) 1990-04-06 1996-11-05 Lsi Logic Corporation Method and system for creating and validating low level description of electronic design
US5794005A (en) * 1992-01-21 1998-08-11 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Synchronous parallel emulation and discrete event simulation system with self-contained simulation objects and active event objects
WO1993016433A1 (en) 1992-02-07 1993-08-19 Seiko Epson Corporation Hardware emulation accelerator and method
JP3620860B2 (en) 1992-06-05 2005-02-16 株式会社メガチップス Simulation device
JP3172617B2 (en) 1993-03-10 2001-06-04 三菱電機株式会社 Logic description conversion apparatus and method, and logic synthesis method
JP2815281B2 (en) 1993-04-19 1998-10-27 株式会社ピーエフユー Digital circuit design support system and method
JP2768889B2 (en) 1993-06-07 1998-06-25 株式会社東芝 Logic simulation equipment
US5604895A (en) 1994-02-22 1997-02-18 Motorola Inc. Method and apparatus for inserting computer code into a high level language (HLL) software model of an electrical circuit to monitor test coverage of the software model when exposed to test inputs
US5541850A (en) 1994-05-17 1996-07-30 Vlsi Technology, Inc. Method and apparatus for forming an integrated circuit including a memory structure
US5604889A (en) * 1994-06-15 1997-02-18 Texas Instruments Incorporated Memory management system for checkpointed logic simulator with increased locality of data
US5999984A (en) * 1997-06-18 1999-12-07 Hughes Electronics Corporation Computer memory modeling data storage
US6321295B1 (en) * 1998-03-19 2001-11-20 Insilicon Corporation System and method for selective transfer of application data between storage devices of a computer system through utilization of dynamic memory allocation

Non-Patent Citations (1)

Title
"METHOD FOR APPLICATION SWAPPING", IBM TECHNICAL DISCLOSURE BULLETIN, vol. 30, no. 9, 1 February 1988 (1988-02-01), pages 146, XP000021702, ISSN: 0018-8689 *

Also Published As

Publication number Publication date
US20020073291A1 (en) 2002-06-13
US6321295B1 (en) 2001-11-20
US6754771B2 (en) 2004-06-22
TW514778B (en) 2002-12-21
AU3180899A (en) 1999-10-11


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SL SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
NENP Non-entry into the national phase

Ref country code: KR

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase