CA1278875C - Memory access system - Google Patents
Memory access system
- Publication number
- CA1278875C
- Authority
- CA
- Canada
- Prior art keywords
- memory
- row address
- address
- data
- column
- Prior art date
- Legal status
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C7/00—Arrangements for writing information into, or reading information out from, a digital store
- G11C7/22—Read-write [R-W] timing or clocking circuits; Read-write [R-W] control signal generators or management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0215—Addressing or allocation; Relocation with look ahead addressing means
Abstract
ABSTRACT OF THE INVENTION
An improved memory architecture is disclosed, having particular application for use in computer systems employing virtual memory techniques. A processor provides row and column addresses to access data stored in a dynamic random access memory (DRAM). The virtual address supplied by the processor includes high and low order bits. In the present embodiment, the virtual high order bits represent the row address and the low order bits represent a real column address. The virtual row address is applied to a memory management unit (MMU) for translation into a real row address. The real column address need not be translated. A comparator compares the current virtual row address to the previous row address stored in a latch. If the current row and previous row addresses match, a cycle control circuit couples the real column address to the DRAM, and applies a CAS signal such that the desired data is accessed in the memory without the need to reapply the row address. If the row addresses do not match, the cycle control circuit initiates a complete memory fetch cycle and applies both row and column addresses to the DRAM, along with the respective RAS and CAS signals. By properly organizing data in the memory, the probability that sequential memory operations access the same row in the DRAM may be significantly increased. By using such an organization, the present invention provides data retrieval at speeds on the order of a cache based memory system for a subset of data stored.
Description
BACKGROUND OF THE INVENTION
1. Field of the Invention:
The present invention relates to memory systems for computers, and more particularly, to methods and apparatus for increasing data access speed and efficiency.
2. Art Background:
In many data processing systems, it is common to utilize a high speed buffer memory, referred to as a "cache", coupled to a central processing unit (CPU) to improve the average memory access time for the processor. The use of a cache is based upon the premise that over time, a data processing system will access certain localized areas of memory with high frequency. The cache typically contains a subset of the complete data set disposed in the main memory, and can be accessed very quickly by the CPU without the necessity of reading the data locations in the main memory.
The use of a cache adds considerable complexity to a data processing system and substantially increases system cost.
Historically, memory access requirements were such as to justify the increased expense and complexity of cache based architectures. In new generation Reduced Instruction Set Computers (RISC), processing speeds require single cycle memory access. Most modern memory systems utilize dynamic random access memories (DRAMs) which generally have 200 nanosecond cycle times ("cycle" time being the time from the initiation of the memory access until data can again be provided by the memory device). Although typical DRAMs provide 200 nanosecond cycle times, the column access/cycle time for the particular device is generally only 45 nanoseconds (where "access" time is the time from the application of a column address signal (CAS) until data is provided on a data bus). Accordingly, the cycle time for a commercially available DRAM may be up to five times the column access/cycle time for the same device.
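The practical consequence of this timing gap can be made concrete with a little arithmetic. The sketch below (illustrative, not part of the patent; it uses only the approximate 200 ns full-cycle and 45 ns column-access figures quoted above, with hypothetical hit rates) computes the average access time as a function of how often consecutive accesses land in the same DRAM row.

```python
# Weighted-average access time for a static column DRAM, assuming the
# text's approximate figures: ~200 ns for a full RAS+CAS cycle and
# ~45 ns for a CAS-only access to an already-enabled row.
def effective_access_ns(hit_rate, column_ns=45.0, full_cycle_ns=200.0):
    """Average memory access time for a given same-row hit rate."""
    return hit_rate * column_ns + (1.0 - hit_rate) * full_cycle_ns

# With no row locality, every access pays the full ~200 ns cycle:
print(effective_access_ns(0.0))
# With 80% of accesses landing in the previously enabled row,
# the average drops to roughly 76 ns:
print(effective_access_ns(0.8))
```

The closer the hit rate gets to 1.0, the closer the memory behaves to a 45 ns device, which is the basis for the "up to five times" figure above.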
As will be described, by properly allocating data storage in the DRAM memory and using the teachings of the present invention, the main memory of the computer may be effectively used as a cache. The present invention discloses apparatus and methods for use in a computer system to quickly and efficiently access the computer's main memory to obviate the need for a separate cache. The present invention utilizes static column DRAMs in an architecture which permits very fast access to a localized subset of the main memory, and is particularly suited for use in systems employing virtual memory techniques.
An improved memory architecture is disclosed, having particular application for use in computer systems employing virtual memory techniques. A processor provides row and column addresses to access data stored in a dynamic random access memory (DRAM). The virtual address supplied by the processor includes high and low order bits. In the present embodiment, the virtual high order bits represent the row address and the low order bits represent a real column address. The virtual row address is applied to a memory management unit (MMU) for translation into a real row address. The real column address need not be translated. A comparator compares the current virtual row address to the previous row address stored in a latch. If the current row and previous row addresses match, a cycle control circuit couples the real column address to the DRAM, and applies a CAS signal such that the desired data is accessed in the memory without the need to reapply the row address. If the row addresses do not match, the cycle control circuit initiates a complete memory fetch cycle, applies both row and column addresses to the DRAM along with the respective RAS and CAS signals, and updates the contents of the latch. By properly organizing data in the memory, the probability that sequential memory operations access the same row in the DRAM may be significantly increased. By using such an organization, the present invention provides data retrieval at speeds on the order of a cache based memory system for a subset of data stored. Data stored along the same row in the DRAM may be accessed at speeds four to five times faster than the time required for a complete memory fetch cycle.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a block diagram conceptually illustrating the nature of the present invention.
Figure 2 is a more detailed block diagram illustrating an example of the presently preferred embodiment of the invention.
Figure 3 is a flow chart identifying the sequence of operations of the embodiment illustrated in Figure 2.
An improved memory architecture is disclosed, having particular application for use in computer systems employing virtual memory techniques. In the following description, for purposes of explanation, specific memory devices, data rates, architectures, and components are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practised without these specific details. In other instances, well known circuits are shown in block diagram form in order not to obscure the present invention unnecessarily.
With reference to Figure 1, the present invention is disclosed conceptually in block diagram form. A dynamic random access memory (DRAM) 10 comprises one memory device of a data processing system. In the present embodiment, DRAM 10 includes a memory array of the type having row and column addresses identifying the location of data within the array. DRAM 10 may comprise a current generation one megabit CMOS dynamic RAM with static column access. In such memory devices, the cycle time from the initiation of the memory access to the output of desired data is on the order of 200 nanoseconds. Data is accessed from DRAM 10 by first providing a row address, which identifies a particular row within the memory array in which the data is located, along with a row address strobe (RAS) signal. A column address is then provided to DRAM 10 which identifies the particular column within the enabled row having the desired data, along with a column address strobe (CAS) signal. Typically, in static column DRAMs the time from the application of the column address and CAS signal to the output of data by the device is approximately 45 nanoseconds. The time period from the application of the column address and CAS signal to the output of data is referred to as the "column access" time of the memory device. Accordingly, a memory cycle is approximately five times the length of the column access/cycle speed of the device.
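The RAS-then-CAS access sequence described above can be sketched as a toy software model. This is a minimal illustration only; the class and method names are invented for the sketch and do not appear in the patent.

```python
# Toy model of a static column DRAM: a full access first latches a row
# with RAS (the slow step), then selects a column with CAS. Once a row
# is enabled, further CAS-only accesses to that row skip the RAS step.
class StaticColumnDRAM:
    def __init__(self, rows, cols):
        self.cells = [[0] * cols for _ in range(rows)]
        self.open_row = None          # row enabled by the last RAS strobe

    def ras(self, row):
        """Apply a row address and RAS strobe: enable one row."""
        self.open_row = row

    def cas(self, col):
        """Apply a column address and CAS strobe within the open row."""
        assert self.open_row is not None, "RAS must precede CAS"
        return self.cells[self.open_row][col]

dram = StaticColumnDRAM(rows=4, cols=8)
dram.cells[2][5] = 42
dram.ras(2)            # slow step: enables row 2
print(dram.cas(5))     # fast CAS-only read within the open row -> 42
print(dram.cas(1))     # another column in the same row, no new RAS -> 0
```

The key property the invention exploits is visible in the model: after one `ras()`, any number of `cas()` calls can read different columns of the same row.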
In most new generation static column DRAM memory devices, once the row address and RAS strobe signal have been applied, any data element disposed within the enabled row may be accessed by applying the appropriate column address and CAS signal, without the necessity of reapplying the row address and RAS signal. As noted, access to data stored within a previously enabled row may occur very quickly (approximately 45 nanoseconds) as opposed to accessing data stored in another row within the memory array.
It is common in high speed data processing applications to use a "cache" memory coupled to a central processing unit to improve the average access time for the processor. The cache contains a subset of data located within the main memory of the system, and can be accessed very quickly by the processor without the necessity of reading the data locations in main memory. It is generally assumed in high speed processing systems that a cache memory is required to achieve single cycle memory access by the processor. However, as previously noted, the use of a cache significantly increases the complexity and cost of the data processing system. As will be described in this Specification, the present invention provides a memory system which obviates the need for a cache, yet provides high-speed access to localized areas within main memory. It will be appreciated by one skilled in the art that if data is appropriately arranged within DRAM 10 (Figure 1), any data element disposed in a previously enabled row may be accessed within a single clock cycle of the system. Accordingly, by properly storing data within the memory array, a localized subset of the main memory may be effectively used as a cache by the processor.
Referring once again to Figure 1, a processor (not shown) provides a row address 12 and a column address 14 to access desired data stored within DRAM 10. The column address 14 is applied to a multiplexer 16, which, as will be described, selectively applies either the row or column address to DRAM 10. Row address 12 is applied to a latch 18 and a comparator 20. A cycle control circuit 22 is coupled to latch 18 and comparator 20, as well as multiplexer 16, and provides both the RAS and CAS signals to DRAM 10 to access data. Upon application of an initial row address 12 and column address 14 to the memory system disclosed in Figure 1, comparator 20 compares the current row address 12 (RA(i)) with the previous row address (RA(i-1)) which had been used to access data on the previous full RAS, CAS memory cycle. If the row address stored within the latch does not match the current row address supplied by the processor, cycle control 22 clocks latch 18, such that the current row address is then stored within the latch, and signals multiplexer 16 to apply the current row address 12 to DRAM 10. Cycle control 22 further issues a RAS signal to DRAM 10 such that the desired row within the memory array is enabled. Cycle control 22 then applies column address 14 through multiplexer 16 to the DRAM 10, and issues an appropriate CAS signal to the DRAM which identifies the particular data to be accessed in the array. The DRAM 10 then provides the selected data to the processor, typically over a data bus (not shown).
In the event that the current row address RA(i) is the same as the previous row address RA(i-1) stored in latch 18, cycle control 22 applies the current column address 14 to DRAM 10 and issues a CAS signal. As previously discussed, once a particular row within DRAM 10 is enabled, access to a data element within that row may be done very quickly. In the embodiment illustrated in Figure 1, it will be appreciated that it is only necessary to apply the column address 14 to DRAM 10 to access data stored within a row enabled on a previous data cycle. Accordingly, in the case where the current row address is the same as the immediately preceding row address, it is not necessary to reapply the current row address to the DRAM and provide an additional RAS signal; rather, it is only necessary to apply the current column address and apply the CAS signal to access the memory. In a typical application, the use of the present invention permits access to data stored within DRAM 10 in approximately 45 nanoseconds instead of the typical 200 nanoseconds.
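The Figure 1 control flow (latch holding the previous row address, comparator deciding between a fast CAS-only cycle and a full RAS+CAS cycle) can be sketched as follows. The class is a behavioral sketch only; the timing constants are the approximate figures quoted in the text, and the names are illustrative.

```python
# Behavioral sketch of the Figure 1 cycle control: compare the current
# row address against the latched previous row address; on a match,
# perform a CAS-only access, otherwise a full fetch cycle that also
# updates the latch.
class CycleControl:
    COLUMN_NS = 45       # CAS-only access, row already enabled (approx.)
    FULL_NS = 200        # complete RAS + CAS fetch cycle (approx.)

    def __init__(self):
        self.latched_row = None          # previous row address (latch 18)

    def access(self, row, col):
        """Return (latency_ns, cycle_kind) for one memory access."""
        if row == self.latched_row:      # comparator 20 reports a match
            return self.COLUMN_NS, "cas-only"
        self.latched_row = row           # clock latch 18 with the new row
        return self.FULL_NS, "full-cycle"

ctl = CycleControl()
print(ctl.access(3, 7))   # first touch of row 3: full cycle
print(ctl.access(3, 9))   # same row, different column: fast path
print(ctl.access(4, 0))   # row changed: full cycle again
```

Note that the latch is only updated on a miss, exactly as the text describes: on a match the previous row address remains valid and nothing needs to be reclocked.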
As will be appreciated by one skilled in the art, in a working memory system multiple banks of memory must be utilized rather than a single dynamic RAM. In order to render the system outlined in Figure 1 practical, it is necessary to arrange data within multiple banks of memory, such that the probability of multiple sequential accesses to the same row within a memory bank is significantly increased. If frequently used data may be localized within the memory in this manner, main memory may be utilized in place of a much more expensive and complex cache system.
Referring now to Figure 2, the present invention will be illustrated and described with reference to an embodiment having two memory banks, DRAM bank "A" and DRAM bank "B", in a data processing system utilizing virtual memory techniques. For purposes of this Specification, and in order to avoid unnecessary complication, only those address, control and data lines necessary to illustrate the present invention have been disclosed in the Figures. However, it will be apparent to one skilled in the art that other lines, devices, and support circuitry may be required in any particular application.
A processor 30 communicates with the memory system of the present invention along a virtual address bus 32. Virtual address bus 32 includes a plurality of individual lines which carry addresses in the form of multi-bit words. Typically, processors may use address buses between 16 and 32 bits, in which the bits are transmitted in parallel along individual lines or wires which form the address bus.
In the present invention, addresses transmitted over virtual address bus 32 are structured such that low order bits form a direct (real) address segment for the DRAM memory, with the low order bits being separated from the virtual address bus 32 along line 34. As will be described more fully below, the low order (real) address bits comprise the column address for desired data stored within DRAM bank A and DRAM bank B. High order address bits form a virtual address segment corresponding to the row address for data stored within the main memory DRAM banks A and B. (See, for example, U.S. Patents Nos. 4,527,232 and 4,550,368, assigned to the Assignee of the present application.)
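The address partition just described can be illustrated with simple bit manipulation. The 10-bit column width below is an assumption chosen for the sketch (e.g. a 1024-column array), not a figure taken from the patent.

```python
# Illustrative split of a processor address into a high-order virtual
# row segment and a low-order real (untranslated) column segment.
COLUMN_BITS = 10                       # assumed column-address width
COLUMN_MASK = (1 << COLUMN_BITS) - 1

def split_address(vaddr):
    """Return (virtual_row_segment, real_column_segment)."""
    return vaddr >> COLUMN_BITS, vaddr & COLUMN_MASK

row, col = split_address(0x12345)
print(hex(row), hex(col))   # 0x48 0x345
```

Only the high-order row segment needs MMU translation; the low-order column segment is already a physical address and can be routed straight to the multiplexer, which is what makes the CAS-only fast path possible.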
The high order bits comprising the virtual address segment are retained on the virtual address bus 32. As illustrated, the high order virtual bits are applied to a memory management unit (MMU) 38 for translation into a high order physical address. In the present embodiment, MMU 38 is a relatively high speed random access memory unit, such as a fast static RAM. The MMU 38 takes as an input the high order virtual address supplied by processor 30 and provides as an output a translated physical (real) address along line 40, which specifies the actual physical row address of the desired data in DRAM banks A and B. MMU 38 may take the form of the memory management unit described in U.S. Patent No. 4,527,232 to Sun Microsystems, Inc. which issued July 2, 1980.
A latch 42 is coupled to virtual address bus 32 and a comparator 43 is coupled to both the virtual address bus 32 as well as to the latch 42. Similarly, a latch 44 is coupled to virtual address bus 32 and is in turn coupled to a comparator 48, such that the comparator 48 compares the contents of latch 44 with the current virtual address supplied over virtual address bus 32 from processor 30. As illustrated, the outputs of comparators 43 and 48 are passed through an OR gate 50 and are coupled to a cycle control circuit 55. The result of the OR operation between the outputs of comparators 43 and 48 is similarly coupled to cycle control 55. Cycle control circuit 55, as will be described more fully below, controls the clocking of latches 42 and 44 and the selection of multiplexer 60, as well as provides appropriate RAS and CAS signals to DRAM banks A and B.
MMU 38 provides the lowest translated physical bits of the virtual address supplied by processor 30 to the chip enable (CE) port of latches 42 and 44 over line 62. The state of the lowest translated physical bits (in the present example, for two memory banks, only the lowest translated physical bit is required) may be used to select either latch 42 or latch 44. For example, if the lowest translated physical bit is a logical 0, this bit would be applied to latch 44 (but since it is a 0 it would not enable latch 44) and would be inverted by an inverter 63 to a logical 1 which would be applied to latch 42, thereby enabling latch 42. Conversely, latch 44 would be enabled in the event the lowest translated physical bit of the virtual address is a logical 1. As illustrated in Figure 2, the lowest translated physical bit is also supplied to cycle control 55, and this bit (or bits in the case of more than two memory banks) identifies to cycle control 55 which DRAM bank data is to be accessed from.
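The two-bank latch-enable logic above reduces to a one-bit decode through the inverter. The sketch below models it; the return labels are invented for illustration (the patent identifies the latches as 42 and 44).

```python
# Sketch of the two-bank latch-enable decode: the lowest translated
# physical bit drives latch 44 directly and latch 42 through inverter
# 63, so exactly one latch is chip-enabled per access.
def select_latch(lowest_translated_bit):
    """Return which row-address latch is chip-enabled for this access."""
    if lowest_translated_bit == 0:
        return "latch_42"   # bit inverted to 1 by inverter 63 -> bank A path
    return "latch_44"       # bit applied directly as 1 -> bank B path

print(select_latch(0))   # latch_42
print(select_latch(1))   # latch_44
```

With more banks, the same idea generalizes: log2(number of banks) low translated bits feed a decoder that enables exactly one latch/comparator pair.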
Assume, for sake of example, that the processor 30 provides a virtual address over virtual address bus 32 which corresponds to data disposed in DRAM bank A. As previously described, the low order bits corresponding to the column address within the bank comprise a real address segment, and are separated and passed over line 34 to multiplexer 60. The virtual segment of the address corresponding to the row address within DRAM bank A is applied to virtual address bus 32, and is immediately translated by MMU 38 such that the translated high order bits are coupled along line 40 to multiplexer 60. In addition, the lowest translated physical bit is coupled over line 62 to latches 42 and 44. For purposes of the present example, it is assumed that if the lowest translated physical bit is a logical 0, latch 42 is enabled, since the logical 0 will be inverted to a logical 1 by inverter 63. The virtual address is concurrently applied to enabled latch 42 as well as to comparator 43. It will be noted that although the current virtual row address is applied to latch 42, latch 42 contains the preceding virtual row address, and will continue to contain the preceding virtual row address until the latch is clocked.
Comparator 43 compares the current virtual row address to the preceding virtual row address stored in latch 42. The outputs of comparators 43 and 48 are coupled to OR gate 50 and cycle control 55.
In the event that either comparator 43 or comparator 48 identifies a match between the current virtual row address and the contents of the latches, OR gate 50 will indicate that a match exists and provide a signal over line 65 to cycle control 55. The output of each comparator is also coupled directly to cycle control 55 to identify which latch/comparator circuit identified the match. In the current example, where a match exists between the current virtual row address and the contents of latch 42, cycle control 55 signals multiplexer 60 to couple the real address segment comprising the column address to physical address bus 70 and DRAM bank A. Cycle control 55 also provides the necessary column address strobe signal (CAS A). DRAM bank A then provides the desired data to a data bus (not shown) within the access time of the static column dynamic RAM (approximately 45 nanoseconds). Accordingly, in the case where a match exists, the present invention does not reapply the virtual row address, nor is an additional RAS signal required, to access desired data. It will be noted by one skilled in the art that typically CAS signal lines remain in a high state and are applied in a low condition, and RAS lines are typically in a low state for purposes of data access. Moreover, reference is made to Figure 3, which sets forth the sequence of operations of the present invention as described herein.
In the event that there is no match between the current virtual row address and the previous row address stored in latch 42, cycle control 55 signals multiplexer 60 to couple the translated (now real) row address on line 40 to memory bank A along physical address bus 70. Concurrently, cycle control 55 issues a RAS high signal to bank A and provides a clock signal along clock lines 72 which clocks the enabled latch, namely latch 42, such that the current virtual row address is stored therein. The RAS line is then brought to a low condition and multiplexer 60 then applies the real address segment defining the column address to bank A and applies a low CAS signal. After completing this full memory cycle access, including the updating of latch 42 and the application of the row and column addresses along with their respective strobes, data is provided by memory bank A to the processor along the data bus (not shown). It will be appreciated by one skilled in the art that, in the event that a full memory access cycle is required, the time for the present invention to obtain the desired data is generally the same as that in other computer systems using standard memory access techniques to acquire data in a computer main memory. However, in those instances where a match occurs between the current virtual row address and the previously stored virtual row address, significant advantages in memory access speed are achieved by the present invention.
It will be further appreciated that the use of the present invention as illustrated in Figure 2 permits the computer system to allocate memory banks for certain types of data based on the translated virtual row address, and particularly, the lowest translated physical row address bits. Generally, it has been found that in most computer systems approximately 50% of processor memory fetches are for instructions, and approximately 50% of the memory fetches are for data. In current generation RISC processors, approximately 20% of the memory fetch operations are to obtain data and approximately 80% of the fetch operations are to obtain instructions. Accordingly, the use of the present invention permits an effective "cache" to be defined in one or more memory banks, such that localized subsets of memory may be accessed very quickly. In other words, the computer system's operating system may keep track of the data and instructions stored in memory and place data in, for example, even numbered memory banks and instructions in odd numbered memory banks, or vice versa. Accordingly, the probability for sequential memory access by the processor for any DRAM bank having identical virtual row addresses may be significantly increased. The use of such a memory system architecture in conjunction with the hardware of the present invention, as illustrated in Figure 2, obviates in many cases the need for a separate cache based system. In addition, it will be noted by one skilled in the art that although Figure 2 illustrates two banks of dynamic RAM memory, the embodiment illustrated may be expanded to a plurality of banks, each bank having a corresponding latch and comparator circuit, wherein the number of lowest translated physical bits used to enable specific latches is a function of the number of dynamic RAM banks utilized. Although the present invention has been described with reference to Figures 1-3, it will be appreciated that the Figures are for illustration only, and that the present invention may be used in numerous other memory applications.
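The benefit of segregating instructions and data into separate banks can be shown with a small row-hit counter. This is an editorial illustration under simplified assumptions (one open row per bank, an idealized alternating fetch stream); the access pattern and row numbers are invented for the example.

```python
# Count same-row hits for a stream of (bank, row) accesses, assuming
# each bank keeps exactly one row open between accesses.
def count_row_hits(accesses, num_banks):
    open_rows = [None] * num_banks
    hits = 0
    for bank, row in accesses:
        if open_rows[bank] == row:
            hits += 1
        open_rows[bank] = row
    return hits

# An alternating instruction (row 5) / data (row 9) fetch stream:
shared_bank = [(0, 5), (0, 9)] * 4      # both streams in bank 0
split_banks = [(0, 5), (1, 9)] * 4      # instructions bank 0, data bank 1
print(count_row_hits(shared_bank, 2))   # every access changes bank 0's row
print(count_row_hits(split_banks, 2))   # after the first pass, all hits
```

With both streams in one bank, each fetch closes the other stream's row and every access pays a full cycle; with the streams split across banks, only the first touch of each row misses, which is the locality the even/odd allocation scheme is designed to produce.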
In many data processing systems, it is common to utilize a high speed buffer memory, referred to as a "cache" coupled to a central processing unit (CPU) to improve the average memory access time ~or the processor. The use of a cache is based upon the premise that over time, a data processing system will access certain locallzed areas of memory with high frequency. The cache typically contains a subset of the complete data ~et disposed in the main memory, and can be accessed very guickly by the CPU without the necessity of reading the data locations in the main memory.
The use of a cache adds considerable complexity to a data processing system and substantially increases system cost.
H$storically, memory access requirements were such to ~ustify the increased expense and complexity o~ cache based architectures. In new generation Reduced Instruction Set Computers (RISC) processing speeds re~uire single cycle memory access. Most modern memory systems untilize dynamic random access memories ~DRAMs) which generally have 200 nanosecond cycle tlmes ("cycle" time being the time from the initiation of the memory access until data can again be provided by the memory device). Although typical DRAMs provide 200 nanosecond cycle times, the column access/cycle time for the particular device is generally only 45 nanoseconds (where "access" time is the time ~rom the application o~ a column address signal (CAS) until data i~
.. ....
'7~3875 1 provided on a data bus). Accordingly, the cycle time for a commercially available DRAM may be up to five times the column access/cycle time for the 6ame devic~.
As will be described, by properly allocating data storage in the DRAN memory and using the teachings of the present invention, the main memory of the computer may be effectively used as a cache. The present ~nvention discloses apparatus and methods for use in a computer system to gulckly and efficiently access the computer's main memory to obviate the need for a separate cache. The present invention utilizes static column DRAMs in an architecture which permits very fast access to a localized subset of the main memory, and ls particularly suited for use in ~ystems employing virtual memory techniques.
~27~3~37S
An improved memory architecture is disclosed, having particular application for use in computer systems employing virtual memory techniques. A processor provides row and column addresses to access data stored in a dynamic random access memory (DRAM). The ~irtual address supplied by the processor includes high and low order bits. In the present embodiment, the virtual high order bits represent the row address and the low order bits represent a real column address. The virtual row address is applied to a memory management unit (MMU) for translation into a real row address. The real column address need not be translated. A comparator compares the - current vlrtual rdw address to the previous row address stored in a latch. I~ the current row and previous row addresses match, a cycle control circuit couples the real column address to the DRAM, and applies a CAS ignal such that the desired data is accessed in the memory withouk the need to reapply the row address. If the row addresses do not match, the cycle control circuit initiates a complete memory fetch cycle and applies both row and column addresses to the DRAM, along with the respective RAS and CAS signals and updates the contents of the latoh. By properly organizing data in the memory, the probability that seguential memory operations access the same row in the DRAM may be significantly increased. By using such an organization, the present invention provides data retrieval at speeds on the order of a cache based memory system for a subset of data ~tored. Data stor~d along the same row in the DRAM may be accessed at ~peeds four to ~ive ~imes faster then the time required for a complete memory fetch cycle.
1 BRIEF DESC~IPTION OF THE DRAWINGS
Figure 1 is a block diagram conceptually ~llustrating the nature of the present invention.
Figure 2 i~ a more detailed block diagram illustrating an example of the~presently preferred embodiment of the invention.
Figure 3 ls a ~low chart identifying the sequence of operations o~ the embodiment illustrated in Figure 2.
~ ~'7~387~
An lmproved memory archltecture is disclosed, having particular application for use in computer systems employing virtual memory techniques. In the ~ollowing description for purposes of explanation, specific memory devices, data rates, architectures, and components are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practised without these specific details. In other instances, well known circuits are shown in block diagram form in order not to obscure the present invention unnecessarily.
With reference to Figure 1, the present invention is disclosed conceptually in block diagram form. A dynamic random access memory (DRAM) 10 comprises one memory device of a data processing system. In the present embodiment, DRAM 10 includes a memory array o~ the type having row and column addresses identifyiny the location of data within the array. DRAM 10 may comprise a current generation one megabit CMOS dynamic RAM with statio column access. In such memory devices, the cycle time from the initiation of the memory access to the output of desired data is on the order of 200 nanoseconds. Data is accessed from DRAM 10 by first providing a row address, which identifies a particular row within the memory array in which the data is located, along with a row address strobe (RAS) signal. A column address is then provided to DRAM 1~ which identifies the particular column within the enabled row having the desired data, along with a column address ~trobe (CAS) signal. Typically, in static column DRAMs the time from the application of the column address and CAS 6ignal to the output o~ data by the device is approximately 45 nanoseconds. The time period from the application of the column address and CAS ~ignal to the output of data is referred to as the "column access" time of the memory device. Accordingly, a memory cycle is approximately five times the length of the column access/cycle speed of the device.
In most new generation static column DRAM memory devices, once the row address and RAS strobe signal have been applied, any data element disposed within the enabled row may be accessed by applying the appropriate column address and CAS signal, without the necessity of reapplying the row address and RAS signal. As noted, access to data stored within a previously enabled row may occur very quickly (approximately 45 nanoseconds) as opposed to accessing data stored in another row within the memory array.
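The two access latencies just described can be captured in a toy timing model (a sketch only; the constant and function names are illustrative, and the 200 and 45 nanosecond figures are the approximate values given above, not exact device specifications):

```python
# Toy timing model of a static column DRAM access, using the
# approximate figures given in the text (not part of the patent).
FULL_CYCLE_NS = 200    # row address + RAS, then column address + CAS
COLUMN_ACCESS_NS = 45  # column address + CAS only; row already enabled

def access_time_ns(row_already_enabled: bool) -> int:
    """Latency of one read, depending on whether the row is open."""
    return COLUMN_ACCESS_NS if row_already_enabled else FULL_CYCLE_NS

print(access_time_ns(False))  # 200 (full memory cycle)
print(access_time_ns(True))   # 45  (column access only)
```

A full cycle is thus roughly four to five times the column access time, which is the ratio the text refers to.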
It is common in high speed data processing applications to use a "cache" memory coupled to a central processing unit to improve the average access time for the processor. The cache contains a subset of data located within the main memory of the system, and can be accessed very quickly by the processor without the necessity of reading the data locations in main memory. It is generally assumed in high speed processing systems that a cache memory is required to achieve single cycle memory access by the processor. However, as previously noted, the use of a cache significantly increases the complexity and cost of the data processing system. As will be described in this Specification, the present invention provides a memory system which obviates the need for a cache, yet provides high-speed access to localized areas within main memory. It will be appreciated by one skilled in the art that if data is appropriately arranged within DRAM 10 (Figure 1), any data element disposed in a previously enabled row may be accessed within a single clock cycle of the system. Accordingly, by properly storing data within the memory array, a localized subset of the main memory may be effectively used as a cache by the processor.
Referring once again to Figure 1, a processor (not shown) provides a row address 12 and a column address 14 to access desired data stored within DRAM 10. The column address 14 is applied to a multiplexer 16, which, as will be described, selectively applies either the row or column address to DRAM 10. Row address 12 is applied to a latch 18 and a comparator 20. A cycle control circuit 22 is coupled to latch 18, comparator 20, as well as multiplexer 16, and provides both the RAS and CAS signals to DRAM 10 to access data. Upon application of an initial row address 12 and column address 14 to the memory system disclosed in Figure 1, comparator 20 compares the current row address 12 (RA(i)) with the previous row address (RA(i-1)) which had been used to access data on the previous full RAS, CAS memory cycle. If the row address stored within the latch does not match the current row address supplied by the processor, cycle control 22 clocks latch 18, such that the current row address is then stored within the latch, and signals multiplexer 16 to apply the current row address 12 to DRAM 10. Cycle control 22 further issues a RAS signal to DRAM 10 such that the desired row within the memory array is enabled. Cycle control 22 then applies column address 14 through multiplexer 16 to the DRAM 10, and issues an appropriate CAS signal to the DRAM which identifies the particular data to be accessed in the array. The DRAM 10 then provides the selected data to the processor, typically over a data bus (not shown).
In the event that the current row address RA(i) is the same as the previous row address RA(i-1) stored in latch 18, cycle control 22 applies the current column address 14 to DRAM 10 and issues a CAS signal. As previously discussed, once a particular row within DRAM 10 is enabled, access to a data element within that row may be done very quickly. In the embodiment illustrated in Figure 1, it will be appreciated that it is only necessary to apply the column address 14 to DRAM 10 to access data stored within a row enabled on a previous data cycle. Accordingly, in the case where the current row address is the same as the immediately preceding row address, it is not necessary to reapply the current row address to the DRAM and provide an additional RAS signal; rather, it is only necessary to apply the current column address and apply the CAS signal to access the memory. In a typical application, the use of the present invention permits access to data stored within DRAM 10 in approximately 45 nanoseconds instead of the typical 200 nanoseconds.
As will be appreciated by one skilled in the art, in a working memory system multiple banks of memory must be utilized rather than a single dynamic RAM. In order to render the system outlined in Figure 1 practical, it is necessary to arrange data within multiple banks of memory, such that the probability of multiple sequential accesses to the same row within a memory bank is significantly increased. If frequently used data may be localized within the memory in this manner, main memory may be utilized in place of a much more expensive and complex cache system.
Referring now to Figure 2, the present invention will be illustrated and described with reference to an embodiment having two memory banks, DRAM bank "A" and DRAM bank "B", in a data processing system utilizing virtual memory techniques. For purposes of this Specification, and in order to avoid unnecessary complication, only those address, control and data lines necessary to illustrate the present invention have been disclosed in the Figures. However, it will be apparent to one skilled in the art that other lines, devices, and support circuitry may be required in any particular application.
A processor 30 communicates with the memory system of the present invention along a virtual address bus 32. Virtual address bus 32 includes a plurality of individual lines which carry addresses in the form of multi-bit words. Typically, processors may use address buses between 16 and 32 bits wide, in which the bits are transmitted in parallel along individual lines or wires which form the address bus.
In the present invention, addresses transmitted over virtual address bus 32 are structured such that low order bits form a direct (real) address segment for the DRAM memory, with the low order bits being separated from the virtual address bus 32 along line 34. As will be described more fully below, the low order (real) address bits comprise the column address for desired data stored within DRAM bank A and DRAM bank B. High order address bits form a virtual address segment corresponding to the row address for data stored within the main memory DRAM banks A and B. (See, for example, U.S. Patents Nos. 4,527,232 and 4,550,368, assigned to the Assignee of the present application.)
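The address split described above can be illustrated as a simple bit-slice (the 10-bit column width is an assumed example for illustration; the patent does not fix the field sizes):

```python
COLUMN_BITS = 10  # assumed width of the low order (real) segment

def split_address(vaddr: int):
    """Split a virtual address into (virtual row, real column)."""
    column = vaddr & ((1 << COLUMN_BITS) - 1)  # real; needs no translation
    virtual_row = vaddr >> COLUMN_BITS         # translated by the MMU
    return virtual_row, column

row, col = split_address(0x12345)
print(hex(row), hex(col))  # 0x48 0x345
```

Only the high order segment passes through the MMU; the column bits go directly to the memory, which is what allows the column access to begin without waiting for translation.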
The high order bits comprising the virtual address segment are retained on the virtual address bus 32. As illustrated, the high order virtual bits are applied to a memory management unit (MMU) 38 for translation into a high order physical address. In the present embodiment, MMU 38 is a relatively high speed random access memory unit, such as a fast static RAM. The MMU 38 takes as an input the high order virtual address supplied by processor 30 and provides as an output a translated physical (real) address along line 40, which specifies the actual physical row address of the desired data in DRAM banks A and B. MMU 38 may take the form of the memory management unit described in U.S. Patent No. 4,527,232, assigned to Sun Microsystems, Inc., which issued July 2, 1985.
A latch 42 is coupled to virtual address bus 32, and a comparator 43 is coupled to both the virtual address bus 32 as well as to the latch 42. Similarly, a latch 44 is coupled to virtual address bus 32 and is in turn coupled to a comparator 48, such that the comparator 48 compares the contents of latch 44 with the current virtual address supplied over virtual address bus 32 from processor 30. As illustrated, the outputs of comparators 43 and 48 are passed through an OR gate 50 and are coupled to a cycle control circuit 55. The result of the OR operation between the outputs of comparators 43 and 48 is similarly coupled to cycle control 55. Cycle control circuit 55, as will be described more fully below, controls the clocking of latches 42 and 44 and the selection of multiplexer 60, as well as provides appropriate RAS and CAS signals to DRAM banks A and B.
MMU 38 provides the lowest translated physical bits of the virtual address supplied by processor 30 to the chip enable (CE) port of latches 42 and 44 over line 62.
The state of the lowest translated physical bits (in the present example, for two memory banks, only the lowest translated physical bit is required) may be used to select either latch 42 or latch 44. For example, if the lowest translated physical bit is a logical 0, this bit would be applied to latch 44 (but since it is a 0 it would not enable latch 44) and would be inverted by an inverter 63 to a logical 1, which would be applied to latch 42, thereby enabling latch 42. Conversely, latch 44 would be enabled in the event the lowest translated physical bit of the virtual address is a logical 1. As illustrated in Figure 2, the lowest translated physical bit is also supplied to cycle control 55, and this bit (or bits, in the case of more than two memory banks) identifies to cycle control 55 which DRAM bank data is to be accessed from.
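For the two-bank case, the chip-enable logic reduces to a single bit and its inverse, as the paragraph above describes (a sketch; the function name is an illustrative assumption):

```python
# Latch selection for two banks: the lowest translated physical bit
# drives latch 44 directly and latch 42 through inverter 63.
def enabled_latch(lowest_physical_bit: int) -> str:
    if lowest_physical_bit == 0:
        return "latch 42"  # 0 is inverted to 1 by inverter 63
    return "latch 44"      # 1 is applied directly

print(enabled_latch(0))  # latch 42
print(enabled_latch(1))  # latch 44
```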
Assume, for sake of example, that the processor 30 provides a virtual address over virtual address bus 32 which corresponds to data disposed in DRAM bank A. As previously described, the low order bits corresponding to the column address within the bank comprise a real address segment, and are separated and passed over line 34 to multiplexer 60. The virtual segment of the address corresponding to the row address within DRAM bank A is applied to virtual address bus 32, and is immediately translated by MMU 38 such that the translated high order bits are coupled along line 40 to multiplexer 60. In addition, the lowest translated physical bit is coupled over line 62 to latches 42 and 44. For purposes of the present example, it is assumed that if the lowest translated physical bit is a logical 0, latch 42 is enabled, since the logical 0 will be inverted to a logical 1 by inverter 63. The virtual address is concurrently applied to enabled latch 42 as well as to comparator 43. It will be noted that although the current virtual row address is applied to latch 42, latch 42 contains the preceding virtual row address, and will continue to contain the preceding virtual row address until the latch is clocked.
Comparator 43 compares the current virtual row address to the preceding virtual row address stored in latch 42. The outputs of comparators 43 and 48 are coupled to OR gate 50 and cycle control 55. In the event that either comparator 43 or comparator 48 identifies a match between the current virtual row address and the contents of the latches, OR gate 50 will indicate that a match exists and provide a signal over line 65 to cycle control 55. The output of each comparator is also coupled directly to cycle control 55 to identify which latch/comparator circuit identified the match. In the current example, where a match exists between the current virtual row address and the contents of latch 42, cycle control 55 signals multiplexer 60 to couple the real address segment comprising the column address to physical address bus 70 and DRAM bank A. Cycle control 55 also provides the necessary column address signal (CAS A). DRAM bank A then provides the desired data to a data bus (not shown) within the access time of the static column dynamic RAM (approximately 45 nanoseconds). Accordingly, in the case where a match exists, the present invention neither reapplies the virtual row address nor requires an additional RAS signal to access desired data. It will be noted by one skilled in the art that typically CAS signal lines remain in a high state and are applied in a low condition, and RAS lines are typically in a low state for purposes of data access. Moreover, reference is made to Figure 3, which sets forth the sequence of operations of the present invention as described herein.
In the event that there is no match between the current virtual row address and the previous row address stored in latch 42, cycle control 55 signals multiplexer 60 to couple the translated (now real) row address on line 40 to memory bank A along physical address bus 70. Concurrently, cycle control 55 issues a RAS high signal to bank A and provides a clock signal along clock lines 72 which clocks the enabled latch, namely latch 42, such that the current virtual row address is stored therein. The RAS line is then brought to a low condition, and multiplexer 60 then applies the real address segment defining the column address to bank A and applies a low CAS signal.
After completing this full memory cycle access, including the updating of latch 42 and the application of the row and column addresses along with their respective strobes, data is provided by memory bank A to the processor along the data bus (not shown). It will be appreciated by one skilled in the art that, in the event a full memory access cycle is required, the time for the present invention to obtain the desired data is generally the same as that in other computer systems using standard memory access techniques to acquire data in a computer main memory. However, in those instances where a match occurs between the current virtual row address and the previously stored virtual row address, significant advantages in memory access speed are achieved by the present invention.
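The complete Figure 2 flow for two banks may be sketched end to end (the MMU is modelled as a dictionary, the example translations are invented for illustration, and none of the names are taken from the patent):

```python
# End-to-end sketch: MMU translation, bank/latch selection by the
# lowest translated physical bit, and per-bank row comparison.
mmu = {0x10: 0x2, 0x11: 0x3, 0x12: 0x4}  # virtual row -> physical row (example)
latches = {0: None, 1: None}             # previous virtual row per bank

def access(virtual_row, column):
    """Return (bank, strobes issued) for one memory access."""
    physical_row = mmu[virtual_row]
    bank = physical_row & 1              # lowest translated physical bit
    if latches[bank] == virtual_row:     # comparator match: column only
        return bank, ["CAS"]
    latches[bank] = virtual_row          # clock the enabled latch
    return bank, ["RAS", "CAS"]          # full memory cycle

print(access(0x10, 5))  # (0, ['RAS', 'CAS'])  miss in bank 0
print(access(0x10, 6))  # (0, ['CAS'])         same row: fast path
print(access(0x11, 2))  # (1, ['RAS', 'CAS'])  miss in bank 1
print(access(0x10, 7))  # (0, ['CAS'])         bank 0 row still latched
```

Note that, as in the patent, the latches hold and compare virtual row addresses, while the bank is chosen from the translated physical address.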
It will be further appreciated that the use of the present invention as illustrated in Figure 2 permits the computer system to allocate memory banks for certain types of data based on the translated virtual row address, and particularly the lowest translated physical row address bits. Generally, it has been found that in most computer systems approximately 50% of processor memory fetches are for instructions, and approximately 50% of the memory fetches are for data. In current generation RISC processors, approximately 20% of the memory fetch operations are to obtain data and approximately 80% of the fetch operations are to obtain instructions. Accordingly, the use of the present invention permits an effective "cache" to be defined in one or more memory banks, such that localized subsets of memory may be accessed very quickly. In other words, the computer system's operating system may keep track of the data and instructions stored in memory and place data in, for example, even numbered memory banks and instructions in odd numbered memory banks, or vice versa. Accordingly, the probability of sequential memory accesses by the processor to any DRAM bank having identical virtual row addresses may be significantly increased. The use of such a memory system architecture in conjunction with the hardware of the present invention, as illustrated in Figure 2, obviates in many cases the need for a separate cache based system. In addition, it will be noted by one skilled in the art that although Figure 2 illustrates two banks of dynamic RAM memory, the embodiment illustrated may be expanded to a plurality of banks, each bank having a corresponding latch and comparator circuit, wherein the number of lowest translated physical bits used to enable specific latches is a function of the number of dynamic RAM banks utilized. Although the present invention has been described with reference to Figures 1-3, it will be appreciated that the Figures are for illustration only, and that the present invention may be used in numerous other memory applications.
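The benefit of segregating instructions and data by bank can be shown with a small row-hit simulation (a sketch under an assumed access pattern; the function name and the example streams are illustrative, not from the patent):

```python
def simulate(accesses, num_banks):
    """Count row hits for a stream of (bank, row) accesses."""
    open_row = {b: None for b in range(num_banks)}
    hits = 0
    for bank, row in accesses:
        if open_row[bank] == row:
            hits += 1          # same row as the last access to this bank
        open_row[bank] = row
    return hits

# Alternating instruction (row 7) and data (row 3) fetches:
one_bank = [(0, 7), (0, 3)] * 4    # both streams share a single bank
two_banks = [(1, 7), (0, 3)] * 4   # instructions odd bank, data even bank
print(simulate(one_bank, 1))   # 0  every access reopens a row
print(simulate(two_banks, 2))  # 6  each stream keeps its row enabled
```

With the streams interleaved in one bank the two rows evict each other on every access, while with the allocation policy each bank's open row survives between fetches, which is precisely the locality effect the text relies on.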
Claims (15)
1. A memory access system comprising:
memory means for storing data, said memory means including N banks of memory arrays, each of said memory arrays comprising a dynamic random access memory having static column access, wherein data is stored at a plurality of locations, each of said locations specified by a real row address and a column address and wherein said data at said specified location is accessed after receipt of said real row address followed by a row address strobe (RAS) signal and said column address followed by a column address strobe (CAS) signal;
address generation means for generating a virtual row address and said column address corresponding to a desired data location in one of said memory arrays, said address generation means comprising a processor;
latch means electrically connected to said address generation means, said latch means including N
latches, one for each of said N banks of memory arrays, for receiving said virtual row address and storing said virtual row address upon receipt of a clock signal;
comparator means electrically connected to said latch means and said address generation means, said comparator means including N comparators, one for each of said N banks of memory arrays, for comparing a current virtual row address from said address generation means to a preceding virtual row address currently stored in said latch means and for providing an output indicating the
results of said comparison, said comparator means further including an OR gate for receiving the outputs of said comparators and providing an output;
a memory management unit (MMU) for translating said virtual row address into a real row address;
cycle control means electrically connected to said output of said OR gate, said latch means, said MMU, and said memory means for receiving said comparator output and for providing said real row and column addresses and said RAS and CAS signals to said one memory array, wherein said real row address and said RAS signal are provided to said one memory array only if said comparator output indicates that said current virtual row address is not the same as said preceding virtual row address, and wherein only said column address and said CAS signal are provided to said one memory array when said comparator output indicates that said current virtual row address is the same as said preceding virtual row address;
said cycle control means including clock means for providing said clock signal when said comparator output indicates that said current virtual row address is not the same as said preceding virtual row address.
2. The memory access system as defined by claim 1, wherein the outputs of said N comparators are electrically connected to said cycle control means.
3. The memory access system as defined by claim 2, wherein Y lowest translated physical bits from said MMU
are electrically connected to said N latches such that one of said latches is enabled as a function of the value of said Y bits.
4. The memory access system as defined by claim 3, wherein data in said N banks of memory arrays are organized such that certain of said banks store a higher percentage of instructions than data compared to other of said banks.
5. The memory access system as defined by claim 4, wherein said data is organized to increase the probability of consecutive memory accesses to the same row address as previous accesses.
6. The memory access system as defined by claim 5, wherein said data organization is controlled by an operating system comprising instructions executed by said processor.
7. The memory access system as defined by claim 6, wherein said column addresses supplied by said address generation means are real addresses.
8. The memory access system as defined by claim 7, wherein said translated real row address and said real column address are selectively coupled to one of said memory arrays by a multiplexer upon command of said cycle control means.
9. An improved method for accessing a computer memory including N banks of memory arrays, each of said memory array comprising a dynamic random access memory having static column access, wherein data is stored in at least one of said memory arrays at a plurality of locations, each of said locations specified by a real row address and a column address and wherein said data at said specified location is accessed after receipt of said real row address followed by a row address strobe (RAS) signal and said column address followed by a column address strobe (CAS) signal, comprising the steps of:
storing a virtual row address in latch means upon receipt of a clock signal, said latch means including N
latches, one for each of said N banks of memory arrays;
generating, in a processor, a current virtual row address and a current column address corresponding to desired data in said one memory array;
comparing said current virtual row address to said virtual row address stored in said latch means using N comparators, one for each of said N banks of memory arrays, outputs of said N comparators being input to an OR
gate;
translating, in a memory management unit (MMU), said current virtual row address into a current real row address;
providing said current real row address, said current column address and said RAS and CAS signals to said one memory array as a function of said comparison such that said current real row address and said RAS
signal are provided to said one memory array only if said current virtual row address is not the same as said virtual row address stored in said latch means and such that only said column address and said CAS signal are provided if said current virtual row address is the same as said virtual row address stored in said latch means;
providing said clock signal to store said current virtual row address in said latch means when said current virtual row address is not the same as said virtual row address stored in said latch means.
10. The method as defined by claim 9, wherein the outputs of said N comparators are coupled to said cycle control means.
11. The method as defined by claim 10, wherein Y
lowest translated physical bits from said MMU are coupled to said latches such that the value of said Y bits corresponds to at least one of said latches and memory banks, thereby enabling said corresponding latch.
12. The method as defined by claim 11, wherein data in said N banks of memory arrays are organized such that certain of said banks store a higher percentage of instructions than data compared to other of said banks.
13. The method as defined by claim 12, wherein said data is organized to increase the probability of
consecutive memory accesses to the same row address as previous accesses.
14. The method as defined by claim 13, wherein said data organization is controlled by an operating system comprising instructions executed by said processor.
15. The method as defined by claim 14, wherein said generated column address is a real address.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US06/890,075 US4803621A (en) | 1986-07-24 | 1986-07-24 | Memory access system |
US890,075 | 1986-07-24 |
Publications (1)
Publication Number | Publication Date |
---|---|
CA1278875C true CA1278875C (en) | 1991-01-08 |
Family
ID=25396226
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA000538832A Expired - Fee Related CA1278875C (en) | 1986-07-24 | 1987-06-04 | Memory access system |
Country Status (8)
Country | Link |
---|---|
US (1) | US4803621A (en) |
JP (1) | JPH083802B2 (en) |
CA (1) | CA1278875C (en) |
DE (1) | DE3724317A1 (en) |
FR (1) | FR2602070B1 (en) |
GB (1) | GB2193017B (en) |
HK (1) | HK27491A (en) |
SG (1) | SG8891G (en) |
US6138126A (en) * | 1995-05-31 | 2000-10-24 | Network Appliance, Inc. | Method for allocating files in a file system integrated with a raid disk sub-system |
US7174352B2 (en) | 1993-06-03 | 2007-02-06 | Network Appliance, Inc. | File system image transfer |
WO1994029796A1 (en) * | 1993-06-03 | 1994-12-22 | Network Appliance Corporation | A method for allocating files in a file system integrated with a raid disk sub-system |
EP0701715A4 (en) * | 1993-06-04 | 1999-11-17 | Network Appliance Corp | A method for providing parity in a raid sub-system using a non-volatile memory |
US5813028A (en) * | 1993-10-12 | 1998-09-22 | Texas Instruments Incorporated | Cache read miss request invalidation prevention method |
JPH07281948A (en) * | 1994-04-06 | 1995-10-27 | Mitsubishi Electric Corp | Memory controller |
US5594913A (en) * | 1994-09-16 | 1997-01-14 | Philips Electronics North America Corporation | High speed memory access system for a microcontroller with directly driven low order address bits |
US5809539A (en) * | 1995-04-27 | 1998-09-15 | Hitachi, Ltd. | Processor system having address allocation and address lock capability adapted for a memory comprised of synchronous DRAMs |
JP3739797B2 (en) * | 1995-10-06 | 2006-01-25 | パトリオット サイエンティフィック コーポレイション | Reduced instruction set computer microprocessor structure |
US6470405B2 (en) * | 1995-10-19 | 2002-10-22 | Rambus Inc. | Protocol for communication with dynamic memory |
US5761694A (en) * | 1995-11-30 | 1998-06-02 | Cirrus Logic, Inc. | Multi-bank memory system and method having addresses switched between the row and column decoders in different banks |
US5815673A (en) * | 1996-03-01 | 1998-09-29 | Samsung Electronics Co., Ltd. | Method and apparatus for reducing latency time on an interface by overlapping transmitted packets |
US6266379B1 (en) | 1997-06-20 | 2001-07-24 | Massachusetts Institute Of Technology | Digital transmitter with equalization |
US6457130B2 (en) | 1998-03-03 | 2002-09-24 | Network Appliance, Inc. | File access control in a multi-protocol file server |
US6317844B1 (en) | 1998-03-10 | 2001-11-13 | Network Appliance, Inc. | File server storage arrangement |
US6535218B1 (en) | 1998-05-21 | 2003-03-18 | Mitsubishi Electric & Electronics Usa, Inc. | Frame buffer memory for graphic processing |
US6559851B1 (en) | 1998-05-21 | 2003-05-06 | Mitsubishi Electric & Electronics Usa, Inc. | Methods for semiconductor systems for graphics processing |
US6504550B1 (en) | 1998-05-21 | 2003-01-07 | Mitsubishi Electric & Electronics Usa, Inc. | System for graphics processing employing semiconductor device |
US6661421B1 (en) | 1998-05-21 | 2003-12-09 | Mitsubishi Electric & Electronics Usa, Inc. | Methods for operation of semiconductor memory |
US6343984B1 (en) * | 1998-11-30 | 2002-02-05 | Network Appliance, Inc. | Laminar flow duct cooling system |
ATE390788T1 (en) * | 1999-10-14 | 2008-04-15 | Bluearc Uk Ltd | Apparatus and method for hardware execution or hardware acceleration of operating system functions |
US6970985B2 (en) | 2002-07-09 | 2005-11-29 | Bluerisc Inc. | Statically speculative memory accessing |
US8041735B1 (en) | 2002-11-01 | 2011-10-18 | Bluearc Uk Limited | Distributed file system and method |
US7457822B1 (en) | 2002-11-01 | 2008-11-25 | Bluearc Uk Limited | Apparatus and method for hardware-based file system |
US20050114850A1 (en) | 2003-10-29 | 2005-05-26 | Saurabh Chheda | Energy-focused re-compilation of executables and hardware mechanisms based on compiler-architecture interaction and compiler-inserted control |
US7996671B2 (en) | 2003-11-17 | 2011-08-09 | Bluerisc Inc. | Security of program executables and microprocessors based on compiler-architecture interaction |
US8607209B2 (en) | 2004-02-04 | 2013-12-10 | Bluerisc Inc. | Energy-focused compiler-assisted branch prediction |
US20060129740A1 (en) * | 2004-12-13 | 2006-06-15 | Hermann Ruckerbauer | Memory device, memory controller and method for operating the same |
ATE407397T1 (en) | 2005-04-29 | 2008-09-15 | Network Appliance Inc | System and method for generating uniform images of a set of data objects |
US20070294181A1 (en) * | 2006-05-22 | 2007-12-20 | Saurabh Chheda | Flexible digital rights management with secure snippets |
US20080126766A1 (en) | 2006-11-03 | 2008-05-29 | Saurabh Chheda | Securing microprocessors against information leakage and physical tampering |
US20080154379A1 (en) * | 2006-12-22 | 2008-06-26 | Musculoskeletal Transplant Foundation | Interbody fusion hybrid graft |
KR100924303B1 (en) | 2008-02-22 | 2009-11-02 | 인하대학교 산학협력단 | Method and Apparatus For Monitoring Memory Address |
TWI428918B (en) * | 2009-09-29 | 2014-03-01 | Silicon Motion Inc | Memory device and data access method for a memory device |
US8694755B1 (en) * | 2010-03-17 | 2014-04-08 | Ambarella, Inc. | Virtual memory management for real-time embedded devices |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3394354A (en) * | 1965-11-30 | 1968-07-23 | Ibm | Multiple word random access memory |
US3413613A (en) * | 1966-06-17 | 1968-11-26 | Gen Electric | Reconfigurable data processing system |
US4156905A (en) * | 1974-02-28 | 1979-05-29 | Ncr Corporation | Method and apparatus for improving access speed in a random access memory |
IT1052771B (en) * | 1975-12-31 | 1981-07-20 | Olivetti C E C S P A | Memory addressing device |
US4081701A (en) * | 1976-06-01 | 1978-03-28 | Texas Instruments Incorporated | High speed sense amplifier for MOS random access memory |
US4106109A (en) * | 1977-02-01 | 1978-08-08 | Ncr Corporation | Random access memory system providing high-speed digital data output |
US4368515A (en) * | 1981-05-07 | 1983-01-11 | Atari, Inc. | Bank switchable memory system |
US4527232A (en) * | 1982-07-02 | 1985-07-02 | Sun Microsystems, Inc. | High-speed memory and memory management system |
US4550368A (en) * | 1982-07-02 | 1985-10-29 | Sun Microsystems, Inc. | High-speed memory and memory management system |
US4725945A (en) * | 1984-09-18 | 1988-02-16 | International Business Machines Corp. | Distributed cache in dynamic rams |
EP0182501A3 (en) * | 1984-11-20 | 1988-01-20 | Tektronix, Inc. | Memory mapping method and apparatus |
- 1986
  - 1986-07-24 US US06/890,075 patent/US4803621A/en not_active Expired - Lifetime
- 1987
  - 1987-05-27 GB GB8712443A patent/GB2193017B/en not_active Expired - Fee Related
  - 1987-06-04 CA CA000538832A patent/CA1278875C/en not_active Expired - Fee Related
  - 1987-07-21 JP JP62180185A patent/JPH083802B2/en not_active Expired - Fee Related
  - 1987-07-22 DE DE19873724317 patent/DE3724317A1/en active Granted
  - 1987-07-23 FR FR8710472A patent/FR2602070B1/en not_active Expired - Fee Related
- 1991
  - 1991-02-19 SG SG88/91A patent/SG8891G/en unknown
  - 1991-04-11 HK HK274/91A patent/HK27491A/en not_active IP Right Cessation
Also Published As
Publication number | Publication date |
---|---|
JPS6334652A (en) | 1988-02-15 |
US4803621A (en) | 1989-02-07 |
FR2602070A1 (en) | 1988-01-29 |
FR2602070B1 (en) | 1994-04-01 |
JPH083802B2 (en) | 1996-01-17 |
DE3724317C2 (en) | 1991-08-14 |
GB2193017A (en) | 1988-01-27 |
SG8891G (en) | 1991-06-21 |
DE3724317A1 (en) | 1988-01-28 |
HK27491A (en) | 1991-04-19 |
GB8712443D0 (en) | 1987-07-01 |
GB2193017B (en) | 1990-08-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CA1278875C (en) | Memory access system | |
US5226147A (en) | Semiconductor memory device for simple cache system | |
CA2011518C (en) | Distributed cache dram chip and control method | |
US5519664A (en) | Dynamic random access memory persistent page implemented as processor register sets | |
US5111386A (en) | Cache contained type semiconductor memory device and operating method therefor | |
US5752260A (en) | High-speed, multiple-port, interleaved cache with arbitration of multiple access addresses | |
US6167486A (en) | Parallel access virtual channel memory system with cacheable channels | |
US5329489A (en) | DRAM having exclusively enabled column buffer blocks | |
US5493535A (en) | Memory addressing method and apparatus therefor | |
JPH11265315A (en) | High-speed and compact address bit route specifying mechanism for supporting various dram bank sizes and plural interleaving mechanisms | |
US4930106A (en) | Dual cache RAM for rapid invalidation | |
JPH04233050A (en) | Cache-memory exchanging protcol | |
US5291580A (en) | High performance burst read data transfer operation | |
US8533430B2 (en) | Memory hashing for stride access | |
US6081479A (en) | Hierarchical prefetch for semiconductor memories | |
US5737564A (en) | Cache memory system having multiple caches with each cache mapped to a different area of main memory to avoid memory contention and to lessen the number of cache snoops | |
US4796222A (en) | Memory structure for nonsequential storage of block bytes in multi-bit chips | |
US5566318A (en) | Circuit with a single address register that augments a memory controller by enabling cache reads and page-mode writes | |
JP2838210B2 (en) | Bus master | |
EP0535701A1 (en) | Architecture and method for combining static cache memory and dynamic main memory on the same chip (CDRAM) | |
KR100571435B1 (en) | Synchronous dynamic random access memory architecture for sequential burst mode | |
US6154414A (en) | Semiconductor memory device having a plurality of memory blocks | |
US6292867B1 (en) | Data processing system | |
KR100417548B1 (en) | Integrated cache memory and how to provide data to memory devices in digital memory | |
JPH06282983A (en) | Method for access to data in memory, memory system and memory control system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
MKLA | Lapsed |