WO1994028550A1 - Dynamic random access memory system - Google Patents

Dynamic random access memory system

Info

Publication number
WO1994028550A1
Authority
WO
WIPO (PCT)
Application number
PCT/US1994/005798
Other languages
French (fr)
Inventor
Frederick A. Ware
John B. Dillon
Richard M. Barth
Billy Wayne Garrett, Jr.
John Girdner Atwood, Jr.
Michael P. Farmwald
Original Assignee
Rambus, Inc.
Application filed by Rambus, Inc.
Priority to JP50087995A (patent JP4077874B2)
Priority to AU70434/94A (patent AU7043494A)
Publication of WO1994028550A1

Classifications

    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11C: STATIC STORES
    • G11C5/00: Details of stores covered by group G11C11/00
    • G11C5/06: Arrangements for interconnecting storage elements electrically, e.g. by wiring
    • G11C5/066: Means for reducing external access-lines for a semiconductor memory chip, e.g. by multiplexing at least address and data signals


Abstract

As interfaces to DRAMs become more advanced and higher performance, the interfaces and signal lines required to support the interface become more expensive to implement. Therefore, it is desirable to minimize the number of signal lines and maximize the bandwidth of the signal lines interfacing to the DRAM in order to take advantage of the high performance of the signal lines in the interface. In the DRAM memory system of the present invention, the address and control lines are combined and the information multiplexed such that the DRAM pins have roughly equal information rates at all times.

Description

DYNAMIC RANDOM ACCESS MEMORY SYSTEM
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to dynamic random access memory (DRAM) and structure and methods for accessing the same. More particularly, the present invention relates to reducing the number of lines required to access DRAM.
2. Art Background
Dynamic random access memory (DRAM) components provide an inexpensive solid-state storage technology for today's computer systems. Digital information is maintained in the array in the form of a charge stored on a two-dimensional array of capacitors. Accessing the array is a two-step process. First, a row address is provided and held in a latch. This row address selects one of the rows of the DRAM by selecting a corresponding word line. The other word lines are deselected. If a read operation to the array is to be performed, a sense operation is performed in which the contents of the row of capacitors are detected through the selected row of transistors by the column amplifiers. If a write operation is to be performed, a restore operation is performed in which the contents of the column amplifiers are written to the row of capacitors of the array selected through the selected row of transistors. The sense operation is destructive, requiring the row of capacitors to be subsequently recharged with a restore operation. Because the column amplifiers are latching, their contents are not destroyed when they are restored to the selected row of capacitors.
Fig. 2 illustrates a prior art memory system including DRAM with the corresponding control, address and data wires which connect the DRAM to the processor or memory controller component. In one type of DRAM, an asynchronous (unclocked) interface is utilized in which the internal latches are loaded with the control signals. Today, synchronous DRAMs are typically used in which the interface contains internal latches and registers which employ an externally supplied clock source as the time reference. This permits the DRAM to transmit and receive information at a higher rate.
A write access is initiated by transmitting a row address on the address wires and by transmitting the sense control signal (RAS). This causes the desired row to be sensed by the column amplifiers at a time tRCD later. The column address is transmitted on the address wires and the write control signal (CAS) is transmitted along with the first word of the write data WData(a,1). The data word is then received by the DRAM and written into the column amplifiers at the specified column address. This step can be repeated "n" times in the currently sensed row before a new row is sensed. Before a new row is sensed, the old row must be restored back to the memory core and the bit lines of the DRAM precharged. Typically, there are two methods to achieve this in the DRAM. In a DRAM with a non-pulsed word line, every write operation causes the sensed row to be restored to the memory array. Thus, only a precharge is performed prior to the next sense operation. In a DRAM with a pulsed word line, the restore operation is done once just prior to the next precharge/sense operation.
Fig. 3 illustrates synchronous write timing when the size of the transmit/receive word, "tr" bits, equals the size of the read/write word, "rw" bits. In the figure, a, b, ... represent row addresses; 1, 2, ..., n represent column addresses; WData[row, col] represents the DRAM address of a data word (rw bits); SENSE(RAS) is the control signal for initiating a sense operation; and WRITE(CAS) and READ(CAS) initiate the write and read operations, respectively, on the column amplifiers. In the present example, the row-column address delay timing parameter tRCD is equal to two clock cycles. After the row address is asserted at the first clock cycle, column addresses and write data are asserted after the tRCD delay to write the data into the DRAM array.
A read access is initiated by the processor transmitting a row address on the address wires and by transmitting the sense control signal (RAS). This causes the desired row to be sensed by the column amplifiers. At a time tRCD later, the column address is transmitted on the address wire and the read control signal (CAS) is transmitted. At a time tCAA later, the first word of the read data RData(a,1) is transmitted by the DRAM and received by the processor. This step can be repeated "n" times in the currently sensed row before a new row is sensed. Before a new row is sensed, the old row must be restored back to the memory array.
The read timing is illustrated by the timing diagram of Fig. 4. It should be noted that tCAA is the "column address access" timing parameter for the DRAM. This parameter specifies the delay between the issuance of the column address and the access to read data and represents the only real difference between read and write accesses.
It has been recognized that because of the length of time needed to perform a sense operation, it is not necessary for the row and column addresses to be transmitted on the address bus simultaneously. Therefore, it is possible for the DRAM to use one set of inputs to receive first the row address followed by the column addresses. This is typically done on asynchronous DRAMs and some types of synchronous DRAMs. Therefore, most DRAMs have approximately the same number of rows per array as column bits "sr" per row (wherein sr approximately equals b^0.5, and b is the number of bits in the array); for example, a 4-megabit array (b = 2^22) has roughly 2^11 = 2048 rows of 2048 bits each. This maintains the number of row and column address signal lines to be roughly the same.
One trend in DRAM technology is to increase the rate at which information is transmitted and received. This rate has been increasing in both absolute terms and in relative terms, when compared to the rate at which sense/restore operations and read/write accesses can be performed. Fig. 5 illustrates synchronous write timing, for f=2, when a read or write access takes twice as long as transmitting or receiving a word of data. Thus, in the time it takes to do a read or write access of "rw" bits, "f" words of "tr" bits each may be transmitted or received. In the figure, y, z denote subfields, tr bits in width, of a data word rw bits in width. In addition, tCycle represents the time during which tr bits are transmitted/received at the DRAM input/output pins. The tRead/Write timing parameter is the time to read/write rw bits to/from the column amplifiers, and tRCD is the time to sense a row and place it in the column amplifiers.
SUMMARY OF THE INVENTION
It is an object of the present invention to minimize the number of address and control pins and signal lines required to access a DRAM while maximizing their usage such that all DRAM pins have approximately equal information rates at all times.
As interfaces to DRAMs become more advanced and higher performance, the interfaces and signal lines required to support the interface become more expensive to implement. Therefore, it is desirable to minimize the number of signal lines and maximize the bandwidth of the signal lines interfacing to the DRAM in order to take advantage of the high performance of the signal lines in the interface. In the DRAM memory system of the present invention, the address and control lines are combined and the information multiplexed such that the DRAM pins have roughly equal information rates at all times. In particular, the number of column address bits which need to be received in every clock cycle in order to meet the requirements can be determined from the following equation: cap = ceiling(ca/f), where
• cap represents the number of address bits received in every clock cycle;
• ceiling represents a function returning an integer value greater than or equal to its argument;
• ca represents the number of column address bits used every read/write cycle; and
• f represents the ratio rw/tr.
The savings are further realized by the multiplexing of column address bits as the "f" parameter increases in future DRAM technology. Further advantages can be gained by multiplexing the row address with the column address on a single set of wires to a single set of pins in the DRAM. In an alternate embodiment, the data pins of the DRAM are utilized to transmit the row address as the data pins of the DRAM are not in use when the row address is received. Furthermore, the control signals needed to specify a sense or restore operation or a read or write access can also be multiplexed onto the data pins before the time the DRAM needs to receive or transmit data. Thus, for example, in a 16 megabit DRAM, a total of 11 wires connecting to 11 pins of the DRAM are utilized: BusData[8:0] for data, control, row address information and some column address information, BusEnable for the column address, and BusCtrl for specifying whether data or address information is present on the data wires.
In an alternate embodiment, the latency incurred during write operations is programmable to set the latency required for read operation to the latency of a write operation. In this manner, every clock cycle of the data bus can be used for a transfer by interleaving accesses regardless of the mix of read and write accesses. In addition, in a DRAM with pulsed word lines, three possible states for the column amplifiers are designated, each state having a different set of operations that must be performed in order to sense a new row. The DRAM includes a dirty flag which is set whenever a write access is made to the column amplifiers. The flag is cleared when the column amplifiers are written into the selected row in the memory array by a restore operation. The present invention permits the state of the DRAM, at the time operations with respect to a row are complete, to be left in one of the three states. The state is selected by the control inputs when a read or write access command is specified. If the column amplifiers are dirty after the access has completed, then the column amplifiers may be left in a dirty state. Alternately, a restore operation can leave the column amplifiers in a clean state, and a restore/precharge operation can leave the column amplifiers in a precharge state.
Similarly, if the column amplifiers are clean after access to a row has completed, the amplifiers may be left in a clean state or a precharge operation may be performed to leave the column amplifiers in a precharge state. Although it is generally better to perform as many of these operations as possible at the end of a row access to minimize the amount of time required to perform a sense operation to a new row, in some situations, it may be preferable to address time critical operations and incur the latency at the time a new row is sensed. However, the use of these three states provides the flexibility to reduce the latency of accesses to the new row. If the old row is dirty, a restore/precharge/sense operation must be performed before a read/write access to a different row can be started. If the old row is clean, only a precharge/sense operation must be performed before a read/write access to a different row can be started, and it follows that if the old row is precharged, the sense operation must be performed before a read/write access to a different row can be started. Therefore, by providing these three states, the RDRAM can be controlled to minimize the latency of access to a new row in some situations, yet not perform core operations needlessly in other situations.
BRIEF DESCRIPTION OF THE DRAWINGS
The objects, features and advantages of the present invention will be apparent from the following detailed description in which:
Figure 1 is a block diagram of prior art dynamic random access memory (DRAM) component.
Figure 2 is a block diagram illustrating a DRAM system and input/ output pins and signal lines for accessing the DRAM.
Figure 3 is a timing diagram illustrating synchronous write timing when the size of the read/write word is equal to the size of the transmit/receive word.
Figure 4 is a prior art timing diagram illustrating synchronous read timing when the size of the transmit/receive word equals the size of the read/ write word.
Figure 5 is a prior art timing diagram illustrating synchronous write timing when the size of the read /write word equals twice the size of the transmit/receive word.
Figure 6 is a block diagram of a DRAM system in accordance with the teachings of the present invention showing double multiplexed address lines containing row and column address information.
Figure 7 is a timing diagram which illustrates synchronous write timing with double multiplexed row/column information. Figure 8 is a timing diagram which illustrates synchronous read timing with double multiplexed row /column information.
Figure 9 is a block diagram of DRAM system utilizing multiplexed data/row information.
Figure 10 is a timing diagram illustrating synchronous write timing using multiplexed data/row information.
Figure 11 is a DRAM system block diagram illustrating multiplexed data/row/control information.
Figure 12 is a timing diagram illustrating synchronous write timing using multiplexed data/row/control information.
Figure 13 is a timing diagram illustrating synchronous read timing with multiplexed data/row/ control information.
Figure 14 is a timing diagram illustrating synchronous write timing incorporating a plurality of enhancements in accordance with the teachings of the present invention.
Figure 15 and Figure 16 illustrate synchronous write timing and synchronous read timing when a latency incurred during the write operation is less than the latency incurred during a read operation.
Figure 17 is a simple, exemplary structure for programming latency in accordance with the teachings of the present invention. Figure 18 is a timing diagram illustrating interleaved read/write operation timing when the read latency equals the write latency.
Figure 19 is a timing diagram which illustrates synchronous interleaved read timing with multiplexed data /row/ control information.
DETAILED DESCRIPTION OF THE INVENTION
In the following description, for purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one skilled in the art that these specific details are not required in order to practice the invention. In other instances well-known electrical structures and circuits are shown in block diagram form in order not to obscure the present invention unnecessarily.
The present invention is directed to a structure and method for minimizing the number of pins and control information lines required to interface to a dynamic random access memory (DRAM). In the following discussion, the delay which occurs between the issuance of the column address and the receipt of the read data from the DRAM will differ from the delay incurred for writing information to the DRAM after issuance of the column address. However, this is the only distinction between read and write accesses. Therefore, it is apparent that although the following discussion may focus primarily on write accesses, the concepts disclosed are equally applicable to read accesses.
As the rate of transmitting and receiving information continues to increase (relative to the rate at which the sense/restore operations and read/write accesses can be performed), there will be an increase in the disparity between the amount of control and address information which must be received by the DRAM and the amount of read/write data which must be transmitted/received. Furthermore, as the system is developed to support higher data transfer speeds, each signal line required becomes more expensive to implement. Therefore, it is desirable to maximize not only the speed of the input/output pins but also the usage of those pins to take advantage of the speed to increase the bandwidth while reducing costs of implementation. Therefore, signal lines are eliminated by multiplexing the data/control/address information together so that all DRAM pins have roughly equal information rates at all times.
In particular, the following equation can be used to describe the number of column address bits which need to be received in every clock cycle to maximize usage of the pins: cap = ceiling(ca/f) where
• cap = the number of address bits received in every clock cycle tClockCycle;
• ceiling = a function returning the integer value which is equal to or greater than its argument;
• ca = the number of column address bits used every read/write cycle tRead/Write;
• f = rw/tr, where rw = the number of bits read or written from/to the column amplifiers in every read/write cycle tRead/Write, and tr = the number of bits transmitted or received from/to the DRAM in every clock cycle tClockCycle; and
• tRead/Write = f·tClockCycle.

The multiplexing of the column address bits has the potential of saving a large number of pins and signal lines as the "f" parameter increases. For example, f is equal to a value of two in current synchronous DRAMs and is expected to increase to eight or more in future synchronous DRAMs. Although the latency of (tRead/Write - tClockCycle) is introduced into the column address access parameter tCAA for read accesses, this delay, which is needed to assemble a complete column address for the column amplifiers, is found to have minimal effect in view of the advantages gained by decreasing the number of pins and signal lines required to perform the access.
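To make the pin-count relation concrete, the following short sketch evaluates cap for several values of f; it is illustrative only, and the function name is not from the patent:

```python
import math

def column_address_pins(ca: int, f: int) -> int:
    """cap = ceiling(ca / f): column-address bits needed per clock cycle,
    where f = rw / tr is the ratio of read/write word to transfer word."""
    return math.ceil(ca / f)

# An 8-bit column address at several values of f:
for f in (1, 2, 4, 8):
    print(f"f = {f}: cap = {column_address_pins(8, f)}")
# f = 1 needs 8 pins; at f = 8 a single column-address pin suffices.
```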
In addition, the method the DRAM utilizes to receive the row address can be improved upon. One possibility is to multiplex the row address into the DRAM with the number of row address bits "rap" received per clock cycle equal to the number of column address bits "cap" transmitted per clock cycle. This will delay the use of the row address by the amount of time (tRead/Write - tClockCycle) while the row address bits are received and assembled.
Fig. 6 illustrates a DRAM in which the row address and column address information is "double multiplexed" onto the column address wires (cap), where cap = ceiling(ca/f). Figs. 7 and 8 illustrate, respectively, the write timing and read timing using double multiplexed row and column connections. The parameter Col[m,n] represents the subwords of ca/f bits of the column address (which is a total width of ca bits) and Row[m,n] represents the subwords of ra/f bits of the row address. It should be noted that the first three clock cycles are not shown in Fig. 8; however, they are the same as the first three clock cycles shown in Fig. 7. The timing is an example of when f = 2; that is, when the number of bits read or written from/to the column amplifiers every read/write cycle is twice the number of bits transmitted/received from/to the DRAM in every clock cycle. With more advanced transmit/receive technology, f can reach a value of eight or more, permitting the number of address wires to be reduced to one or two. In the present example, the number of address wires has been cut in half, and two clock cycles are required to receive a row or column address. A read/write cycle requires two clock cycles as well. Therefore, for the latency penalty of an additional clock cycle, the number of pins required to access the information is substantially reduced. This, in turn, reduces the costs and complexity of the interface to the DRAM.
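The double multiplexing described above amounts to slicing an address into f subwords and sending one subword per clock cycle on the shared pins. The following sketch models that framing; the function names and bit ordering are invented for illustration:

```python
def split_address(addr: int, width: int, f: int) -> list[int]:
    """Slice a width-bit address into f subwords, most significant first,
    one subword per clock cycle on ceiling(width / f) shared pins."""
    sub = -(-width // f)                      # ceiling(width / f)
    mask = (1 << sub) - 1
    return [(addr >> (sub * (f - 1 - i))) & mask for i in range(f)]

def assemble_address(subwords: list[int], sub: int) -> int:
    """Reassemble the address once the last subword has been received."""
    addr = 0
    for word in subwords:
        addr = (addr << sub) | word
    return addr

row = 0b10110100                              # 8-bit row address, f = 2
subwords = split_address(row, 8, 2)           # two 4-bit transfers, two cycles
assert assemble_address(subwords, 4) == row
```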
To further maximize usage of the pins of the DRAM, the row address is communicated across the data pins. This is possible because the data pins of the DRAM are not in use when the row address is transmitted and received by the DRAM. The row address can be received with the data pins in about 1/f the time it takes on the cap column address pins. This is illustrated by the block diagram of Fig. 9, which shows the row address information coming from the data lines and the column address information coming across the column address lines. Fig. 10 illustrates synchronous write timing using this concept. As shown in Fig. 10, the row address is transmitted across the data lines during an initial clock cycle and the column information is transmitted across the column address lines. If ra > tr, the row address is transmitted across multiple initial clock cycles. Furthermore, the control signals needed to signal a sense or restore operation or a read or write access can also be multiplexed onto the data pins before the time the DRAM needs to actually receive or transmit data. At least one control wire should remain unmultiplexed to indicate when control and row information is transmitted on the data wires. This control information can simply indicate control/row or data information or can be combined with the internal state of the DRAM using a predetermined protocol to indicate the type of information available at the pins. For example, as shown in Fig. 11, the DRAM has "tr" data pins which are multiplexed between data, row address and control information, a data/control select pin, and one or two column address pins.
Fig. 12 illustrates synchronous write timing with multiplexed data, row and control information and Fig. 13 illustrates synchronous read timing with multiplexed data, row and control information. It should be noted that with as few as two non-data pins, e.g., the column address pin and the data/control select pin, it is preferable to operate the DRAM in a block-oriented protocol. In particular, when a Write(CAS) access is specified in the third clock cycle, a transfer count is specified to indicate the number of data words (of width "tr") to be transferred. Referring to the read timing of Fig. 13, when the Read(CAS) access is specified, a transfer count is specified at the same time. Thus, the only difference between read and write accesses is the latency tCAA between receiving the Read(CAS) control input and transmitting the first read data word RData(a,1y).
Because the read and write data is transacted in blocks of rw = f·tr bits, the Data/Control Select pin is not limited to specifying two combinations (data or control). Instead, there are 2^f usable combinations, assuming that the processor and DRAM can agree on the proper framing of the f-clock-cycle-long bursts; that is, the f-bit blocks on the Data/Control Select pin must be aligned with respect to the f·tr bit blocks on the Data bus and the f·cap bit blocks on the Column Address bus. One of the functions which can be encoded with the extra combinations is a command to terminate a block transfer, if the protocol can specify transfers that are multiples of the f·tr bit data block size.
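A hedged sketch of this framing idea follows; the particular code values are invented, since the patent requires only that processor and DRAM agree on the framing and on the meaning assigned to each of the 2^f codes:

```python
F = 8                                         # f = rw / tr in this example

# One select pin sampled over an f-cycle burst yields 2**F distinct codes.
SELECT_DATA      = 0b00000000                 # burst carries read/write data
SELECT_CONTROL   = 0b11111111                 # burst carries row/control info
SELECT_TERMINATE = 0b10101010                 # prematurely end a block transfer

def select_pin_bits(code: int, f: int = F) -> list[int]:
    """Serialize a select code onto the select pin, one bit per clock cycle,
    aligned with the f-cycle data burst."""
    return [(code >> (f - 1 - i)) & 1 for i in range(f)]

assert select_pin_bits(SELECT_TERMINATE) == [1, 0, 1, 0, 1, 0, 1, 0]
assert 2 ** F == 256                          # usable combinations for f = 8
```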
For a 16 megabit DRAM using the above innovations, a total of eleven wires can be used. The parameters are specified as follows:
• sr = 2048 x 9 bits
• rw = 8 x 9 bits
• tr = 9 bits
• f = 8
• ra = 10 bits (plus 15 bits of device address)
• ca = 8 bits
• cap = 1 bit

BusData[8:0] is used for data, control and row address information. The pin BusEnable is used to transmit the column address over multiple clock cycles, and the BusCtrl pin is used for specifying data or address on the data wires. Because f = 8, the BusCtrl wire is available for functions other than specifying data or address, as there are certain clock cycles when the BusCtrl wire is not used to transmit any particular signals. Therefore, functions such as indicating when a block data transfer is to be prematurely terminated can be implemented.
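The eleven-wire budget follows directly from these parameters; the short sketch below (variable names invented) checks the arithmetic:

```python
import math

sr = 2048 * 9          # bits stored per row
rw = 8 * 9             # bits per read/write word
tr = 9                 # bits per transfer word (BusData[8:0])
ca = 8                 # column address bits per read/write cycle

f   = rw // tr                     # = 8
cap = math.ceil(ca / f)            # = 1 column-address bit per clock cycle

wires = tr + 1 + 1                 # BusData[8:0] + BusEnable + BusCtrl
assert (f, cap, wires) == (8, 1, 11)
```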
A simplified example, for f=2, is shown in Fig. 14. Referring to Fig. 14, the timing enhancements due to the innovations can be seen. In particular, the data and control signals are paired for transmission across the Data/Row/Control signal lines and the data/control signal line is used to identify the type of information transmitted. In addition, the row address is transmitted across multiple clock cycles to minimize the number of signal lines required. Furthermore, to enhance performance, the first column address is transmitted across the data/row/control signal lines to provide pipelining capability. Subsequent column address information is transmitted across the column address signal lines tRead/Write clock cycles earlier, as shown. Additionally, a count value is transmitted across the data/row/control signal lines to provide the count information necessary in a block-oriented protocol. Finally, the data/control signal lines can be used to transmit other control information, by encoding of the bits transmitted across one or more (in the present example, two) clock cycles. In the present illustration, a terminate function is encoded to prematurely terminate a block operation of data.
As noted earlier, there is a timing difference between read and write accesses. In particular, the column address access latency tCAA between the clock cycle with the column address and read command and the clock cycle with the first word of read data causes the read timing to be longer than the write timing. This is illustrated in Figs. 15 and 16, which show the simple case when f=1. The latency between the clock cycle with the column address and write command and the clock cycle with the first word of write data is zero. In a read situation, there is a delay of tCAA which occurs. Therefore, there will be wasted clock cycles on the data bus every time a read access is followed by a write access because of this latency difference. To maximize usage and increase the bandwidth of the signal lines, the write latency is made programmable so that it can be adjusted to equal the read latency.
A simple, exemplary structure for programming latency is shown in Fig. 17. Fig. 17 shows DRAM control logic 500 which delays a signal to initiate an operation (Start R/W) 505 a certain number of clock cycles dependent upon the information loaded into the latency control register 510. The information loaded into the latency control register 510 controls the operation of the multiplexers 515, 520, 525. Selection by the multiplexers 515, 520, 525 determines whether the signal 505 is output immediately or after a predetermined delay produced by processing the signal 505 through flip-flops 530, 535, 540. Each flip-flop 530, 535, 540 delays the signal one clock cycle. It is readily apparent that latency can be programmed using alternate structures. For example, latency can be made programmable using a programmable counter to count delays. Alternatively, delays can be inserted between control signals such that the control signals can be pipelined with other operations in a manner that produces the desired latency. By setting the write latency equal to the read latency, every clock cycle of the data bus can be used for a transfer regardless of the types of accesses which are occurring. This is achieved by interleaving the accesses to be performed. This technique maximizes the bandwidth utilization of the bus at the small expense of the write latency.
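The following behavioral sketch models the idea of Fig. 17 in software; it is illustrative only and does not reproduce the actual gate-level logic (class and attribute names are invented):

```python
class LatencyControl:
    """Behavioral sketch of Fig. 17: a start pulse is routed through 0-3
    flip-flop stages, selected by the latency control register."""

    def __init__(self, stages: int = 3):
        self.latency = 0                      # latency control register (510)
        self.chain = [0] * stages             # flip-flop chain (530, 535, 540)

    def clock(self, start: int) -> int:
        """One clock cycle: return Start R/W, delayed by `latency` cycles."""
        out = start if self.latency == 0 else self.chain[self.latency - 1]
        self.chain = [start] + self.chain[:-1]   # each stage adds one cycle
        return out

ctl = LatencyControl()
ctl.latency = 2                               # program write latency = 2
assert [ctl.clock(s) for s in (1, 0, 0, 0)] == [0, 0, 1, 0]
```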
Fig. 18 illustrates interleaved timing of read and write accesses. Thus, the interleave structure permits read accesses to a DRAM to be interleaved with write accesses to another DRAM. If a DRAM has multiple independent memory arrays and associated column amplifiers, then read accesses to one bank can be interleaved with write accesses to another bank within the same DRAM, further increasing the bandwidth utilization of the DRAM itself. Furthermore, the interleaving will work with the multiplexed address and control information (described above for f > 1), which further enhances the operation of the DRAM.
The concept of interleaving can be taken one step further by recognizing that the control information and row addresses can be multiplexed on the data bus lines as described earlier. Thus, there would be additional benefit to making the range of the read and write latency larger to permit the transfer count and command information (utilizing a block-oriented protocol) to be interleaved on the data bus. This is illustrated in Fig. 19 (f=2); for example, Col[3m], Col[3n]. Data from the read command issued in a first clock cycle is deliberately delayed until the seventh clock cycle, when RData[a,3y] and RData[a,3z] are available, in order to permit a four-word read or write access to be completed every five cycles. This further maximizes the bandwidth utilization of the data bus at the expense of the read and write latency.
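As a toy illustration of why equal read and write latencies keep the bus full (the function and the numbers below are invented, not from the patent):

```python
def data_bus_slots(n_accesses: int, latency: int, issue_gap: int = 1) -> list[int]:
    """Clock cycle in which each access occupies the data bus when accesses
    are issued every `issue_gap` cycles and reads and writes share `latency`."""
    return [i * issue_gap + latency for i in range(n_accesses)]

# Four interleaved accesses (reads and writes alike) with a common latency
# of 2 land on consecutive data-bus cycles: no wasted cycles.
assert data_bus_slots(4, latency=2) == [2, 3, 4, 5]
# With write latency 0 and read latency 2, a write issued right after a read
# would contend with the returning read data, leaving gaps on the bus.
```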
As noted earlier, in a DRAM which utilizes non-pulsed word lines, every write operation causes the sensed row to be restored to the memory core, wherein only a precharge operation is performed before the next sense operation on a row. Such DRAMs can be left in a sensed state wherein the column amplifiers contain a copy of one of the rows of the memory array, or the array can be left in a precharged state wherein the column amplifiers and bit lines are precharged and ready for the next sense operation. The choice between these two column amplifier states may be made with the control inputs when a read or write access command is specified. Typically, the precharged state is chosen when the final access has been made to the sensed row. This avoids spending the precharge time tRP before the time tRCD required for the next row to be sensed. In a DRAM using pulsed word lines, the restore operation is typically done once, just prior to the next precharge/sense operation. However, this restore operation is only necessary if the column amplifiers differ from the row in the memory array. Thus, three possible states are provided for the column amplifiers, each utilizing a different set of operations that must be performed in order to sense a new row. The first state is a precharged state in which the column amplifiers and bit lines are precharged. If the row is precharged, only the sense operation must be performed before a read/write access can be initiated. In the next state, referred to as a clean state, the column amplifiers contain identical information to the row in the memory array. If the amplifiers are in a clean state, a precharge/sense operation must be performed before a read/write access can be started. This, of course, takes a longer period of time than just a sense operation. The third state is the dirty state, wherein the column amplifiers contain different information from the row in the memory array. Thus, before a read/write access to a new row can be initiated, a restore/precharge/sense operation must be performed.
To track the state of the row, a dirty flag is utilized. Preferably this flag is a bit in a register located in the DRAM control logic and is set whenever a write access is made to the column amplifiers. Alternatively, the dirty flag can be maintained in an external DRAM controller. The bit is cleared when the column amplifiers are written into the selected row in the memory array by a restore operation. Thus, the DRAM's column amplifiers can be left in one of the three states. The state is selected by the control inputs when a read or write access command is specified. For example, six distinct read and write commands (three read, three write) are provided, each identifying the state the column amplifiers are to be left in at the completion of the access. If the column amplifiers are dirty after the access has completed, then the column amplifiers may be left dirty, or a restore operation will leave the column amplifiers in a clean state, or a restore/precharge operation will leave the column amplifiers in a precharged state. Similarly, if the column amplifiers are clean after the access has completed, then the amplifiers may be left in a clean state, or a precharge operation will leave the column amplifiers in a precharged state.
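The three-state bookkeeping can be summarized in a short sketch; the state encoding, names and operation tuples below are illustrative only:

```python
PRECHARGED, CLEAN, DIRTY = "precharged", "clean", "dirty"

# Core operations required before a new row can be sensed, per state:
OPS_BEFORE_NEW_ROW = {
    PRECHARGED: ("sense",),
    CLEAN:      ("precharge", "sense"),
    DIRTY:      ("restore", "precharge", "sense"),
}

class ColumnAmplifiers:
    def __init__(self) -> None:
        self.dirty = False                 # the dirty flag (one register bit)

    def write_access(self) -> None:
        self.dirty = True                  # any write dirties the amplifiers

    def restore(self) -> None:
        self.dirty = False                 # restore copies amplifiers to row

    def state(self, precharged: bool) -> str:
        if precharged:
            return PRECHARGED
        return DIRTY if self.dirty else CLEAN

amps = ColumnAmplifiers()
amps.write_access()
assert OPS_BEFORE_NEW_ROW[amps.state(precharged=False)] == ("restore", "precharge", "sense")
```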
Although it is preferable that as many of these operations as possible are performed at the end of an access rather than before a new row is sensed, certain timing constraints may require that other alternatives in the sequence of performing operations are utilized. The structure provides the flexibility to leave the row in any one of three states and to perform the operations needed prior to a new row being sensed either at the end of access to the old row or before access to the new row.
The present invention has been described in conjunction with the preferred embodiment. It is evident that numerous alternatives, modifications, variations and uses will be apparent to those skilled in the art in light of the foregoing description.

Claims

What is claimed is:
1. In a dynamic random access memory (DRAM) system comprising at least one DRAM array accessed according to a row address and column address and an array address/control means for receiving address and control information and a data input/output means for receiving data to be written to the array and for transmitting data read from the array, said memory system comprising: a plurality of bussed signal lines for communicating address and control information and data, wherein the number of column address bits transmitted each clock cycle in order to communicate the column address information with a low latency is determined according to the following equation:
cap = ceiling(ca/f)
where cap represents the number of column address bits received in every clock cycle (tClockCycle), ceiling represents a function returning an integer greater than or equal to its argument, ca represents the number of column address bits used every read/write cycle (tRead/Write), f equals rw/tr, tr represents the number of bits transmitted to or received from the DRAM in every clock cycle, and tRead/Write = f·tClockCycle; wherein said DRAM system comprises a minimum number of signal lines by decreasing the number of lines required to send the column address, and the data rates through the signal lines and input/output pins of the DRAM are approximately the same to maximize usage of the signal lines.
2. The DRAM system as set forth in claim 1, wherein the row address information is multiplexed with the column address information across column address signal lines, wherein row address signal lines are eliminated.
3. The DRAM system as set forth in claim 1, wherein the row address information is multiplexed with the data communicated over data signal lines, wherein row address signal lines are eliminated.
4. The DRAM system as set forth in claim 1, wherein the row address information, control information indicating whether a sense, restore, read, or write operation is to be performed, and data are multiplexed across the same set of signal lines, eliminating the need for separate signal lines, said system further comprising a select signal line indicating whether the type of information multiplexed is row address information, control information or data.
5. The DRAM system as set forth in claim 4, wherein the select signal line can communicate a multiplicity of select information across multiple clock cycles.
6. The DRAM system as set forth in claim 4, wherein the row address is transmitted during at least one initial clock cycle.
7. The DRAM system as set forth in claim 4, wherein control signals to sense, restore, read, or write data are multiplexed onto the set of signal lines prior to the time the DRAM receives or transmits data.
8. The DRAM system as set forth in claim 4, wherein the state of the DRAM and a signal on the select signal line, in accordance with a predetermined protocol, identify the type of information multiplexed on the set of signal lines.
9. The DRAM system as set forth in claim 5, wherein block transfers are performed in multiples of an f*tr bit data block size, said select information further identifying up to 2f commands.
10. The DRAM system as set forth in claim 9, wherein said select information identifies a command to terminate a block transfer.
11. The DRAM system as set forth in claim 1, wherein eleven signal lines are utilized to communicate data, address and control information to/from a 16 Mbit DRAM.
12. The DRAM system as set forth in claim 11, wherein the signal lines comprise BusData[8:0] to communicate data, control and address information, BusEnable to communicate the column address, and BusCtrl for specifying whether data or control information is being communicated over the BusData[8:0] signal lines.
13. In a dynamic random access memory (DRAM) system comprising at least one DRAM array accessed according to a row address and column address, an array address/control means for receiving address and control information, and a data input/output means for receiving data to be written to the array and for transmitting data read from the array, said memory system comprising: means for increasing the latency incurred during a write access to equal the latency incurred during a read access; and means for interleaving the read and write accesses to utilize each clock cycle to communicate information across the data signal lines.
14. The DRAM system as set forth in claim 13, wherein the means for increasing the latency is programmable.
15. The DRAM system as set forth in claim 13, wherein the means for increasing the latency comprises: a latency register for storing information indicative of the duration of the latency; and control means for controlling the latency to be a duration corresponding to the latency indicated by the information stored in the latency register; wherein the information stored in the latency register can be modified to program the latency.
16. The DRAM system as set forth in claim 13, wherein the means for increasing the latency comprises a programmable counter which counts a number of clock cycles to wait during a write access.
17. The DRAM system as set forth in claim 13, wherein the means for increasing the latency comprises a control means to control the timing of issuance of control signals to perform a write access in order to incur the desired latency.
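As a rough illustration of claims 14 through 17 (hypothetical names; the claims recite means, not a specific implementation), a latency register consulted by a counter-style delay can hold back the internal start of a write so that write latency matches read latency, which is what allows reads and writes to be interleaved on the same data lines without idle cycles:

```python
# Hedged sketch: a programmable latency register consulted by a counter-style
# delay, so writes incur the same latency as reads.

class WriteLatencyControl:
    def __init__(self, read_latency_cycles: int) -> None:
        # Latency register: holds the programmed write latency in clock cycles.
        self.latency_register = read_latency_cycles

    def program(self, cycles: int) -> None:
        """Modify the register contents to re-program the write latency."""
        self.latency_register = cycles

    def data_capture_cycle(self, write_command_cycle: int) -> int:
        """Counter behavior: wait latency_register clock cycles after the
        write command before capturing write data, mirroring read latency."""
        return write_command_cycle + self.latency_register

ctl = WriteLatencyControl(read_latency_cycles=3)
print(ctl.data_capture_cycle(write_command_cycle=10))  # -> 13
```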
18. The DRAM system as set forth in claim 1, wherein the system comprises multiple DRAMs using the same signal lines and accesses among the multiple DRAMs are multiplexed to maximize usage of the signal lines.
19. The DRAM system as set forth in claim 13, wherein the DRAM comprises multiple arrays, each array having an associated set of column amplifiers, wherein accesses among the arrays are multiplexed to maximize usage of the signal lines.
20. The DRAM system as set forth in claim 1, wherein a first column address is transmitted prior to corresponding data being transmitted such that column addresses are pipelined to enhance performance.
21. The DRAM system as set forth in claim 20, wherein a first column address is transmitted during an initial clock cycle across the data signal lines subsequent to transmission of the row address.
22. In a dynamic random access memory (DRAM) system comprising at least one DRAM array accessed according to a row address and column address, said row address decoded in the DRAM to sense a corresponding row by selecting the corresponding word line in the array, said word line being a pulsed word line wherein a restore operation to restore the contents of the column amplifiers to the array is performed once prior to performing a sense/precharge operation on another row of the array, said system comprising: a dirty flag, which when set, specifies that the information stored in the column amplifiers is different from the information stored in the row of the array; means for selecting a state the column amplifiers are left in prior to sensing the next word line in the array, such that: if the dirty flag is set and said means indicates that the column amplifiers are to be left in a dirty state, just prior to sensing another row of the array the column amplifiers are restored to the memory array and the column amplifiers are precharged; if the dirty flag is set and said means indicates that the column amplifiers are to be placed in a clean state, the column amplifiers are restored to the memory array after read/write operations to the current row are complete, and just prior to sensing another row of the array the column amplifiers are precharged; if the dirty flag is set and said means indicates that the column amplifiers are to be placed in a precharged state, the column amplifiers are restored to the current row and the column amplifiers are precharged after read/write operations to the current row are complete; and if the dirty flag is not set and said means indicates that the column amplifiers are to be placed in a precharged state, the column amplifiers are precharged after read/write operations to the current row are complete; wherein at the completion of an access to a row, the column amplifiers can be left in a dirty state, a clean state or a precharged state.
23. The DRAM system as set forth in claim 1, further comprising: means for increasing the latency incurred during a write access to equal the latency incurred during a read access; and means for interleaving the read and write accesses to utilize each clock cycle to communicate information across the data signal lines.
24. The DRAM system as set forth in claim 23, wherein the means for increasing the latency is programmable.
25. The DRAM system as set forth in claim 24, wherein the means for increasing the latency comprises: a latency register for storing information indicative of the duration of the latency; and control means for controlling the latency to be a duration corresponding to the latency indicated by the information stored in the latency register; wherein the information stored in the latency register can be modified to program the latency.
26. The DRAM system as set forth in claim 25, wherein the means for increasing the latency comprises a programmable counter which counts a number of clock cycles to wait during a write access.
27. The DRAM system as set forth in claim 25, wherein the means for increasing the latency comprises a control means to control the timing of issuance of control signals to perform a write access in order to incur the desired latency.
28. In a dynamic random access memory (DRAM) system comprising at least one DRAM array accessed according to a row address and column address, an array address/control means for receiving address and control information, and a data input/output means for receiving data to be written to the array and for transmitting data read from the array, a method for transmitting address, control and data between the DRAM and a device, said method comprising the steps of: transmitting address and control information and data across a plurality of bussed signal lines, wherein the number of column address lines used to communicate column address information with a low latency is determined according to the following equation:
cap = ceiling(ca/f)
where cap represents the number of column address bits received in every clock cycle (tClockCycle), ceiling represents a function returning the smallest integer greater than or equal to its argument, ca represents the number of column address bits used every read/write cycle (tRead/Write), and f equals rw/tr, where rw represents the number of bits transmitted to or received from the DRAM in every read/write cycle and tr represents the number of bits transmitted to or received from the DRAM in every clock cycle, such that tRead/Write = f * tClockCycle; wherein a minimum number of signal lines is used by decreasing the number of signal lines required to send the column address, and the data rates through the signal lines and input/output pins of the DRAM are approximately the same to maximize usage of the signal lines.
29. The method as set forth in claim 28, further comprising the step of multiplexing the row address information with the column address information across column address signal lines, wherein row address signal lines are eliminated.
30. The method as set forth in claim 28, further comprising the step of multiplexing the row address information with data communicated across data signal lines, wherein row address signal lines are eliminated.
31. The method as set forth in claim 28, further comprising the steps of: multiplexing row address information, control information indicating whether a sense, restore, read, or write operation is to be performed, and data across the same set of signal lines, eliminating the need for separate signal lines; and issuing at least one select signal across a select signal line to indicate whether the information multiplexed is control information or data.
32. The method as set forth in claim 31, wherein the step of issuing at least one select signal comprises issuing a multiplicity of select information across multiple clock cycles.
33. The method as set forth in claim 31, wherein the row address is transmitted during at least one initial clock cycle.
34. The method as set forth in claim 31, wherein control signals to sense, restore, read, or write data are multiplexed onto the set of signal lines prior to the time the DRAM receives or transmits data.
35. The method as set forth in claim 31, further comprising the steps of: determining the state of the DRAM; and identifying the type of information multiplexed on the set of signal lines according to the state of the DRAM and a signal on the select signal line in accordance with a predetermined protocol.
36. The method as set forth in claim 32, wherein said block transfers are performed in multiples of an f*tr bit data block size, said select information further identifying up to 2f commands.
37. The method as set forth in claim 36, wherein said select information identifies a command to terminate a block transfer.
38. The method as set forth in claim 28, further comprising the steps of: increasing the latency incurred during a write access to equal the latency incurred during a read access; and interleaving the read and write accesses to utilize each clock cycle to communicate information across the data signal lines.
39. The method as set forth in claim 38, wherein the step of increasing the latency comprises the step of programming the latency to a predetermined value indicative of the duration of the latency.
40. The method as set forth in claim 39, wherein the step of programming the latency comprises: storing information indicative of the duration of the latency in a latency register; and controlling the latency to be a duration corresponding to the latency indicated by the information stored in the latency register; wherein the information stored in the latency register can be modified to program the latency.
41. The method as set forth in claim 39, wherein the step of programming the latency comprises the step of counting a number of clock cycles to wait during a write access.
42. The method as set forth in claim 39, wherein the step of programming the latency comprises the step of controlling the timing of issuance of control signals to perform a write access in order to incur the desired latency.
43. The method as set forth in claim 28, further comprising the step of transmitting a first column address prior to transmitting corresponding data such that column addresses are pipelined to enhance performance.
44. The method as set forth in claim 43, wherein a first column address is transmitted during an initial clock cycle across the data signal lines subsequent to transmission of the row address.
45. In a dynamic random access memory (DRAM) system comprising at least one DRAM array accessed according to a row address and column address, said row address decoded in the DRAM to sense a corresponding row by selecting the corresponding word line in the array, said word line being a pulsed word line wherein a restore operation to restore the contents of the column amplifiers to the array is performed once prior to performing a sense/precharge operation on another row of the array, a method for access comprising the steps of: providing a dirty flag, which when set, specifies that the information stored in the column amplifiers is different from the information stored in the row of the array; selecting a state the column amplifiers are left in prior to sensing the next word line in the array, such that: if the dirty flag is set and the state selected indicates that the column amplifiers are to be left in a dirty state, just prior to sensing another row of the array, restoring the column amplifiers to the memory array and precharging the column amplifiers; if the dirty flag is set and the state selected indicates that the column amplifiers are to be placed in a clean state, restoring the column amplifiers to the memory array after read/write operations to the current row are complete, and just prior to sensing another row of the array, precharging the column amplifiers; if the dirty flag is set and the state selected indicates that the column amplifiers are to be placed in a precharged state, restoring the column amplifiers to the current row and precharging the column amplifiers after read/write operations to the current row are complete; and if the dirty flag is not set and the state selected indicates that the column amplifiers are to be placed in a precharged state, precharging the column amplifiers after read/write operations to the current row are complete; wherein at the completion of an access to a row, the column amplifiers can be left in a dirty state, a clean state or a precharged state.
PCT/US1994/005798 1993-06-02 1994-05-23 Dynamic random access memory system WO1994028550A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP50087995A JP4077874B2 (en) 1993-06-02 1994-05-23 Dynamic random access memory system
AU70434/94A AU7043494A (en) 1993-06-02 1994-05-23 Dynamic random access memory system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/071,177 1993-06-02
US7117793A 1993-06-23 1993-06-23

Publications (1)

Publication Number Publication Date
WO1994028550A1

Family

ID=22099750

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1994/005798 WO1994028550A1 (en) 1993-06-02 1994-05-23 Dynamic random access memory system

Country Status (4)

Country Link
US (3) US5511024A (en)
JP (3) JP4077874B2 (en)
AU (1) AU7043494A (en)
WO (1) WO1994028550A1 (en)


Families Citing this family (70)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IL96808A (en) * 1990-04-18 1996-03-31 Rambus Inc Integrated circuit i/o using a high performance bus interface
US5497354A (en) * 1994-06-02 1996-03-05 Intel Corporation Bit map addressing schemes for flash memory
US5590078A (en) * 1994-10-07 1996-12-31 Mukesh Chatter Method of and apparatus for improved dynamic random access memory (DRAM) providing increased data bandwidth and addressing range for current DRAM devices and/or equivalent bandwidth and addressing range for smaller DRAM devices
US5655105A (en) * 1995-06-30 1997-08-05 Micron Technology, Inc. Method and apparatus for multiple latency synchronous pipelined dynamic random access memory
US5748633A (en) * 1995-07-12 1998-05-05 3Com Corporation Method and apparatus for the concurrent reception and transmission of packets in a communications internetworking device
US5651002A (en) * 1995-07-12 1997-07-22 3Com Corporation Internetworking device with enhanced packet header translation and memory
US5537353A (en) * 1995-08-31 1996-07-16 Cirrus Logic, Inc. Low pin count-wide memory devices and systems and methods using the same
US6025840A (en) * 1995-09-27 2000-02-15 Cirrus Logic, Inc. Circuits, systems and methods for memory mapping and display control systems using the same
US5636174A (en) * 1996-01-11 1997-06-03 Cirrus Logic, Inc. Fast cycle time-low latency dynamic random access memories and systems and methods using the same
US5749086A (en) * 1996-02-29 1998-05-05 Micron Technology, Inc. Simplified clocked DRAM with a fast command input
US5906003A (en) * 1996-04-17 1999-05-18 Cirrus Logic, Inc. Memory device with an externally selectable-width I/O port and systems and methods using the same
US5835965A (en) * 1996-04-24 1998-11-10 Cirrus Logic, Inc. Memory system with multiplexed input-output port and memory mapping capability
US5829016A (en) * 1996-04-24 1998-10-27 Cirrus Logic, Inc. Memory system with multiplexed input-output port and systems and methods using the same
US5815456A (en) * 1996-06-19 1998-09-29 Cirrus Logic, Inc. Multibank -- multiport memories and systems and methods using the same
US6115318A (en) * 1996-12-03 2000-09-05 Micron Technology, Inc. Clock vernier adjustment
US5923611A (en) * 1996-12-20 1999-07-13 Micron Technology, Inc. Memory having a plurality of external clock signal inputs
US5894586A (en) * 1997-01-23 1999-04-13 Xionics Document Technologies, Inc. System for providing access to memory in which a second processing unit is allowed to access memory during a time slot assigned to a first processing unit
US5920518A (en) * 1997-02-11 1999-07-06 Micron Technology, Inc. Synchronous clock generator including delay-locked loop
US6912680B1 (en) 1997-02-11 2005-06-28 Micron Technology, Inc. Memory system with dynamic timing correction
US5940608A (en) * 1997-02-11 1999-08-17 Micron Technology, Inc. Method and apparatus for generating an internal clock signal that is synchronized to an external clock signal
US5987576A (en) * 1997-02-27 1999-11-16 Hewlett-Packard Company Method and apparatus for generating and distributing clock signals with minimal skew
US5946244A (en) 1997-03-05 1999-08-31 Micron Technology, Inc. Delay-locked loop with binary-coupled capacitor
US6088761A (en) * 1997-03-31 2000-07-11 Sun Microsystems, Inc. Reduced pin system interface
US5870350A (en) * 1997-05-21 1999-02-09 International Business Machines Corporation High performance, high bandwidth memory bus architecture utilizing SDRAMs
US6173432B1 (en) 1997-06-20 2001-01-09 Micron Technology, Inc. Method and apparatus for generating a sequence of clock signals
US6266379B1 (en) 1997-06-20 2001-07-24 Massachusetts Institute Of Technology Digital transmitter with equalization
US5953284A (en) * 1997-07-09 1999-09-14 Micron Technology, Inc. Method and apparatus for adaptively adjusting the timing of a clock signal used to latch digital signals, and memory device using same
US6011732A (en) * 1997-08-20 2000-01-04 Micron Technology, Inc. Synchronous clock generator including a compound delay-locked loop
US5940609A (en) * 1997-08-29 1999-08-17 Micorn Technology, Inc. Synchronous clock generator including a false lock detector
US5926047A (en) 1997-08-29 1999-07-20 Micron Technology, Inc. Synchronous clock generator including a delay-locked loop signal loss detector
US6101197A (en) * 1997-09-18 2000-08-08 Micron Technology, Inc. Method and apparatus for adjusting the timing of signals over fine and coarse ranges
US5898623A (en) * 1997-10-09 1999-04-27 International Business Machines Corporation Input port switching protocol for a random access memory
US6343352B1 (en) 1997-10-10 2002-01-29 Rambus Inc. Method and apparatus for two step memory write operations
US6401167B1 (en) * 1997-10-10 2002-06-04 Rambus Incorporated High performance cost optimized memory
US6347354B1 (en) * 1997-10-10 2002-02-12 Rambus Incorporated Apparatus and method for maximizing information transfers over limited interconnect resources
AU9693398A (en) * 1997-10-10 1999-05-03 Rambus Incorporated Apparatus and method for pipelined memory operations
US5959929A (en) * 1997-12-29 1999-09-28 Micron Technology, Inc. Method for writing to multiple banks of a memory device
US6269451B1 (en) 1998-02-27 2001-07-31 Micron Technology, Inc. Method and apparatus for adjusting data timing by delaying clock signal
US6065093A (en) * 1998-05-15 2000-05-16 International Business Machines Corporation High bandwidth narrow I/O memory device with command stacking
US6016282A (en) * 1998-05-28 2000-01-18 Micron Technology, Inc. Clock vernier adjustment
US6453377B1 (en) 1998-06-16 2002-09-17 Micron Technology, Inc. Computer including optical interconnect, memory unit, and method of assembling a computer
JP2000137983A (en) * 1998-08-26 2000-05-16 Toshiba Corp Semiconductor storage
US6338127B1 (en) 1998-08-28 2002-01-08 Micron Technology, Inc. Method and apparatus for resynchronizing a plurality of clock signals used to latch respective digital signals, and memory device using same
US6279090B1 (en) 1998-09-03 2001-08-21 Micron Technology, Inc. Method and apparatus for resynchronizing a plurality of clock signals used in latching respective digital signals applied to a packetized memory device
US6349399B1 (en) 1998-09-03 2002-02-19 Micron Technology, Inc. Method and apparatus for generating expect data from a captured bit pattern, and memory device using same
US6029250A (en) * 1998-09-09 2000-02-22 Micron Technology, Inc. Method and apparatus for adaptively adjusting the timing offset between a clock signal and digital signals transmitted coincident with that clock signal, and memory device and system using same
DE19951677B4 (en) * 1998-10-30 2006-04-13 Fujitsu Ltd., Kawasaki Semiconductor memory device
FI982374A (en) * 1998-11-02 2000-06-21 Nokia Mobile Phones Ltd memory Interface
US6430696B1 (en) 1998-11-30 2002-08-06 Micron Technology, Inc. Method and apparatus for high speed data capture utilizing bit-to-bit timing correction, and memory device using same
US6374360B1 (en) 1998-12-11 2002-04-16 Micron Technology, Inc. Method and apparatus for bit-to-bit timing correction of a high speed memory bus
US6292911B1 (en) 1998-12-17 2001-09-18 Cirrus Logic, Inc. Error detection scheme for a high-speed data channel
US6470060B1 (en) 1999-03-01 2002-10-22 Micron Technology, Inc. Method and apparatus for generating a phase dependent control signal
JP4083944B2 (en) 1999-12-13 2008-04-30 東芝マイクロエレクトロニクス株式会社 Semiconductor memory device
US6288898B1 (en) 1999-12-20 2001-09-11 Dell Usa, L.P. Apparatus for mounting and cooling a system components in a computer
US6553449B1 (en) 2000-09-29 2003-04-22 Intel Corporation System and method for providing concurrent row and column commands
US7103696B2 (en) * 2001-04-04 2006-09-05 Adaptec, Inc. Circuit and method for hiding peer devices in a computer bus
US6675272B2 (en) * 2001-04-24 2004-01-06 Rambus Inc. Method and apparatus for coordinating memory operations among diversely-located memory components
US8391039B2 (en) 2001-04-24 2013-03-05 Rambus Inc. Memory module with termination component
US6697926B2 (en) * 2001-06-06 2004-02-24 Micron Technology, Inc. Method and apparatus for determining actual write latency and accurately aligning the start of data capture with the arrival of data at a memory device
US6801989B2 (en) 2001-06-28 2004-10-05 Micron Technology, Inc. Method and system for adjusting the timing offset between a clock signal and respective digital signals transmitted along with that clock signal, and memory device and computer system using same
US6741497B2 (en) * 2001-08-30 2004-05-25 Micron Technology, Inc. Flash memory with RDRAM interface
US7168027B2 (en) 2003-06-12 2007-01-23 Micron Technology, Inc. Dynamic synchronization of data capture on an optical or other high speed communications link
US7257691B2 (en) * 2003-09-26 2007-08-14 International Business Machines Corporation Writing and reading of data in probe-based data storage devices
US7460545B1 (en) * 2004-06-14 2008-12-02 Intel Corporation Enhanced SDRAM bandwidth usage and memory management for TDM traffic
US7301831B2 (en) 2004-09-15 2007-11-27 Rambus Inc. Memory systems with variable delays for write data signals
KR100753081B1 (en) * 2005-09-29 2007-08-31 주식회사 하이닉스반도체 Seniconductor memory device with internal address generator
WO2007116484A1 (en) * 2006-03-31 2007-10-18 Fujitsu Limited Memory apparatus, interface circuit thereof, control method thereof, control program thereof, memory card, circuit board, and electronic device
WO2007116485A1 (en) * 2006-03-31 2007-10-18 Fujitsu Limited Memory device, its interface circuit, memory system, memory card, circuit board, and electronic device
EP3200189B1 (en) 2007-04-12 2021-06-02 Rambus Inc. Memory system with point-to-point request interconnect
KR102370156B1 (en) * 2017-08-23 2022-03-07 삼성전자주식회사 Memory system, and memory module and semiconductor memory device for the same


Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3821715A (en) * 1973-01-22 1974-06-28 Intel Corp Memory system for a multi chip digital computer
US4333142A (en) * 1977-07-22 1982-06-01 Chesley Gilman D Self-configurable computer and memory system
US4286321A (en) * 1979-06-18 1981-08-25 International Business Machines Corporation Common bus communication system in which the width of the address field is greater than the number of lines on the bus
US4649516A (en) * 1984-06-01 1987-03-10 International Business Machines Corp. Dynamic row buffer circuit for DRAM
US4979145A (en) * 1986-05-01 1990-12-18 Motorola, Inc. Structure and method for improving high speed data rate in a DRAM
US5184320A (en) * 1988-02-12 1993-02-02 Texas Instruments Incorporated Cached random access memory device and system
CA1314990C (en) * 1988-12-05 1993-03-23 Richard C. Foss Addressing for large dynamic ram
US5257237A (en) * 1989-05-16 1993-10-26 International Business Machines Corporation SAM data selection on dual-ported DRAM devices
US4967398A (en) * 1989-08-09 1990-10-30 Ford Motor Company Read/write random access memory with data prefetch
US5278974A (en) * 1989-12-04 1994-01-11 Digital Equipment Corporation Method and apparatus for the dynamic adjustment of data transfer timing to equalize the bandwidths of two buses in a computer system having different bandwidths
US5243703A (en) * 1990-04-18 1993-09-07 Rambus, Inc. Apparatus for synchronously generating clock signals in a data processing system
US5115411A (en) * 1990-06-06 1992-05-19 Ncr Corporation Dual port memory system
US5119331A (en) * 1990-09-04 1992-06-02 Nec Electronics Inc. Segmented flash write
JP2740063B2 (en) * 1990-10-15 1998-04-15 株式会社東芝 Semiconductor storage device
US5142276A (en) * 1990-12-21 1992-08-25 Sun Microsystems, Inc. Method and apparatus for arranging access of vram to provide accelerated writing of vertical lines to an output display
US5265053A (en) * 1991-07-03 1993-11-23 Intel Corporation Main memory DRAM interface
JP2988804B2 (en) * 1993-03-19 1999-12-13 株式会社東芝 Semiconductor memory device
AU6988494A (en) * 1993-05-28 1994-12-20 Rambus Inc. Method and apparatus for implementing refresh in a synchronous dram system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS54128226A (en) * 1978-03-29 1979-10-04 Hitachi Ltd Random access memory
WO1979000914A1 (en) * 1978-04-11 1979-11-15 Ncr Co Memory device
JPS5720979A (en) * 1980-07-15 1982-02-03 Nec Corp Memory control system
US4434474A (en) * 1981-05-15 1984-02-28 Rockwell International Corporation Single pin time-sharing for serially inputting and outputting data from state machine register apparatus
JPS5853082A (en) * 1981-09-24 1983-03-29 Hitachi Ltd Static type ram
WO1991016680A1 (en) * 1990-04-18 1991-10-31 Rambus Inc. Integrated circuit i/o using a high preformance bus interface

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
PATENT ABSTRACTS OF JAPAN vol. 3, no. 149 (E - 157) 8 December 1979 (1979-12-08) *
PATENT ABSTRACTS OF JAPAN vol. 6, no. 84 (P - 117) 22 May 1982 (1982-05-22) *
PATENT ABSTRACTS OF JAPAN vol. 7, no. 138 (P - 204) 16 June 1983 (1983-06-16) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003065376A1 (en) * 2002-01-28 2003-08-07 Intel Corporation Apparatus and method for encoding auto-precharge
US6829184B2 (en) 2002-01-28 2004-12-07 Intel Corporation Apparatus and method for encoding auto-precharge
CN1751358B (en) * 2002-01-28 2010-11-17 英特尔公司 Apparatus and method for encoding auto-precharge

Also Published As

Publication number Publication date
US5511024A (en) 1996-04-23
JP4615494B2 (en) 2011-01-19
JP2007012270A (en) 2007-01-18
JP2010135065A (en) 2010-06-17
US5434817A (en) 1995-07-18
JPH09500751A (en) 1997-01-21
AU7043494A (en) 1994-12-20
JP4077874B2 (en) 2008-04-23
US5430676A (en) 1995-07-04

Similar Documents

Publication Publication Date Title
US5434817A (en) Dynamic random access memory system
US6044429A (en) Method and apparatus for collision-free data transfers in a memory device with selectable data or address paths
US5966724A (en) Synchronous memory device with dual page and burst mode operations
US5813023A (en) Method and apparatus for multiple latency synchronous dynamic random access memory
US8370596B2 (en) Mechanism for enabling full data bus utilization without increasing data granularity
JP3317187B2 (en) Semiconductor storage device
US20090276548A1 (en) Dynamically setting burst type of a double data rate memory device
US6477598B1 (en) Memory controller arbitrating RAS, CAS and bank precharge signals
EP0572026B1 (en) Semiconductor memory device
EP0605887B1 (en) Synchronous LSI memory device
US20020144071A1 (en) Method and apparatus for handling memory read return data from different time domains
KR100676981B1 (en) Arrangement with a plurality of processors sharing a collective memory
US6034900A (en) Memory device having a relatively wide data bus
EP0660328A2 (en) Method of controlling semiconductor storage circuit
KR100228455B1 (en) Semiconductor memory circuit
US7093051B2 (en) Dynamic input/output: configurable data bus for optimizing data throughput
EP0924707A2 (en) Synchronous dynamic random access memory architecture for sequential burst mode
US6728143B2 (en) Integrated memory
US6011728A (en) Synchronous memory with read and write mode
US6055609A (en) Apparatus and method for improving bus usage in a system having a shared memory
US5752267A (en) Data processing system for accessing an external device during a burst mode of operation and method therefor
JP2004507817A (en) DRAM control circuit

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AT AU BB BG BR BY CA CH CN CZ DE DK ES FI GB GE HU JP KG KP KR KZ LK LU LV MD MG MN MW NL NO NZ PL PT RO RU SD SE SI SK TJ TT UA UZ VN

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA