WO1998012637A1 - Dynamic spare column replacement memory system - Google Patents


Info

Publication number
WO1998012637A1
Authority
WO
WIPO (PCT)
Prior art keywords
address
memory
switch
cells
word
Application number
PCT/US1997/017186
Other languages
French (fr)
Inventor
Chun-Chu Archie Wu
Chun-Chiu Daniel Wong
Original Assignee
I-Cube, Inc.
Application filed by I-Cube, Inc. filed Critical I-Cube, Inc.
Publication of WO1998012637A1 publication Critical patent/WO1998012637A1/en

Classifications

    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C29/00 Checking stores for correct operation; Subsequent repair; Testing stores during standby or offline operation
    • G11C29/70 Masking faults in memories by using spares or by reconfiguring
    • G11C29/78 Masking faults in memories by using spares or by reconfiguring using programmable devices
    • G11C29/80 Masking faults in memories by using spares or by reconfiguring using programmable devices with improved layout
    • G11C29/808 Masking faults in memories by using spares or by reconfiguring using programmable devices with improved layout using a flexible replacement scheme

Definitions

  • FIG. 4 illustrates a typical table cell 50(I) of FIG. 3.
  • Cells 50(1)..50(N) are all similar.
  • Cell 50(I) includes a set of three registers 60, 61 and 62.
  • Register 60 stores the 12-bit instruction and register 62 stores an address.
  • Register 61 stores a single bit VALID indicating whether the address in register 62 is valid.
  • a conventional bus interface circuit 66 provides host computer 14 of FIG. 1 read and write access to registers 60-62 via computer bus 16.
  • a comparator 64 compares the address currently on the ADDR lines to the address stored in register 62. If the two addresses match, comparator 64 asserts an output signal HIT supplied to an input of an AND gate 67.
  • the VALID bit stored in register 61 drives a second input of AND gate 67.
  • FIG. 5 is a flow chart illustrating a program carried out by host computer 14 of FIG. 1 when checking DRAM 12 for defective cells and updating the address/instruction data in switch controller 24 of FIG. 1.
  • Host computer 14 suitably executes the program on system startup and periodically thereafter. Referring to FIG. 5, the host computer selects the first DRAM address (step 70) and then checks the memory cells of the selected address (step 72) to determine if any cells are defective.
  • FIG. 6 is a timing diagram illustrating system operation during a memory read cycle.
  • Controller 20 initially (at time T0) places the row address on a set of ADDRESS lines leading to DRAM 12, drives an output enable signal (OE-) low, and drives a write enable strobe signal (WR-) input to DRAM 12 high to indicate that a memory read cycle is in progress. Controller 20 also drives a row address strobe signal (RAS-) low to tell DRAM 12 that the row address is available. DRAM 12 then begins decoding the row address. Thereafter, at time T1, controller 20 places the column address on the ADDRESS lines and drives a column address strobe signal (CAS-) low.
  • DRAM 12 responds to the CAS- signal by decoding the column address and then (time T3) sending the data stored at the currently addressed memory location to crossbar switch 18 via the DATA_I/O lines.
  • The period between times T0 and T3 is typically 40-70 ns.
  • switch controller 24 configures crossbar switch 18 to appropriately route 32 of the 34 data bits to be read out of DRAM 12 to the DATA lines of bus 16.
  • Within 15 ns of detecting the start of a memory access cycle, switch controller 24 generates a switching instruction (INST) to appropriately configure crossbar switch 18 for the current DRAM address (time T1).
  • Crossbar switch 18 receives the instruction at time T1 and by time T2 has created paths between the DATA and DATA_I/O lines in accordance with the instruction. Thus by the time DRAM 12 reads out its addressed data at time T3, crossbar switch 18 is ready to route it to the DATA lines. The data then passes through the crossbar switch and appears on the DATA lines by time T4.
  • the small delay (less than 10 ns) between times T3 and T4 represents the transit time of the signal through crossbar switch 18 and is the only memory access time overhead of the dynamic column replacement feature of the memory system.
  • FIG. 7 is a diagram illustrating timing of system operation during a memory write access.
  • controller 20 drives the OE- signal high to indicate data is to flow into DRAM 12, places the row address portion of the DRAM address on the ADDRESS lines and then drives the RAS- strobe low (time T0).
  • DRAM 12 then begins decoding the row address.
  • controller 20 places the column address on the ADDRESS lines and drives the CAS- signal low.
  • DRAM 12 then decodes the column address. Controller 20 then sends a negative-going WR- signal pulse (times T3-T4) to DRAM 12.
  • On the leading edge of the WR- pulse (time T3), DRAM 12 stores the data appearing on its DATA_I/O inputs at its currently addressed memory location.
  • The period between times T0 and T4 is typically 40-70 ns.
  • switch controller 24 configures crossbar switch 18 to appropriately interconnect the DATA and DATA_I/0 lines for the current memory address.
  • switch controller 24 detects the start of a memory access cycle and by time T1, less than 15 ns later, has generated an appropriate switching instruction (INST).
  • Crossbar switch 18 receives the instruction at time T1 and at time T2, within 15 ns after T1, has established the indicated paths between the DATA and DATA_I/O lines.
  • the data from the host then passes through switch 18 and arrives at DRAM 12 via the DATA_I/0 lines at time T3 when the DRAM is ready to store the data.
  • the dynamic spare column replacement system does not degrade performance of DRAMs having write cycle speeds (elapsed time T0-T4) of 50 ns or more.
  • switch controller 24 of FIG. 1 stores a separate switching instruction for each DRAM 12 address having a defective memory cell .
  • the particular column of cells replaced by a spare column depends on the current address.
  • controller 24 may assign spare column replacement with lower address resolution. For example, where DRAM memory is organized into a set of memory banks, the first few bits of an address may refer to a particular memory bank. In an alternative embodiment of the invention, when a memory cell at some particular address of a memory bank becomes defective, the host computer may store in switch controller 24 a corrective switching instruction and only the first few bits of the address, the bits referring to that bank.
  • Thereafter, when an address within that bank is accessed, controller 24 tells switch 18 to use the spare column in place of the column containing the defective cell.
  • the spare column replaces all cells of a particular column in that bank even though only one cell of the column may be defective.
  • spare columns are assigned on a bank-by-bank basis rather than on an address- by-address basis as in the preferred embodiment.
  • the preferred embodiment of the dynamic spare column replacement system provides more efficient use of spare cell resources.
  • In the preferred embodiment, switch controller 24 must, for example, store and process 32-bit address words and therefore requires large registers and comparators.
  • In the alternative embodiment, the switch controller need only store and process the first few bits of a large address and therefore has smaller registers and comparators.
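The trade-off between the two embodiments described above can be illustrated with a short software sketch comparing full-address matching against bank-prefix matching. The address and bank widths below are assumptions chosen for illustration, not values fixed by the patent:

```python
# Illustrative comparison of the two replacement granularities:
# full-address matching (preferred embodiment) versus matching only
# the leading bank bits (alternative embodiment). Widths are assumed.

ADDR_BITS = 18   # e.g. 256K rows -> 18-bit row addresses
BANK_BITS = 2    # first few bits select a memory bank (assumed width)

def full_match(stored_addr, current_addr):
    """Preferred embodiment: replace only at the one defective row."""
    return stored_addr == current_addr

def bank_match(stored_bank, current_addr):
    """Alternative embodiment: replace the column in every row of the bank."""
    return (current_addr >> (ADDR_BITS - BANK_BITS)) == stored_bank

defective = 0x2A31B & 0x3FFFF            # hypothetical defective row address
bank = defective >> (ADDR_BITS - BANK_BITS)

assert full_match(defective, defective)
assert not full_match(defective, defective ^ 1)  # neighboring row: no swap
assert bank_match(bank, defective ^ 1)           # same bank: swapped anyway
```

The sketch shows why the alternative embodiment needs smaller registers and comparators (only BANK_BITS bits are stored and compared) but wastes spare cells, since every row of the bank gives up a working column for one defective cell.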

Abstract

An MxN dynamic spare column replacement memory system for storing M N-bit data words includes a dynamic random access memory (DRAM) (12) formed by a rectangular array of M rows and N+S columns of single-bit memory cells. Each row has a unique address and stores an N-bit word using a selected set of N of its N+S cells. An N-line parallel data bus provides data access to the DRAM (12). Responding to a switching instruction from a switch controller (24) at the start of each memory access cycle, a crossbar switch (18) selectively connects each of the N lines of the data bus to a separate one of the N+S columns. Thus, during a memory read or write access cycle, the N data lines access N cells of an addressed row. The remaining S cells of the row are unused. A host computer (14) occasionally checks the DRAM (12) for defective memory cells, and upon finding a defective cell, or cells, in any row, the host (14) stores the row address and a switching instruction in the switch controller (24). At the beginning of each memory access cycle, the switch controller (24) compares the DRAM (12) address to its stored list of addresses of rows having a defective cell. If the current DRAM (12) address matches a stored address, the switch controller (24) switches data bus lines from columns containing the defective cell to spare columns in accordance with a switching instruction stored with the address. Thus, spare cells are assigned for replacement of defective cells on an address-by-address basis.

Description

DYNAMIC SPARE COLUMN REPLACEMENT MEMORY SYSTEM
Background of the Invention
Field of the Invention
The present invention relates to a system for dynamically replacing damaged cells of a random access memory when they are addressed.
Description of Related Art
In orbiting satellites and in other high radiation environments, memory cells within a dynamic random access memory (DRAM) are subject to damage when struck by high-energy particles. One solution to this problem has been to use memories, such as, for example, Silicon Sapphire Insulator (SSI) static random access memories (SRAMs), that are less subject to radiation damage than DRAMs. However, SRAMs are more expensive to manufacture and require four to five times as much integrated circuit surface area.
A second solution to the DRAM radiation damage problem has been to use redundant memory cells at each storage location so that a bit written to a particular address is actually written concurrently to several memory cells. When reading the bit stored at that address, each memory cell "votes" for the logic state of its stored bit. The state receiving the most votes is taken to be the correct state of the bit. Thus, for example, if each storage location has five memory cells, the memory would operate correctly if no more than two of the five cells were damaged. The vote counting process may be software or hardware implemented. While this solution improves DRAM reliability, it substantially increases hardware and/or software overhead.
Spare column replacement is a third solution to the radiation damage problem. Normally an M-word, N-bit DRAM is an array of M rows and N columns of memory cells. Each row has a unique address and stores an N-bit word with each bit of the word being stored in a separate memory cell along that row. In a spare column replacement system, the DRAM is provided with one or more spare columns of memory cells. Each memory address of the
[Figure imgf000004_0001: page image of the specification; its text is not transcribed here]
controller compares the current DRAM address to its stored addresses. If the currently addressed row does not match one of its stored addresses, the switch controller assumes the row has no defective cells and sends a switching instruction to the crossbar switch telling it to connect the data lines to the first N cells of the addressed row. However if the currently addressed row matches one of its stored addresses, the currently addressed row has a defective cell. In such case the switch controller transmits a stored switching instruction to the crossbar switch telling it to connect one of the N data lines to a spare cell instead of to the defective cell. Thus a bit is written to or read out of the spare cell instead of the defective cell.
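The redundant-cell "voting" scheme described above as a second solution lends itself to a short software model. This is an illustrative sketch only; the five-cell count comes from the example in the text, while the helper names are assumptions, and a real implementation would be hardware or driver software:

```python
# Sketch of majority voting over redundant cells: the same bit is
# written to every cell, and a read takes the majority of the votes.

CELLS_PER_BIT = 5  # five redundant cells per storage location

def write_bit(cells, value):
    """Write the bit concurrently to every redundant cell."""
    for i in range(len(cells)):
        cells[i] = value

def read_bit(cells):
    """Each cell 'votes' for its stored state; the majority wins."""
    ones = sum(cells)
    return 1 if ones > len(cells) // 2 else 0

# Two of five cells damaged (stuck at 0): the read is still correct
# because the three undamaged cells out-vote them.
cells = [0] * CELLS_PER_BIT
write_bit(cells, 1)
cells[0] = cells[1] = 0
assert read_bit(cells) == 1
```

As the text notes, this tolerates up to two damaged cells per five-cell location but multiplies the storage and adds vote-counting overhead on every read, which is what the spare-column approach avoids.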
It is accordingly an object of the invention to provide a memory system that detects and replaces memory cells as they become defective using spare cells in an efficient manner.
The concluding portion of this specification particularly points out and distinctly claims the subject matter of the present invention. However those skilled in the art will best understand both the organization and method of operation of the invention, together with further advantages and objects thereof, by reading the remaining portions of the specification in view of the accompanying drawing(s) wherein like reference characters refer to like elements.
Brief Description of the Drawing(s)
FIG. 1 depicts in block diagram form a memory system with dynamic spare column replacement in accordance with the present invention;
FIG. 2 depicts the crossbar switch of FIG. 1 in more detailed block and schematic diagram form;
FIG. 3 depicts the switch controller of FIG. 1 in more detailed block diagram form;
FIG. 4 depicts a typical controller cell of FIG. 3 in more detailed block diagram form; FIG. 5 is a flow chart illustrating operations carried out by the host computer of FIG. 1; FIG. 6 is a timing diagram illustrating a read cycle of the memory system of FIG. 1; and
FIG. 7 is a timing diagram illustrating a write cycle of the memory system of FIG. 1.
Description of the Preferred Embodiment(s)
FIG. 1 illustrates in block diagram form a dynamic spare column replacement system 10 in accordance with the present invention. Memory system 10 includes a 256K x 34-bit DRAM 12 having 256K rows and 34 columns of 1-bit memory cells. Each row of cells has a unique address and stores a separate one of the 256K 32-bit data words. Normally only the first 32 1-bit memory cells within each row store the row's 32-bit data word. The last two cells of each row are spares, available as replacements when one or two of the first 32 cells become damaged, for example by radiation. Thus the last two columns of memory cells within DRAM 12 are "spare" columns. In accordance with the invention, memory system 10 determines when a memory cell in any row becomes defective and thereafter reconfigures data access to DRAM 12 so as to functionally replace the defective cell with one of the row's spare cells whenever the row is read or write accessed.
A host computer 14 read and write accesses DRAM 12 via a conventional computer bus 16. Bus 16 includes an address bus
(ADDR) and a control bus (CONT) through which host computer 14 sends address and control signals to a conventional DRAM controller 20. Bus 16 also includes a 32-bit parallel data bus (DATA) connected to DRAM 12 through a crossbar switch 18. To read data stored at a particular DRAM 12 address, host computer 14 places an address word on the ADDR bus and signals DRAM controller 20 via the CONT bus. In response, DRAM controller 20 transmits various signals (WR-, OE-, RAS-, CAS- and ADDRESS) to DRAM 12 telling it to read out all 34 bits stored in each of the 34 cells of the addressed row. DRAM 12 places the 34-bit data word on 34 lines (DATA_I/O) leading to crossbar switch 18. The crossbar switch 18 selectively routes 32 of the 34 bits appearing on the DATA_I/O lines to the DATA lines of bus 16 for delivery to host computer 14. When host computer 14 initiates a memory write operation, it places a 32-bit data word on the DATA lines of bus 16, places an address word on the ADDR lines, and signals DRAM controller 20 via the CONT lines to commence a write operation. DRAM controller 20 then tells DRAM 12 via the RAS-, CAS-, WR-, OE- and ADDRESS signals to store data appearing on the DATA_I/O lines from crossbar switch 18 in the row referenced by the address on the ADDR bus.
Crossbar switch 18 selectively routes the 32-bit data word appearing on the DATA lines to 32 of the 34 DATA_I/O lines leading to DRAM 12 so that DRAM 12 stores the data word from the host computer 14. The manner in which crossbar switch 18 interconnects the DATA lines to the DATA_I/O lines determines which 32 of the 34 memory cells of the addressed DRAM row actually receive and store the 32 bits appearing on the DATA lines. During a memory read or write cycle, crossbar switch 18 normally connects the first 32 of the 34 DATA_I/O lines of DRAM 12 to the 32 lines of the DATA bus. Thus the 32-bit data words are normally written to and read from the first 32 cells of each DRAM row. However, when host computer 14 read or write accesses a DRAM 12 address known to have a defective memory cell, crossbar switch 18 disconnects the DATA line normally connected to the DATA_I/O line servicing the defective cell and reconnects that DATA line to one of the two spare cells of the currently addressed row. Thus one of the spare cells in the currently addressed row assumes the function of the defective cell.
A switch controller 24 monitors the CONT lines of bus 16 to determine when host computer 14 is read or write accessing DRAM 12. Switch controller 24 stores a list of DRAM 12 addresses containing defective memory cells. Controller 24 also stores along with each address a switching instruction indicating how crossbar switch 18 is to connect the DATA_I/O lines to the DATA lines when that address is read or write accessed. When switch controller 24 detects from the CONT signals on bus 16 that a memory read or write access has begun, it compares the current memory access address on the ADDR lines of bus 16 to its list of addresses having defective cells. If the current address is not on the list, the currently addressed DRAM row has no defective cells. In such case switch controller 24 sends an instruction (INST) to switch 18 telling switch 18 to connect the 32 DATA lines to the first 32 of the DATA_I/O lines. Switch 18 operates early in the memory access cycle, before DRAM 12 operates. Thereafter, later in the memory access cycle, after DRAM 12 has had time to receive control signals from DRAM controller 20 and carry out its read or write operation, the DATA lines access the first 32 memory cells of the currently addressed row within DRAM 12. Thus the word on the DATA lines is written to or read from the first 32 cells of the addressed row.
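The lookup behavior just described can be modeled in software as a table keyed by row address, with a hard-wired default instruction for rows not on the defect list. This is a behavioral sketch, not the patent's register-level design; the class name, instruction values, and addresses are hypothetical:

```python
# Behavioral model of switch controller 24: a defect table maps row
# addresses to stored 12-bit switching instructions; any other address
# gets the default straight-through instruction INST(0).

INST_0 = 0x000  # hard-wired default: DATA lines 1-32 -> DATA_I/O lines 1-32

class SwitchController:
    def __init__(self):
        self.table = {}  # row address -> 12-bit switching instruction

    def store_entry(self, address, instruction):
        """Host computer writes a defective-row address and instruction."""
        self.table[address] = instruction

    def instruction_for(self, address):
        """At the start of each access cycle, look up the current address."""
        return self.table.get(address, INST_0)

ctl = SwitchController()
ctl.store_entry(0x1A2B, 0x041)               # hypothetical defective row
assert ctl.instruction_for(0x1A2B) == 0x041  # stored instruction selected
assert ctl.instruction_for(0x0000) == INST_0 # unknown row: default routing
```

In the hardware, the table entries are the addressable table cells 50(1)..50(N) and the default is supplied by a multiplexer, but the selection logic reduces to this lookup.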
On the other hand, switch controller 24 may determine from its internal address list that the currently addressed row in DRAM 12 has a defective memory cell. In that case, switch controller 24 sends an instruction to crossbar switch 18 telling it to connect the DATA line normally connected to the DATA_I/O line leading to the defective cell to a DATA_I/O line leading to one of the spare cells. Thereafter, during the latter portion of the memory access cycle, the host computer 14 read or write accesses the spare cell instead of the defective cell.
Host computer 14 determines which memory cells are defective and maintains the list of defective DRAM addresses and instructions stored within switch controller 24. On system start-up and occasionally thereafter, host computer 14 tests every address in DRAM 12 to determine if any cells of the DRAM are defective. When it finds a row containing a defective cell, host computer 14 writes the address and an appropriate switching instruction into switch controller 24. As described in detail below, switch controller 24 includes a set of addressable registers for holding addresses and instructions, and host computer 14 read and write accesses those registers via bus 16.
FIG. 2 is a block diagram illustrating crossbar switch 18 of FIG. 1 in more detail. Switch 18 includes a set of 32 ports 38(1)-38(32) for receiving the 32 DATA lines of bus 16 of FIG. 1 and another set of 34 ports 39(1)-39(34) for receiving the 34 DATA_I/O lines of FIG. 1, a set of 32 horizontal conductors 40(1)-40(32), each connected to a corresponding one of ports 38(1)-38(32), and a set of 34 vertical conductors 42(1)-42(34), each connected to a separate one of ports 39(1)-39(34). A set of pass transistors 44, under control of output signals 46 produced by a decoder 48, selectively interconnects the horizontal and vertical conductors so as to provide signal paths therebetween. Three pass transistors 44 are provided for each horizontal line 40(I), where I is any member of the set {1..32}. For any value of I, the three pass transistors 44 selectively link horizontal line 40(I) to one of vertical lines 42(I), 42(33) and 42(34).
During a memory read or write access cycle, decoder 48 receives a 12-bit instruction INST from switch controller 24 of FIG. 1. The first bit of the INST instruction indicates whether the first spare memory cell of a currently addressed row is to replace one of the first 32 cells during the current memory access cycle. Bits 2-6 of the instruction indicate which of the first 32 cells the first spare cell is to replace. Similarly, bit 7 of instruction INST indicates whether the second spare memory cell of the addressed row is to replace one of the first 32 cells, and bits 8-12 indicate which of the first 32 cells the second spare cell is to replace. Decoder 48 decodes instruction INST and asserts a control signal at the gate of one pass transistor 44 for each horizontal conductor 40(I) so as to connect the 32 DATA lines to a selected subset of 32 of the 34 DATA_I/O lines as determined by instruction INST.
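The text fixes the roles of the twelve instruction bits but not their exact packing; the sketch below assumes one plausible packing (an enable bit in the low position followed by a 5-bit column index for each spare) purely for illustration.

```python
def decode_inst(inst: int) -> dict:
    """Decode a 12-bit INST into {DATA line (1-32): spare column (33 or 34)}.
    Assumed packing: bit 1 = spare-1 enable, bits 2-6 = replaced column - 1,
    bit 7 = spare-2 enable, bits 8-12 = replaced column - 1."""
    routing = {}
    if inst & 0x001:                          # first spare enabled
        routing[((inst >> 1) & 0x1F) + 1] = 33
    if inst & 0x040:                          # second spare enabled
        routing[((inst >> 7) & 0x1F) + 1] = 34
    return routing

def connect(data_line: int, inst: int) -> int:
    """Crossbar behaviour: straight through unless INST reroutes the line."""
    return decode_inst(inst).get(data_line, data_line)
```

With INST = 0 every DATA line connects straight through; with the first spare enabled and its index field set to 4, DATA line 5 is rerouted to spare column 33 while all other lines are unaffected.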
FIG. 3 illustrates switch controller 24 of FIG. 1 in more detailed block diagram form. Controller 24 includes a set of N table cells 50(1)..50(N), where N may be any number up to 256. Each cell 50 is capable of storing a DRAM address and a 12-bit switching instruction. Host computer 14 of FIG. 1 may separately access each table cell 50(I) via computer bus 16 so as to read/modify/write the address and instruction pair stored therein. Each table cell 50(I) separately monitors the address appearing on the ADDR lines. If the address on the ADDR lines matches the address stored in any table cell 50(I), that cell sends its 12-bit stored instruction INST(I) to a multiplexer 52 and asserts a REPLACE signal. The REPLACE signal tells multiplexer 52 to forward the instruction INST(I) as switching instruction INST to crossbar switch 18 of FIG. 1. This instruction configures the switch to replace one or two defective cells at the current address with spare cell(s). If the current DRAM address on the ADDR lines does not match an address stored in any table cell, then none of cells 50(1)..50(N) asserts the REPLACE signal. In that case, multiplexer 52 forwards a 12-bit hard-wired instruction INST(0) to switch 18. The INST(0) instruction tells switch 18 to connect the DATA lines to the first 32 DATA_I/O lines of FIG. 1 in the normal manner, since none of the first 32 cells of the addressed DRAM row are known to be defective.
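Functionally, the bank of table cells plus multiplexer 52 behaves like an associative lookup with a hard-wired default. A behavioural sketch (class and method names are illustrative, not from the patent):

```python
INST_DEFAULT = 0  # stands in for the hard-wired INST(0): straight-through routing

class SwitchController:
    """Behavioural model of controller 24: match the incoming address
    against stored entries; on a hit forward that entry's instruction,
    otherwise forward the default."""
    def __init__(self):
        self.table = {}                    # address -> 12-bit instruction

    def load_entry(self, addr, inst):      # host write via bus 16
        self.table[addr] = inst

    def lookup(self, addr):                # per-access match; a hit asserts REPLACE
        return self.table.get(addr, INST_DEFAULT)
```

In hardware the comparison happens in all N table cells in parallel, which is what lets the controller produce INST within a few nanoseconds of the address appearing on the ADDR lines.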
FIG. 4 illustrates a typical table cell 50(I) of FIG. 3. Cells 50(1)..50(N) are all similar. Cell 50(I) includes a set of three registers 60, 61 and 62. Register 60 stores the 12-bit instruction and register 62 stores an address. Register 61 stores a single bit VALID indicating whether the address in register 62 is valid. A conventional bus interface circuit 66 provides host computer 14 of FIG. 1 read and write access to registers 60-62 via computer bus 16. A comparator 64 compares the address currently on the ADDR lines to the address stored in register 62. If the two addresses match, comparator 64 asserts an output signal HIT supplied to an input of an AND gate 67. The VALID bit stored in register 61 drives a second input of AND gate 67. If the HIT and VALID bits are both true, AND gate 67 turns on tri-state buffers 68 and 69. When turned on, buffer 68 forwards the instruction INST(I) stored in register 60 to multiplexer 52 of FIG. 3, and buffer 69 asserts the REPLACE signal supplied to the control input of multiplexer 52 of FIG. 3.

FIG. 5 is a flow chart illustrating a program carried out by host computer 14 of FIG. 1 when checking DRAM 12 for defective cells and updating the address/instruction data in switch controller 24 of FIG. 1. Host computer 14 suitably executes the program on system start-up and periodically thereafter. Referring to FIG. 5, the host computer selects the first DRAM address (step 70) and then checks the memory cells of the selected address (step 72) to determine if any cells are defective. The host
"column address". Controller 20 initially (at time TO) places the row address on a set of ADDRESS lines leading to DRAM 12, drives and output enable signal (OE-) low and a write enable strobe signal (WR-) input to DRAM 12 high to indicate that a memory read cycle is in progress. Controller 20 also drives a row address strobe signal (RAS-) low to tell DRAM 12 that the row address is available. DRAM 12 then begins decoding the row address. Thereafter, at time Tl controller 20 places the column address on the ADDRESS lines and drives a column address strobe signal (CAS-) low. DRAM 12 responds to the CAS- signal by decoding the column address and then (time T3 ) sending the data stored at the currently addressed memory location to crossbar switch 18 via the DATA_I/0 lines. Currently, for typical prior art DRAMs, the period between times TO and T3 is 40-70 ns . While DRAM controller 20 is busy transmitting the row and column addresses to DRAM 12, switch controller 24 configures crossbar switch 18 to appropriately route 32 of the 34 data bits to be read out of DRAM 12 to the DATA lines of bus 16. Switch controller 24, within 15 ns of detecting the start of a memory access cycle switch controller 24, generates a switching instruction (INST) to appropriately configure crossbar switch 18 for the current DRAM address (time Tl) . Crossbar switch 18 receives the instruction at time Tl and by time T2 has created paths between the DATA and DATA_I/0 lines in accordance with the instruction. Thus by the time DRAM 12 reads out its addressed data at time T3 , crossbar switch 18 is ready to route it to the DATA lines. The data then passes through the crossbar switch and appears on the DATA lines by time T4. The small delay (less than 10 ns) between times T3 and T4 represents the transit time of the signal through crossbar switch 18 and is the only memory access time overhead of the dynamic column replacement feature of the memory system.
FIG. 7 is a diagram illustrating timing of system operation during a memory write access. Referring to FIGS. 1 and 7, when DRAM controller 20 receives a DRAM address on the ADDR bus along with control signals indicating a memory write cycle, controller 20 drives the OE- signal high to indicate data is to flow into DRAM 12, places the row address portion of the DRAM address on the ADDRESS lines and then drives the RAS- strobe low (time T0). DRAM 12 then begins decoding the row address. Thereafter, at time T1 controller 20 places the column address on the ADDRESS lines and drives the CAS- signal low. DRAM 12 then decodes the column address. Controller 20 then sends a negative-going WR- signal pulse (times T3-T4) to DRAM 12. On the leading edge of the WR- pulse (time T3), DRAM 12 stores the data appearing on its DATA_I/O inputs at its currently addressed memory location. Currently, for typical prior art DRAMs, the period between times T0 and T4 is 40-70 ns.
While DRAM controller 20 is transmitting the row and column addresses to DRAM 12, switch controller 24 configures crossbar switch 18 to appropriately interconnect the DATA and DATA_I/O lines for the current memory address. At time T0, switch controller 24 detects the start of a memory access cycle and by time T1, less than 15 ns later, has generated an appropriate switching instruction (INST). Crossbar switch 18 receives the instruction at time T1 and at time T2, within 15 ns after T1, has established the indicated paths between the DATA and DATA_I/O lines. The data from the host then passes through switch 18 and arrives at DRAM 12 via the DATA_I/O lines at time T3 when the DRAM is ready to store the data. Given that the time the system requires to appropriately set up and route the data through crossbar switch 18 is 40 ns or less, and the pulse width (T3-T4) of the WR- signal is 10 ns, the dynamic spare column replacement system does not degrade performance of DRAMs having write cycle speeds (elapsed time T0-T4) of 50 ns or more.
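The write-cycle claim above reduces to simple arithmetic; the following sketch merely re-checks the stated budget using the figures quoted in the text.

```python
# Worst-case write-path budget, in nanoseconds, as quoted in the text:
T_DETECT_TO_INST = 15   # switch controller emits INST (T0 -> T1)
T_INST_TO_PATHS  = 15   # crossbar establishes paths   (T1 -> T2)
T_SWITCH_TRANSIT = 10   # data transit through the switch (arrives by T3)
T_WR_PULSE       = 10   # WR- pulse width (T3 -> T4)

setup = T_DETECT_TO_INST + T_INST_TO_PATHS + T_SWITCH_TRANSIT
assert setup <= 40                # the "40 ns or less" set-up-and-route time
assert setup + T_WR_PULSE <= 50   # hence no penalty for >= 50 ns write cycles
```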
While the foregoing specification has described preferred embodiment(s) of the present invention, one skilled in the art may make many modifications to the preferred embodiment without departing from the invention in its broader aspects. For example, while a 256Kx32 dynamic spare column replacement memory system having 2 spare columns is described hereinabove, one skilled in the art will appreciate that the size of the memory system and the number of spare columns is a matter of design choice and can be easily modified.

In the preferred embodiment of the invention, switch controller 24 of FIG. 1 stores a separate switching instruction for each DRAM 12 address having a defective memory cell. Thus the particular column of cells replaced by a spare column depends on the current address. For example, for DRAM address N, a cell in a spare column may be assigned to replace memory cell 5 of the addressed DRAM row, while for DRAM address N+1 the cell of that same spare column may be assigned to replace memory cell 8 of the addressed DRAM row.

In alternative embodiments of the invention, controller 24 may assign spare column replacement with lower address resolution. For example, where DRAM memory is organized into a set of memory banks, the first few bits of an address may refer to a particular memory bank. In such an alternative embodiment, when a memory cell at some particular address of a memory bank becomes defective, the host computer may store in switch controller 24 a corrective switching instruction and only the first few bits of the address, the bits referring to that bank. Thereafter, whenever any address in that memory bank is read or write accessed, controller 24 tells switch 18 to use the spare column in place of the defective cell. In this embodiment the spare column replaces all cells of a particular column in that bank even though only one cell of the column may be defective.
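In the bank-based alternative, the comparator in each table cell would compare only the bank-select bits of the address. A sketch of that reduced comparison (the bit widths and function name are illustrative; the patent does not fix them):

```python
def bank_match(stored_bank: int, bank_bits: int, addr: int, addr_width: int) -> bool:
    """Match only the top `bank_bits` of an `addr_width`-bit address,
    so one stored entry covers every address in a memory bank."""
    return (addr >> (addr_width - bank_bits)) == stored_bank
```

With 2 bank bits on an 18-bit address, for instance, a single stored entry for bank 2 matches all 65,536 addresses in that bank, at the cost of retiring the whole column in that bank for one bad cell.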
Thus in the alternative embodiment, spare columns are assigned on a bank-by-bank basis rather than on an address-by-address basis as in the preferred embodiment. The preferred embodiment of the dynamic spare column replacement system provides more efficient use of spare cell resources. But in the preferred embodiment, switch controller 24 must, for example, store and process 32-bit address words and therefore requires large registers and comparators. In the alternative embodiment, the switch controller need only store and process the first few bits of a large address and therefore has smaller registers and comparators. The appended claims therefore are intended to cover all such modifications as fall within the true scope and spirit of the invention.

Claims

What is claimed is:
1. A dynamic spare column replacement memory system for storing an input N-bit data word (where N is greater than 0) at an address indicated by an input address word, comprising: memory means, having a plurality of N+S bit addressable data storage locations, for receiving the input address word and an input N+S bit data word and for storing the input N+S bit data word in any one of said plurality of storage locations addressed by the address word; and routing means receiving the input address word and the input
N-bit data word, for delivering the N-bit data word to said memory means as a selected subset of N bits of said N+S bit data word, said routing means selecting the subset of N bits as a function of a value of the received input address word.
2. The memory system in accordance with claim 1 wherein said routing means comprises: an N+S line data bus connected for conveying said N+S bit data word to said memory means; switch means having N first ports, each for receiving a separate bit of said input N-bit data word, and having N+S second ports, each connected to a separate line of said N+S line data bus, and having means for interconnecting each of said first ports to a separate one of said second ports selected in response to an input switch instruction; and switch control means for receiving said input address word, for generating a switch instruction in response to a value of the input address, and for delivering the generated switch instruction to said switch means as said input switch instruction.
3. The memory system in accordance with claim 2 wherein said switch control means comprises: a plurality of storage cells, each for storing a separate address value and a corresponding switch instruction; and means for receiving said input address word and for transmitting the switch instruction stored in any one storage cell of said storage cells when the input address word matches an address value stored in said storage cell.
4. The memory system in accordance with claim 2 wherein said switch control means comprises: a plurality of storage cells, each for storing a separate address value and a corresponding first switch instruction; and means for receiving said input address word, for transmitting the switch instruction stored in any one storage cell of said storage cells to said switch means when the input address word matches an address value stored in said storage cell, and for generating and transmitting a second switch instruction to the switch means when the input address word does not match an address value stored in any one of said storage cells.
5. The memory system in accordance with claim 3 wherein each addressable data storage location of said memory means comprises N+S single bit memory cells, each for storing a separate bit of an N+S bit data word.
6. The memory system in accordance with claim 5 further comprising means for determining when a memory cell of any one of said addressable data storage locations of said memory means becomes defective and for storing a switch instruction and an address value referencing the addressable data storage location having the defective memory cell in one of the storage cells of said switch control means.
7. The memory system in accordance with claim 2 wherein said switch control means comprises: a plurality of storage cells, each for storing a separate address value and a corresponding switch instruction; and means for receiving said input address word and for transmitting the switch instruction stored in any one storage cell of said storage cells when a portion of the input address word matches an address value stored in said storage cell.
8. The memory system in accordance with claim 2 wherein said switch control means comprises: a plurality of storage cells, each for storing a separate address value and a corresponding first switch instruction; means for receiving said input address word, for transmitting the switch instruction stored in any one storage cell of said storage cells to said switch means when a portion of the input address word matches an address value stored in said storage cell, and for generating and transmitting a second switch instruction to the switch means when said portion of the input address word does not match an address value stored in any one of said storage cells.
9. The memory system in accordance with claim 7 wherein each addressable data storage location of said memory means comprises N+S single bit memory cells, each for storing a separate bit of an N+S bit data word.
10. The memory system in accordance with claim 9 further comprising means for determining when a memory cell of any one of said addressable data storage locations of said memory means becomes defective and for storing a switch instruction and an address value matching a portion of an address of the addressable data storage location having the defective memory cell in one of the storage cells of said switch control means.
11. The memory system in accordance with claim 1 wherein said memory means comprises a dynamic random access memory.
12. The memory system in accordance with claim 1 wherein said routing means comprises a crossbar switch.
13. A dynamic column replacement memory system for storing an N-bit data word arriving on a data bus at an address indicated by a value of an input address, the apparatus comprising: a memory comprising an array of M rows and N+S columns of memory cells, where M is greater than 1, and N and S are each greater than 0, each row of said array corresponding to a separate value of said input address; switch means for connecting said data bus to a selected subset of N of said N+S columns of memory cells in accordance with an input switching instruction; memory controller means for receiving said input address and for transmitting control signals to a row of said memory cells corresponding to a value of the input address, the control signals telling each cell of the row to store a bit of the data word appearing on any one of said data lines to which the cell may be connected via said switch means; and switch controller means for receiving said input address and for transmitting a switching instruction to said switch means in response to the input address, wherein the subset of N memory cells selected by said switching instruction is a function of the value of the received input address.
14. The memory system in accordance with claim 13 wherein said switch controller comprises means for receiving and storing a reference address and a switching instruction, for receiving said input address, for comparing said input address to the stored reference address, and for transmitting said stored instruction to said switch means when the input address matches the stored reference address.
15. The memory system in accordance with claim 14 further comprising means for determining when a cell of said array is defective and for transmitting said reference address and switching instruction to said switch controller for storage therein, wherein the value of the reference address corresponds to a row including the defective cell and wherein the subset of N memory cells selected by said switching instruction does not include the defective cell.
PCT/US1997/017186 1996-09-19 1997-09-19 Dynamic spare column replacement memory system WO1998012637A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/710,571 1996-09-19
US08/710,571 US5781717A (en) 1996-09-19 1996-09-19 Dynamic spare column replacement memory system

Publications (1)

Publication Number Publication Date
WO1998012637A1 true WO1998012637A1 (en) 1998-03-26

Family

ID=24854592

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1997/017186 WO1998012637A1 (en) 1996-09-19 1997-09-19 Dynamic spare column replacement memory system

Country Status (2)

Country Link
US (1) US5781717A (en)
WO (1) WO1998012637A1 (en)


Families Citing this family (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2850953B2 (en) * 1996-07-30 1999-01-27 日本電気株式会社 Semiconductor device
JP3459056B2 (en) * 1996-11-08 2003-10-20 株式会社日立製作所 Data transfer system
EP0911747B1 (en) * 1997-10-20 2004-01-02 STMicroelectronics S.r.l. CAD for redundant memory devices
JP3558252B2 (en) * 1997-11-10 2004-08-25 株式会社アドバンテスト Semiconductor memory test equipment
US6178549B1 (en) * 1998-03-12 2001-01-23 Winbond Electronics Corporation Memory writer with deflective memory-cell handling capability
US6408401B1 (en) 1998-11-13 2002-06-18 Compaq Information Technologies Group, L.P. Embedded RAM with self-test and self-repair with spare rows and columns
US6260156B1 (en) 1998-12-04 2001-07-10 Datalight, Inc. Method and system for managing bad areas in flash memory
KR100459162B1 (en) * 1999-01-23 2004-12-03 엘지전자 주식회사 Optical recording medium and method for formatting of the same
DE19917589C1 (en) * 1999-04-19 2000-11-02 Siemens Ag Random access memory
US7363422B2 (en) * 2000-01-05 2008-04-22 Rambus Inc. Configurable width buffered module
US7017002B2 (en) * 2000-01-05 2006-03-21 Rambus, Inc. System featuring a master device, a buffer device and a plurality of integrated circuit memory devices
US6706402B2 (en) 2001-07-25 2004-03-16 Nantero, Inc. Nanotube films and articles
US6574130B2 (en) 2001-07-25 2003-06-03 Nantero, Inc. Hybrid circuit having nanotube electromechanical memory
US6919592B2 (en) * 2001-07-25 2005-07-19 Nantero, Inc. Electromechanical memory array using nanotube ribbons and method for making same
US6911682B2 (en) 2001-12-28 2005-06-28 Nantero, Inc. Electromechanical three-trace junction devices
US6643165B2 (en) 2001-07-25 2003-11-04 Nantero, Inc. Electromechanical memory having cell selection circuitry constructed with nanotube technology
US7259410B2 (en) 2001-07-25 2007-08-21 Nantero, Inc. Devices having horizontally-disposed nanofabric articles and methods of making the same
US7566478B2 (en) 2001-07-25 2009-07-28 Nantero, Inc. Methods of making carbon nanotube films, layers, fabrics, ribbons, elements and articles
US6835591B2 (en) 2001-07-25 2004-12-28 Nantero, Inc. Methods of nanotube films and articles
US6924538B2 (en) 2001-07-25 2005-08-02 Nantero, Inc. Devices having vertically-disposed nanofabric articles and methods of making the same
US6784028B2 (en) 2001-12-28 2004-08-31 Nantero, Inc. Methods of making electromechanical three-trace junction devices
US7176505B2 (en) 2001-12-28 2007-02-13 Nantero, Inc. Electromechanical three-trace junction devices
US7335395B2 (en) 2002-04-23 2008-02-26 Nantero, Inc. Methods of using pre-formed nanotubes to make carbon nanotube films, layers, fabrics, ribbons, elements and articles
FR2843208B1 (en) * 2002-07-31 2005-03-04 Iroc Technologies DEVICE FOR RECONFIGURING A MEMORY SET WITH DEFECTS
US20040039873A1 (en) * 2002-08-21 2004-02-26 Hou-Yuan Lin Management system for access control modes of a DRAM module socket
US7560136B2 (en) 2003-01-13 2009-07-14 Nantero, Inc. Methods of using thin metal layers to make carbon nanotube films, layers, fabrics, ribbons, elements and articles
US7143306B2 (en) * 2003-03-31 2006-11-28 Emc Corporation Data storage system
JP4062247B2 (en) * 2003-12-11 2008-03-19 ソニー株式会社 Semiconductor memory device
DE602004008240T2 (en) * 2004-06-14 2008-05-15 Stmicroelectronics S.R.L., Agrate Brianza Method for managing defective memory blocks in a non-volatile memory and non-volatile memory for carrying out the method
US7464225B2 (en) * 2005-09-26 2008-12-09 Rambus Inc. Memory module including a plurality of integrated circuit memory devices and a plurality of buffer devices in a matrix topology
US7656727B2 (en) * 2007-04-25 2010-02-02 Hewlett-Packard Development Company, L.P. Semiconductor memory device and system providing spare memory locations
US8892942B2 (en) * 2007-07-27 2014-11-18 Hewlett-Packard Development Company, L.P. Rank sparing system and method
US7797595B2 (en) * 2008-06-18 2010-09-14 On-Chip Technologies, Inc. Serially decoded digital device testing
US8255610B2 (en) 2009-02-13 2012-08-28 The Regents Of The University Of Michigan Crossbar circuitry for applying a pre-selection prior to arbitration between transmission requests and method of operation of such crossbar circuitry
US8230152B2 (en) * 2009-02-13 2012-07-24 The Regents Of The University Of Michigan Crossbar circuitry and method of operation of such crossbar circuitry
US9514074B2 (en) 2009-02-13 2016-12-06 The Regents Of The University Of Michigan Single cycle arbitration within an interconnect
US8549207B2 (en) * 2009-02-13 2013-10-01 The Regents Of The University Of Michigan Crossbar circuitry for applying an adaptive priority scheme and method of operation of such crossbar circuitry
EP2502234B1 (en) 2009-11-20 2019-01-09 Rambus Inc. Bit-replacement technique for dram error correction
KR101277479B1 (en) * 2010-08-31 2013-06-21 에스케이하이닉스 주식회사 Semiconductor memory device
WO2012046343A1 (en) * 2010-10-08 2012-04-12 富士通株式会社 Memory module redundancy method, storage processing device, and data processing device
US9230620B1 (en) * 2012-03-06 2016-01-05 Inphi Corporation Distributed hardware tree search methods and apparatus for memory data replacement
US9158619B2 (en) * 2012-03-30 2015-10-13 Intel Corporation On chip redundancy repair for memory devices
US8942051B2 (en) * 2012-07-27 2015-01-27 Taiwan Semiconductor Manufacturing Company, Ltd. Mechanisms for built-in self test and repair for memory devices
US9411678B1 (en) 2012-08-01 2016-08-09 Rambus Inc. DRAM retention monitoring method for dynamic error correction
WO2014074390A1 (en) 2012-11-06 2014-05-15 Rambus Inc. Memory repair using external tags

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5179536A (en) * 1989-01-31 1993-01-12 Fujitsu Limited Semiconductor memory device having means for replacing defective memory cells
US5303192A (en) * 1989-03-20 1994-04-12 Fujitsu Limited Semiconductor memory device having information indicative of presence of defective memory cell
US5406565A (en) * 1989-06-07 1995-04-11 Mv Limited Memory array of integrated circuits capable of replacing faulty cells with a spare

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4051354A (en) * 1975-07-03 1977-09-27 Texas Instruments Incorporated Fault-tolerant cell addressable array
US4389715A (en) * 1980-10-06 1983-06-21 Inmos Corporation Redundancy scheme for a dynamic RAM
JPS58130495A (en) * 1982-01-29 1983-08-03 Toshiba Corp Semiconductor storage device
US4837747A (en) * 1986-11-29 1989-06-06 Mitsubishi Denki Kabushiki Kaisha Redundary circuit with a spare main decoder responsive to an address of a defective cell in a selected cell block
US5253354A (en) * 1990-08-31 1993-10-12 Advanced Micro Devices, Inc. Row address generator for defective DRAMS including an upper and lower memory device
JP2730375B2 (en) * 1992-01-31 1998-03-25 日本電気株式会社 Semiconductor memory
JP3040625B2 (en) * 1992-02-07 2000-05-15 松下電器産業株式会社 Semiconductor storage device
JP3268823B2 (en) * 1992-05-28 2002-03-25 日本テキサス・インスツルメンツ株式会社 Semiconductor storage device
US5321697A (en) * 1992-05-28 1994-06-14 Cray Research, Inc. Solid state storage device
JP3020077B2 (en) * 1993-03-03 2000-03-15 株式会社日立製作所 Semiconductor memory


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9117035B2 (en) 2005-09-26 2015-08-25 Rambus Inc. Memory system topologies including a buffer device and an integrated circuit memory device
US9563583B2 (en) 2005-09-26 2017-02-07 Rambus Inc. Memory system topologies including a buffer device and an integrated circuit memory device
US9865329B2 (en) 2005-09-26 2018-01-09 Rambus Inc. Memory system topologies including a buffer device and an integrated circuit memory device
US10381067B2 (en) 2005-09-26 2019-08-13 Rambus Inc. Memory system topologies including a buffer device and an integrated circuit memory device
US10535398B2 (en) 2005-09-26 2020-01-14 Rambus Inc. Memory system topologies including a buffer device and an integrated circuit memory device
US10672458B1 (en) 2005-09-26 2020-06-02 Rambus Inc. Memory system topologies including a buffer device and an integrated circuit memory device
US11043258B2 (en) 2005-09-26 2021-06-22 Rambus Inc. Memory system topologies including a memory die stack
US11328764B2 (en) 2005-09-26 2022-05-10 Rambus Inc. Memory system topologies including a memory die stack
US11727982B2 (en) 2005-09-26 2023-08-15 Rambus Inc. Memory system topologies including a memory die stack

Also Published As

Publication number Publication date
US5781717A (en) 1998-07-14

Similar Documents

Publication Publication Date Title
US5781717A (en) Dynamic spare column replacement memory system
US5111386A (en) Cache contained type semiconductor memory device and operating method therefor
US7539896B2 (en) Repairable block redundancy scheme
US7082491B2 (en) Memory device having different burst order addressing for read and write operations
KR940008140B1 (en) Semiconductor memory device having cash memory
US5519664A (en) Dynamic random access memory persistent page implemented as processor register sets
JP2777247B2 (en) Semiconductor storage device and cache system
US5627786A (en) Parallel processing redundancy scheme for faster access times and lower die area
US6041422A (en) Fault tolerant memory system
US6525987B2 (en) Dynamically configured storage array utilizing a split-decoder
US7350018B2 (en) Method and system for using dynamic random access memory as cache memory
US5367655A (en) Memory and associated method including an operating mode for simultaneously selecting multiple rows of cells
JPH10208493A (en) Memory having redundant array and control method
US20080072121A1 (en) Method and Apparatus For Repairing Defective Cell for Each Cell Section Word Line
US5708613A (en) High performance redundancy in an integrated memory system
US6542430B2 (en) Integrated memory and memory configuration with a plurality of memories and method of operating such a memory configuration
US6141727A (en) Device and method for controlling data storage device
CN117423376A (en) Memory control circuit, memory repairing method and electronic equipment
JP2000187620A (en) Semiconductor memory device
JPH03205680A (en) Memory device having a plurality of memory cell of matrix arrangement
JP2003141894A (en) Semiconductor memory
JPH04271087A (en) Semiconductor storage device with built-in cache memory
JP2000305858A (en) Semiconductor device and semiconductor system
JPH10188554A (en) Method for controlling memory of computer system, and computer system

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 1998515021

Format of ref document f/p: F

122 Ep: pct application non-entry in european phase