US20140032701A1 - Memory network methods, apparatus, and systems - Google Patents

Memory network methods, apparatus, and systems

Info

Publication number: US20140032701A1
Application number: US14/042,016
Authority: US (United States)
Prior art keywords: network node, network, path, node, processor
Legal status: Abandoned (the status listed is an assumption, not a legal conclusion)
Inventor: David R. Resnick
Original assignee: Micron Technology, Inc.
Current assignee: US Bank NA (the listed assignee may be inaccurate)
Priority: application US14/042,016; a later application, US15/888,725, issued as US10681136B2

Classifications

    • H04L67/1097: Protocols in which an application is distributed across nodes in the network, for distributed storage of data, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • G06F12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F13/14: Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units; handling requests for interconnection or transfer
    • G06F15/163: Interprocessor communication
    • G06F15/167: Interprocessor communication using a common memory, e.g. mailbox
    • G06F15/173: Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
    • G06F15/17381: Indirect interconnection networks, non-hierarchical topologies; two dimensional, e.g. mesh, torus

Definitions

  • DRAM: dynamic random access memory
  • IO: input/output
  • 3D: three-dimensional
  • ASIC: application specific integrated circuit
  • ECC: Error Check and Correction

Abstract

Apparatus and systems may include a first node group including a first network node coupled to a first memory, the first network node including a first port, a second port, a processor port, and a hop port. The node group may include a second network node coupled to a second memory, the second network node including a first port, a second port, a processor port, and a hop port, the hop port of the second network node coupled to the hop port of the first network node and configured to communicate between the first network node and the second network node. The node group may include a processor coupled to the processor port of the first network node and coupled to the processor port of the second network node, the processor configured to access the first memory through the first network node and the second memory through the second network node. Other apparatus, systems, and methods are disclosed.

Description

    RELATED APPLICATIONS
  • This is a divisional of U.S. patent application Ser. No. 12/389,200, filed Feb. 19, 2009, and incorporated herein by reference in its entirety.
  • BACKGROUND
  • Many electronic devices, such as personal computers, workstations, computer servers, mainframes and other computer-related equipment, including printers, scanners and hard disk drives, make use of memory that provides a large data storage capability, while attempting to incur low power consumption. One type of memory that is well suited for use in the foregoing devices is the dynamic random access memory (DRAM).
  • The demand for memory devices having increased capacity in large multi-processor systems continues to rise, even as chip size imposes limits. The surface area occupied by the components of individual memory cells has been steadily decreased, so that the packing density of the memory cells on a semiconductor substrate can be increased and gate delays reduced. However, shrinking the device surface area can reduce manufacturing yield, as well as increase the complexity of the interconnects used to connect the numerous banks of memory devices with other devices, such as processors. Additionally, during miniaturization, interconnect delays do not scale as well as gate delays.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Various embodiments are described in detail in the discussion below and with reference to the following drawings.
  • FIG. 1 is a diagrammatic block view of a bridge architecture for a memory system, according to various embodiments.
  • FIG. 2 is a diagrammatic block view of a shared bus architecture for a memory system, according to various embodiments.
  • FIG. 3 is a diagrammatic block view of a network architecture for a memory system showing interconnected network nodes having dedicated processors, according to various embodiments.
  • FIG. 4 is a diagrammatic block view of a network architecture for a memory system showing interconnected network nodes sharing processors, according to various embodiments.
  • FIG. 5 is a diagrammatic block view of a network architecture for a memory system showing network nodes placed in different geometric planes sharing processors, according to various embodiments.
  • FIG. 6 is a diagrammatic block view of a network architecture for a memory system showing network nodes placed in different spatial planes that are interconnected to each other and share processors, according to various embodiments.
  • FIG. 7 is a diagrammatic block view of a three dimensional memory system showing network nodes interconnected with each other and sharing a processor, according to various embodiments.
  • FIG. 8 is a diagrammatic block view of a memory system that allows for network fault recovery while recovering data from memory in a multi-dimensional memory network, according to various embodiments.
  • FIG. 9 is a flowchart that describes a method of routing data in a multi-dimensional memory system, according to various embodiments.
  • DETAILED DESCRIPTION
  • Various embodiments include processing systems, semiconductor modules, memory systems and methods. Specific details of several embodiments are set forth in the following description and in FIGS. 1 through 9 to provide an understanding of such embodiments. One of ordinary skill in the art, however, will understand that additional embodiments are possible, and that many embodiments may be practiced without several of the details disclosed in the following description. It is also understood that various embodiments may be implemented within a physical circuit that includes physical components (e.g., “hardware”), or they may be implemented using machine-readable instructions (e.g., “software”), or in some combination of physical components and machine-readable instructions (e.g., “firmware”).
  • Surface area reduction and a consequent increase in the packing density of memories can be achieved by decreasing the horizontal feature size of memory arrays and devices. In various embodiments, this can occur by forming memory systems that are significantly three-dimensional, so that the memory devices extend vertically into and above the substrate, in addition to generally extending across the surface of the substrate.
  • Examples of memory devices discussed herein are described in U.S. patent application Ser. No. 11/847,113, entitled “MEMORY DEVICE INTERFACE METHODS, APPARATUS, AND SYSTEMS,” filed on Aug. 29, 2007, and assigned to Micron Technology, Inc.
  • Examples of network nodes (routers) discussed herein are described in U.S. patent application Ser. No. 12/033,684, entitled “MEMORY DEVICE WITH NETWORK ON CHIP METHODS, APPARATUS, AND SYSTEMS,” filed on Feb. 19, 2008, and assigned to Micron Technology, Inc.
  • FIG. 1 is a diagrammatic block view of a bridge architecture for a memory system 100, according to various embodiments. In an example embodiment, memory system 100 includes processors (104, 114), memory (110, 120), bridges (102, 112), and a network node 101. In some embodiments, processor 104 is coupled to a dedicated memory 110 and a bridge 102. Architecture 100 also includes a processor 114 coupled to a dedicated memory 120 and a bridge 112. Network node 101 can be used to couple bridge 102 and bridge 112. In various embodiments, the architecture shown in FIG. 1 can be used in conjunction with other memory systems and architectures disclosed herein.
  • FIG. 2 is a diagrammatic block view of a shared bus architecture for a memory system 200, according to various embodiments. Shared bus architecture 200 includes a shared bus 208 coupled to processors 210, 212, 214, and 216, a memory 206, and a bridge 204. In some embodiments, a network node 202 is coupled to the bridge 204 to connect memory system 200 to other similar memory systems. In various embodiments, the architecture shown in FIG. 2 can be used in conjunction with other memory systems and architectures disclosed herein.
  • Large multiprocessor systems can be built using either the bridge architecture shown in FIG. 1 or the shared bus architecture shown in FIG. 2. In both architectures, the network structure and interconnect hardware can be used to provide a high performance networked system. In some embodiments, a variety of standard input/output (IO) channels (e.g., provided as part of an Infiniband™ communications link) and other mechanisms can be used to couple additional computational resources beyond those that can be accommodated on a particular motherboard or similar packaging arrangement.
  • In the bridge architecture shown in FIG. 1, each processor (104, 114) has its own memory (110, 120) and possibly its own IO capability. This means that software and performance issues may be created when processors share those resources. If one processor (for example, 104) needs data from another processor's memory (e.g., 120), the first processor (104) has to generate and send a request message to the second processor (114) asking for the data it needs, and then wait for the second processor (114) to stop what it is doing to service the request and reply to the first processor (104). This means there can be significant performance losses due to software overhead that does not contribute directly to computation, as well as overhead from the time lost waiting for needed data to be returned.
  • In the shared bus architecture shown in FIG. 2, the number of processors that can reasonably form part of the group is limited because of electrical power issues in constructing the bus and, to a larger extent, due to memory size and bandwidth constraints that are part of providing satisfactory service to the connected processors. Shared bus systems are often self-limiting, and thus often grow using network or IO channel interconnects to scale for larger systems. This reintroduces the same losses and issues described above for the bridge architecture.
  • In some embodiments, combining the network structure and the memory used to support the multiple processors that make up a distributed system opens up new ways of constructing systems. If this can be achieved, system performance can be improved, making data sharing easier and faster. Data can be accessed using a network request, no matter where the requested data resides within the network. In some embodiments, memory systems can be built using interconnects similar to those shown in FIG. 3.
  • FIG. 3 is a diagrammatic block view of a network architecture for a memory system 300 showing interconnected network nodes (302, 304, 306, 308) coupled to dedicated processors (322, 324, 326, 328), according to various embodiments. While a two-dimensional mesh network is shown here, the concept is easily extended to three or more dimensions (e.g., hypercube), torus structures, etc. Other kinds of network architectures (e.g., Clos network variations) can also be used, depending on system requirements and on the level of complexity that can be supported by the network node logic.
  • In some embodiments, the processors shown in FIG. 3 can include multiple processors within a single package or die (multi- or many-core processors) or multiple independent processors that connect to a single network node (e.g., 302, 304, 306, and 308). In some embodiments, each processor (322, 324, 326, and 328) has a memory (312, 314, 316, and 318) attached to it. This arrangement provides local storage of intermediate values from calculations performed by a particular processor, which are not available to processors situated in other parts of the memory system 300. However, if some of the processors request access to data distributed among the various memories (312, 314, 316, and 318), then the memory referencing schemes used can raise various data management issues. In various embodiments, the architecture shown in FIG. 3 can be used in conjunction with other memory systems and architectures disclosed herein.
  • One of the many potential benefits of using the distributed memory networks described herein is that all the memory can appear as a single set of addresses in the network, avoiding the need to build request messages from one processor to another to access data. Memory latency (access time) is non-uniform in these memory structures, so there may be a performance benefit to having job and data management software keep data close to the processors that use it. Even so, the impact of not keeping the data close to the processors is less than it is for the structures shown in FIG. 1, because there is no need for message passing to send and receive data.
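  • To make the single-address-space idea concrete, the following is a minimal sketch of one way such an address map might work, assuming a flat global address space interleaved across node-local memories. The mesh size, per-node memory size, and all names here are illustrative assumptions rather than details taken from this disclosure.

    NODE_MEM_BYTES = 2**30   # assumed: 1 GiB of memory behind each network node
    MESH_COLS = 4            # assumed: a 4x4 mesh like the one in FIG. 4

    def global_to_node(addr: int) -> tuple[int, int, int]:
        """Split a global address into (node_x, node_y, local_offset)."""
        node_id, offset = divmod(addr, NODE_MEM_BYTES)
        node_y, node_x = divmod(node_id, MESH_COLS)
        return node_x, node_y, offset

    # Any processor can issue a request for any address; the network routes the
    # request to the owning node, with no processor-to-processor messages.
    assert global_to_node(5 * 2**30 + 4096) == (1, 1, 4096)

  • Under a mapping like this, data placement becomes a performance-tuning concern (keeping data near the processors that use it) rather than a correctness concern.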
  • Sometimes, performance issues arise when using multi-core processor integrated circuits (ICs). As the number of cores within a single IC increases, the arrangement effectively looks more and more like the bus architecture shown in FIG. 2. In this case, bandwidth is shared, and as the number of cores and threads increase, the fraction of available bandwidth per core or thread may be reduced.
  • FIG. 4 is a diagrammatic block view of a network architecture for a memory system 400 showing interconnected network nodes sharing processors, according to various embodiments. Memory system 400 includes network nodes (412, 414, 416, 418, 422, 424, 426, 428, 432, 434, 436, 438, 442, 444, 446, 448), memory (413, 415, 417, 419, 423, 425, 427, 429, 433, 435, 437, 439, 443, 445, 447, 449), and processors (410, 420, 430, 440).
  • As shown in FIG. 4, memory 413 is coupled to network node 412, memory 415 is coupled to network node 414, memory 417 is coupled to network node 416, and memory 419 is coupled to network node 418. Processor 410 is coupled to network nodes 412, 414, 416, and 418.
  • Memory 423 is coupled to network node 422, memory 425 is coupled to network node 424, memory 427 is coupled to network node 426, and memory 429 is coupled to network node 428. Processor 420 is coupled to network nodes 422, 424, 426, and 428.
  • Memory 433 is coupled to network node 432, memory 435 is coupled to network node 434, memory 437 is coupled to network node 436, and memory 439 is coupled to network node 438. Processor 430 is coupled to network nodes 432, 434, 436, and 438.
  • Memory 443 is coupled to network node 442, memory 445 is coupled to network node 444, memory 447 is coupled to network node 446, and memory 449 is coupled to network node 448. Processor 440 is coupled to network nodes 442, 444, 446, and 448.
  • In some embodiments, high-speed serial interfaces are provided for network interconnection of the processor, with multiple paths, each of considerable bandwidth, that can all run in parallel. This means that each processor package can be connected to multiple network nodes, providing memory access parallelism and allowing for memory/network structures whose benefits exceed those of most structures currently available.
  • In some embodiments, the memory network shown in FIG. 4 can be multidimensional, perhaps having a torus structure, etc. Each of the processors (410, 420, 430, and 440) can have a bandwidth that is a multiple of the bandwidth of the memory and network nodes shown in FIG. 3. In some embodiments, where a three-dimensional (3D) network interconnect can be used, there is an option to keep each processor connected to the network nodes as shown in FIG. 4 (as a result various spatial planes or dimensions may be used for connections) or to have one or more processors be connected to network nodes in two or more spatial planes or dimensions. One of the concerns with developing network structures that have multiple dimensions (such as having multiple source and destinations, as in Clos networks) can be that the resulting network logic is quite complex, with the complexity sometimes growing as the square of the number of paths through each network node.
  • One way to simplify the design is to take advantage of multiple paths that can originate with each processor (410, 420, 430, and 440), so as to have each path going to separate memory networks along different physical dimensions (e.g., X, Y, Z dimensions). In some embodiments, if each processor (410, 420, 430 and 440) has three network-memory paths then there can be three different two-dimensional (2D) mesh networks, one network for each dimension, instead of a single 3D network. This arrangement may produce smaller 2D networks that are a fraction of the size, and have a smaller number of paths through the logic in each network node.
  • FIG. 5 is a diagrammatic block view of a network architecture for a memory system 500 showing network nodes placed in different geometric planes sharing processors, according to various embodiments. FIG. 5 shows a set of one-dimensional networks interconnected through the processors (510, 512, 514, and 516) to form a 2D network. Each network node shown in FIG. 5 has at most two network connections, since each node handles only a single dimension, plus connections for the local memory and for a processor, rather than two connections for each network dimension along with the memory and processor connections. In one embodiment, memory system 500 includes an integrated package 501 comprising a network node 502 and memory 503.
  • The memory network shown in FIG. 5 scales similarly to the networks shown in FIG. 3 and FIG. 4, and can be built for any specified network size. In some embodiments, memory networks of a greater number of dimensions can be reasonably constructed by adding a path for each added dimension from each processor. This implementation is further described below.
  • In some embodiments, complex network structures can be built in which a multiple-processor chip connects to different points within the networks. For example, consider connecting processor 510 to network node 502 (X11) and network node 518 (Y11), and connecting processor 512 to network node 504 (X12) and network node 520 (Y21). In some embodiments, one characteristic of such a network is that network communications and data might pass through the processors (510, 512, 514, and 516) to get data, which can be distributed over the memory network.
  • For example, if processor A (510), which has immediate access to memory data in memories 503 and 519 (coupled to network nodes 502 (X11) and 518 (Y11), respectively), wants data from memory 505 (coupled to network node 504 (X12)), a request signal is transferred through X11 to X12, which, after accessing the data, returns it by reversing the request path. If, however, data is needed from network node 524 (Y22), then the request might be sent over the following path:
  • Processor A (510)→X11→X12→Processor B (512)→Y21→Y22.
  • In some embodiments, if the needed data is not on the same X or Y path as that of the requesting processor, then the request (and the response) can be sent through another processor. This arrangement, in which processors are designed to simply pass through requests and responses, is not usually an efficient way to improve processor performance, to reduce system power requirements, or to simplify packaging.
  • In some embodiments, the architecture can be modified so that network node pairs that are connected to a same processor (e.g., a same processor core) also include a network link between them, providing a “hop” path. The result can be something like that shown in FIG. 6.
  • FIG. 6 is a diagrammatic block view of a network architecture for a memory system 600 showing network nodes placed in different geometric planes that are interconnected to each other and share processors, according to various embodiments. Although FIG. 6 does not show the memory connected to each network node, it should be understood as being present. Similarly, the arrangement shown in FIG. 6 is only one of many that are possible.
  • In some embodiments, memory system 600 includes an integrated package 609 comprising a network node 622 and a processor 602. In an example embodiment, network node 622 includes a left port 601, a right port 603, and a hop port 605. The configuration shown in FIG. 6 adds an extra link to the network nodes (622, 624, 632, 634, 642, 644, 652, 654, 662, 664, 672, and 674), thereby avoiding routing network traffic through the processors 602-612. Each network node (e.g., network node 622) in FIG. 6 has three ports (e.g., left port 601, right port 603, and hop port 605) that are used to couple to other network nodes, and a port (e.g., 607) to couple to the processor (e.g., Processor A (602)). The terms “left port” and “right port” do not denote any specific physical location on the node; they merely designate one of two ports on the device. In such a network, requests from any processor (602, 604, 606, 608, 610, and 612) can be received by either of the corresponding network nodes connected to it. A minimum-length path can follow a Manhattan routing scheme, with the additional rule that the last routing dimension should be the dimension that corresponds to the destination network node's placement. For example, if processor A (602) wants to get data from network node 654 (Y32), the request path can be something like the following:
  • X11→X12→X13→Y31→Y32.
  • In some embodiments, if the data from network node 652 (X23) is needed instead, then the path can be something like the following:
  • Y11→Y12→X21→X22→X23.
  • In some embodiments, when a request is injected into the network by a processor, the message traverses nodes in the injected dimension until the request arrives at the node whose index matches the destination address in another dimension. In some embodiments, if the data is not in that node, then the request is automatically sent down the “hop” path to the other node in the node pair, and then down the network path in the other dimension until it arrives at the correct node. For example, the hop port 605 is used when the data from memory connected to network node X23 is requested at network node X11.
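  • The example paths above can be generated mechanically. The following is a minimal routing sketch under an assumed labeling that the text does not spell out: grid point (r, c) holds the paired nodes X{r}{c} in the row (X) network and Y{c}{r} in the column (Y) network, joined by a hop link. The function names are illustrative assumptions.

    def _walk(a: int, b: int) -> range:
        step = 1 if b >= a else -1
        return range(a, b + step, step)

    def route(src_rc: tuple[int, int], dst: str) -> list[str]:
        """Route from the processor at grid point src_rc to node dst,
        finishing in the destination node's own dimension (the rule above).
        The hop between dimensions is implicit at the shared grid point;
        degenerate same-row/column cases are not special-cased."""
        r, c = src_rc
        dim, i, j = dst[0], int(dst[1]), int(dst[2])
        if dim == "Y":                                  # Y{i}{j} sits at grid point (j, i)
            first = [f"X{r}{k}" for k in _walk(c, i)]   # travel in X first
            last = [f"Y{i}{k}" for k in _walk(r, j)]    # hop, finish in Y
        else:                                           # X{i}{j} sits at grid point (i, j)
            first = [f"Y{c}{k}" for k in _walk(r, i)]   # travel in Y first
            last = [f"X{i}{k}" for k in _walk(c, j)]    # hop, finish in X
        return first + last

    # Reproduces the two request paths described above for processor A,
    # whose node pair sits at grid point (1, 1):
    assert route((1, 1), "Y32") == ["X11", "X12", "X13", "Y31", "Y32"]
    assert route((1, 1), "X23") == ["Y11", "Y12", "X21", "X22", "X23"]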
  • The configuration shown in FIG. 6 includes a node group 690 coupled to a node group 692. In some examples, node group 690 includes network nodes 642 and 644, and processor 606. In some examples, node group 692 includes network nodes 652 and 654, and processor 608. In some embodiments, network node 642 is coupled to a first memory (not shown in FIG. 6), and network node 652 is coupled to a second memory (not shown in FIG. 6). Each of network nodes 642 and 652 includes a left port, a right port, and a hop port, in addition to the processor port that is used to couple to processors 606 and 608, respectively.
  • In some embodiments, memory system 600 includes a network node 622 disposed in an x-path, the network node 622 including a first x-path port (601), a second x-path port (603), a hop path port (605), and a processor port to couple to processor 602. In some embodiments, memory system 600 includes a network node (624) disposed in a y-path, the network node 624 including a first y-path port, a second y-path port, a processor port, and a hop path port. In some embodiments, memory system 600 includes a third network node disposed in a z-path, the third network node including a first z-path port, a second z-path port, a processor port, and two hop path ports.
  • FIG. 7 is a diagrammatic block view of a three-dimensional memory system showing a node group 700 having network nodes (704, 706, and 708) interconnected with each other and coupled to a processor (702), according to various embodiments. Processor 702 is coupled to network node 704 (disposed in the X-path) along a path using processor link 705. Processor 702 is coupled to network node 708 (disposed in the Y-path) along a path using processor link 706. Processor 702 is coupled to network node 706 (disposed in the Z-path) along a path using processor link 707.
  • Thus, it can be noted that if the architecture shown in FIG. 6 is extended to three dimensions, the result is something like that shown in FIG. 7, which illustrates a single network node group. In a similar way, this concept can be extended even further, using an additional processor path for each added network dimension, for example, to construct a four-dimensional network. N-dimensional networks can be constructed in this manner.
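  • As a structural sketch, a node group of this kind might be modeled as one network node per dimension, with hop ports chaining the group members together so a request can change dimension without passing through the shared processor. The class, field names, and chain order below are illustrative assumptions, not structures taken from the figures.

    from dataclasses import dataclass, field

    @dataclass
    class NetworkNode:
        dim: str                                  # the one dimension this node handles
        left: "NetworkNode | None" = None         # neighbor in the -dim direction
        right: "NetworkNode | None" = None        # neighbor in the +dim direction
        hops: list["NetworkNode"] = field(default_factory=list)  # hop-path ports
        memory: bytearray = field(default_factory=lambda: bytearray(1024))

    def make_node_group(dims: str) -> dict[str, NetworkNode]:
        """One node per dimension; hop links let a request change dimension
        without passing through the shared processor."""
        nodes = {d: NetworkNode(d) for d in dims}
        for a, b in zip(dims, dims[1:]):          # chain the group members together
            nodes[a].hops.append(nodes[b])
            nodes[b].hops.append(nodes[a])
        return nodes

    group3 = make_node_group("XZY")   # 3D group: the z-path node carries two hop ports
    group4 = make_node_group("WXYZ")  # four-dimensional group, as in FIG. 8
    assert len(group3["Z"].hops) == 2 and len(group4["X"].hops) == 2

  • Each node keeps left and right ports only for its own dimension, so the per-node network logic stays small as dimensions are added; only the number of nodes in each group grows.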
  • In most cases of multi-dimensional networks, when a hop path is taken to change network dimension, only a single hop to the next node component in a node group may be taken. This activity distributes requests to minimize path conflicts and network hot-spots. If a request is sent from a node in the X-path to a node in the Y-path, and the final destination node is not located in the Y-dimension, then the request can be transferred on to the next dimension, Z.
  • FIG. 8 is a diagrammatic block view of a memory system 800 that allows for network fault recovery while recovering data from memory in a multi-dimensional memory network, according to various embodiments. Memory system 800 includes a processor 802, network nodes (804, 806, 808, and 810), and hop path 812. Processor 802 is coupled to network nodes 804, 806, 808, and 810. Network nodes 804, 806, 808, and 810 are connected to paths 815, 817, 819, and 821, which in turn may be connected to other network nodes. Network node 804 is disposed in a W-path (814, 815), network node 806 is disposed in an X-path (816, 817), network node 808 is disposed in a Y-path (818, 819), and network node 810 is disposed in a Z-path (820, 821). In some embodiments, processor 802 comprises a substrate with more than one embedded processor.
  • With the network structure shown in FIG. 8, each node in the multidimensional network can have components tasked to handle only a single network dimension, so the resulting network structure has great resiliency. Referring back to FIG. 6, if processor D wants to get data from the memory attached to network node 644 (Y31), the request would normally go along the following path: Processor D→X21→X22→X23→Y32→Y31. However, suppose the path between X22 and X23 is down (e.g., X23 has failed entirely). When the request arrives at a node (such as X22) from which the desired path cannot be taken, the local logic simply sends the request to a hop path (e.g., 812), along with a flag indicating that the preferred routing dimension (here, the X dimension) is not to be used for the next network hop. In some embodiments, the flag provides information that processor 802 can use to determine the new minimum path for future requests. Consequently, X22 is able to send the request to Y22. The rerouted request arriving at Y22 is then sent to Y21, after which the request follows the path Y21→X12→X13→Y31.
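  • The diversion just described reduces to a short per-node routine. The following Python sketch is a hypothetical rendering (the Node stub, link_up, send, and the avoid_dim flag field are assumptions, not the disclosed implementation): when the preferred link is down, the node forwards the request on its hop path and flags the preferred dimension as unusable for the next hop.

    class Node:
        """Minimal stub: tracks which outgoing links are alive."""
        def __init__(self, name, dead_links=()):
            self.name = name
            self.dead = set(dead_links)
        def link_up(self, port):
            return port not in self.dead
        def send(self, port, request):
            print(f"{self.name}: forward on {port!r}, avoid={request.get('avoid_dim')}")

    def forward(node, request):
        """Divert to the hop path when the preferred dimension's link is down."""
        preferred = request["preferred_dim"]
        if request.get("avoid_dim") != preferred and node.link_up(preferred):
            node.send(preferred, request)      # normal case: stay in dimension
        else:
            # Like X22 when the X22-to-X23 link fails: take the hop path
            # and flag the preferred dimension for the next network hop.
            request["avoid_dim"] = preferred
            node.send("hop", request)

    # X22 with its X link down diverts the request toward Y22 via the hop path.
    x22 = Node("X22", dead_links={"x"})
    forward(x22, {"preferred_dim": "x", "dest": "Y31"})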
  • In another example, assume that instead of the path between X22 and X23 failing, the hop path between X23 and Y32 fails. As a result, the request arriving at X23 is sent on to X24 (not shown) along with a flag indicating that the preferred dimension is not to be used for the next hop. The request will then be sent into the Y-dimension, reaching Y31 after a few more hops.
  • Broken links in the network may also occur along the final network dimension. For example, consider that processor D wants data from X23, and the link from X21 to X22 is down. Node X21 sends the request to Y12 using the previous rule of taking the hop path if the desired path is down, along with generating a flag that provides for routing in the non-preferred dimension first. Y12 notes that there is zero Y network distance to be covered. As a result, Y12 can send the request to Y11 or to Y13 (not shown). Assuming that Y11 is chosen, the request goes to Y11, which then sends the request along the path Y11→X11→X12→Y21→Y22→X22→X23. If network node X22 has failed, the path is broken at the Y22-to-X22 link. In that case, the request is sent to Y23 (not shown), reaching X23 after more hops. This occurs because the request has to find another route back into the X-dimension at a node close to X23, or at X23 itself.
  • FIG. 9 is a flowchart that describes a method 900 of routing data in a multi-dimensional memory system, according to various embodiments. As shown below, various network routing rules can be followed to access memory in the multi-dimensional memory network described herein. In the embodiments described herein, an “index” represents the location of a node in a particular dimension (e.g., the X, Y, or Z-dimension), and the number of indices used to locate a node corresponds to the number of dimensions in the network. A sketch of the method's decision logic follows the listed blocks.
  • At block 902, method 900 includes generating a request to access a first memory coupled to a destination network node.
  • At block 904, method 900 includes sending the request to an originating network node, the request including a plurality of indices corresponding to a plurality of dimensions.
  • At block 906, method 900 includes determining, at the originating network node, whether the request includes a first index associated with a first dimension.
  • At block 908, method 900 includes sending the request to a first network node along the first dimension, if the request includes a first index.
  • At block 910, method 900 includes transferring the request to a hop path, if the request includes a second index associated with a second dimension.
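  • Blocks 902-910 can be condensed into one per-node decision, sketched below in Python under assumed data structures (a request carrying a dict of remaining per-dimension indices); this is an illustration, not the literal implementation of method 900.

    def route_step(node, request):
        """Decide the outgoing port for one hop (blocks 906-910)."""
        indices = request["indices"]        # remaining offset per dimension
        if indices.get(node.dimension, 0) != 0:
            return ("dimension_port", node.dimension)   # block 908
        for dim, remaining in indices.items():
            if remaining != 0:
                return ("hop_port", dim)                # block 910
        return ("processor_port", None)     # arrived at the destination

    class _XNode:
        dimension = "x"

    # Zero X distance left but two Y hops remaining: transfer to hop path.
    print(route_step(_XNode(), {"indices": {"x": 0, "y": 2}}))  # ('hop_port', 'y')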
  • In some embodiments, simple rules can provide network resiliency by automatically routing requests around failed network components and paths. Using such rules, network data flow management can be provided within each network node. In some embodiments, the routing rules can include at least one of the following (a sketch combining the rules follows the list):
  • Rule—1: If a request indicates that the request should flow in a particular dimension (e.g., along an X-path, Y-path, Z-path or W-path) of the network, then send the request to a next node in that dimension.
  • Rule—2: If a request is at the correct node location for the network dimension (for example, the request is traveling along the X dimension and arrives at the Y index corresponding to the destination node), but has not arrived at its destination, send the request to the local hop path.
  • Rule—3: If it is desirable to proceed in the current network path dimension, but the request cannot (e.g., due to a path error or failure), then send the request to the hop path and set a flag to prevent returning to the sending node/route in a non-preferred dimension.
  • Rule—4: If the request uses a hop path, but it is found to be impossible to proceed to a node residing in a desired dimension, then simply send the request to the next node and set a flag to prevent any return to the sending node/route using a non-preferred dimension.
  • Rule—5: If making a memory request, traverse the network in a specific dimension order, with the dimension of the address of the destination node being the last dimension in the specified order. Thus, if memory coupled to Y21 is to be accessed, for example, in a 3D network where the order of choosing the dimensions is X→Y→Z, then a request sent to the local Z node of the requesting processor is sent along the order Z→X→Y. This can distribute requests across network components and minimize the number of path hops in a request.
  • Rule—6: Replies to requests are not constrained to follow the same return path as the request, but may traverse the dimensions in reverse order. This can help distribute responses within the network.
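  • Purely as an illustration of how Rules 1 through 6 might interact (all names and structures below are assumptions, not the disclosed implementation), a single per-node decision function could read:

    DIM_ORDER = ["x", "y", "z"]   # Rule 5: fixed dimension traversal order

    def decide(node_dim, request, link_up):
        """Apply Rules 1-4 at one node and return the outgoing port."""
        remaining = request["remaining"]   # hops left per dimension
        if remaining.get(node_dim, 0) != 0 and request.get("avoid") != node_dim:
            if link_up(node_dim):
                return node_dim            # Rule 1: continue in-dimension
            request["avoid"] = node_dim    # Rule 3: flag and divert
            return "hop"
        request.pop("avoid", None)         # the flag covers one hop only
        for d in DIM_ORDER:                # Rule 2: right index here, not done
            if remaining.get(d, 0) != 0:
                if link_up("hop"):
                    return "hop"
                request["avoid"] = node_dim
                return node_dim            # Rule 4: hop blocked, keep moving
        return "processor"                 # destination reached

    # Rule 6 is the complement: a reply may traverse dimensions in reverse
    # order rather than retracing the request's path. Example of Rule 3:
    req = {"remaining": {"x": 1, "y": 2}}
    print(decide("x", req, lambda p: p != "x"))   # 'hop', with req["avoid"] == 'x'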
  • In some embodiments, because a network node becomes a distributed entity in these networks, loss of a node component will not take down all communication through the affected node, but only communication along the path corresponding to the network dimension of the failing component. As described above, routing around such failures can be managed with simple rules.
  • In some embodiments, networks of almost any dimensionality and scale can be built using a single kind of network node. Higher-dimensional networks may have shorter network latencies and higher bidirectional bandwidths than lower-dimensional networks; in each case, a single kind of network-memory component can be the building block.
  • In some embodiments, each network node component may be simplified to contain five or fewer bidirectional ports, one of them dedicated as a processor port. In some embodiments, system memory can be contained within each network component, so that, depending on how the network is built and configured, system memory scales with the network independently of the number and capability of the network processors. Recovery from network errors may then be simplified and automated.
  • With multiple network/memory nodes connected to each processor IC for higher dimensional networks, processors may have a higher level of memory and network access parallelism for higher local memory bandwidths and reduced average memory latency. In situations where processors have more paths available than the number of dimensions needed for an envisioned network, the processors can have two or more paths that travel in the same dimension.
  • In some embodiments, one way to increase memory size and packaging density is to add network nodes, in node groups that do not include any processors, thereby increasing the total system memory. Such added nodes can omit processing capability where it is not needed. For example, network groups can be provided such that they support different kinds of IO capabilities. A network node can be optimized for, or designated for, IO functions rather than computation.
  • In some embodiments, a network can be formed in which one of the network dimensions is used by IO processors or other type of special processors. For example, in a 3D network, one plane of processors may comprise inter-mixed IO and signal processors. In this way, data may be moved in the IO signal plane without interfering with data traffic between the computational nodes.
  • In some embodiments, processors described herein may comprise a single integrated circuit having one or more processing units (e.g., cores). Multiple processors can be connected to each network node, which may comprise an integrated circuit that routes data between a memory and processor. Processors, network nodes and memory can reside on the same integrated circuit package. In some embodiments, such processors comprise a single-core processor, a multi-core processor, or a combination of the two. In some embodiments, the processor of a particular node group includes one or more cores of a multi-core processor. In some embodiments, processors include an application specific integrated circuit (ASIC).
  • In some embodiments, the network node described herein includes an IO driver circuit. In some embodiments, the network node and the memory are disposed within a single package. In some embodiments, the network node, the memory and the processor are disposed in a single package. In some embodiments, the network node is configured to perform Error Check and Correction (ECC) during data communication between the memory and processor. Network nodes can include routers provided to route data between memory and processors across a memory network. In some embodiments, network nodes include an interface device that has a plurality of routing elements.
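  • The disclosure does not fix a particular ECC scheme; as one hypothetical possibility, a network node could apply a Hamming(7,4) code, which corrects any single-bit error in a 7-bit codeword, to data moving between memory and processor. The Python sketch below is illustrative only.

    def hamming74_encode(nibble):
        """Encode 4 data bits as a 7-bit Hamming codeword (positions 1..7)."""
        d = [(nibble >> i) & 1 for i in range(4)]        # d1..d4
        p1, p2, p3 = d[0]^d[1]^d[3], d[0]^d[2]^d[3], d[1]^d[2]^d[3]
        return [p1, p2, d[0], p3, d[1], d[2], d[3]]

    def hamming74_decode(code):
        """Correct up to one flipped bit, then return the 4 data bits."""
        c = list(code)
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]      # parity over positions 1,3,5,7
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]      # parity over positions 2,3,6,7
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]      # parity over positions 4,5,6,7
        syndrome = s1 + 2 * s2 + 4 * s3     # 0 = clean, else error position
        if syndrome:
            c[syndrome - 1] ^= 1            # flip the corrupted bit
        return c[2] | (c[4] << 1) | (c[5] << 2) | (c[6] << 3)

    word = hamming74_encode(0b1011)
    word[5] ^= 1                            # inject a single-bit error in flight
    assert hamming74_decode(word) == 0b1011 # the node recovers the data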
  • In some embodiments, the memory discussed herein includes Dynamic Random Access Memory (DRAM) arrays. In some embodiments, the memory discussed herein includes a NAND flash memory array. In some embodiments, the memory discussed herein includes a NOR flash memory array. In some embodiments, the memory size can be proportional to the network dimensionality. Local memory bandwidth can be proportional to the network dimensionality as well.
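  • As a worked illustration with assumed figures (the capacities and rates are hypothetical, not from the disclosure): if each network node carries 4 GB of memory behind a 10 GB/s local link, a processor in a 3D network, with one node per dimension in its node group, sees 3 × 4 GB = 12 GB of directly attached memory across 3 × 10 GB/s = 30 GB/s of aggregate local bandwidth; extending the same components to a 4D network raises these to 16 GB and 40 GB/s.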
  • While various embodiments have been illustrated and described, as noted above, changes can be made without departing from the disclosure. The accompanying drawings that form a part hereof show by way of illustration, and not of limitation, various embodiments in which the subject matter may be practiced. The embodiments illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other embodiments may be utilized and derived therefrom. This Detailed Description, therefore, is not to be taken in a limiting sense.
  • Although specific embodiments have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the various embodiments shown. Furthermore, although the various embodiments have described redundant signal transmission systems, it is understood that the various embodiments may be employed in a variety of known electronic systems and devices without modification. This disclosure is intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those skilled in the art upon reviewing the above description.
  • The Abstract of the Disclosure is provided to comply with 37 C.F.R. §1.72(b), requiring an abstract that will allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features may be grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.

Claims (20)

What is claimed is:
1. A system, comprising:
a first network node disposed in an x-path, the first network node including a first x-path port, a second x-path port, a first processor port, a first hop path port and a second hop path port;
a second network node disposed in a y-path, the second network node including a first y-path port, a second y-path port, a second processor port, a third hop path port and a fourth hop path port;
a third network node disposed in a z-path, the third network node including a first z-path port, a second z-path port, a third processor port, a fifth hop path port and a sixth hop path port; and
a processor coupled to the first network node using the first processor port, coupled to the second network node using the second processor port, and coupled to the third network node using the third processor port;
wherein the first network node is coupled to a first memory, the second network node is coupled to a second memory, and the third network node is coupled to a third memory;
wherein at least one of the first hop path port and the second hop path port are coupled to at least one of the third, fourth, fifth and sixth hop path ports; and
wherein the x-path, the y-path, and the z-path are in different dimensions with respect to each other.
2. The system of claim 1, wherein the first memory includes a NOR flash memory array and the second memory includes a NAND flash memory array.
3. The system of claim 1, wherein at least one of the first, the second, and the third network nodes includes a router.
4. The system of claim 1, wherein at least one of the network dimensions corresponding to the x-path, the y-path, and the z-path is used by at least one Input/Output processor.
5. The system of claim 1, wherein the processor is coupled to the first network node using one or more links.
6. The system of claim 1, wherein the first network node and the processor are disposed in a single package.
7. The system of claim 1, wherein the first network node, the second network node, the third network node and the processor are arranged as a two-dimensional mesh network.
8. The system of claim 1, wherein the first network node, the second network node, the third network node and the processor are arranged as a hypercube network.
9. The system of claim 1, wherein the first network node, the second network node, the third network node and the processor are arranged as a torus structure.
10. The system of claim 1, wherein the first network node, the second network node, the third network node and the processor are arranged as a Clos network.
11. A method of routing data for a multi-dimensional memory network, the method comprising:
receiving at an originating node a request to access a first memory coupled to a destination network node, the request including a plurality of indices corresponding to a plurality of dimensions;
determining at the originating network node whether the request includes a first index associated with a first dimension;
sending the request to another network node along the first dimension, if the request includes a first index; and
sending the request to a hop path, if the request includes a second index associated with a second dimension.
12. The method of claim 11, wherein if the request is provided to the hop path and is unable to proceed to a particular node in a desired dimension, then sending the request to the next node.
13. The method of claim 12, further comprising setting a flag indicating no return to the originating node.
14. The method of claim 12, further comprising setting the flag to indicate no return on a previously used route.
15. The method of claim 12, wherein if the request indicates that the request should flow in a particular dimension of the network, then sending the request to a next node in the particular dimension.
16. The method of claim 12, further comprising routing requests around failed network nodes.
17. The method of claim 12, further comprising routing requests around failed hop paths.
18. The method of claim 11, wherein a minimum-length path between the originating node and the destination node is determined using a Manhattan routing scheme.
19. The method of claim 11, further comprising:
accessing data from the destination node, wherein if the data is not available at a first node, then the request is automatically sent to a hop path to follow a network path in another dimension until it arrives at the destination node.
20. The method of claim 18, further comprising:
determining the minimum path length between the originating node and the destination node using a last routing dimension that corresponds to the network node placement.
US14/042,016 2009-02-19 2013-09-30 Memory network methods, apparatus, and systems Abandoned US20140032701A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US14/042,016 US20140032701A1 (en) 2009-02-19 2013-09-30 Memory network methods, apparatus, and systems
US15/888,725 US10681136B2 (en) 2009-02-19 2018-02-05 Memory network methods, apparatus, and systems

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12/389,200 US8549092B2 (en) 2009-02-19 2009-02-19 Memory network methods, apparatus, and systems
US14/042,016 US20140032701A1 (en) 2009-02-19 2013-09-30 Memory network methods, apparatus, and systems

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/389,200 Division US8549092B2 (en) 2009-02-19 2009-02-19 Memory network methods, apparatus, and systems

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/888,725 Continuation US10681136B2 (en) 2009-02-19 2018-02-05 Memory network methods, apparatus, and systems

Publications (1)

Publication Number Publication Date
US20140032701A1 true US20140032701A1 (en) 2014-01-30

Family

ID=42560865

Family Applications (3)

Application Number Title Priority Date Filing Date
US12/389,200 Active 2032-08-03 US8549092B2 (en) 2009-02-19 2009-02-19 Memory network methods, apparatus, and systems
US14/042,016 Abandoned US20140032701A1 (en) 2009-02-19 2013-09-30 Memory network methods, apparatus, and systems
US15/888,725 Active 2029-08-05 US10681136B2 (en) 2009-02-19 2018-02-05 Memory network methods, apparatus, and systems

Country Status (7)

Country Link
US (3) US8549092B2 (en)
EP (1) EP2399201B1 (en)
JP (2) JP5630664B2 (en)
KR (1) KR101549287B1 (en)
CN (1) CN102326159B (en)
TW (1) TWI482454B (en)
WO (1) WO2010096569A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10681136B2 (en) 2009-02-19 2020-06-09 Micron Technology, Inc. Memory network methods, apparatus, and systems

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9779057B2 (en) 2009-09-11 2017-10-03 Micron Technology, Inc. Autonomous memory architecture
EP2738948A1 (en) * 2012-11-28 2014-06-04 Sercel Method for setting frequency channels in a multi-hop wireless mesh network.
US9258191B2 (en) 2012-12-13 2016-02-09 Microsoft Technology Licensing, Llc Direct network having plural distributed connections to each resource
JP5985403B2 (en) * 2013-01-10 2016-09-06 株式会社東芝 Storage device
JP6005533B2 (en) 2013-01-17 2016-10-12 株式会社東芝 Storage device and storage method
US10089043B2 (en) 2013-03-15 2018-10-02 Micron Technology, Inc. Apparatus and methods for a distributed memory system including memory nodes
CN105706068B (en) * 2013-04-30 2019-08-23 慧与发展有限责任合伙企业 The storage network of route memory flow and I/O flow
US9779138B2 (en) 2013-08-13 2017-10-03 Micron Technology, Inc. Methods and systems for autonomous memory searching
JP5931816B2 (en) 2013-08-22 2016-06-08 株式会社東芝 Storage device
JP5902137B2 (en) 2013-09-24 2016-04-13 株式会社東芝 Storage system
US10003675B2 (en) 2013-12-02 2018-06-19 Micron Technology, Inc. Packet processor receiving packets containing instructions, data, and starting location and generating packets containing instructions and data
US9558143B2 (en) * 2014-05-09 2017-01-31 Micron Technology, Inc. Interconnect systems and methods using hybrid memory cube links to send packetized data over different endpoints of a data handling device
US9645760B2 (en) 2015-01-29 2017-05-09 Kabushiki Kaisha Toshiba Storage system and control method thereof
JP6313237B2 (en) * 2015-02-04 2018-04-18 東芝メモリ株式会社 Storage system
WO2016173611A1 (en) * 2015-04-27 2016-11-03 Hewlett-Packard Development Company, L P Memory systems
KR101797929B1 (en) 2015-08-26 2017-11-15 서경대학교 산학협력단 Assigning processes to cores in many-core platform and communication method between core processes
KR20180071514A (en) 2016-12-20 2018-06-28 에스케이하이닉스 주식회사 Device for coding packet and routing method in memory network including the same

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010043614A1 (en) * 1998-07-17 2001-11-22 Krishna Viswanadham Multi-layer switching apparatus and method
US20020015340A1 (en) * 2000-07-03 2002-02-07 Victor Batinovich Method and apparatus for memory module circuit interconnection
US20030058873A1 (en) * 1999-01-29 2003-03-27 Interactive Silicon, Incorporated Network device with improved storage density and access speed using compression techniques
US20060101104A1 (en) * 2004-10-12 2006-05-11 International Business Machines Corporation Optimizing layout of an application on a massively parallel supercomputer
US20070169001A1 (en) * 2005-11-29 2007-07-19 Arun Raghunath Methods and apparatus for supporting agile run-time network systems via identification and execution of most efficient application code in view of changing network traffic conditions
US20070250604A1 (en) * 2006-04-21 2007-10-25 Sun Microsystems, Inc. Proximity-based memory allocation in a distributed memory system
US20080062891A1 (en) * 2006-09-08 2008-03-13 Van Der Merwe Jacobus E Systems, devices, and methods for network routing
US20080285562A1 (en) * 2007-04-20 2008-11-20 Cray Inc. Flexible routing tables for a high-radix router
US20090106529A1 (en) * 2007-10-05 2009-04-23 Abts Dennis C Flattened butterfly processor interconnect network
US7603137B1 (en) * 2005-01-27 2009-10-13 Verizon Corporate Services Group Inc. & BBN Technologies Corp. Hybrid communications link
US7603428B2 (en) * 2008-02-05 2009-10-13 Raptor Networks Technology, Inc. Software application striping
US20090307460A1 (en) * 2008-06-09 2009-12-10 David Nevarez Data Sharing Utilizing Virtual Memory
US20100049942A1 (en) * 2008-08-20 2010-02-25 John Kim Dragonfly processor interconnect network
US20100162265A1 (en) * 2008-12-23 2010-06-24 Marco Heddes System-On-A-Chip Employing A Network Of Nodes That Utilize Logical Channels And Logical Mux Channels For Communicating Messages Therebetween

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3072646B2 (en) * 1989-03-20 2000-07-31 富士通株式会社 Communication control method between parallel computers
JPH05181816A (en) * 1992-01-07 1993-07-23 Hitachi Ltd Parallel data processor and microprocessor
JPH0675930A (en) * 1992-08-27 1994-03-18 Toshiba Corp Parallel processor system
JP3224963B2 (en) * 1994-08-31 2001-11-05 株式会社東芝 Network connection device and packet transfer method
JPH09190418A (en) * 1996-01-12 1997-07-22 Hitachi Ltd Method for controlling network
JP3860257B2 (en) * 1996-06-28 2006-12-20 富士通株式会社 How to determine the channel
US6289021B1 (en) 1997-01-24 2001-09-11 Interactic Holdings, Llc Scaleable low-latency switch for usage in an interconnect structure
JP4290320B2 (en) * 2000-09-28 2009-07-01 富士通株式会社 Routing device
US7299266B2 (en) 2002-09-05 2007-11-20 International Business Machines Corporation Memory management offload for RDMA enabled network adapters
US7788310B2 (en) * 2004-07-08 2010-08-31 International Business Machines Corporation Multi-dimensional transform for distributed memory network
WO2007050959A2 (en) * 2005-10-27 2007-05-03 Qualcomm Incorporated A method and apparatus for frequency hopping in a wireless communication system
US7836220B2 (en) 2006-08-17 2010-11-16 Apple Inc. Network direct memory access
KR100801710B1 (en) * 2006-09-29 2008-02-11 삼성전자주식회사 Non-volatile memory device and memory system
JP5078347B2 (en) * 2006-12-28 2012-11-21 インターナショナル・ビジネス・マシーンズ・コーポレーション Method for failing over (repairing) a failed node of a computer system having a plurality of nodes
US20090019258A1 (en) * 2007-07-09 2009-01-15 Shi Justin Y Fault tolerant self-optimizing multi-processor system and method thereof
US7623365B2 (en) * 2007-08-29 2009-11-24 Micron Technology, Inc. Memory device interface methods, apparatus, and systems
CN101222346B (en) * 2008-01-22 2013-04-17 张建中 Method, apparatus and system for single-source multicast interaction and multi-source multicast interaction
US9229887B2 (en) * 2008-02-19 2016-01-05 Micron Technology, Inc. Memory device with network on chip methods, apparatus, and systems
US8656082B2 (en) * 2008-08-05 2014-02-18 Micron Technology, Inc. Flexible and expandable memory architectures
US8549092B2 (en) 2009-02-19 2013-10-01 Micron Technology, Inc. Memory network methods, apparatus, and systems

Also Published As

Publication number Publication date
JP2014157628A (en) 2014-08-28
US20180159933A1 (en) 2018-06-07
JP2012518843A (en) 2012-08-16
JP5877872B2 (en) 2016-03-08
WO2010096569A2 (en) 2010-08-26
JP5630664B2 (en) 2014-11-26
CN102326159B (en) 2015-01-21
EP2399201A4 (en) 2013-07-17
US10681136B2 (en) 2020-06-09
KR101549287B1 (en) 2015-09-01
WO2010096569A3 (en) 2010-12-16
EP2399201A2 (en) 2011-12-28
TW201036366A (en) 2010-10-01
KR20110123774A (en) 2011-11-15
EP2399201B1 (en) 2014-09-03
TWI482454B (en) 2015-04-21
US20100211721A1 (en) 2010-08-19
CN102326159A (en) 2012-01-18
US8549092B2 (en) 2013-10-01

Legal Events

Date Code Title Description
AS Assignment

Owner name: U.S. BANK NATIONAL ASSOCIATION, AS COLLATERAL AGENT, CALIFORNIA

Free format text: SECURITY INTEREST;ASSIGNOR:MICRON TECHNOLOGY, INC.;REEL/FRAME:038669/0001

Effective date: 20160426

AS Assignment

Owner name: MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT, MARYLAND

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:MICRON TECHNOLOGY, INC.;REEL/FRAME:038954/0001

Effective date: 20160426

AS Assignment

Owner name: U.S. BANK NATIONAL ASSOCIATION, AS COLLATERAL AGENT, CALIFORNIA

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE REPLACE ERRONEOUSLY FILED PATENT #7358718 WITH THE CORRECT PATENT #7358178 PREVIOUSLY RECORDED ON REEL 038669 FRAME 0001. ASSIGNOR(S) HEREBY CONFIRMS THE SECURITY INTEREST;ASSIGNOR:MICRON TECHNOLOGY, INC.;REEL/FRAME:043079/0001

Effective date: 20160426

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: MICRON TECHNOLOGY, INC., IDAHO

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:U.S. BANK NATIONAL ASSOCIATION, AS COLLATERAL AGENT;REEL/FRAME:047243/0001

Effective date: 20180629

AS Assignment

Owner name: MICRON TECHNOLOGY, INC., IDAHO

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT;REEL/FRAME:050937/0001

Effective date: 20190731