WO2002069469A1 - Fault tolerance in a supercomputer through dynamic repartitioning - Google Patents

Fault tolerance in a supercomputer through dynamic repartitioning

Info

Publication number
WO2002069469A1
Authority
WO
WIPO (PCT)
Prior art keywords
computer system
signals
torus
global
tree
Prior art date
Application number
PCT/US2002/005566
Other languages
French (fr)
Inventor
Dong Chen
Paul W. Coteus
Alan G. Gara
Todd E. Takken
Original Assignee
International Business Machines Corporation
Priority date
Filing date
Publication date
Application filed by International Business Machines Corporation filed Critical International Business Machines Corporation
Priority to EP02706408A priority Critical patent/EP1374360A4/en
Priority to KR1020037010921A priority patent/KR100570145B1/en
Priority to US10/469,002 priority patent/US7185226B2/en
Priority to CNB028054253A priority patent/CN1319237C/en
Priority to JP2002568482A priority patent/JP4524073B2/en
Priority to PCT/US2002/005566 priority patent/WO2002069469A1/en
Publication of WO2002069469A1 publication Critical patent/WO2002069469A1/en
Priority to JP2007144007A priority patent/JP4577851B2/en

Classifications

    • HELECTRICITY
    • H05ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05KPRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
    • H05K7/00Constructional details common to different types of electric apparatus
    • H05K7/20Modifications to facilitate cooling, ventilating, or heating
    • H05K7/20709Modifications to facilitate cooling, ventilating, or heating for server racks or cabinets; for data centers, e.g. 19-inch computer racks
    • H05K7/20836Thermal management, e.g. server temperature control
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/16Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F04POSITIVE - DISPLACEMENT MACHINES FOR LIQUIDS; PUMPS FOR LIQUIDS OR ELASTIC FLUIDS
    • F04DNON-POSITIVE-DISPLACEMENT PUMPS
    • F04D25/00Pumping installations or systems
    • F04D25/16Combinations of two or more pumps ; Producing two or more separate gas flows
    • F04D25/166Combinations of two or more pumps ; Producing two or more separate gas flows using fans
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F04POSITIVE - DISPLACEMENT MACHINES FOR LIQUIDS; PUMPS FOR LIQUIDS OR ELASTIC FLUIDS
    • F04DNON-POSITIVE-DISPLACEMENT PUMPS
    • F04D27/00Control, e.g. regulation, of pumps, pumping installations or pumping systems specially adapted for elastic fluids
    • F04D27/004Control, e.g. regulation, of pumps, pumping installations or pumping systems specially adapted for elastic fluids by varying driving speed
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/202Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
    • G06F11/2041Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant with more than one idle spare processing component
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/202Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
    • G06F11/2051Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant in regular structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/16Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F15/163Interprocessor communication
    • G06F15/173Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
    • G06F15/17356Indirect interconnection networks
    • G06F15/17368Indirect interconnection networks non hierarchical topologies
    • G06F15/17381Two dimensional, e.g. mesh, torus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/14Fourier, Walsh or analogous domain transformations, e.g. Laplace, Hilbert, Karhunen-Loeve, transforms
    • G06F17/141Discrete Fourier transforms
    • G06F17/142Fast Fourier transforms, e.g. using a Cooley-Tukey type algorithm
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/52Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/52Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • G06F9/526Mutual exclusion algorithms
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/003Details of a display terminal, the details relating to the control arrangement of the display terminal and to the interfaces thereto
    • G09G5/006Details of the interface to the display terminal
    • G09G5/008Clock recovery
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L7/00Arrangements for synchronising receiver with transmitter
    • H04L7/02Speed or phase control by the received code signals, the signals containing no special synchronisation information
    • H04L7/033Speed or phase control by the received code signals, the signals containing no special synchronisation information using the transitions of the received signal to control the phase of the synchronising-signal-generating means, e.g. using a phase-locked loop
    • H04L7/0337Selecting between two or more discretely delayed clocks or selecting between two or more discretely delayed received code signals
    • H04L7/0338Selecting between two or more discretely delayed clocks or selecting between two or more discretely delayed received code signals the correction of the phase error being performed by a feed forward loop
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F24HEATING; RANGES; VENTILATING
    • F24FAIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
    • F24F11/00Control or safety arrangements
    • F24F11/70Control systems characterised by their outputs; Constructional details thereof
    • F24F11/72Control systems characterised by their outputs; Constructional details thereof for controlling the supply of treated air, e.g. its pressure
    • F24F11/74Control systems characterised by their outputs; Constructional details thereof for controlling the supply of treated air, e.g. its pressure for controlling air flow rate or air velocity
    • F24F11/77Control systems characterised by their outputs; Constructional details thereof for controlling the supply of treated air, e.g. its pressure for controlling air flow rate or air velocity by controlling the speed of ventilators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/202Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
    • G06F11/2023Failover techniques
    • G06F11/2028Failover techniques eliminating a faulty processor or activating a spare
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02BCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B30/00Energy efficient heating, ventilation or air conditioning [HVAC]
    • Y02B30/70Efficient control or regulation technologies, e.g. for control of refrigerant flow, motor or heating

Abstract

A multiprocessor, parallel computer is made tolerant to hardware failures by providing extra groups of redundant standby processors and by designing the system so that these extra groups of processors can be swapped with any group which experiences a hardware failure. This swapping can be under software control, thereby permitting the entire computer to sustain a hardware failure but, after swapping in the standby processors, to still appear to software as a pristine, fully functioning system.

Description

FAULT TOLERANCE IN A SUPERCOMPUTER THROUGH DYNAMIC REPARTITIONING
CROSS-REFERENCE
The present invention claims the benefit of commonly-owned, co-pending United States Provisional Patent Application Serial Number 60/271,124 filed February 24, 2001 entitled MASSIVELY PARALLEL SUPERCOMPUTER, the whole contents and disclosure of which is expressly incorporated by reference herein as if fully set forth herein. This patent application is additionally related to the following commonly-owned, co-pending United States Patent Applications filed on even date herewith, the entire contents and disclosure of each of which is expressly incorporated by reference herein as if fully set forth herein. U.S. patent application Serial No. (YOR920020027US1, YOR920020044US1 (15270)), for "Class Networking Routing"; U.S. patent application Serial No. (YOR920020028US1 (15271)), for "A Global Tree Network for Computing Structures"; U.S. patent application Serial No. (YOR920020029US1 (15272)), for "Global Interrupt and Barrier Networks"; U.S. patent application Serial No. (YOR920020030US1 (15273)), for "Optimized Scalable Network Switch"; U.S. patent application Serial No. (YOR920020031US1, YOR920020032US1 (15258)), for "Arithmetic Functions in Torus and Tree Networks"; U.S. patent application Serial No. (YOR920020033US1, YOR920020034US1 (15259)), for "Data Capture Technique for High Speed Signaling"; U.S. patent application Serial No. (YOR920020035US1 (15260)), for "Managing Coherence Via Put/Get Windows"; U.S. patent application Serial No. (YOR920020036US1, YOR920020037US1 (15261)), for "Low Latency Memory Access And Synchronization"; U.S. patent application Serial No. (YOR920020038US1 (15276)), for "Twin-Tailed Fail-Over for Fileservers Maintaining Full Performance in the Presence of Failure"; U.S. patent application Serial No. (YOR920020039US1 (15277)), for "Fault Isolation Through No-Overhead Link Level Checksums"; U.S. patent application Serial No. (YOR920020040US1 (15278)), for "Ethernet Addressing Via Physical Location for Massively Parallel Systems"; U.S. patent application Serial No. (YOR920020041US1 (15274)), for "Fault Tolerance in a Supercomputer Through Dynamic Repartitioning"; U.S. patent application Serial No. (YOR920020042US1 (15279)), for "Checkpointing Filesystem"; U.S. patent application Serial No. (YOR920020043US1 (15262)), for "Efficient Implementation of Multidimensional Fast Fourier Transform on a Distributed-Memory Parallel Multi-Node Computer"; U.S. patent application Serial No. (YOR9-20010211US2 (15275)), for "A Novel Massively Parallel Supercomputer"; and U.S. patent application Serial No. (YOR920020045US1 (15263)), for "Smart Fan Modules and System".
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to the provision of fault tolerance in a parallel computer's interconnection networks by software-controlled dynamic repartitioning.
2. Discussion of the Prior Art
A large class of important computations can be performed by massively parallel computer systems. Such systems consist of many identical compute nodes, each of which typically consists of one or more CPUs, memory, and one or more network interfaces to connect it with other nodes.
The computer described in related U.S. provisional application Serial No. 60/271,124, filed February 24, 2001, for A Massively Parallel Supercomputer, leverages system-on-a-chip (SOC) technology to create a scalable cost-efficient computing system with high throughput. SOC technology has made it feasible to build an entire multiprocessor node on a single chip using libraries of embedded components, including CPU cores with integrated, first-level caches. Such packaging greatly reduces the component count of a node, allowing for the creation of a reliable, large-scale machine.
SUMMARY OF THE INVENTION
The present invention provides fault tolerance in a supercomputer through dynamic repartitioning. A multiprocessor, parallel computer is made tolerant to hardware failures by providing extra groups of redundant standby processors and by designing the system so that these extra groups of processors can be swapped with any group which experiences a hardware failure. This swapping can be under software control, thereby permitting the entire computer to sustain a hardware failure but, after swapping in the standby processors, to still appear to software as a pristine, fully functioning system.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing objects and advantages of the present invention for the provision of fault tolerance in a supercomputer through dynamic repartitioning may be more readily understood by one skilled in the art with reference being had to the following detailed description of several embodiments thereof, taken in conjunction with the accompanying drawings wherein like elements are designated by identical reference numerals throughout the several views, and in which:
Figure 1 illustrates a very simplified 8 node section of a parallel computer and the torus links between those 8 nodes. It is a partial illustration of the torus links of a full array of nodes wherein each node actually has 6 torus links in + and - x, y, z directions, and the links wrap in each logical direction (x, y or z) from the highest numbered node back to the lowest numbered node, so as to maintain 6 torus links in 6 directions for all nodes in the system.
Figure 2 is a very simplified illustration of a global combining tree of the massively parallel supercomputer, and is a partial illustration of a full global combining tree which connects all nodes over an entire partition of compute nodes.
Figure 3 illustrates the operation of the link chip which controls repartitioning.
Figure 4 can be viewed conceptually as a floor plan of the massively parallel supercomputer and illustrates 9 rows of 8 compute racks separated by 8 aisles, wherein each of the 8 racks in each row contains 2 midplanes, and each midplane contains 8 x 8 x 8 compute nodes.
Figure 5 illustrates the routing of torus signals into and out of a link card through cables which connect to neighboring midplanes, through the link card, and then into and out of the torus on the current midplane. The link ASICs optionally connect the 3-dimensional 8 x 8 x 8 torus on the current midplane to the torus of the larger machine.
Figure 6 illustrates the routing of global tree signals, which are the signals of the global combining tree network, into and out of a link card, through cables which connect to neighboring midplanes, through the link card, and then into and out of the midplane compute ASICs. The link ASICs and top level compute ASICs collectively determine how the tree on the current midplane is connected to the global combining tree in the larger system.
Figure 7 illustrates the routing of interrupt signals, which are the signals of the global interrupt signal network, into and out of a link card through cables which connect to the neighboring midplanes, through the link card ASICs and FPGA, and then into and out of the midplane.
DETAILED DESCRIPTION OF THE INVENTION
U.S. provisional application Serial No. 60/271,124 describes a massively parallel computer having an (x, y, z) array of compute nodes (wherein x = 64, y = 32, z = 32) connected by several separate communication networks. The first of these networks is a three dimensional (3D) torus, in which each compute node connects by 6 links in the + and - x, y, z directions to its 6 logically adjacent nearest neighbor compute nodes, and each compute node has 6 bidirectional torus ports.
The massively parallel supercomputer comprises 64 x 32 x 32 compute nodes, wherein each compute node includes an ASIC with 2 processors, one processor of which performs processing as part of the massively parallel supercomputer, and the second processor performs message passing operations.
Figure 4 can be viewed conceptually as a floor plan of the massively parallel supercomputer and illustrates 9 rows of compute racks separated by 8 aisles to facilitate service. Each of the 8 racks in each row is about the size of a refrigerator and contains 2 midplanes. Each midplane is a basic building block and contains 8 x 8 x 8 compute nodes, wherein each compute node comprises a multiprocessor as explained above. The physical machine architecture is most closely tied to a 3D torus. This is a simple 3-dimensional nearest neighbor interconnect which is "wrapped" at the edges. All 6 nearest torus neighbors are equally distant, except for time-of-flight differences such as exist between different racks of compute node ASICs, making code easy to write and optimize. Each node therefore supports 6 independent bidirectional nearest neighbor links.
Figure 1 illustrates a very simplified view of 8 nodes of a parallel supercomputer's torus and the links between those 8 nodes, and is a partial illustration of a full array of nodes wherein each node actually has 6 torus links in + and - x, y, z directions. The links wrap in each logical direction (x, y or z) from the highest numbered node back to the lowest numbered node, so as to maintain 6 torus links in 6 directions for all nodes in the system. Figure 1 also illustrates schematically an x, y, z coordinate system consistent with the x, y, z coordinate system of Figure 4. The massively parallel supercomputer has compute circuit cards and link circuit cards which plug into the midplane. The circuit cards are wired in 2 x 2 x 2 sub-cubes while midplanes, two per rack, are wired as 8 x 8 x 8 sub-cubes. The operative 64k-node machine is a 64 x 32 x 32 torus, although to compensate for faulty components the machine is physically implemented as a 72 x 32 x 32 torus, wherein the additional 8 x 32 x 32 nodes provide extra groups of redundant standby processors.
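The wrap-around neighbor rule just described can be stated compactly. The following Python sketch is illustrative only; the function name and the use of modular arithmetic are our own framing of the physically implemented 72 x 32 x 32 torus.

```python
# Physical torus dimensions of the machine described above (72 x 32 x 32).
DIMS = (72, 32, 32)

def torus_neighbors(x, y, z, dims=DIMS):
    """Return the 6 nearest-neighbor coordinates of node (x, y, z).

    Links wrap in each logical direction, so the highest-numbered node
    in a direction connects back to the lowest-numbered node.
    """
    nx, ny, nz = dims
    return [
        ((x + 1) % nx, y, z), ((x - 1) % nx, y, z),   # +x / -x
        (x, (y + 1) % ny, z), (x, (y - 1) % ny, z),   # +y / -y
        (x, y, (z + 1) % nz), (x, y, (z - 1) % nz),   # +z / -z
    ]

# Example: the node at the highest x coordinate wraps back to x = 0.
assert (0, 5, 5) in torus_neighbors(71, 5, 5)
```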
In addition to each node's 6 torus links to its 6 nearest neighboring nodes, the massively parallel supercomputer includes two other completely separate communication link networks. The second communication link network is a global combining tree of links, as illustrated in Figures 2 and 6. The third communication link network is a set of global interrupt signals, as illustrated in Figure 7. The combining tree of links and the global interrupt signals are similar to each other in their tree structures and provide communication over an entire partition (64 x 32 x 32 compute nodes) of the machine; both are discussed below. Figure 2 is a very simplified illustration of a global combining tree of the massively parallel supercomputer which extends over the entire machine, allowing data to be sent from any node to all others (broadcast), or to a subset of nodes. Global sums, minimum and maximum can also be calculated. Message passing is supported on the global combining tree, and controlled by a second processor within each compute node, allowing intensive operations like all-to-all communications to proceed independently of the compute nodes.
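As a rough illustration of the kind of operation the combining tree supports (not of the hardware implementation), the sketch below combines one contribution per node up a small tree with a sum or a minimum; the tree shape and node numbering are invented for the example.

```python
import operator

def tree_combine(node, children, value, op):
    """Combine values up the tree: each node applies `op` to its own
    contribution and the combined results of its sub-trees, as the
    global combining tree does for global sums, minimum and maximum."""
    result = value[node]
    for child in children.get(node, []):
        result = op(result, tree_combine(child, children, value, op))
    return result

# Hypothetical 7-node tree: node 0 is the root, node i has children 2i+1, 2i+2.
children = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
value = {n: n + 1 for n in range(7)}          # one contribution per node

print(tree_combine(0, children, value, operator.add))  # global sum = 28
print(tree_combine(0, children, value, min))           # global minimum = 1
```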
Pursuant to the present invention a multiprocessor parallel computer is made tolerant to hardware failures by providing extra groups of redundant standby processors, and by designing the system so that these extra groups of processors can be swapped with any group which experiences a hardware failure. This swapping can be under software control, thereby permitting the entire computer to sustain a hardware failure but, after swapping in the standby processors, to still appear to software as a pristine, fully functioning system.
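A minimal sketch of the swapping idea follows, assuming an invented mapping data structure: software keeps a map from the logical processor groups that applications see to the physical groups (midplanes) that run them, and on a failure a standby group is substituted. The class and method names below are hypothetical, not the system's actual control software.

```python
class PartitionMap:
    """Software-controlled map from logical processor groups to physical groups."""

    def __init__(self, physical_groups, num_logical):
        self.active = list(physical_groups[:num_logical])   # logical -> physical
        self.spares = list(physical_groups[num_logical:])   # redundant standby groups

    def swap_out(self, failed_group):
        """Replace a failed physical group with a standby group.  To software the
        machine still presents the same pristine set of logical groups."""
        idx = self.active.index(failed_group)
        self.active[idx] = self.spares.pop(0)
        self.spares.append(failed_group)      # failed hardware held aside for service

# 144 physical midplanes, 128 of which form the logical 64 x 32 x 32 machine.
pmap = PartitionMap(physical_groups=list(range(144)), num_logical=128)
pmap.swap_out(failed_group=37)               # hypothetical failure
assert 37 not in pmap.active
```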
System Repartitioning
In the massively parallel supercomputer described herein, three main separate interconnection networks can benefit from this dynamic repartitioning: a three dimensional torus, a global combining tree, and a set of global interrupts. The massively parallel supercomputer is organized into groups of 512 multiprocessors (8 x 8 x 8 nodes) per midplane, with link chips that steer signals over cables between midplanes. The link chips (6 chips per link circuit card) are the primary way by which software reconfiguration of the system is enabled.
The massively parallel supercomputer can be logically repartitioned by software control. This permits a large group of racks (as illustrated in Figure 4), physically cabled together as one system, to be logically divided into multiple subsystems. Each of these logically separated subsystems can then simultaneously run different code, or some separated systems can be serviced while others compute. Logical repartitioning therefore facilitates code development and system maintenance. Figure 3 illustrates, and the following section explains, the operation of the link chip which controls repartitioning. The subsequent section details the types of subdivisions which are possible.
Link Chip
The massively parallel supercomputer's torus, global combining tree and global interrupt signals pass through the link chip when traveling between different midplanes. This chip serves two functions. First, it redrives signals over the cables between midplanes, improving the high speed signal shape and amplitude in the middle of a long, lossy trace-cable-trace connection between compute ASICs on different midplanes. Second, the link chip can redirect signals between its different ports. This redirection function is what enables the massively parallel supercomputer to be dynamically repartitioned into multiple, logically separate systems.
The link chip performs two types of torus signal redirection for system repartitioning, called regular redirection and split redirection.
Regular redirection
Regular redirection removes one midplane from one logical direction (along either of the x, y or z axes as illustrated in Figure 4) of the large compute system. Regular redirection is shown in Modes 1 and 2 of Figure 3. It involves ports C, F, A and B of the link chip. Ports C and F are attached by cables to the plus direction and minus direction, respectively, between the current midplane and the higher or lower order midplane in a particular torus logical direction, x, y or z, as shown at the top of Figure 5. These cable connections are shown by the arrows in Figure 4 labeled Logical X cables 40, Logical Y cables 42 and Logical Z cables 44. Ports A and B connect to a midplane torus loop which circles within the midplane through eight compute processors in series, as illustrated in Figure 3, and also in Figure 5 as midplane X torus 51, midplane Y torus 52 and midplane Z torus 53.
When operating in Mode 1, the link chip routes signals from the previous midplane through port C, through the current midplane, as illustrated by a midplane torus loop, and on to the next midplane through port F. It thereby makes the current midplane part of the larger compute system.
When operating in Mode 2, the cable signals from the previous midplane enter through port C and are passed directly to the next midplane through port F, removing the current midplane from the larger compute system. Also in Mode 2, torus signals on the current midplane are connected to and loop within the midplane through ports A and B, creating a smaller compute system.
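The two regular-redirection modes can be summarized behaviourally, with the mode numbers taken from Figure 3. The function below is our own paraphrase of the port routing described above, not the link chip's actual logic.

```python
def route_regular(mode, cable_in_C, midplane_loop_in_B):
    """Behavioural sketch of regular redirection (Modes 1 and 2 of Figure 3).

    Returns (cable_out_F, midplane_loop_out_A):
      Mode 1: traffic from the previous midplane (port C) is sent into the current
              midplane's torus loop (port A); what returns from the loop (port B)
              goes on to the next midplane (port F) - the midplane joins the big system.
      Mode 2: port C passes straight through to port F (midplane removed from the
              big system); the midplane's own torus loop closes on itself via A and B.
    """
    if mode == 1:
        return midplane_loop_in_B, cable_in_C
    if mode == 2:
        return cable_in_C, midplane_loop_in_B
    raise ValueError("regular redirection only uses Modes 1 and 2")
```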
Split redirection
Split redirection permits dividing a large 64x32x32 node section of the machine into two equal 32x32x32 halves or four 16x32x32 quarters. As implemented in the link chip, split redirection could permit a great variety of system divisions. However, due to cost and signal integrity concerns on long cables, split redirection is only physically cabled in the logical X direction and only on the number of rack rows (Figure 4) necessary to permit dividing the large system into two equal halves or four quarters. Split redirection is shown in Modes 3 and 4 of Figure 3. Eight Modes, 3 through 10, are necessary to accomplish split redirection, though only two, Modes 3 and 4, are shown in Figure 3 for purposes of illustration; the remaining modes operate in an analogous manner. In split redirection the link chip redefines the cable ports which it considers to be the plus or minus cable directions to neighboring midplanes. It either redefines the plus direction port from the regular port C to split port D or E, or it redefines the minus direction port from the regular port F to split port D or E, or both. The regular cables are shown by the thin lines with arrows (logical x cables 40, logical y cables 42, and logical z cables 44) in Figure 4, and the split cables 46 are shown as fat lines without arrows (near the center of the logical x cables). The logical x cables extend along the x direction, and similarly for the logical y cables in the y direction and the logical z cables in the z direction. Figure 4 illustrates how the Logical X cables are connected between racks. The row numbers are indicated by numbers 0-8 on the left. Note that the Logical x cables are often connected to every other row, with cables between rows 0-2, 1-3, 2-4, 3-5, etc., except for the ends with one cable 0-1 and one cable 7-8. These cables allow a connection of a midplane to a neighboring midplane along the x axis without any one cable being unduly long. Similar cable connection schemes can be employed along the y and z axes.
The split cables enable x-dimension torus connections other than along the regular logical x cables. For instance, if the machine were being divided into two smaller machines, with a first machine having rows 0-4 and a second machine having rows 5-8, then split cable 46' could be switched in place of logical cable 40', so that the x cables for the first machine are now 0-2, 2-4, 4-3, 3-1 and 1-0, and the second machine could be switched in a similar manner.
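The cabling pattern and the split-cable substitution can be illustrated as follows. Representing cables as row pairs is our own device, and the example reproduces the x-cable cycle 0-2, 2-4, 4-3, 3-1, 1-0 described above for the lower machine.

```python
# Regular logical-X cables, as described above: every other row, plus a single
# cable at each end (rows 0-1 and 7-8).
REGULAR_X_CABLES = {(0, 1), (0, 2), (1, 3), (2, 4), (3, 5), (4, 6), (5, 7), (6, 8), (7, 8)}

def split_lower_machine(rows=range(0, 5)):
    """Illustrative only: build the x-cable set for a lower machine of rows 0-4
    after the split.  Regular cables that cross the 4/5 boundary are dropped and
    the split cable between rows 3 and 4 (cf. split cable 46' replacing logical
    cable 40' in Figure 4) is switched in, closing the ring 0-2, 2-4, 4-3, 3-1, 1-0."""
    keep = {c for c in REGULAR_X_CABLES if c[0] in rows and c[1] in rows}
    split_cable = (3, 4)                  # switched in place of a crossing cable
    return keep | {split_cable}

print(sorted(split_lower_machine()))
# [(0, 1), (0, 2), (1, 3), (2, 4), (3, 4)]  -> the ring 0-2-4-3-1-0
```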
Torus Partitioning
Figure 4 illustrates the massively parallel supercomputer cabling and partitioning.
Logical repartitioning enables a range of options for how the machine can be subdivided. Figure 4 illustrates examples of both regular and split partitioning, and shows how a midplane can be isolated from the system for service.
Split partitioning can divide the large 72x32x32 cabled massively parallel supercomputer into two subsystems of approximately equal size: a 40x32x32 subsystem and a 32x32x32 subsystem. This can be done in one of two ways, to ensure that two 32x32x32 subsystems can always be created when a midplane is malfunctioning, independent of where that midplane is physically located (by using the split cables 46 in the manner explained above under Split redirection). Either the 40x32x32 subsystem is the top five rows of racks and the 32x32x32 subsystem is the bottom four rows, or the reverse. For example, if a midplane in row 1 in the rack indicated by square 4/5 in Figure 4 needs servicing, then a split partition can be used to divide the system between the top four rows of racks and the bottom five rows. In this case the bottom five rows numbered 0, 1, 2, 3 and 4 form one 40x32x32 subsystem and the top four rows 5, 6, 7 and 8 (all having racks numbered 6, designating system partition #6) form a separate 32x32x32 subsystem. Both subsystems can be operated in these sizes, or they can be further subdivided using regular partitioning.
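A small sketch of this choice, using invented names and the row numbering of Figure 4: pick the split orientation so that the row holding the faulty midplane falls in the 40x32x32 subsystem, leaving the 32x32x32 subsystem clean.

```python
def choose_split(faulty_row):
    """Pick the split orientation so that the faulty midplane's row falls in the
    40x32x32 subsystem, leaving a clean 32x32x32 subsystem (sketch only; rows
    are numbered 0-8 as in Figure 4)."""
    if faulty_row <= 4:
        return {"40x32x32": list(range(0, 5)), "32x32x32": list(range(5, 9))}
    else:
        return {"40x32x32": list(range(4, 9)), "32x32x32": list(range(0, 4))}

# Faulty midplane in row 1 (the 4/5 rack example above):
print(choose_split(1))   # the bottom five rows form the 40x32x32 subsystem
```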
Regular partitioning can isolate one 1-midplane (8-node) long section from any logical torus direction. If a midplane in the 4/5 rack of row 1 in Figure 4 is malfunctioning, then regular partitioning can be used to isolate row 1 in the logical x direction from the rest of the 40x32x32 lower system, creating the 32x32x32 system labeled 1 in rows 0, 2, 3 and 4 (system #1) and an 8x32x32 system in row 1 whose racks are labeled with numbers 2, 3 and 4/5. Regular partitioning of this 8x32x32 section in row 1 in the logical y direction isolates the 3 and 4/5 racks from the 2 racks, giving an 8x24x32 section (2 racks, system #2) and an 8x8x32 section (3 and 4/5 racks). Twice regular partitioning of the 8x8x32 section in the logical z direction isolates the 4/5 rack and the 3 rack, resulting in an 8x8x16 section (3 rack, system #3) and two 8x8x8 sections (4/5 rack, systems #4 and #5), one of which can be serviced while all other subdivisions compute. Similar partitioning can be used in different combinations to subdivide and isolate different subsections.
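The dimension arithmetic in this example can be traced with a short sketch. The function is our own; regular partitioning is reduced here to removing one 8-node-thick slab along a chosen axis.

```python
def isolate_midplane_slab(dims, axis):
    """Regular partitioning, reduced to dimension arithmetic: pull one
    midplane-thick (8-node) slab out of the torus along `axis`, returning
    (remaining_dims, isolated_dims)."""
    remaining, isolated = list(dims), list(dims)
    remaining[axis] -= 8
    isolated[axis] = 8
    return tuple(remaining), tuple(isolated)

# Reproduce the example above for a faulty midplane in row 1:
system1, row1 = isolate_midplane_slab((40, 32, 32), axis=0)   # x: 32x32x32 + 8x32x32
system2, sect = isolate_midplane_slab(row1, axis=1)           # y: 8x24x32 + 8x8x32
tmp, mp_a = isolate_midplane_slab(sect, axis=2)               # z: 8x8x24 + 8x8x8
system3, mp_b = isolate_midplane_slab(tmp, axis=2)            # z again: 8x8x16 + 8x8x8
print(system1, system2, system3, mp_a, mp_b)
# (32, 32, 32) (8, 24, 32) (8, 8, 16) (8, 8, 8) (8, 8, 8)
```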
Figure 5 illustrates the routing of torus signals into and out of a link card through cables which connect to neighboring midplanes, through the link card, and then into and out of the torus on the current midplane. The link ASICs optionally connect the 3-dimensional 8 x 8 x 8 torus on the current midplane to the torus of the larger machine. At the top of Figure 5, the + and - x, y, and z signals are coupled respectively to the + and - logical x cables 40, logical y cables 42, and logical z cables 44 of Figure 4. The signals to and from "to split 1" and "to split 2" in the x direction in Figure 5 are coupled to the + and - split cables 46 of Figure 4. As explained above, the split cables 46 are only provided along the x direction, although in more complex embodiments they could also be provided along the y and z directions. The link card includes + and - ASICs for each of the x, y and z directions, which operate as explained above with reference to Figure 3.
Tree and Interrupt repartitioning
The global combining tree and global interrupt signals are routed through the same link chips and cables as the torus signals, as can be seen by comparing the top sections of Figures 5, 6 and 7. Regular and split repartitioning therefore break the tree into logical subpartitions in exactly the same way as the torus. Within a logical sub-partition the I/O processors on each midplane are then reconfigured by software to connect the tree within the partition.
Figures 6 and 7 illustrate the routing of global combining tree and global interrupt signals through cables and link chips (with x, y, z link chips being illustrated) between midplanes which also carry the torus signals. When the link chips are reconfigured, this sets which midplanes are connected in each of the system's logical partitions. However, upon repartitioning, the combining tree network and the interrupt signal network both need to be further configured so that the head of the combining tree and the head of the interrupt signal network are both defined throughout each logical machine partition. This can be accomplished in many ways.
Figure 6 illustrates the routing of global tree signals, which are the signals of the global combining tree network, which are routed over precisely the same cables as the torus signals of Figure 5.
For the global combining tree, the massively parallel supercomputer uses a group of top-level midplane compute processors (ASICs) on each midplane to collectively define which of the six off-midplane cable directions (signals through link chips) to neighboring midplanes are defined as up-tree (from a perspective view, towards the top of the tree of Figure 2), or traveling to a higher logical level in the tree, and which are defined as down-tree (from a perspective view, towards the bottom of the tree of Figure 2). These top level midplane ASICs have three global tree ports each, and the ports can be switched under software control to define which ports are up-tree and down-tree. Collectively these top level midplane ASICs define one of the six off-midplane cable links as up-tree and the other five as down-tree, and they provide a tree connection for the other lower level midplane ASICs, as shown in Figure 6.
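A minimal sketch of this software-controlled choice, with an invented data structure: one of the six off-midplane cable directions is designated up-tree and the remaining five become down-tree.

```python
# Sketch only: software picks which of the six off-midplane cable directions
# is up-tree; the remaining five become down-tree, and the top-level midplane
# ASICs' three tree ports are then set accordingly.
CABLE_DIRECTIONS = ["+x", "-x", "+y", "-y", "+z", "-z"]

def configure_midplane_tree(up_tree_direction):
    if up_tree_direction not in CABLE_DIRECTIONS:
        raise ValueError("unknown cable direction")
    return {
        "up_tree": up_tree_direction,
        "down_tree": [d for d in CABLE_DIRECTIONS if d != up_tree_direction],
    }

# e.g. a midplane whose parent in the combining tree lies in the +y direction:
print(configure_midplane_tree("+y"))
```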
Figure 7 illustrates the routing of interrupt signals, which are the signals of the global interrupt signal network, which are also routed over precisely the same cables as the torus signals of Figure 5. Since the interrupts are simple AND and OR functions, the complex processors of the midplane compute ASICs are not required to perform arithmetic operations when connecting multiple down-tree branches to an up-tree link. A group of top level compute ASICs is not needed to connect the interrupt network's off-midplane up-tree and down-tree links. The interrupt routing of Figure 7 can thereby be simplified compared to the global tree routing of Figure 6. For the global interrupts, the link chips in Figure 7 communicate between themselves over lines 54 and together present a single bidirectional up-tree signal over lines 55 to a link FPGA (Field Programmable Gate Array) on the link card. This FPGA can perform down-tree broadcasts and up-tree AND and OR logic functions. It communicates down-tree signals over the five down-tree cable connections and into the midplane.
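The FPGA's role for the interrupt network can be sketched behaviourally as follows; the signal names and the boolean framing are our own.

```python
def interrupt_up(local_signal, down_tree_signals, combine="or"):
    """Combine the midplane's own interrupt signal with its down-tree branches
    using OR (any node interrupts) or AND (all nodes ready, e.g. a barrier)."""
    signals = [local_signal, *down_tree_signals]
    return any(signals) if combine == "or" else all(signals)

def interrupt_down(up_tree_result, num_down_links=5):
    """Broadcast the combined result over the five down-tree cable connections
    and into the midplane."""
    return [up_tree_result] * num_down_links

barrier_reached = interrupt_up(True, [True, True, False], combine="and")   # -> False
print(interrupt_down(barrier_reached))
```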
Figures 5, 6 and 7 illustrate that the routing of all of the torus signals, the global tree signals, and the interrupt signals between the cables and the link card is precisely the same. All three networks travel over the same cables, and each link card handles the routing and repartitioning of all three types of signals for all three types of networks.
While several embodiments and variations of the present invention for fault tolerance in a supercomputer through dynamic repartitioning are described in detail herein, it should be apparent that the disclosure and teachings of the present invention will suggest many alternative designs to those skilled in the art.

Claims

Having thus described our invention, what we claim as new and desire to secure by Letters Patent is:
1. A method of providing fault tolerance in a parallel computer system which includes a plurality of parallel processors to render the computer system tolerant to hardware failures, comprising: providing the computer system with extra groups of redundant standby processors; and designing the computer system so that the extra groups of redundant standby processors can be switched to operate in place of a group of processors of the computer system which experiences a hardware failure.
2. The method of claim 1, wherein the switching is under software control, thereby permitting the entire computer system to sustain a hardware failure, and after switching in of the standby processors, the computer system appears to software as a fully functioning and operative computer system.
3. The method of claim 1, wherein the computer system comprises a massively parallel computer system comprising a plurality of substantially identical compute nodes, each of which comprises one or more CPUs, memory, and one or more network interfaces to connect it with other compute nodes.
4. The method of claim 1, wherein the computer system comprises an array of a x b x c compute nodes connected as a three dimensional torus wherein each compute node connects by 6 links, including wrap links, in the + and - x, y, z directions to 6 adjacent compute nodes.
5. The method of claim 4, wherein each compute node includes an ASIC with a multiprocessor, one processor of which performs processing as part of the massively parallel supercomputer, and a second processor which performs message passing operations of the compute node.
6. The method of claim 4, wherein the computer system also includes communication links over a global combining tree of links, and a similar combining tree for a set of global interrupt signals.
7. The method of claim 6, wherein the computer system's torus, global combining tree, and global interrupt signals pass through a link chip which redirects signals between different ports of the link chip to enable the computer system to be partitioned into multiple, logically separate systems.
8. The method of claim 7, wherein the link chip also serves a second function of redriving signals over the cables between midplanes to improve the high speed shape and amplitude of the signals.
9. The method of claim 6, wherein each link chip performs two types of signal redirection, regular redirection which removes one midplane from one logical direction along either of the x, y, or z axes of the computer system, and split redirection which permits dividing the computer system into two halves or four quarters.
10. The method of claim 6, wherein the global combining tree and global control signals are routed through the same link chips and cables as the torus signals, such that regular and split redirection and repartitioning change the tree into logical subpartitions in exactly the same way as the torus.
11. The method of claim 10, wherein upon repartitioning, the global combining tree and interrupt signals are further configured so that the head of the combining tree and the head of the interrupt network are both defined throughout each logical machine partition.
PCT/US2002/005566 2001-02-24 2002-02-25 Fault tolerance in a supercomputer through dynamic repartitioning WO2002069469A1 (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
EP02706408A EP1374360A4 (en) 2001-02-24 2002-02-25 Fault tolerance in a supercomputer through dynamic repartitioning
KR1020037010921A KR100570145B1 (en) 2001-02-24 2002-02-25 Fault tolerance in a supercomputer through dynamic repartitioning
US10/469,002 US7185226B2 (en) 2001-02-24 2002-02-25 Fault tolerance in a supercomputer through dynamic repartitioning
CNB028054253A CN1319237C (en) 2001-02-24 2002-02-25 Fault tolerance in supercomputer through dynamic repartitioning
JP2002568482A JP4524073B2 (en) 2001-02-24 2002-02-25 Fault tolerance in supercomputers via dynamic subdivision
PCT/US2002/005566 WO2002069469A1 (en) 2001-02-24 2002-02-25 Fault tolerance in a supercomputer through dynamic repartitioning
JP2007144007A JP4577851B2 (en) 2001-02-24 2007-05-30 Fault tolerance in supercomputers via dynamic subdivision

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US27112401P 2001-02-24 2001-02-24
US60/271,124 2001-02-24
PCT/US2002/005566 WO2002069469A1 (en) 2001-02-24 2002-02-25 Fault tolerance in a supercomputer through dynamic repartitioning

Publications (1)

Publication Number Publication Date
WO2002069469A1 true WO2002069469A1 (en) 2002-09-06

Family

ID=68499838

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2002/005566 WO2002069469A1 (en) 2001-02-24 2002-02-25 Fault tolerance in a supercomputer through dynamic repartitioning

Country Status (6)

Country Link
US (1) US7185226B2 (en)
EP (1) EP1374360A4 (en)
JP (2) JP4524073B2 (en)
KR (1) KR100570145B1 (en)
CN (1) CN1319237C (en)
WO (1) WO2002069469A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100382041C (en) * 2003-05-09 2008-04-16 株式会社东芝 Computer system and damaged computer replacing control method to be applied for the system
CN111811116A (en) * 2020-07-07 2020-10-23 北京丰联奥睿科技有限公司 Configuration method of multi-connected air conditioning system

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1319237C (en) * 2001-02-24 2007-05-30 国际商业机器公司 Fault tolerance in supercomputer through dynamic repartitioning
KR100592752B1 (en) * 2001-02-24 2006-06-26 인터내셔널 비지네스 머신즈 코포레이션 Arithmetic functions in torus and tree networks
US20060001669A1 (en) * 2002-12-02 2006-01-05 Sehat Sutardja Self-reparable semiconductor and method thereof
US7185225B2 (en) * 2002-12-02 2007-02-27 Marvell World Trade Ltd. Self-reparable semiconductor and method thereof
US7340644B2 (en) * 2002-12-02 2008-03-04 Marvell World Trade Ltd. Self-reparable semiconductor and method thereof
US7178059B2 (en) * 2003-05-07 2007-02-13 Egenera, Inc. Disaster recovery for processing resources using configurable deployment platform
US7904663B2 (en) * 2003-12-18 2011-03-08 International Business Machines Corporation Secondary path for coherency controller to interconnection network(s)
US8336040B2 (en) 2004-04-15 2012-12-18 Raytheon Company System and method for topology-aware job scheduling and backfilling in an HPC environment
US8335909B2 (en) 2004-04-15 2012-12-18 Raytheon Company Coupling processors to each other for high performance computing (HPC)
US9178784B2 (en) 2004-04-15 2015-11-03 Raytheon Company System and method for cluster management based on HPC architecture
US7376890B2 (en) * 2004-05-27 2008-05-20 International Business Machines Corporation Method and system for checking rotate, shift and sign extension functions using a modulo function
US7475274B2 (en) * 2004-11-17 2009-01-06 Raytheon Company Fault tolerance and recovery in a high-performance computing (HPC) system
KR100748715B1 (en) * 2005-12-27 2007-08-13 주식회사 텔레칩스 Hardware task management system
US20070174655A1 (en) * 2006-01-18 2007-07-26 Brown Kyle G System and method of implementing automatic resource outage handling
US8078907B2 (en) * 2006-01-19 2011-12-13 Silicon Graphics, Inc. Failsoft system for multiple CPU system
US8516444B2 (en) 2006-02-23 2013-08-20 International Business Machines Corporation Debugging a high performance computing program
US7512836B2 (en) 2006-12-11 2009-03-31 International Business Machines Corporation Fast backup of compute nodes in failing midplane by copying to nodes in backup midplane via link chips operating in pass through and normal modes in massively parallel computing system
JP2008165381A (en) * 2006-12-27 2008-07-17 Ricoh Co Ltd Image processing device and image processing method
US8412981B2 (en) * 2006-12-29 2013-04-02 Intel Corporation Core sparing on multi-core platforms
US20080235454A1 (en) * 2007-03-22 2008-09-25 Ibm Corporation Method and Apparatus for Repairing a Processor Core During Run Time in a Multi-Processor Data Processing System
US9330230B2 (en) * 2007-04-19 2016-05-03 International Business Machines Corporation Validating a cabling topology in a distributed computing system
US7984150B2 (en) * 2007-07-31 2011-07-19 Hewlett-Packard Development Company, L.P. Cell compatibilty in multiprocessor systems
JP2009104300A (en) * 2007-10-22 2009-05-14 Denso Corp Data processing apparatus and program
US7958341B1 (en) 2008-07-07 2011-06-07 Ovics Processing stream instruction in IC of mesh connected matrix of processors containing pipeline coupled switch transferring messages over consecutive cycles from one link to another link or memory
US8131975B1 (en) 2008-07-07 2012-03-06 Ovics Matrix processor initialization systems and methods
US8145880B1 (en) 2008-07-07 2012-03-27 Ovics Matrix processor data switch routing systems and methods
US8327114B1 (en) 2008-07-07 2012-12-04 Ovics Matrix processor proxy systems and methods
US7870365B1 (en) 2008-07-07 2011-01-11 Ovics Matrix of processors with data stream instruction execution pipeline coupled to data switch linking to neighbor units by non-contentious command channel / data channel
JP2010086363A (en) * 2008-10-01 2010-04-15 Fujitsu Ltd Information processing apparatus and apparatus configuration rearrangement control method
US20110202995A1 (en) * 2010-02-16 2011-08-18 Honeywell International Inc. Single hardware platform multiple software redundancy
US8718079B1 (en) 2010-06-07 2014-05-06 Marvell International Ltd. Physical layer devices for network switches
US8713362B2 (en) 2010-12-01 2014-04-29 International Business Machines Corporation Obviation of recovery of data store consistency for application I/O errors
US8694821B2 (en) 2010-12-03 2014-04-08 International Business Machines Corporation Generation of standby images of applications
WO2023068960A1 (en) * 2021-10-20 2023-04-27 Федеральное Государственное Унитарное Предприятие "Российский Федеральный Ядерный Центр - Всероссийский Научно - Исследовательский Институт Технической Физики Имени Академика Е.И. Забабахина" Compact supercomputer

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4907232A (en) 1988-04-28 1990-03-06 The Charles Stark Draper Laboratory, Inc. Fault-tolerant parallel processing system
US5271014A (en) 1992-05-04 1993-12-14 International Business Machines Corporation Method and apparatus for a fault-tolerant mesh with spare nodes
US5592610A (en) * 1994-12-21 1997-01-07 Intel Corporation Method and apparatus for enhancing the fault-tolerance of a network
US6189112B1 (en) * 1998-04-30 2001-02-13 International Business Machines Corporation Transparent processor sparing
US7555566B2 (en) 2001-02-24 2009-06-30 International Business Machines Corporation Massively parallel supercomputer

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS61201365A (en) * 1985-03-04 1986-09-06 Nippon Telegr & Teleph Corp <Ntt> Automatic reconstitution system for parallel processing system
JPS62274454A (en) * 1986-05-23 1987-11-28 Hitachi Ltd Parallel processing computer
JPH03132861A (en) 1989-10-19 1991-06-06 Agency Of Ind Science & Technol Reconstruction control system for multiprocessor system
US5963746A (en) * 1990-11-13 1999-10-05 International Business Machines Corporation Fully distributed processing memory element
US5594918A (en) * 1991-05-13 1997-01-14 International Business Machines Corporation Parallel computer system providing multi-ported intelligent memory
US5715391A (en) * 1991-11-15 1998-02-03 International Business Machines Corporation Modular and infinitely extendable three dimensional torus packaging scheme for parallel processing
EP0570729A3 (en) * 1992-05-22 1994-07-20 Ibm Apap i/o programmable router
JPH06290158A (en) * 1993-03-31 1994-10-18 Fujitsu Ltd Reconstructible torus network system
US5884018A (en) * 1997-01-28 1999-03-16 Tandem Computers Incorporated Method and apparatus for distributed agreement on processor membership in a multi-processor system
US6115829A (en) * 1998-04-30 2000-09-05 International Business Machines Corporation Computer system with transparent processor sparing
GB2359162B (en) * 1998-11-10 2003-09-10 Fujitsu Ltd Parallel processor system
FR2795840B1 (en) * 1999-07-02 2001-08-31 Commissariat Energie Atomique NETWORK OF PARALLEL PROCESSORS WITH FAULT TOLERANCE OF THESE PROCESSORS, AND RECONFIGURATION PROCEDURE APPLICABLE TO SUCH A NETWORK
US6789213B2 (en) * 2000-01-10 2004-09-07 Sun Microsystems, Inc. Controlled take over of services by remaining nodes of clustered computing system
JP3674515B2 (en) * 2000-02-25 2005-07-20 日本電気株式会社 Array type processor
ATE437476T1 (en) * 2000-10-06 2009-08-15 Pact Xpp Technologies Ag CELL ARRANGEMENT WITH SEGMENTED INTERCELL STRUCTURE
CN1319237C (en) * 2001-02-24 2007-05-30 国际商业机器公司 Fault tolerance in supercomputer through dynamic repartitioning
US7080156B2 (en) * 2002-03-21 2006-07-18 Sun Microsystems, Inc. Message routing in a torus interconnect

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4907232A (en) 1988-04-28 1990-03-06 The Charles Stark Draper Laboratory, Inc. Fault-tolerant parallel processing system
US5271014A (en) 1992-05-04 1993-12-14 International Business Machines Corporation Method and apparatus for a fault-tolerant mesh with spare nodes
US5592610A (en) * 1994-12-21 1997-01-07 Intel Corporation Method and apparatus for enhancing the fault-tolerance of a network
US6189112B1 (en) * 1998-04-30 2001-02-13 International Business Machines Corporation Transparent processor sparing
US7555566B2 (en) 2001-02-24 2009-06-30 International Business Machines Corporation Massively parallel supercomputer

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP1374360A4

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100382041C (en) * 2003-05-09 2008-04-16 株式会社东芝 Computer system and damaged computer replacing control method to be applied for the system
CN111811116A (en) * 2020-07-07 2020-10-23 北京丰联奥睿科技有限公司 Configuration method of multi-connected air conditioning system

Also Published As

Publication number Publication date
EP1374360A1 (en) 2004-01-02
CN1319237C (en) 2007-05-30
JP4577851B2 (en) 2010-11-10
US7185226B2 (en) 2007-02-27
KR100570145B1 (en) 2006-04-12
JP2007220147A (en) 2007-08-30
KR20030077034A (en) 2003-09-29
US20040153754A1 (en) 2004-08-05
JP4524073B2 (en) 2010-08-11
EP1374360A4 (en) 2010-02-17
JP2004532447A (en) 2004-10-21
CN1493101A (en) 2004-04-28

Similar Documents

Publication Publication Date Title
US7185226B2 (en) Fault tolerance in a supercomputer through dynamic repartitioning
EP2147375B1 (en) Fault recovery on a parallel computer system with a torus network
US6470441B1 (en) Methods and apparatus for manifold array processing
US7330996B2 (en) Twin-tailed fail-over for fileservers maintaining full performance in the presence of a failure
US9880972B2 (en) Computer subsystem and computer system with composite nodes in an interconnection structure
US5243704A (en) Optimized interconnect networks
US20060282648A1 (en) Network topology for a scalable multiprocessor system
US5271014A (en) Method and apparatus for a fault-tolerant mesh with spare nodes
JPH06290157A (en) Net
Yeh et al. Multilayer VLSI layout for interconnection networks
Kumar et al. A transputer-based extended hypercube
Parhami Message-Passing MIMD Machines
Jindal Simulation Analysis of Permutation Passibility behavior of Multi-stage Interconnection Networks A Thesis Report Submitted in the partial fulfillment of the requirements for the award of the degree of ME in Software Engineering
Chittor et al. Link switching: a communication architecture for configurable parallel systems
Barry Methods and apparatus for manifold array processing

Legal Events

Date Code Title Description
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 1020037010921

Country of ref document: KR

WWE Wipo information: entry into national phase

Ref document number: 2002568482

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 028054253

Country of ref document: CN

Ref document number: 10469002

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 2002706408

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1020037010921

Country of ref document: KR

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWP Wipo information: published in national office

Ref document number: 2002706408

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWG Wipo information: grant in national office

Ref document number: 1020037010921

Country of ref document: KR