CA2132097A1 - Fiber optic memory coupling system - Google Patents

Fiber optic memory coupling system

Info

Publication number
CA2132097A1
Authority
CA
Canada
Prior art keywords
data
bus
memory
nodes
fmc
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002132097A
Other languages
French (fr)
Inventor
John D. Acton
Lawrence C. Grant
Jack M. Hardy, Jr.
Steven Kent
Steven Schelong
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Microsystems Inc
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of CA2132097A1 publication Critical patent/CA2132097A1/en
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/0292User address space allocation, e.g. contiguous or non contiguous base addressing using tables or multilevel address translation means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38Information transfer, e.g. on bus
    • G06F13/40Bus structure
    • G06F13/4004Coupling between buses
    • G06F13/4027Coupling between buses using bus bridges
    • G06F13/4045Coupling between buses using bus bridges where the bus bridge performs an extender function

Abstract

A system for coupling sets of a plurality of nodes (6) that are memory coupled so as to pass write-only data, memory to memory, via a data link (4), and that further includes an optical fiber controller (1) coupled to each data link (4). The controllers (1) are interconnected through fiber (4) for high speed data transfers from one set of nodes (6) to another. The controller (1) is capable of implementing both three and four cable interfaces. The data is transmitted through the fiber (4) serially, but the controller (1) is adapted to receive parallel data and convert it to serial form and vice versa.

Description

FIELD OF THE INVENTION

This invention relates to a novel fiber optic memory interconnection for linking special memory busses of processing nodes and has particular application for real time data processing systems operating at large distances.

BACKGROUND OF THE INVENTION

Systems for updating memories in coupled nodes are known from U.S. Patent No. 4,991,079, the content of which is here incorporated by reference, and from U.S. Serial No. 07/403,779 filed September 8, 1989, which is a continuation of Serial No. 06/880,222 filed June 30, 1986, now abandoned, the content of which is here incorporated by reference, all of which are commonly owned with the present application. Such systems use two ported memories and are used to transfer writes to one memory in one node automatically and at high speed to memory in other nodes without the intervention of a CPU. Such systems, however, have a distance limitation of about 120 feet and eight nodes. The present invention is an improvement that enables such systems to be connected over a greater distance. The present state of the art allows for connections of 3 kilometers and up to ten kilometers with a high speed data interface.

SUMMARY OF THE INVENTION

The present invention provides a means for connecting such memory coupled processing systems over a large distance and provides for high speed data transfers between the systems, copying data from the memory of a node in one system to the memory of a node in another system.

Other and further advantages of the present invention will become readily evident from the following description of a preferred embodiment when taken in conjunction with the appended drawings.
BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 illustrates the system of the present invention;
Figure 2 illustrates the fiber to memory coupled system controller (FMC);
Figure 3 illustrates the FMC to memory coupling bus interface;
Figure 4 illustrates the method of handling a memory write transfer by the FMC;
Figure 5 illustrates an example of address translation as performed by the present invention;
Figure 6 illustrates the FMC separated into data path quadrants;
Figure 7 illustrates Quadrant 0 of the FMC;
Figure 8 illustrates Quadrant 1 of the FMC;
Figure 9 illustrates Quadrant 2 of the FMC;
Figure 10 illustrates Quadrant 3 of the FMC;
Figure 11 illustrates a parallel/serial latch used in the present invention;
Figure 12 illustrates the basic packet format;
Figure 13 illustrates the format for a data packet;
Figure 14 illustrates the format for a general purpose async data packet;
Figure 15 illustrates the format for a data packet of a MCS-II multidrop console;
Figure 16 illustrates the format for an interrupt packet;
Figure 17 illustrates the mode of operation of the FMC for the handling of async serial data;
Figure 18 illustrates the mode of operation for handling special async data pass through;
Figure 19 illustrates handling of interrupts by the FMC;
Figure 20 illustrates an example of a network linking clusters of nodes;
Figure 21 illustrates a secondary backup high speed link for the system;
Figure 22 illustrates internal loopback for memory write transfers;
Figure 23 illustrates external loopback for memory write transfers;
Figure 24 illustrates memory coupling bus loopback;
Figure 25a illustrates internal loopback for general purpose async data;
Figure 25b illustrates external loopback for general purpose async data;
Figure 25c illustrates port-port loopback;
Figure 26 illustrates cluster FMC error handling;
Figure 27 illustrates hub FMC error handling;
Figure 28 illustrates typical configurations of the present invention;
Figure 29 illustrates a star configuration of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

This invention relates to a high speed data interface system using fiber optic memory interconnection as shown in Figure 1. This interface system connects memory coupled systems over distances up to 10 kilometers. Each memory coupled system comprises a data link or memory coupling bus 5 to which up to eight nodes 6 are coupled, each comprising a processor, I/O, a two or more ported memory and a processor to memory bus. Write/read sense controllers couple the busses and the memory so that writes only are sensed and reflected to the memories of other nodes without CPU intervention. This is described in detail in the patent and application noted above, both of which are here incorporated by reference. The bus 5 of each memory coupled system is connected to a fiber-to-memory coupling system controller (FMC) 1. Each FMC has both an input and output port for connection with another FMC. The input port 2 is for receiving transmitted data from another memory coupled system and the output port 3 is for transmitting data to another memory coupled system. The transmission of the data is through fiber optic cables 4.
The Fiber Optic Memory Coupling System (FOMCS) also provides the capability for nodes on separate MC busses to exchange async serial data across the fiber link and for a node on one MC bus to interrupt a node on another. Each FMC supports 8 general purpose async ports and 16 interrupt lines (8 input and 8 output). The multidrop console link incorporated in the MCS-II fourth cable is also supported. These features are intended to allow remote booting of nodes across the fiber and to provide a means of synchronizing the actions of nodes.

Throughout this document, the term "MCS cluster" (or simply "cluster") is used to refer to a MC bus and its attached nodes and FMC. In configurations of only two or three clusters, pairs of FMCs are used to directly connect MC busses (see Figure 28). In a configuration of more than three clusters, a FOMCS hub is used to connect all the clusters in a star configuration (see Figure 29).

The configuration programming link shown in Figures 28 and 29 is used to establish the operational mode of each FMC. FMCs which are not directly connected to the programming link receive programming information via packets sent over the fiber link. The FMC can be seen in more detail in Figure 2. The FMC includes a Receive Data Path 7, an output latch 9, a receive FIFO 11, a receive Error Detection Circuit 13, two receive latches 15 and 17, a receiver 19, an input latch 10, a hit and translation RAM 12, a transmit FIFO 14, a transmit Error Detection Circuit 16, two transmit latches 18 and 20 and a transmitter 22.

The FMC 1 consists of four main sections: the memory coupled system interface, the data paths (Rx and Tx) 7 and 8, the high speed serial data link interface 4, and the microprocessor. These areas are delineated using dashed lines in the functional block diagram (Figure 2). The Fiber Transition Module (FTM) connects to the FMC high speed serial link interface. The FTM is a known and conventional piece of hardware and serves to convert the electrical signals to corresponding light signals and vice versa.
The Memory Coupling (MC) bus interface is the FMC's link to the Memory Coupling System (MCS) and the nodes on that bus. The FMC implements a three-cable interface for those MCS networks that use a standard MC bus (24-bit addresses), and a four-cable, 120-signal interface for those MCS networks that use the 28-bit addresses provided in MCS-II. Provision is also made in the four-cable interface for eventual support of 32-bit addresses. The FMC supports all the address lines defined in the four-cable interface; however, in an environment where addresses are a full 32 bits, the FMC will only reflect into and out of the first 256 megabytes of memory.

The FMC appears as a typical MCS node on the bus, using one of the nine bus IDs (0 - 8) available in MCS or MCS-II. The MCS ID used by the FMC is set during initialization by the configuration programming link.

Details of the FMC interface to the MC bus are illustrated in Figure 3. The FMC MC bus interface receives bus transfers by latching the memory address, memory data, flag bits and parity on the rising edge of DATA VALID (assuming DATA VALID is not being driven by the FMC itself). The two different bit counts for the address, flag and parity lines reflect differences between the three-cable MCS bus and the four-cable MCS-II bus. The smaller counts apply to the three-cable bus. Note that 32 bits of address are indicated for the four-cable MCS-II bus rather than 28. As mentioned earlier, signals to support the additional four address bits are reserved in the fourth cable for future expansion. One bit of odd parity is provided for each byte of address and data, resulting in the seven or eight parity bits shown in Figure 3.

The flag bits qualify the received transfer. For memory write transfers, one flag bit (referred to as the "F-bit") indicates whether the memory write is to a byte location in memory as opposed to a halfword or word. The other two flag bits, which are only present in the MCS-II bus, differentiate between memory write transfers and other types of transfers. While only memory write transfers appear on an MCS-II bus in a cluster, the MCS-II bus in the FOMCS hub is used to distribute interrupt, async and other types of data in addition to memory write traffic.

As indicated in Figure 3, the address, flags and parity are treated as 32, 3 and 8 bit quantities, respectively, when received by the FMC. If a three cable MCS bus is connected to the FMC, the receivers for the eight most significant address bits, the two extra flag bits and the eighth parity bit are disabled. Zeros are put in the MC input latch for the missing address and flag bits and a one is inserted for the missing parity bit. In a four cable environment, 32 bit addresses are received but the FMC will discard any received memory write transfer in which the most significant four address bits are not zero.
If receipt of a transfer causes the MC Tx FIFO 14 to become half full, the FMC will drive GLOBAL BUSY on the MC bus to prevent overflow. GLOBAL BUSY is deasserted when the Tx FIFO 14 becomes less than half full. The CONTROL BUSY signal is only present in the MCS-II bus and is only utilized in the FOMCS hub. If receipt of a non-memory-write type transfer causes the MC FIFO used to hold such transfers to become half full, the FMC will drive CONTROL BUSY to prevent overflow. When the FIFO becomes less than half full, CONTROL BUSY is deasserted.

Before the FMC can generate a transfer on the MC bus, it must first acquire the right to access the bus. To accomplish this, the FMC asserts the REQUEST line on the bus which corresponds to the MCS ID the FMC has been programmed to use. The FMC then monitors the corresponding GRANT line. When the bus arbiter asserts the GRANT line, the FMC drives the memory address, memory data, flag bits and parity on the bus. The FMC's MCS ID is also driven on the node id lines of the bus. On the next rising edge of the VALID ENABLE signal, the FMC drives DATA VALID. (The VALID ENABLE signal is a free running clock generated by the bus arbiter to synchronize the actions of nodes on the bus.)

Note that the MC output latch illustrated in Figure 3 only supplies 28 bits of the memory address rather than 32. The driver inputs for the remaining four address bits are tied to ground (i.e., the bits are forced to be zeros).
If receipt of a memory write transfer packet causes the FMC Rx FIFO 11 to become half full and MC bus burst request mode is enabled, the FMC will drive BURST REQUEST. Asserting the BURST REQUEST signal, which is only present in the MCS-II bus, causes the bus arbiter to enter a mode in which only the FMC is granted access to the bus. After asserting BURST REQUEST and delaying long enough to guarantee propagation of the signal to the arbiter, the FMC deasserts GLOBAL BUSY. If no other node on the bus is asserting GLOBAL BUSY, the arbiter will begin issuing grants to the FMC. If one or more other nodes is asserting GLOBAL BUSY, the arbiter waits for the busy condition to clear and then begins issuing grants to the FMC. Note that the FMC can safely deassert GLOBAL BUSY even if it is unable to accept more transfers from the bus, because the arbiter will only grant bus access to the FMC.

When the FMC has unloaded enough packets from its Rx FIFO 11 to cause the fill level to drop below half full, BURST REQUEST is deasserted and the arbiter returns to the normal "fairness" arbitration scheme. If the FMC is unable to accept more transfers from the bus, it will assert GLOBAL BUSY before deasserting BURST REQUEST.

If the FMC's Tx FIFO is less than half full, the FMC will keep BURST REQUEST asserted for 8 MC bus cycles, then release it for 16 cycles, then assert it for 8, release it for 16, and so on until the Rx FIFO fill level drops below half full. If both the Rx and Tx FIFOs are half or more full, the FMC will assert BURST REQUEST continuously until enough packets have been unloaded from the Rx FIFO to cause the fill level to drop below half full. Once BURST REQUEST is deasserted, the arbiter returns to the normal fairness arbitration scheme. If the FMC is unable to accept more transfers from the bus, it will assert GLOBAL BUSY before deasserting BURST REQUEST.

Support for burst request mode in the FMC is enabled or disabled via a configuration programming command. Only one FMC on a given MC bus can have burst request mode enabled, but at least one FMC in every FMC-to-FMC link must have the mode enabled to ensure reliable operation. Note that enabling burst request mode in the FMC only means that the FMC is able to drive BURST REQUEST on the bus. For burst request mode to be useful, the four cable MCS-II environment with Memory Coupling Controllers (MCC's) as the bus arbiter/terminators is required. MCC's provide bus termination and arbitration in a conventional manner as known from the previously noted patent and application and are sometimes referred to as reflective memory controllers.
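As an illustration of the assert/release cadence just described, the following is a minimal sketch only, not the patent's implementation; the names (fmc_state, drive_burst_request) and the once-per-bus-cycle calling convention are assumptions made for the example.

    /* Hypothetical sketch of the BURST REQUEST pacing described above. */
    #include <stdbool.h>

    struct fmc_state {
        bool rx_fifo_half_full;   /* Rx FIFO 11 at or above half full */
        bool tx_fifo_half_full;   /* Tx FIFO 14 at or above half full */
        bool burst_mode_enabled;  /* set by configuration programming */
        unsigned cycle;           /* MC bus cycle counter             */
    };

    /* Called once per MC bus cycle: decide whether BURST REQUEST is driven. */
    static bool drive_burst_request(struct fmc_state *s)
    {
        if (!s->burst_mode_enabled || !s->rx_fifo_half_full)
            return false;                 /* nothing to unload from the Rx FIFO */

        if (s->tx_fifo_half_full)
            return true;                  /* both FIFOs loaded: assert continuously */

        /* Tx FIFO below half full: assert 8 cycles, release 16, repeat. */
        return (s->cycle++ % 24) < 8;
    }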
The FMC Rx and Tx data paths 7 and 8 move data between the MCS interface and the high speed serial data link. They also interface to the microprocessor, allowing asynchronous data, interrupt transfers and flow control information to move between clusters.

Figure 4 illustrates the manner in which the FMC processes MCS memory write transfers. As indicated, not all transfers are transmitted as packets over the high speed serial data link. Only those transfers which correspond to memory writes into selected regions of memory are transmitted over the high speed link. Transfers which do not fall within these selected regions are simply discarded.

In the other direction, packets containing memory write transfers are received over the high speed link and decoded by the FMC. Note that in the Rx path 7 there is no hit/translation RAM, so all memory write packets received over the high speed link cause memory write transfers to be generated on the MC bus. When a packet has been received which represents a write into memory, the FMC requests the use of the MC bus and, when the request is granted, generates a memory write transfer on the bus.

In the Tx path 8, the regions of memory to be reflected are defined during configuration programming of the FMC. During programming, the total memory address space is viewed as a sequence of 8K byte blocks. Multiple regions as small as one 8K byte block can be defined. Any number of regions, segregated or concatenated, up through the entire address range may be established.

Another feature of the FMC which is illustrated in Figure 4 is memory address translation. Prior to packetizing a memory write transfer and transmitting it over the high speed link, the memory address is modified. MCS physical addresses are 24 bits long (MCS) or 28 bits long (MCS-II). A MCS physical address is translated by using the most significant 11 bits (MCS) or 15 bits (MCS-II) to address a hit/translation RAM on the FMC. The value read from the RAM is the most significant 15 bits of the new address.

Note that the least significant 13 bits of a memory address are unaffected by the translation process. Thus, 8K byte blocks are mapped from the address space of the source MC bus to that of the destination MC bus and vice versa. As illustrated in Figure 5, this feature is very useful because it allows clusters to share memory regions which reside in different places in each cluster's physical address space.
The contents of the hit/translation RAM are established during configuration programming of the FMC. The size of memory addresses (i.e., 24 or 28 bits) on the source MC bus is also established during configuration programming. Each location in the hit/translation RAM represents an 8K byte block of memory and contains a hit bit and the 15 bit translation value. If the hit bit is set, memory writes into the 8K byte memory block are reflected over the high speed link. If the bit is reset, such memory writes are ignored.

In the context of the MCS star network configuration, the FMC region selection and address translation features can be used to segregate the MCS clusters connected to the hub into groups such that memory write traffic is only reflected within a group, not between groups. Note that if an administrative system (i.e., not one of the nodes in the star network) is used to program the FMCs, a secure network can be achieved in which cluster groups can co-exist but not affect each other's memory. If total isolation of groups is not desired, overlapping regions can be used.
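The translation just described amounts to a table lookup keyed on the upper address bits. The following is a minimal sketch, not taken from the patent; the names (hit_xlat_ram, translate_mcs_address) and the assumption that each RAM entry is a 16-bit word whose top bit is the hit bit and whose low 15 bits are the translated block number follow from the description above.

    #include <stdint.h>
    #include <stdbool.h>

    #define BLOCK_SHIFT 13            /* low 13 bits pass through: 8K byte blocks */
    #define RAM_ENTRIES (1u << 15)    /* indexed by up to 15 block-number bits    */

    /* One 16-bit entry per 8K block: bit 15 = hit bit, bits 14..0 = new block no. */
    static uint16_t hit_xlat_ram[RAM_ENTRIES];

    /* Translate a 24-bit (MCS) or 28-bit (MCS-II) physical address.
     * Returns true if the hit bit is set (the write should be reflected),
     * and stores the translated address in *out. */
    static bool translate_mcs_address(uint32_t addr, uint32_t *out)
    {
        uint32_t block  = addr >> BLOCK_SHIFT;      /* 11 or 15 significant bits */
        uint32_t offset = addr & ((1u << BLOCK_SHIFT) - 1);
        uint16_t entry  = hit_xlat_ram[block & (RAM_ENTRIES - 1)];

        if (!(entry & 0x8000))                      /* hit bit reset: ignore write */
            return false;

        *out = ((uint32_t)(entry & 0x7FFF) << BLOCK_SHIFT) | offset;
        return true;
    }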
The FMC microprocessor has the ability to interject data into and remove data from both the Tx and Rx data paths 7 and 8. Such data is referred to as control data to distinguish it from memory write transfers. General purpose async data, MCS-II multidrop console async data and interrupt pulses are all treated as control data.

When the FMC receives data over the general purpose async ports, it forms the data into packets and injects those packets into the transmit data stream sent over the high speed link. When a packet of async data is received by the FMC, the bytes are broken out of the packet and sent over the appropriate general purpose async port. MCS-II multidrop console data is handled in a similar fashion.

When an enabled input interrupt line is pulsed, the FMC generates a packet that is sent over the high speed link. Upon receipt of an interrupt packet from the serial data link, the FMC pulses the appropriate output interrupt line.

Other control transfers include configuration programming information, high availability messages, error indications, and reset indications.
Flow control in a MCS network is accomplished by means of flow control bits in the packets sent from FMC to FMC and by means of MC bus busy signals. Internal to the FMC, memory write data transfers and other types of data transfers (i.e., async, interrupt, multidrop, etc.) are handled separately so that flow control may be applied to non-memory write transfers without affecting the memory write traffic. Each packet sent from FMC to FMC contains two flow control bits, one to cause the receiving FMC to cease or resume transmission of memory write transfer packets and another to cause it to cease or resume transmission of other types of packets. This notion of two separate data streams also applies to the MC bus, where there are separate bus busy signals, one for memory write transfers and another for all other types of transfers.

An additional type of flow control is burst request mode. Burst request mode is necessary to ensure that lockups do not occur on FMC-to-FMC links. When link utilization is high in both directions, the potential exists for a FMC to FMC link to lock up in a condition in which both FMCs are asserting busy on their respective MC busses. Because a busy condition on the MC busses prevents the FMCs from generating bus transfers, the FMCs are unable to reduce the fill levels of their Tx FIFOs 14 and will therefore never deassert bus busy.

Burst request mode alleviates this problem by allowing a FMC to unload transfers from its Rx FIFO 11 while ensuring that the FMC will not have to accept additional transfers into its Tx FIFO 14. This means the FMC can accept more memory write transfer packets from the remote FMC, which in turn allows the remote FMC to unload its Tx FIFO 14 and eventually clear the busy condition on the remote MCS bus. Clearing the busy condition allows the remote FMC to unload its Rx FIFO 11. The remote FMC can then accept more memory write transfer packets, which allows the local FMC to unload its Tx FIFO 14 and clear the busy condition on the local MC bus.
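The two independent flow-controlled streams can be illustrated as follows. This sketch is not from the patent; the structure and function names are assumptions, and the bit names BSD/BSC anticipate the packet format described later in this document.

    #include <stdbool.h>

    struct fmc_rx_side {
        bool rx_fifo_half_full;     /* memory write packets waiting for the MC bus */
        bool micro_fifo_half_full;  /* async/interrupt/control packets waiting     */
    };

    /* Set in every transmitted packet; a set bit tells the remote FMC to cease
     * sending that stream, a reset bit tells it to resume. */
    static void fill_flow_control_bits(const struct fmc_rx_side *rx,
                                       bool *bsd, bool *bsc)
    {
        *bsd = rx->rx_fifo_half_full;     /* throttle memory write transfer packets */
        *bsc = rx->micro_fifo_half_full;  /* throttle all other (control) packets   */
    }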

As shown in Figure 6, the FMC data paths are logically subdivided into quadrants. In quadrant 0, MC bus transfers are received and moved to the Tx FIFO 14. The Tx FIFO 14 serves as the boundary between quadrants 0 and 2. In quadrant 2, transfers removed from the Tx FIFO 14 are packetized and transmitted over the high speed serial link 4.

Packets are received over the high speed link 4 in quadrant 3 and the packet contents are moved to the Rx FIFO 11. The Rx FIFO 11 serves as the boundary between quadrants 3 and 1. In quadrant 1, information removed from the Rx FIFO 11 is used to generate transfers on the MC bus.

An important feature of the FMC design, particularly from a diagnostic perspective, is that the latches in each quadrant can be accessed in both a parallel and serial fashion. During normal operation of the FMC, data moves through the latches using the parallel interface. The alternate serial interface allows the microprocessor on the FMC to shift data serially into and out of the latches.
Figure 7 shows a detailed view of quadrant 0. Processing begins in quadrant 0 when the control logic 25 detects that the MC input latch 10 has been loaded. The input latch is unloaded and odd parity is computed on the address and data. The computed parity is then compared to the received parity. At the same time, the most significant four address bits are checked to see if any of them are non-zero.

While the parity and address checks are taking place, 15 bits of the address are used to address the hit/translation RAM 12. (If the address came from a three cable MCS bus, the upper four of the 15 bits used to address the RAM will be zeros.) The least significant 15 bits of the 16 bit value read from the RAM become the new, or translated, address bits. The most significant bit of the value read from the RAM is the window hit bit.

The destination of the MC transfer is determined by the parity and address checks, the hit bit, and the flag bits received with the transfer. If any of the most significant four address bits is non-zero, the transfer is discarded. If the received parity does not equal the computed parity, the transfer is clocked into the microprocessor interface FIFO 21. The transfer is also placed into the micro FIFO if the parity is good but the flag bits indicate a control type transfer (which only occurs in the FOMCS hub). The destination of a memory write type transfer with good parity depends on the setting of the hit bit. If the hit bit is reset, the transfer is simply discarded. If the hit bit is set, the memory data, translated memory address and flag bits are clocked into the Tx FIFO 14.

The contents of the hit/translation RAM 12 are initialized by the FMC microprocessor. To change a location in the RAM 12, the microprocessor first puts the new value into the loading buffers 24. The microprocessor then causes GLOBAL BUSY to be asserted on the MC bus and waits long enough for any transfer in progress to pass through the MC input latch 10. Then, the microprocessor causes the hit/translation RAM 12 to be written with the value from the loading buffers. GLOBAL BUSY is subsequently deasserted.
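A compact way to read the quadrant 0 routing rules above is as a single decision function. The sketch below is illustrative only (the names route_mc_transfer and DEST_* are not from the patent) and assumes the parity, address, flag and hit inputs have already been computed as described.

    #include <stdbool.h>

    enum dest { DEST_DISCARD, DEST_MICRO_FIFO, DEST_TX_FIFO };

    struct mc_transfer_checks {
        bool upper4_nonzero;   /* any of the four most significant address bits set */
        bool parity_ok;        /* computed odd parity matches received parity       */
        bool control_flags;    /* flag bits mark a control (non-memory-write) type  */
        bool hit_bit;          /* window hit bit read from the hit/translation RAM  */
    };

    /* Quadrant 0: decide where a latched MC bus transfer goes. */
    static enum dest route_mc_transfer(const struct mc_transfer_checks *c)
    {
        if (c->upper4_nonzero)
            return DEST_DISCARD;       /* address outside the first 256 MB       */
        if (!c->parity_ok)
            return DEST_MICRO_FIFO;    /* bad parity: hand to the microprocessor */
        if (c->control_flags)
            return DEST_MICRO_FIFO;    /* control transfer (FOMCS hub only)      */
        return c->hit_bit ? DEST_TX_FIFO : DEST_DISCARD;
    }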
Figure 8 shows a detailed view of quadrant 1. Quadrant 1 processing begins when the control logic 25 detects that the MC output latch 9 is empty and either the Rx FIFO 11 is not empty or the micro interface 21 is requesting use of the latch 9. If there is something in the Rx FIFO 11 and the micro interface 21 is not requesting, the FIFO 11 is read and parity is computed on the memory address and data. The data, address, flags and computed parity are then clocked into the MC output latch.

The micro interface 21 is only used when a FMC in a FOMCS hub needs to distribute interrupt, async or other control information to the other FMCs in the hub. If control busy is not asserted on the hub bus, the micro interface logic 21 requests use of the MC output latch 9. When the latch 9 is available, the control data is serially shifted into the latch 9. Odd parity for the transfer is computed by the microprocessor and shifted into the latch 9 following the data.
Figure 9 shows a detailed view of quadrant 2. Processing begins in quadrant 2 when the control logic detects that the latches 18 and 20 are empty and finds that the Tx FIFO 14 is not empty or that the microprocessor has loaded the micro interface latch. If the micro latch has been loaded, the contents of the latch are transferred to the Tx latches 18 and 20. Of the 72 bits transferred from the micro latch, 64 are also clocked into a pair of EDC (error detection code) generators 16a and 16b. The generated eight bit EDC and the 72 bits from the micro latch are clocked into the Tx latches 18 and 20. The contents of the Tx latches 18 and 20 are then passed to the high speed link serial transmitter 22 in two 40 bit transfers.

If the micro latch is empty but the Tx FIFO 14 is not, the memory data, address and flags are read from the FIFO 14. An additional nine bits are generated by the control logic to make a total of 72. Again, an EDC is generated on 64 of the 72 and an 80 bit packet is clocked into the Tx latches 18 and 20. The contents of the Tx latches 18 and 20 are then passed to the transmitter 22.

If the control logic detects that either the Rx FIFO 11 or the micro FIFO in quadrant 3 is half full, flow control flag bits are asserted in the 80 bit packet clocked into the Tx latches 18 and 20. If no packet is available to clock into the latches, the control logic 25 causes a special flow control packet to be clocked into the Tx latches 18 and 20 for transmission to the remote FMC. Separate flow control bits are defined for the Rx FIFO 11 and the quadrant 3 micro FIFO so that the flow of memory write and non memory write packets can be throttled separately. Once the FIFO(s) become less than half full, a packet is sent to inform the remote FMC that it can resume transmission of one or both types of packets.
Figure 10 shows a detailed view of quadrant 3. Quadrant 3 processing begins when the first 40 bits of a packet are received over the high speed serial link. Control logic 25 causes the 40 bits to be clocked into a staging latch 15. When the rest of the packet arrives, the entire 80 bit packet is clocked into the Rx latch 17. From the Rx latch 17, 64 bits of the packet are moved to a pair of EDC generators/checkers 13a and 13b. The received EDC is compared to that generated and an indication of the outcome is provided to the control logic.

The destination of the contents of the packet depends on the packet type bits examined by the control logic and the result of the EDC check. If the EDC check fails, the packet contents are moved to the micro interface FIFO 23. The packet contents are also moved to the micro FIFO 23 if the packet type flags indicate that the packet contains async, interrupt or other control information. Otherwise, the memory address, data and flags are moved to the Rx FIFO 11.

Among the packet bits examined by the control logic 25 are the flow control bits, one for throttling memory write packet transmission and the other for throttling non memory write packet transmission. A set flow control bit in a received packet causes the FMC to cease transmission of the corresponding type of packet until a packet is received with that flow control bit reset.
Figure 11 shows a block diagram of the AMD 29818 latch used to implement the MC input and output latches 9 and 10, the Rx latches 15 and 17 and the latches 18 and 20 on the FMC. In the FMC design, the normal data path through these latches is the parallel one from the D input through the pipeline register 29 to the Q output. The alternate serial path is accessible by the FMC microprocessor and is primarily used for diagnostic purposes. Note that the mux 30 (which is controlled by the MODE signal) allows either the contents of the shadow register 31 or the D input to serve as the input for the pipeline register 29. Also, note that the output of the pipeline register 29 is fed back into the shadow register 31. Thus, with appropriate hardware control, data can enter the latch in serial and exit in parallel or enter in parallel and exit in serial.

These features are exploited by the FMC design, which allows the microprocessor to serially load a latch in one quadrant, move the data along the normal parallel path from that latch to a latch in another quadrant and then serially read the contents of the destination latch. By comparing the data loaded into the first latch with that read back from the second, the microprocessor can determine if the data path is functional.
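From the microprocessor's point of view, the serial-load, parallel-move, serial-read diagnostic described above reduces to a compare loop. The following is a hedged sketch; the helper functions stand in for the actual shadow-register shift operations and are assumptions, not the patent's firmware.

    #include <stdint.h>
    #include <stdbool.h>

    /* Assumed low-level helpers: shift 80 bits into/out of a latch's shadow
     * register and trigger one parallel transfer between latches. */
    extern void latch_serial_load(int latch_id, const uint8_t pattern[10]);
    extern void latch_serial_read(int latch_id, uint8_t out[10]);
    extern void move_parallel_path(int src_latch, int dst_latch);

    /* Returns true if the parallel path between two latches is functional. */
    static bool datapath_test(int src_latch, int dst_latch)
    {
        static const uint8_t pattern[10] = {
            0xA5, 0x5A, 0xFF, 0x00, 0x3C, 0xC3, 0x0F, 0xF0, 0x55, 0xAA
        };
        uint8_t readback[10];

        latch_serial_load(src_latch, pattern);      /* serial in                 */
        move_parallel_path(src_latch, dst_latch);   /* normal parallel data path */
        latch_serial_read(dst_latch, readback);     /* serial out                */

        for (int i = 0; i < 10; i++)
            if (readback[i] != pattern[i])
                return false;
        return true;
    }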
In the FOMCS architecture, the high speed serial link is implemented with a transmit/receive pair of Gazelle Hot Rod chips. The Gazelle Hot Rod transmitter chip converts 40-bit parallel data into serial data that is transmitted over a fiber or coax link. At the remote end of the link, the data is reconverted to the original parallel data by a Hot Rod receiver chip. Over this link, the FMCs exchange information in the form of 80 bit packets. The Gazelle transmitter sends each 80-bit packet as two 40-bit data frames. When no packet is available to send, the transmitter sends sync frames to maintain link synchronization.

The data on the serial data link is 4B/5B NRZI (Non-Return to Zero, Invert on ones) encoded. This allows the receiver to recover both data and clock signals from the data stream, precluding the need for a separate clock signal to be sent along with the data. Data rates as high as 1 Gbps (one billion bits per second) are supported.

Transmission error detection is accomplished via an eight bit error detection code (EDC) which is included in each packet. Of the 72 remaining bits in the packet, only 64 are included in the EDC calculation. The eight unprotected bits are the first four bits of each of the 40-bit halves of a packet.
The basic packet format is shown in Figure 12.

Bits 0 and 40 are used to distinguish the two 40-bit sections of the packet. When receiving a packet, a FMC always expects the first bit of the first 40-bit section to be a zero and the first bit of the second 40-bit section to be a one.

The BSD and BSC bits (bits 1 and 41, respectively) are used by the transmitting FMC to throttle transmissions by the FMC receiving the packet. The BSD bit is used to tell the receiving FMC to cease or resume transmission of memory write data packets; the BSC bit is used to tell the receiving FMC to cease or resume transmission of control packets. The bits are set to tell the receiving FMC to cease the associated type of transmissions; they are reset to tell the receiving FMC to resume transmissions.

The TYP bit (bit 4) indicates the packet type. If the bit is reset, the packet contains a memory write transfer. If the bit is set, the packet is designated as a control packet. Control packets are used to transfer async data, interrupt data and any other type of information which FMCs must exchange.

The VAL bit (bit 5) indicates whether the cross-hatched portions of the packet (bits 6 - 39 and 44 - 71) actually contain any valid information. If the VAL bit is set, the receiving FMC will consider the information in the cross-hatched portions to be valid. If the VAL bit is reset, the receiving FMC will only pay attention to the flow control information encoded in the packet (i.e., the settings of the BSD and BSC bits). When a FMC does not have any data to send, it will periodically transmit packets with the VAL bit reset and the flow control bits set or reset as appropriate.

As mentioned earlier, bits 0 - 3 and bits 40 - 43 are not included in the EDC calculation. This means that the BSD bit and/or the BSC bit may be in error in a received packet and no error will be detected. However, even if an invalid flow control indication is received and acted upon, the next packet received will almost certainly correct the problem.

The shaded portions of the packet (bits 2 - 3 and bits 42 - 43) are reserved. The convention of shading reserved portions of packets is followed throughout this document.

Bits 72 - 79 contain the EDC, which is computed on bits 4 - 39 and bits 44 - 71. The EDC is an 8-bit check byte generated according to a modified Hamming code which will allow detection of all single- and double-bit errors and some triple-bit errors. Because of the NRZI encoding scheme used by the Gazelle Hot Rod chips, noise on the high speed serial link medium will cause double-bit errors, which the receiving FMC will be able to detect by comparing the EDC in the packet to that computed during receipt of the packet.
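As a reading aid, the header bits just described can be summarized in code. This is an illustrative sketch only; the struct and function names are not from the patent, and it assumes a particular bit ordering (bit 0 of the packet is the most significant bit of byte 0) which the patent does not specify.

    #include <stdint.h>
    #include <stdbool.h>

    /* One 80-bit FOMCS packet viewed as ten bytes.
     * Assumed ordering: bit N of the packet = bit (7 - N%8) of byte N/8. */
    typedef struct { uint8_t b[10]; } fomcs_packet;

    static bool get_bit(const fomcs_packet *p, unsigned n)
    {
        return (p->b[n / 8] >> (7 - (n % 8))) & 1;
    }

    /* Control bits of the basic packet format (Figure 12). */
    struct packet_header {
        bool bsd;   /* bit 1:  throttle memory write data packets */
        bool bsc;   /* bit 41: throttle control packets           */
        bool typ;   /* bit 4:  0 = memory write, 1 = control      */
        bool val;   /* bit 5:  payload fields valid               */
    };

    static bool decode_header(const fomcs_packet *p, struct packet_header *h)
    {
        /* Section markers: bit 0 must be 0, bit 40 must be 1. */
        if (get_bit(p, 0) != 0 || get_bit(p, 40) != 1)
            return false;
        h->bsd = get_bit(p, 1);
        h->bsc = get_bit(p, 41);
        h->typ = get_bit(p, 4);
        h->val = get_bit(p, 5);
        return true;
    }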
The memory write transfer packet is used by the FMC to transmit a memory address and associated data over the high speed link. The address and data represent a memory write transfer which the FMC received from the MC bus. The format of the packet is shown in Figure 13.

The FBT bit (bit 7) is the F-bit associated with the memory write. The F-bit is set if the memory write transfer represents a write to a single byte of memory. The F-bit is reset if the transfer represents a write to a half-word or word.

Note that if the memory address is a 24-bit address, bits 44-47 of the packet will contain zeros.
The general purpose async data packet format is shown in Figure 14.

The packet can contain up to five bytes of async data. The first four bytes are placed in bits 8 - 39, with the first byte in bits 8-15, the second in 16-23 and so on. The fifth byte is placed in bits 64-71. The receiving FMC determines the actual number of data bytes in the packet by checking the Byte Count field (bits 45 - 47).

The Destination Cluster field (bits 48 - 55) contains the address of the MCS cluster for which the packet is ultimately intended. The FMC in the destination cluster uses the Destination Port field (bits 56 - 63) to determine which general purpose async port to use when transmitting the async data decoded from the packet.

The BRK bit (bit 44) indicates that the originating FMC received an async break indication. When the BRK bit is set, the packet does not contain any async data (i.e., the Byte Count field contains a zero). Upon receipt of a general purpose async packet with BRK set, the FMC in the destination cluster flushes its output buffer for the destination port and generates a break on that port.
The MCS-II multidrop console data packet format is illustrated in Figure 15.

The packet can contain up to four bytes of async data. The four bytes are placed in bits 8 - 39, with the first byte in bits 8 - 15, the second in 16 - 23 and so on. The receiving FMC determines the actual number of data bytes in the packet by checking the Byte Count field (bits 45 - 47).

The Destination Cluster field (bits 48 - 55) contains the address of the MCS cluster for which the packet is ultimately intended. The contents of this field are only meaningful if the BRD bit is reset.

The Source Cluster field (bits 56 - 63) contains the address of the cluster in which the originating FMC resides. Because multiple packets are required to send a multidrop message, a FMC in a cluster may be receiving pieces of messages from two or more clusters at the same time. The source cluster field in the packets allows the receiving FMC to segregate message pieces based on source cluster and thus properly reconstruct the messages.

The SOM bit (bit 64) indicates whether the async data in the packet represents the start of a multidrop message or not. If the SOM bit is set, the packet contains the first 1 - 4 bytes (depending on the Byte Count) of a multidrop message. If the SOM bit is reset, the packet contains data from within the body of a message.

The EOM bit (bit 65) indicates whether the async data in the packet represents the end of a multidrop message or not. If the EOM bit is set, the packet contains the last 1 - 4 bytes (depending on the Byte Count) of a multidrop message. If the EOM bit is reset, at least one more packet of data from the message will follow.

The BRD bit (bit 67) is set if the packet is to be broadcast throughout the FOMCS network to all clusters. If the BRD bit is set, all FMCs in a FOMCS hub will forward the packet to their respective remote clusters. All cluster FMCs receiving such a packet will accept it (assuming they have multidrop support enabled).
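The SOM, EOM and Source Cluster fields imply a simple per-source reassembly scheme. The sketch below is illustrative only; the names, buffer sizes and drop policy are assumptions, not the patent's implementation.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdbool.h>
    #include <string.h>

    #define MAX_CLUSTERS 256          /* 8-bit source cluster field             */
    #define MAX_MSG      512          /* assumed maximum multidrop message size */

    struct reassembly {
        uint8_t buf[MAX_MSG];
        size_t  len;
        bool    in_progress;
    };

    static struct reassembly slots[MAX_CLUSTERS];   /* one buffer per source cluster */

    /* Feed one decoded multidrop packet; returns true when a full message is ready
     * in slots[src_cluster].buf / .len. */
    static bool multidrop_feed(uint8_t src_cluster, bool som, bool eom,
                               const uint8_t *bytes, size_t count)
    {
        struct reassembly *r = &slots[src_cluster];

        if (som) {                      /* start of message: reset this source's buffer */
            r->len = 0;
            r->in_progress = true;
        }
        if (!r->in_progress || r->len + count > MAX_MSG)
            return false;               /* stray or oversized piece: drop it */

        memcpy(r->buf + r->len, bytes, count);
        r->len += count;

        if (eom) {                      /* end of message: hand the buffer upward */
            r->in_progress = false;
            return true;
        }
        return false;
    }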
The interrupt packet format is shown in Figure 16.

The Destination Cluster field (bits 48 - 55) contains the address of the MCS cluster for which the packet is ultimately intended. The FMC in the destination cluster uses the Destination Line field (bits 56 - 63) to determine which output interrupt line to pulse.
A Motorola 680x0 microprocessor on the FMC provides the configuration system interface as well as data management for all transfers other than memory write transfers. This includes all asynchronous data, interrupts and error logging. It also provides diagnostic capabilities driven through the configuration programming interface.

Four Signetics 68681 DUARTs on the FMC provide support for the exchange of async serial data between the nodes and/or devices in separate MCS clusters. The async support is designed in such a way that communicating entities need not be aware that the async data is actually traveling over a high speed fiber connection. From the perspective of the entities, communication is not functionally different from the case where the nodes and/or devices are directly connected via RS-232 cables. No special protocol information has to be inserted by the communicating entities into the async data stream to allow the data to move through the MCS network. Figure 17 illustrates the handling of async serial data by a cluster FMC.
As async serial data is received from a node or device, the FMC packetizes the data and transmits the packets over the high speed serial link. In addition to async data, each packet contains an address which facilitates routing of the packet through the FOMCS network. In the other direction, the FMC receives and decodes packets of async data from the high speed link. The address in the packet is discarded and the async data is passed to the node or device.

The address in an async packet is designed to allow routing of the packet through a FOMCS hub. To accomplish this, the address consists of two fields, a destination cluster field and a destination port field. The destination cluster field indicates the cluster to which the packet is to be routed. The destination port field indicates which async link of the FMC in the destination cluster the data is intended for. The destination FMC validates the address when it receives the packet. If the destination cluster address does not match the local cluster address, the packet is discarded.

The FMC maintains a list of eight async addresses, one for each general purpose async port. When async data is received from one of the eight ports, the FMC looks up the async address for that port and includes it in the packet(s) sent over the high speed link. Thus, there is a static connection between each general purpose async port of a FMC and an async port of a FMC in some other cluster.

The contents of the list of async addresses and the local cluster address are established during configuration programming of the FMC. The physical characteristics of the async links (i.e., baud rate, character length, etc.) are also established during configuration programming.
In a FOMCS hub, the FMCs are programmed to operate in a special async data pass-through mode. This mode of operation is illustrated in Figure 18. Async packets received over the high speed link are decoded by the receiving FMC and sent over the hub MC bus to the other FMCs in the hub. The address field inserted by the FMC at the originating cluster is passed along over the hub MC bus as well as the async data. Each FMC in the hub which receives the address and async data from the MC bus compares the destination cluster field to the address of the cluster at the remote end of its high speed link. If a match occurs, the FMC builds and transmits an async packet over the high speed data link.

The remote cluster address used by a FMC in the hub to perform the routing function just mentioned is established during configuration programming. Since the async links of a FMC in a hub are not connected and are ignored by the FMC, no async address list or async link physical characteristics can be programmed.

To accommodate the case where the high speed link of a FMC in one hub is directly connected to a FMC in another hub (rather than to a MCS cluster), async address checking may also be disabled in a FMC via configuration programming. Thus, all async packet traffic over the bus of one hub will also appear on the bus of the other hub.
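The cluster-side validation and hub-side forwarding decisions described above can be summarized as follows. This is a sketch under assumed names (handle_async_packet, my_cluster, remote_cluster, address_check_enabled), not the patent's logic.

    #include <stdint.h>
    #include <stdbool.h>

    enum role { CLUSTER_FMC, HUB_FMC };

    struct fmc_cfg {
        enum role role;
        uint8_t   my_cluster;             /* cluster FMC: local cluster address      */
        uint8_t   remote_cluster;         /* hub FMC: cluster at far end of its link */
        bool      address_check_enabled;  /* may be disabled for hub-to-hub links    */
    };

    enum action { DROP, DELIVER_TO_PORT, FORWARD_OVER_LINK };

    /* Decide what to do with an async packet carrying a destination cluster field. */
    static enum action handle_async_packet(const struct fmc_cfg *cfg,
                                           uint8_t dest_cluster)
    {
        if (cfg->role == CLUSTER_FMC) {
            /* Cluster FMC: accept only packets addressed to the local cluster. */
            return (dest_cluster == cfg->my_cluster) ? DELIVER_TO_PORT : DROP;
        }
        /* Hub FMC pass-through: forward if the destination matches the cluster
         * at the remote end of this FMC's high speed link (or checking is off). */
        if (!cfg->address_check_enabled || dest_cluster == cfg->remote_cluster)
            return FORWARD_OVER_LINK;
        return DROP;
    }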
The general purpose async ports of a FMC support hardware flow control using DTR and CTS. When a FMC wishes to transmit over a general purpose async port and hardware flow control is enabled for that port, the FMC only transmits as long as it sees CTS asserted. A loss of CTS during transmission causes the port to cease transmission until CTS reappears. Thus, the device connected to an FMC async port can throttle FMC data transmission for that port by asserting and deasserting CTS.

The FMC can also throttle a device which is sending it data over a general purpose async port, assuming that an appropriate cable is used and the device pays attention to the DTR signal. When the FMC determines that its receive buffer for the port is nearly full, it deasserts DTR to inform the device connected to the port of the condition. Assuming a cable which connects the port's DTR signal to the device's CTS signal, the device will see a loss of CTS. This should cause the device to cease transmission until the FMC asserts DTR to indicate that more data can be accepted.

During configuration programming of the FMC, async flow control using DTR and CTS can be enabled or disabled on a per port basis. If DTR/CTS flow control is not desired, FMC general purpose async ports can be configured for XON/XOFF flow control or flow control can be disabled altogether.

To deal with the situation in which a FMC is connected to an async data source which is transmitting data much faster than the device connected to the destination FMC can accept it, an async flow control mechanism is also implemented between FMCs. An FMC can send flow control packets to throttle the transmission of async data packets by the FMC connected to the source of the async data. Flow control can be asserted on a per async port basis so that the flow of async packets for other async ports is unaffected.
The FMC supports 16 interrupt lines: eight input lines and eight output lines. The lines allow an interrupt pulse generated by a node or device in one MCS cluster to be passed, in effect, across the MCS network to a node or device in another cluster. The pulse enters the FMC in the originating cluster via an input interrupt line, is passed across the MCS network as a packet and leaves the FMC in the destination cluster via an output interrupt line. As with async data, this is designed in such a way that the node need not be aware that the interrupt pulse is travelling over a high speed serial link and is not actually directly connected.

An overview of interrupt passing for a FOMCS configuration is shown in Figure 19. The process involved in passing an interrupt across FOMCS is very similar to that used to pass async data. The FMC detects the pulse on the input interrupt line and constructs a special interrupt packet which is transmitted over the high speed link. The interrupt packet contains an address which facilitates routing through the FOMCS network. At the destination FMC, the packet is decoded and the address is used to determine which interrupt line should be pulsed.

As in the async case, the address in an interrupt packet consists of two fields. The destination cluster field identifies the cluster to which the packet is to be routed. The destination line field indicates which output interrupt line is to be pulsed. As with async packets, the destination FMC validates the cluster field of a received interrupt packet and discards the packet if the destination cluster does not match the local cluster address.

The FMC maintains a list of eight interrupt line addresses, one for each input interrupt line. When the FMC detects a pulse on one of the input interrupt lines, it looks up the line address and includes it in the packet sent over the high speed link. Thus, there is a static connection between each input interrupt line of a FMC and an output line of a FMC in some other cluster. The contents of the list of interrupt line addresses are established during configuration programming of the FMC.
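A minimal sketch of the static input-line-to-remote-line mapping and the receive-side dispatch just described; the names (irq_map, on_input_pulse, on_interrupt_packet, pulse_output_line) are assumptions made for illustration, not the patent's firmware.

    #include <stdint.h>

    /* Configuration-programmed address for each of the eight input lines. */
    struct irq_address { uint8_t dest_cluster; uint8_t dest_line; };
    static struct irq_address irq_map[8];     /* filled by configuration programming */
    static uint8_t local_cluster;             /* this cluster's address              */

    /* Hardware hooks assumed for the sketch. */
    extern void send_interrupt_packet(uint8_t dest_cluster, uint8_t dest_line);
    extern void pulse_output_line(uint8_t line);

    /* A pulse was detected on input interrupt line n (0 - 7). */
    static void on_input_pulse(uint8_t n)
    {
        send_interrupt_packet(irq_map[n].dest_cluster, irq_map[n].dest_line);
    }

    /* An interrupt packet arrived over the high speed link. */
    static void on_interrupt_packet(uint8_t dest_cluster, uint8_t dest_line)
    {
        if (dest_cluster != local_cluster)
            return;                          /* not for this cluster: discard       */
        pulse_output_line(dest_line & 0x07); /* pulse one of the eight output lines */
    }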
In a FOMCS hub, the FMCs are programmed to operate in a pass-through mode very similar to that described earlier for async data. Interrupt packets received over the high speed link are distributed over the MC bus to the other FMCs in the hub. Each FMC in the hub receives the interrupt line address from the MC bus and compares the cluster field to the address of the cluster at the remote end of its high speed link. If a match occurs, the FMC builds and transmits an interrupt packet over the high speed link.

The remote cluster address used by a FMC in the hub to perform the routing function just mentioned is established during configuration programming. Interrupt address checking can also be disabled in a FMC to accommodate the situation where FMCs are directly connected. Since the interrupt lines of a FMC in a hub are not connected and are ignored by the FMC, no input interrupt address list can be programmed.
One channel of a Signetics 68681 DUART on the FMC is used to provide support for the MCS-II multidrop console link. The other channel of this DUART is used for the configuration programming link.

The FMC behaves basically as a gateway to allow console traffic to flow between nodes in separate MCS-II clusters. In each cluster, the FMC intercepts messages intended for remote nodes and transmits them as packets to the FMCs in the appropriate clusters. FMCs receiving the packets decode them and send the messages on to the destination nodes. The fact that this is happening is essentially transparent to the nodes. An example of a FOMCS network linking clusters of MCS-II nodes appears in Figure 20.

In the MCS-II multidrop console environment, one of the drops acts as the master and all others are slaves. Only the master can initiate message exchanges on the multidrop link; a slave can only transmit after receiving a poll message from the master. The FMC in a cluster assumes that there is a local master in the cluster. When the FMC receives a message addressed to a remote node, it transmits the message as one or more packets over its high speed serial link. When the FMC receives one or more packets from the high speed link containing a message from a remote node, it buffers the message until it is polled by the master. Then, the FMC transmits the saved message over the local multidrop link. Note that a cluster FMC's slave address on the multidrop link is its MC bus node id.
2 Th~ handling of ~ultidrop messa~e data in the FOMCS network 3 : i~ ba~icaIly the ~affl~ ~s that for g~n~ral purpose async: dat~ and q inter~upts. Multidrop d a is routed through the FOP4CS n2twork in 5~ special packets. Each D~ultidrop d~ta packet ~ont~in~ a destination 6 cluster addre~s ~which is valldated by ~h~ destination F~C,~ and 7 par~ of a 3les~age. Unli}ce int~rrupt packeés or pa5:kets o~ general 8 purpose a~ync data, ~ul~idrop ~es~age packet~ alsG contain ~ sourc~
9 ~ cluster addrQss. The receiving F~IC u e5 the source addr~ss to segregate pieces o me~æages ~rom dif'ere~ lusters into different bu~er~
~12 ~ The source~ clust~r ~ddress whic. ~ rlode inserts ~nto zl 13 ~nessage is: supplied by th~ ~C in the clu3t~r. The multidrop 14 ~as~er periodic~lly ~ends res~uest-c:luster-addres:~ message to the 15~ FH~;which cau~ t to ~roa:c~3st a respon~e ~ess~ge containing the cluster~addr~ss ~o all nodes i~ the clusSer. ~his clus~er,address 17~ ;is~the:address~tabli-hed during cohfigu~ation programming via the 8 ~ Defi~ lu ter Add~æs ~;co~nd. Note that ~n a MCS-II cluster 19 ~ :where~ th~e i~ no ~C~ tG ~upply the ~ource clust~
~: ~ 2~ ~ ~oDunica~ion b~twe~n nodes is unzffected bec~us~ th~ source : 21 aluster~address field~:~s~ not include~ in m~ssage~ ~xchanged by 22 ~ nodes~:in~t~e ~e~clu~ter~ :
23 ~ In F~ CS coD~u~ation~ where ~ hub is pr~sent, ~ultidrop 2~ ~ ;dat~ ~ rout~d ~r~ ~h~ r~ ving F~C go ~he ot~ F~ n t~e hub ~1~ the hub ~ bua. ~cket~ ara ~hen built ~nd tr~n~itted ~ro~
~':26 F~Cs in the hub to the FHCs in the ~estin~tion clu~ter~. ~s with ~: 27 general purpo~e asy~c nd interrupts, a PMC in the hub look~ at the ~: :
2~ destin~t~on clu~t~r~address to deter~ine if it s~ould build and 29 trans~it a pack~ o~er its high speed lin~. If the destination 30 : ~clu~er ~atches th~ address of the cluster connected to the ~MC's 31 serial dat l~nk, th~ F~C forwards the ~ultidrop data.
The MCS-II multidrop broadcast capability is also supported by FOMCS. When the FMC in a cluster receives a message over the local multidrop link that is to be broadcast to all nodes, it sends the message across its high speed link in packets which have a broadcast flag set. FMCs in a hub will always forward such packets regardless of the contents of the destination cluster field. All cluster FMCs receiving the broadcast message will transmit it over their local multidrop links when polled by the local masters.
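The hub forwarding rule described in the two preceding paragraphs can be summarized in a few lines. This is a minimal sketch, assuming a hypothetical packet structure with a destination cluster field and a broadcast flag; it is not the actual FMC firmware.

```c
/* Minimal model of the hub-side forwarding decision; structure and helper
 * names are assumptions for illustration. */
#include <stdbool.h>
#include <stdio.h>

struct fomcs_packet {
    unsigned char dst_cluster;
    bool          broadcast;    /* broadcast flag set by the originating FMC */
};

/* Each hub FMC knows which cluster sits at the far end of its serial link. */
static bool hub_fmc_should_forward(const struct fomcs_packet *p,
                                   unsigned char attached_cluster)
{
    if (p->broadcast)                          /* always forwarded by hub FMCs */
        return true;
    return p->dst_cluster == attached_cluster; /* otherwise match destination  */
}

int main(void)
{
    struct fomcs_packet unicast = { .dst_cluster = 3, .broadcast = false };
    struct fomcs_packet bcast   = { .dst_cluster = 0, .broadcast = true  };
    printf("%d %d %d\n",
           hub_fmc_should_forward(&unicast, 3),   /* 1: matches            */
           hub_fmc_should_forward(&unicast, 7),   /* 0: different cluster  */
           hub_fmc_should_forward(&bcast, 7));    /* 1: broadcast          */
    return 0;
}
```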
During configuration programming, a cluster FMC is programmed with a list of remote cluster addresses. When multidrop support is enabled, the FMC intercepts any message on the multidrop link which is addressed to a node in one of the clusters in the list. The local multidrop master can obtain the list from the FMC by sending it an inquiry message. The FMC responds by encoding the list into a remote-cluster-list message which it sends over the multidrop link.
The FMC supports an optional configuration which allows automatic failover to a secondary high speed serial link should the primary link go down. This high availability feature is illustrated in Figure 21.
The secondary FMCs in each cluster monitor the health of the primary FMCs. Each secondary FMC is configured for the same MC bus nodes as the primary it monitors, but the secondaries do not interact on the MC bus as long as the primaries are healthy. In other words, all memory write traffic between the clusters goes over the primary high speed link until a failure occurs. Likewise, all other types of packet traffic (i.e., async, interrupt, etc.) go over the primary link until a failure is detected.
To keep the secondary FMCs informed as to their health, the primaries periodically send their respective secondaries an "I'm okay" indication via a health-check link. The UART of a Motorola 68901 MFP on the FMC is used to implement the health check link. The primaries also periodically exchange test packets to determine if the primary link is still functioning.
If a secondary FMC does not receive an "I'm okay" indication within a specified time period since the last indication, or if the secondary receives a link failure indication from the primary, the secondary initiates the fail-over process by 1) forcing the primary FMC to cease operation, and 2) sending a failure-detected packet to the secondary in the remote cluster. The remote secondary FMC then forces its primary FMC to cease operation. The remote secondary then sends a fail-over-complete packet back to the secondary which detected the failure. Subsequent communication between the clusters occurs over the secondary high speed link.
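The following sketch models the secondary FMC's side of this fail-over trigger, assuming a hypothetical millisecond tick and stand-in helper routines for the offline signal and the failure-detected packet; it is not the actual firmware.

```c
/* Sketch of the secondary FMC's fail-over trigger under the assumptions
 * stated above. */
#include <stdbool.h>
#include <stdio.h>

struct secondary_state {
    unsigned long last_okay_ms;   /* time of last "I'm okay" indication */
    unsigned long timeout_ms;     /* maximum gap set at configuration   */
    bool          failed_over;
};

static void force_primary_offline(void) { puts("assert offline signal");     }
static void send_failure_detected(void) { puts("send failure-detected pkt"); }

static void secondary_tick(struct secondary_state *s, unsigned long now_ms,
                           bool link_failure_reported)
{
    if (s->failed_over)
        return;
    if (link_failure_reported || now_ms - s->last_okay_ms > s->timeout_ms) {
        force_primary_offline();   /* step 1: hold the primary offline        */
        send_failure_detected();   /* step 2: tell the remote secondary       */
        s->failed_over = true;     /* remote replies with fail-over-complete  */
    }
}

int main(void)
{
    struct secondary_state s = { .last_okay_ms = 0, .timeout_ms = 500 };
    secondary_tick(&s, 400, false);   /* still within the allowed gap  */
    secondary_tick(&s, 600, false);   /* gap exceeded: fail-over runs  */
    printf("failed_over=%d\n", s.failed_over);
    return 0;
}
```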
A secondary FMC forces a primary to cease operation by asserting a special additional signal which is included in the health-check link cable connecting the primary and secondary. When asserted, this signal places the primary FMC into a hardware controlled offline state. In this state, all I/O interfaces on the FMC are disabled so that the FMC is unable to assert any signals on the MC bus, transmit over the high speed link, transmit over any of its async links, or pulse any of its output interrupt lines. The effect is identical to that achieved by putting the FMC online/offline switch into the offline position.
The (old) primary FMC is held in this offline state until the secondary FMC (which is the new primary) is reset via a power cycle, hardware reset, or a received Reset command.
While the old primary FMC is being held in this offline state, it monitors the health-check link (if it is operational enough to do so) for the receipt of "I'm okay" characters from the new primary. If the old primary receives such characters, it reconfigures itself to assume the secondary role. Thus, when an attempt is made to return the old primary FMC to an online state, it behaves as a secondary and remains in a (firmware-controlled) offline state. This prevents the old primary from contending with the new primary on the MC bus.
The fail-over process can also be initiated manually via the configuration programming link. To cause a fail-over, a command is sent over the programming link to one of the secondary FMCs. The fail-over proceeds as described above and when it is complete, a response indicating such is returned via the programming link.
While the previous discussion relates to high availability of the high speed link between clusters, the high availability feature can be extended to the hub in a FOMCS star network. High availability in a hub is achieved by having a secondary FMC for each primary FMC in the hub. Thus, each cluster is connected to the hub by two high speed links, a primary and a secondary.
Fail-over to a secondary link is essentially the same as the cluster to cluster case.
Note that to really achieve high availability in a cluster, nodes which have async or interrupt connections to the primary FMC must have a redundant set of connections to the secondary. When the secondary FMC takes over, the nodes must switch over to the secondary async and interrupt connections. To facilitate this process, one of the output interrupt lines of a secondary FMC can be used to inform the node(s) that fail-over has occurred. During configuration programming of a secondary FMC, an output interrupt line can be designated for this purpose.
The fail-over process is not automatically reversible. Once the secondary FMCs have taken over, they become in effect the primaries. When the previous primary FMCs and/or high speed link are repaired, they must be programmed to behave as the secondaries (if they have not already reconfigured themselves as secondaries).
Configuration of an FMC as a primary or secondary occurs during configuration programming. The maximum time period that a secondary will allow between "I'm okay" indications from the primary is also established. The high availability feature can also be disabled altogether.
Another usage of the microprocessor is to allow diagnosis of an individual FMC and/or high speed serial link without bringing the entire FOMCS network offline. This capability is particularly useful in the star network environment where it is undesirable to shut down the entire hub just to diagnose a problem between the hub and a particular cluster. If the problem turns out to be the high speed link or the FMC in the cluster, it can be corrected without taking down the entire hub. Of course, software resynchronization will probably be necessary between the affected cluster and the other clusters it was communicating with, but the rest of the clusters connected to the hub can continue to communicate without interruption.
Special operational modes, referred to as hub diagnostic and cluster diagnostic modes, are available to allow the FMC to be isolated from the high speed coax or fiber link for testing purposes. In these modes, the FMC operation is identical to normal hub or cluster mode except that the output of the Gazelle Hot Rod Tx chip is directly connected to the input of the Rx chip. To configure the FMC to operate in hub or cluster diagnostic mode, the Set FMC Online/Offline command is used to place the FMC in a firmware-controlled offline state. Then, the desired mode is selected via the Set Operational Mode command and the FMC is returned to an online state by means of a second Set FMC Online/Offline command.
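A minimal sketch of that three-command sequence is shown below. The command names come from the text; the state machine, enums, and C identifiers are illustrative assumptions, not the real configuration interface.

```c
/* Toy model of the offline -> set-mode -> online sequence described above. */
#include <stdio.h>

enum fmc_state { FMC_ONLINE_NORMAL, FMC_FW_OFFLINE, FMC_ONLINE_DIAGNOSTIC };
enum fmc_mode  { MODE_NORMAL, MODE_HUB_DIAG, MODE_CLUSTER_DIAG };

struct fmc { enum fmc_state state; enum fmc_mode mode; };

static void set_online_offline(struct fmc *f, int online)
{
    f->state = online
        ? (f->mode == MODE_NORMAL ? FMC_ONLINE_NORMAL : FMC_ONLINE_DIAGNOSTIC)
        : FMC_FW_OFFLINE;
}

static int set_operational_mode(struct fmc *f, enum fmc_mode m)
{
    if (f->state != FMC_FW_OFFLINE)    /* mode changes only while offline */
        return -1;
    f->mode = m;
    return 0;
}

int main(void)
{
    struct fmc f = { FMC_ONLINE_NORMAL, MODE_NORMAL };
    set_online_offline(&f, 0);                 /* Set FMC Online/Offline (offline) */
    set_operational_mode(&f, MODE_HUB_DIAG);   /* Set Operational Mode             */
    set_online_offline(&f, 1);                 /* second Set FMC Online/Offline    */
    printf("state=%d mode=%d\n", f.state, f.mode);
    return 0;
}
```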
Individual functional areas of the FMC can be tested by sending Specify Diagnostic Loopback Data commands to the FMC. Variations of the Specify Diagnostic Loopback Data command can be used to invoke memory write transfer loopback, async data loopback, and interrupt loopback. An FMC will only accept such commands when in the firmware-controlled offline state.
The FMC supports three diagnostic loopback modes for memory write transfers: internal, external, and MC bus loopback. Internal loopback loops data through the FMC from the MC bus input latch to the MC bus output latch. External loopback tests the same path except that the data is actually transmitted over the high speed serial link and looped back via an external loopback cable connection. Internal loopback mode is shown in Figure 22. External loopback mode is shown in Figure 23. MC bus loopback loops data from the MC bus output latch to the MC bus input latch via the MC bus. This loopback mode is illustrated in Figure 24.
Note that when internal or external loopback is performed, the FMC hit/translation RAM must also be programmed appropriately to achieve the desired effect. Loopback modes can be used to specifically test reflection region or address translation logic, or the hit/translation RAM can be programmed such that all regions are reflected and address translation does not actually do anything (i.e., the original address and the translated address are the same).
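The sketch below shows one way such identity programming could look, assuming an invented region granularity and table layout; the real hit/translation RAM format is not specified here.

```c
/* Toy model of programming a hit/translation table so that every region is
 * reflected and the translation is the identity. */
#include <stdint.h>
#include <stdio.h>

#define REGIONS      256
#define REGION_SHIFT 16            /* assume 64 KB regions for the sketch */

struct xlate_entry { uint8_t hit; uint32_t new_region; };
static struct xlate_entry xlate_ram[REGIONS];

static void program_identity_translation(void)
{
    for (uint32_t r = 0; r < REGIONS; r++) {
        xlate_ram[r].hit = 1;          /* every region is reflected       */
        xlate_ram[r].new_region = r;   /* translated address == original  */
    }
}

static uint32_t translate(uint32_t addr)
{
    uint32_t r = (addr >> REGION_SHIFT) % REGIONS;
    if (!xlate_ram[r].hit)
        return addr;
    return (xlate_ram[r].new_region << REGION_SHIFT)
         | (addr & ((1u << REGION_SHIFT) - 1));
}

int main(void)
{
    program_identity_translation();
    uint32_t a = 0x00427F10u;
    printf("0x%08X -> 0x%08X\n", (unsigned)a, (unsigned)translate(a));
    return 0;
}
```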
In internal loopback mode, the MC bus interface of the FMC is disabled and the Gazelle Hot Rod chips are configured such that the serial output of the transmitter chip is connected directly to the serial input of the receiver chip. The Specify Diagnostic Loopback Data command which initiated the loopback also contains an address and data pattern to be looped through the FMC hardware. The FMC inserts the address and data pattern into the MC bus input latch. The address and data proceed through the Tx path and at the end of the Tx path are transmitted serially in a packet. The packet is looped back through the Rx path and the address and data end up in the MC bus output latch. The FMC removes the address and data from the latch and includes them in the response sent back via the programming link.
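A simplified model of the internal loopback check follows: the command supplies an address/data pattern, the pattern traverses a stand-in for the Tx-to-Rx path, and the contents of the output latch are compared against what was sent. Names and structure are assumptions, not the actual firmware.

```c
/* Illustrative model of the internal memory-write loopback check. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct latch { uint32_t addr; uint32_t data; };

/* Stand-in for the Tx path -> serial packet -> Rx path traversal. */
static struct latch loop_through_fmc(struct latch in)
{
    return in;   /* a healthy FMC delivers the pattern unchanged */
}

static bool internal_loopback_test(uint32_t addr, uint32_t data)
{
    struct latch input  = { addr, data };           /* MC bus input latch  */
    struct latch output = loop_through_fmc(input);  /* MC bus output latch */
    return output.addr == addr && output.data == data;
}

int main(void)
{
    printf("loopback %s\n",
           internal_loopback_test(0x00400000u, 0xA5A55A5Au)
               ? "passed" : "failed");
    return 0;
}
```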
In external loopback mode, the MC bus interface of the FMC is disabled. Again, the Specify Diagnostic Loopback Data command which initiated the loopback contains an address and data pattern to be looped through the FMC hardware and the external loopback connection. The FMC inserts the address and data pattern into the MC bus input latch. The address and data proceed through the Tx path and at the end of the Tx path are transmitted serially in a packet. The packet is looped through the external loopback connection, received again by the FMC, and moves through the Rx path to the MC bus output latch. A response is sent back via the programming link containing the address and data pattern read from the latch.
In MC bus loopback mode, the high speed serial link interface is disabled and the MC bus interface of the FMC is enabled but functions somewhat differently from normal. To perform the MC bus loopback, the FMC inserts the address and data pattern from the Specify Diagnostic Loopback Data command into the MC bus output latch. The FMC MC bus interface hardware then requests the bus and, when it receives a grant, generates a transfer on the MC bus with the data valid signal deasserted. This causes any nodes on the bus to ignore the transfer. The FMC MC bus interface hardware, however, is configured to receive its own transfer. The looped data is read from the MC bus input latch and returned in the response sent back via the programming link.
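The sketch below captures the MC bus loopback idea: the transfer is driven with the data valid signal deasserted so ordinary nodes ignore it, while the FMC's own receive side still captures it. The signal and structure names are assumptions for illustration.

```c
/* Toy model of the MC bus loopback behavior described above. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct mc_transfer { uint32_t addr; uint32_t data; bool data_valid; };

/* What an ordinary node port does with a bus transfer. */
static bool node_accepts(const struct mc_transfer *t)
{
    return t->data_valid;       /* ignored when data valid is deasserted */
}

/* What the FMC's own receive latch does during MC bus loopback. */
static struct mc_transfer fmc_receive_own(const struct mc_transfer *t)
{
    return *t;                  /* configured to capture its own transfer */
}

int main(void)
{
    struct mc_transfer t = { 0x1000u, 0xDEADBEEFu, false };
    struct mc_transfer looped = fmc_receive_own(&t);
    printf("node accepts: %d, FMC looped data: 0x%08X\n",
           node_accepts(&t), (unsigned)looped.data);
    return 0;
}
```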
The FMC supports three diagnostic loopback modes for general purpose async data: internal, external, and port-port. The three varieties are illustrated in Figure 25. The FMC performs the requested async loopback as the result of a Specify Diagnostic Loopback Data command received over the configuration programming link.
The Specify Diagnostic Loopback Data command informs the FMC as to which port(s) will participate in the loopback test and provides the data to be looped. If an async port is selected for internal loopback, the FMC initializes the port hardware to wrap transmitted data back to the receive side of the port. The FMC then causes the port to transmit the data. If the port is functioning correctly, the FMC immediately receives the data back again. The received data is passed back over the programming link in the command response. If no data is received, the FMC returns a response indicating such.
External loopback requires that a loopback plug be connected to the port which wraps transmitted data back to the receiver pin of the port. The FMC initializes the port hardware for normal operation and transmits the data via the specified port. Again, the data should be immediately received from the same port. The response to the command contains the received data or indicates failure of the loopback operation.
Port-port loopback requires that an RS-232 cable connect the two selected ports. The FMC initializes the ports for normal operation and transmits the data via the port specified in the Specify Diagnostic Loopback Data command. The response to the command contains the data received from the other port or indicates failure of the loopback operation.
The FMC supports external loopback for the interrupt lines. A Specify Diagnostic Loopback Data command to the FMC selects a pair of interrupt lines that will participate in the loopback, one input and one output. Note that loopback requires that a wire connect the selected interrupt lines. To perform the loopback, the FMC generates a pulse on the output line of the pair and reports in its command response whether or not a pulse was detected on the input line.
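A toy model of the interrupt-line loopback check follows, with the jumper wire represented by copying the output level to the input level; the pin abstraction is hypothetical.

```c
/* Sketch of the interrupt-line loopback check: pulse the selected output
 * line and report whether the selected input line saw the pulse. */
#include <stdbool.h>
#include <stdio.h>

static bool wire_connected = true;   /* models the required jumper wire */
static bool input_level;

static void pulse_output(void)
{
    input_level = wire_connected;    /* pulse appears on the input if wired */
}

static bool interrupt_loopback_test(void)
{
    input_level = false;
    pulse_output();
    return input_level;              /* reported in the command response */
}

int main(void)
{
    printf("pulse detected: %d\n", interrupt_loopback_test());
    wire_connected = false;          /* simulate a missing wire */
    printf("pulse detected: %d\n", interrupt_loopback_test());
    return 0;
}
```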
The FMC supports two diagnostic loopback modes for the MCS-II multidrop console link: internal and external. Internal loopback is performed exactly as with the general purpose async ports. External loopback is functionally identical to the method used with the general purpose async ports, but because of the unique nature of the multidrop link, no external loopback cable is required.
However, the external loopback test will drive the multidrop link. Therefore, if testing is desired without bringing the entire multidrop link offline, the MCS cable which carries the multidrop link should be unplugged from the FMC chassis backplane prior to initiating diagnostic tests and replaced when testing is concluded.
The FMC supports two diagnostic loopback modes for the high availability health check link: internal and external. Both modes are performed in the same manner as with the general purpose async ports.
Each packet transferred across the high speed serial link in a FOMCS configuration contains an error detection code (EDC) which allows the receiving FMC to determine if an error occurred during transmission. Packets received with good EDC are also checked for illegal or illogical settings. Parity is checked for transfers received across the MC bus and transfers received with good parity are also checked for illegal or illogical bit settings.
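Because the text does not specify which error detection code is used, the sketch below substitutes a plain ones'-complement checksum to show the receive-side sequence: a bad EDC rejects the packet outright, and a packet with good EDC is still screened for illegal field settings. The packet layout and field check are invented for illustration.

```c
/* Receive-side EDC and sanity checking, with a stand-in checksum. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct rx_packet {
    uint8_t  type;          /* packet type field                      */
    uint8_t  payload[16];
    uint16_t edc;           /* error detection code carried in packet */
};

static uint16_t compute_edc(const struct rx_packet *p)
{
    uint16_t sum = p->type;
    for (size_t i = 0; i < sizeof p->payload; i++)
        sum += p->payload[i];
    return (uint16_t)~sum;
}

static bool packet_is_acceptable(const struct rx_packet *p)
{
    if (compute_edc(p) != p->edc)
        return false;               /* transmission error detected          */
    if (p->type > 4)
        return false;               /* example of an illegal/illogical field */
    return true;
}

int main(void)
{
    struct rx_packet p = { .type = 1, .payload = {0xAA, 0x55} };
    p.edc = compute_edc(&p);
    printf("good packet accepted: %d\n", packet_is_acceptable(&p));
    p.payload[0] ^= 0x01;           /* simulate a bit error in transit */
    printf("corrupted packet accepted: %d\n", packet_is_acceptable(&p));
    return 0;
}
```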
For packet errors and transfer errors detected by the receiving FMC, the FOMCS philosophy is to report the error to the node(s) in the nearest cluster rather than reporting the error to all nodes in the network. No attempt is made to handle errors in memory write transfer packets differently from errors detected in other types of packets because once an error is detected in a packet, the entire contents of the packet are suspect, including the packet type field.
The method of reporting errors is illustrated in Figures 26 and 27. When an FMC in a cluster detects an error in a packet received over the high speed link or detects an error in an MC bus transfer, it reports the error to the node(s) in the cluster by: 1) forcing a parity error on the MC bus and/or 2) directly interrupting the node(s). To force a parity error, the FMC arbitrates for the MC bus and when granted access, generates a transfer where the parity driven on the bus does not match the address and data. This causes the MC port of each node to detect a transfer with bad parity and (hopefully) report it to the node via a parity error interrupt. The direct interrupt approach utilizes one or more of the FMC's eight output interrupt lines.
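The parity-error reporting path can be pictured as below. Even parity computed over the address and data words is assumed purely for illustration; the actual MC bus parity scheme is not given in the text.

```c
/* Sketch of the "force a parity error" idea: compute the parity the transfer
 * would normally carry, then drive the opposite value so listening node
 * ports flag the transfer. */
#include <stdint.h>
#include <stdio.h>

static unsigned even_parity32(uint32_t w)
{
    unsigned p = 0;
    while (w) { p ^= (w & 1u); w >>= 1; }
    return p;                       /* 1 if an odd number of bits are set */
}

struct mc_bus_transfer { uint32_t addr, data; unsigned parity; };

static struct mc_bus_transfer build_bad_parity_transfer(uint32_t addr,
                                                        uint32_t data)
{
    struct mc_bus_transfer t = { addr, data, 0 };
    /* parity over addr and data combined, then deliberately inverted */
    t.parity = even_parity32(addr ^ data) ^ 1u;
    return t;
}

int main(void)
{
    struct mc_bus_transfer t = build_bad_parity_transfer(0x2000u, 0x12345678u);
    unsigned expected = even_parity32(t.addr ^ t.data);
    printf("driven parity %u, expected %u -> node reports parity error\n",
           t.parity, expected);
    return 0;
}
```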
When an FMC in a hub detects an error in a transfer received over the hub MC bus or detects an error in a packet received over its high speed link, it builds an error packet and transmits it to the FMC at the other end of the high speed link. Receipt of the error packet causes the FMC in the cluster to report an error as described earlier. Note that whether or not the FMC in the cluster receives the packet correctly, an error will be reported.
To assist with diagnosis of packet transmission or MC bus transfer errors, the FMC keeps a copy of the last bad packet (if any) and the last bad MC bus transfer (if any) that it received. These copies can be accessed at any time via the configuration programming link.
During configuration programming, the error reporting behavior of an FMC in a cluster is established. The FMC can be programmed to report errors by generating MC bus parity errors and/or by generating interrupts. If the interrupt approach is selected, one or more output interrupt lines of the FMC can be programmed as outputs for error signals. Each of the selected lines can be further qualified as to when the line is pulsed: 1) when a bad MC bus transfer is detected (or when an error packet is received indicating that the remote FMC detected a bad transfer), 2) when a bad packet is received over the high speed link (or when an error packet is received indicating that the remote FMC detected a bad packet), or 3) when either a bad transfer or a bad packet is detected.
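A sketch of that per-line qualification follows, with an enum for the three trigger conditions; the decision helper and names are illustrative, not the real configuration interface.

```c
/* Toy model of deciding which programmed error lines to pulse. */
#include <stdbool.h>
#include <stdio.h>

enum err_qualifier { ERR_ON_BAD_TRANSFER, ERR_ON_BAD_PACKET, ERR_ON_EITHER };

struct error_line { int line_no; enum err_qualifier when; };

static bool should_pulse(const struct error_line *l,
                         bool bad_transfer, bool bad_packet)
{
    switch (l->when) {
    case ERR_ON_BAD_TRANSFER: return bad_transfer;
    case ERR_ON_BAD_PACKET:   return bad_packet;
    case ERR_ON_EITHER:       return bad_transfer || bad_packet;
    }
    return false;
}

int main(void)
{
    struct error_line lines[] = {
        { 0, ERR_ON_BAD_TRANSFER },
        { 1, ERR_ON_BAD_PACKET },
        { 2, ERR_ON_EITHER },
    };
    /* a bad packet arrived while MC bus transfers were clean */
    for (int i = 0; i < 3; i++)
        printf("line %d pulsed: %d\n", lines[i].line_no,
               should_pulse(&lines[i], false, true));
    return 0;
}
```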
Although the present invention has been shown and described with reference to preferred embodiments, changes and modifications are possible to those skilled in the art which do not depart from the spirit and contemplation of the inventive concepts taught herein. Such are deemed to fall within the purview of the invention as claimed.


Claims (10)

We claim:
1. A system comprising:
a first and second set of plurality of nodes;
a first data bus associated with and connecting said first set of plurality of nodes;
a second data bus associated with and connecting said second set of plurality of nodes;
each node of said set of plurality of nodes including a processing unit, a memory, a bus coupled to the processing unit and memory, and a sensor means for sensing a write to the memory and for transmitting the sensed write on said associated data bus;
first converter means connected to the first data bus for converting data on said first data bus to corresponding optical signals and received optical signals to corresponding data for transmission on said first data bus;
second converter means connected to the second data bus for converting data on said second data bus to corresponding optical signals and received optical signals to corresponding data for transmission on said second data bus; and fiber optic means for optically transmitting data from one converter means to the other converter means.
2. The system of claim 1 wherein each said converter means includes conversion means for converting parallel data to serial optical signals and vice versa.
3. The system of claim 1, wherein each said converter means includes a FIFO for temporarily storing data.
4. The system of claim 1 wherein each node further includes I/O
means for introducing I/O data into memory and wherein said sensor means responsive to a write to memory of I/O data transmits same on said associated data bus.
5. A system for connecting memory coupled systems, comprising:

a plurality of nodes;
a first data bus connecting a first group of said plurality of nodes;
a second data bus connecting a second group of said plurality of nodes;
first converter means connected to the first data bus;
second converter means connected to the second data bus;
first fiber optic means for carrying transmitted data from the first converter means to the second converter means;
second fiber optic means for carrying transmitted data from the second converter means to the first converter means;
the first and second converter means each comprising:
input latch means for receiving data from a respective data bus;
hit and translation RAM means connected to the input latch means for determining the destination of data received from the data bus;
first micro-interface means connected to the input latch means for controlling the hit and translation RAM means;
transmission FIFO means connected to the input latch means for latching the data received from the data bus;
error detection means for determining if an error exists in the data in the transmission FIFO means;
first and second transmission latches connected to the transmission FIFO means;
transmitter means connected to the first and second transmission latches for transmitting the data to another converter means;
receiver means for receiving data transmitted from another converter means;
first and second receiver latches for latching the received data;
error detection means connected to the first and second latch means for checking if an error exists in the received data;
second micro interface means for testing the received data if the check by the error detection means fails;
receive FIFO means connected to the first and second receive latch means for holding the received data after checking by the error detection means;
output latch means for transmitting the received data to the respective data bus.
6. A system as claimed in Claim 5, further comprising first and second backup controllers for transmitting data between memory coupled systems upon a determination the first and second controllers are not working properly.
7. A system as claimed in Claim 6, wherein said input, output, transmit and receive latches of each of said plurality of controllers are able to be accessed in both a parallel and serial fashion.
8. A system as claimed in claim 1, wherein data is transmitted through the optical fiber means in 80 bit data frames.
9. A method comprising the steps of:
establishing first and second data links;
establishing first and second sets of nodes, each node including a processing unit, a memory, a bus coupled to the processing unit and memory and a sensing means for sensing a write to memory;
sensing a write to a memory in one of said nodes of said first set;
transmitting said sensed write on said first data link; and sensing the sensed write transmitted of said first data link and optically transmitting same to a remote point whereupon it is transmitted on said second data link to the memory of one of the nodes of said second set without intervention of the processing unit of said one node of said second set.
10. The method of claim 9 comprising the further steps of:
writing I/O data from an I/O source into the memory of a node in one of the sets of nodes;
sensing the written I/O data and transmitting same on the data link associated with said node; and optically transmitting the written I/O data to a memory in a node of the other sets of nodes via its associated data link.
CA002132097A 1992-03-25 1993-03-25 Fiber optic memory coupling system Abandoned CA2132097A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US85757892A 1992-03-25 1992-03-25
US07/857,578 1992-03-25

Publications (1)

Publication Number Publication Date
CA2132097A1 true CA2132097A1 (en) 1993-09-30

Family

ID=25326303

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002132097A Abandoned CA2132097A1 (en) 1992-03-25 1993-03-25 Fiber optic memory coupling system

Country Status (8)

Country Link
US (1) US5544319A (en)
EP (1) EP0632913B1 (en)
JP (1) JPH07505491A (en)
AU (1) AU3936693A (en)
CA (1) CA2132097A1 (en)
DE (1) DE69331053T2 (en)
ES (1) ES2170066T3 (en)
WO (1) WO1993019422A1 (en)

Families Citing this family (67)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3155144B2 (en) * 1994-03-25 2001-04-09 ローム株式会社 Data transfer method and device
US5603064A (en) * 1994-10-27 1997-02-11 Hewlett-Packard Company Channel module for a fiber optic switch with bit sliced memory architecture for data frame storage
CA2182539A1 (en) * 1995-09-29 1997-03-30 Nandakumar Natarajan Method and apparatus for buffer management
KR19980064365A (en) * 1996-12-19 1998-10-07 윌리엄비.켐플러 Apparatus and method for distributing address and data to memory modules
FR2759798B1 (en) * 1997-02-19 2001-08-24 Bull Sa METHOD FOR INITIALIZING A SERIAL LINK BETWEEN TWO INTEGRATED CIRCUITS INCLUDING A PARALLEL SERIAL PORT AND DEVICE FOR IMPLEMENTING THE METHOD
US6202108B1 (en) 1997-03-13 2001-03-13 Bull S.A. Process and system for initializing a serial link between two integrated circuits comprising a parallel-serial port using two clocks with different frequencies
US7103653B2 (en) * 2000-06-05 2006-09-05 Fujitsu Limited Storage area network management system, method, and computer-readable medium
US6791555B1 (en) 2000-06-23 2004-09-14 Micron Technology, Inc. Apparatus and method for distributed memory control in a graphics processing system
US7941056B2 (en) 2001-08-30 2011-05-10 Micron Technology, Inc. Optical interconnect in high-speed memory systems
US7069464B2 (en) * 2001-11-21 2006-06-27 Interdigital Technology Corporation Hybrid parallel/serial bus interface
CA2467844C (en) * 2001-11-21 2008-04-01 Interdigital Technology Corporation Method employed by a base station for transferring data
US20030101312A1 (en) * 2001-11-26 2003-05-29 Doan Trung T. Machine state storage apparatus and method
US7024489B2 (en) * 2001-12-31 2006-04-04 Tippingpoint Technologies, Inc. System and method for disparate physical interface conversion
US7133972B2 (en) 2002-06-07 2006-11-07 Micron Technology, Inc. Memory hub with internal cache and/or memory access prediction
JP4671688B2 (en) * 2002-06-24 2011-04-20 サムスン エレクトロニクス カンパニー リミテッド Memory system comprising a memory module having a path for transmitting high-speed data and a path for transmitting low-speed data
US7200024B2 (en) 2002-08-02 2007-04-03 Micron Technology, Inc. System and method for optically interconnecting memory devices
US7117316B2 (en) 2002-08-05 2006-10-03 Micron Technology, Inc. Memory hub and access method having internal row caching
US7254331B2 (en) 2002-08-09 2007-08-07 Micron Technology, Inc. System and method for multiple bit optical data transmission in memory systems
US7149874B2 (en) 2002-08-16 2006-12-12 Micron Technology, Inc. Memory hub bypass circuit and method
US6820181B2 (en) 2002-08-29 2004-11-16 Micron Technology, Inc. Method and system for controlling memory accesses to memory modules having a memory hub architecture
US7836252B2 (en) * 2002-08-29 2010-11-16 Micron Technology, Inc. System and method for optimizing interconnections of memory devices in a multichip module
US7102907B2 (en) 2002-09-09 2006-09-05 Micron Technology, Inc. Wavelength division multiplexed memory module, memory system and method
WO2004034641A1 (en) * 2002-10-09 2004-04-22 Xyratex Technology Limited Connection apparatus and method for network testers and analysers
US7028147B2 (en) * 2002-12-13 2006-04-11 Sun Microsystems, Inc. System and method for efficiently and reliably performing write cache mirroring
US6898687B2 (en) * 2002-12-13 2005-05-24 Sun Microsystems, Inc. System and method for synchronizing access to shared resources
US6795850B2 (en) * 2002-12-13 2004-09-21 Sun Microsystems, Inc. System and method for sharing memory among multiple storage device controllers
US6917967B2 (en) * 2002-12-13 2005-07-12 Sun Microsystems, Inc. System and method for implementing shared memory regions in distributed shared memory systems
US7245145B2 (en) 2003-06-11 2007-07-17 Micron Technology, Inc. Memory module and method having improved signal routing topology
US7120727B2 (en) 2003-06-19 2006-10-10 Micron Technology, Inc. Reconfigurable memory module and method
US7428644B2 (en) 2003-06-20 2008-09-23 Micron Technology, Inc. System and method for selective memory module power management
US7260685B2 (en) 2003-06-20 2007-08-21 Micron Technology, Inc. Memory hub and access method having internal prefetch buffers
US7107415B2 (en) 2003-06-20 2006-09-12 Micron Technology, Inc. Posted write buffers and methods of posting write requests in memory modules
US9529762B2 (en) * 2003-06-30 2016-12-27 Becton, Dickinson And Company Self powered serial-to-serial or USB-to-serial cable with loopback and isolation
US20050002728A1 (en) * 2003-07-01 2005-01-06 Isaac Weiser Plastic connector for connecting parts and method therefor
US7389364B2 (en) 2003-07-22 2008-06-17 Micron Technology, Inc. Apparatus and method for direct memory access in a hub-based memory system
US7210059B2 (en) 2003-08-19 2007-04-24 Micron Technology, Inc. System and method for on-board diagnostics of memory modules
US7133991B2 (en) 2003-08-20 2006-11-07 Micron Technology, Inc. Method and system for capturing and bypassing memory transactions in a hub-based memory system
US7136958B2 (en) 2003-08-28 2006-11-14 Micron Technology, Inc. Multiple processor system and method including multiple memory hub modules
US7310752B2 (en) 2003-09-12 2007-12-18 Micron Technology, Inc. System and method for on-board timing margin testing of memory modules
US7194593B2 (en) 2003-09-18 2007-03-20 Micron Technology, Inc. Memory hub with integrated non-volatile memory
US7120743B2 (en) * 2003-10-20 2006-10-10 Micron Technology, Inc. Arbitration system and method for memory responses in a hub-based memory system
US7216196B2 (en) * 2003-12-29 2007-05-08 Micron Technology, Inc. Memory hub and method for memory system performance monitoring
US7330992B2 (en) 2003-12-29 2008-02-12 Micron Technology, Inc. System and method for read synchronization of memory modules
US7188219B2 (en) 2004-01-30 2007-03-06 Micron Technology, Inc. Buffer control system and method for a memory system having outstanding read and write request buffers
US7788451B2 (en) 2004-02-05 2010-08-31 Micron Technology, Inc. Apparatus and method for data bypass for a bi-directional data bus in a hub-based memory sub-system
US7412574B2 (en) * 2004-02-05 2008-08-12 Micron Technology, Inc. System and method for arbitration of memory responses in a hub-based memory system
US7181584B2 (en) 2004-02-05 2007-02-20 Micron Technology, Inc. Dynamic command and/or address mirroring system and method for memory modules
US7366864B2 (en) 2004-03-08 2008-04-29 Micron Technology, Inc. Memory hub architecture having programmable lane widths
US7257683B2 (en) 2004-03-24 2007-08-14 Micron Technology, Inc. Memory arbitration system and method having an arbitration packet protocol
US7120723B2 (en) 2004-03-25 2006-10-10 Micron Technology, Inc. System and method for memory hub-based expansion bus
US7213082B2 (en) * 2004-03-29 2007-05-01 Micron Technology, Inc. Memory hub and method for providing memory sequencing hints
US7447240B2 (en) 2004-03-29 2008-11-04 Micron Technology, Inc. Method and system for synchronizing communications links in a hub-based memory system
US6980042B2 (en) 2004-04-05 2005-12-27 Micron Technology, Inc. Delay line synchronizer apparatus and method
US7590797B2 (en) 2004-04-08 2009-09-15 Micron Technology, Inc. System and method for optimizing interconnections of components in a multichip memory module
US7162567B2 (en) * 2004-05-14 2007-01-09 Micron Technology, Inc. Memory hub and method for memory sequencing
US7222213B2 (en) * 2004-05-17 2007-05-22 Micron Technology, Inc. System and method for communicating the synchronization status of memory modules during initialization of the memory modules
US7363419B2 (en) 2004-05-28 2008-04-22 Micron Technology, Inc. Method and system for terminating write commands in a hub-based memory system
US7310748B2 (en) 2004-06-04 2007-12-18 Micron Technology, Inc. Memory hub tester interface and method for use thereof
US7519788B2 (en) 2004-06-04 2009-04-14 Micron Technology, Inc. System and method for an asynchronous data buffer having buffer write and read pointers
US7392331B2 (en) 2004-08-31 2008-06-24 Micron Technology, Inc. System and method for transmitting data packets in a computer system having a memory hub architecture
US9319282B2 (en) * 2005-02-28 2016-04-19 Microsoft Technology Licensing, Llc Discovering and monitoring server clusters
US20070022257A1 (en) * 2005-07-19 2007-01-25 Via Technologies, Inc. Data bus logical bypass mechanism
US7904789B1 (en) * 2006-03-31 2011-03-08 Guillermo Rozas Techniques for detecting and correcting errors in a memory device
US7801141B2 (en) * 2008-07-25 2010-09-21 Micrel, Inc. True ring networks with gateway connections using MAC source address filtering
KR102035108B1 (en) * 2013-05-20 2019-10-23 에스케이하이닉스 주식회사 Semiconductor system
US10484139B2 (en) * 2014-09-19 2019-11-19 Lenovo Enterprise Solutions (Singapore) Pte. Ltd. Address verification on a bus
CN115905036A (en) * 2021-09-30 2023-04-04 华为技术有限公司 Data access system, method and related equipment

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4269221A (en) * 1979-08-01 1981-05-26 Adams Harold R Valve stem lock
FR2469751A1 (en) * 1979-11-07 1981-05-22 Philips Data Syst SYSTEM INTERCOMMUNICATION PROCESSOR FOR USE IN A DISTRIBUTED DATA PROCESSING SYSTEM
US4356546A (en) * 1980-02-05 1982-10-26 The Bendix Corporation Fault-tolerant multi-computer system
FR2476349A1 (en) * 1980-02-15 1981-08-21 Philips Ind Commerciale DISTRIBUTED DATA PROCESSING SYSTEM
US4396995A (en) * 1981-02-25 1983-08-02 Ncr Corporation Adapter for interfacing between two buses
NL8400186A (en) * 1984-01-20 1985-08-16 Philips Nv PROCESSOR SYSTEM CONTAINING A NUMBER OF STATIONS CONNECTED BY A COMMUNICATION NETWORK AND STATION FOR USE IN SUCH A PROCESSOR SYSTEM.
GB2156554B (en) * 1984-03-10 1987-07-29 Rediffusion Simulation Ltd Processing system with shared data
US4675861A (en) * 1984-11-28 1987-06-23 Adc Telecommunications, Inc. Fiber optic multiplexer
US4811210A (en) * 1985-11-27 1989-03-07 Texas Instruments Incorporated A plurality of optical crossbar switches and exchange switches for parallel processor computer
US4748617A (en) * 1985-12-20 1988-05-31 Network Systems Corporation Very high-speed digital data bus
DE3788826T2 (en) * 1986-06-30 1994-05-19 Encore Computer Corp Method and device for sharing information between a plurality of processing units.
US4935894A (en) * 1987-08-31 1990-06-19 Motorola, Inc. Multi-processor, multi-bus system with bus interface comprising FIFO register stocks for receiving and transmitting data and control information
US5276806A (en) * 1988-09-19 1994-01-04 Princeton University Oblivious memory computer networking
US4933930A (en) * 1988-10-31 1990-06-12 International Business Machines Corporation High speed switch as for an optical communication system
US5321813A (en) * 1991-05-01 1994-06-14 Teradata Corporation Reconfigurable, fault tolerant, multistage interconnect network and protocol

Also Published As

Publication number Publication date
AU3936693A (en) 1993-10-21
US5544319A (en) 1996-08-06
EP0632913A1 (en) 1995-01-11
ES2170066T3 (en) 2002-08-01
DE69331053T2 (en) 2002-07-04
EP0632913A4 (en) 1997-05-02
EP0632913B1 (en) 2001-10-31
WO1993019422A1 (en) 1993-09-30
DE69331053D1 (en) 2001-12-06
JPH07505491A (en) 1995-06-15

Similar Documents

Publication Publication Date Title
CA2132097A1 (en) Fiber optic memory coupling system
JP2886173B2 (en) adapter
CA1185375A (en) Dual path bus structure for computer interconnection
KR100611268B1 (en) An enhanced general input/output architecture and related methods for establishing virtual channels therein
FI100834B (en) Data communication processor for packet switching networks
US4363094A (en) Communications processor
US7996574B2 (en) Messaging mechanism for inter processor communication
CN100357922C (en) A general input/output architecture, protocol and related methods to implement flow control
CA1252574A (en) Local area network special function frames
US5245703A (en) Data processing system with multiple communication buses and protocols
US6279050B1 (en) Data transfer apparatus having upper, lower, middle state machines, with middle state machine arbitrating among lower state machine side requesters including selective assembly/disassembly requests
US6317805B1 (en) Data transfer interface having protocol conversion device and upper, lower, middle machines: with middle machine arbitrating among lower machine side requesters including selective assembly/disassembly requests
US6134647A (en) Computing system having multiple nodes coupled in series in a closed loop
CN100573499C (en) Be used for fixed-latency interconnect is carried out the method and apparatus that lock-step is handled
US20020007428A1 (en) Data assembler/disassembler
JP2004318901A (en) High-speed control and data bus system mutually between data processing modules
CN1154166A (en) PCI to ISA interrupt protocol converter and selection mechanism
CN100422969C (en) Method and appts. for communicating transaction types between hubs in computer system
EP1609071B1 (en) Data storage system
US20010052056A1 (en) Novel multiprocessor distributed memory system and board and methods therefor
US20080313365A1 (en) Controlling write transactions between initiators and recipients via interconnect logic
CN100445973C (en) Method for arbitrating bus control right and its arbitrator
JPS61131060A (en) Network control system
US6901475B2 (en) Link bus for a hub based computer architecture
US20070011548A1 (en) Centralized error signaling and logging

Legal Events

Date Code Title Description
EEER Examination request
FZDE Discontinued