CA2087735A1 - System for high-level virtual computer with heterogeneous operating systems - Google Patents

System for high-level virtual computer with heterogeneous operating systems

Info

Publication number
CA2087735A1
CA2087735A1 CA002087735A
Authority
CA
Canada
Prior art keywords
computer
target
program
computers
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002087735A
Other languages
French (fr)
Inventor
Yuan Shi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Temple University of Commonwealth System of Higher Education
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Publication of CA2087735A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45537 Provision of facilities of other operating environments, e.g. WINE

Abstract

A system facilitates a high level virtual computer (HLVC) in a heterogeneous hardware and software environment. A user specifies the hardware and software configuration of a virtual computer which is routed into a configurator (12), which activates a distributed process controller (14) and supporting components in the desired virtual computer. Each processor in the virtual computer is equipped with a language injection library (24) and runtime servers (daemons (20)). In one embodiment, the language injection library facilitates transference of data among the processors in the system during the execution of an application program, and isolates the details of coordinating the heterogeneous hardware and software in the virtual computer. The disclosed embodiments easily lend themselves to the construction of scalable special purpose processors.

Description

SYSTEM FOR HIGH-LEVEL VIRTUAL COMPUTER WITH HETEROGENEOUS OPERATING SYSTEMS

Field of the Invention

The present invention is a system for creating high-level virtual computers (HLVC's) among a set of computers having various languages, communication protocols, and operating systems.
Background of the Invention

It is common in computer applications for an operator to wish to execute programs involving several separate computers simultaneously. The simultaneous use of several computers may be required for a parallel-processing application, where the labor of carrying out a large number of calculations within a single application is divided among several computers to increase total speed. Alternatively, a corporate user may have at his disposal several different computers used by various parts of the company, e.g. the personnel, accounting, and production departments, each of which may use a different computer system. The corporate user may wish to run a program without having to be concerned with the location of data and subprograms. For example, it would be convenient to have a number of idling computers to help with the calculation when a single computer cannot deliver the results in time.

A significant problem arises, however, if the various computers which an operator would like to use are of different protocols and/or operating systems. There are any number of operating systems in popular use today, such as UNIX, VMS, or VM. Similarly, there are any number of protocols for interprocessor communication in common use, such as TCP/IP, DECNET, SNA, or OSI. Obviously a computer having operating system A and protocol B will not be able to easily share data and computing power with another computer with operating system C and protocol D.
The present invention facilitates coordination of heterogeneous computer systems, particularly for large applications. The system of the present invention enables a user at a host station to create a high-level virtual computer, or HLVC, out of a network of heterogeneous computers. Figure 1 is a schematic diagram showing how a network of different computers with various operating systems may be combined to form various HLVC's. The nine computers shown in the diagram are physically connected by a common network bus. Various multi-processor applications may require the use of subsets of the computers on the network, or all of the computers. By initiating the flow of data or instructions among various computers in the system, various HLVC's may be created. For example, HLVC 1 in Figure 1 consists of the three computers in the dotted circle, while HLVC 2 may include the computers in the dashed circle, and HLVC 3 may include the computers encompassed by the dotted-dashed circle. HLVC's are defined by the interaction and flow of data and/or instructions among a certain set of computers for a particular application.

In addition to the physical connection of the various computers in the HLVC, it is essential that different program modules, or "software chips," which may be embodied in various computers, be coordinated within the HLVC. Thus, an HLVC is marked not only by different types of hardware, but by a cooperation of software which may be spread among various locations. Figure 2 is a symbolic rendering of the interaction of different software chips in an HLVC. Each software chip (also known as a "module" or "sub-program") is considered as a set of program instructions suitable for a particular type of computer in the HLVC which produces an output in response to a data input. The interconnection between the outputs and inputs is by means of data buses. In an HLVC the output from one software chip may also be the input to another software chip, as shown in Figure 2.
The execution locations of the software chips can form Single Instruction Multiple Data (SIMD), sometimes referred to as Scatter and Gather (SAG), Multiple Instruction Multiple Data (MIMD), and pipelined parallel components of various granularity. The operation of an HLVC must be understood as a means for building efficient distributed application systems based on distributed heterogeneous computers.
The present invention facilitates the development, synthesis, and automatic execution control of large applications using heterogeneous computer systems. With the present invention an operator simply enters into a host computer general instructions defining which computers in the network are to be used for which tasks, or elects to have the system apportion the load among the computers. Each software chip in the system includes within it a set of short programs, called up by instructions from the host process, which effect the necessary modifications to the different computers so that data or instructions may be transferred from one computer to another computer. The system itself, not the operator, performs the "chores" of coordinating the different languages and operating systems by generating or initiating dynamic service programs, or "daemons", at runtime. In this approach, since the code generation is performed within the various computers as needed during the course of one application, efficient daemons are created for any computing or communication device by specifying pre-defined desired properties. This approach offers a greater degree of specialization in the generated programs in comparison to building a general-purpose system (such as a distributed operating system), affording both efficiency and flexibility in the resulting application.
In another embodiment of the present invention, computation intensive functions or programs are implemented into the system by way of a high level interface. This high level interface minimizes the effort that needs to be expended by the user of the HLVC system to have these computation intensive programs performed.
The system of the present invention carries out its objects in a practical manner because automatic code generation overhead, normally introduced by algorithmic generalization, is limited to the minimal level and will only grow linearly with the size of interprocess communication needs. In cases where efficiency is of prime consideration in a heterogeneous computing environment, such as in real-time applications, the system can serve as a tool to find the best program partition and process allocation strategy. Once the near optimal is found, the automatically generated or initiated daemons may be further refined by implementing them using lower level, machine dependent codes.

Summary of the Invention

The present invention is a system for enabling communication among a plurality of target computers. Each of the target computers includes at least one communication channel, memory space accessible through the communication channel, and a library of language injections in the memory space. Each language injection is a high-level interface between the programmer and lower-level interprocessor communication control commands. Each language injection is a subroutine in the target computer for the transference of data or instructions through the communication channel according to a runtime-supplied interprocess communication pattern. The target computers are operatively connected to at least one host computer. The host computer includes means for processing a computing environment specification, the computing environment specification being a set of instructions for activating at least a subset of the target computers for accepting data and instructions from the host computer. The host computer further includes means for processing an application program. The application program includes sub-programs runable by at least one of the subset of target computers, and further includes at least one interprocessor control command for activating one of the language injections in a target computer. The host computer also includes means for transferring the sub-programs to the appropriate target computers for compilation and execution.
In one embodiment, related to computation intensive programs, such as a FRACTAL display program, the user is provided with a high level interface to initiate the execution of the program. The present invention, by means of predetermined control of multiple heterogeneous computers, reduces the execution time of the computation intensive programs. The computers perform known parallel processing techniques, such as SIMD (SAG), MIMD and pipelining.

Brief Description of the Drawings

For the purpose of illustrating the invention, there is shown in the drawings a form which is presently preferred; it being understood, however, that this invention is not limited to the precise arrangements and instrumentalities shown.
Figure 1 is a general diagram showing the operation of a virtual computer.
Figure 2 is a diagram generally showing the interconnections of software chips within a virtual computer.
Figure 3 is a block diagram showing the main elements of the present invention.
Figure 4 is a flowchart illustrating the function of the CSL compiler.
Figure 5 is a flowchart illustrating the function of the Distributed Process Controller.
Figure 6 is a block diagram showing the interaction of the present invention with two target computers.
Figure 7 is a flowchart illustrating the function of the Remote Command Interpreter Daemon.
Figure 8 is a flowchart illustrating the function of the Port ID Management Daemon.
Figure 9 is a flowchart illustrating the function of the Sequential File Daemon.
Figure 10 is a flowchart illustrating the function of the Mailbox Daemon.
Figure 11 is a flowchart illustrating the function of the Tuple Space Daemon.
Figure 12 is a simplified flowchart showing an example application.
Figure 13 is a diagram illustrating the function of hierarchical programming using the present invention.
Figure 14 is a diagram showing the HLVC component interconnection related to a second embodiment of the present invention.
Figure 15 is a diagram illustrating the function of the Nodecap monitor daemon (NODECAPD).
Figure 16 is a diagram illustrating the function of the Configurator (CSL Compiler) of the second embodiment.
Figure 17 is an illustration showing some of the parameters of the X.Prcd file generated by the CSL compiler of the second embodiment.
Figure 18 is a diagram showing the components of a high-level virtual computer of the second embodiment.
Figure 19 is a diagram showing the process of injecting distributed functions (input, output and computation) into a user program.
Figure 20 is a diagram showing the sub-components of the Distributed Function Library and two exemplary runtime scenarios.
Figure 21 is a diagram composed of Figures 21(a), (b) and (c), respectively showing the three basic distributed parallelism techniques: SIMD (SAG), MIMD and pipelines.
Figure 22 is a diagram illustrating the function of the Remote Command Interpreter Daemons (RCID) of the second embodiment.
Figure 23 is a diagram illustrating the function of the Distributed Process Controller (DPC) of the second embodiment.

Detailed Description of the Invention

General Overview

Figure 3 is a diagram showing the relationships among the different elements of the present invention and the heterogeneous computers in the system. First, a general overview of the system function will be described with reference to Figure 3; detailed explanations of individual elements of the system will follow.

The system of the present invention comprises four major elements: (a) a Configuration Specification Language (CSL) compiler, or "configurator", to be described with reference to Figures 4 and 16 for the first and second embodiments, respectively; (b) a Distributed Process Controller (DPC), to be described with reference to Figures 5 and 23 for the first and second embodiments, respectively; (c) a Language Injection Library (LIL), to be described with reference to Figures 6 and 19 for the first and second embodiments, respectively; and (d) a runtime support daemon library, to be described with reference to Figures 7-11, 15 and 22, related to both the first and second embodiments.
Figure 3 shows the general relationship among the elements of the invention. Figure 3 illustrates a host computer by means of a phantom box 10. The host computer 10 allows the operator, in one embodiment, to enter instructions for a multiprocessor application. In another embodiment, to be more fully described, an application console is provided to allow the operator to command execution of multiprocessor applications by the mere entrance of the name of the program. The host computer 10 is connected through a common bus 16 to any number of network computers (WS1, WS2 ... WSk). (Hereinafter, "network" will refer to the computers physically connected to each other; the subset of computers in the network used in a particular application will be referred to as the "system".) The system of the present invention is defined by software embodied in the various computers in the network. Whether a particular computer operatively connected to the network is a host computer, a target computer, a master computer, or a worker computer depends on the nature of the multiprocessor application.
The elements of one embodiment of the present invention within the host computer are shown within the box 10 in Figure 3. There may be more than one host computer within any computer network; indeed, a network can be so created that every computer in the network may function as a host computer. The configurator 12 acts as the interface between the entire system and the operator. The configurator 12 acts as a compiler for two inputs from the operator: the specification in configuration specification language (CSL) and the network capability specification, also known as the computing environment specification language (CESL) specification. The CESL specification is a definition of the capabilities and properties of all of the hardware and system software in the entire network.
The CESL specification is generally pre-programmed into the system and includes information about the available languages, operating system, and protocol for each computer in the network. This information interacts with the Language Injection Library, as will be explained below.
Figure 4 is a flowchart showing the function of the CSL compiler in the configurator 12 for one embodiment of the present invention. For another embodiment, the configurator is shown in Figure 16. The CSL specification and the network specification, illustrated in Figure 4, are entered by the user. With the network specification, the configurator creates a network graph, which is a list of all of the computers physically available in the network, including information about the operating systems, programming languages and communication protocols. A CSL specification also includes information about the interdependency of the various sub-programs in the application program. For example, it may be necessary that a first sub-program be completed before another sub-program begins. The CSL compiler analyzes the sequences and dependencies and ascertains that there is no contradiction in the CSL specification.
Another function of the CSL compiler is to generate single-instruction-multiple-data workers, or "slaves", in the central processing units (CPU's) which are not explicitly required by the CSL specification. The "slaves" are sub-programs within the application program which are not explicitly assigned for execution on a particular computer. If these sub-programs access data through a "tuple space" (which will be explained below), these programs may be run anywhere in the network having a suitable compiler. The "slave" programs in an application, then, may be placed anywhere they are compilable in a network, and the CSL compiler will place them wherever there is room.
Although general purpose computers commonly have only one operating system, it is common for a computer to accommodate a number of types of protocols. Some protocols are more efficient for certain purposes than others. Another function of the CSL compiler is to calculate which protocols available on a particular network computer are more efficient for a particular purpose.
Finally, the CSL compiler generates code to be sent to the distributed process controller (DPC) 14. For one embodiment, a flowchart of the DPC is illustrated in Figure 5, whereas for another embodiment the flowchart of the DPC is illustrated in Figure 23. To the DPC 14 illustrated in Figure 5, to be described, the CSL compiler generates the application precedence, by which the various interdependencies of the software chips (sub-programs) are specified. The sub-programs which are part of the main application program are sent to the target computers where their execution is desired (in a static system) or chosen (in a dynamic system), as well as the Interprocess Communication (IPC) patterns for the transference of instructions among computers during the execution of the application program, as will be explained below. In a first embodiment, the application precedence, software chips (also known as modules or sub-programs), and IPC patterns are all sent out within program "shells" for loading into the DPC 14 and the target computers, in a manner known in the art. In the second embodiment, the program shells are transferred from the DPC to the respective RCID's and carried out by the RCID's. The new embodiment merges the sub-program shells with the dependency information file, thus simplifying the clearing process when an application is completed.
An instruction is operated by the configurator to activate the DPC when it is time for the application to begin. The DPC receives from the configurator an IPC pattern and a control script for each software chip. If, under the instructions in a particular application, certain software chips must be run before other software chips, the sequence or dependency is also controlled by the DPC.
If the user elects to use dynamic process allocation, the DPC includes a ranking of individual computers in the network, such as by speed or suitability for a particular purpose, so that the DPC may choose among several possible computers in the network for the purpose of maximizing efficiency. This programmed ranking of individual computers in the network is based on the CESL specification. Of course, the operator may deliberately choose that certain sub-programs be executed at certain locations; for example, the sub-program for displaying the final results of an application would typically be executed at the user's host location.
Figure 5 is a flowchart for one embodiment showing the operation of the distributed process controller (DPC). If the execution location of a particular module (software chip) is not specified by the user, the sub-program is considered "dynamic", and the DPC will choose, based on its internal ranking of computers on the network, the most efficient combination of network computers. If the particular execution location is not "dynamic", the DPC will simply respond to the user instructions and address the requested network computer through its remote command interpreter daemon (RCID), as will be explained in detail below with reference to Figure 7. Once a connection is made between the DPC and the desired target computer, the module's IPC pattern and shell may be loaded to the target computer. Due to the presence of the commercially available Network File System (NFS), the module's executable body, IPC pattern and shell may not need to be loaded to the target computer. The CESL specification should give this information and direct the DPC to operate accordingly.
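By way of illustration only (the patent describes this selection behavior but does not disclose its code), the choice of a target computer for a "dynamic" module may be sketched in C as follows; the node structure and ranking field are assumptions:

    struct node {
        char name[32];   /* a network computer, e.g. "ws1"        */
        int  rank;       /* programmed ranking from the CESL spec */
        int  busy;       /* currently executing another module?   */
    };

    /* Pick the highest-ranked idle computer for a "dynamic" module;
       returns an index into nodes[], or -1 if none is available. */
    int dpc_choose(struct node *nodes, int n)
    {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (!nodes[i].busy && (best < 0 || nodes[i].rank > nodes[best].rank))
                best = i;
        return best;
    }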
In the course of the execution of a single application program having a number of sub-programs or modules, it is common that a particular module may be run several times. Thus, whether or not the graph is "dynamic", the DPC, after sending a process to a particular target computer, will query whether the process must be sent again, as indicated by the "more?" boxes in the flowchart. Further, the DPC spawns an interactive server shell, or query shell, which monitors the progress of the application program. When a particular module in an application program is completed, a network message interrupt is sent to the DPC. Upon receiving this interrupt, the DPC updates the graph it has created for the system, to indicate that a certain module is finished for the purposes of the present application. Finally, when all of the modules have been run the required number of times, the DPC terminates.
The DPC 14 within the host computer 10 is connected by a bus 16 to the ports of the target computers WS1, WS2 ... WSk in the network. The means by which instructions from the host computer are transferred to the target computers will be explained in the following section.
A Simple Example

Figure 6 is a schematic diagram showing the essential hardware elements for carrying out a simple multiprocessor operation. This multiprocessor operation is applicable to the first embodiment, illustrated in Figures 1-13. Another embodiment, applicable to computation intensive programs, is to be described with reference to Figures 14-23. For this example, applicable to Figures 1-13, the objective of the program will be for a message "hello" to be transferred from a first workstation computer WS1 to a second workstation computer WS2.
The host computer 10 shown in Figure 6 is the same as the host computer shown in slightly different form in Figure 3, and again the host computer 10 may be embodied in a second computer on the network or in any computer so adapted on the network. Thus, although the host computer 10 is shown as a separate element in Figure 6, it may be embodied within either WS1 or WS2 as well.
As an illustration of the capabilities of the present invention, assume that computer WS1 includes a VMS operating system, a DECNET protocol, and a FORTRAN compiler. In contrast, WS2 includes a UNIX operating system, a TCP/IP protocol, and a PASCAL compiler. The specific types of operating system, protocol, or compiler are not necessary to the invention but are shown in this example to illustrate the function of the present invention in a heterogeneous system. The object of this example program is to initiate a message (the word "hello") in a short program in WS1, and then to output that message into a port in WS2, where a program in WS2 will read the message, for instance to display it on a monitor. The entire multiprocessor operation is initiated by the operator through the host computer.
The operator enters into the host computer 10 the CESL specification and the IPC patterns through the CSL specification. The CESL specification is typically pre-programmed into the host computer upon the physical connection of the various computers onto the network; generally the operator need not bother with the CESL at the beginning of each application. The IPC pattern is first specified in the CSL (first embodiment) or a process graph (second embodiment). The pattern is not completely defined until process locations are determined. The actual programs, the "write_hello" program in FORTRAN for WS1 and the "read_hello" program in PASCAL for WS2, are compiled using the process to be described with reference to Figure 19. For this example, the IPC pattern (which may be in any particular format) includes instructions to locate the "write_hello" program in WS1, and the "read_hello" program in WS2, over the network.

Within each of the target computers are certain essential elements: adjacent a well-known port in WS1 and WS2 is a "daemon" 20. As will be explained below, the daemon 20 is a short monitoring program by which a target computer on the network waits for a signal from the host computer. Adjacent the daemon in each target computer is a memory space 22. As used here, "memory space" in each computer is used broadly for any means for storing data or instructions. Within the memory space 22 is a language injection library (LIL) 24, which is a library of subroutines which are activated by instructions from the host computer, as will be explained below. Also within each target computer are the customary compiler and operating system, which in the case of WS1 in Figure 6 are a FORTRAN compiler and VMS operating system, while a PASCAL compiler and UNIX operating system are in the computer WS2. Also embodied within the computer is an input-output protocol which is used by the system software on both computers.
The IPC pattern in the configurator 12 passes to the DPC 14, shown in Figure 3, and activates the daemons 20 in computers WS1 and WS2 shown in Figure 6. The daemon 20 is a Remote Command Interpreter Daemon (RCID) (the exact operation of which will be explained below with reference to Figure 7), which in effect activates computers WS1 and WS2, initializing each computer to expect a program to be loaded into its memory 22. The RCID (daemon) in each computer WS1 and WS2 is a short monitoring program which branches upon an instruction from a host computer.
After the daemons in WS1 and WS2 are caused to branch, the host computer sends to computers WS1 and WS2 respectively the execution programs for the application, in this case the "write_hello" program for WS1 and the "read_hello" program for WS2, as shown in Figure 6.
The "write_hello" program for WS1 is entered into the host computer in FORTRAN so that it may be compiled within WS1 itself. This program includes the instructions to create a string variable of the word "hello" and output the variable "hello" to an output port. Since, in this example, WS1 has a VMS operating system, it should be pointed out that the "native" communication protocol for the VMS operating system is DECNET, which requires "name abstraction" for input and output ports. That is, in DECNET output ports are identified by name, and the specific physical port used at any given time will vary depending on how many ports are being used at a particular time. However, with name abstraction, a declared output port is simply given a name, and the name stays bound to its declared purpose no matter which actual physical port the data eventually passes through. Thus, for this case the program must name the output port, for example "tmp" for "temporary", for output to WS2.
The program sent from the host computer 10 to computer WS2 is the "read_hello" program, having the function of accepting the string variable "hello" from the output port "tmp" from WS1. In this example, WS2 has a UNIX operating system. The native protocol for the UNIX operating system is TCP/IP, which uses "port" abstraction as opposed to the "name" abstraction in VMS. With "port" abstraction, specific physical ports are assigned functions by the program. Programs are sensitive to the specific number of a port, and not to a temporary name given in the course of a program. With port abstraction, a user must use what is known as a "yellow pages" to obtain the correct numbered port to which a certain output is sent out, or at which a certain input is expected. This "yellow pages" technique is well-known in UNIX programming. Note also that either DECNET or TCP/IP can be used by the system as a whole.
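For illustration (this fragment is not part of the original disclosure), a UNIX program may perform such a "yellow pages" lookup with the standard getservbyname() call; the service name "tmp" is carried over from the present example:

    #include <stdio.h>
    #include <netdb.h>          /* getservbyname(), struct servent */
    #include <arpa/inet.h>      /* ntohs() */

    int main(void)
    {
        /* "Yellow pages" query: look up the port number registered
           under the service name "tmp" for the TCP protocol. */
        struct servent *sv = getservbyname("tmp", "tcp");
        if (sv == NULL) {
            fprintf(stderr, "service \"tmp\" is not registered\n");
            return 1;
        }
        /* s_port is kept in network byte order; convert before use. */
        printf("service \"tmp\" is bound to port %d\n", ntohs(sv->s_port));
        return 0;
    }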
For this example, wherein a message is written in computer WS1 and sent to WS2, the program is initiated in one computer and the second computer reacts to the first computer. WS2 is thus in a position of having to wait for a data input from WS1. In order to effect this waiting, the "read_hello" program must include the setting up of a mailbox or equivalent memory space, and an instruction to read the contents of the mailbox when data is entered therein from a port. Thus, the message "hello" enters WS2 through a port, and then goes into a mailbox (shown as 30) where it waits until it is read by the program in WS2. The mailbox is also in the form of a daemon; it is a monitoring loop which branches upon the input of data. A mailbox is only one type of data structure possible with the present invention; other types of data structures for more complicated applications will be discussed below.
Because WS1 uses a VMS operating system and WS2 in this example uses a UNIX operating system, provision must be made somewhere in the system for reconciling the name abstraction of VMS (WS1) with the port abstraction of UNIX (WS2) by designating a common communication protocol. The system takes care of the translations between port numbers and names. Specifically, if TCP/IP is selected, the program in WS2 gives the appropriate port number to the WS1 program and spawns a mailbox daemon (which is the setting aside of buffer space associated with the declared port). The "read_hello" program in WS2 does not read directly from the port but reads from the contents of the mailbox associated with the port when the string variable is inserted through the port into the mailbox in WS2.
In addition to reconciling the port abstraction and name abstraction, there are any number of necessary subroutines going on "underneath" the main programs which are entered into the host computer by the operator. The advantage of the present invention is that the organization of the system itself (in a software sense) is such that the "chores" of coordinating multiple heterogeneous computers (in this case two computers) are performed automatically by a library of "language injections" which are activated by the operator's instructions, the details of which the operator ordinarily never sees. For example, the reconciling of port abstraction would be performed by sub-programs within the invention automatically in response to a general IPC instruction. It is the activation of these language injections, or subroutines, by the IPC patterns in the system which allows programs such as the present example to be executed with minimum instructions from the operator. In order to activate the necessary computers in the network for a particular application, the user enters a CSL specification which is generally stated in one or a very small number of program lines.
For instance, if the entire system-wide application is named "talk", a typical phrasing of the IPC pattern would look like this:

    configuration: talk;
    M:write (exec_loc = ws1) -> F:msg -> M:read (exec_loc = ws2)

where "exec_loc" is a possible CSL instruction for "execution location". This one line of program tells the system to expect a program named "write" which should be executed at WS1, a program "read" which should be executed at WS2, and that there will be a flow of data from WS1 to WS2.
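The location independence of the two modules may be visualized with the following sketch. The example's modules are written in FORTRAN and PASCAL; for brevity both are rendered here in C against assumed LIL entry points (cnf_write, cnf_read and cnf_close are hypothetical names patterned on the cnf_* functions described later), so this is illustrative rather than the disclosed code:

    /* Assumed LIL entry points, patterned on cnf_open below: */
    int cnf_open(char *object_name);               /* returns an object handle */
    int cnf_read(int obj, void *buf, int len);     /* read from a data object  */
    int cnf_write(int obj, void *buf, int len);    /* write to a data object   */
    int cnf_close(int obj);

    /* Module "write" (main program of write.c), executed at whatever
       computer the CSL assigns; no port number or port name appears here. */
    int write_hello(void)
    {
        int msg = cnf_open("msg");      /* logical name taken from the CSL; */
        cnf_write(msg, "hello", 6);     /* routing comes from the runtime-  */
        return cnf_close(msg);          /* supplied IPC pattern file        */
    }

    /* Module "read" (main program of read.c): blocks until "hello"
       arrives in the mailbox bound to "msg", then consumes it. */
    int read_hello(void)
    {
        char buf[16];
        int msg = cnf_open("msg");
        cnf_read(msg, buf, sizeof buf);
        return cnf_close(msg);
    }

Note that moving either module to a different computer changes nothing in these bodies; only the CSL line above is edited.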
When this instruction is compiled by the configurator in the host computer 10, a large number of subroutines will be activated. The configurator is essentially a program which accepts the terse instructions from the user and in response generates directives to guide runtime utilities to operate efficiently.
Certain advantages of the present invention over the previously known methods of virtual computer programming should be pointed out. Heretofore, if a sub-program within a larger execution program was intended to be executed on a particular computer in a network, the sub-program itself would have to include the numerous lines of program required to coordinate the interprocessor communication, such as port abstraction or name abstraction. A programmer would have to coordinate the different ports in the various computers in advance, before running the entire program. Such coordination is time-consuming and restricts subsequent changes in the program; for example, if it were simply decided to change the execution location of one sub-program from a UNIX FORTRAN work station to a VMS FORTRAN work station, a large number of lines in the sub-program itself would have to be changed to accommodate the different method of identifying ports. Thus, with prior art methods even a simple change in execution location requires substantial alteration of the execution programs.
In contrast, with the present invention, each sub-program is independent of its execution location. In order to change the execution location of a particular sub-program, the user need only change the CSL specification. The system automatically produces new IPC pattern files to coordinate the various computers in the system. The IPC patterns coordinate with the various subroutines in the Language Injection Library (LIL), as will be explained in detail below.

The Daemons

Daemons are short monitoring programs which are continuously running at the reserved ports of the computers on the network. For one embodiment, the associated daemons are illustrated in Figures 7-11. For another embodiment, the associated daemons are illustrated in Figures 15 and 22, and also Figures 8-11. In general, in its normal state, each daemon runs as an endless loop until an instruction is received from the network. The incoming instruction causes the loop to branch. Thus, through the daemons, any computer on the network can be "called into action" by the host computer for a particular application.

There are two classes of daemons, permanent and dynamic. The permanent daemons run continuously in all of the computers on the network, in expectation of instructions from the host computer. The dynamic daemons are created, or "spawned", within individual computers incidental to a particular application.

For the first embodiment, there are three permanent daemons: the Remote Command Interpreter Daemon (RCID) (Figure 7), the Port Identifier Management Daemon (PIMD) (Figure 8), and daemons for external files, such as the Sequential File Daemon (SEQD) (Figure 9). The second embodiment, to be described hereinafter, shares the daemons illustrated in Figures 8 and 9. Figure 7 shows a flowchart of the RCID, which was mentioned briefly in the simple example given above. The function of the RCID is to be available to receive a signal from the host computer at a "well known port", in a manner known in the art. An interrupt for the daemon from the host computer will send its signal through this well known port.
Processes are initiated when a signal is sent from the host computer to the RCID in one of the computers on the network. The key step in the spawn subprocess is a conversion of the incoming signal to the operating system format in the computer. The daemon program branches into one or more of these processes when it is interrupted by a signal from the host computer. If the operating system in the host computer operates by port abstraction, for example, the daemon will initiate a process within the local operating system by which a name will be created for a port in the operating system through which the incoming program from the host computer may enter. Similarly, with name abstraction, the daemon will initiate a process by which a specific port is prepared for the incoming program having a pre-defined port name.

Other processes which may ensue when the loop is interrupted include a load calculation and report, for example, to report back to the host computer how much space has been used as a result of the program entering the port. Another possible instruction may be a "terminate" instruction. Different subprocesses may be initiated by the daemon depending on the actual signal from the host computer which interrupts the loop in the local daemon. Network computers using UNIX may also require a "delete zombie" instruction to be activated by the RCID. The "delete zombie" instruction erases data or program lines which have entered the local computer as a result of a particular application. As is known in the art, with the UNIX operating system, a "zombie" is a program which is no longer needed but remains in the memory.
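The loop-and-branch structure of the RCID may be summarized by the following skeleton (a minimal sketch in C using BSD sockets; the command codes and handler names are assumptions, since the patent specifies the behavior rather than the code):

    #include <unistd.h>             /* read(), close() */
    #include <sys/socket.h>         /* accept()        */

    enum { CMD_SPAWN = 1, CMD_LOAD_REPORT, CMD_DELETE_ZOMBIE, CMD_TERMINATE };

    /* Hypothetical handlers for the subprocesses described above: */
    void spawn_subprocess(int conn);   /* convert signal to local OS format */
    void report_load(int conn);        /* report space used back to host    */
    void delete_zombies(void);         /* UNIX-only cleanup                 */

    void rcid_loop(int listen_fd)      /* listen_fd bound to the well known port */
    {
        for (;;) {                               /* endless monitoring loop */
            int conn = accept(listen_fd, 0, 0);  /* wait for host's signal  */
            unsigned char cmd;
            if (conn < 0)
                continue;
            if (read(conn, &cmd, 1) != 1) {
                close(conn);
                continue;
            }
            switch (cmd) {                       /* branch on the signal    */
            case CMD_SPAWN:         spawn_subprocess(conn); break;
            case CMD_LOAD_REPORT:   report_load(conn);      break;
            case CMD_DELETE_ZOMBIE: delete_zombies();       break;
            case CMD_TERMINATE:     close(conn);            return;
            }
            close(conn);
        }
    }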
The Port Identifier Management Daemon (PIMD) of Figure 8 provides the service of binding unique port numbers to user-defined objects. Figure 8 shows a simple flowchart of the PIMD. Once again a loop is established at a "well known port" to wait for an interrupt from the host computer. Depending on the signal by which the loop is interrupted by the host computer, the PIMD can be used to declare, or register, a user-given name to a particular port, remove that name, or reset the internal data structures. This daemon is required only in operating systems where port abstraction is used, such as UNIX.
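The PIMD's central data structure is a table binding user-given names to unique port numbers. A minimal sketch of such a table in C (the layout, port range, and function names are assumptions) might be:

    #include <string.h>

    #define MAX_BINDINGS 256

    struct binding { char name[32]; int port; };   /* name-to-port pair  */
    static struct binding table[MAX_BINDINGS];
    static int n_bindings;
    static int next_port = 5000;                   /* assumed port range */

    int pimd_register(const char *name)            /* declare a name     */
    {
        struct binding *b = &table[n_bindings++];
        strncpy(b->name, name, sizeof b->name - 1);
        b->name[sizeof b->name - 1] = '\0';
        return b->port = next_port++;              /* bind a unique port */
    }

    void pimd_remove(const char *name)             /* remove a binding   */
    {
        for (int i = 0; i < n_bindings; i++)
            if (strcmp(table[i].name, name) == 0) {
                table[i] = table[--n_bindings];    /* compact the table  */
                return;
            }
    }

    void pimd_reset(void) { n_bindings = 0; }      /* reset internal data */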
Figure 9 shows a flowchart for the sequential file daemon (SEQD). The sequential file is one of many forms for accessing external files known in the art. Other external file systems in common use include the ISAM (Index Sequential Access Mechanism), the relative or "HASH" file, and the database or knowledgebase files. The distribution of all of these external files works in a basically similar fashion, and although the sequential file is described in detail here, it would be apparent to those skilled in the art to apply other forms of external files to the present invention.
The sequential file is useful for applications such as personnel files. With a sequential file, the files are scanned so that every record in an entire file is read. The other external file systems mentioned would use more sophisticated means of locating a desired record in a file more quickly, if the processing of the entire file is not desirable.
The sequential file daemon shown in Figure 9 exists at a well-known port of a target computer, allowing all remote computers to access the sequential files on this computer. A loop is set up at the well-known port, which branches upon the input of a request. A request would include the name of the desired file, one of four operations (OPEN, CLOSE, READ, WRITE) to be performed on the accessed file, and information about the requesting computer.
The sequential file daemon shown in Figure 9 also includes an "index" system which is common in sequential file applications but is not necessary to the present invention. With the "index" system, every ten requests for data out of the file are handled by a particular software structure. Once a particular remote software chip requests data from the file, that request is given an index number, and subsequent requests from the same software chip will be handled by the same software structure within the file. If more than ten requests accessing different files are made, it spawns another software structure, referred to by the next highest index number. For example, requests 1-10 will be given index number "1", 11-20 will be given index number "2", etc. This system is well known in the art. The sequential file daemon may be implemented as a permanent or dynamic daemon depending upon the user's preference, while RCID and PIMD are mandatory permanent daemons.
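The index assignment just described reduces to simple integer arithmetic, illustrated here in C (the 1-10 to "1", 11-20 to "2" rule):

    /* Request numbers 1-10 map to index 1, 11-20 to index 2, and so on. */
    int index_for_request(int request_number)
    {
        return (request_number - 1) / 10 + 1;
    }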
It should be pointed out that all permanent daemons use "well known ports" in an operating system where port abstraction is used, such as UNIX, or "generic names", such as RCID, PIMD, and SEQD, in systems where name abstraction is used, such as VMS.
The dynamic daemons are daemons which are spawned incidental to a particular application. An IPC data object, such as a mailbox, external file, or tuple space, is an entity on which all defined semantic operations, such as READ, WRITE, PUT, GET, or EXIT, are implemented. This not only simplifies the design and implementation of dynamic daemons, but also improves the general efficiency.

Since each dynamic daemon corresponds to a named entity in an application, these names are used to derive network object names. In an operating system where port abstraction is used, the network object name is derived by the CSL compiler. For both embodiments of the present invention there are two types of dynamic daemons, the tuple space daemon (TSD, Figure 11) and the mailbox daemon (MBXD, Figure 10), or SEQD, if the user so desires. These daemons are created incidental to the use of IPC data objects, which will be described in detail below.
IPC Data Objects

The inter-process communication (IPC) data objects are software mechanisms by which sub-programs in the user's application communicate with each other. There are three types of IPC data objects in common use: the mailbox, the tuple space, and the sequential file. The sequential file is implemented by a SEQD daemon, as described above. These different kinds of IPC data objects are used for various specific tasks.

The mailbox data object is an arrangement whereby a quantity of data is sent from one computer to a second computer, the information being held temporarily in the second computer until the second computer is ready to use it. Figure 10 shows a flowchart for a mailbox daemon (MBXD). The MBXD is created through the RCID and is assigned to a specific port obtained from the PIMD, or a network name (if name abstraction is used). The MBXD will then set up a loop at the application-specific port and wait for a request such as "read_mail," "put_mail," or "close_mail". The MBXD creates a local mailbox (that is, sets aside a specific space in the memory of the local computer) and then relays the message coming through the port into the mailbox, where it may be read by the sub-program on the local computer. In response to the "close_mail" request, the daemon sends an end-of-file (EOF) message to the local mailbox and removes the binding of the name of the mailbox from the specific port, so that the port and the name may be used again within the application.
The MBXD can be used to construct coarse-grain MIMD (multiple instruction multiple data) components and pipelines. The MIMD and pipeline functions are further discussed with regard to the second embodiment. When there are multiple senders, multiple MBXD's may be adapted to multiplex into a single local mailbox.
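The mailbox semantics described here amount to a first-in, first-out buffer fed by the port and drained by the local sub-program. A compact illustration of that behavior in C follows (networking and message framing are omitted, and the names are assumptions rather than the disclosed code):

    #include <string.h>

    #define MBX_SLOTS 64
    #define MSG_LEN   128

    static char mailbox[MBX_SLOTS][MSG_LEN];   /* space set aside locally */
    static int head, tail, closed;

    void put_mail(const char *msg)             /* relay from the port     */
    {
        strncpy(mailbox[tail % MBX_SLOTS], msg, MSG_LEN - 1);
        tail++;
    }

    /* Copies one message into out[MSG_LEN]; returns 1 on success,
       0 if the mailbox is empty, -1 for end-of-file after close_mail. */
    int read_mail(char *out)
    {
        if (head == tail)
            return closed ? -1 : 0;
        strncpy(out, mailbox[head % MBX_SLOTS], MSG_LEN);
        head++;
        return 1;
    }

    void close_mail(void) { closed = 1; }      /* real MBXD also unbinds the port */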
Another important type of IPC data object having its own specific daemon is the "tuple space", which has associated with it a tuple space daemon (TSD) (Figure 11). The tuple space concept is very important for parallel-processing applications; that is, applications in which the labor of a very large program is apportioned automatically among several computers to save time. The advantage of tuple space is that by its very nature the arrangement allows automatic load-balancing: in essence, with tuple space computers in a network, the computers which can complete tasks the quickest are automatically given more tasks to do.

The general principle of tuple space is known in the art (see, for example, Gelernter, "Getting the Job Done", Byte, November 1988, p. 301). Basically, a tuple space is best thought of as a "bag" of "tuples". These tuples represent divisible computational portions of a larger application. In a heterogeneous system, some computers on the network may work faster than other computers. With tuple space, every computer in the system simply grabs one tuple from the tuple space, performs the computation, and then puts the results back into the tuple space before taking another tuple to do. The important point is that each computer in the system may simply take another task as soon as its present task is completed; if the tuple size is optimized, this balances the loads of heterogeneous computers automatically. This is in contrast to a sequential system, wherein the network computers effectively "stand in line" and one particularly long task can hold up the completion of shorter, simpler tasks.
In tuple space, there are for present purposes three important operations: "put", "get", and "read". When a particular task is completed by one of the computers in the system, it may be returned to the tuple space as simply a quantity of ordinary data, which may be accessed and read in a subsequent task. A "put" is the placing of data into the tuple space. A "get" is the removal of data from the tuple space. In a "read" (RD), a computer merely looks at a tuple in the tuple space but does not remove it from the tuple space, so that it may also be read by another computer in the system.
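In code, the three operations reduce to three calls against a named tuple space. The signatures below are hypothetical C renderings (the patent does not fix these names); note that "get" removes a matching tuple while "read" leaves it in place:

    /* Hypothetical tuple space interface; "ts" is an object handle from
       an open call, and "key" selects tuples by user-defined name. */
    int ts_put(int ts, const char *key, const void *data, int len);

    /* get: copies out a tuple matching key and REMOVES it from the
       space, waiting if none is present (returning 0 or less once the
       space is closed) -- this removal is what balances the load. */
    int ts_get(int ts, const char *key, void *data, int maxlen);

    /* read: same matching rule, but the tuple stays in the space so
       that other computers in the system may read it as well. */
    int ts_read(int ts, const char *key, void *data, int maxlen);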
Figure 11 shows a flowchart for the tuple space daemon (TSD). As with the MBXD, the port for the input or output of instructions or data for a tuple space is created and named by a PIMD (Figure 8). The TSD of Figure 11 will be associated with one application-specific port. In addition to the tuple space commands of put, get, and read, the TSD is also sensitive to open and close commands through an application-specific port.
Below th~ flowchart in Figure 12 is shown a typical CSL specification for this operation. The first line of the specification gives a name "Matrix_multiply" to the entire operation. The next line organizes files (shown with a capital F) and modules (identified by the capital M). Thus, files A and B are organized as mailboxes, file TS1 is organized as a tuple space, and C is organized as ITUTE SHEFr ,: . : : . , ., . , . :

ulY~u pCT/~591/05116 2~773~

another mailbox. The "initializer" and "collector~ pro-grams are organized as modules, or sub-programs, within the application program. The arrows between the organ-ized files and modules indicate the flow of data in the course of running the application program. The following lines organize the relationship of the worker computers to the tuple space tsl. Thus, the line "F:TSl U M:Wor-kerl U F:TSl" indicates that data will be moving from the tuple space TSl to a program called "worker 1" and then lo bac~ to tuple space TSl. The various worker programs may or may not be embodied in different computers; in fact, the operator will probably not care about where the worker programs are physically executed. It should also be pointed out- that the "worker" programs, as they all access a tuple space and their physieal location was not speci~ied in the CSL, are 'Islave" programs ~or purposes of locating the programs by the CSL compiler.
Below the CSL specification in Fiqure 12 is a representative CESL specification. ~he various lines show a typical specification with available languages, ; operating systems, and ~ommunication protocols.

Lanqua~e Iniection Librarv rLIL) The essential attribute of an HLVC is the transfer-ence of data and program instructions among various com-puters. In th~ present invention, this transference of data and instructions is effected by the IPC data ob-jects, using software mechanisms such as the mailbox, tuple space, or the sequential file. These mechanisms - 30 may be implemented using dynamic daemons. This interac-tion between dynamic daemons and application programs exists in the form of "language injections". The collec-tion of language injections within each computer in the network is a "language injection li~rary" (LI~) previous-ly discussed with regard to Figure 6. The LIL for asecond embodiment is to be described hereina~ter with .'~1 IRSTlTUT- SHEET

' ' ', . ' . .', "" :,.', ' ', :
. . . ,, . . ,'' ' ., .. .. ""' ' , ' ' "' ' ,, , ' '` ' ' ' ' ' ' ' ' ' ' . ' " ` " ". ' .

~ ~J Y_/ U I ~YU

2~773~

reference to Figures 16 and 17.
Each language injection is a software subroutine corresponding to a specific IPC operation on a specific IPC data object. Each such language injection is called into action by the command in the application program being run by the particular computer. The source code of the language injection library should be compilable and runable in any operating system and with any protocol.
That is, from the programmer's viewpoint, the IPC commands which activate the subroutines of the language injection library are the same no matter which language and for what protocol or operating system the application program is intended. However, the object code form for each language injection will vary depending on the protocol and operating system of the target computer. Within each computer, the language injections are written in a form runable in that particular computer. When the source code of the language injection library is included in the application program, it is said that the program is "linked", to be described with reference to Figure 19.
The set of input and output LIL functions is used by application programs or subprograms (modules) for network-transparent programming. The LIL is preprogrammed and compilable on multiple operating systems and communication protocols. This is done by using pre-processor statements. For example, if the LIL is to be implemented in C and a code segment should only be read by a C compiler under the Digital VMS operating system, then this segment must be controlled by a pre-processor symbol, say VMS. Obviously, the most popular operating system may be the default, requiring no control symbols. Various communication protocols, such as TCP/IP, DECNET, SNA and OSI, should all be treated the same way.
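For example, a LIL routine might isolate its operating-system-specific and protocol-specific segments as follows (a schematic C fragment; the VMS symbol is taken from the text above, while the TCPIP and DECNET symbols and the helper names are illustrative assumptions):

    /* Hypothetical low-level helpers: */
    int vms_qio_write(int channel, void *buf, int len);
    int unix_socket_write(int channel, void *buf, int len);

    int lil_send(int channel, void *buf, int len)
    {
    #ifdef VMS
        /* Segment read only when compiling under Digital VMS. */
        return vms_qio_write(channel, buf, len);
    #else
        /* Default (most popular) operating system: no control symbol. */
        return unix_socket_write(channel, buf, len);
    #endif
    }

    /* Communication protocols are treated the same way: */
    int decnet_connect_by_name(const char *name);
    int tcp_connect_by_port(int port);
    int lookup_port(const char *name);                 /* "yellow pages" query */

    #ifdef DECNET
    #define CONNECT(name) decnet_connect_by_name(name)            /* name abstraction */
    #else
    #define CONNECT(name) tcp_connect_by_port(lookup_port(name))  /* port abstraction */
    #endif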
Note that the inclusion of various operating systems in a LIL will only contribute to the compilation size of LIL while the inclusion of many communication protocols SUBSTITUTE S~lEFr , , ,., ", .. ....
may contribute to runtime overhead. To avoid excessive runtime overhead, a management scheme is necessary to control the LIL to operate with no more than three (3) communication protocols. This number corresponds to the maximum number of communication protocols supported by a single computer among all accessible computers in a network.
Different from conventional programming language macros, LIL routines base their operations on a runtime supplied IPC pattern file. This IPC pattern file dictates the physical (actual) communication patterns of the underlying program (or module) at runtime, to be described with reference to the lower part of Figure 19.
For a first embodiment, every LIL function performs a certain task on a given data object (identified by a user defined name). For a second embodiment, to be described with reference to Figures 14-23, a high level interface is provided to reduce the necessary steps of an operator or user to perform parallel programming tasks, especially those associated with computation intensive functions. Every data object for the first embodiment assumes a set of permissible semantic operations. For example, a tuple space object may have "open/close", "put", "get" and "read" operations, while a mailbox object may assume "open/close", "read" and "write" operations.
The main features of typical input/output functions are provided as follows. The description uses the syntax "return_type function_name (parameters)" for all functions. The "return_type" is given as either "void" or "integer", depending on whether a signal is returned to the user. These features may be further described with reference to some specified functions.
a) void cnf_init( )

This is an implicit function, called at the start of an image (the first time execution of cnf_open), to initialize internal data structures for the image.

The source of information is a program specific IPC pattern file whose format is shown below:

    conf  = application_system_name
    debug = -1, 0-3
    max_d = maximum diameter of the associated network
    module: Name 1
        logical_name : type In_or_Out physical_info comment
    module: Name 2
    .
    .
    .
    module: Name n

This information must be accessible as global variables by other functions in LIL. The main objective for these data structures is to maintain the relations between user defined logical data objects and their runtime locations. A typical implementation would be to use either dynamic memories or static arrays.
Note that the logical name of the execution image is obtained from an environment variable: CNF_MODULE. The generation of the IPC pattern file and the definition of CNF_MODULE are both due to the Configurator (static processor allocation) or the DPC (dynamic processor allocation).
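
A minimal sketch of cnf_init follows, under the assumption that the IPC pattern file is named after the module with a ".prcd" suffix (the actual naming is not specified here); CNF_MODULE is as described above:

    /* Hypothetical sketch of cnf_init( ); parsing details elided. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static char cnf_module[64];     /* logical name of this image */

    void cnf_init(void)
    {
        const char *name = getenv("CNF_MODULE"); /* set by Configurator or DPC */
        if (name == NULL) {
            fprintf(stderr, "cnf_init: CNF_MODULE not defined\n");
            exit(1);
        }
        strncpy(cnf_module, name, sizeof cnf_module - 1);

        char path[128];
        snprintf(path, sizeof path, "%s.prcd", cnf_module); /* assumed name */
        FILE *fp = fopen(path, "r");  /* runtime supplied IPC pattern file */
        if (fp != NULL) {
            /* parse conf=, debug=, max_d= and the module: entries into
               global structures (dynamic memory or static arrays) */
            fclose(fp);
        }
    }
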
b) int cnf_open(user_defined_object_name)

This is a generic open function for all types of objects. It performs preparation of a given data object for future access. For example, if the object is a sequential file, it opens the file and records the file descriptor. If the object is a tuple space, it then creates the necessary data structures for future connection needs.
An integer, referred to as an object handle, is returned as the result of this call. This integer points to the entry of the internal data structure and should be used in all subsequent calls concerning this object. The function returns -1 if the normal open activities cannot be completed successfully.
c) void cnf_close(user_defined_object_name)

This is also a generic function. It closes a user defined object by shutting down communication links and/or file descriptors associated with the object.
d) int cnf_fread(object_handle, buffer, size)

This function performs an input operation for a given file. The output is placed in "buffer" with maximum size "size". The actual number of bytes is returned at completion. The file may reside on a different computer than the underlying process. The function must communicate with the appropriate daemon using the results obtained from cnf_open.
e) int cnf_fwrite(object_handle, buffer, size)

This function writes "size" bytes in "buffer" to the actual runtime storage pointed to by the handle. The actual number of bytes written is returned at completion. This file may also be located on a different host (according to the IPC pattern file). The results obtained in cnf_open are used here.
f) int cnf_fseek(object_handle, offset, direction)

This function changes the current read pointer for the given handle. Similar considerations should be given to remote files. It returns -1 if an EOF is encountered, 0 if BOF, and 1 otherwise.
g) int cnf_mread(object_handle, buffer, size, sync)

This function reads information from the runtime device pointed to by the handle and places it in "buffer". When "sync" = 0 the read is asynchronous, and when "sync" = 1, synchronous. The actual number of bytes read is returned at completion. Note that this function should always read a local IPC structure, such as a message queue for UNIX System V derivatives, a FIFO for BSD based systems, or a mailbox for VMS systems.

h) int cnf_mwrite(object_handle, buffer, size, sync)

This function writes "size" bytes from "buffer" to the given object. According to the IPC pattern, this write may be a local or remote operation. Note that if the IPC pattern indicates that the receiver is on a remote computer, this function must communicate with a remote dynamic daemon whose function is to relay the information from this function (of the sender module) to a local IPC structure on the receiver computer. Otherwise, this function writes directly to a local IPC structure. The actual number of bytes written to the receiver is returned at completion.
i) int cnf_tsread(object_handle, key, key_size, buffer, buffer_size, sync)

If "sync" = 1, this tuple space read function may wait for the tuple with the desired key to arrive and reads the information into "buffer" when the tuple is present. Otherwise, the function performs a read and control returns immediately. The actual number of bytes read is returned at completion. Note that "key" contains a tuple search pattern. A regular expression match is performed by the dynamic tuple space daemon upon request. The location and port number of the dynamic daemon are obtained from the cnf_open function. If more than one tuple matches "key", an arbitrary tuple will be chosen.
j) int cnf_tsget(object_handle, key, key_size, buffer, buffer_size, sync)

This function acts similarly to cnf_tsread, except that the matched tuple is extracted from the tuple space upon the completion of the operation. If more than one tuple matches "key", an arbitrary tuple will be chosen.
k) int cnf_tsput(object_handle, key, key_size, buffer, buffer_size)
This function inserts a tuple "key" with content in "buffer" into a tuple space. The location of the tuple space daemon is found in the IPC pattern file. The actual number of bytes written to the space is returned at completion.
l) void cnf_term

This function closes all objects that are not yet closed and terminates the underlying program with status "1".
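
Taken together, the above calls permit a network transparent worker to be written compactly. The following sketch strings them together in the documented order; the header name "cnf.h" and the object names "tsin" and "tsout" are assumptions:

    /* worker.c -- hypothetical slave program using the LIL calls
       catalogued above; signatures follow the text. */
    #include "cnf.h"
    #include <string.h>

    int main(void)
    {
        char key[32], buf[4096];
        int in  = cnf_open("tsin");    /* tuple space holding work grains */
        int out = cnf_open("tsout");   /* tuple space receiving results   */

        strcpy(key, "grain*");         /* regular-expression search pattern */
        /* Synchronously extract a grain, compute on it, return a result.
           A real application would use a sentinel tuple to end the loop. */
        while (cnf_tsget(in, key, sizeof key, buf, sizeof buf, 1) > 0) {
            /* application specific computation on buf goes here */
            cnf_tsput(out, key, sizeof key, buf, sizeof buf);
        }

        cnf_close("tsin");
        cnf_close("tsout");
        cnf_term();                    /* close remaining objects and exit */
    }
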
As mentioned earlier, the user may define new object types with new operations to be used as a distributed IPC medium, such as a distributed database, knowledgebase, etc. The operations can be very different from the above, but the spirit of implementation should be the same as described for network transparent programming (or HLVC programming).
Finally, LIL should be supplied to application programmers in an object library form such that only necessary functions will be linked to an application program before producing an executable.
There may be many communication protocols operating in a large heterogeneous computer network. With the present system, the communication protocol that is required for a particular application is not a concern of the operator at compilation time.
Computers which understand different communication protocols cannot directly exchange information. However, some computers may support a number of communication protocols. Such a computer can serve as a bridge for information exchanges between different computers in a large network. The runtime overhead of this implementation is proportional to the number of different communication protocols usable in one operating system.

Similarly, different computers having compilers of different languages can be used independently with the
language injections. But the process must be linked with LIL for execution. The overhead of this solution is equivalent to the overhead of any multi-language interface.
Hierarchical HLVC Construction

Figure 13 is a diagram illustrating numerous virtual computers arranged hierarchically. Each level 100, 102, 104 and 106 represents an abstraction which may be embodied in any number of computers anywhere in the world. Each level 100 ... 106 comprises Device (DEV), Module (MDL) or Group (GRP) computing tasks, previously referred to as software chips. The various levels are in a hierarchical relationship; that is, the virtual computer with software chips at level 100 manipulates data at a higher level of generality than the computer having software chips at 102, which in turn operates at a higher level of generality than the software chips at levels 104 and 106. To give a concrete example, level 100 may be comprised of all the software chips of all of the financial and accounting planning for a large corporation.
Within the general planning of the large corporation may be a number of divisions or departments, each with its own system of accounting: personnel, inventory, production, etc. The company may be divided not only by department but geographically as well; a New York office and a Philadelphia office may have inventory and personnel departments of their own. On even lower levels, individual managers within a department at a location may operate their own virtual computers, these computers having direct access to data files on very low levels.
Examples of very low levels in the computer hierarchy would be a data file on one employee within one department at one location, or the inventory file for one particular part produced at a particular location by a particular department. By "levels of generality" is
meant that virtual computers at higher levels will not be interested in the details of production, personnel, etc.
many levels further down, but only in the data (or information) the lower levels produce. The idea of a hierarchical HLVC construction is that data will be decentralized and kept closest to the managers who use it most often. In a corporation having several thousand people, it is probably not necessary for a central office to have duplicates of the personnel files of every janitor. However, computers at higher levels of generality may well be interested in information such as aggregate personnel data for all geographical locations, or aggregate information about all of the departments at a particular location.
It is common in the computerization of businesses that various equipment will be purchased at different times, and therefore be of different designs. However, it would be expensive to transfer all of the data from other departments or locations into a new computer whenever a new computer is purchased in the company. If two companies merge, it may be impractical to merge large quantities of data throughout the two merged corporations into a uniform system having one operating system and one language. Occasionally, it may be desired to request data from an external source. The present invention enables the coordination of numerous computers at various levels of generality without having to alter the programs within the various computers themselves, or to waste space with unnecessary duplication of data at different levels. Looking at Figure 13, consider level 100 to be a top management level in a corporation, level 106 to represent the computer at a personnel office at a particular location, and modules 1, 2 and 3, shown below level 106, to represent personnel files of three individual employees at that location. Depending on the chosen structure of the hierarchy, level 102 could be considered
to represent personnel for all locations, or all departments at the geographical location associated with level 102.
With the present invention, a user at level 100 could access the individual files at modules 1, 2 and 3, and even process the data in the modules with instructions at level 102 (for example, health insurance premiums or income tax) for printout at level 100. Thus, information may be retrieved at 106, processed at 102 and read out at 100, thereby yielding a completely decentralized virtual computer. The key advantage of decentralization is that the information in modules 1, 2, and 3 can be updated by a user at level 106, who presumably would have close contact with the employees. This update could be accomplished without having to go through the higher level 102, 104 or 100.
In Figure 13, the various levels 100, 102, 104, and 106 each represent a virtual computer. Within each virtual computer are a number of software chips, which may or may not be embodied within the same physical computer. In the hierarchies possible with the present invention, there are several types of software chips: the modules (MDL), the groups (GRP), and the slave programs, not shown. There are also devices (DEV), such as the terminal shown in level 100, which do not have a software function, except to act as a terminal, printer, or other device. The modules MDL are independently-running software programs, which exist at the bottom terminus of a hierarchy, whether the level in the hierarchy is toward the top, as is level 100, or at the bottom, as is level 106. The group GRP software chips operate at a higher level of generality than the modules, and access data from modules, as shown in phantom. Within each level, the various software chips are related into a virtual computer by any number of interface data objects 110, which are generally shown. These data objects 110 are
the means by which data may be transferred among the software chips in the virtual computer, such as by tuple space or mailboxes. The interface data objects 110 exist not only between software chips within a level but between software chips on different levels, as shown.
The system of the present invention enables such a hierarchy to be practically realized in a heterogeneous computing environment.
With decentralization, hierarchies can be automatically organized and changed at will by simply reorganizing the access of various computers to other computers.
A high-level computer is really defined as a computer with access to all of the abstracted data in the network, while a low-level computer would be one without access to higher level data. This hierarchical arrangement has much less overhead than, for example, a monolithic tuple space, wherein all of the information in a corporation is theoretically available in an enormous pool. If a large number of computers are sharing a single pool of data, a number of serious problems are created. First, in order to provide security and create a hierarchy of computers, just about every piece of information in the tuple space must be accessed by a security code, thus considerably increasing the overhead. Secondly, long scanning periods would be required for finding a particular piece of data, and organization of data in a meaningful way (e.g., by department among various geographical locations, or all departments within a geographical location) would be difficult to manage. Further, it is likely that this necessary coding and classifying of data in the large pool of a tuple space would be beyond the physical capacity of most computers. With the present invention, data may be shared among a large number of computers, but the data is actually stored generally only in the computer wherein it is used most often. However, higher-level computers are able to access data stored in lower-
level computers.
It should now be appreciated that the first embodiment of the present invention, illustrated in Figures 1-13, provides a system that creates high-level virtual computers (HLVC's) from among a set of heterogeneous computers, each of which may have various languages, communication protocols and operating systems. A second embodiment of the present invention, particularly suited for accommodating computation intensive functions, may be described with reference to Figures 14-23.
The second embodiment is configured in a manner similar to that of Figure 6. The system for executing the computation intensive application programs, to be described with reference to Figures 14-23, comprises the host computer 10 and the target computers WS1, WS2, ... WSk, as well as the application console of Figure 14, to be described.

HLVC For Computation Intensive Programs

The second embodiment, primarily through the addition of the application console and program enhancements to the configurator and distributed process controller, drastically reduces the amount of effort needed by a user to perform program parallelization. The present invention provides a high level interface for the user which is particularly suited to accommodating computation intensive programs. The second embodiment provides means for enhancing known parallel processing (PAR) techniques adapted to these computation intensive programs. The multiple heterogeneous computers, previously discussed, cooperatively interact to reduce the execution time of these computation intensive programs. The PAR processes of the present invention include the previously mentioned three types of distributed parallelism, which are SIMD (SAG), MIMD and pipeline, illustrated in Figure 21 to be described. The means for facilitating these parallel
(PAR) processing operations and for reducing the necessary effort by the user is shown in Figure 14.
The block diagram of Figure 14 is different from the block diagram of the first embodiment shown in Figure 3, in that the blocks CSL and CESL of Figure 3 are merely illustrated in Figure 14 as control lines 202 and 204, respectively, emanating to and from the application console 206. The application console 206 further has a control line 208 indicated as Activating User System.
Figure 14 illustrates various elements, more fully shown on their related Figures, some of which are comprised of various components indicated by corresponding letter symbols, all of which are shown in Table 1.

TABLE 1

    Element  Nomenclature                              Related Figure
    206      application console                       18
    210      distributed application configuration     18
    210A     configurator (CSL compiler)               16
    210B     Node.Cap                                  15
    210C     X.Prcd                                    17
    212      process development                       18
    212A     language injection library (LIL)          19
    212B     distributed function library (DFL)        20
    214      user program                              19
    214A     conventional compiler library             --
    214B     other user programs                       --
    216      user process                              19
    218      distributed application execution         18
    218A     distributed process controller (DPC)      23
    218B     runtime daemon library                    15, 22

NodeCapd Daemon

The NodeCap 210B is a file containing information from a CESL specification of the first embodiment. Figure 15 illustrates the nodecap daemon (NODECAPD) 210E, comprised of NODECAP 210B and an additional file 210D (NET_CTL.DAT) which supplies the NODECAPD with the multiple network information.
NODECAPD 210E of Figure 15 is added to the daemon library for maintaining the accuracy of the NodeCap file. It periodically monitors the computers (WS1 ... WSk) and validates any entry made into the NODECAP file 210B. Figure 15 shows a typical task performed by the NODECAPD 210E, which receives information from the files 210D and 210B. This task periodically checks for identification within the networks (as shown in 210D of Figure 15) and then waits (sleeps) for a duration specified in Delta (see 210D) before rechecking for this same information. The NODECAPD exchanges information with the remote command interpreter daemon (RCID), to be described with reference to Figure 22, both of which control the exchange of information between the host computer 10 and the computers WS1 ... WSk. Note that the NodeCap file is essential to the processor allocation tasks in the CSL compiler and DPC.
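
The NODECAPD cycle just described might be sketched as follows; the structure fields and the probe_host helper (shown here as a stub) are assumptions:

    /* Sketch of the NODECAPD polling cycle; Delta and the validation
       step are as described above, all names are illustrative. */
    #include <unistd.h>

    struct host { char name[32]; int alive; };

    /* placeholder: would send an identification request over the network */
    static int probe_host(const char *name) { (void)name; return 1; }

    static void nodecapd(struct host hosts[], int n, unsigned int delta)
    {
        for (;;) {
            for (int i = 0; i < n; i++)   /* validate each NODECAP entry */
                hosts[i].alive = probe_host(hosts[i].name);
            /* rewrite the NODECAP file 210B with the validated entries */
            sleep(delta);                 /* Delta from NET_CTL.DAT 210D */
        }
    }
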
Configurator For Computation Intensive Programs

The configurator of Figure 16 is similar to the previously discussed configurator of Figure 4, with the exception that the network specification, shown in Figure 16, is supplied from the NODECAP of Figure 15. Since the Application Console is a window-based interface, such as a commercially available X-Window, a graphic data collection program may be used to record the information normally obtained from the CESL specification. Further, the module shells and IPC patterns of Figure 4 are merged into one file: X.Prcd (Figure 17).
This facilitates the management of auxiliary files for multiple applications.

Precedence Information File: X.Prcd

Figure 17 illustrates the format of the application precedence (X.PRCD) file. It contains two groups of information: (a) application sub-program interdependencies and (b) individual sub-program information, such as type, input/output ports, direction, runtime, and physical connection.
Each sub-program (or process) can be of type PAR (parallel processing), MDL (a module) or GRP (a sub-system). The input/output port information may be incomplete when X.Prcd is generated by the CSL compiler. This implies that the DPC is going to fill in the missing pieces before starting the user's program.
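
Purely by way of illustration, an X.Prcd file carrying the two groups of information named above might read as follows; every field name and value shown is an assumption, since the actual format appears only in Figure 17:

    # group (a): sub-program interdependencies
    fractal_drv -> fractal_sub

    # group (b): individual sub-program information
    process: fractal_drv  type=MDL  in=1 out=1  runtime=host  conn=tcp
    process: fractal_sub  type=PAR  in=1 out=1  runtime=any   conn=tcp
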
Application Console

The application console (AC) 206 provides the operator with a high level interface for having programming tasks performed by the network of Figure 6. By "high level" it is meant that the user may command execution of a program by merely moving the so-called mouse device and pressing the buttons. The application console 206 provides the integrated means for controlling and monitoring the performance of the application programs being executed by the network. The application console comprises a data entry device, a virtual keyboard, and a monitoring device virtual display. The application console 206 is best implemented by the distributed window systems, such as X-Windows (MIT), Open-Look (Sun Microsystems) and NeXT windows (NeXT computer). The application console 206 may be considered as a virtual computer device that allows itself to be shared for multiple distributed applications being accomplished within the network of Figure 6 at the same
time.
The application console 206 provides the integrated interface between the user and the programming task components of the high-level virtual computer, shown in Figure 18 as comprising: (1) process development 212; (2) distributed application configuration 210; (3) distributed application execution 218; and (4) distributed application maintenance 220.
The distributed application maintenance 220 is comprised of software routines and monitoring means that allow for the so-called debugging of the application programs related to the present invention. The debugging means are primarily used during the development phase of the application programs. In general, these means preferably include debugging and performance analysis tools that allow a user to view, in a static manner, selected programming segments of the application programs. Normally, all debugging and performance monitoring data is collected during the execution of an application program and analyzed afterwards. The distributed application maintenance 220 tasks allow the parallel process computations such as SAG, pipeline and MIMD, as well as the software chips DEV, GRP and MDL previously discussed, to be viewed by a user and recorded for analysis purposes.
The application console 206 also allows the user to perform process development 212 utilizing the language injection library (LIL) and the distributed function library (DFL) functions, which may be described with reference to Figure 19 showing distributed functions which are embedded in a user process. Figure 19 illustrates the components of the programmed software (LIL and DFL) that are integrated into the user program to operate in the network computers (host computer 10 and WS1 ... WSk of Figure 6). This integration provides the routines to operate the application software functions related to
the present invention, especially the software of the computation intensive programs.

User Program

Figure 19 further illustrates that the user program 214 is supplied with information from the language injection library (LIL), previously discussed, and the distributed function library (DFL). The LIL supplies the operations to the (IPC) data objects, some of which are shown in Figure 19 as cnf_open, whereas the DFL supplies routines for the computation intensive programs, such as a known ray-tracing program or a FRACTAL calculation program, to be described with reference to Figure 20. A user program 214 having the appropriate routines (to be described) is contained within each of the computers (host computer 10 and WS1 ... WSk) of the network of Figure 6.
After the routines, in the form of macro instructions, of the DFL and the LIL are loaded or placed into the user program 214 within the associated computer (10, WS1 ... WSk), the user program 214 is merged with the codes of linker 214C, and also the codes of a conventional compiler runtime library 214A. The compiler in each computer (10, WS1 ... WSk) cooperates with its respective operating system in a manner as described with reference to Figure 6 for the operating systems of the computers (10, WS1 ... WSk) of the network of Figure 6.
The linking (object code 214B, linker 214C and compiler 214A) produces an executable user program 216 operating within the respective computers (10, WS1 ... WSk), whose runtime relations are determined by the remote command interpreter daemon (RCID) of Figure 22 and the distributed process controller (DPC) of Figure 23. In general, the RCID supplies the initialization information (obtained from the DPC) to the user program 216 to allow it to perform its assigned tasks, whereas the
user process 216, after the completion of its assigned task, supplies the final results to the DPC by means of commands such as that shown in Figure 19 as Termination (cnf_term). The computation intensive programs, such as FRACTAL, being run by the user process 216 in each computer (10, WS1 ... WSk) may be described with further reference to the DFL illustrated in Figure 20.
Driver (DFL DR) 212B2; and (3) DFL Subroutine (DFL SUB) 212B3. The interaction between the segments 212B2 and 212B3 produces the distributed data 222 shown as com-lS prising 222A (Coords); 222B (IDX~: 222C (Fl): 222D (F2);
222E (F3~; and 222F (F4).
Figure 20 illustra es that the segment DFL TB is referenced by the distributed process controller (DPC) (Figure 23) for processor (10, WSl ... WSk) assignment;
DFL DR is referenced by the user programs to be included in the executable body tFigure 19); the distributed data 222 is created by both the DPC programs of Figure 23 and ~:, the RCID programs of Figure 22; and the DFL SUB is used by the DPC (Figure 23) to create the runtime workers to be described.
Figure 20 uses the letter symbols A, B, and N to show the commonality between the program segments of DFL TB, DFL_DR, DFL_SUB and the distributed data 22. For example, the symbol A is used to show the co~monality of the FRACTAL program illustrated in: (i) (DFL TB) within the confines of the box A; (ii) tDFL DR) within the confines of box A, and (iii) (DFL SUB) within its ~ox A.
The subscript for the sy~bol A is si~ilarly used ~or the ~.
distributed data 222 (Coords) A+ and (IDX) A2.
The distributed function library of Figure 20 is particularly suited for perSorming the computation inten-SUBSTITU~C SHEET

.. ... , . ~ , .. . . .
, . ; ......................... .. . .
~ ~ .. ,.. " - , :
- . . . . . , . . .. . ; . ", . . ..... .

W092/01990 PCTt~591/05l16 2~773~

sive progra~s by means of parallel processing (PAR). A
PAR process uses the multiple heteroyeneous computers discussed with regard to Figure 6 and which co~puters cooperate with each other to reduce the execution time of the computation intensive programs. The interface with :- each o~ the user programs contain within each of the computers of Figure 6 is pro~ided by the routines o~ the : distribu~ed function library shown in Figure 20. The PAR
processes prlmarily related to the present invention are the SIMD ~SAG); MI~D; and pipeline techniques all of which are known in the computer art. An abbreviated description of the FRACTAL program performed by the PAR
type: SAG is given in Table 2.
T~B~ 2 DI8~IB~T~D Y~N~ ~ON ~BRARY tDF~) ~RAC~A~ PROG~AM
.
8eom~nt DF~ TB (A~
Name: FRACTAL
: 25 Par Type: SAG
Data Bas2es: Coords, IDX
Relative CPU ~owers: wsl:1 ws2:1.5 ws3:3.6 D~ DR (A) FRACTA~ (image, x,y) - Pack coords to grains - Insert to tuple spa~e "Coords"
- Wait results from "IDX"
- Assemble results - ~eturn SUBSI~ITUTE SHEET

wOg2/01990 PCT/~S91/05116 2 ~

Distributed Parallelism Processinq Figure 21 functionally illustrates three types of distributed parallelism processing which are (1) SIMD
: (SAG~; (2) MIMD; and (3) pipeline which are respectively illustrated in Figure 21(a); Figure 21(b~; and Figure 21(c). Figure 21(a) functionally illustrates, by means of directional arrows shown in phantom, the interaction between the host computer 10 and the working computers (WSl ... WSk) involved with the performance of the FRACTAL program generally illustrated in Table 2.
Figure 21(a) illustrates that the host computer 10 . contains programming segments 212Bl and 212B2, discussed with regard to Figure 20, and also that each of the working computers (WSl ... WSk) contains the programming segment 212B3 also of Figure 20. The host computer 10 is shown to embody a Scattex and Gather (SAG) Process and two tuple spaces TStl) and TS(2~. Functionally, the SAG
process places the tuples related to the FRACTAL program into TS(l) for distribution to (scatter) the working computers (WSl ... WSk). The working computers place the results to TS(2) to be gathered by the SAG process. The segmentation o~ the tuples in the host computer 10 and the working computers (~Sl ... WSX) may ~e as described in "Parallelizinq 'Scatte~ - And - Gather' Applications : 25 Usin~ Hetero~eneous Netwo~k Workstations," by Yuan Shi and Kostas Blathras disclosed at the Proceedings of the Fifth Siam Conference on Parallel Processing ~or Scien-tific Computing held in Houston, Texas in March 1991, and - herein incorporated by reference.
As illustrated in Figure 20, after each respéctive working processor WSl ... ~Sk has performed its computa-tions related to its recei~ed tuple, the respective result is transferred to the host computer 10 by way of tuple space TS(2). The host computer gathers such information and continues its process until the ex~cution of the FRACTAL program is completed.

SUBSTITUTE Sl'EET

,, ,. ,. ." .. . . ...

.,. ~ - . . . .

~09'/0l990 PCT/~S9l/0;1l6 2~77~

A computation intensive function could also be accomplished by the parallel process MIMD or pipeline, respectively shown in Figures 21(b) and (c). Figures 21(b) and (c) generally illustrate the principles of operation of the MIMD and pipeline processes. The overall operation of the distributed parallelism processing of Figures 21(a), (b), and (c) may be further described with reference to Figure 14.

Overall Operation of Second Embodiment

Figure 14, in its lower portions, illustrates that the working computers (WS1, WS2 ... WSk), as well as the host computer 10 shown in phantom, are interconnected by the bus 16. The worker computers (WS1 ... WSk) each contain the Port ID Management Daemon (PIMD), previously discussed with reference to Figure 8, and also the Remote Command Interpreter Daemon (RCID) of Figure 22. The daemons (PIMD and RCID) are supplied from the host computer 10 each time the network (10, WS1 ... WSk) is updated. The PIMD and RCID daemons perform the chores to coordinate the data exchange between the computers (10, WS1 ... WSk), each of which may have a different operating system (OS1, OS2 ... OSk) and a different compiler. The PIMD and RCID daemons are supplied from the runtime daemon library 218B contained in host computer 10. The RCID daemon related to the second embodiment of the present invention is illustrated in Figure 22.

Remote Command Interpreter Daemon

The Remote Command Interpreter Daemon (RCID) of Figure 22 receives information from the NODECAPD of Figure 15 and from the distributed process controller (DPC) of Figure 23. The DPC is shown as entering Figure 22 from bus 16 and into the program segment (wait for (DPC)) of the subset (process creation). A comparison of the RCID of the second embodiment of Figure 22 with the
RCID of the first embodiment of Figure 7 reveals that Figure 22 is simplified. The RCID of Figure 22 receives from the DPC, by way of bus 16, a single message that causes the RCID to respond and receive information from the DPC that is used for initialization and external control functions so as to direct the exchange of information within the network of Figure 14. The primary means of controlling and exchanging information, related to the second embodiment of the present invention, is determined by the distributed process controller, which is functionally illustrated in Figure 23.

Distributed Process Controller

Figure 23 illustrates a general flow chart of the distributed process controller (DPC) comprised of segments 230, 232 ... 248. The entry (230) into the flow chart is established by the availability of the application precedence (X.Prcd) of Figure 17, which, as previously discussed, is developed by the configurator (CSL compiler) of Figure 16. The command to execute or have the DPC (segment 232) read the X.Prcd is caused by the application console 206 generating the control signal 208 (Activating User System). The distributed process controller (DPC) of Figure 23 is applicable to three types of processing, one of which is the PAR type related to the second embodiment of the present invention, and the remaining two of which are the GRP and MDL types previously discussed with regard to the first embodiment. The programming decision highlights of the PAR type are shown in segment 236, whereas the programming decision highlights of the MDL and GRP process types are respectively shown in segments 238 and 240. Each of the segments 236, 238 and 240, as shown in Figure 23, shares the programming segments 232, 234, 242.
Segment 232, in response to control signal 208, causes the reading of the X.PRCD program and, in a manner
well known in the art, constructs a dependency matrix with linked process initialization and control information. Upon the completion of segment 232, the step-by-step sequence of Figure 23 sequences to segment 234.
Programming segment 234 checks for the type (PAR, GRP or MDL) of process to be performed, and then sequences to segment 236 for a PAR type, to segment 238 for a GRP type, or to segment 240 for an MDL type. The second embodiment of the present invention is primarily related to the PAR process type shown in segment 236.
Segment 236 is divided into: (i) the SAG (SIMD) processing, (ii) the pipeline processing, and (iii) the MIMD processing. The SAG (SIMD) processing, previously discussed with regard to Figures 20 and 21(a), has programming highlights illustrated in the top portion of segment 236. The SAG process of segment 236, indicated as SAG (SIMD) parallelizable, performs four functions. The first (a) determines the assignment of the working computers (WS1 ... WSk) to the program (e.g. FRACTAL) being performed. If the process (SAG) has a processor (WS1 ... WSk) assigned to it, then the SAG process uses such; otherwise it searches the information contained in the DFL_TB to determine and then use the most powerful processor (10, WS1 ... WSk). This selected computer is to host the SAG process. The segment 236 then (b) obtains from the DFL_TB table the function name of the program being performed, such as FRACTAL, and obtains the corresponding names of the tuple spaces related to that particular program. The segment 236 then (c) assigns two tuple space daemons, previously described with reference to Figure 11, to the host computer which serves as the calling SAG processor. Finally (d), the SAG program of segment 236 determines which of the working computers WS1 ... WSk are assigned to executing the named program (e.g., FRACTAL). Upon completion of (d), the SAG program sequences to segment 242.

Segment 242 supplies all the initialization records to all of the computers (10, WS1 ... WSk) within the network shown in Figure 6, and then starts the execution of the one or more processes previously described with reference to Figures 20 and 21(a). For the process (SAG) being described, which uses more than one computer (10, WS1 ... WSk), the segment 242 sends executables (programs capable of being executed) to the one or more RCID(s) respectively contained in the one or more working computers (WS1 ... WSk) assigned to execute the SAG process. If the process being performed is being accomplished by only the host computer 10, or a Network File System is in place, then executables are not sent to the RCID. For the SAG process, the executables are sent to the respective RCID's which, in turn, initiate the user process 216 of Figure 19. The user process 216 is shown as being included in segment 244 of Figure 23. The user process 216 then accomplishes its portion of the simultaneous, parallel-processing application and, upon its completion, sends the remote termination call comprised of the IPC pattern shown within segment 246. This termination call is then routed to the entry location of segment 248 as shown in Figure 23.
Segment 248, upon receiving the termination call, modifies the previously discussed sequential dependencies, and returns the step-by-step progression to segment 234 to await the assignment of its next task, whereupon the step-by-step progression of segments 236 ... 248 is repeated in response to each received new task.
It should now be appreciated that the practice of the present invention provides for a second embodiment that performs parallel processing for computation intensive programming. The execution of such programming is initiated by the user merely entering the name of a program, e.g., FRACTAL, by way of the application console 206. In the practice of the present invention, it has been
estimated that the use of six working processing computers (WS1 ... WSk) cooperating with one host computer 10 reduces the execution time normally taken for a computation intensive program such as FRACTAL by a factor of six, assuming all computers are of equal power.
The present invention, described for both the first and second embodiments, may serve as a digital processing system for many applications. One such application is real time signal processors such as those found for radar and sonar applications.

Real Time Signal Processors

Real time signal processors are special purpose computers. Due to the continuous flow of data, traditionally a pipelined architecture has been used to achieve computation speedup. It is now recognized that all types of parallel structures (MIMD, SIMD and pipelines, all previously discussed primarily with reference to Figures 14-23) can be used to produce higher performance signal processors. The present invention provides a system and a method for the design and implementation of scalable and customizable signal processors.
For such signal processors, multiple homogeneous CPU's, such as those described with reference to Figure 6, with local memories are connected through a high speed bus 16 with multiple I/O ports for peripherals. The size of the local memory should be sufficient to accommodate at least two typical grain real time processes. The grain is the size of a live process.
A configurator (CSL compiler), such as those discussed with reference to Figures 4 and 16, may be derived to generate interprocess communication (IPC) patterns, previously discussed, and control data (X.Prcd) adapted to the needs of real time processing. For a signal processor with a hard real time requirement, that is, one
in which the time measurements must supply absolute (hard) values, the configurator is used to identify the near optimal program partitioning and processor allocation. Once the partition and allocation are found, all pre-partitioned processes are re-coded to form the near optimal partition. All configurator generated IPC patterns and dependency matrixes are hard coded into the optimally partitioned programs. For signal processors with soft (non-absolute) time requirements, the re-coding process may be skipped to reduce labor costs.

Real Time Signal Processor Daemons

Each processor unit, including a CPU and a local memory, such as those discussed with regard to Figure 6, contains programs that can provide similar functions as the RCID and PIMD, similar to those described for Figures 22 and 8, respectively, as well as a minimal function scheduler (First-In First-Out). The RCID program in this case is to load real time process routines and attach them to the process list. The PIMD server in this case is to provide the grouping of CPU ID, interrupt vector index, and application port assignments. The minimal function scheduler (FIFO) simply waits at a software interrupt associated with each process list (per CPU), which is to be set by the arrivals of real time processes. The scheduler (FIFO) transfers the control of the CPU to the first process on the list. After the completion, the real time process transfers the control to the next process on the list, if any, until the list is empty.
Every process occupies the entire CPU from its activation to its termination phases. For real time processing applications, it is preferred that each CPU (10, WS1 ... WSk of Figure 6) be dedicated to processing the data associated with the input signal sequences.
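
The minimal function scheduler just described might be sketched as follows, one instance per CPU; the list representation and the wait primitive (shown as a stub) are assumptions:

    /* Sketch of the per-CPU First-In First-Out scheduler. */
    #include <stddef.h>

    struct rt_process {
        void (*run)(void);            /* real time process entry point */
        struct rt_process *next;
    };

    /* placeholder: would block on the software interrupt that arriving
       real time processes set for this CPU's process list */
    static void wait_for_software_interrupt(void) { }

    static void fifo_scheduler(struct rt_process **list)
    {
        for (;;) {
            wait_for_software_interrupt();
            while (*list != NULL) {          /* run until the list is empty */
                struct rt_process *p = *list;
                *list = p->next;
                p->run();   /* process owns the CPU until it terminates */
            }
        }
    }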

Real Time Signal Processors Distributed Process Controller (DPC)

A distributed process controller (DPC), similar to that described with reference to Figure 23, controls the signal processor for its sub-task coordination according to an X.Prcd similar to that described for Figure 17. It is preferred that the X.Prcd, which is particularly suited for real time processing, be arranged to form part of the DPC. Also, one of the CPU's (10, WS1 ... WSk of Figure 6) should be dedicated to service the DPC.
Real Time Signal Processor Tuple Space Daemons

Tuple space daemons act as servers operating in each CPU, receiving requests from other CPUs and manipulating the tuples through their local memory. The use of each tuple space server indicates the existence of one virtual vector processor. One CPU should be dedicated to each tuple space server. The number of dedicated tuple space CPUs is determined by the application being performed by the real time processors. One can form multiple pipelined virtual vector processors using multiple tuple space CPU's (and associated worker CPU's).

Real Time Signal Processor Language Injection Library

The Language Injection Library (LIL), such as that described with reference to Figure 19, contains pre-programmed codes for open/put/get/read/close operations on tuple spaces and open/read/write/close operations on remote communication ports.
This library is used in the customization process to determine the near optimal process partition and processor allocation of the real time processors. Once the near optimal is found, the injection routines are physically included as code segments into the real-time process.
For this application, it is preferred that the
mailbox daemons, described with reference to Figure 10, also become part of the language injection library (LIL), and thus part of the real time process.

Application of Real Time Signal Processing

A signal processing application is accomplished by first being partitioned to form SIMD, MIMD and pipelined components. The partitioning may be accomplished in the manner of the previously mentioned technical article "Parallelizing 'Scatter-And-Gather' Applications Using Heterogeneous Network Workstations." Generally speaking, it has been determined that SIMD components give the best speed advantages and MIMD components can give a fair speed advantage. Pipelines are the least preferred. A
SIMD component can be viewed as a Scatter-And-Gather (SAG) element. A "SAGable" computation, as used herein, is a computation that can be transformed into a repetitive application of a single sub-task. For example, multiplying two matrices can be transformed into repeated applications of dot products of respective vectors, as shown in the sketch below. All existing techniques for vectorization can be employed to identify and maximize the SAGable components. MIMD components come from the dataflow analysis of the signal processing application. Processor assignment should take into account that independent processes can be run on different CPUs. Finally, pipelines are to be formed by streamlining the dataflow along the most natural order (shortest delay) of the underlying architecture. For best results with pipeline techniques, it is preferred that processor assignment form stages of similar time delays.
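
The matrix product example may be made concrete as follows; each (row, column) dot product is one independent grain that a SAG worker could compute, with the sizes shown being illustrative only:

    /* Matrix multiplication recast as repetitions of one sub-task. */
    #define N 4

    static double dot(const double a[N], const double b[N])
    {
        double s = 0.0;
        for (int k = 0; k < N; k++)
            s += a[k] * b[k];
        return s;
    }

    /* C = A * B, expressed as N*N applications of the single sub-task */
    static void matmul(const double A[N][N], const double BT[N][N],
                       double C[N][N])
    {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                C[i][j] = dot(A[i], BT[j]);  /* BT holds B transposed */
    }
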
It should now be appreciated that the practice of the present invention may be used for real time signal processors. The real time applications may be accomplished by parallel processing techniques such as those described herein for SIMD (SAG), MIMD and pipelining.

The hardware architecture is a simple streamlined multiprocessor platform which can be scaled in many dimensions to optimize performance. The processor requirements are uniform, allowing the use of large quantities of commercially available CPU chips for one signal processor.
The present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof and, accordingly, reference should be made to the appended claims, rather than to the foregoing specification, as indicating the scope of the invention.


Claims (21)

1. A system for execution of an application program including a plurality of sub-programs among a plurality of target computers, comprising:
(a) a plurality of target computers, each target computer including (i) at least one communication channel, (ii) memory space accessible through the at least one communication channel, (iii) a library of language injections in the memory space, each language injection being runnable by the target computer in response to an external interprocessor control command, each language injection being a subroutine facilitating the operation of a pre-defined data object;
(b) at least one host computer operatively connected to the plurality of target computers, including (i) means for processing a computing environment specification, the computing environment specification being a set of instructions for activating at least a subset of target computers for accepting data and instructions from the host computer, (ii) means for processing the application program, the application program including at least one sub-program runnable on at least one of the subset of target computers, the application program further including at least one interprocessor control command for activating at least one library injection in a target computer, whereby the sub-program may be compiled and executed in any target computer having a compatible compiler without alteration of the sub-program, (iii) means for transferring the sub-program from the at least one host computer to at least one target computer for compilation and execution in the target computer in response to a signal from the host computer.
2. A system as in claim 1, having language injections adapted for the operation in one or more target computers of a mailbox mechanism.
3. A system as in claim 1, having language injections for operating in the target computers a mechanism for accessing an external file.
4. A system as in claim 1, having language injections for the operation of at least one tuple space among the target computers.
5. A system as in claim 1, having language injections for operation of mailboxes, external files, and tuple spaces among the target computers.
6. A system as in claim 1, wherein the software characteristic of the at least one host computer is embodied within at least one of the plurality of target computers.
7. A system as in claim 1, wherein each target computer further includes a daemon associated with the at least one communication channel, the daemon being a program whereby the at least one communication channel is monitored for an instruction from the host computer, and also for declaring other communication channels in the system for subsequent use in the application program.
8. A system according to claim 1, wherein said application program is adapted for real time processing.
9. A system for the creation of a virtual computer for execution of an application program including a plurality of sub-programs among at least one host com-puter and a plurality of target computers, comprising:
a configurator in the host computer, adapted to compile an application software specification, the application software specification being instructions related to the desired execution of individual sub-programs at particular target computers;
daemon means associated with each target computer, including at least one software loop adapted to branch upon an instruction external to the target computer to access the target computer, whereby at least one sub-program may be executed in the target computer;
a distributed process controller, adapted to activate the daemon means associated with each target computer in response to the application software specification; and a library of language injections in each target computer, each language injection being runnable within the target computer in response to an instruction external to the target computer, each language injection being a subroutine facilitating the transference of data among the target computers, whereby a sub-program may be compiled and executed in any target computer having a compatible compiler without alteration of the sub-program.
10. A system as in claim 9, wherein the language-injection library includes software means adapted to create daemons incidental to the execution of an application program, whereby data is transferred among a plurality of target computers.
11. A system as in claim 10, wherein the software means adapted to create daemons incidental to the execution of an application program include means for creating a mailbox.
12. A system as in claim 10, wherein the software means adapted to create daemons incidental to the execution of an application program include means for the creation of a tuple space.
13. A system as in claim 9, wherein a subset of the target computers require name abstraction and another subset of the target computers require port abstraction, further including means for reconciling port abstraction and name abstraction among the target computers.
14. A system according to claim 9, wherein said application program is adapted for real time processing.
15. A method of executing an application program having a plurality of sub-programs to be executed among a plurality of target computers, comprising the steps of:
entering into a host computer the application program, including interprocessor commands related to desired target computers for the execution of various sub-programs;
providing within each target computer daemon means, comprising at least one software loop adapted to branch upon an external instruction, thereby permitting a sub-program to be loaded into the target computer for compilation and execution within the target computer;
providing within each target computer a language injection library, comprising a plurality of subroutines runnable by the target computer and adapted to facilitate the transference of data among target computers without alteration of the sub-program, the subroutines being executed in response to interprocessor commands.
16. A method as in claim 15, further comprising the step of causing the host computer to send an instruction to at least one target computer, thereby activating the daemon means in the target computer and causing at least one subroutine in the language injection library to be executed by the target computer.
17. A method as in claim 16, further comprising the step of creating, by means of activating at least one subroutine in the language injection library of at least one target computer, a mailbox.
18. A method as in claim 14, further comprising the step of creating, by means of at least one subroutine from the language injection library of at least one target computer, a tuple space.
19. A method as in claim 15, wherein said application program is adapted for real time processing.
20. A system for execution of a computational intensive application program including a plurality of subprograms among a plurality of target computers, comprising:
(a) a data entry device;
(b) a plurality of target computers, each computer including:
(i) at least one communication channel, (ii) memory space accessible through the at least one communication channel, (iii) a library of language injections in the memory space, each interjection serving as a routine for controlling data in said plurality of target computers, (iv) a distributed function library in said memory space that cooperates with said interjections of said library of language injections, said distributed function library comprising subroutines for computational intensive programs serving as said application programs to be executed by said system, said computation intensive programs and said cooperating injections both being initiated for execution by said data entry device; and (c) at least one computer operatively connected to the plurality of target computers, including:
(i) means for processing a computer environmental specification, the computer environmental specification being a set of instructions for activating at least a subset of the target computers for accepting data and instructions from the host computer,
(ii) means for processing the computation-intensive application programs, the computation-intensive application programs including at least one sub-program runable on at least one of the subset of target computers, whereby the at least one sub-program of said computation-intensive programs may be compiled and executed in any target computer having a compatible compiler without altering the contents thereof, and
(iii) means for transferring the sub-program of the computation-intensive application program from the at least one host computer to at least one target computer for compilation and execution in the target computer in response to a command signal from said data entry device.
21. A system according to claim 20, wherein the computation-intensive application program is adapted for real time processing.
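Tying claims 15 and 20 together, a host-side sketch is shown below: it processes a computer environmental specification (assumed here to be a plain text file, vc.spec, naming each target and its sub-program) and dispatches load and execute commands to each target's daemon. The file format and the send_to() transport stub are illustrative assumptions, not taken from the patent.

/* Host-side sketch for claims 20 and 21: read an environmental
 * specification and dispatch each sub-program to its target. */
#include <stdio.h>

/* Stand-in for the real transport to a target computer's daemon. */
static void send_to(const char *target, const char *command)
{
    printf("-> %s: %s\n", target, command);
}

int main(void)
{
    FILE *spec = fopen("vc.spec", "r");  /* lines: <target> <sub-program> */
    char target[64], subprog[64], cmd[160];

    if (!spec) { perror("vc.spec"); return 1; }
    while (fscanf(spec, "%63s %63s", target, subprog) == 2) {
        snprintf(cmd, sizeof cmd, "LOAD %s", subprog);
        send_to(target, cmd);
        snprintf(cmd, sizeof cmd, "EXEC %s", subprog);
        send_to(target, cmd);
    }
    fclose(spec);
    return 0;
}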
CA002087735A 1990-07-20 1991-07-19 System for high-level virtual computer with heterogeneous operating systems Abandoned CA2087735A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US55692090A 1990-07-20 1990-07-20
US556,920 1990-07-20

Publications (1)

Publication Number Publication Date
CA2087735A1 true CA2087735A1 (en) 1992-01-21

Family

ID=24223355

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002087735A Abandoned CA2087735A1 (en) 1990-07-20 1991-07-19 System for high-level virtual computer with heterogeneous operating systems

Country Status (6)

Country Link
US (1) US5381534A (en)
EP (1) EP0540680A4 (en)
JP (1) JPH06502941A (en)
AU (1) AU8449991A (en)
CA (1) CA2087735A1 (en)
WO (1) WO1992001990A1 (en)

Families Citing this family (109)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5845078A (en) * 1992-04-16 1998-12-01 Hitachi, Ltd. Network integrated construction system, method of installing network connection machines, and method of setting network parameters
US5687315A (en) * 1992-04-16 1997-11-11 Hitachi, Ltd. Support system for constructing an integrated network
JPH0628322A (en) * 1992-07-10 1994-02-04 Canon Inc Information processor
US5802290A (en) * 1992-07-29 1998-09-01 Virtual Computer Corporation Computer network of distributed virtual computers which are EAC reconfigurable in response to instruction to be executed
JP3370704B2 (en) * 1992-10-12 2003-01-27 株式会社日立製作所 Communication control method
JPH06161919A (en) * 1992-11-25 1994-06-10 Fujitsu Ltd Message control system
JPH06301555A (en) * 1993-02-26 1994-10-28 Internatl Business Mach Corp <Ibm> System for plural symbiotic operating systems on micro kernel and for personality use
US5848234A (en) * 1993-05-21 1998-12-08 Candle Distributed Solutions, Inc. Object procedure messaging facility
JP2814880B2 (en) * 1993-06-04 1998-10-27 日本電気株式会社 Control device for computer system constituted by a plurality of CPUs having different instruction characteristics
JP3670303B2 (en) * 1993-09-01 2005-07-13 富士通株式会社 Data conversion method and data conversion apparatus
US6038586A (en) * 1993-12-30 2000-03-14 Frye; Russell Automated software updating and distribution
US5764949A (en) * 1994-09-29 1998-06-09 International Business Machines Corporation Query pass through in a heterogeneous, distributed database environment
US5588150A (en) * 1994-09-29 1996-12-24 International Business Machines Corporation Push down optimization in a distributed, multi-database system
US6381595B1 (en) * 1994-09-29 2002-04-30 International Business Machines Corporation System and method for compensation of functional differences between heterogeneous database management systems
US5768577A (en) * 1994-09-29 1998-06-16 International Business Machines Corporation Performance optimization in a heterogeneous, distributed database environment
US5771381A (en) * 1994-12-13 1998-06-23 Microsoft Corporation Method and system for adding configuration files for a user
US5742829A (en) * 1995-03-10 1998-04-21 Microsoft Corporation Automatic software installation on heterogeneous networked client computer systems
US5751972A (en) * 1995-03-28 1998-05-12 Apple Computer, Inc. System for run-time configuration of network data transfer paths
US5724556A (en) * 1995-04-14 1998-03-03 Oracle Corporation Method and apparatus for defining and configuring modules of data objects and programs in a distributed computer system
US5734865A (en) * 1995-06-07 1998-03-31 Bull Hn Information Systems Inc. Virtual local area network well-known port routing mechanism for mult--emulators in an open system environment
US6138140A (en) * 1995-07-14 2000-10-24 Sony Corporation Data processing method and device
US6584568B1 (en) 1995-07-31 2003-06-24 Pinnacle Technology, Inc. Network provider loop security system and method
US6061795A (en) * 1995-07-31 2000-05-09 Pinnacle Technology Inc. Network desktop management security system and method
US5694537A (en) * 1995-07-31 1997-12-02 Canon Information Systems, Inc. Network device which selects a time service provider
EP0850545A1 (en) * 1995-09-15 1998-07-01 Siemens Aktiengesellschaft Operational environment system for communication network service applications
DE19536649A1 (en) * 1995-09-30 1997-04-03 Sel Alcatel Ag Method for coupling data processing units, method for controlling a switching center, data processing unit, controller and switching center
US6714945B1 (en) 1995-11-17 2004-03-30 Sabre Inc. System, method, and article of manufacture for propagating transaction processing facility based data and for providing the propagated data to a variety of clients
US6122642A (en) * 1996-01-18 2000-09-19 Sabre Inc. System for propagating, retrieving and using transaction processing facility airline computerized reservation system data on a relational database processing platform
JPH09168009A (en) * 1995-12-15 1997-06-24 Hitachi Ltd Network operation information setting system
US5797010A (en) * 1995-12-22 1998-08-18 Time Warner Cable Multiple run-time execution environment support in a set-top processor
US6345311B1 (en) * 1995-12-27 2002-02-05 International Business Machines Corporation Method and system of dynamically moving objects between heterogeneous execution environments
US5761512A (en) * 1995-12-27 1998-06-02 International Business Machines Corporation Automatic client-server complier
US5774728A (en) * 1995-12-27 1998-06-30 International Business Machines Corporation Method and system for compiling sections of a computer program for multiple execution environments
US5734820A (en) * 1996-03-11 1998-03-31 Sterling Commerce, Inc. Security apparatus and method for a data communications system
US6233704B1 (en) * 1996-03-13 2001-05-15 Silicon Graphics, Inc. System and method for fault-tolerant transmission of data within a dual ring network
US5748900A (en) * 1996-03-13 1998-05-05 Cray Research, Inc. Adaptive congestion control mechanism for modular computer networks
US5862313A (en) * 1996-05-20 1999-01-19 Cray Research, Inc. Raid system using I/O buffer segment to temporary store striped and parity data and connecting all disk drives via a single time multiplexed network
US5903873A (en) * 1996-05-31 1999-05-11 American General Life And Accident Insurance Company System for registering insurance transactions and communicating with a home office
US5799149A (en) * 1996-06-17 1998-08-25 International Business Machines Corporation System partitioning for massively parallel processors
US5854896A (en) * 1996-06-17 1998-12-29 International Business Machines Corporation System for preserving logical partitions of distributed parallel processing system after re-booting by mapping nodes to their respective sub-environments
US5941943A (en) * 1996-06-17 1999-08-24 International Business Machines Corporation Apparatus and a method for creating isolated sub-environments using host names and aliases
US5881227A (en) * 1996-06-17 1999-03-09 International Business Machines Corporation Use of daemons in a partitioned massively parallel processing system environment
US5768532A (en) * 1996-06-17 1998-06-16 International Business Machines Corporation Method and distributed database file system for implementing self-describing distributed file objects
US5956728A (en) * 1996-07-17 1999-09-21 Next Software, Inc. Object graph editing context and methods of use
US20040139049A1 (en) * 1996-08-22 2004-07-15 Wgrs Licensing Company, Llc Unified geographic database and method of creating, maintaining and using the same
US5781703A (en) * 1996-09-06 1998-07-14 Candle Distributed Solutions, Inc. Intelligent remote agent for computer performance monitoring
US5764889A (en) * 1996-09-26 1998-06-09 International Business Machines Corporation Method and apparatus for creating a security environment for a user task in a client/server system
US5881269A (en) * 1996-09-30 1999-03-09 International Business Machines Corporation Simulation of multiple local area network clients on a single workstation
AT1751U1 (en) * 1996-09-30 1997-10-27 Kuehn Eva COORDINATION SYSTEM
US6611878B2 (en) 1996-11-08 2003-08-26 International Business Machines Corporation Method and apparatus for software technology injection for operating systems which assign separate process address spaces
US7035906B1 (en) 1996-11-29 2006-04-25 Ellis Iii Frampton E Global network computers
US7024449B1 (en) 1996-11-29 2006-04-04 Ellis Iii Frampton E Global network computers
US7805756B2 (en) * 1996-11-29 2010-09-28 Frampton E Ellis Microchips with inner firewalls, faraday cages, and/or photovoltaic cells
US7634529B2 (en) 1996-11-29 2009-12-15 Ellis Iii Frampton E Personal and server computers having microchips with multiple processing units and internal firewalls
US8312529B2 (en) 1996-11-29 2012-11-13 Ellis Frampton E Global network computers
US6725250B1 (en) 1996-11-29 2004-04-20 Ellis, Iii Frampton E. Global network computers
US8225003B2 (en) * 1996-11-29 2012-07-17 Ellis Iii Frampton E Computers and microchips with a portion protected by an internal hardware firewall
US6732141B2 (en) 1996-11-29 2004-05-04 Frampton Erroll Ellis Commercial distributed processing by personal computers over the internet
US7926097B2 (en) 1996-11-29 2011-04-12 Ellis Iii Frampton E Computer or microchip protected from the internet by internal hardware
US20050180095A1 (en) * 1996-11-29 2005-08-18 Ellis Frampton E. Global network computers
US7506020B2 (en) 1996-11-29 2009-03-17 Frampton E Ellis Global network computers
US6167428A (en) * 1996-11-29 2000-12-26 Ellis; Frampton E. Personal computer microprocessor firewalls for internet distributed processing
US6463527B1 (en) * 1997-03-21 2002-10-08 Uzi Y. Vishkin Spawn-join instruction set architecture for providing explicit multithreading
US6065116A (en) * 1997-05-07 2000-05-16 Unisys Corporation Method and apparatus for configuring a distributed application program
US6289388B1 (en) 1997-06-02 2001-09-11 Unisys Corporation System for communicating heterogeneous computers that are coupled through an I/O interconnection subsystem and have distinct network addresses, via a single network interface card
US6473803B1 (en) 1997-06-02 2002-10-29 Unisys Corporation Virtual LAN interface for high-speed communications between heterogeneous computer systems
US6016394A (en) * 1997-09-17 2000-01-18 Tenfold Corporation Method and system for database application software creation requiring minimal programming
US5911073A (en) * 1997-12-23 1999-06-08 Hewlett-Packard Company Method and apparatus for dynamic process monitoring through an ancillary control code system
US6076174A (en) * 1998-02-19 2000-06-13 United States Of America Scheduling framework for a heterogeneous computer network
US6272593B1 (en) * 1998-04-10 2001-08-07 Microsoft Corporation Dynamic network cache directories
US6148437A (en) * 1998-05-04 2000-11-14 Hewlett-Packard Company System and method for jump-evaluated trace designation
US6189141B1 (en) 1998-05-04 2001-02-13 Hewlett-Packard Company Control path evaluating trace designator with dynamically adjustable thresholds for activation of tracing for high (hot) activity and low (cold) activity of flow control
US6164841A (en) * 1998-05-04 2000-12-26 Hewlett-Packard Company Method, apparatus, and product for dynamic software code translation system
US6233619B1 (en) 1998-07-31 2001-05-15 Unisys Corporation Virtual transport layer interface and messaging subsystem for high-speed communications between heterogeneous computer systems
US7127701B2 (en) * 1998-09-18 2006-10-24 Wylci Fables Computer processing and programming method using autonomous data handlers
US6216174B1 (en) 1998-09-29 2001-04-10 Silicon Graphics, Inc. System and method for fast barrier synchronization
US6542994B1 (en) 1999-04-12 2003-04-01 Pinnacle Technologies, Inc. Logon authentication and security system and method
US6550062B2 (en) 1999-04-30 2003-04-15 Dell Usa, Lp System and method for launching generic download processing in a computer build-to-order environment
US6757744B1 (en) 1999-05-12 2004-06-29 Unisys Corporation Distributed transport communications manager with messaging subsystem for high-speed communications between heterogeneous computer systems
US7280995B1 (en) * 1999-08-05 2007-10-09 Oracle International Corporation On-the-fly format conversion
US6732139B1 (en) * 1999-08-16 2004-05-04 International Business Machines Corporation Method to distribute programs using remote java objects
EP1912124B8 (en) 1999-10-14 2013-01-09 Bluearc UK Limited Apparatus and system for implementation of service functions
JP4475614B2 (en) * 2000-04-28 2010-06-09 大正製薬株式会社 Job assignment method and parallel processing method in parallel processing method
US6950850B1 (en) * 2000-10-31 2005-09-27 International Business Machines Corporation System and method for dynamic runtime partitioning of model-view-controller applications
US7251693B2 (en) * 2001-10-12 2007-07-31 Direct Computer Resources, Inc. System and method for data quality management and control of heterogeneous data sources
US8312117B1 (en) 2001-11-15 2012-11-13 Unisys Corporation Dialog recovery in a distributed computer system
AU2003222256B2 (en) * 2002-03-06 2008-09-25 Canvas Technology, Inc. User controllable computer presentation of interfaces and information selectively provided via a network
US20030177166A1 (en) * 2002-03-15 2003-09-18 Research Foundation Of The State University Of New York Scalable scheduling in parallel processors
US7457822B1 (en) 2002-11-01 2008-11-25 Bluearc Uk Limited Apparatus and method for hardware-based file system
US7454749B2 (en) * 2002-11-12 2008-11-18 Engineered Intelligence Corporation Scalable parallel processing on shared memory computers
JP4487479B2 (en) * 2002-11-12 2010-06-23 日本電気株式会社 SIMD instruction sequence generation method and apparatus, and SIMD instruction sequence generation program
CA2525578A1 (en) 2003-05-15 2004-12-02 Applianz Technologies, Inc. Systems and methods of creating and accessing software simulated computers
NL1024464C2 (en) * 2003-10-06 2005-04-07 J A A A Doggen Beheer B V Development and performance programs are for computer application and enable user to create via development program information relative to computer application
US20060015866A1 (en) * 2004-07-16 2006-01-19 Ang Boon S System installer for a reconfigurable data center
US7421575B2 (en) * 2004-07-16 2008-09-02 Hewlett-Packard Development Company, L.P. Configuring a physical platform in a reconfigurable data center
US20060015589A1 (en) * 2004-07-16 2006-01-19 Ang Boon S Generating a service configuration
US8244854B1 (en) 2004-12-08 2012-08-14 Cadence Design Systems, Inc. Method and system for gathering and propagating statistical information in a distributed computing environment
US7979870B1 (en) 2004-12-08 2011-07-12 Cadence Design Systems, Inc. Method and system for locating objects in a distributed computing environment
US8108878B1 (en) * 2004-12-08 2012-01-31 Cadence Design Systems, Inc. Method and apparatus for detecting indeterminate dependencies in a distributed computing environment
US8806490B1 (en) 2004-12-08 2014-08-12 Cadence Design Systems, Inc. Method and apparatus for managing workflow failures by retrying child and parent elements
US7644058B2 (en) * 2006-04-25 2010-01-05 Eugene Haimov Apparatus and process for conjunctive normal form processing
US20090019258A1 (en) * 2007-07-09 2009-01-15 Shi Justin Y Fault tolerant self-optimizing multi-processor system and method thereof
US20090119441A1 (en) * 2007-11-06 2009-05-07 Hewlett-Packard Development Company, L.P. Heterogeneous Parallel Bus Switch
US8125796B2 (en) 2007-11-21 2012-02-28 Frampton E. Ellis Devices with faraday cages and internal flexibility sipes
US20100318973A1 (en) * 2009-06-10 2010-12-16 Tino Rautiainen Method and apparatus for providing dynamic activation of virtual platform sub-modules
US8429735B2 (en) 2010-01-26 2013-04-23 Frampton E. Ellis Method of using one or more secure private networks to actively configure the hardware of a computer or microchip
US9465717B2 (en) * 2013-03-14 2016-10-11 Riverbed Technology, Inc. Native code profiler framework
US11188352B2 (en) 2015-11-10 2021-11-30 Riverbed Technology, Inc. Advanced injection rule engine
US11892966B2 (en) * 2021-11-10 2024-02-06 Xilinx, Inc. Multi-use chip-to-chip interface

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3643227A (en) * 1969-09-15 1972-02-15 Fairchild Camera Instr Co Job flow and multiprocessor operation control system
US4031512A (en) * 1975-05-29 1977-06-21 Burroughs Corporation Communications network for general purpose data communications in a heterogeneous environment
US4253145A (en) * 1978-12-26 1981-02-24 Honeywell Information Systems Inc. Hardware virtualizer for supporting recursive virtual computer systems on a host computer system
US4309754A (en) * 1979-07-30 1982-01-05 International Business Machines Corp. Data interface mechanism for interfacing bit-parallel data buses of different bit width
JPS5938870A (en) * 1982-08-30 1984-03-02 Sharp Corp Electronic computer
JPS6057438A (en) * 1983-09-08 1985-04-03 Hitachi Ltd Virtual computer system controller
US4774655A (en) * 1984-10-24 1988-09-27 Telebase Systems, Inc. System for retrieving information from a plurality of remote databases having at least two different languages
US4975836A (en) * 1984-12-19 1990-12-04 Hitachi, Ltd. Virtual computer system
US4825354A (en) * 1985-11-12 1989-04-25 American Telephone And Telegraph Company, At&T Bell Laboratories Method of file access in a distributed processing computer network
IT1184015B (en) * 1985-12-13 1987-10-22 Elsag MULTI-PROCESSOR SYSTEM WITH MULTIPLE HIERARCHICAL LEVELS
US4720782A (en) * 1986-01-13 1988-01-19 Digital Equipment Corporation Console unit for clustered digital data processing system
JPS62165242A (en) * 1986-01-17 1987-07-21 Toshiba Corp Processor
US4780821A (en) * 1986-07-29 1988-10-25 International Business Machines Corp. Method for multiple programs management within a network having a server computer and a plurality of remote computers
US4839801A (en) * 1986-11-03 1989-06-13 Saxpy Computer Corporation Architecture for block processing computer system
US4935870A (en) * 1986-12-15 1990-06-19 Keycom Electronic Publishing Apparatus for downloading macro programs and executing a downloaded macro program responding to activation of a single key
US5031089A (en) * 1988-12-30 1991-07-09 United States Of America As Represented By The Administrator, National Aeronautics And Space Administration Dynamic resource allocation scheme for distributed heterogeneous computer systems

Also Published As

Publication number Publication date
JPH06502941A (en) 1994-03-31
AU8449991A (en) 1992-02-18
US5381534A (en) 1995-01-10
WO1992001990A1 (en) 1992-02-06
EP0540680A1 (en) 1993-05-12
EP0540680A4 (en) 1993-11-18

Similar Documents

Publication Publication Date Title
CA2087735A1 (en) System for high-level virtual computer with heterogeneous operating systems
US5067072A (en) Virtual software machine which preprocesses application program to isolate execution dependencies and uses target computer processes to implement the execution dependencies
Carriero et al. How to write parallel programs: A guide to the perplexed
Davis et al. Data flow program graphs
US4961133A (en) Method for providing a virtual execution environment on a target computer using a virtual software machine
US6006277A (en) Virtual software machine for enabling CICS application software to run on UNIX based computer systems
Shu et al. Chare kernel—a runtime support system for parallel computations
Bemmerl et al. MMK-a distributed operating system kernel with integrated dynamic loadbalancing
Lopez POLO problem oriented language organizer
Volz et al. Some problems in distributing real-time Ada programs across machines
Agha et al. An actor-based framework for heterogeneous computing systems
Teo et al. Structured parallel simulation modeling and programming
Kahn A small-scale operating system foundation for microprocessor applications
Goldman et al. Building interactive distributed applications in C++ with the programmers’ playground
Shu et al. A multiple-level heterogeneous architecture for image understanding
Civera et al. The μ Project: An Experience with a Multimicroprocessor System.
Bishop et al. Distributed Ada: An Introduction
Mehrotra et al. Language support for multidisciplinary applications
Ferrante et al. Object oriented simulation: Highlights on the PROSIT parallel discrete event simulator
Rizk et al. Design and implementation of a C-based language for distributed real-time systems
Matos et al. Reconfiguration of hierarchical tuple-spaces: Experiments with Linda-Polylith
Dodds Jr et al. Development methodologies for scientific software
Jerebic TriOS operating system
Cavouras Computer system evaluation through supervisor replication
Thornborrow Utilising MIMD Parallelism in Modular Visualization Environments

Legal Events

Date Code Title Description
FZDE Discontinued