CA1291828C - Binary tree parallel processor - Google Patents

Binary tree parallel processor

Info

Publication number
CA1291828C
CA1291828C CA000545782A CA545782A
Authority
CA
Canada
Prior art keywords
mov
lcall
processor
data
movx
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CA000545782A
Other languages
French (fr)
Inventor
Daniel P. Miranker
Salvatore J. Stolfo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Columbia University of New York
Original Assignee
Columbia University of New York
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed. "Global patent litigation dataset" by Darts-ip (https://patents.darts-ip.com/?family=25416008) is licensed under a Creative Commons Attribution 4.0 International License.
Application filed by Columbia University of New York filed Critical Columbia University of New York
Application granted granted Critical
Publication of CA1291828C publication Critical patent/CA1291828C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/16Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/0706Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment
    • G06F11/0721Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment within a central processing unit [CPU]
    • G06F11/0724Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment within a central processing unit [CPU] in a multiprocessor or a multi-core unit
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/16Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F15/163Interprocessor communication
    • G06F15/173Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
    • G06F15/17337Direct connection machines, e.g. completely connected computers, point to point communication networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/76Architectures of general purpose stored program computers
    • G06F15/80Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors
    • G06F15/8007Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors single instruction multiple data [SIMD] multiprocessors
    • G06F15/8023Two dimensional arrays, e.g. mesh, torus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/903Querying
    • G06F16/90335Query processing
    • G06F16/90348Query processing by searching ordered data, e.g. alpha-numerically ordered data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/448Execution paradigms, e.g. implementations of programming paradigms
    • G06F9/4494Execution paradigms, e.g. implementations of programming paradigms data driven
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/0751Error or fault detection not based on redundancy
    • G06F11/0754Error or fault detection not based on redundancy by exceeding limits
    • G06F11/0757Error or fault detection not based on redundancy by exceeding limits by exceeding a time limit, i.e. time-out, e.g. watchdogs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/08Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F11/10Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/22Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Quality & Reliability (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Multi Processors (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

ABSTRACT OF THE DISCLOSURE

A plurality of parallel processing elements are connected in a binary tree configuration, with each processing element except those in the highest and lowest levels being in communication with a single parent processing element as well as first and second (or left and right) child processing elements. Each processing element comprises a processor, a read/write or random access memory, and an input/output (I/O) device. The I/O device provides interfacing between each processing element and its parent and children processing elements so as to provide significant improvements in propagation speeds through the binary tree. The I/O device allows the presently preferred embodiment of the invention to be clocked at 12 megahertz, producing in the case of a tree of 1023 processors, each having an average instruction cycle time of 1.8 microseconds, a system with a raw computational throughput of approximately 570 million instructions per second. The I/O device communicates data and queries from the root processing element to all other N processing elements in the array in one processor instruction cycle instead of in O(log2N) processor instruction cycles as in prior art binary tree arrays.
Primitive queries are executed in parallel by each processing element and the results made available for reporting back to the root processing element. In several important cases, these results can be combined and reported back to the root processing element in a single processor instruction cycle instead of in O(log2N) processor instruction cycles as in prior art binary tree arrays. Thus, the elapsed time for a broadcast and report operation is in effect a constant time regardless of the number of processors in the array.

Description

A related patent is "Parallel Processing Method", U.S. Patent 4,843,540, issued June 27, 1989.
This invention relates generally to data processing and more particularly to a binary tree-structured parallel processing machine employing a large number of processors, each such processor incorporating its own I/O device.
Throughout the history of the computer there has been a continuing demand to increase the throughput of the computer. Most of these efforts have concentrated on increasing the speed of operation of the computer so that it is able to process more instructions per unit time. In serial computers, however, these efforts have in certain senses been self-defeating since all the processing is performed by a single element of the computer leaving most of the resources of the computer idle at any time.
In an effort to avoid some of these problems, special purpose machines such as array processors have been developed which are especially designed for the solution of special classes of problems. Unfortunately, while commercially successful in the performance of certain computational tasks, such computers fall far short of adequate performance in others.
In recent years substantial efforts have been made to increase throughput by operating a plurality of processors in parallel. See, for example, Chuan-lin Wu and Tse-yun Feng, Interconnection Networks for Parallel and Distributed Processing (IEEE 1984). One

such parallel processor is that in which a plurality of processors are connected in a tree-structured network, typically a binary tree. S. A. Browning, "Computations on a Tree of Processors," Proc. VLSI Conf., California Institute of Technology, Pasadena, Jan. 22-24, 1979; A. M. DeSpain et al., "The Computer as a Component" (1979, unpublished); A. Mago, "A Cellular Language-directed Computer Architecture," Proc. VLSI Conf., California Institute of Technology, Jan. 22-24, 1979; R. J. Swan et al., "Cm*--A Modular, Multi-microprocessor", Proc. 1977 NCC, pp. 645-655 (June 1977); J. R. Goodman et al., "Hypertree: A Multiprocessor Interconnection Topology", IEEE Trans. on Computers, Vol. C-30, No. 12, pp. 923-933 (Dec. 1981), reprinted in Wu and Feng at pp. 46-56; J. L. Bentley and H. T. Kung, "Two Papers on a Tree-Structured Parallel Computer", Technical Report, Dept. of Computer Science, Carnegie-Mellon University, Sept. 1979.
In a binary tree computer, a large number of processors are connected so that each processor except those at the root and leaves of the tree has a single parent processor and two children processors. The processors typically operate synchronously on data flowing to them from the parent processor and pass results to descendant processors.
Important problems in the storage and retrieval of data can be analyzed following J. L. Bentley, "Decomposable Searching Problems", Information Processing Letters, Vol. 8, No. 5, pp. 244-250 (June 1978). Bentley defines a static searching problem as one of preprocessing a set F of N objects into an internal data structure D and answering queries about the set F by analyzing the data structure D. Bentley defines three functions of N that characterize the complexity of the searching function: the amount of storage S required by D to store the N objects, the preprocessing time P required to form D in S, and the time Q required to answer a query by searching D.


An illustration of a problem that can be solved by such a database is the membership problem. In this case N elements of a totally ordered set F are preprocessed so that queries of the form "Is x in F?" can be answered quickly. The common solution for serial computers is to store F in a sorted array D and binary search. Thus, the membership problem can be computed on sequential computers with the following complexity:
S = N; P = O(NlogN); Q = O(logN).
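The serial solution just described can be sketched in a few lines of Python (the function names here are ours, purely for illustration), assuming the set elements are totally ordered:

```python
import bisect

def preprocess(objects):
    """Build the data structure D as a sorted array: P = O(N log N), S = N."""
    return sorted(objects)

def member(x, d):
    """Answer the query "Is x in F?" by binary search over D: Q = O(log N)."""
    i = bisect.bisect_left(d, x)
    return i < len(d) and d[i] == x
```

Every query pays the O(log N) search cost, but the array itself is built once.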
Bentley defines a decomposable searching problem as one in which a query asking the relationship of a new object x to a set of objects F can be written as:
Query (x,F) = B q (x,f) where B is the repeated application of a commutative, associative binary operator b that has an identity and q is a primitive query applied between the new object x and each element f of F. Hence the membership problem is a decomposable searching problem when cast in the form:
Member (x,F) = OR equal (x,f) where OR is the logical function OR and equal is the primitive query "Is x equal to f?" applied between the object x and each element f of F.
The key idea about this type of problem is its decomposability. To answer a query about F, we can combine the answers of the query applied to arbitrary subsets of F.
This type of problem is well suited to quick execution in a parallel processing environment. The set F is partitioned into a number of arbitrary subsets equal to the number of available processors.
The primitive query q is then applied in parallel at each processor between the unknown x that is communicated to all processors and the locally stored set element f. The results are then combined in parallel by log2N repetitions of the operator b, first performing b computations on N/2 adjacent pairs of

processors, then b computations on N/4 pairs of results of the first set of computations, and so on until a single result is obtained.
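The pairwise combining rounds described above can be modeled in software. The following is a hedged sketch (our own naming): the real machine performs each round in parallel across processors, while this loop merely simulates the rounds sequentially.

```python
def tree_combine(values, b):
    """Simulate the log2(N) combining rounds of a commutative, associative
    operator b: combine adjacent pairs until a single value remains."""
    while len(values) > 1:
        paired = []
        for i in range(0, len(values), 2):
            if i + 1 < len(values):
                paired.append(b(values[i], values[i + 1]))
            else:
                paired.append(values[i])  # odd element passes up unchanged
        values = paired
    return values[0]
```

For the membership problem, b is logical OR applied to the per-processor results of equal (x,f).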
The complexity of this operation in the parallel processing environment is computed as follows. Each of the N elements of the set F must be distributed among the processors, and the number of time steps to do this equals the number of elements of the set. Thus, P = O(N). If each element is stored in a different processor such that S = N, the time required to answer a primitive query is a single time step; and the time required to compute the final answer is the number of time steps required to report back through the binary tree, which is O(log2N). Thus, Q = O(1) + O(log2N). Compared with the complexity of the membership problem when executed on a serial computer, the use of a parallel processor provides substantial savings in the preprocessing time required to build the data structure since there is no need to store the data structure in an ordered array.
Bentley and Kung proposed a specific tree structure illustrated in Fig. 1 which was designed to achieve throughputs on the order described above. As shown in Fig. 1, their tree structure comprises an array of processors P1-P10 organized into two binary trees that share leaf processors P4-P7. Data flows in one binary tree from root processor P1 to leaf processors P4-P7. Data is operated on at the leaf processors and the results flow in the second tree from leaf processors P4-P7 to root processor P10. Obviously, many more processors can be used in the array if desired.
To load data into each of leaf processors P4-P7 of Fig. 1, the data for each leaf processor is provided to the root processor P1 at successive time steps and is routed through the array to each leaf processor via intermediate processors P2 and P3. Thus, it takes at least one time step per leaf processor to load data into the leaf processors.
The data structure is queried by providing the query to root processor P1 and propagating the query in parallel to each leaf processor. The results of the query are then reported out through processors P8 and P9 to root processor P10, with each of these processors computing a result from two inputs higher up in the tree. As will be apparent, propagation times of the query and the result through the binary tree introduce significant delays in overall throughput comparable to those of a serial computer.
While the time required to answer a single query in the parallel processor is comparable to that in a serial computer, queries can sometimes be processed in pipeline fashion in the parallel processor while they cannot be in the serial computer. Thus, after O(log2N) steps, results begin to flow out of the parallel processor at a rate of one per time step. If the number of queries is large enough that the pipe filling and flushing times can be considered negligible, the complexity can be computed as: S = N, P = O(N) and Q = O(1).
There are, however, numerous instances in which pipelining cannot be used to minimize the effect of propagation delays in the binary tree. These include:
1. Decomposable searching problems where the number of "external" queries is small, or a series of queries are generated internally from processing elements within the tree. Internally generated queries would need to migrate to the root in logN steps under Bentley and Kung's scheme, and be "broadcast" down once again in logN steps. Since each query would force pipe flushing, Q = O(logN) for all queries. Artificial intelligence production systems provide an illustration of these kinds of problems.

2. Searching problems where a single data structure D cannot be constructed. That is, for certain sets of complex (or multi-dimensional) objects, searching problems cannot be applied to a single data structure D. Consider relational databases where each element of a set is a complex record structure with possibly numerous fields or keys. To search such records, a data structure D would necessarily be needed for each field. Hence, in this case P(N) = kN for k fields in each record.
3. A set F of first order predicate logic literals, i.e., a knowledge base. We wish to compute a series of unifications of a series of "goal literals" against each of the elements of F. Since logic variables can bind to arbitrary first order terms during unification, a single query can change the entire set of elements in the knowledge base by binding and substituting new values for variable terms. Successive queries, therefore, are processed against succeedingly different sets of literals. (The simpler case involves frequent modifications to a dynamic set F, necessitating frequent reconstruction of D. Relational databases provide a convenient example.) Hence, Query(xi,Fi) = B q(xi,fi) where Fi = function(Fi-1, Query(xi-1, Fi-1)). In the case of logic programming, function is substitution after logical unification, while for relational databases function may be insert or delete.
4. Problems where a single query in a series of queries cannot be computed without knowing the result of the previous query. In dynamic programming approaches to statistical pattern matching tasks, a single match of an unknown against the set of reference templates cannot be computed without knowing the best match(es) of the previous unknown(s). Hence, for a series of unknowns xi, i = 1, . . ., M, Query(xi,F) = B q(xi, Query(xi-1,F), f).

In this case, the same pipe flushing phenomenon appears as in the first case noted above.
5. Searching problems where we wish to compute a number of different queries about the same unknown x over a set, or possibly different sets.
Hence, Queryi(x,F) = B qi(x,f) for i = 1, . . ., M.
Artificial intelligence production systems provide an illustration of this type of problem as well.
We will refer to problems of this type as almost decomposable searching problems.
Additional deficiencies of binary tree type parallel processors include efficiency and fault tolerance. Efficiency of a computation performed by the tree is often reduced since the amount of computation time required by each processor on a particular cycle may be vastly different depending on its local state. Such differences often result in unnecessary waiting and increased computation time. Additionally, it is well known that binary tree processors are inherently not very fault tolerant. Since any such fault has a tendency to ripple through the binary tree architecture, it is imperative that the fault(s) be not only detected but also compensated for so as to produce an accurate computation despite the fault(s).

Summary of the Invention

The present invention comprises a plurality of parallel processing elements connected in a binary tree configuration, with each processing element except those in the highest and lowest levels being in communication with a single parent processing element as well as first and second (or left and right) child processing elements. Illustratively, 1023 processing elements are arranged in the tree in ten levels.
Each processing element comprises a processor, a read/write or random access memory, and an input/output (I/O) device. The I/O device provides


interfacing between each processing element and its parent and children processing elements so as to provide significant improvements in propagation speeds through the binary tree. The I/O device allows the presently preferred embodiment of the invention to be clocked at 12 megahertz. The average processor instruction cycle time is 1.8 microseconds, producing in the case of a tree of 1023 processors a system with a raw computational throughput of approximately 570 million instructions per second, very little of which is required for communication overhead.
To minimize propagation delays in the binary tree computer of the present invention, the I/O device communicates data and queries from the root processing element to all other N processing elements in the array in one processor instruction cycle instead of in O(log2N) processor instruction cycles as in prior art binary tree arrays.
Primitive queries are executed in parallel by each processing element and the results made available for reporting back to the root processing element. In several important cases, these results can be combined and reported back to the root processing element in a single processor instruction cycle instead of in O(log2N) processor instruction cycles as in prior art binary tree arrays. This mode of operation is called Broadcast/Match/Resolve/Report.
The result is that the elapsed time for a broadcast and report operation with the apparatus of the present invention is in effect a constant time regardless of the number of processors in the array. In the present embodiment of the invention, this time is approximately 1.5 microseconds/byte. As a result, each of the above noted almost decomposable searching problems is efficiently run by the present invention.
Internally generated queries, as in case 1, can be reported and broadcast in a constant number of machine cycles. Similarly, case 4 is efficiently handled by the same constant time broadcast/report cycle. In case 2, multiple data structures are not needed for the processor array of the present invention. Queries about different fields within a record can be implemented by broadcast of the field location prior to the execution of the query. In case 3, the successively different sets are computed on the fly and left intact in each processing element while successive queries are continuously broadcast, each in a constant number of machine cycles.
Another key capability of the present invention is to provide direct hardware support for quickly computing a range of commutative and associative binary operators b. In the membership problem defined above, the binary operator OR is repeatedly applied to all of the results of the primitive query: equal (x,f). In a sequential environment, this operation may require linear time to compute. In a parallel environment it can be computed in log time. With the present invention, it is computed in constant time.
The I/O circuit of the present invention also provides a high speed resolve and report function that determines a most favorable value of a set of values stored in the processing elements and reports this value. Illustratively, this value is a minimum value and the function is called min-resolve.
Min-resolve calculates in hardware in one instruction cycle the minimum value of a set of values distributed one to a processing element. Not only is the minimum value reported, but the processing element with the minimum value is set to a "winner state," providing an indication to all processing elements of the losers and the single winner in the computation, which can subsequently be reported in constant time. (Ties are arbitrated in hardware according to a fixed processing element ordering scheme.) The membership problem can thus be computed by applying min-resolve to zeros and ones (distributed throughout the tree after the application of the complemented equality operator) to compute OR. Similarly, data generated within the processor array can be read in sorted order in O(N) time by sequential enumeration of min-resolved winners.
For example, in the case of the membership problem, the min-resolve function is implemented as follows:
1. Preprocess N objects of set F by distributing each element in turn to one processing element of the array. Note S = N for the entire machine and P = N.
2. Broadcast the unknown object x to all processing elements in constant time.
3. Apply the query equal (x,f)? in parallel at each processing element. In parallel, each processing element sets the min-resolve value to 0 if x = f, else sets it to 1.
4. Min-resolve in one instruction cycle.
The overall computation time is Q = O(1), the sum of steps 2, 3 and 4. In cases where the primitive query q is more complex than equality, the running time is proportional to the slowest of the entire set of concurrent executions of q.
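The four steps above can be modeled in software. In this sketch (our own naming, not the patent's), the one-cycle hardware min-resolve is treated as a function over one value per processing element, with ties broken by fixed processing element order:

```python
def min_resolve(pe_values):
    """Model of the one-cycle hardware min-resolve: return the minimum
    value and the index of the single winning processing element
    (ties arbitrated by fixed PE ordering, i.e. lowest index wins)."""
    winner = min(range(len(pe_values)), key=lambda i: pe_values[i])
    return pe_values[winner], winner

def member(x, stored):
    """Membership via Broadcast/Match/Resolve: stored holds one element
    of F per processing element."""
    # Step 3: each PE sets its min-resolve value to 0 if x = f, else 1.
    vals = [0 if f == x else 1 for f in stored]
    # Step 4: min-resolve; a minimum of 0 means some PE matched x.
    best, _ = min_resolve(vals)
    return best == 0
```

Applying min-resolve to the 0/1 values computes the OR of the per-element equality tests, as the text describes.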
Thus, if four data elements are to be stored in the processor array of the present invention, it will take four instruction cycles to store such data elements in four processing elements. At any time thereafter a query may be processed by presenting the query to the root processing element. Within one instruction cycle, the query is propagated to each processing element, is tested against the information stored at each processing element and an answer is reported back.
Other illustrations of problems which are well suited to solution on the apparatus of the present invention are artificial intelligence production systems. Examples of such production systems are

described in Salvatore J. Stolfo, "Five Parallel Algorithms for Production System Execution on the DADO Machine", Proc. of National Conference on Artificial Intelligence, AAAI, University of Texas at Austin, pp. 300-307 (August 6-10, 1984).
Logic programming, relational database operations and statistical pattern recognition can be similarly defined as almost decomposable searching problems.
Each of these problems shares the same common programming paradigm on the processor array of the present invention:
- Distribution of an initial set of data.
- Broadcast of data in constant time.
- Local execution of some primitive query against a prestored element of the reference set (unification of first order logic literals, relational selection of tuples, dynamic time warp match of a template of a speech utterance, respectively).
- Resolution to find the best answer in constant time.
- Report of the final answer.
From the foregoing, several key principles of the present invention can be identified:
- The form of the data manipulated by the apparatus of the present invention can be of any type or size, and hence it can process the same wide range of data that any conventional computer system in existence today can manipulate.
- The primitive query q can be any computation that is programmable on any computer in existence today. Hence, the apparatus of the present invention is not limited to any particular class of applications.
- The inventive apparatus can quickly execute arbitrary storage and retrieval functions definable as an almost decomposable searching problem

for any application area definable including AI, signal processing, database processing, text processing, numeric processing, etc.
For example, the primitive query q can be more complex than simple equality testing. In the speech recognition area we may wish to define a primitive query q which calculates the degree of match between the unknown and each of the reference set elements (set F in our example). In this case q can be a function which calculates 0 if the unknown exactly matches the set element, infinity (the largest number representable in the computer's memory) if there are no similarities at all, and some integer representing a distance metric, i.e., roughly how close the unknown matches the set element. A variety of programmable techniques can be employed in defining this function.
The present invention supports them all since each processing element is a programmable computer.
Thus, in the case of a computer match of names a suitably defined primitive query q might rate the partial match of "Smith" to "Smythe" as closer (to 0) than would the evaluation of the match of "Jones", "Mason" and "Hamershold". If we name q in this context as distance, then to determine whether "Smith" is an element of our set, or whether "Smith" closely matches some element of F, we define Query(x,F) = MIN distance (x,f). The key point is that the principles set forth above for fast execution of almost decomposable searching problems are generic and hence applicable to a wide range of problem areas--indeed, to all problem areas.
In this brief example, the distance function can pertain to partial matching of character strings, Manhattan distances of a set of points in the plane, partial matches of AI symbolic expressions (definable for example in LISP expressions, or First Order Logic expressions or PROLOG expressions), or a set of vectors representing measured parameters of some signal, as for example LPC coefficients of a speech or acoustic waveform. The present invention supports all of these data types and others which may be defined, represented and stored on any computer system.
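As one concrete and purely illustrative choice of such a distance metric for character strings (the patent leaves q programmable), Levenshtein edit distance rates "Smith" vs. "Smythe" as much closer than "Smith" vs. "Jones", and the MIN combining operator then selects the best match:

```python
def distance(a, b):
    """Levenshtein edit distance: 0 for an exact match, larger for
    poorer matches -- one programmable choice of the primitive query q."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def query(x, f_set):
    """Query(x, F) = MIN distance(x, f): the smallest (best) score."""
    return min(distance(x, f) for f in f_set)
```

On the machine described here, each processing element would score its locally stored element in parallel and min-resolve would pick the winner in constant time; this sequential version only illustrates the arithmetic.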
Accordingly, it is a principal object of the present invention to provide new and improved parallel processing systems.
Another object of the invention is to provide a plurality of processing elements arranged in a binary tree configuration wherein each processing element has associated with it its own I/O device capable of communication with that processing element as well as other processing elements.
A further object of the invention is to provide a parallel processor arranged in a binary tree configuration having improved efficiency and fault tolerance capabilities.
A still further object of the invention is to provide a parallel processor which essentially guarantees proper operation of a binary tree machine after two successive faults, as well as a 50% chance of proper operation after a third successive fault.
Another object of the invention is to provide a system which improves the efficiency of parallel processing machines by reducing the computation time deviation of each processor from the mean computation time.

Brief Description Of The Drawings These and other objects, features and advantages of the invention will become more readily apparent with reference to the following description of the invention in which:
Fig. 1 is a schematic illustration of a prior art binary tree structure;
Fig. 2 illustrates the general configuration of a preferred embodiment of the invention comprising a binary tree of 1023 processing elements;

Fig. 3 depicts, in block diagram form, a single processing element comprising an 8751 processor, an I/O device, and a random access memory;
Figs. 4A and 4B depict a detailed schematic of the single processing element of Fig. 3;
Fig. 5 is a functional flow chart of the underlying model for a single computation cycle;
Figs. 6A and 6B are a schematic diagram of the I/O device comprising broadcast, report-resolve, debug, parity check, instruction decoder, memory support and address latch blocks;
Fig. 7 is a general schematic of the broadcast block of Figs. 6A and 6B;
Fig. 8 is a detailed schematic of the broadcast block of Fig. 7;
Fig. 9 is a general schematic of the resolve portion of the resolve-report block of Figs. 6A and 6B;
Figs. 10A and 10B are a detailed schematic of the resolve-report block of Figs. 6A and 6B;
Figs. 11 through 16 are detailed schematics of the block 660 of Figs. 10A and 10B;
Figs. 17A and 17B are a detailed schematic of the instruction decoder block of Figs. 6A and 6B;
Fig. 18 is a general schematic of a fault control system; and Fig. 19 is a flow chart of the software kernel.

Detailed Description of the Invention As shown in Fig. 2, the binary tree parallel processor of the present invention comprises a plurality of processing elements identified as PE1 through PE1023 arranged in the form of a binary tree. Each PE except the root PE1, which has no parent processing element, and the leaf processing elements PE512-PE1023, which have no children processing elements, communicates with a parent processing element as well as two children processing elements. For example, PE2 communicates with its parent PE1 over communication line C2, with a first (or left) child PE4 over communication line C4 and with a second (or right) child PE5 over communication line C5. Similarly, PE4 communicates with its parent PE2 over communication line C4 as well as with its left and right children PE8 and PE9 (not shown) over communication lines C8 and C9.
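The numbering just described follows the standard implicit (heap) layout of a complete binary tree, and can be sketched as follows; this is an illustrative model only, not part of the patented hardware.

```python
# PEs are numbered from 1 at the root: PE k has parent k // 2 and
# children 2k and 2k + 1, matching the text (PE2's children are PE4
# and PE5; PE4's children are PE8 and PE9).
def parent(k):
    return None if k == 1 else k // 2

def children(k, n=1023):
    """Children of PE k in an n-element complete binary tree (leaves have none)."""
    return [c for c in (2 * k, 2 * k + 1) if c <= n]
```

With n = 1023, PEs 512 through 1023 have no children, which is exactly the set of leaf processing elements named above.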
Root processing element PE1 communicates with a host coprocessor 3 via board to board interface circuitry 7 over line 5. In the preferred embodiment, a Digital Equipment Corp. VAX 11/750 (TM) serves as the host coprocessor and is the only device that would be apparent to a user of the present invention. The presently preferred embodiment may thus be viewed as a transparent back-end processor to the VAX 11/750.
Referring to Figs. 3, 4A and 4B, a single processing element of the preferred embodiment is shown in block diagram form. This processing element comprises an Intel 8751 microcomputer chip functioning as processor 30, a read/write or random access memory (RAM) 40 comprising two 8Kx8 static random access memory chips 42, 44 and a 16Kx1 parity RAM 46, and an I/O device 50.
Processor 30 is an 8-bit microcomputer incorporating a 4K erasable programmable read only memory (EPROM) and a 256-byte RAM on a single silicon chip.
The EPROM may advantageously be used for subsequent debugging, modifications or the like. Processor 30 is provided with four parallel, 8-bit ports which simplify interconnection between processors and greatly assist the I/O procedure. RAMs 42, 44 function as the local memory for processor 30.
I/O device 50 serves as the primary means for inputting and outputting instructions and data to and from the processing element. I/O device 50 contains memory support circuitry and also computes parity for both memory and communication, facilitating fault detection. I/O device 50 preferably is an IC implemented in gate array technology to allow the parallel processor to be clocked at 12 megahertz, the maximum speed of processor 30 and RAMs 42, 44, 46. The average machine instruction cycle time is 1.8 microseconds, producing a system with a raw computational throughput of roughly 570 million instructions per second. As will be detailed below, very little of this computational resource is required for communication overhead.
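The quoted throughput is simple arithmetic over the 1023 PEs of Fig. 2, assuming each PE retires one instruction per 1.8 microsecond cycle; a quick check:

```python
# Back-of-envelope check of the quoted figure: 1023 PEs, each with a
# 1.8 microsecond average instruction cycle time.
pes = 1023
cycle_s = 1.8e-6
mips = pes / cycle_s / 1e6   # millions of instructions per second
```

This works out to roughly 568 MIPS, consistent with the "roughly 570 million instructions per second" stated above.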
I/O device 50 is accessed as a memory mapped device, that is, processor 30 accesses the I/O device by reading and writing to special locations in its address space. I/O device 50 contains six writable locations and eight readable locations. The four eight-bit parallel ports of processor 30 are designated ports P0 through P3. Port P0 is connected to an address/data bus 51 that connects processor 30 to port P1 of its left and right children processors, that serves as a data bus between processor 30 and memory 40 and also serves as the address bus for the low order eight bits of address which are stored in a latch in I/O device 50. Port P1 serves as the connection between processor 30 and its parent. The connection between port P0 of processor 30 and port P1 of its children serves as the data path between the processors and may be a direct processor chip to processor chip connection or may be an off the board connection via a bus transceiver.
Five bits of eight-bit port P2 represented on pins A8 to A12 provide the high order five bits of address to memory 40. One bit of port P2 represented on pin A13 of processor 30 is used as the direction control for the bus transceiver while another bit represented on pin A14 of processor 30 serves as an incoming handshake line.
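The address partitioning just described (eight low-order bits latched in the I/O device, five high-order bits on pins A8 to A12, thirteen bits in all) can be sketched as follows; the function names are illustrative, not part of the hardware description.

```python
# Split a 13-bit external-memory address the way the processing
# element does: low byte latched from the multiplexed address/data
# bus (port P0), high five bits presented on port P2 pins A8..A12.
def split_address(addr):
    assert 0 <= addr < 1 << 13        # 13-bit address space per RAM bank
    low = addr & 0xFF                 # latched in I/O device 50
    high = (addr >> 8) & 0x1F         # pins A8..A12 of port P2
    return high, low

def join_address(high, low):
    return (high << 8) | low
```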

Five bits of the lines from port P3 and one bit of the lines from port P2 are used as handshake lines when accessing neighboring processing elements, and two bits from port P3 are used as the read and write strobes for memory.
Referring to Figs. 4A and 4B, pins LACK of port P2, LINT of port P3 on processor 30 are connected to pins PACK, PINT of its left child while pins RACK, RINT of port P3 on processor 30 are connected to pins PACK, PINT of its right child.
Similarly, pins PACK, PINT of port P3 on processor 30 are connected to either LACK, LINT (if left child) or RACK, RINT
(if right child) of its parent. Signals on these six pins represent handshaking to/from the children/parent processors.
An asynchronous four cycle handshake protocol is used to move data when transfers are limited to neighboring processors (tree neighbor communications).
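The four-cycle (four-phase) handshake just mentioned can be sketched as follows. This is a toy model of the protocol's signal sequence only; the signal names are illustrative and the hardware performs these transitions asynchronously.

```python
# One four-cycle handshake transfer between neighboring PEs:
# 1. sender places data and raises request;
# 2. receiver latches data and raises acknowledge;
# 3. sender drops request;
# 4. receiver drops acknowledge, returning the channel to idle.
def four_cycle_transfer(data, channel):
    channel["data"] = data
    channel["req"] = 1        # 1. sender asserts request
    latched = channel["data"]
    channel["ack"] = 1        # 2. receiver latches data, asserts ack
    channel["req"] = 0        # 3. sender drops request
    channel["ack"] = 0        # 4. receiver drops ack; channel idle
    return latched
```

After each transfer both handshake lines are back at zero, so the next byte can be moved with the same sequence.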
Referring again to Figs. 3, 4A and 4B, lines 52, 53, 54 comprise the data lines connected from processor 30 of the PE to its parent, left child and right child, respectively.
As will be apparent, data line 52 of a PE is connected to either data line 53 or data line 54 of its parent on the next upper level. Similarly, lines 56, 57, 58 comprise I/O lines connected from I/O device 50 of the PE to its parent, left child and right child, respectively; and I/O line 56 of a PE is connected to either I/O line 57 or I/O line 58 of its parent on the next upper level.
Referring to Fig. 5, there is shown a functional flow chart of the underlying model for a single computation cycle as performed by a plurality of PEs in accordance with the present invention. In essence, a single computation cycle comprises a broadcast phase 60, a compute phase 65, a resolve phase 70 and a report phase 75. The broadcast phase 60 comprises the steps of broadcasting data and/or instructions to a subset of PEs in the binary tree. Following this broadcast, a computation is performed by a second subset of PEs during the compute phase 65. Subsequent to this computation, the resolve phase 70 determines the identity of a third subset of PEs which will report the result of the computation during the report phase 75. Illustratively the data and instructions are broadcast by the host coprocessor.
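The four phases of a single computation cycle can be modeled in miniature as follows. This is a purely illustrative sketch, not the machine's kernel: the PE set is a dict of local data, the broadcast is the unknown plus a sample distance function, and min-resolve picks the reporting PE.

```python
# Crude character-mismatch distance, standing in for whatever
# primitive query the host broadcasts (illustrative only).
def mismatch(a, b):
    return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))

def computation_cycle(local_data, unknown, dist=mismatch):
    scores = {pe: dist(unknown, d)              # broadcast phase: all PEs
              for pe, d in local_data.items()}  # receive the unknown; compute phase
    winner = min(scores, key=scores.get)        # resolve phase: find the minimum
    return winner, local_data[winner]           # report phase: winner reports
```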
For this model, four software dependent modes of operation are distinguishable, depending on the subset of PEs selected during each phase and the type of data and/or instructions which is broadcast. These four modes are: SIMD for single instruction stream multiple data stream, MIMD for multiple instruction stream multiple data stream, MSIMD for multiple-single instruction stream multiple data stream and SPMD for single program multiple data.
More specifically, in SIMD mode, each PE is first loaded with its own data and then a single stream of instructions is broadcast to all PEs. Depending on its local state, which is the result of the originally broadcast data and the preceding instructions, each SIMD PE determines which instructions from the stream to execute.
In MIMD mode, each PE is first broadcast its local program and data and then each PE is logically disconnected from its neighbor PEs and executes independently. Logical reconnection to the rest of the binary tree occurs when the PE has finished its computation.
In MSIMD mode, the binary tree of PEs is partitioned into a number of subtrees which maintain the full functionality of the original tree. The root of each such subtree which is operating in MIMD mode is functionally identical to the root of the original tree except for the I/O capabilities with a host. The system operating in MSIMD mode can therefore be viewed as a set of binary trees.
In SPMD mode each PE is first broadcast a copy of the common program and then broadcast its local data.
Unlike MIMD mode, the PE does not logically disconnect itself at this point, but rather continues to receive the data stream that is broadcast from above. The computation of each PE is thus determined by three components, namely, the common program that each PE executes, the local data with which the PE was originally loaded and the continuous data stream being broadcast from above.
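The SPMD behavior just described can be sketched as a simple fold; this is an illustrative model only (the function names are hypothetical and timing and I/O details are ignored).

```python
def run_spmd(program, local_data, broadcast_stream):
    """Each PE runs the same program, folding the continuous broadcast
    stream into a state seeded with its originally loaded local data."""
    state = local_data
    for item in broadcast_stream:
        state = program(state, item)
    return state
```

Two PEs given the same program and stream but different local data will thus compute different results, which is the essence of the mode.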
The I/O device must know whether its PE and its children's PEs are in the MIMD or SIMD mode because a PE in MIMD is logically disconnected from its parent and therefore no I/O communication may occur between them. Three bits of information corresponding to the MIMD/SIMD status of a PE, its left child and its right child are used by the I/O device.
To maintain these bits a writable bit in each I/O
device is used to set itself to MIMD or SIMD. Another pair of bits MIMD_RC and MIMD_LC are used to indicate if that PE's children are in MIMD or SIMD mode. The pair of bits MIMD_RC and MIMD_LC are writable in two ways. One is directly by the parent processor. The second way is by propagation of an MIMD/SIMD status bit from the I/O device of a child to its parent.
This bit is readable directly by the parent or it may be read and simultaneously latched into the MIMD_RC or MIMD_LC bits of the parent.
As indicated above, processor 30 and RAM 40 of each PE are off-the-shelf integrated circuits. The I/O device is not. The I/O device of a PE essentially performs five operations, namely the I/O functions of broadcast, resolve and report as well as functions of memory support and debugging. These functions are accomplished by the apparatus of Figs. 6A and 6B comprising a broadcast block 100, a debug block 200, an instruction decoder block 300, a memory support block 400, a parity check block 500, a report-resolve block 600 and an address latch block 700.

Instruction decoder 300 controls the I/O device by controlling the addressing of the broadcast block 100, debug block 200, parity check block 500 and resolve-report block 600, as well as the MIMD status of its children and itself, supervises transmission of interrupts to the processor and records the address in the address latch 700 at the time of an interrupt.
Memory support block 400 handles all external memory operations. This block determines which 8Kx8 RAM to select and enables the 16Kx1 parity RAM.

Each broadcast block 100 handles system wide broadcasts of data and instructions. It is responsible for receiving a byte from a parent PE and transmitting a byte to children PEs within two 8751 processor instruction cycles and then re-synchronizing so another byte can be transmitted from the MIMD root processor.
Debug block 200 handles system wide interrupts from itself, its parent and its children and generally propagates the interrupt signal throughout the entire tree.
Parity check block 500 checks the byte communication among the numerous PEs as well as memory operations for parity errors.
Resolve-report block 600 is responsible for determining which single PE is to be enabled to report its result by transmitting it to the host coprocessor.
Address latch 700 provides the I/O device with addresses and/or data. Address latch 700 is controlled by the address latch enable on line ALE from processor 30. When ALE is high, the address on line AD(7..0) or the data on line IO_BUS(7..0) is transmitted through the latch onto line AD_LATCH(7..0) to instruction decoder 300.
Before discussing the individual components of I/O device 50 it is helpful to set forth the register organization of this device. As indicated above, the I/O device is memory mapped to the external RAM address space of the processor 30, illustratively to the first 256 bytes of an Intel 8751 microprocessor.
In the present embodiment of the invention, there are 14 defined addresses, consisting of data, control, and I/O processor status registers, organized into three functional blocks: control/status block, broadcast block and resolve/report block. The description of these registers is set forth in Table I below:

TABLE I

hex address  register    bit  bit name       description
00           SYNC        b0   mimd_s         mimd status of self PE
                         b1   mimd_rc        mimd status of right child PE
                         b2   mimd_lc        mimd status of left child PE
01           CTRL        b0   globl_int      initiate global interrupt
                         b1   mimd_s         mimd status of self
                         b2   parity_msk     disable parity interrupts
                         b4   global_msk     disable global interrupts
                         b5   kernel_mode    enable kernel mode
02           STAT        b0   parity_err     a parity error has occurred
                         b1   refresh_err    external RAM refresh is overdue
                         b2   global_err     a global interrupt has occurred
                         b3   mem_parity     bad parity source was external RAM
                         b4   ext_mimd_lc    dynamic left child mimd status
                         b5   ext_mimd_rc    dynamic right child mimd status
03           PCAR        -    -              Parity Check Address Register
04           RESET       -    -              Reset Register
05           STATE       b0   parity_err     a parity error has occurred
                         b1   b_ready        broadcast logic ready
                         b2   b_dav          broadcast data register full
                         b3   rr_ready       resolve/report logic ready
                         b4   rr_dav         resolve/report data register full
                         b5   global_int     global interrupt has occurred
10           B_IN        -    -              Broadcast Input Register
11           B_OUT       -    -              Broadcast Output Register
12           B_STATUS    b0   sub_tree_rdy   subtree ready state output
                         b2   b_ready        broadcast logic is ready (mimd mode)
                         b3   tree_ready     logical AND of subtree ready lines (r/o)
                         b4   b_dav          broadcast data is available (simd mode)
                         b5   tree_enable    tree communication enable
                         b6   output_parity  dynamic parity of B_OUT (r/o)
                         b7   input_parity   dynamic parity of B_IN (r/o)
20           RR_IN       -    -              Resolve/Report Input Register
21           RESOLVE     -    -              Resolve Output Register
22           REPORT      -    -              Report Output Register
23           RR_STATE_0  b0   rc_fifo_enb    right child fifo enable
                         b1   lc_fifo_enb    left child fifo enable
                         b2   rr_go_logic    resolve/report "go" line to parent
                         b3   rr_in_clk_enb  RR_IN clock enable
                         b4   en_xmt         resolve/report transmitter enable
                         b5   rr_ready       resolve/report logic is ready for next byte
                         b6   rr_dav         resolve/report data is available
                         b7   rr_in_parity   RR_IN parity bit (r/o)
24           RR_STATE_1  b0   obc_rc         right child one-bit-compare flag
                         b1   obc_lc         left child one-bit-compare flag
                         b2   obc_s          local one-bit-compare flag
                         b3   kill_rc        kill right child flag (r/o)
                         b4   kill_lc        kill left child flag (r/o)
                         b5   suicide        kill self flag (r/o)
                         b6   kill_p         kill parent flag (r/o)
                         b7   rr_out_parity  resolve/report output parity bit (r/o)

Unused registers in each of these blocks, and other registers between hex addresses 30 to 18FFFF are reserved for future expansion of the I/O device, and referencing these registers, as well as any registers beyond the upper limit of external RAM, will yield indeterminate results. The function of these registers is as follows.
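For illustration, the address map of Table I can be transcribed as a lookup table, with a decoder for one of the bit-mapped registers; this is a sketch of the documented layout only, not device firmware.

```python
# The 14 defined register addresses of Table I (hex offsets into the
# I/O device's memory-mapped window).
IO_REGS = {
    0x00: "SYNC", 0x01: "CTRL", 0x02: "STAT", 0x03: "PCAR",
    0x04: "RESET", 0x05: "STATE",
    0x10: "B_IN", 0x11: "B_OUT", 0x12: "B_STATUS",
    0x20: "RR_IN", 0x21: "RESOLVE", 0x22: "REPORT",
    0x23: "RR_STATE_0", 0x24: "RR_STATE_1",
}

# Bit positions in the SYNC register, per Table I.
SYNC_MIMD_S, SYNC_MIMD_RC, SYNC_MIMD_LC = 0, 1, 2

def sync_bits(value):
    """Decode a SYNC register byte into (self, right child, left child)
    MIMD status flags."""
    return tuple(bool(value >> b & 1)
                 for b in (SYNC_MIMD_S, SYNC_MIMD_RC, SYNC_MIMD_LC))
```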

The Control/Status Block The SYNC, CTRL, STAT, PCAR, RESET and STATE registers in the Control/Status Block are used to provide processor 30 with information about the current state of the I/O device and its neighbors, as well as allowing the processor 30 to save and restore its I/O device's context (during an interrupt handler, for instance) and providing explicit (though limited) control over the global communication paths.
The SYNC register allows processor 30 to specify the MIMD status flags used by the I/O device to control the handling of Broadcast and Resolve/Report operations. In normal operation, writing to the SYNC register causes the signals present on the I/O device's dynamic MIMD status lines to be stored in the internal SYNC latches, which values control the behavior of the Broadcast and Resolve/Report logic. In kernel mode, however, the internal SYNC latches are loaded from the source operand rather than the external MIMD status lines. This provides a way for processor 30 to explicitly control the behavior of Broadcast and Resolve/Report operations with respect to neighboring I/O devices, regardless of their actual state. Resetting the I/O device clears the SYNC register, indicating SIMD operation for the local I/O device as well as its descendants. Reading the SYNC register yields the contents of the I/O device's internal SYNC latches.
The CTRL (Control) register may be used to specify SIMD and MIMD operation for the local I/O device, specify which events will cause the I/O device to interrupt processor 30, enable or disable kernel mode operation, or initiate global interrupts. The CTRL register contains bits corresponding to the I/O device's MIMD-self latch, the kernel mode enable latch, mask bits for the three sources of I/O device interrupts to processor 30 and a global interrupt trigger. With the exception of the global interrupt trigger, each of these bits is a simple latch, and may be read or written in either user mode or kernel mode. Writing a one bit to the global interrupt trigger initiates a global interrupt to the entire tree, and interrupts the I/O device regardless of the setting of the global interrupt mask bit.

Resetting the I/O device places the I/O device in SIMD mode, disables parity and global interrupts, and disables kernel mode.
The STAT (Status) register provides processor 30 with various information about internal and external I/O device events. The presence of a one bit in any of parity_err, refresh_err, or global_err indicates that the corresponding event has occurred. The I/O device can also interrupt processor 30 whenever any of these events occurs, depending on the values of the interrupt mask bits in the CTRL register. When a parity error has been detected, the mem_parity bit will be one if the source of the bad parity was external RAM, zero if the source was the communications logic between processing elements. Resetting the I/O device clears these bits.
Reading ext_mimd_lc and ext_mimd_rc will return the current value of the signals on the external MIMD pins connected to the left and right descendant I/O devices. Since these values are not latched, write operations have no effect on them. (These are the values loaded into the internal SYNC latches by user mode writes to the SYNC register.) The I/O device latches the low-order address byte from the address bus of processor 30 into the PCAR (Parity Check Address) register whenever a global interrupt occurs, provided that the PCAR register has been read by processor 30 (or a reset operation has been done) since the occurrence of the previous global interrupt. This interlock is provided so that the data in the PCAR register is not overwritten by new data if subsequent global interrupts occur before processor 30 is able to examine the PCAR register during processing of the initial interrupt. Read and reset operations write-enable the PCAR register internally. Resetting the I/O device also clears it

to zero. Processor 30 is allowed to write data into this register only when the I/O device is in kernel mode.
The RESET register is written by processor 30 in order to perform a software reset of the I/O device, an operation that may only be done in kernel mode.
The source operand of processor 30 is ignored during write operations, and read operations yield indeterminate data.
The STATE register contains status flags describing the state of the resolve-report and broadcast logic, as well as the parity error and global interrupt indicators.

Broadcast Block The broadcast block contains the B_IN, B_OUT and B_STATUS registers accessed by processor 30 to retrieve global broadcast data in SIMD mode, and to broadcast data to the locally-rooted subtree in MIMD
mode.
Reading the B_IN (Broadcast Input) register yields the most recent byte of broadcast data and enables the I/O device's broadcast ready pin, indicating that the I/O device is ready to receive the next byte from its MIMD ancestor. The B_IN register can be written by processor 30 only when the I/O device is in MIMD mode.
The B_OUT (Broadcast Output) register is written by processor 30 in MIMD mode in order to broadcast a byte of data to the MIMD subtree rooted at the local processing element. The data appears in the B_IN register of each descendant SIMD processing element, as well as the B_IN register of the local I/O device. (The semantics of the broadcast operation require that the MIMD root processing element read its own broadcast data, since MIMD mode processing elements will also function as SIMD processing elements in order to provide a complete binary tree.) If the B_OUT

register is written in SIMD mode, the data will be transmitted when the I/O device is put in MIMD mode and the subtree is ready to receive it. This is not recommended, however, since a simultaneous broadcast from an ancestor processing element could either overwrite the data while the local I/O device is in SIMD
mode or cause an I/O parity error while the local broadcast is in progress.
The B_STATUS (Broadcast Status) register contains three kinds of information: bits b_ready and b_dav indicate whether the broadcast logic is ready to accept the byte to be broadcast in MIMD mode, and whether new broadcast data is available in SIMD mode, respectively; tree_ready indicates whether or not the locally-rooted subtree is ready to receive data;
input_parity and output_parity contain the dynamic parity bits for the B_IN and B_OUT registers, respectively.
Flag b_dav is set to one when the I/O device receives broadcast data in SIMD mode from an ancestor processing element, or broadcast data in MIMD mode (since processing elements always receive their own broadcasts). This flag is cleared whenever processor 30 reads the contents of the B_IN register or resets the I/O device. Flag b_ready is set to one whenever the I/O device is in MIMD mode and there is no outstanding data to be transmitted to descendant processing elements. This flag does not, however, indicate that the descendant processing elements have retrieved previous data from their I/O devices. It simply means that the local I/O device is ready to buffer another byte of output. Flag sub_tree_ready can be used in MIMD mode processing elements to determine whether a broadcast byte can be delivered immediately because it indicates that no processing elements have outstanding data in the B_IN registers of their local I/O devices.
Flag tree_enable can be cleared in order to disable all serial communication between the local I/O device

and all of its children. This has the side-effect of clearing the sub_tree_ready flag in all ancestor processing elements.
Only b_ready, tree_enable, and b_dav are affected by write and reset operations, and then only if the I/O device is in kernel mode. Reset turns on b_enable, tree_enable, and b_ready, and turns off b_dav, indicating that no SIMD input data is available and no MIMD mode is pending.

Resolve/Report Block The Resolve/Report block contains the RR_IN, RESOLVE, REPORT, RR_STATE_0 and RR_STATE_1 registers accessed by processor 30 to perform the Min-Resolve and Report operations.
When flag rr_dav of the RR_STATE_0 register is set, the RR_IN register contains the next available byte of resolve/report data. Reading the RR_IN register clears this flag. The RR_IN register is normally read only by a MIMD mode processor 30 or when the state of the I/O device must be saved.
The RESOLVE register accepts a byte of resolve/
report data and initializes the resolve/report logic.
Both MIMD and SIMD mode processing elements write resolve/report data to this register at the beginning of a resolve/report sequence. Each SIMD I/O device propagates the largest value calculated among itself and its two descendants up to its parent, until the MIMD processing element at the root of the subtree receives the largest such byte in its RR_IN register.
The REPORT register accepts successive bytes of resolve/report data following an initial byte written to the RESOLVE register.
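The upward propagation just described can be modeled as a fold over the implicit heap numbering of the tree; this is a software sketch only (the hardware performs the comparison bit-serially in the resolve/report logic).

```python
def resolve_up(values):
    """values: dict mapping PE number (1 = root, children of k are
    2k and 2k+1) to its resolve byte. Each node passes the largest of
    its own value and its children's values to its parent; returns
    what the root receives."""
    n = max(values)
    best = dict(values)
    for k in range(n, 1, -1):        # fold from the leaves upward;
        best[k // 2] = max(best[k // 2], best[k])  # children have larger indices
    return best[1]
```

The Min-Resolve of the earlier sections is the same scheme with the comparison inverted (or run on complemented values).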
A processing element in MIMD mode must read flag rr_dav of the RR_STATE_0 register to determine whether or not resolve/report data is available in the RR_IN register. The other bits of this register are internal status registers and exist only to allow the state of the resolve/report logic to be preserved across interruptions in the resolve/report operation. In SIMD mode, bit rr_ready indicates whether or not the resolve/report logic is ready to accept a new byte of data in the RESOLVE or REPORT registers; bit rr_dav indicates that resolve/report data is available in the RR_IN register. The remaining bits in this register are used only to save or restore the context of the resolve/report logic during a context switch.
The RR_STATE_1 register contains additional resolve/report state information, including flags to indicate whether the local processing element is still eligible to compete in the resolve/report operation (i.e. it has not been "killed"), which are monitored by the resolve/report firmware.
Resetting the I/O device has the following effects on its internal registers:
in the SYNC register, the internal MIMD latches are cleared, indicating that the local I/O device and its immediate descendants are in SIMD mode;
in the CTRL register, the internal MIMD-self latch is cleared, the parity interrupts and global interrupts are disabled and the kernel mode is disabled;
in the STAT register, parity and global interrupt event flags are cleared;
in the PCAR register, the register is cleared and enabled to receive the low-order address byte recorded at the next global interrupt;
in the STATE register, the broadcast and resolve/report logic are reset to their idle state, and parity error and global interrupt flags are cleared;
in the B_STATUS register, all broadcast logic is enabled and the broadcast data-available flag b_dav is cleared; and in the RR_STATE_0 and RR_STATE_1 registers, all resolve/report logic is enabled and the resolve/report data-available flag rr_dav is cleared.


Instruction decoder 300 of Figs. 6A and 6B is depicted in greater detail in Figs. 17A and 17B.
Instruction decoder 300 comprises six functional blocks, namely primary decoder 320, sub-decoder 330, control register 340, status register 350, synchronous register 360 and parity error address register 370.
The four registers 340-370 are the CTRL, STAT, SYNC
and PCAR registers described above.
Primary decoder 320 decodes the high order bits from address latch 700 of Figs. 6A and 6B over line AD_LATCH(6..4) and pin A14 of the 8751 processor of Fig. 4A to initially determine whether the I/O device is being addressed and then, if indeed it has been addressed, to select the appropriate block with either a read or write operation.
Sub-decoder 330 is a block which may be selected by primary decoder 320. Once selected, sub-decoder 330 reads the lowest 3 bits of the address on line AD_LATCH(2..0) to select either an I/O device reset or control register 340 via control register write and control register read lines CNTREGWR, CNTREGRD, status register 350 via status register write and status register read lines STREG0WR, STREG0RD, synchronous register 360 via synchronous register write and synchronous register read lines SYNCREGWR, SYNCREGRD, or parity address register 370 via parity address register read line PADDRREGRD.
Control (CTRL) register 340 contains mask bits for interrupt handling, a kernel enable bit and a set global interrupt bit on line SET_GLOBAL_INT, all of which are used by status register 350. Control register 340 also provides a bit status on line MIMD_S for MIMD/SIMD operation. On reset all masks are enabled, the kernel bit is cleared and the device is configured for SIMD operation.


Status (STAT) register 350 allows for dynamic reading of the MIMD status of the left and right children PEs on lines MIMD_LC, MIMD_RC and receives information from the I/O device concerning the internal state of its children and itself. Status register 350 essentially serves to process parity errors and global interrupts and then to interrupt the processor if the masks are disabled. Once the processor is interrupted, it will read the status register to check if the interrupt was caused by a global interrupt or by a parity error. If the interrupt was caused by a parity error, the status register returns a bit which if set, indicates that the error was caused by a memory read operation. If this bit is not set, the parity error was caused by a transmission error in the Broadcast or Resolve-Report block. On reset, all registers are cleared.
Synchronous (SYNC) register 360, upon receiving a write command on line SYNCWR, latches the state information on the MIMD status lines MIMD_LC, MIMD_RC from the children and the MIMD status information MIMD_S in control register 340 so that this data is accessible to the other blocks of the I/O device. The kernel enable bit must be set to explicitly write to this register from the data bus. On reset, synchronous register 360 clears all bits causing all PE's to be in the SIMD mode.
Parity error address (PCAR) register 370 records the address in the address latch 700 of Figs. 6A and 6B over line AD_LATCH(7..0) at the time of an interrupt. Parity error address register 370 is used primarily to record hardware memory chip failures.
Memory support block 400 of Figs. 6A and 6B handles all external memory operations. This block determines which 8Kx8 dynamic or static RAM memory chip 42, 44 of Fig. 3 to select, enables the 16Kx1 static parity memory chip 46 of Fig. 3 and controls the refreshing of the two 8Kx8 RAMs. Memory support block 400 is controlled by address latch enable ALE on line A14 of processor 30 and the low order bit of the address latch of processor 30 of Fig. 4A. Line A14 is examined to determine whether or not a memory operation may take place. Similarly, the low order bit of the address latch is examined to determine which of the two 8Kx8 RAMs should be selected for a memory operation while the other RAM is refreshed. The parity chip is enabled over line PC_EN of memory support block 400 whenever an external memory operation occurs.
More specifically, if a memory operation is to occur, memory support block 400 of Figs. 6A and 6B latches the low order eight bits of address that appear on port PO of the processor 30 of Figs. 3 and 4A and provides these low order eight bits to the external memory. Memory support block 400 must also determine whether the memory access is for the I/O
device or for the external memory. This is accomplished by a bit fed to the I/O device from line A14 of port P2 of processor 30 of Figs. 3 and 4A and subsequent memory support block generated control strobes sent to the external memory as well as the I/O device. With respect to error detection, memory support block 400 also generates and checks parity for both memory operations and I/O operations. Upon finding a parity error, the low order eight bits of the address are saved in the PCAR register by memory support block 400.
A general schematic of the broadcast operation is shown in Fig. 7. Broadcast block 100 comprises a broadcast input register 82, a broadcast output register 80, a multiplexer 84, AND gates 86, 92 and a D-type flip flop 88.
The broadcast input and broadcast output registers are the B_IN and B_OUT registers described above in conjunction with Table I. Broadcast is performed as a synchronous bit serial operation and depends on the mode of operation, i.e. SIMD, MSIMD, MIMD or SPMD. Broadcast comprises the transmission of instructions and/or data to/from the individual PEs. Illustratively, data and instructions are broadcast by the host coprocessor to a subset of PEs which then proceed to execute at least some of the transmitted instructions on at least some of the transmitted data.
The broadcast input register 82 is a shift register with a serial input terminal Sin, a shift enable, and a parallel output terminal. If the PE is in SIMD mode, incoming broadcast data, illustratively originating from the host coprocessor and transmitted through higher level PEs, is shifted serially into the input cell via AND gate 86. When a 0 start bit is shifted to the end of the shift register, it generates on line 90 a bin-ready signal that is routed to shift enable and disables shifting. When the incoming data is read in parallel from the input register, 1-bits are parallel loaded into the input shift register to keep it from reading a data bit as a start bit. The parallel output from register 82 illustratively is connected to processor 30 via bus 51 and port 0.
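The start-bit mechanism described above can be sketched in software. The sketch below is an illustrative model, not the hardware itself; the ten-bit width (start bit, eight data bits, parity bit) follows the frame format described later for output register 80, and the class and method names are invented for the example.

```python
# Hypothetical model of broadcast input register 82 of Fig. 7: the
# register idles full of 1-bits; a 0 start bit reaching the end asserts
# bin-ready (line 90) and freezes shifting until the byte is read.

class BroadcastInputRegister:
    WIDTH = 10  # start bit + 8 data bits + parity bit (assumed layout)

    def __init__(self):
        self.bits = [1] * self.WIDTH  # parallel-loaded with 1s
        self.ready = False            # models bin-ready on line 90

    def shift_in(self, bit):
        if self.ready:                # shifting disabled once ready
            return
        self.bits = [bit] + self.bits[:-1]
        if self.bits[-1] == 0:        # 0 start bit reached the end
            self.ready = True

    def read_byte(self):
        # The data bits sit behind the start bit; reload 1s so a stale
        # data bit is never mistaken for a new start bit.
        data = self.bits[1:9][::-1]   # back to transmission order
        self.bits = [1] * self.WIDTH
        self.ready = False
        return data

reg = BroadcastInputRegister()
frame = [0, 1, 0, 1, 1, 0, 1, 0, 1, 1]   # start, data(8), parity
for b in frame:
    reg.shift_in(b)
print(reg.ready)        # True: the start bit traversed the register
print(reg.read_byte())  # [1, 0, 1, 1, 0, 1, 0, 1], the data bits
```

Note how shifting in all-ones while idle means no intermediate data bit can be mistaken for a start bit: the 0 start bit is always the first 0 to reach the end of the register.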
The bin-ready signal provides a control signal to prevent output data from being lost if the output cell is loaded before all the children read the previous broadcast. To check for valid incoming data in input register 82, receiver PEs simply have to poll this control signal. To check whether a byte may be broadcast out, a broadcast-ready signal P_READY is generated on line 94 by each I/O device to be used by the PE and to be propagated up the tree to the I/O device of the parent PE. Broadcast-ready is generated by AND gate 92 of a PE by gating together its bin-ready signal on line 90, the broadcast-ready signals from its two children and the MIMD/SIMD status bits for its children.

D-type flip-flop 88 also receives the incoming data from the parent PE via AND gate 86 and propagates the data through multiplexer 84 and down the tree to the children PEs delayed by one clock cycle. This data may be broadcast to all the PEs of the array with a delay of only one clock cycle for each level of the array. Thus, it takes only nine clock cycles for data to reach the lowest level (PEs 512-1023) of the array of Fig. 2 and another ten clock cycles to clock a start bit, eight bits of data and a parity bit into the broadcast input shift register 82.
Since a processor instruction cycle typically requires over twenty clock cycles, there is sufficient time in one processor instruction cycle to broadcast data to all the PEs.
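The timing claim above can be checked by simple arithmetic; the figures below are taken directly from the text.

```python
# Back-of-envelope check of the broadcast timing claim: one clock of
# propagation delay per tree level plus ten clocks to shift a full
# frame into the leaf-level input register.
levels_below_root = 9      # PEs 512-1023 sit nine levels down (Fig. 2)
frame_bits = 1 + 8 + 1     # start bit + data byte + parity bit
total = levels_below_root + frame_bits
print(total)               # 19 clocks, within a >20-clock instruction cycle
```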
The broadcast output register 80 is a 10 bit shift register: 8 bits for data, 1 bit for parity and 1 bit for a 0 value start bit. The shift register has a serial input terminal Sin, a 10 bit parallel input terminal, a load terminal and a serial output terminal Sout. The serial input terminal Sin is wired to a logic 1. The output shift register shifts continuously. In a broadcast out operation, eight data bits, which illustratively are the result of a computation by processor 30 or have been received from another PE, one parity bit and one start bit are parallel loaded into the shift register through the load line. These ten bits are then shifted out through the Sout line and into multiplexer 84 to provide a multiplexed serial output data.
Referring now to Fig. 8, broadcast block 100 of Figs. 6A, 6B and 7 is depicted in greater detail. The broadcast block functionally comprises broadcast input shift register 110 which receives and holds a byte sent to the I/O device, broadcast output shift register 140 which can transmit a byte throughout the lower subtree, and a broadcast control block 170 which performs synchronization and supervises the passage of data down the tree. As indicated above, broadcast input register 110 and output register 140 are both accessible through memory mapping, thus permitting the processor 30 to read or write a byte. The processor can determine whether a byte has been received by checking the b_dav status bit or whether a byte is available to be transmitted by checking the b_ready status bit. These bits may be read from address 12H (bits 4 and 0, respectively) or address 05H (bits 2 and 1, respectively).
Input shift register 110 and output shift register 140 of Fig. 8 correspond to input register 82 and output register 80 of Fig. 7. Broadcast control block 170 of Fig. 8 uses signals broadcast left child B_LC, left child ready LC_Ready, broadcast right child B_RC and right child ready RC_Ready to determine when a byte may properly be broadcast. Similar broadcast parent and parent ready signals are provided for the parent processor on lines B_P and P_Ready of broadcast control block 170. The lines B_P and P_Ready are two of the eight lines 56 that connect the I/O device of a PE to the I/O device of its parent PE, and these lines are received at the parent PE as two (B_LC, LC_Ready) of the eight lines 57 from a left child PE or two (B_RC, RC_Ready) of the eight lines 58 from a right child PE as the case may be.
The broadcast block of Fig. 8 may be in one of three states, namely idle, receive broadcast or transmit broadcast. The broadcast block generally spends a substantial amount of time in the idle state.
If the processor is not in MIMD mode, it is possible for an I/O device to receive a broadcast issued from above itself in the binary tree. The transition from the idle state to the receive broadcast state is accomplished as follows. In the idle state the broadcast line to the parent B_P is held high by the parent while input shift register 110 constantly shifts in ones to prevent register 110 from reading a data bit as a 0 start bit, as described in conjunction with Fig. 7. The broadcast lines to the children B_LC, B_RC are also held high by broadcast control block 170. When a subtree signals the MIMD root PE that it is ready by setting the corresponding ready lines high, a broadcast is issued from the MIMD root PE above in the tree by sending a zero start bit. This start bit enables control block 170 and is passed on to each of the children. The bits are delayed through D flip-flop 88 so as to hold the transmission synchronous to the system clock. This causes a one bit clock delay per I/O device. Once 9 bits (8 data bits and 1 parity bit) are transmitted, the zero start bit disables input shift register 110 and drives the parent ready line P_Ready to the parent low. In like fashion, the P_Ready lines in subsequent levels of the tree go low as the zero start bit is shifted to the output of shift register 110 of that level and AND gate 92 is disabled.
At each level, the processor 30 checks its B_Dav status bit to see if input shift register 110 has received the data byte. If so, the processor can read it from the I/O device. The I/O device checks input shift register 110, and if the processor has read the byte and the children's ready lines RC_Ready, LC_Ready are high, then the I/O device forces the parent ready line high, signaling that it is ready to receive the next byte. If the processor has not read the byte, then control block 170 keeps the parent ready line low until the byte is read by the processor.
Referring to Fig. 9, there is depicted a general schematic of the resolve portion of the resolve-report operation. Essentially, the resolve operation involves determining the identity of the single PE which will perform the operation of reporting the result of a computation carried out by a number of PEs. Each PE carrying out the computation is assigned a value. The single winning PE is notified that it has won and subsequently it alone reports the result. In the case of a tie, the resolve circuit designates as the winner the PE with the winning value that is encountered first in an "in order" traversal of the binary tree.
The resolve block of each I/O device computes the following values where VP is the minimum resolve value:

VP = Minimum (VS, VL, VR)
KL = KP OR (VS < VL) OR (VR < VL)
KR = KP OR (VS <= VR) OR (VL <= VR)
KS = KP OR (VL <= VS) OR (VR < VS)
where VS is the value of the number offered by the local PE, VL is the VP value computed by the left child and VR is the VP value computed by the right child. In other words, each I/O device computes the minimum of the resolve value offered locally and the minimum values computed by its two children.
KP, for kill from parent, is a Boolean signal coming into an I/O device indicating that the local PE cannot be the winner of a resolve, nor may any of its children be the winner. The left child is killed, KL, if there is a kill from its parent KP, or the value either at the local PE or from the right child was less than the value from the left child, VL. KR and KS, for kill right and kill self, are computed similarly. KS is the signal read locally within the PE to determine whether it is the winner of the resolve.
The minimizing part of resolve is computed bit serially as depicted in Fig. 9. Block 600 of Figs. 6A and 6B comprises first and second FIFO serial registers 93, 94, buffer 95, multiplexer 96, comparator 97 and buffer 98. Buffer 95 and buffer 98 are the RESOLVE and RR_IN registers described in conjunction with Table I. Serial comparator 97 computes the current winning PE by comparing one bit at a time from each of the three sources VS, VL and VR and forwarding the winning bit via multiplexer 96 to the first or second FIFO serial register 93, 94 in a parent PE.
Since the subtree that is computing the resolve may not be a complete binary tree, the value bits from the children may not arrive in the same clock cycle. FIFO buffers 93, 94 are used to align the incoming bits of the children with the bits of their parent in buffer 95.
Upon receipt of a KP signal or upon completion of a comparison, comparator 97 also issues the signals KL, KR and KS, determined as indicated by the logic equations set forth above.
The foregoing arrangement permits comparisons of values to be made in pipeline fashion simultaneously on several levels of the binary tree. Thus, the first bit of a winning value may be compared with the first bits of other winning values in a grandparent PE while the second bit is being compared in a parent PE and a third bit is being compared in a child PE and still additional bits are waiting to be compared. Advantageously, there is a delay of only a single clock cycle in performing the comparison at each level in the binary tree and the comparator is able to compare one bit of each of the offered values per clock cycle.
As a result, the entire resolve operation can be performed in the time it takes a number offered by a PE to be clocked through the comparator plus the propagation time through the tree of one clock cycle per level. If the number offered is only a byte and the processor array has only ten levels, as in Fig. 2, the entire resolve operation can be completed in less than twenty clock cycles, which is less than the average processor instruction cycle.
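The bit-serial minimum that comparator 97 implements can be illustrated in software. The sketch below is a functional model only (no pipelining or FIFO alignment, and the function name and MSB-first ordering are assumptions for the example): values arrive one bit per clock, most significant bit first, and a contender drops out as soon as it offers a 1 where another still-live contender offers a 0.

```python
# Illustrative bit-serial minimum in the spirit of comparator 97.
# Returns the winning value and the set of indices still alive,
# i.e. the contenders that tied for the minimum.
def bit_serial_min(values, width=8):
    live = set(range(len(values)))
    out = 0
    for shift in range(width - 1, -1, -1):
        bits = {i: (values[i] >> shift) & 1 for i in live}
        winning_bit = min(bits.values())
        live = {i for i in live if bits[i] == winning_bit}
        out = (out << 1) | winning_bit   # one winning bit forwarded per clock
    return out, live

value, winners = bit_serial_min([0x5A, 0x3C, 0x77])
print(hex(value))   # 0x3c
print(winners)      # {1}
```

One winning bit is emitted per clock, which is what lets successive tree levels overlap their comparisons in pipeline fashion.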
The report operation is implemented in the kernel program by using the resolve circuit. Report is only well defined if there is a single enabled PE in the SIMD subtree. To implement the report operation, all PEs do a resolve where the SIMD disabled PEs use maximum values (e.g., FF(hex)) and the single enabled PE offers the value to be reported as its resolve value. The value offered by the enabled PE is therefore less than or equal to all other values resolved and the correct result is reported.
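The report-through-resolve trick above is simple enough to state in a few lines; the function name is invented for the example.

```python
# Report implemented through resolve, as described above: disabled PEs
# offer the maximum byte value (0xFF) so the single enabled PE's byte
# is always less than or equal to every other offer and wins.
def report(offers):
    # offers: list of (value, enabled) pairs, one per PE in the subtree
    resolved = [v if enabled else 0xFF for v, enabled in offers]
    return min(resolved)

pes = [(0x12, False), (0x34, True), (0x56, False)]
print(hex(report(pes)))   # 0x34 -- only the enabled PE's value is reported
```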
Resolve-report block 600 of Figs. 6A and 6B is depicted in more detail in Figs. 10A and 10B.
Resolve-report block 600 comprises four blocks, namely subdecoder 610, output register 620, comparator 640 and control block 660. Subdecoder 610 performs decoding and transmits read/write enabling signals to output register 620, comparator 640 and control block 660. Output register 620, which is the RESOLVE register of Table I, receives a byte from processor 30 which is to be transmitted up the binary tree on line BUS7..0 of register 620. Comparator 640 is used primarily in the resolve operation and is a specific implementation of the serial comparator 97 of Fig. 9. Comparator 640 determines the winner of a resolve and issues KILL commands to children who do not have the proper value. Control block 660 performs transmitting, synchronizing and receiving functions. Each PE is responsible for receiving a byte from each of its children and transmitting the winning byte up to its parent as fast as possible. To do this, two FIFOs are used. Once this resolve is done, a data path is set up from the winning PE to the MIMD root PE.
In the practice of the invention, the report-resolve block of Figs. 10A and 10B performs two types of transmissions, namely report and resolve. The resolve transmission utilizes two specific addresses.
The first byte of a resolve is written into a first address which resets comparator 640. Following bytes are written to a second address but do not reset the comparator. If comparator 640 of the resolve-report block does a compare on the three bytes from itself and its two children and determines a winner, then that winner reports the result. However, if there is a tie involving the left child, comparator 640 will call the left child the winner; and if there is a tie between itself and the right child, the comparator determines itself to be the winner. In any case the I/O device will transmit up the winning byte with a 0 start bit. Upon encountering the last byte of a resolve, processor 30 reads the KILL signals from comparator 640 on lines kill right child KILLRC, kill left child KILLLC and kill parent KILLP in order to determine whether that byte was the winning byte of the resolve.
By this means successive levels of the binary tree select a single winning value, pass it on to the next level and disable all processing elements except the processing element which has the winning value.
The report function can then be implemented by having all the disabled processing elements report a maximum value such as FF(hex) when they are disabled and broadcasting an instruction to the winning processing element to report whatever of its data is of interest. Since all the other processing elements are set to a maximum value, operation of the resolve one more time will cause this data from the winning processing element to be selected consistently as the winning value and passed to the root processing element.
Figs. 11-16 depict the elements of control block 660 of Figs. 10A and 10B in greater detail. More specifically, input register 680 of Fig. 11, which is the RR_IN register of Table I, receives and holds the winning byte transmitted from the child to an MIMD parent on line BUS(7..0) of input register 680. Syncing and data enabling signals SYNC, XMTDTA are input to NAND gate 681 whose output serves as the clock to input register 680.

-42- ~Z~8~8 Two first-in first-o~t devices (FIFOs) 630, 695 of Fig. 12 receive bytes transmitted Erom SIMD
children to their parent. These devices are depicted as FIFOs 93, 94 in Fig. 9. If FIFO 690 of the PE
receives a winning byte from the PE's right child, it will transmit this winning byte up to its parent as fast as possible on line FIRCDO. Similarly, if FIFO
695 of the same PE receives a winning byte from the PE's left child, it will transmit this winning byte up to its parent as fast as possible on line FILCDO.
More specifically, FIFO 690 is supplied with data from the right child over input line RRDRC of Fig. 12.
Control signals PRENXMT applied to NAND gate 692 and XMTDTA applied to D-flip flop 691 permit the output of data to the PE. Similarly, FIFO 695 is supplied with data from the left child over input line RRDLC. Control signals PRENXMT applied to NAND gate 697 and XMTDTA applied to D-flip flop 696 permit the output of data to the PE. Reset signals are applied to lines FIFORESET and R/I of NOR gate 693 to reset FIFO 690 for the right child and to NOR gate 694 to reset FIFO 695 for the left child.
Input register 680 of Fig. 11 and output register 620 of Figs. 10A and 10B are memory mapped to permit processor 30 to read or write a byte. Processor 30 may check the status of control block 660 to determine whether a byte has been received or needs to be transmitted.
Fig. 13 depicts the GO circuitry to produce the status signal RRCLGO. Inputs to this circuit include signals RRPARDY and RRPAGO.
Fig. 14 depicts the EMPTY logic to produce the signal PREMPTY. Inputs to this circuit include signals RROUT0WR and RROUT1WR.
Fig. 15 depicts the READY logic to produce the signal RRPARDY. Inputs to this circuit include signals RRRCRDY, SYNCRC, RRLCRDY, SYNCLC and the RREMPTY signal produced by the EMPTY logic of Fig. 14.

Fig. 16 depicts the SEND logic to produce the signals XMTDTA, ENXMT and RSTFLL. Inputs to this circuit include FILC, FIRC (both produced by the RECEIVE FIFO circuit of Fig. 12), SYNCLC, SYNCRC and RRCLGO (produced by the GO logic of Fig. 13).
The resolve-report block may be in any one of several states, namely, idle, transmit resolve-report byte, receive resolve-report byte and start resolve-report.
If the PE is not in MIMD root mode, the idle state is defined as follows: a status signal on line RRCLGO of control block 660 of Figs. 10A and 10B is held logic high by the MIMD root parent; the right child and left child data lines RRDRC, RRDLC, respectively, of control block 660 are held logic high; and the parent ready line RRPARDY, also of control block 660, is held logic low by the child.
The I/O device leaves the idle state and enters the transmit resolve-report state when the SIMD child's processor 30 writes a resolve byte into the I/O chip's report-resolve output register 620 of Figs. 10A and 10B. If the SIMD child is a leaf PE in the binary tree, the parent ready line RRPARDY of control block 660 of Figs. 10A and 10B enters the logic high state. If the SIMD child is not a leaf PE, the resolve-report ready status bit, i.e., the bin-ready signal on line 90 of Fig. 7, is ANDed by AND gate 92 of Fig. 7 with each of the children's ready lines to determine the value of the parent broadcast ready line, as shown in Fig. 7. Once a MIMD root PE determines that the SIMD sub-tree is ready, the PE will cause the status signal on line RRCLGO of control block 660 of Figs. 10A and 10B to enter the low state.
When this occurs, the leaf PEs transmit up the value in output register 620 of Figs. 10A and 10B. If the PE is not a leaf and is in SIMD, then it must wait until both of its children have begun to transmit their resolve byte up the binary tree. FIFOs 690, 695, which receive bytes transmitted from SIMD right and left children to their parent, permit comparator 640 of Figs. 10A and 10B to compare the bits as soon as all three first bits from the left child, right child and their parent are received. Once an SIMD PE has finished a one bit compare, it forces the status signal on line RRCLGO high, even if the parent still has a low line, so as to enable the next byte to be written without any delays.
In addition to the idle, transmit resolve-report byte and start resolve-report states, a MIMD root PE may also be in the receive resolve-report state. When a subtree signals to a MIMD root PE that it is ready to transmit up a byte, the MIMD root PE's I/O device initially checks that it has written a byte to output register 620 of Figs. 10A and 10B as if it were an SIMD PE. At this point, the MIMD root PE's I/O device drops the status signal on line RRCLGO of control block 660, signalling the lowest portion of the tree to begin transmission. Signal RRCLGO is depicted in the GO logic and the SEND logic of Figs. 13 and 16, respectively. The I/O device now enters the receive resolve-report mode. Each child transmits up either its resolve byte or report byte, depending on the operation taking place. Once the children of the MIMD root PE transmit up the data, the MIMD root PE does a one bit compare on the data and, like any other SIMD PE, the winning byte is transmitted to the resolve-report block's input register 680 of Fig. 11 over line BUS(7..0), where it may be read by processor 30.
Referring again to Figs. 6A and 6B, parity check block 500 checks the byte communication among the numerous PEs as well as memory operations for parity errors. Parity check block 500 essentially reads the parity of the eight-bit data line IO_BUS7..0 connecting broadcast block 100, instruction decoder block 300, report-resolve block 600 and address latch block 700 and checks this parity against a parity bit to determine whether a parity error has occurred.
More specifically, if processor 30 writes to the broadcast block 100 or report-resolve block 600, a parity bit is placed on internal parity bit bus IPB of parity check block 500 and written as the ninth bit to the data registers of report-resolve block 600. Line IPB connects parity check block 500, broadcast block 100 and report-resolve block 600. When a memory write operation takes place, parity check block 500 generates a parity bit which is stored for future reference by being written through data line PB_OUT of parity check block 500 into 16Kx1 parity RAM 46 of Figs. 3 and 4B.
Whenever processor 30 reads the data input registers in broadcast block 100 or resolve-report block 600, the parity of the data is checked against the parity bit transmitted with the data. If they fail to match, a parity error signal PAR_ERR is sent to instruction decoder block 300, which, if the parity error mask is disabled, will interrupt the processor.
When an external memory read operation occurs, the parity of the data in the 8Kx8 RAMs 42, 44 of Fig. 3 is checked against the parity originally written into the 16Kx1 RAM 46 of Fig. 3. The line which brings this parity bit from the 16Kx1 RAM to the I/O device is PB_IN of parity check block 500 of Figs. 5A and 5B. If the parity of data bus IO_BUS7..0 of parity check block 500 does not match PB_IN, then the parity error line PAR_ERR goes low. Parity error line PAR_ERR of parity check block 500 is connected to instruction decoder block 300 and informs the instruction decoder that a parity error has occurred. In both read cases, there is a delay of two clock cycles from when the read line toggles low until the parity bit is checked against the data bus IO_BUS7..0. A bit in instruction decoder block 300 indicates whether the parity failure was local to the I/O device or was a memory failure.
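The generate-on-write, check-on-read scheme can be sketched as follows. The text does not fix the parity polarity, so even parity is assumed here, and the function names are invented; PAR_ERR is modeled active low to match the "goes low" behavior described above.

```python
# Sketch of the parity scheme of parity check block 500: a parity bit
# is generated when a byte is written and checked when it is read back.
def parity_bit(byte):
    # Even parity assumed: the bit is the XOR of the eight data bits.
    return bin(byte & 0xFF).count("1") & 1

def check_read(byte, stored_parity):
    # Returns the PAR_ERR line level: 0 (low) signals a parity error.
    return 0 if parity_bit(byte) != stored_parity else 1

p = parity_bit(0b10110010)        # four 1-bits -> parity 0
print(p)                          # 0
print(check_read(0b10110010, p))  # 1: no error
print(check_read(0b10110011, p))  # 0: single-bit flip detected
```

As with any single parity bit, a single-bit error is detected but a double-bit error is not, which is why the text treats parity faults as a trigger for the interrupt and diagnostic machinery rather than for correction.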
Referring back to Figs. 6A and 6B, there is shown debug block 200 which handles system wide, or global, interrupts. These global interrupts may be generated either by processor 30 initiating its own global interrupt (known as debug) or by another I/O device.
Global interrupts permit any PE in the tree to initiate an interrupt in every other PE as well as the host coprocessor. Debug block 200 may be used to signal a hardware error or may be used to enter a debugging and diagnostic supervisor program. The I/O device is constructed so the processor may read and write the I/O device's state in order for context switching to be performed. Context switching involves saving the state of the I/O device in external memory.
Upon a global interrupt, the state of the processor and the I/O device are saved and a supervisor program entered. Upon exiting the supervisor program, the state of the PE and the I/O device may be restored and the tree may resume processing from where the interrupt occurred.
Debug block 200 is essentially a multidirectional bus repeater. Debug block 200 may receive an interrupt signal from four different sources: the local PE, as a result of writing a particular bit to the PE, or any of its three nearest neighbors (parent PE or left and right children PEs) in the tree. Upon receipt of an interrupt, debug block 200 will propagate the interrupt in the other three directions until the entire tree is interrupted. Upon removal of the interrupt signal from the interrupt source, the block will stop propagating the signal in the other three directions.
Functionally, debug block 200 of Figs. 6A and 6B is a finite state machine. Input signal IO_GLOBAL_INT to debug block 200 comes from instruction decoder 300 and is generated by a global interrupt status bit, which is set high or low by processor 30. Receive interrupt signal INT_IO from debug block 200 feeds instruction decoder 300 to generate an I/O device initiated interrupt. Input/output lines, global interrupt by parent G_INT_P, global interrupt by left child G_INT_LC, and global interrupt by right child G_INT_RC of debug block 200 may also be used to communicate interrupts. If the global interrupt was generated by the parent, left child or right child, and not by the I/O device, then whichever of these three lines was forced low indicating an interrupt will also force receive interrupt INT_IO low. If the global interrupt is caused by input signal IO_GLOBAL_INT to debug block 200, then signals G_INT_P, G_INT_LC and G_INT_RC will also be forced low. Once an interrupt occurs, processor 30 reads a status byte in instruction decoder block 300 which signifies whether the interrupt was local to the I/O device or was generated by a parent or child and also what should be done.
In accordance with the invention, once an interrupt occurs, all or part of the state of the I/O device should be saved in external memory to be restored at some later time. The state of instruction decoder block 300 of Figs. 6A and 6B should be saved first, then the state of broadcast block 100, and finally the state of resolve-report block 600 in order to allow any global communications which may have begun before the interrupt to finish before the state is saved. Following this order, no data will be lost, as all broadcasts will have been completed by the time instruction decoder block 300 is read out, and all resolve-reports will have settled down by the time the broadcast block is finished being read out. After the state has been saved, a soft reset of the I/O device can be performed or a pre-determined state can be written to the I/O device. The state must be written in the same order as discussed above, since the state in instruction decoder block 300 is used in both the broadcast 100 and resolve 600 blocks.
Once the debugging is done, the state of the tree can be restored. The state of the I/O device is, again, restored in the same order as it was read out, except for one case. The broadcast input shift register must be read out by processor 30 first in order to re-initialize the register. Only after it has been read can data be written into the register. This case only applies to I/O devices which are to be restored into an MIMD status condition.
In the practice of the invention, efficiency is maintained by controlling the amount of computation time required by each processor on a particular cycle. Since the amount of time required by each processor may be vastly different depending on its local state, balancing the load by copying a constrained subset of the data of a processor with a skewed computation load to other processors has been found advantageous. The data may be constrained and thereby partitioned based on any suitable ordering or hashing scheme, thus reducing the mean computation time as well as the standard deviation of the mean.
Additionally, the present invention may advantageously be practiced such that operation of the processors is essentially assured after two successive faults, with a 50% chance of proper operation after a third successive fault. Although common parity errors can be handled with conventional error correcting methods, complete and irrevocable non-operation of an integrated circuit poses a much more serious problem. However, this can be overcome by replicating the computation four times in a binary tree parallel processor of size 4N+3, where N=1023 for the preferred embodiment.

Referring to Fig. 18, three additional nodes form the uppermost part of the N-size tree, namely T, L and R. These three nodes essentially act as arbitrators assuring agreement between the four identical and concurrent processes in subtrees 1-4. Nodes L and R work concurrently to assure agreement between their descendant subtrees. T assures agreement between L and R and aids in the isolation of faults if they occur. T, L and R themselves must be hardened against faults using conventional replicated voting logic.
More specifically, assuming a fault occurs in subtree 2, the results communicated to L by subtrees 1 and 2 will differ at some point in the computation. Immediately upon noticing this discrepancy, L notifies T by setting a pin which T reads continually. As R has noticed no faults of its own, it communicates valid results to T. T proceeds to transmit R's results to the external host as well as to L.
Node L, using this value supplied by R via T, verifies that subtree 2 has failed. Subsequent operation of L simply passes values from its operational subtree 1 directly to T. Subtree 2 can now either work independently to isolate its own fault for direct manual repair, or remain disconnected.
If another fault occurs in one of the valid subtrees 1, 3, 4, the other two computations will remain valid and isolate the faulty subtree in the same manner as discussed above. Thus, T is assured to respond with valid data through two faults. If a third fault occurs, T can choose randomly from the two remaining subtrees with a 50% probability of success. Such probability may be increased with the aid of hardware for fault detection.
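The arbitration just described can be sketched as a simple voting function. This is an illustrative reading of Fig. 18 (the function name and the "prefer the agreeing pair" rule as coded here are assumptions), not the hardened voting logic itself.

```python
# Sketch of the T/L/R arbitration of Fig. 18: L compares replicated
# subtrees 1-2, R compares subtrees 3-4, and T forwards a value from a
# side whose pair still agrees.
def arbitrate(s1, s2, s3, s4):
    left_ok = (s1 == s2)
    right_ok = (s3 == s4)
    if right_ok:
        return s3       # T passes on the agreed result from R's pair
    if left_ok:
        return s1       # R's pair disagreed internally; trust L's pair
    return None         # both pairs split: a third fault has occurred

print(arbitrate(7, 9, 7, 7))   # 7 -- subtree 2 is faulty, R's pair agrees
print(arbitrate(7, 7, 7, 2))   # 7 -- a fault on the right is masked by L
```

When both pairs disagree (the third-fault case), the hardware described above would instead pick one surviving subtree at random, giving the 50% success probability stated in the text.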
Advantageously, such a fault control system can be implemented in accordance with the present invention without the need of additional PEs. Nodes T, L and R preferably perform relatively simple functions and do not require a dedicated PE.

Software control of each PE is preferably provided through a kernel system resident within EPROM of each PE. The kernel provides four functions: powerup diagnostics, low level I/O routines, high level language PPL/M support and high level language PPSL support, depicted in the flow diagram of Fig. 19.
Appendices I and II, respectively, list the machine code and assembly language versions of the kernel.
With respect to the powerup diagnostic function of the kernel, the PEs reset by initializing their memory to 00 upon the application of power and then examining the global interrupt lines. If the global interrupt lines are low, the PE will execute an EPROM
resident diagnostic program to generally check the integrity of the PE. If the global interrupt lines are idle (i.e., high, 5 volts), the I/O device is reset, the interrupt masks are disabled, low level status bits are initialized and a timer which is activated at the beginning of initialization is checked for time-out. If the timer times out because the initialization did not complete, the PE will execute the EPROM resident diagnostic program. Alternatively, if initialization is complete, the PE enters the PPL/M mode.
With respect to the low level I/O function of the kernel, the low level status bits located in the bit address space include bits to indicate whether a PE is a root PE, whether a PE is a right child, whether a PE
is in MIMD and its parent is logically disconnected, whether a left child is logically connected, whether a right child is logically connected, whether a left child physically exists regardless of MIMD/SIMD state and whether a right child physically exists regardless of MIMD/SIMD state. Initialization software preferably examines external connections and assigns appropriate status to the low level status bits.
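Viewed abstractly, these seven status bits form a small flag set. The sketch below uses invented bit positions; the kernel's actual assignments in the bit address space appear only in the appendices:

```python
from enum import IntFlag

class PEStatus(IntFlag):
    # Bit positions here are illustrative, not the kernel's.
    IS_ROOT             = 0x01  # this PE is the root PE
    IS_RIGHT_CHILD      = 0x02  # this PE is a right child
    PARENT_DISCONNECTED = 0x04  # in MIMD, parent logically disconnected
    LEFT_CONNECTED      = 0x08  # left child logically connected
    RIGHT_CONNECTED     = 0x10  # right child logically connected
    LEFT_EXISTS         = 0x20  # left child physically exists
    RIGHT_EXISTS        = 0x40  # right child physically exists

# Initialization examines the external connections and sets bits:
status = PEStatus.LEFT_EXISTS | PEStatus.LEFT_CONNECTED
```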


Low level I/O routines are generally byte oriented, except for the I/O switch context routines, which are word oriented. If a routine returns a value, the return value is left in the accumulator.
The only routines that may access the processor ports that are connected to parent and child ports of processor 30 are as follows:
Write left child port.
Write right child port.
Write parent port.
Write host (parent port, callable from root only).
Write to both children at once.
Write host one byte but use proper protocol (parent port, callable from root only).
Read left child port.
Read right child port.
Read parent port.
Read host (parent port, callable from root only).
Read host one byte but use proper protocol (parent port, callable from root only).
These tree neighbor I/O routines are responsible for executing the proper four cycle handshake protocol with the parent and children PEs that are logically connected to a local PE. If any such PE is not logically connected, a write is a "no operation" and a subsequent read will return a zero. While these routines test the status bits, the PPL/M high level kernel maintains the correctness of the status bits.
Since ports of a PE are generally configured for communication in a downward direction, the low level routines which move a byte upward in the binary tree must reconfigure their ports in an upward direction for the upward transmission and then return the ports to the downward direction before exiting.

The basic low level operation for performing tree neighbor I/O is a standard four cycle handshake which requires a ready line and an acknowledge line (Ack).
A typical handshake between a Master and a Slave comprises the following steps:
Master                            Slave
Assert data.
Assert ready.
                                  Wait until ready asserted.
                                  Pick up data, assert Ack.
Wait until Ack asserted.
Reset ready.
Remove data.
                                  Wait until ready reset.
                                  Reset Ack.
Wait until Ack reset.
Assert ready.
                                  Wait until ready asserted.
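The same exchange can be modeled in software with two threads sharing a ready line, an Ack line and a data latch. This is a behavioral sketch only; the real protocol runs on processor port pins, and all names here are invented:

```python
import threading

class Handshake:
    # Toy model of the four cycle handshake: 'ready' and 'ack' stand
    # in for the two control lines, 'data' for the data port.
    def __init__(self):
        self.ready = False
        self.ack = False
        self.data = None
        self.cv = threading.Condition()

    def master_send(self, byte):
        with self.cv:
            self.data = byte                         # assert data
            self.ready = True                        # assert ready
            self.cv.notify_all()
            self.cv.wait_for(lambda: self.ack)       # wait until Ack asserted
            self.ready = False                       # reset ready
            self.data = None                         # remove data
            self.cv.notify_all()
            self.cv.wait_for(lambda: not self.ack)   # wait until Ack reset

    def slave_recv(self):
        with self.cv:
            self.cv.wait_for(lambda: self.ready)     # wait until ready asserted
            byte = self.data                         # pick up data
            self.ack = True                          # assert Ack
            self.cv.notify_all()
            self.cv.wait_for(lambda: not self.ready) # wait until ready reset
            self.ack = False                         # reset Ack
            self.cv.notify_all()
            return byte

bus = Handshake()
out = []
t = threading.Thread(target=lambda: out.append(bus.slave_recv()))
t.start()
bus.master_send(0x42)
t.join()
```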
The following illustrative list of routines is performed through the I/O device:
Read a broadcast byte;
Send out a broadcast byte (MIMD PE only);
Send out the first byte of a resolve;
Send out subsequent bytes of a resolve;
Send out bytes of a report;
Read a resolved or reported byte (MIMD PE only);
Set the local processor's I/O device to the children's current MIMD/SIMD state;
Set the local processor's I/O device into an MIMD state;
Set the local processor's I/O device into an SIMD state;
Set the local processor's I/O device to have the left child in MIMD;
Set the local processor's I/O device to have the left child in SIMD;
Set the local processor's I/O device to have the right child in MIMD;
Set the local processor's I/O device to have the right child in SIMD;
Set global interrupt.

Reset global interrupt.
Mask off global interrupt.
Allow global interrupt in interrupt CPU.
Mask off parity interrupt.
Allow parity interrupt in interrupt CPU.
Predicate returns dynamic status of left child MIMD line. Nonzero means MIMD. (Affects carry bit.)
Predicate returns dynamic status of right child MIMD line. Nonzero means MIMD. (Affects carry bit.)
Predicate returns the status of the I/O device's left child state. Nonzero means MIMD. (Affects carry bit.)
Predicate returns the status of the I/O device's right child state. Nonzero means MIMD. (Affects carry bit.)
Predicate returns the status of the I/O device's children's state. Nonzero means both are in MIMD. (Affects carry bit.)
Predicate returns the value of a winner bit in the carry.
Write I/O device control register.
Read I/O device control register.
Write I/O device status register.
Read I/O device status register.
Context save is passed a pointer to an area of 12 bytes of an off-chip RAM. This routine will write out the current context information of the I/O device and leave the I/O device reset and in SIMD mode. (Affects DPTR and carry.)
Restore context is passed a pointer to an off device RAM containing a previously written context. This routine will restore the context to an I/O device. (Affects DPTR and carry.)
In general, the I/O device comprises a number of memory mapped data and status registers. Communication routines take the form of a busy wait on a status bit until the status bit indicates that the data register is ready and that the data may therefore be read or written. The predicates read and mask the appropriate status bits.
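The busy-wait pattern is simple enough to sketch against a mock memory mapped device. The register layout and the RX_READY bit are invented for illustration:

```python
RX_READY = 0x01  # hypothetical "data register ready" status bit

class IODevice:
    # Mock of a memory mapped device: one status and one data register.
    def __init__(self):
        self.status = 0
        self.data = 0

    def deliver(self, byte):
        # Hardware side: a byte arrives and the status bit is raised.
        self.data = byte
        self.status |= RX_READY

def read_byte(dev):
    # Busy wait on the status bit, then read the data register.
    while not dev.status & RX_READY:
        pass
    dev.status &= ~RX_READY  # acknowledge by clearing the bit
    return dev.data

dev = IODevice()
dev.deliver(0x7F)
value = read_byte(dev)
```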
Referring back to Fig. 19, and with respect to the PPL/M high level kernel, the present invention powers up in PPL/M mode, with all PEs except the root starting to execute an SIMD interpretive loop SIMDLP.
If a PE is determined to be a root PE, it will execute the bootstrap loader to download code and load in a user program.
The bootstrap loader is a PPL/M program itself.
It has an MIMD part in the root and a SIMD part in each of the PEs. Input files are illustratively in Intel 8080 object module format. The bootstrap loader then decides if a particular record of an input file is part of the program image. If so, header information in the record dictates where the image part is to be loaded in an off chip RAM. This is done in all PEs so each PE has an identical program image in its memory. At the end of the file, control is passed to the user program, according to the start address dictated by the load module.
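A toy version of that record-walking logic is sketched below; the record layout is deliberately simplified, and a real Intel 8080 object module carries more record types and fields than shown:

```python
def load_image(records, ram):
    # Copy program-image records into (mock) off chip RAM at the
    # address named in each record header, then return the start
    # address named by the end-of-file record.
    start = None
    for rec in records:
        if rec["type"] == "content":      # part of the program image
            addr = rec["addr"]            # header says where it loads
            ram[addr:addr + len(rec["bytes"])] = rec["bytes"]
        elif rec["type"] == "end":        # end of file: start address
            start = rec["start"]
    return start

ram = bytearray(64)
start = load_image(
    [{"type": "content", "addr": 8, "bytes": b"\x02\x0b\xdc"},
     {"type": "end", "start": 8}],
    ram)
```

Running the same loader in every PE is what leaves an identical program image in each PE's memory.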
The parallel behavior of the present invention is achieved by placing identical program images of the user program in all PEs, combined with the use of runtime routines to control execution and perform the communications. To illustrate this, a portion of PPL/M code before and after it has been transformed by a preprocessor is presented and the actions of the runtime routines are explained as follows:

A segment of the PPL/M code:
main:
do;
/*declarations of regular and slice variables*/
declare a,b,c,d word external;
declare s,t,u,v word slice external;
/*code*/

a = b + c;
b = c * d;

/*SIMD block*/
do SIMD;
s = t + v;
a8 = s;
send(lc);
io8 = a8 + s;
recv(p);
t = s;
end;
end;
The same segment of PPL/M code after preprocessing:
main:
do;
/*declarations of regular and slice variables*/
declare a,b,c,d word external;
declare s,t,u,v word external;
/*code*/
example: procedure;
a = b + c;
b = c * d;
/*SIMD block*/
call SIMD;
goto 10;
if (not enl) then goto 11;
s = t + v;
a8 = s;

11: send(lc);
if (not enl) then goto 12;
io8 = a8 + s;
enl = al;
if (not enl) then goto 12;
12: recv(p);
if(not enl) then goto 13;
t = s;
13: return;
10:
end;
end;
Notice the "do SIMD" statement has been changed to "call SIMD". Code runs conventionally in a MIMD processor until a "call SIMD" is encountered. SIMD is a kernel function that looks at the return address stack of processor 30. The return address points to the instruction right after the call SIMD, which is 2 less (due to the jump 10:) than the first address of the SIMD block. Since the program image in all PEs is identical, this address will point to the same SIMD
block in all PEs. The address is broadcast to all the SIMD PEs. The SIMD procedure then jumps to the SIMD
block itself (since every MIMD PE must also behave like an SIMD PE). At the end of the SIMD block the return instruction acts like the return instruction of the initial SIMD call. This returns the program to the initial "goto" after the call SIMD.
The SIMD interpretive loop SIMDLP in the SIMD PE receives the two bytes of address and does an indirect subroutine call to the SIMD block. When the SIMD PE reaches the return address at the end of the SIMD block it causes the PE to return to the SIMD interpretive loop.
All PEs may now execute the same block of code, simulating SIMD behavior. Notice at the beginning of the SIMD block and right after each communication instruction and assignment to EN1 there is a statement: "if (not EN1) then goto 11;". This is how SIMD
enable/disable state is simulated. Each time a communication instruction must execute it must be executed in every PE. In other words, every PE must execute precisely the same sequence of communication instructions.
The conditional jumps on EN1 jump from communication instruction to communication instruction. Other instructions are skipped only if the PE is disabled. This, together with disallowing other conditionals in SIMD blocks as well as communication instructions in slice procedures, ensures discipline over the sequence of communication instructions.
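The discipline can be summarized in a few lines: compute steps are guarded by EN1 while every PE still reaches each communication step in the same order. A behavioral sketch with invented PE fields:

```python
def run_simd_block(pes):
    # Every PE walks the same instruction sequence. Compute steps
    # run only when the PE is enabled; communication steps always run.
    trace = []
    for pe in pes:
        if pe["en1"]:
            pe["s"] = pe["t"] + pe["v"]      # compute: enabled PEs only
        trace.append(("send", pe["id"]))     # communication: every PE
        if pe["en1"]:
            pe["t"] = pe["s"]                # compute: enabled PEs only
    return trace

pes = [{"id": 0, "en1": True,  "s": 0, "t": 1, "v": 2},
       {"id": 1, "en1": False, "s": 0, "t": 1, "v": 2}]
trace = run_simd_block(pes)
```

The disabled PE's state is untouched, yet both PEs execute precisely the same sequence of communication steps.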
In addition to the above, the following is a list of special locations used by the PPL/M high level kernel:
En1: The enable bit. Used by communication instructions and SIMD semantics.
A1: Special bit register to hold the winner information of resolve.
A8: Used as the source or destination of communication instructions.
IO8: Used as the source or destination of communication instructions.
cprbyte: Destination of the report operation.
Referring again to Fig. 19, DoMIMD is the primitive that will cause the partitioning of the binary tree and pass control to the roots of each of the subtrees in a three step process. First, the address of the MIMD routine to which control is being given is broadcast. The partitions and status bits are then set up. Last, the enabled PEs call the MIMD routine.
In PPL/M mode the root is always in MIMD mode, although if it is SIMD disabled, it will not do a routine called by DoMIMD.

An exit is placed at the end of the MIMD code to cause that subtree to reconnect above. In effect, the I/O device switches back into SIMD and updates the local MIMDflag. The last return from the MIMD procedure will unwind the return stack and leave the PE in the SIMD interpretive loop, SIMDLP.
Sync is a command that will cause a wait until all PEs below the PE executing sync have returned to SIMD mode and causes that subtree to completely reconnect.
Thus it can be seen that there is provided a new and improved system for fast, efficient and fault tolerant data processing through the use of a plurality of processor elements arranged in a binary tree configuration wherein each processor element has associated with it an individual I/O device and memory.

Copyright Columbia University 1986
ADDR: 00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F
OOOOH=02H OBH DCH 02H ODH 3FH FFH FFH FFH FFH FFH 02H ODH 4FH FFH FFH
OOlOH=FFH FFH FFH 02H ODH 53H FFH FFH FFH FFH FFH 02H ODH 57H FFH FFH
0020H=FFH FFH FFH 02H OD}I 5BH 02H 02H 40H 02H 02H 5D}1 02H 02H 78H 02H
0030H=OAH 6EH 02H OlH 4DH 02H OAH 54H 02H 04H 27H 02H OAH 89H 02H OAH
0040H=8DH 02H 02H 191{ 02H OBH 25H 02H 02H 2FH 02H OAH E3H 02H ODH 74H
OOSOH=02H OAH 83H 02H OAH 86H 02H 03H CEH 02H 03H 7CH 02H 02H EOH 02H
0060H=02H 85H 02H 06H 25H 02H 06H 82H 02H 06H CEH 02H OlH 51H 02H OlH
0070H=7BH 02H OlH A6H 02H OlH DlH 02H OlH F5H 02H 06H OFH 02H 06H 37H
0080H=02H 06H 51H 02H 06H 6BH 02H 06H 9AH 02H 06H B4H 02H 07H OCH 02H
OO90H=06H FDH 02H 07H 3EH 02H 07}1 lAH 02H 07H 26H 02H 07H 32H 02H 07H
OOAOH=4FH 02H 07H 62H 02H 07H Clll 02H 07H C8H 02H 07H FlH 02H 07H F7H
OOBOH=02H 07H FDH 02H 08H 03H 02H 08H 09H 021{ 08H OFH 02H 08H 17H 02H
OOCOH=08H lFH 02H 08H 45H 02H 08H S4H 02H 08H 8DH 02H 08H 35H 02H 08H
OODOH=3BH 02H 08H 41H 02H 07H 75H 02H 07H 88H 02H 07H 9BH û2H 07H AEH
OOEOH=02H OBH 28H 02H OBH 44H 02H OBH 65H 02H OBH 89H 02H 03H 47H 02H
OOFOH=03H 8FH 02H 03H E8H FFH 07H 12~ 06}1 OFH F5H 30}1 85H 2BH 33H 12H
OlOOH=06H CEH 22H A2H OEH B3H 50H 06H 85H 2BH 33H 12H 06H 6BH 12H 06H
OllOH=37H F5H 30H 22H 30H OEH 06H 85H 2BH 33H 12H 06H 6BH 12H 06H 51H
0120H=F5H 30H 22H A2H OEH B3}1 50H 07H 12H 06}1 OFH F5H 30H 80H 03H 85H
0130H=2BH 30H 85H 2CH 33H 12}1 06H 9AH 22H 30H OEH 07H 12H 06H OFH FSH
0140H=30H 80H 03H 85H 2BH 30H 85H 2CH 33H 12H 06H B4H 22H 12}1 OAH 54H
0150H=22H A2H lOH 82H 08}1 B3H E4H 33H F5H 33H 12H 07H lAH 30H ODH 13H
0160H=12H 07H 3EH 30H EOH 02H 80H 03H C3H 80H OlH D3H 92H OFH A2H OFH
0170H=B3H 92H OFH 12H 08H 45H 82H 08H 92H lOH 22H 30H 08H 07H E5H 26H
0180H=F4H F5H 26H 80H 03H 75H 26H FFH 85H 26H 33H 12H 07H lAH 30H ODH
Ol90H=ODH 12H 07}i 3EH F5H 30H 30H 08H 05H E5H 30H F4H F5H 2FH 12H 08H
OlAOH=45H 82H 08H 92H lOH 22H 30H 08H 0711 E5H 26H F4H F5H 26H 80H 03H
OlBOH=75H 26H FFH 85H 26H 33H 12H 07H 26H 30}1 ODH ODH 12H 07H 3EH FSH
OlCOH=30H 30H 08H OSH E5H 30H F4H F5H 2FH 12H 08H 451l 82H 08H 92H lOH
OlDOH=22H A2H 08H B3H 50H 03H 75H 26H FFH 85H 26H 33H 12H 07H lAH 30H
OlEOH=ODH OBH 12H 07H 3EH F5H 30H 30H 08H 03H 85H 30H 2FH 12H 08H 45H
OlFOH=82H 08H 92H lOH 22H A2H 08H B3H 50H 03H 75H 26H FFH 85H 26H 33H
0200H=12H 07H 26H 30H ODH OBH 12H 07H 3EH F5H 30H 30H 081l 03H 85H 30H
0210H=2FH 12H 08H 451l 82H 08H 92H lOH 22H 12H nDH 74H 80H 04H 12H OBH
0220H=25H 22H A2H llH B3H 50H 03H 12H 07H 621{ A2H llH 92H ODH 22H 12H
0230H=ODH 74H 80H 04H 12H OAH E3H 22H E5H 30H B4H OOH 02H 80H FOH 22H
0240H=E5H 26H 90H 05H FB}I 25H EOH 73H llH F7H 80H OAH 31H 03H 80H 06H
0250H=31H 14H 80H 02H 80H OOH 30H 08H 03H 85H 30H 2CH 22H E5H 26H 90H
0260H=06H 05H 25H EOH 73H 80H OAli 31H 23H 80H 06H 31H 39H 80H 02H 80H
û270H=OOH 301l 08H 03H 85H 30H 2BH 22H 85H 26H 2EH 12H ODH 74H 80H 04H
0280H=12H OAH 6EH 22H 22H 85H 27H 26H 51H 78H 12H ODH 74H 80H 04H 85H
0290H=2CH 27H 22H 85H 28H 26H 51H 78H 12H ODH 74H 80H 04H 85H 2CH 28H
02AOH=22H 75H 33H 03H 12H 06H 82H 85H 29H 33H 12H 06H 82H 85H 2AH 33H
02BOH=12H 06H 82H AFH 29H AEH 2AH 74H OOH 6EH 4FH 60H 22H 78H 29H 12H
02COH=ODH B4H 12H ODH 74H 80H lOH 85H 27H 83H 85H 28H 82H EOH F5H 2CH
02DOH=31H 4DH 78H 27H 12H ODH BCH 85H 2FH 33H 12H 06H 82H 80H D4H 22H
02EOH=85H 27H 26H 51H 78H 12H ODH 74H 80H 04H 85H 2CH 27H 22H 85H 28H
02FOH=26H 51H 78H 12H ODH 74H 80H 04H 85H 2CH 28H 22H 75H 33H 05H 12H
0300H=06H 82H 85H 29H 33H 12H 06H 82H 85H 2AH 33H 12H 06H 82H 12H 06H
0310H=25H F5H 29H 12H 06H 25H F5H 2AH AFH 29H AEH 2AH 74H OOH 6EH 4FH
0320H=60H 24H 78H 29H 12H ODH B4H 12H 0611 25H F5H 30H 85H 30H 26H 51H
0330H=78H 12H ODH 74H 80H OEH 85H 27H 83H 8SH 28H 82H E5H 2CH FOH 78H

0350H=12H 06H 82H 85H 29H 33H 12H 06H 82H AFH 28H AEH 29H 74H OOH 6EH

0360H=4FH 60H 18H 78H 28H 12H ODH B4H 85H 26H 83H 85}1 27H 82H EOH F5H
0370H=33H 12H 06H 82H 78H 26H 12H ODH BCH 80H DEH 22H 75H 33H OlH 12H
0380H=06H 82H 85H 26H 33H 12H 06H 82H 75H 33H OOH 12H 06H 82H 22H 75H
0390H=33H 05H 12H 06H 82H 75H 33H OOH 12H 06H 82H 75H 33H OOH 12H 06H
03AOH=82H 12H 06H 25H F5H 28H 12H 06}1 25}1 F5H 29H AFH 28H AEH 29H 74H
03BOH=OOH BFH OOH 03}1 9EH 50H 16H 12H 06H 25H 85H 26H 83H 85H 27H 82H
03COH=FOH 78H 28H 12H ODH B4H 78H 26H 12}{ ODH BCH 80H DEH 22H 75H 33H
03DOH=04H 12H 06H 82H 75H 33H OlH 12H 06H 82H 75H 33}{ OOH 12H 06H 82H
03EOH=12H 06H 25}1 F5H 30H E5H 30H 22H 75H 33H 05H 12H 06H 82}1 85H 28H
03FOH=33H 12H 06H 82H 85H 29H 33H 12H 06H 82H 12H 06H 25H F5H 28H 12H
0400H=06H 25H F5H 29H AFH 28H AEH 29H 74H 0011 BFH OOH 03H 9EH SOH 16H
0410H=12H 06H 25H 85H 26H 83H 85H 27H 82H FOH 78H 28H 12H ODH B4H 78H
0420H=26H 12H ODH BCH 80H DEH 22H 85H 27H 31H 85H 28H 32H 85H 31H 26H
0430!1=51H 78H 12H ODH 74H 80H 04H 12H OAH 89H 22H 85H 32H 26H 51H 78H
0440H=12H ODH 74H 80H 04H 12H OAH 8DH 22H 22H 7411 02H 90H 4.0H 06H FOH
0450H=90H 40H 07H E5H 26H FOH A3}J E5H 27H FOH 75H 89H OlH 75H 8CH OOH
0460H=75H 8AH OOH 75H 88H lOH 22H E5}1 12H B4H OlH 13H A2H 03H 92H O9H
0470H=A2H 03H 92H OBH 30H 03H 05H 12}1 07H 88H 80H 03H 12H 07H 75H E5H
0480H=12H B4H 02H 13H A2H 03H 92H OAH A2}T 03H 92}1 OCH 30H 03H 05H 12H
0490H=07H AEH 80H 03H 12H 07}{ 9BH 22H 85H 28H 12H 85H 81H llH 7FH ODH
04AOH=7EH 2B}{ 8FH 26H 8EH 27H 91H 4AH D3H 92H 03H 91H 67H 12H ODH 74H
04BOH=80H 04H 12H OAH 83H 22H 75H 26H OlH 51H 78}1 12H ODH 74H 80H l9H
04COH=12H 06H OFH F5H 30H 75H 33H OOH 12}1 06H CEH D2H lOH 31H SlH A2H
04DOH=lOH 92H 08}1 31H 4DH 12H OAH 83H 22H 75H 88H OOH C3H 92H 03H 91H
04EOH=67H D2H OOH 22H A2H 02H B3H 50H 30H 74H OlH F5H 28H 91H 98H A2H
04FOH=OOH 92H OlH 74H 02H F5H 28H 91H 9811 A2H OOH 92H 03H 91}1 67H 75H
0500H=12H OlH A2H OlH 92H 03H 91H 67H 12H ODH 74H 80H 04H 12H OAH 83H
0510H=22H 75H 26H OlH 51H 78H 75H 2CH OOH 51H 19H 22H D2H 08H D2H A5H
0520H=C2H 02H C3}1 92H OAH 92H O9H C3H 92H OBH 92H OCH C2H ODH C3H 92H
0530H=OEH 92H OFH C2H B5H C2H BlH 90H OOH OOH EOH F511 30H 90H OOH OOH
0540H=EOH F5H 30H A2H BOH 92H ODH A2H ODH 92H llH 9011 OOH 00}1 EOH F5H
0550H=30H D2H B5H D2H BlH 12H OBH F8H 30H ODH 05H 12H 07H 4FH 80H 03H
0560H=12H 07H 62H 90H OON lOH EOH F5H 30H 90H OOH 20H EOH F5H 30H 90H
0570H=OOH 001{ E4H FOH 90H OOH OOH EOH 54H 06H B4H 06}1 02H D2H 02H 12H
0580H=07H 75H 12H 07H 9BH A2H llH B3H 50H 05H 12~5 ODH 5FH 80H 30H D2H
0590H=OFH A2H OFH B3H B3H 50H 28H 7F11 04H 7EII E4H 8FH 27}1 8EH 28H 91H
05AOH=27H 51H 2FH 12H ODH 74H 80H 15H 121{ OAH 83H D2H lOH E5H 2CH B4H
05BOH=OlH 02H 80H 03H C3H 80H OlH D3H 92H 08}1 31H 51H 22H 80H D2H 12H
05COH=ODH 74H 80H 34H 12H OAH 83H 12H 06H OFH 30H EOH 02}1 80H 03H C3H
05DOH=80H OlH D3H 92H OEH 75H 33H OOH 12}1 0611 9AH 75H 33H OlH 12H 06H
05EOH=B4H 90H OOH 02H E4H FOH 30H 1 lH 08H 74H 12H 90H OOH OlH FOH 80H
05FOH=06H 74H lOH 90H OOH OlH FOH 22H 12H O9H 04H 41H 48H 41H 4CH 41H
0600H=50H 41H 54H 41H 56H 41H 65H 41H 67H 41H 6BH 41H 6FH 41H 71H 20H
0610H=ODH lOH 20H llH ODH 20H BOH FDH E5H 90H C2H B3H 30H BOH FDH D2H
0620H=B3H 22H 74H OOH 22H 30H BOH FDH 75H 90H FFH D2H A5H C2H B3H 20H
0630H=BOH FDH E5H 90H D2H B3H 22H 20H O9H 03H 74H OOH 22H 75H 80H FFH
0640H=C2H ASH C2H BlH 20H A7H FDH E5H 80H D2H BlH D2H ASH 30H A7H FDH
0650H=22H 20H OAH 03H 74H OOH 22H 75H 80H FFH C2H A5H C2H B5H 20H B4H
0660H=FDH E5H 80H D2H B5H D2H A5H 30H B4H FDH 22H 20H ODH 13H 20H llH
0670H=lOH 20H BOH FDH 85H 33H 90H C2H B3H 30H BOH FDH 75H 90H FFH D2H
0680H=B3H 22H 30H BOH FDH C2H A5H 85H 33H 90H C2H B3H 20H BOH FDH 75H
0690H=9OH FFH D2H B3H D2H A5H 30H BOH FDH 22H 20H O9H OlH 22H 30H A7H
06AOH=FDH 85H 33H 80H D2H A5H C2H BlH 20H A7H FDH 75H 80H FFH D2H BlH
06BOH=30H A7H FDH 22H 20H OAH OlH 22H 30H B4H FDH 85H 33H 80H D2H A5H
06COH=C2H B5H 20H B4H FDH 75H 80H FFH D2H B5H 30H B4H FDH ~2H 20H OAH
06DOH=03H DlH 9AH 22H 20H 09H 03H DlH B4H 22H 30H B4H FDH 30H A7H FAH

06EOH=85H 33H 80H D2H A5H C2H BlH C21{ BSH 20H B4H FDH 20H A7H FAH 75H
06FOH=80H FFH D2H BlH D2H BSH 30H B4H FDH 30H h7H FAH 22H 30H ODH 4DH
0700H=78H 05H E;2H 30H ElH FCH 78H llH E5H 33H F2H 22H 78H OSH E2H 30H
0710H=E2H FCH 78H lOH E2H F5H 33H FSH 30H 22H 78H 05H E2H 30H E3H FCH
0720H=78H 21H E5H 33H F2H 22H 78H 05H E2H 30H E3H FCH 78H 22H E5H 33H
0730H=F2H 22H 78H OSH E2H 30H E3H FCH 78H 22H ESH 33H F2H 22H 30H ODH
0740H=OCH 78H 05H E2H 30H E4H FCH 78H 20H E2H F5H 33H 22H 80H FEH 78H
07SOH=OlH E2H 44H 22H F2H 78H OOH E2H 44H OlH F2H 78H OlH E2H 54H DFH
0760H=F2H 22H 78H OlH E2H 44H 20H F2H 78H OOH E2H 54H FEH F2H 78H OlH
0770H=E2H 54H DDH F2H 22H 78H OlH E2H 44H 20H F2H 78H OOH E2H 44H 04H
0780H=F2H 78H OlH E2H 54H DFH F2H 22H 78H OlH E2H 44H 20H F2H 78H OOH
0790H=E2H 54H FBH F2H 78H OlH E211 54H DFH F2H 22H 78H OlH E2H 44H 20H
07AOH=F2H 78H OOH.E2H 44H 02H F2H 78H OlH E2H 54H DFH F2H 22H 78H OlH
07BOH=E2H 44H 20H F2H 78H OOH E2H 54l{ FDH F215 78H OlH E2H 54H DFH F2H
07COH=22H 78H OlH E2H 44H OlH F2H 22H 78H OlH E21{ 54H FEH F2H 78H 02H
07DOH=E2H 54H FBH F2H 22H 78H OlH E2H 44H lOH F2H 22H 78H OlH E2H 54H
07EOH=EFH F2H 22H 78H OlH E2H 44H 04H F2H 22H 78H OlH E2H 54H FBH F2H
07FOH=22H 78H OlH E5H 33H E2H 22H 78H OlH E2H F5H 33H 22H 78H 02H E5H
0800H=33H F2H 22H 78H 02H E2H F5H 33H 22H 78H 05H E2H F5H 33H 22H 78H
0810H=OOH E2H A2H E2H 92H 12H 22H 78H 001{ E2H A2H ElH 92H 12H 22H 78H
0820H=OOH E2H C2H 12H 20H ElH 03H 02H 08H 32H 20H E2H 03H 02H 08H 32H
0830H=D2H 12H A2H 12H 22H 78H 02H E2H A2H E4H 22H 78H 02H E2H A2H E5H
0840H=22H 78H OOH F2H 22H 78H ()5H E2H 30H E3H FCH 78H 24H E2H A2H E5H
0850H=B3H 92H 12H 22H 90H 4EH OOH 78H ooH E2H FOH A3H 08H E2H FOH A31{
0860H=08H E2H FOH A3H 08H E2H FOH A3H 78H lOH E2H FOH A3H 08H E2H FOH
0870H=A3H 08H E2H FOH A3H 78H 20H E2H FOH A3H 08H E2H FOH A3H 08H E2H
0880H=FOH A3H 08H E2H FOH A3H 08H E2H FOH A2H 12H 92H 13H 90H 4EH OOH
0890H=78H OOH EOH F2H A3H 08H FOH E2H A3H 08H EOH F2H A3H 08H EOH F2H
08AOH=A3H 78H lOH E2H EOH F2H A3H 08H EOH F2H A3}1 08H EOH F2H 781{ 21H
08BOH=A2H 13H 74H FFH SOH 02H 74H OOH F2H A3H 78H 20H EOH F2H A3H 08H
08COH=EOH F2H A3H 08H EOH F2H A3H 08H EOH F2H 75H 26H 43H 12H 03H 7CH
08DO}1=75H 26H 45H 12H 03H 7CH 22H 75H 26H 52H 12}1 03H 7CH 75H 26H 45H
08EOH=12H 03H 7CH 22H 12H 06~{ 25H F5H 32H ESH 08H 25H 32H FSH 08H 12H
08FOH=06H 25H F5H 31H ESH 08H 25H 31H F5H 08H 22H llH E4H 85H 31}{ 09H
0900H=85H 32H OAH 22H C2H lOH C2H OFH 75H 33H 05H 12H 06H 82H 75H 33H
O910H=OOH 12H 06H 82H 75H 33H OOH 12H 06M 82H 12H 06H 25H FSH 30H 12H
0920H=06H 25H FSH 30H 12H 06H 25H F5H 08H 85H 08H ODH llH FB}{ ESH ODH
0930H=84H 02H 02H 80H OEH ESII ODH B4H 06H 02H 80}1 45H ESH ODH B4H 04H
0940H=02H 21H FBH 75H lOH Ol}l 75H OFH OOH AFH 09H AEH OAH lEH BEH FFH
O950H=OlH lFH ADH OFH ACH lOH EEH C3H 9CH EFH 9DH 40H 15H 12H 06H 25H
0960H=25H 08H FSH 08H 78H lOH 74H OlH 26H F6H SOH 04H 18H E4H 36H F6H
0970H=SOH D7H 12H 06H 25H FFH ESH 08H F4H 04H 6FH 60H 02H D2H lOH 80H
0980H=A3H 12H 06H 25H 64H OOH 60H 02H D2H OFH llH E4H 85H 31H 26H 12H
0990H=02H 78H 12H ODH 74H 80H 04H 85H 2CH OBH 22H 85H 32H 26H 12H 02H
09AOH=78H 12H ODH 74H 80H 04H 85H 2CH OCH 22H 75H lOH OlH 75H OFH OOH
O9BOH=AEH 09H AFH OAH 74H 04H 12H ODH AAH ADH OFH ACH lOH EFH C3H 9CH
O9COH=EEH 9DH 40H 28H 12H 06H 25H F5H OEH 85H OEH 26H 12H 02H 78H E5H
O9DOH=08H 25H OEH F5H 08H 12H ODH 74H 80H 04H 12H OBH B7H 22H 78H lOH
O9EOH=74H OlH 26H F6H SOH 04H 18H E4H 36H F6H 50H C4H 12H 06H 25H FFH
09FOH=ESH 08H F4H 04H 6FH 60H 02H D2H lOH 21H 24H 12H 06H 25H F5H OEH
OAOOH=ESH 08H 25H OEH FSH 08H 75H lOH OlN 75H OFH OOH AEH 09H AFH OAH
OAlOH=74H 02H 12H ODH AAH ADH OFH ACH lOH EFH C3H 9CH EEH 9DH 40H 15H
OA20H ` 12H 06H 25H 25H 08H F5H 08H 78H lOH 74H OlH 26H F6H 50H 04H 18H
OA30H=E4H 36H F6H 50H D7H 12H 06H 25H FFH ESH 08H F4H 04H 6FH 60H 02H
OA40H=D2H lOH 30H lOH 02H llH ~AH 30H OFH 02H llH D7H 90H 40H OlH 12H
OA50H=ODH 9AH 21H 04H 30H 08H 08H 85H 2CH 33H 12H 07H lAH 80H 06H 75H


OA60H=33H FFH 12H 07H lAH 301{ ODH 05H 12H 07}1 3EH F5H 2~H 22H 30H ODH
OA70H=06H 85H 2EH 33H 12H 06H FDH 12H 07H OCH F5H 30H 30H 08H 03H 85H
OA80H=30H 2CH 22H D2H 08H 22H C2H 08H 22H 85H 2CH 31H 22H 85H 2CH 32H
OA9OH=30H 08H 03H 12H 07H ~FH 12H 06H 37}1 30H EOH 02H 80H 03H C3H 80H
OAAOH=OlH D3H B3H 82H OBH 92H O9H 12H 06H 51H 30H EOH 02H 80H 03H C3H
OABOH=80H OlH D3H B3H 82H OCH 92H OA}I 12H 08H 41H A2H OBH B3H 50H 03H
OACOH=12H 07H 75H A2H OCH B3H 50H 03H 12H 07H 9BH E4H A2H 08H 33H F5H
OADOH=33H 12H 06H 6BH A2H OBH 72H llH 92H ODH 30H 08H 05H 78}{ 31H 121{
OAEOH=ODH AlH 22H 51H 83H 12H 06H 37H A2}1 OBH B3H FFH E4H 33H 4FH F5H
OAFOH=30H 12H 06H 51H A2H OCH B3H FFH E4H 33H 4FH 55H 30H F5H 30H 12H
OBOOH=08H 41H 30H OBH 08H 12H 08H OFH B3H 92H 09H 80H 03H 12H 07H 75H
OBlOH=30H OCH 08H 12H 08H 17H B3H 92H OAH 80H 03H 12H 07H 9BH 85H 30H
OB20H=33H 12H 06H 6BH 22H 51H 86H 22H 12H 06H OFH F5H 2DH A2H 08H B3H
OB30H=50H 03H 75H 2CH OOH 78H 27H 12H ODH AlH 85H EOH 30H 85H 30H 33H
OB40H=12H 06H CEH 22H 12H 06H 37H F5H 2DH 12H 06H 51H F5H 31H A2H 08H
OB50H=B3H 50H 03H 75H 2CH OOH 78H 21H 12H OD1{ AlH 85H EOH 30H 85H 30H
OB60H=33H 12H 06H 6BH 22H 12H 06H OFH F5H 2DH 12H 06H OFH F5H 2El{ A2H
OB70H=08H B3H 50H 03H 75H 2CH OOH 78H 27H 12H ODH AlH 85H 2EH 33H 12H
OB80H=06H CEH 85H 2FH 33H 12H 06H CEH 22H 12H 06H 37H F5H 2DH 12H 06H
OB9OH=37H F5H 2EH 12H 06H 51H F5H 31H 12H 06H 51H F5}1 32H A2H 08H B31{
OBAOH=SOH 03H 75H 2CH OOH 78H 27H 12H ODH AlH 85H 2EH 33H 12H 06H 6BH
OBBOH=85}{ 2FH 33H 12H 061[ 6BH 22H A2H 08H B3H 5011 02H 80H 18H AFl{ OBH
OBCOH=AEH OCH 7DH OFl{ 7CH FFH EEH D3H 9CH EFH 9DH 40H O9H 85H OBH 83H
OBDOH=85H OCH 82H E5H 2CN FOH 78H OBH 12H ODH BCH 22H 75H DOH OOH 75H
OBEOH=81H 4DH 75}1 80H FFH 75H 90H FFH 75H BOH FFH 75H AOH BFH E4H 78H
OBFOHS04H F2H 12H ODH 4EH 02H 05H lCH 74H OOH 90H 3FH FFH 7BH 40H 7AH
OCOOH=FFH A3H FOH DAH FCH DBH F8H 78H 02H E2H 30H E2H 03H 12H OCH 8CH
OClOH=90H 40H OOH 74H OlH C2H OOH FO}I FSl{ 30H EOH B5H 30H OEH 23H 30H
OC20H=EOH FSH 20H OOH 12H D2H OOH 90H 40H OlH 80H EBH 30H OOH 04H C2H
OC30H=AlH 80H FEH C2H AOH 80H FEH 90H 40H 001{ 74H OlH 78H OlH C8H 08H
OC40H=23H FSH 82H C8H FOH FSH 30H EOH BSH 30H 3BH 30H E7H FOH 90H lOH
OCSOH=04H F8H 74H 011{ F9H 44H 40H F5H 83H E8H FOH F5H 30H EOH B5H 30H
OC60H=25H E9H 23H 30H E4H EEH 75H A8H 83H 75H B8H OOH 90H 401{ OFH 74H
OC70H=02H FOH 74H 4011 A3H FO}{ 74H OFH A3H FOH 78H 02H E2H 54}1 F8H F2H
OC80H=14H lOH 78H OlH F2H 22H G2H AOH C2H Al}l 80H FEH 12H 07H C8H F2H
OC90}1=78H 02H E2H 20H E2H 1~6H 78H OlH 74H lOH F2H 75H 08H FFH 12H OCH
OCAOH=AEH 75H 08H AAH 12H OCH AEH 75H 081{ CCH 12H OCH AEH 22H C2H OOH
OCBOH=90H 40H OOH 12H OCH FOH 90H 40H 001{ 1211 OCH FDH 12H ODH 06H 20H
OCCOH=OOH 07H D2H OOH 90H 40H OlH 80H FOH C2H OOH 90H 40H OOH 7BH 20}1 OCDOH=7AH FFH 74H OlH FOH F5H 30H EOH BSH 30H 45H 23H 30H EOH F5H A311 OCEOH=A3H DAH EFH DBH EBH 20}1 OOH 07H D2H OOH 90H 40H OlH 80H DFH 22H
OCFOH=7BH 40H 7AH FFH E5H 08H FOH A3H DAH ` FCH DBH F6H 22H 7AH FFH 7CH
ODOOH=FFH DCH FEH DAH FAH 22H 7BH 20H 7AH FFH EOH B5H 08H 12H 12H ODH
ODlOH=18H A3H A3H DAH F5H DBH FlH 221{ 30H B2H OlH 22H C2H A2H 80H FEH
OD20H=30H OOH 04H C2H AlH 80H FEH C2H AOH 80H FEH 75H 88H OOH 12H 07H
OD30H=ClH 85H llH 81H C2H 03H 12H 04H 67H C2H OOH 12H 07H C81{ 32H 78H
OD40H=OSH E2H 30H EOH 06H 78H OlH E2H 44H OlH F2H 12H 40H OFH 32H 12H
ODSOH=40}1 06H 32H 12H 40H 03H 32H 12H 40H 09H 32H 12H 40H OCH 32H 90H
OD60H=40H lSH 74H 02H FOH A3H 12H 07H OCH FOH A3H 12H 07H OCH FOH 12H
OD70H=40H 15H 80H EBH A8H 81H 86H 83H 18H 86H 82H A3H A3H 85H 83H 33H
OD80H=121{ 06H FDH 12H 07H OCH 85H 82H 33H 12H 06H FDH 12H 07H OCH E4H
OD90H=73H E4H 93H F5H FOH 74H OlH 93H 80H OBH EOH F5H FOH A3H EOH 80H
ODAOH=04H 86H FOH 08H E6H COH EOH COH FOH 22H 60H 07H F4H 04H 2FH FFH
ODBOH=40H OlH lEH 22H 08H 16H B6H FFH 02H 18H 16H 22H 08H 06H B6H OOH
ODCOH=02H 18H 06H 22H

Copyright Columbia University 1986
dasm 0 to dc3h
OOOOH=LJMP OBDCH
0003}1=LJMP OD3FH
0006H=MOV R7,A
0007H=MOV R7,A
0008H=MOV R7,A
OOO9H=MOV R7, A
OOOAH=MOV R7,A
OOOBH=LJMP OD4FH
OOOEH=MOV R7,A
OOOFH=MOV R7,A
OOlOH=MOV R7,A
0011H=MOV R7,A
0012H=MOV R7,A
0013H=LJMP OD53H
0016H=MOV R7,A
0017H=MOV R7,A
0018H=MOV R7,A
OOl9H=MOV R7,A
OOlAH=MOV R7,A
OOlBH=LJMP ODS7H
OOlEH=MOV R7,A
OOlFH=MOV R7,A
0020H=MOV R7, A
002 lH=MOV R7, A
0022H=MOV R7,A
0023H=LJMP ODSBH
0026H=LJMP 0240H
0029H=LJMP 025DH
002C}I=LJMP 0278H
002FH=LJMP OA6EH
0032H=LJMP 014DH
0035H=LJMP OA54H
0038H=LJMP 0427H
003BH=LJMP OA89H
003EH=LJMP OA8DH
0041H=LJMP 0219H
0044H=LJMP OB25H
0047H=LJMP 022FH
004AH=LJMP OAE3H
004DH=LJMP OD74H
0050H=LJMP OA83H
0053H=LJMP OA86H
0056H=LJMP 03CEH
OOS9H=LJMP 037CH
005CH=LJMP 02EOH
005FH=LJMP 0285H
0062H=LJMP 0625H
0065H=LJMP 0682H
0068H=LJMP 06CEH
006BH=LJMP O lS lH
006EH=LJMP 017BH
0071H=LJMP OlA6H
0074H=LJMP OlDlH
0077H=LJMP O lFSH
007AH=LJMP 060FH
007DH=LJMP 0637H
0080H=LJMP 065 lH
0083H=LJMP 066BH

0086H=LJMP 069AH
0089H=LJMP 06B4H
008CH=LJMP 070CH
008FH=LJMP 06FDH
0092H=LJMP 073EH
0095H=LJMP 07lAH
0098H=LJMP 0726H
OO9BH=LJMP 0732H
009EH=LJMP 074FH
OOAlH=LJMP 0762H
OOA4H=LJMP 07ClH
OOA7H=LJMP 07C8H
OOAAH=LJMP 07FlH
OOADH=LJMP 07F7H
OOBOH=LJMP 07FDH
OOB3H=LJMP 0803H
OOB6H=LJMP 0809H
OOB9H=LJMP 080FH
OOBCH=LJMP 0817H
OOBFH=LJMP 081FH
OOC2H=LJMP 0845H
OOC5H=LJMP 0854H
OOC8H=LJMP 088DH
OOCBH=LJMP 0835H
OOCEH=LJMP 083BH
OODlH=LJMP 0841H
OOD4H=LJMP 0775H
OOD7H=LJMP 0788H
OODAH=LJMP 079BH
OODDH=LJMP 07AEH
OOEOH=LJMP OB28H
OOE3H=LJMP OB44H
OOE6H=LJMP OB65H
OOE9H=LJMP OB89H
OOECH=LJMP 0347H
OOEFH=LJMP 038FH
OOF2H=LJMP 03E8H
OOFSH=MOV R7,A
OOF6H=INC @Rl
OOF7H=LCALL 060FH
OOFAH=MOV 30}I,A
OOFCH=MOV 33H,2BH
OOFFH=LCALL 06CEH
0102H=RET
0103H=MOV C,OEH
0105H=CPL C
0106H=JNC OlOEH
0108H=MOV 33H,2BH
OlOBH=LCALL 066BH
OlOEH=LCALL 0637N
OlllH=MOV 30H,A
0113H=RET
0114H=JNB OEH,OllDH
0117H=MOV 33H,2BH
OllAH=LCALL 066BH
OllDH=LCALL 065lH
0120H=MOV 30H,A
0122H=RET
0123H=MOV C,OEH

0125H=CPL C
0126H=JNC 012FH
0128H=LCALL 060FN
012BH=MOV 30H,A
012DH=SJMP 0132H
012FH=MOV 30H,2BH
0132H=MOV 33H,2CH
0135H=LCALL 069AH
0138H=RET
0139H=JNB OEH,0143H
013CH=LCALL 06OFH
013FI{=MOV 3OH,A
0141H=SJMP 0146H
0143H=MOV 30H,2BH
0146H=MOV 33H,2CH
0149H=LCALL 06B4H
014CH=RET
014DH=LCALL OA54H
01501{=RET
Ol51H=MOV C,lOH
0153H=ANL C,08H
0155H=CPL C
0156H=CLR A
0157H=RLC A
0158H=MOV 33H,A
015AH=LCALL 07lAH
015DH=JNB ODH,0173H
0160H=LCALL 073EH
0163H=JNB EOH,0168H
0166H=SJMP 016BH
01681{=CLR C
0169H=SJMP 016CR
016BH=SETB C
016CH=MOV OFH,C
016EH=MOV C,OFH
0170H=CPL C
017lH=MOV OFH,C
0173}{=LCALL 0845H
0176H=ANL C,08H
0178H=MOV lOH,C
017AH=RET
017BH=JNB 08H,0185H
017EH=MOV A,26H
0180H=CPL A
0181H=MOV 26H,A
0183H=SJMP 0188H
0185H=MOV 26H,#FFH
0188N=MOV 33H,26H
018BH=LCALL 07lAH
018EH=JNB ODH,019EH
Ol91H=LCALL 073EH
0194H=MOV 30H,A
0196H=JNB 08H,019EH
Ol99H=MOV A,30H
Ol9BH=CPL A
Ol9CH=MOV 2FH,A
Ol9EH=LCALL 0845H
OlAlH=ANL C,08H
OlA3H=MOV lOH,C

OlA5H=RET
OlA6H=JNB 08H,OlBOH
OlA9H=MOV A,26H
OlABH=CPL A
OlACH=MOV 26H,A
OlAEH=SJMP OlB3H
OlBOH=MOV 26H,#FFH
OlB3H=MOV 33H,26H
OlB6H=LCALL 0726H
OlB9H=JNB ODH,OlC9H
OlBCH=LCALL 073EH
OlBFH=MOV 30H,A
OlClH=JNB 08H,OlC9H
OlC4H=MOV A,3OH
OlC6H=CPL A
OlC7H=MOV 2FH,A
OlC9H=LCALL 0845H
OlCCH=ANL C,08H
OlCEH=MOV lOH,C
OlDOH=RET
OlDlH=MOV C,08H
OlD3H=CPL C
OlD4H=JNC OlD9H
OlD6H=MOV 26H,#FFH
OlD9H=MOV 33H,26H
OlDCH=LCALL 07lAH
OlDFH=JNB ODH,OlEDH
OlE2H=LCALL 073EH
OlE5H=MOV 30H,A
OlE7H=JNB 08H,OlEDH
OlEAH=MOV 2FH,30H
OlEDH=LCALL 0845H
OlFOH=ANL C,08H
OlF2H=MOV lOH,C
OlF4H=RET
OlF5H=MOV C,08H
OlF7H=CPL C
OlF8H=JNC OlFDH
OlFAH=MOV 26H,#FFH
OlFDH=MOV 33H,26H
0200H=LCALL 0726H
0203H=JNB ODH,021lH
0206H=LCALL 073EH
020911=MOV 30H,A
020BH=JNB 08H,021lH
020EH=MOV 2FH,30H
021lH=LCALL 0845H
0214H=ANL C,08H
0216H=MOV lOH,C
0218H=RET
0219H=LCALL OD74H
021CH=SJMP 0222H
021EH=LCALL OB25H
022lH=RET
0222H=MOV C,llH
0224H=CPL C
0225H=JNC 022AH
0227H=LCALL 0762H
022AH=MOV C,llH

022CH=MOV ODH,C
022EH=RET
022FH=LCALL OD74H
0232H=SJMP 0238H
0234H=LCALL OAE3H
0237H=RET
0238H=MOV A,30H
023AH=CJNE A,#OOH,023FH
023DH=SJMP 022FH
023FH=RET
0240H=MOV A,26H
0242H=MOV DPTR,#05FBH
0245H=ADD A,.ACC
0247H=JMP @A+DPTR
0248H=ACALL OOF7H
024AH=SJMP 0256H
024CH=ACALL 0103H
024EH=SJMP 0256H
025OH=ACALL 0114H
0252H=SJMP 0256H
0254H=SJMP 0256H
0256H=JNB 08H,025CH
0259H=MOV 2CH,30H
025CH=RET
025DH=MOV A,26H
025FH=MOV DPTR,#06051{
026211=ADD A,.ACC
0264H=JMP @A+DPTR
0265H=SJMP 0271H
0267H=ACALL 0123H
02691{=SJMP 0271H
026BH=ACALL 0139H
026DH=SJMP 027lH
026FII=SJMP 0271H
0271H=JNB 08H,0277H
0274H=MOV 2BH,30H
0277H=RET
0278H=MOV 2EH,26H
027BH=LCALL OD74H
027EH=SJMP 0284H
0280H=LCALL OA6EH
0283H=RET
0284H=RET
0285H=MOV 26H,27H
0288H=ACALL 0278H
028AH=LCALL OD74H
028DH=SJMP 0293H
028FH=MOV 27H,2CH
0292H=RET
0293H=MOV 26H,28H
0296H=ACALL 0278H
0298H=LCALL OD74H
029BH=SJMP 02AlH
029DH=MOV 28H,2CH
02AOH=RET
02AlH=MOV 33H,#03H
02A4H=LCALL 0682H
02A7H=MOV 33H,29H
02AAH=LCALL 0682H
02ADH=MOV 33H,2AH
02BOH=LCALL 0682H
02B3H=MOV R7,29H
02B5H=MOV R6,2AH
02B7H=MOV A,#OOH
02B9H=XRL A,R6
02BAH=ORL A,R7
02BBH=JZ 02DFH
02BDH=MOV RO,#29H
02BFH=LCALL ODB4H
02C2H=LCALL OD74H
02C5H=SJ~IP 02D7H
02C7H=MOV .DPH,27H
02CAH=MOV .DPL,28H
02CDH=MOVX A,@DPTR
02CEH=MOV 2CH,A
02DOH=ACALL 014DH
02D2H=MOV RO,#27H
02D4H=LCALL ODBCH
02D7H=MOV 33H,2FH
02DAH=LCALL 0682H
02DDH=SJMP 02B3H
02DFH=RET
02EOH=MOV 26H,27H
02E3H=ACALL 0278H
02E5H=LCALL OD74H
02E8H=SJMP 02EEH
02EAH=MOV 27H,2CH
02EDH=RET
02EEH=MOV 26H,28H
02FlH=ACALL 0278H
02F3H=LCALL OD74H
02F6H=SJMP 02FCH
02F8H=MOV 28H,2CH
02FBH=RET
02FCH=MOV 33H,#05H
02FFH=LCALL 0682H
0302H=MOV 33H,29H
0305H=LCALL 0682H
0308H=MOV 33H,2AH
030BH=LCALL 0682H
030EH=LCALL 0625H
0311H=MOV 29H,A
0313H=LCALL 0625H
0316H=MOV 2AH,A
0318H=MOV R7,29H
031AH=MOV R6,2AH
031CH=MOV A,#OOH
031EH=XRL A,R6
031FH=ORL A,R7
0320H=JZ 0346H
0322H=MOV RO,#29H
0324H=LCALL ODB4H
0327H=LCALL 0625H
032AH=MOV 30H,A
032CH=MOV 26H,30H
032FH=ACALL 0278H
0331H=LCALL OD74H
0334H=SJMP 0344H

0336H=MOV .DPH,27H
0339H=MOV .DPL,28H
033CH=MOV A,2CH
033EH=MOVX @DPTR,A
033FH=MOV RO,#27H
034111=LCALL ODBCH
0344H=SJMP 0318H
0346H=RET
0347H=MOV 33H,#03H
034AH=LCALL 0682H
034DH=MOV 33H,28H
0350H=LCAI.L 0682H
0353H=MOV 33H,29H
0356H=LCALL 0682H
0359H=MOV R7,28H
035BH=MOV R6,29H
035DH=MOV A,#OOH
035FH=XRL A,R6
0360H=ORL A,R7
0361H=JZ 037BH
0363H=MOV RO,#28H
0365H=LCALL ODB4H
03681{=MOV .DP}{,26H
036BH=MOV .DPL,27H
036EH=MOVX A,@DPTR
036FH=MOV 33H,A
03711{=LCALL 0682H
037411=MOV RO,#26H
0376H=LCALL ODBCH
0379H=SJMP 0359H
037BH=RET
037CH=MOV 33}{,#0lH
037FH=LCALL 0682H
0382H=MOV 33H,26H
0385}1=LCALL 0682H
0388H=MOV 33H,#OOH
038BH=LCALL 0682H
038EH=RET
038FI{=MOV 33H,#05H
0392H=LCALL 0682H
0395H=MOV 33H,#OOH
0398H=LCALL 0682H
039BH=MOV 33H,#OOH
039EH=LCALL 0682H
03AlH=LCALL 0625H
03A4H=MOV 28H,A
03A6H=LCALL 0625H
03A9H=MOV 29H,A
03ABH=MOV R7,28H
03ADN=MOV R6,29H
03AFH=MOV A,#OOH
03BlH=CJNE R7,#OOH,03B7H
03B4H=SUBB A,R6
03B5H=JNC 03CDH
03B7H=LCALL 0625H
03BAH=MOV .DPH,26H
03BDH=MOV .DPL,27H
03COH=MOVX @DPTR,A
03ClH=MOV RO,#28H

03C3H=LCALL 0DB4H
03C6H=MOV RO,#26H
03C8H=LCALL ODBCH
03CBH=SJMP 03ABH
03CDH=RET
03CEH=MOV 33H,#04H
03DlH=LCALL 0682H
03D4}{=MOV 33H,#OlII
03D7H=LCALL 0682H
03DAH=MOV 33H,#OOH
03DDH=LCALL 0682H
03EOH=LCALL 0625H
03E3H=MOV 30H,A
03E5H=MOV A,30H
03E7H=RET
03E8H=MOV 33H,#05H
03EBH=LCALL 0682H
03EEH=MOV 33H,28H
03FlH=LCALL 0682H
03F4H=MOV 33H,29H
03F7H=LCALL 0682H
03F~I=LCALL 0625H
03FDH=MOV 28H,A
03FFH=LCALL 0625}I
0402H=MOV 29H,A
0404H=MOV R7,28H
0406}1=MOV R6,29H
0408H=MOV A,#OOH
040AH=CJNE R7,#OOH,0410H
040DH=SUBB A,R6
040EH=JNC 0426H
0410H=LCALL 0625H
0413H=MOV .DPH,26H
0416H=MOV .DPL,27H
0419H=MOVX @DPTR,A
04lAH=MOV RO,#28H
041CH=LCALL ODB4H
041FH=MOV RO,#26H
042111=LCALL ODBCH
04241{=SJMP 0404H
0426H=RET
0427H=MOV 3lH,27H
042AH=MOV 32H,28H
042DH=MOV 26H,31H
0430H=ACALL 0278H
0432H=LCALL OD74H
0435H=SJMP 043BH
0437H=LCALL OA89H
043AH=RET
043BH=MOV 26H,32H
043EH=ACALL 0278H
0440H=LCALL OD74H
0443H=SJMP 0449H
0445H=LCALL OA8DH
0448H=RET

044AH=MOV A,#02H
044CH=MOV DPTR,#4006H
044FH=MOVX @DPTR,A

0450H=MOV DPTR,#4007H
0453H=MOV A,26H
0455H=tlOVX @DPTR,A
0456H=INC DPTR
0457H=MOV A,27H
0459H=MOVX @DPTR,A
045AH=MOV .TMOD,#OlH
045DH=MOV .THO,#OOH
0460H=MOV .TLO,#OOH
0463H=MOV .TCON,#lOH
0466H=RET
0467H=MOV A,12H
0469H=CJNE A,#01H,047FH
046CH=MOV C,03H
046EH=MOV O9H,C
047OH=MOV C,03H
04721{=MOV OBH,C
0474H=JNB 03H,047CH
0477H=LCALL 0788H
047AH=SJMP 047FH
047CH=LCALL 0775H
047FH=MOV A,12H
0481H=CJNE A,#02H,0497H
0484H=MOV C,03)I
0486H=MOV OAH,C
0488H=MOV C,03H
048AH=MOV OCH,C
048CH=JNB 03H,0494H
048FH=LCALL 07AEH
0492H=SJMP 0497H
049411=LCALL 079BH
04971{=RET
0498H=MOV 12H,281{
049BH=MOV llH,.SP
049EH=MOV R7,#ODH
04AOH=MOV R6,#2BH
04A2H=MOV 26H,R7
04A4H=MOV 27H,R6
04A6H=ACALL 044AH
04A8H=SETB C
04A9H=MOV 03H,C
04ABH=ACALL 0467H
04ADH=LCALL OD74H
04BOH=SJMP 04B6H
04B2H=LCALL OA83H
04B5H=RET
04B6H=MOV 26H,#OlH
04B9H=ACALL 0278H
04BBH=LCALL OD74H
04BEH=SJ~IP 04D9H
04COH=LCALL 060FH
04C3H=MOV 30H,A
04C5H=MOV 33H,#OOH
04C8H=LCALL 06CEH
04CBH=SETB lOH
04CDH=ACALL Ol5lH
04CFH=MOV C,lOH
04DlH=MOV 08H,C
04D3H=ACALL 014DH

04D5H=LCALL 0A83H
04D8H=RET
04D9H=MOV .TCON,#OOH
04DCH=CLR C
04DDH=MOV 03H,C
04DFH=ACALL 04671{
04ElH=SETB OOH
04E3H=RET
04E4H=MOV C,02H
04E6H=CPL C
04E7H=JNC 0519H
04E9H=MOV A,#OlH
04EBH=MOV 28H,A
04EDH=ACALL 0498H
04EFH=MOV C,OOH
04FlH=MOV OlH,C
04F3H=MOY A,#02H
04F5H=MOV 28H,A
04F7H=ACALL 0498H
04F9II=MOV C,OOH
04FBH=MOV 03H,C
04FDH=ACALL 0467H
04FFH=MOV 12H,#OlH
0502H=MOV C,OlH
0504H=MOV 03H,C
0506H=ACALL 0467H
0508H=LCALL OD74H
050BH=SJMP 05111{
050DH=LCALL OA83}{
0510H=RET
0511}{=MOV 26H,#OlH
0514H=ACALL 0278H
0516H=MOV 2CH,#OOH
0519H=ACALL 0219H
051BH=RET
05lCH=SETB 08H
05lE}{=SETB A5H
0520H=CLR 02H
0522H=CLR C
0523H=MOV OAH,C
0525H=MOV O9H,C
0527H=CLR C
0528H=MOV OBH,C
052AH=MOV OCH,C
052CH=CLR ODH
052EH=CLR C
052FH=MOV OEH,C
053lH=MOV OFH,C
0533H=CLR .T1
0535H=CLR .TXD
0537H=MOV DPTR,#OOOOH
053AH=MOVX A,@DPTR
053BH=MOV 30H,A
053DH=MOV DPTR,#OCOOH
0540H=MOVX A,@DPTR
054lH=MOV 30H,A
0543H=MOV C,.RXD
0545H=MOV ODH,C
0547H=MOV C,ODH

0549H=MOV 11H,C
054BH=MOV DPTR,#OOOOH
054EH=MOVX A,@DPTR
054FH=MOY 30H,A
0551H=SETB .T1
0553H=SETB .TXD
0555H=LCALL OBF8H
0558H=JNB ODH,0560H
055BH=LCALL 074FH
055EH=S~IP 0563H
0560H=LCALL 0762H
0563H=MOY DPTR,#OOlOH
0566H=MOVX A,@DPTR
0567H=MOV 30H,A
0569H=MOV DPTR,#0020H
056CH=MOYX A,@DPTR
056DH=MOV 3OH,A
056FH=MOV DPTR,#OOOOH
0572H=CLR A
0573H=MOVX @DPTR,A
0574H=MOV DPTR,#OOOOH
0577H=MOVX A,@DPTR
0578H=ANL A,#06H
057AH=CJNE A,#06H,057FH
057DH=SETB 02H
057FH=LCALL 0775H
0582H=LCALL 079BH
0585H=MOV C,llH
0587H=CPL C
0588H=JNC 058FH
058AH=LCALL OD5FH
058DH=SJMP 05BFH
058FH=SETB OFH
059lH=MOV C,OFH
0593H=CPL C
0594H=CPL C
0595H=JNC 05BFH
0597H=MOV R7,#04H
0599}1=MOV R6,#E4H
059BH=MOV 27H,R7
059DH=MOV 28H,R6
059FH=ACALL 0427H
05AlH=ACALL 022FH
05A3H=LCALL OD74H
05A6H=SJMP 05BDH
05A8H=LCALL OA83H
05ABH=SETB lOH
05ADH=MOV A,2CH
05AFH=CJNE A,#01H,05B4H
05B2H=SJMP 05B7H
05B4H=CLR C
05BSH=SJMP 05B8H
05B7H=SETB C
05B8H=MOV 08H,C
05BAH=ACALL Ol5lH
05BCH=RET
05BDH=SJMP 059lH
05BFH=LCALL OD74H
05C2H=SJMP 05F8H

05C4H=LCALL OA83H
05C7H=LCALL 060FH
05CAH=JNB EOH,05CFH
05CDH=SJMP 05D2H
05CFH=CLR C
05DOH=SJMP 05D3H
05D2H=SETB C
05D3H=MOV OEH,C
05DSH=MOV 33H,#OOH
05D8H=LCALL 069AH
05DBH=MOV 33H,#OlH
05DEH=LCALL 06B4H
05ElH=MOV DPTR,#0002H
05E4H=CLR A
05E5H=MOVX @DPTR,A
05E6H=JNB llH,05FlH
05E9H=MOV A,#12H
05EBH=MOV DPTR,#OOOlH
05EEH=MOVX @DPTR,A
05EFH=SJMP 05F7H
05FlH=MOV A,#lOH
05F3H=MOV DPTR,#OOOlH
OSF6H=MOVX @DPTR,A
05F7H=RET
05F8H=LCALL 0904H
05FBH=AJMP 0248H
05FDH=AJMP 024CH
05FFH=AJMP 0250H
0601H=AJMP 0254H
0603H=AJMP 0256H
0605H=AJMP 0265H
0607H=AJMP 0267H
0609H=AJMP 026BH
060BH=AJMP 026FH
060DH=AJMP 027lH
060FH=JB ODH,0622H
0612H=JB llH,0622H
0615H=JB .RXD,0615H
0618H=MOV A,.P1
061AH=CLR .INT1
061CH=JNB .RXD,061CH
061FH=SETB .INT1
0621H=RET
0622H=MOV A,#00H
0624H=RET
0625H=JNB .RXD,0625H
0628H=MOV .P1,#FFH
062BH=SETB A5H
062DH=CLR .INT1
062FH=JB .RXD,062FH
0632H=MOV A,.P1
0634H=SETB .INT1
0636H=RET
0637H=JB O9H,063DH
063AH=MOV A,#OOH
063CH=RET
063DH=MOV .PO,#FFH
0640H=CLR A5H
0642H=CLR .TXD


0644H=JB A7H,0644H
0647H=MOV A,.P0
0649H=SETB .TXD
064BH=SETB A5H
064DH=JNB A7H,064DH
0650H=RET
0651H=JB 0AH,0657H
0654H=MOV A,#00H
0656H=RET
0657H=MOV .P0,#FFH
065AH=CLR A5H
065CH=CLR .T1
065EH=JB .T0,065EH
0661H=MOV A,.P0
0663H=SETB .T1
0665H=SETB A5H
0667H=JNB .T0,0667H
066AH=RET
066BH=JB 0DH,0681H
066EH=JB 11H,0681H
0671H=JB .RXD,0671H
0674H=MOV .P1,33H
0677H=CLR .INT1
0679H=JNB .RXD,0679H
067CH=MOV .P1,#FFH
067FH=SETB .INT1
0681H=RET
0682H=JNB .RXD,0682H
0685H=CLR A5H
0687H=MOV .P1,33H
068AH=CLR .INT1
068CH=JB .RXD,068CH
068FH=MOV .P1,#FFH
0692H=SETB .INT1
0694H=SETB A5H
0696H=JNB .RXD,0696H
0699H=RET
069AH=JB 09H,069EH

069EH=JNB A7H,069EH
06AlH=MOV .PO,33H
06A4H=SETB ASH
06A6H=CLR .TXD
06A8H=JB A7H,06A8H
06ABH=MOV .P0,#FFH
06AEH=SETB .TXD
06BOH=JNB A7H,06BOH
06B3H=RET
06B4H=JB 0AH,06B8H
06B7H=RET
06B8H=JNB .T0,06B8H
06BBH=MOV .P0,33H
06BEH=SETB A5H
06C0H=CLR .T1
06C2H=JB .T0,06C2H
06C5H=MOV .P0,#FFH
06C8H=SETB .T1
06CAH=JNB .T0,06CAH
06CDH=RET
06CEH=JB 0AH,06D4H
06DlH=ACALL 069AH
06D3H=RET
06D4H=JB 09H,06DAH
06D7H=ACALL 06B4H
06D9H=RET
06DAH=JNB .T0,06DAH
06DDH=JNB A7H,06DAH
06E0H=MOV .P0,33H
06E3H=SETB A5H
06E5H=CLR .TXD
06E7H=CLR .T1
06E9H=JB .T0,06E9H
06ECH=JB A7H,06E9H
06EFH=MOV .P0,#FFH
06F2H=SETB .TXD
06F4H=SETB .T1
06F6H=JNB .T0,06F6H
06F9H=JNB A7H,06F6H
06FCH=RET
06FDH=JNB 0DH,074DH
0700H=MOV R0,#05H
0702H=MOVX A,@R0
0703H=JNB E1H,0702H
0706H=MOV R0,#11H
0708H=MOV A,33H
070AH=MOVX @R0,A
070BH=RET
070CH=MOV R0,#05H
070EH=MOVX A,@R0
070FH=JNB E2H,070EH
0712H=MOV R0,#10H
0714H=MOVX A,@R0
0715H=MOV 33H,A
0717H=MOV 30H,A
0719H=RET
071AH=MOV R0,#05H
071CH=MOVX A,@R0
071DH=JNB E3H,071CH
0720H=MOV R0,#21H
0722H=MOV A,33H
0724H=MOVX @R0,A
0725H=RET
0726H=MOV R0,#05H
0728H=MOVX A,@R0
0729H=JNB E3H,0728H
072CH=MOV R0,#22H
072EH=MOV A,33H
0730H=MOVX @R0,A
0731H=RET
0732H=MOV R0,#05H
0734H=MOVX A,@R0
0735H=JNB E3H,0734H
0738H=MOV R0,#22H
073AH=MOV A,33H
073CH=MOVX @R0,A
073DH=RET
073EH=JNB 0DH,074DH
0741H=MOV R0,#05H
0743H=MOVX A,@R0
0744H=JNB E4H,0743H
0747H=MOV R0,#20H
0749H=MOVX A,@R0
074AH=MOV 33H,A
074CH=RET
074DH=SJMP 074DH
074FH=MOV RO,#OlH
075lH=MOVX A,@RO
0752H=ORL A,#22H
0754H=MOVX @RO,A
0755H=MOV RO,#OOH
0757H=MOVX A,@RO
0758H=ORL A,#OlH
075AH=MOVX @RO,A
075BH=MOV RO,#OlH
075DH=MOVX A,@RO
075EH=ANL A,#DFH
0760H=MOVX @RO,A
076lH=RET
0762H=MOV RO,#OlH
0764H=MOVX A,@RO
0765H=ORL A,#20H
0767H=MOVX @RO,A
0768}1=MOV RO,#OOH
076AII=MOVX A,@RO
076BH=ANL A,#FEH
076DH=MOVX @RO,A
076E}I=MOV RO,#OlH
077011=MOVX A,@RO
077ll{=ANL A,#DDH
0773}{=MOVX @RO,A
0774H=RET
0775}1=MOV RO,#OlH
0777H=MOVX A,@RO
07781{=ORL A,#20H
077AH=MOVX @RO,A
077B}{=MOV RO,#OOH
077DH=MOVX A,@RO
077EH=ORL A,#04H
0780H=MOVX @RO,A
078lH=MOV RO,#OlH
0783H=MOVX A,@RO
0784H=ANL A,#DFH
0786H=MOVX @RO,A
0787H=RET
0788H=MOV RO,#OlH
078AH=MOVX A,@RO
078BH=ORL A,#20H
078DH=MOVX @RO,A
078EH=MOV RO,#OOH
0790H=MOVX A,@RO
0791H=ANL A,#FBH
0793H=MOVX @RO,A
n794H=MOV RO,#OlH
0796H=MOVX A,@RO
0797H=ANL A,#DFH
0799H=MOVX @RO,A
079AH=RET

079BH=MOV RO,#OlH
079DH=MOVX A,@RO
079EH=ORL A,#20H
07AOH=MOVX @RO,A
07AlH=MOV RO,#OOH
07A3H=MOVX A,@RO
07A4H=ORL A,#02H
07A6H=MOVX @RO,A
07A7H=MOV RO,#OlH
07A9H=MOVX A,@RO
07AAH=ANL A,#DFH
07ACH=MOVX @RO,A
07ADH=RET
07AEH=MOV RO,#OlH
07BOH=MOVX A,@RO
07BlH=ORL A,#20H
07B3H=MOVX @RO,A
07B4H=MOV RO,#OOH
07B6H=MOVX A,@RO
07B7H=ANL A,#FDH
07B9H=MOVX @RO,A
07BAH=MOV RO,#OlH
07BCH=MOVX A,@RO
07BDI{=ANL A,#DFH
07BFII=MOVX @RO,A
07COH=RET
07ClH=MOV RO,#OlH
07C3H=MOVX A,@RO
07C4H=ORL A,#OlH
07C6H=MOVX @RO,A
07C7H=RET
07C8H=MOV RO,#OlH
07CAH=MOVX A,@RO
07CBH=ANL A,#FEH
07CDH=MOVX @RO,A
07CEH=MOV RO,#02H
07DOH=MOVX A,@RO
07DlH=ANL A,#FBII
07D3H=MOVX @RO,A
07D4H=RET
07D5H=MOV RO,#OlH
07D7H=MOVX A,@RO
07D811=ORL A,#lOH
07DAH=MOVX @RO,A
07DBH=RET
07DCH=MOV RO,#OlH
07DEH=MOVX A,@RO
07DFH=ANL A,#EFH
07ElH=MOVX @RO,A
07E2H=RET
07E3H=MOV RO,#OlH
07ESH=MOVX A,@RO
07E6H=ORL A,#04H
07E8H=MOVX @RO,A
07E9H=RET
07EAH=MOV RO,#OlH
07ECH=MOVX A,@RO
07EDH=ANL A,#FBH
07EFH=MOVX @RO,A

07FOH=RET
07FlH=MOV RO,#OlH
07F3H=MOV A J 33H
07F5H=MOVX A,@RO
07F6H=RET
07F7H=MOV RO,#OlH
07F9H=MOVX A,@RO
07FAH=MOV 33H,A
07FCH=RET
07FDH=MOV RO,#02}I
07FFH=MOV A,33H
080lH=MOVX @RO,A
0802H=RET
0803H=MOV RO,#02H
0805H=MOVX A,@RO
0806H=MOV 33H,A
0808H=RET
0809H=MOV RO,#05H
080BH=MOVX A,@RO
080CH=MOV 33H,A
080EH=RET
080FH=MOV RO,#OOH
0811I{=MOVX A,@RO
0812H=MOV C,E2H
0814H=MOV 12H,C
0816H=RET
0817}1=MOV RO,#OOH
0819H=MOVX A,@RO
081AH=MOV C,ElH
OBlCII=MOV 12H,C
08lEH=RET
081FH=MOV RO,#OOH
082lH=MOVX A,@RO
0822H=CLR 12H
0824H=JB ElH,082AH
0827}1=LJMP 0832H
082AH=JB E2H,0830H
082DH=LJ~lP 0832H
0830H=SETB 12H
0832H=MOV C,12H
0834H=RET
0835H=MOV RO,#02H
0837H=MOVX A,@RO
0838H=MOV C,E4H
083AH=RET
083BH=MOV RO,#02H
083DH=MOVX A,@RO
083EH=MOV C,E5H
0840H=RET
0841H=MOV RO,#OOH
0843H=MOVX @RO,A
0844H=RET
0845H=MOV RO,#05H
0847H=MOVX A,@RO
0848H=JNB E3H,0847H
084BH=MOV RO,#24H
084D}{=MOVX A,@RO
084EH=MOV C,E5H
0850H=CPL C

0851H=MOV 12H,C
0853H=RET
0854H=MOV DPTR,#4EOOH
0857H=MOV RO,#OOH
0859H=MOVX A,@RO
085AH=MOVX @DPTR,A
085BH=INC DPTR
085CH=INC RO
085DH=MOVX A,@RO
085EH=MOVX @DPTR,A
085FII=INC DPTR
0860H=INC RO
0861H=MOVX A,@RO
0862H=MOVX @DPTR,A
0863H=INC DPTR
0864H=INC RO
0865H=MOVX A,@RO
0866H=MOVX @DPTR,A
0867H=INC DPTR
0868H=MOV RO,#10l{
086AH=MOVX A,@RO
086BH=MOVX @DPTR,A
086CH=INC DPTR
086DH=INC RO
086EH=MOVX A,@RO
086FII=MOVX @DPTR,A
0870H=INC DPTR
087lH=INC RO
0872H=MOVX A,@RO
0873H=MOVX @DPTR,A
0874H=INC DPTR
0875H=MOV RO,#20H
0877H=MOVX A,@RO
0878H=MOVX @DPTR,A
0879H=INC DPTR
087AH=INC RO
087BH=MOVX A,@RO
087CH=MOVX @DPTR,A
087DH=INC DPTR
087EH=INC RO
087FH=MOVX A,@RO
0880H=MOVX @DPTR,A
0881H=INC DPTR
0882H=INC RO
0883H=MOVX A,@RO
0884H=MOVX @DPTR,A
0885H=INC DPTR
0886H=INC RO
0887H=MOVX A,@RO
0888H=MOVX @DPTR,A
0889H=MOV C,12H
088BH=MOV 13H,C
088DH=MOV DPTR,#4EOOH
0890H=MOV RO,#OOH
0892H=MOYX A,@DPTR
0893H=MOVX @RO,A
0894H=INC DPTR
0895H=INC RO
0896H=MOVX A,@DPTR
0897H=MOVX @R0,A
0898H=INC DPTR
0899H=INC RO
089AH=MOVX A,@DPTR
089BH=MOVX @RO,A
089CH=INC DPTR
089DH=INC RO
089EH=MOVX A,@DPTR
089FH=MOVX @RO,A
08AOH=INC DPTR
08AlH=MOV RO,#lOH
08A3H=MOVX A,@RO
08A4H=MOVX A,@DPTR
08ASH=MOVX @RO,A
08A6H=INC DPTR
08A7H=INC RO
08A8H=MOVX A,@DPTR
08A9H=MOVX @RO,A
08AAH=INC DPTR
08AB11=INC RO
08ACH=MOVX A,@DPTR
08ADH=MOVX @RO,A
08AE}I=MOV RO,#21}{
08BOH=MOV C,13H
08B2H=MOV A,#FFH
08B4H=JNC 08B8H
08B6}1=MOV A,#OOH
08B8H=MOVX @RO,A
08B9H=INC DPTR
08BAH=MOV RO,#20H
08BCH=MOVX A,@DPTR
08BDH=MOVX @RO,A
08BEH=INC DPTR
08BFH=INC RO
08COH=MOVX A,@DPTR
08ClH=MOVX @RO,A
08C2H=INC DPTR
08C3H=INC RO
08C4H=MOVX A,@DPTR
08C5H=MOVX @RO,A
08C6H=INC DPTR
08C7H=INC RO
08C8H=MOVX A,@DPTR
08C9H=MOVX @RO,A
08CAH=MOV 26H,#43H
08CDH=LCALL 037CH
08DOH=MOY 26H,#45H
08D3H=LCALL 037CH
08D6H=RET
08D7H=MOV 26H,#52H
08DAH=LCALL 037CH
08DDH=MOV 26H,#45H
08EOH=LCALL 037CH
08E3H=RET
08E4H=LCALL 0625H
08E7H=MOV 32H,A
08E9H=MOV A,08H
08EBH=ADD A,32H
08EDH=MOV 08H,A

08EFH=LCALL 0625H
08F2H=MOV 31H,A
08F4H=MOV A,08H
08F6H=ADD A,31H
08F8H=MOV 08H,A
08FAH=RET
08FBH=ACALL 08E4H
08FDH=MOV O9H,3lH
O900H=MOV OAH,32H
090311=RET
0904H=CLR lOH
0906H=CLR OFH
0908H=MOV 33H,#05H
090BH=LCALL 0682H
O90EH=MOV 33H,#OOH
O9llH=LCALL 0682H
0914H=MOV 33H,#OOH
0917H=LCALL 0682H
09lAH=LCALL 0625H
O9lDH=MOV 30H,A
O9lFH=LCALL 0625H
0922H=MOV 30}{,A
0924H=LCALL 0625H
0927}{=MOV 08H,A
0929H=MOV ODH,OBH
092CH=ACALL 08FBH
092EH=MOV A,ODH
0930H=CJNE A,#02H,0935H
0933H=SJMP 0943H
0935H=MOV A,ODH
0937H=CJNE A,~06H,093CH
093AH=SJMP 0981H
093CH=MOV A,ODH
093EH=CJNE A,#04H,0943H
0941H=AJMP 09FBH
0943H=MOV lOH,#OlH
0946H=MOV OFH,#OOH
0949H=MOV R7,09H
094BH=MOV R6,OAH
094DH=DEC R6
094EH=CJNE R6,#FFH,0952H
0951H=DEC R7
0952H=MOV R5,0FH
0954H=MOV R4,10H
0956H=MOV A,R6
0957H=CLR C
0958H=SUBB A,R4
0959H=MOV A,R7
095AH=SUBB A,R5
095BH=JC 0972H
095DH=LCALL 0625H
0960H=ADD A,08H
0962H=MOV 08H,A
0964H=MOV RO,#lOH
0966H=MOV A,#OlH
0968H=ADD A,@RO
0969H=MOV @RO,A
096AH=JNC 0970H
096CH=DEC RO

096DH=CLR A
096EH=ADDC A,@RO
096FH=MOV @RO,A
0970H=JNC 0949H
0972H=LCALL 0625H
0975H=MOV R7,A
0976H=MOV A,08H
0978H=CPL A
09791{=lNC A
097AH=XRL A,R7
097BH=JZ 097FH
097DH=SETB lOH
097FH=SJMP 0924H
0981H=LCALL 0625H
0984H=XRL A,#OOH
0986H=JZ 098AH
0988}1=SETB OFH
098AH=ACALL 08E4H
098CH=MOV 26H,31H
098FH=LCALL 0278H
0992H=LCALL 0D74H
0995H=SJMP 099BH
0997H=MOV 0BH,2CH
099AH=RET
099BH=MOV 26H,32H
099EH=LCALL 0278H
O9AlH=LCALL OD74H
O9A4H=SJMP O9AA}I
O9A6H=MOV OCH,2CH
O9A9H=RET
O9AAH=MOV lOH,#OlH
O9ADH=MOV OFH,#OOH
O9BOH=MOV R6,09H
O9B2H=MOV R7,OA}I
O9B4H=MOV A,#04H
O9B6H=LCALL ODAAH
O9B9H=MOV R5,OFH
O9BBH=MOV R4,lOH
09BDH=MOV A,R7
09BEH=CLR C
09BFH=SUBB A,R4
09C0H=MOV A,R6
09C1H=SUBB A,R5
09C2H=JC 09ECH
O9C4H=LCALL 0625H
O9C7H=MOV OEH,A
O9C9H=MOV 26H,OEH
O9CCH=LCALL 0278H
O9CFH=MOV A,08H
O9DlH=ADD A,OEH
O9D3H=MOV 08H,A
O9D5N=LCALL OD74H
O9D8H=SJMP O9DEH
O9DAH=LCALL OBB7H
O9DDH=RET
O9DEH=MOV RO,#lOH
O9EOH=MOV A,#OlH
O9E2H=ADD A,@RO
O9E3H=MOV @RO,A

O9E4H=JNC O9EAH
09E6H=DEC RO
O9E7H=CLR A
O9E8H=ADDC A,@RO
09E9}1=MOV @RO,A
O9EAH=JNC O9BOH
O9ECH=LCALL 0625H
O9EFH=MOV R7,A
O9FOH=MOV A,08H
O9F2H=CPL A
O9F3H=INC A
09F4H=XRL A,R7
09F5H=JZ 09F9H
O9F7H=SETB lOH

O9FBH=LCALL 0625H
09FEH=MOV OEH,A
OAOOH=MOV A,08H
OA02H=ADD A,OEH
OA04}{=MOV 08H,A
OA06H=MOV lOH,#OlH
OA09H=MOV OFH,#OOH
OAOCH=MOV R6,09H
OAOEH=MOV R7,OAH
OAlOH=MOV A,#02H
0A12H=LCALL 0DAAH
OAl5H=MOV R5,OFH
OAl7H=MOV R4,lOH
0A19H=MOV A,R7
0A1AH=CLR C
0A1BH=SUBB A,R4
0A1CH=MOV A,R6
0A1DH=SUBB A,R5
0A1EH=JC 0A35H
OA20H=LCALL 0625H
OA23H=ADD A,08H
OA2SH=MOV 08H,A
OA27H=MOV RO,#lOH
OA29H=MOV A,#OlH
OA2BH=ADD A,@RO
OA2CH=MOV @RO,A
OA2DH=JNC OA33H
OA2FH=DEC RO
OA30H=CLR A
OA3lH=ADDC A,@RO
OA32H=MOV @RO,A
OA33H=JNC OAOCH
OA35H=LCALL 0625H
OA38H=MOV R7,A
OA39H=MOV A,08H
OA3BH=CPL A
OA3CH=INC A
0A3DH=XRL A,R7
0A3EH=JZ 0A42H
OA40H=SETB lOH
OA42H=JNB lOH,OA47H
OA45H=ACALL 08CAH
OA47H=JNB OFH,OA4CH
OA4AH=ACALL 08D7H

0A4CH=MOV DPTR,#4001H
0A4FH=LCALL 0D9AH
0A52H=AJMP 0904H
0A54H=JNB 08H,0A5FH
0A57H=MOV 33H,2CH
0A5AH=LCALL 071AH
0A5DH=SJMP 0A65H
0A5FH=MOV 33H,#FFH
0A62H=LCALL 071AH
0A65H=JNB 0DH,0A6DH
0A68H=LCALL 073EH
0A6BH=MOV 2FH,A
0A6DH=RET
0A6EH=JNB 0DH,0A77H
0A71H=MOV 33H,2EH
0A74H=LCALL 06FDH
0A77H=LCALL 070CH
0A7AH=MOV 30H,A
0A7CH=JNB 08H,0A82H
0A7FH=MOV 2CH,30H
0A82H=RET
0A83H=SETB 08H
0A85H=RET
0A86H=CLR 08H
0A88H=RET
0A89H=MOV 31H,2CH
0A8CH=RET
0A8DH=MOV 32H,2CH
0A90H=JNB 08H,0A96H
0A93H=LCALL 074FH
0A96H=LCALL 0637H
0A99H=JNB E0H,0A9EH
0A9CH=SJMP 0AA1H
0A9EH=CLR C
0A9FH=SJMP 0AA2H
0AA1H=SETB C
0AA2H=CPL C
0AA3H=ANL C,0BH
0AA5H=MOV 09H,C
0AA7H=LCALL 0651H
0AAAH=JNB E0H,0AAFH
0AADH=SJMP 0AB2H
0AAFH=CLR C
0AB0H=SJMP 0AB3H
0AB2H=SETB C
0AB3H=CPL C
0AB4H=ANL C,0CH
0AB6H=MOV 0AH,C
0AB8H=LCALL 0841H
0ABBH=MOV C,0BH
0ABDH=CPL C
0ABEH=JNC 0AC3H
0AC0H=LCALL 0775H
0AC3H=MOV C,0CH
0AC5H=CPL C
0AC6H=JNC 0ACBH
0AC8H=LCALL 079BH
0ACBH=CLR A
0ACCH=MOV C,08H
0ACEH=RLC A
OACFH=MOV 33H,A
OADlH=LCALL 066BH
OAD4H=MOV C,08H
OAD6H=ORL C,llH
OAD8H=MOV ODH,C
OADAH=JNB 08H,OAE2H
OADDH=MOV RO,#31H
OADFH=LCALL ODAlH
OAE2H=RET
OAE3H=ACALL OA83H
OAE5H=LCALL 0637H
OAE8H=MOV C,OBH
OAEAH=CPL C
OAEBH=MOV R7,A
OAECH=CLR A
OAEDH=RLC A
0AEEH=ORL A,R7
0AEFH=MOV 30H,A
OAFlH=LCALL 065lH
OAF4H=MOV C,OCH
OAF6H=CPL C
OAF7H=MOV R7,A
0AF8H=CLR A
0AF9H=RLC A
0AFAH=ORL A,R7
0AFBH=ANL A,30H
OAFDH=MOV 30H,A
OAFFH=LCALL 084lH
OB02H=JNB OBH,OBODH
OB05H=LCALL 080FH
OB08H=CPL C
OB09H=MOV O9H,C
OBOBH=SJMP OBlOH
OBODH=LCALL 0775H
OBlOH=JNB OCH,OBlBH
OBl3H=LCALL 0817H
OB16}1=CPL C
OBl7H=MOV OAH,C
OBl9H=SJMP OBlEH
OBlBH=LCALL 079BH
OBlEH=MOV 33H,30H
0B21H=LCALL 066BH
OB24H=RET
OB25H=ACALL OA86H
OB27H=RET
OB28H=LCALL 060FH
OB2BH=MOV 2DH,A
OB2DH=MOV C,08H
OB2FH=CPL C
OB30H=JNC OB35H
OB32H=MOV 2CH,#OOH
OB35H=MOV RO,#27H
OB37H=LCALL ODAlH
OB3AH=MOV 30H,.ACC
OB3DH=MOV 33H,30H
OB40H=LCALL 06CEH
OB43H=RET
OB44H=LCALL 0637H

0B47H=MOV 2DH,A
OB49H=LCALL 065lH
OB4CH=MOV 3lH,A
OB4EH=MOV C,08H
OB50H=CPL C
OB5l}l=JNC OB56H
OB53H=MOV 2CII,#OOH
OB56H=MOV RO,#27H
OB58H=LCALL ODAlH
OB5BH=MOV 30H,.ACC
OB5EH=MOV 33H,30H
OB61H=LCALL 066BH
OB64H=RET
OB65H=LCALL 060FH
OB68H=MOV 2DH,A
OB6A}{=LCALL 060FH
OB6DH=MOV 2EH,A
OB6FH=MOV C,08H
OB7lH=CPL C
OB72H=JNC OB77H
OB741{=MOV 2CH,#OOH
OB7711=MOV RO,#27H
OB79H=LCALL ODAlH
OB7CH=MOV 33H,2EH
OB7Fll=LCALL 06CEH
OB8211=MOV 33H,2FH
OB8511=LCALL 06CEH
OB8811=RET
OB89H=LCALL 0637H
OB8CH=MOV 2DH,A
0B8EH=LCALL 0637H
OB9lH=MOV 2EH,A
OB93H=LCALL 065lH
OB96H=MOV 31H,A
OB98H=LCALL 065lH
OB9BH=MOV 32}{,A
OB9DH=MOV C,081{
OB9FH=CPL C
OBAO}I=JNC OBA5H
OBA2H=MOV 2CH,#OOH
OBA512=MOV RO,#27H
OBA7H=LCALL ODAlH
0BAAH=MOV 33H,2EH
OBADII=LCALL 066BH
OBBOH=MOV 33H,2FH
OBB3H=LCALL 066BH
OBB6H=RET
OBB7H=MOV C,08H
OBB9H=CPL C
OBBAH=JNC OBBEH
OBBCH=SJMP OBD6H
OBBEH=MOV R7,OBH
OBCOH=MOV R6,OCH
OBC2H=MOV R5,#0FH
OBC4H=MOV R4,#El~H
0BC6H=MOV A,R6
0BC7H=SETB C
0BC8H=SUBB A,R4
0BC9H=MOV A,R7
0BCAH=SUBB A,R5
0BCBH=JC 0BD6H
OBCDH=MOV .DPH,OBH
OBDOH=MOV .DPL,OCH
OBD3H=MOV A,2CH
OBD5H=MOVX @DPTR,A
OBD6H=MOV RO,#OBH
OBD8H=LCALL ODBCH
OBDBH=RET
OBDCH=MOV .PSW,#OOH
OBDFH=MOV .SP,#4DH
OBE2H=MOV .PO,#FFH
OBE5H=MOV .Pl,#FFH
OBE8H=MOV .P3,#FFH
OBEBH=MOV .P2,#BFH
OBEEH=CLR A
OBEF}I=MOV RO,#04H
OBFlH=MOVX @RO,A
OBF2H=LCALL OD4EH
OBF5H=LJMP 05lCH
OBF8H=MOV A,#OOH
OBFAH=MOV DPTR,#3FFFH
OBFDH=MOV R3,#401I
OBFFH=MOV R2,#FFH
OCOlH=INC DPTR
OC02H=MOVX @DPTR,A
OC03H=DJNZ R2,0COlH
OC05H=DJNZ R3,OBFFH
OC07H=MOV RO,#02H
OC09H=MOVX A,@RO
OCOAH=JNB E2H,OClOH
OCODH=LCALL OC8CH
OClOH=MOV DPTR,#4000H
OC13H=MOV A,#OlH
OC15H=CLR OOH
OC17H=MOVX @DPTR,A
OC18H=MOV 30H,A
OClAH-MOVX A,@DPTR
OClBH=CJNE A,30H,OC2CH
OClEH=RL A
OClFH=JNB EOH,OC17H
OC22H=JB OOH,OC37H
OC25H=SETB OOH
OC27H=MOV DPTR,#400lH
OC2AH=SJMP OC17H
OC2CH=JNB OOH,OC33H
OC2FH=CLR AlH
OC3lH=SJMP OC3lH
OC33H=CLR AOH
OC35H=SJMP OC35H
OC37H=MOV DPTR,#4000H
OC3AH=MOV A,#OlH
OC3CH=MOV RO,#OlH
OC3EH=XCH A,RO
OC3FH=INC RO
OC40H=RL A
OC4lH=MOV .DPL,A
OC43H=XCH A,RO
OC44H=MOVX @DPTR,A

0C45H=MOV 30H,A
OC47H=MOVX A,@DPTR
OC48H=CJNE A,30H,OC86H
OC4BH=JNB E7H,OC3EH
OC4EH=MOV DPTR,#1004H
OCSlH=MOV RO,A
OC52H=MOV A,#OlH
0C54H=MOV R1,A
OC55H=ORL A,#40H
OC57H=MOV .DPH,A
OC59H=MOV A,RO
OC5AH=MOVX @DPTR,A
OC5BH=MOV 30H,A
OCSDH=MOVX A,@DPTR
OC5EH=CJNE A,30H,OC86H
0C61H=MOV A,R1
0C62H=RL A
OC63H=JNB E4H,OC54H
OC66H=MOV .IE,#83H
0C69H=MOV .IP,#00H
OC6CH=MOV DPTR,#400FH
OC6FH=MOV A,#02H
OC7lH=MOVX @DPTR,A
0C72H=MOV A,#40H
0C74H=INC DPTR
0C75H=MOVX @DPTR,A
OC76H=MOV A,#OFH
OC781{=INC DPTR
0C79H=MOVX @DPTR,A
OC7AH=MOV RO,#02H
OC7CII=MOVX A,@RO
OC7DH=ANL A,#F8H
0C7EH=MOVX @R0,A
OC80H=MOV A,#lOH
OC82H=MOV RO,#OlH
OC84H=MOVX @RO,A
OC85H=RET
OC86H=CLR AOH
OC88H=CLR AlH
OC8AH=SJMP OC8AH
OC8CH=LCALL 07C8H
OC8FH=MOVX @RO,A
OC9OH=MOV RO,#02H
OC92H=MOVX A,@RO
0C93H=JB E2H,0C8CH
0C96H=MOV R0,#01H
OC98H=MOV A,#lOH
OC9AH=MOVX @RO,A
OC9BH=MOV 08H,#FFH
OC9EH=LCALL OCAEH
OCAlH=MOV 08H,#AAH
OCA4H=LCALL OCAEH
OCA7H=MOV 08H,#CCH
OCAAH=LCALL OCAEH
OCADH=RET
OCAEH=CLR OOH
OCBOH=MOV DPTR,#4000H
OCB3}1=LCALL OCFOH
OCB6H=MOV DPTR,#4000H

0CB9H=LCALL 0CFDH
OCBCH=LCALL OD06H
OCBFH=JB OOII,OCC9H
OCC2H=SETB OOH
OCC4H=MOV DPTR,#400lH
OCC7H=S~IP OCB9H
OCC9H=CLR OOH
OCCBH=MOV DPTR,#4000H
OCCEH=MOV R3,#20H
OCDOH=MOV R2,#FFH
OCD2H=MOV A,#OlH
OCD4H=MOVX @DPTR,A
OCD5H=MOV 30H,A
OCD7H=MOVX A,@DPTR
OCD8H=CJNE A,30H,OD2OH
0CDBH=RL A
OCDCH=JNB EOH,OCD4H
OCDFH=INC DPTR
OCEOH=INC DPTR
OCElH=DJNZ R2,OCD2H
OCE3H=DJNZ R3,OCDOH
OCE5H=JB OOH,OCEFH
OCE8H=SETB OOH
OCEAII=MOV DPTR,#400lH
OCED11=SJMP OCCEH
OCEFH=RET
OCFOH=MOV R3,#40H
OCF2H=MOV R2,#FFH
OCF4H=MOV A,08H
OCF6H=MOVX @DPTR,A
OCF7H=INC DPTR
OCF8H=DJNZ R2,OCF6H
OCFAH=DJNZ R3,OCF2H
OCFCH=RET
OCFDH=MOV R2,#FFH
OCFFH=MOV R4,#FFH
ODOlH=DJNZ R4,ODOlH
OD03H=DJNZ R2,OCFFH
OD05H=RET
OD06H=MOV R3,#20H
OD08H=MOV R2,#FFH
ODOAH=MOVX A,@DPTR
ODOBH=CJNE A,08H,OD20H
ODOEH=LCALL OD18H
ODllH=INC DPTR
ODl2H=INC DPTR
OD13H=DJNZ R2,ODOAH
OD15H=DJNZ R3,OD08H
ODl7H=RET
OD18H=JNB .INTO,ODlCH
ODlBH=RET
ODlCH=CLR A2H
ODlEH=SJMP ODlEH
OD20H=JNB OOH,OD27H
0D23H=CLR A1H
0D25H=SJMP 0D25H
0D27H=CLR A0H
0D29H=SJMP 0D29H
OD2BH=MOV .TCON,#OOH

0D2EH=LCALL 07C1H
0D31H=MOV .SP,11H
OD34H=CLR 03H
OD36H=LCALL 0467H
OD39H=CLR OOH
OD3BH=LCALL 07C8H
OD3EH=RETI
OD3FH=MOV RO,#05H
OD41}{=MOVX A,@RO
OD42H=JNB EOH,OD4BH
OD45H=MOV RO,#OlH
OD47H=MOVX A,@RO
OD48H=ORL A,#OlH
OD4AH=MOVX @RO,A
OD4BH=LCALL 400FH
OD4EH=RETI
OD4FH=LCALL 4006H
OD52H=RETI
OD53H=LCALL 4003H
OD56H=RETI
OD57H=LCALL 4009H
OD5AH=RETI
OD5BH=LCALL 400CI{
OD5EH=RETI
OD5F1{=MOV DPTR,#4015H
OD62H=MOV A,#02H
OD64H=MOVX @DPTR,A
OD65H=INC DPTR
OD66H=LCALL 070CH
OD69H=MOVX @DPTR,A
OD6AH=INC DPTR
OD6BH=LCALL 070CH
OD6EH=MOVX @DPTR,A
OD6FH=LCALL 4015H
OD72H=SJMP OD5FH
OD74H=MOV RO,.SP
OD76H=MOV .DPH,@RO
OD7811=DEC RO
OD791{=MOV .DPL,@RO
OD7BH=INC DPTR
OD7CH=INC DPTR
OD7DH=MOV 33H,.DPH
OD80H=LCALL 06FDH
OD83H=LCALL 070CH
OD86H=MOV 33H,.DPL
OD89H=LCALL 06FDH
OD8CH=LCALL 07OCH
OD8F}{=CLR A
0D90H=JMP @A+DPTR
OD9lH=CLR A
OD92H=MOVC A,@A+DPTR
OD93H=MOV .B,A
OD9SH=MOV A,#OlH
OD97H=MOVC A,@A+DPTR
OD98H=SJMP ODA5H
OD9AH=MOVX A,@DPTR
OD9BH=MOV .B,A
OD9DH=INC DPTR
OD9EH=MOVX A,@DPTR

OD9FH=SJMP ODA5H
ODAlH=MOV .B,@RO
ODA3H=INC RO
ODA4H=MOV A,@RO
ODA5H=PUSH .ACC
ODA7H=PUSH .B
ODA9H=RET
ODAAH=JZ ODB3H
ODACH=CPL A
ODADH=INC A
0DAEH=ADD A,R7
0DAFH=MOV R7,A
ODBOH=JC ODB3H
0DB2H=DEC R6
0DB3H=RET
ODB4H=INC RO
ODB5H=DEC @RO
ODB6H=CJNE @RO,#FFH,ODBBH
ODB9H=DEC RO
ODBAH=DEC @RO
0DBBH=RET
ODBCH=INC RO
ODBDH=INC @RO
ODBEH=CJNE @RO,#OOH,ODC3H
ODClH=DEC RO
ODC21{=INC @RO
ODC3H=RET

Claims (9)

1. A parallel processor array comprising:
a plurality of processing elements, each comprising:
a processor having an arithmetic logic unit, control store, program sequencer and instruction decoder;
a read/write memory associated with said processor;
an input/output means associated with said processor and read/write memory;
means for interconnecting said processing elements in a binary tree in which each processing element except those at extremities of the binary tree is connected to one parent processing element and at least first and second child processing elements;
said input/output means comprising:
means for broadcasting information received from a parent processing element to said child processing elements; and means for determining a priority among information received from said child processing elements and information received from the processor with which said input/output means is associated;
wherein the input/output means of the processing elements connected in said binary tree cooperate so that information is broadcast from a first parent processing element to the child processing elements in said binary tree or subtree that are most remote from said first parent processing element in less than an average processor instruction cycle and a priority is determined among information at each processing element in said binary tree or subtree, each in less than an average processor cycle.
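The two tree-wide operations recited in claim 1 — broadcast from a parent down through a subtree, and priority resolution of candidate data back up toward that parent — can be illustrated with a small software model. This is a sketch only: the class and function names are illustrative, and the patent performs both operations in each processing element's I/O circuitry, pipelined one tree level at a time, not in software.

```python
# Toy model of the binary-tree broadcast and resolve operations.
# Names (PE, broadcast, resolve) are illustrative, not the patent's.

class PE:
    """Processing element: a local datum plus up to two child PEs."""
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right
        self.received = None   # last message broadcast to this PE

def broadcast(pe, message):
    """Propagate a message from a parent PE to every PE below it."""
    if pe is None:
        return
    pe.received = message
    broadcast(pe.left, message)
    broadcast(pe.right, message)

def resolve(pe):
    """Report the winning datum (here: the minimum) toward the root;
    each node compares its own datum against its children's winners."""
    if pe is None:
        return None
    winner = pe.value
    for child in (pe.left, pe.right):
        w = resolve(child)
        if w is not None and w < winner:
            winner = w
    return winner

# Seven-PE binary tree
root = PE(7, PE(3, PE(9), PE(1)), PE(5, PE(8), PE(2)))
broadcast(root, "find minimum")
print(resolve(root))  # 1
```

In the hardware, both traversals overlap across tree levels, which is why a full broadcast or resolve completes in less than an average instruction cycle per claim 1; the recursion here serializes what the I/O circuits do concurrently.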
2. The apparatus of claim 1, wherein said broadcasting means comprises:

a register having a serial data input and an output to said processor;
a flip-flop;
first means for applying data signals simultaneously to said serial data input and said flip-flop, and second means for applying data signals from said flip-flop to said first means in first and second child processing elements, whereby data signals are propagated through the binary tree or sub-tree by means of the flip-flops of the processing elements of the binary tree or sub-tree.
3. The apparatus of claim 2 further comprising:
means for generating a register full signal when said register is full and means for disabling the transmission of additional data to said plurality of processing elements while said register full signal is being generated.
4. The apparatus of claim 1 wherein the determining means comprises:
first, second and third registers, said first register storing data associated with said processing element and said second and third registers storing data associated with said first and second child processing elements, a comparator for comparing the data stored in said first, second and third registers to select a winning data in accordance with a predetermined priority, and means for reporting said winning data to one of said second or third registers in a parent processing element.
5. The apparatus of claim 4 further comprising means for disabling each processing element whose data is not selected as winning data, as a result of which every processing element except one is disabled by the time the winning data is reported to the first parent processing element in the binary tree or sub-tree.
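The disabling behaviour of claims 4 and 5 can be sketched the same way: at each node the comparator picks a winner among the node's own datum and the winners reported by its children, and every losing contender is disabled, so exactly one processing element remains enabled when the winning data reaches the first parent. Again a software sketch with illustrative names, not the comparator hardware itself:

```python
# Toy model of claims 4-5: resolve with loser disabling.
# Names are illustrative; the patent does this with registers and a
# comparator in each PE's I/O means.

class PE:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right
        self.enabled = True   # cleared when this PE's datum loses

def resolve_and_disable(pe):
    """Return the winning PE under pe, disabling every contender
    whose datum is not selected at some node on the way up."""
    if pe is None:
        return None
    contenders = [pe]
    for child in (pe.left, pe.right):
        w = resolve_and_disable(child)
        if w is not None:
            contenders.append(w)
    winner = min(contenders, key=lambda p: p.value)
    for p in contenders:
        if p is not winner:
            p.enabled = False
    return winner

def all_pes(pe):
    """Flatten the tree for inspection."""
    return [] if pe is None else [pe] + all_pes(pe.left) + all_pes(pe.right)

tree = PE(7, PE(3, PE(9), PE(1)), PE(5))
winner = resolve_and_disable(tree)
print(winner.value, sum(p.enabled for p in all_pes(tree)))  # 1 1
```

The surviving PE is then the natural source for the follow-up report of claim 6, since it is the only element still enabled when the winning data arrives at the root.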
6. The apparatus of claim 5 further comprising means for reporting to the first parent processing element in the binary tree information stored by the processing element that is not disabled.
7. The apparatus of claim 1 further comprising means for subdividing the binary tree into a plurality of sub-trees.
8. The apparatus of claim 7 wherein each sub-tree is operated in a single instruction multiple data mode and the plurality of sub-trees are operated in a multiple instruction multiple data mode.
9. The apparatus of claim 7, wherein at least two of the sub-trees execute identical programs on identical data and the results of such program execution are compared to detect faults in the sub-trees.
CA000545782A 1986-09-02 1987-08-31 Binary tree parallel processor Expired - Fee Related CA1291828C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US06/902,547 US4860201A (en) 1986-09-02 1986-09-02 Binary tree parallel processor
US902,547 1986-09-02

Publications (1)

Publication Number Publication Date
CA1291828C true CA1291828C (en) 1991-11-05

Family

ID=25416008

Family Applications (1)

Application Number Title Priority Date Filing Date
CA000545782A Expired - Fee Related CA1291828C (en) 1986-09-02 1987-08-31 Binary tree parallel processor

Country Status (9)

Country Link
US (1) US4860201A (en)
EP (1) EP0279854B1 (en)
JP (1) JP2763886B2 (en)
KR (1) KR930009619B1 (en)
AU (1) AU598425B2 (en)
CA (1) CA1291828C (en)
DE (1) DE3787886T2 (en)
IL (1) IL83734A (en)
WO (1) WO1988001771A1 (en)

Families Citing this family (186)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4843540A (en) * 1986-09-02 1989-06-27 The Trustees Of Columbia University In The City Of New York Parallel processing method
US5040109A (en) * 1988-07-20 1991-08-13 Digital Equipment Corporation Efficient protocol for communicating between asychronous devices
US5020059A (en) * 1989-03-31 1991-05-28 At&T Bell Laboratories Reconfigurable signal processor
US5471622A (en) * 1989-10-04 1995-11-28 Paralogic, Inc. Run-time system having nodes for identifying parallel tasks in a logic program and searching for available nodes to execute the parallel tasks
US5522083A (en) * 1989-11-17 1996-05-28 Texas Instruments Incorporated Reconfigurable multi-processor operating in SIMD mode with one processor fetching instructions for use by remaining processors
US5537593A (en) * 1990-02-12 1996-07-16 Fmc Corporation Method for solving enumerative search problems using message passing on parallel computers
EP0444368B1 (en) * 1990-02-28 1997-12-29 Texas Instruments France Digital Filtering with SIMD-processor
US5230047A (en) * 1990-04-16 1993-07-20 International Business Machines Corporation Method for balancing of distributed tree file structures in parallel computing systems to enable recovery after a failure
US5517626A (en) * 1990-05-07 1996-05-14 S3, Incorporated Open high speed bus for microcomputer system
US5280547A (en) * 1990-06-08 1994-01-18 Xerox Corporation Dense aggregative hierarhical techniques for data analysis
US6970834B2 (en) * 1990-06-15 2005-11-29 Arachnid, Inc. Advertisement downloading computer jukebox
US5930765A (en) * 1990-06-15 1999-07-27 Martin; John R. Downloading method for songs and advertisements
CA2093355A1 (en) * 1990-10-03 1992-04-04 David C. Douglas Parallel computer system
US5809292A (en) * 1990-11-13 1998-09-15 International Business Machines Corporation Floating point for simid array machine
US5963746A (en) * 1990-11-13 1999-10-05 International Business Machines Corporation Fully distributed processing memory element
US5590345A (en) * 1990-11-13 1996-12-31 International Business Machines Corporation Advanced parallel array processor(APAP)
US5794059A (en) * 1990-11-13 1998-08-11 International Business Machines Corporation N-dimensional modified hypercube
ATE180586T1 (en) * 1990-11-13 1999-06-15 Ibm PARALLEL ASSOCIATIVE PROCESSOR SYSTEM
US5734921A (en) * 1990-11-13 1998-03-31 International Business Machines Corporation Advanced parallel array processor computer package
US5966528A (en) * 1990-11-13 1999-10-12 International Business Machines Corporation SIMD/MIMD array processor with vector processing
US5588152A (en) * 1990-11-13 1996-12-24 International Business Machines Corporation Advanced parallel processor including advanced support hardware
US5617577A (en) * 1990-11-13 1997-04-01 International Business Machines Corporation Advanced parallel array processor I/O connection
US5815723A (en) * 1990-11-13 1998-09-29 International Business Machines Corporation Picket autonomy on a SIMD machine
US5630162A (en) * 1990-11-13 1997-05-13 International Business Machines Corporation Array processor dotted communication network based on H-DOTs
US5828894A (en) * 1990-11-13 1998-10-27 International Business Machines Corporation Array processor having grouping of SIMD pickets
US5708836A (en) * 1990-11-13 1998-01-13 International Business Machines Corporation SIMD/MIMD inter-processor communication
US5765011A (en) * 1990-11-13 1998-06-09 International Business Machines Corporation Parallel processing system having a synchronous SIMD processing with processing elements emulating SIMD operation using individual instruction streams
US5765015A (en) * 1990-11-13 1998-06-09 International Business Machines Corporation Slide network for an array processor
US5765012A (en) * 1990-11-13 1998-06-09 International Business Machines Corporation Controller for a SIMD/MIMD array having an instruction sequencer utilizing a canned routine library
US5963745A (en) * 1990-11-13 1999-10-05 International Business Machines Corporation APAP I/O programmable router
US5625836A (en) * 1990-11-13 1997-04-29 International Business Machines Corporation SIMD/MIMD processing memory element (PME)
IE920032A1 (en) * 1991-01-11 1992-07-15 Marconi Gec Ltd Parallel processing apparatus
JP3047998B2 (en) * 1991-02-13 2000-06-05 株式会社日立製作所 Processor allocation method and apparatus in parallel computer
US5978831A (en) * 1991-03-07 1999-11-02 Lucent Technologies Inc. Synchronous multiprocessor using tasks directly proportional in size to the individual processors rates
US5321813A (en) 1991-05-01 1994-06-14 Teradata Corporation Reconfigurable, fault tolerant, multistage interconnect network and protocol
US5594918A (en) * 1991-05-13 1997-01-14 International Business Machines Corporation Parallel computer system providing multi-ported intelligent memory
US20080228517A1 (en) * 1992-03-06 2008-09-18 Martin John R Computer jukebox and jukebox network
US6047122A (en) * 1992-05-07 2000-04-04 Tm Patents, L.P. System for method for performing a context switch operation in a massively parallel computer system
JP2647330B2 (en) * 1992-05-12 1997-08-27 インターナショナル・ビジネス・マシーンズ・コーポレイション Massively parallel computing system
JP2642039B2 (en) * 1992-05-22 1997-08-20 インターナショナル・ビジネス・マシーンズ・コーポレイション Array processor
US5394556A (en) * 1992-12-21 1995-02-28 Apple Computer, Inc. Method and apparatus for unique address assignment, node self-identification and topology mapping for a directed acyclic graph
JPH06208460A (en) * 1993-01-11 1994-07-26 Hitachi Ltd Microprogram memory control system
US5742806A (en) * 1994-01-31 1998-04-21 Sun Microsystems, Inc. Apparatus and method for decomposing database queries for database management system including multiprocessor digital data processing system
US5748780A (en) * 1994-04-07 1998-05-05 Stolfo; Salvatore J. Method and apparatus for imaging, image processing and data compression
US5615127A (en) * 1994-11-30 1997-03-25 International Business Machines Corporation Parallel execution of a complex task partitioned into a plurality of entities
US7334030B2 (en) * 1994-12-19 2008-02-19 Apple Inc. Method and apparatus for the addition and removal of nodes from a common interconnect
US5875301A (en) * 1994-12-19 1999-02-23 Apple Computer, Inc. Method and apparatus for the addition and removal of nodes from a common interconnect
US5748877A (en) * 1995-03-08 1998-05-05 Dell Usa, L.P. Method for executing embedded diagnostics from operating system-based applications
US5692184A (en) * 1995-05-09 1997-11-25 Intergraph Corporation Object relationship management system
US5794243A (en) * 1995-12-11 1998-08-11 International Business Machines Corporation Method and apparatus for executing a binary search in a data cache
US7266725B2 (en) 2001-09-03 2007-09-04 Pact Xpp Technologies Ag Method for debugging reconfigurable architectures
DE19651075A1 (en) 1996-12-09 1998-06-10 Pact Inf Tech Gmbh Unit for processing numerical and logical operations, for use in processors (CPU's), multi-computer systems, data flow processors (DFP's), digital signal processors (DSP's) or the like
DE19654595A1 (en) 1996-12-20 1998-07-02 Pact Inf Tech Gmbh I/O and memory bus system for DFPs as well as building blocks with two- or multi-dimensional programmable cell structures
DE19654593A1 (en) 1996-12-20 1998-07-02 Pact Inf Tech Gmbh Reconfiguration procedure for programmable blocks at runtime
DE19654846A1 (en) 1996-12-27 1998-07-09 Pact Inf Tech Gmbh Process for the independent dynamic reloading of data flow processors (DFPs) as well as modules with two- or multi-dimensional programmable cell structures (FPGAs, DPGAs, etc.)
EP1329816B1 (en) 1996-12-27 2011-06-22 Richter, Thomas Method for automatic dynamic unloading of data flow processors (dfp) as well as modules with bidimensional or multidimensional programmable cell structures (fpgas, dpgas or the like)
US6542998B1 (en) 1997-02-08 2003-04-01 Pact Gmbh Method of self-synchronization of configurable elements of a programmable module
DE19704728A1 (en) 1997-02-08 1998-08-13 Pact Inf Tech Gmbh Method for self-synchronization of configurable elements of a programmable module
DE19704742A1 (en) * 1997-02-11 1998-09-24 Pact Inf Tech Gmbh Internal bus system for DFPs, as well as modules with two- or multi-dimensional programmable cell structures, for coping with large amounts of data with high networking effort
US5991764A (en) * 1997-03-12 1999-11-23 International Business Machines Corporation Data structure specifying differing fan-in tree and fan-out tree computation patterns supporting a generic reduction object for data parallelism
US5987255A (en) * 1997-03-12 1999-11-16 International Business Machines Corporation Method of, system for, and article of manufacture for providing a generic adaptor for converting from a sequential iterator to a pre-thread parallel iterator
US5937194A (en) * 1997-03-12 1999-08-10 International Business Machines Corporation Method of, system for, and article of manufacture for providing a generic reduction object for data parallelism
US6237134B1 (en) 1997-03-12 2001-05-22 International Business Machines Corporation Method of, system for, and article of manufacture for providing a generic adaptor for converting from a non-future function pointer to a future function object
US8686549B2 (en) 2001-09-03 2014-04-01 Martin Vorbach Reconfigurable elements
US6000024A (en) * 1997-10-15 1999-12-07 Fifth Generation Computer Corporation Parallel computing system
DE19861088A1 (en) 1997-12-22 2000-02-10 Pact Inf Tech Gmbh Repairing integrated circuits by replacing subassemblies with substitutes
DE19807872A1 (en) 1998-02-25 1999-08-26 Pact Inf Tech Gmbh Method of managing configuration data in data flow processors
NO984746D0 (en) * 1998-10-09 1998-10-09 Fast Search & Transfer Asa Digital processing unit
NO309169B1 (en) * 1998-11-13 2000-12-18 Interagon As Search processor
US6006259A (en) * 1998-11-20 1999-12-21 Network Alchemy, Inc. Method and apparatus for an internet protocol (IP) network clustering system
NO992269D0 (en) * 1999-05-10 1999-05-10 Fast Search & Transfer Asa Search engine with two-dimensional scalable, parallel architecture
EP1228440B1 (en) 1999-06-10 2017-04-05 PACT XPP Technologies AG Sequence partitioning in cell structures
US6973559B1 (en) * 1999-09-29 2005-12-06 Silicon Graphics, Inc. Scalable hypercube multiprocessor network for massive parallel processing
US6745240B1 (en) 1999-11-15 2004-06-01 Ncr Corporation Method and apparatus for configuring massively parallel systems
US6519697B1 (en) 1999-11-15 2003-02-11 Ncr Corporation Method and apparatus for coordinating the configuration of massively parallel systems
US6412002B1 (en) 1999-11-15 2002-06-25 Ncr Corporation Method and apparatus for selecting nodes in configuring massively parallel systems
US6418526B1 (en) 1999-11-15 2002-07-09 Ncr Corporation Method and apparatus for synchronizing nodes in massively parallel systems
US7089240B2 (en) * 2000-04-06 2006-08-08 International Business Machines Corporation Longest prefix match lookup using hash function
ATE476700T1 (en) 2000-06-13 2010-08-15 Richter Thomas PIPELINE CT PROTOCOLS AND COMMUNICATIONS
US7595659B2 (en) 2000-10-09 2009-09-29 Pact Xpp Technologies Ag Logic cell array and bus system
US8058899B2 (en) 2000-10-06 2011-11-15 Martin Vorbach Logic cell array and bus system
US6990555B2 (en) * 2001-01-09 2006-01-24 Pact Xpp Technologies Ag Method of hierarchical caching of configuration data having dataflow processors and modules having two- or multidimensional programmable cell structure (FPGAs, DPGAs, etc.)
JP4114480B2 (en) 2001-02-24 2008-07-09 インターナショナル・ビジネス・マシーンズ・コーポレーション Global interrupt and barrier network
WO2002069168A1 (en) * 2001-02-24 2002-09-06 International Business Machines Corporation A global tree network for computing structures
US9037807B2 (en) 2001-03-05 2015-05-19 Pact Xpp Technologies Ag Processor arrangement on a chip including data processing, memory, and interface elements
US7444531B2 (en) 2001-03-05 2008-10-28 Pact Xpp Technologies Ag Methods and devices for treating and processing data
US7210129B2 (en) 2001-08-16 2007-04-24 Pact Xpp Technologies Ag Method for translating programs for reconfigurable architectures
US7844796B2 (en) 2001-03-05 2010-11-30 Martin Vorbach Data processing device and method
WO2005045692A2 (en) 2003-08-28 2005-05-19 Pact Xpp Technologies Ag Data processing device and method
US7581076B2 (en) 2001-03-05 2009-08-25 Pact Xpp Technologies Ag Methods and devices for treating and/or processing data
JP2004533691A (en) 2001-06-20 2004-11-04 PACT XPP Technologies AG Methods for processing data
GB0119146D0 (en) * 2001-08-06 2001-09-26 Nokia Corp Controlling processing networks
US7996827B2 (en) 2001-08-16 2011-08-09 Martin Vorbach Method for the translation of programs for reconfigurable architectures
US6957318B2 (en) * 2001-08-17 2005-10-18 Sun Microsystems, Inc. Method and apparatus for controlling a massively parallel processing environment
US7434191B2 (en) 2001-09-03 2008-10-07 Pact Xpp Technologies Ag Router
US8686475B2 (en) 2001-09-19 2014-04-01 Pact Xpp Technologies Ag Reconfigurable elements
US7577822B2 (en) 2001-12-14 2009-08-18 Pact Xpp Technologies Ag Parallel task operation in processor and reconfigurable coprocessor configured based on information in link list including termination information for synchronization
WO2003060747A2 (en) 2002-01-19 2003-07-24 Pact Xpp Technologies Ag Reconfigurable processor
AU2003214003A1 (en) 2002-02-18 2003-09-09 Pact Xpp Technologies Ag Bus systems and method for reconfiguration
US8914590B2 (en) 2002-08-07 2014-12-16 Pact Xpp Technologies Ag Data processing method and device
US7251690B2 (en) * 2002-08-07 2007-07-31 Sun Microsystems, Inc. Method and system for reporting status over a communications link
US7028122B2 (en) * 2002-08-07 2006-04-11 Sun Microsystems, Inc. System and method for processing node interrupt status in a network
WO2005010632A2 (en) * 2003-06-17 2005-02-03 Pact Xpp Technologies Ag Data processing device and method
US7657861B2 (en) 2002-08-07 2010-02-02 Pact Xpp Technologies Ag Method and device for processing data
AU2003286131A1 (en) 2002-08-07 2004-03-19 Pact Xpp Technologies Ag Method and device for processing data
JP4388895B2 (en) 2002-09-06 2009-12-24 PACT XPP Technologies AG Reconfigurable sequencer structure
US8676843B2 (en) * 2002-11-14 2014-03-18 LexisNexis Risk Data Management Inc. Failure recovery in a parallel-processing database system
US7185003B2 (en) * 2002-11-14 2007-02-27 Seisint, Inc. Query scheduling in a parallel-processing database system
US7240059B2 (en) * 2002-11-14 2007-07-03 Seisint, Inc. System and method for configuring a parallel-processing database system
US6968335B2 (en) 2002-11-14 2005-11-22 Seisint, Inc. Method and system for parallel processing of database queries
US7945581B2 (en) * 2002-11-14 2011-05-17 Lexisnexis Risk Data Management, Inc. Global-results processing matrix for processing queries
US7293024B2 (en) * 2002-11-14 2007-11-06 Seisint, Inc. Method for sorting and distributing data among a plurality of nodes
US7403942B1 (en) 2003-02-04 2008-07-22 Seisint, Inc. Method and system for processing data records
US7657540B1 (en) 2003-02-04 2010-02-02 Seisint, Inc. Method and system for linking and delinking data records
US7912842B1 (en) 2003-02-04 2011-03-22 Lexisnexis Risk Data Management Inc. Method and system for processing and linking data records
US7720846B1 (en) 2003-02-04 2010-05-18 Lexisnexis Risk Data Management, Inc. System and method of using ghost identifiers in a database
US7155440B1 (en) * 2003-04-29 2006-12-26 Cadence Design Systems, Inc. Hierarchical data processing
KR101200598B1 (en) * 2003-09-09 2012-11-12 실리콘 하이브 비.브이. Integrated data processing circuit with a plurality of programmable processors
US8086645B2 (en) * 2003-12-16 2011-12-27 Oracle International Corporation Compilation and processing a parallel single cursor model
US7958160B2 (en) * 2003-12-16 2011-06-07 Oracle International Corporation Executing filter subqueries using a parallel single cursor model
US7451133B2 (en) * 2003-12-16 2008-11-11 Oracle International Corporation Executing nested subqueries of parallel table functions in the parallel single cursor model
US7685095B2 (en) * 2003-12-16 2010-03-23 Oracle International Corporation Executing a parallel single cursor model
US7340452B2 (en) * 2003-12-16 2008-03-04 Oracle International Corporation Parallel single cursor model on multiple-server configurations
JP2006215816A (en) * 2005-02-03 2006-08-17 Fujitsu Ltd Information processing system and its control method
US9384818B2 (en) * 2005-04-21 2016-07-05 Violin Memory Memory power management
US8112655B2 (en) * 2005-04-21 2012-02-07 Violin Memory, Inc. Mesosynchronous data bus apparatus and method of data transmission
US8452929B2 (en) 2005-04-21 2013-05-28 Violin Memory Inc. Method and system for storage of data in non-volatile media
EP2383661A1 (en) 2005-04-21 2011-11-02 Violin Memory, Inc. Interconnection system
US9582449B2 (en) 2005-04-21 2017-02-28 Violin Memory, Inc. Interconnection system
US9286198B2 (en) 2005-04-21 2016-03-15 Violin Memory Method and system for storage of data in non-volatile media
US8789021B2 (en) * 2005-06-30 2014-07-22 International Business Machines Corporation Method and apparatus for object-oriented load testing of computing systems
US7475056B2 (en) * 2005-08-11 2009-01-06 Oracle International Corporation Query processing in a parallel single cursor model on multi-instance configurations, using hints
US8566928B2 (en) * 2005-10-27 2013-10-22 Georgia Tech Research Corporation Method and system for detecting and responding to attacking networks
EP1974265A1 (en) 2006-01-18 2008-10-01 PACT XPP Technologies AG Hardware definition method
US8516444B2 (en) 2006-02-23 2013-08-20 International Business Machines Corporation Debugging a high performance computing program
US7796527B2 (en) * 2006-04-13 2010-09-14 International Business Machines Corporation Computer hardware fault administration
US20070242611A1 (en) * 2006-04-13 2007-10-18 Archer Charles J Computer Hardware Fault Diagnosis
US7697443B2 (en) * 2006-04-13 2010-04-13 International Business Machines Corporation Locating hardware faults in a parallel computer
US7779016B2 (en) * 2006-09-14 2010-08-17 International Business Machines Corporation Parallel execution of operations for a partitioned binary radix tree on a parallel computer
US8028186B2 (en) 2006-10-23 2011-09-27 Violin Memory, Inc. Skew management in an interconnection system
US8713582B2 (en) * 2006-10-26 2014-04-29 International Business Machines Corporation Providing policy-based operating system services in an operating system on a computing system
US8032899B2 (en) 2006-10-26 2011-10-04 International Business Machines Corporation Providing policy-based operating system services in a hypervisor on a computing system
US8656448B2 (en) * 2006-10-26 2014-02-18 International Business Machines Corporation Providing policy-based application services to an application running on a computing system
US9330230B2 (en) * 2007-04-19 2016-05-03 International Business Machines Corporation Validating a cabling topology in a distributed computing system
US7958274B2 (en) * 2007-06-18 2011-06-07 International Business Machines Corporation Heuristic status polling
US8296430B2 (en) 2007-06-18 2012-10-23 International Business Machines Corporation Administering an epoch initiated for remote memory access
US7831866B2 (en) * 2007-08-02 2010-11-09 International Business Machines Corporation Link failure detection in a parallel computer
US20090080339A1 (en) * 2007-09-20 2009-03-26 Nicholas Geoffrey Duffield Multicast-based inference of temporal delay characteristics in packet data networks
US8233402B2 (en) 2007-09-20 2012-07-31 At&T Intellectual Property Ii, L.P. Multicast-based inference of temporal loss characteristics in packet data networks
US8218811B2 (en) 2007-09-28 2012-07-10 Uti Limited Partnership Method and system for video interaction based on motion swarms
US9065839B2 (en) 2007-10-02 2015-06-23 International Business Machines Corporation Minimally buffered data transfers between nodes in a data communications network
US7984450B2 (en) * 2007-11-28 2011-07-19 International Business Machines Corporation Dispatching packets on a global combining network of a parallel computer
US8266168B2 (en) * 2008-04-24 2012-09-11 Lexisnexis Risk & Information Analytics Group Inc. Database systems and methods for linking records and entity representations with sufficiently high confidence
US8090733B2 (en) 2008-07-02 2012-01-03 Lexisnexis Risk & Information Analytics Group, Inc. Statistical measure and calibration of search criteria where one or both of the search criteria and database is incomplete
WO2010011813A1 (en) * 2008-07-23 2010-01-28 Alkermes, Inc. Complex of trospium and pharmaceutical compositions thereof
US7895260B2 (en) * 2008-07-28 2011-02-22 International Business Machines Corporation Processing data access requests among a plurality of compute nodes
US10027688B2 (en) * 2008-08-11 2018-07-17 Damballa, Inc. Method and system for detecting malicious and/or botnet-related domain names
US8755515B1 (en) 2008-09-29 2014-06-17 Wai Wu Parallel signal processing system and method
US9411859B2 (en) 2009-12-14 2016-08-09 Lexisnexis Risk Solutions Fl Inc External linking based on hierarchical level weightings
US8504875B2 (en) * 2009-12-28 2013-08-06 International Business Machines Corporation Debugging module to load error decoding logic from firmware and to execute logic in response to an error
US8578497B2 (en) * 2010-01-06 2013-11-05 Damballa, Inc. Method and system for detecting malware
US8826438B2 (en) 2010-01-19 2014-09-02 Damballa, Inc. Method and system for network-based detecting of malware from behavioral clustering
US8365186B2 (en) 2010-04-14 2013-01-29 International Business Machines Corporation Runtime optimization of an application executing on a parallel computer
US8504730B2 (en) 2010-07-30 2013-08-06 International Business Machines Corporation Administering connection identifiers for collective operations in a parallel computer
US9189505B2 (en) 2010-08-09 2015-11-17 Lexisnexis Risk Data Management, Inc. System of and method for entity representation splitting without the need for human interaction
US9516058B2 (en) 2010-08-10 2016-12-06 Damballa, Inc. Method and system for determining whether domain names are legitimate or malicious
US8565120B2 (en) 2011-01-05 2013-10-22 International Business Machines Corporation Locality mapping in a distributed processing system
US9317637B2 (en) 2011-01-14 2016-04-19 International Business Machines Corporation Distributed hardware device simulation
US8631489B2 (en) 2011-02-01 2014-01-14 Damballa, Inc. Method and system for detecting malicious domain names at an upper DNS hierarchy
US8689228B2 (en) 2011-07-19 2014-04-01 International Business Machines Corporation Identifying data communications algorithms of all other tasks in a single collective operation in a distributed processing system
US9250948B2 (en) 2011-09-13 2016-02-02 International Business Machines Corporation Establishing a group of endpoints in a parallel computer
US10547674B2 (en) 2012-08-27 2020-01-28 Help/Systems, Llc Methods and systems for network flow analysis
US9680861B2 (en) 2012-08-31 2017-06-13 Damballa, Inc. Historical analysis to identify malicious activity
US9894088B2 (en) 2012-08-31 2018-02-13 Damballa, Inc. Data mining to identify malicious activity
US10084806B2 (en) 2012-08-31 2018-09-25 Damballa, Inc. Traffic simulation to identify malicious activity
US9166994B2 (en) 2012-08-31 2015-10-20 Damballa, Inc. Automation discovery to identify malicious activity
US9571511B2 (en) 2013-06-14 2017-02-14 Damballa, Inc. Systems and methods for traffic classification
US20160224398A1 (en) * 2015-01-29 2016-08-04 Intellisis Corporation Synchronization in a Multi-Processor Computing System
US10061531B2 (en) 2015-01-29 2018-08-28 Knuedge Incorporated Uniform system wide addressing for a computing system
US9552327B2 (en) 2015-01-29 2017-01-24 Knuedge Incorporated Memory controller for a network on a chip device
US9930065B2 (en) 2015-03-25 2018-03-27 University Of Georgia Research Foundation, Inc. Measuring, categorizing, and/or mitigating malware distribution paths
EP3093773B1 (en) * 2015-05-13 2019-07-10 Huawei Technologies Co., Ltd. System and method for creating selective snapshots of a database
US10027583B2 (en) 2016-03-22 2018-07-17 Knuedge Incorporated Chained packet sequences in a network on a chip architecture
US10346049B2 (en) 2016-04-29 2019-07-09 Friday Harbor Llc Distributed contiguous reads in a network on a chip architecture
CN109960186B (en) * 2017-12-25 2022-01-07 紫石能源有限公司 Control flow processing method and device, electronic equipment and storage medium
US11693832B2 (en) * 2018-03-15 2023-07-04 Vmware, Inc. Flattening of hierarchical data into a relational schema in a computing system

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4101960A (en) * 1977-03-29 1978-07-18 Burroughs Corporation Scientific processor
US4247892A (en) * 1978-10-12 1981-01-27 Lawrence Patrick N Arrays of machines such as computers
US4251861A (en) * 1978-10-27 1981-02-17 Mago Gyula A Cellular network of processors
DE2920994A1 (en) * 1979-05-23 1980-11-27 Siemens Ag DATA SEND / RECEIVER DEVICE WITH PARALLEL / SERIAL AND SERIAL / PARALLEL CHARACTERS CONVERSION, IN PARTICULAR FOR DATA EXCHANGE BETWEEN COMMUNICATING DATA PROCESSING SYSTEMS
US4345309A (en) * 1980-01-28 1982-08-17 Digital Equipment Corporation Relating to cached multiprocessor system with pipeline timing
US4435758A (en) * 1980-03-10 1984-03-06 International Business Machines Corporation Method for conditional branch execution in SIMD vector processors
US4344134A (en) * 1980-06-30 1982-08-10 Burroughs Corporation Partitionable parallel processor
US4445171A (en) * 1981-04-01 1984-04-24 Teradata Corporation Data processing systems and methods
US4412285A (en) * 1981-04-01 1983-10-25 Teradata Corporation Multiprocessor intercommunication system and method
US4583164A (en) * 1981-08-19 1986-04-15 Tolle Donald M Syntactically self-structuring cellular computer
US4466060A (en) * 1982-02-11 1984-08-14 At&T Bell Telephone Laboratories, Incorporated Message routing in a computer network
DE3215080A1 (en) * 1982-04-22 1983-10-27 Siemens AG, 1000 Berlin und 8000 München ARRANGEMENT FOR COUPLING DIGITAL PROCESSING UNITS
US4622632A (en) * 1982-08-18 1986-11-11 Board Of Regents, University Of Washington Data processing system having a pyramidal array of processors

Also Published As

Publication number Publication date
EP0279854A1 (en) 1988-08-31
AU8032887A (en) 1988-03-24
JP2763886B2 (en) 1998-06-11
KR880701918A (en) 1988-11-07
JPH01501261A (en) 1989-04-27
WO1988001771A1 (en) 1988-03-10
US4860201A (en) 1989-08-22
DE3787886T2 (en) 1994-04-14
EP0279854B1 (en) 1993-10-20
AU598425B2 (en) 1990-06-21
IL83734A (en) 1991-06-10
DE3787886D1 (en) 1993-11-25
KR930009619B1 (en) 1993-10-07
EP0279854A4 (en) 1990-01-26
IL83734A0 (en) 1988-02-29

Similar Documents

Publication Publication Date Title
CA1291828C (en) Binary tree parallel processor
US8090704B2 (en) Database retrieval with a non-unique key on a parallel computer system
US5193187A (en) Fast interrupt mechanism for interrupting processors in parallel in a multiprocessor system wherein processors are assigned process ID numbers
US20090043910A1 (en) Query Execution and Optimization Utilizing a Combining Network in a Parallel Computer System
US20090037376A1 (en) Database retrieval with a unique key search on a parallel computer system
US7577874B2 (en) Interactive debug system for multiprocessor array
US20070180334A1 (en) Multi-frequency debug network for a multiprocessor array
Mueller-Thuns et al. VLSI logic and fault simulation on general-purpose parallel computers
Kruskal et al. A complexity theory of efficient parallel algorithms
Barsotti et al. Fastbus data acquisition for CDF
Agrawal et al. Sequential circuit test generation on a distributed system
Vick et al. Adaptable Architectures for Supersystems
Tanase et al. Composable, non-blocking collective operations on power7 ih
JP2557175B2 (en) Computer system
Su et al. Parallel Algorithms and Their Implementation in MICRONET.
US6775814B1 (en) Dynamic system configuration for functional design verification
JP2552075B2 (en) Computer system
Haralick et al. Proteus: a reconfigurable computational network for computer vision
CN112765925B (en) Interconnected circuit system, verification system and method
Al-Azzeh Review of methods of distributed barrier synchronization of parallel processes in matrix VLSI systems
Briggs et al. A Shared-Resource Multiple Microprocessor System for Pattern Recognition and Image Processing
Marsland et al. NMP-A network multi-processor
George et al. Parallel Processing Experiments on an SCI-based Workstation Cluster
Maruyama et al. Architecture of a parallel machine: Cenju‐3

Legal Events

Date Code Title Description
MKLA Lapsed