WO2008060948A2 - Method and system for parallelization of pipelined computations - Google Patents

Method and system for parallelization of pipelined computations

Info

Publication number
WO2008060948A2
Authority
WO
WIPO (PCT)
Prior art keywords
data
stage
stages
gangs
processors
Prior art date
Application number
PCT/US2007/084113
Other languages
French (fr)
Other versions
WO2008060948A3 (en)
Inventor
Vladimir Kotlyar
Mayan Moudgill
Yuriy M. Pogudin
Original Assignee
Sandbridge Technologies, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sandbridge Technologies, Inc. filed Critical Sandbridge Technologies, Inc.
Priority to US12/513,838 (US9110726B2)
Priority to EP07864131A (EP2080104A4)
Publication of WO2008060948A2
Publication of WO2008060948A3

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30003Arrangements for executing specific machine instructions
    • G06F9/30076Arrangements for executing specific machine instructions to perform miscellaneous control operations, e.g. NOP
    • G06F9/30079Pipeline control instructions, e.g. multicycle NOP
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5066Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/3885Concurrent instruction execution, e.g. pipeline, look ahead using a plurality of independent parallel functional units
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/3824Operand accessing
    • G06F9/3826Bypassing or forwarding of data results, e.g. locally between pipeline stages or within a pipeline stage

Definitions

  • This invention relates to methods for computational pipelining. More specifically, this invention relates to a method for parallel pipelining of computations.
  • computational pipeline refers to a sequence of computational steps, each of which consumes data produced by one or more previous steps. The results are then used in subsequent steps.
  • computational pipeline will be used in this context.
  • stage typically refers to computational steps such as those that make up a computational pipeline. If a computational process involves more than one stage, the stages may be labeled A, B, C, etc., as part of a naming convention. Each stage, labeled "x,” uses only data from preceding stages.
  • a simple example of a pipeline is one where each computational stage depends only upon its immediate predecessor. Therefore, for a simple pipeline, once a stage completes its i-th execution, the immediate successor stage may initiate its i-th execution.
  • stages may be executed in many possible, different orders.
  • one execution order may be A0, A1, A2, B0, B1, A3, B2, etc.
  • a standard operating order is to execute the 0s first, followed by the 1s, etc.
  • the standard execution order is A0, B0, A1, B1, A2, B2, A3, etc.
  • the number of processors is equal to or greater than the number of stages.
  • each stage is assigned to a different processor.
  • a stage begins as soon as its predecessor is completed. Looking at a dynamic picture of a process, the invocations of the stages might appear as follows:
  • the information that is passed between the stages, A and B, is generally of an identical shape for all execution counts of those stages.
  • the information may consist of only a few words.
  • the information that is passed between the stages also may consist only of a few words.
  • the "processors" executing these stages are fixed functions with dedicated hardware to implement or effectuate the communication.
  • processors executing the stages are similar to general purpose Central Processing Units ("CPUs") without dedicated communication resources.
  • the target is a multi-processor, using either a shared memory or message passing to communicate between processors.
  • Coarse pipelining provides one template for distributing work to processors.
  • the first drawback is associated with the latency of the first result.
  • the first result will be obtained only after all of the counts of every stage except the last have been executed.
  • B0 will be complete only after three (3) time units (or invocations) whereas with coarse pipelining, B0 will be complete after two (2) time units.
  • this difference becomes more pronounced with a greater number of pipeline stages and a higher number of execution counts.
  • the second drawback is associated with memory requirements.
  • the data produced by all of the counts of a particular stage is stored before the next stage is executed.
  • some of the data produced by all of the stages needs to be stored simultaneously. If the pipeline is a simple one, only one segment of data for each stage must be stored. Further, by changing the amount of work done in each stage, it is possible to control the amount of data that must be stored.
  • Both latency and memory requirements may be decreased by reducing the amount of work performed by each stage. Of course, this results in an increase in the number of execution counts. Increasing the number of execution counts, however, may decrease the overall efficiency of the computation, since more time will be expended in inter-stage synchronization and communication.
  • each processor receives a packet of information and then processes that information to completion.
  • this processing schedule may not be possible when there exist forward dependencies between successive executions of the same stage. In other words, this may not be possible when the first iteration of A1 cannot be started until the last iteration of A0 is completed.
  • a non-pipelined schedule may not be executable when the input data for the first stage itself is being produced in a pipelined fashion. For example, if the data for A1 only becomes available some time after the data for A0, A1 cannot be initiated at the same time as A0. This scenario occurs in environments where the input is produced in a real-time stream, such as for wireless or multi-media.
  • the method includes selecting the amount of work each work item performs, determining the number of stages, partitioning each stage into a number of teams and number of gangs of processors in the teams, and assigning teams and gangs to specific processors.
  • Another aspect of the invention is to provide a method where the amount of work for each work item is a function of the amount of available memory and/or overhead synchronization, and/or the number of stages is a function of load balance and/or parallelism, and/or the number of teams and gangs is a function of load balance and/or throughput.
  • a further aspect of the invention is to provide a method where the number of gangs in a team is a function of work dependency of a stage, efficiency of parallelization, memory utilization and/or latency.
  • Still one additional aspect of the invention is to provide a method where the assignment to specific processors is a function of the communication network between processors and/or access to memory regions.
  • An aspect of the invention is to provide a method that includes rewriting the code for the stages and data objects to reflect the teams and gangs; and writing code for communication of input and output data and for synchronization of the teams and gangs.
  • Another aspect of the invention includes analyzing dependency and data flow and eliminating spurious dependencies.
  • One further aspect includes providing a buffer for data produced by M teams, gangs or stages and used by N teams, gangs or stages; and wherein the buffer has at least M+N areas for data.
  • areas assigned to the M teams, gangs or stages and the N teams, gangs or stages are continuously reassigned so that data stored in an area by the M teams, gangs or stages in one cycle is accessed by the N teams, gangs or stages in the next or other subsequent cycles.
  • the buffer has more than M+N areas for data to account for cycle differences between the availability of data to be stored and when it can be accessed.
  • the invention includes instructions for storing data that include a send reserve function to obtain a buffer area suitable for the given iteration and a send commit function indicating that the data is stored and available for reading; and the instructions for accessing data include a receive reserve function to obtain data from a buffer area suitable for the given iteration and a receive commit function indicating that the data is no longer needed.
  • the invention provides for a processor system that includes parallel pipeline stages that perform work on a sequence of work items and a buffer for data produced by M stages and used by N stages, the buffer comprising at least M+N areas for data stored by the M stages and accessed by the N stages.
  • the processor includes features such as where the areas assigned to the M stages and the N stages are continuously reassigned so that data stored in an area by the M stages in one cycle is accessed by the N stages in the next or other subsequent cycles.
  • the buffer has more than M+N areas for data to account for cycle differences between the availability of data to be stored and when it can be accessed.
  • the instructions for storing data include a send reserve function to obtain a buffer area suitable for the given iteration and a send commit function indicating that the data is stored and available for reading; and the instructions for accessing data include a receive reserve function to obtain data from a buffer area suitable for the given iteration and a receive commit function indicating that the data is no longer needed.
  • stages are partitioned into a number of teams and number of gangs in the teams; and M and N each represent the total number of teams, gangs and stages.
  • Figure 1 is a flow diagram of the logic employed by embodiment(s) of the invention.
  • Figure 2 is a block diagram illustrating the organizational scheme employed by embodiment(s) of the invention.
  • each stage is mapped exactly onto one processor. As should be appreciated by those skilled in the art, this is not required for data processing. It is possible to implement a stage using multiple processors and different stages using differing numbers of processors.
  • stage A is implemented using a gang of three processors while stage B is implemented using a gang of two processors.
  • gangs one reason to rely on the use of gangs is to increase the amount of work (or increase the amount of data processed) done in parallel.
  • stage A and B When processed in this manner, the time needed to complete the task may be reduced.
  • stage B One additional reason to use gangs for processing is to improve processor utilization. Consider an example with two stages, A and B, where processing of stage A takes 1.5 times (1.5x) as long to complete as stage B. In this case, if stages A and B are each implemented using only one processor, stage B will be idle for about 33% of the total processing time. As may be appreciated by those skilled in the art, stage B is idle because it is "waiting" for stage A to complete its processing before both stages may proceed to the next processing cycle. However, as suggested by Table 4, if gangs of processors are utilized, it is possible to engineer processing so that stages A and B both are completed in approximately the same time period.
  • a "team” approach may also be employed to realize certain processing improvements.
  • One example of this approach involves a simple pipeline with two stages, A and B. Stage A takes one (1) unit of time to complete, while stage B takes two (2) units of time to complete. Stage B_i cannot start before stage A_i is completed.
  • a gang of two processors may be assigned to stage B, while one processor (or a gang including only one processor) is assigned to stage A.
  • processing proceeds as indicated by Table 5, below:
  • stage B is assigned a team of two gangs. Each gang equates to a single processor (processor #2 or processor #3) in this example. The number of gangs in the team, therefore, is the number of iterations being processed concurrently for the given stage.
  • stage B requires four (4) units of time for processing.
  • the four time units required to process stage B may be parsed efficiently between two processors, rather than four processors.
  • stage B requires only two (2) time units to complete its processing. Accordingly, in this example, a team of two gangs may be assigned to stage B. Each gang includes two (2) processors.
  • Load balancing is another factor that is considered in data processing, as may be appreciated by those skilled in the art.
  • computation in a pipeline proceeds at the rate of the slowest stage. In the example illustrated in Table 8, the pipeline includes four stages, A, B, C, and D. If three of the stages each take one (1) unit of time to process and one of the stages takes two (2) units of time to process, a new execution count will occur once every two (2) units of time. In the example illustrated in Table 8, stage C takes two (2) units of time to process.
  • load balancing As discussed above, gangs may be employed to assist with load balancing.
  • load balancing may also include or require splitting a stage into two or more stages.
  • load balancing may include combining two (2) or more stages together.
  • Other combinations and rearrangements of work also may accomplish this same result, as may be appreciated by those skilled in the art.
  • Buffering also may be employed to assist with multi-stage processing.
  • data is written by a stage and is read by its immediate successor stage.
  • a segment of data is written by a stage and then consumed by its successor.
  • reading and writing occur simultaneously.
  • stage B will be reading the data for count n while stage A will be writing the data for count n+1.
  • stage A overwrites the data being read by stage B before stage B has read all of the data.
  • double-buffering addresses this problem.
  • the data storage area is twice as large as the data that is communicated between the stages. If the data storage area is divisible into two parts, an upper part and a lower part, both parts of the data storage area may be used at any given time. For example, during one count, the predecessor stage may write to the lower half of the data storage area while the successor stage reads from the upper half of the data storage area. During the next execution count, the positions are swapped such that data is written to the upper half of the data storage area while data is read from the lower half of the data storage area.
  • the write stage may be implemented using a team of M gangs.
  • the reader stage may be implemented using N gangs. If so, it may be necessary to employ an extension of double-buffering, which is known as M+N buffering.
  • M+N buffering an extension of double-buffering
  • the lifetime of a segment of data is one (1) count. In other words, data produced by one stage during count n will be consumed by another stage during the count n+1. There may, of course, be instances where a segment of data may need to be stored for a longer period of time. In some instances, the successor stage may use data segments from two or more counts of the preceding stage. This example is illustrated in Table 9, below. In this case, the stage B0 relies on and uses the results (or data) from both stages A0 and A1.
  • one or more data segments may be used from stages that do not immediately precede the consuming stage.
  • the buffering algorithm must reserve additional space for storage of data to avoid data being overwritten during various iterations.
  • the stage C0 uses the results of both of stages A0 and B0.
  • a coarse pipeline schedule may be generated when it is desired for a particular task to meet a particular latency. In such a case, the computation is divided into stages, and the number of gangs per stage is selected so that each stage takes less time than the latency target. As should be appreciated by those skilled in the art, it is desirable for the processor requirements for this schedule not to exceed the total number of available processors. Once the schedule is selected, the inter-stage buffers are generated. [0059] It is also possible to generate a coarse pipeline schedule in a memory constrained environment.
  • the amount of work done by each stage may be altered by changing the number of iterations of the outermost loop executed per count. In such a case, a load-balanced schedule is generated. Then, the amount of memory required for the buffers is computed. The amount of work done is then scaled by the stages until sized appropriately for the available memory. [0060] As may be appreciated by those skilled in the art, it is also possible to produce a coarse-pipelined code manually. In one contemplated example, a standard imperative language may be used with support for parallel processing, such as C augmented by the pthreads standard threading library. Relying on this language, for example, tasks may be manually partitioned into stages.
  • each stage may be partitioned into separate functions. Each function, then, may be invoked on a unique processor using an explicit pthread_create() call or its equivalent.
  • communication between the threads/stages may be effectuated via the memory, with synchronization performed manually or via a mechanism such as semaphores.
  • Parallelization may include the following steps: (1) work unit adjustment,
  • stage adjustment the amount of work for each work item may be adjusted in at least one of two ways.
  • the work items may be sub-divided per stage to reduce memory consumption. Alternatively, the work items may be enlarged per stage to reduce synchronization overhead.
  • stage adjustment one or more stages may have to be combined or sub-divided to improve load balance or to expose opportunities for parallelization.
  • gangs and teams of processors may be assigned to stages based on load balancing or real-time requirements. For example, if a stage k requires W_k units of work, it is desirable for all of the stages to have approximately the same ratio W_k/P_k.
  • W_k/P_k P_k is the number of processors assigned to stage k.
  • Assignment to physical processors also is a factor taken into account with respect to parallel processing.
  • the cost of communicating between different processors of a parallel computer is not uniform.
  • some pairs of processors may communicate with one another more quickly than other pairs of processors.
  • the cost of access to various memory regions is not uniform. Accordingly, the performance of the final parallel program depends, at least in part, upon the assignment of stages to processors and data to memories (or memory allocations).
  • stage rewriting each iteration of the stage k may be partitioned among P_k processors, since P_k refers to the size of each gang. It is desirable for the code for the stage to be partitioned.
  • the result may be summarized as a sub-program PartitionedStagek(t), where t is the index of the processor within the gang that executes the partition.
  • data object rewriting references to the original data objects used by the stage are transformed to refer to areas where the objects are allocated.
  • communication generation it is desirable to insert code to communicate input and output data.
  • code to synchronize processes in the gangs is provided by
  • G be the index of the gang within the team
  • the final parallel program consists of several code fragments, like the one set forth above, one for each stage, together with the means for starting these fragments on the parallel processors.
  • the parallel computer is capable of copying data between different memory regions.
  • this may include nothing more than a library routine for explicitly copying data.
  • the copying functionality may be a hardware mechanism specially designed to copy asynchronously.
  • the copying functionality may be a library routine implemented on top of message passing primitives.
  • the computer incorporates a shared memory and that a library routine "copy(dest,src,n)" is available to copy n bytes from location "src" to location "dest”.
  • this assumption does not preclude implementation of the techniques on a distributed memory machine.
  • the communication buffer may be an abstract data type with at least the following functions, coded in C language for clarity, as embodied in Code Segment 4:
  • the send_reserve() function obtains the buffer area suitable for the given iteration.
  • the send_commit() function marks the data as available for use by the received side.
  • the receive_reserve() function obtains the data suitable for the given iteration.
  • the receive_commit() function indicates that the data is no longer needed (a C sketch of this buffer interface appears after this list).
  • Code Segment 5 [0075] In many cases, copying data to/from communication buffers may be avoided if the PartitionedStagek() routine uses the data directly from the buffers. In such a case, a possible implementation of WorkStagek() is detailed in Code Segment 6, below (a sketch along these lines also appears after this list):
  • G be the index of the gang within the team
  • q = receive_reserve(Buffer, i);
  • the communication buffer abstract data type is comprised of three arrays of length Z each.
  • send_iter is an array of integers
  • recv_iter is an array of integers
  • slots is an array of slots.
  • typedef struct { int send_iter[Z]; int recv_iter[Z]; byte slots[Z][K]; } Buffer;
  • the implementation of the send/receive functions follows; C language notation is used. Equivalent implementations can be obtained using any standard programming language, such as C++, assembly, or Matlab, as would be appreciated by those skilled in the art. Code Segment 8 provides one example.
  • the commit routines indicate when the slot is ready for the next operation. If the current iteration is i, then the next iteration is i+Z. The sender signals the receiver and vice versa.
  • Non-uniform memory access is another area to be addressed in pipelining.
  • the Buffer operation may be modified slightly to deal with gangs distributed across multiple processors and multiple memory regions.
  • stage A sends data to stage B.
  • M gangs are assigned to stage A and that N gangs are assigned to stage B.
  • M memory regions R_1 . . . R_M and N memory regions Q_1 . . . Q_N which have affinity to the gangs.
  • j is one of the gangs running stage A.
  • the processors in the gang may access memory region Rj more quickly than other regions.
  • k is one of the gangs running stage B.
  • the processors in the gang may access memory region Q_k more quickly than other regions.
  • storing into any "far” memory region is faster than loading from that "far” memory region.
  • the buffers satisfy the following formula: Z ≥ M+N.
  • W = ceiling((M+N)/N).
  • send_iter[], recv_iter[] and storage[] arrays may be considered as being indexed by a two-dimensional slot index, which is detailed in Code Segment 9, as follows: typedef struct { int send_iter[N][W]; int recv_iter[N][W]; byte storage[N][W][K]; } Buffer;
  • each array segment or slice, recv_iter[j][] and storage[j][][], may be stored in the region Q_j.
  • the sender teams then perform non-local stores into recv_iter[] and storage[][][] arrays. These stores on many computers may be done asynchronously.
  • the only possible non-local loads are for sender gangs from the send_iter[] array. In many cases the cost of this operation is much lower than the cost of accessing the storage and, therefore, may be discounted.
  • An implementation which completely eliminates non-local loads may use
  • Z ≥ M+N which is also divisible by both M and N.
  • the send_iter[][] array is an M-by-U array.
  • the segment or slice send_iter[k][] may be allocated to the region R k .
  • Pipelining also considers allocation of buffers between non-adjacent stages.
  • stages B and C depend on stage A. Stage C also depends on stage B. For discussion purposes, one (1) gang is assigned to stage A, one (1) gang is assigned to stage C and ten (10) gangs are assigned to B. As a result, ten (10) concurrent iterations of stage B are executed. It follows, then, that only one (1) iteration of stages A and C is executed concurrently. [0093] Let (b, b+10) be the range of iterations executed by gangs of stage B at any point in time. Let c be the iteration of stage C and a be the iteration of stage A.
  • the bounds on the minimum number of slots required between any two stages may be obtained by writing down all the constraints on the ranges of concurrent iterations for each of the stages. Then, for each pair of communicating stages X->Y, the minimum distance may be determined between the earliest iteration of X and the latest iteration of Y. Since the ranges and the constraints between the iterations are linear inequalities, linear programming techniques may be used to determine these distances. Alternatively a graph-theoretical formulation is possible that propagates the distances top-down within the dependence Directed Acyclic Graph ("DAG") of stages. The details of this formulation should be apparent to those skilled in the art. [0096] Automatic parallelization by a compiler is also considered when pipelining a program.
  • DAG Directed Acyclic Graph
  • the actual mechanics of implementing the partitioning, figuring out the buffer sizes and synchronization may be done by a tool such as a compiler or translator, under programmer control. Generally, this is accomplished by the programmer, who may add tool-directives to the code (such as pragmas or special comments), which then cause the appropriately coarse-pipelined code to be generated.
  • the programmer may be required to do more or less work. Ideally, the only constraints on the tool should be: (a) available memory, (b) desired latency, and (c) number of processors. Less sophisticated tools may require the programmer to identify the stage boundaries, gang sizes, etc., as may be appreciated by those skilled in the art.
  • Code Segment 10 provides one example of a sequential pipeline, which is an input program written in the pipeline form.
  • for iteration i = 0, 1, 2, . . . { stage1(Data1); . . . stageN(DataN); }
  • Code Segment 10 [00100] While this example is provided, as should be appreciated by those skilled in the art, this example should not be understood to be limiting of the invention. To the contrary, the input language need not have this exact lexical structure.
  • the program may be written in any number of languages such as C, Fortran or Matlab, among others, as would be appreciated by those skilled in the art. Whichever language a programmer selects, it is assumed that the programmer may transform the code into the form illustrated by Code Segment 10. The details of such transformation from either an imperative or a functional programming language are trivial, as should be apparent to those skilled in the art.
  • Segment 14 still may not be suitable for direct translation in a parallel program. It may remain necessary to specify where the object x is to be stored. In fact, several copies of x may need to be maintained if: (1) used by stage A, (2) communicated between stage A and stage B, (3) used by stage B. The object x may need to be separated into local copies for stage A and stage B, and the "message" to be sent between stage A and stage B.
  • DMA Direct Memory Access
  • stages may be split into separate processors, such as illustrated in Code Segment 15.
  • xA[] requires at least as many elements as there are concurrently executing iterations of stage A. This is the number of gangs allocated to stage A, call it M. Similarly, xB[] needs as many elements as there are gangs allocated to stage B, call it N. Since xTmp[] is accessed by both stage A and stage B, it needs M+N elements. If there are fewer elements in any of the arrays, then some of the gangs will have to wait before being processed.
  • the Buffer data structure may now be used to store the xTmp array.
  • the array element xTmp[i mod Z] is accessed for some Z ≥ M+N. Making this allocation and the assignment of work to gangs explicit, the parallel code is illustrated in Code Segment 16.
  • xA[G] = . . .; wait for xTmp[i mod Z] to become available; copy xA[G] to xTmp[i mod Z];
  • the remaining step is to allocate the (now finite) arrays xA[], xB[] and xTmp[] to actual memory regions.
  • One solution is to identify for each of the arrays the memory regions in order of preference, or to identify with each memory region and array the cost of allocating the array in the region. Then various heuristics can be used to solve the resulting graph-theoretical or integer-programming problems.
  • the details include: (1) the use of the underlying Operating System (“OS”) Application Programming Interfaces (“APIs”) to execute the code on the desired processors, (2) run-time implementation of the map of physical processors onto teams and gangs, and (3) implementation of copying using the efficient machine APIs.
  • OS Operating System
  • APIs Application Programming Interfaces
  • the parallelization process follows the following steps: (1) heuristic work item adjustment; (2) heuristic stage size adjustment; (3) allocation of processors to gangs and teams, and to stages; (4) dependence and data-flow analysis; (5) elimination of spurious dependencies by scalar expansion; (6) renaming of objects and generation of copies; and (7) allocation of objects to local and communication storage.
  • AI Artificial Intelligence
  • stages are nested loops with linear bounds, then a memory footprint is estimated for each stage and iteration by counting feasible points in the corresponding polyhedra.
  • a discussion of this may be found in the article by William Pugh, "Counting solutions to Presburger formulas: how and why," in the Proceedings of the Association for Computing Machinery Programming Language Design and Implementation ("ACM PLDI") Conferences, 1994.
  • the loop is restructured, if possible, to reduce the memory footprint.
  • the details are straightforward for those skilled in the art, including: (1) by similar technique or by profiling, the work of the stages is estimated; (2) if the stages may be represented by loops, then enlarging the unit of work may be implemented by the unroll-and-jam technique (A discussion of this may be found in S. Carr and K. Kennedy, "Improving the ratio of memory operations to floating-point operations in loops," ACM Transactions on Programming Languages and Systems, vol. 16, no. 6, pp.
  • stages may be represented by loops, then the reduction of unit of work may be achieved by tiling the loops with the single outer loop and then folding the iteration of this loop into the main iteration loop;
  • data-flow analysis is performed at the granularity of the declared objects and re-entrance analysis is used to identify stages that can be instantiated on multiple gangs;
  • data flow and dependence analysis is performed either by solving systems of integer equations and inequalities, or by symbolically executing the stages recording their dependencies; (7) stages are coalesced in order to eliminate cross-iteration dependencies between different stages (which simplifies synchronization);
  • processors are assigned to stages in numbers that balance the resulting run-time;
  • data objects are packed into fast regions either by using a greedy allocation heuristic or by solving a Knapsack or Subset-Sum problem; and (10) the
  • FIG. 1 is a flow diagram illustrating the basic logic scheme employed by the various embodiments of the invention.
  • the process 10 which concerns parallelization of a pipeline having stages operable on sequence of work items, begins at 12.
  • an amount of work is allocated for each work item.
  • At 16 at least one stage is assigned to each work item. Naturally, if there is but one work item, only one stage may be assigned at this point.
  • At 18, the at least one stage is partitioned into at least one team. As discussed above, it is contemplated that only one stage may be assigned.
  • the at least one stage is partitioned into at least one gang. As noted, it is contemplated that, for a particular stage, only one gang may be employed.
  • FIG. 2 illustrates an organizational scheme employed by embodiment(s) and variation(s) of the invention.
  • the processing system 30 includes M stage(s) 32 and N stage(s) 34. With an amount of work having been assigned to each work item, the M stage(s) 32 and the N stage(s) are each assigned to the work items.
  • the M stage(s) 32 and the N stage(s) 34 each are partitioned into team(s) 36, 38, respectively.
  • the team(s) 36, 38 are, in turn, partitioned into gang(s) 40, 42, respectively.
  • parallel processing of the work items is greatly facilitated so that processing is enhanced, as discussed above.
  • the data buffer 48 facilitates communication between the M stage(s) 32 and the N stage(s) 34, as discussed above. It is contemplated that the M stage(s) 32 and the N stage(s) 34 also may communicate directly with one another via another communication link, not illustrated. [00129] As may be appreciated by those skilled in the art, the stages 32, 34, the teams 36, 38, and the gangs 40, 42 may overlap with one another, either partially or completely, depending upon the processing required for a particular sequence of work items. Other variations are also contemplated.
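The four buffer functions summarized above (send_reserve, send_commit, receive_reserve, receive_commit) and the Z-slot Buffer structure of Code Segment 7 are described but not reproduced in this text. The following is a minimal C sketch of how such an interface could look, assuming busy-wait synchronization on per-slot iteration counters; it is an illustration consistent with the description above, not the patent's actual Code Segments 4, 7 and 8.

/* Minimal sketch of the M+N communication buffer described above.
 * Z is the number of slots (assumed Z >= M + N); K is the slot size in bytes.
 * The busy-wait loops on per-slot iteration counters stand in for whatever
 * synchronization (semaphores, events) a real implementation would use. */

#define Z 4                       /* number of slots, at least M + N        */
#define K 1024                    /* bytes per slot                         */

typedef struct {
    volatile int send_iter[Z];    /* next iteration allowed to write slot s */
    volatile int recv_iter[Z];    /* iteration whose data slot s now holds  */
    char slots[Z][K];
} Buffer;

void buffer_init(Buffer *b) {
    for (int s = 0; s < Z; s++) {
        b->send_iter[s] = s;      /* slot s is first written by iteration s */
        b->recv_iter[s] = -1;     /* nothing readable yet                   */
    }
}

/* Obtain the buffer area into which data for iteration i may be written. */
void *send_reserve(Buffer *b, int i) {
    int s = i % Z;
    while (b->send_iter[s] != i) { /* spin until the reader frees the slot */ }
    return b->slots[s];
}

/* Mark the data for iteration i as available to the receiving side. */
void send_commit(Buffer *b, int i) {
    b->recv_iter[i % Z] = i;
}

/* Obtain the buffer area holding the data for iteration i. */
void *receive_reserve(Buffer *b, int i) {
    int s = i % Z;
    while (b->recv_iter[s] != i) { /* spin until the writer produces it */ }
    return b->slots[s];
}

/* Indicate that the data for iteration i is no longer needed;
 * the slot becomes writable for iteration i + Z. */
void receive_commit(Buffer *b, int i) {
    b->send_iter[i % Z] = i + Z;
}

The i + Z step in receive_commit() mirrors the statement above that if the current iteration is i, then the next iteration for the same slot is i + Z, with the sender and receiver signalling each other through the counters.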
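Building on the Buffer sketch above, the per-stage driver loop described for Code Segments 5 and 6 (a team of gangs, with gang G of a stage handling iterations G, G plus the number of gangs, and so on, and PartitionedStagek(t) doing each processor's share) might be organized as follows. The names WorkStageK, PartitionedStageK, in_buf and out_buf, the gang-iteration interleaving, and the omitted gang barrier are illustrative assumptions, not the patent's code.

/* Sketch of a per-stage worker in the spirit of the WorkStage/PartitionedStage
 * split described above; it reuses the Buffer sketch from the previous block.
 * NUM_GANGS, NUM_ITERATIONS, the buffer names and the PartitionedStageK()
 * callback are illustrative, and the gang barrier is only indicated. */

#define NUM_GANGS      2          /* gangs in the team running this stage    */
#define NUM_ITERATIONS 100        /* total work items (execution counts)     */

extern Buffer in_buf, out_buf;    /* buffers shared with neighbouring stages */
extern void PartitionedStageK(int t, void *in, void *out);

/* t: index of this processor within its gang
 * G: index of this gang within the team (0 .. NUM_GANGS-1) */
void WorkStageK(int t, int G) {
    for (int i = G; i < NUM_ITERATIONS; i += NUM_GANGS) {
        void *in  = receive_reserve(&in_buf, i);   /* input for iteration i  */
        void *out = send_reserve(&out_buf, i);     /* area for the output    */

        /* run this processor's share of the stage directly on the buffers */
        PartitionedStageK(t, in, out);

        /* a barrier across the gang belongs here, so that every t has
         * finished before the slots are released (omitted in this sketch) */

        send_commit(&out_buf, i);                  /* output is now readable */
        receive_commit(&in_buf, i);                /* input slot is released */
    }
}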

Abstract

A method of parallelizing a pipeline includes stages operable on a sequence of work items. The method includes allocating an amount of work for each work item, assigning at least one stage to each work item, partitioning the at least one stage into at least one team, partitioning the at least one team into at least one gang, and assigning the at least one team and the at least one gang to at least one processor. Processors, gangs, and teams are juxtaposed near one another to minimize communication losses.

Description

METHOD AND SYSTEM FOR PARALLELIZATION OF PIPELINED COMPUTATIONS
Cross-Reference to Related Applications
[0001] This application relies for priority upon United States Provisional Patent
Application Serial No. 60/865,253, filed on November 10, 2006, the contents of which are incorporated herein by reference.
Field of the Invention
[0002] This invention relates to methods for computational pipelining. More specifically, this invention relates to a method for parallel pipelining of computations.
Description of the Related Art
[0003] The term "computational pipeline" refers to a sequence of computational steps, each of which consumes data produced by one or more previous steps. The results are then used in subsequent steps. For purposes of this discussion, "computational pipeline" will be used in this context.
[0004] The term "stage" typically refers to computational steps such as those that make up a computational pipeline. If a computational process involves more than one stage, the stages may be labeled A, B, C, etc., as part of a naming convention. Each stage, labeled "x," uses only data from preceding stages.
[0005] If the invocation of a stage is given a numerical identifier, e.g., 1, 2, 3, etc., then the i-th iteration of a step starts only after the previous steps have completed their i-th execution.
[0006] A simple example of a pipeline is one where each computational stage depends only upon its immediate predecessor. Therefore, for a simple pipeline, once a stage completes its i-th execution, the immediate successor stage may initiate its i-th execution.
[0007] As may be appreciated by those skilled in the art, stages may be executed in many possible, different orders. For a single processor machine, for a simple pipeline containing two stages, A and B, each with three invocations, 1, 2, and 3, there are numerous possible execution orders. For example, one execution order may be A0, A1, A2, B0, B1, A3, B2, etc. A standard operating order is to execute the 0s first, followed by the 1s, etc. As a result, the standard execution order is A0, B0, A1, B1, A2, B2, A3, etc. [0008] In the context of coarse pipelining, it is assumed that many processors will be relied upon. Typically, the number of processors is equal to or greater than the number of stages. As a result, in such a case, it is assumed that each stage is assigned to a different processor. A stage begins as soon as its predecessor is completed. Looking at a dynamic picture of a process, the invocations of the stages might appear as follows:
Table 1
[0009] The information that is passed between the stages, A and B, is generally of an identical shape for all execution counts of those stages. In the Arithmetic Logic Unit ("ALU") of a pipelined processor, the information may consist of only a few words. In a systolic array, the information that is passed between the stages also may consist only of a few words. The "processors" executing these stages are fixed functions with dedicated hardware to implement or effectuate the communication.
[0010] In a coarse pipeline, processors executing the stages are similar to general purpose Central Processing Units ("CPUs") without dedicated communication resources. In particular, the target is a multi-processor, using either a shared memory or message passing to communicate between processors.
[0011] In this context, it is necessary for the amount of computation per stage to be sufficient to outweigh the costs of inter-stage (i.e., inter-processor) communication. This, in turn, means that the values communicated between the stages may be fairly large, which is the case with array slices.
[0012] Coarse pipelining provides one template for distributing work to processors.
A more traditional approach to parallelization is to first complete all of the A stages, then all of the B stages, and so on. If two (2) processors are used and the two stages each include a count of four (4) invocations, the following schedule may be employed: Table 2
[0013] As might be appreciated by those skilled in the art, there are at least two drawbacks associated with this approach.
[0014] The first drawback is associated with the latency of the first result.
Specifically, the first result will be obtained only after all of the counts of every stage except the last have been executed. In the example provided in Table 2, B0 will be complete only after three (3) time units (or invocations), whereas with coarse pipelining, B0 will be complete after two (2) time units. As may be appreciated by those skilled in the art, this difference becomes more pronounced with a greater number of pipeline stages and a higher number of execution counts.
[0015] The second drawback is associated with memory requirements. In the traditional approach, the data produced by all of the counts of a particular stage is stored before the next stage is executed. In the case of coarse pipelining, some of the data produced by all of the stages needs to be stored simultaneously. If the pipeline is a simple one, only one segment of data for each stage must be stored. Further, by changing the amount of work done in each stage, it is possible to control the amount of data that must be stored.
[0016] Both latency and memory requirements may be decreased by reducing the amount of work performed by each stage. Of course, this results in an increase in the number of execution counts. Increasing the number of execution counts, however, may decrease the overall efficiency of the computation, since more time will be expended in inter-stage synchronization and communication.
[0017] It is possible to achieve similar control over both the latency and memory requirements by using a non-coarse pipelined schedule, such as the one provided in Table 3, below. Table 3
[0018] In the schedule provided in Table 3, each processor receives a packet of information and then processes that information to completion.
[0019] As may be understood by those skilled in the art, this processing schedule may not be possible when there exist forward dependencies between successive executions of the same stage. In other words, this may not be possible when the first iteration of Al cannot be started until the last iteration of AO is completed.
[0020] As may also be appreciated by those skilled in the art, another scenario in which a non-pipelined schedule may not be executable occurs when the input data for the first stage itself is being produced in a pipelined fashion. For example, if the data for A1 only becomes available some time after the data for A0, A1 cannot be initiated at the same time as A0. This scenario occurs in environments where the input is produced in a real-time stream, such as for wireless or multi-media.
Summary of the Invention
[0021] It is, therefore, one aspect of the invention to provide a method of parallelizing a pipeline having stages that perform work on a sequence of work items. The method includes selecting the amount of work each work item performs, determining the number of stages, partitioning each stage into a number of teams and number of gangs of processors in the teams, and assigning teams and gangs to specific processors. [0022] Another aspect of the invention is to provide a method where the amount of work for each work item is a function of the amount of available memory and/or overhead synchronization, and/or the number of stages is a function of load balance and/or parallelism, and/or the number of teams and gangs is a function of load balance and/or throughput. [0023] A further aspect of the invention is to provide a method where the number of gangs in a team is a function of work dependency of a stage, efficiency of parallelization, memory utilization and/or latency.
[0024] Still one additional aspect of the invention is to provide a method where the assignment to specific processors is a function of the communication network between processors and/or access to memory regions.
[0025] It is yet another aspect of the invention to provide a method where a pipeline structure exists and where selecting the amount of work, determining the number of stages and assigning processors is an adjustment of the amount of work and the number of stages and a reassignment of the processors of the existing pipeline structure to achieve parallelization.
[0026] An aspect of the invention is to provide a method that includes rewriting the code for the stages and data objects to reflect the teams and gangs; and writing code for communication of input and output data and for synchronization of the teams and gangs.
[0027] Another aspect of the invention includes analyzing dependency and data flow and eliminating spurious dependencies.
[0028] One further aspect includes providing a buffer for data produced by M teams, gangs or stages and used by N teams, gangs or stages; and wherein the buffer has at least M+N areas for data.
[0029] In still another embodiment of the invention, areas assigned to the M teams, gangs or stages and the N teams, gangs or stages are continuously reassigned so that data stored in an area by the M teams, gangs or stages in one cycle is accessed by the N teams, gangs or stages in the next or other subsequent cycles.
[0030] In yet another embodiment of the invention, the buffer has more than M+N areas for data to account for cycle differences between the availability of data to be stored and when it can be accessed.
[0031] Further, the invention includes instructions for storing data that include a send reserve function to obtain a buffer area suitable for the given iteration and a send commit function indicating that the data is stored and available for reading; and the instructions for accessing data include a receive reserve function to obtain data from a buffer area suitable for the given iteration and a receive commit function indicating that the data is no longer needed.
[0032] In addition, the invention provides for a processor system that includes parallel pipeline stages that perform work on a sequence of work items and a buffer for data produced by M stages and used by N stages, the buffer comprising at least M+N areas for data stored by the M stages and accessed by the N stages.
[0033] The processor includes features such as where the areas assigned to the M stages and the N stages are continuously reassigned so that data stored in an area by the M stages in one cycle is accessed by the N stages in the next or other subsequent cycles.
[0034] In other embodiments, the buffer has more than M+N areas for data to account for cycle differences between the availability of data to be stored and when it can be accessed.
[0035] In still further embodiments, the instructions for storing data include a send reserve function to obtain a buffer area suitable for the given iteration and a send commit function indicating that the data is stored and available for reading; and the instructions for accessing data include a receive reserve function to obtain data from a buffer area suitable for the given iteration and a receive commit function indicating that the data is no longer needed.
[0036] In at least one additional embodiment, some of the stages are partitioned into a number of teams and number of gangs in the teams; and M and N each represent the total number of teams, gangs and stages.
[0037] Still further features of the invention will be appreciated by those skilled in the art in view of the description that follows and the drawings appended hereto.
Brief Description of the Drawings
[0038] The invention will now be described in connection with the drawings appended hereto, in which:
[0039] Figure 1 is a flow diagram of the logic employed by embodiment(s) of the invention; and
[0040] Figure 2 is a block diagram illustrating the organizational scheme employed by embodiment(s) of the invention.
Description of Embodiment(s) of the Invention
[0041] Reference will be made to the figures appended hereto to illustrate one or more features of the invention.
[0042] With reference to the discussion of the related art, it has been assumed that each stage is mapped exactly onto one processor. As should be appreciated by those skilled in the art, this is not required for data processing. It is possible to implement a stage using multiple processors and different stages using differing numbers of processors.
One example of this approach is provided in Table 4, below, where five (5) processors are used.
Table 4
[0043] As may be appreciated by those skilled in the art, in some cases, as in the example provided by Table 4, the computations within a stage may include a parallelism within them that may be exploited further by using more conventional parallelization techniques. For purposes of this discussion, as would be appreciated by those skilled in the art, a group of processors that implement a single stage will be referred to as a "gang." Therefore, referring back to Table 4, stage A is implemented using a gang of three processors while stage B is implemented using a gang of two processors. As should be appreciated by those skilled in the art, one reason to rely on the use of gangs is to increase the amount of work (or increase the amount of data processed) done in parallel. When processed in this manner, the time needed to complete the task may be reduced. [0044] One additional reason to use gangs for processing is to improve processor utilization. Consider an example with two stages, A and B, where processing of stage A takes 1.5 times (1.5x) as long to complete as stage B. In this case, if stages A and B are each implemented using only one processor, stage B will be idle for about 33% of the total processing time. As may be appreciated by those skilled in the art, stage B is idle because it is "waiting" for stage A to complete its processing before both stages may proceed to the next processing cycle. However, as suggested by Table 4, if gangs of processors are utilized, it is possible to engineer processing so that stages A and B both are completed in approximately the same time period.
[0045] A "team" approach may also be employed to realize certain processing improvements. One example of this approach involves a simple pipeline with two stages, A and B. Stage A takes one (1) unit of time to complete, while stage B takes two (2) units of time to complete. Stage B_i cannot start before stage A_i is completed. To improve processor utilization, a gang of two processors may be assigned to stage B, while one processor (or a gang including only one processor) is assigned to stage A. In this example, processing proceeds as indicated by Table 5, below:
Table 5
Processor    Execution #1    Execution #2    Execution #3    Execution #4    Execution #5
#1           A0              A1              A2              A3              -
#2           -               B0              B1              B2              B3
#3           -               B0              B1              B2              B3
[0046] As may be appreciated by those skilled in the art, there are situations where it may not be possible to efficiently partition the work performed of a single stage. In such situations, it is possible to use additional processors by assigning them to the stage and executing different iterations of the stage concurrently. Table 6, below, provides an example of such an arrangement.
Table 6
Processor    Execution #1    Execution #2    Execution #3    Execution #4    Execution #5
#1           A0              A1              A2              A3              -
#2           -               B0              B0              B2              B2
[remaining row(s) of Table 6 were rendered as an image and are not reproduced]
[0047] In the example illustrated in Table 6, stage B is assigned a team of two gangs. Each gang equates to a single processor (processor #2 or processor #3) in this example. The number of gangs in the team, therefore, is the number of iterations being processed concurrently for the given stage.
[0048] A more complex example of this concept is illustrated in Table 7. In Table
7, stage B requires four (4) units of time for processing. In this example, the four time units required to process stage B may be parsed efficiently between two processors, rather than four processors. In addition, when divided between two (2) processors, stage B requires only two (2) time units to complete its processing. Accordingly, in this example, a team of two gangs may be assigned to stage B. Each gang includes two (2) processors.
Table 7
[0049] With the example illustrated in Table 7 in mind, therefore, if the work for a stage is divisible efficiently between M threads, there may be an advantage to provide N gangs of M threads assigned to a particular stage. In such a case, it may be said that the team includes M*N threads or that it includes N gangs.
[0050] Load balancing is another factor that is considered in data processing, as may be appreciated by those skilled in the art. Put simply, computation in a pipeline proceeds at the rate of the slowest stage. In the example illustrated in Table 8, the pipeline includes four stages, A, B, C, and D. If three of the stages each take one (1) unit of time to process and one of the stages takes two (2) units of time to process, a new execution count will occur once every two (2) units of time. In the example illustrated in Table 8, stage C takes two (2) units of time to process.
Table 8
[0051] Ideally, of course, it is desirable to design processing such that the amount of computation for each stage is about the same. This is called "load balancing." [0052] As discussed above, gangs may be employed to assist with load balancing.
Of course, as may be appreciated by those skilled in the art, load balancing may also include or require splitting a stage into two or more stages. Alternatively, load balancing may include combining two (2) or more stages together. Other combinations and rearrangements of work also may accomplish this same result, as may be appreciated by those skilled in the art.
[0053] Buffering also may be employed to assist with multi-stage processing. In a simple pipeline, data is written by a stage and is read by its immediate successor stage. In a simple, coarse pipeline, a segment of data is written by a stage and then consumed by its successor. As may be appreciated by those skilled in the art, reading and writing occur simultaneously. As a result, in one example, stage B will be reading the data for count n while stage A will be writing the data for count n+1. For the sake of efficiency, it is desirable for both the reading and writing operations to proceed simultaneously or in parallel. Of course, when performing the reading and writing operations simultaneously, it is desirable to avoid a situation where stage A overwrites the data being read by stage B before stage B has read all of the data. As should be appreciated by those skilled in the art, a technique referred to as "double-buffering" addresses this problem. [0054] With respect to double-buffering, the data storage area is twice as large as the data that is communicated between the stages. If the data storage area is divisible into two parts, an upper part and a lower part, both parts of the data storage area may be used at any given time. For example, during one count, the predecessor stage may write to the lower half of the data storage area while the successor stage reads from the upper half of the data storage area. During the next execution count, the positions are swapped such that data is written to the upper half of the data storage area while data is read from the lower half of the data storage area.
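As a concrete illustration of the double-buffering just described, the short C sketch below lets the writer and reader alternate between the two halves of a storage area indexed by the execution count. The array size, the function names and the omission of inter-stage synchronization are simplifying assumptions, not code from the patent.

/* Minimal illustration of double-buffering between two pipeline stages.
 * Halves 0 and 1 of 'storage' are used on alternating execution counts, so
 * the writer of count n+1 never touches the half being read for count n.
 * Inter-stage synchronization is omitted for brevity. */

#define SEGMENT_WORDS 256

static int storage[2][SEGMENT_WORDS];   /* twice the size of one data segment */

/* Predecessor stage: write the data segment for execution count n. */
void producer_write(int n, const int *segment) {
    int *half = storage[n % 2];         /* lower half on even counts, upper on odd */
    for (int w = 0; w < SEGMENT_WORDS; w++)
        half[w] = segment[w];
}

/* Successor stage: read the data segment for execution count n, while the
 * predecessor may already be writing count n+1 into the other half. */
void consumer_read(int n, int *segment) {
    const int *half = storage[n % 2];
    for (int w = 0; w < SEGMENT_WORDS; w++)
        segment[w] = half[w];
}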
[0055] Generally, there may be a number of writers, M, and a number of readers,
N, for a given pipeline. The write stage may be implemented using a team of M gangs. The reader stage may be implemented using N gangs. If so, it may be necessary to employ an extension of double-buffering, which is known as M+N buffering. [0056] In the discussion to this point, it should be understood that the lifetime of a segment of data is one (1) count. In other words, data produced by one stage during count n will be consumed by another stage during the count n+1. There may, of course, be instances where a segment of data may need to be stored for a longer period of time. In some instances, the successor stage may use data segments from two or more counts of the preceding stage. This example is illustrated in Table 9, below. In this case, the stage B0 relies on and uses the results (or data) from both stages A0 and A1.
Table 9
[0057] One possible alternative to this scheme is illustrated by Table 10. In Table 10, one or more data segments may be used by stages that do not immediately follow the producing stage. In such cases, the buffering algorithm must reserve additional space for storage of data to avoid data being overwritten during various iterations. In the illustrated example, the stage C0 uses the results of both of stages A0 and B0.
Table 10
[0058] As may be appreciated by those skilled in the art, it may also be desired to generate a pipeline schedule. A coarse pipeline schedule may be generated when it is desired for a particular task to meet a particular latency. In such a case, the computation is divided into stages, and the number of gangs per stage is selected so that each stage takes less time than the latency target. As should be appreciated by those skilled in the art, it is desirable for the processor requirements for this schedule not to exceed the total number of available processors. Once the schedule is selected, the inter-stage buffers are generated.
[0059] It is also possible to generate a coarse pipeline schedule in a memory constrained environment. If all stages are collections of loop nests, the amount of work done by each stage may be altered by changing the number of iterations of the outermost loop executed per count. In such a case, a load-balanced schedule is generated first. Then, the amount of memory required for the buffers is computed. The amount of work done by the stages is then scaled until the buffers fit within the available memory.
[0060] As may be appreciated by those skilled in the art, it is also possible to produce coarse-pipelined code manually. In one contemplated example, a standard imperative language may be used with support for parallel processing, such as C augmented by the pthreads standard threading library. Relying on this language, for example, tasks may be manually partitioned into stages. In addition, the code for each stage may be partitioned into separate functions. Each function, then, may be invoked on a unique processor using an explicit pthread_create() call or its equivalent. With a shared-memory multi-processor, communication between the threads/stages may be effectuated via the memory, with synchronization performed manually or via a mechanism such as semaphores.
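By way of illustration only, a minimal sketch of this manual approach is given below in C with pthreads; the stage functions are hypothetical placeholders, and the shared-memory communication and semaphore synchronization described above are omitted for brevity:

#include <pthread.h>
#include <stdio.h>

/* Hypothetical stage bodies; in a real pipeline each would loop over its */
/* sequence of work items and communicate through shared buffers.         */
static void *stage1_main(void *arg) { (void)arg; printf("stage 1\n"); return NULL; }
static void *stage2_main(void *arg) { (void)arg; printf("stage 2\n"); return NULL; }

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, stage1_main, NULL);   /* one thread per stage */
    pthread_create(&t2, NULL, stage2_main, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}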
[0061] For the sake of simplicity, a pipeline computation may be expressed as in
Code Segment 1, which follows:
for iteration i = 0, 1, 2, ... {
    Stage1();
    Stage2();
    ...
    StageN();
}
Code Segment 1
[0062] As may be appreciated by those skilled in the art, it is not required for the computation to be expressed in the form provided by Code Segment 1. However, any pipeline computation is expressible in this form, since the computation consists of stages that perform work on a sequence of work items (or iterations).
[0063] Parallelization may include the following steps: (1) work unit adjustment,
(2) stage adjustment, and (3) assignment of gangs and teams to stages. With respect to work unit adjustment, the amount of work for each work item may be adjusted in at least one of two ways. The work items may be sub-divided per stage to reduce memory consumption. Alternatively, the work items may be enlarged per stage to reduce synchronization overhead. With respect to stage adjustment, one or more stages may have to be combined or sub-divided to improve load balance or to expose opportunities for parallelization. With respect to the assignment of gangs and teams to stages, gangs and teams of processors may be assigned to stages based on load balancing or real-time requirements. For example, if a stage k requires Wk units of work, it is desirable for the ratio Wk/Pk to be approximately the same for all of the stages, where Pk is the number of processors assigned to stage k.
[0064] For real-time throughput requirements, it is presumed that the pipeline must complete work on each item within a time T. If so, a minimum number of processors assigned to each stage will be selected according to equation (1), which is provided below.
Pk = ceiling(Wk / T) (1)
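By way of a purely illustrative example (the numbers are not taken from the disclosure), if a stage k requires Wk = 30 units of work and each work item must be completed within T = 8 units of time, then equation (1) yields Pk = ceiling(30/8) = 4, so at least four processors would be assigned to that stage.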
[0065] There are at least four factors that guide the division of teams into gangs: (1) work dependencies, (2) efficiency of parallelization, (3) memory utilization, and (4) latency requirements. With respect to work dependencies, if the work for stage k on iteration count j depends on the work for stage k on the iteration count j-1, the two iterations cannot be executed concurrently. In this case, the team for the stage cannot be subdivided. With respect to efficiency of parallelization, the computation for a stage may be inherently sequential or may be split only into a preferred number of partitions. With respect to memory utilization, if a stage uses M gangs, the stage requires M copies of the associated data. With respect to latency requirements, relying on the use of more gangs, which work independently, increases the time for any given iteration to be processed.
[0066] Assignment to physical processors also is a factor taken into account with respect to parallel processing. As may be appreciated by those skilled in the art, the cost of communicating between different processors of a parallel computer is not uniform. Depending upon the communication network, some pairs of processors may communicate with one another more quickly than other pairs of processors. Similarly, the cost of access to various memory regions is not uniform. Accordingly, the performance of the final parallel program depends, at least in part, upon the assignment of stages to processors and data to memories (or memory allocations).
[0067] Provided in the order of importance, there are three guidelines to be followed with respect to assignment of stages to physical processors. First, it is desirable to assign gangs to processors that are closest together. Second, it is desirable to assign teams to processors that are close together. Third, if stage A and stage B communicate with one another, it is desirable to map the gangs for stages A and B in such a manner as to minimize the cost of transmitting data from stage A to stage B or vice versa. After work unit adjustment, stage adjustment, and the assignment of gangs and/or teams to processors, a transformation of the original pipeline is created that follows Code Segment 2, below:
for iteration i' = 0, 1, 2, ... {
    Stage1'();
    Stage2'();
    ...
    StageN'();
}
Code Segment 2
[0068] Implementation of the parallel program will now be discussed. After adjustment, the new stages and iterations may differ in size from the original pipeline. As may be appreciated by those skilled in the art, the new stages and iterations continue to constitute a pipeline. For each stage k, certain parameters are realized, including the gang size Pk, the number of gangs in the team Mk, and the assignment of the processors in the team to physical processors.
[0069] The process of implementing the final parallel program may be divided into the following steps or portions (among others), as would be appreciated by those skilled in the art: (1) stage rewriting, (2) data object rewriting, and (3) communication generation. With respect to stage rewriting, each iteration of the stage k may be partitioned among Pk processors, since Pk refers to the size of each gang. It is desirable for the code for the stage to be partitioned. The result may be summarized as a sub-program PartitionedStagek(t), where t is the index of the processor within the gang that executes the partition. With respect to data object rewriting, references to the original data objects used by the stage are transformed to refer to areas where the objects are allocated. With respect to communication generation, it is desirable to insert code to communicate input and output data. In addition, it is desirable to insert code to synchronize processes in the gangs.
[0070] An example of code to be run on each physical processor is provided by
Code Segment 3, below:
WorkStagek(p : processor_index) {
    Let t be the index of the processor in its gang
    Let G be the index of the gang within the team
    Let Mk be the number of gangs
    for iteration i = G, G+Mk, G+2*Mk, ..., G+j*Mk, ... {
        receive data from the preceding stages
        PartitionedStagek(t);
        send data to the succeeding stages
    }
}
Code Segment 3
[0071] The final parallel program consists of several code fragments, like the one set forth above, one for each stage, together with the means for starting these fragments on the parallel processors.
[0072] An efficient implementation of communication buffers will now be described. Here, it is assumed that the parallel computer is capable of copying data between different memory regions. On a shared memory parallel computer, this may include nothing more than a library routine for explicitly copying data. As may be appreciated by those skilled in the art, the copying functionality may be a hardware mechanism specially designed to copy asynchronously. On a distributed memory machine, the copying functionality may be a library routine implemented on top of message passing primitives. For clarity, it is assumed that the computer incorporates a shared memory and that a library routine "copy(dest,src,n)" is available to copy n bytes from location "src" to location "dest". Of course, as may be appreciated by those skilled in the art, this assumption does not preclude implementation of the techniques on a distributed memory machine.
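On a shared memory machine, one minimal realization of such a routine, given purely for illustration, is a thin wrapper around memcpy(); an asynchronous hardware copy engine or a message-passing implementation would expose the same interface:

#include <string.h>

typedef unsigned char byte;

/* Copy n bytes from location "src" to location "dest". */
static void copy(byte *dest, const byte *src, int n) {
    memcpy(dest, src, (size_t)n);
}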
[0073] The communication buffer may be an abstract data type with at least the following functions, coded in C language for clarity, as embodied in Code Segment 4:
typedef ... Buffer;
byte *send_reserve(Buffer buf, int iteration);
void send_commit(Buffer buf, int iteration);
byte *receive_reserve(Buffer buf, int iteration);
void receive_commit(Buffer buf, int iteration);
Code Segment 4
The send_reserve() function obtains the buffer area suitable for the given iteration. The send_commit() function marks the data as available for use by the receiving side. The receive_reserve() function obtains the data suitable for the given iteration. The receive_commit() function indicates that the data is no longer needed.
[0074] Using these functions, the send and receive phases of the WorkStagek() routine may be implemented, for example, as in Code Segment 5:
sending data:
    p = send_reserve(buffer, i);
    copy outgoing data into p
    send_commit(buffer, i);
receiving data:
    q = receive_reserve(buffer, i);
    copy incoming data out of q
    receive_commit(buffer, i);
Code Segment 5
[0075] In many cases, copying data to/from communication buffers may be avoided if the PartitionedStagek() routine uses the data directly from the buffers. In such a case, a possible implementation of WorkStagek() is detailed in Code Segment 6, below:
WorkStagek(proc : processor_index) {
    Let t be the index of the processor in its gang
    Let G be the index of the gang within the team
    Let Mk be the number of gangs
    for iteration i = G, G+Mk, G+2*Mk, ..., G+j*Mk, ... {
        p = send_reserve(Buffer, i);
        q = receive_reserve(Buffer, i);
        PartitionedStagek(p, q, t);
        send_commit(Buffer, i);
        receive_commit(Buffer, i);
    }
}
Code Segment 6
[0076] Possible implementations of buffer communication functionality will now be discussed.
[0077] Uniform memory access is also an area to be addressed with respect to pipelined processing. In this context, it is assumed that K bytes are to be communicated from stage A to stage B. Furthermore, it is assumed that there are M gangs associated with stage A, and N gangs associated with stage B. It is also assumed that there are at most M+N iterations of the pipeline being executed concurrently between the teams for stage A and for stage B. As such, at least M+N instances of the data items communicated between stages A and B must be maintained. Each instance is referred to as a "slot".
[0078] It is assumed in this example that Z is the total number of slots. It is also assumed that each slot has a size K. Each iteration i is assigned a slot in round-robin fashion according to the following formula:
b = i mod Z (2)
[0079] The communication buffer abstract data type is comprised of three arrays of length Z each. For definitional purposes, send_iter is an array of integers, recv_iter is an array of integers, and slots is an array of slots. In C language, this may be expressed as illustrated below in Code Segment 7:
typedef struct {
    int send_iter[Z];
    int recv_iter[Z];
    byte slots[Z][K];
} Buffer;
Code Segment 7
[0080] Initially, recv_iter[i] is set to -1 as follows: recv_iter[i] = -1, for all i between 0 and Z-1. In addition, the variable send_iter[i] is set to i as follows: send_iter[i] = i, for all i between 0 and Z-1. The implementation of the send/receive functions follows. For clarity, C language notation is used. Equivalent implementations can be obtained using any standard programming language, such as C++, assembly, or Matlab, as would be appreciated by those skilled in the art. Code Segment 8 provides one example.
byte *send_reserve(Buffer *B, int iteration) {
    int slot = iteration % Z;
    wait until B->send_iter[slot] == iteration;
    return B->slots[slot];
}
void send_commit(Buffer *B, int iteration) {
    int slot = iteration % Z;
    B->recv_iter[slot] = iteration;
}
byte *receive_reserve(Buffer *B, int iteration) {
    int slot = iteration % Z;
    wait until B->recv_iter[slot] == iteration;
    return B->slots[slot];
}
void receive_commit(Buffer *B, int iteration) {
    int slot = iteration % Z;
    B->send_iter[slot] = iteration + Z;
}
Code Segment 8
[0081] The reserve routine(s) operate by waiting until the slot assigned to iteration i is ready for them. When slot s is ready for the sender, send_iter[s] = i. Similarly, when slot s is ready for the receiver, recv_iter[s] = i.
[0082] Conversely, the commit routines indicate when the slot is ready for the next operation. If the current iteration is i, then the next iteration is i+Z. The sender signals the receiver and vice versa.
[0083] The benefit of this organization is that it requires no explicit synchronization between the multiple senders and receivers. The synchronization is implicit due to the round-robin assignment of iterations to slots.
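As a purely illustrative example of this implicit synchronization: with Z = 3, iterations 0, 1 and 2 occupy slots 0, 1 and 2, and iteration 3 maps back to slot 0, so its send_reserve() simply waits until receive_commit() for iteration 0 has set send_iter[0] = 3. The "wait until" operations of Code Segment 8 may be realized as polling loops, as sketched below for the sender side (the receiver side is symmetric); the sketch reuses the Buffer type of Code Segment 7, assumes that aligned int loads are atomic on the target, and omits the memory fences or C11 atomics that a production implementation would add:

byte *send_reserve(Buffer *B, int iteration) {
    int slot = iteration % Z;
    while (B->send_iter[slot] != iteration) {
        /* busy-wait until the slot is released for this iteration; */
        /* a pause or yield hint may be inserted here               */
    }
    return B->slots[slot];
}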
[0084] Non-uniform memory access is another area to be addressed in pipelining.
[0085] The Buffer operation may be modified slightly to deal with gangs distributed across multiple processors and multiple memory regions.
[0086] Again, it is assumed that stage A sends data to stage B. Also, it is assumed that M gangs are assigned to stage A and that N gangs are assigned to stage B. Furthermore, it is assumed that there are M memory regions R1 ... RM and N memory regions Q1 ... QN, which have affinity to the gangs. In other words, suppose that j is one of the gangs running stage A. Then, the processors in that gang may access memory region Rj more quickly than other regions. Similarly, suppose that k is one of the gangs running stage B. In this case, the processors in that gang may access memory region Qk more quickly than other regions. Moreover, it is assumed that storing into any "far" memory region is faster than loading from that "far" memory region.
[0087] As may be appreciated by those skilled in the art, a number of implementations are possible that reduce non-local memory accesses. In one implementation, Z = W*N slots are used. In this case, the number of slots satisfies the formula Z ≥ M+N. One possible choice, among others, is W = ceiling((M+N)/N). In this case, rather than considering the send_iter[], recv_iter[] and storage[] arrays as being indexed by a one-dimensional slot index, they may be considered as being indexed by a two-dimensional slot index, which is detailed in Code Segment 9, as follows:
typedef struct {
    int send_iter[N][W];
    int recv_iter[N][W];
    byte storage[N][W][K];
} Buffer;
Code Segment 9
[0088] The 2-D slot index may be described as follows: <u,v> = <s/W, s mod W>, where s = i mod (W*N). As such, each array segment or slice, recv_iter[j][] and storage[j][][], may be stored in the region Qj. In this manner, the receiver gang that performs the iteration i need only load from the local region Qj, with j = i mod N. The sender teams then perform non-local stores into the recv_iter[][] and storage[][][] arrays. On many computers, these stores may be done asynchronously. The only possible non-local loads are for sender gangs from the send_iter[][] array. In many cases the cost of this operation is much lower than the cost of accessing the storage and, therefore, may be discounted.
[0089] An implementation which completely eliminates non-local loads may use Z ≥ M+N, where Z is also divisible both by M and by N. The smallest such Z is Z = L*ceiling((M+N)/L), where L is the lowest common multiple of M and N. This may be expressed as: Z = W*N = U*M.
[0090] Following this logic, those skilled in the art may appreciate that the recv_iter and storage arrays are still N-by-W arrays. The send_iter[][] array is an M-by-U array. The index into the send_iter array, then, may be expressed in the following manner: <a,b> = <s/U, s mod U>. In this case, the segment or slice send_iter[k][] may be allocated to the region Rk.
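Purely as an illustration of the index arithmetic above (and not code from the disclosure), the helpers below compute the two-dimensional slot indices for the receiver-side arrays and for the sender-side send_iter array, with W, N, U and M as defined in the text and Z = W*N = U*M:

typedef struct { int first, second; } Index2D;

/* Receiver-side index: <u,v> = <s/W, s mod W>, where s = i mod (W*N). */
static Index2D recv_slot(int i, int W, int N) {
    int s = i % (W * N);
    Index2D idx = { s / W, s % W };
    return idx;
}

/* Sender-side index into send_iter: <a,b> = <s/U, s mod U>, where s = i mod (U*M). */
static Index2D send_slot(int i, int U, int M) {
    int s = i % (U * M);
    Index2D idx = { s / U, s % U };
    return idx;
}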
[0091] Pipelining also considers allocation of buffers between non-adjacent stages.
[0092] Consider the pipeline illustrated in Table 11. In this example, the stages B and C depend on stage A. Stage C also depends on stage B. For discussion purposes, one (1) gang is assigned to stage A, one (1) gang is assigned to stage C, and ten (10) gangs are assigned to stage B. As a result, ten (10) concurrent iterations of stage B are executed. It follows, then, that only one (1) iteration each of stages A and C is executed concurrently.
[0093] Let (b, b+10) be the range of iterations executed by the gangs of stage B at any point in time. Let c be the iteration of stage C and a be the iteration of stage A. It follows, then, that a ≥ b+10, since stage A must have completed all the iterations on which stage B is working. And c < b, for similar reasons. Therefore, a-c ≥ 11. This means that stages A and C are at least 11 iterations apart. Table 11 illustrates this example.
Table 11
[0094] Therefore, as may be appreciated by those skilled in the art, for the example illustrated in Table 11, there must be at least 12 slots for the buffer between stage A and stage C.
[0095] In general, the bounds on the minimum number of slots required between any two stages may be obtained by writing down all the constraints on the ranges of concurrent iterations for each of the stages. Then, for each pair of communicating stages X->Y, the minimum distance may be determined between the earliest iteration of X and the latest iteration of Y. Since the ranges and the constraints between the iterations are linear inequalities, linear programming techniques may be used to determine these distances. Alternatively, a graph-theoretical formulation is possible that propagates the distances top-down within the dependence Directed Acyclic Graph ("DAG") of stages. The details of this formulation should be apparent to those skilled in the art.
[0096] Automatic parallelization by a compiler is also considered when pipelining a program.
[0097] With appropriate tools, the actual mechanics of implementing the partitioning, figuring out the buffer sizes and synchronization may be done by a tool such as a compiler or translator, under programmer control. Generally, this is accomplished by the programmer, who may add tool directives to the code (such as pragmas or special comments), which then cause the appropriately coarse-pipelined code to be generated.
[0098] Depending on the sophistication of the tool, the programmer may be required to do more or less work. Ideally, the only constraints on the tool should be: (a) available memory, (b) desired latency, and (c) number of processors. Less sophisticated tools may require the programmer to identify the stage boundaries, gang sizes, etc., as may be appreciated by those skilled in the art.
[0099] Code Segment 10 provides one example of a sequential pipeline, which is an input program written in the pipeline form.
for iteration i = 0, 1, 2, ... {
    stage1(Data1);
    ...
    stageN(DataN);
}
Code Segment 10
[00100] While this example is provided, as should be appreciated by those skilled in the art, it should not be understood to be limiting of the invention. To the contrary, the input language need not have this exact lexical structure. The program may be written in any number of languages such as C, Fortran or Matlab, among others, as would be appreciated by those skilled in the art. Whichever language a programmer selects, it is assumed that the programmer may transform the code into the form illustrated by Code Segment 10. The details of such a transformation from either an imperative or a functional programming language are trivial, as should be apparent to those skilled in the art.
[00101] In addition, for this example, it is assumed that dependence and data-flow analysis is performed on the program. The result is the set of dependence or data-flow relations as follows:
(1) Dependence: { D: <A,k>-><B,j> | P(k,j) }. This means that iteration j of stage B depends on iteration k of stage A, and that the predicate P(k,j) is satisfied. Moreover, the dependence arises from the fact that both stage A and stage B read or write within the set of data objects D and that one of the stages writes within the set.
(2) Data-flow: { D: <A,k>-><B,j> | Q(k,j,D) }. This means that the set of data objects D is produced at iteration k of stage A and is consumed by iteration j of stage B. The iterations and the set satisfy the predicate Q(k,j,D). In fact, a data-flow dependence is an instance of a "read-write" dependence.
[00102] Data-flow relations tell us precisely which data objects have to be communicated. Dependence relations tell us which iterations have to be synchronized.
[00103] Reference is now made to Code Segment 11, below.
for iteration i = 0, 1, 2, ... {
    wait for all incoming dependence relations to be satisfied
    stagek(Datak);
    mark all outgoing dependencies as satisfied
}
Code Segment 11
[00104] The sequential program in Code Segment 11 serves as the basis from which a parallel program is generated that does not violate dependence and data-flow relations. Since the target parallel computer is assumed to have a shared memory, code may be generated for each stage that is consistent with the code illustrated in Code Segment 11. A map data structure, such as a hash table, can be used for each dependence relation <A,k>-><B,j> to mark which iterations of stage B become eligible for execution after the iteration k of stage A.
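A minimal sketch of such bookkeeping is given below; it is illustrative only, it uses a fixed circular window of iterations in place of a true hash table, and the names DepMap, mark_satisfied() and is_eligible() are assumptions made for this example:

#define WINDOW 64    /* iterations of stage B tracked concurrently */

typedef struct {
    /* pending[j % WINDOW] is initialized to the number of incoming */
    /* dependence relations of iteration j when j enters the window */
    int pending[WINDOW];
} DepMap;

/* Called when iteration k of stage A completes, satisfying <A,k> -> <B,j>. */
static void mark_satisfied(DepMap *m, int j) {
    m->pending[j % WINDOW] -= 1;
}

/* Stage B may begin iteration j only once all incoming dependencies are satisfied. */
static int is_eligible(const DepMap *m, int j) {
    return m->pending[j % WINDOW] == 0;
}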
[00105] Consider the simple pipeline illustrated in Code Segment 12, which is provided below.
for i = 0, 1, 2, ... {
A:  x = ...;
B:  ... x ...;
}
Code Segment 12
[00106] There are three dependence relations in this program: (1) a write-read dependence from <A,i> to <B,i>, (2) a write-write dependence from <A,i> to <A,j> such that j > i, and (3) a read-write dependence from <B,i> to <A,j> such that j > i. Intuitively, only the first write-read dependence represents the actual flow of data in the application.
The other two dependencies result from the same data object x being used at every iteration.
[00107] Data object expansion, such as scalar or array expansion, is illustrated in Code Segment 13.
for i = 0, 1, 2, ... {
A:  x[i] = ...;
B:  ... x[i] ...;
}
Code Segment 13
[00108] Data object expansion is performed to eliminate spurious dependencies. In the example provided by Code Segment 13, x is made into an array indexed by the iteration i. Although infinitely many iterations are possible, an infinite number of objects x[i] is not allocated in this example. For simplicity, only as many objects as there are gangs executing the given stage are allocated here.
[00109] Code Segment 14 provides an example of program code after renaming and introduction of copies.
for i = 0, 1, 2, ... {
A:  xA[i] = ...;
    copy xA[i] to xTmp[i];
B:  copy xTmp[i] to xB[i];
    ... xB[i] ...;
}
Code Segment 14
[00110] As may be appreciated by those skilled in the art, the program in Code
Segment 14 still may not be suitable for direct translation into a parallel program. It may remain necessary to specify where the object x is to be stored. In fact, several copies of x may need to be maintained: (1) as used by stage A, (2) as communicated between stage A and stage B, and (3) as used by stage B. The object x may need to be separated into local copies for stage A and stage B, and the "message" to be sent between stage A and stage B.
[00111] Even on a shared memory multi-processor, it may be necessary to be explicit about data movement, since bulk copies between memory regions might be more efficient than individual reads and writes. Moreover, if bulk copies are not possible between the regions closest to the processors executing stages A and B, then it may be necessary to copy x through a number of stages. As understood by those skilled in the art, the message might be implemented in a number of ways, including an asynchronous Direct
Memory Access ("DMA") copy.
[00112] For this example, suppose a third array xTmp[] is allocated as the communication buffer between stage A and stage B. Then, the code is transformed into the form illustrated in Figure 6.
[00113] At this point, the stages may be split onto separate processors, as illustrated in Code Segment 15.
Processors for stage A execute:
for i = 0, 1, 2, ... {
    xA[i] = ...;
    copy xA[i] to xTmp[i];
}
Processors for stage B execute:
for i = 0, 1, 2, ... {
    copy xTmp[i] to xB[i];
    ... xB[i] ...;
}
Code Segment 15
[00114] This code still uses potentially infinite arrays xA[], xB[] and xTmp[].
Observe that xA[] requires at least as many elements as there are concurrently executing iterations of stage A. This is the number of gangs allocated to stage A, call it M. Similarly, xB[] needs as many elements as there are gangs allocated to stage B, call it N. Since xTmp[] is accessed by both stage A and stage B, it needs M+N elements. If there are fewer elements in any of the arrays, then some of the gangs will have to wait before proceeding.
[00115] The Buffer data structure, described above, may now be used to store the xTmp array. Thus at each iteration i, the array element xTmp[i mod Z] is accessed for some Z≥M+N. Making this allocation and the assignment of work to gangs explicit, the parallel code is illustrated in Code Segment 16.
Processors for gang G for stage A execute:
for i = G, G+M, G+2*M, ... {   /* there are M gangs */
    xA[G] = ...;
    wait for xTmp[i mod Z] to become available;
    copy xA[G] to xTmp[i mod Z];
}
Processors for gang H for stage B execute:
for i = H, H+N, H+2*N, ... {   /* there are N gangs */
    wait for xTmp[i mod Z] to contain iteration i;
    copy xTmp[i mod Z] to xB[H];
    ... xB[H] ...;
}
Code Segment 16
[00116] The remaining step is to allocate the (now finite) arrays xA[], xB[] and xTmp[] to actual memory regions. One solution is to identify, for each of the arrays, the memory regions in order of preference, or to identify, for each memory region and array, the cost of allocating the array in the region. Then, various heuristics can be used to solve the resulting graph-theoretical or integer-programming problems.
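One such heuristic, purely by way of example, is a greedy packer that places arrays into the fast region in decreasing order of estimated benefit per byte; the Array structure, its field names, and pack_greedy() are assumptions made for this sketch and are not part of the disclosure:

#include <stdlib.h>

typedef struct {
    const char *name;     /* e.g., "xA", "xB", "xTmp"                              */
    int size;             /* bytes required by the array                           */
    int benefit;          /* estimated gain from placing the array in fast memory  */
    int in_fast;          /* output: 1 if packed into the fast region              */
} Array;

/* Order arrays by decreasing benefit/size without floating point. */
static int by_density(const void *a, const void *b) {
    const Array *x = (const Array *)a, *y = (const Array *)b;
    long lhs = (long)y->benefit * x->size;
    long rhs = (long)x->benefit * y->size;
    return (lhs > rhs) - (lhs < rhs);
}

/* Greedily pack arrays into a fast region of the given capacity (in bytes); */
/* arrays that do not fit are left in slow memory. This is only a sketch,    */
/* not the Knapsack or Subset-Sum formulation itself.                        */
static void pack_greedy(Array *arrays, int n, int fast_capacity) {
    qsort(arrays, n, sizeof(Array), by_density);
    for (int i = 0; i < n; i++) {
        if (arrays[i].size <= fast_capacity) {
            arrays[i].in_fast = 1;
            fast_capacity -= arrays[i].size;
        } else {
            arrays[i].in_fast = 0;
        }
    }
}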
[00117] For example, if there is one fast memory region for stage A, then heuristics for the Knapsack problem may be used to pack the arrays used in stage A into the region. Knapsack problems are described by M. Garey and D. Johnson in "Computers and Intractability," published by W. H. Freeman and Co., 1979. If the region is divided into multiple banks or sub-regions, then heuristics for Subset Sum problems may be used. Other graph-theoretical or integer programming formulations are possible.
[00118] As should be appreciated by those skilled in the art, the code in Figure 7 is not complete. Other details can be filled in by those skilled in the art. The details include: (1) the use of the underlying Operating System ("OS") Application Programming Interfaces ("APIs") to execute the code on the desired processors, (2) run-time implementation of the map of physical processors onto teams and gangs, and (3) implementation of copying using the efficient machine APIs.
[00119] In general, the parallelization process follows these steps: (1) heuristic work item adjustment; (2) heuristic stage size adjustment; (3) allocation of processors to gangs and teams, and to stages; (4) dependence and data-flow analysis; (5) elimination of spurious dependencies by scalar expansion; (6) renaming of objects and generation of copies; and (7) allocation of objects to local and communication storage.
[00120] Observe that, in general, the problem of finding the optimal solutions to the steps above can be viewed as an Artificial Intelligence ("AI") search problem in the space of possible stage sizes, work unit sizes and processor assignments. In some cases, the number of stages can be small enough to be able to enumerate possible solutions. In other cases, AI search techniques might be necessary.
[00121] Another possibility is to let the programmer annotate the code with stage sub-divisions and processor assignments and let the compiler perform more menial tasks of allocating storage and generating parallel code.
[00122] A potential implementation that simplifies some of the analysis sub-problems is described below.
[00123] If the stages are nested loops with linear bounds, then a memory footprint is estimated for each stage and iteration by counting feasible points in the corresponding polyhedra. A discussion of this may be found in the article by William Pugh, "Counting solutions to Presburger formulas: how and why," in the Proceedings of the Association for Computing Machinery Programming Language Design and Implementation ("ACM PLDI") Conference, 1994.
[00124] Accordingly, the loop is restructured, if possible, to reduce the memory footprint. The details are straightforward for those skilled in the art, including: (1) by a similar technique or by profiling, the work of the stages is estimated; (2) if the stages may be represented by loops, then enlarging the unit of work may be implemented by the unroll-and-jam technique, a generic illustration of which is given after this list (a discussion of this may be found in S. Carr and K. Kennedy, "Improving the ratio of memory operations to floating-point operations in loops," ACM Transactions on Programming Languages and Systems, vol. 16, no. 6, pp. 1768-1810, 1994); (3) if the stages may be represented by loops, then the reduction of the unit of work may be achieved by tiling the loops with a single outer loop and then folding the iteration of this loop into the main iteration loop; (4) in the source program, only the data objects declared within the scope of the iteration loop are considered candidates for expansion; (5) data-flow analysis is performed at the granularity of the declared objects, and re-entrance analysis is used to identify stages that can be instantiated on multiple gangs; (6) data flow and dependence analysis is performed either by solving systems of integer equations and inequalities, or by symbolically executing the stages and recording their dependencies; (7) stages are coalesced in order to eliminate cross-iteration dependencies between different stages (which simplifies synchronization); (8) processors are assigned to stages in numbers that balance the resulting run-time; (9) data objects are packed into fast regions either by using a greedy allocation heuristic or by solving a Knapsack or Subset-Sum problem; and (10) the sizes of the communication buffers are determined using the techniques described herein.
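As a generic illustration of the unroll-and-jam transformation mentioned in item (2) above (and not code from the disclosure), the outer loop of a matrix-vector product is unrolled by a factor of two and the resulting inner loops are fused, which enlarges the unit of work per outer iteration; N is assumed even for brevity:

/* Before: original loop nest. */
static void mat_vec(int N, int M, double c[N], double a[N][M], const double b[M]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            c[i] += a[i][j] * b[j];
}

/* After unroll-and-jam by 2: the outer loop is unrolled and the two inner  */
/* loops are jammed, so each outer iteration carries twice the work and     */
/* b[j] is reused across two rows.                                          */
static void mat_vec_uaj(int N, int M, double c[N], double a[N][M], const double b[M]) {
    for (int i = 0; i < N; i += 2)
        for (int j = 0; j < M; j++) {
            c[i]     += a[i][j]     * b[j];
            c[i + 1] += a[i + 1][j] * b[j];
        }
}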
[00125] With the foregoing in mind, reference is now made to the figures appended hereto. The figures are intended merely to be illustrative of the embodiment(s) and variation(s) discussed herein. As should be appreciated by those skilled in the art, there are numerous additional variations and equivalents that may be employed without departing from the scope of the invention. Those variations and equivalents are intended to be encompassed by the invention.
[00126] Figure 1 is a flow diagram illustrating the basic logic scheme employed by the various embodiments of the invention. As illustrated, the process 10, which concerns parallelization of a pipeline having stages operable on a sequence of work items, begins at 12. At 14, an amount of work is allocated for each work item. At 16, at least one stage is assigned to each work item. Naturally, if there is but one work item, only one stage may be assigned at this point. At 18, the at least one stage is partitioned into at least one team. As discussed above, it is contemplated that only one team may be assigned. At 20, the at least one team is partitioned into at least one gang. As noted, it is contemplated that, for a particular stage, only one gang may be employed. At 22, the at least one team and the at least one gang are assigned to at least one processor. At 24, the process ends.
[00127] Figure 2 illustrates an organizational scheme employed by embodiment(s) and variation(s) of the invention. As illustrated, the processing system 30 includes M stage(s) 32 and N stage(s) 34. With an amount of work having been assigned to each work item, the M stage(s) 32 and the N stage(s) 34 are each assigned to the work items. The M stage(s) 32 and the N stage(s) 34 each are partitioned into team(s) 36, 38, respectively. The team(s) 36, 38 are, in turn, partitioned into gang(s) 40, 42, respectively. With this structure, parallel processing of the work items is greatly facilitated so that processing is enhanced, as discussed above.
[00128] Communication links 44, 46 connect the M stage(s) 32 and the N stage(s)
34 to a data buffer 48 with at least M+N data areas, as discussed above. The data buffer 48 facilitates communication between the M stage(s) 32 and the N stage(s) 34, as discussed above. It is contemplated that the M stage(s) 32 and the N stage(s) 34 also may communicate directly with one another via another communication link, not illustrated.
[00129] As may be appreciated by those skilled in the art, the stages 32, 34, the teams 36, 38, and the gangs 40, 42 may overlap with one another, either partially or completely, depending upon the processing required for a particular sequence of work items. Other variations are also contemplated.
[00130] As may be appreciated by those skilled in the art, the examples provided herein are merely exemplary of the scope and breadth of the invention. The invention, therefore, encompasses not only the embodiments described herein but also all equivalents thereto.

Claims

What is claimed:
1. A method of parallelizing a pipeline having at least one stage operable on a sequence of work items, the method comprising: allocating an amount of work for each work item; assigning at least one stage to each work item; partitioning the at least one stage into at least one team; partitioning the at least one team into at least one gang; and assigning the at least one team and at least one gang to at least one processor.
2. The method of claim 1, wherein the at least one gang comprises a plurality of gangs and the at least one processor comprises a plurality of processors.
3. The method of claim 1 , wherein the amount of work allocated for each work item is a function of at least one of available memory and synchronization overhead.
4. The method of claim 1, wherein the at least one stage comprises a number of stages assigned based upon a function of at least one of load balance and parallelism.
5. The method of claim 1, wherein a number of teams and a number of gangs is determined based upon a function involving at least one of load balance and throughput.
6. The method of claim 1 , wherein the number of gangs within the at least one team is a function of at least one of work dependency of the at least one stage, efficiency of parallelization, memory utilization, and latency.
7. The method of claim 1, wherein assignment of the at least one team and the plurality of gangs to specific processors is a function of at least one of a communication network between the plurality of processors and access to memory regions within the processors.
8. The method of claim 7, wherein gangs are assigned to processors that are adjacent to one another to minimize communication costs therebetween.
9. The method of claim 7, wherein teams are assigned to gangs that are adjacent to one another to minimize communication costs therebetween.
10. The method of claim 1, further comprising: providing a buffer for data produced by at least one of M teams, M gangs or M stages and used by at least one of N teams, N gangs or N stages; and wherein the buffer has at least M+N data areas.
11. The method of claim 10, wherein the data areas are continually reassigned such that data stored by the M teams, M gangs or M stages in one cycle is accessible by the N teams, N gangs or N stages in a subsequent cycle.
12. The method of claim 10, wherein the buffer has more than M+N data areas.
13. The method of claim 10, wherein the instructions for storing data include a send reserve function to obtain a buffer area suitable for the given iteration and a send commit function indicating that the data is stored and available for reading; and wherein the instructions for accessing data include a receive reserve function to obtain data from a buffer area suitable for the given iteration and a receive commit function indicating that the data is no longer needed.
14. The method of claim 1, wherein, for a plurality of stages: an initial stage, k, requires Wk units of work to complete; a subsequent stage, k+1, requires Wk+1 units of work to complete; the initial stage, k, is assigned to a plurality of processors Pk; the subsequent stage, k+1, is assigned to a plurality of processors Pk+1; and the initial stage and the subsequent stage satisfy the equation
15. The method of claim 14, wherein the plurality of processors Pk is set to a minimum according to the equation
Pk = ceiling(Wk / T)
where T is the time required to complete processing of the initial stage, k.
16. The method of claim 15, wherein the plurality of processors Pk+1 is set to a minimum according to the equation
Pk+1 = ceiling(Wk+1 / T)
where T is the time required to complete processing of the subsequent stage, k+1.
17. A processor system, comprising:
M stages performing work on a sequence of work items, thereby generating first data;
N stages performing work on a sequence of work items in parallel with the M stages, the N stages operating on the first data generated by the M stages, thereby generating second data; a data buffer in communication with the M stages and the N stages, the data buffer receiving the first data and the second data, wherein the buffer comprises at least M+N data areas.
18. The processor system of claim 17, wherein the data areas are continuously reassigned so that at least the first data stored in one cycle is accessed by the N stages in a subsequent cycle to generate the second data.
19. The processor system of claim 18, wherein the buffer comprises more than M+N data areas.
20. The processor system of claim 17, wherein instructions for storing data include a send reserve function to obtain a buffer area suitable for the given iteration and a send commit function indicating that the data is stored and available for reading; and the instructions for accessing data include a receive reserve function to obtain data from a buffer area suitable for the given iteration and a receive commit function indicating that the data is no longer needed.
21. The processor system of claim 18, wherein at least one of the stages is partitioned into a number of teams and a number of gangs in the teams; and wherein M and N each represent the total number of teams, gangs and stages.
PCT/US2007/084113 2006-11-10 2007-11-08 Method and system for parallelization of pipelined computations WO2008060948A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US12/513,838 US9110726B2 (en) 2006-11-10 2007-11-08 Method and system for parallelization of pipelined computations
EP07864131A EP2080104A4 (en) 2006-11-10 2007-11-08 Method and system for parallelization of pipelined computations

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US86525306P 2006-11-10 2006-11-10
US60/865,253 2006-11-10

Publications (2)

Publication Number Publication Date
WO2008060948A2 true WO2008060948A2 (en) 2008-05-22
WO2008060948A3 WO2008060948A3 (en) 2008-07-24

Family

ID=39402397

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2007/084113 WO2008060948A2 (en) 2006-11-10 2007-11-08 Method and system for parallelization of pipelined computations

Country Status (4)

Country Link
US (1) US9110726B2 (en)
EP (1) EP2080104A4 (en)
KR (1) KR101545357B1 (en)
WO (1) WO2008060948A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105988952A (en) * 2015-02-28 2016-10-05 华为技术有限公司 Method and apparatus for assigning hardware acceleration instructions to memory controllers

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8074051B2 (en) 2004-04-07 2011-12-06 Aspen Acquisition Corporation Multithreaded processor with multiple concurrent pipelines per thread
EP2210171A1 (en) * 2007-11-05 2010-07-28 Sandbridge Technologies, Inc. Method of encoding register instruction fields
EP2250539A1 (en) * 2008-01-30 2010-11-17 Sandbridge Technologies, Inc. Method for enabling multi-processor synchronization
KR20100133964A (en) * 2008-03-13 2010-12-22 아스펜 액퀴지션 코포레이션 Method for achieving power savings by disabling a valid array
CN102112971A (en) 2008-08-06 2011-06-29 阿斯奔收购公司 Haltable and restartable dma engine
EP2282264A1 (en) * 2009-07-24 2011-02-09 ProximusDA GmbH Scheduling and communication in computing systems
KR101640848B1 (en) * 2009-12-28 2016-07-29 삼성전자주식회사 Job Allocation Method on Multi-core System and Apparatus thereof
KR101658792B1 (en) * 2010-01-06 2016-09-26 삼성전자주식회사 Computing system and method
US8695009B2 (en) * 2011-04-18 2014-04-08 Microsoft Corporation Allocating tasks to machines in computing clusters
US9817700B2 (en) 2011-04-26 2017-11-14 International Business Machines Corporation Dynamic data partitioning for optimal resource utilization in a parallel data processing system
RU2012127578A (en) * 2012-07-02 2014-01-10 ЭлЭсАй Корпорейшн ANALYZER OF APPLICABILITY OF THE SOFTWARE MODULE FOR THE DEVELOPMENT AND TESTING OF THE SOFTWARE FOR MULTIPROCESSOR MEDIA
US10776846B2 (en) * 2016-07-27 2020-09-15 Nike, Inc. Assortment optimization

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4866637A (en) * 1987-10-30 1989-09-12 International Business Machines Corporation Pipelined lighting model processing system for a graphics workstation's shading function
US6115761A (en) * 1997-05-30 2000-09-05 Lsi Logic Corporation First-In-First-Out (FIFO) memories having dual descriptors and credit passing for efficient access in a multi-processor system environment
US7139899B2 (en) * 1999-09-03 2006-11-21 Cisco Technology, Inc. Selected register decode values for pipeline stage register addressing
US6741983B1 (en) * 1999-09-28 2004-05-25 John D. Birdwell Method of indexed storage and retrieval of multidimensional information
US6968445B2 (en) 2001-12-20 2005-11-22 Sandbridge Technologies, Inc. Multithreaded processor with efficient processing for convergence device applications
US6990557B2 (en) 2002-06-04 2006-01-24 Sandbridge Technologies, Inc. Method and apparatus for multithreaded cache with cache eviction based on thread identifier
US6912623B2 (en) 2002-06-04 2005-06-28 Sandbridge Technologies, Inc. Method and apparatus for multithreaded cache with simplified implementation of cache replacement policy
US7689485B2 (en) 2002-08-10 2010-03-30 Cisco Technology, Inc. Generating accounting data based on access control list entries
US6925643B2 (en) 2002-10-11 2005-08-02 Sandbridge Technologies, Inc. Method and apparatus for thread-based memory access in a multithreaded processor
US6842848B2 (en) 2002-10-11 2005-01-11 Sandbridge Technologies, Inc. Method and apparatus for token triggered multithreading
US6904511B2 (en) 2002-10-11 2005-06-07 Sandbridge Technologies, Inc. Method and apparatus for register file port reduction in a multithreaded processor
US6971103B2 (en) 2002-10-15 2005-11-29 Sandbridge Technologies, Inc. Inter-thread communications using shared interrupt register
US7593978B2 (en) 2003-05-09 2009-09-22 Sandbridge Technologies, Inc. Processor reduction unit for accumulation of multiple operands with or without saturation
US7428567B2 (en) 2003-07-23 2008-09-23 Sandbridge Technologies, Inc. Arithmetic unit for addition or subtraction with preliminary saturation detection
US7412588B2 (en) * 2003-07-25 2008-08-12 International Business Machines Corporation Network processor system on chip with bridge coupling protocol converting multiprocessor macro core local bus to peripheral interfaces coupled system bus
US7251737B2 (en) 2003-10-31 2007-07-31 Sandbridge Technologies, Inc. Convergence device with dynamic program throttling that replaces noncritical programs with alternate capacity programs based on power indicator
US7797363B2 (en) 2004-04-07 2010-09-14 Sandbridge Technologies, Inc. Processor having parallel vector multiply and reduce operations with sequential semantics
US8074051B2 (en) 2004-04-07 2011-12-06 Aspen Acquisition Corporation Multithreaded processor with multiple concurrent pipelines per thread
US7475222B2 (en) 2004-04-07 2009-01-06 Sandbridge Technologies, Inc. Multi-threaded processor having compound instruction and operation formats
US7581214B2 (en) * 2004-04-15 2009-08-25 Intel Corporation Live set transmission in pipelining applications
TW200625097A (en) 2004-11-17 2006-07-16 Sandbridge Technologies Inc Data file storing multiple date types with controlled data access
US7725691B2 (en) * 2005-01-28 2010-05-25 Analog Devices, Inc. Method and apparatus for accelerating processing of a non-sequential instruction stream on a processor with multiple compute units
US20060277126A1 (en) * 2005-06-06 2006-12-07 Intel Corporation Ring credit management
US20060288196A1 (en) * 2005-06-20 2006-12-21 Osman Unsal System and method for exploiting timing variability in a processor pipeline
US9146745B2 (en) * 2006-06-29 2015-09-29 Intel Corporation Method and apparatus for partitioned pipelined execution of multiple execution threads
US8819099B2 (en) 2006-09-26 2014-08-26 Qualcomm Incorporated Software implementation of matrix inversion in a wireless communication system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
None
See also references of EP2080104A4

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105988952A (en) * 2015-02-28 2016-10-05 华为技术有限公司 Method and apparatus for assigning hardware acceleration instructions to memory controllers
EP3252611A4 (en) * 2015-02-28 2018-03-07 Huawei Technologies Co., Ltd. Method and device for allocating hardware acceleration instructions to memory controller
CN105988952B (en) * 2015-02-28 2019-03-08 华为技术有限公司 The method and apparatus for distributing hardware-accelerated instruction for Memory Controller Hub

Also Published As

Publication number Publication date
KR20090089327A (en) 2009-08-21
EP2080104A2 (en) 2009-07-22
US20100115527A1 (en) 2010-05-06
EP2080104A4 (en) 2010-01-06
WO2008060948A3 (en) 2008-07-24
US9110726B2 (en) 2015-08-18
KR101545357B1 (en) 2015-08-18

Similar Documents

Publication Publication Date Title
US9110726B2 (en) Method and system for parallelization of pipelined computations
Chen et al. GFlink: An in-memory computing architecture on heterogeneous CPU-GPU clusters for big data
US6088511A (en) Nested parallel 2D Delaunay triangulation method
Culler et al. TAM-a compiler controlled threaded abstract machine
US6106575A (en) Nested parallel language preprocessor for converting parallel language programs into sequential code
Gajski et al. Essential issues in multiprocessor systems
US6292822B1 (en) Dynamic load balancing among processors in a parallel computer
US6212617B1 (en) Parallel processing method and system using a lazy parallel data type to reduce inter-processor communication
Schauser et al. Compiler-controlled multithreading for lenient parallel languages
CN103294554A (en) SOC multiprocessor dispatching method and apparatus
Grossman et al. CnC-CUDA: declarative programming for GPUs
US8387009B2 (en) Pointer renaming in workqueuing execution model
Beri et al. A scheduling and runtime framework for a cluster of heterogeneous machines with multiple accelerators
Su et al. Efficient DOACROSS execution on distributed shared-memory multiprocessors
Wesolowski An application programming interface for general purpose graphics processing units in an asynchronous runtime system
Yang et al. Managing asynchronous operations in Coarray Fortran 2.0
Culler Multithreading: Fundamental limits, potential gains, and alternatives
Hong et al. The design of the SACLIB/PACLIB kernels
Moreń et al. Graphcl: A framework for execution of data-flow graphs on multi-device platforms
Danalis et al. Scalable dense linear algebra on heterogeneous hardware
Haines et al. Towards a distributed memory implementation of sisal
Schreiner et al. The design of the PACLIB kernel for parallel algebraic computation
Alrawais Parallel programming models and paradigms: Openmp analysis
Eisenbeis et al. Compiler techniques for optimizing memory and register usage on the Cray 2
Wang et al. micMR: An efficient MapReduce framework for CPU–MIC heterogeneous architecture

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07864131

Country of ref document: EP

Kind code of ref document: A2

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
WWE Wipo information: entry into national phase

Ref document number: 2007864131

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 1020097010221

Country of ref document: KR

WWE Wipo information: entry into national phase

Ref document number: 12513838

Country of ref document: US