US20040230770A1 - Method and system for processing program for parallel processing purposes, storage medium having stored thereon program getting program processing executed for parallel processing purposes, and storage medium having stored thereon instruction set to be executed in parallel - Google Patents

Info

Publication number
US20040230770A1
US20040230770A1 (application US10/873,252)
Authority
US
United States
Prior art keywords
basic block
execution units
program
execution
instructions
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/873,252
Inventor
Kensuke Odani
Taketo Heishi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Panasonic Holdings Corp
Original Assignee
Matsushita Electric Industrial Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Matsushita Electric Industrial Co Ltd filed Critical Matsushita Electric Industrial Co Ltd
Priority to US10/873,252 priority Critical patent/US20040230770A1/en
Publication of US20040230770A1 publication Critical patent/US20040230770A1/en
Abandoned legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 Arrangements for software engineering
    • G06F8/40 Transformation of program code
    • G06F8/41 Compilation
    • G06F8/45 Exploiting coarse grain parallelism in compilation, i.e. parallelism between groups of instructions
    • G06F8/456 Parallelism detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 Arrangements for software engineering
    • G06F8/40 Transformation of program code
    • G06F8/41 Compilation
    • G06F8/44 Encoding
    • G06F8/447 Target code generation

Definitions

  • The present invention generally relates to program processing technology applicable to a compiler for translating all the source code of a program written in a high-level language into object code, and more particularly relates to code optimization technology specially designed for a parallel processor.
  • "VLIW" stands for "very long instruction word".
  • A VLIW processor is supposed to process in parallel a combination of operations that are packed in each and every word of a program.
  • The combination is made by a compiler, which extracts a number of parallelly executable instructions from a source program and then combines them into a single long instruction word, thereby shortening the time taken for the VLIW processor to execute the program.
  • A basic block is a set of instructions to be executed consecutively without any branch or halt, i.e., a collection of instructions through which control passes continuously from the first one to the last.
  • A VLIW processor generally executes instructions included in a long, fixed-length word with a fixed parallelism index.
  • Thus, the code conversion efficiency attainable by such a VLIW processor is not always good.
  • To eliminate this problem, a VLIW processor that executes a variable number of instructions included in a variable-length word was developed recently.
  • In a VLIW processor of this type, a set of instructions to be executed in parallel is divided into a plurality of groups at parallel execution boundaries, thereby making the number of instructions issued per cycle (i.e., the index of parallelism) variable.
  • In addition, this VLIW processor executes instruction words of variable length so as to improve the code conversion efficiency.
  • In this specification, a group of instructions included between an adjacent pair of execution boundaries is called an "execution unit (of instructions)".
  • Such a VLIW processor can also execute a plurality of execution units concurrently while branching and recombining the processing flow.
  • However, a processor of this type rearranges instructions on a basic block basis.
  • Thus, if conventional compilation technology is applied to a processor of this type, even a set of instructions that could otherwise be included in a single execution unit might be unintentionally divided into several execution units at the basic block boundaries.
  • As a result, the program execution rate attainable by such a processor cannot be regarded as sufficiently high considering the potential performance of the VLIW processor.
  • An object of the present invention is to increase the program execution rate of a target machine when performing program processing for parallel processing purposes.
  • Specifically, an inventive program processing method for parallel processing includes the step of a) subdividing each of a plurality of basic blocks, into which program code has been divided, into a multiplicity of execution units. Each of the execution units is made up of parallelly-executable instructions. The method further includes the step of b) combining two of the execution units, which are located just before and after a basic block boundary, into a single execution unit if these execution units are executable within a single cycle.
  • According to the present invention, if a pair of execution units located just before and after a basic block boundary are found executable within a single cycle, then these execution units are combined into a single execution unit. Even a group of instructions ranging across a basic block boundary, i.e., covering multiple basic blocks, is executable by a target machine within a single cycle if these instructions are classified as a single execution unit. Accordingly, it is more probable that a group of instructions ranging across a basic block boundary is executable in parallel, thus cutting down the number of cycles needed by the target machine to execute a program. As a result, the program execution rate increases.
  • The step b) preferably includes analyzing dependence between the instructions belonging to the two execution units located just before and after the basic block boundary, and it is determined, based on the analyzed dependence, whether or not these execution units are combinable into the single execution unit.
  • An execution boundary code, indicating a boundary between an associated pair of execution units, is preferably added at each said basic block boundary in the step a).
  • When the two execution units are combined in the step b), the execution boundary code that has been added to the basic block boundary is preferably removed.
  • Instructions that impose less strict constraints on the resources of the target machine executing the program are selected preferentially in the step a) as the instructions belonging to the first and last execution units of each said basic block.
  • An instruction that is executable only by itself in a cycle by the target machine is preferably given a lower priority.
  • An instruction with a short word length is preferably given a higher priority.
  • Another inventive program processing method for parallel processing includes the step of a) subdividing each of a plurality of basic blocks, into which program code has been divided, into a multiplicity of execution units.
  • Each of the execution units is made up of parallelly-executable instructions.
  • a particular one of the basic blocks is preferably subdivided into a set of execution units along with an instruction belonging to one of the other execution units that is closest to the particular basic block.
  • the closest execution unit belongs to another set of execution units, into which another one of the basic blocks that is adjacent to, and combinable with, the particular basic block has already been subdivided.
  • a particular basic block is subdivided into a set of execution units along with an instruction belonging to one of the other execution units that is closest to the particular basic block.
  • the closest execution unit belongs to another set of execution units, into which another basic block that is adjacent to, and combinable with, the particular basic block has already been subdivided.
  • Thus, a group of instructions covering multiple basic blocks across a basic block boundary is more likely to be combined into a single execution unit.
  • Even a group of instructions ranging across a basic block boundary is executable by a target machine within a single cycle if these instructions are classified as a single execution unit. Accordingly, it is more probable that a group of instructions covering several basic blocks across a basic block boundary is executable in parallel, thus cutting down the number of cycles needed by the target machine to execute a program. As a result, the program execution rate increases.
  • one of the basic blocks that is located just before the particular basic block may be used as the combinable basic block in the step a).
  • the particular basic block, along with the instruction belonging to the last execution unit of the combinable basic block, may be subdivided into the set of execution units.
  • one of the basic blocks that is located just after the particular basic block may be used as the combinable basic block in the step a).
  • the particular basic block, along with the instruction belonging to the first execution unit of the combinable basic block, may be subdivided into the set of execution units.
  • each said basic block may be subdivided in the step a) into the execution units sequentially in a forward direction from the beginning toward the end of the program.
  • each said basic block may be subdivided in the step a) into the execution units sequentially in a backward direction from the end toward the beginning of the program.
  • one of the basic blocks that belongs to the innermost loop is preferably subdivided into the execution units preferentially.
  • The method may further include a step b) of subdividing each said basic block of the program code into another set of execution units independently of the adjacent basic blocks. The results of the steps a) and b) are then compared with each other, and whichever of these steps results in the smaller number of execution units is adopted, as illustrated by the sketch following this item.
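As a hedged illustration of the comparison just described, the following sketch runs both subdivisions of one basic block and keeps whichever yields fewer execution units. The two callables and their names are assumptions introduced purely for illustration; they stand in for the subdivisions of steps a) and b).

```python
def choose_subdivision(basic_block, neighbor_unit,
                       subdivide_with_neighbor, subdivide_alone):
    """Pick the subdivision of one basic block that yields fewer execution units.

    subdivide_with_neighbor: step a) style subdivision that also considers an
        instruction of the adjacent block's nearest execution unit.
    subdivide_alone: step b) style subdivision, independent of adjacent blocks.
    Both callables are assumed to return a list of execution units.
    """
    with_neighbor = subdivide_with_neighbor(basic_block, neighbor_unit)
    alone = subdivide_alone(basic_block)
    return with_neighbor if len(with_neighbor) <= len(alone) else alone
```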
  • An inventive program processor for parallel processing includes an intra basic block parallelizer for subdividing each of a plurality of basic blocks, into which program code has been divided, into a multiplicity of execution units. Each of the execution units is made up of parallelly-executable instructions.
  • the processor further includes a basic block boundary parallelizer for combining two of the execution units, which are located just before and after a basic block boundary, into a single execution unit if these execution units are executable within a single cycle.
  • Another inventive program processor is adapted to execute compilation for parallel processing purposes.
  • the processor includes: a compiler front end for translating source code into intermediate code by dividing the source code into a plurality of basic blocks; a parallelizer for converting the intermediate code into code in a parallelly-executable form; and an object code generator for translating the intermediate code in the form converted by the parallelizer into object code executable by a target machine.
  • the parallelizer includes: an intra basic block parallelizer for subdividing each of a plurality of basic blocks, into which the intermediate code has been divided, into a multiplicity of execution units, each being made up of parallelly-executable instructions; and a basic block boundary parallelizer for combining two of the execution units, which are located just before and after a basic block boundary, into a single execution unit if these execution units are executable within a single cycle.
  • Still another inventive program processor for parallel processing includes an expanded basic block parallelizer for subdividing each of a plurality of basic blocks, into which program code has been divided, into a multiplicity of execution units, each being made up of parallelly-executable instructions.
  • the expanded basic block parallelizer subdivides a particular one of the basic blocks into a set of execution units along with an instruction belonging to one of the other execution units that is closest to the particular basic block.
  • the closest execution unit belongs to another set of execution units, into which another one of the basic blocks that is adjacent to, and combinable with, the particular basic block has already been subdivided.
  • Yet another inventive program processor is also adapted to execute compilation for the parallel processing purposes.
  • the processor includes: a compiler front end for translating source code into intermediate code by dividing the source code into a plurality of basic blocks; a parallelizer for converting the intermediate code into code in a parallelly-executable form; and an object code generator for translating the intermediate code in the form converted by the parallelizer into object code executable by a target machine.
  • the parallelizer includes an expanded basic block parallelizer for subdividing each of a plurality of basic blocks, into which the intermediate code has been divided, into a multiplicity of execution units, each being made up of parallelly-executable instructions.
  • the expanded basic block parallelizer subdivides a particular one of the basic blocks into a set of execution units along with an instruction belonging to one of the other execution units that is closest to the particular basic block.
  • the closest execution unit belongs to another set of execution units, into which another one of the basic blocks that is adjacent to, and combinable with, the particular basic block has already been subdivided.
  • An inventive storage medium has stored thereon a program getting a program processing procedure executed by a computer for parallel processing purposes.
  • the program processing procedure includes the steps of: a) subdividing each of a plurality of basic blocks, into which program code has been divided, into a multiplicity of execution units, each being made up of parallelly-executable instructions; and b) combining two of the execution units, which are located just before and after a basic block boundary, into a single execution unit if these execution units are executable within a single cycle.
  • Another inventive storage medium has stored thereon a program getting a program processing procedure executed by a computer through compilation for parallel processing purposes.
  • the program processing procedure includes the steps of: a) translating source code into intermediate code by dividing the source code into a plurality of basic blocks; b) subdividing each said basic block of the intermediate code into a multiplicity of execution units, each being made up of parallelly-executable instructions; c) combining two of the execution units, which are located just before and after a basic block boundary, into a single execution unit if these execution units are executable within a single cycle; and d) translating the intermediate code in the form converted in the steps b) and c) into object code executable by a target machine.
  • Still another inventive storage medium has stored thereon a program getting a program processing procedure executed by a computer for parallel processing purposes.
  • the program processing procedure includes the step of subdividing each of a plurality of basic blocks, into which program code has been divided, into a multiplicity of execution units, each being made up of parallelly-executable instructions.
  • a particular one of the basic blocks is subdivided into a set of execution units along with an instruction belonging to one of the other execution units that is closest to the particular basic block.
  • the closest execution unit belongs to another set of execution units, into which another one of the basic blocks that is adjacent to, and combinable with, the particular basic block has already been subdivided.
  • Yet another inventive storage medium has stored thereon a program getting a program processing procedure executed by a computer through compilation for parallel processing purposes.
  • the program processing procedure includes the steps of a) translating source code into intermediate code by dividing the source code into a plurality of basic blocks; b) subdividing each said basic block of the intermediate code into a multiplicity of execution units, each being made up of parallelly-executable instructions; and c) translating the intermediate code in the form processed in the step b) into object code executable by a target machine.
  • a particular one of the basic blocks is subdivided into a set of execution units along with an instruction belonging to one of the other execution units that is closest to the particular basic block.
  • the closest execution unit belongs to another set of execution units, into which another one of the basic blocks that is adjacent to, and combinable with, the particular basic block has already been subdivided.
  • Yet another inventive storage medium has stored thereon a set of instructions to be executed in parallel.
  • the instruction set is grouped into a plurality of execution units, each being made up of parallelly-executable instructions, and at least one of the execution units is located across a boundary between an associated pair of the basic blocks.
  • FIGS. 1(a) through 1(d) illustrate an exemplary instruction set to be executed by a target machine in program processing according to the present invention;
  • FIG. 1(e) illustrates exemplary executable code processed by the target machine; and
  • FIG. 1(f) schematically illustrates how the target machine executes the instructions.
  • FIG. 2 is a block diagram illustrating a configuration for a program processor according to a first embodiment of the present invention.
  • FIG. 3 is a flowchart illustrating a processing procedure in the concurrent executability decider included in the program processor shown in FIG. 2.
  • FIG. 4 illustrates exemplary internal form code received by the basic block boundary parallelizer included in the program processor shown in FIG. 2.
  • FIG. 5 is a dependence graph for the internal form code shown in FIG. 4.
  • FIG. 6 illustrates, in an assembly language, object code that has been generated for the internal form code shown in FIG. 4.
  • FIG. 7 illustrates another exemplary internal form code received by the basic block boundary parallelizer included in the program processor shown in FIG. 2.
  • FIG. 8 is a dependence graph for the internal form code shown in FIG. 7.
  • FIG. 9 illustrates, in an assembly language, object code that has been generated for the internal form code shown in FIG. 7.
  • FIG. 10 is a block diagram illustrating a configuration for a program processor according to a second embodiment of the present invention.
  • FIG. 11 illustrates exemplary assembler code.
  • FIG. 12 is a dependence graph for the assembler code shown in FIG. 11.
  • FIG. 13 is a flowchart illustrating a processing procedure in the instruction rearranger included in the program processor shown in FIG. 10.
  • FIG. 14 illustrates exemplary internal form code received by the parallelizer included in the program processor shown in FIG. 10.
  • FIG. 15 is a dependence graph for the basic block B shown in FIG. 14.
  • FIG. 16 illustrates execution units obtained by subdividing the basic block B shown in FIG. 14.
  • FIG. 17 is a dependence graph for the respective first execution units of the basic blocks A and B shown in FIG. 14.
  • FIG. 18 illustrates execution units obtained by subdividing the respective first execution units of the basic blocks A and B shown in FIG. 14.
  • FIG. 19 illustrates, in an assembly language, object code generated for the internal form code shown in FIG. 14.
  • FIG. 20(a) illustrates exemplary internal form code received by the parallelizer included in the program processor shown in FIG. 10;
  • FIG. 20(b) illustrates a result of processing performed on the internal form code shown in FIG. 20(a) according to the second embodiment; and
  • FIG. 20(c) illustrates a result of processing performed on the internal form code shown in FIG. 20(a) according to a modified example of the second embodiment.
  • In the following embodiments, a processor that can execute a variable number of variable-length instructions per cycle and has an index of parallelism of three is used as the target machine.
  • An instruction set executed by this target machine is made up of a plurality of instruction units, each being composed of 21 bits.
  • The formats of a single instruction include a 21-bit instruction (short instruction) consisting of just one unit and a 42-bit instruction (long instruction) consisting of two units. That is to say, this target machine is compatible with a variable-length instruction system.
  • Each instruction is provided with information indicating whether or not the instruction is adjacent to a boundary between execution units, i.e., an execution boundary. If the information indicates that the instruction is adjacent to an execution boundary, then the instruction in question and the next instruction are not executed within the same cycle. Instead, the target machine executes the instructions located between a pair of execution boundaries, i.e., the instructions belonging to the same execution unit, within the same cycle.
  • The total resources of the target machine that can be allocated to the instructions in a single execution unit are at most three arithmetic logic units (ALUs), one multiplication unit, one LD/ST (load/store) unit and one branch unit.
  • Since the number of instructions belonging to a single execution unit may be arbitrarily defined so long as this resource constraint is met, a variable number of instructions can be issued per cycle. To reduce the load on the hardware, the target machine itself does not determine whether or not the instructions included in a single execution unit are parallelly executable, either semantically or in such a manner as to meet the resource constraint it imposes. That is to say, it must be ensured at the program processing end that parallelly executable instructions are appropriately arranged between each pair of execution boundaries. (A minimal sketch of such a resource check is given below.)
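The per-cycle resource budget just described can be checked mechanically at the program processing end. The following is a minimal sketch under our own assumptions (the `Instr` record and the resource category names are illustrative, not taken from the specification) of such a check.

```python
from collections import Counter
from dataclasses import dataclass

# Per-cycle resource budget of the example target machine:
# 3 ALU units, 1 multiplication unit, 1 LD/ST unit and 1 branch unit.
RESOURCE_LIMITS = {"alu": 3, "mul": 1, "ldst": 1, "branch": 1}

@dataclass
class Instr:
    mnemonic: str     # e.g. "add", "mov", "ld", "bt"
    resource: str     # assumed category: "alu", "mul", "ldst" or "branch"
    length: int = 21  # 21-bit short instruction or 42-bit long instruction

def meets_resource_constraint(unit):
    """True if every resource used by the execution unit stays within budget."""
    used = Counter(instr.resource for instr in unit)
    return all(used[res] <= limit for res, limit in RESOURCE_LIMITS.items())

# Two ALU moves plus a branch fit in one cycle; two multiplies do not.
assert meets_resource_constraint(
    [Instr("mov", "alu"), Instr("mov", "alu"), Instr("bt", "branch")])
assert not meets_resource_constraint(
    [Instr("mul", "mul"), Instr("mul", "mul")])
```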
  • the target machine does not always execute all the instructions included in a single execution unit concurrently. For example, if one of the instructions in a single execution unit arrived later than its expected time, then the instructions in the execution unit might be executed by the target machine separately at different times. Thus, when the program processing is performed, the instructions in an execution unit should be arranged in such an order that the semantic operation of the program is correct even if the instructions are executed at several different times.
  • A branch instruction may also be included in an execution unit.
  • For example, an execution unit may include a conditional branch instruction, an ALU instruction and an LD instruction in this order.
  • If the first instruction, the conditional branch, is executed and its branch condition is met, then the ALU and LD instructions, which succeed the branch instruction in the same execution unit, are not executed.
  • A jump (interrupt) into the middle of an execution unit is also permitted. In such a case, the instructions that precede the jump destination instruction in the execution unit are not executed.
  • FIGS. 1(a) through 1(d) illustrate an exemplary instruction set to be executed by the target machine.
  • An opcode op or op2, a destination register Rd, a source register Rs and an n-bit constant immn (where n indicates the number of bits) are included in this instruction set.
  • Each instruction is further provided with a bit E indicating the presence or absence of an execution boundary. If an instruction is located adjacent to an execution unit boundary, then the bit E is set to one.
  • The instruction shown in FIG. 1(a) is a register-to-register operation instruction, while the instructions shown in FIGS. 1(b), 1(c) and 1(d) are operation instructions using constants of less than 5 bits, less than 21 bits and less than 32 bits, respectively.
  • The instructions shown in FIGS. 1(a) and 1(b) are short instructions each composed of 21 bits, while the instructions shown in FIGS. 1(c) and 1(d) are long instructions each composed of 42 bits.
  • The operation instruction using the less-than-32-bit constant is nothing but an instruction transferring a 32-bit constant to a register. This is because the number of remaining bits assignable to the opcode op is small in an instruction including a 32-bit constant, and the number of instructions using a 32-bit constant is therefore limited.
  • FIG. 1(e) illustrates exemplary executable code (object code) handled by the target machine.
  • The instructions fetched by the target machine are executed on an execution unit basis, i.e., as sets of instructions located between one execution boundary and the next. If there are any instructions that were fetched but not executed, those instructions are stored in an instruction buffer and executed in the next execution cycle or later.
  • FIG. 1(f) schematically illustrates how the target machine executes the instructions.
  • In FIG. 1(f), each row corresponds to the set of instructions executed by the target machine in one execution cycle.
  • Suppose that, in FIG. 1(e), Unit 1 is a conditional branch instruction and the condition of the conditional branch is met.
  • Then, once the conditional branch instruction of Unit 1 is executed, the instructions succeeding Unit 1 in the same execution unit, i.e., those of Units 2 and 3, are not executed.
  • Likewise, if a jump (interrupt) into Unit 5 occurs in FIG. 1(e), the instructions preceding Unit 5 in the same execution unit, i.e., those of Units 3 and 4, are not executed. (A sketch of how instructions are grouped into execution units by the execution-boundary information is given below.)
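The behaviour described for FIGS. 1(e) and 1(f) amounts to splitting the fetched instruction stream at the positions where the execution-boundary information is set. The sketch below assumes each instruction is reduced to a (name, E-bit) pair, which is an illustrative simplification of the real 21-bit encoding.

```python
def split_into_execution_units(stream):
    """Group a fetched instruction stream into execution units.

    `stream` is a list of (name, e_bit) pairs; e_bit == 1 marks an instruction
    adjacent to an execution boundary, which closes the current unit so that
    the next instruction starts a new one.
    """
    units, current = [], []
    for name, e_bit in stream:
        current.append(name)
        if e_bit == 1:        # execution boundary reached: close this unit
            units.append(current)
            current = []
    if current:               # trailing instructions without a final boundary
        units.append(current)
    return units

# Hypothetical five-instruction stream with boundaries after i3 and i5.
print(split_into_execution_units(
    [("i1", 0), ("i2", 0), ("i3", 1), ("i4", 0), ("i5", 1)]))
# [['i1', 'i2', 'i3'], ['i4', 'i5']]
```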
  • In the following description, this processor is supposed to be used as the target machine according to the present invention for convenience' sake.
  • However, the inventive program processing is not limited to the basic instruction word length of 21 bits, the parallelism index of 3 and the resource constraints of 3 ALU units, 1 multiplication unit, 1 LD/ST unit and 1 branch unit exemplified for this processor, or to the above combination of executable instructions.
  • The present invention is naturally applicable to other situations where these specifics are different.
  • In the example above, the information indicating an execution boundary is attached to the instruction itself.
  • Alternatively, the information representing the execution boundary may be stored in a different field separately from the instructions.
  • FIG. 2 is a block diagram illustrating a configuration for a program processor 1 according to a first exemplary embodiment of the present invention.
  • the program processor 1 includes a compiler front end 10 , a parallelizer 20 and an object code generator 30 .
  • the compiler front end 10 retrieves source code 50 , which is written in a high-level language like C and saved in a file format, and analyzes the syntax and semantics thereof to generate internal form code (i.e., intermediate code, or assembler code). Then, the compiler 10 divides the internal form code into a plurality of basic blocks, each of which is an instruction set without any branch or interrupt. If necessary, the compiler 10 optimizes the internal form code so as to reduce the size or execution time of the executable code (object code) to be generated ultimately.
  • the parallelizer 20 reads the internal form code generated by the compiler front end 10 as program code and then parallelizes the code for the target machine.
  • the parallelizer includes an intra basic block parallelizer 21 and a basic block boundary parallelizer 22 .
  • the intra basic block parallelizer 21 analyzes dependence among the instructions included in each basic block, schedules (or rearranges) the instructions and adds an execution boundary to the basic block, thereby parallelizing the internal form code.
  • The intra basic block parallelizer 21 operates in the same way as its counterpart (i.e., a local parallelizer) in a known program processor (as we disclosed in Japanese Patent Application No. 10-095647).
  • the basic block boundary parallelizer 22 examines each basic block boundary of the internal form code that has been parallelized by the intra basic block parallelizer 21 and removes the execution boundary located at the basic block boundary if permitted.
  • the basic block boundary parallelizer 22 includes a boundary dependence analyzer 23 , a concurrent executability decider 24 and an execution boundary remover 25 .
  • the boundary dependence analyzer 23 analyzes the dependence among all the instructions included in a pair of execution units, which are located just before and after a basic block boundary in question, thereby drawing up a dependence graph.
  • In the following description, those instructions to be analyzed will be called the "instructions in question".
  • In a dependence graph, instructions are represented as nodes and the dependence among them is represented as edges (arrows). For example, suppose Instruction (a) must be executed before Instruction (b) can be executed. Since Instruction (b) depends on Instruction (a) in such a case, the dependence between them is represented by the description "a → b".
  • Here, the dependence considered is "definition-reference" dependence (or data dependence), which is the dependence between an instruction defining a resource and another instruction referring to the same resource.
  • A method of drawing up a dependence graph was disclosed by R. J. Blainey in an article entitled "Instruction scheduling in the TOBEY compiler" (IBM J. Res. Develop., Vol. 38, No. 5, September 1994), for example. (A minimal sketch of such a data-dependence analysis is given below.)
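A minimal sketch, under assumptions of our own, of how such a definition-reference dependence graph could be built: an edge (i, j) is recorded whenever instruction j reads a resource that an earlier instruction i defines. The `defs`/`uses` sets are an assumed abstraction of the operands; the cited TOBEY article describes a production-quality construction.

```python
from dataclasses import dataclass, field

@dataclass
class InstrNode:
    text: str                                   # e.g. "mov 10, r0"
    defs: set = field(default_factory=set)      # resources written
    uses: set = field(default_factory=set)      # resources read

def data_dependence_edges(instrs):
    """Return edges (i, j): instrs[j] has definition-reference (data)
    dependence on instrs[i]."""
    edges = []
    for j, later in enumerate(instrs):
        for i in range(j):
            if instrs[i].defs & later.uses:     # later reads what earlier wrote
                edges.append((i, j))
    return edges

# Pair resembling the FIG. 7 example: "mov 10, r0" followed by "mov r0, r1".
pair = [InstrNode("mov 10, r0", defs={"r0"}),
        InstrNode("mov r0, r1", defs={"r1"}, uses={"r0"})]
print(data_dependence_edges(pair))              # [(0, 1)]
```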
  • The concurrent executability decider 24 decides whether or not all the instructions in question are executable within the same cycle.
  • FIG. 3 is a flowchart illustrating the processing procedure in the concurrent executability decider 24.
  • First, in Step a1, the decider 24 decides whether or not there is any data dependence among the instructions in question. If the answer is YES, then the decider 24 decides that those instructions are "non-executable concurrently" within the same cycle. Alternatively, if the answer is NO, then the decider 24 decides in Step a2 whether or not the resource constraint imposed by the target machine is met when all the instructions in question are executed within the same cycle. If the answer is NO, then the decider 24 also decides that those instructions are "non-executable concurrently" within the same cycle.
  • If the answers in Steps a1 and a2 are NO and YES, respectively, i.e., if there is no data dependence among the instructions in question and the resource constraint imposed by the target machine is met, the decider 24 decides that those instructions are "executable concurrently" within the same cycle.
  • In that case, the execution boundary remover 25 removes the execution boundary located at the basic block boundary in question. (A combined sketch of this decision and removal is given below.)
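Combining the two checks of FIG. 3, a hedged, self-contained sketch of the decision and removal performed by the concurrent executability decider 24 and the execution boundary remover 25 might look as follows. The dictionary-based instruction representation and the resource categories repeat the assumptions of the earlier sketches and are not the patented implementation.

```python
from collections import Counter

RESOURCE_LIMITS = {"alu": 3, "mul": 1, "ldst": 1, "branch": 1}

def executable_concurrently(instrs):
    """FIG. 3: Step a1 (no data dependence) then Step a2 (resource budget).

    Each instruction is an assumed dict with 'defs', 'uses' (sets of resource
    names) and 'resource' (its functional-unit category)."""
    # Step a1: any data dependence among the instructions in question?
    for j, later in enumerate(instrs):
        for earlier in instrs[:j]:
            if earlier["defs"] & later["uses"]:
                return False                       # non-executable concurrently
    # Step a2: does the whole group fit the per-cycle resource budget?
    used = Counter(i["resource"] for i in instrs)
    return all(used[r] <= lim for r, lim in RESOURCE_LIMITS.items())

def merge_across_boundary(unit_before, unit_after):
    """Remove the execution boundary at a basic block boundary when permitted."""
    if executable_concurrently(unit_before + unit_after):
        return [unit_before + unit_after]          # boundary removed (FIG. 6 case)
    return [unit_before, unit_after]               # boundary kept (FIG. 9 case)
```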
  • the object code generator 30 translates the internal form code (assembler code), which has been output from the parallelizer 20 , into object code 60 and outputs the object code 60 as a file.
  • FIG. 4 illustrates exemplary internal form code that has been processed by the intra basic block parallelizer 21 included in the parallelizer 20 .
  • The compiler front end 10 translates the source program 50 into internal form code that has been divided into a plurality of basic blocks.
  • the intra basic block parallelizer 21 subdivides each of the basic blocks into execution units, each of which is made of parallelly-executable instructions.
  • the internal form code such as that shown in FIG. 4 is generated by the intra basic block parallelizer 21 .
  • the basic block boundary parallelizer 22 receives the internal form code such as that shown in FIG. 4.
  • In FIG. 4, only the pair of execution units located just before and after the basic block boundary in question is illustrated.
  • The execution boundary A is provided due to the existence of the basic block boundary and has nothing to do with the parallelism of the instructions. Thus, an execution boundary code indicating the execution boundary A has been added thereto.
  • First, the boundary dependence analyzer 23 is activated to draw up the dependence graph shown in FIG. 5 from the internal form code shown in FIG. 4. As shown in FIG. 5, there is no dependence among the three instructions in question.
  • Next, the concurrent executability decider 24 is activated.
  • In this example, there is no data dependence among the instructions in question (i.e., the answer to the query in Step a1 is NO) and the resource constraint imposed by the target machine is met (i.e., the answer to the query in Step a2 is YES). Accordingly, the decider 24 decides that these instructions are "executable concurrently".
  • Then, the execution boundary remover 25 is activated. Since the concurrent executability decider 24 has affirmed the concurrent executability of the instructions in question, the remover 25 removes the execution boundary code representing the execution boundary A located at the basic block boundary in question.
  • The object code generator 30 then outputs the portion of the object code 60 corresponding to the internal form code shown in FIG. 4 as the code shown in FIG. 6.
  • The object code is described in an assembly language in FIG. 6 to make it easily understandable (the same applies to FIGS. 9 and 19).
  • As shown in FIG. 6, now that the execution boundary A has been removed, the three instructions in question are executed within the same cycle, thus shortening the time taken for the target machine to execute the program.
  • FIG. 7 illustrates another exemplary internal form code processed by the intra basic block parallelizer 21 included in the parallelizer 20 .
  • As in FIG. 4, only the pair of execution units located just before and after the basic block boundary in question is illustrated in FIG. 7.
  • The execution boundary B is also provided due to the existence of the basic block boundary and has nothing to do with the parallelism of the instructions.
  • First, the boundary dependence analyzer 23 is activated to draw up the dependence graph shown in FIG. 8 from the internal form code shown in FIG. 7. As shown in FIG. 8, there is data dependence between the instructions "mov 10, r0" and "mov r0, r1", i.e., the instruction "mov r0, r1" depends on the instruction "mov 10, r0".
  • Next, the concurrent executability decider 24 is activated.
  • Since there is data dependence among the instructions in question (i.e., the answer to the query in Step a1 is YES), the decider 24 decides that these instructions are "non-executable concurrently".
  • Then, the execution boundary remover 25 is activated. Since the concurrent executability decider 24 has negated the concurrent executability of the instructions in question, the remover 25 does not remove the execution boundary B located at the basic block boundary in question. As a result, the execution boundary B remains as it is.
  • FIG. 9 illustrates portion of the object code 60 that has been generated by the object code generator 30 from the internal form code shown in FIG. 7. As can be seen from FIG. 9, since the execution boundary B remains, the three instructions in question are executed in two cycles.
  • As described above, in this embodiment, each of a plurality of basic blocks, into which intermediate code has been divided, is subdivided into a multiplicity of execution units, each of which is made up of parallelly-executable instructions. And if two of the execution units, which are located just before and after a basic block boundary, are found executable within the same cycle, then these execution units are combined into a single execution unit. Thus, the number of cycles taken for the target machine to execute the program can be reduced and the program execution rate can be increased.
  • In the intra basic block parallelizer 21, instructions on which less strict resource constraints are imposed by the target machine may be selected preferentially. More specifically, in selecting the instructions belonging to the first and last execution units of a basic block, instructions on which strict resource constraints are imposed, e.g., load, store and multiply instructions, each of which can be executed only by itself in a cycle, should preferably be avoided by giving them lower priorities. In such a case, instructions with less strict resource constraints are allocated to the execution units located just before and after a basic block boundary. As a result, the basic block boundary parallelizer 22 is more likely to combine a pair of execution units located just before and after a basic block boundary.
  • Likewise, instructions with short word lengths may be selected preferentially. More specifically, long instructions requiring two units, e.g., instructions using immediate values of more than 5 bits, should preferably be avoided by giving them lower priorities. In such a case, instructions with relatively short word lengths are allocated to the execution units located just before and after a basic block boundary. Thus, even in a target machine that can fetch and execute instructions of only a limited total word length at a time, the basic block boundary parallelizer 22 is more likely to combine a pair of execution units located just before and after a basic block boundary.
  • Note that, in a variable-length instruction system, a branched instruction might be inserted into an execution unit located just after a basic block boundary, and that execution unit might then fall short of the required word length.
  • If instructions with short word lengths are selected preferentially, such a situation can be avoided because the instruction word length of each execution unit is shorter. (A sketch of such a priority heuristic is given below.)
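A sketch of the priority heuristic described in the preceding items, under illustrative assumptions (the numeric weights and attribute names are ours): instructions that monopolize a scarce resource for a cycle (load/store, multiply) or occupy two 21-bit units are pushed away from the first and last execution units of a basic block.

```python
from collections import namedtuple

# Assumed minimal instruction record for illustration only.
Instr = namedtuple("Instr", ["mnemonic", "resource", "length"])

def boundary_priority(instr):
    """Higher score = better candidate for a block's first or last execution unit."""
    score = 0
    if instr.resource in ("mul", "ldst"):   # strict resource constraint:
        score -= 2                          # only one such instruction per cycle
    if instr.length > 21:                   # long (42-bit) instruction
        score -= 1
    return score

# Candidates near a basic block boundary would then be ordered with, e.g.:
candidates = [Instr("ld", "ldst", 21), Instr("mov", "alu", 42), Instr("add", "alu", 21)]
candidates.sort(key=boundary_priority, reverse=True)
print([c.mnemonic for c in candidates])     # ['add', 'mov', 'ld']
```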
  • FIG. 10 is a block diagram illustrating a configuration for a program processor 2 according to a second exemplary embodiment of the present invention.
  • the program processor 2 includes a compiler front end 10 , a parallelizer 40 and an object code generator 30 .
  • the compiler front end 10 shown in FIG. 10 also retrieves source code 50 to generate internal form code (assembler code) that has been divided into basic blocks. Also, the object code generator 30 translates the internal form code, which has been output from the parallelizer 40 , into object code 60 and then outputs the object code 60 as a file in the same way as the counterpart 30 shown in FIG. 2.
  • the parallelizer 40 reads the internal form code generated by the compiler front end 10 as program code and parallelizes the code for the target machine.
  • the parallelizer 40 includes an execution order determiner 41 and an expanded basic block parallelizer 42 .
  • The execution order determiner 41 determines the order in which the basic blocks of the internal form code read in are passed to the expanded basic block parallelizer 42.
  • In this embodiment, the basic blocks are passed to the expanded basic block parallelizer 42 in ascending order, i.e., beginning with the last basic block of the program.
  • the expanded basic block parallelizer 42 analyzes dependence among the instructions included in each basic block, schedules the instructions and adds a parallel execution boundary to the basic block, thereby subdividing the basic block into a plurality of execution units, each being made up of parallelly executable instructions.
  • the expanded basic block parallelizer 42 subdivides a particular basic block into execution units along with an instruction belonging to the first one of execution units, into which another basic block has already been subdivided.
  • the latter basic block is located just after, and combinable with, the particular basic block. That is to say, unlike the intra basic block parallelizer 21 shown in FIG. 2, the expanded basic block parallelizer 42 rearranges the instructions of the basic block in question while taking the first execution unit of the next basic block into account.
  • the expanded basic block parallelizer 42 includes a dependence analyzer 43 , an instruction rearranger 44 and an execution boundary adder 45 .
  • the dependence analyzer 43 analyzes dependence among the instructions included in the basic block in question and the instructions included in the first execution unit of the next basic block. In the illustrated embodiment, the dependence of the following four types is analyzed.
  • the dependence analyzer 43 draws up a dependence graph representing dependence among the instructions for each basic block.
  • FIG. 11 illustrates exemplary assembler code, and FIG. 12 is a dependence graph for the assembler code shown in FIG. 11.
  • In FIG. 12, each solid line represents data dependence, while the broken line represents inverse dependence.
  • Here, mem1, mem2 and mem3 refer to mutually different memory addresses. (A sketch of how such dependence edges can be classified is given below.)
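As a hedged sketch of how the dependence analyzer 43 might classify edges such as those drawn in FIG. 12, the function below distinguishes data dependence (a later instruction reads what an earlier one wrote) from inverse dependence (a later instruction overwrites what an earlier one reads). The dict representation is an assumption, and the specification mentions further kinds of dependence that are not classified here.

```python
def classify_dependence(earlier, later):
    """Return the kind of dependence of `later` on `earlier`, if any.

    Each instruction is an assumed dict with 'defs' and 'uses' sets of
    resource names (registers or memory locations such as 'mem1')."""
    if earlier["defs"] & later["uses"]:
        return "data"      # definition-reference dependence (solid line in FIG. 12)
    if earlier["uses"] & later["defs"]:
        return "inverse"   # later overwrites what earlier reads (broken line)
    return None            # other kinds of dependence are not modeled here

# Hypothetical example: a load from mem1 followed by a store to mem1.
load  = {"defs": {"r0"},  "uses": {"mem1"}}
store = {"defs": {"mem1"}, "uses": {"r1"}}
print(classify_dependence(load, store))   # 'inverse'
```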
  • the instruction rearranger 44 rearranges the instructions in the basic block using the dependence graph drawn up by the dependence analyzer 43 , thereby generating parallelized assembler code for the target machine.
  • FIG. 13 is a flowchart illustrating a processing procedure in the instruction rearranger 44 .
  • First, in Step S1, the instruction rearranger 44 provisionally places those nodes in the dependence graph that correspond to instructions belonging to the first execution unit of the next basic block.
  • Then, the rearranger 44 repeatedly performs Steps S2 through S9 (Loop 1) until there is no node left to be placed in the dependence graph drawn up by the dependence analyzer 43, i.e., until the answer to the query in Step S2 becomes NO.
  • In Step S3, the nodes that can now be regarded as candidates to be placed are extracted from the dependence graph to make a set of candidate nodes to be placed.
  • Each candidate node to be placed must satisfy either the condition "all of its successors have already been placed" or the condition "its only successor is an instruction corresponding to a node placed provisionally in Step S1, and its dependence on that provisionally placed node is not data dependence".
  • Here, a "successor" of a particular instruction means an instruction that should be executed after that instruction has been executed.
  • Next, Steps S4 through S8 (Loop 2) are repeatedly performed until there is no candidate node left in the set of candidate nodes made in Step S3, i.e., until the answer to the query in Step S8 becomes NO.
  • In Step S4, the node that currently seems to be the optimum candidate to be placed is selected from the set of candidate nodes.
  • The optimum node is selected heuristically with reference to the dependence graph and the provisionally placed region such that the instructions in the entire basic block can be executed in the shortest possible time.
  • Specifically, the node that makes the total instruction execution time (from the beginning to the end of the dependence graph) longest, i.e., the node on the longest remaining path, is selected. If there are several instructions meeting this condition, the one that should be executed last is selected as the optimum node.
  • In Step S5, it is determined whether or not the optimum node is actually placeable; if the answer is YES, the node is placed provisionally. To execute a plurality of instructions within a single cycle, the instructions to be executed must be decoded (or fetched), the arithmetic and logical operations must be performed and then the results must be stored into registers or memories. Thus, the determination in Step S5 includes conditions concerning the availability of these resources.
  • If the optimum node has been placed provisionally, then the set of nodes placed provisionally so far is examined in Step S6 to determine whether or not additional instructions are placeable. The detailed determination process will be described later. If the answer is NO, then Loop 2 is ended.
  • Otherwise, a node that can newly become a candidate once the optimum node has been placed is added in Step S7 as another candidate node to be placed.
  • Specifically, the new candidate node to be placed should have only the instruction corresponding to the optimum node as a successor that has not been placed yet, and its dependence on the instruction corresponding to the optimum node should not be data dependence. That is to say, the new candidate node should correspond to an instruction that can be executed within the same cycle as the instruction corresponding to the optimum node, but that cannot be executed in a later cycle.
  • When Loop 2 ends, the nodes that have been placed provisionally are regarded as fixed in Step S9. More specifically, the instructions corresponding to the nodes belonging to the set of provisionally placed nodes are extracted from the original instruction set and rearranged into a new instruction set to be passed to the execution boundary adder 45.
  • Finally, the execution boundary adder 45 adds an execution boundary to each instruction set whose placement has been fixed in Step S9. (A simplified sketch of this rearrangement loop is given below.)
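The procedure of FIG. 13 is essentially a backward list-scheduling loop. The sketch below conveys only its control flow under strong simplifying assumptions: the dependence graph is reduced to a successors map, the Step S4 heuristic is replaced by a crude successor count, the Step S5/S6 checks are folded into a single `fits` callback, and the candidate-adding of Step S7 is omitted, so the result is more conservative than the procedure described above. It is not the patented algorithm.

```python
def rearrange(nodes, successors, provisional, fits):
    """Simplified control-flow sketch of the rearrangement loop (FIG. 13).

    nodes:       schedulable instructions of the basic block in question
    successors:  assumed map node -> set of nodes that must execute after it
    provisional: instructions of the first execution unit of the *next* basic
                 block, placed provisionally (Step S1)
    fits:        callback standing in for the Step S5/S6 resource checks

    Returns execution units in reverse program order, since the instruction to
    be executed last is placed first (backward scheduling).
    """
    unplaced = set(nodes)
    units = []
    current = list(provisional)                        # Step S1
    while unplaced:                                    # Loop 1 (Step S2)
        # Step S3: candidates whose successors are all placed already.
        ready = [n for n in nodes
                 if n in unplaced and not (successors.get(n, set()) & unplaced)]
        # Crude stand-in for Step S4: prefer nodes with more successors.
        ready.sort(key=lambda n: len(successors.get(n, ())), reverse=True)
        placed_now = []
        for cand in ready:                             # Loop 2 (Steps S4-S8)
            # Always accept the first candidate so the loop makes progress.
            if not placed_now or fits(current + placed_now, cand):
                placed_now.append(cand)
        unplaced -= set(placed_now)
        units.append(current + placed_now)             # Step S9: fix this unit
        current = []                                   # provisional part used once
    return units

# Example with at most three instructions per unit (index of parallelism of 3).
units = rearrange(nodes=["a", "b", "c"],
                  successors={"a": {"c"}},             # "c" must run after "a"
                  provisional=["next_block_first"],
                  fits=lambda unit, cand: len(unit) < 3)
print(units)   # [['next_block_first', 'b', 'c'], ['a']]
```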
  • FIG. 14 illustrates exemplary internal form code that has been passed from the compiler front end 10 to the parallelizer 40 .
  • the internal form code shown in FIG. 14 has been divided by the compiler front end 10 into two basic blocks A and B.
  • the execution order determiner 41 receives the internal form code shown in FIG. 14 and determines the order of instructions to be passed to the expanded basic block parallelizer 42 .
  • In this embodiment, the basic blocks are subdivided into execution units in ascending order, i.e., beginning with the end of the program.
  • Thus, the basic blocks B and A are passed, in that order, to the expanded basic block parallelizer 42.
  • the dependence analyzer 43 analyzes the instructions belonging to the basic block B, thereby drawing up a dependence graph such as that shown in FIG. 15.
  • In FIG. 15, the solid-line arrow represents data dependence; more specifically, the instruction "add r2, r1" depends on the instruction "mov 1, r1".
  • The instruction rearranger 44 receives the dependence graph shown in FIG. 15 and rearranges the instructions in accordance with the flowchart shown in FIG. 13.
  • When Loop 1 is performed for the first time, only the instruction "add r2, r1" is selected in Step S3 as a candidate to be placed, regarded as placeable in Step S5 and then fixed in Step S9.
  • When Loop 1 is performed for the second time, only the instruction "mov 1, r1" is selected in Step S3 as a candidate to be placed, regarded as placeable in Step S5 and then fixed in Step S9.
  • Then, the execution boundary adder 45 adds an execution boundary to each instruction set; that is to say, each fixed instruction set is grouped as an execution unit. As a result, the execution units in the basic block B are arranged as shown in FIG. 16.
  • the dependence analyzer 43 analyzes the instructions belonging to the basic block A, thereby drawing up a dependence graph such as that shown in FIG. 17.
  • each solid line arrow represents data dependence, while each broken line arrow represents dependence other than data dependence.
  • This time, the dependence graph is drawn up using the instruction "mov 1, r1" belonging to the first execution unit of the basic block B as well.
  • The instruction "mov 1, r1" was already fixed when the basic block B was subdivided into its execution units, and is identified by the solid circle in the dependence graph shown in FIG. 17.
  • The instruction rearranger 44 receives the dependence graph shown in FIG. 17 and rearranges the instructions in accordance with the flowchart shown in FIG. 13.
  • First, in Step S1, all the instructions belonging to the first execution unit of the next basic block B are placed provisionally. In this example, only the instruction "mov 1, r1" is placed provisionally.
  • Next, since there are nodes yet to be placed (i.e., the answer to the query in Step S2 is YES), Loop 1 is performed.
  • A set of candidate nodes to be placed is made of the instructions that are placeable within the same execution unit as the instruction "mov 1, r1" (Step S3).
  • In this example, the instruction "bt c0, Label", on which no other instructions depend, is placeable within the same execution unit.
  • The instruction "bt c0, Label" is therefore regarded as placeable in Step S5 and placed provisionally.
  • One more instruction is additionally placeable in the same execution unit. Accordingly, the answer to the query in Step S6 is YES, and a candidate node to be placed is added in Step S7.
  • In this case, the instructions "mov r4, r5" and "mov 0, r1" are placeable within the same execution unit as the instructions "mov 1, r1" and "bt c0, Label", and are therefore added as candidate nodes to be placed.
  • The procedure then returns to Step S4, and the instruction "mov r4, r5" is selected as the optimum node, for example.
  • The instruction "mov r4, r5" is regarded as placeable in Step S5 and placed provisionally. Since all of the three instructions placeable within a single execution unit have now been placed, no other nodes are additionally placeable (i.e., the answer to the query in Step S6 is NO), and the placement of the nodes is fixed in Step S9.
  • the execution boundary adder 45 adds an execution boundary to the instruction set. That is to say, each instruction set fixed is grouped as an execution unit.
  • As a result, the execution units in the basic block A, including the instruction "mov 1, r1" belonging to the first execution unit of the basic block B, are arranged as shown in FIG. 18.
  • FIG. 19 illustrates portion of the object code 60 generated by the object code generator 30 from the internal form code shown in FIG. 14.
  • In the conventional technique, an execution boundary is added at every basic block boundary.
  • In contrast, according to this embodiment, an execution unit may be composed of instructions ranging across a basic block boundary.
  • As a result, the internal form code shown in FIG. 14 is executed in three cycles, as shown in FIG. 19.
  • As described above, according to this embodiment, a particular basic block, along with an instruction belonging to the first execution unit of the next basic block, is subdivided into execution units, each of which is made up of parallelly executable instructions.
  • Alternatively, the basic blocks may be subdivided into execution units in descending order, i.e., beginning with the top of the program.
  • In that case, a basic block located just before the basic block in question may be regarded as combinable with the latter basic block.
  • In the embodiment described above, instructions are rearranged within the range covering a basic block in question and the first execution unit of the next basic block, and an instruction to be executed last within this range is placed first.
  • In the alternative, instructions may be rearranged within a range covering a basic block in question and the last execution unit of the previous basic block, and an instruction to be executed first within this range may be placed first.
  • As another modification, the execution order determiner 41 may provide a basic block belonging to the innermost loop to the expanded basic block parallelizer 42 preferentially by analyzing the control flow and the loops. In such a case, after the basic block belonging to the innermost loop has been processed, a basic block belonging to the next innermost loop is provided. In this manner, the code of the innermost loop, which is executed most frequently, is optimized, thus increasing the program execution rate.
  • FIG. 20( a ) illustrates exemplary internal form code passed from the compiler front end 10 to the parallelizer 40 .
  • This internal form code consists of three basic blocks A, B and C, of which the basic block B is the innermost loop.
  • FIG. 20( b ) illustrates a result of processing performed on the internal form code shown in FIG. 20( a ) according to the second embodiment.
  • FIG. 20( c ) illustrates a result of processing performed on the internal form code shown in FIG. 20( a ) according to this modified example.
  • As can be seen by comparison, the number of execution cycles of the innermost loop decreases by one when the basic block B is preferentially optimized as the innermost loop.
  • The innermost loop is expected to be executed repeatedly, numerous times. Accordingly, the execution time of the entire program can be shortened according to this modified example.
  • In the second embodiment, the expanded basic block parallelizer 42 subdivides a basic block into execution units along with the first execution unit of the next basic block.
  • Alternatively, the parallelizer 42 may also subdivide the basic block into execution units independently of other basic blocks, i.e., without using the first execution unit of the next basic block, and whichever of these two types of subdivision results in the smaller number of execution units may be adopted.
  • FIG. 20(c) illustrates a result obtained by subdividing the internal form code shown in FIG. 20(a) without using the first execution unit of the next basic block. Comparing the results shown in FIGS. 20(b) and 20(c), it can be seen that the number of execution cycles can be reduced if the basic block is processed without using the first execution unit of the next basic block. Thus, according to this modified example, the subdivision result shown in FIG. 20(c) is preferred.
  • In the foregoing embodiments, the inventive program processing is implemented as a program processor, but it may also be implemented as software performing a similar algorithm. The same functions as those performed by the program processor according to the first and second embodiments are also attainable by storing a similar program processing program on a computer-readable storage medium such as a floppy disk, hard disk, CD-ROM, MO or DVD and making a computer execute the program.
  • Likewise, the object code generated in the first and second embodiments may be executed by the target machine even when the code is stored on a storage medium such as a floppy disk, hard disk, CD-ROM, MO, DVD or semiconductor memory.

Abstract

In a program processing procedure specially designed to perform compilation for parallel processing purposes, a method and system for increasing the program execution rate of a target machine is provided. A compiler front end translates source code into intermediate code that has been divided into basic blocks. A parallelizer converts the intermediate code, which has been generated by the compiler front end, into a parallelly executable form. An execution order determiner determines the order of the basic blocks to be executed. An expanded basic block parallelizer subdivides the intermediate code, which has already been divided into the basic blocks, into execution units, each of which is made up of parallelly executable instructions, following the order determined and on the basic block basis. When a particular one of the basic blocks is subdivided into execution units, an instruction belonging to the first execution unit of the next basic block, which has already been subdivided into execution units, is also used. Finally, an object code generator translates the intermediate code, which has been subdivided into the execution units by the parallelizer, into object code.

Description

    BACKGROUND OF THE INVENTION
  • The present invention generally relates to program processing technology applicable to a compiler for translating all the source code of a program written in a high-level language into object code, and more particularly relates to code optimization technology specially designed for a parallel processor. [0001]
  • As the performance and functions of various microprocessor application components have been increasingly enhanced these days, microprocessors with even higher performance are in higher and higher demand. To meet such a demand, parallel processors like very-long-instruction-word (VLIW) processors have been developed to realize still higher performance through parallel processing. [0002]
  • For example, a VLIW processor is supposed to process in parallel a combination of operations that are packed in each and every word of a program. The combination is made by a compiler, which extracts a number of parallelly executable instructions from a source program and then combines them into a single long instruction word, thereby shortening the time taken for the VLIW processor to execute the program. [0003]
  • According to the compilation technology, instructions are rearranged on a “basic block” basis, where a basic block means a set of instructions to be executed consecutively without any branch or halt. That is to say, a basic block is a collection of instructions controllable continuously from the first through last ones. [0004]
  • A VLIW processor, on the other hand, generally executes instructions included in a long, fixed-length word with a fixed parallelism index. Thus, the code conversion efficiency attainable by the VLIW processor is not always good. To eliminate such a problem, a VLIW processor for executing a variable number of instructions included in a variable-length word was developed recently. In the VLIW processor of this type, a set of instructions to be executed in parallel is divided into a plurality of groups on respective parallel execution boundaries, thereby making the number of instructions issued per cycle (i.e., index of parallelism) variable. In addition, the VLIW processor executes an instruction word of a variable length so as to improve the code conversion efficiency. In this specification, a group of instructions included between an adjacent pair of execution boundaries will be called an “execution unit (of instructions)”. Furthermore, the VLIW processor can also execute a plurality of units of instructions concurrently while branching and recombining the processing flow. [0005]
  • As described above, the processor of this type rearranges the instructions on the basic block basis. Thus, if the compilation technology is applied to the processor of this type, even a set of instructions, which could be included in a single execution unit otherwise, might be unintentionally divided into several execution units on the basic block boundaries. As a result, the program execution rate attainable by such a processor cannot be regarded as sufficiently high considering the potential performance of the VLIW processor. [0006]
  • SUMMARY OF THE INVENTION
  • An object of the present invention is increasing the program execution rate of a target machine in performing program processing for parallel processing purposes. [0007]
  • Specifically, an inventive program processing method for parallel processing includes the step of a) subdividing each of a plurality of basic blocks, into which program code has been divided, into a multiplicity of execution units. Each of the execution units is made up of parallelly-executable instructions. The method further includes the step of b) combining two of the execution units, which are located just before and after a basic block boundary, into a single execution unit if these execution units are executable within a single cycle. [0008]
  • According to the present invention, if a pair of execution units, which are located just before and after a basic block boundary, are found executable within a single cycle, then these execution units are combined into a single execution unit. Even a group of instructions ranging across a basic block boundary, or covering multiple basic blocks, are executable by a target machine within a single cycle if these instructions are classified as a single execution unit. Accordingly, it is more probable that a group of instructions ranging across a basic block boundary are executable in parallel, thus cutting down the number of cycles taken by the target machine to execute a program. As a result, the program execution rate increases. [0009]
  • In one embodiment of the present invention, the step b) preferably includes analyzing dependence between instructions belonging to the two execution units located just before and after the basic block boundary. And it is determined based on the analyzed dependence between the instructions whether or not these execution units located just before and after the basic block boundary are combinable into the single execution unit. [0010]
  • In another embodiment of the present invention, an execution boundary code indicating the identity as boundary between an associated pair of the execution units is preferably added to each said basic block boundary in the step a). In the step b), if the execution units located just before and after the basic block boundary are executable within the single cycle, then the execution boundary code that has been added to the basic block boundary is preferably removed. [0011]
  • In still another embodiment, instructions, which impose less strict constraints on resources of a target machine for executing the program, are selected preferentially in the step a) as respective instructions belonging to first and last execution units of each said basic block. [0012]
  • In this particular embodiment, when the instructions belonging to the first and last execution units of the basic block are selected in the step a), an instruction that is executable by the target machine only by itself in a cycle is preferably given a lower priority. [0013]
  • In still another embodiment, when instructions belonging to first and last execution units of each said basic block are selected in the step a), an instruction with a short word length is preferably given a higher priority. [0014]
  • Another inventive program processing method for parallel processing includes the step of a) subdividing each of a plurality of basic blocks, into which program code has been divided, into a multiplicity of execution units. Each of the execution units is made up of parallelly-executable instructions. In the step a), a particular one of the basic blocks is preferably subdivided into a set of execution units along with an instruction belonging to one of the other execution units that is closest to the particular basic block. The closest execution unit belongs to another set of execution units, into which another one of the basic blocks that is adjacent to, and combinable with, the particular basic block has already been subdivided. [0015]
  • According to the present invention, a particular basic block is subdivided into a set of execution units along with an instruction belonging to one of the other execution units that is closest to the particular basic block. The closest execution unit belongs to another set of execution units, into which another basic block that is adjacent to, and combinable with, the particular basic block has already been subdivided. Thus, a number of instructions covering multiple basic blocks across a basic block boundary are more likely to be combined into a single execution unit. Even a group of instructions ranging across a basic block boundary are executable by a target machine within a single cycle if these instructions are classified as a single execution unit. Accordingly, it is more probable that a group of instructions covering several basic blocks across a basic block boundary are executable in parallel, thus cutting down the number of cycles taken by the target machine to execute a program. As a result, the program execution rate increases. [0016]
  • In one embodiment of the present invention, one of the basic blocks that is located just before the particular basic block may be used as the combinable basic block in the step a). The particular basic block, along with the instruction belonging to the last execution unit of the combinable basic block, may be subdivided into the set of execution units. [0017]
  • In an alternate embodiment, one of the basic blocks that is located just after the particular basic block may be used as the combinable basic block in the step a). The particular basic block, along with the instruction belonging to the first execution unit of the combinable basic block, may be subdivided into the set of execution units. [0018]
  • In still another embodiment, each said basic block may be subdivided in the step a) into the execution units sequentially in a forward direction from the beginning toward the end of the program. [0019]
  • In an alternate embodiment, each said basic block may be subdivided in the step a) into the execution units sequentially in a backward direction from the end toward the beginning of the program. [0020]
  • In still another embodiment, one of the basic blocks that belongs to the innermost loop is preferably subdivided into the execution units preferentially. [0021]
  • In yet another embodiment, the method may further include the step b) of subdividing each said basic block of the program code into another set of execution units independent of adjacent ones of the basic blocks. Results of the steps a) and b) are compared to each other and one of these steps a) and b) that results in the smaller number of execution units is adopted. [0022]
  • An inventive program processor for parallel processing includes an intra basic block parallelizer for subdividing each of a plurality of basic blocks, into which program code has been divided, into a multiplicity of execution units. Each of the execution units is made up of parallelly-executable instructions. The processor further includes a basic block boundary parallelizer for combining two of the execution units, which are located just before and after a basic block boundary, into a single execution unit if these execution units are executable within a single cycle. [0023]
  • Another inventive program processor is adapted to execute compilation for parallel processing purposes. The processor includes: a compiler front end for translating source code into intermediate code by dividing the source code into a plurality of basic blocks; a parallelizer for converting the intermediate code into code in a parallelly-executable form; and an object code generator for translating the intermediate code in the form converted by the parallelizer into object code executable by a target machine. The parallelizer includes: an intra basic block parallelizer for subdividing each of a plurality of basic blocks, into which the intermediate code has been divided, into a multiplicity of execution units, each being made up of parallelly-executable instructions; and a basic block boundary parallelizer for combining two of the execution units, which are located just before and after a basic block boundary, into a single execution unit if these execution units are executable within a single cycle. [0024]
  • Still another inventive program processor for parallel processing includes an expanded basic block parallelizer for subdividing each of a plurality of basic blocks, into which program code has been divided, into a multiplicity of execution units, each being made up of parallelly-executable instructions. The expanded basic block parallelizer subdivides a particular one of the basic blocks into a set of execution units along with an instruction belonging to one of the other execution units that is closest to the particular basic block. The closest execution unit belongs to another set of execution units, into which another one of the basic blocks that is adjacent to, and combinable with, the particular basic block has already been subdivided. [0025]
  • Yet another inventive program processor is also adapted to execute compilation for the parallel processing purposes. The processor includes: a compiler front end for translating source code into intermediate code by dividing the source code into a plurality of basic blocks; a parallelizer for converting the intermediate code into code in a parallelly-executable form; and an object code generator for translating the intermediate code in the form converted by the parallelizer into object code executable by a target machine. The parallelizer includes an expanded basic block parallelizer for subdividing each of a plurality of basic blocks, into which the intermediate code has been divided, into a multiplicity of execution units, each being made up of parallelly-executable instructions. The expanded basic block parallelizer subdivides a particular one of the basic blocks into a set of execution units along with an instruction belonging to one of the other execution units that is closest to the particular basic block. The closest execution unit belongs to another set of execution units, into which another one of the basic blocks that is adjacent to, and combinable with, the particular basic block has already been subdivided. [0026]
  • An inventive storage medium has stored thereon a program getting a program processing procedure executed by a computer for parallel processing purposes. The program processing procedure includes the steps of: a) subdividing each of a plurality of basic blocks, into which program code has been divided, into a multiplicity of execution units, each being made up of parallelly-executable instructions; and b) combining two of the execution units, which are located just before and after a basic block boundary, into a single execution unit if these execution units are executable within a single cycle. [0027]
  • Another inventive storage medium has stored thereon a program getting a program processing procedure executed by a computer through compilation for parallel processing purposes. The program processing procedure includes the steps of: a) translating source code into intermediate code by dividing the source code into a plurality of basic blocks; b) subdividing each said basic block of the intermediate code into a multiplicity of execution units, each being made up of parallelly-executable instructions; c) combining two of the execution units, which are located just before and after a basic block boundary, into a single execution unit if these execution units are executable within a single cycle; and d) translating the intermediate code in the form converted in the steps b) and c) into object code executable by a target machine. [0028]
  • Still another inventive storage medium has stored thereon a program getting a program processing procedure executed by a computer for parallel processing purposes. The program processing procedure includes the step of subdividing each of a plurality of basic blocks, into which program code has been divided, into a multiplicity of execution units, each being made up of parallelly-executable instructions. A particular one of the basic blocks is subdivided into a set of execution units along with an instruction belonging to one of the other execution units that is closest to the particular basic block. The closest execution unit belongs to another set of execution units, into which another one of the basic blocks that is adjacent to, and combinable with, the particular basic block has already been subdivided. [0029]
  • Yet another inventive storage medium has stored thereon a program getting a program processing procedure executed by a computer through compilation for parallel processing purposes. The program processing procedure includes the steps of a) translating source code into intermediate code by dividing the source code into a plurality of basic blocks; b) subdividing each said basic block of the intermediate code into a multiplicity of execution units, each being made up of parallelly-executable instructions; and c) translating the intermediate code in the form processed in the step b) into object code executable by a target machine. In the step b), a particular one of the basic blocks is subdivided into a set of execution units along with an instruction belonging to one of the other execution units that is closest to the particular basic block. The closest execution unit belongs to another set of execution units, into which another one of the basic blocks that is adjacent to, and combinable with, the particular basic block has already been subdivided. [0030]
  • Yet another inventive storage medium has stored thereon a set of instructions to be executed in parallel. The instruction set is grouped into a plurality of execution units, each being made up of parallelly-executable instructions, and at least one of the execution units is located across a boundary between an associated pair of the basic blocks.[0031]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIGS. [0032] 1(a) through 1(d) illustrate an exemplary instruction set to be executed by a target machine in program processing according to the present invention;
  • FIG. 1([0033] e) illustrates exemplary executable code processed by the target machine; and
  • FIG. 1([0034] f) schematically illustrates how the target machine executes the instructions.
  • FIG. 2 is a block diagram illustrating a configuration for a program processor according to a first embodiment of the present invention. [0035]
  • FIG. 3 is a flowchart illustrating a processing procedure in the concurrent executability decider included in the program processor shown in FIG. 2. [0036]
  • FIG. 4 illustrates exemplary internal form code received by the basic block boundary parallelizer included in the program processor shown in FIG. 2. [0037]
  • FIG. 5 is a dependence graph for the internal form code shown in FIG. 4. [0038]
  • FIG. 6 illustrates, in an assembly language, object code that has been generated for the internal form code shown in FIG. 4. [0039]
  • FIG. 7 illustrates another exemplary internal form code received by the basic block boundary parallelizer included in the program processor shown in FIG. 2. [0040]
  • FIG. 8 is a dependence graph for the internal form code shown in FIG. 7. [0041]
  • FIG. 9 illustrates, in an assembly language, object code that has been generated for the internal form code shown in FIG. 7. [0042]
  • FIG. 10 is a block diagram illustrating a configuration for a program processor according to a second embodiment of the present invention. [0043]
  • FIG. 11 illustrates exemplary assembler code. [0044]
  • FIG. 12 is a dependence graph for the assembler code shown in FIG. 11. [0045]
  • FIG. 13 is a flowchart illustrating a processing procedure in the instruction rearranger included in the program processor shown in FIG. 10. [0046]
  • FIG. 14 illustrates exemplary internal form code received by the parallelizer included in the program processor shown in FIG. 10. [0047]
  • FIG. 15 is a dependence graph for the basic block B shown in FIG. 14. [0048]
  • FIG. 16 illustrates execution units obtained by subdividing the basic block B shown in FIG. 14. [0049]
  • FIG. 17 is a dependence graph for the respective first execution units of the basic blocks A and B shown in FIG. 14. [0050]
  • FIG. 18 illustrates execution units obtained by subdividing the respective first execution units of the basic blocks A and B shown in FIG. 14. [0051]
  • FIG. 19 illustrates, in an assembly language, object code generated for the internal form code shown in FIG. 14. [0052]
  • FIG. 20([0053] a) illustrates exemplary internal form code received by the parallelizer included in the program processor shown in FIG. 10;
  • FIG. 20([0054] b) illustrates a result of processing performed on the internal form code shown in FIG. 20(a) according to the second embodiment; and
  • FIG. 20([0055] c) illustrates a result of processing performed on the internal form code shown in FIG. 20(a) according to a modified example of the second embodiment.
  • DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • Hereinafter, preferred embodiments of the present invention will be described with reference to the accompanying drawings. [0056]
  • Target Machine
  • First, a target machine applicable to program processing according to the present invention will be described. [0057]
  • According to the present invention, a processor that can execute a variable number of variable-length instructions per cycle and has an index of parallelism of three is used as the target machine. An instruction set executed by this target machine is made up of a plurality of instruction units, each being composed of 21 bits. The format of a single instruction is either a 21-bit instruction (short instruction) consisting of just one unit or a 42-bit instruction (long instruction) consisting of two units. That is to say, this target machine is compatible with a variable-length instruction system. [0058]
  • Each instruction is provided with information indicating whether or not the instruction is adjacent to a boundary between execution units, i.e., execution boundary. If the information indicates that the instruction is adjacent to the execution boundary, then the instruction in question and the next instruction are not executed within the same cycle. Instead, the target machine executes instructions located between a pair of execution boundaries, i.e., instructions belonging to the same execution unit, within the same cycle. [0059]
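  • For illustration only, the grouping of instructions into execution units by means of this execution boundary information can be sketched as follows (Python is used here; the instruction record is a hypothetical stand-in introduced for this sketch and is not part of the disclosure):

```python
from collections import namedtuple

# Hypothetical instruction record used only in this sketch; in the target machine
# the execution-boundary information is a bit attached to (or stored alongside)
# each instruction.
Ins = namedtuple("Ins", ["text", "end_of_unit"])

def split_into_execution_units(instructions):
    """Group a flat instruction stream into execution units: each unit runs up to
    and including the next instruction whose execution-boundary bit is set."""
    units, current = [], []
    for ins in instructions:
        current.append(ins)
        if ins.end_of_unit:
            units.append(current)
            current = []
    if current:                      # trailing instructions without a final boundary bit
        units.append(current)
    return units

# Two execution units: {i1, i2} executed in one cycle, {i3} in the next.
stream = [Ins("i1", False), Ins("i2", True), Ins("i3", True)]
print([len(unit) for unit in split_into_execution_units(stream)])   # [2, 1]
```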
  • The following are the constraints imposed on instructions that can be included within a single execution unit: [0060]
  • 1) The number of instructions included in a single execution unit should be no greater than three; and [0061]
  • 2) The total resources of the target machine that can be allocated to instructions in a single execution unit are not more than 3 arithmetic logic units (ALUs), 1 multiplication unit, 1 LD/ST unit and 1 branch unit. [0062]
  • This constraint is called the “resource constraint”. Only when both of these conditions are met can the instructions be executed in parallel. [0063]
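  • A minimal sketch of checking these two conditions for a candidate execution unit is given below; the per-instruction resource tagging is an assumption of the sketch, while the limits themselves are the ones stated above:

```python
# Resource limits stated above for the example target machine.
RESOURCE_LIMITS = {"alu": 3, "mul": 1, "ldst": 1, "branch": 1}
MAX_INSTRUCTIONS_PER_UNIT = 3

def parallelly_executable(resource_classes):
    """True if instructions occupying these resource classes ("alu", "mul",
    "ldst", "branch") may form a single execution unit under constraints 1) and 2)."""
    if len(resource_classes) > MAX_INSTRUCTIONS_PER_UNIT:        # constraint 1)
        return False
    counts = {}
    for r in resource_classes:
        counts[r] = counts.get(r, 0) + 1
    return all(counts.get(r, 0) <= limit                         # constraint 2)
               for r, limit in RESOURCE_LIMITS.items())

print(parallelly_executable(["alu", "alu", "ldst"]))   # True
print(parallelly_executable(["ldst", "ldst"]))         # False: only one LD/ST unit
```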
  • Since the number of instructions belonging to a single execution unit may be arbitrarily defined so long as the constraint (1) is met, a variable number of instructions can be issued per cycle. To reduce the load on the hardware, the target machine does not determine whether or not the instructions included in a single execution unit are parallelly executable semantically or in such a manner as to meet the resource constraint imposed by the target machine. That is to say, it must be ensured for the target machine at the program processing end that parallelly executable instructions are appropriately arranged between a pair of execution boundaries. [0064]
  • Also, the target machine does not always execute all the instructions included in a single execution unit concurrently. For example, if one of the instructions in a single execution unit arrived later than its expected time, then the instructions in the execution unit might be executed by the target machine separately at different times. Thus, when the program processing is performed, the instructions in an execution unit should be arranged in such an order that the semantic operation of the program is correct even if the instructions are executed at several different times. [0065]
  • As can be seen from the above conditions, a branch instruction may be included in an execution unit. For example, an execution unit may include “conditional branch instruction”, “ALU instruction” and “LD instruction” in this order. In this case, if the first instruction “conditional branch” is executed while meeting the above conditions, then the instructions “ALU” and “LD”, which succeed the branch instruction in the same execution unit, are not executed. Similarly, an interrupt into a middle of an execution unit is also permitted. In such a case, the instructions, which precede the jump destination instruction in the execution unit, are not executed. [0066]
  • FIGS. [0067] 1(a) through 1(d) illustrate an exemplary instruction set to be executed by a target machine. In FIGS. 1(a) through 1(d), the opcodes op and op2, destination register Rd, source register Rs and n-bit constant immn (where n is a constant) are included in this instruction set. Each instruction is further provided with a bit E indicating the presence or absence of the execution boundary. If an instruction is located adjacent to an execution unit boundary, then the bit E is one. Specifically, the instruction shown in FIG. 1(a) is a register-to-register operation instruction, while the instructions shown in FIGS. 1(b), 1(c) and 1(d) are operation instructions using constants of less than 5 bits, less than 21 bits and less than 32 bits, respectively. The instructions shown in FIGS. 1(a) and 1(b) are short instructions each composed of 21 bits, while the instructions shown in FIGS. 1(c) and 1(d) are long instructions each composed of 42 bits. The operation instruction using the less-than-32-bit constant is effectively an instruction that transfers a 32-bit constant to a register. This is because the number of remaining bits assignable to the opcode op is small in the instruction including the 32-bit constant, so that the number of instructions using the 32-bit constant is limited.
  • Hereinafter, the actual operation of the target machine will be briefly described. [0068]
  • FIG. 1([0069] e) illustrates exemplary executable code (object code) handled by the target machine. Each executable code is fetched by the target machine on a 64-bit basis (or as a packet) consisting of three 21-bit units. The actual total length of the three units is 63 bits (=21×3) and there is one extra bit left, which is not used for any special purpose. The instructions fetched by the target machine are executed on the execution unit basis, i.e., a set of instructions located between one execution boundary and the next. If there are any instructions that were fetched but not executed, then those instructions are stored in an instruction buffer and then executed in the next execution cycle or later.
  • FIG. 1([0070] f) schematically illustrates how the target machine executes the instructions. In FIG. 1(f), each row corresponds to a set of instructions executed by the target machine per execution cycle.
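  • The fetch-and-issue behaviour just described can be modelled, very roughly, as follows; the sketch ignores branches, interrupts and timing details and exists only to make the relationship between packets and the instruction buffer concrete:

```python
from collections import namedtuple

# Minimal stand-in for one 21-bit instruction unit: a mnemonic plus the
# execution-boundary bit.
Unit = namedtuple("Unit", ["text", "end_of_unit"])

def execute(program):
    """Units arrive in 64-bit packets of three; each cycle the machine issues the
    buffered instructions up to the next execution boundary and keeps the rest
    in the instruction buffer for a later cycle."""
    PACKET_SIZE = 3
    stream, buffer, cycles = list(program), [], []
    while stream or buffer:
        if stream:                                   # fetch one packet per cycle
            buffer, stream = buffer + stream[:PACKET_SIZE], stream[PACKET_SIZE:]
        boundary = next((i for i, u in enumerate(buffer) if u.end_of_unit), None)
        if boundary is not None:                     # a complete execution unit is buffered
            cycles.append(buffer[:boundary + 1])
            buffer = buffer[boundary + 1:]
        elif not stream:                             # trailing units without a final boundary
            cycles.append(buffer)
            buffer = []
    return cycles
```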
  • Jumps and interrupts out of, or into, the middle of an execution unit will be briefly described. Suppose [0071] Unit 1 is a conditional branch instruction and the conditions of the conditional branch are met in FIG. 1(e). In such a situation, although the conditional branch instruction of Unit 1 is executed, the instructions succeeding Unit 1 in the same execution unit, i.e., those of Units 2 and 3, are not executed. Also, if an interrupt into Unit 5 occurs in FIG. 1(e), then the instructions preceding Unit 5 in the same execution unit, i.e., those of Units 3 and 4, are not executed.
  • In the following description, this processor is supposed to be used according to the present invention for convenience sake. However, inventive program processing is not limited to the basic instruction word length of 21 bits, parallelism index of 3 and resource constraints of 3 ALU units, 1 multiplication unit, 1 LD/ST unit and 1 branch unit as exemplified for the processor or to the above combination of executable instructions. The present invention is naturally applicable to other situations where these specifics are different. [0072]
  • Also, in the foregoing description, the information indicating the identity as execution boundary is attached to an instruction itself. Alternatively, the information representing the execution boundary may be stored in a different field separately from the instruction. [0073]
  • Embodiment 1
  • FIG. 2 is a block diagram illustrating a configuration for a [0074] program processor 1 according to a first exemplary embodiment of the present invention. As shown in FIG. 2, the program processor 1 includes a compiler front end 10, a parallelizer 20 and an object code generator 30.
  • The compiler [0075] front end 10 retrieves source code 50, which is written in a high-level language like C and saved in a file format, and analyzes the syntax and semantics thereof to generate internal form code (i.e., intermediate code, or assembler code). Then, the compiler 10 divides the internal form code into a plurality of basic blocks, each of which is an instruction set without any branch or interrupt. If necessary, the compiler 10 optimizes the internal form code so as to reduce the size or execution time of the executable code (object code) to be generated ultimately.
  • The [0076] parallelizer 20 reads the internal form code generated by the compiler front end 10 as program code and then parallelizes the code for the target machine. The parallelizer includes an intra basic block parallelizer 21 and a basic block boundary parallelizer 22.
  • The intra [0077] basic block parallelizer 21 analyzes dependence among the instructions included in each basic block, schedules (or rearranges) the instructions and adds an execution boundary to the basic block, thereby parallelizing the internal form code. The intra basic block parallelizer 21 operates in the same way as the counterpart (i.e., a local parallelizer) in a known program processor (as disclosed in Japanese Patent Application No. 10-095647).
  • The basic [0078] block boundary parallelizer 22 examines each basic block boundary of the internal form code that has been parallelized by the intra basic block parallelizer 21 and removes the execution boundary located at the basic block boundary if permitted. The basic block boundary parallelizer 22 includes a boundary dependence analyzer 23, a concurrent executability decider 24 and an execution boundary remover 25.
  • The [0079] boundary dependence analyzer 23 analyzes the dependence among all the instructions included in a pair of execution units, which are located just before and after a basic block boundary in question, thereby drawing up a dependence graph. In the following description, those instructions to be analyzed will be called “instructions in question”. In a dependence graph, instructions are represented as nodes and dependence among them is represented as edges (or arrows). For example, suppose Instruction (a) must be executed to execute Instruction (b). Since Instruction (b) depends on Instruction (a) in such a case, the dependence therebetween is represented by a description “a→b”. In the illustrated embodiment, the dependence is supposed to be “definition-reference” dependence (or data dependence), which represents dependence between an instruction defining a resource and another instruction referring to the same resource. A method of drawing up a dependence graph was disclosed by R. J. Blainey in an article entitled “Instruction scheduling in the TOBEY compiler” (IBM J. Res. Develop., Vol. 38, No. 5, September 1994), for example.
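  • As a rough illustration of this “definition-reference” analysis (the operand-extraction convention below is a naive assumption of the sketch, not something the disclosure specifies):

```python
def defs_and_uses(text):
    """Naive convention assumed only for this sketch: the last register operand of
    an instruction is its destination, the remaining register operands are sources.
    Memory and condition-flag resources are ignored."""
    regs = [o.strip() for o in text.partition(" ")[2].split(",")
            if o.strip().startswith("r")]
    return set(regs[-1:]), set(regs[:-1])            # (defined, referred to)

def data_dependence_edges(instructions):
    """Edges (i, j): instruction j refers to a resource that instruction i defines."""
    edges = []
    for i, earlier in enumerate(instructions):
        defined, _ = defs_and_uses(earlier)
        for j in range(i + 1, len(instructions)):
            _, referred = defs_and_uses(instructions[j])
            if defined & referred:
                edges.append((i, j))
    return edges

# As in FIGS. 7 and 8: "mov r0, r1" refers to r0, which "mov 10, r0" defines.
print(data_dependence_edges(["mov 10, r0", "mov r0, r1"]))    # [(0, 1)]
```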
  • Based on the dependence graph drawn up by the [0080] boundary dependence analyzer 23, the concurrent executability decider 24 decides whether or not all the instructions in question are executable within the same cycle.
  • FIG. 3 is a flowchart illustrating a processing procedure in the [0081] concurrent executability decider 24. First, in Step a1, the decider 24 decides whether or not there is any data dependence among the instructions in question. If the answer is YES, then the decider 24 decides that those instructions are “non-executable concurrently” within the same cycle. Alternatively, if the answer is NO, then the decider 24 decides in Step a2 whether or not the resource constraint imposed by the target machine is met when all the instructions in question are executed within the same cycle. If the answer is NO, then the decider 24 decides that those instructions are “non-executable concurrently” within the same cycle. That is to say, only when the answers to the queries in Steps a1 and a2 are NO and YES, respectively, i.e., if there is no data dependence among the instructions in question and if the resource constraint imposed by the target machine is met, the decider 24 decides that those instructions are “executable concurrently” within the same cycle.
  • If the [0082] concurrent executability decider 24 has decided that the instructions in question are executable within the same cycle, then the execution boundary remover 25 removes the execution boundary located at the basic block boundary in question.
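  • Taken together, the decision of Steps a1 and a2 and the boundary removal can be summarized by the hedged sketch below, in which execution units are plain instruction lists and the dependence and resource checks are supplied as callables (for instance, the ones sketched earlier):

```python
def combine_across_boundary(unit_before, unit_after,
                            has_data_dependence, meets_resource_constraint):
    """Concurrent executability decider 24 plus execution boundary remover 25:
    if the pair of execution units around a basic block boundary has no data
    dependence among the instructions in question (Step a1) and meets the
    resource constraint of the target machine (Step a2), return the two merged
    into one execution unit; otherwise leave them as they are."""
    merged = unit_before + unit_after
    if not has_data_dependence(merged) and meets_resource_constraint(merged):
        return [merged]                       # execution boundary removed
    return [unit_before, unit_after]          # execution boundary kept
```

  • Plugged in this way, the three instructions of FIG. 4 would come back as a single execution unit, while the pair of FIG. 7 would stay split because “mov r0, r1” depends on “mov 10, r0”.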
  • The [0083] object code generator 30 translates the internal form code (assembler code), which has been output from the parallelizer 20, into object code 60 and outputs the object code 60 as a file.
  • Hereinafter, characteristic operations of the program processor shown in FIG. 2 will be described by way of operating examples using specific instructions. [0084]
  • OPERATING EXAMPLE 1
  • FIG. 4 illustrates exemplary internal form code that has been processed by the intra [0085] basic block parallelizer 21 included in the parallelizer 20. As described above, the compiler front end 10 translates the source program 50 into the internal form code, which is made up of a plurality of basic blocks divided. Then, the intra basic block parallelizer 21 subdivides each of the basic blocks into execution units, each of which is made of parallelly-executable instructions. As a result, the internal form code such as that shown in FIG. 4 is generated by the intra basic block parallelizer 21.
  • The basic [0086] block boundary parallelizer 22 receives the internal form code such as that shown in FIG. 4. In FIG. 4, only a pair of execution units that are located just before and after a basic block boundary in question are illustrated. The execution boundary A is provided due to the existence of the basic block boundary and has nothing to do with the parallelism of the instructions. Thus, an execution boundary code indicating the execution boundary A is added thereto.
  • First, the [0087] boundary dependence analyzer 23 is activated to draw up a dependence graph shown in FIG. 5 from the internal form code shown in FIG. 4. As shown in FIG. 5, there is no dependence among the three instructions in question.
  • Next, the [0088] concurrent executability decider 24 is activated. In the illustrated example, there is no data dependence among the three instructions in question shown in FIG. 4 (i.e., the answer to the query in Step a1 is NO). And the resource constraint imposed by the target machine is met (i.e., the answer to the query in Step a2 is YES). Thus, the decider 24 decides that these instructions are “executable concurrently”.
  • Subsequently, the [0089] execution boundary remover 25 is activated. Since the concurrent executability decider 24 has affirmed the concurrent executability of the instructions in question, the remover 25 removes the execution boundary code representing the execution boundary A that is located at the basic block boundary in question.
  • As a result, the [0090] object code generator 30 outputs portion of the object code 60, corresponding to the internal form code shown in FIG. 4, as the code shown in FIG. 6. The object code is described in an assembly language in FIG. 6 to make the code easily understandable (the same statement will be applied to FIGS. 9 and 19, too). As can be seen from FIG. 6, now that the execution boundary A has been removed, the three instructions in question are executed within the same cycle, thus shortening the time taken for the target machine to execute the program.
  • OPERATING EXAMPLE 2
  • FIG. 7 illustrates another exemplary internal form code processed by the intra [0091] basic block parallelizer 21 included in the parallelizer 20. As in FIG. 4, only a pair of execution units located just before and after a basic block boundary in question are illustrated in FIG. 7. Just like the execution boundary A shown in FIG. 4, the execution boundary B is also provided due to the existence of the basic block boundary and has nothing to do with the parallelism of the instructions.
  • First, the [0092] boundary dependence analyzer 23 is activated to draw up a dependence graph shown in FIG. 8 from the internal form code shown in FIG. 7. As shown in FIG. 8, there is data dependence between instructions “mov 10, r0” and “mov r0, r1”, i.e., the instruction “mov r0, r1” depends on the instruction “mov 10, r0”.
  • Next, the [0093] concurrent executability decider 24 is activated. In the illustrated example, there is data dependence among the three instructions in question shown in FIG. 7 (i.e., the answer to the query in Step a1 is YES). Thus, the decider 24 decides that these instructions are “non-executable concurrently”.
  • Subsequently, the [0094] execution boundary remover 25 is activated. Since the concurrent executability decider 24 has negated the concurrent executability of the instructions in question, the remover 25 does not remove the execution boundary B located at the basic block boundary in question. As a result, the execution boundary B remains as it is.
  • FIG. 9 illustrates portion of the [0095] object code 60 that has been generated by the object code generator 30 from the internal form code shown in FIG. 7. As can be seen from FIG. 9, since the execution boundary B remains, the three instructions in question are executed in two cycles.
  • According to this embodiment, each of a plurality of basic blocks, into which intermediate code has been divided, is subdivided into a multiplicity of execution units, each of which is made up of parallelly-executable instructions. And if two of the execution units, which are located just before and after a basic block boundary, are found executable within the same cycle, then these execution units are combined into a single execution unit. Thus, the number of cycles taken for the target machine to execute the program can be reduced and the program execution rate can be increased. [0096]
  • In making the intra [0097] basic block parallelizer 21 shown in FIG. 2 subdivide each basic block into execution units, instructions, on which less strict resource constraints are imposed by the target machine, may be selected preferentially. More specifically, in selecting instructions belonging to the first and last execution units of a basic block, instructions on which strict resource constraints are imposed, e.g., load, store and multiply instructions, each of which can be executed only by itself in a cycle, should preferably be avoided by giving lower priorities thereto. In such a case, instructions with less strict resource constraints are allocated to the execution units located just before and after a basic block boundary. As a result, the basic block boundary parallelizer 22 is more likely to combine a pair of execution units located just before and after a basic block boundary.
  • Also, in making the intra [0098] basic block parallelizer 21 shown in FIG. 2 subdivide each basic block into execution units, instructions with short word lengths may be selected preferentially. More specifically, long instructions requiring two units, e.g., instructions using immediate values of more than 5 bits, should preferably be avoided by giving lower priorities thereto. In such a case, instructions with relatively short word lengths are allocated to the execution units located just before and after a basic block boundary. Thus, even in a target machine that can fetch and execute instructions with a limited word length at a time, the basic block boundary parallelizer 22 is more likely to combine a pair of execution units that are located just before and after a basic block boundary. Furthermore, a branched instruction might be inserted into an execution unit located just after a basic block boundary and therefore that execution unit might be short of the required word length in a variable length instruction system. However, if the instructions with short word lengths are selected preferentially, then such a situation can be avoided because the instruction word length of each execution unit is shorter.
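  • Both preferences can be expressed as a simple priority score, as in the sketch below; the instruction record and the penalty weights are assumptions introduced only for illustration:

```python
from dataclasses import dataclass

@dataclass
class Ins:                        # hypothetical record, used only in this sketch
    text: str                     # assembler text, e.g. "mov r4, r5"
    resource: str                 # "alu", "mul", "ldst" or "branch"
    is_long: bool = False         # True for a 42-bit (two-unit) instruction

def boundary_priority(ins):
    """Lower score = preferred when filling the first or last execution unit of a
    basic block."""
    score = 0
    if ins.resource in ("ldst", "mul"):    # strict resource constraint (load, store,
        score += 2                         # multiply): give it a lower priority
    if ins.is_long:                        # long instruction: give it a lower priority
        score += 1
    return score

candidates = [Ins("load instruction", "ldst"), Ins("mov r4, r5", "alu")]
print([c.text for c in sorted(candidates, key=boundary_priority)])
# ['mov r4, r5', 'load instruction']
```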
  • Embodiment 2
  • FIG. 10 is a block diagram illustrating a configuration for a [0099] program processor 2 according to a second exemplary embodiment of the present invention. As shown in FIG. 10, the program processor 2 includes a compiler front end 10, a parallelizer 40 and an object code generator 30.
  • Like the compiler [0100] front end 10 shown in FIG. 2, the compiler front end 10 shown in FIG. 10 also retrieves source code 50 to generate internal form code (assembler code) that has been divided into basic blocks. Also, the object code generator 30 translates the internal form code, which has been output from the parallelizer 40, into object code 60 and then outputs the object code 60 as a file in the same way as the counterpart 30 shown in FIG. 2.
  • The [0101] parallelizer 40 reads the internal form code generated by the compiler front end 10 as program code and parallelizes the code for the target machine. The parallelizer 40 includes an execution order determiner 41 and an expanded basic block parallelizer 42.
  • The [0102] execution order determiner 41 determines the order of basic blocks in the internal form code read out that are to be passed to the expanded basic block parallelizer 42. In the illustrated embodiment, the basic blocks are passed to the expanded basic block parallelizer 42 in the ascending order, i.e., beginning with the last basic block of the program.
  • The expanded [0103] basic block parallelizer 42 analyzes dependence among the instructions included in each basic block, schedules the instructions and adds a parallel execution boundary to the basic block, thereby subdividing the basic block into a plurality of execution units, each being made up of parallelly executable instructions. The expanded basic block parallelizer 42 subdivides a particular basic block into execution units along with an instruction belonging to the first one of execution units, into which another basic block has already been subdivided. The latter basic block is located just after, and combinable with, the particular basic block. That is to say, unlike the intra basic block parallelizer 21 shown in FIG. 2, the expanded basic block parallelizer 42 rearranges the instructions of the basic block in question while taking the first execution unit of the next basic block into account.
  • The expanded [0104] basic block parallelizer 42 includes a dependence analyzer 43, an instruction rearranger 44 and an execution boundary adder 45. The dependence analyzer 43 analyzes dependence among the instructions included in the basic block in question and the instructions included in the first execution unit of the next basic block. In the illustrated embodiment, the dependence of the following four types is analyzed.
  • Data Dependence: [0105]
  • Dependence of an instruction referring to a resource on another instruction defining the same resource; [0106]
  • Inverse Dependence: [0107]
  • Dependence of an instruction defining a resource on another instruction referring to the same resource; [0108]
  • Output Dependence: [0109]
  • Dependence between an instruction defining a resource and another instruction defining the same resource; and [0110]
  • Control Dependence: [0111]
  • Dependence of an instruction of the type changing a control flow, e.g., branch or return, which is located at the end of a basic block and should be executed after all the instructions belonging to the same basic block have been executed or within the same cycle. [0112]
  • If the original order of the instructions is changed, then the meaning of the program will be altered, no matter which dependence those instructions are defined by. Thus, in rearranging the instructions, the original dependence should be maintained. [0113]
  • The [0114] dependence analyzer 43 draws up a dependence graph representing dependence among the instructions for each basic block. FIG. 11 illustrates exemplary assembler code, while FIG. 12 is a dependence graph for the assembler code shown in FIG. 11. In FIG. 12, each solid line represents data dependence, while the broken line represents inverse dependence. Also, mem1, mem2 and mem3 refer to mutually different memory addresses.
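  • The four dependence kinds can be gathered into a single edge list roughly as follows; the operand-extraction rules and the branch mnemonics recognized at the end of a block are assumptions of this sketch:

```python
def analyze(text):
    """Naive operand extraction assumed only for this sketch: the last register
    operand is the destination, the others are sources; for non-"mov" operations
    the destination is treated as read-modify-write (e.g. "add r2, r1").
    Memory resources such as mem1..mem3 of FIG. 11 are ignored here."""
    op, _, rest = text.partition(" ")
    regs = [o.strip() for o in rest.split(",") if o.strip().startswith("r")]
    defs = set(regs[-1:])
    uses = set(regs[:-1]) if op == "mov" else set(regs)
    return op, defs, uses

def dependence_edges(instructions):
    """Return (i, j, kind) edges meaning: instruction j must not be moved before i."""
    info = [analyze(t) for t in instructions]
    edges = []
    for i, (_, defs_i, uses_i) in enumerate(info):
        for j in range(i + 1, len(instructions)):
            _, defs_j, uses_j = info[j]
            if defs_i & uses_j:
                edges.append((i, j, "data"))       # definition -> reference
            if uses_i & defs_j:
                edges.append((i, j, "inverse"))    # reference -> later redefinition
            if defs_i & defs_j:
                edges.append((i, j, "output"))     # definition -> redefinition
    # Control dependence: a control-flow instruction at the end of the block must
    # come after, or in the same cycle as, every other instruction of the block.
    if instructions and instructions[-1].split()[0] in ("bt", "br", "jmp", "ret"):
        last = len(instructions) - 1
        edges.extend((i, last, "control") for i in range(last))
    return edges
```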
  • The [0115] instruction rearranger 44 rearranges the instructions in the basic block using the dependence graph drawn up by the dependence analyzer 43, thereby generating parallelized assembler code for the target machine.
  • FIG. 13 is a flowchart illustrating a processing procedure in the [0116] instruction rearranger 44. First, in Step S1, the instruction rearranger 44 provisionally places one of the nodes in the dependence graph, which node corresponds to an instruction belonging to the first execution unit of the next basic block. Next, the rearranger 44 repeatedly performs Steps S2 through S9 (Loop 1) until there is no node left yet to be placed in the dependence graph drawn up by the dependence analyzer 43, i.e., until the answer to the query in Step S2 becomes NO.
  • In [0117] Loop 1, first, nodes that can now be regarded as candidates to be placed are extracted in Step S3 from the dependence graph to make a set of candidate nodes to be placed. In this case, each candidate node to be placed must satisfy either “all the successors have already been placed” or “only an instruction corresponding to the node placed provisionally in Step S1 is included as a successor and the dependence thereof on the node placed provisionally is not data dependence”. In this specification, a “successor” means an instruction that should be executed after a particular instruction has been executed.
  • Then, Steps S[0118] 4 through S8 (Loop 2) are repeatedly performed until there is no candidate node left to be placed for the set of candidate nodes to be placed as made in Step S3, i.e., until the answer to the query in Step S8 becomes NO.
  • First, in Step S[0119] 4, a node that seems to be the optimum candidate to be placed currently is selected from the set of candidate nodes to be placed. The optimum node is selected heuristically with reference to the dependence graph and the provisionally placed region such that the instructions in the entire basic block can be executed in the shortest possible time. In the illustrated embodiment, the node making the total instruction execution time (from the beginning to the end of the dependence graph) longest is supposed to be selected. If there are several instructions meeting this condition, one of the instructions that should be executed last is selected as the optimum node.
  • Next, in Step S[0120] 5, it is determined whether or not the optimum node is actually placeable. If the answer is YES, then the node is placed. To execute a plurality of instructions within a single cycle, the instructions to be executed should be decoded (or fetched), arithmetic logical operations should be performed thereon and then the results should be restored into registers or memories. Thus, the determination in Step S5 includes the following conditions:
  • Can the instruction in question be fetched concurrently with the instruction that has already been placed provisionally?[0121]
  • Are sufficient resources of the target machine (e.g., ALU, multiplier and so on) still available even if another part of the resources, as well as that already used by the instruction placed provisionally, is used by the instruction in question?[0122]
  • Are sufficient ports still available even if other write and read ports of a register file, as well as those already used by the instruction placed provisionally, are used by the instruction in question?[0123]
  • Only when all of these conditions are met, the optimum node can be regarded as placeable. Other constraints might be additionally imposed depending on the type of the target machine. [0124]
  • If the optimum node has been placed provisionally, then the set of nodes that have already been placed provisionally up to now are examined to determine in Step S[0125] 6 whether or not additional instructions are placeable. The detailed determination process will be described later. If the answer is NO, then Loop 2 is ended.
  • Alternatively, if it has been determined that additional nodes are placeable, a node that can now be a new candidate node to be placed after the optimum node has been placed is added in Step S[0126] 7 as another candidate node to be placed. The new candidate node to be placed should have only an instruction corresponding to the optimum node as a successor that has not been placed yet, and the dependence thereof on the instruction corresponding to the optimum node should not be data dependence. That is to say, the new candidate node to be placed should be associated with an instruction that can be executed within the same cycle as the instruction corresponding to the optimum node, but that cannot be executed in a cycle later than that cycle.
  • After [0127] Loop 2 comes to an end, the nodes that have been placed provisionally are regarded as fixed nodes in Step S9. More specifically, instructions, which correspond to the nodes belonging to the set of provisionally placed nodes, are extracted from the original instruction set, and rearranged into a new instruction set to be passed to the execution boundary adder 45.
  • By performing such a procedure, the nodes included in the dependence graph are placed by beginning with the one to be executed last. [0128]
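  • A much-simplified sketch of this backwards scheduling loop is given below. It is written for illustration only: the Step S4 heuristic is replaced by a trivial “latest in program order” rule, the Step S5 placeability test is reduced to the three-instructions-per-unit limit, and the graph representation (succs, is_data_dep) is an assumption of the sketch.

```python
def rearrange_block(nodes, succs, is_data_dep, next_first_unit, max_per_unit=3):
    """Backwards list scheduling in the spirit of FIG. 13 (much simplified).

    nodes:             instructions of the basic block in question, in program order
    succs[n]:          successors of n in the dependence graph, i.e. instructions
                       that must be executed after n
    is_data_dep(a, b): True if the edge a -> b is a data dependence
    next_first_unit:   instructions of the first execution unit of the next basic
                       block (Step S1); may be empty
    Returns the execution units of the block, last execution unit first."""
    fixed = set(next_first_unit)            # already scheduled in the next basic block
    remaining = list(nodes)
    current = list(next_first_unit)         # the block's last unit may share a cycle
    units = []                              # with the next block's first unit
    while remaining:                        # Loop 1 (Step S2)
        while len(current) < max_per_unit:  # Loop 2 (Steps S4 through S8)
            candidates = [                  # Steps S3/S7: every successor is either
                n for n in remaining        # already scheduled or shares this cycle
                if all(s in fixed or (s in current and not is_data_dep(n, s))
                       for s in succs.get(n, ()))
            ]
            if not candidates:
                break
            best = candidates[-1]           # stand-in for the Step S4 heuristic
            current.append(best)            # Step S5, reduced to the count limit
            remaining.remove(best)
        if any(n not in fixed for n in current):
            units.append(list(current))     # Step S9: fix the unit (it may include
        fixed.update(current)               # the borrowed next-block instruction)
        current = []
    return units
```

  • The real rearranger 44 additionally applies the critical-path heuristic of Step S4 and the decoding, resource and register-port checks of Step S5, all of which are omitted from this sketch.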
  • The [0129] execution boundary adder 45 adds an execution boundary to each instruction set for which the placement has been fixed in Step S9.
  • OPERATING EXAMPLE
  • A characteristic operation of the program processor according to the second embodiment shown in FIG. 10 will be described using specific instructions. [0130]
  • FIG. 14 illustrates exemplary internal form code that has been passed from the compiler [0131] front end 10 to the parallelizer 40. The internal form code shown in FIG. 14 has been divided by the compiler front end 10 into two basic blocks A and B.
  • The [0132] execution order determiner 41 receives the internal form code shown in FIG. 14 and determines the order in which the basic blocks are to be passed to the expanded basic block parallelizer 42. In the illustrated embodiment, the basic blocks are subdivided into execution units in the ascending order, i.e., beginning with the end of the program. Thus, the basic blocks B and A are passed in this order to the expanded basic block parallelizer 42.
  • Subdividing Basic Block B into Execution Units [0133]
  • The [0134] dependence analyzer 43 analyzes the instructions belonging to the basic block B, thereby drawing up a dependence graph such as that shown in FIG. 15. In FIG. 15, the solid line arrow represents data dependence; more specifically, the instruction “add r2, r1” depends on the instruction “mov 1, r1”.
  • The [0135] instruction rearranger 44 receives the dependence graph shown in FIG. 15 and rearranges the instructions in accordance with the flowchart shown in FIG. 13. When Loop 1 is performed for the first time, only the instruction “add r2, r1” is selected in Step S3 as candidate to be placed, regarded as placeable in Step S5 and then fixed in Step S9. When Loop 1 is performed for the second time, only the instruction “mov 1, r1” is selected in Step S3 as candidate to be placed, regarded as placeable in Step S5 and then fixed in Step S9.
  • Every time the [0136] instruction rearranger 44 performs Loop 1, the execution boundary adder 45 adds an execution boundary to the instruction set. That is to say, each instruction set fixed is grouped as an execution unit. As a result, the execution units in the basic block B are arranged as shown in FIG. 16.
  • Subdividing Basic Block A into Execution Units [0137]
  • The [0138] dependence analyzer 43 analyzes the instructions belonging to the basic block A, thereby drawing up a dependence graph such as that shown in FIG. 17. In FIG. 17, each solid line arrow represents data dependence, while each broken line arrow represents dependence other than data dependence. Since the basic block B exists just after the basic block A, a dependence graph is drawn up using the instruction “mov 1, r1” belonging to the first execution unit of the basic block B as well. The instruction “mov 1, r1” was already fixed when the basic block B was subdivided into the execution units, and is identified by the solid circle in the dependence graph shown in FIG. 17.
  • The [0139] instruction rearranger 44 receives the dependence graph shown in FIG. 17 and rearranges the instructions in accordance with the flowchart shown in FIG. 13. First, in Step S1, all the instructions belonging to the first execution unit of the next basic block B are placed provisionally. In this example, only the instruction “mov 1, r1” is placed provisionally. Next, since there are nodes yet to be placed (i.e., the answer to the query in Step S2 is YES), Loop 1 should be performed.
  • When [0140] Loop 1 is performed for the first time, a set of candidate nodes to be placed is made of instructions that are placeable within the same execution unit as the instruction “mov 1, r1” (in Step S3). In this case, only the instruction “bt c0, Label”, on which no other instructions depend, is placeable within the same execution unit. Thus, the instruction “bt c0, Label” is regarded as placeable in Step S5 and placed provisionally. One more instruction is additionally placeable in the same execution unit. Accordingly, the answer to the query in Step S6 is YES, and a candidate node to be placed is added in Step S7. In this example, the instructions “mov r4, r5” and “mov 0, r1” are placeable within the same execution unit as the instructions “mov 1, r1” and “bt c0, Label”, and therefore added as candidate nodes to be placed.
  • The procedure returns to Step S[0141] 4 and the instruction “mov r4, r5” is selected as the optimum node, for example. In such a case, the instruction “mov r4, r5” is regarded as placeable in Step S5 and placed provisionally. Since all of the three instructions placeable within the single execution unit are now fixed, no other nodes are additionally placeable (i.e., the answer to the query in Step S6 is NO) and the placement of nodes is fixed in Step S9.
  • When [0142] Loop 1 is performed for the second time, a set of candidate nodes to be placed is made up of the remaining three instructions “mov 2, r4”, “mov 0, r1” and “cmpeq c0, 0, r0” in Step S3. As can be seen from FIG. 17, there is no dependence among these instructions. Thus, by repeatedly performing Loop 2 three times, all of these instructions are placed provisionally and then fixed in Step S9.
  • Every time the [0143] instruction rearranger 44 performs Loop 1, the execution boundary adder 45 adds an execution boundary to the instruction set. That is to say, each instruction set fixed is grouped as an execution unit. As a result, the execution units in the basic block A, including the instruction “mov 1, r1” belonging to the first execution unit of the basic block B, are arranged as shown in FIG. 18.
  • FIG. 19 illustrates portion of the [0144] object code 60 generated by the object code generator 30 from the internal form code shown in FIG. 14. In the prior art, an execution boundary is added to every basic block boundary. In contrast, an execution unit may be composed of instructions ranging across a basic block boundary according to this embodiment. Thus, the internal form code shown in FIG. 14 is executed in three cycles as shown in FIG. 19.
  • According to this embodiment, a particular basic block, along with an instruction belonging to the first execution unit of the next basic block, is subdivided into execution units, each of which is made up of parallelly executable instructions. Thus, a group of instructions, which belong to two different basic blocks across a basic block boundary, are more likely to be combined into a single execution unit. As a result, the number of cycles taken for the target machine to execute the program can be cut down and the program execution rate increases. [0145]
  • It should be noted that the basic blocks may be subdivided into execution units in the descending order by beginning with the top of the program. In such a case, a basic block located just before a basic block in question may be combinable with the latter basic block. In the foregoing embodiment, instructions are rearranged within the range covering a basic block in question and the first execution unit of the next basic block, and an instruction to be executed last within this range is placed first. Conversely, instructions may be rearranged within a range covering a basic block in question and the last execution unit of the previous basic block, and an instruction to be executed first within this range may be placed first. [0146]
  • MODIFIED EXAMPLE 1 OF EMBODIMENT 2
  • The [0147] execution order determiner 41 may provide a basic block belonging to the innermost loop to the expanded basic block parallelizer 42 preferentially by analyzing the control flow and loops. In such a case, after the basic block belonging to the innermost loop has been processed, a basic block belonging to the next innermost loop is provided. In this manner, the code of the innermost loop, which is executed most frequently, is optimized, thus increasing the program execution rate.
  • FIG. 20([0148] a) illustrates exemplary internal form code passed from the compiler front end 10 to the parallelizer 40. This internal form code consists of three basic blocks A, B and C, of which the basic block B is the innermost loop. FIG. 20(b) illustrates a result of processing performed on the internal form code shown in FIG. 20(a) according to the second embodiment. On the other hand, FIG. 20(c) illustrates a result of processing performed on the internal form code shown in FIG. 20(a) according to this modified example. As can be seen from FIGS. 20(b) and 20(c), the number of execution cycles of the innermost loop decreases by one by preferentially optimizing the basic block B as the innermost loop. In general, the innermost loop is expected to be repeatedly executed numerous times. Accordingly, the execution time of the entire program can be shortened according to this modified example.
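  • The ordering used in this modified example can be sketched as follows; the loop-depth values are assumed to come from a separate control-flow and loop analysis that is not shown here:

```python
def processing_order(basic_blocks, loop_depth):
    """Return the basic blocks ordered so that blocks of the innermost loop are
    handed to the expanded basic block parallelizer first.

    loop_depth[b] is the loop nesting depth of block b (0 = not inside a loop)."""
    return sorted(basic_blocks, key=lambda b: loop_depth[b], reverse=True)

# For the code of FIG. 20(a), where the basic block B is the innermost loop, this
# would yield B before A and C (the depth values below are illustrative).
print(processing_order(["A", "B", "C"], {"A": 0, "B": 1, "C": 0}))   # ['B', 'A', 'C']
```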
  • MODIFIED EXAMPLE 2 OF EMBODIMENT 2
  • In the foregoing embodiment, the expanded [0149] basic block parallelizer 42 subdivides a basic block, along with the first execution unit of the next basic block, into execution units. In addition, the parallelizer 42 may subdivide the same basic block into execution units independently of the other basic blocks, i.e., without using the first execution unit of the next basic block. Whichever of these two subdivisions results in the smaller number of execution units may then be adopted.
  • FIG. 20([0150] c) illustrates a result obtained by subdividing the internal form code shown in FIG. 20(a) without using the first execution unit of the next basic block. Comparing the results shown in FIGS. 20(b) and 20(c) shows that the number of execution cycles can be reduced when the basic block is processed without using the first execution unit of the next basic block. Thus, according to this modified example, the subdivision result shown in FIG. 20(c) is preferred.
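The following sketch illustrates the selection rule of this modified example, reusing the hypothetical parallelize_expanded_block() helper sketched above: the block is subdivided both with and without the first execution unit of the next basic block, and the result that lets the block in question occupy fewer execution units is adopted. The counting helper is a simplification, not the actual selection logic.

```python
def units_occupied_by(units, block):
    """Count the execution units containing at least one instruction of `block`."""
    block_ids = {id(instr) for instr in block}
    return sum(1 for unit in units if any(id(i) in block_ids for i in unit))


def choose_better_subdivision(block, next_block_first_unit):
    with_next = parallelize_expanded_block(block, next_block_first_unit)
    standalone = parallelize_expanded_block(block, [])  # ignore the next block
    # Adopt whichever subdivision lets `block` itself fit in fewer units.
    if units_occupied_by(standalone, block) < units_occupied_by(with_next, block):
        return standalone
    return with_next
```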
  • In the foregoing first and second embodiments, the inventive program processing is implemented as a program processor, but it may also be implemented as software performing a similar algorithm. Also, the same functions as those performed by the program processor according to the first and second embodiments are attainable by storing a similar program processing program on a computer-readable storage medium such as a floppy disk, hard disk, CD-ROM, MO, or DVD and making a computer execute the program. [0151]
  • Also, the object code generated in the first and second embodiments may be executed by the target machine even when the code is stored on a storage medium such as a floppy disk, hard disk, CD-ROM, MO, DVD, or semiconductor memory. [0152]
  • As is apparent from the foregoing description, if the execution units located just before and after a basic block boundary are found executable within the same cycle, then these execution units are combined into a single execution unit according to the present invention. Also, when a particular basic block is subdivided into execution units, an instruction that belongs to the execution unit closest to the particular basic block, among those of a basic block to be combined with the particular block, is also used. Thus, instructions that belong to two different basic blocks on either side of a basic block boundary are more likely to be combined into a single execution unit and executed in parallel. As a result, the program execution rate of the target machine increases. [0153]
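To make the combining condition concrete, here is a minimal sketch (again using the hypothetical Instruction class and issue width from the earlier sketch, not the claimed implementation) of the check applied at a basic block boundary: the last execution unit of one block and the first execution unit of the next are merged only when no dependence exists between their instructions and the merged unit still fits the issue width of the target machine.

```python
def try_combine_across_boundary(last_unit, first_unit, issue_width=ISSUE_WIDTH):
    """Merge the execution units on either side of a basic block boundary
    when they are executable in the same cycle; otherwise keep them apart."""
    fits = len(last_unit) + len(first_unit) <= issue_width
    independent = all(not a.depends_on(b) and not b.depends_on(a)
                      for a in last_unit for b in first_unit)
    if fits and independent:
        return [last_unit + first_unit]   # single combined execution unit
    return [last_unit, first_unit]        # execution boundary remains
```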

Claims (29)

1-22 (canceled).
23. A method for processing a program for parallel processing purposes, the method comprising the steps of:
a) subdividing each of a plurality of basic blocks, into which program code has been divided, into a multiplicity of execution units, each said execution unit being made up of parallelly-executable instructions; and
b) combining two of the execution units, which are located just before and after a basic block boundary, into a single execution unit if these execution units are executable in parallel.
24. The method of claim 23, wherein the step b) comprises analyzing dependence between instructions belonging to the two execution units located just before and after the basic block boundary, and
wherein it is determined based on the analyzed dependence between the instructions whether or not these execution units located just before and after the basic block boundary are combinable into the single execution unit.
25. The method of claim 23, wherein in the step a), an execution boundary code, indicating a boundary between an associated pair of the execution units, is added to each said basic block boundary, and
wherein in the step b), if the execution units located just before and after the basic block boundary are executable in parallel, the execution boundary code that has been added to the basic block boundary is removed.
26. The method of claim 23, wherein the step a) includes the steps of: analyzing dependence among the instructions included in each said basic block; and rearranging the instructions in each said basic block based on the analyzed dependence, and
wherein in the rearranging step, instructions, which impose less strict constraints on resources of a target machine for executing the program, are selected preferentially as respective instructions belonging to first and last execution units of each said basic block.
27. The method of claim 26, wherein when the instructions belonging to the first and last execution units of the basic block are selected in the rearranging step, an instruction that is executable only by itself during a clock cycle by the target machine is given a lower priority.
28. The method of claim 23, wherein the step a) includes the steps of:
analyzing dependence among the instructions included in each said basic block; and
rearranging the instructions in each said basic block based on the analyzed dependence, and
wherein when instructions belonging to first and last execution units of each said basic block are selected in the rearranging step, an instruction with a short word length is given a higher priority.
29. A method for processing a program for parallel processing purposes, the method comprising the step of a) subdividing each of a plurality of basic blocks, into which program code has been divided, into a multiplicity of execution units, each said execution unit being made up of parallelly-executable instructions,
wherein in the step a), a particular one of the basic blocks is subdivided into a set of execution units along with an instruction belonging to one of the other execution units that is closest to the particular basic block, the closest execution unit belonging to another set of execution units, into which another one of the basic blocks that is adjacent to, and combinable with, the particular basic block has already been subdivided.
30. The method of claim 29, wherein in the step a), one of the basic blocks that is located just before the particular basic block is used as the combinable basic block, and
wherein the particular basic block, along with the instruction belonging to the last execution unit of the combinable basic block, is subdivided into the set of execution units.
31. The method of claim 29, wherein in the step a), one of the basic blocks that is located just after the particular basic block is used as the combinable basic block, and
wherein the particular basic block, along with the instruction belonging to the first execution unit of the combinable basic block, is subdivided into the set of execution units.
32. The method of claim 29, wherein in the step a), each said basic block is subdivided into the execution units sequentially in a forward direction from the beginning toward the end of the program.
33. The method of claim 29, wherein in the step a), each said basic block is subdivided into the execution units sequentially in a backward direction from the end toward the beginning of the program.
34. The method of claim 29, wherein in the step a), one of the basic blocks that belongs to the innermost loop is subdivided into the execution units preferentially.
35. The method of claim 29, further comprising the step b) of subdividing each said basic block of the program code into another set of execution units independent of adjacent ones of the basic blocks,
wherein results of the steps a) and b) are compared to each other, and whichever of the steps a) and b) results in the smaller number of execution units is adopted.
36. A system for processing a program for parallel processing purposes, the system comprising:
an intra basic block parallelizer for subdividing each of a plurality of basic blocks, into which program code has been divided, into a multiplicity of execution units, each said execution unit being made up of parallelly-executable instructions; and
a basic block boundary parallelizer for combining two of the execution units, which are located just before and after a basic block boundary, into a single execution unit if these execution units are executable in parallel.
37. A program processor for executing compilation for parallel processing purposes, the processor comprising:
a compiler front end for translating source code into intermediate code by dividing the source code into a plurality of basic blocks;
a parallelizer for converting the intermediate code into code in a parallelly-executable form; and
an object code generator for translating the intermediate code in the form converted by the parallelizer into object code executable by a target machine,
wherein the parallelizer includes:
an intra basic block parallelizer for subdividing each of a plurality of basic blocks, into which the intermediate code has been divided, into a multiplicity of execution units, each said execution unit being made up of parallelly-executable instructions; and
a basic block boundary parallelizer for combining two of the execution units, which are located just before and after a basic block boundary, into a single execution unit if these execution units are executable in parallel.
38. A system for processing a program for parallel processing purposes, the system comprising:
an expanded basic block parallelizer for subdividing each of a plurality of basic blocks, into which program code has been divided, into a multiplicity of execution units, each said execution unit being made up of parallelly-executable instructions,
wherein the expanded basic block parallelizer subdivides a particular one of the basic blocks into a set of execution units along with an instruction belonging to one of the other execution units that is closest to the particular basic block, the closest execution unit belonging to another set of execution units, into which another one of the basic blocks that is adjacent to, and combinable with, the particular basic block has already been subdivided.
39. A program processor for executing compilation for parallel processing purposes, the processor comprising:
a compiler front end for translating source code into intermediate code by dividing the source code into a plurality of basic blocks;
a parallelizer for converting the intermediate code into code in a parallelly-executable form; and
an object code generator for translating the intermediate code in the form converted by the parallelizer into object code executable by a target machine,
wherein the parallelizer includes an expanded basic block parallelizer for subdividing each of a plurality of basic blocks, into which the intermediate code has been divided, into a multiplicity of execution units, each said execution unit being made up of parallelly-executable instructions,
wherein the expanded basic block parallelizer subdivides a particular one of the basic blocks into a set of execution units along with an instruction belonging to one of the other execution units that is closest to the particular basic block, the closest execution unit belonging to another set of execution units, into which another one of the basic blocks that is adjacent to, and combinable with, the particular basic block has already been subdivided.
40. A storage medium having stored thereon a program for a program processing procedure executed by a computer for parallel processing purposes, the program processing procedure comprising the steps of:
a) subdividing each of a plurality of basic blocks, into which program code has been divided, into a multiplicity of execution units, each said execution unit being made up of parallelly-executable instructions; and
b) combining two of the execution units, which are located just before and after a basic block boundary, into a single execution unit if these execution units are executable in parallel.
41. A storage medium having stored thereon a program for a program processing procedure executed by a computer through compilation for parallel processing purposes, the program processing procedure comprising the steps of:
a) translating source code into intermediate code by dividing the source code into a plurality of basic blocks;
b) subdividing each said basic block of the intermediate code into a multiplicity of execution units, each said execution unit being made up of parallelly-executable instructions;
c) combining two of the execution units, which are located just before and after a basic block boundary, into a single execution unit if these execution units are executable in parallel; and
d) translating the intermediate code in the form converted in the steps b) and c) into object code executable by a target machine.
42. A storage medium having stored thereon a program getting a program processing procedure executed by a computer for parallel processing purposes, the program processing procedure comprising the step of
subdividing each of a plurality of basic blocks, into which program code has been divided, into a multiplicity of execution units, each said execution unit being made up of parallelly-executable instructions,
wherein a particular one of the basic blocks is subdivided into a set of execution units along with an instruction belonging to one of the other execution units that is closest to the particular basic block, the closest execution unit belonging to another set of execution units, into which another one of the basic blocks that is adjacent to, and combinable with, the particular basic block has already been subdivided.
43. A storage medium having stored thereon a program getting a program processing procedure executed by a computer through compilation for parallel processing purposes, the program processing procedure comprising the steps of:
a) translating source code into intermediate code by dividing the source code into a plurality of basic blocks;
b) subdividing each said basic block of the intermediate code into a multiplicity of execution units, each said execution unit being made up of parallelly-executable instructions; and
c) translating the intermediate code in the form processed in the step b) into object code executable by a target machine,
wherein a particular one of the basic blocks is subdivided into a set of execution units along with an instruction belonging to one of the other execution units that is closest to the particular basic block, the closest execution unit belonging to another set of execution units, into which another one of the basic blocks that is adjacent to, and combinable with, the particular basic block has already been subdivided.
44. A storage medium having stored thereon a set of instructions to be executed in parallel,
wherein the instruction set is grouped into a plurality of execution units, each said execution unit being made up of parallelly-executable instructions, and
wherein at least one of the execution units is located across a boundary between an associated pair of the basic blocks and is made up of instructions executable in parallel.
45. A storage medium having stored thereon a set of instructions parallelized by a program processor,
wherein the instruction set is grouped into a plurality of execution units, each said execution unit being made up of instructions executed in parallel, and
wherein at least one of the execution units includes a last instruction of a first basic block and another instruction of a second basic block different from the first basic block.
46. A processor for executing an instruction set stored in a storage medium, said storage medium having stored thereon a set of instructions parallelized by a program processor,
wherein the instruction set is grouped into a plurality of execution units, each said execution unit being made up of instructions executed in parallel, and
wherein at least one of the execution units includes a last instruction of a first basic block and another instruction of a second basic block different from the first basic block.
47. The processor of claim 46, wherein a number of units in which instructions are executed in parallel is variable.
48. The processor of claim 47, wherein an execution boundary code indicating a boundary between the execution units is added to the instruction set by the program processor.
49. The processor of claim 48, wherein said processor implements a variable-length instruction system.
50. A method of processing a program for parallel processing, the method comprising the steps of:
(a) dividing the program into a plurality of basic blocks, each of the plurality of basic blocks being an instruction set without any branch or interrupt in the middle thereof;
(b) subdividing each of the plurality of basic blocks divided by step (a) into a multiplicity of execution units, each of the multiplicity of execution units being made up of parallelly-executable instructions;
(c) determining, after step (b), whether or not two of the execution units, which are located just before and after a basic block boundary, are executable in parallel; and
(d) combining the two execution units into a single execution unit if the result of step (c) indicates that the two execution units are executable in parallel.
US10/873,252 1999-01-12 2004-06-23 Method and system for processing program for parallel processing purposes, storage medium having stored thereon program getting program processing executed for parallel processing purposes, and storage medium having stored thereon instruction set to be executed in parallel Abandoned US20040230770A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/873,252 US20040230770A1 (en) 1999-01-12 2004-06-23 Method and system for processing program for parallel processing purposes, storage medium having stored thereon program getting program processing executed for parallel processing purposes, and storage medium having stored thereon instruction set to be executed in parallel

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP11004843A JP2000207223A (en) 1999-01-12 1999-01-12 Method and device for program processing for parallel processings, recording medium with program performing program processing for parallel processing recorded therein, and recording medium with instruction sequence for parallel processing recorded therein
JP11-4843 1999-01-12
US09/478,989 US6760906B1 (en) 1999-01-12 2000-01-07 Method and system for processing program for parallel processing purposes, storage medium having stored thereon program getting program processing executed for parallel processing purposes, and storage medium having stored thereon instruction set to be executed in parallel
US10/873,252 US20040230770A1 (en) 1999-01-12 2004-06-23 Method and system for processing program for parallel processing purposes, storage medium having stored thereon program getting program processing executed for parallel processing purposes, and storage medium having stored thereon instruction set to be executed in parallel

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/478,989 Continuation US6760906B1 (en) 1999-01-12 2000-01-07 Method and system for processing program for parallel processing purposes, storage medium having stored thereon program getting program processing executed for parallel processing purposes, and storage medium having stored thereon instruction set to be executed in parallel

Publications (1)

Publication Number Publication Date
US20040230770A1 true US20040230770A1 (en) 2004-11-18

Family

ID=11594977

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/478,989 Expired - Lifetime US6760906B1 (en) 1999-01-12 2000-01-07 Method and system for processing program for parallel processing purposes, storage medium having stored thereon program getting program processing executed for parallel processing purposes, and storage medium having stored thereon instruction set to be executed in parallel
US10/873,252 Abandoned US20040230770A1 (en) 1999-01-12 2004-06-23 Method and system for processing program for parallel processing purposes, storage medium having stored thereon program getting program processing executed for parallel processing purposes, and storage medium having stored thereon instruction set to be executed in parallel

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/478,989 Expired - Lifetime US6760906B1 (en) 1999-01-12 2000-01-07 Method and system for processing program for parallel processing purposes, storage medium having stored thereon program getting program processing executed for parallel processing purposes, and storage medium having stored thereon instruction set to be executed in parallel

Country Status (2)

Country Link
US (2) US6760906B1 (en)
JP (1) JP2000207223A (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6986128B2 (en) * 2000-01-07 2006-01-10 Sony Computer Entertainment Inc. Multiple stage program recompiler and method
TW525091B (en) * 2000-10-05 2003-03-21 Koninkl Philips Electronics Nv Retargetable compiling system and method
JP4542722B2 (en) * 2001-04-25 2010-09-15 富士通株式会社 Instruction processing method
US7103881B2 (en) * 2002-12-10 2006-09-05 Intel Corporation Virtual machine to provide compiled code to processing elements embodied on a processor device
US7254809B2 (en) * 2003-07-30 2007-08-07 International Business Machines Corporation Compilation of unified parallel C-language programs
US20050108695A1 (en) * 2003-11-14 2005-05-19 Long Li Apparatus and method for an automatic thread-partition compiler
JP3901181B2 (en) * 2004-06-30 2007-04-04 日本電気株式会社 Program parallelization apparatus and method, and program
JP2006338616A (en) * 2005-06-06 2006-12-14 Matsushita Electric Ind Co Ltd Compiler device
US8904151B2 (en) * 2006-05-02 2014-12-02 International Business Machines Corporation Method and apparatus for the dynamic identification and merging of instructions for execution on a wide datapath
WO2008072334A1 (en) * 2006-12-14 2008-06-19 Fujitsu Limited Compile method and compiler
US20080225950A1 (en) * 2007-03-13 2008-09-18 Sony Corporation Scalable architecture for video codecs
JP5278336B2 (en) * 2008-02-15 2013-09-04 日本電気株式会社 Program parallelization apparatus, program parallelization method, and program parallelization program
US9087195B2 (en) * 2009-07-10 2015-07-21 Kaspersky Lab Zao Systems and methods for detecting obfuscated malware
US8646050B2 (en) * 2011-01-18 2014-02-04 Apple Inc. System and method for supporting JIT in a secure system with randomly allocated memory ranges
US11188656B2 (en) * 2018-07-27 2021-11-30 Silicon Laboratories Inc. Secure software system for microcontroller or the like and method therefor
US20240028330A1 (en) * 2022-07-20 2024-01-25 Vmware, Inc. Methods and subsystems that manage code changes submitted for processing by an automated application-development-and-release-management system

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5535393A (en) * 1991-09-20 1996-07-09 Reeve; Christopher L. System for parallel processing that compiles a filed sequence of instructions within an iteration space
US5680637A (en) * 1992-01-06 1997-10-21 Hitachi, Ltd. Computer having a parallel operating capability
US5721854A (en) * 1993-11-02 1998-02-24 International Business Machines Corporation Method and apparatus for dynamic conversion of computer instructions
US5557761A (en) * 1994-01-25 1996-09-17 Silicon Graphics, Inc. System and method of generating object code using aggregate instruction movement
US6081880A (en) * 1995-03-09 2000-06-27 Lsi Logic Corporation Processor having a scalable, uni/multi-dimensional, and virtually/physically addressed operand register file
US6105124A (en) * 1996-01-26 2000-08-15 Intel Corporation Method and apparatus for merging binary translated basic blocks of instructions
US5842017A (en) * 1996-01-29 1998-11-24 Digital Equipment Corporation Method and apparatus for forming a translation unit
US6035387A (en) * 1997-03-18 2000-03-07 Industrial Technology Research Institute System for packing variable length instructions into fixed length blocks with indications of instruction beginning, ending, and offset within block
US6289507B1 (en) * 1997-09-30 2001-09-11 Matsushita Electric Industrial Co., Ltd. Optimization apparatus and computer-readable storage medium storing optimization program
US6988183B1 (en) * 1998-06-26 2006-01-17 Derek Chi-Lan Wong Methods for increasing instruction-level parallelism in microprocessors and digital system
US6611956B1 (en) * 1998-10-22 2003-08-26 Matsushita Electric Industrial Co., Ltd. Instruction string optimization with estimation of basic block dependence relations where the first step is to remove self-dependent branching
US6378066B1 (en) * 1999-02-04 2002-04-23 Sun Microsystems, Inc. Method, apparatus, and article of manufacture for developing and executing data flow programs, and optimizing user input specifications
US6389587B1 (en) * 1999-02-04 2002-05-14 Sun Microsystems, Inc. User interface for developing and executing data flow programs and methods, apparatus, and articles of manufacture for optimizing the execution of data flow programs
US6449711B1 (en) * 1999-02-04 2002-09-10 Sun Microsystems, Inc. Method, apparatus, and article of manufacture for developing and executing data flow programs
US7058945B2 (en) * 2000-11-28 2006-06-06 Fujitsu Limited Information processing method and recording medium therefor capable of enhancing the executing speed of a parallel processing computing device
US6978451B2 (en) * 2001-05-31 2005-12-20 Esmertec Ag Method for fast compilation of preverified JAVA bytecode to high quality native machine code

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100388202C (en) * 2004-12-02 2008-05-14 国际商业机器公司 Computer program functional partitioning system and method for heterogeneous multi-processing systems
CN100388201C (en) * 2004-12-02 2008-05-14 国际商业机器公司 Program code size partitioning system and method for multiple memory multi-processing systems
US7765536B2 (en) 2005-12-21 2010-07-27 Management Services Group, Inc. System and method for the distribution of a program among cooperating processors
US8387034B2 (en) 2005-12-21 2013-02-26 Management Services Group, Inc. System and method for the distribution of a program among cooperating processing elements
US8387033B2 (en) 2005-12-21 2013-02-26 Management Services Group, Inc. System and method for the distribution of a program among cooperating processing elements
US20070169046A1 (en) * 2005-12-21 2007-07-19 Management Services Group, Inc., D/B/A Global Technical Systems System and method for the distribution of a program among cooperating processors
US20100251223A1 (en) * 2005-12-21 2010-09-30 Management Services Group, Inc. D/B/A Global Technical Systems System and method for the distribution of a program among cooperating processing elements
US20090271774A1 (en) * 2005-12-21 2009-10-29 Management Services Group, Inc. D/B/A Global Technical Systems System and method for the distribution of a program among cooperating processing elements
US8307337B2 (en) 2006-12-01 2012-11-06 Murex S.A.S. Parallelization and instrumentation in a producer graph oriented programming framework
US20080134138A1 (en) * 2006-12-01 2008-06-05 Fady Chamieh Producer graph oriented programming and execution
US10481877B2 (en) 2006-12-01 2019-11-19 Murex S.A.S. Producer graph oriented programming and execution
US7865872B2 (en) 2006-12-01 2011-01-04 Murex S.A.S. Producer graph oriented programming framework with undo, redo, and abort execution support
US10083013B2 (en) 2006-12-01 2018-09-25 Murex S.A.S. Producer graph oriented programming and execution
US8191052B2 (en) 2006-12-01 2012-05-29 Murex S.A.S. Producer graph oriented programming and execution
US20080134152A1 (en) * 2006-12-01 2008-06-05 Elias Edde Producer graph oriented programming framework with scenario support
US8332827B2 (en) 2006-12-01 2012-12-11 Murex S.A.S. Produce graph oriented programming framework with scenario support
US20080134161A1 (en) * 2006-12-01 2008-06-05 Fady Chamieh Producer graph oriented programming framework with undo, redo, and abort execution support
US9424050B2 (en) 2006-12-01 2016-08-23 Murex S.A.S. Parallelization and instrumentation in a producer graph oriented programming framework
US9201766B2 (en) 2006-12-01 2015-12-01 Murex S.A.S. Producer graph oriented programming framework with scenario support
US8645929B2 (en) 2006-12-01 2014-02-04 Murex S.A.S. Producer graph oriented programming and execution
US8607207B2 (en) 2006-12-01 2013-12-10 Murex S.A.S. Graph oriented programming and execution
US8468504B2 (en) * 2007-12-28 2013-06-18 Streaming Networks (Pvt.) Ltd. Method and apparatus for interactive scheduling of VLIW assembly code
US20090172584A1 (en) * 2007-12-28 2009-07-02 Ahmad Hassan Method and apparatus for interactive scheduling of vliw assembly code
US8595712B2 (en) 2008-10-24 2013-11-26 International Business Machines Corporation Source code processing method, system and program
US8407679B2 (en) * 2008-10-24 2013-03-26 International Business Machines Corporation Source code processing method, system and program
US20100106949A1 (en) * 2008-10-24 2010-04-29 International Business Machines Corporation Source code processing method, system and program
US8661424B2 (en) * 2010-09-02 2014-02-25 Honeywell International Inc. Auto-generation of concurrent code for multi-core applications
US20120060145A1 (en) * 2010-09-02 2012-03-08 Honeywell International Inc. Auto-generation of concurrent code for multi-core applications

Also Published As

Publication number Publication date
US6760906B1 (en) 2004-07-06
JP2000207223A (en) 2000-07-28

Similar Documents

Publication Publication Date Title
US6760906B1 (en) Method and system for processing program for parallel processing purposes, storage medium having stored thereon program getting program processing executed for parallel processing purposes, and storage medium having stored thereon instruction set to be executed in parallel
US7856629B2 (en) Compiler apparatus
JP4979875B2 (en) Retargetable compilation system and method
JP4042604B2 (en) Program parallelization apparatus, program parallelization method, and program parallelization program
JP3896087B2 (en) Compiler device and compiling method
JP3311462B2 (en) Compile processing unit
JP3797471B2 (en) Method and apparatus for identifying divisible packets in a multi-threaded VLIW processor
US6817013B2 (en) Program optimization method, and compiler using the same
US6113650A (en) Compiler for optimization in generating instruction sequence and compiling method
US8930922B2 (en) Software-to-hardware compiler with symbol set inference analysis
US20020100032A1 (en) Software-to-hardware compiler
US20060064682A1 (en) Compiler, compilation method, and compilation program
US5901318A (en) Method and system for optimizing code
JPH1173325A (en) Program converting device and recording medium
JP2005534114A (en) Inter-source split compilation
US7376818B2 (en) Program translator and processor
Gyllenhaal et al. Optimization of machine descriptions for efficient use
JP2000132404A (en) Instruction sequence optimizing device
US5778232A (en) Automatic compiler restructuring of COBOL programs into a proc per paragraph model
JP3311381B2 (en) Instruction scheduling method in compiler
JPH04293150A (en) Compiling method
JP2008523523A (en) Compiling method, compiling device and computer system for loop in program
JP3553845B2 (en) Processor, compiler, compiling method, and recording medium
JP3276479B2 (en) Compilation method
US20220413818A1 (en) Method and apparatus for functional unit balancing at program compile time

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION