US20010011327A1 - Shared instruction cache for multiple processors - Google Patents


Publication number
US20010011327A1
Authority
US
United States
Prior art keywords
processor
base
register
instruction
memory address
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US09/818,295
Other versions
US6378041B2 (en
Inventor
Marc Tremblay
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US09/818,295 (granted as US6378041B2)
Publication of US20010011327A1
Priority to US10/100,263 (US6523090B2)
Application granted
Publication of US6378041B2
Anticipated expiration
Legal status: Expired - Lifetime


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/30098: Register arrangements
    • G06F 9/3012: Organisation of register space, e.g. banked or distributed register file
    • G06F 9/30123: Organisation of register space according to context, e.g. thread buffers
    • G06F 9/3013: Organisation of register space according to data content, e.g. floating-point registers, address registers
    • G06F 9/342: Extension of operand address space
    • G06F 9/3824: Operand accessing
    • G06F 9/3851: Instruction issuing from multiple instruction streams, e.g. multistreaming
    • G06F 12/0875: Caches with dedicated cache, e.g. instruction or stack
    • G06F 15/7846: On-chip cache and off-chip main memory

Definitions

  • system 200 efficiently uses instruction cache storage space by avoiding duplicating storage of instruction(s) in instruction cache 212 by P 1 processor 208 and P 2 processor 210 .
  • Shared instruction cache 212 advantageously eliminates cache coherency protocols, reduces total area of cache memory, and allows for the automatic sharing of code. For example, a particular instruction (e.g., opcode and register address specifier(s) or memory address specifier(s)) that is cached by both P 1 processor 208 and P 2 processor 210 uses one storage location of instruction cache 212 (e.g., a cache line in instruction cache 212 ).
  • system 200 advantageously allows for the sharing of cached instructions in instruction cache 212 for P 1 processor 208 and P 2 processor 210 . Additionally, because in many instances an operating system maps different library calls to different pages in memory, the use of a shared instruction cache which allows use of the same library call can save substantially half the cache space.
  • a challenge with sharing instructions among multiple processors is that the instruction may specify particular registers or particular memory address locations, while the multiple processors may use different registers or different memory address locations. For example, it may be advantageous for a particular thread of execution of a multi-threaded computer program to be executed on a first processor, and the same thread to be executed on a second processor, and then the register files of the threads of execution on the first processor and the second processor to subsequently be combined. However, if the thread of execution on the first processor and the thread of execution on the second processor specify the same registers, then these register files overlap and, thus, generally cannot be combined.
  • FIG. 3 is a block diagram of shared instruction cache 212 for P 1 processor 208 and P 2 processor 210 shown in greater detail in accordance with one embodiment of the present invention.
  • a system 300 includes P 1 processor 208 , which includes a register index base 302 , a memory address base 304 , and registers (register file) 306 , and P 2 processor 210 , which includes a register index base 308 , a memory address base 310 , and registers (register file) 312 .
  • Registers 306 and registers 312 each include 256 registers.
  • System 300 allows for the same instruction(s) to be stored in instruction cache 212 for P 1 processor 208 and P 2 processor 210 , but also allows for the same instruction(s) stored in instruction cache 212 to access different registers or a different segment of the register files, registers 306 and registers 312 , of P 1 processor 208 and P 2 processor 210 when executed on P 1 processor 208 and P 2 processor 210 , respectively.
  • System 300 also allows for the same instruction(s) stored in instruction cache 212 and executed on P 1 processor 208 and P 2 processor 210 to access different segments of main memory 202 .
  • system 300 includes register index base 302 and register index base 308 that are used to offset register address specifiers of instructions executed on P 1 processor 208 and P 2 processor 210 , respectively.
  • System 300 includes memory address base 304 and memory address base 310 that are used to offset memory address specifiers of instructions executed on P 1 processor 208 and P 2 processor 210 , respectively.
  • register index base 302 of P 1 processor 208 can be set to 0, which results in a one-to-one correlation between the registers specified in an instruction and the registers used by P 1 processor 208 during execution of the instruction.
  • register index base 308 of P 2 processor 210 can be set to 128.
  • register address specifiers of the instruction executed on P 2 processor 210 are offset using the value of register index base 308 and, in particular, are offset by either adding 128 to the register address specifier value or concatenating 128 to the register address specifier value of the instruction.
  • offsetting register address specifiers of instructions executed on P 2 processor 210 by 128 can be implemented by setting the upper bit of an 8-bit address for the register address specifier to 1.
  • register address specifiers would not need to set the upper bit of an 8-bit register address, because an instruction only needs to specify (address) registers in the range of 0 to 127.
  • adding 128 to a register address specifier in this case is nearly free from a performance standpoint, because the register index base stored in register index base 308 can simply be concatenated with the register address specifier.
  • system 300 effectively segments the register files of P 1 processor 208 and P 2 processor 210 into two segments by setting register index base 302 to 0 (i.e., registers 0 to 127) and register index base 308 to 128 (i.e., registers 128 to 255).
  • register index base 302 and register index base 308 can be set to specify eight different segments by using the upper three bits of an 8-bit address for a register address specifier to define the eight different segments.
  • the upper three bits can be set to 000 for a segment including registers 0 to 31, 001 for a segment including registers 32 to 63, 010 for a segment including registers 64 to 95, . . . , and 111 for a segment including registers 224 to 255.
  • eight segments of registers or octants can be defined in this example.
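  • The register index base offsetting described above can be sketched in Python. This is an illustrative simulation of the scheme, not the patented circuit; the function name and example values are assumptions:

```python
def offset_register(specifier: int, index_base: int) -> int:
    """Form the physical register number by concatenating the register
    index base with the 8-bit register address specifier.  Because the
    base occupies only upper bits that the specifier leaves at zero,
    the concatenation is a simple bitwise OR."""
    return index_base | specifier

# Two-segment case: P1 uses base 0, P2 uses base 128 (upper bit set).
assert offset_register(10, 0) == 10      # P1: the instruction uses register 10
assert offset_register(10, 128) == 138   # same instruction on P2: register 138

# Eight-octant case: the upper three bits select one of eight
# 32-register segments (000 -> 0..31, 001 -> 32..63, ..., 111 -> 224..255).
for octant in range(8):
    base = octant << 5                   # 32 registers per octant
    assert offset_register(31, base) == octant * 32 + 31
```

The OR is equivalent to an add here precisely because the segments do not overlap, which is why the patent describes the offset as "nearly free" in hardware.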
  • System 300 also allows for P 1 processor 208 and P 2 processor 210 executing the same instruction(s) cached in instruction cache 212 to access different locations or segments in main memory 202 .
  • system 300 provides an efficient hardware implemented approach, unlike a software implemented approach, which requires significant additional set-up code.
  • memory address base 304 of P 1 processor 208 can be set to 0.
  • offsetting memory address specifiers of an instruction executed on P 1 processor 208 by 0 results in a one-to-one correlation between the memory locations accessed in main memory 202 and the memory address specifiers of the executed instruction.
  • memory address base 310 of P 2 processor 210 can be set to 10,000 (base 10, that is, a decimal value).
  • memory address specifiers of the instruction executed on P 2 processor 210 are offset by the value 10,000.
  • main memory 202 is segmented between P 1 processor 208 and P 2 processor 210 (assuming no memory address specifiers exceed 9,999).
  • the offset operation for memory address specifiers can be implemented as an add or a concatenation operation as similarly described above with respect to the offset operation for register address specifiers.
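  • A minimal sketch of the memory address offsetting, under the assumption (from the example above) that P 1's base is 0 and P 2's base is 10,000; the function name is illustrative:

```python
def offset_memory(specifier: int, memory_address_base: int) -> int:
    """Offset a memory address specifier by the per-processor base."""
    return memory_address_base + specifier

assert offset_memory(500, 0) == 500        # P1: one-to-one mapping
assert offset_memory(500, 10000) == 10500  # P2: same instruction, own segment

# When the base is a power of two larger than any specifier, the add
# degenerates to a concatenation (bitwise OR of disjoint bit fields).
base = 1 << 14                             # 16384
assert base + 500 == base | 500
```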
  • memory address base 304 and memory address base 310 each include a load address base for load operations and a store address base for store operations.
  • P 1 processor 208 and P 2 processor 210 may share input data, which is stored in main memory 202 or a data cache (not shown).
  • load address bases of P 1 processor 208 and P 2 processor 210 can be set to the same value.
  • P 1 processor 208 and P 2 processor 210 can process the same image stored as data in data memory (e.g., main memory 202 or a data cache).
  • P 1 processor 208 and P 2 processor 210 can process the stored data image differently.
  • P 1 processor 208 can change the colors of the data image from green to red, while P 2 processor 210 changes the colors of the data image from green to blue. It may be desirable to then compare the differently processed images.
  • the store address bases of P 1 processor 208 and P 2 processor 210 can be set to different values such that the differently processed data images are stored in different storage locations in data memory.
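  • The image-processing example above can be simulated as follows. The processor model, memory model, and base values are assumptions made for illustration, not the patented hardware:

```python
class Processor:
    """Toy model of a processor with separate load and store address bases."""

    def __init__(self, load_base: int, store_base: int):
        self.load_base = load_base
        self.store_base = store_base

    def load(self, memory: dict, specifier: int):
        return memory[self.load_base + specifier]

    def store(self, memory: dict, specifier: int, value) -> None:
        memory[self.store_base + specifier] = value


memory = {0: "green"}                          # shared input image pixel
p1 = Processor(load_base=0, store_base=10000)  # same load base: shared input
p2 = Processor(load_base=0, store_base=20000)  # different store base

# Both processors read the same stored data image...
assert p1.load(memory, 0) == p2.load(memory, 0)

# ...but write their differently processed results to disjoint regions,
# so the two results can later be compared.
p1.store(memory, 0, "red")
p2.store(memory, 0, "blue")
assert memory[10000] == "red" and memory[20000] == "blue"
```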
  • instructions for library calls can be shared in instruction cache 212 and executed on P 1 processor 208 and P 2 processor 210 using different segments of the register files of P 1 processor 208 and P 2 processor 210 , respectively.
  • only one library call routine needs to be written, and system 300 advantageously allows for the segmentation of the register files, such as registers 0 to 127 for P 1 processor 208 and registers 128 to 255 for P 2 processor 210 . Otherwise, a second library routine call would need to be written to achieve the same functionality: a first library routine call specifying registers 0 to 127, and a second library routine call specifying registers 128 to 255.
  • the code can be written with support for sharing, which includes indirect addressing, jump tables, and other methods; these increase the footprint of the routine, which slows down execution.
  • such an implementation would require twice the space in main memory 202 for storing the second library routine call and would effectively reduce the instruction cache 212 hit rate and use instruction cache 212 less efficiently.
  • because the first and second library routine calls in such an implementation would reside in two different storage locations in main memory 202 , there is a possibility that the stored routine calls can be stored at locations that are a multiple of 16 kilobytes apart. If instruction cache 212 is a 16-kilobyte instruction cache, then these library routine calls generally could not both simultaneously reside in instruction cache 212 and, thus, would require main memory access, which is expensive from a performance standpoint.
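  • The aliasing problem with two copies of a routine can be seen by computing cache set indices. The geometry is taken from the preferred embodiment above (16-kilobyte, two-way set associative, 32-byte lines); the addresses are arbitrary assumptions:

```python
# Geometry from the preferred embodiment.
CACHE_BYTES, LINE_BYTES, WAYS = 16 * 1024, 32, 2
SETS = CACHE_BYTES // (LINE_BYTES * WAYS)     # 256 sets

def set_index(address: int) -> int:
    """Set an address maps to: drop the line offset, keep the index bits."""
    return (address // LINE_BYTES) % SETS

# Two copies of the same library routine stored a multiple of
# 16 kilobytes apart land in the same cache sets and contend for them;
# a single shared copy avoids this entirely.
copy_a = 0x40000
copy_b = copy_a + 16 * 1024
assert set_index(copy_a) == set_index(copy_b)
```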
  • System 300 is flexible. If segmentation of the register files or main memory is not desired, then the register address bases and the memory address bases of P 1 processor 208 and P 2 processor 210 can simply be set to the same values (e.g., 0).
  • a computer program can include multiple threads of execution (i.e., a multi-threaded computer program).
  • a compiler that compiles the multi-threaded computer program can generate “set base register” instructions for setting the register index bases (e.g., loading a value into the register index bases) and instructions for setting the memory address bases.
  • the setting of the register index base registers and the memory address base registers can be performed at the beginning of each thread of the compiled multi-threaded computer program.
  • the base register values are set accordingly.
  • the threads of the multi-threaded computer program can be allocated different segments of the register file or different segments of the main memory or both.
  • thread 1 of a multi-threaded computer program can operate on registers 0 to 31
  • thread 2 can operate on registers 32 to 63
  • . . . , and thread 8 can operate on registers 224 to 255.
  • different threads can be allocated different segments of main memory so that they do not overwrite the same main memory storage locations.
  • system 300 allows for this implementation without requiring significant additional set-up code.
  • an operating system can send thread 1 to P 1 processor 208 , send thread 2 to P 2 processor 210 , and send thread 3 to the next available processor. But by compiling thread 3 such that it has a different register index base and a different memory address base, thread 3 , regardless of which processor it ends up executing on, can be using a different segment of the register file of the processor and a different segment of main memory 202 .
  • the instructions in instruction cache 212 , which is shared between P 1 processor 208 and P 2 processor 210 , are not modified; in particular, the register address specifiers and the memory address specifiers of the instructions remain the same values.
  • pseudo code of a compiled multi-threaded computer program is listed below.
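  • The compiled listing did not survive extraction here; the following is a hypothetical reconstruction of the per-thread prologue that such a compiler might emit, simulated in Python. The instruction names (SET_RIB, SET_MAB) and the segment sizes are assumptions consistent with the thread allocation described above:

```python
def thread_prologue(thread_id: int, regs_per_thread: int = 32,
                    mem_per_thread: int = 10000):
    """Return the (register index base, memory address base) that the
    compiler-emitted 'set base register' instructions would load at the
    start of thread number thread_id (threads numbered from 0)."""
    register_index_base = thread_id * regs_per_thread   # e.g. SET_RIB rib
    memory_address_base = thread_id * mem_per_thread    # e.g. SET_MAB mab
    return register_index_base, memory_address_base

assert thread_prologue(0) == (0, 0)        # thread 1: registers 0 to 31
assert thread_prologue(1) == (32, 10000)   # thread 2: registers 32 to 63
assert thread_prologue(7)[0] == 224        # thread 8: registers 224 to 255
```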
  • Block 404 includes the MUL ADD operation, which comprises an 8-bit opcode, an 8-bit Rd that is set to the decimal value of 40, an 8-bit RS 1 that is set to the decimal value of 10, an 8-bit RS 2 that is set to the decimal value of 20, and an 8-bit RS 3 that is set to the decimal value of 30.
  • the MUL ADD operation multiplies RS 1 and RS 2 , then adds RS 3 to the product, and stores the result in Rd .
  • the binary equivalents to the decimal values stored in the register address specifiers of block 404 are shown below each sub-block.
  • the register index base stored in register index base register 302 of P 1 processor 208 is concatenated to the register address specifiers of block 404 .
  • the memory address base, which is stored in memory address base register 304 , is then concatenated with the binary value equal to 78 to provide a new memory address pointer 506 that points to a storage location of main memory 202 . If the memory address base is set to 1024 in decimal (base 10) or 10000000000 in binary, then pointer 506 points to memory address location 1102 in decimal or 10001001110 in binary in main memory 202 . This concatenation operation can be implemented without requiring significant extra gates in the critical path. If a first thread, which sets register address bases and memory address bases to different values, desires to know the results of the execution of a second thread, then the first thread can set the register address bases and the memory address bases to the same values as the second thread.
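  • The worked example with a memory address base of 1024 and a computed value of 78 can be checked directly; because 78 fits entirely in the low bits that the base leaves at zero, concatenation (bitwise OR) and addition agree:

```python
base, specifier = 1024, 78
# 1024 is binary 10000000000; 78 is binary 1001110; the fields are disjoint.
assert base & specifier == 0
assert base | specifier == base + specifier == 1102
assert bin(1024) == "0b10000000000"
assert bin(1102) == "0b10001001110"
```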

Abstract

The present invention provides a shared instruction cache for multiple processors. In one embodiment, an apparatus for a microprocessor includes a shared instruction cache for a first processor and a second processor, and a first register index base for the first processor and a second register index base for the second processor. The apparatus also includes a first memory address base for the first processor and a second memory address base for the second processor. This embodiment allows for segmentation of register files and main memory based on which processor is executing a particular instruction (e.g., an instruction that involves a register access and a memory access).

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application relates to application Ser. No. ______ (attorney docket number SP-2600 US), filed on even date herewith, entitled “A Multiple-Thread Processor For Threaded Software Applications” and naming Marc Tremblay and William Joy as inventors, the application being incorporated herein by reference in its entirety. [0001]
  • This application relates to application Ser. No. ______ (attorney docket number SP-2618 US), filed on even date herewith, entitled "Dual In-line Buffers for an Instruction Fetch Unit", and naming Marc Tremblay and Graham R. Murphy as inventors, the application being incorporated herein by reference in its entirety. [0002]
  • This application relates to application Ser. No. ______ (attorney docket number SP-2619 US), filed on even date herewith, entitled "An Instruction Fetch Unit Aligner", and naming Marc Tremblay and Graham R. Murphy as inventors, the application being incorporated herein by reference in its entirety. [0003]
  • This application relates to application Ser. No. ______ (attorney docket number SP-2620 US), filed on even date herewith, entitled "An Efficient Method For Fetching Instructions Having A Non-Power Of Two Size", and naming Marc Tremblay and Graham R. Murphy as inventors, the application being incorporated herein by reference in its entirety. [0004]
  • FIELD OF THE INVENTION
  • The present invention relates generally to microprocessors and, more particularly, to a shared instruction cache for multiple processors. [0005]
  • BACKGROUND OF THE INVENTION
  • A microprocessor typically includes a cache memory for storing copies of recently accessed information. The cache memory is generally smaller but faster than main memory (e.g., disk). In particular, a microprocessor typically includes an instruction cache for storing recently accessed (i.e., recently used) instructions. The instruction cache is generally located on the same integrated circuit chip (or die) as the core logic of the microprocessor. [0006]
  • FIG. 1 is a block diagram of a prior art instruction cache subsystem of a multi-processor system 100. In particular, multi-processor system 100 includes two processors, a P1 processor 102 and a P2 processor 104. P1 processor 102 and P2 processor 104 each access a main memory 106 via a bus 108. P1 processor 102 caches recently used instructions in an instruction cache 110. P2 processor 104 caches recently used instructions in an instruction cache 112. P1 processor 102 and instruction cache 110 reside on die (chip) 114. P2 processor 104 and instruction cache 112 reside on die 116. Accordingly, prior art system 100 represents an SMP (Symmetric Multi-Processing) system that shares memory, main memory 106. Further, instruction cache 110 and instruction cache 112 typically each include two ports, a port for connecting to P1 processor 102 and P2 processor 104, respectively, and a port for connecting to main memory 106. The ports can be physical ports or logical ports. [0007]
  • SUMMARY OF THE INVENTION
  • The present invention provides a shared instruction cache for multiple processors. For example, the present invention provides a cost-effective and high performance instruction cache subsystem in a microprocessor that includes multiple processors (i.e., CPUs (Central Processing Units)). [0008]
  • In one embodiment, an apparatus for a microprocessor includes an instruction cache that is shared by a first processor and a second processor, a first register index base for the first processor, and a first memory address base for the first processor. The apparatus also includes a second register index base for the second processor, and a second memory address base for the second processor. On each processor, a register access is offset using the register index base (e.g., a register address specifier is concatenated with the register index base). Similarly, on each processor, a memory access is offset using the memory address base (e.g., a memory address specifier is concatenated with the memory address base). This embodiment provides a shared instruction cache for multiple processors with hardware-implemented segmentation of register files and main memory based on which processor is executing a particular instruction (e.g., an instruction that involves a register access or a memory access). For example, this embodiment allows a thread of a multi-threaded computer program that is executed by the first processor and the same thread executed by the second processor to generate register files that can later be combined, because the register index bases can be set such that the executions of the same thread on the first processor and on the second processor do not overlap in their register address specifiers' usage of registers. Similarly, the same thread can be executed on the first processor and on the second processor, and by setting different values in the memory address bases, the data written into the main memory can be ensured not to overlap, such that the results of the execution of the same thread on the first processor and the second processor can subsequently be compared or combined. [0009]
  • Other aspects and advantages of the present invention will become apparent from the following detailed description and accompanying drawings. [0010]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of a prior art instruction cache subsystem of a multi-processor system. [0011]
  • FIG. 2 is a block diagram of a shared instruction cache for multiple processors in accordance with one embodiment of the present invention. [0012]
  • FIG. 3 is a block diagram of the shared instruction cache for the P1 processor and the P2 processor of FIG. 2 shown in greater detail in accordance with one embodiment of the present invention. [0013]
  • FIG. 4 is a functional diagram of an offset operation using a register index base in accordance with one embodiment of the present invention. [0014]
  • FIG. 5 is a functional diagram of an offset operation using a memory address base in accordance with one embodiment of the present invention. [0015]
  • DETAILED DESCRIPTION OF THE INVENTION
  • The present invention provides a shared instruction cache for multiple processors (i.e., CPUs (Central Processing Units)) (e.g., two CPUs or four CPUs). For example, a multi-processor microprocessor that desires a cost-effective and efficient implementation of an instruction cache subsystem would significantly benefit from the present invention. [0016]
  • Referring to prior art FIG. 1, instruction cache 110 and instruction cache 112 can simultaneously store the same instruction(s). Thus, the prior art system 100 wastes valuable instruction cache storage by duplicating storage of the same instruction(s). In particular, the same instruction(s) may be stored in three different locations: main memory 106, instruction cache 110, and instruction cache 112. [0017]
  • Accordingly, FIG. 2 is a block diagram showing a multiple processor system having a shared instruction cache in accordance with one embodiment of the present invention. In particular, a system 200 includes a main memory 202, a bus 204 coupled to main memory 202, and an integrated circuit 206 coupled to bus 204. On integrated circuit 206 reside a P1 processor 208, a P2 processor 210, and a shared instruction cache 212. Thus, system 200 is a multiple processor system in which P1 processor 208 and P2 processor 210 are integrated on the same die (chip) and share the same instruction cache, instruction cache 212. In a preferred embodiment, instruction cache 212 is a 16-kilobyte cache (e.g., a conventional 16-kilobyte dual-ported cache that uses a well-known (standard) cache architecture of two-way set associative, 32-byte lines to minimize cost and timing risk). Shared instruction cache 212 may be included within the integrated circuit die or external to the integrated circuit die on which one or more of the processors reside. [0018]
  • By sharing instruction cache 212, system 200 efficiently uses instruction cache storage space by avoiding duplicate storage of instruction(s) in instruction cache 212 by P1 processor 208 and P2 processor 210. Shared instruction cache 212 advantageously eliminates cache coherency protocols, reduces the total area of cache memory, and allows for the automatic sharing of code. For example, a particular instruction (e.g., opcode and register address specifier(s) or memory address specifier(s)) that is cached by both P1 processor 208 and P2 processor 210 uses one storage location of instruction cache 212 (e.g., a cache line in instruction cache 212). For example, if P1 processor 208 executes a particular library call, and subsequently P2 processor 210 issues the same library call, then if the library call executed by P1 processor 208 is cached in instruction cache 212, P2 processor 210 can access the cached library call in instruction cache 212 rather than having to access main memory 202 via bus 204, thereby increasing performance. Accordingly, system 200 advantageously allows for the sharing of cached instructions in instruction cache 212 for P1 processor 208 and P2 processor 210. Additionally, because in many instances an operating system maps different library calls to different pages in memory, the use of a shared instruction cache that allows use of the same library call can save substantially half the cache space. [0019]
  • A challenge with sharing instructions among multiple processors is that an instruction may specify particular registers or particular memory address locations, while the multiple processors may use different registers or different memory address locations. For example, it may be advantageous for a particular thread of execution of a multi-threaded computer program to be executed on a first processor, and the same thread to be executed on a second processor, and then the register files of the threads of execution on the first processor and the second processor to subsequently be combined. However, if the thread of execution on the first processor and the thread of execution on the second processor specify the same registers, then these register files overlap and, thus, generally cannot be combined. [0020]
  • Accordingly, FIG. 3 is a block diagram of shared instruction cache 212 for P1 processor 208 and P2 processor 210 shown in greater detail in accordance with one embodiment of the present invention. A system 300 includes P1 processor 208, which includes a register index base 302, a memory address base 304, and registers (register file) 306, and P2 processor 210, which includes a register index base 308, a memory address base 310, and registers (register file) 312. Registers 306 include 256 registers, and registers 312 include 256 registers. [0021]
  • System 300 allows for the same instruction(s) to be stored in instruction cache 212 for P1 processor 208 and P2 processor 210, but also allows for the same instruction(s) stored in instruction cache 212 to access different registers or a different segment of the register files, registers 306 and registers 312, of P1 processor 208 and P2 processor 210 when executed on P1 processor 208 and P2 processor 210, respectively. System 300 also allows for the same instruction(s) stored in instruction cache 212 and executed on P1 processor 208 and P2 processor 210 to access different segments of main memory 202. In particular, system 300 includes register index base 302 and register index base 308 that are used to offset register address specifiers of instructions executed on P1 processor 208 and P2 processor 210, respectively. System 300 includes memory address base 304 and memory address base 310 that are used to offset memory address specifiers of instructions executed on P1 processor 208 and P2 processor 210, respectively. By setting (e.g., loading) the register index base registers and the memory address base registers of P1 processor 208 and P2 processor 210 to different values, system 300 allows for a hardware implemented segmentation of the register files and the main memory of system 300. [0022]
  • For example, register index base 302 of P1 processor 208 can be set to 0, which results in a one-to-one correlation between the registers specified in an instruction and the registers used by P1 processor 208 during execution of the instruction. However, register index base 308 of P2 processor 210 can be set to 128. As a result, register address specifiers of the instruction executed on P2 processor 210 are offset using the value of register index base 308 and, in particular, are offset by either adding 128 to the register address specifier value or concatenating 128 with the register address specifier value of the instruction. For example, offsetting register address specifiers of instructions executed on P2 processor 210 by 128 can be implemented by setting the upper bit of an 8-bit address for the register address specifier to 1. If software compilers for system 300 and software written for system 300 only include functions that specify up to 128 registers, then register address specifiers never need to specify a register address with the upper bit of an 8-bit address set to 1, because they only need to specify (address) registers in the range of 0 to 127. Thus, adding 128 to a register address specifier in this case is nearly free from a performance standpoint, because the register index base stored in register index base 308 can simply be concatenated with the register address specifier. Accordingly, system 300 effectively segments the register files of P1 processor 208 and P2 processor 210 into two segments by setting register index base 302 to 0 (i.e., registers 0 to 127) and register index base 308 to 128 (i.e., registers 128 to 255). [0023]
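The concatenation offset described above can be sketched in a few lines of Python (an illustration, not part of the patent; the function name and 8-bit specifier width are assumptions). Because specifiers stay in the range 0 to 127, OR-ing in the base is equivalent to adding it, so no adder is needed in hardware:

```python
def offset_register_specifier(specifier: int, register_index_base: int) -> int:
    """Offset an 8-bit register address specifier with a register index base.

    When software only uses specifiers 0-127 (upper bit clear) and the base
    is 0 or 128, the 'add' degenerates to setting the upper bit, so a
    bitwise OR (concatenation) suffices.
    """
    assert 0 <= specifier <= 0x7F, "specifier must fit in the lower 7 bits"
    return register_index_base | specifier

# P1 (base 0): one-to-one mapping.  P2 (base 128): upper half of the file.
assert offset_register_specifier(5, 0) == 5      # P1 accesses register 5
assert offset_register_specifier(5, 128) == 133  # P2 accesses register 133
```

The same cached instruction thus touches register 5 on one processor and register 133 on the other, without the instruction itself being modified.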
  • As another example, some microprocessors include processors that only use 32 registers (e.g., Sun Microsystems' well-known SPARC architecture uses 32 registers). Thus, register index base 302 and register index base 308 can be set to specify eight different segments by using the upper three bits of an 8-bit address for a register address specifier to define the eight different segments. For example, the upper three bits can be set to 000 for a segment including registers 0 to 31, 001 for a segment including registers 32 to 63, 010 for a segment including registers 64 to 95, . . . , and 111 for a segment including registers 224 to 255. Thus, eight segments of registers, or octants, can be defined in this example. [0024]
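The octant scheme can be modeled as follows (a sketch; `octant_base` is a hypothetical name). The three segment bits sit above the five bits needed to address 32 registers:

```python
def octant_base(segment: int) -> int:
    """Register index base selecting one of eight 32-register octants.

    The upper three bits of the 8-bit register address pick the segment:
    000 -> registers 0-31, 001 -> 32-63, ..., 111 -> 224-255.
    """
    assert 0 <= segment <= 7
    return segment << 5  # place the 3 segment bits above the 5 specifier bits

assert octant_base(0) == 0    # registers 0 to 31
assert octant_base(1) == 32   # registers 32 to 63
assert octant_base(7) == 224  # registers 224 to 255
```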
  • System 300 also allows for P1 processor 208 and P2 processor 210 executing the same instruction(s) cached in instruction cache 212 to access different locations or segments in main memory 202. In particular, system 300 provides an efficient hardware implemented approach, unlike a software implemented approach, which requires significant additional set-up code. For example, memory address base 304 of P1 processor 208 can be set to 0. Thus, memory address specifiers of an instruction executed on P1 processor 208 result in a one-to-one correlation between the memory locations accessed in main memory 202 and the memory address specifiers of the executed instruction. However, memory address base 310 of P2 processor 210 can be set to 10,000 (base 10, that is, a decimal value). Thus, memory address specifiers of the instruction executed on P2 processor 210 are offset by the value 10,000. Hence, main memory 202 is segmented between P1 processor 208 and P2 processor 210 (assuming no memory address specifiers exceed 9,999). The offset operation for memory address specifiers can be implemented as an add or a concatenation operation, as similarly described above with respect to the offset operation for register address specifiers. [0025]
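A minimal sketch of the memory-side offset, using the decimal values from the example (the function name is an assumption):

```python
def offset_memory_address(specifier: int, memory_address_base: int) -> int:
    """Offset a memory address specifier with a per-processor base.

    Modeled here as an add; when the base is a power of two and the
    specifier fits below it, the add degenerates to a concatenation.
    """
    return memory_address_base + specifier

# P1's base is 0 and P2's base is 10,000, so the same cached instruction
# touches disjoint regions of main memory (specifiers below 10,000 assumed).
assert offset_memory_address(42, 0) == 42
assert offset_memory_address(42, 10_000) == 10_042
```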
  • In one embodiment, memory address base 304 and memory address base 310 each include a load address base for load operations and a store address base for store operations. For example, it may be desirable for P1 processor 208 and P2 processor 210 to share input data, which is stored in main memory 202 or a data cache (not shown). Thus, the load address bases of P1 processor 208 and P2 processor 210 can be set to the same value. For example, P1 processor 208 and P2 processor 210 can process the same image stored as data in data memory (e.g., main memory 202 or a data cache). However, P1 processor 208 and P2 processor 210 can process the stored data image differently. For example, P1 processor 208 can change the colors of the data image from green to red, while P2 processor 210 changes the colors of the data image from green to blue. It may be desirable to then compare the differently processed images. Thus, the store address bases of P1 processor 208 and P2 processor 210 can be set to different values such that the differently processed data images are stored in different storage locations in data memory. [0026]
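The image example amounts to the following base-register configuration (a sketch with hypothetical base values; the patent does not specify concrete numbers): both processors load from the same region, but store to disjoint regions.

```python
# Hypothetical load/store base settings for the shared-image example:
# identical load bases (shared source image), distinct store bases
# (separate output regions that can later be compared).
p1 = {"load_base": 0, "store_base": 20_000}
p2 = {"load_base": 0, "store_base": 40_000}

assert p1["load_base"] == p2["load_base"]    # both read the same input image
assert p1["store_base"] != p2["store_base"]  # results land in disjoint regions
```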
  • For example, instructions for library calls can be shared in instruction cache 212 and executed on P1 processor 208 and P2 processor 210 using different segments of the register files of P1 processor 208 and P2 processor 210, respectively. Thus, only one library call routine needs to be written, and system 300 advantageously allows for the segmentation of the register files, such as registers 0 to 127 for P1 processor 208 and registers 128 to 255 for P2 processor 210. Otherwise, two library routine calls would need to be written to achieve the same functionality: a first library routine call specifying registers 0 to 127, and a second library routine call specifying registers 128 to 255. Alternatively, the code can be written with support for sharing, which includes indirect addressing, jump tables, and other methods, which increase the footprint of the routine and slow down execution. Also, such an implementation would require twice the space in main memory 202 to store the two library routine calls and would effectively reduce the instruction cache 212 hit rate and use instruction cache 212 less efficiently. Moreover, because the first and second library routine calls in such an implementation would reside in two different storage locations in main memory 202, there is a possibility that the stored routine calls are located a multiple of 16 kilobytes apart. If instruction cache 212 is a 16-kilobyte instruction cache, then these library routine calls generally could not both simultaneously reside in instruction cache 212 and, thus, would require main memory accesses, which are expensive from a performance standpoint. [0027]
  • System 300 is flexible. If segmentation of the register files or main memory is not desired, then the register index bases and the memory address bases of P1 processor 208 and P2 processor 210 can simply be set to the same values (e.g., 0). [0028]
  • For example, a computer program can include multiple threads of execution (i.e., a multi-threaded computer program). In a multi-threaded computer program, a compiler that compiles the multi-threaded computer program can generate “set base register” instructions for setting the register index bases (e.g., loading a value into the register index bases) and instructions for setting the memory address bases. The setting of the register index base registers and the memory address base registers can be performed at the beginning of each thread of the compiled multi-threaded computer program. Thus, as the threads are allocated to different processors (CPUs), the base register values are set accordingly. Thus, as part of a thread-safe compilation process, the threads of the multi-threaded computer program can be allocated different segments of the register file or different segments of the main memory or both. For example, thread 1 of a multi-threaded computer program can operate on registers 0 to 31, thread 2 can operate on registers 32 to 63, . . . , and thread 8 can operate on registers 224 to 255. Similarly, different threads can be allocated different segments of main memory so that they do not overwrite the same main memory storage locations. Moreover, unlike a software implemented approach, system 300 allows for this implementation without requiring significant additional set-up code. In this example, an operating system can send thread 1 to P1 processor 208, send thread 2 to P2 processor 210, and send thread 3 to the next available processor. But by compiling thread 3 such that it has a different register index base and a different memory address base, thread 3, regardless of which processor it ends up executing on, can use a different segment of the register file of that processor and a different segment of main memory 202. [0029]
  • The instructions stored in instruction cache 212, which is shared between P1 processor 208 and P2 processor 210, are not modified; in particular, the register address specifiers and the memory address specifiers of the instructions retain the same values. For example, pseudo code of a compiled multi-threaded computer program is listed below. [0030]
  • BEGIN THREAD 1 [0031]
  • /*Initialize the base registers*/ [0032]
  • Set register index base [0033]
  • Set store address base [0034]
  • Set load address base [0035]
  • CALL [0036]
  • F(x) [0037]
  • F(x,y) [0038]
  • F(w) [0039]
  • END THREAD 1 /*do not need to reset the base registers, because the next thread will initialize the base registers*/ [0040]
  • As shown in the above pseudo code, at the beginning of a compiled thread 1, the base registers are set or initialized. Thus, a register index base is set to a particular value, a store address base is set to a particular value, and a load address base is set to a particular value. Thread 1 then executes various instructions, such as calls to various functions (e.g., library calls). At the end of thread 1, the base registers do not need to be reset, because the next thread will appropriately initialize the base registers. [0041]
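The effect of this prologue can be modeled in software (a minimal Python sketch; `run_thread` and `register_file` are illustrative names, not from the patent). Each thread first sets its base, then the shared code body runs unchanged, yet the writes land in disjoint segments:

```python
# Software model of the thread prologue: the shared body always writes to
# specifier 3, but the per-thread register index base steers that write
# into a different segment of the 256-entry register file.
register_file = [0] * 256

def run_thread(register_index_base: int, value: int) -> None:
    # Shared code body: "write value to register 3" -- the specifier (3)
    # is identical for every thread; only the base register differs.
    register_file[register_index_base | 3] = value

run_thread(0, 111)    # thread 1: segment covering registers 0 to 127
run_thread(128, 222)  # thread 2: segment covering registers 128 to 255

assert register_file[3] == 111    # thread 1's result, lower segment
assert register_file[131] == 222  # thread 2's result, upper segment
```

Because neither write clobbers the other, the two register files can later be compared or combined, as described above.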
  • FIG. 4 is a functional diagram 400 of an offset operation using a register index base in accordance with one embodiment of the present invention. Block 402 is a storage location for an operation code (opcode) that includes a register destination (Rd) and up to three register address specifiers (RS1, RS2, and RS3). Block 404 is a storage location that is loaded with a multiply add opcode, MUL ADD. Block 404 includes the MUL ADD operation, which requires 8 bits, an 8-bit Rd that is set to the decimal value of 40, an 8-bit RS1 that is set to the decimal value of 10, an 8-bit RS2 that is set to the decimal value of 20, and an 8-bit RS3 that is set to the decimal value of 30. The MUL ADD operation multiplies RS1 and RS2, then adds RS3 to the product, and stores the result in Rd. The binary equivalents of the decimal values stored in the register address specifiers of block 404 are shown below each sub-block. The register index base stored in register index base register 302 of P1 processor 208 is concatenated with the register address specifiers of block 404. The result of the concatenation of the register index base with the register address specifiers of block 404, assuming the register index base is set to the decimal value of 64, is shown in block 406. In particular, block 406 is a storage location that includes the MUL ADD opcode, Rd now set to 104, RS1 now set to 74, RS2 now set to 84, and RS3 now set to 94. The binary equivalents of the decimal values of the register address specifiers of block 406 are shown below each sub-block. [0042]
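The FIG. 4 numbers can be re-derived directly (a sketch; the dictionary layout is an illustration, not the hardware encoding). With a base of 64 (binary 01000000) and all specifiers below 64, concatenation is a simple OR:

```python
BASE = 64  # register index base of P1 processor 208 in the FIG. 4 example

# Rd, RS1, RS2, RS3 fields of the cached MUL ADD instruction (block 404)
specifiers = {"Rd": 40, "RS1": 10, "RS2": 20, "RS3": 30}

# Concatenating the base with each specifier yields block 406's values.
offset = {name: BASE | value for name, value in specifiers.items()}
assert offset == {"Rd": 104, "RS1": 74, "RS2": 84, "RS3": 94}
```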
  • FIG. 5 is a functional diagram 500 of an offset operation using a memory address base in accordance with one embodiment of the present invention. Functional diagram 500 includes block 402, which is a storage location for an opcode that includes up to four register address specifiers. Block 504 is a storage location that is loaded with the “ld [R10+R20], R30” operation. As shown in FIG. 5, R10 and R20 are storage locations in register file 306. The storage location R10 stores the decimal value 60, and the storage location R20 stores the decimal value 18. The load operation results in the addition of the decimal value 60 and the decimal value 18, which equals the decimal value 78 (i.e., the binary value 1001110). The memory address base, which is stored in memory address base register 304, is then concatenated with the binary value equal to 78 to provide a new memory address pointer 506 that points to a storage location of main memory 202. If the memory address base is set to 1024 in decimal (base 10), or 10000000000 in binary, then pointer 506 points to memory address location 1102 in decimal, or 10001001110 in binary, in main memory 202. This concatenation operation can be implemented without requiring significant extra gates in the critical path. If a first thread, which sets register index bases and memory address bases to different values, desires to know the results of the execution of a second thread, then the first thread can set its register index bases and memory address bases to the same values as the second thread. [0043]
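The FIG. 5 address computation can likewise be checked in a few lines (a sketch; variable names are illustrative). Since 78 fits entirely below the base's single set bit, OR-ing the base is the same as adding it:

```python
R10, R20 = 60, 18            # register contents from register file 306
MEMORY_ADDRESS_BASE = 1024   # 10000000000 in binary

specifier = R10 + R20                      # 78, i.e. 1001110 in binary
pointer = MEMORY_ADDRESS_BASE | specifier  # concatenation: base bit sits above bit 6

assert specifier == 78
assert pointer == 1102                     # 10001001110 in binary
assert bin(pointer) == "0b10001001110"
```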
  • Although particular embodiments of the present invention have been shown and described, it will be obvious to those skilled in the art that changes and modifications can be made without departing from the present invention in its broader aspects, and therefore, the appended claims are to encompass within their scope all such changes and modifications that fall within the true scope of the present invention. [0044]

Claims (22)

What is claimed is:
1. An apparatus for a microprocessor, comprising:
a shared instruction cache, the shared instruction cache being shared by a first processor and a second processor of the microprocessor; and
a first register index base coupled to the first processor, and
a second register index base coupled to the second processor,
wherein the first register index base and the second register index base enable a single instruction stored in the shared instruction cache to be executed on both the first processor and the second processor.
2. The apparatus of
claim 1
further comprising:
a first memory address base for the first processor and a second memory address base for the second processor.
3. The apparatus of
claim 2
further comprising:
a first set of registers for the first processor and a second set of registers for the second processor.
4. The apparatus of
claim 3
wherein the first set of registers comprises 256 registers, and the second set of registers comprises 256 registers.
5. The apparatus of
claim 3
further comprising:
a bus connected to the first processor and the second processor; and
a main memory connected to the bus.
6. The apparatus of
claim 3
wherein the instruction cache stores an instruction, the instruction being executed by the first processor, and the instruction being executed by the second processor.
7. The apparatus of
claim 3
wherein the instruction cache stores an instruction that comprises a register address specifier, the register address specifier being offset using the first register index base when the instruction is executed by the first processor, and the register address specifier being offset using the second register index base when the instruction is executed by the second processor.
8. The apparatus of
claim 3
wherein the instruction cache stores an instruction that comprises a memory address specifier, the memory address specifier being offset using the first memory address base when the instruction is executed by the first processor, and the memory address specifier being offset using the second memory address base when the instruction is executed by the second processor.
9. The apparatus of
claim 3
wherein the first register index base is a first value stored in a first register, the second register index base is a second value stored in a second register, the first memory address base is a third value stored in a third register, and the second memory address base is a fourth value stored in a fourth register.
10. The apparatus of
claim 3
wherein the instruction cache is shared by the first processor, the second processor, and at least one other processor.
11. The apparatus of
claim 3
wherein the first memory address base comprises a first load address base and a first store address base, and the second memory address base comprises a second load address base and a second store address base.
12. A method for a shared instruction cache of a microprocessor, comprising:
storing an instruction in an instruction cache, the instruction cache being shared by a first processor and a second processor of the microprocessor;
storing a first register index base for the first processor;
storing a second register index base for the second processor; and,
using the first register index base and the second register index base to enable the instruction to be executed on both the first processor and the second processor.
13. The method of
claim 12
further comprising:
storing a first memory address base for the first processor and a second memory address base for the second processor.
14. The method of
claim 13
further comprising:
executing the instruction stored in the instruction cache on the first processor by offsetting a register address specifier of the instruction using the first register index base; and
executing the instruction stored in the instruction cache on the second processor by offsetting the register address specifier using the second register index base.
15. The method of
claim 14
further comprising:
segmenting a first register file of the first processor and a second register file of the second processor using the first register index base and the second register index base, respectively, so that the first register file and the second register file can be combined after executing a thread of a multi-threaded computer program on the first processor and the second processor.
16. The method of
claim 13
further comprising:
executing the instruction stored in the instruction cache on the first processor by offsetting a memory address specifier using the first memory address base; and
executing the instruction stored in the instruction cache on the second processor by offsetting the memory address specifier using the second memory address base.
17. The method of
claim 16
further comprising:
segmenting a main memory using the first memory address base and the second memory address base so that a thread of a multi-threaded computer program executed by the first processor uses a first segment of the main memory, and the thread of the multi-threaded computer program executed by the second processor uses a second segment of the main memory, wherein the first segment and second segment do not overlap.
18. The method of
claim 16
further comprising:
storing a first load address base for the first processor and storing a second load address base for the second processor; and
storing a first store address base for the first processor and storing a second store address base for the second processor.
19. The method of
claim 16
wherein the offsetting is performed using a concatenation operation.
20. An apparatus for a shared instruction cache, comprising:
an instruction cache, the instruction cache storing an instruction that can be executed by a first processor and by a second processor;
a first register index base for the first processor stored in a first register; and
a second register index base for the second processor stored in a second register.
21. The apparatus of
claim 20
further comprising:
a first memory address base for the first processor stored in a third register; and
a second memory address base for the second processor stored in a fourth register.
22. The apparatus of
claim 21
wherein the first memory address base and the second memory address base each comprise a load address base and a store address base.
US09/818,295 1998-12-03 2001-03-27 Shared instruction cache for multiple processors Expired - Lifetime US6378041B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US09/818,295 US6378041B2 (en) 1998-12-03 2001-03-27 Shared instruction cache for multiple processors
US10/100,263 US6523090B2 (en) 1998-12-03 2002-03-18 Shared instruction cache for multiple processors

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/204,793 US6212604B1 (en) 1998-12-03 1998-12-03 Shared instruction cache for multiple processors
US09/818,295 US6378041B2 (en) 1998-12-03 2001-03-27 Shared instruction cache for multiple processors

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/204,793 Continuation US6212604B1 (en) 1998-12-03 1998-12-03 Shared instruction cache for multiple processors

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US10/100,263 Continuation US6523090B2 (en) 1998-12-03 2002-03-18 Shared instruction cache for multiple processors

Publications (2)

Publication Number Publication Date
US20010011327A1 true US20010011327A1 (en) 2001-08-02
US6378041B2 US6378041B2 (en) 2002-04-23

Family

ID=22759455

Family Applications (3)

Application Number Title Priority Date Filing Date
US09/204,793 Expired - Lifetime US6212604B1 (en) 1998-12-03 1998-12-03 Shared instruction cache for multiple processors
US09/818,295 Expired - Lifetime US6378041B2 (en) 1998-12-03 2001-03-27 Shared instruction cache for multiple processors
US10/100,263 Expired - Lifetime US6523090B2 (en) 1998-12-03 2002-03-18 Shared instruction cache for multiple processors

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/204,793 Expired - Lifetime US6212604B1 (en) 1998-12-03 1998-12-03 Shared instruction cache for multiple processors

Family Applications After (1)

Application Number Title Priority Date Filing Date
US10/100,263 Expired - Lifetime US6523090B2 (en) 1998-12-03 2002-03-18 Shared instruction cache for multiple processors

Country Status (2)

Country Link
US (3) US6212604B1 (en)
WO (1) WO2000033184A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130297888A1 (en) * 2011-01-07 2013-11-07 Fujitsu Limited Scheduling method and multi-core processor system

Families Citing this family (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6332215B1 (en) * 1998-12-08 2001-12-18 Nazomi Communications, Inc. Java virtual machine hardware for RISC and CISC processors
US6341338B1 (en) * 1999-02-04 2002-01-22 Sun Microsystems, Inc. Protocol for coordinating the distribution of shared memory
US6983350B1 (en) * 1999-08-31 2006-01-03 Intel Corporation SDRAM controller for parallel processor architecture
US6532509B1 (en) 1999-12-22 2003-03-11 Intel Corporation Arbitrating command requests in a parallel multi-threaded processing system
US6694380B1 (en) 1999-12-27 2004-02-17 Intel Corporation Mapping requests from a processing unit that uses memory-mapped input-output space
US6661794B1 (en) 1999-12-29 2003-12-09 Intel Corporation Method and apparatus for gigabit packet assignment for multithreaded packet processing
US6647546B1 (en) 2000-05-03 2003-11-11 Sun Microsystems, Inc. Avoiding gather and scatter when calling Fortran 77 code from Fortran 90 code
US6802057B1 (en) 2000-05-03 2004-10-05 Sun Microsystems, Inc. Automatic generation of fortran 90 interfaces to fortran 77 code
US6910107B1 (en) * 2000-08-23 2005-06-21 Sun Microsystems, Inc. Method and apparatus for invalidation of data in computer systems
EP1197847A3 (en) 2000-10-10 2003-05-21 Nazomi Communications Inc. Java hardware accelerator using microcode engine
US6732247B2 (en) * 2001-01-17 2004-05-04 University Of Washington Multi-ported memory having pipelined data banks
US6647483B1 (en) * 2001-03-01 2003-11-11 Lsi Logic Corporation Address translation circuit for processors utilizing a single code image
KR20030029913A (en) * 2001-07-07 2003-04-16 코닌클리즈케 필립스 일렉트로닉스 엔.브이. Processor cluster
US7487505B2 (en) * 2001-08-27 2009-02-03 Intel Corporation Multithreaded microprocessor with register allocation based on number of active threads
US6883116B2 (en) * 2001-09-27 2005-04-19 International Business Machines Corporation Method and apparatus for verifying hardware implementation of a processor architecture in a logically partitioned data processing system
US7610451B2 (en) * 2002-01-25 2009-10-27 Intel Corporation Data transfer mechanism using unidirectional pull bus and push bus
US6745288B2 (en) * 2002-05-21 2004-06-01 International Business Machines Corporation Staggering call stack offsets for multiple duplicate control threads
US7471688B2 (en) * 2002-06-18 2008-12-30 Intel Corporation Scheduling system for transmission of cells to ATM virtual circuits and DSL ports
US7337275B2 (en) * 2002-08-13 2008-02-26 Intel Corporation Free list and ring data structure management
US7058829B2 (en) * 2002-08-14 2006-06-06 Intel Corporation Method and apparatus for a computing system having an active sleep mode CPU that uses the cache of a normal active mode CPU
US7433307B2 (en) * 2002-11-05 2008-10-07 Intel Corporation Flow control in a network environment
US6941438B2 (en) * 2003-01-10 2005-09-06 Intel Corporation Memory interleaving
US7020748B2 (en) * 2003-01-21 2006-03-28 Sun Microsystems, Inc. Cache replacement policy to mitigate pollution in multicore processors
US8732368B1 (en) 2005-02-17 2014-05-20 Hewlett-Packard Development Company, L.P. Control system for resource selection between or among conjoined-cores
US9003168B1 (en) * 2005-02-17 2015-04-07 Hewlett-Packard Development Company, L. P. Control system for resource selection between or among conjoined-cores
US7500066B2 (en) * 2005-04-30 2009-03-03 Tellabs Operations, Inc. Method and apparatus for sharing instruction memory among a plurality of processors
US7663635B2 (en) * 2005-05-27 2010-02-16 Ati Technologies, Inc. Multiple video processor unit (VPU) memory mapping
US7730261B1 (en) 2005-12-20 2010-06-01 Marvell International Ltd. Multicore memory management system
US8051272B2 (en) * 2006-05-15 2011-11-01 Samsung Electronics Co., Ltd. Method and system for generating addresses for a processor
US20080005500A1 (en) * 2006-06-28 2008-01-03 Mansoor Ahamed Basheer Ahamed Method, system, and apparatus for accessing core resources in a multicore environment
US20080005525A1 (en) * 2006-06-29 2008-01-03 Rosenbluth Mark B Partitioning program memory
US20080065865A1 (en) * 2006-09-08 2008-03-13 Ilhyun Kim In-use bits for efficient instruction fetch operations
US8782367B2 (en) * 2006-12-20 2014-07-15 Stmicroelectronics S.A. Memory area protection circuit
KR101801920B1 (en) 2010-12-17 2017-12-28 삼성전자주식회사 Configurable clustered register file and Reconfigurable computing device with the same
CN104252425B (en) * 2013-06-28 2017-07-28 Huawei Technologies Co., Ltd. Instruction buffer management method and processor
US10176147B2 (en) * 2017-03-07 2019-01-08 Qualcomm Incorporated Multi-processor core three-dimensional (3D) integrated circuits (ICs) (3DICs), and related methods

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE4129614C2 (en) * 1990-09-07 2002-03-21 Hitachi Ltd System and method for data processing
JPH06103068A (en) 1992-09-18 1994-04-15 Toyota Motor Corp Data processor
US5692207A (en) 1994-12-14 1997-11-25 International Business Machines Corporation Digital signal processing system with dual memory structures for performing simplex operations in parallel
US6029242A (en) 1995-08-16 2000-02-22 Sharp Electronics Corporation Data processing system using a shared register bank and a plurality of processors
US5958038A (en) 1997-11-07 1999-09-28 S3 Incorporated Computer processor with two addressable memories and two stream registers and method of data streaming of ALU operation

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130297888A1 (en) * 2011-01-07 2013-11-07 Fujitsu Limited Scheduling method and multi-core processor system
US9367459B2 (en) * 2011-01-07 2016-06-14 Fujitsu Limited Scheduling method and multi-core processor system

Also Published As

Publication number Publication date
US6523090B2 (en) 2003-02-18
US20020174285A1 (en) 2002-11-21
US6378041B2 (en) 2002-04-23
US6212604B1 (en) 2001-04-03
WO2000033184A1 (en) 2000-06-08

Similar Documents

Publication Publication Date Title
US6378041B2 (en) Shared instruction cache for multiple processors
US5239642A (en) Data processor with shared control and drive circuitry for both breakpoint and content addressable storage devices
US5051885A (en) Data processing system for concurrent dispatch of instructions to multiple functional units
US5341500A (en) Data processor with combined static and dynamic masking of operand for breakpoint operation
US6813701B1 (en) Method and apparatus for transferring vector data between memory and a register file
US5019965A (en) Method and apparatus for increasing the data storage rate of a computer system having a predefined data path width
US7610469B2 (en) Vector transfer system for packing dis-contiguous vector elements together into a single bus transfer
US8046568B2 (en) Microprocessor with integrated high speed memory
JPH02190930A (en) Software instruction executing apparatus
JPS6014341A (en) Trap interrupt system for basic instruction set computer system
WO2002050668A2 (en) System and method for multiple store buffer forwarding
US7546442B1 (en) Fixed length memory to memory arithmetic and architecture for direct memory access using fixed length instructions
US5752273A (en) Apparatus and method for efficiently determining addresses for misaligned data stored in memory
US6094711A (en) Apparatus and method for reducing data bus pin count of an interface while substantially maintaining performance
US5732405A (en) Method and apparatus for performing a cache operation in a data processing system
US6427191B1 (en) High performance fully dual-ported, pipelined cache design
US6405233B1 (en) Unaligned semaphore adder
GB2402759A (en) Transferring data files between a register file and a memory
Berenbaum et al. Architectural Innovations in the CRISP Microprocessor.
US5613081A (en) Method of operating a data processor with rapid address comparison for data forwarding
EP0101718B1 (en) Computer with automatic mapping of memory contents into machine registers
US7003650B2 (en) Method for prioritizing operations within a pipelined microprocessor based upon required results
US6321319B2 (en) Computer system for allowing a two word jump instruction to be executed in the same number of cycles as a single word jump instruction
US5649229A (en) Pipeline data processor with arithmetic/logic unit capable of performing different kinds of calculations in a pipeline stage
JP2696578B2 (en) Data processing device

Legal Events

Date Code Title Description
STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12