US20100318764A1 - System and method for managing processor-in-memory (PIM) operations

Info

Publication number: US20100318764A1
Application number: US12/484,062
Other versions: US8583898B2
Authority: US (United States)
Prior art keywords: vector, AMO, memory, executed, functional units
Inventor: Terry D. Greyzck
Assignee (original and current): Cray Inc.
Legal status: Granted; active
Filing and priority date: 2009-06-12
Assignment: assigned to Cray Inc. (assignor: Terry D. Greyzck)

Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06F ELECTRIC DIGITAL DATA PROCESSING > G06F 9/00 Arrangements for program control, e.g. control units > G06F 9/06 using stored programs > G06F 9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F 9/30036 Instructions to perform operations on packed data, e.g. vector, tile or matrix operations
    • G06F 9/30029 Logical and Boolean instructions, e.g. XOR, NOT
    • G06F 9/3004 Instructions to perform operations on memory
    • G06F 9/30076 Instructions to perform miscellaneous control operations, e.g. NOP > G06F 9/30087 Synchronisation or serialisation instructions
    • G06F 9/34 Addressing or accessing the instruction operand or the result; formation of operand address; addressing modes > G06F 9/345 of multiple operands or results > G06F 9/3455 using stride

Abstract

A system and method of compiling program code, wherein the program code includes an operation on an array of data elements stored in memory of a computer system. The program code is scanned for operations that are vectorizable. The vectorizable operations are examined to determine whether they should be executed at least in part in a vector atomic memory operation (AMO) functional unit attached to memory. If so, the compiled code includes vector AMO instructions.

Description

    RELATED APPLICATION
  • This application is related to U.S. patent application Ser. No. 11/946,490, filed Nov. 28, 2007, entitled “Vector Atomic Memory Operations,” which is incorporated herein by reference.
  • FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
  • The U.S. Government has a paid-up license in this invention and the right in limited circumstances to require the patent owner to license others on reasonable terms as provided for by the terms of Contract No. MDA904-02-3-0052, awarded by the Maryland Procurement Office.
  • FIELD OF THE INVENTION
  • The invention relates generally to vector computer software, and more specifically to a system and method for managing processor-in-memory (PIM) operations.
  • BACKGROUND
  • Supercomputers are high performance computing platforms that employ a pipelined vector processing approach to solving numerical problems. Vectors are ordered sets of data. Problems that can be structured as a sequence of operations on vectors can experience one to two orders of magnitude increased throughput when executed on a vector machine (compared to execution on a scalar machine of the same cost). Pipelining further increases throughput by hiding memory latency through the prefetching of instructions and data.
  • A pipelined vector machine is disclosed in U.S. Pat. No. 4,128,880, issued Dec. 5, 1978, to Cray, the disclosure of which is hereby incorporated herein by reference. In the Cray machine, vectors are usually processed by loading them into operand vector registers, streaming them through a data processing pipeline having a functional unit, and receiving the output in a result vector register.
  • For vectorizable problems, vector processing is faster and more efficient than scalar processing. Overhead associated with maintenance of the loop-control variable (for example, incrementing and checking the count) is reduced. In addition, central memory conflicts are reduced (fewer but bigger requests) and data processing units are used more efficiently (through data streaming).
  • Vector processing supercomputers are used for a variety of large-scale numerical problems. Applications typically are highly structured computations that model physical processes. They exhibit a heavy dependence on floating-point arithmetic due to the potentially large dynamic range of values within these computations. Problems requiring modeling of heat or fluid flow, or of the behavior of a plasma, are examples of such applications.
  • Program code for execution on vector processing supercomputers must be vectorized to exploit the performance advantages of vector processing. Vectorization typically transforms an iterative loop into a nested loop with an inner loop of VL iterations, where VL is the length of the vector registers of the system. This process is known as “strip mining” the loop. In strip mining, the number of iterations in the internal loop is either fixed, or defined by the length of a vector register, depending on the hardware implementation; the number of iterations of the external loop is defined as an integer number of vector lengths. Any remaining iterations are performed as a separate loop placed before or after the nested loop, or alternately as constrained-length vector operations within the body of the vector loop.
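  • As an illustration of strip mining, the following C sketch shows a scalar loop and its strip-mined form (the array names, n, and the vector length VL of 64 are assumptions for illustration, not taken from any particular machine):

    enum { VL = 64 };   /* assumed hardware vector register length */

    /* Original scalar loop over n elements:
     *   for (int i = 0; i < n; i++) a[i] = b[i] + c[i];
     */
    void add_strip_mined(long *a, const long *b, const long *c, int n)
    {
        for (int i = 0; i < n; i += VL) {             /* external loop: whole vector lengths */
            int len = (n - i < VL) ? (n - i) : VL;    /* remainder becomes a shorter chunk */
            for (int j = 0; j < len; j++)             /* internal loop: issued as one vector op */
                a[i + j] = b[i + j] + c[i + j];
        }
    }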
  • Compilers exist that will automatically apply strip mining techniques to scalar loops within program code to create vectorized loops. This capability greatly simplifies programming efficient vector processing.
  • The memory to processor round trip time (in clock cycles) has grown rapidly as clock rates increase and the memory to processor interface becomes increasingly pipelined. Systems have been suggested that place processors closer to memory in order to reduce the number of cycles spent transferring data between processors and memory. In some processor-in-memory systems, the processor and the memory are collocated on the same board, or on the same piece of silicon. Such an approach is, however, expensive, requiring special hardware.
  • It is clear that there is a need for improved methods of balancing PIM operations against conventional processors in multiprocessor systems.
  • BRIEF DESCRIPTION OF THE FIGURES
  • FIG. 1 shows a functional block diagram of a computer system with vector atomic memory capability;
  • FIG. 2 shows a vector atomic memory operation; and
  • FIG. 3 illustrates vectorization according to the present invention.
  • DETAILED DESCRIPTION
  • The present invention provides a system and method for balancing PIM operations against conventional processors. The system and method balance the cost in time of fetching data from memory and storing data to memory against the efficiency of performing calculations in one or more processors. That is, is it better to avoid memory overhead and perform operations in the slower arithmetic units of a processor-in-memory, or is it better to accept the memory overhead in order to gain the increased speed of processors in a distributed processor system?
  • FIG. 1 illustrates a functional block diagram of a computer system 100, including one or more processors 110 connected through a memory controller 112 to one or more memory devices 114.
  • Processor 110 is not limited to any particular type of processor. In various embodiments, processor 110 is not a single processor, and may include any number of processors operating in a multi-processor system. In various embodiments, processor 110 includes cache memory.
  • In one embodiment, system 100 is a node in a larger system. In one such embodiment, each node includes four processors 110 and sixteen memory controllers 112. Channels 116 between processors 110 and controllers 112 use a 4-bit wide 5.0 Gbaud serializer/deserializer (SerDes) for an aggregate channel bandwidth of 16×2.5 Gbytes/s = 40 Gbytes/s per direction per processor, or 160 Gbytes/s per direction for each node.
  • In one such embodiment, controllers 112 serve as pin expanders, converting a small number of high-speed differential signals received from the processors 110 on channels 116 into a large number of single-ended signals that interface to commodity DDR2 memory parts on memory channels 118. Each memory controller 112 manages four DDR2 memory channels, each with a 40-bit-wide data/ECC path. The 32-bit data path, coupled with the four-deep memory access bursts of DDR2, provides a minimum transfer granularity of only 16 bytes. Thus the controller 112 with its associated memory devices has twice the peak data bandwidth and four times the single-word bandwidth of a standard 72-bit-wide DIMM.
  • In one embodiment, each memory channel 118 is connected to up to ten DDR2 DRAM devices.
  • In one embodiment, two or more processors 110 are located on a single compute node printed circuit board. In one such embodiment, the memory controller 112 and its associated memory devices 114 are located on a memory daughter card (MDC) that plugs into a connector on the compute node printed circuit board. Each of the eight MDCs contains 20 or 40 memory parts, providing up to 128 Gbytes of memory capacity per node using 1-Gbit memory parts.
  • Returning to FIG. 1, memory devices 114 are not limited to any particular type of memory device. In various embodiments, memory devices 114 include DRAM memory. In various embodiments, one or more memory devices 114 are double-data-rate two synchronous dynamic random access memory (DDR2 SDRAM) devices.
  • Memory device 114 is not limited to any particular configuration. In various embodiments, memory chips within memory device 114 are organized as five 8-bit devices, for a total of 40 bits. In some embodiments, only 39 of the 40 bits are used, where 32 bits are used for data and 7 bits are used to store an error correction code associated with the data bits. In various embodiments, the remaining bit is used to dynamically map out bad bits within the device, including the spare bit in a spare-bit insertion to repair persistent memory errors within the memory location providing the spare bit and having a persistent bit error.
  • Some processor operations are considered atomic, in that their occurrence can be considered a single event to the rest of the processor. More specifically, an atomic operation does not complete halfway: it either completes successfully or does not complete at all. This is important in a processor to ensure the validity of data, such as where multiple threads or operations can be operating on the same data at the same time. For example, if two separate processes intend to read the same memory location, increment the value, and write the updated value back to memory, both processes may read the memory value before either has written it back. When the processes write the data, the second process to write will be writing a value that is out of date, as it does not reflect the result of the previously completed read and increment operation.
  • This problem can be managed using various mechanisms to make such operations atomic, such that the operation locks the data until the operation is complete or otherwise operates as an atomic operation and does not appear to the rest of the processor to comprise separate read and increment steps. This ensures that the data is not modified or used for other instructions while the atomic instruction completes, preserving the validity of the instruction result.
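  • A minimal C11 sketch of the difference (the shared counter is an assumed example, not from the patent):

    #include <stdatomic.h>

    atomic_long counter;

    void racy_increment(void)    /* two threads can both read the old value; one update is lost */
    {
        long tmp = atomic_load_explicit(&counter, memory_order_relaxed);
        atomic_store_explicit(&counter, tmp + 1, memory_order_relaxed);
    }

    void atomic_increment(void)  /* the read-modify-write completes as one indivisible event */
    {
        atomic_fetch_add(&counter, 1);
    }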
  • System 100 includes a new type of instruction for a computer processor, in which atomic operations on memory can be vectorized, operating on multiple memory locations at the same time or via the same instruction. This addition to the instruction set makes more efficient use of the memory and network bandwidth in a multiprocessor system, and enables vectorization of more program loops in many program applications. In one embodiment, as is shown in FIG. 1, each memory controller 112 includes an atomic memory operation functional unit 120 capable of performing vector atomic memory operation (AMO) instructions.
  • Examples of atomic memory operations included in one embodiment of AMO functional unit 120 include a vector atomic add, vector atomic AND, vector atomic OR, vector atomic XOR, vector atomic fetch and add, vector atomic fetch and AND, vector atomic fetch and OR, and a vector atomic fetch and XOR. The non-fetch versions of these instructions read the memory location, perform the specified operation between the instruction data and the memory location data, and store the result to the memory location. The fetch versions perform similar functions, but also return the result of the operation to the processor rather than simply storing the result to memory.
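  • The two forms can be sketched in C as follows (the helper names are hypothetical; the patent defines machine instructions, not a C API, and each body is assumed to execute indivisibly with respect to the addressed word):

    void amo_add(long *addr, long data)        /* non-fetch form: the result stays in memory */
    {
        *addr = *addr + data;
    }

    long amo_fetch_add(long *addr, long data)  /* fetch form: the result is also returned */
    {
        long result = *addr + data;
        *addr = result;
        return result;
    }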
  • There are two vector types in various embodiments: strided and indexed vectors. Strided vectors use a base address and a stride to generate a vector of evenly spaced memory addresses starting at the base address. Indexed vector access uses a base and a vector of indexes to create a vector of memory addresses, enabling specification of a vector comprising elements that are not in order or evenly spaced.
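  • A short sketch of the address generation for the two vector types (base, stride, idx, vl, and the address arrays are assumed names for illustration):

    void make_addresses(long base, long stride, const long *idx, int vl,
                        long *addr_strided, long *addr_indexed)
    {
        for (int i = 0; i < vl; i++) {
            addr_strided[i] = base + (long)i * stride;  /* strided: evenly spaced from the base */
            addr_indexed[i] = base + idx[i];            /* indexed: arbitrary order and spacing */
        }
    }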
  • In one embodiment, hardware implementation of the vector atomic memory operations includes use of additional decode logic to decode the new type of vector atomic memory instruction. Vector registers in the processor and a vector mask are used to generate the vector instruction, and a single atomic memory instruction in the processor issues a number of atomic memory operations. In the memory system, vector atomic memory operations operate much like scalar atomic memory operations, and the memory manager block provides the atomic memory operation support needed to execute these instructions.
  • In one embodiment, system 100 includes vector Atomic Memory Operation (vector AMO or VAMO) instructions. One such instruction is:

  • [Aj,Vk]AADD([Aj,Vk],Vi)
  • This represents an integer addition operation that operates on a series of memory locations defined by adding a base scalar register (Aj) to a vector of offsets (Vk). The memory locations defined by the resulting vector of addresses are incremented by the corresponding amount in vector (Vi), on an element-by-element basis. The memory system guarantees that if multiple elements of the vector of addresses defined by (Aj+Vk) are identical, the multiple instances are not performed simultaneously. This guarantees the same result as if the operation were performed with an equivalent series of scalar operations, as would happen on a non-vector architecture.
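  • The guarantee can be read as equivalence to the following scalar series, sketched in C (vl and the atomic_add primitive are illustrative names, not defined by the patent):

    void atomic_add(long *addr, long value);   /* assumed indivisible add-to-memory primitive */

    void vamo_aadd_semantics(long Aj, const long *Vk, const long *Vi, int vl)
    {
        for (int e = 0; e < vl; e++) {
            long *addr = (long *)(Aj + Vk[e]);  /* element address: base plus offset */
            atomic_add(addr, Vi[e]);            /* duplicate addresses serialize, never collide */
        }
    }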
  • In one such embodiment, system 100 also includes a vector atomic AND instruction and a vector atomic OR instruction. The vector atomic AND instruction is:

  • [Aj,Vk]AAND([Aj,Vk],Vi)
  • This represents an AND operation that operates on a series of memory locations defined by adding a base scalar register (Aj) to a vector of offsets (Vk). The memory locations defined by the resulting vector of addresses are ANDed with the corresponding element in vector (Vi), on an element-by-element basis. The memory system guarantees that if multiple elements of the vector of addresses defined by (Aj+Vk) are identical, the multiple instances are not performed simultaneously. This guarantees the same result as if the operation were performed with an equivalent series of scalar operations, as would happen on a non-vector architecture.
  • The vector atomic OR instruction is:

  • [Aj,Vk]AOR([Aj,Vk],Vi)
  • This represents an OR operation that operates on a series of memory locations defined by adding a base scalar register (Aj) to a vector of offsets (Vk). The memory locations defined by the resulting vector of addresses are ORed with the corresponding element in vector (Vi), on an element-by-element basis. The memory system guarantees that if multiple elements of the vector of addresses defined by (Aj+Vk) are identical, the multiple instances are not performed simultaneously. This guarantees the same result as if the operation were performed with an equivalent series of scalar operations, as would happen on a non-vector architecture.
  • In one embodiment, other operations are provided for integer bitwise operations such as bitwise and, bitwise or, and bitwise exclusive or. The possible instructions are not limited by any particular architecture, and are easily extended to support any commutative operation such as floating point and complex addition or multiplication, integer multiplication, or other bit manipulation primitives. One such vector AMO instruction set is described in U.S. patent application Ser. No. 11/946,490, filed Nov. 28, 2007 entitled “Vector Atomic Memory Operations”, the description of which is incorporated herein by reference.
  • FIG. 2 illustrates a vector atomic memory operation. As can be seen in FIG. 2, Aj is the address of x, the vector Vk is a vector of ix values, and the vector Vi is a vector of y values. The vector atomic memory operation [Aj,Vk] AADD([Aj,Vk],Vi) can then express:

  • x[ix[i]]=x[ix[i]]+y[i];
  • where Aj=address of x, Vk=vector of ix values and Vi=vector of y values.
  • The vector ix can contain repeated values, and memory controller 112 will detect and process them correctly.
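  • Written as a loop, the operation is the update below; if ix contains repeated values (for example, ix = {3, 7, 3}), a conventional gather, add, and scatter could lose one of the updates to x[3], while the vector AMO form serializes the colliding elements:

    for (int i = 0; i < n; i++)
        x[ix[i]] = x[ix[i]] + y[i];   /* issued as a single vector AMO instruction */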
  • In one embodiment, the Vector AMO may execute out of order. In such embodiments, the method should only be used in situations where the operations to the same memory location can occur out of order.
  • The vector AMO instruction set can be used to implement processor-in-memory operations across computer system 100. In one embodiment, a check is made to determine whether the calculation to be performed falls within the class of problems with a potential for more efficient execution as a PIM operation than in a processor 110 of a multiprocessor system such as system 100 of FIG. 1. If it does, the code is changed to include the appropriate vector atomic memory operation or operations.
  • In one embodiment, the decision whether to execute the computation in a processor or in vector AMO functional unit 120 is based on estimates of memory travel time and processor execution time. For instance, in some embodiments, statements such as ia(i)=ia(i)+1 will always be done in vector AMO functional unit 120, since the elements to be added are all present in the vector AMO functional unit 120. This frees up processor and network bandwidth, since the references to ia(i) never have to travel to processor 110; the entire operation happens in memory controller 112. On the other hand, combinations of computations that can be chained in vector functional units may more appropriately be performed in processor 110. In some embodiments, the choice between vector AMO functional unit 120 and processor 110 will be based on heuristics, with the balance selected as a function of observation and expectation.
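  • A minimal sketch of such a heuristic (the cost parameters and the function name are assumptions, not from the patent):

    #include <stdbool.h>

    bool use_vector_amo(long round_trip_cost,      /* cycles to move operands to the CPU and back */
                        long cpu_exec_cost,        /* cycles in the processor's vector units */
                        long amo_exec_cost,        /* cycles in the slower in-memory units */
                        bool chains_with_other_ops)
    {
        if (chains_with_other_ops)   /* work that chains in vector functional units favors the CPU */
            return false;
        return amo_exec_cost < cpu_exec_cost + round_trip_cost;
    }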
  • In one embodiment, the method takes advantage of language constructs available in some programming languages. For instance, UPC and Co-array Fortran both include an extra dimension on their arrays that depicts processor number. When system 100 parallelizes across that dimension, it uses the vector AMO instructions as much as possible.
  • An advantage of this method of selecting between processor 110 and the PIM operations of vector AMO functional unit 120 is that it can be adapted to a variety of architectures to balance their use of conventional and PIM operations. In addition, it can be used to parallelize across vectors as well as across processors in multiprocessor systems.
  • A method of compiling program code is shown in FIG. 3. In the method of FIG. 3, the program code includes an operation on an array of data elements stored in memory of a computer system. The program code is scanned at 200 for an equation which could be executed as a vector computation. On detecting at 202 that an equation could be executed as a vector computation, control moves to 204, where the equation is reviewed to determine if it is a candidate for a PIM vector operation. If so, control moves to 206 and the equation is vectorized where possible with vector AMO instructions. Control then moves to 208.
  • If at 204, it is determined that the equation being reviewed is not a candidate for a PIM vector operation, control moves to 207 and the equation is vectorized where possible to use the vector functional units of processor 110. Control then moves to 208.
  • In some instances, an equation will most effectively be vectorized with a combination of instructions for the vector functional units in processors 110 and the vector AMO functional units 120. In such instances, the compiler will use a combination of vector AMO instructions and vector functional unit instructions.
  • At 208, a check is made to see if the scan is finished. If not, control moves to 200. Otherwise, control moves to 210 and the vectorized code is saved.
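  • The flow of FIG. 3 can be paraphrased as a C-style sketch (the opaque types and helper names are assumptions for illustration):

    #include <stddef.h>

    typedef struct Program Program;
    typedef struct Equation Equation;

    Equation *scan_next_vector_candidate(Program *code);   /* steps 200/202 */
    int       is_pim_candidate(const Equation *eq);        /* step 204 */
    void      vectorize_with_amo(Equation *eq);            /* step 206 */
    void      vectorize_with_vector_units(Equation *eq);   /* step 207 */
    void      save_vectorized_code(Program *code);         /* step 210 */

    void vectorize_pass(Program *code)
    {
        Equation *eq;
        while ((eq = scan_next_vector_candidate(code)) != NULL) {
            if (is_pim_candidate(eq))
                vectorize_with_amo(eq);
            else
                vectorize_with_vector_units(eq);
        }                                  /* loop exit corresponds to the check at 208 */
        save_vectorized_code(code);
    }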
  • In one such embodiment, each equation is decomposed into a nested loop, wherein the nested loop comprises an exterior loop and a virtual interior loop. The exterior loop decomposes the equation into a plurality of loops of length N, wherein N is an integer greater than one. The virtual interior loop executes vector operations corresponding to the N length loop to form a result vector of length N, wherein the virtual interior loop includes a vector atomic memory operation (AMO) instruction.
  • In one embodiment, N is set to equal the vector length of the vector register in the computer system.
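  • A sketch of this decomposition, assuming the update of FIG. 2 and a hypothetical vamo_add helper that issues one vector AMO add covering len elements:

    enum { N = 64 };   /* assumed to equal the vector register length */

    void vamo_add(long *base, const long *offsets, const long *addends, int len);

    void pim_update(long *x, const long *ix, const long *y, int total)
    {
        for (int i = 0; i < total; i += N) {              /* exterior loop */
            int len = (total - i < N) ? (total - i) : N;
            vamo_add(x, &ix[i], &y[i], len);              /* virtual interior loop: one vector AMO */
        }
    }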
  • In one embodiment, the following operations are available to execute in vector AMO functional units 120.
  • Bitwise or (|): AOR
  • Bitwise and (&): AAND
  • Bitwise exclusive or (^): AXOR
  • Integer add (+): AADD
  • In addition, the above operations can be used to provide the following extended operations:
  • Bitwise equivalence (.EQV. in Fortran): done with AXOR and some preprocessing
  • One's complement (~): AXOR with a word of all 1's
  • Floating point negation (-): AXOR with a constant that just has the sign bit set
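  • These reductions can be checked with ordinary C bit manipulation (a sketch of the bit patterns involved; in system 100 the XOR steps would be issued as AXOR memory operations):

    #include <stdint.h>

    uint64_t eqv(uint64_t a, uint64_t b)  { return ~(a ^ b); }         /* .EQV.: XOR, then complement */
    uint64_t ones_complement(uint64_t a)  { return a ^ ~0ULL; }        /* XOR with a word of all 1's */
    uint64_t fp_negate_bits(uint64_t f)   { return f ^ (1ULL << 63); } /* XOR with a sign-bit-only constant */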
  • The method of the present invention improves performance of computer system 100 by balancing memory transfer overhead and computational speed when vectorizing. Operations performed in vector AMO functional units 120 avoid the use of vector registers for the computation, moving the functionality onto the hardware memory system. This avoids the time normally taken moving data to and from the central processor.
  • Furthermore, the method works for multiprocessor parallelism in addition to local vectorization, as the hardware memory system of vector AMO functional unit 120 can be used to parallelize across processors as well as across vectors.
  • This method can be used with the vector update method described in U.S. patent application Ser. No. ______, filed herewith, entitled “Vector Atomic Memory Operation Vector Update System and Method”, the description of which is incorporated herein by reference, to determine the equations that can be vectorized to use the vector functional units of processors 110 versus the equations that, if vectorized at all, must be vectorized using vector AMO functional units 120.
  • In one such embodiment, each equation is decomposed into a nested loop, wherein the nested loop comprises an exterior loop and a virtual interior loop. The exterior loop decomposes the equation into a plurality of loops of length N, wherein N is an integer greater than one. The virtual interior loop executes vector operations corresponding to the N length loop to form a result vector of length N, wherein the virtual interior loop includes a vector atomic memory operation (AMO) instruction used to execute the interior loop.
  • In one embodiment, N is set to equal the vector length of the vector register in the computer system.
  • Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that any arrangement which is calculated to achieve the same purpose may be substituted for the specific embodiments shown. This application is intended to cover any adaptations or variations of the example embodiments of the subject matter described herein. It is intended that this subject matter be limited only by the claims, and the full scope of equivalents thereof.
  • Such embodiments of the subject matter may be referred to herein individually or collectively by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept, if more than one is in fact disclosed. Thus, although specific embodiments have been illustrated and described herein, any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments and other embodiments not specifically described herein will be apparent to those of skill in the art upon reviewing the above description.
  • The accompanying drawings that form a part hereof show, by way of illustration and not of limitation, specific embodiments in which the subject matter may be practiced. The embodiments illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other embodiments may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. This Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims and the full range of equivalents to which such claims are entitled.
  • The Abstract is provided to comply with 37 C.F.R. §1.72(b) requiring an abstract that will allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In the foregoing Detailed Description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted to require more features than are expressly recited in each claim. Rather, inventive subject matter may be found in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.

Claims (13)

1. In a vector computer system having a plurality of processors connected to memory, wherein the memory includes one or more vector atomic memory operation (AMO) functional units and the processors include one or more vector functional units, a method of vectorizing an iterative loop, the method comprising:
scanning program code, wherein scanning includes determining whether an operation is vectorizable;
if an operation is vectorizable, determining whether the operation should be executed using a vector AMO instruction in one of the vector AMO functional units;
if an operation is vectorizable and the operation should be executed using a vector AMO instruction in one of the vector AMO functional units, implementing at least a portion of the operation as a vector AMO instruction; and
if an operation is vectorizable and the operation should not be executed using a vector AMO instruction in one of the vector AMO functional units, implementing at least a portion of the operation to execute in one or more vector functional units of one or more processors.
2. The method of claim 1, wherein determining whether the operation should be executed using a vector AMO instruction in one of the vector AMO functional units is a function of memory transfer overhead.
3. The method of claim 1, wherein determining whether the operation should be executed using a vector AMO instruction in one of the vector AMO functional units is a function of computational speed difference between one or more of the vector functional units and one or more of the vector AMO functional units.
4. The method of claim 1, wherein the program code includes an array having an additional dimension depicting processor number and wherein determining whether the operation should be executed using a vector AMO instruction in one of the vector AMO functional units is a function of processor number.
5. The method of claim 1, wherein the vector atomic memory operation is performed in a memory controller.
6. An article comprising a computer readable medium having instructions thereon, wherein the instructions, when executed in a computer, create a system for executing the method of claim 1.
7. In a vector computer system having a plurality of processors connected to memory, wherein the memory includes one or more vector atomic memory operation (AMO) functional units and the processors include one or more vector functional units, a computer implemented method of compiling program code, the method comprising:
a) scanning the program code for an operation that is vectorizable;
b) determining whether some portion of the vectorizable equation should be executed in the vector AMO functional unit; and
c) replacing the equation with vectorized machine executable code;
wherein, if a determination was made that some portion of the vectorizable equation should be executed in the vector AMO functional unit, the vectorized machine executable code includes vectorization code for performing a mathematical operation using one or more vector atomic memory operations; and
wherein, if a determination was made that some portion of the vectorizable equation should not be executed in the vector AMO functional unit, the vectorized machine executable code includes vectorization code for performing vector operations without using the vector AMO functional unit.
8. The method of claim 7, wherein the vector atomic memory operation is performed in a memory controller.
9. The method of claim 7, wherein determining whether some portion of the vectorizable equation should be executed in the vector AMO functional unit includes determining whether an operation has recurring data points.
10. The method of claim 7, wherein determining whether some portion of the vectorizable equation should be executed in the vector AMO functional unit is a function of memory transfer overhead.
11. The method of claim 7, wherein determining whether some portion of the vectorizable equation should be executed in the vector AMO functional unit is a function of computational speed difference between one or more of the vector functional units and one or more of the vector AMO functional units.
12. The method of claim 7, wherein the program code includes an array having an additional dimension depicting processor number and wherein determining whether some portion of the vectorizable equation should be executed in the vector AMO functional unit is a function of processor number.
13. An article comprising a computer readable medium having instructions thereon, wherein the instructions, when executed in a computer, create a system for executing the method of claim 7.
US12/484,062 (filed 2009-06-12, priority 2009-06-12): System and method for managing processor-in-memory (PIM) operations. Granted as US8583898B2; status Active, adjusted expiration 2032-01-15.

Priority Applications (1)

US12/484,062 (priority date 2009-06-12, filing date 2009-06-12): System and method for managing processor-in-memory (PIM) operations; granted as US8583898B2.

Applications Claiming Priority (1)

US12/484,062 (priority date 2009-06-12, filing date 2009-06-12): System and method for managing processor-in-memory (PIM) operations; granted as US8583898B2.

Publications (2)

US20100318764A1: published 2010-12-16
US8583898B2: granted 2013-11-12

Family

Family ID: 43307405

Family Applications (1)

US12/484,062 (priority date 2009-06-12, filing date 2009-06-12): System and method for managing processor-in-memory (PIM) operations. Status: Active, adjusted expiration 2032-01-15; granted as US8583898B2.

Country Status (1)

US: US8583898B2

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100318769A1 (en) * 2009-06-12 2010-12-16 Cray Inc. Using vector atomic memory operation to handle data of different lengths
WO2014105208A1 (en) * 2012-12-27 2014-07-03 Intel Corporation Vectorization of collapsed multi-nested loops
WO2017091282A1 (en) * 2015-11-23 2017-06-01 Advanced Micro Devices, Inc. Method and apparatus for performing a parallel search operation
WO2017142914A1 (en) * 2016-02-19 2017-08-24 Micron Technology, Inc. Data transfer with a bit vector operation device
WO2017155781A1 (en) * 2016-03-10 2017-09-14 Micron Technology, Inc. Apparatuses and methods for logic/memory devices
EP2656229A4 (en) * 2010-12-21 2018-04-04 Intel Corporation Mechanism for conflict detection using simd
CN108369507A (en) * 2015-10-16 2018-08-03 三星电子株式会社 For using the method and apparatus for handling process instruction in memory
US10346092B2 (en) 2017-08-31 2019-07-09 Micron Technology, Inc. Apparatuses and methods for in-memory operations using timing circuitry
US10416927B2 (en) 2017-08-31 2019-09-17 Micron Technology, Inc. Processing in memory
US20200117454A1 (en) * 2018-10-10 2020-04-16 Micron Technology, Inc. Vector registers implemented in memory
US10741239B2 (en) 2017-08-31 2020-08-11 Micron Technology, Inc. Processing in memory device including a row address strobe manager
US20220413849A1 (en) * 2021-06-28 2022-12-29 Advanced Micro Devices, Inc. Providing atomicity for complex operations using near-memory computing

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102479212B1 (en) 2016-08-17 2022-12-20 삼성전자주식회사 Semiconductor memory device, memory system including the same and method of operating the same
KR102453542B1 (en) 2018-02-21 2022-10-12 삼성전자주식회사 Memory device supporting skip calculation mode and method of operating the same
KR20200082617A (en) 2018-12-31 2020-07-08 삼성전자주식회사 Calculation method using memory device and memory device performing the same
CN111679785A (en) 2019-03-11 2020-09-18 三星电子株式会社 Memory device for processing operation, operating method thereof and data processing system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS63120338A (en) 1986-11-10 1988-05-24 Matsushita Electric Ind Co Ltd Program converting device

Patent Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4047158A (en) * 1974-12-13 1977-09-06 Pertec Corporation Peripheral processing system
US4128880A (en) * 1976-06-30 1978-12-05 Cray Research, Inc. Computer vector register processing
US4858115A (en) * 1985-07-31 1989-08-15 Unisys Corporation Loop control mechanism for scientific processor
US4710872A (en) * 1985-08-07 1987-12-01 International Business Machines Corporation Method for vectorizing and executing on an SIMD machine outer loops in the presence of recurrent inner loops
US4821181A (en) * 1986-01-08 1989-04-11 Hitachi, Ltd. Method for converting a source program of high level language statement into an object program for a vector processor
US4833606A (en) * 1986-10-09 1989-05-23 Hitachi, Ltd. Compiling method for vectorizing multiple do-loops in source program
US4817187A (en) * 1987-02-19 1989-03-28 Gtx Corporation Apparatus and method for vectorization of incoming scanned image data
US5036454A (en) * 1987-05-01 1991-07-30 Hewlett-Packard Company Horizontal computer having register multiconnect for execution of a loop with overlapped code
US5083267A (en) * 1987-05-01 1992-01-21 Hewlett-Packard Company Horizontal computer having register multiconnect for execution of an instruction loop with recurrance
US4967350A (en) * 1987-09-03 1990-10-30 Director General Of Agency Of Industrial Science And Technology Pipelined vector processor for executing recursive instructions
US5151991A (en) * 1987-10-21 1992-09-29 Hitachi, Ltd. Parallelization compile method and system
US5247696A (en) * 1991-01-17 1993-09-21 Cray Research, Inc. Method for compiling loops having recursive equations by detecting and correcting recurring data points before storing the result to memory
US5408677A (en) * 1992-11-18 1995-04-18 Nogi; Tatsuo Vector parallel computer
US5623685A (en) * 1994-12-01 1997-04-22 Cray Research, Inc. Vector register validity indication to handle out-of-order element arrival for a vector computer with variable memory latency
US6560282B2 (en) * 1998-03-10 2003-05-06 Sony Corporation Transcoding system using encoding history information
US6578197B1 (en) * 1998-04-08 2003-06-10 Silicon Graphics, Inc. System and method for high-speed execution of graphics application programs including shading language instructions
US20050240644A1 (en) * 2002-05-24 2005-10-27 Van Berkel Cornelis H Scalar/vector processor
US20040006667A1 (en) * 2002-06-21 2004-01-08 Bik Aart J.C. Apparatus and method for implementing adjacent, non-unit stride memory access patterns utilizing SIMD instructions
US20060248286A1 (en) * 2003-02-18 2006-11-02 Cray Inc. Optimized high bandwidth cache coherence mechanism
US7360142B1 (en) * 2004-03-03 2008-04-15 Marvell Semiconductor Israel Ltd. Methods, architectures, circuits, software and systems for CRC determination
US20060167784A1 (en) * 2004-09-10 2006-07-27 Hoffberg Steven M Game theoretic prioritization scheme for mobile ad hoc networks permitting hierarchal deference
US7656706B2 (en) * 2007-01-05 2010-02-02 The Texas A&M University System Storing information in a memory
US7660967B2 (en) * 2007-02-01 2010-02-09 Efficient Memory Technology Result data forwarding in parallel vector data processor based on scalar operation issue order
US20090138680A1 (en) * 2007-11-28 2009-05-28 Johnson Timothy J Vector atomic memory operations
US20100318979A1 (en) * 2009-06-12 2010-12-16 Cray Inc. Vector atomic memory operation vector update system and method
US20100318769A1 (en) * 2009-06-12 2010-12-16 Cray Inc. Using vector atomic memory operation to handle data of different lengths

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8826252B2 (en) 2009-06-12 2014-09-02 Cray Inc. Using vector atomic memory operation to handle data of different lengths
US20100318769A1 (en) * 2009-06-12 2010-12-16 Cray Inc. Using vector atomic memory operation to handle data of different lengths
EP2656229A4 (en) * 2010-12-21 2018-04-04 Intel Corporation Mechanism for conflict detection using simd
WO2014105208A1 (en) * 2012-12-27 2014-07-03 Intel Corporation Vectorization of collapsed multi-nested loops
CN104838357A (en) * 2012-12-27 2015-08-12 英特尔公司 Vectorization of collapsed multi-nested loops
CN108369507A (en) * 2015-10-16 2018-08-03 三星电子株式会社 For using the method and apparatus for handling process instruction in memory
WO2017091282A1 (en) * 2015-11-23 2017-06-01 Advanced Micro Devices, Inc. Method and apparatus for performing a parallel search operation
US10528613B2 (en) 2015-11-23 2020-01-07 Advanced Micro Devices, Inc. Method and apparatus for performing a parallel search operation
US11816123B2 (en) 2016-02-19 2023-11-14 Micron Technology, Inc. Data transfer with a bit vector operation device
WO2017142914A1 (en) * 2016-02-19 2017-08-24 Micron Technology, Inc. Data transfer with a bit vector operation device
US10956439B2 (en) 2016-02-19 2021-03-23 Micron Technology, Inc. Data transfer with a bit vector operation device
US9997232B2 (en) 2016-03-10 2018-06-12 Micron Technology, Inc. Processing in memory (PIM) capable memory device having sensing circuitry performing logic operations
US20180294027A1 (en) * 2016-03-10 2018-10-11 Micron Technology, Inc. Apparatuses and methods for logic/memory devices
US11915741B2 (en) 2016-03-10 2024-02-27 Lodestar Licensing Group Llc Apparatuses and methods for logic/memory devices
US10902906B2 (en) * 2016-03-10 2021-01-26 Micron Technology, Inc. Apparatuses and methods for logic/memory devices
US20190296892A1 (en) * 2016-03-10 2019-09-26 Micron Technology, Inc. Apparatuses and methods for logic/memory devices
WO2017155781A1 (en) * 2016-03-10 2017-09-14 Micron Technology, Inc. Apparatuses and methods for logic/memory devices
US10559347B2 (en) * 2016-03-10 2020-02-11 Micron Technology, Inc. Processing in memory (PIM) capable memory device having timing circuitry to control timing of operations
US11594274B2 (en) 2016-03-10 2023-02-28 Micron Technology, Inc. Processing in memory (PIM) capable memory device having timing circuitry to control timing of operations
US11276457B2 (en) 2017-08-31 2022-03-15 Micron Technology, Inc. Processing in memory
US11675538B2 (en) 2017-08-31 2023-06-13 Micron Technology, Inc. Apparatuses and methods for in-memory operations
US11016706B2 (en) 2017-08-31 2021-05-25 Micron Technology, Inc. Apparatuses for in-memory operations
US11163495B2 (en) 2017-08-31 2021-11-02 Micron Technology, Inc. Processing in memory
US10741239B2 (en) 2017-08-31 2020-08-11 Micron Technology, Inc. Processing in memory device including a row address strobe manager
US10628085B2 (en) 2017-08-31 2020-04-21 Micron Technology, Inc. Processing in memory
US10346092B2 (en) 2017-08-31 2019-07-09 Micron Technology, Inc. Apparatuses and methods for in-memory operations using timing circuitry
US11894045B2 (en) 2017-08-31 2024-02-06 Lodestar Licensing Group, Llc Processing in memory implementing VLIW controller
US11586389B2 (en) 2017-08-31 2023-02-21 Micron Technology, Inc. Processing in memory
US10416927B2 (en) 2017-08-31 2019-09-17 Micron Technology, Inc. Processing in memory
US11175915B2 (en) * 2018-10-10 2021-11-16 Micron Technology, Inc. Vector registers implemented in memory
US20200117454A1 (en) * 2018-10-10 2020-04-16 Micron Technology, Inc. Vector registers implemented in memory
US11556339B2 (en) 2018-10-10 2023-01-17 Micron Technology, Inc. Vector registers implemented in memory
US20220413849A1 (en) * 2021-06-28 2022-12-29 Advanced Micro Devices, Inc. Providing atomicity for complex operations using near-memory computing

Also Published As

Publication number Publication date
US8583898B2 (en) 2013-11-12

Similar Documents

Publication Title
US8583898B2 (en) System and method for managing processor-in-memory (PIM) operations
US8458685B2 (en) Vector atomic memory operation vector update system and method
US9513905B2 (en) Vector instructions to enable efficient synchronization and parallel reduction operations
CN112445753B (en) Hardware apparatus and method for prefetching multidimensional blocks of elements from a multidimensional array
JP6159825B2 Solution to divergent branches in a SIMD core using hardware pointers
US5247696A (en) Method for compiling loops having recursive equations by detecting and correcting recurring data points before storing the result to memory
US8438370B1 (en) Processing of loops with internal data dependencies using a parallel processor
US8484443B2 (en) Running multiply-accumulate instructions for processing vectors
US8572355B2 (en) Support for non-local returns in parallel thread SIMD engine
KR102379894B1 (en) Apparatus and method for managing address conflicts when performing vector operations
JP6236093B2 (en) Hardware and software solutions for branching in parallel pipelines
TWI740851B (en) Data processing apparatus, method and computer program for vector load instruction
US20230376292A1 (en) Compile time logic for detecting and resolving memory layout conflicts
US8826252B2 (en) Using vector atomic memory operation to handle data of different lengths
JP2018500659A (en) Dynamic memory contention detection with fast vectors
Hupca et al. Spherical harmonic transform with GPUs
CN110321161B (en) Vector function fast lookup using SIMD instructions
Ciznicki et al. Elliptic solver performance evaluation on modern hardware architectures
Li et al. Automatic FFT performance tuning on OpenCL GPUs
Sørensen Auto‐tuning of level 1 and level 2 BLAS for GPUs
US11366664B1 (en) Single instruction multiple data (simd) execution with variable width registers
Chen et al. OpenCL-based erasure coding on heterogeneous architectures
Lotrič et al. Parallel implementations of recurrent neural network learning
Kaczmarek et al. Conjugate gradient solvers on Intel Xeon Phi and NVIDIA GPUs
US20230305844A1 (en) Implementing specialized instructions for accelerating dynamic programming algorithms

Legal Events

Date Code Title Description
AS Assignment

Owner name: CRAY INC., WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GREYZCK, TERRY D.;REEL/FRAME:022885/0923

Effective date: 20090615

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
CC Certificate of correction
FPAY Fee payment

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8