WO1993023810A1 - Scalable coprocessor - Google Patents

Scalable coprocessor Download PDF

Info

Publication number
WO1993023810A1
Authority
WO
WIPO (PCT)
Prior art keywords
coprocessor
computer system
coupled
actual
bus
Prior art date
Application number
PCT/JP1993/000617
Other languages
French (fr)
Inventor
Sameer Kanagala
Original Assignee
Seiko Epson Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Seiko Epson Corporation filed Critical Seiko Epson Corporation
Priority to JP5520052A priority Critical patent/JPH06509896A/en
Publication of WO1993023810A1 publication Critical patent/WO1993023810A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/10Program control for peripheral devices
    • G06F13/12Program control for peripheral devices using hardware independent of the central processor, e.g. channel or peripheral processor
    • G06F13/124Program control for peripheral devices using hardware independent of the central processor, e.g. channel or peripheral processor where hardware is a sequential transfer control unit, e.g. microprocessor, peripheral processor or state-machine
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14Handling requests for interconnection or transfer
    • G06F13/20Handling requests for interconnection or transfer for access to input/output bus
    • G06F13/28Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal
    • G06F13/287Multiplexed DMA

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Advance Control (AREA)

Abstract

In a computing system, a scalable coprocessor (9) for enhancing communications between a set of central processing units (CPUs) (1) and a set of system resources (2). Scalable coprocessor (9) comprises a single register file (10) compartmentalized into at least two bins, each bin corresponding to a virtual coprocessor channel. Coupled to the register file (10) is a single actual coprocessor (6, 7, 8, 13, 33) for performing operations on the system resources (2). The number of virtual channels can be increased arbitrarily without the need to increase the number of actual channel hardware elements. A set of programmable state machines (11) grants operational authority to the virtual channels in the order desired and for the durations desired. Embodiments of the present invention include a fly-by DMA controller (23), a RAID coprocessor (29), and a striping coprocessor (23).

Description

D E S C R I P T I O N
Title : SCALABLE COPROCESSOR
Technical Field
This invention pertains to the field of computing systems, and in particular to techniques for improving the communications between central processing units and system resources such as input/output controllers and memory.
Background Art
Figure 1 illustrates the conventional method by which central processing units (CPUs) 1 communicate with system resources 2. The computer system comprises a set of m system resources 2, which can include memory and input/output controllers that are in turn coupled to input/output devices. The system comprises at least one CPU 1 for performing computational tasks, running stored programs, and communicating with system resources 2. Figure 1 illustrates a set of n CPUs 1, all of which are coupled to each other via a system bus 4, typically comprising a set of parallel data lines and a set of parallel address lines. System bus 4, for example, might have 32 parallel data lines and 32 parallel address lines. Using binary arithmetic, this is enough to address 4 Gigabytes of data at random. A bus master 34 is coupled to bus 4 and regulates access thereto.
A DMA controller 3 is associated with each system resource 2. All of the DMA controllers 3 are coupled to system bus 4. On each DMA controller 3 is typically a bus arbiter 5, a source address pointer 6, a destination address pointer 7, and a byte counter 8. Bus arbiter 5 is a set of logic that is duplicated on all of the other DMA controllers 3. Bus arbiter 5 stores priority information associated with that system resource 2 and indicates to bus master 34 that one of the CPUs 1 wishes to activate the associated system resource 2. Bus master 34 then determines which of the DMA controllers 3 will be given authorization to become operational. Only one DMA controller 3 can be operational at any one time.
The way that a CPU 1 communicates with a system resource 2 is for CPU 1 to place, typically, three pieces of information into the associated DMA controller 3: the source address (the address where the data that are the subject of the communication are to be found) is stored in source address pointer 6; the address of the destination (the location where the data are to be sent) is stored in destination address pointer 7; and the number of bytes desired to be moved is stored in byte counter 8. During the operational period, byte counter 8 decrements once during each processing (clock) cycle until the count stored therewithin reaches zero, at which point it is known that all of the data have been moved. If CPU 1 wishes to perform operations such as arithmetic, logic, or shift operations on the data in addition to simply moving them, this task can be performed by CPU 1 during the operational cycle. As will be seen, the present invention offers a much more efficient means for handling communications with the system resources 2.
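The three-register programming model described above (source pointer, destination pointer, byte counter decremented once per cycle) can be sketched as a small simulation. This is an illustrative model with hypothetical names, not the patent's hardware:

```python
class DMAController:
    """Toy model of the prior-art DMA controller of Figure 1:
    one source pointer (6), one destination pointer (7), one byte counter (8)."""

    def __init__(self, memory):
        self.memory = memory          # shared byte-addressable memory (a list)
        self.source = 0               # source address pointer
        self.destination = 0          # destination address pointer
        self.byte_count = 0           # byte counter

    def program(self, source, destination, byte_count):
        # The CPU writes the three pieces of information, then is free.
        self.source = source
        self.destination = destination
        self.byte_count = byte_count

    def clock(self):
        # One byte moves per clock cycle; the counter decrements toward zero.
        if self.byte_count > 0:
            self.memory[self.destination] = self.memory[self.source]
            self.source += 1
            self.destination += 1
            self.byte_count -= 1
        return self.byte_count == 0   # True when the transfer is complete


mem = list(b"hello...") + [0] * 5
dma = DMAController(mem)
dma.program(source=0, destination=8, byte_count=5)
while not dma.clock():
    pass                              # clock until the byte counter hits zero
assert bytes(mem[8:13]) == b"hello"
```

The prior-art cost is visible here: every system resource needs its own copy of these registers plus an arbiter, which is exactly the duplication the scalable coprocessor removes.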
- 2 - "RAID Coprocessor", a data sheet by Extended Systems
(date unknown) , describes a device that, as does XOR pipeline 30 of the present invention, frees a CPU from parity calculations in a RAID environment. However, this device can process only one block of data at a time, whereas the present invention can process multiple blocks of data simultaneously.
Disclosure of Invention
The present invention is a computer system comprising at least one central processing unit (CPU) (1) capable of performing operations on data stored within a set of system resources (2). A scalable coprocessor (9) is coupled to the CPUs (1) and the system resources (2) via a system bus (4). Within the scalable coprocessor (9) and coupled to the CPUs (1) is a single register file (10) that is compartmentalized into at least two bins, each bin corresponding to a virtual coprocessor channel. Coupled to the register file (10) is a single actual coprocessor (6, 7, 8, 13, 33) for performing operations on the system resources (2). Coupled to the register file (10) is a means (11) for apportioning the operational cycles of the actual coprocessor among the set of virtual coprocessor channels.
The present invention offers the following major advantages over the prior art:
1. The number of virtual channels can be increased (scaled) without the need to increase the number of actual channel hardware elements (such as items 5, 6, 7 and 8 of the prior art), since there is but one scalable coprocessor (9).
2. Fewer routing resources mean consistent and high-speed routing across all virtual channels and higher system clock speeds.
3. More efficient use is made of the system bus (4), because the virtual channels operate in a time division multiplex mode, with fairness built in.
4. The means (11) for arbitrating which virtual channels gain operational access and for how long is programmable.
5. Dynamic updating of the pointer memory array (register file 10) by the host CPUs (1) without any unnecessary timing restrictions.
6. A more efficient way of performing scatter/gather operations (writes and reads, respectively, when the data are fragmented among many blocks), because scatter/gather is performed in parallel rather than sequentially as has been conventional.
Brief Description of the Drawings
These and other more detailed and specific objects and features of the present invention are more fully disclosed in the following specification, reference being had to the accompanying drawings, in which:
Figure 1 is a block diagram of the conventional prior art technique of communicating among CPUs 1 and system resources 2;
Figure 2 is a block diagram of scalable coprocessor 9 of the present invention;
Figure 3 is a block diagram of state machines 11 of scalable coprocessor 9 of the present invention;
Figure 4 is a sketch of amplitude versus time for gas pedal signal 15 of the present invention;
Figure 5 is a block diagram of a fly-by DMA controller 23 embodiment of the present invention; and
Figure 6 is a block diagram of a RAID coprocessor 29 embodiment of the present invention.
Best Mode for Carrying Out The Invention
Figure 2 is a block diagram of a general embodiment of scalable coprocessor 9 of the present invention. The environment is a computer system which comprises at least one central processing unit (CPU) 1. Each CPU 1 can be any active processing element capable of performing an operation on a set of system resources 2. n CPUs 1 are illustrated in Figure 2. CPUs 1 are all coupled together via the same system bus 4. For example, system bus 4 may have 32 parallel data lines and 32 parallel address lines. This is enough to address four Gigabytes of data at random.
By using this invention, each CPU 1 can submit multiple I/O requests simultaneously. For example, a CPU 1 can be a file server that asks for hundreds of files simultaneously. System resources 2 are likewise coupled together via the same system bus 4. Only one operation can be performed on system bus 4 at any given time. System resources 2 comprise, typically, memory and input/output controllers that are in turn
coupled to input/output devices such as disk drives, tape drives, CD ROMs, Bernoulli boxes, etc.
There is needed only one scalable coprocessor 9 in the computer system. This eliminates the duplication of devices and data paths that was common in the prior art, such as that illustrated in Figure 1. Coprocessor 9 communicates with system resources 2 at speeds approaching memory bandwidth, e.g., 66 MBps in embodiments that have been built, rather than at slower CPU 1 bandwidths. Scalable coprocessor 9 is coupled to CPUs 1 via system bus 4 and comprises register file 10, a set of state machines 11, and an actual coprocessor comprising at least one address pointer (e.g., source address pointer 6 and destination address pointer 7), byte counter 8, buffer 13, and (optionally) logic unit 33. Pointers 6 and 7 are storage devices such as registers or programmable counters.
Register file 10 is a storage device such as a random access memory (RAM) that has been compartmentalized in advance into an arbitrary number of p+1 storage areas or bins corresponding to the number of virtual coprocessor channels that the user of coprocessor 9 has decided to set up. For example, p+1 may correspond to the number of blocks of data that are known to be present in the computer system. Each bin or channel stores at least three pieces of information: the beginning address of the data that are to be used as the source for an operation, the beginning address of the desired destination for the data after the operation has been performed, and the size of the data that are the subject
of the operation. Additionally, information can be stored in the channel indicating the type of arithmetic, logic, or shift operation desired to be performed on the data.
The set of state machines 11 determines which channels are allowed to become operational in which order and for how long, based upon a programmable arbitration scheme stored within state machines 11. When a given channel is allowed to become operational, the stored source address from that channel is placed into source address pointer 6 via pointer bus 12, the destination address is placed into destination address pointer 7 via pointer bus 12, and the byte count is placed into byte counter 8 via pointer bus 12. Additionally, the stored arithmetic, logic, or shift instructions, if any, are placed into optional logic unit 33 over pointer bus 12. As coprocessor 9 is clocked through its normal operational cycles by a conventional clock (not shown), e.g., at the rate of 66 Megahertz, the desired operations are performed. The source data are addressed by means of source address pointer 6 placing the source address over system bus 4. The destination address is similarly accessed by pointer 7, again using system bus 4. If it is desired to perform an arithmetic, logic, or shift operation on the data and not just simply move them, logic unit 33 is invoked. Finally, byte counter 8 is decremented once per cycle. Typically, state machines 11 allow each channel to perform a finite number of operations (corresponding to a given finite number of clock cycles) per operational authorization. 128 cycles is a typical number. This may or may not be enough
cycles to permit the channel to complete its assigned tasks. If it is enough, byte counter 8 decrements to 0 and state machines 11 pass control to the next channel. If it is not a sufficient number of cycles, the status of items 6, 7, 8, and 33 is passed over update bus 14 via buffer 13 and back to register file 10 over pointer bus 12, so that the next time the particular channel is granted operational authorization, it can resume where it was interrupted. Buffer 13 is needed because one of the CPUs 1 may be trying to initialize another channel within register file 10 at the same time that the updated information is being sent back to register file 10 over update bus 14. Thus, buffer 13 prevents collisions of inbound and outbound information.
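The load/run/save cycle of a virtual channel can be sketched as follows. This is a software model under assumed names; in the patent the context lives in hardware bins of register file 10 and the save path is buffer 13 and update bus 14:

```python
# Each bin holds one virtual channel's context. A single set of actual
# registers (pointers + byte counter) is shared: context is loaded from a
# bin, run for a quantum of cycles, and the updated context is saved back
# if the transfer did not finish, so the channel resumes where it stopped.

QUANTUM = 128  # typical operations per authorization, per the text

def run_channel(memory, bin_, quantum=QUANTUM):
    """Load a bin into the actual coprocessor, perform up to `quantum`
    byte moves, and return the updated context (written back to the bin)."""
    src, dst, count = bin_["src"], bin_["dst"], bin_["count"]
    for _ in range(min(quantum, count)):
        memory[dst] = memory[src]                 # one byte move per cycle
        src, dst, count = src + 1, dst + 1, count - 1
    return {"src": src, "dst": dst, "count": count}

memory = list(range(600)) + [0] * 600
bins = [
    {"src": 0,   "dst": 600, "count": 300},       # virtual channel 0
    {"src": 300, "dst": 900, "count": 200},       # virtual channel 1
]
# Round-robin until every channel's byte counter reaches zero.
while any(b["count"] for b in bins):
    bins = [run_channel(memory, b) if b["count"] else b for b in bins]
assert memory[600:900] == list(range(300))
assert memory[900:1100] == list(range(300, 500))
```

Note how adding a third channel is just another dictionary (another bin), with no new "hardware": that is the scaling property the patent claims.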
Figure 3 illustrates in more detail the set of state machines 11. The first state machine is a programmable arbiter 20, which may comprise random access memory plus associated logic devices. Arbiter 20 stores the programmable scheme for arbitrating which channels are given operational access and for how long. The programming scheme may entail the use of gate arrays, EEPROMs, fuses/anti-fuses, etc. The arbitration scheme may be any one of a number of techniques, such as round robin (cycling through the channels in order and then repeating at the zeroth channel), priority (giving authorization only to channels which are flagged with certain priority bytes or giving flagged channels a greater number of operational cycles than channels not flagged), etc.
A set of channel lines CH is input into programmable arbiter 20. These lines can originate from register file 10
(as illustrated) or from system resources 2. The purpose of these lines is to indicate whether the associated channels are active (desirous of performing operations on system resources 2) or not. The output of arbiter 20 is a line conveying the number of the channel which is being granted operational access at any given time. This signal is fed to channel initialize and update module 21. One of the outputs of module 21 is a register file address index, which informs register file 10 which channel is being given operational authorization. The length of this index is variable, depending upon the number of channels. For example, if there are eight channels, this index requires three bits.
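The index-width relationship above (eight channels needing three bits) is simply the ceiling of log2 of the channel count, and a round-robin grant over the CH lines can be sketched as follows. This is a hypothetical software model of arbiter 20, not its actual logic:

```python
import math

def index_bits(num_channels):
    """Width of the register-file address index for a given channel count."""
    return max(1, math.ceil(math.log2(num_channels)))

def round_robin_grant(ch_lines, last_granted):
    """Return the next active channel after `last_granted`, cycling back
    to channel 0 (the zeroth channel) when the end is reached.
    `ch_lines` models the CH lines: True = channel requests access."""
    n = len(ch_lines)
    for offset in range(1, n + 1):
        candidate = (last_granted + offset) % n
        if ch_lines[candidate]:
            return candidate
    return None  # no channel is requesting operational access

assert index_bits(8) == 3                       # the example from the text
active = [False, True, False, True, False, False, False, True]
assert round_robin_grant(active, last_granted=1) == 3
assert round_robin_grant(active, last_granted=7) == 1
```

A priority scheme would differ only in the selection rule (pick the highest-priority requester, or grant flagged channels a longer quantum), which is why the patent can make the policy programmable without touching the datapath.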
The other output of module 21 is a channel ready signal. This signal is fed to gas pedal module 22. The composite output of gas pedal 22 is a square wave 15, whose amplitude versus time is illustrated in Figure 4. When gas pedal signal 15 is high, this indicates the presence of a throttle (operational) period 18, i.e., one in which operations on the data are being performed by actual coprocessor 6, 7, 8, 33. When gas pedal signal 15 is low, this indicates the presence of an idle period 19 during which no virtual channel is allowed to be operational; instead, CPUs 1 rather than coprocessor 9 are given access to system bus 4. The durations of the throttle and idle periods 18, 19 are variable and programmable in advance.
The transition between an idle period 19 and a throttle period 18 is denominated as a wakeup signal 16 and is passed to arbiter 20, which induces arbiter 20 to perform a new
designation of authorized channel. The transition from a throttle period 18 to an idle period 19 is denominated as a tired signal 17, and is passed to arbiter 20 (commanding it to designate no channel as the designated channel). Tired signal 17 is also passed to channel initialize and update module 21, inducing module 21 to instruct items 33, 6, 7, and 8 to send their status back to register file 10. This information becomes the beginning status for the next throttle period 18 the next time that particular channel is given operational authorization by state machines 11.
Figure 5 illustrates a specific embodiment of the present invention: a fly-by DMA controller 23. This embodiment of scalable coprocessor 9 is used in conjunction with non-addressable devices such as input/output controllers 25. Since these devices are non-addressable, one of the pointers 6, 7 from the general embodiment can be eliminated. Thus, a single input/output address pointer 24 is used to indicate where in memory 26 data to be read from or written to I/O controller 25 are stored. Pointer 24 points to the beginning location in memory 26 where the data are to be read from or written to. A separate request line 28 and acknowledge line 27 connects pointer 24 with each I/O controller 25. A signal is sent by I/O controller 25 over request line 28 to address pointer 24, asking DMA controller 23 to start each byte transfer. Similarly, a signal is sent from pointer 24 over acknowledge line 27 to I/O controller 25 for each byte that is transferred, signaling that system bus 4 is available to perform the read or write operation. Byte counter 8 decrements
for each transferred byte. This process continues until the byte count for the virtual channel becomes zero, at which time an interrupt is issued to the associated CPU 1 if required. The data move directly from memory 26 to the I/O controller 25 over system bus 4, without going through controller 23. This is characteristic of a fly-by DMA controller, as opposed to a flow-through DMA controller.
A second embodiment of the present invention is illustrated in Figure 6: a RAID (redundant array of inexpensive disks) coprocessor 29. In this application there are m-1 equally sized blocks of memory 26, where m-1 is at least 2. The output is a set of m equally sized blocks of data that are written to a set of disk controllers 2. The mth block is a byte-by-byte parity check on the first m-1 blocks. This permits fault tolerant processing: if any one block fails, including the parity block, all of the data can be reconstructed from the remaining blocks.
In this embodiment, register file 10 contains but a single destination pointer, because the writing onto the m disk controllers 2 is automatically partitioned equally among the m controllers 2. Exclusive OR (XOR) pipeline 30 is a special case of logic unit 33. Every time a source block of data is read in from a memory 26, an exclusive OR (XOR) is performed byte by byte, e.g., 8 bits by 8 bits even when the words are 32 bits long. After all of the m-1 blocks of data have been read in, XOR pipeline 30 contains the parity block. This is written to the destination controller 2 (m) along with the other m-1 blocks of data, which are written to the first m-1 controllers 2. Pipeline 30 communicates with the memories 26 and disk controllers 2 over the data lines subset 31 of system bus 4. Similarly, address pointer 24 communicates with memories 26 and disk controllers 2 over the address lines subset 32 of system bus 4. The byte count stored in register file 10 is the same for each source 26, because each block of input data has the same number of bytes. State machines 11 give operational authority to the m-1 sources 26 sequentially.
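The byte-by-byte XOR parity accumulated by pipeline 30 can be sketched in software as follows. The function names are illustrative, but the arithmetic is exactly the RAID property the text describes: losing any one of the m blocks, including the parity block, is recoverable by XORing the survivors.

```python
from functools import reduce

def xor_block(a, b):
    """Byte-by-byte exclusive OR of two equally sized blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(blocks):
    """Given m-1 equally sized source blocks, return m blocks where the
    mth is the running XOR of the others (the role of XOR pipeline 30)."""
    parity = reduce(xor_block, blocks)
    return list(blocks) + [parity]

def reconstruct(blocks, lost_index):
    """Rebuild one missing block from the m-1 surviving blocks."""
    survivors = [b for i, b in enumerate(blocks) if i != lost_index]
    return reduce(xor_block, survivors)

data = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]
striped = add_parity(data)                         # m = 4 blocks; last is parity
assert reconstruct(striped, lost_index=1) == data[1]     # any data block recovers
assert reconstruct(striped, lost_index=3) == striped[3]  # the parity block too
```

The recovery works because XOR is its own inverse: parity XORed with all surviving data blocks cancels everything except the lost block.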
A third embodiment of the present invention is a striping coprocessor. In a hardware sense, it is the same chip as the fly-by DMA controller 23, illustrated in Figure 5. The striping coprocessor 23 takes as inputs the m blocks of data that have been written onto the disk controllers 2 by the RAID coprocessor 29 and notionally (via software) stripes these blocks of data onto m I/O controllers 25. Striping is an intentional scatter, i.e., the data are fragmented into m equally sized blocks. The number of stripes and their widths are based upon hardware considerations.
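The intentional scatter into m equally sized fragments can be sketched as below. This is a simplified contiguous split under assumed names; real striping implementations often interleave round-robin in fixed stripe units, which the patent leaves to hardware considerations:

```python
def stripe(data, m):
    """Scatter `data` into m equally sized blocks (simplified software
    model of the scatter performed by the striping coprocessor)."""
    assert len(data) % m == 0, "blocks must be equally sized"
    width = len(data) // m
    return [data[i * width:(i + 1) * width] for i in range(m)]

def gather(blocks):
    """The inverse operation: reassemble the fragments in order."""
    return b"".join(blocks)

payload = bytes(range(12))
blocks = stripe(payload, m=4)        # four stripes of three bytes each
assert blocks[0] == bytes([0, 1, 2])
assert gather(blocks) == payload
```

Because each fragment can be written through its own virtual channel, the m transfers proceed in parallel under the time-division scheme rather than sequentially, which is advantage 6 listed in the disclosure.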
Devices illustrated in Figures 5 and 6 have been built using FPGA (field programmable gate array) technology, specifically the 4000 series architecture of Xilinx Corporation. Other suitable construction techniques include printed circuit boards and ASICs (application specific integrated circuits). The above description is included to illustrate the operation of the preferred embodiments and is not meant to limit the scope of the invention. The scope of the invention is to be limited only by the following claims. From the above discussion, many variations will be apparent to one skilled in the art that would yet be encompassed by the spirit and scope of the invention.
What is claimed is:

Claims

C L A I M S
1. A computer system comprising: at least one central processing unit (CPU) capable of performing operations on data stored within a set of system resources; and a scalable coprocessor coupled to the CPUs and to the system resources, said scalable coprocessor comprising: coupled to the CPUs via a system bus, a single register file compartmentalized into at least two bins, each bin corresponding to a virtual coprocessor channel; coupled to the register file and to the system resources, a single actual coprocessor for performing operations on the system resources; and coupled to the register file, means for apportioning the operational time of the actual coprocessor among the set of virtual coprocessor channels.
2. The computer system of claim 1 wherein the system resources comprise memory and input/output device controllers.
3. The computer system of claim 1 wherein the number of bins is variable and is preselected.
4. The computer system of claim 1 wherein the actual coprocessor comprises means for performing any combination of any arithmetic, logic, and shift operations on multiple blocks of data within the system resources.
5. The computer system of claim 1 wherein the apportioning means comprises means for determining the order and duration for which the virtual coprocessor channels are given operational authorization.
6. The computer system of claim 5 wherein the determining means comprises a programmable throttle which sets the number of operations each virtual channel is allowed to perform during each operational authorization.
7. The computer system of claim 6 wherein idle periods are interspersed between throttle (operational) periods; and the system bus is free to be used by the CPUs during the idle periods.
8. The computer system of claim 1 wherein the actual coprocessor is a fly-by DMA controller.
9. The computer system of claim 1 wherein the actual coprocessor is a RAID (redundant array of inexpensive disks) coprocessor.
10. The computer system of claim 1 wherein the actual coprocessor is a striping coprocessor.
11. The computer system of claim 1 wherein the actual coprocessor comprises a byte counter coupled to the register file via a pointer bus, at least one address pointer coupled to the register file via the pointer bus, and a buffer coupled to the address pointer(s), the byte counter, and the pointer bus.
PCT/JP1993/000617 1992-05-12 1993-05-11 Scalable coprocessor WO1993023810A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP5520052A JPH06509896A (en) 1992-05-12 1993-05-11 scalable coprocessor

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US88129992A 1992-05-12 1992-05-12
US07/881,299 1992-05-12

Publications (1)

Publication Number Publication Date
WO1993023810A1 true WO1993023810A1 (en) 1993-11-25

Family

ID=25378191

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP1993/000617 WO1993023810A1 (en) 1992-05-12 1993-05-11 Scalable coprocessor

Country Status (2)

Country Link
JP (1) JPH06509896A (en)
WO (1) WO1993023810A1 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2714747A1 (en) * 1993-11-01 1995-07-07 Ericsson Ge Mobile Communicat Device for controlling shared access to a data memory in a multiprocessor system.
EP0772131A3 (en) * 1995-11-03 1998-02-04 Sun Microsystems, Inc. Method and apparatus for support of virtual channels for the transfer of data
EP1141843A1 (en) * 1998-10-19 2001-10-10 Intel Corporation Raid striping using multiple virtual channels
WO2002015470A2 (en) * 2000-08-17 2002-02-21 Advanced Micro Devices, Inc. System and method for separate virtual channels for posted requests in a multiprocessor system
US6888843B2 (en) 1999-09-17 2005-05-03 Advanced Micro Devices, Inc. Response virtual channel for handling all responses
US6938094B1 (en) 1999-09-17 2005-08-30 Advanced Micro Devices, Inc. Virtual channels and corresponding buffer allocations for deadlock-free computer system operation
US6950438B1 (en) 1999-09-17 2005-09-27 Advanced Micro Devices, Inc. System and method for implementing a separate virtual channel for posted requests in a multiprocessor computer system
US7089344B1 (en) * 2000-06-09 2006-08-08 Motorola, Inc. Integrated processor platform supporting wireless handheld multi-media devices
GB2433611A (en) * 2005-12-21 2007-06-27 Advanced Risc Mach Ltd DMA controller with virtual channels
EP2324430A1 (en) * 2008-08-06 2011-05-25 Aspen Acquisition Corporation Haltable and restartable dma engine

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000250847A (en) * 1999-02-26 2000-09-14 Nec Corp Data transfer system
JP4499008B2 (en) * 2005-09-15 2010-07-07 富士通マイクロエレクトロニクス株式会社 DMA transfer system
KR102259970B1 (en) * 2017-10-13 2021-06-02 주식회사 엘지에너지솔루션 Apparatus for scheduling of data input

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2211325A (en) * 1987-10-16 1989-06-28 Ziitt DMA controller
EP0365116A2 (en) * 1988-10-18 1990-04-25 Hewlett-Packard Limited Buffer memory arrangement
WO1991011767A1 (en) * 1990-02-02 1991-08-08 Auspex Systems, Inc. High speed, flexible source/destination data burst direct memory access controller
EP0482819A2 (en) * 1990-10-23 1992-04-29 Emc Corporation On-line reconstruction of a failed redundant array system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2211325A (en) * 1987-10-16 1989-06-28 Ziitt DMA controller
EP0365116A2 (en) * 1988-10-18 1990-04-25 Hewlett-Packard Limited Buffer memory arrangement
WO1991011767A1 (en) * 1990-02-02 1991-08-08 Auspex Systems, Inc. High speed, flexible source/destination data burst direct memory access controller
EP0482819A2 (en) * 1990-10-23 1992-04-29 Emc Corporation On-line reconstruction of a failed redundant array system

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2714747A1 (en) * 1993-11-01 1995-07-07 Ericsson Ge Mobile Communicat Device for controlling shared access to a data memory in a multiprocessor system.
US5598575A (en) * 1993-11-01 1997-01-28 Ericsson Inc. Multiprocessor data memory sharing system in which access to the data memory is determined by the control processor's access to the program memory
EP0772131A3 (en) * 1995-11-03 1998-02-04 Sun Microsystems, Inc. Method and apparatus for support of virtual channels for the transfer of data
US5875352A (en) * 1995-11-03 1999-02-23 Sun Microsystems, Inc. Method and apparatus for multiple channel direct memory access control
KR100680633B1 (en) * 1998-10-19 2007-02-09 인텔 코포레이션 Raid striping using multiple virtual channels
EP1141843A4 (en) * 1998-10-19 2005-02-23 Intel Corp Raid striping using multiple virtual channels
EP1141843A1 (en) * 1998-10-19 2001-10-10 Intel Corporation Raid striping using multiple virtual channels
US6888843B2 (en) 1999-09-17 2005-05-03 Advanced Micro Devices, Inc. Response virtual channel for handling all responses
US6938094B1 (en) 1999-09-17 2005-08-30 Advanced Micro Devices, Inc. Virtual channels and corresponding buffer allocations for deadlock-free computer system operation
US6950438B1 (en) 1999-09-17 2005-09-27 Advanced Micro Devices, Inc. System and method for implementing a separate virtual channel for posted requests in a multiprocessor computer system
US7089344B1 (en) * 2000-06-09 2006-08-08 Motorola, Inc. Integrated processor platform supporting wireless handheld multi-media devices
WO2002015470A2 (en) * 2000-08-17 2002-02-21 Advanced Micro Devices, Inc. System and method for separate virtual channels for posted requests in a multiprocessor system
WO2002015470A3 (en) * 2000-08-17 2003-02-27 Advanced Micro Devices Inc System and method for separate virtual channels for posted requests in a multiprocessor system
GB2433611A (en) * 2005-12-21 2007-06-27 Advanced Risc Mach Ltd DMA controller with virtual channels
EP2324430A1 (en) * 2008-08-06 2011-05-25 Aspen Acquisition Corporation Haltable and restartable dma engine
EP2324430A4 (en) * 2008-08-06 2012-07-25 Aspen Acquisition Corp Haltable and restartable dma engine
US8732382B2 (en) 2008-08-06 2014-05-20 Qualcomm Incorporated Haltable and restartable DMA engine

Also Published As

Publication number Publication date
JPH06509896A (en) 1994-11-02

Similar Documents

Publication Publication Date Title
US5333305A (en) Method for improving partial stripe write performance in disk array subsystems
US7512751B2 (en) Method and apparatus for adjusting timing signal between media controller and storage media
US5655151A (en) DMA controller having a plurality of DMA channels each having multiple register sets storing different information controlling respective data transfer
US5909691A (en) Method for developing physical disk drive specific commands from logical disk access commands for use in a disk array
US5206943A (en) Disk array controller with parity capabilities
EP0768607B1 (en) Disk array controller for performing exclusive or operations
EP0550164B1 (en) Method and apparatus for interleaving multiple-channel DMA operations
CA1150846A (en) Multiprocessor system for processing signals by means of a finite number of processes
US5553307A (en) Method and device for transferring noncontiguous blocks in one transfer start by creating bit-map indicating which block is to be transferred
CA2029199A1 (en) Bus master command protocol
WO1993023810A1 (en) Scalable coprocessor
US4916647A (en) Hardwired pipeline processor for logic simulation
KR910017296A (en) Method and apparatus for implementing multi-master bus pipelining
US4873656A (en) Multiple processor accelerator for logic simulation
US5127088A (en) Disk control apparatus
JP2539058B2 (en) Data processor
EP0825534A2 (en) Method and apparatus for parity block generation
JPH0728758A Method and device for dynamic time loop arbitration
EP0192366A2 (en) Apparatus and method for improving system bus performance in a data processng system
CN1061153C Bus arbitration between input/output device and processing device including first-in first-out type write-in buffer
JPH06511099A (en) How to perform disk array operations using a non-uniform stripe size mapping scheme
US5875458A (en) Disk storage device
US5375217A (en) Method and apparatus for synchronizing disk drive requests within a disk array
WO1991001021A1 (en) Method and circuit for programmable element sequence selection
CN1049751C (en) Virtual array type access device of direct memory

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase