US20020062415A1 - Slotted memory access method - Google Patents

Slotted memory access method

Info

Publication number
US20020062415A1
US20020062415A1 (Application US09/956,179)
Authority
US
United States
Prior art keywords
data
data bus
network node
aggregate
access
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US09/956,179
Inventor
Linghsiao Wang
Yeong Wang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zarlink Semiconductor VN Inc
Original Assignee
Zarlink Semiconductor VN Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zarlink Semiconductor VN Inc filed Critical Zarlink Semiconductor VN Inc
Priority to US09/956,179
Assigned to ZARLINK SEMICONDUCTOR V. N. INC. reassignment ZARLINK SEMICONDUCTOR V. N. INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: WANG, LINGHSIAO, WANG, YEONG
Priority to CA 2357582 (CA2357582A1)
Publication of US20020062415A1
Abandoned legal-status Critical Current

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 - Routing or path finding of packets in data switching networks
    • H04L 45/56 - Routing software
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 - Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/14 - Handling requests for interconnection or transfer
    • G06F 13/16 - Handling requests for interconnection or transfer for access to memory bus
    • G06F 13/1605 - Handling requests for interconnection or transfer for access to memory bus based on arbitration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 - Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 - Information transfer, e.g. on bus
    • G06F 13/40 - Bus structure
    • G06F 13/4004 - Coupling between buses
    • G06F 13/4027 - Coupling between buses using bus bridges

Definitions

  • the invention relates to data switching, and in particular to methods of scheduling access to a shared memory store in switching Protocol Data Units (PDUs) at a data switching node.
  • Data switching nodes are multi-ported data network nodes which forward Protocol Data Units (PDUs) between input data ports and output data ports.
  • the basic operation of a data switching node includes: receiving at least one PDU, storing the PDU in memory while an appropriate output port is determined, scheduling the PDU for transmission, and transmitting the PDU via the determined output port.
  • This is known in the industry as a “store and forward” method of operation of data switching nodes. Although in principle the operation of data switching nodes is simple, the implementation of such devices is not trivial.
  • PDUs include and are not limited to: cells, frames, packets, etc. Each PDU has a size. Cells have a fixed size while frames and packets may vary in size. Each PDU has associated header information having specifiers holding information used in forwarding the PDU towards a destination data network node in the associated data transport network. The header information is consulted at data switching nodes in determining an output port via which to forward the PDU towards the destination data network node.
  • FIG. 1 is a schematic diagram showing a general design of a data switching node 100 .
  • the data switching node 100 is a multi-ported device storing PDUs in a shared memory buffer 104 , and forwarding PDUs between N physical ports 102 .
  • Each physical port 102 is adapted to receive and transmit data via an associated physical link 118 as shown at 112 .
  • Each physical port 102 has an associated physical data transfer rate.
  • although the physical ports 102 may receive and transmit continuously, the data has a structure defined by the PDUs conveyed.
  • the physical ports 102 can be configured to convey empty PDUs in accordance with the specification of the data transfer protocol(s) employed by the physical port 102 .
  • Data transfer rates below the physical data transfer rate of a physical port 102 may also be obtained by inserting empty PDUs in the data stream conveyed therethrough.
  • Data transfer rates above the physical data transfer rate of a physical port 102 can be obtained through inverse multiplexing over a plurality of physical ports 102 , the plurality of physical ports 102 collectively defining a logical data port (not shown).
  • Physical ports 102 can be configured to convey data adhering to more than one data transfer protocol.
  • Each physical port 102 has access to the shared memory buffer 104 via a data bus 106 in a coordinated manner enforced by an arbiter 108 . Also accessing the shared memory buffer 104 is a PDU classifier 110 .
  • the PDU classifier 110 operates on header information of PDUs pending processing.
  • the PDU classifier 110 inspects a body of routing information (switching database) also stored in the shared memory buffer 104 and in so doing requires a portion of the bandwidth on the data bus 106 .
  • the sharing of data storage resources for PDU buffering and for storing routing information consolidates the memory storage requirements and simplifies the design of the data switching node 100 leading to reduced implementation costs.
  • the data bus 106 also has an associated physical data transfer rate which is typically different from the data transfer rates of the physical ports 102 .
  • the conveyed PDUs are buffered in receive 114 and transmit 116 buffers.
  • Physical port 102 designs having an adjustable physical data transfer rate exist. As such, the data switching node 100 besides being multi-ported, can be configured to accommodate interfaces having varied physical data transfer rates while adhering to multiple data transfer protocols.
  • the core of the data switching node 100 includes processes and hardware enabling the conveyance of PDUs between receive buffers 114 , shared memory buffer 104 and transmit buffers 116 .
  • a design requirement is that the data switching node 100 be able to process and convey PDUs such that all physical ports 102 receive and transmit simultaneously at their full physical transfer rates.
  • a PDU is buffered in the receive buffer 114 associated with the input physical port 102 .
  • a switching process implemented by the data switching node 100 determines at least one output port 102 via which to forward the PDU towards its destination data network node.
  • the PDU classifier 110 makes a determination whether the PDU is a unicast PDU or a multicast PDU. Multiple output ports 102 may be determined for multicast PDUs.
  • the PDU is buffered in the corresponding transmit buffer(s) 116 awaiting transmission over the corresponding physical link(s) 118 .
  • the corresponding physical port 102 uses a corresponding Direct Memory Access (DMA) device 120 to access ( 122 ) the shared memory buffer 104 via the data bus 106 .
  • the DMA device 120 upon gaining access to the shared memory buffer 104 , writes at least one PDU thereto for further processing.
  • the PDU classifier 110 also uses another corresponding DMA device 120 to access ( 124 ) the shared memory buffer 104 in processing pending PDUs.
  • the PDU classifier 110 makes use of the header information associated with each PDU and consults ( 124 ) routing information in determining the output port(s) 102 to forward the PDU to.
  • the PDU classifier 110 may also modify ( 124 ) the body of routing information held in the shared memory buffer 104 in which case the DMA device 120 writes ( 124 ) to the shared memory buffer 104 . Modifications of the routing information are necessary in establishing new data transport routes in the data transport network for data sessions and the associated data transfers.
  • a DMA device 120 corresponding to at least one of the determined output ports 102 loads ( 122 ) the PDU from the shared memory buffer 104 into the corresponding transmit buffer 116 where the PDU awaits transmission over the physical link 118 . If the PDU is to be multicasted, additional DMA devices 120 corresponding to the other determined output ports 102 load ( 122 ) the PDU into corresponding transmit buffers 116 .
  • as the DMA devices 120 are used to perform PDU data transfers 122 as well as data transfers 124 in accessing routing information, the DMA devices 120 contend for the data bus 106.
  • the DMA devices 120 communicate with the arbiter 108 via an access control bus 128 .
  • the access control bus 128 is integral to the data bus 106 .
  • the arbiter 108 enforces a controlled access 130 to the data bus 106 in coordinating the access to shared memory buffer 104 for all DMA devices 120. Coordinating the access to the data bus 106 efficiently for all DMA devices 120 is essential in attaining efficient performance of the data switching node 100.
  • round-robin or weighted round-robin data bus access arbitration techniques are used in scheduling access to the shared memory buffer 104 via the data bus 106 .
  • the DMA devices 120 issue requests ( 126 ) for memory access cycles via the access control bus 128 to the arbiter 108 .
  • the arbiter 108 issues grant responses ( 126 ) to the DMA devices 120 to take over the data bus 106 for a memory access cycle to perform read, write, and modify operations on the shared memory buffer 104 .
  • the exchange of a request for a memory access cycle and the corresponding grant response is known in the field as request-grant handshaking.
  • the need for request-grant handshaking is a disadvantage limiting data switching performance of the data switching node 100 .
  • the handshaking steps themselves use clock cycles in conveying requests and grant responses over the access control bus 128 leading to increased memory access cycle times i.e. longer reads, longer writes, and longer modifies.
  • favoring large data transfers has a detrimental effect in data switching environments in which PDUs have variable sizes by favoring the conveyance of large size PDUs to and from the shared memory buffer 104 .
  • This adds a delay in processing comparatively short PDUs.
  • the effects of the incurred delay compound because shorter PDUs have a larger ratio of processing overhead to PDU size, so a proportionally larger share of the processing overhead is delayed.
  • Round-robin techniques are found to be well suited for arbitrating data bus access for the conveyance of fixed size PDUs at data switching nodes 100 having physical ports 102 conveying data at the same physical data transfer rate.
  • a data network node processing data is presented.
  • Components of the data network node include a divided aggregate data bus, a divided shared memory store accessed via the aggregate data bus and a plurality of data bus connected devices.
  • a deterministic data bus arbitration schedule is used to apportion an aggregate bandwidth of the aggregate data bus to the plurality of data bus connected devices.
  • the deterministic data bus arbitration schedule specifies grouping read memory access cycles sequentially and grouping write memory access cycles sequentially such that the number of changes between read memory access cycles and write memory access cycles is reduced.
  • a method of arbitrating access to a divided aggregate data bus for a plurality of data bus connected devices includes a sequence of steps. Each stream of data to be conveyed via the aggregate data bus is divided into an aggregate stream of data granules. The access to the aggregate data bus is coordinated according to a deterministic data bus arbitration schedule. The aggregate stream of data granules is conveyed over the aggregate data bus in accordance with the deterministic data bus arbitration schedule.
  • the advantages are derived from eliminating latencies otherwise incurred from: data bus arbitration related handshaking, arbitration request processing in scheduling access to the data bus, and switching between read and write memory access cycles.
  • FIG. 1 is a schematic diagram showing a general design of a data switching node
  • FIG. 2 is a schematic diagram showing a detail of an access schedule for a shared memory buffer in accordance with an exemplary embodiment of the invention
  • FIG. 3 is a schematic diagram showing an exemplary design architecture of a data switching node in accordance with an exemplary embodiment of the invention
  • FIG. 4 is a schematic diagram showing a detail of the architecture of a data switching node in accordance with an exemplary implementation of the preferred embodiment of the invention.
  • FIG. 5 is a schematic diagram showing a detail of the architecture of a data switching node in accordance with another exemplary implementation of the preferred embodiment of the invention.
  • FIG. 6 is a schematic diagram showing a detail of the architecture of a data switching node in accordance with another exemplary implementation of the preferred embodiment of the invention.
  • FIG. 7 is a schematic diagram showing a detail of the architecture of a data switching node in accordance with another exemplary implementation of the preferred embodiment of the invention.
  • FIG. 8 is a schematic diagram showing details of an exemplary memory access schedule in accordance with an exemplary embodiment of the invention.
  • FIG. 9 is a schematic diagram showing a data switching node having a plurality of data bus connected devices in accordance with yet another exemplary implementation of the preferred embodiment of the invention.
  • a minimum memory bandwidth requirement is imposed on the shared memory buffer 104 .
  • r_j represents the physical data transfer rate of port j.
  • a slotted memory access scheme is used to arbitrate the access to the data bus 106 and therefore to the shared memory buffer 104 .
  • FIG. 2 is a schematic diagram showing a detail of an access schedule ( 200 ) to a shared memory buffer ( 104 ) in accordance with an exemplary embodiment of the invention.
  • the data bus arbitration schedule 200 is divided into time frames 202. Each time frame 202 is further subdivided into time slots 204. A deterministic data bus arbitration schedule 200 is enforced. Read, write, and modify memory access cycles are performed during time slots 204. Memory access cycles are assigned to DMA devices 120 and it is left up to each individual DMA device 120 to make use of the assigned time slot(s) 204. This removes the necessity of sending and receiving memory access requests, eliminating the overhead involved in conveying and processing them and leading to a more efficient use of the memory bandwidth B.
  • the length of the time frame 202 is left up to design choice. Using more time slots 204 per time frame 202 , a more granular control can be effected as each time slot 204 represents a smaller percentage of the time frame 202 .
  • the memory bandwidth B can therefore be partitioned effectively to match the data transfer rate requirements of data bus connected devices including each one of the physical ports 102 and the PDU classifier 110 .
  • the need to overbudget the memory requirements for the receive 114 and transmit 116 buffers is reduced thereby reducing implementation costs.
  • the partitioning of the memory bandwidth may be set explicitly via a management console, chosen from a group of pre-set bus arbitration schedules, actively monitored and managed by a higher layer protocol monitoring PDU data flows, etc. without limiting the invention.
  • An optimum memory access schedule can be determined through gathering statistics on memory access cycles and the utilization of processing queues at the data switching node.
  • the deterministic arbitration schedule provides a guaranteed bandwidth to the PDU classifier 110 and enables the PDU classifier 110 to process pending PDUs in a timely manner reducing the occurrence of PDU processing backlogs in the shared memory buffer 104 .
  • the deterministic memory access schedule enables the grouping of all read memory access cycles and all write memory access cycles within each time frame 202 . Further advantages are derived from reducing the occurrence of changes between read and write memory access cycles thereby further reducing PDU processing latencies.
  • FIG. 3 is a schematic diagram showing an exemplary design architecture of a data switching node in accordance with the invention.
  • Shown in FIG. 3 is data switching node 300 having arbiter 308 implementing the memory access scheme presented above with respect to FIG. 2.
  • the arbiter 308 coordinates 326 the access of the DMA devices 120 to the data bus 106 by pacing repeatedly through the access schedule specified via the time frame 202 —in a cyclical fashion.
  • the use of a deterministic access schedule greatly simplifies the design of the data switching node 300 .
  • Another important parameter in designing data switching equipment is the width W of the data bus 106. Theoretically, if the clock frequency of the data bus 106 is F, then the required data bus width W is given by: W ≥ B / F.
  • Increasing the clock frequency F of the data bus is limited by the access speed of the shared memory buffer 104 , increases cross-talk in the bus lines, reduces the synchronization window for signals on the data bus 106 (more stringent requirements on signal propagation over the bus lines), etc.
  • FIG. 4, FIG. 5, FIG. 6, FIG. 7 are schematic diagrams showing details of the architecture of data switching nodes in accordance with exemplary implementations of the preferred embodiment of the invention.
  • the width w of each individual data bus 406 is chosen to reduce bandwidth loss in accessing the routing information.
  • the choice of multiple constituent data buses 406, each of a data bus width w, maintains the complexity of each bus while increasing the effective aggregate data bus width W without increasing the clock frequency F of each data bus 406, providing increased effective aggregate bandwidth B.
  • a DMA device 420 is used for each data transport direction for each physical port 402 .
  • Two DMA devices 420 per physical port 402 provide simultaneous access 422 / 424 to both of the data buses 406 .
  • the data switching node 400 makes use of a single arbiter 408 associated with a single access control bus 428 exercising a controlled access 430 over each one of the data buses 406 by coordinating ( 426 ) DMA device 420 access thereto.
  • the data switching node 500 makes use of two arbiters 508 each associated with corresponding access control buses 528 , each one of the arbiters 508 exercising controlled access 530 over a single corresponding data bus 406 coordinating ( 526 ) DMA device 420 access thereto.
  • each one of the DMA devices 420 accesses 422 / 424 a data bus 406 during assigned time slots 204 .
  • while each one of the physical ports 402 can be given simultaneous access to both data buses 406, for the data switching nodes 400 and 500 each physical port 402 is constrained to perform simultaneously one read operation and one write operation. More details regarding the data bus arbitration schedule will be presented below with reference to FIG. 8.
  • each DMA device 620 is granted access 426 / 726 to the data bus 406 associated therewith as directed by arbiters 408 / 508 and conveys data 622 / 624 therebetween.
  • FIG. 8 is a schematic diagram showing details of an exemplary memory access schedule in accordance with an exemplary embodiment of the invention.
  • a data bus arbitration schedule 800 has two time lines 810 each of which corresponds to one of the two data buses 406 .
  • Time frames 202 are cyclically paced through in assigning time slots 204 to DMA devices 420 to perform memory access cycles.
  • an exemplary partitioning of the bandwidth B is presented in Table 1.
  • the data bus arbitration schedule 800 is specified by a 128 time slot time frame 202 having time slot 204 assignments for DMA devices 420 connected to the two data buses 406 .
  • the exemplary time slot assignment corresponds to a data switching node 400 / 500 having twenty-four 10/100 Mbit/s physical ports and two 1 Gbit/s physical ports.
  • the read and write memory access cycles are sequenced into 5 groups.
  • the shared memory blocks 404 are labeled as “odd” and “even”; one simple mode of operation of the data switching nodes 400 / 500 / 600 / 700 presented herein in accordance with the preferred embodiment of the invention includes dividing PDUs into an aggregate stream of PDU granules, each having a size equal to the width w of each constituent data bus 406. Odd PDU granules in the aggregate stream of granules represent an odd constituent stream of granules and are stored in the odd shared memory block 404. Similarly, even PDU granules in the aggregate stream of granules represent an even constituent stream of granules and are stored in the even shared memory block 404.
  • each data bus 406 has a width w of 64 bits while the aggregate width W of the data bus is 128 bits, this leads to an 8 byte PDU granule size.
  • the 128 time slot time frame 202 contains one read and one write memory access cycle for each 100 Mb/s port 402 to each one of the two shared memory blocks 404 .
  • 1 Gb/s ports 402 are assigned 12 read and write memory access cycles per shared memory block 404 per time frame 202 such that between any consecutive same type memory access cycle to one shared memory block 404 there exists a same type memory access cycle to the other shared memory block 404 .
  • the two extra time slots ascribed to the 1 Gb/s ports 402 prevent potential bandwidth loss due to state machine recovery after receiving an end of PDU marker (EOF). Therefore the reading and writing from alternating shared memory blocks 404 can be performed with minimal loss of memory bandwidth B.
  • R(port designation) represents the read memory access cycle performed by TxDMA(port designation) device 420
  • W(port designation) is the write memory access cycle performed by RxDMA(port designation) device 420
  • SE(#)'s are the memory access cycles of the PDU classifier 110 obtaining header information
  • RSE is the memory access cycle used in consulting the switching database
  • T is the memory turn around latency time between read and write memory access cycles.
  • Time slots 204 No. 49 and No. 127 of Table 1 show parallel switching database updates labeled WSE.
  • the parallel updates also eliminate inconsistencies between the shared memory blocks 404 .
  • the invention is not limited to parallel switching database updates, although preferred, the updates can be made in sequence as long as PDU classifier 110 / 610 RSE (No. 38 odd and No. 116 even) memory access cycles are not performed in between.
  • the invention is not limited to supporting fixed length memory access cycles.
  • the duration of read memory access cycles and write memory access cycles is independent of each other and not necessarily limited to the length of a time slot 204 .
  • time slots 204 need not correspond to single clock cycles.
  • the memory access cycle may be decoupled from the data bus clock.
  • One time slot 204 can also represent multiple memory access cycles.
  • the read memory access cycle SE(#) for the PDU classifier 110 / 610 uses three consecutive time slots 204 in determining a source data network node identifier, a destination data network node identifier, and PDU treatment information used in processing the PDU. Similar provisions can be made for other types of memory access cycles if the bandwidth requirement for that particular memory access cycle and the required memory access cycle implementation are known.
  • FIG. 9 is a schematic diagram showing a data switching node having a plurality of data bus connected devices in accordance with yet another exemplary implementation of the preferred embodiment of the invention.
  • the invention is not limited to the design shown; additional devices such as supervisory processors 906, statistics gathering devices 902 monitoring data traffic, a console manager 904, etc. may be accommodated as data bus connected devices via bandwidth partitioning, as shown for example at WMG/RMG in Table 1.
  • DMA devices 420 / 620 can be grouped via multiplexer/demultiplexer intermediaries 950 .
  • the memory access schedule would have to be sequenced such that access collisions do not occur within each group of DMA devices.

Abstract

A method of accessing a shared memory store at a multiported data network node is provided. The method provides for a deterministic access schedule to be used in apportioning processing bandwidth between data ports and bus connected devices used in processing conveyed data. Advantages are derived from eliminating data processing latencies otherwise incurred from: data bus arbitration-related handshaking, arbitration request processing, and switching between read and write memory access cycles.

Description

  • This application claims the benefit of U.S. Provisional Application No. 60/236,165, filed Sep. 29, 2000.[0001]
  • FIELD OF THE INVENTION
  • The invention relates to data switching, and in particular to methods of scheduling access to a shared memory store in switching Protocol Data Units (PDUs) at a data switching node. [0002]
  • BACKGROUND OF THE INVENTION
  • In the field of data switching, the performance of data switching nodes participating in data transport networks is of utmost importance. [0003]
  • Data switching nodes are multi-ported data network nodes which forward Protocol Data Units (PDUs) between input data ports and output data ports. [0004]
  • The basic operation of a data switching node includes: receiving at least one PDU, storing the PDU in memory while an appropriate output port is determined, scheduling the PDU for transmission, and transmitting the PDU via the determined output port. This is known in the industry as a “store and forward” method of operation of data switching nodes. Although in principle the operation of data switching nodes is simple, the implementation of such devices is not trivial. [0005]
  • PDUs include and are not limited to: cells, frames, packets, etc. Each PDU has a size. Cells have a fixed size while frames and packets may vary in size. Each PDU has associated header information having specifiers holding information used in forwarding the PDU towards a destination data network node in the associated data transport network. The header information is consulted at data switching nodes in determining an output port via which to forward the PDU towards the destination data network node. [0006]
  • FIG. 1 is a schematic diagram showing a general design of a [0007] data switching node 100. The data switching node 100 is a multi-ported device storing PDUs in a shared memory buffer 104, and forwarding PDUs between N physical ports 102.
  • Each [0008] physical port 102 is adapted to receive and transmit data via an associated physical link 118 as shown at 112. Each physical port 102 has an associated physical data transfer rate.
  • Although the [0009] physical ports 102 may receive and transmit continuously, the data has a structure defined by the PDUs conveyed. In the absence of physical data to convey, the physical ports 102 can be configured to convey empty PDUs in accordance with the specification of the data transfer protocol(s) employed by the physical port 102. Data transfer rates below the physical data transfer rate of a physical port 102 may also be obtained by inserting empty PDUs in the data stream conveyed therethrough. Data transfer rates above the physical data transfer rate of a physical port 102 can be obtained through inverse multiplexing over a plurality of physical ports 102, the plurality of physical ports 102 collectively defining a logical data port (not shown). Physical ports 102 can be configured to convey data adhering to more than one data transfer protocol.
  • Each [0010] physical port 102 has access to the shared memory buffer 104 via a data bus 106 in a coordinated manner enforced by an arbiter 108. Also accessing the shared memory buffer 104 is a PDU classifier 110. The PDU classifier 110 operates on header information of PDUs pending processing.
  • The [0011] PDU classifier 110 inspects a body of routing information (switching database) also stored in the shared memory buffer 104 and in so doing requires a portion of the bandwidth on the data bus 106. The sharing of data storage resources for PDU buffering and for storing routing information consolidates the memory storage requirements and simplifies the design of the data switching node 100 leading to reduced implementation costs.
  • The [0012] data bus 106 also has an associated physical data transfer rate which is typically different from the data transfer rates of the physical ports 102. In order to match data transfer rates between the physical ports 102 and the data bus 106, the conveyed PDUs are buffered in receive 114 and transmit 116 buffers. Physical port 102 designs having an adjustable physical data transfer rate exist. As such, the data switching node 100 besides being multi-ported, can be configured to accommodate interfaces having varied physical data transfer rates while adhering to multiple data transfer protocols.
  • The core of the [0013] data switching node 100 includes processes and hardware enabling the conveyance of PDUs between receive buffers 114, shared memory buffer 104 and transmit buffers 116. A design requirement is that the data switching node 100 be able to process and convey PDUs such that all physical ports 102 receive and transmit simultaneously at their full physical transfer rates.
  • Once received ([0014] 112) over a physical link 118, a PDU is buffered in the receive buffer 114 associated with the input physical port 102. A switching process implemented by the data switching node 100 determines at least one output port 102 via which to forward the PDU towards its destination data network node. The PDU classifier 110 makes a determination whether the PDU is a unicast PDU or a multicast PDU. Multiple output ports 102 may be determined for multicast PDUs. Once processed and at least one output physical port 102 determined, the PDU is buffered in the corresponding transmit buffer(s) 116 awaiting transmission over the corresponding physical link(s) 118.
  • In servicing a [0015] receive buffer 114, the corresponding physical port 102 uses a corresponding Direct Memory Access (DMA) device 120 to access (122) the shared memory buffer 104 via the data bus 106. The DMA device 120, upon gaining access to the shared memory buffer 104, writes at least one PDU thereto for further processing.
  • The [0016] PDU classifier 110 also uses another corresponding DMA device 120 to access (124) the shared memory buffer 104 in processing pending PDUs. The PDU classifier 110 as mentioned above, makes use of the header information associated with each PDU and consults (124) routing information in determining the output port(s) 102 to forward the PDU to.
  • As a side effect of processing PDUs, the [0017] PDU classifier 110 may also modify (124) the body of routing information held in the shared memory buffer 104 in which case the DMA device 120 writes (124) to the shared memory buffer 104. Modifications of the routing information are necessary in establishing new data transport routes in the data transport network for data sessions and the associated data transfers.
  • Once at least one [0018] output port 102 is determined for a PDU pending processing in the shared memory buffer 104, a DMA device 120 corresponding to at least one of the determined output ports 102 loads (122) the PDU from the shared memory buffer 104 into the corresponding transmit buffer 116 where the PDU awaits transmission over the physical link 118. If the PDU is to be multicasted, additional DMA devices 120 corresponding to the other determined output ports 102 load (122) the PDU into corresponding transmit buffers 116.
  • As the [0019] DMA devices 120 are used to perform PDU data transfers 122 as well as data transfers 124 in accessing routing information, the DMA devices 120 contend for the data bus 106. The DMA devices 120 communicate with the arbiter 108 via an access control bus 128. Typically the access control bus 128 is integral to the data bus 106.
  • The [0020] arbiter 108 enforces a controlled access 130 to the data bus 106 in coordinating the access to shared memory buffer 104 for all DMA devices 120. Coordinating the access to the data bus 106 efficiently for all DMA devices 120 is essential in attaining efficient performance of the data switching node 100.
  • Methods of coordinating access to the [0021] data bus 106 for multiple DMA devices 120 exist and provide a variety of advantages.
  • Typically round-robin or weighted round-robin data bus access arbitration techniques are used in scheduling access to the shared [0022] memory buffer 104 via the data bus 106. In accordance with round-robin data bus arbitration techniques, the DMA devices 120 issue requests (126) for memory access cycles via the access control bus 128 to the arbiter 108. The arbiter 108 issues grant responses (126) to the DMA devices 120 to take over the data bus 106 for a memory access cycle to perform read, write, and modify operations on the shared memory buffer 104. The exchange of a request for a memory access cycle and the corresponding grant response is known in the field as request-grant handshaking.
  • The need for request-grant handshaking is a disadvantage limiting data switching performance of the [0023] data switching node 100. There is an overhead incurred in the request-grant handshaking process: in receiving a request and issuing a grant response as well as in processing each request. The handshaking steps themselves use clock cycles in conveying requests and grant responses over the access control bus 128 leading to increased memory access cycle times i.e. longer reads, longer writes, and longer modifies.
  • There is an unpredictability in the waiting time between the request and the grant response which is found in field trials to favor large data transfer bursts. With large data transfer bursts being favored, data switching node designs tend to require the use of large receive [0024] buffers 114 and large transmit buffers 116. Field trials show that using small buffers leads to data over-run and data under-run errors in conveying PDUs over the physical ports 102.
  • The unpredictability of the handshake turnaround time is therefore detrimental to the turnaround time in processing PDUs at the [0025] data switching node 100 since the access by the PDU classifier 110 to the data bus 106 tends to be short, random and frequent in comparison with PDU data transfers. PDU data transfers are therefore favored over switching database accesses, whose processing is delayed as a result. This may lead to a condition in which an unnecessarily large number of PDUs are awaiting processing in the shared memory buffer 104.
  • Moreover, favoring large data transfers has a detrimental effect in data switching environments in which PDUs have variable sizes by favoring the conveyance of large size PDUs to and from the shared [0026] memory buffer 104. This adds a delay in processing comparatively short PDUs. The effects of the incurred delay compound because shorter PDUs have a larger ratio of processing overhead to PDU size, so a proportionally larger share of the processing overhead is delayed.
  • Round-robin techniques are found to be well suited for arbitrating data bus access for the conveyance of fixed size PDUs at [0027] data switching nodes 100 having physical ports 102 conveying data at the same physical data transfer rate. However the trends in the field of data switching call for greater port densities per data switching node 100 and greater port diversity at the data switching node 100 both in physical data transfer rates and data transfer protocols supported.
  • A prior art International Publication, Number WO 00/72524 A1, entitled “APPARATUS AND METHOD FOR PROGRAMMABLE MEMORY ACCESS SLOT ASSIGNMENT”, filed on Dec. 17, 1999 by Yu et al., describes a memory access scheme in which the granting of memory access requests is pre-specified. Although the handshake turnaround time is reduced, memory bandwidth utilization is not very efficient because memory access request processing is still required. [0028]
  • Therefore there is a need to provide methods of efficiently scheduling access to the [0029] data bus 106 in enhancing the performance of the data switching node 100.
  • SUMMARY OF THE INVENTION
  • In accordance with an aspect of the invention, a data network node processing data is presented. Components of the data network node include a divided aggregate data bus, a divided shared memory store accessed via the aggregate data bus and a plurality of data bus connected devices. A deterministic data bus arbitration schedule is used to apportion an aggregate bandwidth of the aggregate data bus to the plurality of data bus connected devices. [0030]
  • In accordance with another aspect of the invention, the deterministic data bus arbitration schedule specifies grouping read memory access cycles sequentially and grouping write memory access cycles sequentially such that the number of changes between read memory access cycles and write memory access cycles is reduced. [0031]
  • In accordance with yet another aspect of the invention, a method of arbitrating access to a divided aggregate data bus for a plurality of data bus connected devices is presented. The method includes a sequence of steps. Each stream of data to be conveyed via the aggregate data bus is divided into an aggregate stream of data granules. The access to the aggregate data bus is coordinated according to a deterministic data bus arbitration schedule. The aggregate stream of data granules is conveyed over the aggregate data bus in accordance with the deterministic data bus arbitration schedule. [0032]
  • Data processing latencies in arbitrating the access to the aggregate data bus for the plurality of data bus connected devices are reduced. [0033]
  • The advantages are derived from eliminating latencies otherwise incurred from: data bus arbitration related handshaking, arbitration request processing in scheduling access to the data bus, and switching between read and write memory access cycles.[0034]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The features and advantages of the invention will become more apparent from the following detailed description of the preferred embodiments with reference to the attached diagrams wherein: [0035]
  • FIG. 1 is a schematic diagram showing a general design of a data switching node; [0036]
  • FIG. 2 is a schematic diagram showing a detail of an access schedule for a shared memory buffer in accordance with an exemplary embodiment of the invention; [0037]
  • FIG. 3 is a schematic diagram showing an exemplary design architecture of a data switching node in accordance with an exemplary embodiment of the invention; [0038]
  • FIG. 4 is a schematic diagram showing a detail of the architecture of a data switching node in accordance with an exemplary implementation of the preferred embodiment of the invention; [0039]
  • FIG. 5 is a schematic diagram showing a detail of the architecture of a data switching node in accordance with another exemplary implementation of the preferred embodiment of the invention; [0040]
  • FIG. 6 is a schematic diagram showing a detail of the architecture of a data switching node in accordance with another exemplary implementation of the preferred embodiment of the invention; [0041]
  • FIG. 7 is a schematic diagram showing a detail of the architecture of a data switching node in accordance with another exemplary implementation of the preferred embodiment of the invention; [0042]
  • FIG. 8 is a schematic diagram showing details of an exemplary memory access schedule in accordance with an exemplary embodiment of the invention; and [0043]
  • FIG. 9 is a schematic diagram showing a data switching node having a plurality of data bus connected devices in accordance with yet another exemplary implementation of the preferred embodiment of the invention.[0044]
  • It will be noted that in the attached diagrams like features bear similar labels. [0045]
  • DETAILED DESCRIPTION OF THE EMBODIMENTS
  • Several parameters are important in designing data switching nodes. [0046]
  • A minimum memory bandwidth requirement is imposed on the shared [0047] memory buffer 104. The minimum memory bandwidth B of the shared memory buffer 104 has to accommodate all N physical ports 102 transmitting and receiving data simultaneously at their full physical data transfer rates without discarding PDUs, as well as the bandwidth C required by the PDU classifier 110 in processing PDUs. If the physical ports 102 convey data at different rates, then the minimum theoretical bandwidth required is given by: B = C + 2 Σ_{j=0}^{N-1} r_j,
  • where r_j represents the physical data transfer rate of port j. [0048]
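  • As an illustrative sketch only (not part of the patent), this bound can be evaluated for the exemplary port mix described later in this document (twenty-four 100 Mbit/s ports and two 1 Gbit/s ports); the classifier bandwidth C used below is an assumed placeholder value:

    # Minimal sketch: evaluating B = C + 2 * sum(r_j) for an assumed port mix.
    def minimum_memory_bandwidth(port_rates_bps, classifier_bandwidth_bps):
        """Theoretical minimum shared-memory bandwidth in bits/s."""
        # Each port both receives and transmits at its full rate, hence the factor 2.
        return classifier_bandwidth_bps + 2 * sum(port_rates_bps)

    ports = [100e6] * 24 + [1e9] * 2   # 24 x 100 Mbit/s plus 2 x 1 Gbit/s
    C = 1e9                            # assumed classifier bandwidth, illustration only
    B = minimum_memory_bandwidth(ports, C)
    print(f"Minimum memory bandwidth B = {B / 1e9:.1f} Gbit/s")   # 9.8 Gbit/s here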
  • Field trials show that the minimum bandwidth required has to be greater than the theoretical estimate due to overheads incurred in the request-grant handshaking, overheads incurred in changing between read and write memory cycles, etc. The more frequent the switching between read and write memory cycles, the less efficient the memory bandwidth utilization. [0049]
  • In accordance with the preferred embodiment of the invention, a slotted memory access scheme is used to arbitrate the access to the [0050] data bus 106 and therefore to the shared memory buffer 104.
  • FIG. 2 is a schematic diagram showing a detail of an access schedule ([0051] 200) to a shared memory buffer (104) in accordance with an exemplary embodiment of the invention.
  • The data [0052] bus arbitration schedule 200 is divided into time frames 202. Each time frame 202 is further subdivided into time slots 204. A deterministic data bus arbitration schedule 200 is enforced. Read, write, and modify memory access cycles are performed during time slots 204. Memory access cycles are assigned to DMA devices 120 and it is left up to each individual DMA device 120 to make use of the assigned time slot(s) 204. This removes the necessity of sending and receiving memory access requests, eliminating the overhead involved in conveying and processing them and leading to a more efficient use of the memory bandwidth B.
  • The length of the [0053] time frame 202 is left up to design choice. Using more time slots 204 per time frame 202, a more granular control can be effected as each time slot 204 represents a smaller percentage of the time frame 202.
  • The memory bandwidth B can therefore be partitioned effectively to match the data transfer rate requirements of data bus connected devices including each one of the [0054] physical ports 102 and the PDU classifier 110. The need to overbudget the memory requirements for the receive 114 and transmit 116 buffers is reduced thereby reducing implementation costs.
  • The partitioning of the memory bandwidth may be set explicitly via a management console, chosen from a group of pre-set bus arbitration schedules, actively monitored and managed by a higher layer protocol monitoring PDU data flows, etc. without limiting the invention. An optimum memory access schedule can be determined through gathering statistics on memory access cycles and the utilization of processing queues at the data switching node. [0055]
  • The deterministic arbitration schedule provides a guaranteed bandwidth to the [0056] PDU classifier 110 and enables the PDU classifier 110 to process pending PDUs in a timely manner reducing the occurrence of PDU processing backlogs in the shared memory buffer 104.
  • Further, in accordance with the preferred embodiment of the invention, with the bandwidth requirement fulfilled, the deterministic memory access schedule enables the grouping of all read memory access cycles and all write memory access cycles within each [0057] time frame 202. Further advantages are derived from reducing the occurrence of changes between read and write memory access cycles thereby further reducing PDU processing latencies.
  • FIG. 3 is a schematic diagram showing an exemplary design architecture of a data switching node in accordance with the invention. [0058]
  • Shown in the diagram is [0059] data switching node 300 having arbiter 308 implementing the memory access scheme presented above with respect to FIG. 2. The arbiter 308 coordinates 326 the access of the DMA devices 120 to the data bus 106 by pacing repeatedly through the access schedule specified via the time frame 202—in a cyclical fashion. The use of a deterministic access schedule greatly simplifies the design of the data switching node 300.
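  • A minimal sketch of this pacing behaviour, with assumed slot labels and frame length rather than the patented implementation, is given below; the arbiter cycles through a fixed frame of slot owners and issues grants without any request-grant handshaking, and a device with nothing to transfer simply leaves its slot unused:

    from itertools import cycle

    class SlottedArbiter:
        def __init__(self, time_frame):
            # time_frame: ordered list of slot owners, one entry per time slot
            self._slots = cycle(enumerate(time_frame))

        def next_grant(self):
            """Return (slot_number, owner) for the next time slot in the cyclic frame."""
            return next(self._slots)

    # Hypothetical 8-slot frame: reads grouped before writes, with turnaround slots (T).
    frame = ["R(0)", "R(1)", "SE", "T", "W(0)", "W(1)", "WSE", "T"]
    arbiter = SlottedArbiter(frame)
    for _ in range(10):
        slot, owner = arbiter.next_grant()
        print(f"slot {slot}: bus granted to {owner}")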
  • Another important parameter in designing data switching equipment is the width W of the [0060] data bus 106. Theoretically, if the clock frequency of the data bus 106 is F, then the required data bus width W is given by: W ≥ B / F.
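  • As a rough worked example (the clock frequency is assumed for illustration, not specified by the patent), a bandwidth B of 9.8 Gbit/s at a bus clock F of 100 MHz gives W ≥ 98 bits, so a 128-bit bus would satisfy the bound:

    B = 9.8e9   # bits/s, from the earlier bandwidth example
    F = 100e6   # Hz, assumed bus clock frequency
    print(f"W >= {B / F:.0f} bits")   # 98 bits; a 128-bit aggregate bus suffices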
  • Increasing the clock frequency F of the data bus: is limited by the access speed of the shared [0061] memory buffer 104, increases cross-talk in the bus lines, reduces the synchronization window for signals on the data bus 106 (more stringent requirements on signal propagation over the bus lines), etc.
  • In practice it becomes very hard to design wide data buses while increasing the port density per data switching node. The practical limitations include: an increase in complexity in routing bus lines, a loss of synchronization between the signals conveyed over the bus lines (unevenly long bus lines), increased cross-talk, increased power drain, to name just a few. [0062]
  • Very wide data buses reduce the efficiency of the [0063] PDU classifier 110. While PDU sizes are typically larger than the width W of the data bus, switching database accesses can be significantly narrower. As an example, if switching database accesses are 48 bits wide while the data bus is W=128 bits wide, each PDU classifier access would utilize only 37.5% of the available bandwidth B, representing a large processing overhead per access.
  • FIG. 4, FIG. 5, FIG. 6, FIG. 7 are schematic diagrams showing details of the architecture of data switching nodes in accordance with exemplary implementations of the preferred embodiment of the invention. [0064]
  • Common to all these exemplary implementations of [0065] data switching nodes 400, 500, 600 and 700 are the use of a shared memory space divided into blocks 404 with each shared memory block 404 being accessed by each of the N respective ports 402/602 via separate data buses 406. The designs show the shared memory space spanning over two memory blocks 404. The invention is not limited thereto, the use of more memory blocks 404 and corresponding data buses 406 being a design choice based on data switching equipment performance requirements balanced against implementation costs.
  • In optimizing the use of the available memory bandwidth B, the width w of each [0066] individual data bus 406 is chosen to reduce bandwidth loss in accessing the routing information.
  • The switching database access tends to be narrow and frequent. If for example, data transfers associated with the switching database accesses are 48 bits wide, using a single W=w=32 bit bus, one switching database access takes two clock cycles. One clock cycle is used for a single W=w=64 bit bus. The total bandwidth used is the same. However, if the width of the single data bus is increased to 128 bits, the switching database access will still take one clock cycle wasting 62.5% of the bandwidth per access. Therefore wider single buses are only efficient for long, continuous streams of data such as PDU transfers. [0067]
  • By dividing the aggregate data bus of effective width W=2w=128 bits into two constituent data buses 406 of w=64 bits each, a switching database access takes only one clock cycle on one of the two constituent data buses, wasting only 12.5% of the aggregate bandwidth per switching database access. [0068]
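  • The percentages above can be reproduced with a small helper (illustrative only): an access of a given width occupies whole clock cycles on a constituent bus, and the unused bits in those cycles are counted against the aggregate bandwidth:

    import math

    def waste_per_access(access_bits, bus_bits, aggregate_bits):
        cycles = math.ceil(access_bits / bus_bits)
        wasted_bits = cycles * bus_bits - access_bits
        return cycles, 100.0 * wasted_bits / (cycles * aggregate_bits)

    # 48-bit switching database access on a single 128-bit bus: 62.5% of the cycle wasted.
    print(waste_per_access(48, 128, 128))   # (1, 62.5)
    # Same access on one 64-bit constituent bus of a 2 x 64-bit aggregate: 12.5% wasted.
    print(waste_per_access(48, 64, 128))    # (1, 12.5)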
  • The choice of multiple [0069] constituent data buses 406, each of a data bus width w, maintains the complexity of each bus while increasing the effective aggregate data bus width W without increasing the clock frequency F of each data bus 406, providing increased effective aggregate bandwidth B.
  • Further, in accordance with the preferred embodiment of the invention, a [0070] DMA device 420 is used for each data transport direction for each physical port 402. Two DMA devices 420 per physical port 402 provide simultaneous access 422/424 to both of the data buses 406.
  • In arbitrating the access to the shared memory blocks [0071] 404, the data switching node 400 makes use of a single arbiter 408 associated with a single access control bus 428 exercising a controlled access 430 over each one of the data buses 406 by coordinating (426) DMA device 420 access thereto.
  • In arbitrating the access to the shared memory blocks [0072] 404, the data switching node 500 makes use of two arbiters 508 each associated with corresponding access control buses 528, each one of the arbiters 508 exercising controlled access 530 over a single corresponding data bus 406 coordinating (526) DMA device 420 access thereto.
  • As coordinated by [0073] arbiters 408/508 each one of the DMA devices 420 accesses 422/424 a data bus 406 during assigned time slots 204.
  • While each one of the [0074] physical ports 402 can be given simultaneous access to both data buses 406, for the data switching nodes 400 and 500, each physical port 402 is constrained to perform simultaneously one read operation and one write operation. More details regarding the data bus arbitration schedule will be presented below with reference to FIG. 8.
  • The restriction mentioned above is eliminated in the implementations of [0075] data switching nodes 600 and 700, where multiple DMA devices 620 are associated with receive 614 and transmit 616 buffers. The exemplary implementations shown make use of two DMA devices 620 per each receive buffer 614 and each transmit buffer 616. The invention is not limited thereto—the number of DMA devices 620 associated therewith being a function of the number of data buses 406 used, design requirements and associated implementation costs.
  • As shown, each [0076] DMA device 620 is granted access 426/726 to the data bus 406 associated therewith as directed by arbiters 408/508 and conveys data 622/624 therebetween.
  • FIG. 8 is a schematic diagram showing details of an exemplary memory access schedule in accordance with an exemplary embodiment of the invention. [0077]
  • A data [0078] bus arbitration schedule 800 has two time lines 810 each of which corresponds to one of the two data buses 406. Time frames 202 are cyclically paced through in assigning time slots 204 to DMA devices 420 to perform memory access cycles.
  • In accordance with an exemplary implementation of the invention an exemplary partitioning of the bandwidth B is presented in Table 1. The data [0079] bus arbitration schedule 800 is specified by a 128 time slot time frame 202 having time slot 204 assignments for DMA devices 420 connected to the two data buses 406. The exemplary time slot assignment corresponds to a data switching node 400/500 having twenty-four 10/100 Mbit/s physical ports and two 1 Gbit/s physical ports. The read and write memory access cycles are sequenced into 5 groups.
  • As shown in FIG. 4, FIG. 5, FIG. 6 and FIG. 7, and Table 1, the shared memory blocks 404 are labeled as “odd” and “even”; one simple mode of operation of the data switching nodes 400/500/600/700 presented herein in accordance with the preferred embodiment of the invention includes dividing PDUs into an aggregate stream of PDU granules, each having a size equal to the width w of each constituent data bus 406. Odd PDU granules in the aggregate stream of granules represent an odd constituent stream of granules and are stored in the odd shared memory block 404. Similarly, even PDU granules in the aggregate stream of granules represent an even constituent stream of granules and are stored in the even shared memory block 404. [0080]
  • If each [0081] data bus 406 has a width w of 64 bits while the aggregate width W of the data bus is 128 bits, this leads to an 8 byte PDU granule size. Each PDU granule is conveyed during a single clock cycle over the w=64 bits wide data bus 406.
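  • A minimal sketch of this odd/even interleaving, assuming the 8 byte granule size above (an illustration, not the exact patented logic):

    GRANULE_BYTES = 8   # w = 64 bits per constituent data bus

    def split_into_granules(pdu: bytes):
        """Yield (block, granule) pairs, alternating odd/even starting with odd."""
        for index, offset in enumerate(range(0, len(pdu), GRANULE_BYTES)):
            block = "odd" if index % 2 == 0 else "even"
            yield block, pdu[offset:offset + GRANULE_BYTES]

    pdu = bytes(range(20))   # a 20-byte PDU: two full granules plus a 4-byte final granule
    for block, granule in split_into_granules(pdu):
        print(block, granule.hex())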
  • The 128 time [0082] slot time frame 202 contains one read and one write memory access cycle for each 100 Mb/s port 402 to each one of the two shared memory blocks 404. 1 Gb/s ports 402 are assigned 12 read and write memory access cycles per shared memory block 404 per time frame 202 such that between any consecutive same type memory access cycle to one shared memory block 404 there exists a same type memory access cycle to the other shared memory block 404. The two extra time slots ascribed to the 1 Gb/s ports 402 prevent potential bandwidth loss due to state machine recovery after receiving an end of PDU marker (EOF). Therefore the reading and writing from alternating shared memory blocks 404 can be performed with minimal loss of memory bandwidth B.
  • In accordance with a worst case scenario, for any [0083] physical port 402, a 1 byte long EOF granule is conveyed to and from the odd shared memory block 404. This means that 7 bytes of the odd shared memory block 404 bandwidth remain idle and furthermore, the next 8 bytes of bandwidth allocated on the even shared memory block 404 will also be idle since the next PDU starts being conveyed to or from the odd shared memory block 404. In this scenario a maximum of 15 bytes of bandwidth seem to be wasted per PDU transfer. In processing PDUs defined by data transfer protocols such as the IEEE 802.x Ethernet Standard, this bandwidth loss is in fact acceptable since an inter-PDU gap of 20 bytes is specified.
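  • The worst case arithmetic above can be restated in a few lines (values as assumed in the preceding paragraph):

    granule_bytes = 8
    eof_bytes = 1
    idle_in_eof_slot = granule_bytes - eof_bytes   # 7 bytes idle in the odd block
    skipped_even_slot = granule_bytes              # the paired even-block slot stays idle
    print(idle_in_eof_slot + skipped_even_slot,
          "bytes wasted per PDU, within the 20-byte inter-PDU gap")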
  • An exemplary partitioning of the available bandwidth is presented in Table 1: [0084]
    TABLE 1
    Exemplary dual bus arbitration schedule
    No.  Odd Block  Even Block
    0 SE(0) W(14)
    1 SE(0) R(15)
    2 SE(0) R(G0)
    3 R(0) R(16)
    4 R(G0) R(G1)
    5 R(1) R(17)
    6 R(G1) R(G0)
    7 R(2) R(18)
    8 R(G0) R(G1)
    9 R(3) R(19)
    10 R(G1) R(G0)
    11 R(4) SE(1)
    12 R(G0) SE(1)
    13 T SE(1)
    14 T T
    15 W(0) T
    16 W(G0) W(G1)
    17 W(1) W(15)
    18 W(G1) W(G0)
    19 W(2) W(16)
    20 W(G0) W(G1)
    21 W(3) W(17)
    22 W(G1) W(G0)
    23 W(4) W(18)
    24 W(G0) W(G1)
    25 SE(2) W(19)
    26 SE(2) R(G1)
    27 SE(2) R(20)
    28 R(G1) R(G0)
    29 R(5) R(21)
    30 R(G0) R(G1)
    31 R(6) R(22)
    32 R(G1) R(G0)
    33 R(7) R(23)
    34 R(G0) R(G1)
    35 R(8) SE(3)
    36 R(G1) SE(3)
    37 R(9) SE(3)
    38 RSE R(24)
    39 T T
    40 T T
    41 W(5) W(20)
    42 W(G1) W(G0)
    43 W(6) W(21)
    44 W(G0) W(G1)
    45 W(7) W(22)
    46 W(G1) W(G0)
    47 W(8) W(23)
    48 W(G0) W(G1)
    49 WSE WSE
    50 W(G1) W(G0)
    51 SE(4) W(24)
    52 SE(4) R(0)
    53 SE(4) R(G0)
    54 RMG R(1)
    55 R(G0) R(G1)
    56 R(10) R(2)
    57 R(G1) R(G0)
    58 R(11) R(3)
    59 R(G0) R(G1)
    60 R(12) R(4)
    61 R(G1) R(G0)
    62 R(13) SE(5)
    63 R(G0) SE(5)
    64 R(14) SE(5)
    65 T RMG
    66 T T
    67 WMG T
    68 W(G0) W(G1)
    69 W(9) W(0)
    70 W(G1) W(G0)
    71 W(10) W(1)
    72 W(G0) W(G1)
    73 W(11) W(2)
    74 W(G1) W(G0)
    75 W(12) W(3)
    76 W(G0) W(G1)
    77 W(13) W(4)
    78 SE(6) WMG
    79 SE(6) R(5)
    80 SE(6) R(G1)
    81 R(15) R(6)
    82 R(G1) R(G0)
    83 R(16) R(7)
    84 R(G0) R(G1)
    85 R(17) R(8)
    86 R(G1) R(G0)
    87 R(18) R(9)
    88 R(G0) R(G1)
    89 R(19) SE(7)
    90 R(G1) SE(7)
    91 T SE(7)
    92 T T
    93 W(14) T
    94 W(G1) W(G0)
    95 W(15) W(5)
    96 W(G0) W(G1)
    97 W(16) W(6)
    98 W(G1) W(G0)
    99 W(17) W(7)
    100 W(G0) W(G1)
    101 W(18) W(8)
    102 W(G1) W(G0)
    103 SE(8) W(9)
    104 SE(8) R(10)
    105 SE(8) R(G0)
    106 R(20) R(11)
    107 R(G0) R(G1)
    108 R(21) R(12)
    109 R(G1) R(G0)
    110 R(22) R(13)
    111 R(G0) R(G1)
    112 R(23) R(14)
    113 R(G1) SE(9)
    114 R(24) SE(9)
    115 T SE(9)
    116 T RSE
    117 W(24) T
    118 W(19) T
    119 W(G0) W(G1)
    120 W(20) W(10)
    121 W(G1) W(G0)
    122 W(21) W(11)
    123 W(G0) W(G1)
    124 W(22) W(12)
    125 W(G1) W(G0)
    126 W(23) W(13)
    127 WSE WSE
  • where: R(port designation) represents a read memory access cycle performed by the TxDMA(port designation) [0085] device 420, W(port designation) represents a write memory access cycle performed by the RxDMA(port designation) device 420, SE(#) denotes the memory access cycles of the PDU classifier 110 obtaining header information, RSE is the memory access cycle used in consulting the switching database, WSE is the memory access cycle used in updating the switching database, RMG and WMG are management read and write memory access cycles described below, and T is the memory turnaround latency between read and write memory access cycles.
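Because the schedule is fixed and deterministic, per-slot arbitration reduces to a table lookup indexed by a free-running slot counter. The sketch below is a hedged illustration: the entry encoding and the arbitrate() helper are assumptions made for the example, and the two schedule arrays would be populated from a table such as Table 1.

```c
#include <stdint.h>

#define SLOTS_PER_FRAME 128
#define NUM_BUSES       2            /* odd and even constituent data buses */

/* Hypothetical encoding of one schedule entry. */
typedef enum {
    OP_READ,        /* R(#)  : TxDMA read                    */
    OP_WRITE,       /* W(#)  : RxDMA write                   */
    OP_SE,          /* SE(#) : PDU classifier header access  */
    OP_RSE,         /* RSE   : switching database lookup     */
    OP_WSE,         /* WSE   : switching database update     */
    OP_RMG,         /* RMG   : management read               */
    OP_WMG,         /* WMG   : management write              */
    OP_TURNAROUND   /* T     : read/write turnaround latency */
} slot_op;

typedef struct {
    slot_op op;
    int     port;   /* port or DMA designation, -1 where not applicable */
} slot_entry;

/* One cyclically paced time line per constituent data bus. */
static slot_entry schedule[NUM_BUSES][SLOTS_PER_FRAME];

/* No request/grant handshake is needed: the slot counter alone decides
 * which device owns each constituent bus during the current time slot. */
static const slot_entry *arbitrate(int bus, uint32_t slot_counter)
{
    return &schedule[bus][slot_counter % SLOTS_PER_FRAME];
}
```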
  • Further enhancements in utilizing the available bandwidth B include storing the switching database in both shared memory blocks [0086] 404 to eliminate contention in accessing the routing information. Time slots 204 No. 49 and No. 127 of Table 1 show parallel switching database updates labeled WSE. The parallel updates also eliminate inconsistencies between the shared memory blocks 404. The invention is not limited to parallel switching database updates; although parallel updates are preferred, the updates can be made in sequence as long as no PDU classifier 110/610 RSE memory access cycle (No. 38 odd and No. 116 even) is performed in between.
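A minimal sketch of the duplicated switching database update, assuming hypothetical per-block writer functions and record fields; both copies are written, either in the parallel WSE slots or back to back provided no RSE lookup intervenes, so that a lookup can be served from either block:

```c
#include <stdint.h>

/* Hypothetical switching database record. */
struct sdb_entry {
    uint64_t node_id;   /* data network node identifier */
    uint16_t port;      /* forwarding port              */
};

/* Hypothetical per-block writers, assumed to be provided elsewhere. */
void sdb_write_odd(const struct sdb_entry *e);
void sdb_write_even(const struct sdb_entry *e);

/* Update both copies of the switching database so that lookups never
 * contend on a single shared memory block and the copies stay consistent. */
static void sdb_update(const struct sdb_entry *e)
{
    sdb_write_odd(e);
    sdb_write_even(e);
}
```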
  • The above-described memory access scheme eliminates the need to synchronize memory access cycles, which would otherwise require synchronization hardware and incur synchronization overheads. [0087]
  • The invention is not limited to supporting fixed length memory access cycles. [0088]
  • The durations of read memory access cycles and write memory access cycles are independent of each other and are not necessarily limited to the length of a [0089] time slot 204. Similarly, time slots 204 need not correspond to single clock cycles; for some applications, the memory access cycle may be decoupled from the data bus clock. One time slot 204 can also represent multiple memory access cycles.
  • As shown in Table 1, the read memory access cycle SE(#) for the [0090] PDU classifier 110/610 uses three consecutive time slots 204 in determining a source data network node identifier, a destination data network node identifier, and PDU treatment information used in processing the PDU. Similar provisions can be made for other types of memory access cycles if the bandwidth requirement for the particular memory access cycle and the required memory access cycle implementation are known.
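As a hedged illustration of what the three consecutive SE slots retrieve, the record below uses field names and widths that are assumptions made for the example:

```c
#include <stdint.h>

/* One SE(#) classifier access spans three consecutive time slots, each
 * fetching one of the items below from the stored PDU header. */
struct pdu_classification {
    uint64_t src_node_id;   /* slot 1: source data network node identifier      */
    uint64_t dst_node_id;   /* slot 2: destination data network node identifier */
    uint32_t treatment;     /* slot 3: PDU treatment information                */
};
```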
  • FIG. 9 is a schematic diagram showing a data switching node having a plurality of data bus connected devices in accordance with yet another exemplary implementation of the preferred embodiment of the invention. [0091]
  • The invention is not limited to the design shown; additional devices such as [0092] supervisory processors 906, statistics gathering devices 902 monitoring data traffic, a console manager 904, etc. may be accommodated to access data bus connected devices via bandwidth partitioning, as shown for example at WMG/RMG in Table 1.
  • In order to reduce electrical noise in the [0093] data bus lines 406, DMA devices 420/620 can be grouped via multiplexer/demultiplexer intermediaries 950. The memory access schedule would then have to be sequenced such that access collisions do not occur within each group of DMA devices.
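A schedule validation pass along these lines could catch such collisions at design time; the sketch is illustrative only, with group_of() and schedule_owner() standing in for the real schedule data:

```c
#include <stdbool.h>

#define SLOTS_PER_FRAME 128

/* Assumed helpers: group_of() maps a DMA/port designation to its
 * multiplexer/demultiplexer group; schedule_owner() returns the port
 * scheduled on a bus in a slot, or -1 if the slot is unassigned. */
int group_of(int port);
int schedule_owner(int bus, int slot);

/* Two DMA devices behind the same multiplexer intermediary must never be
 * scheduled on different constituent buses in the same time slot. */
bool schedule_is_collision_free(void)
{
    for (int slot = 0; slot < SLOTS_PER_FRAME; slot++) {
        int a = schedule_owner(0, slot);   /* odd bus  */
        int b = schedule_owner(1, slot);   /* even bus */
        if (a >= 0 && b >= 0 && a != b && group_of(a) == group_of(b))
            return false;                  /* both would contend for the mux */
    }
    return true;
}
```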
  • The embodiments presented are exemplary only and persons skilled in the art would appreciate that variations to the above described embodiments may be made without departing from the spirit of the invention. The scope of the invention is solely defined by the appended claims. [0094]

Claims (35)

We claim:
1. A data network node processing data comprising:
a. a divided aggregate data bus having an aggregate data bus width W;
b. a divided shared memory store accessed via the aggregate data bus;
c. a plurality of data bus connected devices; and
d. a deterministic data bus arbitration schedule apportioning an aggregate bandwidth B of the aggregate data bus to the plurality of data bus connected devices
whereby data processing latencies in arbitrating the access to the aggregate data bus for the plurality of data bus connected devices are reduced.
2. A data network node as claimed in claim 1, wherein the divided aggregate data bus further comprises at least two constituent data buses, each one of the constituent data buses having a data bus width w whereby the aggregate data bus retains the complexity of each individual constituent data bus while providing increased bandwidth B for processing data at the data network node without increasing an associated data bus clock frequency F.
3. A data network node as claimed in claim 2, wherein the divided shared memory store comprises at least two shared memory blocks, each shared memory block being accessible via a single corresponding constituent data bus.
4. A data network node as claimed in claim 1, wherein the plurality of data bus connected devices includes at least one data port enabling the conveyance of data between the data network node and an associated data transport network.
5. A data network node as claimed in claim 4, wherein the at least one data port comprises an input data port conveying incoming data to the data network node.
6. A data network node as claimed in claim 4, wherein the at least one data port comprises an output port conveying outgoing data from the data network node.
7. A data network node as claimed in claim 4, wherein the data network node comprises a data switching node forwarding data between a plurality of data ports.
8. A data network node as claimed in claim 7, wherein the shared memory store further comprises a switching database specifying routing information used in forwarding data between the plurality of data ports.
9. A data network node as claimed in claim 8, wherein the switching database is stored in each one of a plurality of shared memory blocks of the divided shared memory store whereby contention of access to the switching database is prevented.
10. A data network node as claimed in claim 7, wherein the data has a structure defined by data transport protocols which specify the encapsulation of conveyed data into Protocol Data Units (PDUs).
11. A data network node as claimed in claim 10, wherein the plurality of data bus connected devices includes at least one PDU classifier determining at least one data port for each PDU to be forwarded to.
12. A data network node as claimed in claim 1, wherein the data network node comprises a group of selectable pre-set data bus arbitration schedules enabling different bandwidth apportionments associated with the plurality of data bus connected devices.
13. A data network node as claimed in claim 1, wherein the plurality of data bus connected devices includes at least one data flow statistics generator monitoring data processed at the data network node.
14. A data network node as claimed in claim 1, wherein the plurality of data bus connected devices includes at least one supervisory processor running an associated protocol monitoring data processed at the data network node to update the deterministic data bus arbitration schedule in efficiently apportioning the aggregate bandwidth B between the plurality of data bus connected devices.
15. A data network node as claimed in claim 1, wherein the data network node comprises an associated management console for managing the operation of the data network node including the specification of the deterministic data bus arbitration schedule to be used.
16. A data network node as claimed in claim 1, wherein a subgroup of data bus connected devices further comprises at least one Direct Memory Access (DMA) device for accessing the shared memory store.
17. A data network node as claimed in claim 16, wherein a DMA device is used for each direction of data transmission for data port data bus connected devices, to provide simultaneous reception and transmission of data therethrough.
18. A data network node as claimed in claim 1, wherein data bus connected devices are connected to the data bus via multiplexer/demultiplexer intermediaries to reduce noise in the data bus lines enabling a higher speed operation thereof.
19. A data network node as claimed in claim 2, wherein each data bus connected device further comprises at least one Direct Memory Access (DMA) device associated with each constituent data bus.
20. A data network node as claimed in claim 3, wherein the deterministic data bus arbitration schedule further comprises read memory access cycles grouped sequentially and write memory access cycles grouped sequentially whereby a number of changes between read memory access cycles and write memory access cycles is reduced thereby reducing associated memory access latencies incurred in changing therebetween.
21. A data network node as claimed in claim 1, wherein the deterministic data bus access schedule further comprises a time line for each constituent data bus.
22. A data network node as claimed in claim 1, wherein the deterministic data bus arbitration schedule comprises a time frame for each one of the constituent data buses, each time frame being cyclically paced through in coordinating access to the corresponding constituent data bus whereby the design of the data network node is simplified by using the deterministic arbitration schedule.
23. A data network node as claimed in claim 22, wherein the data network node further comprises at least one arbiter coordinating access to the aggregate data bus in accordance with the deterministic data bus access schedule.
24. A data network node as claimed in claim 22, wherein the data network node further comprises an arbiter for each constituent data bus, each arbiter coordinating access to a corresponding constituent data bus in accordance with the deterministic data bus access schedule.
25. A data network node as claimed in claim 22, wherein each time frame further comprises a plurality of time slots, the bandwidth B of the aggregate data bus being apportioned, in terms of the time slots, to the plurality of data bus connected devices, whereby the granularity of the bandwidth apportionment is controlled by the number of time slots per time frame.
26. A data network node as claimed in claim 25, wherein at least one time slot specifies the extent of a memory access cycle.
27. A data network node as claimed in claim 25, wherein at least one time slot corresponds to a plurality of memory access cycles.
28. A method of arbitrating access to a divided aggregate data bus for a plurality of data bus connected devices, the method comprising the steps of:
a. dividing a stream of data conveyed via a data bus connected device into an aggregate stream of data granules;
b. coordinating the access to the aggregate data bus according to a deterministic data bus arbitration schedule; and
c. conveying the aggregate stream of data granules over the aggregate data bus in accordance with the deterministic data bus arbitration schedule
whereby the use of the deterministic data bus arbitration schedule reduces processing overheads associated with the arbitration of access to the aggregate data bus.
29. A method as claimed in claim 28, wherein the aggregate data bus is divided into at least two constituent data buses and the step of dividing the data stream into the aggregate stream of granules further includes a step of: further dividing the aggregate stream of data granules into at least two constituent streams of data granules, each constituent stream of data granules corresponding to a one constituent data bus.
30. A method as claimed in claim 29, wherein, in the step of coordinating the access to the aggregate data bus, the method further comprises a step of: scheduling the access to a one constituent data bus for the conveyance of at least one data granule from the corresponding constituent stream of data granules.
31. A method as claimed in claim 30, wherein, in the step of conveying the stream of data granules over the aggregate data bus, the method further comprises a step of: conveying data granules corresponding to the at least two constituent data buses in a sequence repeatedly cycling through the at least two constituent data buses.
32. A method as claimed in claim 29, wherein each one of the at least two constituent data buses has a constituent data bus width w and, in the step of dividing the data stream into the aggregate stream of data granules, the method further comprises a step of: dividing the data stream into data granules, each data granule having a size equal to the data bus width w of each one of the at least two constituent data buses.
33. A method as claimed in claim 28, wherein a divided shared memory store is associated with the divided aggregate data bus, the divided aggregate data bus comprises at least two constituent data buses, the divided shared memory store comprises at least two shared memory blocks each of which is accessible via a one corresponding constituent data bus, the shared memory store further retrievably holding a database, the method further comprising a step of: storing the database in each one of the at least two shared memory blocks.
34. A method as claimed in claim 33, wherein the method further comprises a step of updating each copy of the database stored in each of the at least two shared memory blocks to prevent inaccuracies in the use thereof.
35. A method as claimed in claim 33, wherein the method further comprises a step of sequentially updating each copy of the database stored in each of the at least two shared memory blocks, the sequential update being performed between database access instances.
US09/956,179 2000-09-29 2001-09-19 Slotted memory access method Abandoned US20020062415A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US09/956,179 US20020062415A1 (en) 2000-09-29 2001-09-19 Slotted memory access method
CA 2357582 CA2357582A1 (en) 2001-09-19 2001-09-21 Slotted memory access method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US23616500P 2000-09-29 2000-09-29
US09/956,179 US20020062415A1 (en) 2000-09-29 2001-09-19 Slotted memory access method

Publications (1)

Publication Number Publication Date
US20020062415A1 true US20020062415A1 (en) 2002-05-23

Family

ID=22888392

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/956,179 Abandoned US20020062415A1 (en) 2000-09-29 2001-09-19 Slotted memory access method
US09/966,691 Active 2024-07-24 US7082138B2 (en) 2000-09-29 2001-09-28 Internal communication protocol for data switching equipment

Family Applications After (1)

Application Number Title Priority Date Filing Date
US09/966,691 Active 2024-07-24 US7082138B2 (en) 2000-09-29 2001-09-28 Internal communication protocol for data switching equipment

Country Status (5)

Country Link
US (2) US20020062415A1 (en)
KR (1) KR100425062B1 (en)
CN (1) CN1171429C (en)
CA (1) CA2357688A1 (en)
TW (1) TW533718B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7213075B2 (en) * 2000-12-15 2007-05-01 International Business Machines Corporation Application server and streaming server streaming multimedia file in a client specific format
JP2003060662A (en) * 2001-08-21 2003-02-28 Sony Corp Communication apparatus, communication method, program and recording medium
US7539185B2 (en) * 2002-10-07 2009-05-26 Broadcom Corporation Fast-path implementation for an uplink double tagging engine
US7397809B2 (en) * 2002-12-13 2008-07-08 Conexant Systems, Inc. Scheduling methods for combined unicast and multicast queuing
US7583607B2 (en) * 2003-03-06 2009-09-01 Hewlett-Packard Development Company, L.P. Method and apparatus for designating and implementing support level agreements
KR100520146B1 (en) * 2003-12-22 2005-10-10 삼성전자주식회사 Method for processing data in high speed downlink packet access communication system
US7457241B2 (en) * 2004-02-05 2008-11-25 International Business Machines Corporation Structure for scheduler pipeline design for hierarchical link sharing
US7486688B2 (en) * 2004-03-29 2009-02-03 Conexant Systems, Inc. Compact packet switching node storage architecture employing Double Data Rate Synchronous Dynamic RAM
KR100578655B1 (en) * 2005-03-09 2006-05-11 주식회사 팬택앤큐리텔 Wireless communication terminal notifying transmission time of packet and its method
US8885634B2 (en) * 2007-11-30 2014-11-11 Ciena Corporation Systems and methods for carrier ethernet using referential tables for forwarding decisions
CN101197959B (en) * 2007-12-29 2010-12-08 北京创毅视讯科技有限公司 Terminal control method, system and equipment
CN101572690B (en) * 2008-04-30 2012-07-04 十堰科纳汽车电器有限公司 Transmitting, receiving and network adapters and method for transmitting and receiving LIN frame
US8493983B2 (en) * 2010-06-02 2013-07-23 Cisco Technology, Inc. Virtual fabric membership assignments for fiber channel over Ethernet network devices
KR101502147B1 (en) * 2013-03-08 2015-03-13 김대환 Method, apparatus and computer readable medium for communication between USB host and USB device through network

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4864558A (en) * 1986-11-29 1989-09-05 Nippon Telegraph And Telephone Corporation Self-routing switch
KR0153944B1 (en) 1995-12-22 1998-11-16 양승택 Ethernet connecting apparatus for including the ip routing function with atm switch
US5742604A (en) 1996-03-28 1998-04-21 Cisco Systems, Inc. Interswitch link mechanism for connecting high-performance network switches
JPH1141246A (en) 1997-07-22 1999-02-12 Fujitsu Ltd Duplex system for network connection device
US6904062B1 (en) * 1999-04-23 2005-06-07 Waytech Investment Co. Ltd. Method and apparatus for efficient and flexible routing between multiple high bit-width endpoints
AU2068201A (en) * 1999-12-08 2001-06-18 Broadcom Homenetworking, Inc. Synchronized transport across non-synchronous networks

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5414816A (en) * 1988-12-19 1995-05-09 Nec Corporation Data transfer apparatus having means for controlling the difference in speed between data input/output ports and memory access
US5551007A (en) * 1989-09-20 1996-08-27 Hitachi, Ltd. Method for controlling multiple common memories and multiple common memory system
US5442339A (en) * 1990-07-16 1995-08-15 Nec Corporation Method and system of changing destination of protocol data unit in hierarchical data communication network systems
US5327538A (en) * 1990-09-11 1994-07-05 Canon Kabushiki Kaisha Method of utilizing common buses in a multiprocessor system
US5941979A (en) * 1991-07-08 1999-08-24 Seiko Epson Corporation Microprocessor architecture with a switch network and an arbitration unit for controlling access to memory ports
US5490252A (en) * 1992-09-30 1996-02-06 Bay Networks Group, Inc. System having central processor for transmitting generic packets to another processor to be altered and transmitting altered packets back to central processor for routing
US5634060A (en) * 1994-08-09 1997-05-27 Unisys Corporation Method and apparatus for high-speed efficient bi-directional communication between multiple processor over a common bus
US5644733A (en) * 1995-05-18 1997-07-01 Unisys Corporation Dual coupled partitionable networks providing arbitration logic for managed access to commonly shared busses
US5935232A (en) * 1995-11-20 1999-08-10 Advanced Micro Devices, Inc. Variable latency and bandwidth communication pathways
US5999518A (en) * 1996-12-04 1999-12-07 Alcatel Usa Sourcing, L.P. Distributed telecommunications switching system and method
US5862338A (en) * 1996-12-30 1999-01-19 Compaq Computer Corporation Polling system that determines the status of network ports and that stores values indicative thereof
US6052751A (en) * 1997-02-14 2000-04-18 Advanced Micro Devices, I Nc. Method and apparatus for changing the number of access slots into a memory
US6088753A (en) * 1997-05-27 2000-07-11 Fusion Micromedia Corporation Bus arrangements for interconnection of discrete and/or integrated modules in a digital system and associated method
US6563821B1 (en) * 1997-11-14 2003-05-13 Multi-Tech Systems, Inc. Channel bonding in a remote communications server system
US6175887B1 (en) * 1998-10-21 2001-01-16 Sun Microsystems, Inc. Deterministic arbitration of a serial bus using arbitration addresses
US6618777B1 (en) * 1999-01-21 2003-09-09 Analog Devices, Inc. Method and apparatus for communicating between multiple functional units in a computer environment
US6728254B1 (en) * 1999-06-30 2004-04-27 Nortel Networks Limited Multiple access parallel memory and method

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7406531B2 (en) * 2000-12-28 2008-07-29 Robert Bosch Gmbh Method and communication system for data exchange among multiple users interconnected over a bus system
US7882312B2 (en) * 2002-11-11 2011-02-01 Rambus Inc. State engine for data processor
US20050243829A1 (en) * 2002-11-11 2005-11-03 Clearspeed Technology Pic Traffic management architecture
US20050257025A1 (en) * 2002-11-11 2005-11-17 Clearspeed Technology Plc State engine for data processor
US8472457B2 (en) 2002-11-11 2013-06-25 Rambus Inc. Method and apparatus for queuing variable size data packets in a communication system
US20110069716A1 (en) * 2002-11-11 2011-03-24 Anthony Spencer Method and apparatus for queuing variable size data packets in a communication system
US7054968B2 (en) * 2003-09-16 2006-05-30 Denali Software, Inc. Method and apparatus for multi-port memory controller
US20050060456A1 (en) * 2003-09-16 2005-03-17 Denali Software, Inc. Method and apparatus for multi-port memory controller
US20090248911A1 (en) * 2008-03-27 2009-10-01 Apple Inc. Clock control for dma busses
US9727505B2 (en) 2008-03-27 2017-08-08 Apple Inc. Clock control for DMA busses
US9032113B2 (en) * 2008-03-27 2015-05-12 Apple Inc. Clock control for DMA busses
US20110182300A1 (en) * 2010-01-27 2011-07-28 Sundeep Chandhoke Network Traffic Shaping for Reducing Bus Jitter on a Real Time Controller
US8295287B2 (en) * 2010-01-27 2012-10-23 National Instruments Corporation Network traffic shaping for reducing bus jitter on a real time controller
US8856415B2 (en) * 2012-02-01 2014-10-07 National Instruments Corporation Bus arbitration for a real-time computer system
US20130198429A1 (en) * 2012-02-01 2013-08-01 Sundeep Chandhoke Bus Arbitration for a Real-Time Computer System
US20140223055A1 (en) * 2012-02-01 2014-08-07 National Instruments Corporation Controlling Bus Access in a Real-Time Computer System
US20140223056A1 (en) * 2012-02-01 2014-08-07 National Instruments Corporation Controlling Bus Access Priority in a Real-Time Computer System
US9477624B2 (en) * 2012-02-01 2016-10-25 National Instruments Corporation Controlling bus access in a real-time computer system
US9460036B2 (en) * 2012-02-01 2016-10-04 National Instruments Corporation Controlling bus access priority in a real-time computer system
US20150254197A1 (en) * 2012-09-26 2015-09-10 Zte Corporation Data Transmission Method and Device
US9697153B2 (en) * 2012-09-26 2017-07-04 Zte Corporation Data transmission method for improving DMA and data transmission efficiency based on priorities of at least two arbitration units for each DMA channel
US20160087790A1 (en) * 2013-04-24 2016-03-24 Nec Europe Ltd. Method and system for encrypting data
US9787469B2 (en) * 2013-04-24 2017-10-10 Nec Corporation Method and system for encrypting data
US10291392B2 (en) 2013-04-24 2019-05-14 Nec Corporation Method and system for encrypting data
US20150178277A1 (en) * 2013-12-23 2015-06-25 Tata Consultancy Services Limited System and method predicting effect of cache on query elapsed response time during application development stage
CN104731849A (en) * 2013-12-23 2015-06-24 塔塔咨询服务有限公司 System(s) and method(s) for predicting effect of database cache on query elapsed response time during an application development stage
US10372711B2 (en) * 2013-12-23 2019-08-06 Tata Consultancy Services Limited System and method predicting effect of cache on query elapsed response time during application development stage
US10404449B2 (en) * 2014-11-24 2019-09-03 Nec Corporation Method for encrypting data for distributed storage
US11108543B2 (en) 2014-11-24 2021-08-31 Nec Corporation Method for encrypting data for distributed storage

Also Published As

Publication number Publication date
CA2357688A1 (en) 2002-03-29
TW533718B (en) 2003-05-21
KR20020025847A (en) 2002-04-04
US7082138B2 (en) 2006-07-25
CN1348290A (en) 2002-05-08
CN1171429C (en) 2004-10-13
US20020093964A1 (en) 2002-07-18
KR100425062B1 (en) 2004-03-30

Similar Documents

Publication Publication Date Title
US20020062415A1 (en) Slotted memory access method
JP3448067B2 (en) Network controller for network adapter
Peh et al. Flit-reservation flow control
US7352765B2 (en) Packet switching fabric having a segmented ring with token based resource control protocol and output queuing control
US5193090A (en) Access protection and priority control in distributed queueing
Felicijan et al. An asynchronous on-chip network router with quality-of-service (QoS) support
US6108306A (en) Apparatus and method in a network switch for dynamically allocating bandwidth in ethernet workgroup switches
EP1779607B1 (en) Network interconnect crosspoint switching architecture and method
US7085847B2 (en) Method and system for scheduling network communication
US5528584A (en) High performance path allocation system and method with fairness insurance mechanism for a fiber optic switch
US6317415B1 (en) Method and system for communicating information in a network
US20080205432A1 (en) Network-On-Chip Environment and Method For Reduction of Latency
JP3322195B2 (en) LAN switch
US20070010205A1 (en) Time-division multiplexing circuit-switching router
US10387355B2 (en) NoC interconnect with linearly-tunable QoS guarantees for real-time isolation
JP2005287038A (en) Compact packet switching node storage architecture employing ddr sdram and method for accessing memory
US7020161B1 (en) Prescheduling arbitrated resources
EP1891778A1 (en) Electronic device and method of communication resource allocation.
US20020131412A1 (en) Switch fabric with efficient spatial multicast
Liu et al. Parallel probe based dynamic connection setup in TDM NoCs
US6356548B1 (en) Pooled receive and transmit queues to access a shared bus in a multi-port switch asic
US6374314B1 (en) Method for managing storage of data by storing buffer pointers of data comprising a sequence of frames in a memory location different from a memory location for pointers of data not comprising a sequence of frames
Kranich et al. NoC switch with credit based guaranteed service support qualified for GALS systems
JP2007532052A (en) Scalable network for management of computing and data storage
US10289598B2 (en) Non-blocking network

Legal Events

Date Code Title Description
AS Assignment

Owner name: ZARLINK SEMICONDUCTOR V. N. INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WANG, LINGHSIAO;WANG, YEONG;REEL/FRAME:012182/0717

Effective date: 20010917

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION