WO1996013922A2 - Computer network switching system with expandable number of ports

Computer network switching system with expandable number of ports

Info

Publication number: WO1996013922A2
Authority: WO (WIPO/PCT)
Prior art keywords: bus, packet, port, switching, data
Application number: PCT/US1995/013838
Other languages: French (fr)
Other versions: WO1996013922A3 (en)
Inventors: Mark A. Lenney, Hon Wah Chin
Original assignee: Cisco Systems, Inc.
Application filed by Cisco Systems, Inc.
Priority applications: CA2203500A1, EP0788691A2
Publications: WO1996013922A2 (en), WO1996013922A3 (en)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00: Packet switching elements
    • H04L 49/35: Switches specially adapted for specific applications
    • H04L 49/351: Switches specially adapted for local area networks [LAN], e.g. Ethernet switches
    • H04L 49/20: Support for services
    • H04L 49/205: Quality of Service based
    • H04L 49/25: Routing or path finding in a switch fabric
    • H04L 49/253: Routing or path finding using establishment or release of connections between ports
    • H04L 49/254: Centralised controller, i.e. arbitration or scheduling
    • H04L 49/45: Arrangements for providing or supporting expansion

Definitions

  • the present invention relates generally to local area networks and more specifically to switching fabric circuits for segmented networks.
  • Local area networks provide for the interconnection of a multiplicity of endstations so that a multiplicity of users may share information in an efficient and economical manner.
  • Each endstation of a LAN can typically communicate with every other endstation that is physically connected to that LAN.
  • the effective throughput for each endstation of the LAN decreases.
  • the LAN can be "segmented" into smaller interconnected sub-networks or "LAN segments" that each have fewer endstations.
  • the load for each sub-network of a segmented LAN is reduced, leading to increased throughput for a segmented LAN when compared to a similarly sized, unsegmented LAN.
  • Interconnection of the segments of prior segmented LANs is achieved by connecting several individual sub-networks to the ports of a "switching fabric circuit."
  • the term "switching fabric circuit" as used here is meant to encompass any circuit that provides for the processing and forwarding of information between LAN segments in a segmented network.
  • one prior switching fabric circuit includes a number of conventional Ethernet bridges connected to a backbone network that is controlled by a system processor.
  • the system processor is responsible for filtering and forwarding frames of data between LAN segments. Filtering and forwarding typically requires first storing incoming frames of data received from the ports and subsequently forwarding the frames of data to the appropriate destination port.
  • One disadvantage of the filtering and forwarding schemes of prior bridging architectures is the delay time associated with the filtering and forwarding process.
  • Etherswitch™ is sold by Kalpana, Inc., of Sunnyvale, California, and described in U.S. Patent No. 5,274,631, of Bhardwaj, issued on December 28, 1993, entitled Computer Network Switching System.
  • the Etherswitch™ includes a number of packet processors, one for each port, that are each connected to multiplexor logic.
  • the multiplexor logic acts as a crossbar switch that provides a direct physical connection between the packet processors of the source and destination ports.
  • the multiplexor logic allows for packets to be transmitted directly from the packet processor of the source port to the packet processor of the destination port without first storing the packets.
  • the use of the multiplexor logic effectively limits the number of ports of the switching fabric circuit. Because the multiplexor logic provides a physical connection between ports, the hardware of the multiplexor logic is designed to service a fixed number of ports, and providing additional ports to the multiplexor logic to allow for future expansion may be cost-prohibitive. For example, doubling the maximum number of ports typically results in squaring the complexity of the multiplexor logic, which greatly increases the cost of the multiplexor logic.
  • each port is typically limited to only a portion of the total bandwidth for the switching fabric circuit by virtue of the physical link between the port and the multiplexor logic.
  • the total bandwidth for a ten port Etherswitch may be 100 Mb/s, wherein each port is provided 10 Mb/s of bandwidth. No port may request more than its 10 Mb/s of bandwidth. A port is therefore unable to use the unused bandwidth of idle ports to increase its individual throughput.
  • one object of the present invention is to provide on-the-fly switching for a switching fabric circuit.
  • Another object of the invention is to provide a switching fabric circuit that allows for the cost-effective expansion of the number of ports of the switching fabric circuit.
  • Another object of the invention is to provide a switching fabric circuit that provides for the interconnection of heterogeneous LAN segments that operate at different data transfer rates.
  • Another object of the invention is to provide a single point where statistics regarding segmented network traffic may be gathered.
  • Another object of the invention is to allow the oversubscription of bandwidth for the switching fabric circuit, wherein more bandwidth may be requested than is available.
  • These and other objects are provided by a switching fabric circuit that comprises a plurality of ports. Each port is coupled to a corresponding one of a plurality of local area network (LAN) segments, wherein each LAN segment may operate according to a different LAN communications protocol. The switching fabric circuit also comprises a switching link coupled to the plurality of ports for interconnecting the LAN segments.
  • the switching fabric circuit is for receiving requests for data transfer operations from the plurality of ports during a synchronization period, for prioritizing the requests for data transfer operations according to a high priority and a low priority during the synchronization period, and for granting requests for data transfer operations such that ports requesting data transfer operations at the high priority are guaranteed access to the switching link during the synchronization period and ports requesting data transfer operations at the low priority are provided access to the switching link for a remainder of the synchronization period.
  • the switching fabric circuit may operate according to a distributed arbitration scheme wherein port arbitration is distinct from switching bus arbitration. Therefore, a method for transferring a data packet is provided.
  • a first port requests control of the bus to transfer a data packet, and a central arbiter grants control of the bus to the first port.
  • the first port transfers a port of exit mask via the bus.
  • the port of exit mask indicates a second port as a destination of the data packet.
  • the first port also transfers a source identification to identify itself as the source port of the port of exit mask.
  • the second port requests control of the bus to transfer information to the first port indicating that the second port is ready to receive the system packet.
  • the central arbiter grants control of the bus to the second port, and the second port transfers the information together with the source identification to identify the first port as the destination of the information.
  • the first port may then begin transfer of the system packet by requesting access to the bus to transfer the system packet to the second port. An illustrative sketch of this handshake appears below.
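The sequence above amounts to a three-step handshake over a shared bus. The sketch below is an illustrative model only, not text from the patent; the Bus and Port classes and all method names are hypothetical stand-ins for the REQ/ACK, POE, and DPA signaling.

```python
# Illustrative sketch (hypothetical names): models the request/grant, POE,
# DPA, and transfer steps of the uniport handshake described above.

class Port:
    def __init__(self, port_id):
        self.port_id = port_id
        self.pending_poe = None        # (source id, rate) seen on the bus
        self.dpa_received_from = None  # set when a DPA names this port
        self.received = []

class Bus:
    """Minimal stand-in for the switching bus plus its central arbiter."""
    def __init__(self):
        self.owner = None

    def request_and_grant(self, port):
        self.owner = port              # the arbiter grants bus control

def uniport_transfer(bus, source, destination, packet, rate):
    # 1. Source wins the bus, then sends the POE mask and its source ID.
    bus.request_and_grant(source)
    destination.pending_poe = (source.port_id, rate)

    # 2. Destination wins the bus and answers with a DPA signal, using the
    #    source ID to address the reply to the source port.
    bus.request_and_grant(destination)
    source.dpa_received_from = destination.port_id

    # 3. Source re-arbitrates for the bus and transfers the system packet.
    bus.request_and_grant(source)
    destination.received.append(packet)

bus = Bus()
uniport_transfer(bus, Port(0), Port(1), b"frame", rate=10)
```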
  • Multiport port arbitration may be distinct from uniport port arbitration. Therefore, a method for transferring a multiple destination packet in the switching link of the switching fabric circuit is provided.
  • a first packet processor having a multiple destination packet to transfer monitors the switching bus to determine when the switching bus is free.
  • a counter maintains a count that indicates which of the plurality of packet processors of the switching bus is the selected packet processor, and the counter is incremented. Signal lines of the switching bus carry the count to indicate when the first packet processor is the selected packet processor.
  • the first packet processor responds to being the selected packet processor by indicating that it has the multiple destination packet to transfer.
  • the counter is stopped in response to the first packet processor having the multiple destination packet to transfer.
  • the first packet processor transmits a port of exit mask on the switching bus.
  • the port of exit mask indicates which of the plurality of packet processors are destination packet processors for the multiple destination packet.
  • When the destination packet processors are ready to receive the multiple destination packet, they indicate readiness to the first packet processor such that the first packet processor begins transferring the multiple destination packet.
  • the counter is started in response to the destination packet processors indicating that they are ready to receive the multiple destination packet.
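Taken together, the steps above describe a counter-driven arbitration loop. The following is a minimal sketch under assumed names (a Counter class and per-processor ready flags) standing in for the MPA counter, BFREEZ, and BBUSY signals; it is not the patent's implementation.

```python
# Sketch only: the counter selects one packet processor at a time; the
# selected processor freezes the counter, waits for every destination named
# in the POE mask to be ready, then releases the counter.

class Counter:
    def __init__(self, num_procs):
        self.value = 0
        self.num_procs = num_procs
        self.frozen = False

    def increment(self):               # advances every two bus clock cycles
        if not self.frozen:
            self.value = (self.value + 1) % self.num_procs

def multiport_arbitrate(processors, counter, source_id, poe_mask):
    while counter.value != source_id:  # wait until the count selects us
        counter.increment()
    counter.frozen = True              # BFREEZ asserted: the count is held

    # Destinations are the processors whose POE mask bits are set.
    destinations = [p for p in processors if poe_mask & (1 << p["id"])]
    while not all(p["ready"] for p in destinations):
        pass                           # POE re-sent each sync period (sketch)

    counter.frozen = False             # BFREEZ released: counter restarts
    return destinations                # transfer continues as for uniport

procs = [{"id": i, "ready": True} for i in range(8)]
multiport_arbitrate(procs, Counter(8), source_id=2, poe_mask=0b0011_1000)
```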
  • FIGURE 1 shows a switching fabric circuit including a switching link according to one embodiment.
  • FIGURE 2 shows the switching link of a switching fabric circuit in more detail.
  • FIGURES 3a and 3b are a flow chart showing a switching bus arbitration method.
  • FIGURE 4 is a timing diagram showing switching bus arbitration according to the method shown in FIGURES 3a and 3b.
  • FIGURE 5 is a flow chart showing a uniport port arbitration method as performed by a source packet processor.
  • FIGURE 6 is a flow chart showing a uniport port arbitration method as performed by a destination packet processor.
  • FIGURE 7 is a flow chart showing one method for multiport arbitration.
  • FIGURE 8 is a timing diagram showing multiport arbitration according to the method shown in FIGURE 7.
  • Figure 1 shows a switching fabric circuit 100 that includes eight ports 110a-110h and a switching link 105.
  • the switching link 105 is coupled to each of the ports 110a-110h and provides a communications path so that each port 110 may share information with every other port.
  • the architecture of the switching link 105 allows for on-the-fly switching, an expandable number of ports, and the interconnection of local area network (LAN) segments that may transfer data at different rates.
  • Each of the ports 110a-110h is coupled to a corresponding one of the LAN segments 115a-115h.
  • port 110a is coupled to LAN segment 115a,
  • port 110b is coupled to LAN segment 115b, etc.
  • Each LAN segment 115 comprises a data link such as a coaxial cable, twisted pair wire, or optical fiber
  • the ports 110a-110h provide the appropriate electrical interface between the switching link 105 and the LAN segments 115a-115h.
  • LAN segments 115a-115h may each operate according to different LAN standards and protocols, and the switching fabric circuit 100 provides for rate matching so that communications may occur between LAN segments that operate according to different communications protocols.
  • LAN segments 115a-115e may operate according to the IEEE 802.3 10Base5 LAN standard, which is commonly known as Ethernet.
  • the IEEE 802.3 10Base5 LAN standard provides for a transfer rate of 10 Mb/s.
  • LAN segments 115a-115e may operate at 16 Mb/s according to the IEEE 802.5 Token Ring standard.
  • LAN segments 115f and 115g may operate according to the 802.3 100BaseT Ethernet standard, which provides for a 100 Mb/s data transfer rate.
  • LAN segment 115h may be an ATM network link that provides for data transfer at 150 Mb/s.
  • the switching fabric circuit 100 may be configured to allow for LAN segments that operate according to any of a number of standard and non-standard communications protocols.
  • There may be one or more endstations 120 coupled to each LAN segment 115. Endstations are shown in Figure 1 as small boxes coupled to the LAN segments 115. Alternatively, a LAN segment 115 may be provided as a link between the switching fabric circuit 100 and another switching fabric circuit (not shown).
  • the switching fabric circuit 100 interconnects the endstations of the various LAN segments to form a segmented network.
  • each of the endstations 120 coupled to the LAN segments 115a-115h has a unique address that is globally defined for the segmented network. Where the segmented network is subdivided into two or more virtual networks, addresses may be locally defined for each virtual network.
  • An endstation that is the source of a data frame for transfer is called the "source endstation," and a port that receives a data frame for transfer to another port via the switching link 105 is called a "port of entry" or a "source port."
  • An endstation that is the destination of a data frame is called a "destination endstation”, and a port that receives a data frame via the switching link 105 is called a "port of exit” or a "destination port.”
  • a data frame sent by an endstation typically includes both a source address field and a destination address field.
  • the source address field contains the network address of the source endstation for the frame.
  • the contents of the destination address field depends on the type of transaction.
  • a "unicast" transaction is a network transaction in which the destination address field of the frame indicates a single destination endstation.
  • Multicast and broadcast transaction protocols are defined by the LAN communications protocol of a LAN segment.
  • a "multicast" transaction is a network transaction in which the destination address field of the frame contains a predefined multicast address according to IEEE standards.
  • a "broadcast" transaction is a network transaction in which the destination address field of the frame is typically set to a default value that indicates the frame is to go to all endstations of the segmented network.
  • the switching link 105 monitors each data frame that is transferred on each LAN segment 115 and determines which data frames are broadcast frames, multicast frames, or unicast frames having a destination that is remote from the port of entry. These three types of data frames are directed to the appropriate port or ports of exit by the switching link 105. Unicast frames having destination endstations that are local to the port of entry are ignored by the switching link 105 except to the extent that such frames are used during the network learning process whereby the switching link 105 "learns" the address and location of each endstation that is coupled to the switching fabric circuit 100.
  • Switching link 105 includes a switching bus 205, a multiplicity of packet processors 210a-210h, a system processor 215, a processor bus 220, a multiport arbitration ("MPA") counter 213, and a central arbiter 212.
  • Each of the packet processors 210a-210h is coupled to a corresponding one of the ports 110a-110h.
  • packet processor 210a is coupled to port 110a
  • packet processor 210b is coupled to port 110b, etc.
  • the packet processors 210a-210h provide the communications protocol interfaces between the LAN segments 115a-115h and the switching link 105.
  • Communications between ports 110 of the switching fabric circuit 100 are done using system packets, and the switching bus 205 is the mechanism by which system packets are transferred between the ports.
  • System packets are LAN data frames that include the additional information required for forwarding the LAN data frames to the appropriate ports of the switching link 105.
  • When a data frame is received from a LAN segment 115 via a port 110 (the port of entry), and that data frame is to be transferred to a remote port (the port of exit), the packet processor of the port of entry generates a system header that may be appended to the beginning of the data frame to create the system packet.
  • the system header contains additional information that may provide additional forwarding options to the port of exit that the port of exit may use to determine the final disposition of the data frame.
  • a system header may be used, for example, when the system packet is destined for a port of exit coupled to an ATM LAN segment.
  • the system packet is transferred or "switched" via the switching bus 205 to the packet processor of the port of exit.
  • the packet processor of the port of exit strips any system header from the system packet to retrieve the transmitted data frame.
  • System packets that include unicast data frames are called "unicast packets." Similarly, system packets that include multicast frames are called "multicast packets," and system packets that include broadcast frames are called "broadcast packets."
  • each packet processor 210 includes a filter table that contains entries for each known endstation of the segmented network.
  • Each packet processor performs lookups of the filter table using the destination address field of the received data frame and creates a system packet to forward the data frame if the data frame specifies a remote destination endstation.
  • the system processor 215 maintains a master filter table for the switching fabric circuit and updates the local filter tables of each of the packet processors via the processor bus 220. Entries may be added to the filter tables automatically using a bridge learning process. Alternatively, a system administrator may provide master filter table entries for each endstation of the segmented network.
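As an illustration of the lookup just described, the sketch below models a filter table as a dictionary. The layout, field names, and addresses are hypothetical; the patent specifies only that entries map known endstations to forwarding information such as the POE mask.

```python
# Hypothetical per-port filter table lookup (illustrative, not from patent).

filter_table = {
    # destination address -> (POE mask, destination local to this port?)
    "00:aa:bb:cc:dd:01": (0b0_0000_0010, False),  # remote: exit via port 1
    "00:aa:bb:cc:dd:02": (0b0_0000_0001, True),   # local: not switched
}

SYSTEM_PROCESSOR_POE = 0b1_0000_0000  # extra mask bit for the system processor

def lookup_poe(dest_addr):
    entry = filter_table.get(dest_addr)
    if entry is None:
        # Unknown destination: forward to the system processor, which
        # builds the master filter table during the network learning process.
        return SYSTEM_PROCESSOR_POE
    poe_mask, is_local = entry
    return None if is_local else poe_mask  # local unicasts are not switched

assert lookup_poe("00:aa:bb:cc:dd:01") == 0b10
assert lookup_poe("unknown") == SYSTEM_PROCESSOR_POE
```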
  • Each of the packet processors 210 and the system processor 215 may use the switching bus 205 to transfer system packets to each other.
  • system packets are switched from the packet processors 210 to the system processor 215 during the network learning process or when a port does not otherwise know the destination endstation specified by the destination address field of the data frame.
  • the system processor 215 uses the packets that are forwarded to it via the switching bus 205 to build the master filter table for the switching fabric circuit 100.
  • the transmission of a system packet is preceded by the transmission of a port of exit (POE) mask that comprises a number of bits equal to one more than the maximum number of ports for the switching fabric circuit 100.
  • a POE mask bit is provided for each port and for the system processor.
  • Each packet processor 210 monitors the switching bus 205 to determine if the bit of the POE mask that indicates the port to which the packet processor is coupled is set to a logic high level. If the bit that indicates the port of the packet processor is set to a logic high level, the packet processor of that port recognizes that it is a port of exit for the system packet. If the bit that indicates the port is set to a logic low level, the packet processor for that port may ignore the system packet because the port is not a port of exit for the system packet.
  • the POE mask for a system packet may be retrieved from the local filter table during the look-up performed by the source packet processor to determine the location of the destination endstation.
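The per-port check described above reduces to a single bit test. The helper below is a minimal sketch; the helper name is illustrative, while the mask layout (one bit per port plus one for the system processor) is as stated above.

```python
# Minimal sketch of the POE mask bit test each packet processor performs.

NUM_PORTS = 8                # the Figure 1 embodiment has eight ports
SYSTEM_PROC_BIT = NUM_PORTS  # the mask is one bit wider than the port count

def is_port_of_exit(poe_mask: int, port_index: int) -> bool:
    # A set bit means the corresponding port must accept the system packet.
    return bool(poe_mask & (1 << port_index))

poe = 0b0_0000_0110          # ports 1 and 2 are ports of exit
assert is_port_of_exit(poe, 1) and not is_port_of_exit(poe, 0)
assert not is_port_of_exit(poe, SYSTEM_PROC_BIT)
```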
  • system packets are switched between the ports using the switching bus 205, and the central arbiter 212 is coupled to the switching bus 205 for arbitrating requests made by the packet processors 210 to access the switching bus 205.
  • the switching bus 205 includes one request signal line (REQ) and one acknowledge signal line (ACK) for the system processor 215.
  • the switching bus 205 also includes one REQ signal line and one ACK signal line for each packet processor that may be coupled to the switching bus 205. All of the REQ signal lines and the ACK signal lines are coupled directly between the central arbiter 212 and the corresponding system or packet processor.
  • the switching bus 205 may be conveniently used to gather information regarding the bandwidth usage of each LAN segment or endstation. This information may be used to meter usage of the switching fabric circuit 100.
  • the switching bus 205 also provides for the convenient expansion of the total number of ports at a reduced cost when compared to the multiplexor logic of the prior system.
  • the switching bus 205 is a partially asynchronous time division multiple access (TDMA) bus that includes a 32-bit data bus for transferring data between the system and packet processors. Data transfer on the switching bus 205 occurs during discrete periods of time called "synchronization periods," each of which is subdivided into a multiplicity of bus slots, wherein each bus slot is equal to one bus clock cycle.
  • the switching bus 205 shown in Figure 2 operates at a 16.25 MHz clock speed, and fifty-two bus slots are provided per synchronization period such that the total duration of the synchronization period is equal to 3.2 microseconds. Thirty-two bits of data may be transferred each clock cycle such that the maximum bandwidth for the switching bus of this embodiment is 520 Mb/s. Of course, the maximum bandwidth increases as the system clock speed increases, as the width of the data bus increases, or as the number of data transfers per clock cycle increases.
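These figures follow directly from the stated clock rate and bus width; the short calculation below simply reproduces them.

```python
# Reproducing the quoted figures for the Figure 2 embodiment.

CLOCK_HZ  = 16.25e6  # switching bus clock
BUS_WIDTH = 32       # bits transferred per bus slot (one clock cycle)
BUS_SLOTS = 52       # bus slots per synchronization period

sync_period_s = BUS_SLOTS / CLOCK_HZ  # 3.2e-06 s = 3.2 microseconds
bandwidth_bps = BUS_WIDTH * CLOCK_HZ  # 520e6 b/s = 520 Mb/s
print(sync_period_s, bandwidth_bps)
```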
  • the central arbiter 212 dynamically allocates bandwidth for the switching bus 205 according to two levels of priority: high priority and low priority. High priority transactions are guaranteed access to the switching bus 205 (guaranteed bandwidth) so that high priority packets may be switched on-the-fly. All other transactions are assigned low priority. According to one embodiment of the switching bus 205, high priority transactions are reserved for transactions between 10 Mb/s LAN segments.
  • Each packet processor 210 includes circuitry for determining whether its request to transfer a particular system packet is a high priority request or a low priority request. This is discussed in more detail below.
  • Dynamic bus slot allocation may be performed in a number of different ways.
  • the central bus arbiter 212 allocates bus slots to requesting packet processors after access requests are received, and no packet processor has a dedicated bus slot.
  • the number of bus slots in a synchronization period is chosen to be great enough to provide guaranteed access to each packet processor, as if all the packet processors could initiate high priority transactions.
  • bus slots are initially allocated to each packet processor and deallocated if a packet processor does not request a high priority bus access. The order in which a packet processor may access the switching bus is altered when bus slots are deallocated such that all high priority transactions occur in a single block of contiguous bus slots.
  • the second method may not require a central arbiter. Both of these methods may be contrasted with typical prior time division multiplexed (TDM) buses wherein each component coupled to the bus has a dedicated bus slot. If the dedicated bus slot is unused, the TDM bus is idle for that bus slot.
  • each high priority transaction of a synchronization period is provided one bus slot for data transfer, and each low priority transaction competes for as many slots as are required before the next synchronization period begins.
  • the width of the data bus, the duration of the synchronization period, and the guaranteeing of bus access for all high priority transactions during each synchronization period combine to provide a guaranteed throughput for high priority transactions, which allows for high priority system packets to be switched on-the-fly such that the port of entry packet processor may begin transfer of a system packet before the entire data frame has been received from the LAN segment.
  • When the data bus is thirty-two bits wide, the synchronization period is 3.2 microseconds, and one bus slot is provided per synchronization period, a throughput of 10 Mb/s is guaranteed, and on-the-fly switching between two 10 Mb/s LAN segments may occur.
  • the duration of the synchronization period may be reduced to 2 microseconds if the width of the data bus and the number of bus slots guaranteed per synchronization period remain constant.
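The guaranteed rate is simply the slot width divided by the synchronization period, as the calculation below illustrates; note that the 2-microsecond case yields 16 Mb/s, matching the Token Ring rate mentioned earlier.

```python
# One guaranteed 32-bit bus slot per synchronization period.

BUS_WIDTH = 32  # bits per bus slot

def guaranteed_bps(sync_period_s, slots_per_period=1):
    return BUS_WIDTH * slots_per_period / sync_period_s

print(guaranteed_bps(3.2e-6))  # 10.0e6 b/s: keeps up with a 10 Mb/s segment
print(guaranteed_bps(2.0e-6))  # 16.0e6 b/s: keeps up with a 16 Mb/s segment
```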
  • SYNC:L (single driver, push-pull): indicates the beginning of a sync period. It is asserted for one bus clock every 3.2 usec (microseconds).
  • PRIORITY1:L (multi driver, open collector): packet processors drive this line during bus requests for high priority traffic.
  • REQ(x):L (1 per port): the request signal line from the packet processor to the bus arbiter.
  • ACK(x):L (1 per port): the acknowledge line from the bus arbiter to the packet processor.
  • TYPE[2:0]:H (slot driven, tri-state): TYPE signals indicate what type of information is driven on the DATA[31:0] signals during the current clock cycle.
  • SID[4:0]:H (slot driven): source ID; indicates the source of the frame. Driven by the source port during POE and DATA type bus cycles, and driven by the destination port during DPA type bus cycles.
  • BE[1:0]:H (slot driven, tri-state): during the last data transfer slot, BE[1:0] indicate the number of valid bytes; during a POE, BE[1:0] indicate the rate and whether the packet is multicast or unicast.
  • MPA[4:0]:H (single driver, push-pull): MPA[4:0] indicates the port address of the packet processor port that is allowed to instigate a multiport arbitration.
  • BFREE:H (multi driver, wired-OR): BFREE is asserted when no arbitration is occurring on the switching bus. This wired-OR signal is pulled low by any source port that is currently asserting a POE.
  • RESET:H (single driver, push-pull): RESET is a synchronous signal used to reset all packet processors and synchronize all internal clocks.
  • CLK:H (single driver, push-pull): CLK is a 32.5 MHz clock that is used by all packet processors to synchronize to the switching bus.
  • Table 1 shows the signal definition for the signal lines of the switching bus 205 shown in Figure 2.
  • the switching bus 205 includes 32 data bus signal lines, DATA[31:0].
  • Three "type" signal lines TYPE[2:0] are provided to each of the system and packet processors to indicate what type of information is currently being driven on the data bus during the bus slot.
  • the SYNC signal line is provided to each of the system and packet processors to indicate the beginning of a synchronization period when the SYNC signal is asserted logic low.
  • the RESET and CLK signals are provided to each of the packet processors.
  • the packet processors include internal clock circuitry for deriving their own internal 16.25 MHz clock signals using the RESET and CLK signals.
  • the internal clock circuits of the packet processors are designed to reduce clock skew between packet processors.
  • the packet processors may also use the SYNC signal to synchronize the internal clock circuits.
  • IDLE - 111 (sourced by separate active driver)
  • Table 2 shows the data types for the 32-bit data bus as may be indicated by the TYPE[2:0] signal lines.
  • the TYPE[2:0] signal lines are asserted by the packet processor that has control of the switching bus 205 during the bus slot in which data is being transferred by the packet processor on the 32-bit data bus.
  • the TYPE signal lines are discussed in more detail below.
  • the PRIORITY1 signal line is a wired-OR signal line that is driven active (logic low) when a packet processor requests a high priority transaction.
  • the PRIORITY1 signal line remains active as long as there is an outstanding high priority request.
  • once all high priority requests have been granted, the PRIORITY1 signal line goes inactive, and low priority arbitration may begin. Once the PRIORITY1 signal goes inactive, it may not be asserted again until the next synchronization period.
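Because the line is a wired-OR, it reads active while any processor is still driving it, as in this toy model (illustrative only).

```python
# Toy model of an active-low wired-OR line such as PRIORITY1: the line stays
# active (pulled low) while any packet processor still drives it.

def wired_or_active(drivers):
    # drivers[i] is True while processor i has an ungranted high priority
    # request and is therefore still pulling the line low.
    return any(drivers)

assert wired_or_active([False, True, False])       # one request outstanding
assert not wired_or_active([False, False, False])  # line released: inactive
```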
  • Switching bus arbitration is the process whereby a port requests and is granted access to the switching bus.
  • Port arbitration is the process whereby a source port sends the POE mask and waits for the DPA signal from the destination port so that the source port may begin transmission of a system packet to the destination port.
  • Each port must successfully complete switching bus arbitration before it is allowed to initiate port arbitration. Further, once a source port has successfully arbitrated for access to the destination port, the source port continues to initiate switching bus arbitration each synchronization period to transfer the system packet.
  • the SYNC, PRIORITY1, REQ, and ACK signal lines are used primarily for switching bus arbitration.
  • Signal lines SID[4:0], BE[1:0], DATA[31:0], and TYPE[2:0] may be used for port arbitration.
  • the SID[4:0] signal lines indicate the source ID of the source port that is currently driving data on the DATA[31:0] signal lines.
  • Each of the packet processors 210 includes circuitry for arbitrating port access requests. This circuitry is generally comprised of a latch that latches the contents of the SID[4:0] signal lines when the packet processor is free to accept a request.
  • Port access requests may be queued by the destination port, wherein the queue is cleared at the beginning of each synchronization period.
  • port accesses may be granted in a round robin manner, and high speed requesters (e.g. 100 Mb/s or 150 Mb/s packet processors) may be given priority over low speed requesters (e.g. 10 Mb/s packet processors) by low speed ports, as in the sketch below.
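The sketch below shows one possible reading of that grant policy. It is an assumption for illustration, not the patent's specified implementation.

```python
# Hypothetical grant policy for a destination port (illustrative only):
# requests are served oldest-first, except that a low speed (10 Mb/s)
# destination lets high speed requesters (100 or 150 Mb/s) go first.

def next_grant(request_queue, own_rate_mbps):
    """request_queue: list of (source_id, source_rate_mbps), oldest first."""
    if own_rate_mbps <= 10:
        fast = [req for req in request_queue if req[1] > 10]
        if fast:
            return fast[0]   # a high speed requester jumps the queue
    return request_queue[0]  # otherwise take the oldest request

assert next_grant([(3, 10), (7, 100)], own_rate_mbps=10) == (7, 100)
assert next_grant([(3, 10), (7, 100)], own_rate_mbps=100) == (3, 10)
```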
  • the BE[1:0] signal lines are used to indicate the rate of the data transfer on the DATA[31:0] signal lines such that the packet processor of the port of exit may determine the rate matching rules that are applicable to the data transfer.
  • the TYPE[2:0] signals indicate what type of information is driven on the DATA[31 :0] signal lines during the current clock cycle.
  • Port arbitration for multicast and broadcast transactions ("multiport port arbitration") is distinct from uniport port arbitration. According to one embodiment of the switching link 105, multiport port arbitration is postponed until all uniport port arbitration has concluded. Multiport port arbitration is the process whereby a source port having a multiport packet arbitrates for access to the destination ports of the multiport packet. Once multiport port arbitration has successfully completed, transfer of the multiport packet is done using switching bus arbitration.
  • the switching bus 205 includes signal lines MPA[4:0], BFREE, BBUSY, and BFREEZ. For the present embodiment, only one port is allowed to arbitrate for a multiport transaction during a synchronization period. Alternatively, multiple ports may be allowed to initiate multiport arbitration simultaneously.
  • the BFREE signal line is provided to indicate when uniport arbitration has completed so that multiport port arbitration may begin.
  • the MPA[4:0] signal lines are coupled to the MPA counter 213 and are provided to select the packet processor of the switching fabric circuit 100 that is allowed to initiate multiport port arbitration.
  • the MPA counter 213 is incremented every two bus clock cycles unless the MPA counter 213 is disabled.
  • the output of the MPA counter 213 is carried by the MPA[4:0] signal lines.
  • the BFREEZ signal line is provided to signal the beginning of multiport arbitration to all ports and to disable the MPA counter 213.
  • a packet processor may begin multiport arbitration immediately upon being indicated by the MPA[4:0] signal lines.
  • BFREEZ goes active (logic low) to freeze the value carried on the MPA[4:0] signal lines.
  • the BBUSY signal line is provided so that ports of exit of the multiport system packet can indicate when they are ready to receive the multiport system packet.
  • the multiport packet may be transferred to all of the ports of exit indicated by the multiport POE mask when all of the packet processors are ready to receive the multiport system packet.
  • the packet processors 210 of the switching link 105 include receive queues for single file queuing of data frames, which requires that data frames received from the LAN segments must be able to leave the receive queue at least as fast as they enter the receive queue.
  • the packet processors 210 include transmit queues for single file queuing of system packets, which requires that system packets received from the switching bus 205 must be able to leave the transmit queue at least as fast as they enter the transmit queue.
  • the receive and transmit queues of a packet processor may generally comprise first-in first-out buffers (FIFOs).
  • a first rate matching rule is that a unicast packet is switched between two ports at the rate of the faster of the two ports, wherein the rate of the port is defined by the rate of the associated LAN segment 115.
  • a second rate matching rule is that a multicast packet is switched at the rate of the faster of the source port or the fastest destination addressed by the multicast packet.
  • a third rate matching rule that follows from the first two rate matching rules is that all packet processors 210 must be able to send and receive data at the fastest rate supported by the switching fabric circuit 100.
  • a fourth rate matching rule is that all packet processors that are coupled to LAN segments having the fastest rate must be capable of full duplex communication such that they are able to both send and receive simultaneously at the fastest rate.
  • a fifth rate matching rule similarly requires that all packet processors coupled to the slowest rate LAN segments are able to send and receive at the slowest rate simultaneously. To reduce system costs, packet processors coupled to the slowest rate LAN segments need not be able to simultaneously send and receive at the fastest rate.
  • the switching bus operates at one of two rates for any given transaction although the LAN segments themselves may operate at three different rates.
  • the slowest rate of the LAN segments for the embodiment shown in Figure 2 is 10 Mb/s and the fastest rate is 150 Mb/s.
  • the switching bus 205 is configured to switch system packets at either the slowest or the fastest rates.
  • Packet processors that are coupled to LAN segments that operate at other than the slowest bus switching rate completely store a frame received as a system packet from the switching bus before beginning transmission of that frame to the LAN segment. This is required to prevent FIFO underflow due to possible insufficient bandwidth availability on the bus. It is apparent that the particular rate matching methodology is affected by the system architecture, and the precise rate matching methodology may differ for different system architectures.
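The first two rules reduce to taking a maximum over the port rates involved, as the small sketch below shows (helper names are illustrative).

```python
# Sketch of rate matching rules one and two (illustrative helper names).

def unicast_switch_rate(src_mbps, dst_mbps):
    # Rule 1: a unicast packet is switched at the faster of the two ports.
    return max(src_mbps, dst_mbps)

def multicast_switch_rate(src_mbps, dst_mbps_list):
    # Rule 2: a multicast packet is switched at the faster of the source
    # port and the fastest destination it addresses.
    return max(src_mbps, *dst_mbps_list)

assert unicast_switch_rate(10, 100) == 100
assert multicast_switch_rate(10, [10, 16, 150]) == 150
```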
  • FIGS. 3a and 3b show a flow diagram for a method of operation for the switching bus 205. More specifically, the flow diagram of Figures 3a and 3b shows a process for arbitrating for access to the switching bus 205.
  • a synchronization period begins at process block 305 when the SYNC signal line is asserted (logic low).
  • packet processors with pending high priority requests each assert the common PRIORITY1 signal line and their corresponding REQ signal lines, driving both signal lines logic low.
  • the table entries of the filter table for each packet processor include a field that may be used to determine whether a particular request is a high priority or a low priority request.
  • each packet processor knows what priority request it has.
  • any remaining packet processors that do not have a high priority transaction request deassert their corresponding REQ signal lines.
  • Process blocks 310 and 315 preferably occur within the first bus slot of the synchronization period.
  • each packet processor that requests a high priority bus access is guaranteed switching bus access during the synchronization period in which the high priority request is made.
  • the central arbiter grants a high priority request of a packet processor.
  • the packet processor that receives the grant from the central arbiter 212 deasserts the PRIORITY1 signal line and its corresponding request line, sends data via the data bus for one bus slot of the synchronization period, and asserts the TYPE[2:0] signal lines to indicate what type of data is being sent on the DATA[31:0] signal lines. If there are any remaining high priority requests at process block 330, process blocks 320 and 325 are repeated until no high priority requests remain. According to the present embodiment, the central arbiter grants high priority requests on a round robin basis.
  • the end of high priority arbitration is signaled by the PRIORITY1 signal line going inactive (logic high), which occurs during the bus slot in which the packet processor having the last remaining high priority request is granted control of the switching bus 205.
  • If the synchronization period has ended, the switching bus arbitration process begins again at process block 310. Otherwise, the process continues at process block 340.
  • the packet processors with pending low priority requests assert their corresponding REQ signal lines at process block 340.
  • Low priority requests may be made at any time during the synchronization period after high priority transactions have completed.
  • the central arbiter 212 grants a low priority request of a packet processor, which, at process block 350, transfers data via the switching bus 205 for a single bus slot.
  • packet processors typically request several bus slots per synchronization period. To ensure some fairness in bus access, low priority bus requests are granted on a round-robin, slot-by-slot basis such that no packet processor transfers data for two contiguous bus slots.
  • If bus slots remain in the current synchronization period, it is determined at process block 355 whether there are additional low priority requests. If so, process blocks 335-350 are repeated. If not, the switching bus 205 may be idle until the next synchronization period, at which time the switching bus arbitration process is repeated beginning at process block 310. One synchronization period under this scheme is sketched below.
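The model below is an illustrative simulation of a single synchronization period under this two-priority scheme; its structure and names are assumptions, not the patent's implementation.

```python
# Illustrative model of one synchronization period (hypothetical structure).

from collections import deque

def run_sync_period(high_requests, low_requests, total_slots=52):
    schedule = []

    # Each high priority request is guaranteed one slot, granted round
    # robin until none remain (at which point PRIORITY1 goes inactive).
    for proc in high_requests:
        schedule.append(proc)

    # Low priority requesters then compete for the remaining slots, one
    # slot per grant, round robin, so no processor holds the bus for two
    # contiguous slots while others are waiting.
    pending = deque(low_requests)
    while pending and len(schedule) < total_slots:
        proc = pending.popleft()
        schedule.append(proc)
        pending.append(proc)  # a processor may want several slots

    return schedule

print(run_sync_period(["P1", "P2"], ["P3", "P4"], total_slots=8))
# ['P1', 'P2', 'P3', 'P4', 'P3', 'P4', 'P3', 'P4']
```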
  • FIG. 4 is a timing diagram showing an example of the operation of the switching bus 205 according to the process shown in Figure 3.
  • the length of the synchronization period for the example has been reduced to simplify the explanation.
  • the SYNC signal line is asserted low for a single bus cycle to signal the beginning of the synchronization period.
  • a first and a second packet processor each request high priority bus transactions, as indicated by the corresponding REQ1, REQ2, and PRIORITY1 signal lines going active.
  • the central arbiter 212 grants the request of the first packet processor to begin data transfer during the second bus slot of the synchronization period as indicated by the ACK1 signal line being asserted low.
  • the first packet processor deasserts its request line REQ1, deasserts the PRIORITY1 signal line, and drives the data on the DATA[31:0] signal lines.
  • the first packet processor drives the TYPE[2:0] signal lines to indicate the data type and the SID[4:0] signal lines to indicate the source port of the data carried by the data bus. If the data type is a POE mask, the BE[1:0] signal lines are driven by the first packet processor to indicate the transfer rate.
  • the BE[1:0] signal lines are driven by the first packet processor to indicate the number of valid bytes in the last data word.
  • the PRIORITY1 signal line remains active after the first packet processor is granted control of the switching bus because the high priority request of the second packet processor remains pending.
  • the central arbiter may grant one high priority request for each bus slot that follows the initial bus slot during which high priority requests were entered. Therefore, the central arbiter 212 is shown as granting control of the switching bus 205 to the second packet processor in the bus slot immediately following the bus slot in which the first packet processor was granted access to the switching bus 205.
  • the second packet processor drives its data over the data bus, and the PRIORITY1 signal line is deasserted.
  • the second packet processor drives the TYPE[2:0] signal lines, the BE[1:0] signal lines, and the SID[4:0] signal lines.
  • the third and fourth packet processors detect the lack of high priority arbitration during the fifth bus slot, and both packet processors request low priority access to the switching bus 205 during the sixth bus slot of the synchronization period.
  • the PRIORITY1 signal line remains inactive to indicate a low priority request.
  • Low priority transactions are reserved for those transactions requiring multiple bus slots, and packet processors that request low priority transactions may request multiple bus slots during the same synchronization period and may pipeline those requests such that they may receive bus grants as often as every two bus slots.
  • the central arbiter allocates bus access a single slot at a time in a round-robin manner such that each packet processor is provided one bus slot at a time and such that no packet processor transfers data during two contiguous bus slots. This is shown in Figure 4 by the interleaving of bus grants for the third and fourth packet processors.
  • FIG. 5 is a flow chart showing generally a method for uniport port arbitration.
  • the process begins at process block 505.
  • the synchronization period is begun by the SYNC signal being asserted for the duration of a single bus clock cycle.
  • the packet processor of the source port (the "source packet processor") requests access to the switching bus at process block 515.
  • the bus access requested may be of either a high priority or a low priority.
  • the source port is granted bus access at process block 520.
  • when the source packet processor is initially granted bus access to transfer a system packet via the switching bus 205, the source packet processor sends a POE mask via the data bus, indicates that the data bus carries the POE mask by asserting the TYPE[2:0] signal lines as indicated in Table 1, indicates the rate of switching to the destination port by signaling via the BE[1:0] signal lines, and identifies itself as the source port of the POE mask by signaling via the SID[4:0] signal lines.
  • the source packet processor cannot continue transfer of the system packet until the packet processor of the destination port (the "destination packet processor") indicates that it is ready to receive the system packet.
  • the earliest that the destination packet processor can indicate that it is ready to receive the system packet is the synchronization period immediately following the initial synchronization period in which the source packet processor transmits the POE mask because the destination packet processor must itself arbitrate for bus access so that it may indicate that it is ready to receive the system packet.
  • Alternatively, it is possible for the destination packet processor to indicate it is ready to receive the system packet in the same synchronization period that it receives the POE mask.
  • the destination packet processor indicates that it is available by simultaneously asserting the "destination port available" (DPA) signal via the TYPE[2:0] signal lines and asserting the SID[4:0] signal lines with the identification of the source packet processor. In this manner, the source packet processor recognizes that the destination port is available.
  • the source packet processor requests access to the switching bus once every synchronization period to transmit the POE mask until the destination port available (DPA) signal is received.
  • Process block 530 shows that if a DPA signal is not received during the current synchronization period, the source packet processor repeats process blocks 510-525 until the DPA signal is received.
  • the DPA signal is asserted by the port of exit according to the process shown in Figure 6.
  • the source packet processor may begin transfer of the system packet at the next synchronization period.
  • the transfers of a POE mask and the DPA signal each take only a single bus slot, and it is possible to assign high priority to POE mask and DPA signal transfers. According to the present embodiment, however, the priorities of the POE mask and DPA signal transfers are assigned according to the relative rates of the source and destination packet processors as described above. Thus, the POE masks and DPA signals of high priority system packets are also high priority transactions requiring high priority bus access requests.
  • the process continues at process block 535, when the next synchronization period begins.
  • the source packet processor requests switching bus access at process block 540.
  • the source packet processor is granted access, and at process block 550, the source packet processor sends data via the DATA[31:0] signal lines.
  • the source packet processor indicates the type of data on the DATA[31:0] signal lines by toggling the TYPE signal lines. If the transfer is not complete at process block 555 and the transaction is a high priority transaction as determined at process block 556, process blocks 535 through 550 are repeated until transfer is complete, at which time the process ends at process block 560.
  • steps 540-550 are repeated to the extent that switching bus traffic allows the central arbiter 212 to allocate bus slots to that source packet processor. If the transfer is not complete at process block 555, the transaction is a low priority transaction as determined at process block 556, and no bus slots of the current synchronization period remain to be allocated as determined at process block 557, process blocks 535-550 are repeated. Once the DPA signal is received, uniport port arbitration is complete, and process blocks 535-550 are simply the switching bus arbitration method of Figures 3a and 3b shown in a simplified form.
  • Figure 6 is a flow diagram showing the port arbitration process of a packet processor at the destination port.
  • the process begins at process block 605. After a synchronization period has begun at process block 610, the destination port receives the POE mask, source ID, and rate information from the source port at process block 615.
  • the POE mask is received by the destination port in the same synchronization period that it is sent by the source port. If the destination port is ready to receive a system packet from the source port, the destination packet processor latches the source ID of the source port as indicated by the SID[4:0] signal lines.
  • the destination packet processor continues to store the source ID of a source port until the last data word of the system packet is received, as indicated by the TYPE[2:0] signal lines signaling EOP/CRC good or EOP/CRC bad.
  • The destination packet processor compares the source ID of the data driven on the data bus to the latched source ID and accepts the data only if the two source IDs match.
  • the destination packet processor clears its latch after receiving the last word of a system packet so that it may latch the source ID of the next source port to be serviced.
  • the source port sends the POE mask once per synchronization period, and process blocks 610 and 615 are repeated until the destination packet processor is ready to receive data from the source port. If the destination packet processor is ready to receive data, and the system packet is a high priority packet as determined at process block 622, the destination packet processor waits for the beginning of the subsequent synchronization period at process block 625 before requesting bus access at process block 630. If the system packet is a low priority packet as determined at process block 622, the flow may proceed directly to process block 630 without waiting for the beginning of the next synchronization period.
  • the destination packet processor is granted bus access, and the destination packet processor sends the DPA signal to the source port at process block 640. The process ends at process block 645.
  • FIG. 7 is a flow diagram showing a multiport switching bus arbitration method.
  • a source port has received and identified a multicast or broadcast frame and thus has a multiport arbitration request.
  • the source packet processor may optionally monitor the BFREE signal line at process block 710 to determine if uniport port arbitration has completed.
  • the BFREE signal line is pulled low if a port that has asserted a POE mask has not yet received a return DPA signal, receipt of which signals the end of uniport port arbitration.
  • the MPA counter is incremented at process block 716 to select a different packet processor. According to one embodiment, the MPA counter is incremented once every two bus clock cycles. If the MPA[4:0] signal lines indicate the source packet processor, the source packet processor begins multiport arbitration by freezing the MPA counter 213 at its current value. This occurs at process block 720. The MPA counter 213 may be frozen by asserting the BFREEZ signal line. All packet processors of the switching fabric circuit assert the BBUSY line upon detection of the BFREEZ signal going active.
  • the source packet processor is free to request a bus slot with which to send the POE mask for the multiport packet.
  • the source packet processor performs normal switching bus arbitration to send the multiport POE mask. If the multiport transaction is a high priority transaction, the source packet processor may have to wait until the start of the next synchronization period to make its request. If the multiport transaction is a low priority transaction, the source packet processor may request bus access anytime after high priority traffic has completed. The request is made at process block 725, and the central arbiter grants the source packet processor a bus slot at step 730.
  • the source packet processor sends the POE mask of the multiport system packet at process block 735. Packet processors that are not destinations of the multiport packet, and destination packet processors of the multiport system packet that are not busy, deassert the BBUSY signal. Destination packet processors that are not busy latch the source ID carried on the MPA[4:0] signal lines. This prevents destination packet processors that are not busy during one synchronization period from becoming busy in a subsequent synchronization period. If one or more destination packet processors are not ready to receive data at process block 740, the source packet processor repeats process blocks 725-735 once every synchronization period until all destination packet processors are ready to receive the multiport packet. When all destination packet processors are ready, the BBUSY signal is deasserted.
  • the source packet processor deasserts the BFREEZ signal (driving it high) in response to BBUSY being deasserted, and the destination packet processors latch the value carried by the MPA[4:0] signal lines to use as the source ID of the source packet processor.
  • Multiport port arbitration is then complete, and the MPA counter 213 may be incremented a bus clock cycle after BFREEZ goes high.
  • the transfer of the multiport packet then continues as if it were a uniport packet, which is shown in process blocks 535-560 of Figure 5.
  • Multiport arbitration may begin again before transfer of the multiport packet is completed.
  • Figure 8 shows an example of multiport arbitration according to the method shown in Figure 7.
  • the duration of the synchronization period is reduced to simplify the example.
  • a first synchronization period begins when the SYNC signal goes low for one bus clock cycle.
  • the MPA counter is incremented once every two bus clock cycles, as indicated by the MPA[4:0] signal lines.
  • the MPA counter is incremented, indicating port 2 in binary "00010.”
  • Port 2 has a multiport packet to transfer and freezes the MPA counter 213 during the fourth bus clock cycle by asserting the BFREEZ signal, and the BFREE signal goes low.
  • BBUSY remains low.
  • the source packet processor continues to send the multiport POE mask until all destination ports are ready to receive the multiport packet.
  • Figure 8 shows two vertical lines that indicate the passage of time until the synchronization period after all destination packet processors have signaled their readiness to receive the multiport packet by deasserting BBUSY. As shown, the BFREEZ signal goes inactive in response to the deassertion of BBUSY.

Abstract

A switching fabric circuit that provides on-the-fly switching of packets, an expandable number of ports, and the interconnection of heterogeneous LAN segments. The switching fabric circuit includes a switching link that comprises a switching bus and a plurality of packet processors, wherein each packet processor is coupled between the switching bus and a LAN segment. The switching bus is a time division multiple access (TDMA) bus, and arbitration for switching bus access is distinct from arbitration for access to the ports of the switching fabric circuit. Switching bus arbitration is done according to one of two priority levels, wherein high priority requests are guaranteed access to the switching bus during a synchronization period in which the high priority requests are made. This provides for guaranteed throughput and on-the-fly switching of packets. Port arbitration may be either uniport port arbitration or multiport port arbitration. Port arbitration is characterized by the transmission of a POE mask by a source port to a destination port and by the subsequent transmission of a DPA signal by the destination port to the source port.

Description

COMPUTER NETWORK SWITCHING SYSTEM WITH EXPANDABLE NUMBER OF PORTS
FIELD OF THE INVENTION
The present invention relates generally to local area networks and more specifically to switching fabric circuits for segmented networks.
BACKGROUND
Local area networks (LANs) provide for the interconnection of a multiplicity of endstations so that a multiplicity of users may share information in an efficient and economical manner. Each endstation of a LAN can typically communicate with every other endstation that is physically connected to that LAN.
As the number of endstations of a LAN increases, the effective throughput for each endstation of the LAN decreases. To increase the throughput for each endstation, the LAN can be "segmented" into smaller interconnected sub-networks or "LAN segments" that each have fewer endstations. The load for each sub-network of a segmented LAN is reduced, leading to increased throughput for a segmented LAN when compared to a similarly sized, unsegmented LAN.
Interconnection of the segments of prior segmented LANs is achieved by connecting several individual sub-networks to the ports of a "switching fabric circuit." The term "switching fabric circuit" as used here is meant to encompass any circuit that provides for the processing and forwarding of information between LAN segments in a segmented network. For example, one prior switching fabric circuit includes a number of conventional Ethernet bridges connected to a backbone network that is controlled by a system processor. The system processor is responsible for filtering and forwarding frames of data between LAN segments. Filtering and forwarding typically requires first storing incoming frames of data received from the ports and subsequently forwarding the frames of data to the appropriate destination port. One disadvantage of the filtering and forwarding schemes of prior bridging architectures is the delay time associated with the filtering and forwarding process.
To address this and other disadvantages of bridging architectures, a second prior switching fabric circuit commercially known as Etherswitch™ was developed. Etherswitch™ is sold by Kalpana, Inc., of Sunnyvale, California, and described in U.S. Patent No. 5,274,631, of Bhardwaj, issued on December 28, 1993, entitled Computer Network Switching System. The Etherswitch™ includes a number of packet processors, one for each port, that are each connected to multiplexor logic. The multiplexor logic acts as a crossbar switch that provides a direct physical connection between the packet processors of the source and destination ports. The multiplexor logic allows for packets to be transmitted directly from the packet processor of the source port to the packet processor of the destination port without first storing the packets. The process of forwarding a packet without first storing the packet is known as "switching on-the-fly."
While on-the-fly switching provides significant speed advantages over other prior art switching fabric circuits, the use of the multiplexor logic effectively limits the number of ports of the switching fabric circuit. Because the multiplexor logic provides a physical connection between ports, the hardware of the multiplexor logic is designed to service a fixed number of ports, and providing additional ports to the multiplexor logic to allow for future expansion may be cost-prohibitive. For example, doubling the maximum number of ports typically results in squaring the complexity of the multiplexor logic, which greatly increases the cost of the multiplexor logic.
Further, each port is typically limited to only a portion of the total bandwidth for the switching fabric circuit by virtue of the physical link between the port and the multiplexor logic. For example, the total bandwidth for a ten port Etherswitch may be 100 Mb/s, wherein each port is provided 10 Mb/s of bandwidth. No port may request more than its 10 Mb/s of bandwidth. A port is therefore unable to use the unused bandwidth of idle ports to increase its individual throughput.
SUMMARY AND OBJECTS OF THE INVENTION
Therefore, one object of the present invention is to provide on-the-fly switching for a switching fabric circuit.
Another object of the invention is to provide a switching fabric circuit that allows for the cost-effective expansion of the number of ports of the switching fabric circuit.
Another object of the invention is to provide a switching fabric circuit that provides for the interconnection of heterogeneous LAN segments that operate at different data transfer rates.
Another object of the invention is to provide a single point where statistics regarding segmented network traffic may be gathered.
Another object of the invention is to allow the oversubscription of bandwidth for the switching fabric circuit, wherein more bandwidth may be requested than is available.
These and other objects of the invention are provided by a switching fabric circuit that comprises a plurality of ports. Each port is coupled to a corresponding one of a plurality of local area network (LAN) segments, wherein each LAN segment may operate according to a different LAN communications protocol. The switching fabric circuit also comprises a switching link coupled to the plurality of ports for interconnecting the LAN segments. The switching fabric circuit is for receiving requests for data transfer operations from the plurality of ports during a synchronization period, for prioritizing the requests for data transfer operations according to a high priority and a low priority during the synchronization period, and for granting requests for data transfer operations such that ports requesting data transfer operations at the high priority are guaranteed access to the switching link during the synchronization period and ports requesting data transfer operations at the low priority are provided access to the switching link for a remainder of the synchronization period.
The switching fabric circuit may operate according to a distributed arbitration scheme wherein port arbitration is distinct from switching bus arbitration. Therefore, a method for transferring a data packet is provided. A first port requests control of the bus to transfer a data packet, and a central arbiter grants control of the bus to the first port. The first port transfers a port of exit mask via the bus. The port of exit mask indicates a second port as a destination of the data packet. The first port also transfers a source identification to identify itself as the source port of the port of exit mask. In response to the port of exit mask, the second port requests control of the bus to transfer information to the first port indicating that the second port is ready to receive the data packet. The central arbiter grants control of the bus to the second port, and the second port transfers the information along with the source identification to identify the first port as the destination of the information. Once the information is received, the first port may begin transfer of the data packet by requesting access to the bus to transfer the data packet to the second port.
Multiport port arbitration may be distinct from uniport port arbitration. Therefore, a method for transferring a multiple destination packet in the switching link of the switching fabric circuit is provided. A first packet processor having a multiple destination packet to transfer monitors the switching bus to determine when the switching bus is free. A counter that maintains a count indicating which of the plurality of packet processors of the switching link is a selected packet processor is incremented. Signal lines of the switching bus carry the count to indicate when the first packet processor is the selected packet processor. The first packet processor responds to being the selected packet processor by indicating that it has the multiple destination packet to transfer. The counter is stopped in response to the first packet processor having the multiple destination packet to transfer. The first packet processor transmits a port of exit mask on the switching bus. The port of exit mask indicates which of the plurality of packet processors are destination packet processors for the multiple destination packet. When the destination packet processors are ready to receive the multiple destination packet, they indicate readiness to the first packet processor such that the first packet processor begins transferring the multiple destination packet. The counter is restarted in response to the destination packet processors indicating that they are ready to receive the multiple destination packet.
Other objects, features, and advantages of the present invention will be apparent from the accompanying drawings and from the detailed description which follows below.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements, and in which:
FIGURE 1 shows a switching fabric circuit including a switching link according to one embodiment.
FIGURE 2 shows the switching link of a switching fabric circuit in more detail.
FIGURES 3a and 3b are a flow chart showing a switching bus arbitration method.
FIGURE 4 is a timing diagram showing switching bus arbitration according to the method shown in FIGURES 3a and 3b.
FIGURE 5 is a flow chart showing a uniport port arbitration method as performed by a source packet processor.
FIGURE 6 is a flow chart showing a uniport port arbitration method as performed by a destination packet processor.
FIGURE 7 is a flow chart showing one method for multiport arbitration.
FIGURE 8 is a timing diagram showing multiport arbitration according to the method shown in FIGURE 7.
DETAILED DESCRIPTION
Figure 1 shows a switching fabric circuit 100 that includes eight ports 110a-110h and a switching link 105. The switching link 105 is coupled to each of the ports 110a-110h and provides a communications path so that each port 110 may share information with every other port. As will be discussed, the architecture of the switching link 105 allows for on-the-fly switching, an expandable number of ports, and the interconnection of local area network (LAN) segments that may transfer data at different rates.
Each of the ports 110a-110h is coupled to a corresponding one of the LAN segments 115a-115h. For example, port 110a is coupled to LAN segment 115a, port 110b is coupled to LAN segment 115b, etc. Each LAN segment 115 comprises a data link such as a coaxial cable, twisted pair wire, or optical fiber, and the ports 110a-110h provide the appropriate electrical interface between the switching link 105 and the LAN segments 115a-115h.
LAN segments 115a-115h may each operate according to different LAN standards and protocols, and the switching fabric circuit 100 provides for rate matching so that communications may occur between LAN segments that operate according to different communications protocols. For example, LAN segments 115a-115e may operate according to the IEEE 802.3 10Base5 LAN standard, which is commonly known as Ethernet. The IEEE 802.3 10Base5 LAN standard provides for a transfer rate of 10 Mb/s. Alternatively, LAN segments 115a-115e may operate at 16 Mb/s according to the IEEE 802.5 Token Ring standard. LAN segments 115f and 115g may operate according to the 802.3 100BaseT Ethernet standard, which provides for a 100 Mb/s data transfer rate. Finally, LAN segment 115h may be an ATM network link that provides for data transfer at 150 Mb/s. The switching fabric circuit 100 may be configured to allow for LAN segments that operate according to any of a number of standard and non-standard communications protocols.
There may be one or more endstations 120 coupled to each LAN segment 115. Endstations are shown in Figure 1 as small boxes coupled to the LAN segments 115. Alternatively, a LAN segment 115 may be provided as a link between the switching fabric circuit 100 and another switching fabric circuit (not shown). The switching fabric circuit 100 interconnects the endstations of the various LAN segments to form a segmented network. Typically, each of the endstations 120 coupled to the LAN segments 115a-115h has a unique address that is globally defined for the segmented network. Wherein the segmented network is subdivided into two or more virtual networks, addresses may be locally defined for each virtual network.
Communications between endstations that are coupled to the same LAN segment 115 may proceed without the switching fabric circuit 100; however, communications between endstations of different LAN segments necessarily involve the operation of the switching fabric circuit 100. An endstation that is the source of a data frame for transfer is called the "source endstation," and a port that receives a data frame for transfer to another port via the switching link 105 is called a "port of entry" or a "source port." An endstation that is the destination of a data frame is called a "destination endstation," and a port that receives a data frame via the switching link 105 is called a "port of exit" or a "destination port."
A data frame sent by an endstation typically includes both a source address field and a destination address field. The source address field contains the network address of the source endstation for the frame. The contents of the destination address field depend on the type of transaction. A "unicast" transaction is a network transaction in which the destination address field of the frame indicates a single destination endstation. Multicast and broadcast transaction protocols are defined by the LAN communications protocol of a LAN segment. For LAN segments that operate according to the IEEE 802 standards, a "multicast" transaction is a network transaction in which the destination address field of the frame contains a predefined multicast address according to IEEE standards, and a "broadcast" transaction is a network transaction in which the destination address field of the frame is set to a default value that indicates the frame is to go to all endstations of the segmented network.
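As an illustrative aside that is not part of the original disclosure, the unicast/multicast/broadcast distinction can be sketched in C for IEEE 802 addresses, where the group-address (multicast) bit is the least significant bit of the first address octet and the broadcast address is the all-ones value; all function and constant names here are illustrative only.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

enum frame_class { FRAME_UNICAST, FRAME_MULTICAST, FRAME_BROADCAST };

/* The broadcast address is the all-ones default value. */
static const uint8_t BROADCAST_ADDR[6] = {0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF};

static enum frame_class classify_dest(const uint8_t dest[6])
{
    if (memcmp(dest, BROADCAST_ADDR, 6) == 0)
        return FRAME_BROADCAST;
    /* In IEEE 802 addressing, the least significant bit of the first
       octet is the group-address bit that marks a multicast. */
    if (dest[0] & 0x01)
        return FRAME_MULTICAST;
    return FRAME_UNICAST;
}

int main(void)
{
    const uint8_t dest[6] = {0x01, 0x00, 0x5E, 0x00, 0x00, 0x01};
    printf("class = %d\n", classify_dest(dest));  /* prints 1: multicast */
    return 0;
}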
The switching link 105 monitors each data frame that is transferred on each LAN segment 115 and determines which data frames are broadcast frames, multicast frames, or unicast frames having a destination that is remote from the port of entry. These three types of data frames are directed to the appropriate port or ports of exit by the switching link 105. Unicast frames having destination endstations that are local to the port of entry are ignored by the switching link 105 except to the extent that such frames are used during the network learning process whereby the switching link 105 "learns" the address and location of each endstation that is coupled to the switching fabric circuit 100.
Figure 2 shows the switching link 105 in greater detail. Switching link 105 includes a switching bus 205, a multiplicity of packet processors 210a-210h, a system processor 215, a processor bus 220, a multiport arbitration ("MPA") counter 213, and a central arbiter 212. Each of the packet processors 210a-210h is coupled to a corresponding one of the ports 110a-110h. For example, packet processor 210a is coupled to port 110a, packet processor 210b is coupled to port 110b, etc. The packet processors 210a-210h provide the communications protocol interfaces between the LAN segments 115a-115h and the switching link 105.
Communications between ports 110 of the switching fabric circuit 100 are done using system packets, and the switching bus 205 is the mechanism by which system packets are transferred between the ports. System packets are LAN data frames that include the additional information required for forwarding the LAN data frames to the appropriate ports of the switching link 105. When a data frame is received from a LAN segment 115 via a port 110 (the port of entry), and that data frame is to be transferred to a remote port (the port of exit), the packet processor of the port of entry generates a system header that may be appended to the beginning of the data frame to create the system packet. The system header contains additional information that may provide additional forwarding options to the port of exit that the port of exit may use to determine the final disposition of the data frame. A system header may be used, for example, when the system packet is destined for a port of exit coupled to an ATM LAN segment. The system packet is transferred or "switched" via the switching bus 205 to the packet processor of the port of exit. When a system packet is received by a port of exit, the packet processor of the port of exit strips any system header from the system packet to retrieve the transmitted data frame.
System packets that include unicast data frames are called "unicast packets." Similarly, system packets that include multicast frames are called "multicast packets," and system packets that include broadcast frames are called "broadcast packets."
To determine whether a data frame received by a packet processor 210 from a port 110 is to be switched to a remote port by the switching bus 205, each packet processor 210 includes a filter table that contains entries for each known endstation of the segmented network. Each packet processor performs lookups of the filter table using the destination address field of the received data frame and creates a system packet to forward the data frame if the data frame specifies a remote destination endstation. The system processor 215 maintains a master filter table for the switching fabric circuit and updates the local filter tables of each of the packet processors via the processor bus 220. Entries may be added to the filter tables automatically using a bridge learning process. Alternatively, a system administrator may provide master filter table entries for each endstation of the segmented network.
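A minimal sketch of the filter-table lookup just described, in C. The entry layout, the assignment of the extra POE mask bit to the system processor, and all names are assumptions made for illustration; the patent does not specify a table format.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical filter-table entry: each entry maps an endstation
   address to the port behind which it lives and to a precomputed
   port-of-exit (POE) mask for reaching that port. */
struct filter_entry {
    uint8_t  addr[6];
    uint8_t  port;
    uint32_t poe_mask;
};

/* The system processor is assumed to own the extra POE mask bit. */
#define POE_SYSTEM_PROCESSOR (1u << 31)

/* Returns true (and a POE mask) if the frame must be switched to a
   remote port; false if the destination is local and may be ignored. */
static bool filter_lookup(const struct filter_entry *table, int n,
                          const uint8_t dest[6], uint8_t port_of_entry,
                          uint32_t *poe_mask_out)
{
    for (int i = 0; i < n; i++) {
        if (memcmp(table[i].addr, dest, 6) == 0) {
            if (table[i].port == port_of_entry)
                return false;              /* local traffic: ignore */
            *poe_mask_out = table[i].poe_mask;
            return true;                   /* remote: forward       */
        }
    }
    /* Unknown destination: forward to the system processor so the
       bridge learning process can resolve it. */
    *poe_mask_out = POE_SYSTEM_PROCESSOR;
    return true;
}

int main(void)
{
    const struct filter_entry table[] = {
        { {0x00, 0x11, 0x22, 0x33, 0x44, 0x55}, 3, 1u << 3 },
    };
    const uint8_t dest[6] = {0x00, 0x11, 0x22, 0x33, 0x44, 0x55};
    uint32_t mask;
    if (filter_lookup(table, 1, dest, 0, &mask))
        printf("forward with POE mask 0x%08x\n", mask);  /* 0x00000008 */
    return 0;
}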
Each of the packet processors 210 and the system processor 215 may use the switching bus 205 to transfer system packets to each other. Typically, system packets are switched from the packet processors 210 to the system processor 215 during the network learning process or when a port does not otherwise know the destination endstation specified by the destination address field of the data frame. The system processor 215 uses the packets that are forwarded to it via the switching bus 205 to build the master filter table for the switching fabric circuit 100.
The transmission of a system packet is preceded by the transmission of a port of exit (POE) mask that comprises a number of bits equal to one more than the maximum number of ports for the switching fabric circuit 100. A POE mask bit is provided for each port and for the system processor. Each packet processor 210 monitors the switching bus 205 to determine if the bit of the POE mask that indicates the port to which the packet processor is coupled is set to a logic high level. If the bit that indicates the port of the packet processor is set to a logic high level, the packet processor of that port recognizes that it is a port of exit for the system packet. If the bit that indicates the port is set to a logic low level, the packet processor for that port may ignore the system packet because the port is not a port of exit for the system packet. The POE mask for a system packet may be retrieved from the local filter table during the look-up performed by the source packet processor to determine the location of the destination endstation.
As described, system packets are switched between the ports using the switching bus 205, and the central arbiter 212 is coupled to the switching bus 205 for arbitrating requests made by the packet processors 210 to access the switching bus 205. The switching bus 205 includes one request signal line (REQ) and one acknowledge signal line (ACK) for the system processor 215. The switching bus 205 also includes one REQ signal line and one ACK signal line for each packet processor that may be coupled to the switching bus 205. All of the REQ signal lines and the ACK signal lines are coupled directly between the central arbiter 212 and the corresponding system or packet processor. For example, there may be a total of thirty-two sets of REQ and ACK signal lines although only nine sets, eight for the packet processors and one for the system processor, may actually be used. Therefore, more packet processors may be added to the switching link 105, and the number of ports for the switching fabric circuit 100 may increase accordingly.
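The per-port POE mask test described above reduces to a single bit test. A one-line C sketch (names illustrative):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Each packet processor watches POE cycles and tests only its own bit. */
static bool i_am_port_of_exit(uint32_t poe_mask, unsigned my_bit)
{
    return (poe_mask >> my_bit) & 1u;
}

int main(void)
{
    printf("%d\n", i_am_port_of_exit(0x10u, 4));  /* prints 1 */
    return 0;
}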
Because all communications between LAN segments use the switching bus 205, the switching bus 205 may be conveniently used to gather information regarding the bandwidth usage of each LAN segment or endstation. This information may be used to meter usage of the switching fabric circuit 100. The switching bus 205 also provides for the convenient expansion of the total number of ports at a reduced cost when compared to the multiplexor logic of the prior system.
The switching bus 205 is a partially asynchronous time division multiple access (TDMA) bus that includes a 32-bit data bus for transferring data between the system and packet processors. Data transfer on the switching bus 205 occurs during discrete periods of time called "synchronization periods," each of which is subdivided into a multiplicity of bus slots, wherein each bus slot is equal to one bus clock cycle. The switching bus 205 shown in Figure 2 operates at a 16.25 MHz clock speed, and fifty-two bus slots are provided per synchronization period such that the total duration of the synchronization period is equal to 3.2 microseconds. Thirty-two bits of data may be transferred each clock cycle such that the maximum bandwidth for the switching bus of this embodiment is 520 Mb/s. Of course, the maximum bandwidth increases as the system clock speed increases, as the width of the data bus increases, or as the number of data transfers per clock cycle increases.
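The stated figures can be checked with simple arithmetic; the short C program below reproduces the 520 Mb/s maximum bandwidth (32 bits per slot at 16.25 MHz) and the 3.2 microsecond synchronization period (52 slots of one clock each).

#include <stdio.h>

int main(void)
{
    const double clock_hz = 16.25e6;  /* switching bus clock          */
    const double bus_bits = 32.0;     /* data bits moved per bus slot */
    const double slots    = 52.0;     /* bus slots per sync period    */

    /* One 32-bit transfer per clock: 32 x 16.25e6 = 520 Mb/s. */
    printf("maximum bandwidth: %.0f Mb/s\n", bus_bits * clock_hz / 1e6);

    /* Fifty-two slots of one clock each: 52 / 16.25e6 = 3.2 us. */
    printf("sync period: %.1f microseconds\n", slots / clock_hz * 1e6);
    return 0;
}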
The central arbiter 212 dynamically allocates bandwidth for the switching bus 205 according to two levels of priority: high priority and low priority. High priority transactions are guaranteed access to the switching bus 205 (guaranteed bandwidth) so that high priority packets may be switched on-the-fly. All other transactions are assigned low priority. According to one embodiment of the switching bus 205, high priority transactions are reserved for transactions between 10 Mb/s LAN segments. Each packet processor 210 includes circuitry for determining whether its request to transfer a particular system packet is a high priority request or a low priority request. This is discussed in more detail below.
Dynamic bus slot allocation may be performed in a number of different ways. According to a first method, the central bus arbiter 212 allocates bus slots to requesting packet processors after access requests are received, and no packet processor has a dedicated bus slot. The number of bus slots in a synchronization period is chosen to be great enough to provide guaranteed access to each packet processor, as if all the packet processors could initiate high priority transactions. According to a second method, bus slots are initially allocated to each packet processor and deallocated if a packet processor does not request a high priority bus access. The order in which a packet processor may access the switching bus is altered when bus slots are deallocated such that all high priority transactions occur in a single block of contiguous bus slots. The second method may not require a central arbiter. Both of these methods may be contrasted with typical prior time division multiplexed (TDM) buses wherein each component coupled to the bus has a dedicated bus slot. If the dedicated bus slot is unused, the TDM bus is idle for that bus slot.
As mentioned previously, all high priority requests are granted in the synchronization period in which they are made. Thus, high priority bus traffic is transferred synchronously, and the number of bus slots is equal to at least the number of high priority requests that may be made per synchronization period. Any remaining bus slots of the synchronization period are used for low priority transactions. According to the present embodiment, each high priority transaction of a synchronization period is provided one bus slot for data transfer, and each low priority transaction competes for as many slots as are required before the next synchronization period begins.
The width of the data bus, the duration of the synchronization period, and the guaranteeing of bus access for all high priority transactions during each synchronization period combine to provide a guaranteed throughput for high priority transactions, which allows for high priority system packets to be switched on-the-fly such that the port of entry packet processor may begin transfer of a system packet before the entire data frame has been received from the LAN segment. Wherein the data bus is thirty-two bits wide, the synchronization period is 3.2 microseconds, and one bus slot is provided per synchronization period, a throughput of 10 Mb/s is guaranteed, and on-the-fly switching between two 10 Mb/s LAN segments may occur. To guarantee a throughput of 16 Mb/s, the duration of the synchronization period may be reduced to 2 microseconds if the width of the data bus and the number of bus slots guaranteed per synchronization period remain constant.
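The guaranteed-throughput arithmetic is likewise easy to verify: 32 bits per guaranteed slot every 3.2 microseconds is 10 Mb/s, and shortening the period to 2 microseconds yields 16 Mb/s. A small C helper (illustrative names):

#include <stdio.h>

/* Guaranteed throughput = bits per slot x guaranteed slots per period
   / period length; bits per microsecond are megabits per second. */
static double guaranteed_mbps(double bits_per_slot, double slots_per_period,
                              double period_us)
{
    return bits_per_slot * slots_per_period / period_us;
}

int main(void)
{
    printf("%.0f Mb/s\n", guaranteed_mbps(32, 1, 3.2));  /* 10 Mb/s */
    printf("%.0f Mb/s\n", guaranteed_mbps(32, 1, 2.0));  /* 16 Mb/s */
    return 0;
}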
TABLE 1
SWITCHING BUS SIGNAL LINES

Signal Name   Qty.         Type                      Definition
DATA[31:0]:H  32           Slot driven, Tri-state    DATA[31:0] signals carry packet data during data transmission slots, and the Port Of Exit mask during arbitration slots.
SYNC:L        1            Single driver, Push-pull  SYNC indicates the beginning of a sync period. It is asserted for one bus clock every 3.2 usec (microseconds).
PRIORITY1:L   1            Multi driver, OC          Packet processors drive this line during bus requests for high priority traffic.
REQ(x):L      1 per port   -                         The request signal line from the packet processor to the bus arbiter.
ACK(x):L      1 per port   -                         The acknowledge line from the bus arbiter to the packet processor.
TYPE[2:0]:H   3            Slot driven, Tri-state    TYPE signals indicate what type of information is driven on the DATA[31:0] signals during the current clock cycle.
SID[4:0]:H    5            Slot driven               Source ID. Indicates the source of the frame. Driven by the source port during POE and DATA type bus cycles, and by the destination port during DPA type bus cycles.
BE[1:0]:H     2            Slot driven, Tri-state    During the last data transfer slot, BE[1:0] indicates the number of valid bytes. During POE, indicates the rate and whether the packet is multicast or unicast.
MPA[4:0]:H    5            Single driver, Push-pull  MPA[4:0] indicates the port address of the packet processor port that is allowed to instigate a multiport arbitration.
BFREE:H       1            Multi driver, Wired-or    BFREE is asserted when no arbitration is occurring on the switching bus. This wired-or signal is pulled low by any source port that is currently asserting a POE.
BBUSY:L       1            Multi driver, Wired-or    BBUSY is asserted when a destination port addressed by a multiport POE is currently receiving a packet.
BFREEZ:L      1            Slot driven, Tri-state    BFREEZ is asserted to indicate that a source packet processor is currently arbitrating for transmission of a multiport packet.
RESET:H       1            Single driver, Push-pull  RESET is a synchronous signal used to reset all packet processors and synchronize all internal clocks.
CLK:H         1            Single driver, Push-pull  CLK is a 32.5 MHz clock that is used by all packet processors to synchronize to the switching bus.
Table 1 shows the signal definition for the signal lines of the switching bus 205 shown in Figure 2. As mentioned previously, the switching bus 205 includes 32 data bus signal lines, DATA[31:0]. Three "type" signal lines TYPE[2:0] are provided to each of the system and packet processors to indicate what type of information is currently being driven on the data bus during the bus slot. The SYNC signal line is provided to each of the system and packet processors to indicate the beginning of a synchronization period when the SYNC signal is asserted logic low.
The RESET and CLK signals are provided to each of the packet processors. The packet processors include internal clock circuitry for deriving their own internal 16.25 MHz clock signals using the RESET and CLK signals. The internal clock circuits of the packet processors are designed to reduce clock skew between packet processors. The packet processors may also use the SYNC signal to synchronize the internal clock circuits.
TABLE 2

Type           TYPE[2:0]  BE[1:0]  Notes
IDLE           111        -        (Sourced by separate active driver)
POE            110        11       Fast, multicast
POE            110        10       Fast, unicast
POE            110        01       Slow, multicast
POE            110        00       Slow, unicast
HEADER         101        -
DATA           100        -
EOP/CRC good   011        xx       BE = # of valid bytes
EOP/CRC bad    010        xx       BE = # of valid bytes
DPA            001        -
DPOV           000        -

Table 2 shows the data types for the 32-bit data bus as may be indicated by the TYPE[2:0] signal lines. The TYPE[2:0] signal lines are asserted by the packet processor that has control of the switching bus 205 during the bus slot in which data is being transferred by the packet processor on the 32-bit data bus. The TYPE signal lines are discussed in more detail below.
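For reference, the TYPE[2:0] encodings of Table 2 and the BE[1:0] sub-decode for POE cycles can be captured as C constants; the enum and helper names below are illustrative only.

#include <stdbool.h>
#include <stdio.h>

/* TYPE[2:0] encodings from Table 2. */
enum bus_type {
    TYPE_DPOV         = 0,  /* 000 */
    TYPE_DPA          = 1,  /* 001 */
    TYPE_EOP_CRC_BAD  = 2,  /* 010: BE = number of valid bytes */
    TYPE_EOP_CRC_GOOD = 3,  /* 011: BE = number of valid bytes */
    TYPE_DATA         = 4,  /* 100 */
    TYPE_HEADER       = 5,  /* 101 */
    TYPE_POE          = 6,  /* 110: BE carries rate and cast bits */
    TYPE_IDLE         = 7   /* 111 */
};

/* During a POE cycle, Table 2 shows BE[1] selecting fast/slow and
   BE[0] selecting multicast/unicast. */
static bool poe_is_fast(unsigned be)      { return (be >> 1) & 1u; }
static bool poe_is_multicast(unsigned be) { return be & 1u; }

int main(void)
{
    unsigned be = 2;  /* binary 10: fast, unicast */
    printf("fast=%d multicast=%d\n", poe_is_fast(be), poe_is_multicast(be));
    return 0;
}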
Returning to Table 1, the PRIORITY1 signal line is a wired-OR signal line that is driven active (logic low) when a packet processor requests a high priority transaction. The PRIORITY1 signal line remains active as long as there is an outstanding high priority request. When the last high priority request is serviced, the PRIORITY1 signal line goes inactive, and low priority arbitration may begin. Once the PRIORITY1 signal goes inactive, it may not be asserted again until the next synchronization period.
Arbitration for unicast transactions ("uniport arbitration") is distributed, which means that arbitration for access to the switching bus 205 ("switching bus arbitration") is distinct from arbitration for access to a particular destination port ("port arbitration"). Switching bus arbitration is the process whereby a port requests and is granted access to the switching bus. Port arbitration is the process whereby a source port sends the POE mask and waits for the DPA signal from the destination port so that the source port may begin transmission of a system packet to the destination port. Each port must successfully complete switching bus arbitration before it is allowed to initiate port arbitration. Further, once a source port has successfully arbitrated for access to the destination port, the source port continues to initiate switching bus arbitration each synchronization period to transfer the system packet.
The SYNC, PRIORITY1, REQ, and ACK signal lines are used primarily for switching bus arbitration. Signal lines SID[4:0], BE[1:0], DATA[31:0], and TYPE[2:0] may be used for port arbitration. The SID[4:0] signal lines indicate the source ID of the source port that is currently driving data on the DATA[31:0] signal lines. Each of the packet processors 210 includes circuitry for arbitrating port access requests. This circuitry is generally comprised of a latch that latches the contents of the SID[4:0] signal lines when the packet processor is free to accept a request. By latching the source ID of the source port of a system packet, the packet processor locks out all other packet processors until transmission of the system packet is complete, at which time the latch is cleared. Port access requests may be queued by the destination port, wherein the queue is cleared at the beginning of each synchronization period. To better ensure fairness, port accesses may be granted in a round robin manner, and high speed requesters (e.g. 100 Mb/s or 150 Mb/s packet processors) may be given priority over low speed requesters (e.g. 10 Mb/s packet processors) by low speed ports. During transfer of the POE mask, the BE[1:0] signal lines are used to indicate the rate of the data transfer on the DATA[31:0] signal lines such that the packet processor of the port of exit may determine the rate matching rules that are applicable to the data transfer. The TYPE[2:0] signals indicate what type of information is driven on the DATA[31:0] signal lines during the current clock cycle.
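The source-ID latch and lockout behavior just described might be modeled as follows. This is a sketch with illustrative names (SID_NONE is an assumed sentinel value), not the actual circuit:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SID_NONE 0xFF   /* assumed sentinel: SID[4:0] is only 5 bits */

struct dest_port {
    uint8_t latched_sid;   /* source ID being serviced, or SID_NONE */
};

/* On a POE cycle addressed to this port: latch the source ID only if
   the port is free, locking out all other source ports. */
static bool on_poe(struct dest_port *p, uint8_t sid)
{
    if (p->latched_sid != SID_NONE)
        return false;
    p->latched_sid = sid;
    return true;
}

/* On a DATA cycle: accept only words whose SID matches the latch. */
static bool accept_data(const struct dest_port *p, uint8_t sid)
{
    return p->latched_sid == sid;
}

/* On EOP (CRC good or bad): clear the latch for the next source. */
static void on_eop(struct dest_port *p)
{
    p->latched_sid = SID_NONE;
}

int main(void)
{
    struct dest_port p = { SID_NONE };
    on_poe(&p, 3);   /* source port 3 wins the latch */
    printf("%d %d\n", accept_data(&p, 3), accept_data(&p, 5));  /* 1 0 */
    on_eop(&p);
    return 0;
}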
Port arbitration for multicast and broadcast transactions ("multiport port arbitration") is distinct from uniport port arbitration. According to one embodiment of the switching link 105, multiport port arbitration is postponed until all uniport port arbitration has concluded. Multiport port arbitration is the process whereby a source port having a multiport packet arbitrates for access to the destination ports of the multiport packet. Once multiport port arbitration has successfully completed, transfer of the multiport packet is done using switching bus arbitration.
To provide for multiport port arbitration, the switching bus 205 includes signal lines
MPA[4:0], BFREE, BBUSY, and BFREEZ. For the present embodiment, only one port is allowed to arbitrate for a multiport transaction during a synchronization period. Alternatively, multiple ports may be allowed to initiate multiport arbitration simultaneously. The BFREE signal line is provided to indicate when uniport arbitration has completed so that multiport port arbitration may begin. The MPA[4:0] signal lines are coupled to the MPA counter 213 and are provided to select the packet processor of the switching fabric circuit 100 that is allowed to initiate multiport port arbitration. The MPA counter 213 is incremented every two bus clock cycles unless the MPA counter 213 is disabled. The output of the MPA counter 213 is carried by the MPA[4:0] signal lines. The BFREEZ signal line is provided to signal the beginning of multiport arbitration to all ports and to disable the MPA counter 213.
If the packet processor indicated by the MPA[4:0] signal lines has a multiport transaction and either uniport port arbitration has completed or a predetermined amount of time has elapsed, that packet processor is allowed to arbitrate for access to the destination ports of exit. Optionally, a packet processor may begin multiport arbitration immediately upon being indicated by the MPA[4:0] signal lines. BFREEZ goes active (logic low) to freeze the value carried on the MPA[4:0] signal lines. The BBUSY signal line is provided so that ports of exit of the multiport system packet can indicate when they are ready to receive the multiport system packet. The multiport packet may be transferred to all of the ports of exit indicated by the multiport POE mask when all of the packet processors are ready to receive the multiport system packet.
Before describing switching bus transactions in detail, the rate matching methodology that may be used by the switching link 105 will be discussed. The packet processors 210 of the switching link 105 include receive queues for single file queuing of data frames, which requires that data frames received from the LAN segments must be able to leave the receive queue at least as fast as they enter the receive queue. Similarly, the packet processors 210 include transmit queues for single file queuing of system packets, which requires that system packets received from the switching bus 205 must be able to leave the transmit queue at least as fast as they enter the transmit queue. The receive and transmit queues of a packet processor may generally comprise first-in first-out buffers (FIFOs).
Several rate matching rules may be derived from the queuing requirements. A first rate matching rule is that a unicast packet is switched between two ports at the rate of the faster of the two ports, wherein the rate of the port is defined by the rate of the associated LAN segment 115. A second rate matching rule is that a multicast packet is switched at the rate of the faster of the source port or the fastest destination addressed by the multicast packet. A third rate matching rule that follows from the first two rate matching rules is that all packet processors 210 must be able to send and receive data at the fastest rate supported by the switching fabric circuit 100.
To prevent system-wide queue delays due to half duplex bus interfaces, a fourth rate matching rule is that all packet processors that are coupled to LAN segments having the fastest rate must be capable of full duplex communication such that they are able to both send and receive simultaneously at the fastest rate. A fifth rate matching rule similarly requires that all packet processors coupled to the slowest rate LAN segments are able to send and receive at the slowest rate simultaneously. To reduce system costs, packet processors coupled to the slowest rate LAN segments need not be able to simultaneously send and receive at the fastest rate. The rules as applied to one implementation are shown in Tables 3 and 4.
TABLE 3
UNIPORT RATE MATCHING

Port Of Entry  Port Of Exit  Bus Switching  Port Of Entry Packet    Port Of Exit Packet
Lan Type       Lan Type      Rate           Processor's Capability  Processor's Capability
                                            To Receive              To Transmit
10 Mb/s        10 Mb/s       10 Mb/s        Rx from any             Tx to any
10 Mb/s        100 Mb/s      170 Mb/s       Rx from 10's only       Tx to any
10 Mb/s        150 Mb/s      170 Mb/s       Rx from 10's only       Tx to any
100 Mb/s       10 Mb/s       170 Mb/s       Rx from any             Tx to 10 only
100 Mb/s       100 Mb/s      170 Mb/s       Rx from any             Tx to any
100 Mb/s       150 Mb/s      170 Mb/s       Rx from any             Tx to any
150 Mb/s       10 Mb/s       170 Mb/s       Rx from any             Tx to 10 only
150 Mb/s       100 Mb/s      170 Mb/s       Rx from any             Tx to any
150 Mb/s       150 Mb/s      170 Mb/s       Rx from any             Tx to any
TABLE 4
MULTIPORT RATE MATCHING

Port Of Entry Lan Type  Ports Of Exit Lan Types        Bus Switching Rate
10 Mb/s                 10 Mb/s only                   10 Mb/s
10 Mb/s                 Any 100 Mb/s                   170 Mb/s
10 Mb/s                 Any 150 Mb/s                   170 Mb/s
100 Mb/s                10 Mb/s, 100 Mb/s or 150 Mb/s  170 Mb/s
150 Mb/s                10 Mb/s, 100 Mb/s or 150 Mb/s  170 Mb/s
As shown in Tables 3 and 4, the switching bus operates at one of two rates for any given transaction although the LAN segments themselves may operate at three different rates. The slowest rate of the LAN segments for the embodiment shown in Figure 2 is 10 Mb/s and the fastest rate is 150 Mb/s. The switching bus 205 is configured to switch system packets at either the slowest or the fastest rates. Packet processors that are coupled to LAN segments that operate at other than the slowest bus switching rate completely store a frame received as a system packet from the switching bus before beginning transmission of that frame to the LAN segment. This is required to prevent FIFO underflow due to possible insufficient bandwidth availability on the bus. It is apparent that the particular rate matching methodology is affected by the system architecture, and the precise rate matching methodology may differ for different system architectures.
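The rate selection implied by Tables 3 and 4 can be summarized in two small C functions: the packet is switched at the rate of the fastest involved port, and the bus then quantizes that to one of its two switching rates. This is a sketch of the tables' pattern, not circuitry from the patent:

#include <stdio.h>

/* Rules one and two: switch at the rate of the fastest involved port
   (for a multicast, dest is the fastest addressed destination). */
static unsigned switching_rate(unsigned src_mbps, unsigned dest_mbps)
{
    return src_mbps > dest_mbps ? src_mbps : dest_mbps;
}

/* The Figure 2 bus offers only two switching rates (Tables 3 and 4):
   the slowest rate when all involved ports are 10 Mb/s ports, else
   the 170 Mb/s bus switching rate. */
static unsigned bus_switching_rate(unsigned needed_mbps)
{
    return needed_mbps <= 10 ? 10 : 170;
}

int main(void)
{
    /* 10 Mb/s port of entry to a 100 Mb/s port of exit: row two of
       Table 3, switched at the 170 Mb/s bus rate. */
    printf("%u Mb/s\n", bus_switching_rate(switching_rate(10, 100)));
    return 0;
}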
The operation of the switching bus 205 will now be described with reference to Figures 3a and 3b, which show a flow diagram for a method of operation for the switching bus 205. More specifically, the flow diagram of Figures 3a and 3b shows a process for arbitrating for access to the switching bus 205. A synchronization period begins at process block 305 when the SYNC signal line is asserted (logic low). At process block 310, packet processors with pending high priority requests each assert the common PRIORITY1 signal line and their corresponding REQ signal lines, driving both signal lines logic low. The table entries of the filter table for each packet processor include a field that may be used to determine whether a particular request is a high priority or a low priority request. In this manner, each packet processor knows what priority request it has. At process block 315, any remaining packet processors that do not have a high priority transaction request deassert their corresponding REQ signal lines. Process blocks 310 and 315 preferably occur within the first bus slot of the synchronization period.
As discussed previously, each packet processor that requests a high priority bus access is guaranteed switching bus access during the synchronization period in which the high priority request is made. At process block 320, the central arbiter grants a high priority request of a packet processor. The packet processor that receives the grant from the central arbiter 212 deasserts the PRIORITY1 signal line and its corresponding request line, sends data via the data bus for one bus slot of the synchronization period, and asserts the TYPE[2:0] signal lines to indicate what type of data is being sent on the DATA[31:0] signal lines. If there are any remaining high priority requests at process block 330, process blocks 320 and 325 are repeated until no high priority requests remain. According to the present embodiment, the central arbiter grants high priority requests on a round robin basis.
The end of high priority arbitration is signaled by the PRIORITY1 signal line going inactive (logic high), which occurs during the bus slot in which the packet processor having the last remaining high priority request is granted control of the switching bus 205. Depending on the LAN segments 115 coupled to the switching fabric circuit 100, it is possible that an entire synchronization period will be used to accommodate only high priority transactions. If the SYNC signal is asserted at process block 335 after the last high priority transaction has completed, the switching bus arbitration process begins again at process block 310. Otherwise, the process continues at process block 340.
After the PRIORITY1 signal line goes inactive, the packet processors with pending low priority requests assert their corresponding REQ signal lines at process block 340. Low priority requests may be made at any time during the synchronization period after high priority transactions have completed. At process block 345, the central arbiter 212 grants a low priority request of a packet processor, which, at process block 350, transfers data via the switching bus 205 for a single bus slot. As low priority requests are reserved for bulk data transfers by high speed ports, packet processors typically request several bus slots per synchronization period. To ensure some fairness in bus access, low priority bus requests are granted on a round-robin, slot-by-slot basis such that no packet processor transfers data for two contiguous bus slots. If bus slots remain in the current synchronization period, it is determined at process block 355 whether there are additional low priority requests. If so, process blocks 335-350 are repeated. If not, the switching bus 205 may be idle until the next synchronization period, at which time the switching bus arbitration process is repeated beginning at process block 310.
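The arbitration flow of Figures 3a and 3b might be approximated in software as below. This is a simplified model assuming nine processors and fifty-two slots as in the described embodiment; grant delivery over the ACK lines is reduced to a print statement, and low priority requests are treated as persisting for the whole period:

#include <stdbool.h>
#include <stdio.h>

#define NUM_PROCESSORS 9   /* eight packet processors plus the system processor */
#define SLOTS_PER_SYNC 52

struct requests {
    bool high[NUM_PROCESSORS];   /* high priority pending (process block 310) */
    bool low[NUM_PROCESSORS];    /* low priority pending (process block 340)  */
};

static void grant_slot(int slot, int p)
{
    printf("slot %2d granted to processor %d\n", slot, p);
}

static void sync_period(struct requests *r)
{
    int slot = 0;

    /* High priority first: each requester is guaranteed exactly one
       slot in this synchronization period (process blocks 320-330). */
    for (int p = 0; p < NUM_PROCESSORS && slot < SLOTS_PER_SYNC; p++) {
        if (r->high[p]) {
            grant_slot(slot++, p);
            r->high[p] = false;
        }
    }

    /* Remaining slots go to low priority requesters one slot at a
       time, round robin (process blocks 340-355). */
    bool any = true;
    while (slot < SLOTS_PER_SYNC && any) {
        any = false;
        for (int p = 0; p < NUM_PROCESSORS && slot < SLOTS_PER_SYNC; p++) {
            if (r->low[p]) {
                grant_slot(slot++, p);
                any = true;
            }
        }
    }
}

int main(void)
{
    struct requests r = { {false}, {false} };
    r.high[1] = r.high[2] = true;   /* two high priority requesters       */
    r.low[3]  = r.low[4]  = true;   /* two bulk (low priority) requesters */
    sync_period(&r);
    return 0;
}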
Figure 4 is a timing diagram showing an example of the operation of the switching bus 205 according to the process shown in Figure 3. The length of the synchronization period for the example has been reduced to simplify the explanation. As shown, the SYNC signal line is asserted low for a single bus cycle to signal the beginning of the synchronization period. During the first slot of the synchronization period, a first and a second packet processor each request high priority bus transactions, as indicated by the corresponding REQ1, REQ2, and PRIORITY1 signal lines going active. The central arbiter 212 grants the request of the first packet processor to begin data transfer during the second bus slot of the synchronization period as indicated by the ACK1 signal line being asserted low. In the bus cycle following the receipt of the bus grant, the first packet processor deasserts its request line REQ1, deasserts the PRIORITY1 signal line, and drives the data on the DATA[31:0] signal lines. During the same bus slot, which is the third bus slot of the synchronization period, the first packet processor drives the TYPE[2:0] signal lines to indicate the data type and the SID[4:0] signal lines to indicate the source port of the data carried by the data bus. If the data type is a POE mask, the BE[1:0] signal lines are driven by the first packet processor to indicate the transfer rate. If the data type is EOP (end of packet), the BE[1:0] signal lines are driven by the first packet processor to indicate the number of valid bytes in the last data word.
The PRIORITY1 signal line remains active after the first packet processor is granted control of the switching bus because the high priority request of the second packet processor remains pending. The central arbiter may grant one high priority request for each bus slot that follows the initial bus slot during which high priority requests were entered. Therefore, the central arbiter 212 is shown as granting control of the switching bus 205 to the second packet processor in the bus slot immediately following the bus slot in which the first packet processor was granted access to the switching bus 205. In the fourth bus slot of the synchronization period, the second packet processor drives its data over the data bus, and the PRIORITY1 signal line is deasserted. During the same bus slot, the second packet processor drives the TYPE[2:0] signal lines, the BE[1:0] signal lines, and the SID[4:0] signal lines.
The third and fourth packet processors detect the lack of high priority arbitration during the fifth bus slot, and both packet processors request low priority access to the switching bus 205 during the sixth bus slot of the synchronization period. The PRIORITY1 signal line remains inactive to indicate a low priority request. Low priority transactions are reserved for those transactions requiring multiple bus slots, and packet processors that request low priority transactions may request multiple bus slots during the same synchronization period and may pipeline those requests such that they may receive bus grants as often as every two bus slots. The central arbiter allocates bus access a single slot at a time in a round-robin manner such that each packet processor is provided one bus slot at a time and such that no packet processor transfers data during two contiguous bus slots. This is shown in Figure 4 by the interleaving of bus grants for the third and fourth packet processors.
Figure 5 is a flow chart showing generally a method for uniport port arbitration. The process begins at process block 505. At process block 510, the synchronization period is begun by the SYNC signal being asserted for the duration of a single bus clock cycle. The packet processor of the source port (the "source packet processor") requests access to the switching bus at process block 515. The bus access requested may be of either a high priority or a low priority. The source port is granted bus access at process block 520. As shown at process block 525, when the source packet processor is initially granted bus access to transfer a system packet via the switching bus 205, the source packet processor sends a POE mask via the data bus, indicates that the data bus carries the POE mask by asserting the TYPE[2:0] signal lines as indicated in Table 1, indicates the rate of switching to the destination port by signaling via the BE[1:0] signal lines, and identifies itself as the source port of the POE mask by signaling via the SID[4:0] signal lines.
Whether the transaction is high priority or low priority, the source packet processor cannot continue transfer of the system packet until the packet processor of the destination port (the "destination packet processor") indicates that it is ready to receive the system packet. For high priority transactions, the earliest that the destination packet processor can indicate that it is ready to receive the system packet is the synchronization period immediately following the initial synchronization period in which the source packet processor transmits the POE mask because the destination packet processor must itself arbitrate for bus access so that it may indicate that it is ready to receive the system packet. For low priority transactions, it is possible for the destination packet processor to indicate it is ready to receive the system packet in the same synchronization period that it receives the POE mask. The destination packet processor indicates that it is available by simultaneously asserting the "destination port available" (DPA) signal via the TYPE[2:0] signal lines and asserting the SID[4:0] signal lines with the identification of the source packet processor. In this manner, the source packet processor recognizes that the destination port is available.
The source packet processor requests access to the switching bus once every synchronization period to transmit the POE mask until the destination port available (DPA) signal is received. Process block 530 shows that if a DPA signal is not received during the current synchronization period, the source packet processor repeats process blocks 510-525 until the DPA signal is received. The DPA signal is asserted by the port of exit according to the process shown in Figure 6. Thus, after the initial synchronization period when the POE mask was initially transmitted, it is possible for the source packet processor to send the POE mask and receive the DPA signal in the same synchronization period. If the DPA signal is received from the destination packet processor, the source packet processor may begin transfer of the system packet at the next synchronization period.
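The POE/DPA handshake just described reduces, from the source side, to a small state machine. The sketch below models one step per synchronization period while arbitrating (and per granted slot while sending data), with illustrative event flags standing in for decoded bus cycles:

#include <stdbool.h>
#include <stdio.h>

enum src_state { SEND_POE, SEND_DATA, DONE };

static enum src_state step(enum src_state s, bool dpa_received, bool eop_sent)
{
    switch (s) {
    case SEND_POE:   /* resend the POE mask until DPA arrives */
        return dpa_received ? SEND_DATA : SEND_POE;
    case SEND_DATA:  /* stream packet words until EOP is sent */
        return eop_sent ? DONE : SEND_DATA;
    default:
        return DONE;
    }
}

int main(void)
{
    enum src_state s = SEND_POE;
    s = step(s, false, false);   /* period 1: POE sent, no DPA yet    */
    s = step(s, true,  false);   /* period 2: DPA received            */
    s = step(s, false, true);    /* period 3: last word (EOP) is sent */
    printf("%s\n", s == DONE ? "transfer complete" : "still arbitrating");
    return 0;
}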
The transfers of a POE mask and the DPA signal each take only a single bus slot, and it is possible to assign high priority to POE mask and DPA signal transfers. According to the present embodiment, however, the priorities of the POE mask and DPA signal transfers are assigned according to the relative rates of the source and destination packet processors as described above. Thus, the POE masks and DPA signals of high priority system packets are also high priority transactions requiring high priority bus access requests.
After the DPA signal is received, the process continues at process block 535, when the next synchronization period begins. The source packet processor requests switching bus access at process block 540. At process block 545, the source packet processor is granted access, and at process block 550, the source packet processor sends data via the DATA[31:0] signal lines. The source packet processor indicates the type of data on the DATA[31:0] signal lines by toggling the type signal lines. If the transfer is not complete at process block 555 and the transaction is a high priority transaction as determined at process block 556, process blocks 535 through 550 are repeated until transfer is complete, at which time the process ends at process block 560. If the transfer is not complete at process block 555, the transaction is a low priority transaction as determined at process block 556, and bus slots of the current synchronization period remain to be allocated as determined at process block 557, steps 540-550 are repeated to the extent that switching bus traffic allows the central arbiter 212 to allocate bus slots to that source packet processor. If the transfer is not complete at process block 555, the transaction is a low priority transaction as determined at process block 556, and no bus slots of the current synchronization period remain to be allocated as determined at process block 557, process blocks 535-550 are repeated. Once the DPA signal is received, uniport port arbitration is complete, and process blocks 535-550 are simply the switching bus arbitration method of Figures 3a and 3b shown in a simplified form.
Figure 6 is a flow diagram showing the port arbitration process of a packet processor at the destination port. The process begins at process block 605. After a synchronization period has begun at process block 610, the destination port receives the POE mask, source ID, and rate information from the source port at process block 615. The POE mask is received by the destination port in the same synchronization period that it is sent by the source port. If the destination port is ready to receive a system packet from the source port, the destination packet processor latches the source ID of the source port as indicated by the SID[4:0] signal lines. The destination packet processor continues to store the source ID of a source port until the last data word of the system packet is received as indicated by the TYPE[2:0] signal lines signaling EOP/CRC good or EOP/CRC bad. The destination packet processor compares the source ID of the data driven on the data bus to the latched source ID and accepts the data only if the two source IDs match. The destination packet processor clears its latch after receiving the last word of a system packet so that it may latch the source ID of the next source port to be serviced.
At step 620, if the destination packet processor is not ready to receive data from the source port, the source port sends the POE mask once per synchronization period, and process blocks 610 and 615 are repeated until the destination packet processor is ready to receive data from the source port. If the destination packet processor is ready to receive data, and the system packet is a high priority packet as determined at process block 622, the destination packet processor waits for the beginning of the subsequent synchronization period at process block 625 before requesting bus access at process block 630. If the system packet is a low priority packet as determined at process block 622, the flow may proceed directly to process block 630 without waiting for the beginning of the next synchronization period. At process block 635, the destination packet processor is granted bus access, and the destination packet processor sends the DPA signal to the source port at process block 640. The process ends at process block 645.
Figure 7 is a flow diagram showing a multiport switching bus arbitration method. At process block 705, a source port has received and identified a multicast or broadcast frame and thus has a multiport arbitration request. The source packet processor may optionally monitor the BFREE signal line at process block 710 to determine if uniport port arbitration has completed. The BFREE signal line is asserted if a port that has asserted a POE mask has not received a return DPA signal, which signals the end of uniport port arbitration. If the bus is free at process block 710, the source packet processor may begin multiport port arbitration once the MPA[4:0] signal lines indicate the source packet processor. If the bus is not free at process block 710, the source packet processor may nonetheless begin multiport port arbitration if an internal timer of the source packet processor times out (at time t=0) at process block 711.
At process block 715, if the MPA[4:0] signal lines do not indicate the source packet processor, the MPA counter is incremented at process block 716 to select a different packet processor. According to one embodiment, the MPA counter is incremented once every two bus clock cycles. If the MPA[4:0] signal lines indicate the source packet processor, the source packet processor begins multiport arbitration by freezing the MPA counter 213 at its current value. This occurs at process block 720. The MPA counter 213 may be frozen by asserting the BFREEZ signal line. All packet processors of the switching fabric circuit assert the BBUSY line upon detection of the BFREEZ signal going active.
Once the source packet processor has asserted the BFREEZ signal, the source packet processor is free to request a bus slot with which to send the POE mask for the multiport packet. The source packet processor performs normal switching bus arbitration to send the multiport POE mask. If the multiport transaction is a high priority transaction, the source packet processor may have to wait until the start of the next synchronization period to make its request. If the multiport transaction is a low priority transaction, the source packet processor may request bus access anytime after high priority traffic has completed. The request is made at process block 725, and the central arbiter grants the source packet processor a bus slot at step 730.
The source packet processor sends the POE mask of the multiport system packet at process block 735. Packet processors that are not destinations of the multiport packet and destination packet processors of the multiport system packet that are not busy deassert the BBUSY signal. Destination packet processors that are not busy latch the source ID carried on the MPA[4:0] signal lines. This prevents destination packet processors that are not busy during one synchronization period from becoming busy in a subsequent synchronization period. If one or more destination packet processors are not ready to receive data at process block 740, the source packet processor repeats process blocks 725-735 once every synchronization period until all destination packet processors are ready to receive the multiport packet. When all destination packet processors are ready, the BBUSY signal is deasserted. At process block 745, the source packet processor deasserts the BFREEZ signal in response to BBUSY being deasserted, and the destination packet processors latch the value carried by the MPA[4:0] signal lines to use as the source ID of the source packet processor. Multiport port arbitration is complete, and the MPA counter 213 may be incremented a bus clock cycle after BFREEZ goes high. The transfer of the multiport packet then continues as if it were a uniport packet, which is shown in process blocks 535-560 of Figure 5. Multiport arbitration may begin again before transfer of the multiport packet is completed.
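The multiport sequence of Figure 7 might be modeled as the toy simulation below, assuming this port is port 2 and that the destinations stay busy for two synchronization periods; the MPA counter, BFREEZ, and BBUSY are reduced to plain variables:

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
    const unsigned my_port = 2;     /* the source packet processor's port */
    unsigned mpa = 0;               /* value on the MPA[4:0] signal lines */
    bool bfreez_asserted = false;

    /* The MPA counter steps through port addresses until it selects
       this port (process blocks 715-716). */
    while (mpa != my_port)
        mpa = (mpa + 1) % 32;       /* MPA[4:0] is five bits wide */

    /* Freeze the counter by asserting BFREEZ (process block 720). */
    bfreez_asserted = true;
    printf("BFREEZ asserted; MPA frozen at %u\n", mpa);

    /* Resend the multiport POE mask once per synchronization period
       until every addressed destination deasserts BBUSY (process
       blocks 725-740); two busy periods are assumed for the example. */
    int busy_periods = 2;
    while (true) {
        printf("send multiport POE mask\n");
        if (busy_periods-- == 0)
            break;                  /* BBUSY finally deasserted */
    }

    /* Release the counter; the transfer now proceeds as if it were a
       uniport packet (process block 745). */
    bfreez_asserted = false;
    printf("BFREEZ deasserted (%d); transfer proceeds as uniport\n",
           (int)bfreez_asserted);
    return 0;
}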
Figure 8 shows an example of multiport arbitration according to the method shown in Figure 7. The duration of the synchronization period is reduced to simplify the example. A first synchronization period begins when the SYNC signal goes low for one bus clock cycle. The MPA counter is incremented once every two bus clock cycles, as indicated by the MPA[4:0] signal lines. At the fourth bus clock cycle, the MPA counter is incremented, indicating port 2 in binary "00010." Port 2 has a multiport packet to transfer and freezes the MPA counter 213 during the fourth bus clock cycle by asserting the BFREEZ signal, and the BFREEZ signal goes low. During the fifth bus clock cycle, all the packet processors of the switching fabric circuit respond to BFREEZ going low by asserting BBUSY, and the source packet processor drives the POE mask on the data bus. During the sixth bus clock cycle, BFREEZ goes high and packet processors that are either not addressed by the multiport packet or are destination packet processors ready to receive the multiport packet deassert the BBUSY line. As shown, some of the destination packet processors are busy such that BBUSY remains low. For subsequent synchronization periods, the source packet processor continues to send the multiport POE mask until all destination ports are ready to receive the multiport packet.
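A corresponding destination-side sketch of the BBUSY handshake in this example follows; the port object and its fields are assumed for illustration only.

    def on_poe_mask(port, poe_mask, mpa_value):
        if not (poe_mask >> port.port_id) & 1:
            port.release_bbusy()                # not addressed by the multiport packet
        elif not port.busy:
            port.latched_source_id = mpa_value  # latch the source ID from MPA[4:0]
            port.release_bbusy()                # signal readiness to receive
        # A busy destination keeps BBUSY asserted, so the POE mask is resent
        # in the next synchronization period.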
Figure 8 shows two vertical lines that indicate the passage of time until the synchronization period after all destination packet processors have signaled their readiness to receive the multiport packet by deasserting BBUSY. As shown, the BFREEZ signal goes inactive in response to the
BBUSY signal going inactive, and the MPA counter 213 is enabled to begin incrementing in response to the BFREEZ signal going inactive. The transfer of the multiport packet may proceed similarly to the transfer of uniport packets as shown in Figure 4.

In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Claims

1. A switching fabric circuit comprising: a plurality of ports each coupled to a corresponding one of a plurality of local area network (LAN) segments; a switching link coupled to the plurality of ports for receiving requests for data transfer operations from the plurality of ports during a synchronization period, for prioritizing the requests for data transfer operations according to a high priority and a low priority during the synchronization period, and for granting requests for data transfer operations such that ports requesting data transfer operations at the high priority are guaranteed access to the switching link during the synchronization period, and such that ports requesting data transfer operations at the low priority are provided access to the switching link for a remainder of the synchronization period.
2. The switching fabric circuit of claim 1, wherein the switching link further comprises: a switching bus; a plurality of packet processors each coupled to a corresponding one of the plurality of ports and to the switching bus, the plurality of packet processors for receiving data frames from the LAN segments via the ports, for requesting access to the switching bus at one of the high priority and the low priority, and for transferring data to other packet processors via the switching bus when access to the switching bus is granted; and a central arbiter coupled to the switching bus, the central arbiter for receiving requests to access the switching bus, for granting access to the switching bus to all packet processors that request high priority accesses during the synchronization period in which the requests are received, and for granting bus access to packet processors that request low priority accesses during the remainder of the synchronization period.
3. The switching fabric circuit as claimed in claim 2, wherein each of the LAN segments transfers data according to one of a plurality of transfer rates and wherein high priority is assigned to data transfer operations between ports coupled to LAN segments that have a lowest transfer rate.
4. The switching fabric circuit as claimed in claim 2, wherein the synchronization period is subdivided into a plurality of bus slots and wherein each high priority data transfer operation is provided one bus slot per synchronization period.
5. The switching fabric circuit as claimed in claim 2, wherein the switching bus comprises: a data bus for transmitting data during a bus slot; type signal lines for indicating a data type transmitted by the data bus during the bus slot; and source signal lines for indicating a source port of the data transmitted by the data bus during the bus slot.
6. The switching fabric circuit as claimed in claim 5, wherein each packet processor includes circuitry for determining whether a particular data transfer operation is high priority or low priority.
7. A method of transferring data between ports of a switching fabric circuit wherein a first plurality of ports is coupled to local area network (LAN) segments that transfer data at a first rate and a second plurality of ports is coupled to LAN segments that transfer data at a second rate, the method comprising the steps of: indicating a start of a synchronization period; requesting high priority access to a bus of the switching fabric circuit by at least one of a plurality of packet processors coupled to the first and second plurality of ports and the bus; granting all high priority access requests during the synchronization period; requesting low priority access to the bus by at least one of the plurality of packet processors coupled to the first and second plurality of ports after all high priority access requests have been granted; granting low priority access requests for the remainder of the synchronization period; and indicating a start of a new synchronization period by the central arbiter of the switching fabric circuit.
8. The method of claim 7, wherein each synchronization period is divided into a plurality of bus slots, each bus slot having a duration of a bus clock cycle, the method further comprising the steps of: transferring data on the bus for one bus slot by each packet processor that has been granted a high priority access; and transferring data on the bus for at least one bus slot by a packet processor that has been granted a low priority access.
9. The method of claim 7, wherein each synchronization period is divided into a plurality of bus slots, each bus slot having a duration of a bus clock cycle, the method further comprising the steps of: initially allocating each packet processor a bus slot for transferring data; deallocating the bus slots for packet processors that do not request high priority access; organizing the bus slots of high priority accesses to be contiguous; transferring data on the bus for one bus slot by each packet processor that has been granted high priority access; and transferring data on the bus for at least one bus slot for a packet processor that has been granted a low priority access.
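(For illustration only, and not part of the claims: the slot allocation recited in claim 9 might be sketched as below, assuming packet processors are numbered from zero and each synchronization period initially provides one slot per processor; all names are hypothetical.)

    def allocate_high_priority_slots(num_processors, high_priority_ids):
        slots = list(range(num_processors))  # initially one bus slot per processor
        # Deallocate slots of processors without high priority requests; the
        # surviving high priority slots become contiguous, and low priority
        # transfers use the remainder of the synchronization period.
        return [p for p in slots if p in high_priority_ids]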
10. A method for forwarding a multiple destination packet in a switching link of a switching fabric circuit, comprising the steps of: monitoring a bus of the switching link by a plurality of packet processors coupled to the bus; selecting a first packet processor to transmit the multiple destination packet; indicating via the bus that the first packet processor is selected to transmit the multiple destination packet during a synchronization period; indicating by the first packet processor that the first packet processor has the multiple destination packet to transfer during the synchronization period; transmitting a port of exit mask on the bus by the first packet processor, the port of exit mask indicating which of the plurality of packet processors are destination packet processors for the multiple destination packet; indicating by the destination packet processors that the destination packet processors are ready to receive the multiple destination packet; and transmitting the multiple destination packet via the bus by the first packet processor during the synchronization period.
11. The method of claim 10, wherein the step of selecting the first packet processor comprises the step of incrementing a counter, wherein a count of the counter indicates one of the plurality of packet processors.
12. The method of claim 11, further comprising the step of stopping the counter in response to the first packet processor indicating that it has the multiple destination packet to transfer such that the count continues to indicate the first packet processor.
13. The method of claim 12, further comprising the step of incrementing the counter in response to the destination packet processors indicating that they are ready to receive the multiple destination packet.
14. A method for transferring a data packet comprising the steps of: requesting control of a bus by a first port to transfer the data packet during a synchronization period; granting control of the bus to the first port by a central arbiter; transferring destination information by the first port via the bus during the synchronization period, the destination information indicating a second port as a destination of the data packet; requesting control of the bus by the second port in response to receiving the destination information; granting control of the bus to the second port by the central arbiter; transferring information by the second port via the bus indicating to the first port that the second port is ready to receive the data packet; and requesting access of the bus by the first port to transfer the data packet to the second port during the synchronization period.
15. The method of claim 14, further comprising the step of transferring a source identification by the first port to the second port simultaneously with the step of transferring destination information by the first port, wherein the source identification indicates the first port as a source of the destination information.
16. The method of claim 15, further comprising the step of transferring the source identification by the second port to the first port simultaneously with the step of transferring information by the second port.
PCT/US1995/013838 1994-10-26 1995-10-25 Computer network switching system with expandable number of ports WO1996013922A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CA 2203500 CA2203500A1 (en) 1994-10-26 1995-10-25 Computer network switching system with expandable number of ports
EP95939604A EP0788691A2 (en) 1994-10-26 1995-10-25 Computer network switching system with expandable number of ports

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/330,074 US5561669A (en) 1994-10-26 1994-10-26 Computer network switching system with expandable number of ports
US08/330,074 1994-10-26

Publications (2)

Publication Number Publication Date
WO1996013922A2 true WO1996013922A2 (en) 1996-05-09
WO1996013922A3 WO1996013922A3 (en) 1996-06-13

Family

ID=23288215

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1995/013838 WO1996013922A2 (en) 1994-10-26 1995-10-25 Computer network switching system with expandable number of ports

Country Status (3)

Country Link
US (1) US5561669A (en)
EP (1) EP0788691A2 (en)
WO (1) WO1996013922A2 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998011696A1 (en) * 1996-09-13 1998-03-19 Advanced Micro Devices, Inc. A method and system for reducing the pin count required for the interconnections of controller devices and physical devices in a network
EP0854611A2 (en) * 1996-12-30 1998-07-22 Compaq Computer Corporation Network switch with multiple bus architecture
EP0854612A2 (en) * 1996-12-30 1998-07-22 Compaq Computer Corporation Method and system for performing concurrent read and write cycles in a network switch
WO1999014901A1 (en) * 1997-09-17 1999-03-25 Sony Electronics Inc. High speed bus structure in a multi-port bridge for a local area network
US5940597A (en) * 1995-01-11 1999-08-17 Sony Corporation Method and apparatus for periodically updating entries in a content addressable memory
US6012099A (en) * 1995-01-11 2000-01-04 Sony Corporation Method and integrated circuit for high-bandwidth network server interfacing to a local area network
US6157951A (en) * 1997-09-17 2000-12-05 Sony Corporation Dual priority chains for data-communication ports in a multi-port bridge for a local area network
US6185210B1 (en) 1997-09-30 2001-02-06 Bbn Corporation Virtual circuit management for multi-point delivery in a network system
US6256313B1 (en) 1995-01-11 2001-07-03 Sony Corporation Triplet architecture in a multi-port bridge for a local area network
US6446173B1 (en) 1997-09-17 2002-09-03 Sony Corporation Memory controller in a multi-port bridge for a local area network
CN100341274C (en) * 2003-06-13 2007-10-03 韩国电子通信研究院 Ethernet switch, and apparatus and method for expanding port
WO2010076649A2 (en) * 2008-12-31 2010-07-08 Transwitch India Pvt. Ltd. Packet processing system on chip device

Families Citing this family (157)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE4426123C2 (en) * 1994-07-22 1998-05-20 Siemens Nixdorf Inf Syst Arbitration for delayed bus coupling
US5867666A (en) 1994-12-29 1999-02-02 Cisco Systems, Inc. Virtual interfaces with dynamic binding
US5793978A (en) 1994-12-29 1998-08-11 Cisco Technology, Inc. System for routing packets by separating packets in to broadcast packets and non-broadcast packets and allocating a selected communication bandwidth to the broadcast packets
US5692126A (en) * 1995-01-24 1997-11-25 Bell Atlantic Network Services, Inc. ISDN access to fast packet data network
WO1996041274A1 (en) * 1995-06-07 1996-12-19 Advanced Micro Devices, Inc. Dynamically reconfigurable data bus
US6788662B2 (en) 1995-06-30 2004-09-07 Interdigital Technology Corporation Method for adaptive reverse power control for spread-spectrum communications
US7020111B2 (en) 1996-06-27 2006-03-28 Interdigital Technology Corporation System for using rapid acquisition spreading codes for spread-spectrum communications
US6816473B2 (en) 1995-06-30 2004-11-09 Interdigital Technology Corporation Method for adaptive forward power control for spread-spectrum communications
US6697350B2 (en) 1995-06-30 2004-02-24 Interdigital Technology Corporation Adaptive vector correlator for spread-spectrum communications
US6885652B1 (en) 1995-06-30 2005-04-26 Interdigital Technology Corporation Code division multiple access (CDMA) communication system
ZA965340B (en) 1995-06-30 1997-01-27 Interdigital Tech Corp Code division multiple access (cdma) communication system
US5754803A (en) 1996-06-27 1998-05-19 Interdigital Technology Corporation Parallel packetized intermodule arbitrated high speed control and data bus
US7929498B2 (en) 1995-06-30 2011-04-19 Interdigital Technology Corporation Adaptive forward power control and adaptive reverse power control for spread-spectrum communications
US5734867A (en) * 1995-07-28 1998-03-31 Motorola, Inc. Method, device, microprocessor and microprocessor memory for instantaneous preemption of packet data
US6097718A (en) 1996-01-02 2000-08-01 Cisco Technology, Inc. Snapshot routing with route aging
US6147996A (en) 1995-08-04 2000-11-14 Cisco Technology, Inc. Pipelined multiple issue packet switch
US6182224B1 (en) 1995-09-29 2001-01-30 Cisco Systems, Inc. Enhanced network services using a subnetwork of communicating processors
US6091725A (en) 1995-12-29 2000-07-18 Cisco Systems, Inc. Method for traffic management, traffic prioritization, access control, and packet forwarding in a datagram computer network
US6035105A (en) 1996-01-02 2000-03-07 Cisco Technology, Inc. Multiple VLAN architecture system
US6308148B1 (en) 1996-05-28 2001-10-23 Cisco Technology, Inc. Network flow data export
US6243667B1 (en) 1996-05-28 2001-06-05 Cisco Systems, Inc. Network flow switching and flow data export
US5917820A (en) * 1996-06-10 1999-06-29 Cisco Technology, Inc. Efficient packet forwarding arrangement for routing packets in an internetwork
US6212182B1 (en) 1996-06-27 2001-04-03 Cisco Technology, Inc. Combined unicast and multicast scheduling
US6434120B1 (en) 1998-08-25 2002-08-13 Cisco Technology, Inc. Autosensing LMI protocols in frame relay networks
US5872783A (en) * 1996-07-24 1999-02-16 Cisco Systems, Inc. Arrangement for rendering forwarding decisions for packets transferred among network switches
US6240084B1 (en) 1996-10-10 2001-05-29 Cisco Systems, Inc. Telephony-enabled network processing device with separate TDM bus and host system backplane bus
US6304546B1 (en) 1996-12-19 2001-10-16 Cisco Technology, Inc. End-to-end bidirectional keep-alive using virtual circuits
US5943230A (en) * 1996-12-19 1999-08-24 Applied Materials, Inc. Computer-implemented inter-chamber synchronization in a multiple chamber substrate processing system
US6141351A (en) * 1996-12-20 2000-10-31 International Business Machines Corporation Radio frequency bus for broadband microprocessor communications
US6233246B1 (en) 1996-12-30 2001-05-15 Compaq Computer Corporation Network switch with statistics read accesses
US6665733B1 (en) * 1996-12-30 2003-12-16 Hewlett-Packard Development Company, L.P. Network communication device including bonded ports for increased bandwidth
EP0853406A3 (en) * 1997-01-06 2001-02-21 Compaq Computer Corporation Management of a computer network switched repeater
US6002675A (en) * 1997-01-06 1999-12-14 Cabletron Systems, Inc. Method and apparatus for controlling transmission of data over a network
US6097705A (en) * 1997-01-06 2000-08-01 Cabletron Systems, Inc. Buffered repeater with independent ethernet collision domains
US6757286B1 (en) 1997-03-24 2004-06-29 Alcatel Self-configuring communication network
US6160653A (en) * 1997-03-26 2000-12-12 Sun Microsystems, Inc. Optical computer bus with dynamic bandwidth allocation
GB9706379D0 (en) * 1997-03-27 1997-05-14 Texas Instruments Ltd Network switch
US6151325A (en) * 1997-03-31 2000-11-21 Cisco Technology, Inc. Method and apparatus for high-capacity circuit switching with an ATM second stage switch
US6791979B1 (en) 1997-04-10 2004-09-14 Cisco Technology, Inc. Mechanism for conveying data prioritization information among heterogeneous nodes of a computer network
US6115751A (en) * 1997-04-10 2000-09-05 Cisco Technology, Inc. Technique for capturing information needed to implement transmission priority routing among heterogeneous nodes of a computer network
US5991302A (en) * 1997-04-10 1999-11-23 Cisco Technology, Inc. Technique for maintaining prioritization of data transferred among heterogeneous nodes of a computer network
US6327266B1 (en) * 1997-04-25 2001-12-04 Alcatel Usa Sourcing, L.P. Multiple user access network
US5949784A (en) * 1997-05-01 1999-09-07 3Com Corporation Forwarding mechanism for multi-destination packets to minimize per packet scheduling overhead in a network forwarding engine
US6356530B1 (en) 1997-05-23 2002-03-12 Cisco Technology, Inc. Next hop selection in ATM networks
US6122272A (en) 1997-05-23 2000-09-19 Cisco Technology, Inc. Call size feedback on PNNI operation
US6078590A (en) 1997-07-14 2000-06-20 Cisco Technology, Inc. Hierarchical routing knowledge for multicast packet routing
US5959968A (en) * 1997-07-30 1999-09-28 Cisco Systems, Inc. Port aggregation protocol
US6330599B1 (en) 1997-08-05 2001-12-11 Cisco Technology, Inc. Virtual interfaces with dynamic binding
US6512766B2 (en) 1997-08-22 2003-01-28 Cisco Systems, Inc. Enhanced internet packet routing lookup
US6212183B1 (en) 1997-08-22 2001-04-03 Cisco Technology, Inc. Multiple parallel packet routing lookup
US6157641A (en) 1997-08-22 2000-12-05 Cisco Technology, Inc. Multiprotocol packet recognition and switching
US6147991A (en) * 1997-09-05 2000-11-14 Video Network Communications, Inc. Scalable high speed packet switch using packet diversion through dedicated channels
US6343072B1 (en) 1997-10-01 2002-01-29 Cisco Technology, Inc. Single-chip architecture for shared-memory router
US6128296A (en) * 1997-10-03 2000-10-03 Cisco Technology, Inc. Method and apparatus for distributed packet switching using distributed address tables
US6252878B1 (en) 1997-10-30 2001-06-26 Cisco Technology, Inc. Switched architecture access server
US6065062A (en) * 1997-12-10 2000-05-16 Cisco Systems, Inc. Backup peer pool for a routed computer network
US7369556B1 (en) 1997-12-23 2008-05-06 Cisco Technology, Inc. Router for virtual private network employing tag switching
US6339595B1 (en) 1997-12-23 2002-01-15 Cisco Technology, Inc. Peer-model support for virtual private networks with potentially overlapping addresses
US6111877A (en) 1997-12-31 2000-08-29 Cisco Technology, Inc. Load sharing across flows
US6003104A (en) * 1997-12-31 1999-12-14 Sun Microsystems, Inc. High speed modular internal microprocessor bus system
US6065038A (en) * 1998-02-06 2000-05-16 Accton Technology Corp. Method and apparatus for transmitting data at different data transfer rates using multiple interconnected hubs
US5974051A (en) * 1998-03-03 1999-10-26 Cisco Technology, Inc. System interprocessor communication using media independent interface-based channel
US6044061A (en) * 1998-03-10 2000-03-28 Cabletron Systems, Inc. Method and apparatus for fair and efficient scheduling of variable-size data packets in an input-buffered multipoint switch
US6738814B1 (en) * 1998-03-18 2004-05-18 Cisco Technology, Inc. Method for blocking denial of service and address spoofing attacks on a private network
US6154743A (en) * 1998-06-16 2000-11-28 Cisco Technology, Inc. Technique for accessing heterogeneous directory services in an APPN environment
US6836838B1 (en) 1998-06-29 2004-12-28 Cisco Technology, Inc. Architecture for a processor complex of an arrayed pipelined processing engine
US6407985B1 (en) 1998-06-29 2002-06-18 Cisco Technology, Inc. Load sharing over blocked links
US6370121B1 (en) 1998-06-29 2002-04-09 Cisco Technology, Inc. Method and system for shortcut trunking of LAN bridges
US6195739B1 (en) 1998-06-29 2001-02-27 Cisco Technology, Inc. Method and apparatus for passing data among processor complex stages of a pipelined processing engine
US6101599A (en) * 1998-06-29 2000-08-08 Cisco Technology, Inc. System for context switching between processing elements in a pipeline of processing elements
US6356548B1 (en) 1998-06-29 2002-03-12 Cisco Technology, Inc. Pooled receive and transmit queues to access a shared bus in a multi-port switch asic
US6119215A (en) * 1998-06-29 2000-09-12 Cisco Technology, Inc. Synchronization and control system for an arrayed processing engine
US6513108B1 (en) 1998-06-29 2003-01-28 Cisco Technology, Inc. Programmable processing engine for efficiently processing transient data
US6377577B1 (en) 1998-06-30 2002-04-23 Cisco Technology, Inc. Access control list processing in hardware
US6351454B1 (en) 1998-07-24 2002-02-26 Cisco Technology, Inc. Apparatus and method for maintaining packet ordering over parallel links of a crossbar based switch fabric
US6182147B1 (en) 1998-07-31 2001-01-30 Cisco Technology, Inc. Multicast group routing using unidirectional links
US6308219B1 (en) 1998-07-31 2001-10-23 Cisco Technology, Inc. Routing table lookup implemented using M-trie having nodes duplicated in multiple memory banks
US6389506B1 (en) 1998-08-07 2002-05-14 Cisco Technology, Inc. Block mask ternary cam
US6101115A (en) 1998-08-07 2000-08-08 Cisco Technology, Inc. CAM match line precharge
US6535520B1 (en) 1998-08-14 2003-03-18 Cisco Technology, Inc. System and method of operation for managing data communication between physical layer devices and ATM layer devices
US6269096B1 (en) 1998-08-14 2001-07-31 Cisco Technology, Inc. Receive and transmit blocks for asynchronous transfer mode (ATM) cell delineation
US6560240B1 (en) * 1998-09-04 2003-05-06 Advanced Micro Devices, Inc. System-on-a-chip with variable clock rate
US6381245B1 (en) 1998-09-04 2002-04-30 Cisco Technology, Inc. Method and apparatus for generating parity for communication between a physical layer device and an ATM layer device
US6724772B1 (en) 1998-09-04 2004-04-20 Advanced Micro Devices, Inc. System-on-a-chip with variable bandwidth
US5991300A (en) * 1998-09-08 1999-11-23 Cisco Technology, Inc. Technique for efficiently performing optional TTL propagation during label imposition
US6295296B1 (en) 1998-09-08 2001-09-25 Cisco Technology, Inc. Use of a single data structure for label forwarding and imposition
US6272113B1 (en) 1998-09-11 2001-08-07 Compaq Computer Corporation Network controller system that uses multicast heartbeat packets
US6229538B1 (en) 1998-09-11 2001-05-08 Compaq Computer Corporation Port-centric graphic representations of network controllers
US6381218B1 (en) 1998-09-11 2002-04-30 Compaq Computer Corporation Network controller system that uses directed heartbeat packets
US6728839B1 (en) 1998-10-28 2004-04-27 Cisco Technology, Inc. Attribute based memory pre-fetching technique
US10511573B2 (en) 1998-10-30 2019-12-17 Virnetx, Inc. Agile network protocol for secure communications using secure domain names
ES2760905T3 (en) 1998-10-30 2020-05-18 Virnetx Inc An agile network protocol for secure communications with assured system availability
US7418504B2 (en) 1998-10-30 2008-08-26 Virnetx, Inc. Agile network protocol for secure communications using secure domain names
US6502135B1 (en) 1998-10-30 2002-12-31 Science Applications International Corporation Agile network protocol for secure communications with assured system availability
US6839759B2 (en) 1998-10-30 2005-01-04 Science Applications International Corp. Method for establishing secure communication link between computers of virtual private network without user entering any cryptographic information
US6185221B1 (en) 1998-11-09 2001-02-06 Cabletron Systems, Inc. Method and apparatus for fair and efficient scheduling of variable-size data packets in an input-buffered multipoint switch
US6700872B1 (en) 1998-12-11 2004-03-02 Cisco Technology, Inc. Method and system for testing a utopia network element
US6173386B1 (en) 1998-12-14 2001-01-09 Cisco Technology, Inc. Parallel processor with debug capability
US6385747B1 (en) 1998-12-14 2002-05-07 Cisco Technology, Inc. Testing of replicated components of electronic device
US6920562B1 (en) 1998-12-18 2005-07-19 Cisco Technology, Inc. Tightly coupled software protocol decode with hardware data encryption
US6535511B1 (en) 1999-01-07 2003-03-18 Cisco Technology, Inc. Method and system for identifying embedded addressing information in a packet for translation between disparate addressing systems
US6453357B1 (en) * 1999-01-07 2002-09-17 Cisco Technology, Inc. Method and system for processing fragments and their out-of-order delivery during address translation
US6771642B1 (en) 1999-01-08 2004-08-03 Cisco Technology, Inc. Method and apparatus for scheduling packets in a packet switch
US6449655B1 (en) 1999-01-08 2002-09-10 Cisco Technology, Inc. Method and apparatus for communication between network devices operating at different frequencies
US7307990B2 (en) * 1999-01-19 2007-12-11 Cisco Technology, Inc. Shared communications network employing virtual-private-network identifiers
US6337861B1 (en) 1999-02-02 2002-01-08 Cisco Technology, Inc. Method and apparatus to properly route ICMP messages in a tag-switching network
US6512768B1 (en) 1999-02-26 2003-01-28 Cisco Technology, Inc. Discovery and tag space identifiers in a tag distribution protocol (TDP)
US6853623B2 (en) 1999-03-05 2005-02-08 Cisco Technology, Inc. Remote monitoring of switch network
US6473421B1 (en) 1999-03-29 2002-10-29 Cisco Technology, Inc. Hierarchical label switching across multiple OSPF areas
US6757791B1 (en) 1999-03-30 2004-06-29 Cisco Technology, Inc. Method and apparatus for reordering packet data units in storage queues for reading and writing memory
US6603772B1 (en) 1999-03-31 2003-08-05 Cisco Technology, Inc. Multicast routing with multicast virtual output queues and shortest queue first allocation
US6760331B1 (en) 1999-03-31 2004-07-06 Cisco Technology, Inc. Multicast routing with nearest queue first allocation and dynamic and static vector quantization
US6061728A (en) * 1999-05-25 2000-05-09 Cisco Technology, Inc. Arrangement for controlling network proxy device traffic on a transparently-bridged local area network using a master proxy device
CA2371795C (en) 1999-05-26 2012-02-07 Bigband Networks, Inc. Communication management system and method
US7009969B1 (en) * 1999-06-08 2006-03-07 Cisco Technology, Inc. Local area network and message packet for a telecommunications device
US6952421B1 (en) 1999-10-07 2005-10-04 Cisco Technology, Inc. Switched Ethernet path detection
US6529983B1 (en) 1999-11-03 2003-03-04 Cisco Technology, Inc. Group and virtual locking mechanism for inter processor synchronization
US6681341B1 (en) 1999-11-03 2004-01-20 Cisco Technology, Inc. Processor isolation method for integrated multi-processor systems
US6484224B1 (en) 1999-11-29 2002-11-19 Cisco Technology Inc. Multi-interface symmetric multiprocessor
JP2001217894A (en) * 1999-12-07 2001-08-10 Texas Instr Inc <Ti> Data transfer controller with hub and port having effective channel priority processing function
US6807172B1 (en) 1999-12-21 2004-10-19 Cisco Technology, Inc. Method and apparatus for learning and switching frames in a distributed network switch
US6910133B1 (en) 2000-04-11 2005-06-21 Cisco Technology, Inc. Reflected interrupt for hardware-based encryption
US6505269B1 (en) 2000-05-16 2003-01-07 Cisco Technology, Inc. Dynamic addressing mapping to eliminate memory resource contention in a symmetric multiprocessor system
US6850980B1 (en) 2000-06-16 2005-02-01 Cisco Technology, Inc. Content routing service protocol
US20020083344A1 (en) * 2000-12-21 2002-06-27 Vairavan Kannan P. Integrated intelligent inter/intra networking device
US6847619B2 (en) 2001-05-11 2005-01-25 Northrop Grumman Corporation Dual mode of operation multiple access system for data link communication
US7283556B2 (en) * 2001-07-31 2007-10-16 Nishan Systems, Inc. Method and system for managing time division multiplexing (TDM) timeslots in a network switch
US7177971B2 (en) * 2001-08-24 2007-02-13 Intel Corporation General input/output architecture, protocol and related methods to provide isochronous channels
US9836424B2 (en) 2001-08-24 2017-12-05 Intel Corporation General input/output architecture, protocol and related methods to implement flow control
CN100367254C (en) 2001-08-24 2008-02-06 英特尔公司 A general input/output architecture, protocol and related methods to support legacy interrupts
NO20016372D0 (en) * 2001-12-27 2001-12-27 Ericsson Telefon Ab L M Arrangement to reduce memory requirements in a switch
US7281044B2 (en) * 2002-01-10 2007-10-09 Hitachi, Ltd. SAN infrastructure on demand service system
US7548512B2 (en) * 2003-02-06 2009-06-16 General Electric Company Methods and systems for prioritizing data transferred on a Local Area Network
US7194568B2 (en) 2003-03-21 2007-03-20 Cisco Technology, Inc. System and method for dynamic mirror-bank addressing
US7729267B2 (en) 2003-11-26 2010-06-01 Cisco Technology, Inc. Method and apparatus for analyzing a media path in a packet switched network
US7522524B2 (en) * 2004-04-29 2009-04-21 International Business Machines Corporation Employing one or more multiport systems to facilitate servicing of asynchronous communications events
US7787361B2 (en) 2005-07-29 2010-08-31 Cisco Technology, Inc. Hybrid distance vector protocol for wireless mesh networks
US7151782B1 (en) 2005-08-09 2006-12-19 Bigband Networks, Inc. Method and system for providing multiple services to end-users
US7660318B2 (en) * 2005-09-20 2010-02-09 Cisco Technology, Inc. Internetworking support between a LAN and a wireless mesh network
US20070110024A1 (en) * 2005-11-14 2007-05-17 Cisco Technology, Inc. System and method for spanning tree cross routes
US8453147B2 (en) * 2006-06-05 2013-05-28 Cisco Technology, Inc. Techniques for reducing thread overhead for systems with multiple multi-threaded processors
US8041929B2 (en) * 2006-06-16 2011-10-18 Cisco Technology, Inc. Techniques for hardware-assisted multi-threaded processing
US8010966B2 (en) * 2006-09-27 2011-08-30 Cisco Technology, Inc. Multi-threaded processing using path locks
US7738383B2 (en) * 2006-12-21 2010-06-15 Cisco Technology, Inc. Traceroute using address request messages
US7706278B2 (en) * 2007-01-24 2010-04-27 Cisco Technology, Inc. Triggering flow analysis at intermediary devices
US7793032B2 (en) * 2007-07-11 2010-09-07 Commex Technologies, Ltd. Systems and methods for efficient handling of data traffic and processing within a processing device
US7940762B2 (en) * 2008-03-19 2011-05-10 Integrated Device Technology, Inc. Content driven packet switch
US8732369B1 (en) 2010-03-31 2014-05-20 Ambarella, Inc. Minimal-cost pseudo-round-robin arbiter
US8774010B2 (en) 2010-11-02 2014-07-08 Cisco Technology, Inc. System and method for providing proactive fault monitoring in a network environment
US8559341B2 (en) 2010-11-08 2013-10-15 Cisco Technology, Inc. System and method for providing a loop free topology in a network environment
WO2012094285A1 (en) * 2011-01-04 2012-07-12 Thomson Licensing Apparatus and method for multi-device routing in a gateway
US8982733B2 (en) 2011-03-04 2015-03-17 Cisco Technology, Inc. System and method for managing topology changes in a network environment
US8670326B1 (en) 2011-03-31 2014-03-11 Cisco Technology, Inc. System and method for probing multiple paths in a network environment
US8724517B1 (en) 2011-06-02 2014-05-13 Cisco Technology, Inc. System and method for managing network traffic disruption
US8830875B1 (en) 2011-06-15 2014-09-09 Cisco Technology, Inc. System and method for providing a loop free topology in a network environment
JP5704567B2 (en) * 2011-09-13 2015-04-22 株式会社日立製作所 Node device, system, and packet processing method
US9450846B1 (en) 2012-10-17 2016-09-20 Cisco Technology, Inc. System and method for tracking packets in a network environment

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4593282A (en) * 1983-04-14 1986-06-03 At&T Information Systems Inc. Network protocol for integrating synchronous and asynchronous traffic on a common serial data bus
GB8407102D0 (en) * 1984-03-19 1984-04-26 Int Computers Ltd Interconnection of communications networks
US4586175A (en) * 1984-04-30 1986-04-29 Northern Telecom Limited Method for operating a packet bus for transmission of asynchronous and pseudo-synchronous signals
US4621362A (en) * 1984-06-04 1986-11-04 International Business Machines Corp. Routing architecture for a multi-ring local area network
US4706081A (en) * 1984-12-14 1987-11-10 Vitalink Communications Corporation Method and apparatus for bridging local area networks
US4809265A (en) * 1987-05-01 1989-02-28 Vitalink Communications Corporation Method and apparatus for interfacing to a local area network
US4866421A (en) * 1987-06-18 1989-09-12 Texas Instruments Incorporated Communications circuit having an interface for external address decoding
US4811337A (en) * 1988-01-15 1989-03-07 Vitalink Communications Corporation Distributed load sharing
US4979100A (en) * 1988-04-01 1990-12-18 Sprint International Communications Corp. Communication processor for a packet-switched network
US4933938A (en) * 1989-03-22 1990-06-12 Hewlett-Packard Company Group address translation through a network bridge
FR2648646B1 (en) * 1989-06-19 1991-08-23 Alcatel Business Systems METHOD AND DEVICE FOR MANAGING ACCESS TO THE TRANSMISSION MEDIUM OF A MULTI-SERVICE DIVISION SWITCHING NETWORK
US4987571A (en) * 1989-07-25 1991-01-22 Motorola, Inc. Data communication system with prioritized periodic and aperiodic messages
US5088090A (en) * 1990-01-31 1992-02-11 Rad Network Devices Ltd. Routing system to interconnect local area networks
US5274631A (en) * 1991-03-11 1993-12-28 Kalpana, Inc. Computer network switching system
DE69124596T2 (en) * 1991-04-22 1997-08-21 Ibm Collision-free insertion and removal of circuit-switched channels in a packet-switching transmission structure
CA2068847C (en) * 1991-07-01 1998-12-29 Ronald C. Roposh Method for operating an asynchronous packet bus for transmission of asynchronous and isochronous information
US5392280A (en) * 1994-04-07 1995-02-21 Mitsubishi Electric Research Laboratories, Inc. Data transmission system and scheduling protocol for connection-oriented packet or cell switching networks

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
BYTE, vol. 19, no. 9, 1 September 1994 XP 000473306 BRYAN J 'LANS MAKE THE SWITCH THE SWITCHING CAPABILITIES OF BRIDGES, ROUTERS, AND HUBS ADD NEW POWER TO LANS AND MAKE IT EASIER THAN EVER TO CONNECT THEM TO ENTERPRISE WANS' *
DATA COMMUNICATIONS, vol. 23, no. 12, September 1994 NEW YORK US, pages 49-50, XP 000462381 S. SAUNDERS 'SWITCH MIXES UNLIKE LANS AT AN UNLIKELY LOW PRICE' *
DATA COMMUNICATIONS, vol. 23, no. 12, September 1994 NEW YORK US, pages 64E-64F, XP 000462384 P. HEYWOOD 'LAN SWITCH IS SET TO TAKE ON MULTIMEDIA' *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5940597A (en) * 1995-01-11 1999-08-17 Sony Corporation Method and apparatus for periodically updating entries in a content addressable memory
US6256313B1 (en) 1995-01-11 2001-07-03 Sony Corporation Triplet architecture in a multi-port bridge for a local area network
US6122667A (en) * 1995-01-11 2000-09-19 Sony Corporation Method and integrated circuit for high-bandwidth network server interfacing to a local area network using CSMA/CD
US6012099A (en) * 1995-01-11 2000-01-04 Sony Corporation Method and integrated circuit for high-bandwidth network server interfacing to a local area network
US6108726A (en) * 1996-09-13 2000-08-22 Advanced Micro Devices. Inc. Reducing the pin count within a switching element through the use of a multiplexer
WO1998011696A1 (en) * 1996-09-13 1998-03-19 Advanced Micro Devices, Inc. A method and system for reducing the pin count required for the interconnections of controller devices and physical devices in a network
EP0854612A3 (en) * 1996-12-30 1999-06-16 Compaq Computer Corporation Method and system for performing concurrent read and write cycles in a network switch
EP0854611A3 (en) * 1996-12-30 1999-06-16 Compaq Computer Corporation Network switch with multiple bus architecture
EP0854612A2 (en) * 1996-12-30 1998-07-22 Compaq Computer Corporation Method and system for performing concurrent read and write cycles in a network switch
US6222840B1 (en) 1996-12-30 2001-04-24 Compaq Computer Corporation Method and system for performing concurrent read and write cycles in network switch
EP0854611A2 (en) * 1996-12-30 1998-07-22 Compaq Computer Corporation Network switch with multiple bus architecture
US6260073B1 (en) 1996-12-30 2001-07-10 Compaq Computer Corporation Network switch including a switch manager for periodically polling the network ports to determine their status and controlling the flow of data between ports
WO1999014901A1 (en) * 1997-09-17 1999-03-25 Sony Electronics Inc. High speed bus structure in a multi-port bridge for a local area network
US6157951A (en) * 1997-09-17 2000-12-05 Sony Corporation Dual priority chains for data-communication ports in a multi-port bridge for a local area network
US6446173B1 (en) 1997-09-17 2002-09-03 Sony Corporation Memory controller in a multi-port bridge for a local area network
US6185210B1 (en) 1997-09-30 2001-02-06 Bbn Corporation Virtual circuit management for multi-point delivery in a network system
CN100341274C (en) * 2003-06-13 2007-10-03 韩国电子通信研究院 Ethernet switch, and apparatus and method for expanding port
WO2010076649A2 (en) * 2008-12-31 2010-07-08 Transwitch India Pvt. Ltd. Packet processing system on chip device
WO2010076649A3 (en) * 2008-12-31 2011-11-24 Transwitch India Pvt. Ltd. Packet processing system on chip device

Also Published As

Publication number Publication date
WO1996013922A3 (en) 1996-06-13
US5561669A (en) 1996-10-01
EP0788691A2 (en) 1997-08-13

Similar Documents

Publication Publication Date Title
US5561669A (en) Computer network switching system with expandable number of ports
US5546385A (en) Flexible switching hub for a communication network
US6430194B1 (en) Method and apparatus for arbitrating bus access amongst competing devices
EP0630540B1 (en) Communications bus and controller
US6295281B1 (en) Symmetric flow control for ethernet full duplex buffered repeater
US6674750B1 (en) Apparatus and method for communicating time-division multiplexed data and packet data on a shared bus
US6108306A (en) Apparatus and method in a network switch for dynamically allocating bandwidth in ethernet workgroup switches
US8379658B2 (en) Deferred queuing in a buffered switch
US6373848B1 (en) Architecture for a multi-port adapter with a single media access control (MAC)
US6510138B1 (en) Network switch with head of line input buffer queue clearing
US7274705B2 (en) Method and apparatus for reducing clock speed and power consumption
US6061348A (en) Method and apparatus for dynamically allocating bandwidth for a time division multiplexed data bus
US20040022263A1 (en) Cross point switch with out-of-band parameter fine tuning
US6421348B1 (en) High-speed network switch bus
US20020118692A1 (en) Ensuring proper packet ordering in a cut-through and early-forwarding network switch
JPH07295924A (en) Computer bus and arbitration method
US5732079A (en) Method and apparatus for skewing the start of transmission on multiple data highways
US20020172210A1 (en) Network device switch
EP1442376B1 (en) Tagging and arbitration mechanism in an input/output node of a computer system
US5982296A (en) Data switching processing method and apparatus
US6115374A (en) Method and apparatus for dynamically assigning bandwidth for a time division multiplexing data bus
US6512769B1 (en) Method and apparatus for rate-based cell traffic arbitration in a switch
US6195334B1 (en) Apparatus and method for terminating a data transfer in a network switch in response to a detected collision
US6438102B1 (en) Method and apparatus for providing asynchronous memory functions for bi-directional traffic in a switch platform
US6654838B1 (en) Methods for performing bit sensitive parallel bus peer addressing

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): CA JP

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

AK Designated states

Kind code of ref document: A3

Designated state(s): CA JP

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
ENP Entry into the national phase

Ref document number: 2203500

Country of ref document: CA

Ref country code: CA

Ref document number: 2203500

Kind code of ref document: A

Format of ref document f/p: F

WWE Wipo information: entry into national phase

Ref document number: 1995939604

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1995939604

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1995939604

Country of ref document: EP