US20010043614A1 - Multi-layer switching apparatus and method

Multi-layer switching apparatus and method

Info

Publication number
US20010043614A1
US20010043614A1 (application US09/118,458)
Authority
US
United States
Prior art keywords
packet
memory
module
cam
port
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US09/118,458
Other versions
US6424659B2 (en)
Inventor
Krishna Viswanadham
Mahesh Veerina
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sonus Networks Inc
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US09/118,458
Application filed by Individual filed Critical Individual
Assigned to FLOWWISE NETWORKS INC.: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: VEERINA, MAHESH
Assigned to FLOWWISE NETWORKS INC.: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: VISWANADHAM, KRISHNA
Assigned to NETWORK EQUIPMENT TECHNOLOGIES, INC.: MERGER (SEE DOCUMENT FOR DETAILS). Assignors: FLOWWISE NETWORKS, INC.
Publication of US20010043614A1
Publication of US6424659B2
Application granted
Assigned to BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT: SECURITY INTEREST. Assignors: NETWORK EQUIPMENT TECHNOLOGIES, INC., PERFORMANCE TECHNOLOGIES, INCORPORATED, SONUS FEDERAL, INC., SONUS INTERNATIONAL, INC., SONUS NETWORKS, INC.
Assigned to SONUS FEDERAL, INC., NETWORK EQUIPMENT TECHNOLOGIES, INC., PERFORMANCE TECHNOLOGIES, INCORPORATED, SONUS INTERNATIONAL, INC., TAQUA, INC., SONUS NETWORKS, INC.: RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: BANK OF AMERICA, N.A.
Assigned to SONUS NETWORKS, INC.: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: NETWORK EQUIPMENT TECHNOLOGIES, INC.
Anticipated expiration
Legal status: Expired - Lifetime

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00: Packet switching elements
    • H04L 49/60: Software-defined switches
    • H04L 49/602: Multilayer or multiprotocol switching, e.g. IP switching
    • H04L 49/20: Support for services
    • H04L 49/201: Multicast operation; Broadcast operation
    • H04L 49/25: Routing or path finding in a switch fabric
    • H04L 49/253: Routing or path finding in a switch fabric using establishment or release of connections between ports
    • H04L 49/254: Centralised controller, i.e. arbitration or scheduling
    • H04L 49/35: Switches specially adapted for specific applications
    • H04L 49/351: Switches specially adapted for specific applications for local area network [LAN], e.g. Ethernet switches

Definitions

  • The invention relates to digital networks, and particularly to a multi-layer switching network apparatus and method.
  • As businesses increasingly rely on LANs and TCP/IP, both LAN size and the volume of TCP/IP traffic running across them have grown dramatically. This has put the network manager on a continuous search for products that increase network performance, adapt easily to changing network requirements, and preserve existing network investment.
  • LAN technology is evolving into Gigabit per second (Gbps) range.
  • Equipment designers have been challenged to make network interfaces and networking products such as bridges, routers, and switches, fast enough to take advantage of the new performance.
  • Compounding the equipment design problem has been the rapid innovation in networking protocols.
  • The traditional response to this shifting-sands problem has been to build easily upgradable, software-intensive products.
  • However, these software-intensive products typically exhibit poor system performance.
  • The invention resides in a multilayer switching device and associated technique enabling simultaneous wire-speed routing at layer 3, wire-speed switching at layer 2, and support for multiple interfaces at layer 1, according to the OSI reference model.
  • The inventive implementation may be embodied using one or more application-specific integrated circuits (ASICs), a RISC processor, and software, thereby providing wire-speed performance on the interfaces in various operational modes.
  • FIG. 1 is a system-level diagram of the preferred embodiment.
  • FIGS. 2A-B are block diagrams of the first- and second-level switches, respectively, of the present embodiment.
  • FIG. 3 is a general switch block diagram of the present embodiment.
  • FIG. 4 is a general control-path diagram of the present embodiment.
  • FIG. 5 is a general datapath diagram of the present embodiment.
  • FIGS. 6A-B are block diagrams of the LAN interface and datapath interface, respectively, of the present embodiment.
  • FIGS. 7A-B are block diagrams of DMA transfer between local memory and packet memory, and of processor access to packet memory, respectively, of the present embodiment.
  • FIGS. 8A-B are block diagrams of processor access to L3 CAM memory and to control memory, respectively, of the present embodiment.
  • FIGS. 9A-B are block diagrams of processor access to L2 CAM memory, and of LAN arbiter interaction with the datapath, respectively, of the present embodiment.
  • FIGS. 10A-B are block diagrams of transmit queue management block (XQMB) interfaces and operation, respectively, of the present embodiment.
  • FIG. 11 is a DMA block diagram of the present embodiment.
  • FIG. 12 is a flowchart of the CPU-to-packet memory operation of the present embodiment.
  • FIG. 13 is a flowchart of the packet memory-to-CPU operation of the present embodiment.
  • FIG. 14 is a block diagram of the L3 block interfaces of the present embodiment.
  • FIGS. 15A-B are flowcharts of age table maintenance of the present embodiment.
  • FIGS. 16A-B are flowcharts of search and lookup operations, respectively, of the present embodiment.
  • FIG. 17 is a flowchart of packet reception of the present embodiment.
  • FIG. 1 is a top-level overview diagram of the system architecture of the preferred embodiment.
  • Multilayer switch device 6 couples local area network (LAN) workgroup hubs 2 through enterprise switching hub 4 to wide-area network (WAN) links through multiprotocol router 8.
  • Multilayer switch 6 and the associated technique enable simultaneous wire-speed routing at Layer 3 (L3), wire-speed switching at Layer 2 (L2), and support for multiple interfaces at Layer 1 (L1), according to the OSI reference model.
  • The system may be embodied using one or more ASICs, a RISC processor, and software, thereby providing wire-speed performance on various interfaces in various operational modes.
  • The system architecture comprises a two-level distributed multilayer switch, preferably using 4-Gbps non-blocking switch fabric 6.
  • The multilayer (i.e., both L2 and L3) switch fabric is entirely contained within a single ASIC capable of switching 3M pps or more.
  • A 4-Gbps I/O bus connects one or more interface modules to the ASIC.
  • Because the switch matrix is not necessarily integrated with the MAC layer, a wide range of interface types can be supported (i.e., both LAN and WAN).
  • Various combinations of layer 1 interfaces are supportable, and all interface modules are field-upgradable.
  • Various interface modules may carry multiple physical interfaces.
  • First-level switch 22 includes switch ASIC 20, which couples to RISC coprocessors (i.e., Network Management Processor (NMP) 10 and Route/Switch (RS) processor 12) for supporting higher-layer software functions and support features. Optional components may be added for redundancy of critical system components, such as power supplies. Memory 16 and input/output (I/O) modules 14 couple to switch circuit 20.
  • second-level switch or cross-bar interconnection 18 couples multiple first-level switches 22 .
  • aggregate performance of non-blocking switch fabric may exceed 24 Gbps.
  • RISC processors 10 , 12 provided in each switch element 22 execute software to provide standards-based dynamic routing, and non-real time activities such as network management.
  • Software is stored in flash memory, and is network-updatable via TFTP.
  • Preferred software functions include: dynamic Internet Protocol (IP) routing (e.g., RIP, RIPv2, OSPF); layer 2 support (e.g., 802.1D STP); configuration support (e.g., enable/disable Layer 2 or Layer 3 support on a per-port basis; ports can be grouped into broadcast domains; flexible subnet configuration); and network management (e.g., SNMP, HTML, Telnet, TFTP, DHCP support).
  • Additional software functions include: quality-of-service (QoS) provisioning (e.g., multiple levels of prioritization, address- and policy-based QoS, dynamic layer 3 QoS based on RSVP); IP Multicast (e.g., IGMP, DVMRP); network traffic monitoring (e.g., RMON); hot standby support (e.g., VRRP); additional dynamic routing (e.g., NHRP); and certain IEEE enhancements (e.g., 802.1Q (VLAN), 802.3x (flow control), and 802.1p (priority)).
  • The present multilayer switch system is suitable for applications at network aggregation points.
  • The present system may also be used in high-performance workgroup and server applications.
  • The present system may interconnect a cluster of closely cooperating high-performance computers, such as in video postproduction, where the ability to transfer data rapidly between workstations is critical to production throughput.
  • Here, wire-speed performance is of particular interest, and flexible layer-3 addressing support provides connections outside the workgroup without impacting switching speed.
  • The present multilayer switch system also provides a network attachment point for one or more servers. The wire-speed performance of the present system allows the network designer to use either layer 2 or layer 3 topologies, and removes a potential network performance bottleneck.
  • A preferred implementation of the innovative multilayer switch apparatus and methodology provides the following functionality: support for 16 or more full-duplex 100BaseT ports, or up to 28 ports of 10/100BaseT; direct interface to a MIPS-type RISC processor for management and routing; integration of an SDRAM controller for shared high-speed 6-channel packet memory; integration of a CAM access interface to the system processor; integration of a hardware CAM processor for L2 learning, lookup and live interactive activities or transactions; integration of hardware hash-based IP header lookup and management; integration of hardware-based transmit and free queue management; integration of L2 and L3 forwarding of unicast, broadcast and multicast packets; broadcast traffic management; integration of QoS, with 4 priority queues per port; hardware-handled packet movement; integration of 768 bytes of dual-port memory for L2 and L3 headers for 28 ports; support for 4 MB/16 MB of SDRAM packet memory; implementation of 256 bytes of data buffers for concurrent transfers to PM SDRAM and the LAN bus; and an intelligent buffer scheduler and arbiter.
  • Multilayer switch circuit 20 is implemented as a single-chip integrated circuit (e.g., semicustom ASIC) that handles switching of any canonical packet, cell, frame, or other data communication element, with no or limited processing assistance from external processors.
  • Switch circuit 20 operates in relatively low-latency and store-and-forward switching modes. Transactions between Ethernet ports may operate in low-latency cut-thru mode; other transactions may occur in store-and-forward mode.
  • switch circuit 20 may contain substantially one or more of following functions: external bus interface, processor interface, CAM interface, LAN interface, packet memory (PM) SDRAM interface, route cache SDRAM interface, control memory (CM) SRAM interface, LAN block, LAN bus arbiter, LAN bus controller, LAN block interfaces, data path block, data path buffers, data path controller, buffer scheduler, packet memory, packet memory SDRAM arbiter and controller, DMA function-to-processor interface, packet engine (PE), port control function, port attribute memory, L 2 CAM engine, memory blocks for header and CAM analysis result, CAM structures, L 2 header analysis hardware engine, auto-forwarding block, forwarding block, L 3 header analysis result memory, free queue management block, block attributes management, transmit queue management block (XQMB), SRAM arbiter and controller, processor interface, L 3 block, L 3 header memory, hash function, L 3 lookup algorithm, L 3 management function, L 3 aging function, route cache (RC) SDRAM arbiter and controller, RISC processor interface
  • FIG. 3 shows a general logic block diagram for switch circuit 20 coupled to: a 64-bit 66-Mhz LAN bus, external memory 16 through a 32-bit 99-Mhz bus, L2 CAM through a 16-bit 66-Mhz bus, control memory 136 through a 16-bit 66-Mhz bus, L3 route cache through a 16-bit 66-Mhz bus, and switch processor 12 through a 16-bit 66-Mhz bus, which couples to network management processor (NMP) 10 through external interprocessor controller (IPC) 24.
  • internal control path of switch circuit 20 is shown.
  • External switch processor 12 couples to CAM interface 46 , free queue management 48 , L 3 lookup 50 , transmit queue management and scheduler 58 , SDRAM memory controller 62 , and SRAM memory controller 64 .
  • internal control path includes forwarding engine 52 , which couples to CAM interface 46 , free queue management 48 , L 3 lookup 50 , block attributes 60 , transmit queue management and scheduler 58 , and receive block 54 .
  • Transmit queue management and scheduler 58 couples to transmit block 56 , SRAM memory controller 64 , and block attributes 60 .
  • Receive block 54 and transmit block 56 couple to LAN bus.
  • CAM interface 46 couples to the CAM bus and receive block 54.
  • SRAM memory controller 64 couples to free queue management 48 , block attributes 60 , L 3 lookup 50 , and SDRAM memory controller 62 .
  • SDRAM memory controller 62 couples to RC memory bus and L 3 lookup 50 .
  • Block attributes 60 couples to free queue management 48 .
  • Forwarding engine 52 couples to receive block 54 .
  • multi-channel packet memory arbiter and controller 66 couples to SDRAM packet memory bus, processor and DMA interface 68 , L 3 engine 70 , receiver buffers 72 , and transmit buffers 74 .
  • Receive and transmit buffers 72 , 74 couple to media access controller (MAC) first-in first-out (FIFO) bus.
  • Switch circuit 20 includes processor interface 36 which couples to 32-bit MIPS RISC processor multiplexed bus (e.g., NEC R4300).
  • The processor bus, a 32-bit address/data bus operable up to 66 Mhz, operates in master and slave modes.
  • In slave mode, such processor bus responds to accesses to internal resources, such as registers, CAM 142, Control Memory 136, PM SDRAM and RC SDRAM.
  • In master mode, such bus handles DMA operations to and from PM SDRAM.
  • Such processor bus does not respond to accesses to external resources, but cooperates with an external system controller circuit.
  • In master mode, such processor bus may also handle DMA to system memory.
  • Switch circuit 20 includes CAM interface 46 , a dedicated 16-bit bus compliant with content-addressable memory (i.e., Music Semiconductor CAM 1480 compatible) operating at 66 Mhz. Such bus may be shared by external interface.
  • Switch circuit 20 generates CAM access timing control on behalf of RS processor 12 .
  • Switch circuit 20 learns and looks-up MAC addresses and port numbers through such bus.
  • Switch circuit 20 includes LAN interface 40, which couples the LAN bus, a 64-bit access bus operating at 66 Mhz. Ethernet MAC devices connect to such LAN bus through the receive and transmit MAC FIFO bus. Switch circuit 20 generates select signals and control signals for access to the external MAC device FIFO bus. Switch circuit 20 reads and writes data in 64-bit single-cycle burst mode. Burst size is 64 bytes. Preferred bandwidth is 4 Gbps at 64-bit/66-Mhz operation with a 64-byte slice size. Ethernet frames are transferred across the LAN bus. At the end of a receive frame, status bytes are read.
  • Switch circuit 20 includes packet memory (PM) SDRAM interface 42 , which includes PM SDRAM bus which operates at 32-bit/99-Mhz standard. Packet memory 16 is directly connected to such bus through registered transceivers. Preferred bandwidth is 400 MB/s at 99-Mhz operation and 64-byte burst mode. Seven-channel arbiter inside switch circuit 20 allows up to 7 agents to access packet memory 16 . PM interface supports up to 8 MB of SDRAM in two banks.
  • Switch circuit 20 includes interface to Route Cache (RC) SDRAM for coupling timing control signals and multiplexed 16-bit bus, which operates in 66-Mhz mode capable of streaming data at 132 MB/sec.
  • Switch circuit 20 includes interface to Control Memory (CM) SRAM for managing block free queue list, transmit queues, block parameters and L 3 CAM aging information.
  • Such interface is 16-bits wide and operates at 66-Mhz. Address and data buses are multiplexed and operate in flow-through and pipelined modes.
  • FIG. 6A shows LAN block 40, which interfaces externally to the Ethernet Media Access Controller (MAC) FIFO bus and internally to the CAM interface block, datapath block 44, and packet engine block 82.
  • LAN block interface functionality includes bus arbitration for receive and transmit requests of the FIFO bus, bus control and protocol handling, signaling the internal datapath block to initiate data transfers, and communicating with the packet engine to signal the beginning and end of receive and transmit operations on the FIFO bus.
  • datapath block 44 couples to FIFO data bus, LAN bus controller 76 , buffer allocator 78 , and packet engine 82 .
  • LAN bus controller (LBC) 76 couples to FIFO bus control, buffer allocator 78 , and receiver and transmit arbiters 80 , which couple to packet engine 82 and receive and transmit requests.
  • When LAN interface 40 operates, receive requests and transmit requests are multiplexed and fed by external logic. The multiplexer uses a 2-bit counter output. A front-end demultiplexer reconstructs the requests in a 32-bit receive request register and a 32-bit transmit request register. A few clocks of latency may elapse before a request is sensed as activated or deactivated, which is handled by arbiter mechanism 80.
  • Receive arbiter 80 services receive port requests, preferably in a round-robin scheme for equal distribution. Overlapped processing provides improved performance; hence, if a receive port is under service, prioritization of the next request occurs in parallel. During arbitration, arbiter 80 may receive signals such as port enabled and free block allocated from other modules. Upon a certain channel winning arbitration, an internal receive buffer is allocated 78, and data is staged from the MAC FIFO bus for packet memory 16. When the buffer is granted, the channel is presented to LAN Bus controller 76 for data transfer.
  • transmit arbiter 80 services transmit port requests in round-robin scheme for equal distribution. Overlapped processing provides improved performance. Hence, when transmit port is under service, next request is prioritized in pipeline.
  • During arbitration, arbiter 80 may receive signals from other modules such as port enabled, valid packet assigned, and (in link mode) transmitter has at least one slice. If a channel has a data slice in datapath 44, the channel is not allowed to join arbitration until the data is put into packet memory 16, thereby preventing out-of-sequence data transfer.
  • Upon channel winning arbitration it is presented to buffer allocator block 78 to obtain internal transmit buffers for staging from packet memory 16 for MAC FIFO bus. Once transmit request wins arbitration, and transmit buffer is allocated, channel is presented to packet engine block 82 to obtain data from packet memory 16 . Once data is staged in transmit buffer, buffer requests to LAN Bus controller 76 to transfer data in transmit buffer to MAC FIFO bus.
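A round-robin port arbiter of the kind described above can be pictured in software. The following C fragment is a minimal sketch only (the 32-port request register layout and the eligibility mask are assumptions, not taken from the patent): it scans the request bits starting one position past the last port serviced, masks out ports that are not enabled or have no free block, and returns the winning port.

```c
#include <stdint.h>

#define NUM_PORTS 32

/* Hypothetical arbiter state: the port most recently granted service. */
typedef struct {
    uint8_t last_grant;
} rr_arbiter_t;

/*
 * Round-robin arbitration over a 32-bit request register.
 * 'requests' has one bit per port; 'eligible' masks ports that are
 * enabled and have a free block (or, for transmit, a valid packet).
 * Returns the winning port number, or -1 if nothing is eligible.
 */
static int rr_arbitrate(rr_arbiter_t *arb, uint32_t requests, uint32_t eligible)
{
    uint32_t pending = requests & eligible;
    if (pending == 0)
        return -1;

    /* Scan starting just after the last winner, for equal distribution. */
    for (int i = 1; i <= NUM_PORTS; i++) {
        int port = (arb->last_grant + i) % NUM_PORTS;
        if (pending & (1u << port)) {
            arb->last_grant = (uint8_t)port;
            return port;
        }
    }
    return -1;   /* unreachable when pending != 0 */
}
```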
  • LAN bus controller 76 provides access to the MAC FIFO bus targeted to the port, moving a slice between the MAC FIFO and internal data buffers. A receive request that wins receive arbitration and secures one of the receive buffers from buffer allocator 78, and transmit buffers having data for transfer to the FIFO bus, compete for the services of LAN bus controller 76. The arbitration mechanism is configured to split bandwidth evenly between receive requests and transmit requests. LAN bus controller 76 generates end-of-packet status read cycles for receive request data transfer operations. Status information is used to determine if a received packet is good or bad. If an error is sensed, the received packet may be rejected.
  • Data bus width of LAN bus is 64 bits.
  • LAN bus access is performed in burst mode (i.e., single-cycle burst mode) with maximum of 64-byte transfer, preferably executing at 8 data cycles in burst.
  • LAN bus controller 76 is started by buffer scheduler when data buffer is allocated to receive or when data transfer from packet memory 16 to one of transmit buffers is complete.
  • Receive and transmit data to LAN bus is staged through 64-byte deep receive and transmit data buffers in datapath block 44 .
  • Receive and transmit requests arbitration and FIFO bus control are handled by LAN block.
  • Buffer allocator 78 in datapath block 44 manages allocation of receive and transmit buffers, and packet engine block 82 handles movement of data between packet memory 16 and receive and transmit buffers.
  • FIG. 6B shows datapath block 44 interface, including packet memory controller 82 coupled to data buffers 84 and packet memory engine (PME) 90 .
  • Data buffers 84 couple to LAN block 86 , buffer scheduler 94 , slice counters 88 .
  • Buffer attributes 92 couple to PME 90 and LAN block 86 , which couple to buffer scheduler 94 .
  • Data transfers between packet memory bus and MAC FIFO bus are staged through receive and transmit buffers 84 in datapath block 44 .
  • Block logic tracks state of buffers 84 .
  • Datapath block 44 interacts with LAN Block 86 , packet engine block 82 and packet memory controller 82 .
  • Two buffers are provided for transmission, and two buffers are provided for reception. Such buffers are associated with respective buffer status. Transmit buffers hold data from PM 16 to the MAC FIFO (LAN) bus. Receive buffers hold data from the MAC FIFO bus to PM 16. Each buffer has a dedicated channel to the PM SDRAM Controller. The PM SDRAM Controller arbitrates each request to transfer on a first-come/first-serve basis. On the LAN side, the appropriate buffer is selected for read or write.
  • Frame transfer across LAN bus occurs on slice basis.
  • Slice is 64 bytes.
  • burst data transfer size is slice size, except for last slice in frame.
  • Last slice size is decided by frame size. Ports are serviced, in time-division multiplex mode.
  • Receive slice buffer is used to capture LAN data from MAC FIFO.
  • Slice is 64 bytes.
  • Switch circuit 20 has two 64-byte buffers.
  • Incoming 64-bit data words are strobed into the selected slice buffer, word by word, on clock edges. Write order is from top to bottom.
  • Receive status is maintained for respective receive slice. For example, slice status provides:
  • Receive slice size (represented by a 6-bit number). Maximum is 64 bytes.
  • The MAC provides, in each data phase, the valid bytes through byte-enable bits (e.g., LBE#<7-0>). Hence, LBEI#<7-0> are registered and analyzed at the end of the data phase to provide the cumulative slice size.
  • The MAC indicates, in each read data phase, whether end-of-frame (EOF) occurred.
  • The EOFI# signal is registered and stored for EOF status. It is also used to close the current transfer.
  • The MAC also indicates, on each read data phase, whether start-of-frame (SOF) occurred. The SOFI# signal is registered and stored for SOF status.
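The byte-enable and frame-delimiter handling just described can be illustrated as follows. This C sketch assumes active-low byte enables LBE#<7-0> and active-low SOF/EOF indications, as in the description above; the status structure and signal handling are illustrative, not the exact register layout of the switch.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical per-slice receive status, built up one data phase at a time. */
typedef struct {
    uint8_t slice_size;   /* cumulative valid bytes, max 64 */
    bool    sof;          /* start-of-frame seen in this slice */
    bool    eof;          /* end-of-frame seen in this slice */
} rx_slice_status_t;

/*
 * Accumulate status for one 64-bit data phase.
 * lbe_n:  active-low byte enables LBE#<7-0> (a clear bit means the byte is valid)
 * sofi_n: active-low start-of-frame indication for this phase
 * eofi_n: active-low end-of-frame indication for this phase
 */
static void rx_phase_update(rx_slice_status_t *st,
                            uint8_t lbe_n, bool sofi_n, bool eofi_n)
{
    /* Count valid bytes in this 64-bit word. */
    uint8_t valid = (uint8_t)~lbe_n;
    while (valid) {
        st->slice_size++;
        valid &= (uint8_t)(valid - 1);   /* clear lowest set bit */
    }

    if (!sofi_n)
        st->sof = true;
    if (!eofi_n)
        st->eof = true;   /* EOF also closes the current transfer */
}
```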
  • Transmit slice buffer is used to capture (e.g., PMDO) bus data and supply to LAN bus.
  • Slice is 64-bytes.
  • Switch circuit has two 64-byte slice buffers.
  • 64-bit data words are read from selected slice buffer.
  • One clock pre-read is implemented to provide minimum delay time on the LAN data (LD) bus. Read order is from top to bottom.
  • Status is maintained for respective transmit slice.
  • Slice status is loaded by PM engine 90 when moving slice from PM.
  • Status information includes:
  • Slice size (represented by 6-bit number.) Maximum is 64 bytes.
  • PM engine registers slice size.
  • PM engine 90 registers the EOF status signal while transferring the slice from the PM bus. If the status is set, the LAN FIFO controller asserts the EOF# signal at the appropriate data phase.
  • Buffer scheduler 94 allocates transmit and receive data buffers to requesting agents, keeps track of busy/free status of each buffer, and allocates free buffer to requesting agent. Buffer scheduler 94 optimizes for (a) equal distribution of bandwidth between receivers and transmitters, (b) avoiding deadlock situation of transmit buffer, and (c) achieving highest concurrence of LAN bus and PM bus.
  • The datapath controller includes buffer attributes 92 for receive and transmit buffers 84, and tracks byte count on a per-slice basis.
  • Buffer attributes 92 such as End-of-Packet (EOF), start-of-packet (SOF), Byte Enables (BEB), and Slice Count are tracked from time data arrives into receive or transmit buffer until data leaves buffer.
  • Buffer attribute 92 information is used by packet memory engine 90 to track progress of packet flowing through switch circuit 20 per slice basis.
  • Datapath controller interacts with buffer scheduler 94 at end of slice transfer to release buffer. Synchronization between PM SDRAM controller and LAN bus interface 40 is thereby accomplished.
  • Packet memory resides on dedicated SDRAM bus.
  • Switch circuit 20 integrates SDRAM controller to access packet memory 16 .
  • PM SDRAM controller functionality includes: 32-bit interface operating at 99-Mhz to 8 MB of external SDRAM; support for up to 7 internal requesting agents; arbitrates requests and generates request to SDRAM control block; pipelines requests for maximum efficiency and throughput; bursts of 4 (one bank), 8 or 16 (both banks) accesses on SDRAM; and maximum performance at 16 bursts and minimum performance at single read or write.
  • Route processing is provided by MIPS R4000 family RISC processor 12 , which interfaces with switch circuit through address/data multiplexed bus.
  • The RISC processor interface may use an external system controller, for example, for communicating with switch circuit 20 through the processor slave port.
  • RISC processor serves switch or route processor 12 .
  • Several register resources in switch circuit 20 are used by RISC processor 12 to control configuration and operation of switch circuit 20 .
  • RISC processor 12 may access resources outside of switch circuit 20, such access being controlled by switch circuit 20: packet memory 16, route cache memory, and CAM for L2 forwarding.
  • Switch circuit 20 communicates status of operation and draws attention of processor 12 through status and process attention registers. When configured, switch circuit 20 performs DMA of data from packet memory to processor local memory, and forwards packets to processor queue maintained by switch circuit 20 .
  • route processor (RP) 12 is NEC Vr4300 RISC microprocessor from MIPS family with internal operating frequency of 133 Mhz and system bus frequency of 66 Mhz.
  • Processor 12 has 32-bit address/data multiplexed bus, 5-bit command bus for processor requests and data identification, six handshake signals for communication with external agents, and five interrupts. Bus width can be selected as 32-bit operation.
  • Processor 12 supports 1, 2, 3 and 4-byte single accesses and 2, 4 and 8 word burst accesses. Processor 12 uses little endian when accessing switch resources.
  • RP 12 is interfaced to switch circuit 20 .
  • RP 12 communicates with NMP 10 through interprocessor communication (IPC) bus 24 , and accesses switch local resources, such as packet memory 16 , L 3 CAM (Route Cache) 28 , control memory 136 and L 2 CAM 142 through switch circuit 20 and local resources, such as local memory, ROM etc., through system controller.
  • Two interrupts are used by switch circuit 20 to issue interrupt requests to processor 12 .
  • Two slaves on RP processor 12 system bus are switch and system controller.
  • The switch is the final agent to provide the ready signal for processor requests that the switch or system controller is ready to accept. During DMA transfer, the switch acts as master.
  • Write access is implemented as ‘dump and run’ with two pipelined buffers to improve system performance. This allows two back-to-back write cycles.
  • One read request is processed at a time.
  • Processor 12 accesses internal registers resources in 32-bit mode.
  • Write buffer and read buffer are provided to packet memory 16 to match frequency difference of 99-Mhz and 66-Mhz.
  • Memory interface to switch is 32-bit.
  • Maximum burst size to packet memory 16 is four 32-bit words (i.e., 16 bytes).
  • Read buffers are provided to L 3 CAM and control memory 136 because of 16-bit interface to switch. Little endian is used when data is packed and unpacked during write or read requests to 16-bit interfaced memories.
  • Maximum burst size to L 3 CAM 28 is 16 bytes, and to CM is 8 bytes.
  • Write or read request to memories is arbitrated through agents inside switch, such as forwarding engine, L 3 engine etc., so latency depends on various factors.
  • Normally, processor 12 owns mastership or control of the bus.
  • Processor 12 enters an uncompelled slave state after the address phase, giving bus control to an external agent to drive data.
  • FIG. 7A illustrates DMA transfer between RP processor 12 local memory 100 and packet memory 16 .
  • DMA transfer between packet memory 16 and NMP processor local memory is also provided in architecture.
  • NMP processor system controller responds to DMA master requests between packet memory and NMP processor local memory.
  • DMA is implemented using two design blocks called DMA engine 104 and DMA master 102 .
  • DMA engine 104 is interfaced to packet memory 16, and DMA master 102 to the processor system bus.
  • DMA is initiated by setting bits in DMA command register.
  • DMA transfer between local memory 100 and packet memory 16 occurs substantially as follows:
  • DMA engine 104 notifies DMA master 102 to initiate DMA transfer when packet is pending by giving request.
  • DMA master 102 arbitrates for processor bus with RP processor 12 as another master by giving request (e.g., EREQ) to processor 12 .
  • switch circuit 20 acts as master to system controller 98 .
  • RP processor 12 gives bus control to the DMA master when ready.
  • bus is granted by processor, DMA transfer begins. Mastership of processor bus can be re-acquired by RP processor 12 between each slice transfer, which is maximum of eight 32-bit words (i.e, 32 bytes).
  • DMA engine 104 reasserts request after each slice transfer, until block of packet data is transferred. At end of DMA, bus control is given to processor.
  • When the bus is in the uncompelled slave state, DMA master 102 does not access the processor system bus, to simplify the design. While a DMA transfer is taking place on the bus, system controller 98 does not drive the bus, assuming the bus is in the slave state.
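The slice-by-slice DMA handshake between the DMA engine, the DMA master, and the RP processor bus can be summarized in software terms. In the C sketch below, the EREQ/grant steps and the 32-byte slice size follow the description above, while the function names are placeholders standing in for hardware actions; this is not a driver API.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define SLICE_BYTES 32   /* maximum of eight 32-bit words per slice */

/* Placeholder hardware actions (illustrative only). */
extern bool dma_packet_pending(void);    /* DMA engine: packet waiting in PM */
extern void assert_ereq(void);           /* DMA master: request the processor bus */
extern void wait_for_bus_grant(void);    /* RP processor grants the bus */
extern void release_bus(void);           /* return bus control to the processor */
extern void copy_slice(uint8_t *dst, const uint8_t *src, size_t n);

/*
 * Transfer one block of packet data from packet memory to local memory,
 * re-arbitrating for the processor bus between slices so the RP processor
 * can reclaim the bus at slice boundaries.
 */
static void dma_pm_to_local(const uint8_t *pm, uint8_t *lm, size_t len)
{
    if (!dma_packet_pending())
        return;

    size_t done = 0;
    while (done < len) {
        size_t chunk = (len - done < SLICE_BYTES) ? (len - done) : SLICE_BYTES;

        assert_ereq();                 /* request the bus for this slice */
        wait_for_bus_grant();
        copy_slice(lm + done, pm + done, chunk);
        release_bus();                 /* processor may re-acquire between slices */

        done += chunk;
    }
}
```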
  • FIG. 7B illustrates RP processor 12 access to packet memory (PM) 16 through L 2 /L 3 switch circuit 20 .
  • Switch interface to packet memory 16 is 32-bit, and maximum burst size is 16 bytes.
  • Synchronous DRAM is chosen for packet memory that can be operated at 66-Mhz, 99-Mhz and 125-Mhz frequencies.
  • During a processor write request, the processor dumps write-data into front-end pipeline buffers 106.
  • Slave state machine 108 provides such data into packet memory write buffer 110 .
  • Processor request is arbitrated with LAN requests and L 3 engine requests in PM SDRAM arbiter to access PM 16 .
  • PM SDRAM controller 112 generates control signals for SDRAM.
  • During a processor read request, read-data is provided in PM read buffer 114 from the packet memory bus. Synchronizer 116 converts the 99-Mhz signal into a 66-Mhz pulse signal that initiates the slave state machine to empty the read buffer. Read data is muxed with data from other blocks and driven to the processor system bus. Packet memory to local memory (PM-to-LM) DMA transfer data is not written into the read buffer, but is passed to the processor system bus.
  • FIG. 8A illustrates RP processor 12 access to L 3 CAM (route cache) memory 28 .
  • RP processor 12 accesses L 3 CAM 28 through switch circuit 20 to initialize entries and maintain data structures.
  • FIG. 8B shows control memory 136 access through switch circuit 20 .
  • RP processor 12 couples to switch circuit 20 through 66-Mhz, 32-bit processor system bus, wherein pipeline buffers 106 receive processor write data and couple to slave state machine 108 .
  • L 3 CAM write buffer couples to slave machine 108 and L 3 CAM SDRAM controller 120 , which receive requests from other agents and couples to L 3 CAM memory 28 through 66-Mhz, 16-bit bus.
  • L 3 CAM read buffer 122 provides read data through 32-bit processor bus and couples to slave state machine 108 and register 134 over 16-bit bus.
  • Register 134 receives 66-Mhz clock signal and couples to L 3 CAM memory 28 through 66-Mhz, 16-bit bus.
  • CM write buffer 128 couples to slave machine 108 and CM SSRAM controller 130 , which receive requests from other agents and couples to control memory 136 through 66-Mhz, 16-bit bus.
  • CM read buffer 132 provides read data through 32-bit processor bus and couples to slave state machine 108 and register 124 over 16-bit bus.
  • Register 124 receives 66-Mhz clock signal and couples to control memory 136 through 66-Mhz, 16-bit bus.
  • Synchronous SDRAM is chosen for L 3 CAM 28
  • Synchronous SRAM is chosen for control memory 136 .
  • Switch interface to both memories is 16-bit, and both memories operate at 66-Mhz.
  • Processor 12 access to memories is similar in both cases, maximum burst size to L 3 CAM memory is 16 bytes, and maximum burst size for control memory 136 is 8 bytes. Data is packed and unpacked for each processor access.
  • Each memory 28 , 136 has write buffer 118 into which processor write-data is provided from pipeline buffers 106 by slave state machine 108 . Since memory interface is 16-bit, processor write data is divided into two 16-bit half words.
  • Processor 12 request to L 3 CAM memory 28 is arbitrated with L 3 engine requests.
  • Processor 12 request to control memory is arbitrated with forwarding engine, FQMB, L 3 engine, XQMB, BAM and DMA.
  • L 3 CAM or CM read data from memory bus is provided in read buffer.
  • The slave state machine 108 starts emptying read buffer 122 appropriately, packs two half-words into a 32-bit word, and puts it on the processor system bus.
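The packing and unpacking between the 32-bit processor bus and the 16-bit memory interfaces works roughly as below. This is a minimal sketch: the little-endian half-word ordering is assumed from the description, and the two-element buffer layout is illustrative.

```c
#include <stdint.h>

/* Split a 32-bit processor word into two 16-bit half-words, little endian:
 * the low half-word goes to the 16-bit memory interface first. */
static void split_word_le(uint32_t word, uint16_t half[2])
{
    half[0] = (uint16_t)(word & 0xFFFF);   /* low half first */
    half[1] = (uint16_t)(word >> 16);
}

/* Pack two 16-bit half-words read from the 16-bit bus back into the
 * 32-bit word placed on the processor system bus. */
static uint32_t pack_word_le(const uint16_t half[2])
{
    return (uint32_t)half[0] | ((uint32_t)half[1] << 16);
}
```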
  • RP processor 12 accesses L 2 CAM memory 142 through switch circuit 20 .
  • Switch circuit 20 interface to L 2 CAM memory is 16-bit.
  • Processor 12 executes commands write/read and data write/read to L 2 CAM 142 using CAM access registers provided inside switch circuit 20 .
  • Processor 12 accesses L 2 CAM 142 through register-based request-grant handshake by loading L 2 CAM Access Control & Status Register to execute processing cycles.
  • RP processor 12 arbitrates with CAM arbiter 138 in switch circuit 20 for CAM bus.
  • For processor requests, slave state machine 108 generates control signals for CAM 142, and CAM arbiter engine 138 processes switch requests.
  • processor 12 provides write-data in pipeline buffers 106 .
  • slave state machine 108 puts data from pipeline buffer 106 on CAM bus.
  • read-data from CAM bus is muxed 140 with data from other blocks and passed to processor system bus. Write/read buffers need not be provided in present case.
  • FIG. 9B shows LAN arbiters interaction with datapath.
  • Register files 144 for receivers and transmitters, including corresponding block address registers and byte count registers, couple to state machines 148, which couple to switch datapath 44 and receive and transmit arbiter 150.
  • Packet switch engine 82 performs control functions for transfer request in and out of receive and transmit buffer to packet memory 16 . Packet engine 82 handles LAN data movement, command process, and PM address calculation.
  • Packet memory engine 82 sets up for moving a slice between packet memory 16 and the allocated data buffer. This is triggered by the scheduler when a slice is scheduled to move in or out of a data buffer. The PM engine has access rights to the block address registers and byte count registers so it can determine the actual address in PM 16 and update the packet size.
  • packet memory engine 82 executes systematic hardware processes when Forwarding Block and Transmit Queue Management Block (FB/TQMB) generates instructions such as: link, receive enable, transmit enable, receive reject, etc. Hence, end of packet reception/transmission is noticed for next packet initialization. In notifying such events, priority encoding is provided for first-in/first-service integrity.
  • packet memory engine 82 regards Ethernet ports as 32 concurrent full-duplex DMA channels. Relevant PM pointers for each channel are maintained. For every slice transfer, PM bus address is calculated.
  • Buffer attributes 92 are provided in an attribute block address array, which is a 3-port architecture having a 64 × 12-bit 3-port memory array.
  • Port-1 is a write port; port-2 is a read port; and port-3 is a read port.
  • Packet memory engine 82 can write/read memory locations using various ports.
  • Forwarding engine (FE) can read locations asynchronously.
  • Port-3 is assigned for FE.
  • First 32 locations are used for “Receive block address” of 32 receive ports.
  • Next 32 locations are “Transmit block address” for 32 transmit ports.
  • PM Engine 82 initializes block address for receive/transmit ports on command of Auto Forwarding Block.
  • PM engine 82 reads the block address relevant to the receive/transmit port under service.
  • PM engine 82 uses block address to identify packet in PM 16 .
  • CAM interface block analyzes incoming packet at layer 2 , i.e., at MAC layer. Analysis result is forwarded to Auto Forwarding Block state machine. CAM processor is called for attention when ether header block is loaded in ether header memory. On such trigger condition, after acquiring CAM bus interface, CAM Processor starts defined fast processing action. Block contains layer 2 header memory, analyzed and to be analyzed port FIFOs, and result memory. CAM block interfaces to internal memories organized as memories and FIFOs as well as external CAM to accomplish L 2 lookup.
  • The beginning of the header is identified, and the required header information is loaded into ether header memory. Sixteen-byte header blocks are reserved for each port in header memory. A loaded indication is updated as a 5-bit entry in the 32-deep ether header to-be-analyzed FIFO. Such FIFO provides a first-in/first-service feature.
  • Ether header memory is a 2-port memory with a 64 × 64-bit architecture. Port-1 is the write port, and port-2 is the read port. Such memory is located on the LAN side of the receive buffer. As the first slice of a new receive packet is loaded into the receive buffer, the header slice (i.e., 16 bytes) is written to ether header memory in 64-bit words. The ether port number is used as a reference address to select the header block number. A maximum of 32 header blocks can be stored in such memory. Port-2 is used by the CAM processing engine. The CAM engine reads a 16-bit quantity at a time through a front-end 64:16 multiplexer. L3 header information, up to 8 bytes per port, is stored in a different memory. Such information is used by the L3 lookup engine during routing operation.
  • Ether-to-be-analyzed FIFO memory is 32 × 5-bit two-port memory, holding maximum of 32 port numbers to be analyzed.
  • Port-1 is write port
  • port-2 is read port.
  • FIFO is written with port number when first slice of data is received on LAN bus and header loaded in ether header memory.
  • CAM Processor reads port number through port-2 for indexing header memory.
  • FIFO structure ensures that ports to be analyzed are presented to CAM engine in arrived order.
  • Ether analyzed FIFO memory is 32 × 6-bit two-port memory, holding maximum 32 analyzed port numbers.
  • Port-1 is write port
  • port-2 is read port.
  • CAM Processor writes analyzed port number through port-1 and Forwarding Engine (FE) reads through port-2.
  • FIFO structure ensures that analyzed ports are presented to forwarding engine in arrived order.
  • Ether result memory is 32 × 16-bit two-port memory, holding results for 32 ether ports.
  • Port-1 is write port
  • port-2 is read port.
  • CAM Processor writes the L2 forwarding result through port-1.
  • Forwarding Block (FB) reads the port number from the Analyzed FIFO to make the forwarding decision.
  • FB uses port number as reference address to read CAM analysis result.
  • External CAM memory is 1024 × 64-bit capacity on standard configuration. Size can be expanded to 2048 × 64-bit by adding CAM device in vertical expansion.
  • CAM memory is connected on dedicated CAM bus. Such bus is shared between CPU and switch circuit. Normally such bus is default-owned by switch circuit. CPU can use bus by register mode bus handshake.
  • CAM memory contains 1024 locations of 64 bits wide. Locations can be assigned RAM property or CAM property. A location assigned as RAM is not accounted for in the lookup process; CAM locations participate in the lookup process. Repeatedly used parameters are stored in RAM so that real-time data movement between a RAM location and the Comparand/mask registers, etc., can happen with minimum overhead. Every location has an associated status field which describes the entry, such as: empty entry, valid entry, skip entry on lookup, and RAM entry.
  • Layer 2 header analysis is performed by the CAM processor. Ethernet headers are loaded and processed in a dedicated Ethernet header memory having a 128 × 32-bit dual-port memory. Assuming the case where a packet is received on port(x), switch circuit 20 is triggered on such packet by a request from MAC port number(x), which is effectively a hardware packet-arrival notification.
  • The header is extracted from the MAC received data stream. New receive packet data is identified with the arrival of SOF, and the first 16 bytes are treated as the layer-2 header. If the header store is concurrent with the store access to the receive buffer, then the header is stored in port-specific block number(x) in header memory. The writing process need not account for port contention. A block written on port-1 may not be accessed on port-2. The header is stored as header block(x). At the end of storage, the port number is written into the ether-to-be-analyzed FIFO, which is a 32 × 5-bit register. The FIFO write pointer is incremented after each status write.
  • CAM processor starts when valid entry is loaded in Ether-to-be-analyzed FIFO. CAM Processor maintains read pointer to read valid entry. Valid entry is notified if there is difference between write pointer and read pointer. Entry read provides port number of header. CAM Processor uses port number to reach header block(x).
  • switch system has 32 ports and 32 entries. New packet on port can not be received unless old packet is processed, according to system-level handshake. Hence, at any time, no more than 32 headers/header status may be stored, effectively reducing complexity of FIFO design. PM engine can blindly write header/status without looking for FIFO-full condition. CAM Processor can start as long as pointers are not equal.
  • CAM processor handles header processing. CAM processor is notified of Ethernet header valid when write pointer and read pointer differ. When entry is valid on Ethernet-to-be-analyzed FIFO, CAM processor reads entry and increments read pointer. Using such value, CAM processor can reach specified header block. Ether header memory is divided into 32 blocks. Port number directly provides starting address of header block. Entries in block are consecutive 16 bytes.
  • CAM processor processes header block, and writes result on port specific location on Ether result memory. CAM process completion is notified to Auto Forwarding Block through Ethernet result FIFO, which is 32 deep register construction. Each entry is 6-bit wide. Entry is result of CAM memory lookup. If set, destination MAC address indicates CAM hit. Routing tag in header block is valid. If clear, CAM lookup fails; routing tag does not contain valid information.
  • To write to the result FIFO, the CAM processor has a write pointer, which is a 5-bit counter. The CAM processor writes entries, whereas the AFB reads entries. When the CAM completes a process, it writes the result entry and increments the write pointer. Finally, the CAM processor increments the Ether header status FIFO read pointer to point to the next entry.
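The to-be-analyzed and analyzed FIFOs rely on simple write/read pointer comparison: an entry is valid whenever the pointers differ, and because at most 32 headers can be outstanding, no full check is needed. The C model below is a minimal sketch under those assumptions; the entry width and depth follow the description, but the structure itself is illustrative.

```c
#include <stdint.h>
#include <stdbool.h>

#define FIFO_DEPTH 32   /* one entry per port; never more than 32 outstanding */

/* Hypothetical model of the ether to-be-analyzed FIFO (5-bit port numbers). */
typedef struct {
    uint8_t entries[FIFO_DEPTH];
    uint8_t wr_ptr;   /* incremented by the PM engine after each status write */
    uint8_t rd_ptr;   /* incremented by the CAM processor after each read */
} port_fifo_t;

/* A valid entry exists whenever write and read pointers differ. */
static bool fifo_has_valid(const port_fifo_t *f)
{
    return f->wr_ptr != f->rd_ptr;
}

/* PM engine writes blindly: the system-level handshake guarantees no overflow. */
static void fifo_push(port_fifo_t *f, uint8_t port)
{
    f->entries[f->wr_ptr % FIFO_DEPTH] = port;
    f->wr_ptr = (uint8_t)(f->wr_ptr + 1);
}

/* CAM processor pops the oldest port number (first-in/first-service). */
static uint8_t fifo_pop(port_fifo_t *f)
{
    uint8_t port = f->entries[f->rd_ptr % FIFO_DEPTH];
    f->rd_ptr = (uint8_t)(f->rd_ptr + 1);
    return port;
}
```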
  • CAM processor header processing includes learning process of: source lookup, source port read, and source learning.
  • CAM processor learns MAC addresses arriving from Ethernet ports.
  • As an associated process of CAM lookup, the CAM processor determines whether the source address was learned previously, i.e., by reading the source address from ether header memory and writing it to the CAM for lookup. If a match occurs, the processor presumes the source port was learned; it reads the existing port information from the associated data to compare whether the port is the same as the receiving port. If the MAC header matches, whether or not the ports match, the processor makes the entry live and at the same time relearns the receiving port. If the receiving port number does not match the learned port, the Source Address (SA) Learned flag is set. On a miss, the processor learns the entry into the next free address if the CAM is not full, and if learned, the SA Learned flag is set. While updating such a new entry, the processor follows the correct data structure for the RAM associated information.
  • An attribute is set with a register (e.g., ETHR_LRN_INHIBIT) for each port to inhibit learning on specified ports. If set, during the source lookup process after the source port read, the entry is made live on a hit; on a miss, the MAC address is not learned. The source port read phase can be skipped if source port filtering is not required.
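The source-learning flow (source lookup, source port read, learn or relearn, with optional learn inhibit) can be summarized in software form. The C sketch below models the CAM as a small linear table; the field names, the scan-based "lookup", and the flag handling are illustrative assumptions, not the CAM device's actual instruction set.

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define CAM_ENTRIES 1024

/* Simplified model of one CAM entry with its associated (ARAM) data. */
typedef struct {
    uint8_t mac[6];
    uint8_t port;
    bool    valid;
    bool    live;   /* refreshed when the source is seen again */
} cam_entry_t;

static cam_entry_t cam[CAM_ENTRIES];

/* Source learning for a frame received on rx_port.
 * Returns true when the SA Learned flag should be set. */
static bool learn_source(const uint8_t sa[6], uint8_t rx_port, bool learn_inhibit)
{
    /* Source lookup: search for a matching MAC address. */
    for (int i = 0; i < CAM_ENTRIES; i++) {
        if (cam[i].valid && memcmp(cam[i].mac, sa, 6) == 0) {
            bool moved = (cam[i].port != rx_port);
            cam[i].live = true;            /* make entry live */
            if (!learn_inhibit)
                cam[i].port = rx_port;     /* relearn the receiving port */
            return moved;                  /* SA Learned if the port changed */
        }
    }

    /* Miss: learn into the next free address if the CAM is not full. */
    if (!learn_inhibit) {
        for (int i = 0; i < CAM_ENTRIES; i++) {
            if (!cam[i].valid) {
                memcpy(cam[i].mac, sa, 6);
                cam[i].port  = rx_port;
                cam[i].valid = true;
                cam[i].live  = true;
                return true;               /* learned: SA Learned flag set */
            }
        }
    }
    return false;
}
```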
  • Destination lookup process includes steps: destination lookup and destination port read.
  • The CAM processor reads the 6-byte destination MAC address from header memory and writes it to the CAM for comparison lookup. On a miss, the destination is assumed unknown; on a hit, the destination is available through the associated memory (e.g., ARAM) field, which provides the destination port number and port/MAC-address-specific status and control flags. On a hit, the CAM processor reads the ARAM field and writes it into result memory, setting the hit flag. On a miss, the CAM processor has nothing to read and writes a miss flag to result memory. The rest of the result data is not valid in the miss case. The forwarding blocks read this field for analysis and the forwarding decision. At the end of the process, the CAM analysis done flag is set for the packet on the receiving port.
  • CAM processor analyzes results of source lookup and destination lookup processes to determine how to process incoming packet.
  • Each port has two bits allocated to handle spanning-tree protocol requirements. One bit is allocated for ‘Port Blocked State’ flag and other for ‘Learn Inhibit’ flag, which is used for learning of MAC addresses on receiving port.
  • The 'Port Blocked State' flag is used for the forwarding decision. The filtering bits in the results of both source lookup and destination lookup, the port-specific STP control bits relating to forwarding, and the source port and destination port read as a result of the lookups are all considered.
  • CAM processor sets CAM analysis completion status for receiving port. If destination lookup resulted in hit and destination port is one of physical LAN ports and cut-thru switching on port is enabled or CPU port, port number is written to Ether analyzed FIFO. CPU port is allowed to enable Layer 3 analysis parallel to packet reception. Result processing is done by Auto Forwarding Block (AFB). AFB is notified of CAM process completion through Ethernet Analyzed FIFO. AFB can read highest priority FIFO entry using hardware hidden read pointer. If read pointer and write pointer are different, one or more valid entries are available in Ethernet analyzed FIFO. AFB reads valid entry and gets port number. Reading entry increments read pointer, if present entry is valid.
  • AFB can access Ether result memory. Refer to Auto Forwarding Block section for details on AFB functionality. If CAM analysis resulted in miss or hit but packet can not be switched, AFB does not need to be notified until packet reception is complete. Hence, CAM processor merely sets CAM analysis completion flag for receiving port. AFB processes packet when both receive completion and CAM analysis completion set for receiving packet.
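The layer-2 forwarding and filtering decision described above (CAM hit/miss, the STP 'Port Blocked State' bits, and the filtering flags) can be expressed compactly in C. The flag names, the flood/drop result codes, and the structure below are illustrative assumptions, not the AFB's actual register layout.

```c
#include <stdint.h>
#include <stdbool.h>

#define PORT_FLOOD 0xFF   /* hypothetical code: flood to the broadcast domain */
#define PORT_DROP  0xFE   /* hypothetical code: filter the packet */

typedef struct {
    bool    hit;        /* destination lookup produced a CAM hit */
    uint8_t dst_port;   /* destination port from the ARAM field (valid on hit) */
    bool    filter;     /* filtering bit from source/destination lookup results */
} l2_result_t;

/* Per-port spanning-tree state: blocked ports neither forward nor receive. */
static bool port_blocked[32];

/* Decide where a packet received on rx_port should go at layer 2. */
static uint8_t l2_forward(const l2_result_t *res, uint8_t rx_port)
{
    if (port_blocked[rx_port] || res->filter)
        return PORT_DROP;              /* STP blocked or explicitly filtered */

    if (!res->hit)
        return PORT_FLOOD;             /* unknown destination: flood */

    if (res->dst_port == rx_port)
        return PORT_DROP;              /* destination on receiving port: filter */

    if (port_blocked[res->dst_port])
        return PORT_DROP;              /* do not forward to a blocked port */

    return res->dst_port;              /* unicast forward to the learned port */
}
```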
  • Time stamp register provides variable granularity for aging.
  • The processor uses the instruction set provided by the CAM device. Entries to be aged are processed in one instruction, though setup is required before executing the instruction. In addition to the status bits provided by the CAM for every entry, 3 bits in the RAM field are dedicated to aging information. The status bits provided by the CAM are used to identify whether an entry is 'Valid', 'Empty', 'Skip' or 'RAM only'. One of the bits allocated in the ARAM field is used to mark an entry 'Permanent'. Entries marked 'Valid' and not 'Permanent' are considered for aging. Two additional bits in the ARAM field give the CPU flexibility to implement the aging process.
  • When the processor visits the CAM to age out entries, it searches the CAM for entries with the oldest time stamp. In the search process, the processor configures the mask registers in the CAM in such a way that the age bits enter the comparison, and entries that are not 'Valid' or are marked 'Permanent' do not enter the comparison. In the next instruction, the processor can clear the 'Valid' bits on matching locations to the 'Empty' state. By doing so, the oldest entries are marked empty. From that point, aged entries do not enter the compare operation until made 'Valid' again during the normal learning process.
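The age-out pass described above can be modeled in software as a scan over the table: only 'Valid', non-'Permanent' entries enter the time-stamp comparison, and matching entries are cleared back to the empty state. The entry layout and 3-bit stamp below are illustrative; the real device performs this with masked CAM instructions rather than a loop.

```c
#include <stdint.h>
#include <stdbool.h>

#define CAM_ENTRIES 1024

typedef struct {
    bool    valid;       /* entry participates in lookup */
    bool    permanent;   /* marked permanent in the ARAM field: never aged */
    uint8_t age_stamp;   /* 3-bit time stamp written at learn/refresh time */
} cam_age_t;

static cam_age_t age_tab[CAM_ENTRIES];

/* Age out all entries carrying the oldest time stamp. */
static void age_out_pass(uint8_t oldest_stamp)
{
    for (int i = 0; i < CAM_ENTRIES; i++) {
        if (!age_tab[i].valid || age_tab[i].permanent)
            continue;                     /* masked out of the comparison */
        if (age_tab[i].age_stamp == oldest_stamp)
            age_tab[i].valid = false;     /* mark empty until relearned */
    }
}
```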
  • Auto forwarding block is hardware Ethernet packet forwarding engine and queue processor.
  • AFB analyzes incoming packet and may forward packet both at layer 2 and layer 3 . After forwarding analysis is done, AFB posts and maintains port queues.
  • AFB may accept packets from processor interface and post packet in requested queues.
  • AFB provides processing power on packet-by-packet and manages required information for integrity of packet routing strategy.
  • AFB feeds initial setup information for each ether packet for each port to run data transaction.
  • AFB functionality enables switch circuit 20 to perform forwarding and filtering without real-time assistance from processor 12 .
  • Processing element is out of the datapath, and forwarding and filtering is done at line rate for supported ports.
  • AFB functionality includes: free queue management, block attributes management, receive port management, forwarding and filtering, transmit queue management, quality-of-service (QoS) support, and control memory interface.
  • Forwarding function features port linking, wherein receive port is linked to transmit port before packet is fully received, thereby improving latency between received and transmitted packets.
  • Port linking is accomplished in the forwarding stage if conditions are suitable. For example, the packet can cut-thru with a unique destination, i.e., no more than one port is the target destination for the packet. The cut-thru enable bit and certain conditions must be satisfied, such as: destination port, speed-matching logic, xmtPortEn, xmtPortNotBsy, xmtQNotVld and mirrNotVld. The data arrival speed should not exceed the transmitting port speed, and the transmitter should be ready to accept the command.
  • The transmitter may be busy transmitting data, or there may be packets waiting in the transmit queue. Also, there should be a minimum of data present in the buffer before the process can start, or arbitration latency may result in a transmit FIFO under-run condition. In such a case, the transmitter is linked but does not start transmitting data until the required minimum data is received in packet memory.
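The cut-through eligibility test just described can be written as a single combinational check. The sketch below uses the flag names given above; the threshold constant and the structure holding the flags are assumptions for illustration only.

```c
#include <stdint.h>
#include <stdbool.h>

/* Minimum bytes that must already sit in packet memory before a linked
 * transmitter may start, to avoid a transmit FIFO under-run (assumed value). */
#define CUT_THRU_MIN_BYTES 64

typedef struct {
    bool     unique_dest;     /* exactly one target port for the packet */
    bool     cut_thru_en;     /* cut-thru enable bit for the destination port */
    bool     xmtPortEn;       /* transmit port enabled */
    bool     xmtPortNotBsy;   /* transmitter not busy with another packet */
    bool     xmtQNotVld;      /* no packets already queued for the transmitter */
    bool     mirrNotVld;      /* port is not being mirrored */
    uint32_t rx_speed;        /* receive port speed (Mbps) */
    uint32_t tx_speed;        /* transmit port speed (Mbps) */
    uint32_t bytes_in_pm;     /* bytes of the packet already in packet memory */
} link_state_t;

/* True when the receive port may be linked to the transmit port. */
static bool can_link(const link_state_t *s)
{
    return s->unique_dest && s->cut_thru_en &&
           s->xmtPortEn && s->xmtPortNotBsy &&
           s->xmtQNotVld && s->mirrNotVld &&
           s->rx_speed <= s->tx_speed;   /* speed matching */
}

/* Even when linked, transmission waits for the minimum data to arrive. */
static bool can_start_transmit(const link_state_t *s)
{
    return can_link(s) && s->bytes_in_pm >= CUT_THRU_MIN_BYTES;
}
```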
  • FIG. 10A shows Transmit Queue Management Block (XQMB) 154 , which is hardware block for managing transmit queue functions for switch circuit 20 .
  • XQMB 154 couples to forwarding engine (FE) 52 , DMA Engine 104 , block attribute memory (BAM) 152 , PME 90 , queue attribute memory (e.g., AttrRAM) 156 , port AttrRAM 158 , and control memory 136 through interface 160 .
  • XQMB 154 functionality includes: initializing and managing transmit queues for each port; maintaining QoS parameters (i.e., tokens) for each port; enqueuing and dequeuing (nQueue/dQueue) blocks to/from control memory transmit queues; forwarding blocks to the requesting transmitter; returning block numbers to BAM controller 152; forwarding multi/broadcast blocks in the 'background'; supporting 28 physical ports, 3 logical ports and a multi/broadcast port; and using a round-robin priority scheme to service requests.
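One way to picture the per-port transmit queue servicing with QoS tokens is as four priority queues per port drained under a token budget. The sketch below is an assumption about how the tokens mentioned above might be applied; the strict-priority-with-tokens policy, queue depths, and budget values are illustrative, not the XQMB's documented algorithm.

```c
#include <stdint.h>

#define NUM_PRIO 4   /* four priority queues per port */

typedef struct {
    uint16_t depth[NUM_PRIO];    /* blocks queued at each priority */
    uint16_t tokens[NUM_PRIO];   /* QoS tokens remaining for each priority */
} port_txq_t;

/*
 * Pick the next priority queue to service for one port.
 * The highest priority with both a queued block and an available token wins;
 * if every non-empty queue is out of tokens, refresh the token budget.
 * Returns the priority index, or -1 if the port has nothing to send.
 */
static int next_queue(port_txq_t *q, const uint16_t budget[NUM_PRIO])
{
    int backlog = 0;
    for (int pass = 0; pass < 2; pass++) {
        for (int p = 0; p < NUM_PRIO; p++) {
            if (q->depth[p] == 0)
                continue;
            backlog = 1;
            if (q->tokens[p] > 0) {
                q->tokens[p]--;
                q->depth[p]--;
                return p;
            }
        }
        if (!backlog)
            return -1;
        /* All backlogged queues exhausted their tokens: refresh the budget. */
        for (int p = 0; p < NUM_PRIO; p++)
            q->tokens[p] = budget[p];
    }
    return -1;
}
```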
  • FIG. 10B shows queue processor state machine 162 , which couples to transmit arbiter 80 , block address and byte count registers 164 in control memory, and transmit queue 136 .
  • FIG. 11 shows DMA engine 104 , which couples to CPU register 166 , CPU master interface 36 , XQMB 154 , packet memory interface 42 , and control memory interface 160 .
  • DMA engine handles data transfer between packet memory 16 and CPU local memory 100 so that CPU 12 may perform other tasks.
  • CPU 12 packet send is enabled by creating packet in local memory 100 , register set-up, and initiating packet transfer.
  • packet receive is enabled by notifying CPU 12 .
  • CPU 12 checks block attribute to determine whether to process packet. If CPU 12 transfers packet to local memory 100 , DMA engine 104 is notified to proceed. Otherwise, register is written to de-queue packet.
  • FIG. 12 flow chart shows CPU 12 to packet memory 16 operation. Initially, in software, CPU sets up register and initiates packet transfer 168 . Then, in hardware, processor determines 170 whether to initialize block attribute 176 , whether 172 to initialize DMA transfer 178 , and whether 174 to write command to XQMB 180 .
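  • A possible software-side view of FIG. 12 is sketched below. Register offsets and the reg_write() helper are hypothetical; they merely illustrate the set-up/initiate sequence, after which hardware initializes block attribute, runs DMA transfer, and writes command to XQMB.

```c
#include <stdint.h>
#include <stddef.h>

extern void reg_write(uint32_t reg, uint32_t value);  /* hypothetical register access */

/* Hypothetical register offsets, for illustration only. */
#define REG_DMA_SRC_ADDR   0x00u
#define REG_DMA_LEN        0x04u
#define REG_XQMB_PORTMAP   0x08u
#define REG_DMA_GO         0x0Cu

/* Software side of FIG. 12: packet is built in local memory 100, registers
 * are set up, and packet transfer is initiated; hardware completes the rest. */
void cpu_send_packet(const void *pkt_in_local_mem, size_t len, uint32_t port_map)
{
    reg_write(REG_DMA_SRC_ADDR, (uint32_t)(uintptr_t)pkt_in_local_mem);
    reg_write(REG_DMA_LEN,      (uint32_t)len);
    reg_write(REG_XQMB_PORTMAP, port_map);   /* target transmit queue(s) */
    reg_write(REG_DMA_GO,       1);          /* initiate packet transfer */
}
```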
  • FIG. 13 flow chart shows packet memory 16 to CPU 12 operation. Initially, in hardware, block attribute is read 182 , and CPU 12 is notified 184 . Then, in software, CPU checks whether DMA is needed 186 ; if not, register is set to de-queue 188 . Then, in hardware, DMA transfer occurs 190 , and CPU 12 is notified.
  • FIG. 14 shows switch circuit 20 with L 3 engine 70 coupled to FE 52 , interface 36 to CPU 12 , interface 42 to packet memory 16 , IP header RAM 198 , MAC address RAM 196 , interface 194 to L 3 CAM 126 , and interface 160 to control memory 136 .
  • L 3 check block captures destination IP address, Time To Live Field and Checksum field in L 3 Header Memory, for use by L 3 block 70 for L 3 lookup and processing.
  • L 3 check block processes rest of packet header. Received packet is checked for IP protocol field, and to detect packets for specialized handling.
  • IP header length is checked to determine whether packet needs specialized option processing. If header length is not equal to 5 32-bit words, option processing is applied to packet. Time To Live field is checked to see if TTL field is more than 1; if not, packet is marked with TTL error flag. IP packet length is checked for minimum length to contain full IP header. Header checksum is verified.
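  • The header checks above correspond to standard IP header validation, sketched below; the l3_info structure is illustrative and not the actual L 3 INFO memory layout.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative result flags; not the actual L 3 INFO memory layout. */
struct l3_info {
    bool needs_options;   /* IHL != 5 words: route to option processing */
    bool ttl_error;       /* TTL not greater than 1                     */
    bool length_error;    /* total length too short to hold full header */
    bool checksum_error;
};

/* Standard IP header checksum: 16-bit one's-complement sum over the header,
 * including the checksum field; a valid header verifies to zero. */
static uint16_t ip_header_checksum(const uint8_t *hdr, unsigned words32)
{
    uint32_t sum = 0;
    for (unsigned i = 0; i < words32 * 2; i++)          /* 16-bit words */
        sum += (uint16_t)((hdr[2 * i] << 8) | hdr[2 * i + 1]);
    while (sum >> 16)
        sum = (sum & 0xFFFF) + (sum >> 16);
    return (uint16_t)~sum;
}

void l3_check(const uint8_t *ip_hdr, uint16_t ip_total_len, struct l3_info *out)
{
    uint8_t ihl = ip_hdr[0] & 0x0F;   /* header length in 32-bit words */
    uint8_t ttl = ip_hdr[8];

    out->needs_options  = (ihl != 5);
    out->ttl_error      = (ttl <= 1);
    out->length_error   = (ip_total_len < (uint16_t)(ihl * 4));
    out->checksum_error = (ip_header_checksum(ip_hdr, ihl) != 0);
}
```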
  • L 3 INFO Memory is 32-byte wide. Each location is dedicated for corresponding numbered port. Result of L 3 header checks for receiving port is stored in corresponding location and used by Forwarding Block to decide whether packet is sent to L 3 Block for processing.
  • L 3 check (e.g., CHK) block takes into consideration if arriving packet contains VLAN tag, if VLAN tag option is enabled. If so, hardware accounts for shift in appropriate fields for L 3 header checking process. This amounts to 4-byte shift of L 3 header following MAC header.
  • VLAN priority bits are extracted and passed along with L 3 INFO. VLAN priority bits may be enabled to override QoS information set in L 2 CAM result and L 3 Header Lookup result. Programmable register is provided to load pattern to identify if incoming packet is VLAN tagged packet.
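  • A small sketch of the resulting header offset; the 14-byte MAC header and 4-byte VLAN tag are the usual Ethernet sizes, assumed here for illustration.

```c
/* Offset of the L 3 header behind the MAC header; the 4-byte shift applies
 * only when the packet carries a VLAN tag and the VLAN option is enabled. */
#define MAC_HDR_LEN   14u
#define VLAN_TAG_LEN   4u

static inline unsigned l3_header_offset(int vlan_tagged)
{
    return MAC_HDR_LEN + (vlan_tagged ? VLAN_TAG_LEN : 0u);
}
```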
  • L 3 Engine (L 3 E) 70 is hardware block for implementing the Layer 3 CAM lookup and age table functions for switch circuit 20 .
  • L 3 E 70 receives requests from forwarding engine (FE) 52 and CPU 12 , processes requests and returns results to requester.
  • L 3 E 70 lookup functions include: receiving, buffering and processing lookup requests from FE 52 ; providing hardware to calculate hash index from destination IP (DstIP) address provided by FE 52 ; reading CAM entry at address and checking for IP address match; following linked list entries until match is found or end of list is reached; and returning lookup result to FE 52 .
  • L 3 E 70 age table maintenance function includes: maintaining age table in control memory 136 ; adding and deleting entries in table by CPU 12 request; aging table at CPU-controlled intervals; reporting aged entries to CPU; maintaining aging time stamp; and making entries live.
  • L 3 E 70 CAM management assistance function includes: providing hardware hash calculation function for CPU 12 ; implementing search function which scans L 3 CAM and reports matching entries; and providing change option to search function which writes new data into matching entries.
  • CPU 12 interface to L 3 Engine 70 is for age table and L 3 CAM maintenance.
  • Initial CAM entries are written to L 3 CAM 126 by CPU 12 through dedicated control memory interface port. Managing linked entries and free buffers is done by CPU 12 . Searching for entries and reporting or changing them is accomplished by appropriate command registers.
  • Age table entries are created and deleted by CPU 12 using add and delete commands. Aged entries are reported to CPU 12 and deleted by CPU 12 using delete command. Hardware modifies time stamp in age table entry when entry is made live.
  • Packet memory 16 includes 8-MB SDRAM with four 1M×16 devices providing 32-bit data path to 4096 2KB blocks for packet storage.
  • L 3 Engine 70 writes to packet memory 16 to modify fields (e.g., destination address (DA), source address (SA), TTL and checksum) in packet following L 3 lookup.
  • DA and SA fields are written in 32-byte burst with byte enables set appropriately.
  • MAC address RAM 196 is 32-entry RAM, indexed by port number, which contains lower byte of MAC address for each physical port.
  • IP HDR RAM 198 is 2-port Internet Protocol header memory RAM located on switch circuit 20 . Each entry contains IP values (e.g., TTL, checksum and DST IP) for packet. Write port of RAM 198 is used by packet memory engine 90 to store data from packet IP header. As data streams to packet memory 16 , appropriate bytes are pulled and written to RAM 198 . L 3 Engine 70 uses read port of RAM 198 to access data required to process lookup request from FE 52 . Entries are indexed by port number, so receive (RCV) port number is used to lookup entry.
  • L 3 CAM 126 is contained in 2-MBytes synchronous DRAM (SDRAM) located in single 1M×16 part. Since SDRAM is optimized for burst transfer, L 3 Engine 70 accesses occur in bursts of eight 16-bit words. On-chip arbiter/controller logic for L 3 CAM 126 memory has multiple ports to allow better pipelining of accesses and L 3 engine 70 uses two of these ports.
  • L 3 CAM 126 data structure is implemented as hash table combined with pool of free buffers which can be linked to entry in hash table. Entry, whether in hash table or free buffer pool, is 8 words (16 bytes). Entry is referred to by entry number, 17-bit number used when indexing CAM, when indexing into age table or when reporting results of search or aging operation.
  • Base hash table contains 64K entries and resides in lower 1-MByte SDRAM. Entries in table have entry numbers in 0 to 64K range, i.e. bit 16 of entry number is set to ‘0’. Entries located in free buffer pool are in upper 1-Mbyte of SDRAM, and entry numbers have bit 16 set to ‘1’. Address of first word of entry in CAM is determined by concatenating entry number with 3 bits of ‘0’.
  • CPU 12 creates entries in hash table for DstIP addresses by hashing address and using resulting 16-bit hash index as offset to entry in table. When multiple addresses hash to same entry in base table, link is created to free buffer pool entry. If additional addresses hash to same location, they can be added to end of linked list. CPU 12 creates and maintains entries and manages linked list structures.
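  • Address arithmetic implied by this layout is sketched below. The hash function itself is not specified in this description, so a simple fold is shown purely as a placeholder.

```c
#include <stdint.h>
#include <stdbool.h>

#define FREE_POOL_FLAG  (1u << 16)   /* bit 16 of 17-bit entry number */

/* Entries in the upper 1-MByte free buffer pool have bit 16 set. */
static inline bool entry_in_free_pool(uint32_t entry_number)
{
    return (entry_number & FREE_POOL_FLAG) != 0;
}

/* First 16-bit word of an entry: entry number concatenated with 3 zero bits
 * (each entry is 8 words, i.e. 16 bytes). */
static inline uint32_t cam_word_address(uint32_t entry_number)
{
    return entry_number << 3;
}

/* Placeholder hash: the actual hash function is not specified here; a fold
 * of the upper and lower halves of DstIP is shown purely as an example. */
static inline uint16_t hash_dst_ip(uint32_t dst_ip)
{
    return (uint16_t)((dst_ip >> 16) ^ (dst_ip & 0xFFFF));
}
```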
  • Control memory block (CTL MEM) 136 uses 128K×16 synchronous SRAM (SSRAM), instead of SDRAM devices because most data structures stored require single read and write accesses.
  • L 3 Engine 70 uses 32-KB portion of control memory to store age table. It does single read followed by single write of word in age table. Each 16-bit word contains age table information for 4 CAM entries. Aging information for particular L 3 CAM 126 entry is accessed by using CAM entry number divided by 4 as address into age table.
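  • Age-table addressing can be pictured as follows; the 4-bit-per-entry packing (2-bit time stamp plus valid/permanent bits) and the ctl_mem_* helpers are assumptions for illustration.

```c
#include <stdint.h>

extern uint16_t ctl_mem_read(uint32_t addr);                 /* hypothetical helpers */
extern void     ctl_mem_write(uint32_t addr, uint16_t data);

#define AGE_TABLE_BASE  0x0000u      /* assumed offset of 32-KB age table */

/* One 16-bit word holds aging information for 4 CAM entries, so the word
 * address is the entry number divided by 4; a 4-bit sub-field per entry
 * (2-bit time stamp plus valid/permanent bits) is assumed here. */
void age_table_touch(uint32_t entry_number, uint16_t curr_time)
{
    uint32_t addr  = AGE_TABLE_BASE + (entry_number >> 2);
    unsigned shift = (entry_number & 0x3) * 4;

    uint16_t word = ctl_mem_read(addr);               /* single read ...   */
    word &= (uint16_t)~(0x3u << shift);               /* clear time stamp  */
    word |= (uint16_t)((curr_time & 0x3u) << shift);  /* write currTime    */
    ctl_mem_write(addr, word);                        /* ... single write  */
}
```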
  • Forwarding Engine (FE) 52 performs lookup requests to L 3 Engine 70 for each IP packet to be processed. Four-deep FIFO buffer is provided to buffer these requests. FE 52 provides RCV Port Number and Block Number for each packet. After lookup is complete, L 3 Engine 70 returns RCV Port Number as well as L 3 Result and L 3 Status word containing various flags and information from matching CAM entry.
  • Regarding age table support: since control memory 136 containing age table does not support locked operations, table modifications are done by hardware. Such table modifications address condition of two agents trying to modify same table entry.
  • CPU 12 can initialize entries to invalid state at startup by writing to control memory; but during operation, hardware performs table modifications.
  • Age table operations are done by CPU 12 write to age command register.
  • Write to age command register causes Age Table Busy flag in L 3 Status register to be set until operation is complete.
  • Aged entries are reported in registers (e.g., AgeResult 1&2).
  • In FIGS. 15A-B, age table maintenance is illustrated, starting with CPU or live command processing 200 , then determining whether age command 202 applies. If so, increment time stamp 210 and set age flag; otherwise, read table entry 204 , mask and modify table entry 206 , and write table entry 208 . Further, in FIG. 15B, after age flag set 212 , age table is read 216 , then determine age out 218 . If so, then write result registers 220 , set result valid 222 , wait for CPU 224 , and clear result valid 226 ; otherwise determine 228 if last entry. Next, clear age flag 232 , 230 and read hash table 240 .
  • Time stamp is 2-bit value providing four age time intervals. There are two age time counters, currTime and ageTime. CurrTime is reset to zero and increments when CPU 12 issues age command. Entries with time stamps equal to this value are newest entries. AgeTime value is always equal to currTime+1 (i.e., currTime−3, modulo 4). Entries with time stamps equal to ageTime are aged next time CPU 12 issues age command.
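  • The relationship between the two counters reduces to modulo-4 arithmetic, for example:

```c
#include <stdint.h>

/* ageTime always trails currTime by the full 2-bit wrap: (currTime + 1) mod 4,
 * which is the same as (currTime - 3) mod 4. */
static inline uint8_t age_time(uint8_t curr_time)
{
    return (uint8_t)((curr_time + 1) & 0x3);
}
```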
  • CPU adds entry to age table when creating new entry in L 3 CAM 126 . Until entry is added to age table, entry does not participate in aging process.
  • CPU 12 writes (e.g., AgeCmd) register with entry number and add or add permanent command, and hardware reads appropriate entry, modifies valid and permanent bits appropriately and writes currTime into time stamp field.
  • Hardware makes entry live (i.e., accessed) when L 3 CAM lookup results in IP hit. Entry number of matching entry is used to access age table, and time stamp field is updated with currTime. Entries which are accessed frequently have more recent time stamp than infrequently used entries, and are not aged out.
  • CPU 12 deletes entry in age table when removing entry from L 3 CAM 126 .
  • CPU 12 writes AgeCmd register with entry number and delete command, and hardware reads appropriate entry, clears valid bit, and writes modified entry back to table.
  • First, aged entry number is used to access L 3 CAM 126 to retrieve DstIP for entry.
  • DstIP is hashed to locate base hash table entry and CAM entry at address is read.
  • Hardware follows linked list, reading CAM entries until retrieving entry with Link Address equal to original aged entry number. Entry number is reported along with aged entry number in AgeResult registers.
  • CPU 12 provides L 3 CAM management functions, including initial setup, adding entries, deleting entries and managing linked lists and free buffer pool.
  • Hardware provides automatic search/change capability to assist CPU 12 in locating entries with certain characteristics and optionally changing such entries.
  • Search operations are initiated by CPU 12 write to SearchCmd register.
  • Write to SearchCmd register causes Search Busy flag in L 3 Status register to set until operation is complete.
  • Matching entries are reported in (e.g., SearchResult) registers.
  • FIG. 16A shows search operation steps.
  • CPU initiates search 234 , writes commands 236 , initialize entry to zero 238 , read hash table 240 , then determine match 242 . If so, write result to registers 244 , and wait for CPU 246 ; otherwise, determine if linked 248 . If so, clear age flag 250 , else, determine if last entry 254 . If so, clear age flag 252 ; otherwise, clear age flag 256 .
  • Hardware performs automatic and exhaustive search of L 3 CAM 126 when SearchCmd register is written. Starting with entry 0, each entry in base hash table is read and checked against search criteria. If entries have valid link address, then linked entries are read and checked. Minimum 64K CAM entries are read.
  • SearchCmd can be written with Abort Flag set, and hardware exits search process. Pending SearchResults are read by CPU 12 before hardware exits and clears Search Busy flag.
  • Search criteria are specified in SearchMask and SearchData registers (16 registers total). Before search command is issued, SearchMask registers are written; ‘0’ in bit position masks bit from consideration in comparison. SearchData registers are written with data values to be matched.
  • If change option was selected when (e.g., SearchCmd) value was written, then matching entries found during search are changed by hardware according to values written to change setup registers. When matching entry is found, hardware alters data and writes back to CAM before reporting match result to CPU 12 .
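  • The masked compare and masked update can be illustrated as below; register names follow the text, but the exact update rule applied by hardware is an assumption shown for clarity.

```c
#include <stdint.h>
#include <stdbool.h>

/* '0' bits in SearchMask exclude the corresponding bit from the comparison. */
bool search_match(const uint16_t *entry, const uint16_t *search_mask,
                  const uint16_t *search_data, int nwords)
{
    for (int i = 0; i < nwords; i++)
        if ((entry[i] & search_mask[i]) != (search_data[i] & search_mask[i]))
            return false;
    return true;
}

/* Assumed masked-update rule for the change option: masked bits are replaced
 * by ChangeData, other bits are preserved, then the entry is written back. */
void apply_change(uint16_t *entry, const uint16_t *change_mask,
                  const uint16_t *change_data, int nwords)
{
    for (int i = 0; i < nwords; i++)
        entry[i] = (uint16_t)((entry[i] & ~change_mask[i]) |
                              (change_data[i] & change_mask[i]));
}
```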
  • L 3 Engine receives CAM lookup requests from forwarding engine and searches matching entry in L 3 CAM 126 . Results of search are returned to FE 52 , and additional requests are serviced.
  • FIG. 16B flow chart shows CAM lookup steps. Initially, valid buffer is set 258 , read IP header RAM 260 , hash destination IP (DstIP) address 262 , and read hash table 264 , then determine if valid and hit 266 . If so, read port MAC RAM 268 , update packet data 270 , modify packet 272 , and write result 274 . If not, determine valid link 276 . If so, follow link 278 and read hash table 264 , else write result 274 .
  • L 3 Engine 70 buffers up to 4 lookup requests from forwarding engine (FE) 52 . When buffer is full, busy signal is sent to FE 52 . Buffer is organized as FIFO and contains receiving port number and block number for lookup request.
  • DstIP address is hashed to 16-bit value which is used as entry number for base hash table. That entry is read, and words containing DstIP address are compared to packet's DstIP address. If these two addresses match, then IP hit bit is set, and results of successful lookup are returned to FE 52 .
  • Before result is posted to FE 52 , packet may be modified, depending on bit in L 3 Flags field of CAM entry. If Don't modify bit of CAM entry is set, nothing is changed in packet. Otherwise, when lookup is successful, TTL field of IP header is decremented and modified in packet memory, and (e.g., CheckSum) field is recalculated and changed. Packet's DA is overwritten with value contained in matching CAM entry, and SA is replaced with value from MAC Address Registers and MAC Address RAM.
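  • A sketch of this per-hit rewrite follows. The incremental checksum patch is the familiar RFC 1141-style update (hardware could equally recompute the checksum from scratch), and the structure layout is illustrative.

```c
#include <stdint.h>

/* Illustrative view of the IP fields touched on a hit; checksum here is the
 * 16-bit header checksum read in network (big-endian) order. */
struct ip_fields {
    uint8_t  ttl;
    uint16_t checksum;
};

void l3_modify_on_hit(struct ip_fields *ip,
                      uint8_t dst_mac[6], const uint8_t cam_next_hop_mac[6],
                      uint8_t src_mac[6], const uint8_t port_mac[6])
{
    /* TTL occupies the high byte of its 16-bit header word, so decrementing
     * it lowers the header sum by 0x0100 and the complemented checksum rises
     * by the same amount, with end-around carry (RFC 1141-style patch). */
    ip->ttl--;
    uint32_t sum = (uint32_t)ip->checksum + 0x0100;
    ip->checksum = (uint16_t)(sum + (sum >> 16));

    for (int i = 0; i < 6; i++) {
        dst_mac[i] = cam_next_hop_mac[i];   /* DA from matching CAM entry       */
        src_mac[i] = port_mac[i];           /* SA from MAC Address Regs and RAM */
    }
}
```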
  • If match is not found in L 3 CAM 126 , hardware checks to see if default route registers are written by CPU. These registers provide ‘default route’ CAM entry and are programmed with same information as CAM entries in control memory 136 . If default route exists, then packet is modified using default information, and (e.g., IPHit and DefaultRouteUsed) bits of L 3 Result are set.
  • MAC Address RAM 196 contains lower byte of MAC address for each port. It is 32 ⁇ 8 dual port ram which is written by CPU 12 and read by hardware during packet modification. This value replaces lower byte from MAC Address Registers when writing new SA for packet.
  • L 3 results of CAM lookup returned to FE include receive port number and block number originally provided by FE 52 and two 16-bit values, L 3 Result and L 3 Status. A detailed bit definition for these last two values was provided earlier in this document.
  • Switch circuit 20 operates on various performance and integrity levels. In cut-thru switching mode, relatively fast ether switching mode and high performance level are achieved; however, there is possibility of transmitting packet with error during receive process. In this mode, CPU programs MACs to raise receive request when collecting 64 bytes. Also it programs MAC to raise subsequent receive request after every 64 bytes collection. First request provides fast header analysis and switching.
  • In store-and-forward (SF) mode, switch circuit 20 waits until packet completion and updates transmit queues of relevant destination. MAC programming remains same.
  • Forwarding Block acts in store-forward mode on port-by-port basis. SF mode is selectable on per-port basis. In this mode, port linking is disabled.
  • packet moves in following directions: received on LAN port(x) and transmitted to any/all other LAN ports; received on LAN port(x) and posted to UL queue; received on LAN port(x) and posted to CPU queue; received on LAN port(x) and packet dropped; and forwarded from CPU to any/all LAN ports.
  • packet flows through packet memory 16 and switch circuit 20 .
  • switch circuit 20 participates in forwarding, filtering and queue management.
  • In case of Ethernet port originated packet flow, packet is received on Ethernet ports, and switch circuit 20 is triggered on such packet by request from one of MACs. This is hardware trigger mode. Switch circuit 20 , in coordination with RISC processor 12 , allocates free block pulled from receive free list. Once block is assigned, block is busy until destination agent(s) complete transmission. Transmission completion has mechanism to release block and insert in receive free list.
  • switch circuit 20 needs to obtain packet header information. Header is extracted from MAC received data stream.
  • PM engine 90 identifies header from data stream and loads on port-specific segment of Ether header memory.
  • CAM Processor makes lookup on CAM and delivers result to Auto Forwarding Block.
  • AFB adds packet to one of following queues: one of Ethernet ports transmit queue; all Ethernet ports queues, UL transmit queue and CPU queue; UL queue; CPU queue; or L 3 block for L 3 lookup.
  • AFB handles updating “Block address”, “Byte count”, and “routing information” on transmit queues. Once such information is provided, respective transmitting agents handle packet transmission. At end of transmission, block is released and added to receive free list. Block release is described in more detail in other sections of this document.
  • CPU 12 posts packets to XQMB 154 of Auto Forwarding Block for transmission to one or several ports.
  • XQMB 154 handles posting packet to respective queues.
  • CPU 12 assembles or modifies existing packet in packet memory 16 for transmission.
  • CPU 12 with help of DMA function, can transfer packet from local memory 100 to packet memory 16 , and at end of such transfer, can initiate XQMB 154 action.
  • FIG. 17 shows packet receive process, which is accomplished by Receive Arbiter 80 , Buffer scheduler 94 , LAN bus controller 76 , Packet memory engine 90 , header memory 290 , data buffer 84 , as well as Auto Forwarding Block (AFB) and PM SDRAM controller.
  • Receive arbiter 80 arbitrates and prioritizes receive request from Ether ports. It raises request to buffer scheduler 94 . When request is under process, arbiter 80 makes background processing on remaining requests.
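  • A minimal round-robin pick over the 32-bit receive request register, as a sketch of this fairness scheme:

```c
#include <stdint.h>

/* Pick the next requesting port after the one last serviced, scanning the
 * 32-bit receive request register in round-robin order. */
int rr_pick(uint32_t request_bits, int last_served)
{
    for (int i = 1; i <= 32; i++) {
        int port = (last_served + i) % 32;
        if (request_bits & (1u << port))
            return port;
    }
    return -1;   /* no request pending */
}
```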
  • Buffer scheduler 94 handles resource allocation of internal buffers.
  • Buffer scheduler 94 maintains two receive buffers and two transmit buffers 84 .
  • Each buffer can hold up to 64 bytes of data, and buffer allocation algorithm is optimized for fair bandwidth extraction between receivers/transmitters.
  • LAN Bus Controller 76 interfaces to LAN bus to read/write packet data to MAC FIFO buffers, and access MAC receive completion status.
  • LAN bus controller 76 may access MAC and read data slice, and store to internal data buffer.
  • a data buffer can hold up to 64 bytes of information.
  • Packet memory engine 90 sets up moving packet slice-by-slice into PM 16 .
  • Packet memory engine 90 reads PM receive block register and byte count register, and updates (i.e., increments) byte count register on each transfer.
  • Packet memory engine 90 commands PM SDRAM controller to start data transfer.
  • PM SDRAM controller transfers data from receive buffer to packet memory 16 , and generates control timing to access external SDRAM.
  • Auto Forwarding Block allocates free block to receiver; initializes receive block register, byte count register. Occasionally AFB commands to reject packet.
  • Every LAN MAC has receive request signal (e.g., RREQx) which, when active, indicates at least 64 bytes of data (i.e., header/data region) is collected in internal FIFO. There are 32 request signals from LAN bus. Following steps describe new packet reception and header memory loading:
  • RREQx signal becomes active, indicating 64 bytes valid in FIFO.
  • Request active and (e.g., RBODYx) bit clear means new packet.
  • RREQx signal is first-level conditioned if corresponding bit enabled in Receive Enable register.
  • When RISC 12 allocates free block to receive port, it writes block address on corresponding receive block register and enables the receiver.
  • Conditional RREQx first wins RREQ arbitration to get service.
  • Buffer scheduler 94 allocates one of two free receive data buffers and enables LAN bus controller 76 to start data transfer.
  • LAN bus controller 76 executes Burst Read Accesses on LAN bus targeted to Port-x. Read data is written on allocated internal receive buffer. Since body bit is clear, loading process signatures slice as “header”. If slice is header, it writes header data on 2-port Ether Header memory. At end of header loading, Port to be Analyzed FIFO is loaded with 5-bit port number. Loading of FIFO enables CAM engine to start analyzing header information. Load completion calls attention of PM engine for data movement from receive data buffer to packet memory 16 .
  • PM engine 90 updates byte count and sets-up SDRAM Controller for data transfer to packet memory 16 .
  • Buffers have dedicated channels to SDRAM Controller.
  • SDRAM Controller arbitrates transfer requests amongst channels and starts executing request at time overlapping address and data phases to maximize throughput and efficiency. Requesting channel is held arbitrating for LAN bus until full slice is moved in packet memory 16 .
  • Packet data reception proceeds similarly: MAC RREQx signal, when active, indicates at least 64 bytes of data is collected in internal FIFO, as follows:
  • RREQx signal is first-level conditioned if corresponding bit is enabled in Receive Enable register. Conditional RREQx first wins the RREQ SCAN arbitration to get the service.
  • Buffer scheduler 94 allocates free receive data buffer 84 and alerts LAN bus controller 76 .
  • LAN bus controller 76 executes burst-read accesses on LAN bus targeted to Port-x. Read data is written in allocated receive buffer 84 .
  • Rec Link register(x) indicates accordingly. If link bit is set, slice is in switch mode for present slice and consecutive slices until end of packet.
  • Loaded data buffer calls attention of PM engine 90 to load data into packet memory 16 .
  • PM engine 90 uses Receive Block Address register(x) and Receive BC register(x) to construct PM destination address. Byte count is updated in receive BC register(x) and transmit BC register(y).
  • PM engine issues command to PM SDRAM controller to start data transfer, and is then ready to service receive or transmit buffer or accept command.
  • Forwarding Engine in coordination with CAM Processor and L 3 Lookup Block, evaluates current receiving packet for following possible decisions: reject packet; link packet; forward packet to transmitter queue; multicast packet to two or more ports; broadcast packet only to Ether ports; broadcast packet to Ether ports and UL; send packet to UL; send packet to CPU; or send packet to L 3 Lookup Block for L 3 analysis.
  • CAM Processor writes decision information into header analyzed FIFO. Such write process wakes up Forwarding Block to take up forwarding process.
  • PM engine 90 keeps loading successive slices of packet in packet memory 16 independent of CAM analysis. Decision of CAM might occur at middle of packet reception or after end of packet reception. If header analysis is complete before packet reception is complete, Forwarding Block acts on packet if packet is unicast or destination is L 3 Lookup Block which carries on further analysis associated with L 3 forwarding. For other cases, Forwarding Block is not called to action until receive completion of packet. If packet reception is complete before header analysis, Forwarding Block is not called into action until header analysis is complete. Receiver is not primed again until forwarding decision has been taken on received block and acted upon.
  • control bit is set for corresponding receiving port to reject incoming packet.
  • PM engine 90 looks at reject bit while preparing transfer from receive buffers to packet memory. If reject is set, PM engine 90 empties FIFO without setting transfer to PM. PM engine 90 clears reject bit at end of packet reception. Receive complete state is indicated to Forwarding Block.
  • CAM lookup posts port number in CAM Analysis Done FIFO in addition to setting CAM analysis done bit for port. This draws attention of Forwarding Block prior to completion of packet reception. Forwarding Block checks several conditions to take forwarding action. At this time, it may link packet to corresponding transmitter or post packet in queue of transmitter.
  • Transmitter may be busy, i.e., transmitter queue contains one or more packets queued or transmitter is currently processing old packet.
  • Forwarding Block requests XQMB 154 to post receive packet in transmitter queue with incomplete information. This is handled by clearing RC bit in BC entry in control memory 136 . This bit, if clear, means packet block address is valid, but byte count is invalid. Packet data is incomplete in packet memory 16 .
  • Forwarding Block pushes incomplete packet in transmitter queue on special occasion. When receiver(x) wants to switch to transmitter and transmitter is currently busy, Forwarding Block puts packet in transmitter queue to maintain order of priority. At pushing event, byte count information is invalid.
  • XQMB 154 commands to link. If receiver completes packet before getting to transmitter, Forwarding Block sets such bit, and loads valid BC value on BC entry. Subsequent forwarding action on packet degenerates to store and forward mode.
  • Transmitter may be free when switching decision occurs.
  • Forwarding Block commands to link receiver to transmitter. It does not manipulate control memory structure.
  • Forwarding Block primes receiver, and transmitter continues to transmit until end of packet without further intervention. Finishing transmitter event releases block and pushes to receive free list.
  • Forwarding Block may act on receive packet after receive completion and CAM analysis completion. Since packet is received, XQMB 154 is instructed to post on appropriate transmitter queue; this is Store and Forward mode.
  • Broadcasting to Ether ports decision may result from not finding destination port or hit on broadcast MAC address.
  • broadcast map for receiving port is fetched, and packet is forwarded to transmit queue management block for posting on transmit queues
  • header analysis results in port not physically connected to LAN ports, but CPU or uplink ports; Forwarding Engine Block instructs XQMB 154 to queue on appropriate ports.
  • XQMB 154 may queue packet on ports if ports are specified in broadcast port map.
  • When end of packet is sensed from LAN port(x), LAN controller signals by bit in slice status. PM engine 90 , while moving slice to PM 16 , notifies same status by setting appropriate bit in Rec end reg. Forwarding Block acts on every receive completion; and in addition to forwarding actions, it instructs Free Queue Management Block to prime receiver. In case previous packet is rejected, no new block needs to be allocated; in such case it enables receiver to receive new packet.
  • transmit port activity is top-level enabled by Forwarding Block or Transmit Queue Management Block of Auto Forwarding Block.
  • XQMB 154 picks highest priority packet, and loads transmit block address register and byte count register corresponding to packet. This action enables transmitter on transmit enable register.
  • Forwarding Block or XQMB 154 loads link command with which hardware copies receiver block address to transmitter block address register. It copies current running receiver rec(x) byte count value to transmitter(y) byte count register. It also sets link bit active.
  • Transmitter enters arbitration if transmit (e.g., XMT) enable bit is set, and Byte count validity is met. If MAC transmit FIFO has at least 64 bytes free space, it raises TREQ# signal. This signal is conditioned with first phase enable signal, and transmitter enters arbitration with other TREQ# signals asserted by other transmitters. Winning transmitter requests allocation of one of two free transmit buffers. This request is forwarded to buffer scheduler. When buffer scheduler allocates free buffer, transmitter enters arbitration for PM engine 90 service. PM engine 90 time-multiplexes between receive requests and transmit requests and other commands such as link, receive enable and transmit enable.
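  • The request condition can be summarized as below; field names are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative transmit-request condition: XMT enable set, byte count valid,
 * and at least 64 bytes free in the MAC transmit FIFO before TREQ# is raised. */
struct xmt_port {
    bool     xmt_enable;
    bool     byte_count_valid;
    uint32_t mac_fifo_free;
};

bool raises_treq(const struct xmt_port *p)
{
    return p->xmt_enable && p->byte_count_valid && p->mac_fifo_free >= 64;
}
```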
  • PM engine 90 sets up transfer with PM SDRAM controller by giving command to move slice from PM to data buffer, and updates byte count and address registers in array for corresponding transmitter.
  • PM engine 90 signatures slice as header or non-header based on XMT body bit. Along with slice, PM engine 90 passes information, such as slice count and port address through buffer attributes. Loaded slice calls attention of corresponding LAN controller for service to transfer data from transmit buffer to MAC on LAN bus. LAN controller moves slice to target MAC port and releases buffer. Whenever PM engine 90 moves slice, decremented byte count is checked to see if it reached zero. If it reached zero, packet may reach end-of-packet status based on following cases:
  • In link mode, transmitter byte count reaching zero is not regarded as end of packet; rather, transmitter has to wait for receiver to get slice. Transmitter does not participate in arbitration again until slice is received on linked receiver. Link bit clear and byte count zero signal packet completion.
  • PM Engine clears XMT enable bit, and sets End of packet transmit bit. End of transmit draws attention of XQMB 154 to look at transmitter queue in control memory 136 . If queue contains additional packets, XQMB 154 loads new packet to re-enable transmitter. If queue is empty, XQMB 154 does not take action. Trigger point for enabling transmitter is: when current packet ends, and new packet is pending in queue; when receive packet is targeted to transmitter and queue is empty; or when CPU inserts packet to transmitter.

Abstract

Multilayer switching device and associated technique enables simultaneous wire-speed routing at OSI layer 3, wire-speed switching at layer 2, and support of multiple interfaces at layer 1. Implementation may be embodied using one or more integrated circuits (ASIC), RISC processor, and software, thereby providing wire-speed performance on interfaces, in various operational modes.

Description

    FIELD OF INVENTION
  • Invention relates to digital networks, particularly to multi-layer switching network apparatus and method. [0001]
  • BACKGROUND OF INVENTION
  • Conventional local area network (LAN) and TCP/IP have become dominant technologies in computer networking. As businesses increasingly rely on such technologies, both LAN size and TCP/IP traffic volume that runs across them have grown dramatically. This has led the network manager on continuous search for products to increase network performance, easily adapt to changing network requirements, and preserve existing network investment. [0002]
  • Presently, LAN technology is evolving into Gigabit per second (Gbps) range. Equipment designers have been challenged to make network interfaces and networking products such as bridges, routers, and switches, fast enough to take advantage of the new performance. Compounding the equipment design problem has been the rapid innovation in networking protocols. The traditional response to this shifting sands problem has been to build easily upgradable software-intensive products. Unfortunately, these software intensive products typically exhibit poor system performance. [0003]
  • Accordingly, there is need for a new generation of internetworking devices capable of gigabit speeds, but with the flexibility of previous software intensive products. [0004]
  • SUMMARY OF INVENTION
  • Invention resides in a multilayer switching device and associated technique for enabling simultaneous wire-speed routing at layer 3, wire-speed switching at layer 2, and support of multiple interfaces at layer 1, according to OSI reference model. Inventive implementation may be embodied using one or more integrated circuits (ASIC), RISC processor, and software, thereby providing wire-speed performance on interfaces, in various operational modes. [0005]
  • BRIEF DESCRIPTION OF DRAWINGS
  • FIG. 1 is system-level diagram of preferred embodiment. [0006]
  • FIGS. 2A-B are block diagrams of first- and second-level switch respectively of present embodiment. [0007]
  • FIG. 3 is general switch block diagram of present embodiment. [0008]
  • FIG. 4 is general control-path diagram of present embodiment. [0009]
  • FIG. 5 is general datapath diagram of present embodiment. [0010]
  • FIGS. 6A-B are block diagrams of LAN interface and datapath interface respectively of present embodiment. [0011]
  • FIGS. 7A-B are block diagrams of DMA transfer between local memory and packet memory, and processor access to packet memory respectively of present embodiment. [0012]
  • FIGS. 8A-B are diagrams of processor access to L3CAM memory and control memory respectively of present embodiment. [0013]
  • FIGS. 9A-B are diagrams of processor access to L2CAM memory, and LAN arbiter interaction with datapath respectively of present embodiment. [0014]
  • FIGS. 10A-B are diagrams of transmit queue management block (XQMB) interfaces and operation respectively of present embodiment. [0015]
  • FIG. 11 is DMA block diagram of present embodiment. [0016]
  • FIG. 12 is flowchart of CPU-to-packet memory operation of present embodiment. [0017]
  • FIG. 13 is flowchart of packet memory-to-CPU operation of present embodiment. [0018]
  • FIG. 14 is block diagram of L 3 block interfaces of present embodiment. [0019]
  • FIGS. 15A-B are flowcharts of age table maintenance of present embodiment. [0020]
  • FIGS. 16A-B are flowcharts of search and lookup operations respectively of present embodiment. [0021]
  • FIG. 17 is flowchart of packet reception of present embodiment. [0022]
  • DETAILED DESCRIPTION
  • FIG. 1 is top-level overview diagram of system architecture for preferred embodiment. Multilayer switch device [0023] 6 couples local area network (LAN) workgroup hubs 2 through enterprise switching hub 4 to wide-area network (WAN) links through multiprotocol router 8. Multilayer switch 6 and associated technique enables simultaneous wire-speed routing at Layer 3 (L3), wire-speed switching at Layer 2 (L2), and support multiple interfaces at Layer 1 (L1), according to OSI reference model. System may be embodied using one or more integrated circuits (ASIC), RISC processor, and software, thereby providing wire-speed performance on various interfaces in various operational modes.
  • System architecture comprises two-level distributed multilayer switch, preferably using 4-Gbps non-blocking switch fabric [0024] 6. Multilayer (i.e., both L2 and L3) switch fabric is entirely contained within single ASIC capable of switching 3M pps or more. A 4 Gbps I/O bus connects one or more interface modules to the ASIC. Because the switch matrix is not necessarily integrated with the MAC layer, a wide range of interface types can be supported (i.e., both LAN and WAN). Using present embodiment, various combinations of layer 1 interfaces are supportable, and all interface modules are field-upgradable. Various interface modules may carry multiple physical interfaces.
  • As shown in FIG. 2A, first-level switch 22 includes switch ASIC 20, which couples RISC coprocessors (i.e., Network Management Processor (NMP) 10 and Route/Switch (RS) processor 12) for supporting higher-layer software functions and support features. Optional components may be added for redundancy of critical system components, such as power supplies. Memory 16 and input/output (I/O) modules 14 couple to switch circuit 20. [0025]
  • In FIG. 2B, second-level switch or [0026] cross-bar interconnection 18 couples multiple first-level switches 22. For example, in configuration shown with six first-level switches 22, aggregate performance of non-blocking switch fabric may exceed 24 Gbps.
  • [0027] RISC processors 10, 12 provided in each switch element 22 execute software to provide standards-based dynamic routing, and non-real time activities such as network management. Software is stored in flash memory, and is network-updatable via TFTP. Preferred software functions include: dynamic Internet Protocol (IP) routing (e.g., RIP, RIPv2, OSPF); layer 2 support (e.g., 802.1D STP); configuration support (e.g., enable/disable Layer 2 or Layer 3 support on per-port basis; ports can be grouped into broadcast domains, flexible subnet configuration); network management (e.g., SNMP, HTML, Telnet, TFTP, DHCP support).
  • Additional software functions include: quality-of-service provisioning (QOS) (e.g., providing multiple levels of prioritization, address- and policy-based QOS, dynamic layer 3 QOS based on RSVP); IP Multicast (e.g., IGMP, DVMRP); network traffic monitoring (e.g., RMON); hot standby support (e.g., VRRP); additional dynamic routing (e.g., NHRP); and certain IEEE enhancements (e.g., 802.1Q (i.e., VLAN), 802.3x (i.e., flow control), and 802.1p (i.e., priority)). [0028]
  • Present multi-layer switch approach offloads multiprotocol router 8 of local IP routing, thereby leaving router 8 with bandwidth for routing other protocols, and for handling WAN links. Hence, existing investment in router 8 may be preserved, for example, without changes to WAN topology. Further, effective performance at network apex is wire speed, and enterprise switching hubs 4 at network apex may be segmented, thereby preserving bandwidth, and extending useful life. Additionally, with present system architecture, installation therein of network products and applications is comparatively easier, particularly because addressing changes are incremental, thereby minimizing impact on network operations. Moreover, preferred system does not use non-standard protocols, thereby assuring interoperability in multi-vendor environment. [0029]
  • Although present multilayer switch system is suitable for applications at network aggregation points, present system may also be used in high-performance workgroup and server applications. For example, in high-performance workgroup application, present system may interconnect between cluster of closely cooperating high performance computers, such as in video postproduction, where ability to transfer data rapidly between workstations is critical to production throughput. In such case, wire-speed performance is interesting, and flexible layer [0030] 3-addressing support provides connections outside workgroup, without impacting switching speed. Additionally, in case of server applications, present multilayer switch system provides network attachment point for one or more servers. Wire-speed performance of present system allows network designer to use either layer 2 or layer 3 topologies, and removes potential network performance bottleneck.
  • Moreover, as described in further detail hereunder, preferred implementation of innovative multilayer switch apparatus and methodology provides following functionality: support for 16 or more full-duplex 100BaseT ports or up to 28 ports of 10/100BaseT ports; direct interface to MIPS-type RISC processor for management and routing; integration of SDRAM controller for shared high-speed 6-channel packet memory; integrates of CAM access interface to system processor; integration of hardware CAM processor for L[0031] 2 learning, lookup and live interactive activities or transactions; integration of hardware hash-based IP header lookup and management; integration of hardware-based transmit and free queue management; integration of L2 and L3 forwarding of unicast, broadcast and multicast packets; broadcast traffic management; integration of QoS, with 4 priority queues per port; hardware-handled packet movement; integration of 768 bytes of dual-port memory for L2 and L3 header for 28 ports; support for 4MB/16MB of SDRAM packet memory; implementation of 256 bytes of data buffers for concurrent transfers to PM SDRAM and LAN bus; intelligent buffer scheduler & arbiter for efficient bandwidth distribution; low-latency mode, store and forward mode selection, with 10-us switching latency; operation of LAN bus at 64-bit/66-Mhz; operation of packet memory bus at 32-bit/100-Mhz; operation of processor bus at 32-bit/66-Mhz; operation of control memory and L2 CAM interfaces at 16-bit/66-Mhz; operation of router (RT) cache SDRAM interface at 16-bit/66-Mhz.
  • Preferably, [0032] multilayer switch circuit 20 is implemented as single-chip integrated circuit (e.g., semicustom ASIC) that handles switching of any canonical packet, cell, frame, or other data communication element, with no or limited processing assistance from external processors. Switch circuit 20 operates in relatively low latency, and store-and-forward switching modes. Transactions between Ethernet ports may operate in low-latency cut-thru mode; other transactions may occur in store-and-forward mode.
  • As appropriate, [0033] switch circuit 20 may contain substantially one or more of following functions: external bus interface, processor interface, CAM interface, LAN interface, packet memory (PM) SDRAM interface, route cache SDRAM interface, control memory (CM) SRAM interface, LAN block, LAN bus arbiter, LAN bus controller, LAN block interfaces, data path block, data path buffers, data path controller, buffer scheduler, packet memory, packet memory SDRAM arbiter and controller, DMA function-to-processor interface, packet engine (PE), port control function, port attribute memory, L2 CAM engine, memory blocks for header and CAM analysis result, CAM structures, L2 header analysis hardware engine, auto-forwarding block, forwarding block, L3 header analysis result memory, free queue management block, block attributes management, transmit queue management block (XQMB), SRAM arbiter and controller, processor interface, L3 block, L3 header memory, hash function, L3 lookup algorithm, L3 management function, L3 aging function, route cache (RC) SDRAM arbiter and controller, RISC processor interface, slave interface, bus master interface, DMA interface, bus protocol, register interface-to-internal resources, and interrupts.
  • In FIG. 3, for example, preferred implementation of switch system shows general logic block diagram for [0034] switch circuit 20 coupled to: 64-bit 66 Mhz LAN bus, external memory 16 through 32-bit 99-Mhz bus, L2 CAM through 16-bit 66-Mhz bus, control memory 136 through 16-bit 66-Mhz bus, L3 route cache through 16-bit 66-Mhz bus, and switch processor 12 through 16-bit 66-Mhz bus, which couples to network management processor (NMW) 10 through external interprocessor controller (IPC) 24.
  • In FIG. 4, internal control path of [0035] switch circuit 20 is shown. External switch processor 12 couples to CAM interface 46, free queue management 48, L3 lookup 50, transmit queue management and scheduler 58, SDRAM memory controller 62, and SRAM memory controller 64. Also, internal control path includes forwarding engine 52, which couples to CAM interface 46, free queue management 48, L3 lookup 50, block attributes 60, transmit queue management and scheduler 58, and receive block 54. Transmit queue management and scheduler 58 couples to transmit block 56, SRAM memory controller 64, and block attributes 60. Receive block 54 and transmit block 56 couple to LAN bus. CAM interface 46 couple to CAM bus and receive block 54. SRAM memory controller 64 couples to free queue management 48, block attributes 60, L3 lookup 50, and SDRAM memory controller 62. SDRAM memory controller 62 couples to RC memory bus and L3 lookup 50. Block attributes 60 couples to free queue management 48. Forwarding engine 52 couples to receive block 54.
  • In FIG. 5, internal datapath of [0036] switch circuit 20 is shown. In particular, multi-channel packet memory arbiter and controller 66 couples to SDRAM packet memory bus, processor and DMA interface 68, L3 engine 70, receiver buffers 72, and transmit buffers 74. Receive and transmit buffers 72, 74 couple to media access controller (MAC) first-in first-out (FIFO) bus. Processor and DMA interface 68 couples to processor bus.
  • [0037] Switch circuit 20 includes processor interface 36 which couples to 32-bit MIPS RISC processor multiplexed bus (e.g., NEC R4300). Such processor bus, a 32-bit address/data bus operable up to 66 Mhz, operates in master and slave modes. In slave mode, such processor bus responds to accesses to internal resources, such as registers, CAM 142, Control Memory 136, PM SDRAM and RC SDRAM. In master mode, such bus handles DMA operations to and from PM SDRAM. Such processor bus does not respond to accesses to external resources, but cooperates with external system controller circuit. In master mode, such processor bus may handle DMA to system memory.
  • [0038] Switch circuit 20 includes CAM interface 46, a dedicated 16-bit bus compliant with content-addressable memory (i.e., Music Semiconductor CAM 1480 compatible) operating at 66 Mhz. Such bus may be shared by external interface. For route/switch (RS) processor accesses to CAM memory, special data path is provided through switch circuit 20. Switch circuit 20 generates CAM access timing control on behalf of RS processor 12. Switch circuit 20 learns and looks-up MAC addresses and port numbers through such bus.
  • [0039] Switch circuit 20 includes LAN interface 40 which couples LAN bus, a 64-bit access bus operating at 66 Mhz. Ethernet MAC devices connect to such LAN bus through receive and transmit MAC FIFO bus. Switch circuit 20 generates select signals and control signals for access to external MAC device FIFO bus. Switch circuit 20 reads and writes data in 64-bit single-cycle burst mode. Burst size is 64 bytes. Preferred bandwidth is 4 Gbps at 64-bit/66-Mhz operation at 64-byte slice size. Ethernet frames are transferred across LAN bus. At end of receive frame, status bytes are read.
  • [0040] Switch circuit 20 includes packet memory (PM) SDRAM interface 42, which includes PM SDRAM bus which operates at 32-bit/99-Mhz standard. Packet memory 16 is directly connected to such bus through registered transceivers. Preferred bandwidth is 400 MB/s at 99-Mhz operation and 64-byte burst mode. Seven-channel arbiter inside switch circuit 20 allows up to 7 agents to access packet memory 16. PM interface supports up to 8 MB of SDRAM in two banks.
  • [0041] Switch circuit 20 includes interface to Route Cache (RC) SDRAM for coupling timing control signals and multiplexed 16-bit bus, which operates in 66-Mhz mode capable of streaming data at 132 MB/sec.
  • [0042] Switch circuit 20 includes interface to Control Memory (CM) SRAM for managing block free queue list, transmit queues, block parameters and L3 CAM aging information. Such interface is 16-bits wide and operates at 66-Mhz. Address and data buses are multiplexed and operate in flow-through and pipelined modes.
  • FIG. 6A shows LAN block and interfaces [0043] 40 externally to Ethernet Media Access Controller (MAC) FIFO bus and internally to CAM interface block, datapath block 44, and packet engine block 82. LAN block interface functionality include bus arbitration for receive and transmit requests of FIFO bus, bus control and protocol handling, signaling internal datapath block to initiate data transfers and communicating with packet engine to signal begin and end of receive and transmit operations on FIFO bus. As shown, datapath block 44 couples to FIFO data bus, LAN bus controller 76, buffer allocator 78, and packet engine 82. LAN bus controller (LBC) 76 couples to FIFO bus control, buffer allocator 78, and receiver and transmit arbiters 80, which couple to packet engine 82 and receive and transmit requests.
  • When LAN interface [0044] 40 operates, receive requests and transmit requests are multiplexed and fed by external logic. Multiplexer uses 2-bit counter output. Front end demultiplexer reconstructs requests on 32-bit receive request register and 32-bit transmit request register. Few clocks latency for request may be sensed to be activated or deactivated, which may be handled by arbiter mechanism 80.
  • Receive [0045] arbiter 80 services receive port requests, preferably in round-robin scheme for equal distribution. Overlapped processing provides improved performance. Hence, if receive port is under service, next request prioritization occurs in parallel. During arbitration, arbiter 80 may receive port enabled, free block allocated signals from other modules. Upon certain channel winning arbitration, internal receive buffer is allocated 78, and data staged from MAC FIFO bus for packet memory 16. When buffer is granted, channel is presented to LAN Bus controller 76 for data transfer.
  • Additionally, transmit [0046] arbiter 80 services transmit port requests in round-robin scheme for equal distribution. Overlapped processing provides improved performance. Hence, when transmit port is under service, next request is prioritized in pipeline. During arbitration, arbiter 80 may receive port enabled, valid packet assigned, in link mode the transmitter has at least one slice signals from other modules. If channel has data slice in datapath 44, channel is not allowed to join arbitration until data is put into packet memory 16, thereby preventing out-of-sequence data transfer. Upon channel winning arbitration, it is presented to buffer allocator block 78 to obtain internal transmit buffers for staging from packet memory 16 for MAC FIFO bus. Once transmit request wins arbitration, and transmit buffer is allocated, channel is presented to packet engine block 82 to obtain data from packet memory 16. Once data is staged in transmit buffer, buffer requests to LAN Bus controller 76 to transfer data in transmit buffer to MAC FIFO bus.
  • LAN bus controller [0047] 76 provides access to MAC FIFO bus targeted to port moving slice between MAC FIFO and internal data buffers. Receive request, which wins receive arbitration and secures one of receive buffers from buffer allocator 78 and transmit buffers having data for transfer to FIFO bus, competes for services of LAN bus controller 76. Arbitration mechanism is configured to split bandwidth evenly between receive requests and transmit requests. LAN bus controller 76 generates end-of-packet status read cycles for receive request data transfer operations. Status information is used to determine if received packet is good or bad. If error is sensed, received packet may be rejected.
  • Data bus width of LAN bus is 64 bits. LAN bus access is performed in burst mode (i.e., single-cycle burst mode) with maximum of 64-byte transfer, preferably executing at 8 data cycles in burst. LAN bus controller [0048] 76 is started by buffer scheduler when data buffer is allocated to receive or when data transfer from packet memory 16 to one of transmit buffers is complete.
  • Receive and transmit data to LAN bus is staged through 64-byte deep receive and transmit data buffers in [0049] datapath block 44. Receive and transmit requests arbitration and FIFO bus control are handled by LAN block. Buffer allocator 78 in datapath block 44 manages allocation of receive and transmit buffers, and packet engine block 82 handles movement of data between packet memory 16 and receive and transmit buffers.
  • FIG. 6B shows [0050] datapath block 44 interface, including packet memory controller 82 coupled to data buffers 84 and packet memory engine (PME) 90. Data buffers 84 couple to LAN block 86, buffer scheduler 94, slice counters 88. Buffer attributes 92 couple to PME 90 and LAN block 86, which couple to buffer scheduler 94.
  • Data transfers between packet memory bus and MAC FIFO bus are staged through receive and transmit [0051] buffers 84 in datapath block 44. Block logic tracks state of buffers 84. Datapath block 44 interacts with LAN Block 86, packet engine block 82 and packet memory controller 82.
  • Data transfers from MAC FIFO bus to PM 16 and from PM 16 to MAC FIFO bus occur through temporary datapath storage buffers 84 inside switch circuit 20. Buffers 84 match difference in bus access bandwidth for slice, and maintain concurrent transfers between FIFO bus and PM bus. [0052]
  • Two buffers are provided for transmission, and two buffers are provided for reception. Such buffers are associated with respective buffer status. Transmit buffers hold data from PM 16 to MAC FIFO (LAN) bus. Receive buffers hold data from MAC FIFO bus to PM 16. Each buffer has dedicated channel to PM SDRAM Controller. PM SDRAM Controller arbitrates each request to transfer on first-come/first-serve basis. On LAN side, appropriate buffer is selected for read or write. [0053]
  • Frame transfer across LAN bus occurs on slice basis. Slice is 64 bytes. When [0054] switch circuit 20 is servicing port, slice of data transfers on single-cycle burst mode. Burst data transfer size is slice size, except for last slice in frame. Last slice size is decided by frame size. Ports are serviced, in time-division multiplex mode.
  • Receive slice buffer is used to capture LAN data from MAC FIFO. Slice is 64 bytes. [0055] Switch circuit 20 has two 64-byte buffers. During LAN FIFO read access, incoming 64-bit data words are strobed on selected slice buffers, word-by-word, during clock edges. Write order is from top to down. Receive status is maintained for respective receive slice. For example, slice status provides:
  • Receive slice size (represented by 6-bit number.) Maximum is 64 bytes. In read access, MAC provides in each data phase, valid bytes through bits (e.g., LBE#<7-0>). Hence, LBEI#<7-0> are registered and analyzed at end of data phase to provide cumulative slice size. [0056]
  • EOF signaling. MAC provides in each read data phase, if end-of-frame. EOFI# signal is registered and stored for EOF status. It is also used to close current transfer. [0057]
  • SOF signaling. MAC provides on each read data phase, if Start-of-frame. SOFI# signal is registered and stored for SOF status. [0058]
  • Transmit slice buffer is used to capture (e.g., PMDO) bus data and supply to LAN bus. Slice is 64 bytes. Switch circuit has two 64-byte slice buffers. During LAN FIFO write access, 64-bit data words are read from selected slice buffer. One clock pre-read is implemented to provide minimum delay time on LAN data (LD) bus. Read order is from top to bottom. [0059]
  • Status is maintained for respective transmit slice. Slice status is loaded by [0060] PM engine 90 when moving slice from PM. Status information includes:
  • Slice size (represented by 6-bit number.) Maximum is 64 bytes. When slice is read from PM bus, PM engine registers slice size. [0061]
  • EOF signaling. [0062] PM engine 90 registers signal while transferring slice from PM bus. If status is on, LAN FIFO controller asserts EOF# signal at appropriate data phase.
  • SOF signal. PM engine registers signal while transferring first slice of packet from PM. If status is on, LAN FIFO controller asserts the SOF# signal at first data phase. [0063]
  • [0064] Buffer scheduler 94 allocates transmit and receive data buffers to requesting agents, keeps track of busy/free status of each buffer, and allocates free buffer to requesting agent. Buffer scheduler 94 optimizes for (a) equal distribution of bandwidth between receivers and transmitters, (b) avoiding deadlock situation of transmit buffer, and (c) achieving highest concurrence of LAN bus and PM bus.
  • Datapath controller includes buffer attributes [0065] 92 for receive and transmit buffers 84, and tracks byte count on per-slice basis. Buffer attributes 92, such as End-of-Packet (EOF), Start-of-Packet (SOF), Byte Enables (BEB), and Slice Count, are tracked from time data arrives into receive or transmit buffer until data leaves buffer. Buffer attribute 92 information is used by packet memory engine 90 to track progress of packet flowing through switch circuit 20 on per-slice basis. Datapath controller interacts with buffer scheduler 94 at end of slice transfer to release buffer. Synchronization between PM SDRAM controller and LAN bus interface 40 is thereby accomplished.
  • Packet memory resides on dedicated SDRAM bus. [0066] Switch circuit 20 integrates SDRAM controller to access packet memory 16. PM SDRAM controller functionality includes: 32-bit interface operating at 99-Mhz to 8 MB of external SDRAM; support for up to 7 internal requesting agents; arbitrates requests and generates request to SDRAM control block; pipelines requests for maximum efficiency and throughput; bursts of 4 (one bank), 8 or 16 (both banks) accesses on SDRAM; and maximum performance at 16 bursts and minimum performance at single read or write.
  • Route processing is provided by MIPS R4000 [0067] family RISC processor 12, which interfaces with switch circuit through address/data multiplexed bus. RISC processor interface may use external system controller, for example, for communicating with switch circuit 20 through processor slave port. RISC processor serves as switch or route processor 12. Several register resources in switch circuit 20 are used by RISC processor 12 to control configuration and operation of switch circuit 20. RISC processor 12 may access resources outside of switch circuit 20, such as packet memory 16, route cache memory, and CAM for L2 forwarding, with such access being controlled by switch circuit 20. Switch circuit 20 communicates status of operation and draws attention of processor 12 through status and process attention registers. When configured, switch circuit 20 performs DMA of data from packet memory to processor local memory, and forwards packets to processor queue maintained by switch circuit 20.
  • Preferably, route processor (RP) [0068] 12 is NEC Vr4300 RISC microprocessor from MIPS family with internal operating frequency of 133 Mhz and system bus frequency of 66 Mhz. Processor 12 has 32-bit address/data multiplexed bus, 5-bit command bus for processor requests and data identification, six handshake signals for communication with external agents, and five interrupts. Bus width can be selected as 32-bit operation. Processor 12 supports 1, 2, 3 and 4-byte single accesses and 2, 4 and 8 word burst accesses. Processor 12 uses little endian when accessing switch resources.
  • [0069] RP 12 is interfaced to switch circuit 20. RP 12 communicates with NMP 10 through interprocessor communication (IPC) bus 24, and accesses switch local resources, such as packet memory 16, L3 CAM (Route Cache) 28, control memory 136 and L2 CAM 142 through switch circuit 20 and local resources, such as local memory, ROM etc., through system controller. Two interrupts are used by switch circuit 20 to issue interrupt requests to processor 12. Two slaves on RP processor 12 system bus are switch and system controller. Switch is final agent to provide ready signal to processor requests that switch or system controller is ready to accept. During DMA transfer, switch acts as master.
  • Write access is implemented as ‘dump and run’ with two pipelined buffers to improve system performance. This allows two back-to-back write cycles. One read request is processed at a time. [0070] Processor 12 accesses internal register resources in 32-bit mode. Write buffer and read buffer are provided to packet memory 16 to match frequency difference of 99-Mhz and 66-Mhz. Memory interface to switch is 32-bit. Maximum burst size to packet memory 16 is four 32-bit words (i.e., 16 bytes). Read buffers are provided to L3CAM and control memory 136 because of 16-bit interface to switch. Little endian is used when data is packed and unpacked during write or read requests to 16-bit interfaced memories. Maximum burst size to L3CAM 28 is 16 bytes, and to CM is 8 bytes. Write or read request to memories is arbitrated with requests from agents inside switch, such as forwarding engine, L3 engine, etc., so latency depends on various factors.
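  • The little-endian packing and unpacking described above can be pictured with the following C sketch; the helper names are hypothetical and are shown only to illustrate how a 32-bit processor word maps onto a 16-bit memory interface:

    #include <stdint.h>

    /* Hypothetical helpers: split a 32-bit processor word into two 16-bit
     * half words for a 16-bit memory interface (little endian: low half first),
     * and pack two half words back into a 32-bit word on the read path. */
    static void unpack32(uint32_t word, uint16_t half[2])
    {
        half[0] = (uint16_t)(word & 0xFFFF);    /* low half word written first  */
        half[1] = (uint16_t)(word >> 16);       /* high half word written second */
    }

    static uint32_t pack32(uint16_t low, uint16_t high)
    {
        return (uint32_t)low | ((uint32_t)high << 16);
    }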
  • During write access, [0071] processor 12 owns mastership or control of bus. During read requests, processor 12 enters into uncompelled slave state after address phase, giving bus control to external agent to drive data.
  • FIG. 7A illustrates DMA transfer between [0072] RP processor 12 local memory 100 and packet memory 16. DMA transfer between packet memory 16 and NMP processor local memory is also provided in architecture. NMP processor system controller responds to DMA master requests between packet memory and NMP processor local memory. DMA is implemented using two design blocks called DMA engine 104 and DMA master 102. DMA engine 104 is interfaced to packet memory 16, and DMA master 102 to processor system bus. DMA is initiated by setting bits in DMA command register. DMA transfer between local memory 100 and packet memory 16, or vice versa, occurs substantially as follows:
  • [0073] DMA engine 104 notifies DMA master 102 to initiate DMA transfer when packet is pending by giving request. DMA master 102 arbitrates for processor bus with RP processor 12 as another master by giving request (e.g., EREQ) to processor 12. During DMA transfer, switch circuit 20 acts as master to system controller 98. RP processor 12 gives bus control to DMA master 102 when ready. When bus is granted by processor, DMA transfer begins. Mastership of processor bus can be re-acquired by RP processor 12 between each slice transfer, which is maximum of eight 32-bit words (i.e., 32 bytes). DMA engine 104 reasserts request after each slice transfer, until block of packet data is transferred. At end of DMA, bus control is given to processor.
  • When bus is in uncompelled slave state, DMA master [0074] 102 does not access processor system bus to simplify design. While DMA transfer is taking place on bus, system controller 98 does not drive bus, assuming bus in slave state.
  • FIG. 7B illustrates [0075] RP processor 12 access to packet memory (PM) 16 through L2/L3 switch circuit 20. Switch interface to packet memory 16 is 32-bit, and maximum burst size is 16 bytes. Synchronous DRAM is chosen for packet memory that can be operated at 66-Mhz, 99-Mhz and 125-Mhz frequencies. During processor write request, processor dumps write-data into front-end pipeline buffers 106. Slave state machine 108 provides such data into packet memory write buffer 110. Processor request is arbitrated with LAN requests and L3 engine requests in PM SDRAM arbiter to access PM 16. PM SDRAM controller 112 generates control signals for SDRAM. During processor read request, read-data is provided in PM read buffer 114 from packet memory bus. Synchronizer 116 converts 99-Mhz signal into 66-Mhz pulse signal that initiates slave state machine to empty read buffer. Read data is muxed with data from other blocks and driven to processor system bus. Packet memory to local memory (PM-to-LM) DMA transfer data is not written into read buffer, but passed to processor system bus.
  • FIG. 8A illustrates [0076] RP processor 12 access to L3CAM (route cache) memory 28. RP processor 12 accesses L3CAM 28 through switch circuit 20 to initialize entries and maintain data structures. Additionally, FIG. 8B shows control memory 136 access through switch circuit 20. For both such memory accesses in FIGS. 8A-B, RP processor 12 couples to switch circuit 20 through 66-Mhz, 32-bit processor system bus, wherein pipeline buffers 106 receive processor write data and couple to slave state machine 108.
  • In [0077] switch circuit 20 shown in FIG. 8A, L3CAM write buffer couples to slave machine 108 and L3CAM SDRAM controller 120, which receive requests from other agents and couples to L3CAM memory 28 through 66-Mhz, 16-bit bus. L3CAM read buffer 122 provides read data through 32-bit processor bus and couples to slave state machine 108 and register 134 over 16-bit bus. Register 134 receives 66-Mhz clock signal and couples to L3CAM memory 28 through 66-Mhz, 16-bit bus.
  • In [0078] switch circuit 20 shown in FIG. 8B, CM write buffer 128 couples to slave machine 108 and CM SSRAM controller 130, which receive requests from other agents and couples to control memory 136 through 66-Mhz, 16-bit bus. CM read buffer 132 provides read data through 32-bit processor bus and couples to slave state machine 108 and register 124 over 16-bit bus. Register 124 receives 66-Mhz clock signal and couples to control memory 136 through 66-Mhz, 16-bit bus.
  • Synchronous SDRAM is chosen for [0079] L3CAM 28, and Synchronous SRAM is chosen for control memory 136. Switch interface to both memories is 16-bit, and both memories operate at 66-Mhz. Processor 12 access to memories is similar in both cases, maximum burst size to L3CAM memory is 16 bytes, and maximum burst size for control memory 136 is 8 bytes. Data is packed and unpacked for each processor access.
  • Each [0080] memory 28, 136 has write buffer 118 into which processor write-data is provided from pipeline buffers 106 by slave state machine 108. Since memory interface is 16-bit, processor write data is divided into two 16-bit half words. Processor 12 request to L3CAM memory 28 is arbitrated with L3 engine requests. Processor 12 request to control memory is arbitrated with forwarding engine, FQMB, L3 engine, XQMB, BAM and DMA. During processor read request, L3CAM or CM read data from memory bus is provided in read buffer. When last transfer is triggered, slave state machine 108 starts emptying read buffer 122 appropriately and packs two half words into 32-bit word, and puts on processor system bus.
  • In FIG. 9A, [0081] RP processor 12 accesses L2CAM memory 142 through switch circuit 20. Content Addressable Memory (CAM) is chosen for accessing L2CAM memory 142, which operates at 66-Mhz frequency. Switch circuit 20 interface to L2CAM memory is 16-bit. Processor 12 executes commands write/read and data write/read to L2CAM 142 using CAM access registers provided inside switch circuit 20. Processor 12 accesses L2CAM 142 through register-based request-grant handshake by loading L2CAM Access Control & Status Register to execute processing cycles.
  • [0082] RP processor 12 arbitrates with CAM arbiter 138 in switch circuit 20 for CAM bus. For processor requests, slave state machine 108 generates control signals for CAM 142, and CAM arbiter engine 138 processes switch requests. During processor write request, processor 12 provides write-data in pipeline buffers 106. When CAM bus is granted by CAM arbiter 138, slave state machine 108 puts data from pipeline buffer 106 on CAM bus. During read request, read-data from CAM bus is muxed 140 with data from other blocks and passed to processor system bus. Write/read buffers need not be provided in present case.
  • FIG. 9B shows LAN arbiters interaction with datapath. Register files [0083] 144 for receivers and transmitters, including corresponding block address registers and byte count registers, couple to block address register and byte count register coupled to state machines 148, which couple to switch data path 44 and receive and transmit arbiter 150.
  • [0084] Packet switch engine 82 performs control functions for transfer request in and out of receive and transmit buffer to packet memory 16. Packet engine 82 handles LAN data movement, command process, and PM address calculation.
  • For LAN data movement, [0085] packet memory engine 82 sets up for moving slice between packet memory 16 and allocated data buffer. This is triggered by scheduler while slice is scheduled to move in/out of data buffer. PM engine has access right to block address registers and registers to understand actual address on PM 16 and update packet size.
  • For command process, [0086] packet memory engine 82 executes systematic hardware processes when Forwarding Block and Transmit Queue Management Block (FB/TQMB) generates instructions such as: link, receive enable, transmit enable, receive reject, etc. Hence, end of packet reception/transmission is noticed for next packet initialization. In notifying such events, priority encoding is provided for first-in/first-service integrity.
  • For PM address calculation, [0087] packet memory engine 82 regards Ethernet ports as 32 concurrent full-duplex DMA channels. Relevant PM pointers for each channel are maintained. For every slice transfer, PM bus address is calculated.
  • Preferably, buffer attributes [0088] 92 are provided in attribute block address array, which is 3-port architecture having 64×12-bit 3-port memory array. Port-1 is write port; port-2 is read port; and port-3 is read port. Packet memory engine 82 can write/read memory locations using various ports. Forwarding engine (FE) can read locations asynchronously. Port-3 is assigned for FE.
  • First 32 locations are used for “Receive block address” of 32 receive ports. Next 32 locations are “Transmit block address” for 32 transmit ports. [0089] PM Engine 82 initializes block address for receive/transmit ports on command of Auto Forwarding Block. PM engine 82 reads block address relevant to receive/transmit port under service. PM engine 82 uses block address to identify packet in PM 16.
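  • A minimal C model of the block address array layout described above (first 32 locations for receive ports, next 32 for transmit ports); the array and accessor names are illustrative, not the hardware implementation:

    #include <stdint.h>

    #define NUM_PORTS 32

    /* Illustrative model of the 64 x 12-bit block address array:
     * entries 0..31 hold receive block addresses, 32..63 transmit ones. */
    static uint16_t block_addr[2 * NUM_PORTS];   /* only low 12 bits used */

    static uint16_t rx_block_addr(unsigned port)
    {
        return block_addr[port] & 0x0FFF;                /* receive half  */
    }

    static uint16_t tx_block_addr(unsigned port)
    {
        return block_addr[NUM_PORTS + port] & 0x0FFF;    /* transmit half */
    }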
  • CAM interface block analyzes incoming packet at [0090] layer 2, i.e., at MAC layer. Analysis result is forwarded to Auto Forwarding Block state machine. CAM processor is called for attention when ether header block is loaded in ether header memory. On such trigger condition, after acquiring CAM bus interface, CAM Processor starts defined fast processing action. Block contains layer 2 header memory, analyzed and to be analyzed port FIFOs, and result memory. CAM block interfaces to internal memories organized as memories and FIFOs as well as external CAM to accomplish L2 lookup.
  • When transfer of receive data begins, beginning of header is identified and required header information is loaded into ether header memory. Sixteen-byte header blocks are reserved for each port in header memory. Loaded indication is updated on 5-bit entry in 32-deep ether header to-be-analyzed FIFO. Such FIFO provides first-in/first-service feature. [0091]
  • Ether header memory is 2-port memory having 64×64-bit architecture. Port-1 is write port, and port-2 is read port. Such memory is located on LAN side of receive buffer. As first slice of new receive packet is loaded into receive buffer, header slice (i.e., 16 bytes) is written to ether header memory in 64-bit words. Ether port number is used as reference address to select header block number. Maximum of 32 header blocks can be stored in such memory. Port-2 is used by CAM processing engine. CAM engine reads 16-bit quantity at a time through front-end 64:16 multiplexer. [0092] L3 header information, up to 8 bytes per port, is stored in different memory. Such information is used by L3 lookup engine during routing operation.
  • Ether-to-be-analyzed FIFO memory is 32×5-bit two-port memory, holding maximum of [0093] 32 port numbers to be analyzed. Port-1 is write port, and port-2 is read port. FIFO is written with port number when first slice of data is received on LAN bus and header loaded in ether header memory. CAM Processor reads port number through port-2 for indexing header memory. FIFO structure ensures that ports to be analyzed are presented to CAM engine in arrived order.
  • Ether analyzed FIFO memory is 32×6-bit two-port memory, holding [0094] maximum 32 analyzed port numbers. Port-1 is write port, and port-2 is read port. CAM Processor writes analyzed port number through port-1 and Forwarding Engine (FE) reads through port-2. FIFO structure ensures that analyzed ports are presented to forwarding engine in arrived order.
  • Ether result memory is 32×16-bit two-port memory, holding results for 32 ether ports. Port-1 is write port, and port-2 is read port. [0095] CAM Processor writes L2 forwarding result through port-1, and Forwarding Block (FB) reads through port-2. When CAM Processor processes specific ether port header, it uses port number as address to write result. FB reads port number from Analyzed FIFO to make forwarding decision. FB uses port number as reference address to read CAM analysis result.
  • External CAM memory is 1024×64-bit capacity on standard configuration. Size can be expanded to 2048×64-bit by adding CAM device in vertical expansion. CAM memory is connected on dedicated CAM bus. Such bus is shared between CPU and switch circuit. Normally such bus is default-owned by switch circuit. CPU can use bus by register mode bus handshake. [0096]
  • CAM memory contains [0097] 1024 locations of 64 bits wide. Locations can be assigned as RAM property or CAM property. Location assigned as RAM will not be accounted in lookup process; CAM locations participate in lookup process. Repeatedly used parameters are stored in RAM so that real-time data movement between RAM location and Comparand/mask registers/etc. can happen with minimum overhead. Every location has associated status field which describes entry, such as: empty entry, valid entry, skip entry on lookup, and RAM entry.
  • [0098] Layer 2 header analysis is performed by CAM processor. Ethernet headers are loaded and processed on dedicated Ethernet header memory having 128×32 bit dual port memory. Assuming case where packet received on port(x), switch circuit 20 is triggered on such packet by request from MAC port number(x), which is effectively hardware packet arrival notification.
  • Header is extracted from MAC received data stream. New receive packet data is identified with arrival of SOF, and first 16 bytes are treated as [0099] layer-2 header. If header is concurrent to store access to receive buffer, then header is stored in port-specific block number(x) in header memory. Writing process may not account for port contention. Block written on port-1 may not be accessed on port-2. Header is stored as header block(x). At end of storage, port number is written in ether-to-be-analyzed FIFO, which is 32×5-bit register. FIFO write pointer is incremented after each status write.
  • CAM processor starts when valid entry is loaded in Ether-to-be-analyzed FIFO. CAM Processor maintains read pointer to read valid entry. Valid entry is notified if there is difference between write pointer and read pointer. Entry read provides port number of header. CAM Processor uses port number to reach header block(x). [0100]
  • Preferably, switch system has 32 ports and 32 entries. New packet on port can not be received unless old packet is processed, according to system-level handshake. Hence, at any time, no more than 32 headers/header status may be stored, effectively reducing complexity of FIFO design. PM engine can blindly write header/status without looking for FIFO-full condition. CAM Processor can start as long as pointers are not equal. [0101]
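  • The pointer-based FIFO behavior described above may be sketched as follows in C; because at most one outstanding header per port exists, no full check is required. Names are illustrative, not the hardware implementation:

    #include <stdint.h>
    #include <stdbool.h>

    #define FIFO_DEPTH 32   /* one potential outstanding header per port */

    /* Sketch of the ether-to-be-analyzed FIFO: PM engine writes port numbers,
     * CAM processor reads them in arrival order.  Because at most one packet
     * per port can be outstanding, overflow cannot occur by design. */
    struct hdr_fifo {
        uint8_t entry[FIFO_DEPTH];   /* 5-bit port numbers */
        uint8_t wr, rd;              /* free-running write/read pointers */
    };

    static void fifo_push(struct hdr_fifo *f, uint8_t port)
    {
        f->entry[f->wr % FIFO_DEPTH] = port & 0x1F;
        f->wr++;                                  /* no full check needed */
    }

    static bool fifo_pop(struct hdr_fifo *f, uint8_t *port)
    {
        if (f->rd == f->wr)                       /* pointers equal: empty */
            return false;
        *port = f->entry[f->rd % FIFO_DEPTH];
        f->rd++;
        return true;
    }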
  • CAM processor handles header processing. CAM processor is notified of Ethernet header valid when write pointer and read pointer differ. When entry is valid on Ethernet-to-be-analyzed FIFO, CAM processor reads entry and increments read pointer. Using such value, CAM processor can reach specified header block. Ether header memory is divided into 32 blocks. Port number directly provides starting address of header block. Entries in block are consecutive 16 bytes. [0102]
  • CAM processor processes header block, and writes result on port-specific location on Ether result memory. CAM process completion is notified to Auto Forwarding Block through Ethernet result FIFO, which is 32-deep register construction. Each entry is 6-bit wide. Entry is result of CAM memory lookup. If set, CAM hit occurred on destination MAC address, and routing tag in header block is valid. If clear, CAM lookup fails; routing tag does not contain valid information. [0103]
  • To write on result FIFO, CAM processor has write pointer, which is 5-bit counter. CAM processor writes entries, whereas AFB reads entries. When CAM completes process, it writes result entry, and increments write pointer. Finally, CAM processor increments Ether header status FIFO read pointer to point to next entry. [0104]
  • CAM processor header processing includes learning process of: source lookup, source port read, and source learning. CAM processor learns MAC addresses arriving from Ethernet ports. As associated process of CAM lookup, CAM processor determines whether source address was learned previously, i.e., by reading source address from Ether header memory, and writes CAM for lookup. If match occurs, processor presumes source port was learned; it reads existing port information from associated data to compare whether port is same as receiving port. If MAC header matches, whether or not ports match, processor makes entry live and at same time relearns receiving port. If receiving port number does not match learned port, Source Address (SA) Learned flag is set. If miss, processor learns entry into next free address if CAM is not full, and, if learned, SA Learned flag is set. While updating such new entry, processor follows correct data structure for RAM associated information. [0105]
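  • The source-learning decision described above may be summarized by the following C sketch; the CAM access helpers are hypothetical stand-ins for the actual CAM command sequences:

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical CAM interface, used only to illustrate the decision flow. */
    bool cam_lookup_sa(const uint8_t sa[6], unsigned *entry, unsigned *learned_port);
    void cam_refresh(unsigned entry, unsigned rx_port);        /* make live, relearn port */
    bool cam_learn_new(const uint8_t sa[6], unsigned rx_port); /* false if CAM is full    */

    static bool source_learning(const uint8_t sa[6], unsigned rx_port, bool learn_inhibit)
    {
        unsigned entry, learned_port;
        bool sa_learned_flag = false;

        if (cam_lookup_sa(sa, &entry, &learned_port)) {    /* source hit            */
            cam_refresh(entry, rx_port);                   /* make entry live       */
            if (learned_port != rx_port)
                sa_learned_flag = true;                    /* station moved ports   */
        } else if (!learn_inhibit) {                       /* source miss           */
            if (cam_learn_new(sa, rx_port))
                sa_learned_flag = true;                    /* newly learned address */
        }
        return sa_learned_flag;
    }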
  • Optionally, attribute is set with (e.g., ETHR_LRN_INHIBIT) register for each port to inhibit learning on specified ports. If set, during source lookup process after source port read, entry is made live if hit; if miss, MAC address is not learned. Source port read phase can be skipped if source port filtering is not required. [0106]
  • Destination lookup process includes steps: destination lookup and destination port read. CAM processor reads 6-byte destination MAC address from header memory and writes on CAM for comparison lookup. If miss, destination is assumed unknown; if hit, destination is available through memory (e.g., ARAM) field, which provides destination port number and port/MAC address specific status and control flags. If hit, CAM processor reads ARAM field and writes in result memory, setting hit flag. If miss, CAM processor has nothing to read and writes miss flag to result memory. Rest of result data is not valid in miss case. Forwarding blocks read this field for analysis and forwarding decision. At end of process, CAM analysis done flag is set for packet on receiving port. [0107]
  • CAM processor analyzes results of source lookup and destination lookup processes to determine how to process incoming packet. Each port has two bits allocated to handle spanning-tree protocol requirements. One bit is allocated for ‘Port Blocked State’ flag and other for ‘Learn Inhibit’ flag, which is used for learning of MAC addresses on receiving port. ‘Port Blocked State’ flag is used for forwarding decision. Filtering bits in results from both source lookup and destination lookup, along with port-specific STP control bits relating to forwarding, and source and destination ports read as result of destination lookup, are considered. [0108]
  • CAM processor sets CAM analysis completion status for receiving port. If destination lookup resulted in hit, and destination port is one of physical LAN ports with cut-thru switching enabled on port, or is CPU port, port number is written to Ether analyzed FIFO. CPU port is allowed to enable [0109] Layer 3 analysis parallel to packet reception. Result processing is done by Auto Forwarding Block (AFB). AFB is notified of CAM process completion through Ethernet Analyzed FIFO. AFB can read highest priority FIFO entry using hardware hidden read pointer. If read pointer and write pointer are different, one or more valid entries are available in Ethernet analyzed FIFO. AFB reads valid entry and gets port number. Reading entry increments read pointer, if present entry is valid.
  • Using port number, AFB can access Ether result memory. Refer to Auto Forwarding Block section for details on AFB functionality. If CAM analysis resulted in miss or hit but packet can not be switched, AFB does not need to be notified until packet reception is complete. Hence, CAM processor merely sets CAM analysis completion flag for receiving port. AFB processes packet when both receive completion and CAM analysis completion set for receiving packet. [0110]
  • Aging process is performed by [0111] processor 12 as processor bandwidth requirement for task is relatively low. Time stamp register provides variable granularity for aging. Processor uses instruction set provided by CAM device. Entries to be aged are processed in one instruction, though setup is required before executing instruction. In addition to status bits provided by CAM for every entry, 3 bits in RAM field are dedicated for aging information. Status provided by CAM is used to identify if entry is ‘Valid’, ‘Empty’, ‘Skip’ or ‘RAM only’. One of bits allocated in ARAM field is used to mark entry ‘Permanent’. Entries marked ‘Valid’ and not ‘Permanent’ are considered for aging. Additional two bits in ARAM field provide flexibility to CPU to implement aging process.
  • When entry is visited during source lookup process of CAM analysis, if source is found, ARAM field is updated with latest time stamp from (e.g., ETHER CAM CONTROL) register. If new source is learned, in addition to port, time stamp bits are written into ARAM field. When processor visits CAM to age out entries, it searches CAM for entries with oldest time stamp. In search process, processor configures mask registers in CAM in such way that age bits enter comparison, and entries that are not ‘Valid’ or are marked ‘Permanent’ do not enter comparison. In next instruction, processor can clear ‘Valid’ bits on matching locations to ‘Empty’ state. By doing so, oldest entries are marked empty. From that point, aged entries do not enter compare operation until made ‘Valid’ again during normal learning process. [0112]
  • Auto forwarding block (AFB) is hardware Ethernet packet forwarding engine and queue processor. AFB analyzes incoming packet and may forward packet both at [0113] layer 2 and layer 3. After forwarding analysis is done, AFB posts and maintains port queues. AFB may accept packets from processor interface and post packet in requested queues. AFB provides processing power on packet-by-packet and manages required information for integrity of packet routing strategy. AFB feeds initial setup information for each ether packet for each port to run data transaction.
  • AFB functionality enables [0114] switch circuit 20 to perform forwarding and filtering without real-time assistance from processor 12. Processing element is out of the datapath, and forwarding and filtering is done at line rate for supported ports. AFB functionality includes: free queue management, block attributes management, receive port management, forwarding and filtering, transmit queue management, quality-of-service (QoS) support, and control memory interface.
  • Forwarding function features port linking, wherein receive port is linked to transmit port before packet is fully received, thereby improving latency between received and transmitted packets. Port linking is accomplished in forwarding stage if conditions are suitable. For example, packet can cut-thru with unique destination, i.e., no more than one port is target destination for packet. Cut-thru enable conditions are satisfied for certain values, such as: destination port, speed-matching logic, xmtPortEn, xmtPortNotBsy, xmtQNotVld and mirrNotVld. Data arrival speed should not exceed transmitting port speed. Transmitter should be ready to accept command. Transmitter may be busy transmitting data, or there may be packets waiting in transmit queue. Also, there should be minimum of data present in buffer before process can start, or arbitration latency may result in transmit FIFO under-run condition. In such case, transmitter is linked but does not start transmitting data until required minimum data is received in packet memory. [0115]
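  • A hedged C sketch of the port-linking (cut-thru) condition assembled from the flags named above; the structure and field grouping are assumptions for illustration only:

    #include <stdbool.h>

    /* Illustrative combination of the cut-thru conditions named above; field
     * names mirror the flags in the text, but the structure is hypothetical. */
    struct link_check {
        bool unique_dst;        /* exactly one target destination port      */
        bool cut_thru_en;       /* cut-thru enabled on destination port     */
        bool xmtPortEn;         /* transmit port enabled                    */
        bool xmtPortNotBsy;     /* transmitter not busy with another packet */
        bool xmtQNotVld;        /* no packets already queued for transmit   */
        bool mirrNotVld;        /* no mirroring in effect                   */
        bool speed_match;       /* arrival speed does not exceed tx speed   */
    };

    static bool can_link_ports(const struct link_check *c)
    {
        return c->unique_dst && c->cut_thru_en && c->xmtPortEn &&
               c->xmtPortNotBsy && c->xmtQNotVld && c->mirrNotVld &&
               c->speed_match;
    }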
  • FIG. 10A shows Transmit Queue Management Block (XQMB) [0116] 154, which is hardware block for managing transmit queue functions for switch circuit 20. XQMB 154 couples to forwarding engine (FE) 52, DMA Engine 104, block attribute memory (BAM) 152, PME 90, queue attribute memory (e.g., AttrRAM) 156, port AttrRAM 158, and control memory 136 through interface 160. XQMB 154 functionality includes: initializing and managing transmit queues for each port; maintaining QOS parameters (i.e., tokens) for each port; queues (e.g., nQueues and dQueues) blocks to/from control memory transmitter queues; forwarding blocks to requesting transmitter; returning block numbers to BAM controller 152; forwarding multi/broadcast block in ‘background’; supporting 28 physical ports, 3 logical ports and multi/broadcast port; and using round-robin priority scheme to service requests. Furthermore, FIG. 10B shows queue processor state machine 162, which couples to transmit arbiter 80, block address and byte count registers 164 in control memory, and transmit queue 136.
  • FIG. 11 [0117] shows DMA engine 104, which couples to CPU register 166, CPU master interface 36, XQMB 154, packet memory interface 42, and control memory interface 160. Generally, DMA engine handles data transfer between packet memory 16 and CPU local memory 100 so that CPU 12 may perform other tasks. CPU 12 packet send is enabled by creating packet in local memory 100, register set-up, and initiating packet transfer. Also, packet receive is enabled by notifying CPU 12. CPU 12 checks block attribute to determine whether to process packet. If CPU 12 is to transfer packet to local memory 100, DMA engine 104 is notified to proceed. Otherwise, register is written to de-queue packet.
  • FIG. 12 flow chart shows [0118] CPU 12 to packet memory 16 operation. Initially, in software, CPU sets up register and initiates packet transfer 168. Then, in hardware, processor determines 170 whether to initialize block attribute 176, whether 172 to initialize DMA transfer 178, and whether 174 to write command to XQMB 180.
  • FIG. 13 flow chart shows [0119] packet memory 16 to CPU 12 operation. Initially, in hardware, block attribute is read 182, and CPU 12 is notified 184. Then, in software, CPU checks whether DMA is needed 186; if not, register is set to de-queue 188. Then, in hardware, DMA transfer occurs 190, and CPU 12 is notified.
  • FIG. 14 shows switch [0120] circuit 20 with L3 engine 70 coupled to FE 52, interface 36 to CPU 12, interface 42 to packet memory 16, IP header RAM 198, MAC address RAM 196, interface 194 to L3 CAM 126, and interface 160 to control memory 136.
  • As packet is received, [0121] L3 check block captures destination IP address, Time To Live field and Checksum field in L3 Header Memory, for use by L3 block 70 for L3 lookup and processing. L3 check block processes rest of packet header. Receive packet is checked for IP protocol field, and to detect packets for specialized handling.
  • IP header length is checked to determine whether packet needs specialized option processing. If header length is not equal to 5 32-bit words, option processing is applied to packet. Time To Live field is checked to see if TTL field is more than 1; if less than 1, packet is marked with TTL error flag. IP packet length is checked for minimum length to contain full IP header. Checksum of header is performed. [0122]
  • Results of above checks are written into [0123] informational (e.g., L3 INFO) Memory. L3 INFO Memory is 32 bytes wide. Each location is dedicated for corresponding numbered port. Result of L3 header checks for receiving port is stored in corresponding location and used by Forwarding Block to decide whether packet is sent to L3 Block for processing.
  • [0124] L3 check (e.g., CHK) block takes into consideration if arriving packet contains VLAN tag, if VLAN tag option is enabled. If so, hardware accounts for shift in appropriate fields for L3 header checking process. This amounts to 4-byte shift of L3 header following MAC header. Optionally, VLAN priority bits are extracted and passed along with L3 INFO. VLAN priority bits may be enabled to override QoS information set in L2 CAM result and L3 Header Lookup result. Programmable register is provided to load pattern to identify if incoming packet is VLAN tagged packet.
  • [0125] L3 Engine (L3E) 72 is hardware block for implementing the Layer 3 CAM lookup and age table functions for switch circuit 20. L3E 72 receives requests from forwarding engine (FE) 52 and CPU 12, processes requests and returns results to requester. L3E 72 lookup functions include: receiving, buffering and processing lookup requests from FE 52; providing hardware to calculate hash index from destination IP (DstIP) address provided by FE 52; reading CAM entry at address and checking for IP address match; following linked list entries until match is found or end of list is reached; and returning lookup result to FE 52. L3E 72 age table maintenance function includes: maintaining age table in control memory 136; adding and deleting entries in table by CPU 12 request; aging table at CPU-controlled intervals; reporting aged entries to CPU; maintaining aging time stamp; and making entries live. L3E 72 CAM management assistance function includes: providing hardware hash calculation function for CPU 12; implementing search function which scans L3 CAM and reports matching entries; and providing change option to search function which writes new data into matching entries.
  • [0126] CPU 12 interface to L3 Engine 70 is for age table and L3 CAM maintenance. Initial CAM entries are written to L3 CAM 126 by CPU 12 through dedicated control memory interface port. Managing linked entries and free buffers is done by CPU 12. Searching for entries and reporting or changing them is accomplished by appropriate command registers.
  • Age table entries are created and deleted by [0127] CPU 12 using add and delete commands. Aged entries are reported to CPU 12 and deleted by CPU 12 using delete command. Hardware modifies age table entry time stamp when entry is made live.
  • [0128] Packet memory 16 includes 8-MB SDRAM with 4 1M×16 devices providing 32-bit data path to 4096 2KB blocks for packet storage. L3 Engine 70 writes to packet memory 16 to modify fields (e.g., destination address (DA), source address (SA), TTL and checksum) in packet following L3 lookup. DA and SA fields are written in 32-byte burst with byte enables set appropriately. MAC address RAM 196 is 32-entry RAM, indexed by port number, which contains lower byte of MAC address for each physical port.
  • [0129] IP HDR RAM 198 is 2-port Internet Protocol header memory RAM located on switch circuit 20. Each entry contains IP values (e.g., TTL, checksum and DST IP) for packet. Write port of RAM 198 is used by packet memory engine 90 to store data from packet IP header. As data streams to packet memory 16, appropriate bytes are pulled and written to RAM 198. L3 Engine 70 uses read port of RAM 198 to access data required to process lookup request from FE 52. Entries are indexed by port number, so receive (RCV) port number is used to lookup entry.
  • [0130] L3 CAM 126 is contained in 2-MBytes synchronous DRAM (SDRAM) located in single 1M×16 part. Since SDRAM is optimized for burst transfer, L3 Engine 70 accesses occur in bursts of eight 16-bit words. On-chip arbiter/controller logic for L3 CAM 126 memory has multiple ports to allow better pipelining of accesses and L3 engine 70 uses two of these ports.
  • [0131] L3 CAM 126 data structure is implemented as hash table combined with pool of free buffers which can be linked to entry in hash table. Entry, whether in hash table or free buffer pool, is 8 words (16 bytes). Entry is referred to by entry number, 17-bit number used when indexing CAM, when indexing into age table or when reporting results of search or aging operation.
  • Base hash table contains 64K entries and resides in lower 1-MByte SDRAM. Entries in table have entry numbers in 0 to 64K range, i.e. [0132] bit 16 of entry number is set to ‘0’. Entries located in free buffer pool are in upper 1-Mbyte of SDRAM, and entry numbers have bit 16 set to ‘1’. Address of first word of entry in CAM is determined by concatenating entry number with 3 bits of ‘0’.
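  • The entry-number-to-address relationship described above can be expressed as the following C sketch (illustrative only; each entry occupies 8 sixteen-bit words, so the word address of an entry is its entry number followed by three ‘0’ bits):

    #include <stdint.h>

    #define FREE_POOL_BIT  (1u << 16)     /* bit 16 set => free buffer pool entry */

    /* Word address of the first of an entry's 8 words: entry number << 3. */
    static uint32_t cam_word_addr(uint32_t entry_number)
    {
        return (entry_number & 0x1FFFF) << 3;
    }

    static int in_free_pool(uint32_t entry_number)
    {
        return (entry_number & FREE_POOL_BIT) != 0;   /* 0: base hash table */
    }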
  • [0133] CPU 12 creates entries in hash table for DstIP addresses by hashing address and using resulting 16-bit hash index as offset to entry in table. When multiple addresses hash to same entry in base table, link is created to free buffer pool entry. If additional addresses hash to same location, they can be added to end of linked list. CPU 12 creates and maintains entries and manages linked list structures.
  • Control memory block (CTL MEM) [0134] 136 uses 128K×16 synchronous SRAM (SSRAM), instead of SDRAM devices because most data structures stored require single read and write accesses. L3 Engine 70 uses 32-KB portion of control memory to store age table. It does single read followed by single write of word in age table. Each 16-bit word contains age table information for 4 CAM entries. Aging information for particular L3 CAM 126 entry is accessed by using CAM entry number divided by 4 as address into age table.
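  • A C sketch of the age table addressing described above; the 4-bit per-entry layout (2-bit time stamp plus valid and permanent bits) is an assumption for illustration only:

    #include <stdint.h>

    /* One 16-bit control-memory word holds aging information for four CAM
     * entries, so a word is addressed by entry number divided by 4. */
    static uint32_t age_word_addr(uint32_t entry_number)
    {
        return entry_number >> 2;                 /* entry / 4 */
    }

    static unsigned age_field(uint16_t word, uint32_t entry_number)
    {
        unsigned shift = (entry_number & 3) * 4;  /* nibble within the word */
        return (word >> shift) & 0xF;             /* assumed 4-bit layout   */
    }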
  • Forwarding Engine (FE) [0135] 52 performs lookup requests to L3 Engine 70 for each IP packet to be processed. Four-deep FIFO buffer is provided to buffer these requests. FE 52 provides RCV Port Number and Block Number for each packet. After lookup is complete, L3 Engine 70 returns RCV Port Number as well as L3 Result and L3 Status word containing various flags and information from matching CAM entry.
  • Regarding age table support, since [0136] control memory 136 containing age table does not support locked operations, table modifications are done by hardware. Such table modifications address condition of two agents trying to modify same table entry. CPU 12 can initialize entries to invalid state at startup by writing to control memory; but during operation, hardware performs table modifications.
  • Age table operations are done by [0137] CPU 12 write to age command register. Write to age command register causes Age Table Busy flag in L3 Status register to be set until operation is complete. Aged entries are reported in registers (e.g., AgeResult 1&2).
  • In FIG. 15A, age table maintenance is illustrated, starting with CPU or live command processing [0138] 200, then determine if age command 202 applies. If so, increment time stamp 210 and set age flag; otherwise, read table entry 204, mask and modify table entry 206, and write table entry 208. Further, in FIG. 15B, after age flag set 212, age table is read 216, then determine age out 218. If so, then write result registers 220, set result valid 222, wait for CPU 224, and clear result valid 226; otherwise determine 228 if last entry. Next, clear age flag 232, 230 and read hash table 240.
  • Time stamp is 2-bit value providing four age time intervals. There are two age time counters, currTime and ageTime. CurrTime is reset to zero and increments when [0139] CPU 12 issues age command. Entries with time stamps equal to this value are newest entries. AgeTime value is always equal to currTime + 1 (i.e., currTime − 3), modulo 4. Entries with time stamps equal to ageTime are aged next time CPU 12 issues age command.
  • CPU adds entry to age table when creating new entry in [0140] L3 CAM 126. Until entry is added to age table, entry does not participate in aging process. CPU 12 writes (e.g., AgeCmd) register with entry number and add or add permanent command, and hardware reads appropriate entry, modifies valid and permanent bits appropriately and writes currTime into time stamp field.
  • Hardware makes entry live (i.e., accessed) when [0141] L3 CAM lookup results in IP hit. Entry number of matching entry is used to access age table, and time stamp field is updated with currTime. Entries which are accessed frequently have more recent time stamp than infrequently used entries, and are not aged out.
  • [0142] CPU 12 deletes entry in age table when removing entry from L3 CAM 126. CPU 12 writes AgeCmd register with entry number and delete command, and hardware reads appropriate entry, clears valid bit, and writes modified entry back to table.
  • When [0143] CPU 12 age timer expires, CPU writes AgeCmd register to initiate aging process. This sets AgeCmd Busy bit in L3 Status register until entire table is aged. Add and delete commands can be issued, but new age commands have no effect.
  • When CPU writes ageCmd register, hardware increments ageTime and currTime counters and resets aging address counter to zero. Hardware reads 32K words of age table and checks if any time stamp fields are equal to ageTime. Entries with time stamps equal to ageTime are reported to [0144] CPU 12 as aged out. CPU 12 deletes aged entry from CAM and age table.
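  • The aging pass described above may be sketched as follows in C; the table accessor and reporting helpers are hypothetical, and the per-entry bit layout is assumed as in the earlier age table sketch:

    #include <stdint.h>

    #define AGE_TABLE_WORDS  (32 * 1024)    /* words read per aging pass (see text) */

    extern uint16_t age_table_read(uint32_t word_addr);   /* hypothetical accessor */
    extern void     report_aged(uint32_t entry_number);   /* hypothetical reporter */

    /* ageTime trails currTime by one step modulo 4; valid, non-permanent entries
     * whose time stamp equals ageTime are reported as aged out. */
    static void age_scan(unsigned curr_time)
    {
        unsigned age_time = (curr_time + 1) & 3;          /* == currTime - 3 mod 4 */

        for (uint32_t w = 0; w < AGE_TABLE_WORDS; w++) {
            uint16_t word = age_table_read(w);
            for (unsigned i = 0; i < 4; i++) {
                unsigned f = (word >> (i * 4)) & 0xF;     /* per-entry nibble      */
                unsigned stamp     = f & 3;               /* 2-bit time stamp      */
                unsigned valid     = (f >> 2) & 1;        /* assumed bit positions */
                unsigned permanent = (f >> 3) & 1;
                if (valid && !permanent && stamp == age_time)
                    report_aged(w * 4 + i);               /* aged-out entry number */
            }
        }
    }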
  • To assist CPU in managing linked CAM entries, hardware reports aged entry number and entry number of previous entry in linked list. For aged entries in base hash table, zero value is reported for previous entry. When result is posted to result registers, Age Result Valid bit is set in [0145] L3 Status register, and aging process is halted until result is read by CPU 12. Reading AgeResult register restarts aging process and clears status register bit.
  • First, aged entry number is used to access [0146] L3 CAM 126 to retrieve DstIP for entry. DstIP is hashed to locate base hash table entry, and CAM entry at address is read. Hardware follows linked list, reading CAM entries until retrieving entry with Link Address equal to original aged entry number. Entry number is reported along with aged entry number in AgeResult registers.
  • [0147] CPU 12 provides L3 CAM management functions, including initial setup, adding entries, deleting entries and managing linked lists and free buffer pool. Hardware provides automatic search/change capability to assist CPU 12 in locating entries with certain characteristics and optionally changing such entries.
  • Search operations are initiated by [0148] CPU 12 write to SearchCmd register. Write to SearchCmd register causes Search Busy flag in L3 Status register to set until operation is complete. Matching entries are reported in (e.g., SearchResult) registers.
  • FIG. 16A shows search operation steps. CPU initiates [0149] search 234, writes commands 236, initialize entry to zero 238, read hash table 240, then determine match 242. If so, write result to registers 244, and wait for CPU 246; otherwise, determine if linked 248. If so, clear age flag 250, else, determine if last entry 254. If so, clear age flag 252; otherwise, clear age flag 256.
  • Hardware performs automatic and exhaustive search of [0150] L3 CAM 126 when SearchCmd register is written. Starting with entry 0, each entry in base hash table is read and checked against search criteria. If entries have valid link address, then linked entries are read and checked. Minimum 64K CAM entries are read.
  • During search, SearchCmd can be written with Abort Flag set, and hardware exits search process. Pending SearchResults are read by [0151] CPU 12 before hardware exits and clears Search Busy flag.
  • For each of 8 words in CAM entry, there are corresponding (e.g., SearchMask and SearchData) registers (16 registers total). Before search command is issued, SearchMask registers are written. ‘0’ in bit position masks bit from consideration in comparison. SearchData registers are written with data values to be matched. [0152]
  • Match is indicated when all eight words of CAM entry meet following representative requirement: (SearchMaskX & SearchDataX) = (SearchMaskX & CamDataX). [0153]
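  • The match requirement can be written as a short C check (illustrative names only):

    #include <stdint.h>
    #include <stdbool.h>

    /* For each of the 8 words, the masked search data must equal the masked
     * CAM data for the entry to match. */
    static bool entry_matches(const uint16_t search_mask[8],
                              const uint16_t search_data[8],
                              const uint16_t cam_data[8])
    {
        for (int x = 0; x < 8; x++) {
            if ((search_mask[x] & search_data[x]) != (search_mask[x] & cam_data[x]))
                return false;
        }
        return true;
    }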
  • To assist [0154] CPU 12 in managing linked CAM entries, hardware reports entry number where match found and entry number of previous entry in linked list. For entries in base hash table matching search criteria, zero value is reported for previous entry. When result is posted to result registers, (e.g., Search Result Valid) bit is set in L3 Status register, and search is halted until result is read by CPU 12. Reading SearchResult2 register restarts search and clears status register bit.
  • As hardware searches CAM entries and follows linked lists, it stores address of previous entry in register. Entry number is reported with matching entry number in SearchResult registers. [0155]
  • If change option was selected when (e.g., SearchCmd) value was written, then matching entries found during search are changed by hardware according to values written to change setup registers. When matching entry is found, hardware alters data and writes back to CAM before reporting match result to [0156] CPU 12.
  • For each of 8 words in CAM entry, there are corresponding (e.g., ChangeMask and ChangeData) registers (16 registers total). Before search command with change option is issued, ChangeMask registers are written. ‘1’ in bit position marks bits to be changed. ChangeData registers are written with desired new data values. [0157]
  • For each match, eight words of CAM entry are changed, for example, as follows: [0158]
  • NewCamDataX = (ChangeMaskX & ChangeDataX) | (~ChangeMaskX & CamDataX). [0159]
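  • The change operation can likewise be expressed as a short C sketch (illustrative names only): bits selected by ChangeMask take the ChangeData value, while all other bits are preserved.

    #include <stdint.h>

    static void apply_change(uint16_t cam_data[8],
                             const uint16_t change_mask[8],
                             const uint16_t change_data[8])
    {
        for (int x = 0; x < 8; x++) {
            /* selected bits from change_data, remaining bits unchanged */
            cam_data[x] = (uint16_t)((change_mask[x] & change_data[x]) |
                                     (~change_mask[x] & cam_data[x]));
        }
    }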
  • If (e.g., Don't Report) Flag is set when SearchCmd is written, then matching entries are not reported to [0160] CPU 12. Flag should not be set for search only commands.
  • [0161] L3 Engine receives CAM lookup requests from forwarding engine and searches for matching entry in L3 CAM 126. Results of search are returned to FE 52, and additional requests are serviced.
  • In FIG. 16B, flow chart shows CAM lookup steps. Initially, valid buffer is set [0162] 258, read IP header RAM 260, hash destination IP (DstIP) address 262, and read hash table 264, then determine if valid and hit 266. If so, read port MAC RAM 268, update packet data 270, modify packet 272, and write result 274. If not, determine valid link 276. If so, follow link 278 and read hash table 264, else write result 274.
  • [0163] L3 Engine 70 buffers up to 4 lookup requests from forwarding engine (FE) 52. When buffer is full, busy signal is sent to FE 52. Buffer is organized as FIFO and contains receiving port number and block number for lookup request.
  • When valid request exists in buffer, hardware begins lookup process. First buffered request is read, and receive port number for that request is used to access IP header RAM and retrieve packet's DstIP address. [0164]
  • 32-bit DstIP address is hashed to 16-bit value which is used as entry number for base hash table. That entry is read, and words containing DstIP address are compared to packet's DstIP address. If these two addresses match, then IP hit bit is set, and results of successful lookup are returned to [0165] FE 52.
  • Before result is posted to [0166] FE 52, packet may be modified, depending on bit in L3 Flags field of CAM entry. If Don't Modify bit of CAM entry is set, nothing is changed in packet. Otherwise, when lookup is successful, TTL field of IP header is decremented and modified in packet memory, and (e.g., CheckSum) field is recalculated and changed. Packet's DA is overwritten with value contained in matching CAM entry, and SA is replaced with value from MAC Address Registers and MAC Address RAM.
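  • For illustration, TTL decrement with an incremental header checksum update (in the style of RFC 1624) might look as follows in C; this is a sketch under stated assumptions, not the hardware datapath. The 16-bit arguments are the TTL/protocol word and checksum word of the IP header in host byte order, with TTL in the high-order byte:

    #include <stdint.h>

    static void decrement_ttl(uint16_t *ttl_proto, uint16_t *checksum)
    {
        uint16_t old_word = *ttl_proto;
        uint16_t new_word = (uint16_t)(old_word - 0x0100);  /* TTL is the high byte */

        /* RFC 1624 style: HC' = ~(~HC + ~m + m'), with end-around carry fold. */
        uint32_t sum = (uint16_t)~(*checksum);
        sum += (uint16_t)~old_word;
        sum += new_word;
        while (sum >> 16)
            sum = (sum & 0xFFFF) + (sum >> 16);             /* fold carries */

        *ttl_proto = new_word;
        *checksum  = (uint16_t)~sum;
    }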
  • Whenever IP addresses don't match or CAM entry is not valid, hardware checks Link Valid field of entry to see if entries with same hash index exist. If link valid bit is set, each entry in linked list is read and checked for matching IP address until hit occurs or end of list is reached. [0167]
  • If match is not found in [0168] L3 CAM 126, hardware checks to see if default route registers are written by CPU. These registers provide ‘default route’ CAM entry and are programmed with same information as CAM entries in control memory 136. If default route exists, then packet is modified using default information, and (e.g., IPHit and DefaultRouteUsed) bits of L3 Result are set.
  • Upper bits of MAC address for ports are provided in three registers. Three 16-bit registers provide full 48-bits address, but lower byte of address for each port is provided by [0169] MAC Address RAM 196.
  • [0170] MAC Address RAM 196 contains lower byte of MAC address for each port. It is 32×8 dual-port RAM which is written by CPU 12 and read by hardware during packet modification. This value replaces lower byte from MAC Address Registers when writing new SA for packet.
  • [0171] L3 results of CAM lookup returned to FE include receive port number and block number originally provided by FE 52 and two 16-bit values, L3 Result and L3 Status. A detailed bit definition for these last two values was provided earlier in this document.
  • [0172] Switch circuit 20 operates on various performance and integrity levels. In cut-thru switching mode, relatively fast ether switching mode and high performance level are achieved; however, there is possibility of transmitting packet with error during receive process. In this mode, CPU programs MACs to raise receive request when collecting 64 bytes. Also, it programs MAC to raise subsequent receive request after every 64-byte collection. First request provides fast header analysis and switching.
  • In store forward (SF) mode, receiving packet is not sent to port until reception is complete. [0173] Switch circuit 20 waits until packet completion and updates transmit queues of relevant destination. MAC programming remains same. Forwarding Block acts on ‘store forward’ mode on port-by-port basis. SF mode is selectable on per-port basis. In this mode, port linking is disabled.
  • In preferred embodiment, packet moves in following directions: received on LAN port(x) and transmitted to any/all other LAN ports; received on LAN port(x) and posted to UL queue; received on LAN port(x) and posted to CPU queue; received on LAN port(x) and packet dropped; and forwarded from CPU to any/all LAN ports. In each case, packet flows through [0174] packet memory 16 and switch circuit 20. In each case, switch circuit 20 participates in forwarding, filtering and queue management.
  • In case of Ethernet port originated packet flow, packet is received on Ethernet ports, and [0175] switch circuit 20 is triggered on such packet from request from one of MACs. This is hardware trigger mode. Switch circuit 20, in coordination with RISC processor 12, allocates free block pulled from receive free list. Once block is assigned, block is busy until destination agent(s) complete transmission. Transmission completion has mechanism to release block and insert in receive free list.
  • To identify packet destination, [0176] switch circuit 20 needs to obtain packet header information. Header is extracted from MAC received data stream. Here, PM engine 90 identifies header from data stream and loads on port-specific segment of Ether header memory. CAM Processor makes lookup on CAM and delivers result to Auto Forwarding Block. AFB adds packet to one of following queues: one of Ethernet ports transmit queue; all Ethernet ports queues, UL transmit queue and CPU queue; UL queue; CPU queue; or L3 block for L3 lookup.
  • AFB handles updating “Block address”, “Byte count”, and “routing information” on transmit queues. Once such information is provided, respective transmitting agents handle packet transmission. At end of transmission, block is released and added to receive free list. Refer to other sections for more details about releasing the block. [0177]
  • [0178] CPU 12 posts packets to XQMB 154 of Auto Forwarding Block for transmission to one or several ports. XQMB 154 handles posting packet to respective queues. Prior to request, CPU 12 assembles or modifies existing packet in packet memory 16 for transmission. CPU 12, with help of DMA function, can transfer packet from local memory 100 to packet memory 16, and at end of such transfer, can initiate XQMB 154 action.
  • FIG. 17 shows packet receive process, which is accomplished by Receive [0179] Arbiter 80, Buffer scheduler 94, LAN bus controller 76, Packet memory engine 90, header memory 290, data buffer 84, as well as Auto Forwarding Block (AFB) and PM SDRAM controller. Such modules work concurrently.
  • Receive [0180] arbiter 80 arbitrates and prioritizes receive request from Ether ports. It raises request to buffer scheduler 94. When request is under process, arbiter 80 makes background processing on remaining requests.
  • [0181] Buffer scheduler 94 handles resource allocation of internal buffers. Buffer scheduler 94 maintains two receive buffers and two transmit buffers 84. Each buffer can hold up to 64 bytes of data, and optimizes buffer allocation algorithm for fair bandwidth extraction between receivers/transmitters.
  • LAN Bus Controller [0182] 76 interfaces to LAN bus to read/write packet data to MAC FIFO buffers, and access MAC receive completion status. LAN bus controller 76 may access MAC and read data slice, and store to internal data buffer. A data buffer can hold up to 64 bytes of information.
  • [0183] Packet memory engine 90 sets-up moving packet in slice-by-slice into PM 16. Packet memory engine 90 reads PM receive block register and byte count register, and updates (i.e., increments) byte count register on each transfer. Packet memory engine 90 commands PM SDRAM controller to start data transfer. PM SDRAM controller transfers data from receive buffer to packet memory 16, and generates control timing to access external SDRAM.
  • Auto Forwarding Block allocates free block to receiver; initializes receive block register, byte count register. Occasionally AFB commands to reject packet. [0184]
  • After reset, if (e.g., AUTO_IMT_STRT) sense bit is cleared, (e.g., RCVR_INIT) receiver initialize block in [0185] FE 52 waits for (e.g., INIT_STRT) initialize start bit to be set. When bit is set, then using (e.g., PORT_ES_STS) information, which tells receive ports to be initialized, active receive ports are initialized with free block.
  • Every LAN MAC has (e.g., RREQx) receive request signal which, when active, indicates at least 64 bytes of data (i.e., header/data region) is collected in internal FIFO. There are 32 request signals from LAN bus. Following steps describe new packet reception and header memory loading (a simplified sketch follows the numbered steps): [0186]
  • 1. When new packet starts in Port-x, RREQx signal becomes active, indicating 64 bytes valid in FIFO. Request active and (e.g., RBODYx) bit clear means new packet. [0187]
  • 2. RREQx signal is first-level conditioned if corresponding bit enabled in Receive Enable register. When [0188] RISC 12 allocates free block to receive port, it writes block address on corresponding receive block register and enables the receiver. Conditional RREQx first wins RREQ arbitration to get service.
  • 3. When RREQx wins arbitration, scanner freezes on port number x and requests service from buffer [0189] scheduler 94.
  • 4. [0190] Buffer scheduler 94 allocates one of two free receive data buffers and enables LAN bus controller 76 to start data transfer.
  • 5. LAN bus controller 76 executes Burst Read Accesses on LAN bus targeted to Port-x. Read data is written to allocated internal receive buffer. Since body bit is clear, loading process signatures slice as “header”. If slice is header, it writes header data to 2-port Ether Header memory. At end of header loading, Port to be Analyzed FIFO is loaded with 5-bit port number. Loading of FIFO enables CAM engine to start analyzing header information. Load completion calls attention of PM engine for data movement from receive data buffer to packet memory 16. [0191]
  • 6. [0192] PM engine 90 updates byte count and sets up SDRAM Controller for data transfer to packet memory 16.
  • 7. Buffers have dedicated channels to SDRAM Controller. SDRAM Controller arbitrates transfer requests amongst channels and executes each request while overlapping address and data phases to maximize throughput and efficiency. Requesting channel is held from arbitrating for LAN bus until full slice is moved into packet memory 16 (a simplified sketch of this flow follows these steps). [0193]
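A minimal, software-only model of steps 1 through 7 is sketched below. It collapses the concurrent hardware stages into one sequential function; all names, array sizes, and the burst-read stub are assumptions for illustration.

```c
/* Software-only walk-through of the header-slice receive path (steps 1-7).
 * Names, sizes, and the burst-read stub are assumptions for illustration. */
#include <stdint.h>
#include <string.h>

#define NUM_PORTS   32
#define SLICE_BYTES 64

static uint32_t rcv_enable;                          /* Receive Enable register (bit per port) */
static uint32_t rreq;                                /* RREQx: >= 64 bytes waiting in MAC FIFO  */
static uint32_t rbody;                               /* RBODYx: packet body in progress         */
static uint32_t rcv_block[NUM_PORTS];                /* receive block address registers         */
static uint32_t rcv_bc[NUM_PORTS];                   /* receive byte count registers            */
static uint8_t  header_mem[NUM_PORTS][SLICE_BYTES];  /* stand-in for Ether header memory        */
static int      analyze_fifo[NUM_PORTS], analyze_cnt;/* "Port to be Analyzed" FIFO              */
static uint8_t  packet_memory[1 << 20];              /* stand-in for packet memory 16           */

/* Stand-in for a LAN-bus burst read of one slice from the MAC FIFO of port x. */
static void lan_burst_read(int x, uint8_t slice[SLICE_BYTES])
{
    (void)x;
    memset(slice, 0, SLICE_BYTES);
}

void receive_header_slice(void)
{
    for (int x = 0; x < NUM_PORTS; x++) {
        uint32_t bit = 1u << x;

        /* Steps 1-3: request active, port enabled, body bit clear => new packet. */
        if (!(rreq & bit) || !(rcv_enable & bit) || (rbody & bit))
            continue;

        /* Steps 4-5: read the slice, signature it as "header", wake the CAM engine. */
        uint8_t slice[SLICE_BYTES];
        lan_burst_read(x, slice);
        memcpy(header_mem[x], slice, SLICE_BYTES);
        analyze_fifo[analyze_cnt++] = x;

        /* Steps 6-7: PM engine builds the destination from block address plus
         * byte count and moves the slice into packet memory. */
        memcpy(&packet_memory[rcv_block[x] + rcv_bc[x]], slice, SLICE_BYTES);
        rcv_bc[x] += SLICE_BYTES;
        rbody |= bit;                                /* following slices are body */
        return;
    }
}
```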
  • Packet data (body) reception procedure, in which MAC RREQx signal similarly, when active, indicates at least 64 bytes of data are collected in internal FIFO, is as follows: [0194]
  • 1. When body bit for Port(x) is set, and RREQx signal is active, 64 bytes are valid in FIFO. [0195]
  • 2. RREQx signal is first-level conditioned if corresponding bit is enabled in Receive Enable register. Conditioned RREQx must first win the RREQ SCAN arbitration to get the service. [0196]
  • 3. When RREQx wins arbitration, scanner freezes on port number (x) and requests service from buffer scheduler 94. Along with this request, RBODYx bit indicates if packet is in middle of reception. [0197]
  • 4. [0198] Buffer scheduler 94 allocates free receive data buffer 84 and alerts LAN bus controller 76.
  • 5. After acquiring LAN bus, LAN bus controller 76 executes burst-read accesses on LAN bus targeted to Port-x. Read data is written in allocated receive buffer 84. [0199]
  • 6. When data loading is completed on data buffer, loaded data buffer is signatured as “non header”. LAN bus controller 76 continues scanning for next request in queue. Loaded data buffer draws attention of PM engine 90 to load it to PM 16. [0200]
  • When processing in unswitched environment, (e.g., Rec Link) receiver link register(x) indicates accordingly. Link bit cleared means slice is presently in non-switch mode. Loaded data buffer calls attention of PM engine 90 to load data into packet memory 16. PM engine 90 uses the Receiver Block Address register(x) and Receiver BC register(x) to construct PM destination address. It leaves updated byte count in Rec BC register(x) for future reference. PM engine issues command to PM SDRAM controller to start data transfer and is then ready to service a receive or transmit buffer or accept a command. [0201]
  • When processing in switched environment, Rec Link register(x) indicates accordingly. If link bit is set, slice is in switch mode for present slice and consecutive slices until end of packet. Loaded data buffer calls attention of PM engine 90 to load data into packet memory 16. PM engine 90 uses Receive Block Address register(x) and Receive BC register(x) to construct PM destination address. Byte count is updated in receive BC register(x) and transmit BC register(y). PM engine issues command to PM SDRAM controller to start data transfer and is then ready to service a receive or transmit buffer or accept a command. [0202]
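The per-slice accounting for the unswitched and switched cases might be modeled as follows. The register arrays and the pm_sdram_transfer() stand-in are assumptions, and the linked transmitter is identified here by an index rather than through the Rec Link register.

```c
/* Illustrative accounting of one received slice in unswitched versus
 * switched (linked) mode.  Register arrays and helpers are assumptions. */
#include <stdint.h>

#define NUM_PORTS 32

static uint32_t rcv_block[NUM_PORTS], rcv_bc[NUM_PORTS]; /* receive registers   */
static uint32_t xmt_bc[NUM_PORTS];                        /* transmit byte count */

/* Stand-in for the command issued to the PM SDRAM controller. */
static void pm_sdram_transfer(uint32_t dst, uint32_t len) { (void)dst; (void)len; }

void pm_account_slice(int x, uint32_t slice_len, int linked_tx /* -1 if unswitched */)
{
    uint32_t dst = rcv_block[x] + rcv_bc[x];   /* PM destination address */
    pm_sdram_transfer(dst, slice_len);

    rcv_bc[x] += slice_len;                    /* kept for future reference       */
    if (linked_tx >= 0)
        xmt_bc[linked_tx] += slice_len;        /* switched: linked transmitter(y)
                                                  immediately sees the new bytes  */
}
```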
  • Forwarding Engine (FE), in coordination with CAM Processor and L3 Lookup Block, evaluates current receiving packet for following possible decisions: reject packet; link packet; forward packet to transmitter queue; multicast packet to two or more ports; broadcast packet only to Ether ports; broadcast packet to Ether ports and UL; send packet to UL; send packet to CPU; or send packet to L3 Lookup Block for L3 analysis. [0203]
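The decision space listed above can be summarized as an enumeration; the names below are illustrative and are not the encoding actually produced by the CAM Processor.

```c
/* Illustrative enumeration of the forwarding decisions listed above. */
enum fwd_decision {
    FWD_REJECT,              /* reject packet                               */
    FWD_LINK,                /* cut-through link of receiver to transmitter */
    FWD_UNICAST_QUEUE,       /* forward packet to a transmitter queue       */
    FWD_MULTICAST,           /* multicast to two or more ports              */
    FWD_BROADCAST_ETHER,     /* broadcast only to Ether ports               */
    FWD_BROADCAST_ETHER_UL,  /* broadcast to Ether ports and uplink         */
    FWD_TO_UPLINK,           /* send packet to uplink                       */
    FWD_TO_CPU,              /* send packet to CPU                          */
    FWD_TO_L3_LOOKUP         /* hand off for Layer-3 analysis               */
};
```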
  • In packet forwarding mode, CAM Processor writes decision information into header analyzed FIFO. Such write process wakes up Forwarding Block to take up forwarding process. PM engine 90 keeps loading successive slices of packet in packet memory 16 independent of CAM analysis. Decision of CAM might occur in middle of packet reception or after end of packet reception. If header analysis is complete before packet reception is complete, Forwarding Block acts on packet if packet is unicast or destination is L3 Lookup Block, which carries on further analysis associated with L3 forwarding. For other cases, Forwarding Block is not called to action until receive completion of packet. If packet reception is complete before header analysis, Forwarding Block is not called into action until header analysis is complete. Receiver is not primed again until forwarding decision has been taken on received block and acted upon. [0204]
  • In packet rejection mode, when CAM engine or Forwarding Block decides not to receive packet, control bit is set for corresponding receiving port to reject incoming packet. PM engine 90 looks at reject bit when preparing for transfer from receive buffers to packet memory. If reject is set, PM engine 90 empties FIFO without setting up transfer to PM. PM engine 90 clears reject bit at end of packet reception. Receive complete state is indicated to Forwarding Block. [0205]
  • In packet switching mode, CAM lookup posts port number in CAM Analysis Done FIFO in addition to setting CAM analysis done bit for port. This draws attention of Forwarding Block prior to completion of packet reception. Forwarding Block checks several conditions to take forwarding action. At this time, it may link packet to corresponding transmitter or post packet in queue of transmitter. [0206]
  • Transmitter may be busy, i.e., transmitter queue contains one or more packets queued or transmitter is currently processing old packet. Forwarding Block requests XQMB 154 to post received packet in transmitter queue with incomplete information. This is handled by keeping RC bit clear in BC entry in control memory 136. Bit, if clear, means packet block address is valid, but byte count is invalid. Packet data is incomplete in packet memory 16. Forwarding Block pushes incomplete packet in transmitter queue on special occasion. When receiver(x) wants to switch to transmitter and transmitter is currently busy, Forwarding Block puts packet in transmitter queue to maintain order of priority. At pushing event, byte count information is invalid. If transmitter finishes old packet, and receiver/packet falls as next packet in transmitter queue, and receiver has not completed full reception, then XQMB 154 commands to link. If receiver completes packet before getting to transmitter, Forwarding Block sets such bit, and loads valid BC value on BC entry. Subsequent forwarding action on packet degenerates to store and forward mode. [0207]
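A sketch of the BC-entry handling in this busy-transmitter case follows, assuming a simple structure for the control memory 136 entry; the field names are illustrative.

```c
/* Illustrative handling of the busy-transmitter case: the packet is queued
 * with the RC (receive-complete) bit clear and the byte count filled in
 * later.  The entry layout in control memory 136 is an assumption. */
#include <stdint.h>

struct bc_entry {
    uint32_t block_addr;   /* valid as soon as the entry is queued */
    uint32_t byte_count;   /* valid only once rc == 1              */
    uint8_t  rc;           /* 0: reception still in progress       */
};

/* Queue the packet behind older traffic although reception is not finished. */
void post_incomplete(struct bc_entry *e, uint32_t block_addr)
{
    e->block_addr = block_addr;
    e->byte_count = 0;
    e->rc = 0;
}

/* Receiver finished before the transmitter reached this entry: the byte
 * count becomes valid and the packet is handled in store-and-forward mode. */
void complete_entry(struct bc_entry *e, uint32_t final_byte_count)
{
    e->byte_count = final_byte_count;
    e->rc = 1;
}
```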
  • Transmitter may be free when switching decision occurs. Forwarding Block commands to link receiver to transmitter. It does not manipulate control memory structure. At receive packet completion, Forwarding Block primes receiver, and transmitter continues to transmit until end of packet without further intervention. Finishing transmitter event releases block and pushes to receive free list. [0208]
  • Forwarding Block may act on receive packet after receive completion and CAM analysis completion. Since packet is received, [0209] XQMB 154 is instructed to post on appropriate transmitter queue; this is Store and Forward mode.
  • Broadcasting to Ether ports decision may result from not finding destination port or hit on broadcast MAC address. Depending on nature of broadcast decision, broadcast map for receiving port is fetched, and packet is forwarded to transmit queue management block for posting on transmit queues. [0210]
  • In sending packet to uplink or processor 12, header analysis results in a destination port that is not a physical LAN port, but the CPU or uplink port; Forwarding Engine Block instructs XQMB 154 to queue on appropriate ports. XQMB 154 may queue packet on ports if ports are specified in broadcast port map. [0211]
  • When end of packet is sensed from LAN port(x), LAN controller signals by bit in slice status. [0212] PM engine 90, while moving slice to PM 16, notifies same status by setting appropriate bit in Rec end reg. Forwarding Block acts on every receive completion; and in addition to forwarding actions, it instructs Free Queue Management Block to prime receiver. In case previous packet is rejected, no new block needs to be allocated; in such case it enables receiver to receive new packet.
  • For packet transmission, transmit port activity is top-level enabled by Forwarding Block or Transmit Queue Management Block of Auto Forwarding Block. [0213]
  • In stored packet mode from transmit queue, [0214] XQMB 154 picks highest priority packet, and loads transmit block address register and byte count register corresponding to packet. This action enables transmitter on transmit enable register.
  • In cut-thru mode for linking, Forwarding Block or XQMB 154 loads link command with which hardware copies receiver block address to transmitter block address register. It copies currently running receiver rec(x) byte count value to transmitter(y) byte count register. It also sets link bit active. [0215]
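A sketch of the link command's effect on the transmitter registers is shown below, with the register files modeled as plain arrays (an assumption made for illustration).

```c
/* Illustrative effect of the link command: transmitter y is pointed at the
 * block receiver x is still filling.  Register modelling is an assumption. */
#include <stdint.h>

#define NUM_PORTS 32

static uint32_t rcv_block[NUM_PORTS], rcv_bc[NUM_PORTS];
static uint32_t xmt_block[NUM_PORTS], xmt_bc[NUM_PORTS];
static uint32_t xmt_link;            /* link bit per transmitter            */
static uint32_t xmt_enable;          /* transmit enable bit per transmitter */

void link_rx_to_tx(int x, int y)
{
    xmt_block[y] = rcv_block[x];     /* transmit out of the receiver's block  */
    xmt_bc[y]    = rcv_bc[x];        /* bytes already received and available  */
    xmt_link    |= 1u << y;          /* mark transmitter as linked            */
    xmt_enable  |= 1u << y;          /* transmitter may now enter arbitration */
}
```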
  • Transmitter enters arbitration if transmit (e.g., XMT) enable bit is set, and byte count validity is met. If MAC transmit FIFO has at least 64 bytes of free space, it raises TREQ# signal. This signal is conditioned with first phase enable signal, and transmitter enters arbitration with other TREQ# signals asserted by other transmitters. Winning transmitter requests allocation of one of two free transmit buffers. This request is forwarded to buffer scheduler. When buffer scheduler allocates free buffer, transmitter enters arbitration for PM engine 90 service. PM engine 90 time-multiplexes between receive requests and transmit requests and other commands such as link, receive enable and transmit enable. [0216]
  • [0217] PM engine 90 sets up transfer with PM SDRAM controller by giving command to move slice from PM to data buffer, and updates byte count and address registers in array for corresponding transmitter.
  • [0218] PM engine 90 signatures slice as header or non-header based on XMT body bit. Along with slice, PM engine 90 passes information, such as slice count and port address, through buffer attributes. Loaded slice calls attention of corresponding LAN controller for service to transfer data from transmit buffer to MAC on LAN bus. LAN controller moves slice to target MAC port and releases buffer. Whenever PM engine 90 moves slice, decremented byte count is compared to check if it has reached zero. If it has reached zero, packet may reach end-of-packet status based on following cases:
  • If case is non-linked, packet was originally fully received. Byte count loaded at time of enable was actual byte count of packet. Link bit clear indicates end of packet. [0219]
  • If packet is in linked state, transmitter byte count reaching zero is not regarded as end of packet; it indicates that transmitter has to wait for receiver to get slice. Transmitter does not participate in arbitration again until slice is received on linked receiver. Link bit clear and byte count zero signal packet completion. [0220]
  • PM Engine clears XMT enable bit, and sets End of packet transmit bit. End of transmit draws attention of XQMB 154 to look at transmitter queue in control memory 136. If queue contains additional packets, XQMB 154 loads new packet to re-enable transmitter. If queue is empty, XQMB 154 does not take action. Trigger points for enabling transmitter are: when current packet ends and new packet is pending in queue; when receive packet is targeted to transmitter and queue is empty; or when CPU inserts packet to transmitter. [0221]
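The end-of-packet test described in the preceding paragraphs (byte count reaching zero, qualified by the link bit) might be modeled as follows; register names and widths are assumptions for illustration.

```c
/* Illustrative per-slice transmit bookkeeping and end-of-packet test:
 * only byte count zero with the link bit clear ends the packet; a linked
 * transmitter at zero simply waits for the receiver. */
#include <stdbool.h>
#include <stdint.h>

#define NUM_PORTS 32

static uint32_t xmt_bc[NUM_PORTS];
static uint32_t xmt_link, xmt_enable, xmt_eop;   /* one bit per transmitter */

/* Called after `moved` bytes of transmitter y's slice have gone out.
 * Returns true when the packet is complete. */
bool transmit_slice_done(int y, uint32_t moved)
{
    uint32_t bit = 1u << y;

    xmt_bc[y] -= moved;
    if (xmt_bc[y] != 0)
        return false;                /* more packet data already available         */
    if (xmt_link & bit)
        return false;                /* linked: wait for the receiver's next slice */

    xmt_enable &= ~bit;              /* end of packet: stop this transmitter */
    xmt_eop    |= bit;               /* draws XQMB 154 to check the queue    */
    return true;
}
```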
  • Foregoing described embodiments of invention are provided as illustration and description. It is not intended to limit invention to precise form described. Such described specification contemplates that inventive functionality may be equivalently implemented in software, firmware, hardware, and/or other functionally comparable or equivalent electronic digital processing system or circuit made available to one of ordinary skill in the art. Other variations and embodiments are possible in light of above teaching, and it is thus intended that scope of invention not be limited by detailed description, but rather by claims that follow. [0222]

Claims (22)

We claim:
1. A multi-level switching system comprising:
a first-level switch for packet reception or transmission; and
a second-level switch coupled to the first-level switch for enabling packet communication between the second-level switch and the first-level switch.
2. The system of
claim 1
wherein the first-level switch comprises:
an integrated switch module for effectively enabling multi-layer switching;
a processing module coupled to the integrated switch module;
a memory module coupled to the integrated switch module; and
a network interface module coupled to the integrated switch module.
3. The system of
claim 2
wherein the integrated switch module comprises:
a Layer-2 module for effectively enabling Layer-2 packet switching;
a Layer-3 module for effectively enabling Layer-3 packet routing;
a processor interface module for interfacing to the processing module;
a memory interface module for interfacing to the memory module; and
a data path module.
4. The system of
claim 3
wherein the Layer-3 module comprises:
a forwarding module for effectively enabling packet forwarding.
5. The system of
claim 3
wherein the Layer-3 module comprises:
a look-up module for accessing a hash table.
6. The system of
claim 5
wherein the look-up module further modifies a packet.
7. The system of
claim 5
wherein the look-up module further modifies an age flag in an aging table.
8. The system of
claim 5
wherein the look-up module further manages a packet queue.
9. The system of
claim 5
wherein the look-up module further processes packet attributes.
10. The system of
claim 3
wherein the datapath module comprises:
a buffer scheduler module for scheduling a pipeline buffer.
11. The system of
claim 2
wherein the network interface module comprises:
an arbiter module for effectively enabling channel arbitration for packet reception or transmission.
12. The system of
claim 2
wherein the network interface module comprises:
a Local Area Network (LAN) bus controller for coupling to a LAN bus.
13. The system of
claim 3
wherein the memory interface module comprises:
a Direct Memory Access (DMA) module for effectively enabling DMA access to the memory module.
14. The system of
claim 2
wherein the memory module comprises:
a Content Addressable Memory (CAM) module.
15. The system of
claim 2
wherein the memory module comprises:
a local memory, a control memory, a cache memory, or a packet memory.
16. The system of
claim 2
wherein the integrated switch module comprises:
an integrated single-chip circuit for effectively enabling packet traffic broadcasting.
17. The system of
claim 1
wherein the second-level switch comprises:
a cross-bar switch coupled to a multi-protocol router;
the first-level switch being coupled to one or more hubs.
18. Integrated network switching circuit comprising:
a Layer-2 networking element for packet reception or transmission; and
a Layer-3 networking element coupled to the Layer-2 networking element for multi-layer packet switching;
wherein the Layer-3 networking element further comprises:
a forwarding module for effectively enabling packet forwarding;
a look-up module for accessing a hash table, modifying a packet and an age flag in an aging table, managing a packet queue, or processing packet attributes.
19. The circuit of
claim 18
further comprising a network interface comprising:
an arbiter module for arbitrating packet reception or transmission; and
a Local Area Network (LAN) bus controller for coupling to a LAN bus.
20. The circuit of
claim 18
further comprising a memory circuit comprising:
a Content Addressable Memory (CAM);
a local memory;
a control memory;
a cache memory; or
a packet memory.
21. The circuit of
claim 20
wherein the memory circuit further comprises:
a Direct Memory Access (DMA) circuit for DMA access to the memory circuit.
22. In a network for coupling a first link to a second link, a method for multi-layer packet switching comprising the steps of:
receiving a packet from a first link;
arbitrating the received packet;
managing a packet queue;
accessing a hash table and an age flag in an aging table;
forwarding the received packet according to Layer-2 or Layer-3 switching to a second link.
US09/118,458 1998-07-17 1998-07-17 Multi-layer switching apparatus and method Expired - Lifetime US6424659B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/118,458 US6424659B2 (en) 1998-07-17 1998-07-17 Multi-layer switching apparatus and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US09/118,458 US6424659B2 (en) 1998-07-17 1998-07-17 Multi-layer switching apparatus and method

Publications (2)

Publication Number Publication Date
US20010043614A1 true US20010043614A1 (en) 2001-11-22
US6424659B2 US6424659B2 (en) 2002-07-23

Family

ID=22378721

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/118,458 Expired - Lifetime US6424659B2 (en) 1998-07-17 1998-07-17 Multi-layer switching apparatus and method

Country Status (1)

Country Link
US (1) US6424659B2 (en)

Cited By (157)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020105958A1 (en) * 2000-12-12 2002-08-08 Jean-Pierre Mao Process and device for deterministic transmission of asynchronous data in packets
US20020120888A1 (en) * 2001-02-14 2002-08-29 Jorg Franke Network co-processor for vehicles
US20020143910A1 (en) * 2001-03-29 2002-10-03 Shih-Wei Chou Network hub
US20030007488A1 (en) * 2001-06-22 2003-01-09 Sriram Rao Efficient data transmission based on a policy
US20030041216A1 (en) * 2001-08-27 2003-02-27 Rosenbluth Mark B. Mechanism for providing early coherency detection to enable high performance memory updates in a latency sensitive multithreaded environment
US20030095567A1 (en) * 2001-11-20 2003-05-22 Lo Man Kuk Real time protocol packet handler
US20030210688A1 (en) * 2002-05-13 2003-11-13 International Business Machines Corporation Logically grouping physical ports into logical interfaces to expand bandwidth
US20030225965A1 (en) * 2002-06-04 2003-12-04 Ram Krishnan Hitless restart of access control module
US6674769B1 (en) * 2000-03-07 2004-01-06 Advanced Micro Devices, Inc. Simultaneous searching of layer 3 policy filter and policy cache in a network switch port
US6757742B1 (en) * 2000-05-25 2004-06-29 Advanced Micro Devices, Inc. Computer-based system for validating hash-based table lookup schemes in a network switch
US20040139290A1 (en) * 2003-01-10 2004-07-15 Gilbert Wolrich Memory interleaving
US6778526B1 (en) * 2000-12-22 2004-08-17 Nortel Networks Limited High speed access bus interface and protocol
US20040172346A1 (en) * 2002-08-10 2004-09-02 Cisco Technology, Inc., A California Corporation Generating accounting data based on access control list entries
US6791983B1 (en) * 2000-06-14 2004-09-14 Mindspeed Technologies, Inc. Content-addressable memory for use with a communication packet processor to retrieve context information
US6798778B1 (en) * 2000-06-14 2004-09-28 Mindspeed Technologies, Inc. Communication packet processor with a look-up engine and content-addressable memory for updating context information for a core processor
US6826180B1 (en) * 2000-06-14 2004-11-30 Mindspeed Technologies, Inc. Communication packet processor with a look-up engine and content-addressable memory for storing summation blocks of context information for a core processor
US6845099B2 (en) * 2000-06-14 2005-01-18 Mindspeed Technologies, Inc. Communication packet processor with a look-up engine and content-addressable memory for modifying selectors to retrieve context information for a core processor
US6859459B1 (en) * 1999-11-16 2005-02-22 Nec Corporation High-speed/high-reliability ether transmission system and I/F apparatus
US6885666B1 (en) * 2000-08-14 2005-04-26 Advanced Micro Devices, Inc. Apparatus and method in a network switch for synchronizing transfer of a control tag to a switch fabric with transfer of frame data to a buffer memory
US6891829B1 (en) * 2000-06-14 2005-05-10 Mindspeed Technologies, Inc. Communication packet processor with a look-up engine and content-addressable memory for retrieving context information for a core processor
US20050141501A1 (en) * 1999-03-17 2005-06-30 Broadcom Corporation Network switch having a programmable counter
US6925641B1 (en) * 2000-02-04 2005-08-02 Xronix Communications, Inc. Real time DSP load management system
US6934260B1 (en) * 2000-02-01 2005-08-23 Advanced Micro Devices, Inc. Arrangement for controlling learning of layer 3 network addresses in a network switch
US6940854B1 (en) * 2001-03-23 2005-09-06 Advanced Micro Devices, Inc. Systems and methods for determining priority based on previous priority determinations
US20050198429A1 (en) * 2004-03-02 2005-09-08 Nec Electronics Corporation Multilayer system and clock control method
US20060114895A1 (en) * 2004-11-30 2006-06-01 Broadcom Corporation CPU transmission of unmodified packets
US20060120366A1 (en) * 2000-09-12 2006-06-08 Cisco Technology, Inc. Stateful network address translation protocol implemented over a data network
US20060256787A1 (en) * 2000-06-23 2006-11-16 Broadcom Corporation Switch having external address resolution interface
US7215637B1 (en) * 2000-04-17 2007-05-08 Juniper Networks, Inc. Systems and methods for processing packets
US20070118677A1 (en) * 2005-05-13 2007-05-24 Freescale Semiconductor Incorporated Packet switch having a crossbar switch that connects multiport receiving and transmitting elements
US20070136331A1 (en) * 2005-11-28 2007-06-14 Nec Laboratories America Storage-efficient and collision-free hash-based packet processing architecture and method
US20080126572A1 (en) * 2006-10-05 2008-05-29 Holt John M Multi-path switching networks
US20080144785A1 (en) * 2006-12-19 2008-06-19 Dae-Hyun Lee Call setup method and terminal in a IP network
US20080225837A1 (en) * 2007-03-16 2008-09-18 Novell, Inc. System and Method for Multi-Layer Distributed Switching
US20090037629A1 (en) * 2007-08-01 2009-02-05 Broadcom Corporation Master slave core architecture with direct buses
US7688727B1 (en) 2000-04-17 2010-03-30 Juniper Networks, Inc. Filtering and route lookup in a switching device
US7688324B1 (en) * 1999-03-05 2010-03-30 Zoran Corporation Interactive set-top box having a unified memory architecture
US7720055B2 (en) * 1999-03-17 2010-05-18 Broadcom Corporation Method for handling IP multicast packets in network switch
US20100153961A1 (en) * 2004-02-10 2010-06-17 Hitachi, Ltd. Storage system having processor and interface adapters that can be increased or decreased based on required performance
US20110080913A1 (en) * 2006-07-21 2011-04-07 Cortina Systems, Inc. Apparatus and method for layer-2 to 7 search engine for high speed network application
US20110110372A1 (en) * 2003-06-05 2011-05-12 Juniper Networks, Inc. Systems and methods to perform hybrid switching and routing functions
WO2013026049A1 (en) * 2011-08-17 2013-02-21 Nicira, Inc. Distributed logical l3 routing
US20130182707A1 (en) * 2012-01-18 2013-07-18 International Business Machines Corporation Managing a global forwarding table in a distributed switch
US20140032701A1 (en) * 2009-02-19 2014-01-30 Micron Technology, Inc. Memory network methods, apparatus, and systems
US8717895B2 (en) 2010-07-06 2014-05-06 Nicira, Inc. Network virtualization apparatus and method with a table mapping engine
US20140140346A1 (en) * 2012-11-22 2014-05-22 Hitachi Metals, Ltd. Communication System and Network Relay Device
US8856419B2 (en) 2010-07-19 2014-10-07 International Business Machines Corporation Register access in distributed virtual bridge environment
US8861400B2 (en) 2012-01-18 2014-10-14 International Business Machines Corporation Requesting multicast membership information in a distributed switch in response to a miss event
US20140351885A1 (en) * 2013-05-22 2014-11-27 Unisys Corporation Control of simple network management protocol activity
US8966035B2 (en) 2009-04-01 2015-02-24 Nicira, Inc. Method and apparatus for implementing and managing distributed virtual switches in several hosts and physical forwarding elements
US8964528B2 (en) 2010-07-06 2015-02-24 Nicira, Inc. Method and apparatus for robust packet distribution among hierarchical managed switching elements
US9043452B2 (en) 2011-05-04 2015-05-26 Nicira, Inc. Network control apparatus and method for port isolation
US9083609B2 (en) 2007-09-26 2015-07-14 Nicira, Inc. Network operating system for managing and securing networks
US9225597B2 (en) 2014-03-14 2015-12-29 Nicira, Inc. Managed gateways peering with external router to attract ingress packets
US9306864B2 (en) 2011-10-25 2016-04-05 Nicira, Inc. Scheduling distribution of physical control plane data
US9306910B2 (en) 2009-07-27 2016-04-05 Vmware, Inc. Private allocated networks over shared communications infrastructure
US9313129B2 (en) 2014-03-14 2016-04-12 Nicira, Inc. Logical router processing by network controller
US9385954B2 (en) 2014-03-31 2016-07-05 Nicira, Inc. Hashing techniques for use in a network environment
US9407580B2 (en) 2013-07-12 2016-08-02 Nicira, Inc. Maintaining data stored with a packet
US9413644B2 (en) 2014-03-27 2016-08-09 Nicira, Inc. Ingress ECMP in virtual distributed routing environment
US9419855B2 (en) 2014-03-14 2016-08-16 Nicira, Inc. Static routes for logical routers
US9444651B2 (en) 2011-08-17 2016-09-13 Nicira, Inc. Flow generation from second level controller to first level controller to managed switching element
US9503321B2 (en) 2014-03-21 2016-11-22 Nicira, Inc. Dynamic routing for logical routers
US9503371B2 (en) 2013-09-04 2016-11-22 Nicira, Inc. High availability L3 gateways for logical networks
US9525647B2 (en) 2010-07-06 2016-12-20 Nicira, Inc. Network control apparatus and method for creating and modifying logical switching elements
US9548924B2 (en) 2013-12-09 2017-01-17 Nicira, Inc. Detecting an elephant flow based on the size of a packet
US9559987B1 (en) * 2008-09-26 2017-01-31 Tellabs Operations, Inc Method and apparatus for improving CAM learn throughput using a cache
US9571386B2 (en) 2013-07-08 2017-02-14 Nicira, Inc. Hybrid packet processing
US9569368B2 (en) 2013-12-13 2017-02-14 Nicira, Inc. Installing and managing flows in a flow table cache
US9575782B2 (en) 2013-10-13 2017-02-21 Nicira, Inc. ARP for logical router
US9577845B2 (en) 2013-09-04 2017-02-21 Nicira, Inc. Multiple active L3 gateways for logical networks
US9590901B2 (en) 2014-03-14 2017-03-07 Nicira, Inc. Route advertisement by managed gateways
US20170070474A1 (en) * 2013-10-06 2017-03-09 Mellanox Technologies Ltd. Simplified packet routing
US9602398B2 (en) 2013-09-15 2017-03-21 Nicira, Inc. Dynamically generating flows with wildcard fields
US9647883B2 (en) 2014-03-21 2017-05-09 Nicria, Inc. Multiple levels of logical routers
US9680750B2 (en) 2010-07-06 2017-06-13 Nicira, Inc. Use of tunnels to hide network addresses
US9697032B2 (en) 2009-07-27 2017-07-04 Vmware, Inc. Automated network configuration of virtual machines in a virtual lab environment
US9742881B2 (en) 2014-06-30 2017-08-22 Nicira, Inc. Network virtualization using just-in-time distributed capability for classification encoding
US9768980B2 (en) 2014-09-30 2017-09-19 Nicira, Inc. Virtual distributed bridging
US9887960B2 (en) 2013-08-14 2018-02-06 Nicira, Inc. Providing services for logical networks
US9893988B2 (en) 2014-03-27 2018-02-13 Nicira, Inc. Address resolution using multiple designated instances of a logical router
US9900410B2 (en) 2006-05-01 2018-02-20 Nicira, Inc. Private ethernet overlay networks over a shared ethernet in a virtual environment
US9923760B2 (en) 2015-04-06 2018-03-20 Nicira, Inc. Reduction of churn in a network control system
AU2014398480B2 (en) * 2014-06-24 2018-04-05 Hitachi, Ltd. Financial products trading system and financial products trading control method
US9952885B2 (en) 2013-08-14 2018-04-24 Nicira, Inc. Generation of configuration files for a DHCP module executing within a virtualized container
US9967199B2 (en) 2013-12-09 2018-05-08 Nicira, Inc. Inspecting operations of a machine to detect elephant flows
US9996467B2 (en) 2013-12-13 2018-06-12 Nicira, Inc. Dynamically adjusting the number of flows allowed in a flow table cache
US10020960B2 (en) 2014-09-30 2018-07-10 Nicira, Inc. Virtual distributed bridging
US10033579B2 (en) 2012-04-18 2018-07-24 Nicira, Inc. Using transactions to compute and propagate network forwarding state
US10038628B2 (en) 2015-04-04 2018-07-31 Nicira, Inc. Route server mode for dynamic routing between logical and physical networks
US10057157B2 (en) 2015-08-31 2018-08-21 Nicira, Inc. Automatically advertising NAT routes between logical routers
US10063458B2 (en) 2013-10-13 2018-08-28 Nicira, Inc. Asymmetric connection with external networks
US10079779B2 (en) 2015-01-30 2018-09-18 Nicira, Inc. Implementing logical router uplinks
US10091161B2 (en) 2016-04-30 2018-10-02 Nicira, Inc. Assignment of router ID for logical routers
US10089127B2 (en) 2011-11-15 2018-10-02 Nicira, Inc. Control plane interface for logical middlebox services
US10095535B2 (en) 2015-10-31 2018-10-09 Nicira, Inc. Static route types for logical routers
US10103939B2 (en) 2010-07-06 2018-10-16 Nicira, Inc. Network control apparatus and method for populating logical datapath sets
US10129142B2 (en) 2015-08-11 2018-11-13 Nicira, Inc. Route configuration for logical router
US10153973B2 (en) 2016-06-29 2018-12-11 Nicira, Inc. Installation of routing tables for logical router in route server mode
US20190007364A1 (en) * 2017-06-30 2019-01-03 Nicira, Inc. Assignment of unique physical network addresses for logical network addresses
US10178029B2 (en) 2016-05-11 2019-01-08 Mellanox Technologies Tlv Ltd. Forwarding of adaptive routing notifications
US10181993B2 (en) 2013-07-12 2019-01-15 Nicira, Inc. Tracing network packets through logical and physical networks
US10193806B2 (en) 2014-03-31 2019-01-29 Nicira, Inc. Performing a finishing operation to improve the quality of a resulting hash
US10200294B2 (en) 2016-12-22 2019-02-05 Mellanox Technologies Tlv Ltd. Adaptive routing based on flow-control credits
US10200306B2 (en) 2017-03-07 2019-02-05 Nicira, Inc. Visualization of packet tracing operation results
US10204122B2 (en) 2015-09-30 2019-02-12 Nicira, Inc. Implementing an interface between tuple and message-driven control entities
US10212071B2 (en) 2016-12-21 2019-02-19 Nicira, Inc. Bypassing a load balancer in a return path of network traffic
US20190065494A1 (en) * 2017-08-28 2019-02-28 International Business Machines Corporation Efficient and accurate lookups of data by a stream processor using a hash table
US10225184B2 (en) 2015-06-30 2019-03-05 Nicira, Inc. Redirecting traffic in a virtual distributed router environment
US10237123B2 (en) 2016-12-21 2019-03-19 Nicira, Inc. Dynamic recovery from a split-brain failure in edge nodes
US10250443B2 (en) 2014-09-30 2019-04-02 Nicira, Inc. Using physical location to modify behavior of a distributed virtual network element
US10333849B2 (en) 2016-04-28 2019-06-25 Nicira, Inc. Automatic configuration of logical routers on edge nodes
US10341236B2 (en) 2016-09-30 2019-07-02 Nicira, Inc. Anycast edge service gateways
US10374827B2 (en) 2017-11-14 2019-08-06 Nicira, Inc. Identifier that maps to different networks at different datacenters
US10454758B2 (en) 2016-08-31 2019-10-22 Nicira, Inc. Edge node cluster network redundancy and fast convergence using an underlay anycast VTEP IP
US10469342B2 (en) 2014-10-10 2019-11-05 Nicira, Inc. Logical network traffic analysis
US10484515B2 (en) 2016-04-29 2019-11-19 Nicira, Inc. Implementing logical metadata proxy servers in logical networks
US10498638B2 (en) 2013-09-15 2019-12-03 Nicira, Inc. Performing a multi-stage lookup to classify packets
US10511458B2 (en) 2014-09-30 2019-12-17 Nicira, Inc. Virtual distributed bridging
US10511459B2 (en) 2017-11-14 2019-12-17 Nicira, Inc. Selection of managed forwarding element for bridge spanning multiple datacenters
US10560320B2 (en) 2016-06-29 2020-02-11 Nicira, Inc. Ranking of gateways in cluster
US10608887B2 (en) 2017-10-06 2020-03-31 Nicira, Inc. Using packet tracing tool to automatically execute packet capture operations
US10616045B2 (en) 2016-12-22 2020-04-07 Nicira, Inc. Migration of centralized routing components of logical router
US10637800B2 (en) 2017-06-30 2020-04-28 Nicira, Inc Replacement of logical network addresses with physical network addresses
US10644995B2 (en) 2018-02-14 2020-05-05 Mellanox Technologies Tlv Ltd. Adaptive routing in a box
US10659373B2 (en) 2014-03-31 2020-05-19 Nicira, Inc Processing packets according to hierarchy of flow entry storages
US10742746B2 (en) 2016-12-21 2020-08-11 Nicira, Inc. Bypassing a load balancer in a return path of network traffic
US10797998B2 (en) 2018-12-05 2020-10-06 Vmware, Inc. Route server for distributed routers using hierarchical routing protocol
US10819621B2 (en) 2016-02-23 2020-10-27 Mellanox Technologies Tlv Ltd. Unicast forwarding of adaptive-routing notifications
US10841273B2 (en) 2016-04-29 2020-11-17 Nicira, Inc. Implementing logical DHCP servers in logical networks
US10931560B2 (en) 2018-11-23 2021-02-23 Vmware, Inc. Using route type to determine routing protocol behavior
US10938788B2 (en) 2018-12-12 2021-03-02 Vmware, Inc. Static routes for policy-based VPN
US11005724B1 (en) 2019-01-06 2021-05-11 Mellanox Technologies, Ltd. Network topology having minimal number of long connections among groups of network elements
US11019167B2 (en) 2016-04-29 2021-05-25 Nicira, Inc. Management of update queues for network controller
US11095480B2 (en) 2019-08-30 2021-08-17 Vmware, Inc. Traffic optimization using distributed edge services
US11178051B2 (en) 2014-09-30 2021-11-16 Vmware, Inc. Packet key parser for flow-based forwarding elements
US11190463B2 (en) 2008-05-23 2021-11-30 Vmware, Inc. Distributed virtual switch for virtualized computer systems
US11196628B1 (en) 2020-07-29 2021-12-07 Vmware, Inc. Monitoring container clusters
US11201808B2 (en) 2013-07-12 2021-12-14 Nicira, Inc. Tracing logical network packets through physical network
US11336533B1 (en) 2021-01-08 2022-05-17 Vmware, Inc. Network visualization of correlations between logical elements and associated physical elements
US11411911B2 (en) 2020-10-26 2022-08-09 Mellanox Technologies, Ltd. Routing across multiple subnetworks using address mapping
US11451413B2 (en) 2020-07-28 2022-09-20 Vmware, Inc. Method for advertising availability of distributed gateway service and machines at host computer
US11558426B2 (en) 2020-07-29 2023-01-17 Vmware, Inc. Connection tracking for container cluster
US11570090B2 (en) 2020-07-29 2023-01-31 Vmware, Inc. Flow tracing operation in container cluster
US11575594B2 (en) 2020-09-10 2023-02-07 Mellanox Technologies, Ltd. Deadlock-free rerouting for resolving local link failures using detour paths
US11606294B2 (en) 2020-07-16 2023-03-14 Vmware, Inc. Host computer configured to facilitate distributed SNAT service
US11611613B2 (en) 2020-07-24 2023-03-21 Vmware, Inc. Policy-based forwarding to a load balancer of a load balancing cluster
US11616755B2 (en) 2020-07-16 2023-03-28 Vmware, Inc. Facilitating distributed SNAT service
US11677645B2 (en) 2021-09-17 2023-06-13 Vmware, Inc. Traffic monitoring
US11687210B2 (en) 2021-07-05 2023-06-27 Vmware, Inc. Criteria-based expansion of group nodes in a network topology visualization
US11711278B2 (en) 2021-07-24 2023-07-25 Vmware, Inc. Visualization of flow trace operation across multiple sites
US11736436B2 (en) 2020-12-31 2023-08-22 Vmware, Inc. Identifying routes with indirect addressing in a datacenter
US11765103B2 (en) 2021-12-01 2023-09-19 Mellanox Technologies, Ltd. Large-scale network with high port utilization
US11855898B1 (en) * 2018-03-14 2023-12-26 F5, Inc. Methods for traffic dependent direct memory access optimization and devices thereof
US11870682B2 (en) 2021-06-22 2024-01-09 Mellanox Technologies, Ltd. Deadlock-free local rerouting for handling multiple local link failures in hierarchical network topologies
US11902050B2 (en) 2020-07-28 2024-02-13 VMware LLC Method for providing distributed gateway service at host computer
US11924080B2 (en) 2020-01-17 2024-03-05 VMware LLC Practical overlay network latency measurement in datacenter

Families Citing this family (123)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6463470B1 (en) 1998-10-26 2002-10-08 Cisco Technology, Inc. Method and apparatus of storing policies for policy-based management of quality of service treatments of network data traffic flows
US7194554B1 (en) 1998-12-08 2007-03-20 Nomadix, Inc. Systems and methods for providing dynamic network authorization authentication and accounting
US8713641B1 (en) 1998-12-08 2014-04-29 Nomadix, Inc. Systems and methods for authorizing, authenticating and accounting users having transparent computer access to a network using a gateway device
US8266266B2 (en) 1998-12-08 2012-09-11 Nomadix, Inc. Systems and methods for providing dynamic network authorization, authentication and accounting
US7382736B2 (en) 1999-01-12 2008-06-03 Mcdata Corporation Method for scoring queued frames for selective transmission through a switch
US6639915B1 (en) * 1999-04-07 2003-10-28 Utstarcom, Inc. Method and apparatus for transmission of voice data in a network structure
US6587461B1 (en) 1999-06-08 2003-07-01 Cisco Technology, Inc. TDM switching system and ASIC device
US6434703B1 (en) 1999-06-08 2002-08-13 Cisco Technology, Inc. Event initiation bus and associated fault protection for a telecommunications device
US7346677B1 (en) 1999-07-02 2008-03-18 Cisco Technology, Inc. Method and apparatus for creating policies for policy-based management of quality of service treatments of network data traffic flows
US6983350B1 (en) 1999-08-31 2006-01-03 Intel Corporation SDRAM controller for parallel processor architecture
US7974192B2 (en) * 1999-10-13 2011-07-05 Avaya Inc. Multicast switching in a distributed communication system
US7197556B1 (en) * 1999-10-22 2007-03-27 Nomadix, Inc. Location-based identification for use in a communications network
US6934292B1 (en) * 1999-11-09 2005-08-23 Intel Corporation Method and system for emulating a single router in a switch stack
US6618373B1 (en) * 1999-11-10 2003-09-09 Cisco Technology, Inc. Method and system for reliable in-order distribution of events
US6788647B1 (en) 1999-11-19 2004-09-07 Cisco Technology, Inc. Automatically applying bi-directional quality of service treatment to network data flows
US6697380B1 (en) * 1999-12-07 2004-02-24 Advanced Micro Devices, Inc. Multiple key lookup arrangement for a shared switching logic address table in a network switch
US6532509B1 (en) 1999-12-22 2003-03-11 Intel Corporation Arbitrating command requests in a parallel multi-threaded processing system
US6985964B1 (en) * 1999-12-22 2006-01-10 Cisco Technology, Inc. Network processor system including a central processor and at least one peripheral processor
US6694380B1 (en) 1999-12-27 2004-02-17 Intel Corporation Mapping requests from a processing unit that uses memory-mapped input-output space
US6661794B1 (en) 1999-12-29 2003-12-09 Intel Corporation Method and apparatus for gigabit packet assignment for multithreaded packet processing
US7480706B1 (en) * 1999-12-30 2009-01-20 Intel Corporation Multi-threaded round-robin receive for fast network port
US6952824B1 (en) * 1999-12-30 2005-10-04 Intel Corporation Multi-threaded sequenced receive for fast network port stream of packets
US6813243B1 (en) 2000-02-14 2004-11-02 Cisco Technology, Inc. High-speed hardware implementation of red congestion control algorithm
US6778546B1 (en) 2000-02-14 2004-08-17 Cisco Technology, Inc. High-speed hardware implementation of MDRR algorithm over a large number of queues
US6721316B1 (en) * 2000-02-14 2004-04-13 Cisco Technology, Inc. Flexible engine and data structure for packet header processing
US6977930B1 (en) * 2000-02-14 2005-12-20 Cisco Technology, Inc. Pipelined packet switching and queuing architecture
US6731644B1 (en) 2000-02-14 2004-05-04 Cisco Technology, Inc. Flexible DMA engine for packet header modification
DE10011667C2 (en) * 2000-03-10 2002-11-21 Infineon Technologies Ag High Speed Router
US7113487B1 (en) * 2000-04-07 2006-09-26 Intel Corporation Forwarding control protocol
US6981027B1 (en) * 2000-04-10 2005-12-27 International Business Machines Corporation Method and system for memory management in a network processing system
US6798777B1 (en) 2000-04-17 2004-09-28 Juniper Networks, Inc. Filtering and route lookup in a switching device
US7333489B1 (en) * 2000-05-08 2008-02-19 Crossroads Systems, Inc. System and method for storing frame header data
US6807183B1 (en) * 2000-05-09 2004-10-19 Advanced Micro Devices, Inc. Arrangement for reading a prescribed location of a FIFO buffer in a network switch port
DE10022764A1 (en) * 2000-05-10 2001-11-15 Siemens Ag Switching system, especially digital switching system, has improved message distributor with internal bus for direct connection of signaling control unit to coupling network
US7111163B1 (en) 2000-07-10 2006-09-19 Alterwan, Inc. Wide area network using internet with quality of service
US6959332B1 (en) 2000-07-12 2005-10-25 Cisco Technology, Inc. Basic command representation of quality of service policies
US6954462B1 (en) * 2000-07-31 2005-10-11 Cisco Technology, Inc. Method and apparatus for determining a multilayer switching path
US7099932B1 (en) 2000-08-16 2006-08-29 Cisco Technology, Inc. Method and apparatus for retrieving network quality of service policy information from a directory in a quality of service policy management system
US6822940B1 (en) 2000-09-29 2004-11-23 Cisco Technology, Inc. Method and apparatus for adapting enforcement of network quality of service policies based on feedback about network conditions
US7096260B1 (en) 2000-09-29 2006-08-22 Cisco Technology, Inc. Marking network data packets with differentiated services codepoints based on network load
US6988133B1 (en) 2000-10-31 2006-01-17 Cisco Technology, Inc. Method and apparatus for communicating network quality of service policy information to a plurality of policy enforcement points
US6735218B2 (en) * 2000-11-17 2004-05-11 Foundry Networks, Inc. Method and system for encoding wide striped cells
US7356030B2 (en) 2000-11-17 2008-04-08 Foundry Networks, Inc. Network switch cross point
US7596139B2 (en) 2000-11-17 2009-09-29 Foundry Networks, Inc. Backplane interface adapter with error control and redundant fabric
US7236490B2 (en) 2000-11-17 2007-06-26 Foundry Networks, Inc. Backplane interface adapter
US8180870B1 (en) * 2000-11-28 2012-05-15 Verizon Business Global Llc Programmable access device for a distributed network access system
US7657628B1 (en) 2000-11-28 2010-02-02 Verizon Business Global Llc External processor for a distributed network access system
US8185615B1 (en) 2000-11-28 2012-05-22 Verizon Business Global Llc Message, control and reporting interface for a distributed network access system
US7046680B1 (en) * 2000-11-28 2006-05-16 Mci, Inc. Network access system including a programmable access device having distributed service control
US7050396B1 (en) 2000-11-30 2006-05-23 Cisco Technology, Inc. Method and apparatus for automatically establishing bi-directional differentiated services treatment of flows in a network
US7002980B1 (en) 2000-12-19 2006-02-21 Chiaro Networks, Ltd. System and method for router queue and congestion management
US6963535B2 (en) * 2000-12-28 2005-11-08 Intel Corporation MAC bus interface
KR20020055287A (en) * 2000-12-28 2002-07-08 구자홍 Method for routing a packet of a router device
US6847645B1 (en) * 2001-02-22 2005-01-25 Cisco Technology, Inc. Method and apparatus for controlling packet header buffer wrap around in a forwarding engine of an intermediate network node
US7286532B1 (en) * 2001-02-22 2007-10-23 Cisco Technology, Inc. High performance interface logic architecture of an intermediate network node
US6987760B2 (en) * 2001-03-05 2006-01-17 International Business Machines Corporation High speed network processor
US6909716B1 (en) * 2001-03-30 2005-06-21 Intel Corporation System and method for switching and routing data associated with a subscriber community
US7627870B1 (en) * 2001-04-28 2009-12-01 Cisco Technology, Inc. Method and apparatus for a data structure comprising a hierarchy of queues or linked list data structures
US7206283B2 (en) * 2001-05-15 2007-04-17 Foundry Networks, Inc. High-performance network switch
US7197042B2 (en) * 2001-06-01 2007-03-27 4198638 Canada Inc. Cell-based switch fabric with cell-to-line-card control for regulating injection of packets
US6744652B2 (en) * 2001-08-22 2004-06-01 Netlogic Microsystems, Inc. Concurrent searching of different tables within a content addressable memory
US20030046429A1 (en) * 2001-08-30 2003-03-06 Sonksen Bradley Stephen Static data item processing
US7139818B1 (en) * 2001-10-04 2006-11-21 Cisco Technology, Inc. Techniques for dynamic host configuration without direct communications between client and server
US8868715B2 (en) * 2001-10-15 2014-10-21 Volli Polymer Gmbh Llc Report generation and visualization systems and methods and their use in testing frameworks for determining suitability of a network for target applications
US8543681B2 (en) * 2001-10-15 2013-09-24 Volli Polymer Gmbh Llc Network topology discovery systems and methods
US7447197B2 (en) * 2001-10-18 2008-11-04 Qlogic, Corporation System and method of providing network node services
US7200144B2 (en) * 2001-10-18 2007-04-03 Qlogic, Corp. Router and methods using network addresses for virtualization
US6993622B2 (en) * 2001-10-31 2006-01-31 Netlogic Microsystems, Inc. Bit level programming interface in a content addressable memory
US7210003B2 (en) * 2001-10-31 2007-04-24 Netlogic Microsystems, Inc. Comparand generation in a content addressable memory
GB2382960B (en) * 2001-12-05 2005-03-16 Ipwireless Inc Method and arrangement for data processing in a communication system
US7237058B2 (en) * 2002-01-14 2007-06-26 Netlogic Microsystems, Inc. Input data selection for content addressable memory
US7477600B1 (en) 2002-02-12 2009-01-13 Cisco Technology, Inc. Method and apparatus for configuring network elements to support real time applications based on meta-templates
US7333432B1 (en) 2002-02-12 2008-02-19 Cisco Technology, Inc. Method and apparatus for configuring network elements to support real time applications
US20120155466A1 (en) 2002-05-06 2012-06-21 Ian Edward Davis Method and apparatus for efficiently processing data packets in a computer network
US7649885B1 (en) 2002-05-06 2010-01-19 Foundry Networks, Inc. Network routing system for enhanced efficiency and monitoring capability
US7187687B1 (en) 2002-05-06 2007-03-06 Foundry Networks, Inc. Pipeline method and system for switching packets
US7468975B1 (en) 2002-05-06 2008-12-23 Foundry Networks, Inc. Flexible method for processing data packets in a network routing system for enhanced efficiency and monitoring capability
US7266117B1 (en) 2002-05-06 2007-09-04 Foundry Networks, Inc. System architecture for very fast ethernet blade
TW561740B (en) * 2002-06-06 2003-11-11 Via Tech Inc Network connecting device and data packet transferring method
JP4023281B2 (en) * 2002-10-11 2007-12-19 株式会社日立製作所 Packet communication apparatus and packet switch
US20080008202A1 (en) * 2002-10-31 2008-01-10 Terrell William C Router with routing processors and methods for virtualization
US7107391B2 (en) * 2002-12-30 2006-09-12 Micron Technology, Inc. Automatic learning in a CAM
US7062582B1 (en) * 2003-03-14 2006-06-13 Marvell International Ltd. Method and apparatus for bus arbitration dynamic priority based on waiting period
JP4157403B2 (en) * 2003-03-19 2008-10-01 株式会社日立製作所 Packet communication device
US6901072B1 (en) 2003-05-15 2005-05-31 Foundry Networks, Inc. System and method for high speed packet transmission implementing dual transmit and receive pipelines
US20060018142A1 (en) * 2003-08-11 2006-01-26 Varadarajan Srinivasan Concurrent searching of different tables within a content addressable memory
US8345701B1 (en) * 2003-08-26 2013-01-01 F5 Networks, Inc. Memory system for controlling distribution of packet data across a switch
US7082493B1 (en) * 2003-10-31 2006-07-25 Integrated Device Technology, Inc. CAM-based search engines and packet coprocessors having results status signaling for completed contexts
US7450592B2 (en) 2003-11-12 2008-11-11 At&T Intellectual Property I, L.P. Layer 2/layer 3 interworking via internal virtual UNI
JP4231773B2 (en) * 2003-12-01 2009-03-04 株式会社日立コミュニケーションテクノロジー VRRP technology that maintains the confidentiality of VR
US7974275B2 (en) * 2004-01-09 2011-07-05 Broadcom Corporation Saturated datagram aging mechanism
US7817659B2 (en) 2004-03-26 2010-10-19 Foundry Networks, Llc Method and apparatus for aggregating input data streams
US8730961B1 (en) 2004-04-26 2014-05-20 Foundry Networks, Llc System and method for optimizing router lookup
US7853676B1 (en) 2004-06-10 2010-12-14 Cisco Technology, Inc. Protocol for efficient exchange of XML documents with a network device
US8090806B1 (en) 2004-06-10 2012-01-03 Cisco Technology, Inc. Two-stage network device configuration process
US7660882B2 (en) * 2004-06-10 2010-02-09 Cisco Technology, Inc. Deploying network element management system provisioning services
US7657703B1 (en) 2004-10-29 2010-02-02 Foundry Networks, Inc. Double density content addressable memory (CAM) lookup scheme
US7751421B2 (en) * 2004-12-29 2010-07-06 Alcatel Lucent Traffic generator and monitor
US20060140191A1 (en) * 2004-12-29 2006-06-29 Naik Uday R Multi-level scheduling using single bit vector
US7768932B2 (en) * 2005-04-13 2010-08-03 Hewlett-Packard Development Company, L.P. Method for analyzing a system in a network
KR100814904B1 (en) * 2005-12-06 2008-03-19 한국전자통신연구원 On-Chip Communication architecture
US8054830B2 (en) * 2005-12-07 2011-11-08 Alcatel Lucent Managing the distribution of control protocol information in a network node
US8448162B2 (en) 2005-12-28 2013-05-21 Foundry Networks, Llc Hitless software upgrades
US7903654B2 (en) 2006-08-22 2011-03-08 Foundry Networks, Llc System and method for ECMP load sharing
US8238255B2 (en) 2006-11-22 2012-08-07 Foundry Networks, Llc Recovering from failures without impact on data traffic in a shared bus architecture
US20090279441A1 (en) 2007-01-11 2009-11-12 Foundry Networks, Inc. Techniques for transmitting failure detection protocol packets
US8271859B2 (en) 2007-07-18 2012-09-18 Foundry Networks Llc Segmented CRC design in high speed networks
US8037399B2 (en) 2007-07-18 2011-10-11 Foundry Networks, Llc Techniques for segmented CRC design in high speed networks
US8149839B1 (en) 2007-09-26 2012-04-03 Foundry Networks, Llc Selection of trunk ports and paths using rotation
US8190881B2 (en) 2007-10-15 2012-05-29 Foundry Networks Llc Scalable distributed web-based authentication
US8606940B2 (en) * 2008-02-06 2013-12-10 Alcatel Lucent DHCP address conflict detection/enforcement
US8335238B2 (en) * 2008-12-23 2012-12-18 International Business Machines Corporation Reassembling streaming data across multiple packetized communication channels
US8176026B2 (en) * 2009-04-14 2012-05-08 International Business Machines Corporation Consolidating file system backend operations with access of data
US8266504B2 (en) * 2009-04-14 2012-09-11 International Business Machines Corporation Dynamic monitoring of ability to reassemble streaming data across multiple channels based on history
US8271634B2 (en) * 2009-04-30 2012-09-18 Alcatel Lucent Buffer system for managing service measurement requests
US8090901B2 (en) 2009-05-14 2012-01-03 Brocade Communications Systems, Inc. TCAM management approach that minimize movements
US8971335B2 (en) * 2009-07-02 2015-03-03 Exafer Ltd System and method for creating a transitive optimized flow path
US8325733B2 (en) * 2009-09-09 2012-12-04 Exafer Ltd Method and system for layer 2 manipulator and forwarder
US8599850B2 (en) 2009-09-21 2013-12-03 Brocade Communications Systems, Inc. Provisioning single or multistage networks using ethernet service instances (ESIs)
US8274987B2 (en) * 2010-03-22 2012-09-25 International Business Machines Corporation Contention free pipelined broadcasting within a constant bisection bandwidth network topology
US10250528B2 (en) * 2012-11-13 2019-04-02 Netronome Systems, Inc. Packet prediction in a multi-protocol label switching network using operation, administration, and maintenance (OAM) messaging
US10834065B1 (en) 2015-03-31 2020-11-10 F5 Networks, Inc. Methods for SSL protected NTLM re-authentication and devices thereof
US10404698B1 (en) 2016-01-15 2019-09-03 F5 Networks, Inc. Methods for adaptive organization of web application access points in webtops and devices thereof

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4821265A (en) 1987-04-06 1989-04-11 Racal Data Communications Inc. Node architecture for communication networks
US4949338A (en) 1987-04-06 1990-08-14 Racal Data Communications Inc. Arbitration in multiprocessor communication node
DE4106183A1 (en) 1991-02-27 1992-09-03 Standard Elektrik Lorenz Ag DEVICE AND METHOD FOR D-CHANNEL PACKAGING
US5295133A (en) * 1992-02-12 1994-03-15 Sprint International Communications Corp. System administration in a flat distributed packet switch architecture
US5926482A (en) * 1994-05-05 1999-07-20 Sprint Communications Co. L.P. Telecommunications apparatus, system, and method with an enhanced signal transfer point
US5805072A (en) * 1994-12-12 1998-09-08 Ultra-High Speed Network VC connection method
US5600644A (en) * 1995-03-10 1997-02-04 At&T Method and apparatus for interconnecting LANs
US5581552A (en) * 1995-05-23 1996-12-03 At&T Multimedia server
CA2181293C (en) * 1995-07-17 2000-06-06 Charles Kevin Huscroft Atm layer device
US6067608A (en) * 1997-04-15 2000-05-23 Bull Hn Information Systems Inc. High performance mechanism for managing allocation of virtual memory buffers to virtual processes on a least recently used basis
US5909686A (en) * 1997-06-30 1999-06-01 Sun Microsystems, Inc. Hardware-assisted central processing unit access to a forwarding database
US6016310A (en) * 1997-06-30 2000-01-18 Sun Microsystems, Inc. Trunking support in a high performance network device
US6021132A (en) * 1997-06-30 2000-02-01 Sun Microsystems, Inc. Shared memory management in a switched network element
US5938736A (en) * 1997-06-30 1999-08-17 Sun Microsystems, Inc. Search engine architecture for a high performance multi-layer switch element

Cited By (364)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7688324B1 (en) * 1999-03-05 2010-03-30 Zoran Corporation Interactive set-top box having a unified memory architecture
US7782891B2 (en) 1999-03-17 2010-08-24 Broadcom Corporation Network switch memory interface configuration
US20050141501A1 (en) * 1999-03-17 2005-06-30 Broadcom Corporation Network switch having a programmable counter
US7720055B2 (en) * 1999-03-17 2010-05-18 Broadcom Corporation Method for handling IP multicast packets in network switch
US6859459B1 (en) * 1999-11-16 2005-02-22 Nec Corporation High-speed/high-reliability ether transmission system and I/F apparatus
US6934260B1 (en) * 2000-02-01 2005-08-23 Advanced Micro Devices, Inc. Arrangement for controlling learning of layer 3 network addresses in a network switch
US6925641B1 (en) * 2000-02-04 2005-08-02 Xronix Communications, Inc. Real time DSP load management system
US6674769B1 (en) * 2000-03-07 2004-01-06 Advanced Micro Devices, Inc. Simultaneous searching of layer 3 policy filter and policy cache in a network switch port
US8879395B2 (en) 2000-04-17 2014-11-04 Juniper Networks, Inc. Filtering and route lookup in a switching device
US7688727B1 (en) 2000-04-17 2010-03-30 Juniper Networks, Inc. Filtering and route lookup in a switching device
US7986629B1 (en) 2000-04-17 2011-07-26 Juniper Networks, Inc. Filtering and route lookup in a switching device
US8238246B2 (en) 2000-04-17 2012-08-07 Juniper Networks, Inc. Filtering and route lookup in a switching device
US8503304B2 (en) 2000-04-17 2013-08-06 Juniper Networks, Inc. Filtering and route lookup in a switching device
US7215637B1 (en) * 2000-04-17 2007-05-08 Juniper Networks, Inc. Systems and methods for processing packets
US9813339B2 (en) 2000-04-17 2017-11-07 Juniper Networks, Inc. Filtering and route lookup in a switching device
US9258228B2 (en) 2000-04-17 2016-02-09 Juniper Networks, Inc. Filtering and route lookup in a switching device
US6757742B1 (en) * 2000-05-25 2004-06-29 Advanced Micro Devices, Inc. Computer-based system for validating hash-based table lookup schemes in a network switch
US6891829B1 (en) * 2000-06-14 2005-05-10 Mindspeed Technologies, Inc. Communication packet processor with a look-up engine and content-addressable memory for retrieving context information for a core processor
US6826180B1 (en) * 2000-06-14 2004-11-30 Mindspeed Technologies, Inc. Communication packet processor with a look-up engine and content-addressable memory for storing summation blocks of context information for a core processor
US6845099B2 (en) * 2000-06-14 2005-01-18 Mindspeed Technologies, Inc. Communication packet processor with a look-up engine and content-addressable memory for modifying selectors to retrieve context information for a core processor
US6798778B1 (en) * 2000-06-14 2004-09-28 Mindspeed Technologies, Inc. Communication packet processor with a look-up engine and content-addressable memory for updating context information for a core processor
US6791983B1 (en) * 2000-06-14 2004-09-14 Mindspeed Technologies, Inc. Content-addressable memory for use with a communication packet processor to retrieve context information
US8027341B2 (en) * 2000-06-23 2011-09-27 Broadcom Corporation Switch having external address resolution interface
US20060256787A1 (en) * 2000-06-23 2006-11-16 Broadcom Corporation Switch having external address resolution interface
US6885666B1 (en) * 2000-08-14 2005-04-26 Advanced Micro Devices, Inc. Apparatus and method in a network switch for synchronizing transfer of a control tag to a switch fabric with transfer of frame data to a buffer memory
US7894427B2 (en) * 2000-09-12 2011-02-22 Cisco Technology, Inc. Stateful network address translation protocol implemented over a data network
US20110103387A1 (en) * 2000-09-12 2011-05-05 Cisco Technology, Inc. Stateful network address translation protocol implemented over a data network
US20060120366A1 (en) * 2000-09-12 2006-06-08 Cisco Technology, Inc. Stateful network address translation protocol implemented over a data network
US9042381B2 (en) 2000-09-12 2015-05-26 Cisco Technology, Inc. Stateful network address translation protocol implemented over a data network
US8675650B2 (en) 2000-09-12 2014-03-18 Cisco Technology, Inc. Stateful network address translation protocol implemented over a data network
US7680137B2 (en) * 2000-12-12 2010-03-16 Airbus France S.A.S. Process for transmitting asynchronous data packets
US20040114600A1 (en) * 2000-12-12 2004-06-17 Jean-Pierre Mao Process for transmitting asynchronous data packets
US7590134B2 (en) 2000-12-12 2009-09-15 Airbus France S.A.S. System for transmitting asynchronous data packets
US7586928B2 (en) * 2000-12-12 2009-09-08 Airbus France Process and device for deterministic transmission of asynchronous data in packets
US20020105958A1 (en) * 2000-12-12 2002-08-08 Jean-Pierre Mao Process and device for deterministic transmission of asynchronous data in packets
US6778526B1 (en) * 2000-12-22 2004-08-17 Nortel Networks Limited High speed access bus interface and protocol
US20020120888A1 (en) * 2001-02-14 2002-08-29 Jorg Franke Network co-processor for vehicles
US7260668B2 (en) * 2001-02-14 2007-08-21 Micronas Gmbh Network co-processor for vehicles
US6940854B1 (en) * 2001-03-23 2005-09-06 Advanced Micro Devices, Inc. Systems and methods for determining priority based on previous priority determinations
US20020143910A1 (en) * 2001-03-29 2002-10-03 Shih-Wei Chou Network hub
US20030007488A1 (en) * 2001-06-22 2003-01-09 Sriram Rao Efficient data transmission based on a policy
US7136935B2 (en) * 2001-06-22 2006-11-14 Inktomi Corporation Efficient data transmissions based on a policy
US20030041216A1 (en) * 2001-08-27 2003-02-27 Rosenbluth Mark B. Mechanism for providing early coherency detection to enable high performance memory updates in a latency sensitive multithreaded environment
US7216204B2 (en) * 2001-08-27 2007-05-08 Intel Corporation Mechanism for providing early coherency detection to enable high performance memory updates in a latency sensitive multithreaded environment
US20030095567A1 (en) * 2001-11-20 2003-05-22 Lo Man Kuk Real time protocol packet handler
EP1363429A2 (en) * 2002-05-13 2003-11-19 Alcatel Logically grouping physical ports into logical interfaces to expand bandwidth
US7280527B2 (en) 2002-05-13 2007-10-09 International Business Machines Corporation Logically grouping physical ports into logical interfaces to expand bandwidth
EP1363429A3 (en) * 2002-05-13 2004-06-23 Alcatel Logically grouping physical ports into logical interfaces to expand bandwidth
US20030210688A1 (en) * 2002-05-13 2003-11-13 International Business Machines Corporation Logically grouping physical ports into logical interfaces to expand bandwidth
US20030225965A1 (en) * 2002-06-04 2003-12-04 Ram Krishnan Hitless restart of access control module
US7181567B2 (en) * 2002-06-04 2007-02-20 Lucent Technologies Inc. Hitless restart of access control module
US7689485B2 (en) * 2002-08-10 2010-03-30 Cisco Technology, Inc. Generating accounting data based on access control list entries
US20040172346A1 (en) * 2002-08-10 2004-09-02 Cisco Technology, Inc., A California Corporation Generating accounting data based on access control list entries
US20050185437A1 (en) * 2003-01-10 2005-08-25 Gilbert Wolrich Memory interleaving
US20040139290A1 (en) * 2003-01-10 2004-07-15 Gilbert Wolrich Memory interleaving
US20110110372A1 (en) * 2003-06-05 2011-05-12 Juniper Networks, Inc. Systems and methods to perform hybrid switching and routing functions
US20100153961A1 (en) * 2004-02-10 2010-06-17 Hitachi, Ltd. Storage system having processor and interface adapters that can be increased or decreased based on required performance
US20050198429A1 (en) * 2004-03-02 2005-09-08 Nec Electronics Corporation Multilayer system and clock control method
US20060114895A1 (en) * 2004-11-30 2006-06-01 Broadcom Corporation CPU transmission of unmodified packets
US8170019B2 (en) * 2004-11-30 2012-05-01 Broadcom Corporation CPU transmission of unmodified packets
US20070118677A1 (en) * 2005-05-13 2007-05-24 Freescale Semiconductor Incorporated Packet switch having a crossbar switch that connects multiport receiving and transmitting elements
US20070136331A1 (en) * 2005-11-28 2007-06-14 Nec Laboratories America Storage-efficient and collision-free hash-based packet processing architecture and method
US7653670B2 (en) * 2005-11-28 2010-01-26 Nec Laboratories America, Inc. Storage-efficient and collision-free hash-based packet processing architecture and method
US9900410B2 (en) 2006-05-01 2018-02-20 Nicira, Inc. Private ethernet overlay networks over a shared ethernet in a virtual environment
US8611350B2 (en) * 2006-07-21 2013-12-17 Cortina Systems, Inc. Apparatus and method for layer-2 to 7 search engine for high speed network application
US20110080913A1 (en) * 2006-07-21 2011-04-07 Cortina Systems, Inc. Apparatus and method for layer-2 to 7 search engine for high speed network application
US20080126572A1 (en) * 2006-10-05 2008-05-29 Holt John M Multi-path switching networks
US8600017B2 (en) * 2006-12-19 2013-12-03 Samsung Electronics Co., Ltd. Call setup method and terminal in an IP network
US20080144785A1 (en) * 2006-12-19 2008-06-19 Dae-Hyun Lee Call setup method and terminal in an IP network
US20080225837A1 (en) * 2007-03-16 2008-09-18 Novell, Inc. System and Method for Multi-Layer Distributed Switching
US20090037629A1 (en) * 2007-08-01 2009-02-05 Broadcom Corporation Master slave core architecture with direct buses
US9876672B2 (en) 2007-09-26 2018-01-23 Nicira, Inc. Network operating system for managing and securing networks
US11683214B2 (en) 2007-09-26 2023-06-20 Nicira, Inc. Network operating system for managing and securing networks
US10749736B2 (en) 2007-09-26 2020-08-18 Nicira, Inc. Network operating system for managing and securing networks
US9083609B2 (en) 2007-09-26 2015-07-14 Nicira, Inc. Network operating system for managing and securing networks
US11757797B2 (en) 2008-05-23 2023-09-12 Vmware, Inc. Distributed virtual switch for virtualized computer systems
US11190463B2 (en) 2008-05-23 2021-11-30 Vmware, Inc. Distributed virtual switch for virtualized computer systems
US9559987B1 (en) * 2008-09-26 2017-01-31 Tellabs Operations, Inc Method and apparatus for improving CAM learn throughput using a cache
US10681136B2 (en) 2009-02-19 2020-06-09 Micron Technology, Inc. Memory network methods, apparatus, and systems
US20140032701A1 (en) * 2009-02-19 2014-01-30 Micron Technology, Inc. Memory network methods, apparatus, and systems
US9590919B2 (en) 2009-04-01 2017-03-07 Nicira, Inc. Method and apparatus for implementing and managing virtual switches
US11425055B2 (en) 2009-04-01 2022-08-23 Nicira, Inc. Method and apparatus for implementing and managing virtual switches
US8966035B2 (en) 2009-04-01 2015-02-24 Nicira, Inc. Method and apparatus for implementing and managing distributed virtual switches in several hosts and physical forwarding elements
US10931600B2 (en) 2009-04-01 2021-02-23 Nicira, Inc. Method and apparatus for implementing and managing virtual switches
US9306910B2 (en) 2009-07-27 2016-04-05 Vmware, Inc. Private allocated networks over shared communications infrastructure
US9697032B2 (en) 2009-07-27 2017-07-04 Vmware, Inc. Automated network configuration of virtual machines in a virtual lab environment
US9952892B2 (en) 2009-07-27 2018-04-24 Nicira, Inc. Automated network configuration of virtual machines in a virtual lab environment
US10949246B2 (en) 2009-07-27 2021-03-16 Vmware, Inc. Automated network configuration of virtual machines in a virtual lab environment
US10757234B2 (en) 2009-09-30 2020-08-25 Nicira, Inc. Private allocated networks over shared communications infrastructure
US9888097B2 (en) 2009-09-30 2018-02-06 Nicira, Inc. Private allocated networks over shared communications infrastructure
US11533389B2 (en) 2009-09-30 2022-12-20 Nicira, Inc. Private allocated networks over shared communications infrastructure
US10291753B2 (en) 2009-09-30 2019-05-14 Nicira, Inc. Private allocated networks over shared communications infrastructure
US11917044B2 (en) 2009-09-30 2024-02-27 Nicira, Inc. Private allocated networks over shared communications infrastructure
US11838395B2 (en) 2010-06-21 2023-12-05 Nicira, Inc. Private ethernet overlay networks over a shared ethernet in a virtual environment
US10951744B2 (en) 2010-06-21 2021-03-16 Nicira, Inc. Private ethernet overlay networks over a shared ethernet in a virtual environment
US8880468B2 (en) 2010-07-06 2014-11-04 Nicira, Inc. Secondary storage architecture for a network control system that utilizes a primary network information base
US9680750B2 (en) 2010-07-06 2017-06-13 Nicira, Inc. Use of tunnels to hide network addresses
US9007903B2 (en) 2010-07-06 2015-04-14 Nicira, Inc. Managing a network by controlling edge and non-edge switching elements
US9008087B2 (en) 2010-07-06 2015-04-14 Nicira, Inc. Processing requests in a network control system with multiple controller instances
US11743123B2 (en) 2010-07-06 2023-08-29 Nicira, Inc. Managed switch architectures: software managed switches, hardware managed switches, and heterogeneous managed switches
US8964528B2 (en) 2010-07-06 2015-02-24 Nicira, Inc. Method and apparatus for robust packet distribution among hierarchical managed switching elements
US10038597B2 (en) 2010-07-06 2018-07-31 Nicira, Inc. Mesh architectures for managed switching elements
US9049153B2 (en) 2010-07-06 2015-06-02 Nicira, Inc. Logical packet processing pipeline that retains state information to effectuate efficient processing of packets
US10103939B2 (en) 2010-07-06 2018-10-16 Nicira, Inc. Network control apparatus and method for populating logical datapath sets
US9077664B2 (en) 2010-07-06 2015-07-07 Nicira, Inc. One-hop packet processing in a network with managed switching elements
US8966040B2 (en) 2010-07-06 2015-02-24 Nicira, Inc. Use of network information base structure to establish communication between applications
US9106587B2 (en) 2010-07-06 2015-08-11 Nicira, Inc. Distributed network control system with one master controller per managed switching element
US9112811B2 (en) 2010-07-06 2015-08-18 Nicira, Inc. Managed switching elements used as extenders
US9172663B2 (en) 2010-07-06 2015-10-27 Nicira, Inc. Method and apparatus for replicating network information base in a distributed network control system with multiple controller instances
US8717895B2 (en) 2010-07-06 2014-05-06 Nicira, Inc. Network virtualization apparatus and method with a table mapping engine
US8718070B2 (en) 2010-07-06 2014-05-06 Nicira, Inc. Distributed network virtualization apparatus and method
US9231891B2 (en) 2010-07-06 2016-01-05 Nicira, Inc. Deployment of hierarchical managed switching elements
US8958292B2 (en) 2010-07-06 2015-02-17 Nicira, Inc. Network control apparatus and method with port security controls
US10320585B2 (en) 2010-07-06 2019-06-11 Nicira, Inc. Network control apparatus and method for creating and modifying logical switching elements
US9300603B2 (en) 2010-07-06 2016-03-29 Nicira, Inc. Use of rich context tags in logical data processing
US10326660B2 (en) 2010-07-06 2019-06-18 Nicira, Inc. Network virtualization apparatus and method
US11876679B2 (en) 2010-07-06 2024-01-16 Nicira, Inc. Method and apparatus for interacting with a network information base in a distributed network control system with multiple controller instances
US8959215B2 (en) 2010-07-06 2015-02-17 Nicira, Inc. Network virtualization
US9306875B2 (en) 2010-07-06 2016-04-05 Nicira, Inc. Managed switch architectures for implementing logical datapath sets
US11677588B2 (en) 2010-07-06 2023-06-13 Nicira, Inc. Network control apparatus and method for creating and modifying logical switching elements
US8743889B2 (en) 2010-07-06 2014-06-03 Nicira, Inc. Method and apparatus for using a network information base to control a plurality of shared network infrastructure switching elements
US8743888B2 (en) 2010-07-06 2014-06-03 Nicira, Inc. Network control apparatus and method
US8750164B2 (en) 2010-07-06 2014-06-10 Nicira, Inc. Hierarchical managed switch architecture
US10686663B2 (en) 2010-07-06 2020-06-16 Nicira, Inc. Managed switch architectures: software managed switches, hardware managed switches, and heterogeneous managed switches
US8750119B2 (en) 2010-07-06 2014-06-10 Nicira, Inc. Network control apparatus and method with table mapping engine
US8761036B2 (en) 2010-07-06 2014-06-24 Nicira, Inc. Network control apparatus and method with quality of service controls
US9363210B2 (en) 2010-07-06 2016-06-07 Nicira, Inc. Distributed network control system with one master controller per logical datapath set
US9692655B2 (en) 2010-07-06 2017-06-27 Nicira, Inc. Packet processing in a network with hierarchical managed switching elements
US8964598B2 (en) 2010-07-06 2015-02-24 Nicira, Inc. Mesh architectures for managed switching elements
US9391928B2 (en) 2010-07-06 2016-07-12 Nicira, Inc. Method and apparatus for interacting with a network information base in a distributed network control system with multiple controller instances
US8775594B2 (en) 2010-07-06 2014-07-08 Nicira, Inc. Distributed network control system with a distributed hash table
US11641321B2 (en) 2010-07-06 2023-05-02 Nicira, Inc. Packet processing for logical datapath sets
US11539591B2 (en) 2010-07-06 2022-12-27 Nicira, Inc. Distributed network control system with one master controller per logical datapath set
US8913483B2 (en) 2010-07-06 2014-12-16 Nicira, Inc. Fault tolerant managed switching element architecture
US8817620B2 (en) 2010-07-06 2014-08-26 Nicira, Inc. Network virtualization apparatus and method
US8817621B2 (en) 2010-07-06 2014-08-26 Nicira, Inc. Network virtualization apparatus
US11509564B2 (en) 2010-07-06 2022-11-22 Nicira, Inc. Method and apparatus for replicating network information base in a distributed network control system with multiple controller instances
US8830823B2 (en) 2010-07-06 2014-09-09 Nicira, Inc. Distributed control platform for large-scale production networks
US9525647B2 (en) 2010-07-06 2016-12-20 Nicira, Inc. Network control apparatus and method for creating and modifying logical switching elements
US11223531B2 (en) 2010-07-06 2022-01-11 Nicira, Inc. Method and apparatus for interacting with a network information base in a distributed network control system with multiple controller instances
US10021019B2 (en) 2010-07-06 2018-07-10 Nicira, Inc. Packet processing for logical datapath sets
US8837493B2 (en) 2010-07-06 2014-09-16 Nicira, Inc. Distributed network control apparatus and method
US8842679B2 (en) 2010-07-06 2014-09-23 Nicira, Inc. Control system that elects a master controller instance for switching elements
US8856419B2 (en) 2010-07-19 2014-10-07 International Business Machines Corporation Register access in distributed virtual bridge environment
US9043452B2 (en) 2011-05-04 2015-05-26 Nicira, Inc. Network control apparatus and method for port isolation
US9185069B2 (en) 2011-08-17 2015-11-10 Nicira, Inc. Handling reverse NAT in logical L3 routing
US9444651B2 (en) 2011-08-17 2016-09-13 Nicira, Inc. Flow generation from second level controller to first level controller to managed switching element
WO2013026049A1 (en) * 2011-08-17 2013-02-21 Nicira, Inc. Distributed logical l3 routing
US11804987B2 (en) 2011-08-17 2023-10-31 Nicira, Inc. Flow generation from second level controller to first level controller to managed switching element
US9059999B2 (en) 2011-08-17 2015-06-16 Nicira, Inc. Load balancing in a logical pipeline
US9407599B2 (en) 2011-08-17 2016-08-02 Nicira, Inc. Handling NAT migration in logical L3 routing
US10193708B2 (en) 2011-08-17 2019-01-29 Nicira, Inc. Multi-domain interconnect
US9369426B2 (en) 2011-08-17 2016-06-14 Nicira, Inc. Distributed logical L3 routing
US9356906B2 (en) 2011-08-17 2016-05-31 Nicira, Inc. Logical L3 routing with DHCP
US9350696B2 (en) 2011-08-17 2016-05-24 Nicira, Inc. Handling NAT in logical L3 routing
US9319375B2 (en) 2011-08-17 2016-04-19 Nicira, Inc. Flow templating in logical L3 routing
US10931481B2 (en) 2011-08-17 2021-02-23 Nicira, Inc. Multi-domain interconnect
US10027584B2 (en) 2011-08-17 2018-07-17 Nicira, Inc. Distributed logical L3 routing
US9461960B2 (en) 2011-08-17 2016-10-04 Nicira, Inc. Logical L3 daemon
US10868761B2 (en) 2011-08-17 2020-12-15 Nicira, Inc. Logical L3 daemon
US8958298B2 (en) 2011-08-17 2015-02-17 Nicira, Inc. Centralized logical L3 routing
US11695695B2 (en) 2011-08-17 2023-07-04 Nicira, Inc. Logical L3 daemon
US9276897B2 (en) 2011-08-17 2016-03-01 Nicira, Inc. Distributed logical L3 routing
US9306864B2 (en) 2011-10-25 2016-04-05 Nicira, Inc. Scheduling distribution of physical control plane data
US9319338B2 (en) 2011-10-25 2016-04-19 Nicira, Inc. Tunnel creation
US11669488B2 (en) 2011-10-25 2023-06-06 Nicira, Inc. Chassis controller
US9602421B2 (en) 2011-10-25 2017-03-21 Nicira, Inc. Nesting transaction updates to minimize communication
US9319337B2 (en) 2011-10-25 2016-04-19 Nicira, Inc. Universal physical control plane
US10505856B2 (en) 2011-10-25 2019-12-10 Nicira, Inc. Chassis controller
US9954793B2 (en) 2011-10-25 2018-04-24 Nicira, Inc. Chassis controller
US9319336B2 (en) 2011-10-25 2016-04-19 Nicira, Inc. Scheduling distribution of logical control plane data
US10884780B2 (en) 2011-11-15 2021-01-05 Nicira, Inc. Architecture of networks with middleboxes
US10949248B2 (en) 2011-11-15 2021-03-16 Nicira, Inc. Load balancing and destination network address translation middleboxes
US10922124B2 (en) 2011-11-15 2021-02-16 Nicira, Inc. Network control system for configuring middleboxes
US10191763B2 (en) 2011-11-15 2019-01-29 Nicira, Inc. Architecture of networks with middleboxes
US10235199B2 (en) 2011-11-15 2019-03-19 Nicira, Inc. Migrating middlebox state for distributed middleboxes
US10310886B2 (en) 2011-11-15 2019-06-04 Nicira, Inc. Network control system for configuring middleboxes
US10977067B2 (en) 2011-11-15 2021-04-13 Nicira, Inc. Control plane interface for logical middlebox services
US10514941B2 (en) 2011-11-15 2019-12-24 Nicira, Inc. Load balancing and destination network address translation middleboxes
US11593148B2 (en) 2011-11-15 2023-02-28 Nicira, Inc. Network control system for configuring middleboxes
US11372671B2 (en) 2011-11-15 2022-06-28 Nicira, Inc. Architecture of networks with middleboxes
US11740923B2 (en) 2011-11-15 2023-08-29 Nicira, Inc. Architecture of networks with middleboxes
US10089127B2 (en) 2011-11-15 2018-10-02 Nicira, Inc. Control plane interface for logical middlebox services
US20130182707A1 (en) * 2012-01-18 2013-07-18 International Business Machines Corporation Managing a global forwarding table in a distributed switch
US8861400B2 (en) 2012-01-18 2014-10-14 International Business Machines Corporation Requesting multicast membership information in a distributed switch in response to a miss event
US8891535B2 (en) * 2012-01-18 2014-11-18 International Business Machines Corporation Managing a global forwarding table in a distributed switch
US10033579B2 (en) 2012-04-18 2018-07-24 Nicira, Inc. Using transactions to compute and propagate network forwarding state
US10135676B2 (en) 2012-04-18 2018-11-20 Nicira, Inc. Using transactions to minimize churn in a distributed network control system
US9306845B2 (en) * 2012-11-22 2016-04-05 Hitachi Metals, Ltd. Communication system and network relay device
US20140140346A1 (en) * 2012-11-22 2014-05-22 Hitachi Metals, Ltd. Communication System and Network Relay Device
US9038136B2 (en) * 2013-05-22 2015-05-19 Unisys Corporation Control of simple network management protocol activity
US20140351885A1 (en) * 2013-05-22 2014-11-27 Unisys Corporation Control of simple network management protocol activity
US10033640B2 (en) 2013-07-08 2018-07-24 Nicira, Inc. Hybrid packet processing
US9571386B2 (en) 2013-07-08 2017-02-14 Nicira, Inc. Hybrid packet processing
US10680948B2 (en) 2013-07-08 2020-06-09 Nicira, Inc. Hybrid packet processing
US10181993B2 (en) 2013-07-12 2019-01-15 Nicira, Inc. Tracing network packets through logical and physical networks
US11201808B2 (en) 2013-07-12 2021-12-14 Nicira, Inc. Tracing logical network packets through physical network
US9407580B2 (en) 2013-07-12 2016-08-02 Nicira, Inc. Maintaining data stored with a packet
US10778557B2 (en) 2013-07-12 2020-09-15 Nicira, Inc. Tracing network packets through logical and physical networks
US10764238B2 (en) 2013-08-14 2020-09-01 Nicira, Inc. Providing services for logical networks
US11695730B2 (en) 2013-08-14 2023-07-04 Nicira, Inc. Providing services for logical networks
US9887960B2 (en) 2013-08-14 2018-02-06 Nicira, Inc. Providing services for logical networks
US9952885B2 (en) 2013-08-14 2018-04-24 Nicira, Inc. Generation of configuration files for a DHCP module executing within a virtualized container
US9577845B2 (en) 2013-09-04 2017-02-21 Nicira, Inc. Multiple active L3 gateways for logical networks
US9503371B2 (en) 2013-09-04 2016-11-22 Nicira, Inc. High availability L3 gateways for logical networks
US10003534B2 (en) 2013-09-04 2018-06-19 Nicira, Inc. Multiple active L3 gateways for logical networks
US10389634B2 (en) 2013-09-04 2019-08-20 Nicira, Inc. Multiple active L3 gateways for logical networks
US10382324B2 (en) 2013-09-15 2019-08-13 Nicira, Inc. Dynamically generating flows with wildcard fields
US9602398B2 (en) 2013-09-15 2017-03-21 Nicira, Inc. Dynamically generating flows with wildcard fields
US10498638B2 (en) 2013-09-15 2019-12-03 Nicira, Inc. Performing a multi-stage lookup to classify packets
US10708219B2 (en) * 2013-10-06 2020-07-07 Mellanox Technologies, Ltd. Simplified packet routing
US20170070474A1 (en) * 2013-10-06 2017-03-09 Mellanox Technologies Ltd. Simplified packet routing
US9785455B2 (en) 2013-10-13 2017-10-10 Nicira, Inc. Logical router
US10063458B2 (en) 2013-10-13 2018-08-28 Nicira, Inc. Asymmetric connection with external networks
US9575782B2 (en) 2013-10-13 2017-02-21 Nicira, Inc. ARP for logical router
US9910686B2 (en) 2013-10-13 2018-03-06 Nicira, Inc. Bridging between network segments with a logical router
US11029982B2 (en) 2013-10-13 2021-06-08 Nicira, Inc. Configuration of logical router
US10528373B2 (en) 2013-10-13 2020-01-07 Nicira, Inc. Configuration of logical router
US10693763B2 (en) 2013-10-13 2020-06-23 Nicira, Inc. Asymmetric connection with external networks
US9977685B2 (en) 2013-10-13 2018-05-22 Nicira, Inc. Configuration of logical router
US10193771B2 (en) 2013-12-09 2019-01-29 Nicira, Inc. Detecting and handling elephant flows
US11539630B2 (en) 2013-12-09 2022-12-27 Nicira, Inc. Inspecting operations of a machine to detect elephant flows
US11811669B2 (en) 2013-12-09 2023-11-07 Nicira, Inc. Inspecting operations of a machine to detect elephant flows
US10666530B2 (en) 2013-12-09 2020-05-26 Nicira, Inc. Detecting and handling large flows
US10158538B2 (en) 2013-12-09 2018-12-18 Nicira, Inc. Reporting elephant flows to a network controller
US9548924B2 (en) 2013-12-09 2017-01-17 Nicira, Inc. Detecting an elephant flow based on the size of a packet
US9838276B2 (en) 2013-12-09 2017-12-05 Nicira, Inc. Detecting an elephant flow based on the size of a packet
US9967199B2 (en) 2013-12-09 2018-05-08 Nicira, Inc. Inspecting operations of a machine to detect elephant flows
US11095536B2 (en) 2013-12-09 2021-08-17 Nicira, Inc. Detecting and handling large flows
US9569368B2 (en) 2013-12-13 2017-02-14 Nicira, Inc. Installing and managing flows in a flow table cache
US9996467B2 (en) 2013-12-13 2018-06-12 Nicira, Inc. Dynamically adjusting the number of flows allowed in a flow table cache
US10380019B2 (en) 2013-12-13 2019-08-13 Nicira, Inc. Dynamically adjusting the number of flows allowed in a flow table cache
US9225597B2 (en) 2014-03-14 2015-12-29 Nicira, Inc. Managed gateways peering with external router to attract ingress packets
US11025543B2 (en) 2014-03-14 2021-06-01 Nicira, Inc. Route advertisement by managed gateways
US9313129B2 (en) 2014-03-14 2016-04-12 Nicira, Inc. Logical router processing by network controller
US9419855B2 (en) 2014-03-14 2016-08-16 Nicira, Inc. Static routes for logical routers
US10567283B2 (en) 2014-03-14 2020-02-18 Nicira, Inc. Route advertisement by managed gateways
US9590901B2 (en) 2014-03-14 2017-03-07 Nicira, Inc. Route advertisement by managed gateways
US10164881B2 (en) 2014-03-14 2018-12-25 Nicira, Inc. Route advertisement by managed gateways
US10110431B2 (en) 2014-03-14 2018-10-23 Nicira, Inc. Logical router processing by network controller
US10411955B2 (en) 2014-03-21 2019-09-10 Nicira, Inc. Multiple levels of logical routers
US11252024B2 (en) 2014-03-21 2022-02-15 Nicira, Inc. Multiple levels of logical routers
US9503321B2 (en) 2014-03-21 2016-11-22 Nicira, Inc. Dynamic routing for logical routers
US9647883B2 (en) 2014-03-21 2017-05-09 Nicira, Inc. Multiple levels of logical routers
US11190443B2 (en) 2014-03-27 2021-11-30 Nicira, Inc. Address resolution using multiple designated instances of a logical router
US9413644B2 (en) 2014-03-27 2016-08-09 Nicira, Inc. Ingress ECMP in virtual distributed routing environment
US11736394B2 (en) 2014-03-27 2023-08-22 Nicira, Inc. Address resolution using multiple designated instances of a logical router
US9893988B2 (en) 2014-03-27 2018-02-13 Nicira, Inc. Address resolution using multiple designated instances of a logical router
US10659373B2 (en) 2014-03-31 2020-05-19 Nicira, Inc. Processing packets according to hierarchy of flow entry storages
US10193806B2 (en) 2014-03-31 2019-01-29 Nicira, Inc. Performing a finishing operation to improve the quality of a resulting hash
US11431639B2 (en) 2014-03-31 2022-08-30 Nicira, Inc. Caching of service decisions
US9385954B2 (en) 2014-03-31 2016-07-05 Nicira, Inc. Hashing techniques for use in a network environment
AU2014398480B2 (en) * 2014-06-24 2018-04-05 Hitachi, Ltd. Financial products trading system and financial products trading control method
US10540717B2 (en) 2014-06-24 2020-01-21 Hitachi, Ltd. Financial products trading system and financial products trading control method
US9742881B2 (en) 2014-06-30 2017-08-22 Nicira, Inc. Network virtualization using just-in-time distributed capability for classification encoding
US11483175B2 (en) 2014-09-30 2022-10-25 Nicira, Inc. Virtual distributed bridging
US10020960B2 (en) 2014-09-30 2018-07-10 Nicira, Inc. Virtual distributed bridging
US10511458B2 (en) 2014-09-30 2019-12-17 Nicira, Inc. Virtual distributed bridging
US11252037B2 (en) 2014-09-30 2022-02-15 Nicira, Inc. Using physical location to modify behavior of a distributed virtual network element
US9768980B2 (en) 2014-09-30 2017-09-19 Nicira, Inc. Virtual distributed bridging
US10250443B2 (en) 2014-09-30 2019-04-02 Nicira, Inc. Using physical location to modify behavior of a distributed virtual network element
US11178051B2 (en) 2014-09-30 2021-11-16 Vmware, Inc. Packet key parser for flow-based forwarding elements
US10469342B2 (en) 2014-10-10 2019-11-05 Nicira, Inc. Logical network traffic analysis
US11128550B2 (en) 2014-10-10 2021-09-21 Nicira, Inc. Logical network traffic analysis
US11283731B2 (en) 2015-01-30 2022-03-22 Nicira, Inc. Logical router with multiple routing components
US10079779B2 (en) 2015-01-30 2018-09-18 Nicira, Inc. Implementing logical router uplinks
US10700996B2 (en) 2015-01-30 2020-06-30 Nicira, Inc. Logical router with multiple routing components
US10129180B2 (en) 2015-01-30 2018-11-13 Nicira, Inc. Transit logical switch within logical router
US11799800B2 (en) 2015-01-30 2023-10-24 Nicira, Inc. Logical router with multiple routing components
US11601362B2 (en) 2015-04-04 2023-03-07 Nicira, Inc. Route server mode for dynamic routing between logical and physical networks
US10652143B2 (en) 2015-04-04 2020-05-12 Nicira, Inc. Route server mode for dynamic routing between logical and physical networks
US10038628B2 (en) 2015-04-04 2018-07-31 Nicira, Inc. Route server mode for dynamic routing between logical and physical networks
US9923760B2 (en) 2015-04-06 2018-03-20 Nicira, Inc. Reduction of churn in a network control system
US9967134B2 (en) 2015-04-06 2018-05-08 Nicira, Inc. Reduction of network churn based on differences in input state
US11050666B2 (en) 2015-06-30 2021-06-29 Nicira, Inc. Intermediate logical interfaces in a virtual distributed router environment
US11799775B2 (en) 2015-06-30 2023-10-24 Nicira, Inc. Intermediate logical interfaces in a virtual distributed router environment
US10361952B2 (en) 2015-06-30 2019-07-23 Nicira, Inc. Intermediate logical interfaces in a virtual distributed router environment
US10693783B2 (en) 2015-06-30 2020-06-23 Nicira, Inc. Intermediate logical interfaces in a virtual distributed router environment
US10225184B2 (en) 2015-06-30 2019-03-05 Nicira, Inc. Redirecting traffic in a virtual distributed router environment
US10348625B2 (en) 2015-06-30 2019-07-09 Nicira, Inc. Sharing common L2 segment in a virtual distributed router environment
US11533256B2 (en) 2015-08-11 2022-12-20 Nicira, Inc. Static route configuration for logical router
US10805212B2 (en) 2015-08-11 2020-10-13 Nicira, Inc. Static route configuration for logical router
US10129142B2 (en) 2015-08-11 2018-11-13 Nicira, Inc. Route configuration for logical router
US10230629B2 (en) 2015-08-11 2019-03-12 Nicira, Inc. Static route configuration for logical router
US10057157B2 (en) 2015-08-31 2018-08-21 Nicira, Inc. Automatically advertising NAT routes between logical routers
US10075363B2 (en) 2015-08-31 2018-09-11 Nicira, Inc. Authorization for advertised routes among logical routers
US10601700B2 (en) 2015-08-31 2020-03-24 Nicira, Inc. Authorization for advertised routes among logical routers
US11425021B2 (en) 2015-08-31 2022-08-23 Nicira, Inc. Authorization for advertised routes among logical routers
US10204122B2 (en) 2015-09-30 2019-02-12 Nicira, Inc. Implementing an interface between tuple and message-driven control entities
US11288249B2 (en) 2015-09-30 2022-03-29 Nicira, Inc. Implementing an interface between tuple and message-driven control entities
US10795716B2 (en) 2015-10-31 2020-10-06 Nicira, Inc. Static route types for logical routers
US10095535B2 (en) 2015-10-31 2018-10-09 Nicira, Inc. Static route types for logical routers
US11593145B2 (en) 2015-10-31 2023-02-28 Nicira, Inc. Static route types for logical routers
US10819621B2 (en) 2016-02-23 2020-10-27 Mellanox Technologies Tlv Ltd. Unicast forwarding of adaptive-routing notifications
US10333849B2 (en) 2016-04-28 2019-06-25 Nicira, Inc. Automatic configuration of logical routers on edge nodes
US11502958B2 (en) 2016-04-28 2022-11-15 Nicira, Inc. Automatic configuration of logical routers on edge nodes
US10805220B2 (en) 2016-04-28 2020-10-13 Nicira, Inc. Automatic configuration of logical routers on edge nodes
US11855959B2 (en) 2016-04-29 2023-12-26 Nicira, Inc. Implementing logical DHCP servers in logical networks
US11019167B2 (en) 2016-04-29 2021-05-25 Nicira, Inc. Management of update queues for network controller
US11601521B2 (en) 2016-04-29 2023-03-07 Nicira, Inc. Management of update queues for network controller
US10484515B2 (en) 2016-04-29 2019-11-19 Nicira, Inc. Implementing logical metadata proxy servers in logical networks
US10841273B2 (en) 2016-04-29 2020-11-17 Nicira, Inc. Implementing logical DHCP servers in logical networks
US10091161B2 (en) 2016-04-30 2018-10-02 Nicira, Inc. Assignment of router ID for logical routers
US10178029B2 (en) 2016-05-11 2019-01-08 Mellanox Technologies Tlv Ltd. Forwarding of adaptive routing notifications
US10560320B2 (en) 2016-06-29 2020-02-11 Nicira, Inc. Ranking of gateways in cluster
US10153973B2 (en) 2016-06-29 2018-12-11 Nicira, Inc. Installation of routing tables for logical router in route server mode
US11418445B2 (en) 2016-06-29 2022-08-16 Nicira, Inc. Installation of routing tables for logical router in route server mode
US10749801B2 (en) 2016-06-29 2020-08-18 Nicira, Inc. Installation of routing tables for logical router in route server mode
US10454758B2 (en) 2016-08-31 2019-10-22 Nicira, Inc. Edge node cluster network redundancy and fast convergence using an underlay anycast VTEP IP
US11539574B2 (en) 2016-08-31 2022-12-27 Nicira, Inc. Edge node cluster network redundancy and fast convergence using an underlay anycast VTEP IP
US10341236B2 (en) 2016-09-30 2019-07-02 Nicira, Inc. Anycast edge service gateways
US10911360B2 (en) 2016-09-30 2021-02-02 Nicira, Inc. Anycast edge service gateways
US11665242B2 (en) 2016-12-21 2023-05-30 Nicira, Inc. Bypassing a load balancer in a return path of network traffic
US10645204B2 (en) 2016-12-21 2020-05-05 Nicira, Inc. Dynamic recovery from a split-brain failure in edge nodes
US10237123B2 (en) 2016-12-21 2019-03-19 Nicira, Inc. Dynamic recovery from a split-brain failure in edge nodes
US10212071B2 (en) 2016-12-21 2019-02-19 Nicira, Inc. Bypassing a load balancer in a return path of network traffic
US10742746B2 (en) 2016-12-21 2020-08-11 Nicira, Inc. Bypassing a load balancer in a return path of network traffic
US10616045B2 (en) 2016-12-22 2020-04-07 Nicira, Inc. Migration of centralized routing components of logical router
US10200294B2 (en) 2016-12-22 2019-02-05 Mellanox Technologies Tlv Ltd. Adaptive routing based on flow-control credits
US11115262B2 (en) 2016-12-22 2021-09-07 Nicira, Inc. Migration of centralized routing components of logical router
US10200306B2 (en) 2017-03-07 2019-02-05 Nicira, Inc. Visualization of packet tracing operation results
US11336590B2 (en) 2017-03-07 2022-05-17 Nicira, Inc. Visualization of path between logical network endpoints
US10805239B2 (en) 2017-03-07 2020-10-13 Nicira, Inc. Visualization of path between logical network endpoints
US10681000B2 (en) * 2017-06-30 2020-06-09 Nicira, Inc. Assignment of unique physical network addresses for logical network addresses
US11595345B2 (en) 2017-06-30 2023-02-28 Nicira, Inc. Assignment of unique physical network addresses for logical network addresses
US10637800B2 (en) 2017-06-30 2020-04-28 Nicira, Inc. Replacement of logical network addresses with physical network addresses
US20190007364A1 (en) * 2017-06-30 2019-01-03 Nicira, Inc. Assignment of unique physical network addresses for logical network addresses
US10803040B2 (en) * 2017-08-28 2020-10-13 International Business Machines Corporation Efficient and accurate lookups of data by a stream processor using a hash table
US20190065493A1 (en) * 2017-08-28 2019-02-28 International Business Machines Corporation Efficient and accurate lookups of data by a stream processor using a hash table
US20190065494A1 (en) * 2017-08-28 2019-02-28 International Business Machines Corporation Efficient and accurate lookups of data by a stream processor using a hash table
US10817491B2 (en) * 2017-08-28 2020-10-27 International Business Machines Corporation Efficient and accurate lookups of data by a stream processor using a hash table
US10608887B2 (en) 2017-10-06 2020-03-31 Nicira, Inc. Using packet tracing tool to automatically execute packet capture operations
US11336486B2 (en) 2017-11-14 2022-05-17 Nicira, Inc. Selection of managed forwarding element for bridge spanning multiple datacenters
US10511459B2 (en) 2017-11-14 2019-12-17 Nicira, Inc. Selection of managed forwarding element for bridge spanning multiple datacenters
US10374827B2 (en) 2017-11-14 2019-08-06 Nicira, Inc. Identifier that maps to different networks at different datacenters
US10644995B2 (en) 2018-02-14 2020-05-05 Mellanox Technologies Tlv Ltd. Adaptive routing in a box
US11855898B1 (en) * 2018-03-14 2023-12-26 F5, Inc. Methods for traffic dependent direct memory access optimization and devices thereof
US10931560B2 (en) 2018-11-23 2021-02-23 Vmware, Inc. Using route type to determine routing protocol behavior
US10797998B2 (en) 2018-12-05 2020-10-06 Vmware, Inc. Route server for distributed routers using hierarchical routing protocol
US10938788B2 (en) 2018-12-12 2021-03-02 Vmware, Inc. Static routes for policy-based VPN
US11005724B1 (en) 2019-01-06 2021-05-11 Mellanox Technologies, Ltd. Network topology having minimal number of long connections among groups of network elements
US11095480B2 (en) 2019-08-30 2021-08-17 Vmware, Inc. Traffic optimization using distributed edge services
US11159343B2 (en) 2019-08-30 2021-10-26 Vmware, Inc. Configuring traffic optimization using distributed edge services
US11924080B2 (en) 2020-01-17 2024-03-05 VMware LLC Practical overlay network latency measurement in datacenter
US11606294B2 (en) 2020-07-16 2023-03-14 Vmware, Inc. Host computer configured to facilitate distributed SNAT service
US11616755B2 (en) 2020-07-16 2023-03-28 Vmware, Inc. Facilitating distributed SNAT service
US11611613B2 (en) 2020-07-24 2023-03-21 Vmware, Inc. Policy-based forwarding to a load balancer of a load balancing cluster
US11902050B2 (en) 2020-07-28 2024-02-13 VMware LLC Method for providing distributed gateway service at host computer
US11451413B2 (en) 2020-07-28 2022-09-20 Vmware, Inc. Method for advertising availability of distributed gateway service and machines at host computer
US11558426B2 (en) 2020-07-29 2023-01-17 Vmware, Inc. Connection tracking for container cluster
US11570090B2 (en) 2020-07-29 2023-01-31 Vmware, Inc. Flow tracing operation in container cluster
US11196628B1 (en) 2020-07-29 2021-12-07 Vmware, Inc. Monitoring container clusters
US11575594B2 (en) 2020-09-10 2023-02-07 Mellanox Technologies, Ltd. Deadlock-free rerouting for resolving local link failures using detour paths
US11411911B2 (en) 2020-10-26 2022-08-09 Mellanox Technologies, Ltd. Routing across multiple subnetworks using address mapping
US11736436B2 (en) 2020-12-31 2023-08-22 Vmware, Inc. Identifying routes with indirect addressing in a datacenter
US11336533B1 (en) 2021-01-08 2022-05-17 Vmware, Inc. Network visualization of correlations between logical elements and associated physical elements
US11848825B2 (en) 2021-01-08 2023-12-19 Vmware, Inc. Network visualization of correlations between logical elements and associated physical elements
US11870682B2 (en) 2021-06-22 2024-01-09 Mellanox Technologies, Ltd. Deadlock-free local rerouting for handling multiple local link failures in hierarchical network topologies
US11687210B2 (en) 2021-07-05 2023-06-27 Vmware, Inc. Criteria-based expansion of group nodes in a network topology visualization
US11711278B2 (en) 2021-07-24 2023-07-25 Vmware, Inc. Visualization of flow trace operation across multiple sites
US11706109B2 (en) 2021-09-17 2023-07-18 Vmware, Inc. Performance of traffic monitoring actions
US11855862B2 (en) 2021-09-17 2023-12-26 Vmware, Inc. Tagging packets for monitoring and analysis
US11677645B2 (en) 2021-09-17 2023-06-13 Vmware, Inc. Traffic monitoring
US11765103B2 (en) 2021-12-01 2023-09-19 Mellanox Technologies, Ltd. Large-scale network with high port utilization

Also Published As

Publication number Publication date
US6424659B2 (en) 2002-07-23

Similar Documents

Publication Publication Date Title
US6424659B2 (en) Multi-layer switching apparatus and method
US7065050B1 (en) Apparatus and method for controlling data flow in a network switch
US7114024B2 (en) Apparatus and method for managing memory defects
JP3806239B2 (en) Network switch with shared memory system
US6201789B1 (en) Network switch with dynamic backpressure per port
US6430626B1 (en) Network switch with a multiple bus structure and a bridge interface for transferring network data between different buses
US6460088B1 (en) Method and apparatus for port vector determination at egress
US6504846B1 (en) Method and apparatus for reclaiming buffers using a single buffer bit
JP4002334B2 (en) Network switch with read statistics access
JP3789395B2 (en) Packet processing device
US6389480B1 (en) Programmable arbitration system for determining priority of the ports of a network switch
US6260073B1 (en) Network switch including a switch manager for periodically polling the network ports to determine their status and controlling the flow of data between ports
US6052751A (en) Method and apparatus for changing the number of access slots into a memory
EP0854615A2 (en) Multiport polling system for a network switch
JPH10215266A (en) Network switch with another cut-through buffer
JPH10243013A (en) Method and device executing simultaneous read write cycle by network switch
WO2000072524A1 (en) Apparatus and method for programmable memory access slot assignment
US6907036B1 (en) Network switch enhancements directed to processing of internal operations in the network switch
US6778547B1 (en) Method and apparatus for improving throughput of a rules checker logic
US6335938B1 (en) Multiport communication switch having gigaport and expansion ports sharing the same time slot in internal rules checker
US6895015B1 (en) Dynamic time slot allocation in internal rules checker scheduler
US6816488B1 (en) Apparatus and method for processing data frames in a network switch
US6480490B1 (en) Interleaved access to address table in network switching system
US7031302B1 (en) High-speed stats gathering in a network switch
JP2003500926A (en) Method and apparatus for trunking multiple ports in a network switch

Legal Events

Date Code Title Description
AS Assignment

Owner name: FLOWWISE NETWORKS INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:VEERINA, MAHESH;REEL/FRAME:009850/0588

Effective date: 19980922

AS Assignment

Owner name: FLOWWISE NETWORKS INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:VISWANADHAM, KRISHNA;REEL/FRAME:009851/0253

Effective date: 19980717

AS Assignment

Owner name: NETWORK EQUIPMENT TECHNOLOGIES, INC., CALIFORNIA

Free format text: MERGER;ASSIGNOR:FLOWWISE NETWORKS, INC.;REEL/FRAME:011186/0804

Effective date: 19991214

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAT HOLDER NO LONGER CLAIMS SMALL ENTITY STATUS, ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: STOL); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

REFU Refund

Free format text: REFUND - SURCHARGE, PETITION TO ACCEPT PYMT AFTER EXP, UNINTENTIONAL (ORIGINAL EVENT CODE: R2551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 12

AS Assignment

Owner name: BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT, MASSACHUSETTS

Free format text: SECURITY INTEREST;ASSIGNORS:SONUS NETWORKS, INC.;SONUS FEDERAL, INC.;NETWORK EQUIPMENT TECHNOLOGIES, INC.;AND OTHERS;REEL/FRAME:033728/0409

Effective date: 20140627

AS Assignment

Owner name: TAQUA, INC., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A.;REEL/FRAME:044283/0361

Effective date: 20170701

Owner name: SONUS NETWORKS, INC., MASSACHUSETTS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A.;REEL/FRAME:044283/0361

Effective date: 20170701

Owner name: SONUS INTERNATIONAL, INC., MASSACHUSETTS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A.;REEL/FRAME:044283/0361

Effective date: 20170701

Owner name: SONUS FEDERAL, INC., VIRGINIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A.;REEL/FRAME:044283/0361

Effective date: 20170701

Owner name: PERFORMANCE TECHNOLOGIES, INCORPORATED, NEW YORK

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A.;REEL/FRAME:044283/0361

Effective date: 20170701

Owner name: NETWORK EQUIPMENT TECHNOLOGIES, INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A.;REEL/FRAME:044283/0361

Effective date: 20170701

AS Assignment

Owner name: SONUS NETWORKS, INC., MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NETWORK EQUIPMENT TECHNOLOGIES, INC.;REEL/FRAME:044904/0829

Effective date: 20171218