US20040202192A9 - Method for automatic resource reservation and communication that facilitates using multiple processing events for a single processing task - Google Patents

Method for automatic resource reservation and communication that facilitates using multiple processing events for a single processing task

Info

Publication number
US20040202192A9
US20040202192A9 (application US09/919,283)
Authority
US
United States
Prior art keywords
buffers
buffer
data
context
event
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US09/919,283
Other versions
US20020051460A1 (en
US7099328B2 (en
Inventor
Duane Galbi
Joseph Tompkins
Bruce Burns
Daniel Lussier
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bicameral LLC
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed litigation Critical https://patents.darts-ip.com/?family=22530010&utm_source=google_patent&utm_medium=platform_link&utm_campaign=public_patent_search&patent=US20040202192(A9) “Global patent litigation dataset” by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.
Priority to US09/919,283 priority Critical patent/US7099328B2/en
Application filed by Individual filed Critical Individual
Assigned to CONEXANT SYSTEMS, INC. reassignment CONEXANT SYSTEMS, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BURNS, BRUCE G., TOMPKINS, JOSEPH B., LUSSIER, DANIEL J., GALBI, DUANE E.
Publication of US20020051460A1 publication Critical patent/US20020051460A1/en
Assigned to MINDSPEED TECHNOLOGIES, INC. reassignment MINDSPEED TECHNOLOGIES, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CONEXANT SYSTEMS, INC.
Assigned to CONEXANT SYSTEMS, INC. reassignment CONEXANT SYSTEMS, INC. SECURITY AGREEMENT Assignors: MINDSPEED TECHNOLOGIES, INC.
Publication of US20040202192A9 publication Critical patent/US20040202192A9/en
Publication of US7099328B2 publication Critical patent/US7099328B2/en
Application granted granted Critical
Assigned to MINDSPEED TECHNOLOGIES, INC. reassignment MINDSPEED TECHNOLOGIES, INC. RELEASE OF SECURITY INTEREST Assignors: CONEXANT SYSTEMS, INC
Assigned to MINDSPEED TECHNOLOGIES, INC. reassignment MINDSPEED TECHNOLOGIES, INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: CONEXANT SYSTEMS, INC.
Assigned to MINDSPEED TECHNOLOGIES, INC. reassignment MINDSPEED TECHNOLOGIES, INC. CORRECTIVE DOCUMENT Assignors: CONEXANT SYSTEMS, INC.
Assigned to CHEDMIN COMMUNICATION LTD., LLC reassignment CHEDMIN COMMUNICATION LTD., LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MINDSPEED TECHNOLOGIES, INC.
Assigned to MINDSPEED TECHNOLOGIES, INC. reassignment MINDSPEED TECHNOLOGIES, INC. CORRECTIVE ASSIGNMENT TO CORRECT THE OMISSION OF SCHEDULE 1.01 (D) PREVIOUSLY RECORDED AT REEL: 020532 FRAME: 0916. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT. Assignors: CONEXANT SYSTEMS, INC.
Assigned to F. POSZAT HU, L.L.C. reassignment F. POSZAT HU, L.L.C. MERGER (SEE DOCUMENT FOR DETAILS). Assignors: CHEDMIN COMMUNICATION LTD., LLC
Assigned to INTELLECTUAL VENTURES ASSETS 67 LLC reassignment INTELLECTUAL VENTURES ASSETS 67 LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: F. POSZAT HU, L.L.C.
Assigned to BICAMERAL LLC reassignment BICAMERAL LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: INTELLECTUAL VENTURES ASSETS 67 LLC
Adjusted expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/54Store-and-forward switching systems 
    • H04L12/56Packet switching systems
    • H04L12/5601Transfer mode dependent, e.g. ATM
    • H04L12/5602Bandwidth control in ATM Networks, e.g. leaky bucket
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/24Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2425Traffic characterised by specific attributes, e.g. priority or QoS for supporting services specification, e.g. SLA
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/50Queue scheduling
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/50Queue scheduling
    • H04L47/62Queue scheduling characterised by scheduling criteria
    • H04L47/6215Individual queue per QOS, rate or priority
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/30Peripheral units, e.g. input or output ports
    • H04L49/3081ATM peripheral units, e.g. policing, insertion or extraction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/90Buffering arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/90Buffering arrangements
    • H04L49/901Buffering arrangements using storage descriptor, e.g. read or write pointers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/90Buffering arrangements
    • H04L49/9047Buffering arrangements including multiple buffers, e.g. buffer pools
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/90Buffering arrangements
    • H04L49/9063Intermediate storage in different physical parts of a node or terminal
    • H04L49/9068Intermediate storage in different physical parts of a node or terminal in the network interface card
    • H04L49/9073Early interruption upon arrival of a fraction of a packet
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/90Buffering arrangements
    • H04L49/9063Intermediate storage in different physical parts of a node or terminal
    • H04L49/9078Intermediate storage in different physical parts of a node or terminal using an external memory or storage device
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/90Buffering arrangements
    • H04L49/9084Reactions to storage capacity overflow
    • H04L49/9089Reactions to storage capacity overflow replacing packets in a storage arrangement, e.g. pushout
    • H04L49/9094Arrangements for simultaneous transmit and receive, e.g. simultaneous reading/writing from/to the storage element
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q11/00Selecting arrangements for multiplex systems
    • H04Q11/04Selecting arrangements for multiplex systems for time-division multiplexing
    • H04Q11/0428Integrated services digital network, i.e. systems for transmission of different types of digitised signals, e.g. speech, data, telecentral, television signals
    • H04Q11/0478Provisions for broadband connections
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/54Store-and-forward switching systems 
    • H04L12/56Packet switching systems
    • H04L12/5601Transfer mode dependent, e.g. ATM
    • H04L2012/5638Services, e.g. multimedia, GOS, QOS
    • H04L2012/5646Cell characteristics, e.g. loss, delay, jitter, sequence integrity
    • H04L2012/5651Priority, marking, classes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/54Store-and-forward switching systems 
    • H04L12/56Packet switching systems
    • H04L12/5601Transfer mode dependent, e.g. ATM
    • H04L2012/5678Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
    • H04L2012/5679Arbitration or scheduling
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/54Store-and-forward switching systems 
    • H04L12/56Packet switching systems
    • H04L12/5601Transfer mode dependent, e.g. ATM
    • H04L2012/5678Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
    • H04L2012/5684Characteristics of traffic flows

Definitions

  • the present invention is related to the field of communications, and more particularly to integrated circuits that process communication packets.
  • each packet contains a header and a payload.
  • the header contains control information, such as addressing or channel information, that indicates how the packet should be handled.
  • the payload contains the information that is being transferred.
  • Some examples of the types of packets used in communication systems include Asynchronous Transfer Mode (ATM) cells, Internet Protocol (IP) packets, frame relay packets, Ethernet packets, or some other packet-like information block.
  • as used herein, the term “packet” is intended to include packet segments.
  • Integrated circuits termed “traffic stream processors” have been designed to apply robust functionality to high-speed packet streams. Robust functionality is critical with today's diverse but converging communication systems. Stream processors must handle multiple protocols and inter-work between streams of different protocols. Stream processors must also ensure that quality-of service constraints, priority, and bandwidth requirements are met. This functionality must be applied differently to different streams, and there may be thousands of different streams.
  • the integrated circuit includes a core processor.
  • the processor handles a series of tasks, termed “events”. These events consist of tasks such as CPU processing steps as well as the scheduling of subsequent events. These subsequently scheduled events may consist of CAM lookups, DMA data transfers, or other generic events based on conditions in the current event. All events have an associated service address, “context information” and “data”.
  • Information about the event such as the resource that requested the event, how much data is associated with the event, and other key information from the event requester is stored in “special state” information associated with the event.
  • the external resource supplies the core processor with a memory pointer to “context” information and it also supplies the data to be associated with the event.
  • the context pointer is used to fetch the context from external memory and to store this “context” information in memory located on the chip. If the required context data has already been fetched onto the chip, the hardware recognizes this fact and sets the on-chip context pointer to point to this already pre-fetched context data. Only a small number of the system “contexts” are cached on the chip at any one time, and their allocation needs to be managed and sometimes shared among multiple processing events. Each cached “context” has an in-use counter so that one context can be associated with multiple sets of data. The rest of the system “contexts” are stored in external memory. This context fetch mechanism and the storage of these contexts in the co-processor is described in the above referenced co-pending applications.
  • data and context information for a number of events are stored in buffers in a co-processor.
  • the core processor needs the service address of the event as well as the “context” and “data” associated with the event.
  • the service address is the starting address for the instructions used to service the event.
  • the core processor branches to the service address in order to start servicing the event.
  • the present invention adds flexibility and additional functions to an integrated circuit such as that described in the above referenced co-pending applications.
  • special state information is effectively stored together with associated data in data buffers.
  • the data buffers do not have associated in-use counters.
  • separate logical buffers are provided for special state information and for the associated data buffer.
  • each data buffer and each special state information buffer (hereinafter termed resources) has an associated in-use counter. Multiple events can share the same resource.
  • the counter associated with a resource is incremented when a resource becomes associated with a particular event.
  • the counter associated with a resource is decremented when an event completes the use of that particular resource.
  • when the in-use count for a resource becomes zero, it indicates that the resource is unassigned and can be assigned to a new event.
  • two events can point to (i.e. utilize) the same data buffer and/or the same special state information buffer.
  • content of a data buffer or a special state information buffer can be passed directly from one event to another event without reading the data into and out of memory.
  • the in-use counter is particularly useful to facilitate the timing of DMA requests without need for explicit control by an external program.
  • two events can use the same data buffer. This is possible since the special state information is stored in a separate buffer.
  • one can have one data buffer associated with multiple context buffers since the special state information is stored separately from the associated data.
  • the present invention also adds a communication mechanism which allows an event to pass a multi-bit message to subsequent events. This message passing mechanism does not require that the two events share any of the same context, data, or special state resources.
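  • A minimal C sketch may make this reference-counting discipline concrete (the names resource_t, resource_attach, resource_release and resource_alloc are illustrative assumptions, not taken from the patent; in the integrated circuit this logic is implemented in hardware):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define NUM_RESOURCES 16   /* e.g. sixteen data or special state buffers */

typedef struct {
    unsigned in_use;       /* number of events currently using this resource */
    /* ... buffer contents would live here ... */
} resource_t;

static resource_t pool[NUM_RESOURCES];

/* An event takes a reference to a resource (possibly one already shared). */
void resource_attach(resource_t *r)
{
    r->in_use++;
}

/* An event finishes with a resource; returns true when the count reaches
 * zero, i.e. the resource is unassigned and may be given to a new event. */
bool resource_release(resource_t *r)
{
    assert(r->in_use > 0);
    return --r->in_use == 0;
}

/* Assign an unassigned resource: lowest index with an in-use count of zero. */
resource_t *resource_alloc(void)
{
    for (int i = 0; i < NUM_RESOURCES; i++) {
        if (pool[i].in_use == 0) {
            pool[i].in_use = 1;
            return &pool[i];
        }
    }
    return NULL;           /* every resource is in use by some event */
}
```

  • Two events that share a resource simply hold two references to it, so its contents can pass from one event to the next without a round trip through external memory.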
  • FIG. 1 is an overall block diagram of a packet processing integrated circuit in an example of the invention.
  • FIG. 2 is a block diagram that illustrates packet processing stages and the pipe-lining used by the circuit in an example of the invention.
  • FIG. 3 is a diagram illustrating circuitry in the co-processing relating to context and data buffer processing in an example of the invention.
  • FIG. 4 is a block program flow diagram illustrating buffer correlation and in-use counts in an example of the invention.
  • FIG. 5 is a block diagram of the buffer management circuitry in an example of the invention.
  • FIG. 6 is a block diagram showing the details of the data and special state information buffers in an example of the invention.
  • FIG. 7 is a block program flow diagram illustrating how data buffers are passed between events in an example of the invention.
  • FIG. 8 is a block program flow diagram illustrating how state information buffers are passed between events in an example of the invention.
  • FIGS. 9A and 9B are block program flow diagrams illustrating examples of how DMA commands are handled in an example of the invention.
  • FIG. 1 is a block diagram that illustrates a packet processing integrated circuit 100 in an example of the invention. It should be understood that the present invention can also be applied to other types of processors. The operation of the circuit 100 will first be described with reference to FIGS. 1 to 4 and then the operation of different embodiments of the invention will be described with reference to FIGS. 5 to 9 .
  • Integrated circuit 100 includes a core processor 104 , a scheduler 105 , receive interface 106 , co-processor circuitry 107 , transmit interface 108 , and memory interface 109 . These components may be interconnected through a memory crossbar or some other type of internal interface.
  • Receive interface 106 is coupled to communication system 101 .
  • Transmit interface 108 is coupled to communication system 102 .
  • Memory interface is coupled to memory 103 .
  • Communication system 101 could be any device that supplies communication packets with one example being the switching fabric in an Asynchronous Transfer Mode (ATM) switch.
  • Communication system 102 could be any device that receives communication packets with one example being the physical line interface in the ATM switch.
  • Memory 103 could be any memory device with one example being Random Access Memory (RAM) integrated circuits.
  • Receive interface 106 could be any circuitry configured to receive packets with some examples including UTOPIA interfaces or Peripheral Component Interconnect (PCI) interfaces.
  • Transmit interface 108 could be any circuitry configured to transfer packets with some examples including UTOPIA interfaces or PCI interfaces.
  • Core processor 104 is a micro-processor that executes networking application software. Core processor 104 supports an instruction set that has been tuned for networking operations, especially context switching. As described herein, core processor 104 has the following characteristics: 166 MHz, pipelined single-cycle operation, RISC-based design, 32-bit instruction and register set, instruction cache, 8 KB zero-latency scratchpad memory, interrupt/trap/halt support, and C compiler readiness.
  • Scheduler 105 comprises circuitry configured to schedule and initiate packet processing that typically results in packet transmissions from integrated circuit 100 , although scheduler 105 may also schedule and initiate other activities. Scheduler 105 schedules upcoming events, and as time passes, selects scheduled events for processing and reschedules unprocessed events. Scheduler 105 transfers processing requests for selected events to co-processor circuitry 107 . Scheduler 105 can handle multiple independent schedules to provide prioritized scheduling across multiple traffic streams. To provide scheduling, scheduler 105 may execute a guaranteed cell rate algorithm to implement a leaky bucket or a token bucket scheduling system. The guaranteed cell rate algorithm is implemented through a cache that holds algorithm parameters. Scheduler 105 is described in detail in the above referenced co-pending patent applications.
  • Co-processor circuitry 107 receives communication packets from receive interface 106 and memory interface 109 and stores the packets in internal data buffers. Co-processor circuitry 107 correlates each packet to context information describing how the packet should be handled. Co-processor circuitry 107 stores the correlated context information in internal context buffers and associates individual data buffers with individual context buffers to maintain the correlation between individual packets and context information. Importantly, co-processor circuitry 107 ensures that only one copy of the correlated context information is present in the context buffers to maintain coherency. Multiple data buffers are associated with a single context buffer to maintain the correlation between the multiple packets and the single copy of the context information.
  • Co-processor circuitry 107 also determines a prioritized processing order for core processor 104 .
  • the prioritized processing order controls the sequence in which core processor 104 handles the communication packets.
  • the prioritized processing order is typically based on the availability of all of the resources and information that are required by core processor 104 to process a given communication packet. Resource state bits are set when resources become available, so co-processor circuitry 107 may determine when all of these resources are available by processing the resource state bits. If desired, the prioritized processing order may be based on information in packet handling requests.
  • Co-processor circuitry 107 selects scheduling algorithms based on internal scheduling state bits and uses the selected scheduling algorithms to determine the prioritized processing order.
  • co-processor circuitry 107 is externally controllable. Co-processor circuitry 107 is described in more detail with respect to FIGS. 2-4.
  • Memory interface 109 comprises circuitry configured to exchange packets with external buffers in memory 103 .
  • Memory interface 109 maintains a pointer cache that holds pointers to the external buffers.
  • Memory interface 109 allocates the external buffers when entities, such as core processor 104 or co-processor circuitry 107 , read pointers from the pointer cache.
  • Memory interface 109 de-allocates the external buffers when the entities write the pointers to the pointer cache.
  • external buffer allocation and de-allocation is available through an on-chip cache read/write.
  • Memory interface 109 also manages various external buffer classes, and handles conditions such as external buffer exhaustion. Memory interface 109 is described in detail in the above referenced patent applications.
  • receive interface 106 receives new packets from communication system 101 , and scheduler 105 initiates transmissions of previously received packets that are typically stored in memory 103 .
  • receive interface 106 and scheduler 105 transfer requests to co-processor circuitry 107 .
  • core processor 104 may also request packet handling from co-processor circuitry 107 .
  • Co-processor circuitry 107 fields the requests, correlates the packets with their respective context information, and creates a prioritized work queue for core processor 104 .
  • Core processor 104 processes the packets and context information in order from the prioritized work queue.
  • co-processor circuitry 107 operates in parallel with core processor 104 to offload the context correlation and prioritization tasks to conserve important core processing capacity.
  • In response to packet handling, core processor 104 typically initiates packet transfers to either memory 103 or communication system 102. If the packet is transferred to memory 103, then core processor 104 instructs scheduler 105 to schedule and initiate future packet transmission or processing.
  • scheduler 105 operates in parallel with core processor 104 to offload scheduling tasks and conserve important core processing capacity.
  • Co-processor circuitry 107 transfers packets directly to communication system 102 through transmit interface 108 .
  • Co-processor circuitry 107 transfers packets to memory 103 through memory interface 109 with an on-chip pointer cache.
  • Memory interface 109 transfers packets from memory 103 to communication system 102 through transmit interface 108 .
  • Co-processor circuitry 107 transfers context information from a context buffer through memory interface 109 to memory 103 if there are no packets in the data buffers that are correlated with the context information in the context buffer.
  • memory interface 109 operates in parallel with core processor 104 to offload external memory management tasks and conserve important core processing capacity.
  • FIGS. 2-4 depict one example of co-processor circuitry. Those skilled in the art will understand that FIGS. 2-4 have been simplified for clarity.
  • FIG. 2 illustrates how co-processor circuitry 107 provides pipelined operation in an example of the invention.
  • FIG. 2 is vertically separated by dashed lines that indicate five packet processing stages: 1) context resolution, 2) context fetching, 3) priority queuing, 4) software application, and 5) context flushing.
  • Co-processor circuitry 107 handles stages 1-3 to provide hardware acceleration.
  • Core processor 104 handles stage 4 to provide software control with optimized efficiency due to stages 1-3.
  • Co-processor circuitry 107 also handles stage 5.
  • Co-processor circuitry 107 has eight pipelines through stages 1-3 and 5 to concurrently process multiple packet streams.
  • requests to handle packets are resolved to a context for each packet in the internal data buffers.
  • the requests are generated by receive interface 106 , scheduler 105 , and core processor 104 in response to incoming packets, scheduled transmissions, and application software instructions.
  • the context information includes a channel descriptor that has information regarding how the packet is to be handled.
  • a channel descriptor may indicate service address information, traffic management parameters, channel status, stream queue information, and thread status.
  • Channel descriptors are identified by channel identifiers. Channel identifiers may be indicated by the request.
  • a map may be used to translate selected bits from the packet header to a channel identifier.
  • a hardware engine may also perform a sophisticated search for the channel identifier based on various information. Different algorithms that calculate the channel identifier from the various information may be selected by setting correlation state bits in co-processor circuitry 107 . Thus, the technique used for context resolution is externally controllable.
  • in stage 2, context information is fetched, if necessary, by using the channel identifiers to transfer the channel descriptors to internal context buffers. Prior to the transfer, the context buffers are first checked for a matching channel identifier and validity bit. If a match is found, then the context buffer with the existing channel descriptor is associated with the corresponding internal data buffer holding the packet.
  • requests with available context are prioritized and arbitrated for core processor 104 handling.
  • the priority may be indicated by the request, or it may be determined by the source of the request.
  • the priority queues 1-12 are 8 entries deep. Priority queues 1-12 are also ranked in a priority order by queue number.
  • the priority for each request is determined, and when the context and data buffers for the request are valid, an entry for the request is placed in one of the priority queues that corresponds to the determined priority.
  • the entries in the priority queues point to a pending request state RAM that contains state information for each data buffer.
  • the state information includes a data buffer pointer, a context pointer, a context validity bit, a requester indicator, port status, and a channel descriptor loaded indicator. This state information was referred to earlier in this document as the special state information associated with an event. These two terms may be used interchangeably.
  • the work queue indicates the selected priority queue entry that core processor 104 should handle next.
  • the requests in priority queues are arbitrated using one of various algorithms such as round robin, service-to-completion, weighted fair queuing, simple fairness, first-come first-serve, allocation through priority promotion, and software override.
  • the algorithms may be selected through scheduling state bits in co-processor circuitry 107 .
  • the technique used for prioritization is externally controllable.
  • Co-processor circuitry 107 loads core processor 104 registers with the channel descriptor information for the next entry in the work queue.
  • core processor 104 executes the software application to process the next entry in the work queue which points to a portion of the pending state request RAM that identifies the data buffer and context buffer.
  • the context buffer indicates one or more service addresses that direct the core processor 104 to the proper functions within the software application.
  • One such function of the software application is traffic shaping to conform to service level agreements.
  • Other functions include header manipulation and translation, queuing algorithms, statistical accounting, buffer management, inter-working, header encapsulation or stripping, cyclic redundancy checking, segmentation and reassembly, frame relay formatting, multicasting, and routing. Any context information changes made by the core processor 104 are linked back to the context buffer in real time.
  • in stage 5, context is flushed.
  • core processor 104 instructs coprocessor circuitry 107 to transfer packets to off-chip memory 103 or transmit interface 108 . If no other data buffers are currently associated with the pertinent context information, then co-processor circuitry 107 transfers the context information to off-chip memory 103 .
  • FIG. 3 is a block diagram that illustrates co-processor circuitry 107 in an example of the invention.
  • Co-processor circuitry 107 comprises a hardware engine that is firmware-programmable in that it operates in response to state bits and register content.
  • core processor 104 is a micro-processor that executes application software.
  • Co-processor circuitry 107 operates in parallel with core processor 104 to conserve core processor capacity by off-loading numerous tasks from the core processor 104 .
  • Co-processor circuitry 107 comprises context resolution 310 , control 311 , arbiter 312 , priority queues 313 , data buffers 314 , context buffers 315 , context DMA 316 , and data DMA 317 .
  • Data buffers 314 hold packets and context buffers 315 hold context information, such as a channel descriptor.
  • Data buffers 314 are relatively small and of a fixed size, such as 64 bytes, so if the packets are ATM cells, each data buffer holds only a single ATM cell and ATM cells do not cross data buffer boundaries.
  • Individual data buffers 314 are associated with individual context buffers 315 as indicated by the downward arrows.
  • Priority queues 313 hold entries that represent individual data buffers 314 as indicated by the upward arrows.
  • a packet in one of the data buffers is associated with its context information in an associated one of the context buffers 315 and with an entry in priority queues 313 .
  • Arbiter 312 presents a next entry from priority queues 313 to core processor 104 which handles the associated packet in the order determined by arbiter 312 .
  • Context DMA 316 exchanges context information between memory 103 and context buffers 315 through memory interface 109 . Context DMA automatically updates queue pointers in the context information.
  • Data DMA 317 exchanges packets between data buffers 314 and memory 103 through memory interface 109 . Data DMA 317 also transfers packets from memory 103 to transmit interface 108 through memory interface 109 .
  • Data DMA 317 signals context DMA 316 when transferring packets off-chip, and context DMA 316 determines if the associated context should be transferred to off-chip memory 103 . Both DMAs 316 - 317 may be configured to perform CRC calculations.
  • control 311 receives the new packet and a request to handle the new packet from receive interface 106 .
  • Control 311 receives and places the packet in one of the data buffers 314 and transfers the packet header to context resolution 310 .
  • based on gap state bits, a gap may be created in the data buffer between the packet header and the payload, so core processor 104 can subsequently write encapsulation information to the gap without having to create the gap.
  • Context resolution 310 processes the packet header to correlate the packet with a channel descriptor, although in some cases, receive interface 106 may have already performed this context resolution.
  • the channel descriptor comprises information regarding packet transfer over a channel.
  • Control 311 determines if the channel descriptor that has been correlated with the packet is already in one of the context buffers 315 and is valid. If so, control 311 does not request the channel descriptor from off-chip memory 103. Instead, control 311 associates the particular data buffer 314 holding the new packet with the particular context buffer 315 that already holds the correlated channel descriptor. This prevents multiple copies of the channel descriptor from existing in context buffers 315. Control 311 then increments an in-use count for the channel descriptor to track the number of data buffers 314 that are associated with the same channel descriptor.
  • control 311 requests the channel descriptor from context DMA 316 .
  • Context DMA 316 transfers the requested channel descriptor from off-chip memory 103 to one of the context buffers 315 using the channel descriptor identifier, which may be an address, that was determined during context resolution.
  • Control 311 associates the context buffer 315 holding the transferred channel descriptor with the data buffer 314 holding the new packet to maintain the correlation between the new packet and the channel descriptor.
  • Control 311 also sets the in-use counter for the transferred channel descriptor to one and sets the validity bit to indicate context information validity.
  • Control 311 also determines a priority for the new packet.
  • the priority may be determined by the source of the new packet, header information, or channel descriptor.
  • Control 311 places an entry in one of priority queues 313 based on the priority. The entry indicates the data buffer 314 that has the new packet.
  • Arbiter 312 implements an arbitration scheme to select the next entry for core processor 104 .
  • Core processor 104 reads the next entry and processes the associated packet and channel descriptor in the particular data buffer 314 and context buffer 315 indicated in the next entry.
  • Each priority queue has a service-to-completion bit and a sleep bit.
  • when the service-to-completion bit is set, the priority queue has a higher priority than any priority queues without the service-to-completion bit set.
  • when the sleep bit is set, the priority queue is not processed until the sleep bit is cleared.
  • the ranking of the priority queue number breaks priority ties.
  • Each priority queue has a weight from 0-15 to ensure a certain percentage of core processor handling. After an entry from a priority queue is handled, its weight is decremented by one if the service-to-completion bit is not set. The weights are re-initialized to a default value after 128 requests have been handled or if all weights are zero.
  • Each priority queue has a high and low watermark.
  • when the number of entries in a priority queue exceeds the high watermark, the service-to-completion bit is set.
  • when the number of entries falls below the low watermark, the service-to-completion bit is cleared.
  • the high watermark is typically set at the number of data buffers allocated to the priority queue.
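  • The arbitration rules above can be pictured with a short C sketch (a software simplification under assumed names; in the integrated circuit this selection is performed by arbiter hardware). Queues with the service-to-completion bit set outrank all others, sleeping and empty queues are skipped, the queue number breaks remaining ties, and the weights meter each queue's share of core processor handling:

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_QUEUES 12

typedef struct {
    bool service_to_completion;  /* set above the high watermark */
    bool sleep;                  /* queue is not processed while set */
    uint8_t weight;              /* 0-15: share of core processor handling */
    unsigned depth;              /* entries currently queued (up to 8) */
} pqueue_t;

/* Pick the next queue to service; returns -1 if nothing is eligible. */
int arbiter_select(pqueue_t q[NUM_QUEUES])
{
    int best = -1;
    for (int i = 0; i < NUM_QUEUES; i++) {
        if (q[i].sleep || q[i].depth == 0)
            continue;            /* skip sleeping or empty queues */
        if (q[i].weight == 0 && !q[i].service_to_completion)
            continue;            /* weight exhausted for this round */
        if (best < 0 ||
            (q[i].service_to_completion && !q[best].service_to_completion))
            best = i;            /* service-to-completion outranks the rest;
                                    the lowest queue number breaks ties */
    }
    if (best >= 0 && !q[best].service_to_completion)
        q[best].weight--;        /* weight decrements per handled entry */
    return best;
}
```

  • The re-initialization of the weights to default values after 128 requests, or when all weights reach zero, is omitted from the sketch for brevity.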
  • the context buffers 315 each have an associated in-use counter.
  • the in-use counters associated with the context buffers are not shown in FIG. 3, but they are shown in FIG. 6.
  • Core processor 104 may instruct control 311 to transfer the packet to off-chip memory 103 through data DMA 317 .
  • Control 311 decrements the context buffer in-use counter, and if the in-use counter is zero (no data buffers 314 are associated with the context buffer 315 holding the channel descriptor), then control 311 instructs context DMA 316 to transfer the channel descriptor to off-chip memory 103 .
  • Control 311 also clears the validity bit. This same general procedure is followed when scheduler 105 requests packet transmission, except that in response to the request from scheduler 105 , control 311 instructs data DMA 317 to transfer the packet from memory 103 to one of data buffers 314 .
  • the present invention provides additional circuitry associated with data buffers 314 .
  • the additional circuitry provided by the present invention is shown in FIG. 6 and it will be explained in detail later.
  • FIG. 4 is a flow diagram that illustrates the operation of coprocessor circuitry 107 when correlating buffers in an example of the invention.
  • Co-processor circuitry 107 has eight pipelines to concurrently process multiple packet streams in accord with FIG. 3.
  • a packet is stored in a data buffer, and the packet is correlated to a channel descriptor as identified by a channel identifier.
  • the channel descriptor comprises the context information regarding how packets in the different channels are to be handled.
  • context buffers 315 are checked for a valid version of the correlated channel descriptor. This entails matching the correlated channel identifier with a channel identifier in a context buffer that is valid. If the correlated channel descriptor is not in a context buffer that is valid, then the channel descriptor is retrieved from memory 103 and stored in a context buffer using the channel identifier. The data buffer holding the packet is associated with the context buffer holding the transferred channel descriptor. An in-use counter for the context buffer holding the channel descriptor is set to one. A validity bit for the context buffer is set to indicate that the channel descriptor in the context buffer is valid. If the correlated channel descriptor is already in a context buffer that is valid, then the data buffer holding the packet is associated with the context buffer already holding the channel descriptor. The in-use counter for the context buffer holding the channel descriptor is incremented.
  • core processor 104 instructs co-processor circuitry 107 to transfer packets to off-chip memory 103 or transmit interface 108 .
  • Data DMA 317 transfers the packet and signals context DMA 316 when finished.
  • Context DMA 316 decrements the in-use counter for the context buffer holding the channel descriptor, and if the decremented in-use count equals zero, then context DMA 316 transfers the channel descriptor to memory 103 and clears the validity bit for the context buffer.
  • FIGS. 9A and 9B will be used to illustrate these operations.
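  • A hedged C sketch of this flow (dma_fetch_descriptor, dma_writeback_descriptor and select_free_context are assumed stand-ins for context DMA 316 and the buffer selection logic): a hit shares the single cached copy and bumps its in-use count, a miss fetches the descriptor from memory 103 and starts the count at one, and a flush decrements the count and writes the descriptor back only when the count reaches zero.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_CONTEXT_BUFFERS 16

typedef struct {
    uint32_t channel_id;   /* identifies the cached channel descriptor */
    bool     valid;        /* descriptor in this buffer is current */
    unsigned in_use;       /* data buffers associated with this context */
    /* ... channel descriptor fields ... */
} context_buf_t;

static context_buf_t ctx[NUM_CONTEXT_BUFFERS];

/* Assumed helpers standing in for context DMA 316 and selection logic. */
extern void dma_fetch_descriptor(context_buf_t *c, uint32_t channel_id);
extern void dma_writeback_descriptor(context_buf_t *c);
extern context_buf_t *select_free_context(void);

/* Correlate a packet's channel identifier with a context buffer. */
context_buf_t *context_correlate(uint32_t channel_id)
{
    for (int i = 0; i < NUM_CONTEXT_BUFFERS; i++) {
        if (ctx[i].valid && ctx[i].channel_id == channel_id) {
            ctx[i].in_use++;              /* share the one cached copy */
            return &ctx[i];
        }
    }
    context_buf_t *c = select_free_context();
    dma_fetch_descriptor(c, channel_id); /* bring in from off-chip memory */
    c->channel_id = channel_id;
    c->valid = true;
    c->in_use = 1;
    return c;
}

/* Called when a packet leaves its data buffer (the flush stage). */
void context_flush(context_buf_t *c)
{
    if (--c->in_use == 0) {              /* no data buffers still need it */
        dma_writeback_descriptor(c);     /* descriptor returns to memory 103 */
        c->valid = false;
    }
}
```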
  • FIG. 5 depicts a specific example of memory interface circuitry in accord with the present invention. Those skilled in the art will appreciate that numerous variations from the circuitry shown in this example may be made. Furthermore, those skilled in the art will appreciate that some conventional aspects of FIGS. 5-6 have been simplified or omitted for clarity.
  • FIG. 5 is a block diagram that illustrates memory interface 109 .
  • Memory interface 109 comprises a hardware circuitry engine that is firmware-programmable in that it operates in response to state bits and register content.
  • core processor 104 is a micro-processor that executes application software.
  • Memory interface 109 operates in parallel with core processor 104 to conserve core processor capacity by off-loading numerous tasks from the core processor.
  • FIG. 1 and FIG. 5 show memory 103 , core processor 104 , co-processor circuitry 107 , transmit interface 108 , and memory interface 109 .
  • Memory 103 comprises Static RAM (SRAM) 525 and Synchronous Dynamic RAM (SDRAM) 526 , although other memory systems could also be used.
  • SDRAM 526 comprises pointer stack 527 and external buffers 528 .
  • Memory interface 109 comprises buffer management engine 520 , SRAM interface 521 , and SDRAM interface 522 .
  • Buffer management engine 520 comprises pointer cache 523 and control logic 524 .
  • SRAM interface 521 exchanges context information between SRAM 525 and co-processor circuitry 107 .
  • External buffers 528 use a linked list mechanism to store communication packets externally to integrated circuit 100 .
  • Pointer stack 527 is a cache of pointers to free external buffers 528 that is initially built by core processor 104 .
  • Pointer cache 523 stores pointers that were transferred from pointer stack 527 and correspond to external buffers 528 . Sets of pointers may be periodically exchanged between pointer stack 527 and pointer cache 523 . Typically, the exchange from stack 527 to cache 523 operates on a first-in/first-out basis.
  • core processor 104 writes pointers to free external buffers 528 to pointer stack 527 in SDRAM 526 .
  • control logic 524 transfers a subset of these pointers to pointer cache 523 .
  • an entity such as core processor 104 , co-processor circuitry 107 , or an external system, needs to store a packet in memory 103
  • the entity reads a pointer from pointer cache 523 and uses the pointer to transfer the packet to external buffers 528 through SDRAM interface 522 .
  • Control logic 524 allocates the external buffer as the corresponding pointer is read from pointer cache 523 .
  • SDRAM stores the packet in the external buffer indicated by the pointer. Allocation means to reserve the buffer, so other entities do not improperly write to it while it is allocated.
  • the entity When the entity no longer needs the external buffer—for example, the packet is transferred from memory 103 through SDRAM interface 522 to co-processor circuitry 107 or transmit interface 108 , then the entity writes the pointer to pointer cache 523 .
  • Control logic 524 de-allocates the external buffer as the corresponding pointer is written to pointer cache 523 .
  • De-allocation means to release the buffer, so other entities may reserve it. The allocation and de-allocation process is repeated for other external buffers 528 .
  • Control logic 524 tracks the number of the pointers in pointer cache 523 that point to de-allocated external buffers 528 . If the number reaches a minimum threshold, then control logic 524 transfers additional pointers from pointer stack 527 to pointer cache 523 . Control logic 524 may also transfer an exhaustion signal to core processor 104 in this situation. If the number reaches a maximum threshold, then control logic 524 transfers an excess portion of the pointers from pointer cache 523 to pointer stack 527 .
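  • The protocol reduces external buffer management to an on-chip cache read or write, as the following C sketch suggests (the capacity, watermarks and stack_pop/stack_push helpers are assumptions, and exhaustion signalling to core processor 104 is omitted):

```c
#include <stdint.h>
#include <stddef.h>

#define CACHE_SIZE 64      /* on-chip pointer cache capacity; illustrative */
#define MIN_FREE    8      /* refill from pointer stack 527 below this */
#define MAX_FREE   56      /* spill back to pointer stack 527 above this */

typedef uint32_t bufptr_t; /* names one external buffer in SDRAM */

static bufptr_t cache[CACHE_SIZE];
static size_t   free_count;   /* pointers naming de-allocated buffers */

/* Assumed stand-ins for block transfers to and from pointer stack 527. */
extern size_t stack_pop(bufptr_t *dst, size_t n);  /* returns count moved */
extern void   stack_push(const bufptr_t *src, size_t n);

/* Reading a pointer from the cache allocates the buffer it names. */
bufptr_t buffer_alloc(void)
{
    if (free_count <= MIN_FREE)   /* low watermark: pull a batch from SDRAM */
        free_count += stack_pop(&cache[free_count], CACHE_SIZE / 2);
    return cache[--free_count];
}

/* Writing a pointer back to the cache de-allocates the buffer it names. */
void buffer_free(bufptr_t p)
{
    cache[free_count++] = p;
    if (free_count > MAX_FREE) {  /* high watermark: spill a batch back */
        free_count -= 16;
        stack_push(&cache[free_count], 16);
    }
}
```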
  • FIG. 6 shows the detailed logic added to the data buffer 314 shown in FIG. 3 in an example of the invention.
  • the data buffer 314 includes two sections designated data only buffers 614 and special state information buffers 620 .
  • the data buffers are assigned an index number from zero to the maximum number of data buffers in the co-processor 107 .
  • the special state information buffers are also assigned an index from zero to the maximum number of special state information buffers in the co-processor 107 .
  • the context buffers are also assigned an index from zero to the maximum number of context buffers in the co-processor 107 . These indexes are used by the logic in the co-processor 107 and the core processor 104 to identify an individual context buffer, data buffer, or special state information buffer. In one embodiment, there are sixteen of each of these types of buffers in the co-processor 107 . The exact number of each of these buffers is not significant to the general operation of the logic.
  • Each buffer has an associated in-use counter 614-0 to 614-5 and 620-0 to 620-5.
  • the in-use counters keep track of the number of events that are using the data in the particular buffers.
  • each in-use counter is incremented by one for each event that is using the data or state information in a particular buffer.
  • when an event completes its use of a buffer, the associated in-use counter is decremented by one.
  • when the count reaches zero, no events are using the particular buffer and it can be reallocated.
  • Data buffer resolution logic 622 and PRSR special data resolution logic 621 operate similarly to context resolution 310 , which was previously described.
  • Data buffer resolution logic 622 keeps track of which data buffers 614 are in use and which are available to be assigned to new events. Data buffer resolution logic 622 also contains the logic for incrementing and decrementing the in-use counters associated with the data buffers 614 .
  • PRSR special data resolution logic 621 keeps track of which special state information buffers are in use and which are available to be assigned to new events. PRSR special data resolution logic 621 also contains the logic for incrementing and decrementing the in-use counters associated with the special state information buffers.
  • PRSR special data resolution logic 621 and data buffer resolution logic 622 select a buffer to be assigned to a new event by scanning the in-use counts of all their associated buffers and picking the buffer with the lowest index which has an in-use count of zero. In other embodiments, there are numerous variations in selecting a buffer which has an in-use count of zero, for example first-in/first-out selection or last-in/first-out selection.
  • Context resolution 310 contains the logic used to select the context buffer to be assigned to a new event.
  • a global configuration bit is used to pick which of two mechanisms is used to select the next context buffer to be assigned to a new event.
  • One mechanism picks the context buffer in the same manner as the next data buffer is picked.
  • this method returns the context buffer with a zero in-use count which has the lowest index.
  • the problem with this selection mechanism for context buffers is that it tends to select the context buffers that have been most recently freed. For instance, when the context buffer with index zero is freed, it is always the next index to be selected. Because context information that is not already stored in a context buffer needs to be read in from off-chip memory, under certain conditions it is better not to reuse a context buffer as soon as its in-use count goes to zero.
  • This problem is addressed by the second context selection mechanism.
  • This mechanism uses a moving “finger” which determines at what index the logic will start searching for an in-use count of zero. The value of the finger is incremented after each new context selection. Hence, for the first new context selection the logic will start searching forward from index zero. For the second new context selection, the logic will start searching forward from index 1, etc.
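  • Both selection mechanisms reduce to a scan for an in-use count of zero and differ only in where the scan starts, as this C sketch suggests (data layout assumed):

```c
#define NUM_BUFFERS 16             /* context buffers on the chip */

static unsigned in_use[NUM_BUFFERS];   /* per-buffer in-use counts */

/* Mechanism 1: return the lowest index whose in-use count is zero.
 * Simple, but it tends to reuse the most recently freed buffer, whose
 * cached context might otherwise have been re-used without a fetch. */
int select_lowest_free(void)
{
    for (int i = 0; i < NUM_BUFFERS; i++)
        if (in_use[i] == 0)
            return i;
    return -1;                     /* no free context buffer */
}

/* Mechanism 2: a moving "finger" sets where the scan starts and advances
 * by one after every selection, so a just-freed low-index buffer is not
 * automatically the next one reused. */
static unsigned finger;

int select_with_finger(void)
{
    for (int k = 0; k < NUM_BUFFERS; k++) {
        unsigned i = (finger + k) % NUM_BUFFERS;
        if (in_use[i] == 0) {
            finger = (finger + 1) % NUM_BUFFERS;
            return (int)i;
        }
    }
    return -1;
}
```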
  • the special state information data buffer 620 contains a pointer to an associated data buffer 614 as well as an associated context buffer 315 (hereinafter these will also be referred to as resources). Because of these links, a special state data buffer can be used to identify the resources associated with an event. As shown by the arrows from the special state data buffers to the priority queues 313 , a special state data buffer pointer is stored in the appropriate priority queue. This logic was described in more detail above in stage 3 of FIG. 3. When the arbiter 312 picks the next entry to service from the priority queue, the arbiter 312 returns a special state data buffer pointer. This pointer is then used by logic associated with the core processor 104 and the co-processor circuitry 107 to identify the context and data buffer resources the event will be using.
  • the size of a data buffer 614 is 64-bytes
  • the size of a context buffer 315 is 64-bytes
  • the size of a special state data buffer 620 is 44 bits. As recognized by those skilled in the art, the size of these buffers could be changed without affecting the operation of the logic in FIG. 6.
  • FIG. 7 is a block flow diagram showing how a data buffer 614 can be passed from one event to another event in an example of the invention.
  • a new event begins as indicated by steps 701 and 702 .
  • a check is made to determine if the particular event is using a passed data buffer. If the particular event would like to use a “passed” data buffer, the particular data buffer 614 is associated with the event and the in-use counter for the particular data buffer is incremented.
  • the event processing takes place and at the end of the event, the in-use counter of the data buffer is decremented by one in step 722 .
  • a check is made to determine if the in-use counter is zero. If the count is zero, the buffer is freed and can be assigned to a new event as indicated by step 725 . If the count is not zero, as indicated by step 724 , the buffer is not freed since the buffer is still in use by some other event.
  • FIG. 8 is a block flow diagram showing how state information is passed between events in an example of the invention.
  • a determination is made as to whether or not an event is passing “state” information. If state information is not being passed, the operation proceeds as indicated by steps 810 to 815 .
  • a new state information buffer is selected from the unused pool of buffers as indicated by step 810 .
  • the event is performed.
  • the in-use counter is decremented by one (step 812 ) and a check is made to determine if the count is zero at step 813 . If the count is zero, the buffer is free to be assigned as indicated by step 815 . Otherwise, the buffer is not freed as indicated by block 814 .
  • The operations that occur when “state” information is passed from one event to another event are indicated by steps 804 to 808 .
  • the information in the data only buffer 614 is also passed between the events. This is indicated by steps 804 and 805 .
  • the event proceeds as indicated by step 806 , and at the end of the event, as indicated by steps 807 and 812 , the in-use counters of the data only buffer 614 and the state information buffer 620 are each decreased by one.
  • in steps 808, 808-a, 808-b and 813 to 815, a check is then made to determine if the in-use counters have reached zero, which determines if the buffers can be re-assigned.
  • An event can pass data or special state information associated with one event to a new event, which does not share the same context information. Such transfers are possible because the state information is stored in a buffer that is separate from the data buffer.
  • An event can also pass a multi-bit message from a current event to a subsequent event that is generated by the current event. This message is stored in the special state buffer of the subsequent event.
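  • In effect, an event can be viewed through its special state buffer: a link to a data buffer, a link to a context buffer, and room for a message. A C sketch of spawning a successor event that inherits the current data buffer and receives a message (all names are illustrative; note the increment at request time, which reserves the shared buffer for the successor):

```c
#include <stdint.h>

typedef struct { unsigned in_use; /* ... packet bytes ... */ } databuf_t;
typedef struct { unsigned in_use; /* ... channel descriptor ... */ } ctxbuf_t;

/* Special state buffer: links an event's resources and carries a message. */
typedef struct {
    unsigned   in_use;
    databuf_t *data;      /* may be shared with the predecessor event */
    ctxbuf_t  *context;   /* need not match the predecessor's context */
    uint32_t   message;   /* multi-bit message from the prior event */
} statebuf_t;

extern statebuf_t *statebuf_alloc(void);  /* assumed: from the unused pool,
                                             returned with in_use set to 1 */

/* Spawn a successor event that inherits the current event's data buffer
 * and receives a message, while resolving its own context. */
statebuf_t *event_spawn(statebuf_t *cur, ctxbuf_t *new_ctx, uint32_t message)
{
    statebuf_t *next = statebuf_alloc();
    next->data = cur->data;
    next->data->in_use++;     /* reserve at request time: no window exists
                                 in which the shared buffer looks unassigned */
    next->context = new_ctx;
    new_ctx->in_use++;
    next->message = message;  /* stored in the successor's state buffer */
    return next;
}
```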
  • FIGS. 9A and 9B illustrate examples of how one embodiment of the invention operates.
  • the horizontal dimension in FIGS. 9A and 9B represents time.
  • FIG. 9A illustrates how the in-use counts for a data buffer change for an event which submits a DMA command in an example of the invention.
  • the process begins at step 901 . It is assumed that at this point the in-use count of the data buffer is one. While the event posted as indicated by step 901 is progressing, steps 902 and 903 indicate that two DMA transfers are submitted. The data buffer count is incremented to two by the first DMA command and to three by the second DMA command. As indicated by step 904 , when the first DMA transfer finishes, the in-use count is reduced to two.
  • the in-use count is reduced to one as indicated by block 905 .
  • the in-use count is reduced to zero as indicated by step 906 .
  • Conventional logic is provided in co-processor circuitry 107 to handle the changes to the in-use counts as described.
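  • Tracing FIG. 9A with the in-use counter sketch given earlier (the same assumed resource_* helpers, compiled in the same translation unit) makes this bookkeeping explicit:

```c
#include <assert.h>

int main(void)
{
    resource_t *buf = resource_alloc();  /* step 901: event posted, count 1 */
    resource_attach(buf);                /* step 902: first DMA submitted, 2 */
    resource_attach(buf);                /* step 903: second DMA submitted, 3 */
    resource_release(buf);               /* step 904: first DMA finishes, 2 */
    resource_release(buf);               /* step 905: second DMA finishes, 1 */
    assert(resource_release(buf));       /* step 906: event finishes, count 0;
                                            the buffer can be re-assigned */
    return 0;
}
```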
  • FIG. 9B indicates how the in-use count of a data buffer changes for an event, which creates a shared data buffer in an example of the invention.
  • the horizontal dimension indicates time.
  • the illustrated process begins as indicated by step 911 with an event being posted. In one embodiment, this event requested a new data buffer. This data buffer would have an initial in-use count of zero and when the event is posted, as indicated by step 911 , the in-use count is increased to one.
  • Step 921 represents another event request, which is posted as indicated by step 922 . For the event request shown in 921 , the first event passes its data buffer to the second event so the second event starts with a data buffer in-use count of two. This initial in-use count of two is arrived at using multiple steps.
  • when the core processor 104 initiates a request for another event, the data buffer in-use count is immediately incremented by one in order to reserve this data buffer for the next event.
  • when the event request is for another core processor event, the co-processor circuitry 107 receives this event request and passes it to the section of the co-processor logic which handles core processor event requests. This is the same logic which handled the initial event generation indicated in 901 or 911.
  • the in-use count of the data buffer is again incremented as this data buffer is assigned to the new event.
  • the section of the co-processor circuitry 107 that handles event requests then signals back to the section of the co-processor circuitry 107 that received this event request from the core processor 104 .
  • This section of the co-processor logic now requests that the in-use count of the data buffer be decremented by one. Hence, there is a total of two increments and one decrement, and the new event is posted with an effective initial data buffer in-use count of two.
  • if step 922 is delayed by stalls in the system such that this event request is actually processed after 912 happens, the data buffer remains reserved, via its in-use count, by the 921 operation until the 922 operation can take place. This assures that, independent of the relative timing of 922 and 912, there is no time between 912 and 922 at which the value of the data buffer's in-use count allows this passed data buffer to be viewed as an unassigned data buffer.
  • the effective reservation of this data buffer by incrementing the in-use count when the event request 921 is posted assures that no intervening event request can mistakenly view this data buffer as unassigned and reallocate it.
  • Step 912 indicates that when the first event is finished, the data buffer count is reduced to one.
  • Steps 931 and 932 indicate a DMA request that is submitted and posted using the same data buffer. As indicated by steps 932 and 931, the count is increased to two and then reduced to one when the DMA request is finished. Finally, as indicated by step 923, when the event posted at step 922 is finished, the in-use count is reduced to zero and the data buffer can be re-assigned to a new event.
  • FIGS. 9A and 9B explain only the change in the data buffer in-use count.
  • the in-use counts of the context and special state information buffers change in a similar manner.
  • FIGS. 9A and 9B are meant to be illustrative examples only. Many other sequences can occur.
  • the point of FIGS. 9A and 9B is to illustrate that with the present invention, multiple processing tasks can be composed in situations where the subsequent tasks have no knowledge that any of their resources (data buffer/context buffer/special state buffer) had been processed by a previous service task. The in-use counters keep track of this automatically.

Abstract

An integrated circuit for processing communication packets having separate data buffers and separate state information buffers. Each data buffer and each state information buffer (hereinafter termed resources) has an associated in-use counter. Multiple events can share the same resource. The counter associated with a resource is incremented when a resource becomes associated with a particular event. The counter associated with a resource is decremented when an event completes the use of that particular resource. When the in-use counter for a resource becomes zero, the in-use counter indicates that the resource is unassigned and that the resource can be assigned to a new event.

Description

    RELATED APPLICATIONS
  • Priority is claimed for the following co-pending applications: [0001]
  • 1) Application Ser. No. 60/221,821 entitled “Traffic Stream Processor” filed on Jul. 31, 2000. [0002]
  • 2) Application Ser. No. 09/639,915 entitled “Integrated Circuit that Processes Communication Packets with Scheduler Circuitry that Executes Scheduling Algorithms based on Cached Scheduling Parameters” filed on Aug. 16, 2000. [0003]
  • 3) Application Ser. No. 09/640,258 entitled “Integrated Circuit that Processes Communication Packets with Co-Processor Circuitry to Determine a Prioritized Processing Order for a Core Processor” filed on Aug. 16, 2000. [0004]
  • 4) Application Ser. No. 09/640,231 entitled “Integrated Circuit that Processes Communication Packets with Co-Processor Circuitry to Correlate a Packet Stream with Context Information” filed on Aug. 16, 2000. [0005]
  • The content of the above applications is hereby incorporated herein by reference.[0006]
  • FIELD OF THE INVENTION
  • The present invention is related to the field of communications, and more particularly to integrated circuits that process communication packets. [0007]
  • BACKGROUND OF THE INVENTION
  • Many communication systems transfer information in streams of packets. In general, each packet contains a header and a payload. The header contains control information, such as addressing or channel information, that indicates how the packet should be handled. The payload contains the information that is being transferred. Some examples of the types of packets used in communication systems include Asynchronous Transfer Mode (ATM) cells, Internet Protocol (IP) packets, frame relay packets, Ethernet packets, or some other packet-like information block. As used herein, the term “packet” is intended to include packet segments. [0008]
  • Integrated circuits termed “traffic stream processors” have been designed to apply robust functionality to high-speed packet streams. Robust functionality is critical with today's diverse but converging communication systems. Stream processors must handle multiple protocols and inter-work between streams of different protocols. Stream processors must also ensure that quality-of-service constraints, priority, and bandwidth requirements are met. This functionality must be applied differently to different streams, and there may be thousands of different streams. [0009]
  • Co-pending application Ser. Nos. 09/639,966, 09/640,231 and 09/640,258, the content of which is incorporated herein by reference, describe an integrated circuit for processing communication packets. As described in the above applications, the integrated circuit includes a core processor. The processor handles a series of tasks, termed “events”. These events consist of tasks such as CPU processing steps as well as the scheduling of subsequent events. These subsequently scheduled events may consist of CAM lookups, DMA data transfers, or other generic events based on conditions in the current event. All events have an associated service address, “context information” and “data”. Information about the event such as the resource that requested the event, how much data is associated with the event, and other key information from the event requester is stored in “special state” information associated with the event. When an external resource initiates an event, the external resource supplies the core processor with a memory pointer to “context” information and it also supplies the data to be associated with the event. [0010]
  • The context pointer is used to fetch the context from external memory and to store this “context” information in memory located on the chip. If the required context data has already been fetched onto the chip, the hardware recognizes this fact and sets the on-chip context pointer to point to this already pre-fetched context data. Only a small number of the system “contexts” are cached on the chip at any one time, and their allocation needs to be managed and sometimes shared among multiple processing events. Each cached “context” has an in-use counter so that one context can be associated with multiple sets of data. The rest of the system “contexts” are stored in external memory. This context fetch mechanism and the storage of these contexts in the co-processor is described in the above referenced co-pending applications. [0011]
  • In the circuit described in the above-referenced co-pending applications, data and context information for a number of events are stored in buffers in a co-processor. In order to process an event, the core processor needs the service address of the event as well as the “context” and “data” associated with the event. The service address is the starting address for the instructions used to service the event. The core processor branches to the service address in order to start servicing the event. [0012]
  • SUMMARY OF THE INVENTION
  • The present invention adds flexibility and additional functions to an integrated circuit such as that described in the above-referenced co-pending applications. In the integrated circuit shown in the referenced co-pending applications, special state information is effectively stored together with associated data in data buffers. Furthermore, the data buffers do not have associated in-use counters. With the present invention, separate logical buffers are provided for special state information and for the associated data buffer. Furthermore, each data buffer and each special state information buffer (hereinafter termed resources) has an associated in-use counter. Multiple events can share the same resource. The counter associated with a resource is incremented when a resource becomes associated with a particular event. The counter associated with a resource is decremented when an event completes the use of that particular resource. When the in-use count for a resource becomes zero, the in-use count indicates that the resource is unassigned and that the resource can be assigned to a new event. [0013]
  • With the present invention, two events can point to (i.e. utilize) the same data buffer and/or the same special state information buffer. Furthermore, the content of a data buffer or a special state information buffer can be passed directly from one event to another event without reading the data into and out of memory. The in-use counter is particularly useful to facilitate the timing of DMA requests without need for explicit control by an external program. With the present invention, two events can use the same data buffer. This is possible since the special state information is stored in a separate buffer. Furthermore, with the present invention one can have one data buffer associated with multiple context buffers, since the special state information is stored separately from the associated data. The present invention also adds a communication mechanism which allows an event to pass a multi-bit message to subsequent events. This message passing mechanism does not require that the two events share any of the same context, data, or special state resources. [0014]
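  • The counting discipline summarized above can be modeled in a few lines of C. The sketch below is illustrative only and is not the hardware logic of the invention; the type and function names (resource_t, resource_acquire, resource_release) are invented for this example.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical software model of a shared resource (a data buffer or a
 * special state information buffer) guarded by an in-use counter. */
typedef struct {
    int in_use;  /* number of events currently associated with the resource */
} resource_t;

/* Increment the counter when the resource becomes associated with an event. */
static void resource_acquire(resource_t *r)
{
    r->in_use++;
}

/* Decrement the counter when an event completes its use of the resource.
 * Returns true when the count reaches zero, meaning the resource is
 * unassigned and may be given to a new event. */
static bool resource_release(resource_t *r)
{
    assert(r->in_use > 0);  /* a release must match an earlier acquire */
    return --r->in_use == 0;
}
```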
  • BRIEF DESCRIPTION OF THE FIGURES
  • FIG. 1 is an overall block diagram of a packet processing integrated circuit in an example of the invention. [0015]
  • FIG. 2 is a block diagram that illustrates packet processing stages and the pipe-lining used by the circuit in an example of the invention. [0016]
  • FIG. 3 is a diagram illustrating circuitry in the co-processor relating to context and data buffer processing in an example of the invention. [0017]
  • FIG. 4 is a block program flow diagram illustrating buffer correlation and in-use counts in an example of the invention. [0018]
  • FIG. 5 is a block diagram of the buffer management circuitry in an example of the invention. [0019]
  • FIG. 6 is a block diagram showing the details of the data and special state information buffers in an example of the invention. [0020]
  • FIG. 7 is a block program flow diagram illustrating how data buffers are passed between events in an example of the invention. [0021]
  • FIG. 8 is a block program flow diagram illustrating how state information buffers are passed between events in an example of the invention. [0022]
  • FIGS. 9A and 9B are block program flow diagrams illustrating examples of how DMA commands are handled in an example of the invention.[0023]
  • DETAILED DESCRIPTION OF THE INVENTION
  • Various aspects of packet processing integrated circuits are discussed in U.S. Pat. No. 5,748,630, entitled “ASYNCHRONOUS TRANSFER MODE CELL PROCESSING WITH LOAD MULTIPLE INSTRUCTION AND MEMORY WRITE-BACK”, filed on May 9, 1996. The content of the above referenced patent is hereby incorporated by reference into this application in order to shorten and simplify the description in this application. [0024]
  • One embodiment of the present invention described herein is applied as an improvement to the type of integrated circuit described in co-pending patent applications Ser. No. 60/211,863 filed on Jun. 14, 2000, Ser. No. 09/640,260 filed on Aug. 16, 2000, Ser. No. 09/639,915 filed on Aug. 16, 2000, Ser. No. 09/639,966 filed on Aug. 16, 2000, Ser. No. 09/640,258 filed on Aug. 16, 2000 and Ser. No. 09/640,231 filed on Aug. 16, 2000, the content of which is hereby incorporated herein by reference in order to shorten and simplify the description of the present application. [0025]
  • FIG. 1 is a block diagram that illustrates a packet processing integrated [0026] circuit 100 in an example of the invention. It should be understood that the present invention can also be applied to other types of processors. The operation of the circuit 100 will first be described with reference to FIGS. 1 to 4 and then the operation of different embodiments of the invention will be described with reference to FIGS. 5 to 9.
  • [0027] Integrated circuit 100 includes a core processor 104, a scheduler 105, receive interface 106, co-processor circuitry 107, transmit interface 108, and memory interface 109. These components may be interconnected through a memory crossbar or some other type of internal interface. Receive interface 106 is coupled to communication system 101. Transmit interface 108 is coupled to communication system 102. Memory interface 109 is coupled to memory 103.
  • [0028] Communication system 101 could be any device that supplies communication packets with one example being the switching fabric in an Asynchronous Transfer Mode (ATM) switch. Communication system 102 could be any device that receives communication packets with one example being the physical line interface in the ATM switch. Memory 103 could be any memory device with one example being Random Access Memory (RAM) integrated circuits. Receive interface 106 could be any circuitry configured to receive packets with some examples including UTOPIA interfaces or Peripheral Component Interconnect (PCI) interfaces. Transmit interface 108 could be any circuitry configured to transfer packets with some examples including UTOPIA interfaces or PCI interfaces.
  • [0029] Core processor 104 is a micro-processor that executes networking application software. Core processor 104 supports an instruction set that has been tuned for networking operations, especially context switching. As described herein, core processor 104 has the following characteristics: 166 MHz, pipelined single-cycle operation, RISC-based design, 32-bit instruction and register set, K instruction cache, 8 KB zero-latency scratchpad memory, interrupt/trap/halt support, and C compiler readiness.
  • [0030] Scheduler 105 comprises circuitry configured to schedule and initiate packet processing that typically results in packet transmissions from integrated circuit 100, although scheduler 105 may also schedule and initiate other activities. Scheduler 105 schedules upcoming events, and as time passes, selects scheduled events for processing and reschedules unprocessed events. Scheduler 105 transfers processing requests for selected events to co-processor circuitry 107. Scheduler 105 can handle multiple independent schedules to provide prioritized scheduling across multiple traffic streams. To provide scheduling, scheduler 105 may execute a guaranteed cell rate algorithm to implement a leaky bucket or a token bucket scheduling system. The guaranteed cell rate algorithm is implemented through a cache that holds algorithm parameters. Scheduler 105 is described in detail in the above referenced co-pending patent applications.
  • [0031] Co-processor circuitry 107 receives communication packets from receive interface 106 and memory interface 109 and stores the packets in internal data buffers. Co-processor circuitry 107 correlates each packet to context information describing how the packet should be handled. Co-processor circuitry 107 stores the correlated context information in internal context buffers and associates individual data buffers with individual context buffers to maintain the correlation between individual packets and context information. Importantly, co-processor circuitry 107 ensures that only one copy of the correlated context information is present in the context buffers to maintain coherency. Multiple data buffers are associated with a single context buffer to maintain the correlation between the multiple packets and the single copy of the context information.
  • [0032] Co-processor circuitry 107 also determines a prioritized processing order for core processor 104. The prioritized processing order controls the sequence in which core processor 104 handles the communication packets. The prioritized processing order is typically based on the availability of all of the resources and information that are required by core processor 104 to process a given communication packet. Resource state bits are set when resources become available, so co-processor circuitry 107 may determine when all of these resources are available by processing the resource state bits. If desired, the prioritized processing order may be based on information in packet handling requests. Co-processor circuitry 107 selects scheduling algorithms based on internal scheduling state bits and uses the selected scheduling algorithms to determine the prioritized processing order. The algorithms could be round robin, service-to-completion, weighted fair queuing, simple fairness, first-come first-serve, allocation through priority promotion, software override, or some other arbitration scheme. Thus, the prioritization technique used by co-processor circuitry 107 is externally controllable. Co-processor circuitry 107 is described in more detail with respect to FIGS. 2-4.
  • [0033] Memory interface 109 comprises circuitry configured to exchange packets with external buffers in memory 103. Memory interface 109 maintains a pointer cache that holds pointers to the external buffers. Memory interface 109 allocates the external buffers when entities, such as core processor 104 or co-processor circuitry 107, read pointers from the pointer cache. Memory interface 109 de-allocates the external buffers when the entities write the pointers to the pointer cache. Advantageously, external buffer allocation and de-allocation is available through an on-chip cache read/write. Memory interface 109 also manages various external buffer classes, and handles conditions such as external buffer exhaustion. Memory interface 109 is described in detail in the above referenced patent applications.
  • In operation, receive [0034] interface 106 receives new packets from communication system 101, and scheduler 105 initiates transmissions of previously received packets that are typically stored in memory 103. To initiate packet handling, receive interface 106 and scheduler 105 transfer requests to co-processor circuitry 107. Under software control, core processor 104 may also request packet handling from co-processor circuitry 107. Co-processor circuitry 107 fields the requests, correlates the packets with their respective context information, and creates a prioritized work queue for core processor 104. Core processor 104 processes the packets and context information in order from the prioritized work queue. Advantageously, co-processor circuitry 107 operates in parallel with core processor 104 to offload the context correlation and prioritization tasks to conserve important core processing capacity.
  • In response to packet handling, [0035] core processor 104 typically initiates packet transfers to either memory 103 or communication system 102. If the packet is transferred to memory 103, then core processor 104 instructs scheduler 105 to schedule and initiate future packet transmission or processing. Advantageously, scheduler 105 operates in parallel with core processor 104 to offload scheduling tasks and conserve important core processing capacity.
  • Various data paths are used in response to [0036] core processor 104 packet transfer instructions. Co-processor circuitry 107 transfers packets directly to communication system 102 through transmit interface 108. Co-processor circuitry 107 transfers packets to memory 103 through memory interface 109 with an on-chip pointer cache. Memory interface 109 transfers packets from memory 103 to communication system 102 through transmit interface 108. Co-processor circuitry 107 transfers context information from a context buffer through memory interface 109 to memory 103 if there are no packets in the data buffers that are correlated with the context information in the context buffer. Advantageously, memory interface 109 operates in parallel with core processor 104 to offload external memory management tasks and conserve important core processing capacity.
  • Co-processor Circuitry FIGS. 2-4
  • FIGS. 2-4 depict one example of co-processor circuitry. Those skilled in the art will understand that FIGS. 2-4 have been simplified for clarity. [0037]
  • FIG. 2 illustrates how [0038] co-processor circuitry 107 provides pipelined operation in an example of the invention. FIG. 2 is vertically separated by dashed lines that indicate five packet processing stages: 1) context resolution, 2) context fetching, 3) priority queuing, 4) software application, and 5) context flushing. Co-processor circuitry 107 handles stages 1-3 to provide hardware acceleration. Core processor 104 handles stage 4 to provide software control with optimized efficiency due to stages 1-3. Co-processor circuitry 107 also handles stage 5. Co-processor circuitry 107 has eight pipelines through stages 1-3 and 5 to concurrently process multiple packet streams.
  • In [0039] stage 1, requests to handle packets are resolved to a context for each packet in the internal data buffers. The requests are generated by receive interface 106, scheduler 105, and core processor 104 in response to incoming packets, scheduled transmissions, and application software instructions. The context information includes a channel descriptor that has information regarding how the packet is to be handled. For example, a channel descriptor may indicate service address information, traffic management parameters, channel status, stream queue information, and thread status. In the current implementation, there are a maximum of 64,000 channels. Thus, 64,000 channels with different characteristics are available to support a wide array of service differentiation. Channel descriptors are identified by channel identifiers. Channel identifiers may be indicated by the request. A map may be used to translate selected bits from the packet header to a channel identifier. A hardware engine may also perform a sophisticated search for the channel identifier based on various information. Different algorithms that calculate the channel identifier from the various information may be selected by setting correlation state bits in co-processor circuitry 107. Thus, the technique used for context resolution is externally controllable.
  • In [0040] stage 2, context information is fetched, if necessary, by using the channel identifiers to transfer the channel descriptors to internal context buffers. Prior to the transfer, the context buffers are first checked for a matching channel identifier and validity bit. If a match is found, then the context buffer with the existing channel descriptor is associated with the corresponding internal data buffer holding the packet.
  • In [0041] stage 3, requests with available context are prioritized and arbitrated for core processor 104 handling. The priority may be indicated by the request, or it may be derived from the source of the request. The priority queues 1-12 are 8 entries deep. Priority queues 1-12 are also ranked in a priority order by queue number. The priority for each request is determined, and when the context and data buffers for the request are valid, an entry for the request is placed in one of the priority queues that corresponds to the determined priority. The entries in the priority queues point to a pending request state RAM that contains state information for each data buffer. The state information includes a data buffer pointer, a context pointer, a context validity bit, a requester indicator, port status, and a channel descriptor loaded indicator. This state information was referred to earlier in this document as the special state information associated with an event; the two terms may be used interchangeably.
  • The work queue indicates the selected priority queue entry that [0042] core processor 104 should handle next. To get to the work queue, the requests in priority queues are arbitrated using one of various algorithms such as round robin, service-to-completion, weighted fair queuing, simple fairness, first-come first-serve, allocation through priority promotion, and software override. The algorithms may be selected through scheduling state bits in co-processor circuitry 107. Thus, the technique used for prioritization is externally controllable. Co-processor circuitry 107 loads core processor 104 registers with the channel descriptor information for the next entry in the work queue.
  • In [0043] stage 4, core processor 104 executes the software application to process the next entry in the work queue, which points to a portion of the pending request state RAM that identifies the data buffer and context buffer. The context buffer indicates one or more service addresses that direct the core processor 104 to the proper functions within the software application. One such function of the software application is traffic shaping to conform to service level agreements. Other functions include header manipulation and translation, queuing algorithms, statistical accounting, buffer management, inter-working, header encapsulation or stripping, cyclic redundancy checking, segmentation and reassembly, frame relay formatting, multicasting, and routing. Any context information changes made by the core processor 104 are linked back to the context buffer in real time.
  • In [0044] stage 5, context is flushed. Typically, core processor 104 instructs co-processor circuitry 107 to transfer packets to off-chip memory 103 or transmit interface 108. If no other data buffers are currently associated with the pertinent context information, then co-processor circuitry 107 transfers the context information to off-chip memory 103.
  • FIG. 3 is a block diagram that illustrates [0045] co-processor circuitry 107 in an example of the invention. Co-processor circuitry 107 comprises a hardware engine that is firmware-programmable in that it operates in response to state bits and register content. In contrast, core processor 104 is a micro-processor that executes application software. Co-processor circuitry 107 operates in parallel with core processor 104 to conserve core processor capacity by off-loading numerous tasks from the core processor 104.
  • [0046] Co-processor circuitry 107 comprises context resolution 310, control 311, arbiter 312, priority queues 313, data buffers 314, context buffers 315, context DMA 316, and data DMA 317. Data buffers 314 hold packets and context buffers 315 hold context information, such as a channel descriptor. Data buffers 314 are relatively small and of a fixed size, such as 64 bytes, so if the packets are ATM cells, each data buffer holds only a single ATM cell and ATM cells do not cross data buffer boundaries.
  • Individual data buffers [0047] 314 are associated with individual context buffers 315 as indicated by the downward arrows. Priority queues 313 hold entries that represent individual data buffers 314 as indicated by the upward arrows. Thus, a packet in one of the data buffers is associated with its context information in an associated one of the context buffers 315 and with an entry in priority queues 313. Arbiter 312 presents a next entry from priority queues 313 to core processor 104 which handles the associated packet in the order determined by arbiter 312.
  • [0048] Context DMA 316 exchanges context information between memory 103 and context buffers 315 through memory interface 109. Context DMA automatically updates queue pointers in the context information. Data DMA 317 exchanges packets between data buffers 314 and memory 103 through memory interface 109. Data DMA 317 also transfers packets from memory 103 to transmit interface 108 through memory interface 109. Data DMA 317 signals context DMA 316 when transferring packets off-chip, and context DMA 316 determines if the associated context should be transferred to off-chip memory 103. Both DMAs 316-317 may be configured to perform CRC calculations.
  • For a new packet from [0049] communication system 101, control 311 receives the new packet and a request to handle the new packet from receive interface 106. Control 311 receives and places the packet in one of the data buffers 314 and transfers the packet header to context resolution 310. Based on gap state bits, a gap in the packet may be created between the header and the payload in the data buffer, so core processor 104 can subsequently write encapsulation information to the gap without having to create the gap. Context resolution 310 processes the packet header to correlate the packet with a channel descriptor, although in some cases, receive interface 106 may have already performed this context resolution. The channel descriptor comprises information regarding packet transfer over a channel.
  • [0050] Control 311 determines if the channel descriptor that has been correlated with the packet is already in one of the context buffers 315 and is valid. If so, control 311 does not request the channel descriptor from off-chip memory 103. Instead, control 311 associates the particular data buffer 314 holding the new packet with the particular context buffer 315 that already holds the correlated channel descriptor. This prevents multiple copies of the channel descriptor from existing in context buffers 315. Control 311 then increments an in-use count for the channel descriptor to track the number of data buffers 314 that are associated with the same channel descriptor.
  • If the correlated channel descriptor is not in context buffers [0051] 315, then control 311 requests the channel descriptor from context DMA 316. Context DMA 316 transfers the requested channel descriptor from off-chip memory 103 to one of the context buffers 315 using the channel descriptor identifier, which may be an address, that was determined during context resolution. Control 311 associates the context buffer 315 holding the transferred channel descriptor with the data buffer 314 holding the new packet to maintain the correlation between the new packet and the channel descriptor. Control 311 also sets the in-use counter for the transferred channel descriptor to one and sets the validity bit to indicate context information validity.
  • [0052] Control 311 also determines a priority for the new packet. The priority may be determined by the source of the new packet, header information, or channel descriptor. Control 311 places an entry in one of priority queues 313 based on the priority. The entry indicates the data buffer 314 that has the new packet. Arbiter 312 implements an arbitration scheme to select the next entry for core processor 104. Core processor 104 reads the next entry and processes the associated packet and channel descriptor in the particular data buffer 314 and context buffer 315 indicated in the next entry.
  • Each priority queue has a service-to-completion bit and a sleep bit. When the service-to-completion bit is set, the priority queue has a higher priority than any priority queues without the service-to-completion bit set. When the sleep bit is set, the priority queue is not processed until the sleep bit is cleared. The ranking of the priority queue number breaks priority ties. Each priority queue has a weight from 0-15 to ensure a certain percentage of core processor handling. After an entry from a priority queue is handled, its weight is decremented by one if the service-to-completion bit is not set. The weights are re-initialized to a default value after 128 requests have been handled or if all weights are zero. Each priority queue has a high and low watermark. When outstanding requests that are entered in a priority queue exceed its high watermark, the service-to-completion bit is set. When the outstanding requests fall to the low watermark, the service-to-completion bit is cleared. The high watermark is typically set at the number of data buffers allocated to the priority queue. [0053]
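  • As a rough software illustration of the per-queue state just described, consider the following sketch. All identifiers are invented for this example, and the watermark comparisons reflect one plausible reading of the text; the actual circuit implements this bookkeeping in hardware.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    bool    service_to_completion;  /* overrides normal priority ranking */
    bool    sleep;                  /* queue is skipped while set */
    uint8_t weight;                 /* 0-15; share of core processor handling */
    uint8_t outstanding;            /* requests currently entered in the queue */
    uint8_t high_watermark;
    uint8_t low_watermark;
} prio_queue_t;

/* Update the service-to-completion bit as requests enter or leave. */
static void queue_update_watermarks(prio_queue_t *q)
{
    if (q->outstanding > q->high_watermark)
        q->service_to_completion = true;
    else if (q->outstanding <= q->low_watermark)
        q->service_to_completion = false;
}

/* After an entry is handled, decrement the weight unless the queue is in
 * service-to-completion mode. (Weights are re-initialized elsewhere after
 * 128 requests have been handled or when all weights reach zero.) */
static void queue_after_service(prio_queue_t *q)
{
    if (!q->service_to_completion && q->weight > 0)
        q->weight--;
}
```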
  • The context buffers [0054] 315 each have an associated in-use counter. The in-use counters associated with the context buffers are not shown in FIG. 3, but they are shown in FIG. 6.
  • [0055] Core processor 104 may instruct control 311 to transfer the packet to off-chip memory 103 through data DMA 317. Control 311 decrements the context buffer in-use counter, and if the in-use counter is zero (no data buffers 314 are associated with the context buffer 315 holding the channel descriptor), then control 311 instructs context DMA 316 to transfer the channel descriptor to off-chip memory 103. Control 311 also clears the validity bit. This same general procedure is followed when scheduler 105 requests packet transmission, except that in response to the request from scheduler 105, control 311 instructs data DMA 317 to transfer the packet from memory 103 to one of data buffers 314.
  • The present invention provides additional circuitry associated with data buffers [0056] 314. The additional circuitry provided by the present invention is shown in FIG. 6 and it will be explained in detail later.
  • FIG. 4 is a flow diagram that illustrates the operation of [0057] co-processor circuitry 107 when correlating buffers in an example of the invention. Co-processor circuitry 107 has eight pipelines to concurrently process multiple packet streams in accord with FIG. 3.
  • First, a packet is stored in a data buffer, and the packet is correlated to a channel descriptor as identified by a channel identifier. The channel descriptor comprises the context information regarding how packets in the different channels are to be handled. [0058]
  • Next, context buffers [0059] 315 are checked for a valid version of the correlated channel descriptor. This entails matching the correlated channel identifier with a channel identifier in a context buffer that is valid. If the correlated channel descriptor is not in a context buffer that is valid, then the channel descriptor is retrieved from memory 103 and stored in a context buffer using the channel identifier. The data buffer holding the packet is associated with the context buffer holding the transferred channel descriptor. An in-use counter for the context buffer holding the channel descriptor is set to one. A validity bit for the context buffer is set to indicate that the channel descriptor in the context buffer is valid. If the correlated channel descriptor is already in a context buffer that is valid, then the data buffer holding the packet is associated with the context buffer already holding the channel descriptor. The in-use counter for the context buffer holding the channel descriptor is incremented.
  • Typically, [0060] core processor 104 instructs co-processor circuitry 107 to transfer packets to off-chip memory 103 or transmit interface 108. Data DMA 317 transfers the packet and signals context DMA 316 when finished. Context DMA 316 decrements the in-use counter for the context buffer holding the channel descriptor, and if the decremented in-use count equals zero, then context DMA 316 transfers the channel descriptor to memory 103 and clears the validity bit for the context buffer. The effect of DMA operations on the in-use counts of the special state buffers and the data buffers will be explained later. FIGS. 9A and 9B will be used to illustrate these operations.
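  • The correlation flow of FIG. 4 can be sketched in software as follows. Every name here is invented for illustration, the free-buffer scan uses the lowest-index policy described later with respect to FIG. 6, and the context DMA transfers are reduced to stubs.

```c
#include <stdbool.h>

#define NUM_CONTEXT_BUFFERS 16  /* illustrative count */

typedef struct {
    int  channel_id;  /* channel identifier of the cached descriptor */
    bool valid;       /* validity bit */
    int  in_use;      /* number of data buffers sharing this descriptor */
} context_buf_t;

static context_buf_t ctx[NUM_CONTEXT_BUFFERS];

/* Stubs standing in for context DMA transfers to and from memory 103. */
static void fetch_descriptor(int channel_id, int i) { (void)channel_id; (void)i; }
static void flush_descriptor(int i)                 { (void)i; }

/* Correlate a channel identifier to a context buffer, reusing a valid
 * cached copy when one exists so only one copy is ever on chip. */
int correlate_context(int channel_id)
{
    for (int i = 0; i < NUM_CONTEXT_BUFFERS; i++) {
        if (ctx[i].valid && ctx[i].channel_id == channel_id) {
            ctx[i].in_use++;   /* another data buffer shares the single copy */
            return i;
        }
    }
    for (int i = 0; i < NUM_CONTEXT_BUFFERS; i++) {
        if (ctx[i].in_use == 0) {            /* lowest-index free buffer */
            fetch_descriptor(channel_id, i); /* transfer from off-chip memory */
            ctx[i].channel_id = channel_id;
            ctx[i].valid = true;
            ctx[i].in_use = 1;
            return i;
        }
    }
    return -1;  /* no free context buffer; the request must wait */
}

/* Called when a packet is transferred off-chip: decrement the count and
 * flush the descriptor once no data buffers reference it. */
void release_context(int i)
{
    if (--ctx[i].in_use == 0) {
        flush_descriptor(i);
        ctx[i].valid = false;
    }
}
```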
  • Memory Interface 109 FIG. 5
  • FIG. 5 depicts a specific example of memory interface circuitry in accord with the present invention. Those skilled in the art will appreciate that numerous variations from the circuitry shown in this example may be made. Furthermore, those skilled in the art will appreciate that some conventional aspects of FIGS. 5-6 have been simplified or omitted for clarity. [0061]
  • FIG. 5 is a block diagram that illustrates [0062] memory interface 109. Memory interface 109 comprises a hardware circuitry engine that is firmware-programmable in that it operates in response to state bits and register content. In contrast, core processor 104 is a micro-processor that executes application software. Memory interface 109 operates in parallel with core processor 104 to conserve core processor capacity by off-loading numerous tasks from the core processor.
  • Both FIG. 1 and FIG. 5 [0063] show memory 103, core processor 104, co-processor circuitry 107, transmit interface 108, and memory interface 109. Memory 103 comprises Static RAM (SRAM) 525 and Synchronous Dynamic RAM (SDRAM) 526, although other memory systems could also be used. SDRAM 526 comprises pointer stack 527 and external buffers 528. Memory interface 109 comprises buffer management engine 520, SRAM interface 521, and SDRAM interface 522. Buffer management engine 520 comprises pointer cache 523 and control logic 524.
  • Conventional components could be used for [0064] SRAM interface 521, SDRAM interface 522, SRAM 525, and SDRAM 526. SRAM interface 521 exchanges context information between SRAM 525 and co-processor circuitry 107. External buffers 528 use a linked list mechanism to store communication packets externally to integrated circuit 100. Pointer stack 527 is a cache of pointers to free external buffers 528 that is initially built by core processor 104. Pointer cache 523 stores pointers that were transferred from pointer stack 527 and correspond to external buffers 528. Sets of pointers may be periodically exchanged between pointer stack 527 and pointer cache 523. Typically, the exchange from stack 527 to cache 523 operates on a first-in/first-out basis.
  • In operation, [0065] core processor 104 writes pointers to free external buffers 528 to pointer stack 527 in SDRAM 526. Through SDRAM interface 522, control logic 524 transfers a subset of these pointers to pointer cache 523. When an entity, such as core processor 104, co-processor circuitry 107, or an external system, needs to store a packet in memory 103, the entity reads a pointer from pointer cache 523 and uses the pointer to transfer the packet to external buffers 528 through SDRAM interface 522. Control logic 524 allocates the external buffer as the corresponding pointer is read from pointer cache 523. SDRAM stores the packet in the external buffer indicated by the pointer. Allocation means to reserve the buffer, so other entities do not improperly write to it while it is allocated.
  • When the entity no longer needs the external buffer (for example, when the packet is transferred from [0066] memory 103 through SDRAM interface 522 to co-processor circuitry 107 or transmit interface 108), the entity writes the pointer to pointer cache 523. Control logic 524 de-allocates the external buffer as the corresponding pointer is written to pointer cache 523. De-allocation means to release the buffer, so other entities may reserve it. The allocation and de-allocation process is repeated for other external buffers 528.
  • [0067] Control logic 524 tracks the number of the pointers in pointer cache 523 that point to de-allocated external buffers 528. If the number reaches a minimum threshold, then control logic 524 transfers additional pointers from pointer stack 527 to pointer cache 523. Control logic 524 may also transfer an exhaustion signal to core processor 104 in this situation. If the number reaches a maximum threshold, then control logic 524 transfers an excess portion of the pointers from pointer cache 523 to pointer stack 527.
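  • The pointer-cache discipline lends itself to a short sketch. The capacity, the threshold values, and all identifiers below are assumptions made for illustration; the real engine is hardware and exchanges sets of pointers with pointer stack 527.

```c
#include <stdint.h>

#define CACHE_SLOTS    64  /* illustrative capacity */
#define MIN_THRESHOLD   8  /* refill point */
#define MAX_THRESHOLD  56  /* spill point */

typedef struct {
    uint32_t ptrs[CACHE_SLOTS];  /* pointers to free external buffers */
    int      count;              /* free-buffer pointers currently cached */
} ptr_cache_t;

/* Stubs standing in for transfers between the cache and the pointer stack. */
static void refill_from_pointer_stack(ptr_cache_t *c) { (void)c; }
static void spill_to_pointer_stack(ptr_cache_t *c)    { (void)c; }

/* Reading a pointer allocates the external buffer it points to. */
uint32_t cache_alloc(ptr_cache_t *c)
{
    uint32_t p = c->ptrs[--c->count];
    if (c->count < MIN_THRESHOLD)
        refill_from_pointer_stack(c);  /* may also signal buffer exhaustion */
    return p;
}

/* Writing a pointer back de-allocates the corresponding external buffer. */
void cache_free(ptr_cache_t *c, uint32_t p)
{
    c->ptrs[c->count++] = p;
    if (c->count > MAX_THRESHOLD)
        spill_to_pointer_stack(c);     /* return the excess to SDRAM */
}
```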
  • FIG. 6 shows the detailed logic added to the [0068] data buffer 314 shown in FIG. 3 in an example of the invention. The data buffer 314 includes two sections designated data only buffers 614 and special state information buffers 620. For this embodiment, there are six buffers for data only and six buffers for special state information, as shown in the diagram. For other embodiments, there are numerous data buffers and special state information buffers. The data buffers are assigned an index number from zero to the maximum number of data buffers in the co-processor 107. The special state information buffers are also assigned an index from zero to the maximum number of special state information buffers in the co-processor 107. Furthermore, the context buffers are also assigned an index from zero to the maximum number of context buffers in the co-processor 107. These indexes are used by the logic in the co-processor 107 and the core processor 104 to identify an individual context buffer, data buffer, or special state information buffer. In one embodiment, there are sixteen of each of these types of buffers in the co-processor 107. The exact number of each of these buffers is not significant to the general operation of the logic.
  • Each buffer has an associated in-use counter [0069] 614-0 to 614-5 and 620-0 to 620-5. The in-use counters keep track of the number of events that are using the data in the particular buffers. Each in-use counter is incremented by one for each event that is using the data or state information in a particular buffer. When an event finishes with a particular buffer, the in-use counter is decremented by one. When the count in an in-use counter reaches zero, no events are using the particular buffer and it can be reallocated. Data buffer resolution logic 622 and PRSR special data resolution logic 621 operate similarly to context buffer resolution 310, which was previously described.
  • Data [0070] buffer resolution logic 622 keeps track of which data buffers 614 are in use and which are available to be assigned to new events. Data buffer resolution logic 622 also contains the logic for incrementing and decrementing the in-use counters associated with the data buffers 614. PRSR special data resolution logic 621 keeps track of which special state information buffers are in use and which are available to be assigned to new events. PRSR special data resolution logic 621 also contains the logic for incrementing and decrementing the in-use counters associated with the special state information buffers.
  • PRSR special [0071] data resolution logic 621 and data buffer resolution logic 622 select a buffer to be assigned to a new event by scanning the in-use counts of all their associated buffers and picking the buffer with the lowest index which has an in-use count of zero. In other embodiments, there are numerous variations on how a buffer with an in-use count of zero is selected to be assigned to a new event. Some examples of selecting a buffer are first-in-first-out selection and last-in-first-out selection.
  • [0072] Context resolution 310 contains the logic used to select the context buffer to be assigned to a new event. A global configuration bit is used to pick which of two mechanisms is used to select the next context buffer to be assigned to a new event. One mechanism picks the context buffer in the same manner as the next data buffer is picked. As previously described, this method returns the context buffer with a zero in-use count which has the lowest index. The problem with this selection mechanism for context buffers is that it tends to select the context buffers that have been most recently freed. For instance, when the context buffer with index zero is freed, it is always the next buffer to be selected. Because context information that is not already stored in a context buffer needs to be read in from off-chip memory, under certain conditions it is better not to reuse a context buffer as soon as its in-use count goes to zero.
  • This problem is addressed by the second context selection mechanism. This mechanism uses a moving “finger” which determines at what index the logic will start searching for an in-use count of zero. The value of the finger is incremented after each new context selection. Hence, for the first new context selection the logic will start searching forward from index zero. For the second new context selection, the logic will start searching forward from [0073] index 1, and so on.
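  • Both selection mechanisms, the lowest-index scan and the moving-finger scan, can be captured in one sketch in which an invented flag stands in for the global configuration bit. The buffer count is illustrative.

```c
#include <stdbool.h>

#define NUM_BUFFERS 16

static int  in_use[NUM_BUFFERS];  /* in-use count per buffer */
static int  finger;               /* start index for the second mechanism */
static bool finger_mode;          /* stands in for the global configuration bit */

/* Return the index of a buffer whose in-use count is zero, or -1. */
int select_buffer(void)
{
    int start = finger_mode ? finger : 0;
    for (int n = 0; n < NUM_BUFFERS; n++) {
        int i = (start + n) % NUM_BUFFERS;
        if (in_use[i] == 0) {
            if (finger_mode)
                finger = (finger + 1) % NUM_BUFFERS;  /* advance per selection */
            return i;
        }
    }
    return -1;  /* all buffers busy */
}
```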
  • As is shown by the arrows in FIG. 6, the special state information data buffer [0074] 620 contains a pointer to an associated data buffer 614 as well as an associated context buffer 315 (hereinafter these will also be referred to as resources). Because of these links, a special state data buffer can be used to identify the resources associated with an event. As shown by the arrows from the special state data buffers to the priority queues 313, a special state data buffer pointer is stored in the appropriate priority queue. This logic was described in more detail above in stage 3 of FIG. 3. When the arbiter 312 picks the next entry to service from the priority queue, the arbiter 312 returns a special state data buffer pointer. This pointer is then used by logic associated with the core processor 104 and the co-processor circuitry 107 to identify the context and data buffer resources the event will be using.
  • In one embodiment, the size of a [0075] data buffer 614 is 64-bytes, the size of a context buffer 315 is 64-bytes, and the size of a special state data buffer 620 is 44 bits. As recognized by those skilled in the art, the size of these buffers could be changed without affecting the operation of the logic in FIG. 6.
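  • The linkage shown by the arrows in FIG. 6 amounts to a small record per special state information buffer. The field names and widths below are guesses made for illustration; the text specifies only that the buffer is 44 bits wide and points to its data and context buffers.

```c
#include <stdint.h>

/* Hypothetical layout of one special state information buffer 620. */
typedef struct {
    uint8_t  data_idx;   /* index of the associated data only buffer 614 */
    uint8_t  ctx_idx;    /* index of the associated context buffer 315 */
    uint8_t  requester;  /* which resource requested the event */
    uint16_t flags;      /* validity, port status, descriptor-loaded, ... */
} special_state_t;

static special_state_t special_state[16];

/* The arbiter returns a special state buffer index; dereferencing it
 * identifies the data and context buffer resources the event will use. */
void resolve_event_resources(int s, int *data_idx, int *ctx_idx)
{
    *data_idx = special_state[s].data_idx;
    *ctx_idx  = special_state[s].ctx_idx;
}
```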
  • FIG. 7 is a block flow diagram showing how a [0076] data buffer 614 can be passed from one event to another event in an example of the invention. When a new event begins as indicated by steps 701 and 702, a check is made to determine if the particular event is using a passed data buffer. If the particular event would like to use a “passed” data buffer, the particular data buffer 614 is associated with the event and the in-use counter for that data buffer is incremented. Next, as indicated by step 721, the event processing takes place and at the end of the event, the in-use counter of the data buffer is decremented by one in step 722. Next, as indicated by step 723, a check is made to determine if the in-use counter is zero. If the count is zero, the buffer is freed and can be assigned to a new event as indicated by step 725. If the count is not zero, as indicated by step 724, the buffer is not freed since the buffer is still in use by some other event.
  • FIG. 8 is a block flow diagram showing how state information is passed between events in an example of the invention. As indicated by [0077] step 802, a determination is made as to whether or not an event is passing “state” information. If state information is not being passed, the operation proceeds as indicated by steps 810 to 815. A new state information buffer is selected from the unused pool of buffers as indicated by step 810. Next, as indicated by step 811, the event is performed. At the end of the event, the in-use counter is decremented by one (step 812) and a check is made to determine if the count is zero at step 813. If the count is zero, the buffer is free to be assigned as indicated by step 815. Otherwise, the buffer is not freed as indicated by block 814.
  • The operations that occur when “state” information is passed from one event to another event are indicated by [0078] steps 804 to 808. When “state” information is passed from one event to another event, the information in the data only buffer 614 is also passed between the events. This is indicated by steps 804 and 805. The event proceeds as indicated by step 806, and at the end of the event, as indicated by steps 807 and 812, the in-use counters of the data only buffer 614 and the state information buffer 620 are decreased by one. As indicated by steps 808, 808-a, 808-b, and 813 to 815, a check is then made to determine if the in-use counters have reached zero in order to determine if the buffers can be re-assigned.
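  • The flows of FIGS. 7 and 8 share the same end-of-event bookkeeping, which the following sketch combines. The helper names are invented, and the timing of the increments is simplified relative to the reservation handshake detailed below for FIG. 9B.

```c
#include <stddef.h>

typedef struct { int in_use; } buf_t;

/* Stubs standing in for allocation from the unused buffer pools. */
static buf_t *alloc_state_buffer(void) { static buf_t s; return &s; }
static buf_t *alloc_data_buffer(void)  { static buf_t d; return &d; }

/* Begin an event. When state information is passed from a previous event
 * (FIG. 8), the data only buffer travels with it; otherwise fresh buffers
 * are drawn from the unused pools. */
void begin_event(buf_t **data, buf_t **state,
                 buf_t *passed_data, buf_t *passed_state)
{
    *state = passed_state ? passed_state : alloc_state_buffer();
    *data  = passed_state ? passed_data  : alloc_data_buffer();
    (*state)->in_use++;  /* associate both buffers with this event */
    (*data)->in_use++;
}

/* End an event: each counter is decremented, and a buffer whose count
 * reaches zero is freed for re-assignment (steps 723-725, 813-815). */
void end_event(buf_t *data, buf_t *state)
{
    if (--data->in_use == 0)  { /* data buffer may be re-assigned */ }
    if (--state->in_use == 0) { /* state buffer may be re-assigned */ }
}
```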
  • An event can pass data or special state information associated with one event to a new event, which does not share the same context information. Such transfers are possible because the state information is stored in a buffer that is separate from the data buffer. An event can also pass a multi-bit message from a current event to a subsequent event that is generated by the current event. This message is stored in the special state buffer of the subsequent event. [0079]
  • FIGS. 9A and 9B illustrate examples of how one embodiment of the invention operates. The horizontal dimension in FIGS. 9A and 9B represents time. FIG. 9A illustrates how the in-use counts for a data buffer change for an event which submits a DMA command in an example of the invention. The process begins at [0080] step 901. It is assumed that at this point the in-use count of the data buffer is one. While the event posted as indicated by step 901 is progressing, steps 902 and 903 indicate that two DMA transfers are submitted. The data buffer count is incremented to two by the first DMA command and to three by the second DMA command. As indicated by step 904, when the first DMA transfer finishes, the in-use count is reduced to two. When the event posted as indicated by block 901 is complete, the in-use count is reduced to one as indicated by block 905. Finally, when the second DMA transfer is complete, the in-use count is reduced to zero as indicated by step 906. Conventional logic is provided in co-processor circuitry 107 to handle the changes to the in-use counts as described.
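  • As a worked example, the FIG. 9A sequence can be replayed against a simple counter; the step numbers refer to the figure.

```c
#include <assert.h>

int main(void)
{
    int in_use = 1;      /* step 901: event posted, count is one    */
    in_use++;            /* step 902: first DMA submitted  -> two   */
    in_use++;            /* step 903: second DMA submitted -> three */
    in_use--;            /* step 904: first DMA finishes   -> two   */
    in_use--;            /* step 905: event completes      -> one   */
    in_use--;            /* step 906: second DMA finishes  -> zero  */
    assert(in_use == 0); /* the data buffer may now be re-assigned  */
    return 0;
}
```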
  • FIG. 9B indicates how the in-use count of a data buffer changes for an event that creates a shared data buffer in an example of the invention. As in FIG. 9A, the horizontal dimension indicates time. The illustrated process begins as indicated by [0081] step 911 with an event being posted. In one embodiment, this event requested a new data buffer. This data buffer would have an initial in-use count of zero, and when the event is posted, as indicated by step 911, the in-use count is increased to one. Step 921 represents another event request, which is posted as indicated by step 922. For the event request shown in step 921, the first event passes its data buffer to the second event, so the second event starts with a data buffer in-use count of two. This initial in-use count of two is arrived at in multiple steps. When the core processor 104 initiates a request for another event, the data buffer in-use count is immediately incremented by one in order to reserve this data buffer for the next event. In step 922, because the event request is for another core processor event, the co-processor circuitry 107 receives this event request and passes it to the section of the co-processor logic that handles core processor event requests. This is the same logic that handled the initial event generation indicated in step 901 or 911. When the event is processed by this section of the co-processor logic, the in-use count of the data buffer is again incremented as this data buffer is assigned to the new event. When this new event is created, the section of the co-processor circuitry 107 that handles event requests signals back to the section of the co-processor circuitry 107 that received this event request from the core processor 104. This section of the co-processor logic now requests that the in-use count of the data buffer be decremented by one. Hence, there is a total of two increments and one decrement, and the new event is posted with an effective initial data buffer in-use count of two.
  • The system is set up so that if [0082] step 922 is delayed by stalls in the system such that this event request is actually processed after step 912 happens, the data buffer remains reserved, by way of its in-use count, by the 921 operation until the 922 operation can take place. This assures that, independent of the relative timing of steps 922 and 912, there is no time between 912 and 922 at which the value of the data buffer's in-use count allows this passed data buffer to be viewed as an unassigned data buffer. The effective reservation of this data buffer by incrementing the in-use count when the event request 921 is posted assures that no intervening event request can mistakenly view this data buffer as unassigned and reallocate it.
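  • The two-increment/one-decrement handshake can be sketched as follows. The function names for the two sections of co-processor logic are invented, and the sequencing is a software simplification of the hardware behavior.

```c
typedef struct { int in_use; } buf_t;

/* Section of co-processor logic that receives event requests from the
 * core processor (step 921): reserve the passed buffer immediately, so
 * it can never appear unassigned between steps 912 and 922. */
void post_event_request(buf_t *passed)
{
    passed->in_use++;  /* first increment: the reservation */
    /* ...the request is forwarded to the event-generation logic... */
}

/* Section of co-processor logic that generates core processor events
 * (step 922): assign the buffer to the new event, then signal back so
 * the requesting section drops its temporary reservation. */
void create_event(buf_t *passed)
{
    passed->in_use++;  /* second increment: assigned to the new event */
    passed->in_use--;  /* matching decrement requested by the receiving
                        * section; the net effect is one extra reference
                        * held by the newly posted event */
}
```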
  • [0083] Step 912 indicates that when the first event is finished, the data buffer count is reduced to one. Steps 931 and 932 indicate a DMA request that is submitted and posted using the same data buffer. As indicated by steps 931 and 932, the count is increased to two and then reduced to one when the DMA request is finished. Finally, as indicated by step 923, when the event posted at step 922 is finished, the in-use count is reduced to zero and the data buffer can be re-assigned to a new event.
  • It should be noted that the descriptions for the examples given in FIGS. 9A and 9B explain only the change in the data buffer in-use count. The in-use counts of the context and special state information buffers change in a similar manner. [0084]
  • It should also be noted that the examples given in FIGS. 9A and 9B are meant to be illustrative examples only. Many other sequences can occur. The point of FIGS. 9A and 9B is to illustrate that with the present invention, multiple processing tasks can be composed in situations where the subsequent tasks need not be aware that any of their resources (data buffer/context buffer/special state buffer) had been used by a previous service task. The in-use counters keep track of this automatically. [0085]
  • While the invention has been shown and described with respect to embodiments thereof, it will be appreciated by those skilled in the art that various changes in form and detail can be made without departing from the spirit and scope of the invention. Applicant's invention is limited only by the scope of the appended claims. [0086]

Claims (21)

We claim:
1. An integrated circuit for processing events related to communication packets, said integrated circuit comprising:
a core processor configured to execute software to process a series of communication packets, the processing of each packet being an event and having associated data and context information; and
a co-processor comprising a plurality of state information buffers for storing state information associated with events, wherein each of said state information buffers has an in-use counter indicating the number of events associated with the contents of said buffer.
2. The integrated circuit of claim 1 wherein said co-processor comprises a plurality of context buffers for storing context information associated with a plurality of events.
3. The integrated circuit of claim 2 wherein said co-processor comprises an in-use counter associated with each of said context buffers.
4. The integrated circuit of claim 1 wherein said co-processor comprises a plurality of data buffers for storing data.
5. The integrated circuit of claim 4 wherein said co-processor comprises an in-use counter associated with each of said data buffers.
6. The integrated circuit of claim 1 wherein said integrated circuit comprises a plurality of data buffers each having an in-use counter whereby data can be transferred from one event to another event by changing information in a data buffer.
7. The integrated circuit of claim 1 wherein said integrated circuit comprises a plurality of buffers for data associated with events and a plurality of buffers for context associated with events.
8. The integrated circuit of claim 7 wherein said integrated circuit comprises an in-use counter associated with each of said buffers.
9. The integrated circuit of claim 1 wherein said co-processor comprises a plurality of data only information buffers, a plurality of context information buffers, an in-use counter for each of said data only buffers and an in-use counter for each of said context buffers.
10. The integrated circuit of claim 9 wherein data can be passed from one event to another event by changing the data in one of said state information buffers.
11. A method of processing events related to communication packets in an integrated circuit which includes a core processor and a co-processor having a state information buffer for storing state information for an event separate from the data associated with said event, said state information buffer having an associated in-use counter, the method comprising:
incrementing the in-use counter associated with said state information buffer when an event is associated with said state information buffer; and
decrementing the in-use counter of said state information buffer when said event associated with said buffer is finished.
12. The method of claim 11 wherein said integrated circuit comprises a plurality of state information buffers.
13. The method of claim 11 wherein said integrated circuit comprises a context buffer and an in-use counter for said context buffer and the method further comprises:
incrementing the in-use counter associated with said context buffer when an event is associated with said context buffer; and
decrementing the in-use counter of said context buffer when said event associated with said context buffer is finished.
14. The method of claim 11 wherein said integrated circuit comprises a data only buffer to store data associated with an event.
15. The method of claim 11 wherein said integrated circuit comprises a data only buffer to store data associated with an event and an in-use counter associated with said data only buffer and the method further comprises:
incrementing the in-use counter associated with said data buffer when an event is associated with said data buffer; and
decrementing the in-use counter of said data buffer when said event associated with said data buffer is finished.
16. An integrated circuit for processing events associated with communication packets which includes a core processor and a co-processor, the improvement which comprises, separate buffers for data and state information and in-use counters for all of said buffers, whereby the contents of a data buffer can be passed from one event to another event, each of said events having state information in a separate state information buffer.
17. The integrated circuit of claim 16 which includes context information buffers.
18. The integrated circuit of claim 17 which includes in-use counters for said context information buffers.
19. The integrated circuit of claim 16 including a plurality of data buffers and a plurality of state information buffers.
20. The integrated circuit of claim 16 which includes a plurality of data buffers, a plurality of state information buffers and a plurality of context information buffers, each of said buffers having an in-use counter which is incremented when an event is associated with the buffer and decremented when an event is finished utilizing the buffer.
21. An integrated circuit for processing events related to communication packets, said integrated circuit comprising:
a core processor configured to execute software to process a series of communication packets, the processing of each packet being an event and having associated data, state and context information; and
a co-processor having a plurality of buffers which separately store data, state and context information associated with events, wherein each of said data, state and context buffers has an in-use counter indicating the number of events associated with said buffer.
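
By way of illustration only (the specification and claims contain no code), the reservation scheme recited in claims 11 through 21 can be sketched in C. Every name below (buf_t, event_t, event_associate, event_finish, event_handoff, the 256-byte payload size) is a hypothetical stand-in; the claims specify only that each data, state and context buffer carries an in-use counter that is incremented when an event is associated with the buffer and decremented when that event finishes.

    #include <assert.h>
    #include <stdint.h>

    /* Hypothetical buffer record: claims 16-21 call for separate data,
     * state and context buffers, each with its own in-use counter. */
    typedef struct {
        uint32_t in_use;        /* number of events currently associated */
        uint8_t  payload[256];  /* buffer contents (size is illustrative) */
    } buf_t;

    /* A processing event references one buffer of each kind. */
    typedef struct {
        buf_t *data;     /* data only buffer */
        buf_t *state;    /* state information buffer */
        buf_t *context;  /* context information buffer */
    } event_t;

    /* Associating an event with its buffers increments each counter
     * (claims 11-15). */
    static void event_associate(event_t *e, buf_t *data, buf_t *state,
                                buf_t *context)
    {
        e->data = data;        data->in_use++;
        e->state = state;      state->in_use++;
        e->context = context;  context->in_use++;
    }

    /* Finishing an event decrements each counter; a buffer may be
     * reclaimed only once its counter returns to zero. */
    static void event_finish(event_t *e)
    {
        assert(e->data->in_use && e->state->in_use && e->context->in_use);
        e->data->in_use--;
        e->state->in_use--;
        e->context->in_use--;
    }

    /* Passing a data buffer from one event to the next (claims 10, 16
     * and 20): the successor is associated before the predecessor
     * finishes, so the data buffer's counter drops from 2 to 1, never
     * reaching zero, and its contents survive the hand-off uncopied. */
    static void event_handoff(event_t *from, event_t *to,
                              buf_t *new_state, buf_t *new_context)
    {
        event_associate(to, from->data, new_state, new_context);
        event_finish(from);
    }

Because release is driven purely by the counters, neither event needs any knowledge of the other: as long as the successor's association precedes the predecessor's finish, the shared data buffer is never freed in between, which is the automatic-reservation behavior the claims describe.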
US09/919,283 1999-08-17 2001-07-31 Method for automatic resource reservation and communication that facilitates using multiple processing events for a single processing task Expired - Lifetime US7099328B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/919,283 US7099328B2 (en) 1999-08-17 2001-07-31 Method for automatic resource reservation and communication that facilitates using multiple processing events for a single processing task

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US14937699P 1999-08-17 1999-08-17
US22182100P 2000-07-31 2000-07-31
US09/639,915 US6888830B1 (en) 1999-08-17 2000-08-16 Integrated circuit that processes communication packets with scheduler circuitry that executes scheduling algorithms based on cached scheduling parameters
US09/640,231 US6804239B1 (en) 1999-08-17 2000-08-16 Integrated circuit that processes communication packets with co-processor circuitry to correlate a packet stream with context information
US09/640,258 US6754223B1 (en) 1999-08-17 2000-08-16 Integrated circuit that processes communication packets with co-processor circuitry to determine a prioritized processing order for a core processor
US09/919,283 US7099328B2 (en) 1999-08-17 2001-07-31 Method for automatic resource reservation and communication that facilitates using multiple processing events for a single processing task

Related Parent Applications (3)

Application Number Title Priority Date Filing Date
US09/639,915 Continuation US6888830B1 (en) 1999-08-17 2000-08-16 Integrated circuit that processes communication packets with scheduler circuitry that executes scheduling algorithms based on cached scheduling parameters
US09/640,231 Continuation US6804239B1 (en) 1999-08-17 2000-08-16 Integrated circuit that processes communication packets with co-processor circuitry to correlate a packet stream with context information
US09/640,258 Continuation US6754223B1 (en) 1999-08-17 2000-08-16 Integrated circuit that processes communication packets with co-processor circuitry to determine a prioritized processing order for a core processor

Publications (3)

Publication Number Publication Date
US20020051460A1 US20020051460A1 (en) 2002-05-02
US20040202192A9 2004-10-14
US7099328B2 US7099328B2 (en) 2006-08-29

Family

ID=22530010

Family Applications (7)

Application Number Title Priority Date Filing Date
US09/640,231 Expired - Lifetime US6804239B1 (en) 1999-08-17 2000-08-16 Integrated circuit that processes communication packets with co-processor circuitry to correlate a packet stream with context information
US09/640,260 Ceased US7046686B1 (en) 1999-08-17 2000-08-16 Integrated circuit that processes communication packets with a buffer management engine having a pointer cache
US09/639,966 Expired - Lifetime US6760337B1 (en) 1999-08-17 2000-08-16 Integrated circuit that processes communication packets with scheduler circuitry having multiple priority levels
US09/639,915 Expired - Lifetime US6888830B1 (en) 1999-08-17 2000-08-16 Integrated circuit that processes communication packets with scheduler circuitry that executes scheduling algorithms based on cached scheduling parameters
US09/640,258 Expired - Lifetime US6754223B1 (en) 1999-08-17 2000-08-16 Integrated circuit that processes communication packets with co-processor circuitry to determine a prioritized processing order for a core processor
US09/919,283 Expired - Lifetime US7099328B2 (en) 1999-08-17 2001-07-31 Method for automatic resource reservation and communication that facilitates using multiple processing events for a single processing task
US12/122,625 Expired - Lifetime USRE42092E1 (en) 1999-08-17 2008-05-16 Integrated circuit that processes communication packets with a buffer management engine having a pointer cache

Family Applications Before (5)

Application Number Title Priority Date Filing Date
US09/640,231 Expired - Lifetime US6804239B1 (en) 1999-08-17 2000-08-16 Integrated circuit that processes communication packets with co-processor circuitry to correlate a packet stream with context information
US09/640,260 Ceased US7046686B1 (en) 1999-08-17 2000-08-16 Integrated circuit that processes communication packets with a buffer management engine having a pointer cache
US09/639,966 Expired - Lifetime US6760337B1 (en) 1999-08-17 2000-08-16 Integrated circuit that processes communication packets with scheduler circuitry having multiple priority levels
US09/639,915 Expired - Lifetime US6888830B1 (en) 1999-08-17 2000-08-16 Integrated circuit that processes communication packets with scheduler circuitry that executes scheduling algorithms based on cached scheduling parameters
US09/640,258 Expired - Lifetime US6754223B1 (en) 1999-08-17 2000-08-16 Integrated circuit that processes communication packets with co-processor circuitry to determine a prioritized processing order for a core processor

Family Applications After (1)

Application Number Title Priority Date Filing Date
US12/122,625 Expired - Lifetime USRE42092E1 (en) 1999-08-17 2008-05-16 Integrated circuit that processes communication packets with a buffer management engine having a pointer cache

Country Status (3)

Country Link
US (7) US6804239B1 (en)
AU (1) AU6776200A (en)
WO (1) WO2001013590A1 (en)

Families Citing this family (146)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1116127B1 (en) * 1998-09-23 2003-10-15 Infineon Technologies AG Program-controlled unit
US7028071B1 (en) * 2000-01-28 2006-04-11 Bycast Inc. Content distribution system for generating content streams to suit different users and facilitating e-commerce transactions using broadcast content metadata
US7139901B2 (en) * 2000-02-08 2006-11-21 Mips Technologies, Inc. Extended instruction set for packet processing applications
US7649901B2 (en) * 2000-02-08 2010-01-19 Mips Technologies, Inc. Method and apparatus for optimizing selection of available contexts for packet processing in multi-stream packet processing
US20010049757A1 (en) 2000-03-01 2001-12-06 Ming-Kang Liu Programmable task scheduler for use with multiport xDSL processing system
JP3594076B2 (en) * 2000-03-01 2004-11-24 日本電気株式会社 Packet switch and scheduling method thereof
DE10023037A1 (en) * 2000-05-11 2001-11-22 Marconi Comm Gmbh Switching network for a telecommunications network and method for switching in a switching network
US20020176430A1 (en) * 2001-01-25 2002-11-28 Sangha Onkar S. Buffer management for communication systems
US7130916B2 (en) * 2001-02-23 2006-10-31 International Business Machines Corporation Linking frame data by inserting qualifiers in control blocks
US6985441B1 (en) * 2001-03-05 2006-01-10 Advanced Micro Devices, Inc. Intelligent embedded processor enabled mechanism to implement RSVP function
US8051212B2 (en) * 2001-04-11 2011-11-01 Mellanox Technologies Ltd. Network interface adapter with shared data send resources
KR100902513B1 (en) 2001-04-13 2009-06-15 프리스케일 세미컨덕터, 인크. Manipulating data streams in data stream processors
KR100436365B1 (en) * 2001-06-23 2004-06-18 삼성전자주식회사 ATM-based delay adaptive scheduling apparatus according to traffic types and method thereof
US6934834B2 (en) * 2001-08-31 2005-08-23 Hewlett-Packard Development Company, L.P. Computer program for controlling the manner in which an operating system launches a plurality of application programs
US7539204B2 (en) * 2001-09-26 2009-05-26 Broadcom Corporation Data and context memory sharing
US7221678B1 (en) 2001-10-01 2007-05-22 Advanced Micro Devices, Inc. Method and apparatus for routing packets
US7274692B1 (en) 2001-10-01 2007-09-25 Advanced Micro Devices, Inc. Method and apparatus for routing packets that have multiple destinations
US7295563B2 (en) * 2001-10-01 2007-11-13 Advanced Micro Devices, Inc. Method and apparatus for routing packets that have ordering requirements
US7187689B1 (en) * 2001-10-29 2007-03-06 Juniper Networks, Inc. Self-cleaning mechanism for error recovery
US6982986B2 (en) * 2001-11-01 2006-01-03 International Business Machines Corporation QoS scheduler and method for implementing quality of service anticipating the end of a chain of flows
US6973036B2 (en) * 2001-11-01 2005-12-06 International Business Machines Corporation QoS scheduler and method for implementing peak service distance using next peak service time violated indication
US7280474B2 (en) * 2001-11-01 2007-10-09 International Business Machines Corporation Weighted fair queue having adjustable scaling factor
US7310345B2 (en) * 2001-11-01 2007-12-18 International Business Machines Corporation Empty indicators for weighted fair queues
US7103051B2 (en) * 2001-11-01 2006-09-05 International Business Machines Corporation QoS scheduler and method for implementing quality of service with aging time stamps
US7317683B2 (en) * 2001-11-01 2008-01-08 International Business Machines Corporation Weighted fair queue serving plural output ports
US7046676B2 (en) * 2001-11-01 2006-05-16 International Business Machines Corporation QoS scheduler and method for implementing quality of service with cached status array
US6976154B1 (en) * 2001-11-07 2005-12-13 Juniper Networks, Inc. Pipelined processor for examining packet header information
US20030135632A1 (en) * 2001-12-13 2003-07-17 Sophie Vrzic Priority scheduler
US7653736B2 (en) * 2001-12-14 2010-01-26 Nxp B.V. Data processing system having multiple processors and a communications means in a data processing system
US7107413B2 (en) * 2001-12-17 2006-09-12 Intel Corporation Write queue descriptor count instruction for high speed queuing
US7269179B2 (en) * 2001-12-18 2007-09-11 Intel Corporation Control mechanisms for enqueue and dequeue operations in a pipelined network processor
US7379420B2 (en) * 2001-12-28 2008-05-27 Network Equipment Technologies, Inc. Method and apparatus for multiple qualities of service to different network connections of a single network path
US7895239B2 (en) * 2002-01-04 2011-02-22 Intel Corporation Queue arrays in network devices
US7181573B2 (en) * 2002-01-07 2007-02-20 Intel Corporation Queue array caching in network devices
JP3882618B2 (en) * 2002-01-18 2007-02-21 ヤマハ株式会社 Communication apparatus and network system
US7149226B2 (en) * 2002-02-01 2006-12-12 Intel Corporation Processing data packets
US7257124B2 (en) * 2002-03-20 2007-08-14 International Business Machines Corporation Method and apparatus for improving the fairness of new attaches to a weighted fair queue in a quality of service (QoS) scheduler
US7680043B2 (en) 2002-03-20 2010-03-16 International Business Machines Corporation Network processor having fast flow queue disable process
DE10235544B4 (en) * 2002-03-25 2013-04-04 Agere Systems Guardian Corp. Method for improved data communication due to improved data processing within a transceiver
KR20030080443A (en) * 2002-04-08 2003-10-17 (주) 위즈네트 Internet protocol system using hardware protocol processing logic and the parallel data processing method using the same
US7023843B2 (en) * 2002-06-26 2006-04-04 Nokia Corporation Programmable scheduling for IP routers
US7243154B2 (en) * 2002-06-27 2007-07-10 Intel Corporation Dynamically adaptable communications processor architecture and associated methods
US20040009141A1 (en) 2002-07-09 2004-01-15 Kimberly-Clark Worldwide, Inc. Skin cleansing products incorporating cationic compounds
US8520519B2 (en) * 2002-09-20 2013-08-27 Broadcom Corporation External jitter buffer in a packet voice system
US7408937B1 (en) * 2003-01-09 2008-08-05 Cisco Technology, Inc. Methods and apparatus for identifying a variable number of items first in sequence from a variable starting position which may be particularly useful by packet or other scheduling mechanisms
US8005971B2 (en) * 2003-02-08 2011-08-23 Hewlett-Packard Development Company, L.P. Apparatus for communicating with a network
US20040184470A1 (en) * 2003-03-18 2004-09-23 Airspan Networks Inc. System and method for data routing
US6920111B1 (en) * 2003-03-21 2005-07-19 Cisco Technology, Inc. Multiple update frequencies for counters in a multi-level shaping system
US7392399B2 (en) * 2003-05-05 2008-06-24 Sun Microsystems, Inc. Methods and systems for efficiently integrating a cryptographic co-processor
US20050021558A1 (en) * 2003-06-11 2005-01-27 Beverly Harlan T. Network protocol off-load engine memory management
KR100473814B1 (en) * 2003-07-16 2005-03-14 한국전자통신연구원 Duplicated system and method using a serial to deserialize
KR100584341B1 (en) * 2003-07-29 2006-05-26 삼성전자주식회사 Method for controling upstream traffic of ethernet passive optical networke-pon
US8804761B2 (en) * 2003-08-21 2014-08-12 Qualcomm Incorporated Methods for seamless delivery of broadcast and multicast content across cell borders and/or between different transmission schemes and related apparatus
US20050047425A1 (en) * 2003-09-03 2005-03-03 Yonghe Liu Hierarchical scheduling for communications systems
US8644321B2 (en) * 2003-09-16 2014-02-04 Qualcomm Incorporated Scheduling packet transmissions
US9065741B1 (en) * 2003-09-25 2015-06-23 Cisco Technology, Inc. Methods and apparatuses for identifying and alleviating internal bottlenecks prior to processing packets in internal feature modules
US8009563B2 (en) * 2003-12-19 2011-08-30 Broadcom Corporation Method and system for transmit scheduling for multi-layer network interface controller (NIC) operation
US7864806B2 (en) * 2004-01-06 2011-01-04 Broadcom Corp. Method and system for transmission control packet (TCP) segmentation offload
US7379453B1 (en) * 2004-03-29 2008-05-27 Sun Microsystems, Inc. Method and apparatus for transferring multiple packets from hardware
US20050232298A1 (en) * 2004-04-19 2005-10-20 Beverly Harlan T Early direct memory access in network communications
US7889750B1 (en) * 2004-04-28 2011-02-15 Extreme Networks, Inc. Method of extending default fixed number of processing cycles in pipelined packet processor architecture
US7675926B2 (en) * 2004-05-05 2010-03-09 Cisco Technology, Inc. Hierarchical QoS behavioral model
US20060140191A1 (en) * 2004-12-29 2006-06-29 Naik Uday R Multi-level scheduling using single bit vector
US20060164431A1 (en) * 2005-01-26 2006-07-27 Samsung Electronics, Co.,Ltd. Apparatus and method for displaying graphic objects concurrently
US20070002827A1 (en) * 2005-06-30 2007-01-04 Victor Lau Automated serial protocol target port transport layer retry mechanism
US20070011333A1 (en) * 2005-06-30 2007-01-11 Victor Lau Automated serial protocol initiator port transport layer retry mechanism
CN101253745B (en) 2005-07-18 2011-06-22 博通以色列研发公司 Method and system for transparent TCP offload
US20070053349A1 (en) * 2005-09-02 2007-03-08 Bryan Rittmeyer Network interface accessing multiple sized memory segments
US7500209B2 (en) * 2005-09-28 2009-03-03 The Mathworks, Inc. Stage evaluation of a state machine
US7472261B2 (en) * 2005-11-08 2008-12-30 International Business Machines Corporation Method for performing externally assisted calls in a heterogeneous processing complex
US20070140232A1 (en) * 2005-12-16 2007-06-21 Carson Mark B Self-steering Clos switch
US7756036B2 (en) * 2005-12-22 2010-07-13 Intuitive Surgical Operations, Inc. Synchronous data communication
US8054752B2 (en) 2005-12-22 2011-11-08 Intuitive Surgical Operations, Inc. Synchronous data communication
US7757028B2 (en) * 2005-12-22 2010-07-13 Intuitive Surgical Operations, Inc. Multi-priority messaging
US8194690B1 (en) * 2006-05-24 2012-06-05 Tilera Corporation Packet processing in a parallel processing environment
US8713574B2 (en) * 2006-06-05 2014-04-29 International Business Machines Corporation Soft co-processors to provide a software service function off-load architecture in a multi-core processing environment
US7980582B2 (en) * 2006-08-09 2011-07-19 Atc Leasing Company Llc Front tow extended saddle
US7934063B2 (en) 2007-03-29 2011-04-26 International Business Machines Corporation Invoking externally assisted calls from an isolated environment
US7814243B2 (en) * 2007-06-01 2010-10-12 Sonics, Inc. Shared storage for multi-threaded ordered queues in an interconnect
US20080316983A1 (en) * 2007-06-22 2008-12-25 At&T Intellectual Property, Inc. Service information in a LAN access point that regulates network service levels provided to communication terminals
US8184538B2 (en) * 2007-06-22 2012-05-22 At&T Intellectual Property I, L.P. Regulating network service levels provided to communication terminals through a LAN access point
US8121117B1 (en) 2007-10-01 2012-02-21 F5 Networks, Inc. Application layer network traffic prioritization
US7822885B2 (en) * 2007-10-16 2010-10-26 Applied Micro Circuits Corporation Channel-less multithreaded DMA controller
US9106592B1 (en) * 2008-05-18 2015-08-11 Western Digital Technologies, Inc. Controller and method for controlling a buffered data transfer device
ES2385632T3 (en) * 2009-01-07 2012-07-27 Abb Research Ltd. Smart electronic device and design method of a substation automation system
CN101854311A (en) * 2009-03-31 2010-10-06 国际商业机器公司 Method and device for transmitting context information on web server
US8650341B2 (en) * 2009-04-23 2014-02-11 Microchip Technology Incorporated Method for CAN concatenating CAN data payloads
US8312243B2 (en) * 2009-07-16 2012-11-13 Lantiq Deutschland Gmbh Memory management in network processors
US10721269B1 (en) 2009-11-06 2020-07-21 F5 Networks, Inc. Methods and system for returning requests with javascript for clients before passing a request to a server
US8806056B1 (en) 2009-11-20 2014-08-12 F5 Networks, Inc. Method for optimizing remote file saves in a failsafe way
US8868951B2 (en) 2009-12-26 2014-10-21 Intel Corporation Multiple-queue multiple-resource entry sleep and wakeup for power savings and bandwidth conservation in a retry based pipeline
US9503375B1 (en) 2010-06-30 2016-11-22 F5 Networks, Inc. Methods for managing traffic in a multi-service environment and devices thereof
US9420049B1 (en) 2010-06-30 2016-08-16 F5 Networks, Inc. Client side human user indicator
US8347100B1 (en) 2010-07-14 2013-01-01 F5 Networks, Inc. Methods for DNSSEC proxying and deployment amelioration and systems thereof
US8478909B1 (en) 2010-07-20 2013-07-02 Qlogic, Corporation Method and system for communication across multiple channels
US8972995B2 (en) 2010-08-06 2015-03-03 Sonics, Inc. Apparatus and methods to concurrently perform per-thread as well as per-tag memory access scheduling within a thread and across two or more threads
US9088594B2 (en) 2011-02-07 2015-07-21 International Business Machines Corporation Providing to a parser and processors in a network processor access to an external coprocessor
US8468546B2 (en) 2011-02-07 2013-06-18 International Business Machines Corporation Merging result from a parser in a network processor with result from an external coprocessor
WO2012158854A1 (en) 2011-05-16 2012-11-22 F5 Networks, Inc. A method for load balancing of requests' processing of diameter servers
US8396836B1 (en) 2011-06-30 2013-03-12 F5 Networks, Inc. System for mitigating file virtualization storage import latency
US8463850B1 (en) 2011-10-26 2013-06-11 F5 Networks, Inc. System and method of algorithmically generating a server side transaction identifier
US8954492B1 (en) 2011-11-30 2015-02-10 F5 Networks, Inc. Methods for inlining content externally referenced in a web page prior to providing the web page to a requestor and devices thereof
US10230566B1 (en) 2012-02-17 2019-03-12 F5 Networks, Inc. Methods for dynamically constructing a service principal name and devices thereof
US9244843B1 (en) 2012-02-20 2016-01-26 F5 Networks, Inc. Methods for improving flow cache bandwidth utilization and devices thereof
US9020912B1 (en) 2012-02-20 2015-04-28 F5 Networks, Inc. Methods for accessing data in a compressed file system and devices thereof
EP2853074B1 (en) 2012-04-27 2021-03-24 F5 Networks, Inc Methods for optimizing service of content requests and devices thereof
US10033837B1 (en) 2012-09-29 2018-07-24 F5 Networks, Inc. System and method for utilizing a data reducing module for dictionary compression of encoded data
US9578090B1 (en) 2012-11-07 2017-02-21 F5 Networks, Inc. Methods for provisioning application delivery service and devices thereof
US10375155B1 (en) 2013-02-19 2019-08-06 F5 Networks, Inc. System and method for achieving hardware acceleration for asymmetric flow connections
US9270610B2 (en) * 2013-02-27 2016-02-23 Apple Inc. Apparatus and method for controlling transaction flow in integrated circuits
US9554418B1 (en) 2013-02-28 2017-01-24 F5 Networks, Inc. Device for topology hiding of a visited network
US9497614B1 (en) 2013-02-28 2016-11-15 F5 Networks, Inc. National traffic steering device for a better control of a specific wireless/LTE network
US10187317B1 (en) 2013-11-15 2019-01-22 F5 Networks, Inc. Methods for traffic rate control and devices thereof
US9330036B2 (en) * 2013-11-25 2016-05-03 Red Hat Israel, Ltd. Interrupt reduction by dynamic application buffering
US9087394B1 (en) * 2014-02-13 2015-07-21 Raycast Systems, Inc. Computer hardware architecture and data structures for packet binning to support incoherent ray traversal
US9455932B2 (en) * 2014-03-03 2016-09-27 Ericsson Ab Conflict detection and resolution in an ABR network using client interactivity
US10142259B2 (en) 2014-03-03 2018-11-27 Ericsson Ab Conflict detection and resolution in an ABR network
US9529532B2 (en) 2014-03-07 2016-12-27 Cavium, Inc. Method and apparatus for memory allocation in a multi-node system
US9411644B2 (en) * 2014-03-07 2016-08-09 Cavium, Inc. Method and system for work scheduling in a multi-chip system
US10592459B2 (en) 2014-03-07 2020-03-17 Cavium, Llc Method and system for ordering I/O access in a multi-node environment
US9372800B2 (en) 2014-03-07 2016-06-21 Cavium, Inc. Inter-chip interconnect protocol for a multi-chip system
US11838851B1 (en) 2014-07-15 2023-12-05 F5, Inc. Methods for managing L7 traffic classification and devices thereof
US10182013B1 (en) 2014-12-01 2019-01-15 F5 Networks, Inc. Methods for managing progressive image delivery and devices thereof
US11895138B1 (en) 2015-02-02 2024-02-06 F5, Inc. Methods for improving web scanner accuracy and devices thereof
US10834065B1 (en) 2015-03-31 2020-11-10 F5 Networks, Inc. Methods for SSL protected NTLM re-authentication and devices thereof
US11350254B1 (en) 2015-05-05 2022-05-31 F5, Inc. Methods for enforcing compliance policies and devices thereof
US10505818B1 2015-05-05 2019-12-10 F5 Networks, Inc. Methods for analyzing and load balancing based on server health and devices thereof
US20170147517A1 (en) * 2015-11-23 2017-05-25 Mediatek Inc. Direct memory access system using available descriptor mechanism and/or pre-fetch mechanism and associated direct memory access method
US11757946B1 (en) 2015-12-22 2023-09-12 F5, Inc. Methods for analyzing network traffic and enforcing network policies and devices thereof
US10404698B1 (en) 2016-01-15 2019-09-03 F5 Networks, Inc. Methods for adaptive organization of web application access points in webtops and devices thereof
US11178150B1 (en) 2016-01-20 2021-11-16 F5 Networks, Inc. Methods for enforcing access control list based on managed application and devices thereof
US10797888B1 (en) 2016-01-20 2020-10-06 F5 Networks, Inc. Methods for secured SCEP enrollment for client devices and devices thereof
US10412198B1 (en) 2016-10-27 2019-09-10 F5 Networks, Inc. Methods for improved transmission control protocol (TCP) performance visibility and devices thereof
US11063758B1 (en) 2016-11-01 2021-07-13 F5 Networks, Inc. Methods for facilitating cipher selection and devices thereof
US10505792B1 (en) 2016-11-02 2019-12-10 F5 Networks, Inc. Methods for facilitating network traffic analytics and devices thereof
US10812266B1 (en) 2017-03-17 2020-10-20 F5 Networks, Inc. Methods for managing security tokens based on security violations and devices thereof
US11343237B1 (en) 2017-05-12 2022-05-24 F5, Inc. Methods for managing a federated identity environment using security and access control data and devices thereof
US11122042B1 (en) 2017-05-12 2021-09-14 F5 Networks, Inc. Methods for dynamically managing user access control and devices thereof
US11223689B1 (en) 2018-01-05 2022-01-11 F5 Networks, Inc. Methods for multipath transmission control protocol (MPTCP) based session migration and devices thereof
US11275632B2 (en) 2018-09-14 2022-03-15 Advanced Micro Devices, Inc. Broadcast command and response
US11185379B2 (en) * 2019-01-10 2021-11-30 Verily Life Sciences Llc Comprehensive messaging system for robotic surgical systems
US10860325B1 (en) 2019-07-05 2020-12-08 Nokia Solutions And Networks Oy Dynamic control of processor instruction sets
CN113010173A (en) 2019-12-19 2021-06-22 超威半导体(上海)有限公司 Method for matrix data broadcasting in parallel processing
CN113094099A (en) 2019-12-23 2021-07-09 超威半导体(上海)有限公司 Matrix data broadcast architecture
US11403221B2 (en) 2020-09-24 2022-08-02 Advanced Micro Devices, Inc. Memory access response merging in a memory hierarchy
US11606317B1 (en) * 2021-04-14 2023-03-14 Xilinx, Inc. Table based multi-function virtualization

Family Cites Families (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB8329509D0 (en) * 1983-11-04 1983-12-07 Inmos Ltd Computer
US5151895A (en) * 1990-06-29 1992-09-29 Digital Equipment Corporation Terminal server architecture
US5742760A (en) * 1992-05-12 1998-04-21 Compaq Computer Corporation Network packet switch using shared memory for repeating and bridging packets at media rate
US5805927A (en) 1994-01-28 1998-09-08 Apple Computer, Inc. Direct memory access channel architecture and method for reception of network information
US5493652A (en) * 1994-04-29 1996-02-20 International Business Machines Corporation Management system for a buffer memory having buffers of uniform size in which the buffers are divided into a portion of contiguous unused buffers and a portion of contiguous buffers in which at least some are used
US5533020A (en) 1994-10-31 1996-07-02 International Business Machines Corporation ATM cell scheduler
US6351780B1 (en) * 1994-11-21 2002-02-26 Cirrus Logic, Inc. Network controller using held data frame monitor and decision logic for automatically engaging DMA data transfer when buffer overflow is anticipated
US5566170A (en) * 1994-12-29 1996-10-15 Storage Technology Corporation Method and apparatus for accelerated packet forwarding
US6523060B1 (en) 1995-04-07 2003-02-18 Cisco Technology, Inc. Method and apparatus for the management of queue pointers by multiple processors in a digital communications network
EP0812083B1 (en) * 1995-08-02 2007-04-18 Nippon Telegraph And Telephone Corporation Dynamic rate controller
US5606559A (en) * 1995-08-11 1997-02-25 International Business Machines Corporation System and method for an efficient ATM adapter/device driver interface
US6327246B1 (en) * 1995-11-29 2001-12-04 Ahead Communications Systems, Inc. Controlled available bit rate service in an ATM switch
US5689707A (en) * 1995-12-04 1997-11-18 Ncr Corporation Method and apparatus for detecting memory leaks using expiration events and dependent pointers to indicate when a memory allocation should be de-allocated
US5818830A (en) * 1995-12-29 1998-10-06 Lsi Logic Corporation Method and apparatus for increasing the effective bandwidth of a digital wireless network
US5920561A (en) 1996-03-07 1999-07-06 Lsi Logic Corporation ATM communication system interconnect/termination unit
US5859835A (en) * 1996-04-15 1999-01-12 The Regents Of The University Of California Traffic scheduling system and method for packet-switched networks
US5748630A (en) 1996-05-09 1998-05-05 Maker Communications, Inc. Asynchronous transfer mode cell processing system with load multiple instruction and memory write-back
US5959993A (en) * 1996-09-13 1999-09-28 Lsi Logic Corporation Scheduler design for ATM switches, and its implementation in a distributed shared memory architecture
US6175902B1 (en) * 1997-12-18 2001-01-16 Advanced Micro Devices, Inc. Method and apparatus for maintaining a time order by physical ordering in a memory
US6487212B1 (en) * 1997-02-14 2002-11-26 Advanced Micro Devices, Inc. Queuing structure and method for prioritization of frames in a network switch
US6061351A (en) * 1997-02-14 2000-05-09 Advanced Micro Devices, Inc. Multicopy queue structure with searchable cache area
US6028843A (en) * 1997-03-25 2000-02-22 International Business Machines Corporation Earliest deadline first communications cell scheduler and scheduling method for transmitting earliest deadline cells first
US6324623B1 (en) * 1997-05-30 2001-11-27 Oracle Corporation Computing system for implementing a shared cache
GB9721947D0 (en) * 1997-10-16 1997-12-17 Thomson Consumer Electronics Intelligent IP packet scheduler algorithm
US6088777A (en) * 1997-11-12 2000-07-11 Ericsson Messaging Systems, Inc. Memory system and method for dynamically allocating a memory divided into plural classes with different block sizes to store variable length messages
US6091709A (en) * 1997-11-25 2000-07-18 International Business Machines Corporation Quality of service management for packet switched networks
US6208661B1 (en) * 1998-01-07 2001-03-27 International Business Machines Corporation Variable resolution scheduler for virtual channel communication devices
US6216215B1 (en) * 1998-04-02 2001-04-10 Intel Corporation Method and apparatus for senior loads
US6353616B1 (en) * 1998-05-21 2002-03-05 Lucent Technologies Inc. Adaptive processor schedulor and method for reservation protocol message processing
US6205150B1 (en) * 1998-05-28 2001-03-20 3Com Corporation Method of scheduling higher and lower priority data packets
US6311212B1 (en) * 1998-06-27 2001-10-30 Intel Corporation Systems and methods for on-chip storage of virtual connection descriptors
US6526062B1 (en) * 1998-10-13 2003-02-25 Verizon Corporate Services Group Inc. System and method for scheduling and rescheduling the transmission of cell objects of different traffic types
US6347344B1 (en) * 1998-10-14 2002-02-12 Hitachi, Ltd. Integrated multimedia system with local processor, data transfer switch, processing modules, fixed functional unit, data streamer, interface unit and multiplexer, all integrated on multimedia processor
US5978898A (en) * 1998-10-30 1999-11-02 Intel Corporation Allocating registers in a superscalar machine
US6477562B2 (en) * 1998-12-16 2002-11-05 Clearwater Networks, Inc. Prioritized instruction scheduling for multi-streaming processors
US6341325B2 (en) * 1999-01-12 2002-01-22 International Business Machines Corporation Method and apparatus for addressing main memory contents including a directory structure in a computer system
US6466580B1 (en) * 1999-02-23 2002-10-15 Advanced Micro Devices, Inc. Method and apparatus for processing high and low priority frame data transmitted in a data communication system
US6621792B1 (en) * 1999-02-23 2003-09-16 Avaya Technology Corp. Computationally-efficient traffic shaper
US6539024B1 (en) * 1999-03-26 2003-03-25 Alcatel Canada Inc. Method and apparatus for data buffer management in a communications switch
US6504846B1 (en) * 1999-05-21 2003-01-07 Advanced Micro Devices, Inc. Method and apparatus for reclaiming buffers using a single buffer bit
US6401147B1 (en) * 1999-05-24 2002-06-04 Advanced Micro Devices, Inc. Split-queue architecture with a first queue area and a second queue area and queue overflow area having a trickle mode and an overflow mode based on prescribed threshold values
US6601089B1 (en) * 1999-06-21 2003-07-29 Sun Microsystems, Inc. System and method for allocating buffers for message passing in a shared-memory computer system
US6633540B1 (en) * 1999-07-02 2003-10-14 Nokia Internet Communications, Inc. Real-time traffic shaper with keep-alive property for best-effort traffic
US6301646B1 (en) * 1999-07-30 2001-10-09 Curl Corporation Pointer verification system and method

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6310879B2 (en) * 1994-09-14 2001-10-30 Fan Zhou Method and apparatus for multicast of ATM cells where connections can be dynamically added or dropped
US5896511A (en) * 1995-07-19 1999-04-20 Fujitsu Network Communications, Inc. Method and apparatus for providing buffer state flow control at the link level in addition to flow control on a per-connection basis
US6373846B1 (en) * 1996-03-07 2002-04-16 Lsi Logic Corporation Single chip networking device with enhanced memory access co-processor
US6021132A (en) * 1997-06-30 2000-02-01 Sun Microsystems, Inc. Shared memory management in a switched network element
US6157955A (en) * 1998-06-15 2000-12-05 Intel Corporation Packet processing system including a policy engine having a classification unit
US6625689B2 (en) * 1998-06-15 2003-09-23 Intel Corporation Multiple consumer-multiple producer rings
US6724767B1 (en) * 1998-06-27 2004-04-20 Intel Corporation Two-dimensional queuing/de-queuing methods and systems for implementing the same
US6667978B1 (en) * 1998-07-09 2003-12-23 International Business Machines Corporation Apparatus and method for reassembling frame data into stream data
US6246682B1 (en) * 1999-03-05 2001-06-12 Transwitch Corp. Method and apparatus for managing multiple ATM cell queues
US6782465B1 (en) * 1999-10-20 2004-08-24 Infineon Technologies North America Corporation Linked list DMA descriptor architecture

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7292593B1 (en) * 2002-03-28 2007-11-06 Advanced Micro Devices, Inc. Arrangement in a channel adapter for segregating transmit packet data in transmit buffers based on respective virtual lanes
US7321595B2 (en) * 2002-05-25 2008-01-22 Samsung Electronics Co., Ltd. Method for processing various numbers of ports in network processor
US20030219027A1 (en) * 2002-05-25 2003-11-27 Su-Hyun Kim Method for processing various numbers of ports in network processor
US9105319B2 (en) 2003-03-13 2015-08-11 Marvell World Trade Ltd. Multiport memory architecture
US8688877B1 (en) 2003-03-13 2014-04-01 Marvell World Trade Ltd. Multiport memory architecture
US20070058649A1 (en) * 2004-06-16 2007-03-15 Nokia Corporation Packet queuing system and method
US20060136619A1 (en) * 2004-12-16 2006-06-22 Intel Corporation Data integrity processing and protection techniques
US8170041B1 (en) * 2005-09-14 2012-05-01 Sandia Corporation Message passing with parallel queue traversal
US20140185629A1 (en) * 2008-04-04 2014-07-03 Micron Technology, Inc. Queue processing method
US9070451B1 (en) 2008-04-11 2015-06-30 Marvell International Ltd. Modifying data stored in a multiple-write flash memory cell
US8924598B1 (en) 2008-05-06 2014-12-30 Marvell International Ltd. USB interface configurable for host or device mode
US8683085B1 (en) 2008-05-06 2014-03-25 Marvell International Ltd. USB interface configurable for host or device mode
US7490327B1 (en) 2008-05-15 2009-02-10 International Business Machines Corporation System and method for programmatic distributed transaction commit prioritization mechanism
US7509370B1 (en) * 2008-05-15 2009-03-24 International Business Machines Corporation Method for optimizing parallel processing of backend transactions by prioritizing related transactions
US8874833B1 (en) 2009-03-23 2014-10-28 Marvell International Ltd. Sequential writes to flash memory
US9070454B1 (en) 2009-04-21 2015-06-30 Marvell International Ltd. Flash memory
US8688922B1 (en) * 2010-03-11 2014-04-01 Marvell International Ltd Hardware-supported memory management
US8843723B1 (en) 2010-07-07 2014-09-23 Marvell International Ltd. Multi-dimension memory timing tuner
EP3591830A4 (en) * 2017-03-03 2020-03-04 Mitsubishi Electric Corporation Power conversion device and communication method
US10819217B2 (en) 2017-03-03 2020-10-27 Mitsubishi Electric Corporation Power conversion device and communication method

Also Published As

Publication number Publication date
US20020051460A1 (en) 2002-05-02
WO2001013590A1 (en) 2001-02-22
US6804239B1 (en) 2004-10-12
USRE42092E1 (en) 2011-02-01
AU6776200A (en) 2001-03-13
US7046686B1 (en) 2006-05-16
US6760337B1 (en) 2004-07-06
US7099328B2 (en) 2006-08-29
US6754223B1 (en) 2004-06-22
US6888830B1 (en) 2005-05-03

Similar Documents

Publication Title
US7099328B2 (en) Method for automatic resource reservation and communication that facilitates using multiple processing events for a single processing task
US6822959B2 (en) Enhancing performance by pre-fetching and caching data directly in a communication processor's register set
JP3801919B2 (en) A queuing system for processors in packet routing operations.
US7649901B2 (en) Method and apparatus for optimizing selection of available contexts for packet processing in multi-stream packet processing
US8935483B2 (en) Concurrent, coherent cache access for multiple threads in a multi-core, multi-thread network processor
US7831974B2 (en) Method and apparatus for serialized mutual exclusion
US7269179B2 (en) Control mechanisms for enqueue and dequeue operations in a pipelined network processor
US7337275B2 (en) Free list and ring data structure management
US6996639B2 (en) Configurably prefetching head-of-queue from ring buffers
US8505013B2 (en) Reducing data read latency in a network communications processor architecture
US6952824B1 (en) Multi-threaded sequenced receive for fast network port stream of packets
US8514874B2 (en) Thread synchronization in a multi-thread network communications processor architecture
US7113985B2 (en) Allocating singles and bursts from a freelist
US8910171B2 (en) Thread synchronization in a multi-thread network communications processor architecture
US20040034718A1 (en) Prefetching of receive queue descriptors
US20110225589A1 (en) Exception detection and thread rescheduling in a multi-core, multi-thread network processor
US20060064508A1 (en) Method and system to store and retrieve message packet data in a communications network
US8868889B2 (en) Instruction breakpoints in a multi-core, multi-thread network communications processor architecture
US7039054B2 (en) Method and apparatus for header splitting/splicing and automating recovery of transmit resources on a per-transmit granularity
US7474662B2 (en) Systems and methods for rate-limited weighted best effort scheduling
US9804959B2 (en) In-flight packet processing
WO2002011368A2 (en) Pre-fetching and caching data in a communication processor's register set
US9559988B2 (en) PPI allocation request and response for accessing a memory system

Legal Events

Date Code Title Description
AS Assignment

Owner name: CONEXANT SYSTEMS, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GALBI, DUANE E.;TOMPKINS, JOSEPH B.;BURNS, BRUCE G.;AND OTHERS;REEL/FRAME:012525/0093;SIGNING DATES FROM 20011025 TO 20011105

Owner name: CONEXANT SYSTEMS, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GALBI, DUANE E.;TOMPKINS, JOSEPH B.;BURNS, BRUCE G.;AND OTHERS;SIGNING DATES FROM 20011025 TO 20011105;REEL/FRAME:012525/0093

AS Assignment

Owner name: MINDSPEED TECHNOLOGIES, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CONEXANT SYSTEMS, INC.;REEL/FRAME:014568/0275

Effective date: 20030627

AS Assignment

Owner name: CONEXANT SYSTEMS, INC., CALIFORNIA

Free format text: SECURITY AGREEMENT;ASSIGNOR:MINDSPEED TECHNOLOGIES, INC.;REEL/FRAME:014546/0305

Effective date: 20030930

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: MINDSPEED TECHNOLOGIES, INC., CALIFORNIA

Free format text: RELEASE OF SECURITY INTEREST;ASSIGNOR:CONEXANT SYSTEMS, INC;REEL/FRAME:020186/0047

Effective date: 20041208

AS Assignment

Owner name: MINDSPEED TECHNOLOGIES, INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CONEXANT SYSTEMS, INC.;REEL/FRAME:020270/0058

Effective date: 20041208

AS Assignment

Owner name: MINDSPEED TECHNOLOGIES, INC., CALIFORNIA

Free format text: CORRECTIVE DOCUMENT;ASSIGNOR:CONEXANT SYSTEMS, INC.;REEL/FRAME:020532/0916

Effective date: 20030627

CC Certificate of correction
AS Assignment

Owner name: CHEDMIN COMMUNICATION LTD., LLC, DELAWARE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MINDSPEED TECHNOLOGIES, INC.;REEL/FRAME:021354/0430

Effective date: 20071214

FEPP Fee payment procedure

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: MINDSPEED TECHNOLOGIES, INC., CALIFORNIA

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE OMISSION OF SCHEDULE 1.01 (D) PREVIOUSLY RECORDED AT REEL: 020532 FRAME: 0916. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNOR:CONEXANT SYSTEMS, INC.;REEL/FRAME:036467/0628

Effective date: 20030627

AS Assignment

Owner name: F. POSZAT HU, L.L.C., DELAWARE

Free format text: MERGER;ASSIGNOR:CHEDMIN COMMUNICATION LTD., LLC;REEL/FRAME:036684/0393

Effective date: 20150812

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553)

Year of fee payment: 12

AS Assignment

Owner name: INTELLECTUAL VENTURES ASSETS 67 LLC, DELAWARE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:F. POSZAT HU, L.L.C.;REEL/FRAME:044824/0286

Effective date: 20171215

AS Assignment

Owner name: BICAMERAL LLC, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTELLECTUAL VENTURES ASSETS 67 LLC;REEL/FRAME:046688/0582

Effective date: 20180105