CA2271883C - Many dimensional congestion detection system and method - Google Patents


Info

Publication number
CA2271883C
CA2271883C · CA002271883A · CA2271883A
Authority
CA
Canada
Prior art keywords
congestion
cell
threshold
count
service class
Prior art date
Legal status
Expired - Fee Related
Application number
CA002271883A
Other languages
French (fr)
Other versions
CA2271883A1 (en)
Inventor
Brian D. Holden
Current Assignee
Microsemi Solutions US Inc
Original Assignee
PMC Sierra Maryland Inc
Priority date
Filing date
Publication date
Application filed by PMC Sierra Maryland Inc filed Critical PMC Sierra Maryland Inc
Publication of CA2271883A1 publication Critical patent/CA2271883A1/en
Application granted granted Critical
Publication of CA2271883C publication Critical patent/CA2271883C/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/54Store-and-forward switching systems 
    • H04L12/56Packet switching systems
    • H04L12/5601Transfer mode dependent, e.g. ATM
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/15Interconnection of switching modules
    • H04L49/1553Interconnection of ATM switching modules, e.g. ATM switching fabrics
    • H04L49/1576Crossbar or matrix
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/30Peripheral units, e.g. input or output ports
    • H04L49/3081ATM peripheral units, e.g. policing, insertion or extraction
    • H04L49/309Header conversion, routing tables or routing tags
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q11/00Selecting arrangements for multiplex systems
    • H04Q11/04Selecting arrangements for multiplex systems for time-division multiplexing
    • H04Q11/0428Integrated services digital network, i.e. systems for transmission of different types of digitised signals, e.g. speech, data, telecentral, television signals
    • H04Q11/0478Provisions for broadband connections
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04JMULTIPLEX COMMUNICATION
    • H04J2203/00Aspects of optical multiplex systems other than those covered by H04J14/05 and H04J14/07
    • H04J2203/0001Provisions for broadband connections in integrated services digital network using frames of the Optical Transport Network [OTN] or using synchronous transfer mode [STM], e.g. SONET, SDH
    • H04J2203/0064Admission Control
    • H04J2203/0067Resource management and allocation
    • H04J2203/0071Monitoring
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/54Store-and-forward switching systems 
    • H04L12/56Packet switching systems
    • H04L12/5601Transfer mode dependent, e.g. ATM
    • H04L2012/5629Admission control
    • H04L2012/5631Resource management and allocation
    • H04L2012/5636Monitoring or policing, e.g. compliance with allocated rate, corrective actions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/54Store-and-forward switching systems 
    • H04L12/56Packet switching systems
    • H04L12/5601Transfer mode dependent, e.g. ATM
    • H04L2012/5638Services, e.g. multimedia, GOS, QOS
    • H04L2012/5646Cell characteristics, e.g. loss, delay, jitter, sequence integrity
    • H04L2012/5651Priority, marking, classes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/54Store-and-forward switching systems 
    • H04L12/56Packet switching systems
    • H04L12/5601Transfer mode dependent, e.g. ATM
    • H04L2012/5678Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
    • H04L2012/5681Buffer or queue management
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/54Store-and-forward switching systems 
    • H04L12/56Packet switching systems
    • H04L12/5601Transfer mode dependent, e.g. ATM
    • H04L2012/5678Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
    • H04L2012/5681Buffer or queue management
    • H04L2012/5682Threshold; Watermark

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A congestion detection system and method for an advanced ATM network measure congestion in a number of dimensions. In one embodiment, cell traffic is measured on a per virtual channel basis, per service class queue basis and a per device basis within an input routing table, and additionally on a per virtual output and per service class basis within an output routing table. In a specific embodiment, upon each measurement, cell traffic is compared to thresholds corresponding to that measurement's congested and maximum limits, and a congestion or maximum signal is sent if that threshold is exceeded.

Description

MANY DIMENSIONAL CONGESTION DETECTION
SYSTEM AND METHOD
BACKGROUND OF THE INVENTION
This invention relates primarily to a class of digital communication systems known as asynchronous transfer mode (ATM) switching systems and generally to intercomputer communications architectures. The invention more specifically relates to systems and methods for detecting data congestion within an ATM network.
A building block in a switch-fabric architecture ATM switch system is a structure known as a switch element. A switch element provides packet signal routing from one of a plurality of input ports to one or more of a plurality of output ports by maintaining an array of crosspoints for connecting any input port to any output port. Switch elements may be aggregated in various patterns to provide arbitrarily large N by N possible interconnection of input ports to output ports.
Problems arise where the receiving port cannot assimilate information as fast as it is delivered or where the priority of the traffic varies. A "brute-force" technique for handling the queuing problem is to provide sufficient data storage at each possible crosspoint in the switch element. If the amount of data accumulated at the crosspoint exceeds the capacity of the data storage, data is discarded, thus forcing the destination port to request that data be resent.
An alternative solution is discussed in detail in co-assigned U.S. Patent 5,583,861, entitled ATM ARCHITECTURE AND SWITCHING ELEMENT. A difference in that ATM architecture over most prior systems is the use of a shared pool of memory for storing cells. A shared pool more effectively utilizes available memory.
Use of a shared pool of memory also creates an opportunity for more effective and sophisticated congestion management in an ATM architecture.
Prior ATM systems generally measured congestion only crudely, either measuring congestion on just a physical device basis or according to just one or a few parameters in a device, such as priority. Some prior art systems attempted to infer congestion by examining traffic flow in particular channels of the network or did not measure congestion at all, but instead made a determination to discard buffered cells when shared buffers were full.
How an ATM network manages congestion is an important characteristic that affects overall performance. In general, it is desirable for an ATM switch to rarely drop a cell due to congestion. In order to achieve this, the network must be able to signal to transmitters that they must halt or slow down transmission when congestion begins to occur.
Congestion occurs when two cells are directed to the same output at the same time. In general, one of the cells will be stored temporarily in a buffer (or queue) associated with one of the ATM devices and will be output during a subsequent cell cycle. Congestion can also occur because the stored cells at an intermediate resource in the ATM network, such as at a particular input routing table (IRT) or output routing table (ORT), exceed the physical memory (buffer) storage capacity of that device.
Prior Art References
U.S. Patent 5,280,470 (Buhrke), filed Feb. 3, 1993, priority Nov. 21, 1990, as a further example, describes congestion management in broadband ISDN cell networks where overload is detected in a network switch and a routine is then performed to determine which virtual channels to slow down (Fig. 4) in order to relieve congestion. Buhrke does not monitor congestion in a particular virtual channel, but instead infers congestion by monitoring transmission rates. Buhrke does not monitor congestion in several dimensions on an ongoing basis.
U.S. Patent No. 5,233,606 (Pashan), filed Aug. 2, 1991, discusses controlling shared-buffer-memory overflow in a multipriority environment; it does not measure congestion at all but instead waits until all buffer memories are used up and then determines from which memory a cell should be flushed. (See, for example, the abstract: "It initially allows output-port queues to completely consume the buffer memory. Thereafter, when an additional incoming cell is received for which there is no room in the buffer memory, the lengths of all of the queues of each output port are individually summed and compared to determine which port has the greatest number of buffered cells. A buffered ATM cell is discarded from the lowest priority non-empty queue of that port.") Pashan teaches away from actually measuring congestion in that, instead of measuring congestion, Pashan allows a buffer to fill up and then discards cells from that buffer.
U.S. Patent 5,313,454 (Bustini), filed April 1, 1992, for example, describes congestion control for cell networks where congestion is monitored by measuring queue lengths at network nodes. Because congestion is monitored in only one dimension at a particular buffer pool memory, the congestion detection threshold must be set very low as compared with the possible capacity of the buffers. For example, the patent states, "The PQth threshold is normally set at four kilobytes, a fraction of the 64 kilobyte queue capacity." 13:52-54.
U.S. Patent No. 5,367,520 (Cordell), filed Nov. 25, 1992, discusses a multiple plane switch fabric. A number of problems must be solved in order to efficiently and correctly transmit data from multiple sources to multiple destinations across such a multiple plane fabric. Fig. 5 of the '520 patent illustrates a mechanism for handling two of these problems: (1) spreading cells out over multiple planes in the switch fabric, and (2) maintaining cells in order through the switch fabric so that if a stream of cells 1-10 is queued to a particular destination A, cell A1 is always delivered before cell A2. The backpressure feedback is discussed at 16:31 et seq. The discussed scheme is limited to measuring congestion in the output buffers only. The reference states that "In practice, however, it is probably satisfactory to make all cells destined to that queue's entire Output Buffer Group of 16 queues wait, resulting in a brief 0.2% reduction in switch throughput." (16:52-55)
U.S. Patent 5,359,592 (Corbalis), filed Jun. 25, 1993, describes congestion control in an ATM device where cell counts are kept on a per cell queue basis only, and these counts are compared to up to three different thresholds. Though Corbalis sets many different threshold levels, levels are set in only one dimension.
Cooper, C.A. and Park, K.I. (Cooper), "Toward a Broadband Congestion Control Strategy," I.E.E.E. Network Magazine, May 1990, is a survey article that discusses possible congestion control strategies where ATM cells may be processed in accordance with multilevel priorities. Traffic characterization, admission control, and policing are discussed. A strategy of many-dimensional congestion measuring in a shared buffer is not discussed.
Oshima, K. et al. (Oshima), "A New Switch Architecture Based on STS-Type Shared Buffering and its LSI Implementation," XIV International Switching Symposium, Yokohama, Japan, Oct. 1992, discusses an ATM architecture with a partially shared buffer but does not discuss congestion measurement in shared buffers.
Badran, H.F. and Mouftah, H.T. (Badran I), "Head of Line Arbitration in ATM Switches with Input-Output Buffering and Backpressure Control," GLOBECOM '91, I.E.E.E., discuss a backpressure mechanism that uses two queue-specific criteria (queue length and input-queue age) and two cell-specific criteria (time of joining input queues and time of arrival to the head-of-line position) to resolve head-of-line contention.
Badran, H.F. and Mouftah, H.T. (Badran II), "Input-Output-Buffered ATM Switches with Delayed Backpressure Mechanisms," CCECE/CCGEI '93, I.E.E.E., discuss a delayed backpressure feedback mechanism that sets two levels (Level1 and Level2) for measuring congestion on one dimension, the output queue only (see Figure 1).
More sophisticated and finer congestion management on multiple dimensions would be desirable in a shared memory system because, when congestion is measured and monitored at finer levels of detail, the system can allow individual types of traffic to use more of the shared resources while still ensuring that sufficient resources will be available to provide guaranteed service to higher priority traffic. However, increased levels of congestion detection and management require increased circuit and processing overhead and could reduce processing speed.
Increasing demands for communications speed and capacity have created a need for higher performance ATM architectures as described herein.
This architecture differs from the architecture in U.S. Patent 5,583,861 in that the primary shared memory areas are associated with an input routing table (IRT) and an output routing table (ORT). Shared buffer memory associated with individual switch elements is generally used only when handling multicast traffic. The architecture is also different in that it provides for a number of virtual outputs (VOs) for each physical output from an ORT and virtual inputs (VIs) for each physical input to an IRT. In one specific embodiment, the ORT and IRT are combined into a single device referred to as a Quad Routing Table (QRT). The QRT may be used in connection with a switch fabric constructed of switch elements (SEs) as described in U.S. Patent 5,583,861 or may be used in connection with a switch fabric made up of updated quad switch elements (QSEs).
What is needed is a congestion management scheme for an ATM architecture having substantial shared resources that allows effective use of system resources while being able to guarantee service to certain traffic classes.
SUMMARY OF THE INVENTION
A congestion detection system and method for advanced ATM networks measures congestion according to a sufficient number of dimensions that shared resources may be heavily utilized by existing cell traffic while guaranteeing that sufficient capacity remains to service high priority traffic. Congestion management actions may be configured and taken according to congestion detected in any dimension or in aggregates of dimensions.

Prior art ATM systems that measure congestion do so in only one or a very few dimensions in order to reduce overhead and to attempt to speed processing of ATM cells. While measuring congestion in a number of dimensions and taking congestion actions in more than one dimension might be thought to be inefficient in terms of system resources, the invention achieves the surprising result of providing greater system efficiency and performance by measuring congestion according to multiple dimensions. The invention does this by allowing the network to provide guaranteed service to certain types of cell traffic (e.g. to certain virtual channels or service classes) while allowing larger congestion thresholds for the individual congestion points measured.
This may be understood by considering an example of an ATM network where 100 units of a resource are available for "high priority" traffic and where that traffic can be on any one of 10 virtual channels. Assume further that the maximum traffic that can be carried on any one virtual channel is 50 units of resource. Assume further that acceptable system performance can be guaranteed only if congestion management actions are taken whenever 80% of the available resources for either a given virtual channel or a given priority are being used.
In this situation, if congestion were measured only on a per virtual channel basis, a congestion threshold would have to be set at 8 units per virtual channel to ensure that congestion management actions were timely taken in the case where all ten possible virtual channels had 8 units each.
If congestion were measured only by priority, a congestion threshold of 40 would have to be set for high priority traffic to ensure that management actions were timely taken when 40 high priority cells were in one virtual channel. By measuring congestion independently in both dimensions, a threshold of 80 for high priority traffic and 40 for a particular VC can be maintained and timely congestion management can still be guaranteed. Therefore the invention, even though it may require additional overhead circuitry and processing to track congestion in various dimensions, has the result of enhancing overall network performance by allowing greater utilization of shared resources.
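The arithmetic in this example can be made concrete with a short sketch. The following C fragment is illustrative only; the names, the single-check structure, and the main() test harness are assumptions, not taken from the patent.

```c
#include <stdbool.h>
#include <stdio.h>

/* Thresholds from the example above (assumed names). */
#define VC_THRESHOLD_2D       40   /* per-VC limit when both dimensions are measured      */
#define PRIORITY_THRESHOLD_2D 80   /* per-priority limit when both dimensions are measured */

#define NUM_VCS 10

/* Two-dimensional check: congestion is flagged if EITHER dimension
 * reaches its (larger) threshold. */
static bool congested_2d(const int vc_cells[NUM_VCS])
{
    int high_priority_total = 0;
    for (int vc = 0; vc < NUM_VCS; vc++) {
        high_priority_total += vc_cells[vc];
        if (vc_cells[vc] >= VC_THRESHOLD_2D)
            return true;                  /* one VC alone reached 80% of its 50-unit max */
    }
    return high_priority_total >= PRIORITY_THRESHOLD_2D;  /* class total reached 80%     */
}

int main(void)
{
    /* 10 VCs each holding 8 cells: the class total is exactly 80, so the
     * per-priority threshold trips even though no single VC is near 40. */
    int even_load[NUM_VCS]   = {8, 8, 8, 8, 8, 8, 8, 8, 8, 8};
    /* One VC holding 40 cells: the per-VC threshold trips even though the
     * class total is only 40. */
    int skewed_load[NUM_VCS] = {40, 0, 0, 0, 0, 0, 0, 0, 0, 0};

    printf("even load congested:   %d\n", congested_2d(even_load));
    printf("skewed load congested: %d\n", congested_2d(skewed_load));
    return 0;
}
```

Either traffic pattern triggers timely congestion management, which is exactly what a single-dimension threshold of 8 (per VC) or 40 (per priority) would have had to be lowered to achieve.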

ATM congestion may be understood as measured, and congestion thresholds set, in three dimensions: (1) per virtual channel (or connection); (2) per priority; and (3) per device. A marked interrupt linked list is one mechanism used to alert a processor to the existence of various congestions in the '861 architecture.
Some of these congestion measurements are made in each switch element of the switch fabric and backpressure signals are sent through the fabric to previous transmitters to slow down transmission. Congestion measurements are also made in routing tables.
Embodiment In A Newer ATM Architecture
In a new ATM architecture, more detailed congestion management is desirable. One reason for this is that in the new architecture, a single ORT has a physical output which may carry traffic for up to 31 virtual outputs (VOs). It is desirable for congestion caused by one VO not to affect other VOs. Furthermore, because substantial buffering of cells takes place in the IRT and the ORT, it is desirable to measure congestion independently in both of those devices.
In a specific embodiment for an improved ATM switch architecture, congestion is measured in five dimensions within an ORT and in three dimensions within an IRT. In a specific embodiment, within an IRT, cell traffic congestion is measured (1) on a per virtual channel (VC) basis, (2) on a per service class (SC) basis, and (3) on a per device basis. In a specific embodiment, within an ORT, cell traffic congestion is measured (1) per VC, (2) per SC, (3) per device, (4) per virtual output (VO), and (5) per service class on a particular VO (referred to as per service class queue (SCQ)).
According to one embodiment, counters are maintained for the dimensions in an IRT and an ORT. Each time a cell enters and is queued for transmission through either an IRT or ORT, appropriate counters in those devices are incremented. Each time a cell leaves either an IRT or ORT, the appropriate counters in those devices are decremented.
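As a rough sketch (not the patent's implementation; all structure and field names here are assumptions), the enqueue/dequeue counter bookkeeping in an ORT might look like this:

```c
/* Hypothetical per-ORT congestion counters; names and sizes are illustrative only. */
typedef struct {
    int vc_count[16384];    /* one count per virtual channel (16K VCs)       */
    int sc_count[16];       /* one count per service class in the ORT        */
    int vo_count[31];       /* one count per virtual output                  */
    int scq_count[31][16];  /* per service class per VO (496 SCQ counts)     */
    int device_count;       /* cells currently buffered in the whole device  */
} ort_counters_t;

/* Called when a cell is queued in the ORT's buffer pool. */
static void ort_enqueue(ort_counters_t *c, int vc, int sc, int vo)
{
    c->vc_count[vc]++;
    c->sc_count[sc]++;
    c->vo_count[vo]++;
    c->scq_count[vo][sc]++;
    c->device_count++;
    /* each updated count would then be compared against its thresholds */
}

/* Called when a cell leaves the ORT toward the output line. */
static void ort_dequeue(ort_counters_t *c, int vc, int sc, int vo)
{
    c->vc_count[vc]--;
    c->sc_count[sc]--;
    c->vo_count[vo]--;
    c->scq_count[vo][sc]--;
    c->device_count--;
}
```

An IRT would carry only the VC, SC, and device counts, since it measures three of these dimensions rather than five.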

Counter values are compared to at least one threshold for that count value. If a threshold is equalled or exceeded, appropriate congestion management action is taken. In one embodiment, more than one preset threshold is set for a counter, with an initial threshold signalling that one type of congestion management action must be taken and a different threshold signalling another management action must be taken.
In a specific embodiment, numerous count values may be maintained in a particular dimension, such as a separate count value for each of up to 16,384 (16K) VCs defined in an IRT or ORT. In such a case, a current count value is stored in a memory along with configuration information for a particular VC, and when a cell is processed for that VC the corresponding count value is loaded into the counter and incremented or decremented as appropriate. In a further embodiment, separate threshold values may be established for each VC, and these are also stored in memory and loaded when a cell for a particular VC is processed.
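A minimal sketch of the per-VC state just described, assuming a software model; the field names and widths are illustrative, not taken from the patent:

```c
#include <stdint.h>

/* Hypothetical per-VC configuration/state entry stored in (external) memory.
 * One such entry exists for each of up to 16K VCs in an IRT or ORT. */
typedef struct {
    uint16_t queue_depth;    /* current number of cells buffered for this VC */
    uint16_t threshold_1;    /* initial (congestion) threshold for this VC   */
    uint16_t threshold_2;    /* maximum threshold for this VC                */
    uint8_t  action_bits;    /* configured congestion management actions     */
} vc_entry_t;

/* When a cell for a given VC is processed, its entry is fetched, the count
 * updated and compared, and the entry written back. */
static int process_vc_cell(vc_entry_t *table, int vc, int enqueue)
{
    vc_entry_t e = table[vc];              /* load from configuration memory */
    e.queue_depth += enqueue ? 1 : -1;     /* increment or decrement         */
    int congested = (e.queue_depth >= e.threshold_1);
    table[vc] = e;                         /* store back                     */
    return congested;
}
```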
In one embodiment, a congestion manager, located in the IRT and ORT, aggregates the different count values from different components in an ATM switch and takes overall congestion management actions as appropriate.
In a further embodiment, QSEs during one mode of non-multicast operation operate without any buffering of cells within the QSE, unlike the SE described in U.S. Patent 5,583,861. Therefore, QSEs do not measure congestion within their cell buffer memories. However, in an embodiment, QSEs are enabled to operate in a different mode where they do activate an internal cell buffer. In that case, congestion detecting also takes place separately within a QSE as discussed in U.S. Patent 5,583,861. This additional mode may be activated to allow a new QSE to operate in a backwards compatible fashion and also may be activated during multicast cell transmission.
In accordance with another aspect of the invention, there is provided a method for detecting congestion within an ATM device, the device having at least one buffer pool, a plurality of service classes and a plurality of virtual channels. The method includes counting the number of cells in a buffer in a virtual channel and determining a virtual channel count, comparing the virtual channel count to a virtual channel count threshold, and generating a virtual channel congestion signal if indicated by the virtual channel compare. The method further includes counting the number of cells in a buffer within a service class and determining a service class count, comparing the service class count to a preset service class threshold, and generating a service class congestion signal if indicated by the service class compare.
The method also includes determining a number of available cell buffers remaining in a buffer pool as a device count, comparing the device count to a preset device threshold, and generating a device congestion signal if indicated by the device count compare. The method further includes receiving any generated congestion signals and initiating congestion management actions if one or more of the congestion signals is received.
The method may further include counting the number of cells in a buffer for a virtual output and determining a virtual output count, comparing the virtual output count to a virtual output count threshold, and generating a virtual output congestion signal if indicated by the virtual output compare.
The method may further include counting the number of cells in a service class queue and determining a service class queue count, comparing the service class queue count to a service class queue threshold, and generating a service class queue congestion signal if the service class queue count exceeds the service class queue threshold.
For each threshold, two threshold values may be stored, the first being an initial threshold and the second being a maximum threshold. When an initial threshold value is exceeded, a congestion management action may be taken according to that threshold. When a maximum threshold value is exceeded, cells may be dropped unconditionally.
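A hedged sketch of the two-threshold rule just described; the enum and function names are assumptions, and the specific management action taken at the initial threshold is configuration dependent:

```c
typedef enum { NOT_CONGESTED, CONGESTED, OVER_MAXIMUM } congestion_state_t;

/* Compare one count against its initial and maximum thresholds. */
static congestion_state_t check_count(int count, int initial_thr, int maximum_thr)
{
    if (count >= maximum_thr)
        return OVER_MAXIMUM;   /* cells may be dropped unconditionally   */
    if (count >= initial_thr)
        return CONGESTED;      /* configured management action is taken  */
    return NOT_CONGESTED;
}
```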
A threshold value may be stored as a power of two.
A threshold value may be stored as a power of two and at least one bit of mantissa to add to the resolution of the stored threshold value.
Congestion management action can be one or more actions from the set: Cell Loss Priority Marked Cell Dropping, Early Packet Discard, Random Early Discard, Explicit Forward Congestion Indication Marking, Congestion Indication Marking, Explicit Rate Signaling.

A choice of congestion management actions may be selected on a per virtual channel basis.
For at least one stored threshold, a state bit may be kept to allow a hysteresis function to be implemented so that a congestion management action can be taken for an extended period until congestion is relieved.
In accordance with another aspect of the invention, there is provided an ATM device capable of detecting congestion in a plurality of dimensions. The ATM device includes a cell buffer, an input line for receiving ATM cells, an output line for outputting ATM cells, and a controller for receiving congestion indications.
The ATM device further includes a virtual channel counter and a virtual channel count threshold, a service class counter and a service class counter threshold, a device counter and a device counter threshold. The ATM device further includes a comparator for comparing a value in one of the counters to its corresponding threshold and generating a congestion indication, and also includes a configuration memory.
The device may further include a service class queue counter and a service class queue counter threshold.
The device may further include a service class group counter and a service class group threshold.
Count values and threshold values may be stored in the configuration memory and loaded into the counters for the processing of a cell.
The counters may be incorporated into independent arithmetic logic units, each able to simultaneously perform a count decrement or increment and threshold compare during processing of a cell.
The device may further include a virtual channel configuration entry containing a queue depth value for a virtual channel, at least one threshold value, and at least one congestion management action bit indicating appropriate congestion management action for a virtual channel.
In accordance with another aspect of the invention, there is provided a method for detecting congestion within an ATM device. The method includes receiving a cell into the device, incrementing at least two counts associated with the cell, comparing a first count against a first threshold, comparing a second count against a second threshold, and indicating the presence of congestion in response to any of the comparings. The method may further include incrementing a third count associated with the cell, and comparing the third count against a third threshold. The method may further include incrementing a fourth count associated with the cell, and comparing the fourth count against a fourth threshold. The method may further include incrementing a fifth count associated with the cell, and comparing the fifth count against a fifth threshold.
The invention will be further understood with reference to the drawings of specific embodiments described below.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a block diagram of one type of ATM network in which the present invention may be employed.
Fig. 2 is a representation of cell traffic flow in an IRT and an ORT showing the congestion measurements in each according to one specific embodiment of the invention.
Fig. 3 is a block diagram of an IRT and ORT showing multiple congestion counters/ALUs according to one embodiment of the invention.
Fig. 4 is a block diagram showing one counter/ALU as an example of a counter circuit according to the invention.
Fig. 5 is a state diagram showing congestion management hysteresis according to one embodiment of the invention.
Fig. 6 is a block diagram of an ATM switch fabric architecture implemented as disclosed in U.S. Patent No. 5,583,861.
Fig. 7 is a block diagram illustrating an example of a portion of a switch fabric architecture with ATM routing table circuits and switch element circuits configured as disclosed in U.S. Patent No. 5,583,861 in an ATM switch fabric architecture.
Fig. 8 is a block diagram of an ATM switch element circuit with external SRAM as disclosed in U.S. Patent No. 5,583,861.
Fig. 9 is a block diagram of an ATM switch element circuit with a cell buffer pool as disclosed in U.S. Patent No. 5,583,861.

Fig. 10 is a block diagram showing an address multiplexer coupled to a linked list controlling a buffer pointer as disclosed in U.S. Patent No. 5,583,861.
Fig. 11 is a block diagram of a back-pressure controller as disclosed in U.S. Patent No. 5,583,861.
Fig. 12A and Fig. 12B are block diagrams of configurations for switch element circuits with back-pressure control, as disclosed in U.S. Patent No. 5,583,861.
Fig. 13 is a block diagram of an aggregate bit controller, as disclosed in U.S. Patent No. 5,583,861.
Fig. 14 is a table illustrating service order for one service order period, as disclosed in U.S. Patent No. 5,583,861.
Fig. 15 is a block diagram showing the source cell duplication multicasting of the prior art.
Fig. 16 is a block diagram showing mid-cell duplication multicasting according to the prior art.
Fig. 17 is a block diagram showing mid-switch duplication multicasting according to the prior art.
Fig. 18 is a block diagram showing tree-based duplication multicasting according to the prior art.
Fig. 19 is a block diagram showing tree-based duplication multicasting according to a specific embodiment disclosed in U.S. Patent No. 5,583,861.
Fig. 20 is a tabular illustration of per-priority queuing with per VPC cell counts in a routing table as disclosed in U.S. Patent No. 5,583,861.
Fig. 21 is a tabular illustration of a per VC count of queued cells as disclosed in U.S. Patent No. 5,583,861.

DETAILED DESCRIPTION OF THE DRAWINGS
Fig. 1 shows a representation of an ATM network 10 as an example of an ATM architecture, this one having virtual outputs, in which the invention may be employed. ATM network 10 as illustrated contains input transmission line 110, input routing tables (IRTs) 120, an N x N switch matrix 150, output routing tables (ORTs) 170, and output transmission lines 180. Associated with IRT 120 is cell buffer memory 122 and configuration memory 124. Associated with ORT 170 is cell buffer memory 172 and configuration memory 174.
ATM cells, which are digitized packets corresponding to a voice or video signal or a data stream, are sent through an input transmission line 110 into a connecting IRT 120. The IRT 120 ascertains the cell's routing and determines an entry point into the switch matrix path, based on a particular algorithm, including a random-entry algorithm.
Cells are arranged in queues within a buffer memory 122 associated with IRT 120 and are then transmitted through the switch matrix 150. Upon exiting the switch matrix, a cell is sent to one (or possibly more than one in the case of multicast cells) of the N ORTs corresponding to the cell's destination address. Within the ORT 170, received cells are queued in a plurality of queues in cell buffer 172 and subsequently transmitted onto a connecting output transmission line 180. In this manner, an ATM network can route audio, video or data signals, each requiring different bandwidth and transmission speeds.
In order to manage cells flowing through an ATM network, cells are grouped into virtual channels (VCs). A VC can be thought of as a sequential stream of cells flowing from a source to a destination, generally representing a single connection such as a single telephone call. The channel is referred to as a virtual channel because there is not generally a dedicated path within the ATM switch from the source to the destination; the actual path may vary from transmission to transmission, or even during transmission, depending upon the type of traffic sent, whether congestion occurs, or other factors.
In the specific embodiment shown, each input transmission line can carry cells from a plurality of virtual inputs (VIs), which number 31 in a specific embodiment. The ATM switch can keep track of 16K VCs and a particular VC can occur on any VI. At its simplest, a VC is a stream of cells travelling from a particular VI to a particular VO and having a particular transmission priority.
In many ATM switches, cells or VCs are assigned a service class (SC) (sometimes referred to as a priority). The SC defines certain handling within the ATM switch, such as priority of throughput or the amount of available bandwidth that a particular VC is allowed to occupy.
In advanced ATM networks, cells may also be grouped according to VOs. Supporting VOs allows the cell to be routed to different physical receivers out of the same ORT output line, with data multiplexed to the different output receivers by a device outside of the ORT.
Fig. 2 illustrates in a general way the different dimensions for which congestion counts are maintained according to a specific embodiment of the invention.
Fig. 3 shows an example of an IRT 120 and an ORT 170, each containing congestion counters. In order to perform congestion detection, cell counters are placed within components in the ATM switch. IRT 120 has counters for VC count, SC count, and device count. ORT 170 has counters for VC count, SC count, device count, VO count, and SCQ count. These cell counters are incremented each time a cell of the appropriate type enters a counter's associated component, device, or buffer, and decremented each time an appropriate cell exits a counter's associated component, device, or buffer.
It will be understood that counters may be implemented in a variety of ways. One mechanism for implementing counters is for the counters to exist as dedicated or general-purpose memory locations that are loaded with a count value from a data structure each time a cell is processed and then are incremented or decremented as appropriate. A configuration data structure for each VC according to one embodiment is stored in memory 124, which, because of the size necessary to store data structures for 16K virtual channels, may be located in external memory. Data structures are also maintained for each SC, VO, and SCQ. In one embodiment, these data structures are maintained in internal memories 125 and 175, as shown, in order to be more quickly accessible.
According to one embodiment, each cell counter may be constructed as a separate arithmetic/logic unit (ALU) for independently incrementing or decrementing its count value and independently comparing that value to loaded thresholds so as to speed processing of a cell. Fig. 4 shows one example of the details of an ALU and is described in more detail below. It should be understood that depending on overall configuration, count values may remain in an ALU for an extended period and not need to be loaded from memory. Count values for a device, for example, might remain always loaded in their respective ALU. Alternatively, count values could be loaded into a register and incremented and compared by a central processing unit.
According to the invention, the finest detail of cell traffic measured is the number of cells pending in a particular VC. In a specific embodiment, both the IRT and ORT contain VC counters 123a and 173a and count values, stored in external memory, for each VC established in the routing tables. There can be up to 16K different VC count values maintained in the IRT and 16K maintained in the ORT. When a cell is handled in either device for a particular VC, the VC count value for that VC (i.e., the number of pending cells stored in the device for that VC) is loaded from memory into the VC counter and the value is incremented or decremented as appropriate, compared to threshold values, and then placed back into memory after any necessary congestion management actions are signalled. According to one embodiment, threshold values for each VC are also loaded from memory into the counter ALU to perform the compare. According to one embodiment, for each virtual channel there is an entry such as 124a specifying the queue depth, at least one threshold value (Th1, Th2), and additional congestion management action bits that can specify appropriate congestion management action for a virtual channel.
The next finest detail of congestion that is measured in both the IRT and the ORT in one specific embodiment is per service class (SC) in counters 123b and 173b. Each VC in the ATM network is assigned a service class. In one embodiment, there can be up to 64 different SC count values maintained in the IRT and 16 in the ORT, with each ORT SC mapped to four IRT SCs. When a cell is handled in either device for a particular VC, the SC count value for that VC is loaded from memory into the SC counter and the value is incremented or decremented as appropriate along with the VC count. The SC count value is then compared to threshold values for that SC.
The final level of detail of cell traffic measured in both the IRT and the ORT in one specific embodiment is per device (i.e., per a single IRT or ORT) in counters 123c and 173c. When a cell is handled in either device the device counter for that device is incremented or decremented as appropriate along with the other counts and is then compared to threshold values for that device.
In the ORT, congestion is measured at two additional levels of detail. One is per VO. In one specific embodiment, the number of VOs per physical output is 31. Each VC is directed to one VO (except for multicast VCs, which are directed to a finite and known number of VOs), so that the 16K available VCs are distributed among the 31 VOs. When a cell is handled in the ORT, the VO count value for that cell is loaded from memory into the VO counter 173d and the value is incremented or decremented as appropriate along with the other counts. The VO count value is then compared to threshold values for that VO.
In the ORT, congestion is additionally measured per service class per VO, referred to as a service class queue (SCQ). In one specific embodiment, the number of VOs per physical output is 31 and the number of possible SCs in the ORT is 16, so the number of SCQ counts is 496. When a cell is handled in the ORT, the SCQ count value for that cell is loaded from memory into the SCQ counter 173e and the value is incremented or decremented as appropriate along with the VO count and other counts. The SCQ count value is then compared to threshold values for that SCQ.
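If the 496 SCQ counts were kept in a single flat table, the (VO, SC) pair could index it as sketched below; the layout is an assumption for illustration, since the patent does not specify one:

```c
#define NUM_VOS 31   /* virtual outputs per physical output (per the text) */
#define NUM_SCS 16   /* service classes in the ORT (per the text)          */

/* Hypothetical flat index into a table of 31 * 16 = 496 SCQ count entries. */
static inline int scq_index(int vo, int sc)
{
    return vo * NUM_SCS + sc;   /* 0 .. 495 */
}
```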
The embodiment shown in Fig. 3 provides a further advantage in that it is possible to include congestion counts and management in even more dimensions without impacting RT performance because of the parallel nature of the ALUs. In some applications, for example, it may be desirable to include congestion counts and thresholds for the lowest four SCs (which might be referred to as service-class-group-lowest-four (SCGL4)), for the lowest eight SCs (SCGL8), and for the lowest twelve SCs (SCGL12). This can be accomplished by providing additional ALUs, such as ALUs 173a-e. Some of these ALUs might include a dedicated register, such as ALU 173c, for counts that have a single value that applies to the entire ORT, and others might include a register that is loaded from a configuration memory. Because these ALUs operate in parallel, additional dimensions of congestion counts may be kept without impacting the speed of cell processing.
It should also be understood that in an embodiment, for some or all of ALUs 123a-c or 173a-e shown, there will actually be two ALU circuits, one for performing calculations as a cell is entering a queue or buffer (enqueue) and the second for performing calculations as a cell is leaving a queue or buffer (dequeue).
Fig. 4 shows one example of the details of a counter/ALU for performing operations associated with monitoring congestion containing a count register 200 for holding a count value, registers TH1 and TH2 for holding threshold values, hysteresis bit (HB) registers HB1 and HB2, described below, for holding an HB value, and an ALU/controller 202 for performing the increment/decrement on the count register, comparisons to TH1 and TH2, checking of the HB and signalling to a congestion controller.
It will be understood that in a common embodiment, each of these values will be loaded from a memory data structure associated with a particular VC, VO, SC, or SCQ as a cell is processed and the computations performed.
According to one embodiment, the value in each cell counter is compared to two thresholds applicable to that count. If the count exceeds the preset congestion threshold, a congestion signal is transmitted to a congestion monitor. If the count exceeds the maximum threshold, a maximum limit signal is transmitted to a congestion monitor.
Congestion controllers 128 or 178, shown in Fig. 3, may receive congestion signals resulting from any of the aforementioned measurements in their respective IRT or ORT. In one embodiment, a congestion monitor may function as an OR gate which initiates a congestion management protocol when any one or more of the measurements results in a threshold being exceeded. A further embodiment is a congestion monitor which indicates which of the types of traffic is being congested within the IRT or ORT so that the congestion management protocol can identify where the congestion occurs and how it might be circumvented.
Congestion Management Actions
When congestion is detected, congestion management actions are taken in accordance with various embodiments of the invention. These actions are, for the most part, as is known in the art and defined by various ATM standards. In one embodiment of the invention, the choice of congestion management actions is selected on a per VC basis and is stored along with a VC configuration data structure in memories 174 and 124. Congestion management may also be selected on a per SC basis or according to other configured criteria. Possible congestion management actions include the following (a sketch of per-VC action dispatch follows the list of actions below):
Cell Loss Priority (CLP) Marked Cell Dropping
When this action is configured, a bit in the header of a cell indicates whether that cell is low priority or high priority (independent of SC). Low priority cells are dropped when congestion levels are reached.
Early Packet Discard
When this action is configured, frames contained within a sequence of cells are found and dropped when congestion is detected. This may be done in accordance with various methods known in the prior art. Frames are detected by monitoring the header of the cells.
Random Early Discard
When this action is configured, the probability of frame discard is determined in relation to the depth of the queues.
Explicit Forward Congestion Indication (EFCI) Marking
When this action is configured, the EFCI codepoint in the cell header is marked; this will cause the cell rate for this channel to be lowered through the action of the ATM Forum ABR protocol.
Congestion Indication (CI) Marking
When this action is configured, the CI bit in the reverse direction Resource Management (RM) cells is set. This will cause the cell rate for this channel to be lowered through the action of the ATM Forum ABR protocol.
Explicit Rate Signaling
When this action is configured, the ATM switch will indicate to the source the rate to send by setting the Explicit Rate (ER) value in backwards RM cells.
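The per-VC selection of actions could be dispatched as sketched here. The bit assignments and handler bodies are assumptions for illustration; the patent does not define an encoding, and real handlers would follow the relevant ATM Forum specifications.

```c
/* Hypothetical bit assignments for the per-VC action field. */
#define ACTION_CLP_DROP   (1u << 0)  /* CLP-marked cell dropping              */
#define ACTION_EPD        (1u << 1)  /* early packet discard                  */
#define ACTION_RED        (1u << 2)  /* random early discard                  */
#define ACTION_EFCI_MARK  (1u << 3)  /* EFCI marking                          */
#define ACTION_CI_MARK    (1u << 4)  /* CI marking in backward RM cells       */
#define ACTION_ER_SIGNAL  (1u << 5)  /* explicit rate signaling               */

/* Dispatch the actions configured for a congested VC. */
static void take_congestion_actions(unsigned action_bits)
{
    if (action_bits & ACTION_CLP_DROP)  { /* drop cells with CLP = 1        */ }
    if (action_bits & ACTION_EPD)       { /* drop remaining cells of frame  */ }
    if (action_bits & ACTION_RED)       { /* probabilistic frame discard    */ }
    if (action_bits & ACTION_EFCI_MARK) { /* set EFCI codepoint in header   */ }
    if (action_bits & ACTION_CI_MARK)   { /* set CI bit in backward RM cell */ }
    if (action_bits & ACTION_ER_SIGNAL) { /* write ER value in backward RM  */ }
}
```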
Expression of Threshold by an Exponent
In one embodiment, threshold values are expressed and stored as the exponent of a power of two. This allows an efficient and compact implementation and storage of threshold values. A variation on this embodiment adds one or more bits of mantissa stored in the threshold to add to the resolution.
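One plausible reading of this encoding is sketched below; the format (an exponent plus a single mantissa bit) is an assumption chosen only to illustrate the idea, not the patent's actual field layout.

```c
#include <stdint.h>

/* Decode a compact threshold of the form 1.m * 2^e (one mantissa bit).
 * For exponent e > 0 this is (2 + m) << (e - 1); e.g. e=3, m=0 gives 8
 * and e=3, m=1 gives 12.  The exact format is an assumption. */
static uint32_t decode_threshold(uint8_t exponent, uint8_t mantissa_bit)
{
    if (exponent == 0)
        return mantissa_bit ? 1u : 0u;   /* degenerate small values */
    return (uint32_t)(2u + (mantissa_bit ? 1u : 0u)) << (exponent - 1);
}
```

With, say, a 4-bit exponent and one mantissa bit, a 5-bit field covers thresholds from 1 up to 49,152 in coarse steps, which is the kind of compactness this paragraph is pointing at.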
Hysteresis on the Congested State
In one embodiment, for each threshold, a state bit is kept which allows a hysteresis function to be implemented as shown in Fig. 5. This allows a congestion management action to be taken for an extended period until the congestion is substantially relieved. In a specific embodiment, congestion management actions appropriate to a particular threshold are taken until the cell count value falls below 1/2 of the threshold value.
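A small sketch of this hysteresis behaviour, assuming the 1/2-of-threshold release point from the specific embodiment; the function and variable names are illustrative:

```c
#include <stdbool.h>

/* Hysteresis on the congested state: the action is asserted when the count
 * reaches the threshold and is only released when the count falls below
 * half of the threshold. */
static bool update_congestion_state(bool currently_congested, int count, int threshold)
{
    if (!currently_congested && count >= threshold)
        return true;               /* enter congested state, start action  */
    if (currently_congested && count < threshold / 2)
        return false;              /* relieve only after falling below 1/2 */
    return currently_congested;    /* otherwise hold the previous state    */
}
```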
Although the foregoing embodiments of the invention have been described as being employed in an ATM network such as that shown in Figure 1, other networks or network architectures may alternatively be substituted. For example, the remainder of this section is a description of an exemplary ATM switch fabric architecture, as disclosed in United States Patent No. 5,583,861 issued to Holden on December 10, 1996.
Switch Fabric
Fig. 6 is a block diagram of an ATM switch fabric architecture 1020. Shown in the figure is a 4x5 array of switch element circuit blocks (SE) 1040.
Each switch element 1040 accepts eight 4-bit-wide input lines and has eight 4-bit-wide output lines. The switch fabric 1020 thus has a total of thirty-two 4-bit-wide input lines shown at the left side of the switch fabric 1020. In an operating switch, each of these thirty-two four-bit-wide input lines is connected from a separate routing table circuit, and each such routing table circuit is connected to a workstation or other digital device. The switch fabric has thirty-two four-bit-wide output lines shown at the right side of the figure. Each of the output lines is connected to individual further routing table circuits, each of which is connected to a workstation. Thus the switch fabric shown in Fig. 6 may provide a physical connection to up to thirty-two workstations and may connect data from any one of those thirty-two workstations to any other one of those or other thirty-two workstations. It will be seen that the interconnections among the switch elements 1040 are such that data entering any one of the switch fabric input lines may be routed to any one of the switch fabric output lines after passing through a total of four stages of switch elements. The switch fabric architecture 1020 as shown in Fig. 6 is known as a Reversed Delta Network architecture. The switch element circuits may be used in any number of other known network wiring configurations such as the Clos network or the Delta network, and the switch array may be expanded to provide any number of input lines and any number of output lines. (Not shown in Fig. 6 is a configuration bus connected to each one of the switch elements. The configuration bus is used by a configuration processor of the switch system to set up a number of switch parameters in a memory in the switch element.) Fig. 7 shows a portion of a switch fabric made up of four switch elements 1040. One of the switch elements is shown with interconnections through eight routing tables 1030, to a number of workstations 1050, and to a server computer 1052. As shown in the figure, in a typical application, each input to a switch fabric is connected to a routing table circuit 1030. A routing table circuit 1030 is typically connected to some type of digital workstation 1050, which may transmit and receive voice, video, and digital data via the switch fabric.
Also shown in Fig. 7 is an aggregate input connection to a switch element in accordance with one specific embodiment. In an aggregate connection, four of the input lines of the switch fabric are grouped together and act as one input to receive and transmit data to a high-speed data device such as server computer 1052.
With the aggregate input feature, the same switch element and switch fabric can handle two speeds of packet data, the first speed being the speed of one input line and the second speed being four times faster or the speed of the aggregated input lines.
Routing Table
Fig. 8 is a block diagram of a routing table circuit 1042. The routing table circuit 1042 is a combination storage and control device that is used with external memory, e.g., SRAM 1090, and includes a receive queue controller 1080 which sends data to the switch fabric and receives a back-pressure signal from the switch fabric, and a transmission buffer controller 1082 which receives data from the switch fabric after that data has been processed by the multicast header translation circuit 1084 and asserts back pressure to the switch fabric. The transmission buffer controller 1082 also includes a small buffer memory 1086 for storing cells received from the switch fabric. A further controller, called a connection table controller 1088, is for reading header information from the workstation interface and is operative to use that header information to add an appropriate switch tag to the cells before they are transmitted to the switch fabric. Controller 1088 stores information about switch tags and buffers data in external SRAM 1090. Further included are an interrupt processor 1092 and processor interface 1094, which are for sending control signals to the workstation. Optionally included is an OAM/BECN cell transmit circuit 1096 for inserting control cells into the outgoing data stream.
The routing table circuit 1042 in each instance operates by receiving one 8-bit-wide segment of data via connection from a workstation as input and provides one 8-bit-wide workstation output. The routing table includes one 4-bit output to the switch fabric and receives one 4-bit input from the switch fabric.
Switch Element
Fig. 9 is a block diagram of the structure of switch element circuit 1040. The switch element circuit 1040 includes a very small cell buffer pool memory 1100 for storing and queuing cells being transmitted through the switch element, input I/O crosspoint block 1110 for connecting any input line to any cell memory in the cell buffer pool, output I/O crosspoint block 1120 for connecting any output line to any cell memory in the cell buffer pool, input bus controller 1130 for controlling data flow on the input bus to the cell memories, output bus controller 1140 for controlling data flow from the cell memories to the output lines, and multipriority buffer pool controller (MPBPC) 1150 for controlling assignment of cell memories to connections defined by the crosspoint blocks. The switch element circuit 1040 is connected to a configuration bus 1041 which supplies configuration data to controller 1150.
The switch element 1040 has eight input interfaces, labeled I0 through I7, and eight output interfaces, labeled O0 through O7. Each of the eight inputs and eight outputs is a four-bit or nibble-wide interface capable of operating at, for example, up to 50 MHz, i.e., sufficient to support digital communications at the current ATM OC-3 standard. Each of the inputs receives cells from another switch element in the switch fabric or from a routing table, as previously outlined. ATM cells of data are transferred as one hundred and eighteen four-bit nibbles. This allows the standard fifty-three-byte ATM cells to be transferred along with six overhead bytes. A cell start signal goes high every one hundred and eighteen clock cycles to indicate the start of a cell.
Cell buffer pool 1100 is a pool of random access memory. The pool contains thirty-two individual cell memories, each capable of storing an entire cell of one hundred and eighteen nibbles. The thirty-two memories can be connected to any one of the eight inputs by input crosspoint block 1110 which is controlled by input bus controller 1130. Crosspoint block 1110 contains a plurality of multiplexers 1112 for connecting input buses to any of the cell memories. Multiplexers 1112 are controlled by signals from the input bus controller 1130 that are transmitted on six-bit wide connection control bus lines 1132.
Any of the cell memories may be connected to any of the output lines via output crosspoint block 1120. Output crosspoint block 1120 is controlled by the output bus controller 1140 via output connection control bus lines 1142.
MPBPC 1150 contains a link list RAM 1152 for storing queue assignment information about the cell buffer pool memories, a service order table 1154 for controlling the service order of the proportional bandwidth queues, a memory for multicast group bits 1156 for storing information about multicast cell transmission, and a back-pressure control circuit 1158 for asserting multipriority back-pressure on the eight back-pressure lines of the switch element.
Linked Lists
Referring to Fig. 9, MPBPC 1150 uses its linked list RAM 1152 to maintain five First-In/First-Out (FIFO) queues, by means of lists of pointers to the next entry in the cell memory, for each of the eight output lines, for a total of 40 possible virtual queues. Fig. 10 is a representation of the linked-list RAM 1152 and associated head register set 1153 and tail register set 1155 for the forty queues defined for the 32 cell memories accounted for in the list RAM 1152. For each of the forty queues, a buffer pointer is constructed from the head address and tail address of the queue stored in the head register set 1153 and the tail register set 1155. A head pointer and a tail pointer are kept for each one of the forty queues. The forty queues share a linked list of up to thirty-two entries. Each entry can identify one of the thirty-two cell memories in cell buffer pool 1100. The linked list thereby specifies which cell memories are part of each queue in FIFO order. Cells are enqueued onto the tail of the proper queue and dequeued from the head of the proper queue in accordance with a queue service procedure which generates the dequeue pointer value to the head register set 1153 and the enqueue pointer to the tail register set 1155. A mux 1157 switches between register sets, depending on whether the procedure calls for enqueuing or dequeuing a cell. An input buffer pointer specifies where the input cell is currently stored, and the output pointer designates where the cell is to be directed.
The queues for one output line are assigned five different priorities.
Three of the queues are proportional bandwidth queues of equal priority but having an assigned bandwidth of 5/8, 2/8 (1/4), or 1/8. Of the remaining two queues, one is designated a high-priority queue which may be used for very time dependent data such as voice, and the other a multicast queue which is used for data being sent from one transmitting workstation to more than one receiving workstation which might be the case in video conferences.
It will be seen that while there are forty possible virtual queues definable by MPBPC 1150, only up to thirty-two queues may be active at any one time because there are only thirty-two available cell buffer pool memories. In practice fewer than thirty-two queues may be active at any one time, because it is likely that there will always be some queues which are using more than one cell memory.
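The shared linked-list structure described above can be sketched in software terms as follows; this is an analogy of the hardware with assumed names, not the patent's register-level design.

```c
#define NUM_CELL_MEMORIES 32   /* shared cell buffer pool entries */
#define NUM_QUEUES        40   /* 5 priorities x 8 outputs        */
#define NIL               -1

static int next_ptr[NUM_CELL_MEMORIES];  /* linked-list RAM: next cell in the same queue */
static int head[NUM_QUEUES];             /* head register set                            */
static int tail[NUM_QUEUES];             /* tail register set                            */

static void init_queues(void)
{
    for (int q = 0; q < NUM_QUEUES; q++)
        head[q] = tail[q] = NIL;         /* all forty queues start empty */
}

/* Append cell memory `cell` to the tail of queue `q`. */
static void enqueue(int q, int cell)
{
    next_ptr[cell] = NIL;
    if (head[q] == NIL)
        head[q] = cell;                  /* queue was empty */
    else
        next_ptr[tail[q]] = cell;
    tail[q] = cell;
}

/* Remove and return the cell memory at the head of queue `q`, or NIL. */
static int dequeue(int q)
{
    int cell = head[q];
    if (cell != NIL) {
        head[q] = next_ptr[cell];
        if (head[q] == NIL)
            tail[q] = NIL;               /* queue became empty */
    }
    return cell;
}
```

Because all forty queues draw entries from the same pool of thirty-two cell memories, the sum of all queue lengths can never exceed thirty-two, which is the point made in the preceding paragraph.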
Multipriority buffer pool controller (MPBPC) 1150 controls the overall function of the switch element 1040 as follows. During each cell cycle, cells having a length of 118 nibbles may be received on any or all of the eight input lines. Prior to the start of a cell cycle, the controller 1150 has specified which input line is connected to which cell memory via input crosspoint block 1110 by setting bits in input controller 1130. The first twelve of the 118 nibbles from each input line are read by input bus controller 1130 and transmitted to the multipriority buffer pool controller 1150 while the ATM cell is being stored in its designated cell memory. From these tags, MPBPC 1150 determines the priority and destination of the cell that has just been stored in each of the eight cell memories connected to one of the input interfaces. The MPBPC 1150 then adds the cell memories to their appropriate queues by updating its linked lists. The MPBPC 1150 then determines which of the output interfaces to which cells are directed can receive the cells during the next clock cycle.
An output interface may be unavailable to receive all of the data which is directed towards it during a cycle when more than one input line is directing cells to a single output interface, or when the output interface has asserted back-pressure to the MPBPC 1150. The MPBPC 1150 handles the problem of output interfaces being unavailable to receive cells by establishing queues in the cell buffer pool 1100 for the connections whose output lines are temporarily unavailable. Cells may be stored in these queues in first-in/first-out (FIFO) fashion for some number of cell cycles until the output interfaces are available for outputting the cells.
Once the MPBPC 1150 has made determinations regarding which cells can be transmitted to their destination output interfaces during the next clock cycle and which cells will be stored in queues in the cell buffer pool 1100, it directs the output interfaces to receive data from cell memories in the cell buffer pool by sending control signals to output bus controller 1140. It also directs the input interfaces to available cell memories by sending input control signals to input bus controller 1130.
Back-Pressure Control
One problem that may arise in a switch element 1040 as packets are being routed through cell memories from input lines to output lines is the unavailability of cell memories for queuing prior to a clock cycle when new cells may be received on the input interfaces. If a cell is received at an input interface to the switch element when no cell memory is available to receive the cell, the cell must be dropped and the data resent.
In accordance with one aspect, each switching element 1040 avoids dropping cells by issuing back-pressure signals to each connection to each of its input interfaces on a per-input, per-priority basis to halt the flow of cells having a given priority to a given input. Back-pressure is asserted for a given input and given priority whenever the number of currently enqueued cells of the given priority supplied by the given input exceeds a predetermined threshold. Back-pressure for a given priority is also asserted for all inputs whenever the total number of available cell memories falls below a threshold associated with that priority.
By employing a shared buffer pool, the switching element virtually eliminates the deletion of cells due to exhaustion of available memory. In many ATM
applications, even infrequent cell drops are harmful in that the loss of one cell necessitates the retransmission of many cells, substantially reducing network efficiency. Furthermore, in the event of excessive cell traffic through a switching fabric, it is preferable that cell drops occur at a routing table rather than at a switching element, since routing tables employ sophisticated congestion management strategies unavailable at a switching element when dropping cells. (One such sophisticated congestion management strategy is the standard ATM Adaptation Layer 5 (AAL5) early frame discard technique, AAL5 being a technique for segmenting frames into cells.)
Fig. 11 is a simplified representation of the elements within back-pressure controller 1150 used to implement the back-pressure capability. Back-pressure controller 1150 includes a time domain multiplexer 1402, a state machine 1404, a time domain demultiplexer 1406, a queue service controller 1408, an index memory 1410, a variable delay circuit 1412, and a variable delay register 1414.
Back-pressure signals are generated by state machine 1404 based on criteria as discussed below. Back-pressure signals from other switching elements or a routing table are received by a queue service controller 1408 which selects cells for output.
Back-pressure is asserted for a given input and given priority whenever the number of currently-enqueued cells of the given priority supplied by the given input exceeds a predetermined threshold. A problem is posed in that queues, with the exception of the multicast queue, are organized by output rather than input.
To maintain a count of enqueued cells for each input and priority, index memory 1410 is maintained within the back-pressure controller 1158 with an entry for each cell memory location which identifies the source of the cell stored there. When a new cell is enqueued, index memory 1410 is updated and a counter, internal to state machine 1404 and associated with the source and priority of the cell, is incremented.
When a cell is dequeued for output, the index entry for that cell is read to identify the source for that cell and the appropriate counter is then decremented. To determine the necessity of back-pressure for a given input and priority, the counter for that input and priority is compared to a predetermined threshold.
The predetermined threshold is the same for each input and priority.
Thus, back-pressure is allocated among inputs so that no one input blocks incoming traffic from other inputs by occupying a disproportionate share of the cell memory locations. When inputs are aggregated, a counter is maintained for each priority for the aggregated inputs as a group rather than for each input.
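A rough C sketch of this per-input, per-priority counting mechanism follows. The counter widths, the threshold value, and the helper names are illustrative assumptions; only the structure (index memory written on enqueue, read and decremented on dequeue, compared against one common threshold) follows the description above.

#include <stdbool.h>

#define NUM_INPUTS      8
#define NUM_PRIORITIES  5
#define NUM_CELLS       32
#define BP_THRESHOLD    8   /* assumed value of the predetermined threshold */

struct index_entry { unsigned char input, priority; };

static struct index_entry index_mem[NUM_CELLS];      /* index memory       */
static int cell_count[NUM_INPUTS][NUM_PRIORITIES];   /* state machine counters */

/* Called when a cell from 'input' at 'priority' is stored in cell memory 'cell'. */
void on_enqueue(int cell, int input, int priority) {
    index_mem[cell].input    = (unsigned char)input;
    index_mem[cell].priority = (unsigned char)priority;
    cell_count[input][priority]++;
}

/* Called when the cell in cell memory 'cell' is dequeued for output. */
void on_dequeue(int cell) {
    struct index_entry e = index_mem[cell];
    cell_count[e.input][e.priority]--;
}

/* Back-pressure decision for one input/priority pair. */
bool backpressure_needed(int input, int priority) {
    return cell_count[input][priority] > BP_THRESHOLD;
}

When inputs are aggregated, the first index of cell_count would refer to the aggregated group rather than the individual input, as described above.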
To assure that availability of cell memories is also properly allocated among priorities, a count of empty cell memories is maintained within state machine 1404 and compared to thresholds stored for each priority. When the number of empty cell memories falls below the threshold associated with a given priority, back-pressure is asserted for that priority for every input. The higher priorities have lower thresholds set so that high priority traffic is impeded last as the count of available cell memories decreases. In addition, the thresholds are normally set so that high-priority traffic has strict priority over lower priority traffic.
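The pool-wide check can be sketched as follows; the specific threshold values are illustrative assumptions chosen so that higher priorities (lower-numbered here) are impeded last, as the description requires.

#include <stdbool.h>

#define NUM_PRIORITIES 5

/* Assumed per-priority thresholds: priority 0 (highest) keeps flowing
 * until only 2 cell memories remain empty. */
static const int empty_threshold[NUM_PRIORITIES] = { 2, 4, 8, 8, 8 };

/* Assert back-pressure for 'priority' on every input when the count of
 * empty cell memories falls below that priority's threshold. */
bool pool_backpressure(int empty_cell_memories, int priority) {
    return empty_cell_memories < empty_threshold[priority];
}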
In one embodiment, back-pressure signals for the various priorities are time-domain multiplexed together by time-domain multiplexer 1402 so that each input is provided with a single back-pressure signal. Received back-pressure signals are demultiplexed by time domain demultiplexer 1406. Each priority then corresponds to a different time slot within the time-domain multiplexed back-pressure signal.
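A minimal sketch of the time-domain multiplexing, assuming one time slot per priority within the back-pressure signal (the slot assignment is an assumption made for the sketch):

#define NUM_PRIORITIES 5

/* Transmit side: bit driven on the single back-pressure line during 'slot'. */
int bp_mux(const int bp[NUM_PRIORITIES], int slot) {
    return bp[slot % NUM_PRIORITIES];
}

/* Receive side: latch the incoming bit into the priority owning this slot. */
void bp_demux(int bp_line, int slot, int bp_out[NUM_PRIORITIES]) {
    bp_out[slot % NUM_PRIORITIES] = bp_line;
}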
A switching element and an associated input device (switching element or routing table) may or may not be on the same printed circuit board. A
problem arises in that if the devices are on the same printed circuit board, no delay is required on the interconnecting data or back-pressure lines, while if the devices are on separate printed circuit boards, the interconnecting lines may be retimed with D flip-flops, which introduces delay.
Fig. 12A and Fig. 12B are a simplified representation of these two situations.
A set of retiming buffers 1051-1054 compensates for inter-card delays. To compensate for the resulting delays, a switching element 1040 according to one embodiment is provided with internal means for establishing a variable delay in the back-pressure line. Fig. 11 shows variable delay circuit 1412 inserted in one of the back-pressure lines.
The variable delay is selected by writing to variable delay register 1414 within the switching element.
Aggregate Bits
Referring to Fig. 13, a switch element 1040 includes two aggregate input bits, agg_in(0) 1151 and agg_in(1) 1153, and two aggregate output bits, agg_out(0) 1155 and agg_out(1) 1157, which may be set by the configuration controller (not shown) to allow for aggregating either the group of inputs I0 to I3, the group of inputs I4 to I7, the group of outputs O0 to O3, or the group of outputs O4 to O7. Referring back to Fig. 7, some types of digital devices, particularly other switching systems, such as server computer 1052, may need to transmit data through the switch fabric at a higher rate than may be provided by one switch element input, e.g., at the 622 Mbps rate provided by a conventional OC-12 ATM interface.
Fig. 13 shows the groupings of inputs and outputs of a switch element 1040 when the signals agg_in(0), agg_in(1), agg_out(0), and agg_out(1) are set. Switch element 1040 has two bits for input lines and two bits for output lines that signal the switch element when a set of four of its input lines is being configured as a single input which receives four cells at once and retains FIFO order. Input bits agg_in(0) and agg_in(1) are set in the multipriority buffer pool controller 1150 via the configuration bus.
When bit agg_in(0) is set true, inputs 0 through 3 are treated as if they are a single stream of cells. This makes the input capable of handling data at an effective rate four times higher than possible on one input line, which in one specific embodiment is 622 Mbps. With this feature, a switch element 1040 can support a mixture of data rates on its inputs.
A key problem that must be addressed when aggregating input lines is maintaining FIFO order between the cells that arrive simultaneously. When the inputs are not aggregated, the cells from each input are enqueued separately.
When the inputs are aggregated, then the cells are enqueued as if they were from a single input with the cell received on input 0 placed in the single aggregated FIFO
queue first, the cell received on input 1 placed in the single FIFO queue second, and so on.
In the absence of the aggregate bit, FIFO order could be violated, as the MPBPC uses a round-robin procedure to enqueue multicast cells to improve fairness.
This procedure intentionally chooses cells from the inputs in differing orders from cell time to cell time.
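A sketch of the enqueue ordering is given below. The enqueue_cell() stub stands in for the real enqueue into the destination queue; for brevity the round-robin order is shown for all non-aggregated cells, although the description above applies it to the multicast enqueuing procedure.

#include <stdbool.h>
#include <stdio.h>

/* Stub standing in for the real enqueue into the destination queue. */
static void enqueue_cell(int input, int cell) {
    printf("enqueue cell %d from input %d\n", cell, input);
}

/* arrived[i] is the cell memory that received input i's cell this cell
 * cycle, or -1 if input i received nothing. */
void enqueue_cycle(const int arrived[8], bool agg_in0, int rr_start) {
    if (agg_in0) {
        /* Inputs 0..3 form one aggregated stream: fixed order 0,1,2,3
         * preserves FIFO order among simultaneously arriving cells. */
        for (int i = 0; i < 4; i++)
            if (arrived[i] != -1) enqueue_cell(i, arrived[i]);
        for (int k = 0; k < 4; k++) {        /* inputs 4..7 stay round-robin */
            int i = 4 + (rr_start + k) % 4;
            if (arrived[i] != -1) enqueue_cell(i, arrived[i]);
        }
    } else {
        for (int k = 0; k < 8; k++) {        /* all inputs served round-robin */
            int i = (rr_start + k) % 8;
            if (arrived[i] != -1) enqueue_cell(i, arrived[i]);
        }
    }
}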
A second problem addressed is that cells bound for an aggregated output can go out any one of the outputs in the aggregated output, depending on availability. When the aggregate bit is set, cells bound for the aggregated output are dequeued for any one of its outputs. The MPBPC 1150 also uses the aggregate bit to determine how to assert back-pressure to an output from the previous stage in an aggregated input. Back-pressure is given if cells from a given input are queued excessively.
When inputs are aggregated to boost effective speed from 155 Mbps to 622 Mbps, the MPBPC 1150 measures the counts of the cells from any of the inputs in the aggregated input, rather than from the individual inputs. The back-pressure is then applied to all of the inputs in the aggregated input, rather than the individual inputs. More specifically, to aggregate, in the first level of the switch fabric (Fig. 6), the agg_in value is set for all inputs actually connected to a high speed input.
Agg_out is set for all possible destinations of the inputs that agg_in is set for. In subsequent levels, agg_in is set for those links which have agg_out set in the previous level. Thus agg_out is set for all possible destinations of an input in which agg_in has been set. In the last level, agg_in is set for those links which have agg_out set in the previous level. Agg_out is set for those links which are actually connected to a 622 Mbps output port.
Proportional Bandwidth Queues
Switch elements 1040 and the routing table circuits 1030 can also support proportional bandwidth queues. Proportional bandwidth queues solve a problem that arises when data traffic from sources of drastically different bandwidths coexists on an intermediate link. In one specific embodiment, the switch element 1040 and routing table circuit 1030 can support, for example, three queues that are of equal priority but which have bandwidths of 1/8, 1/4 and 5/8 of the available bandwidth.

MPBPC 1150 maintains a service order table 1154 (Fig. 14) which enhances fairness among connections having different bandwidths. The assigned proportions can be adjusted by externally altering the contents of the service order table 1154.
The proportional bandwidth queues are implemented by the MPBPC
1150 by having a service order table 1154 for the dequeuing process that specifies the order of queue service for each of the output queues. The schedule in each stage is delayed by one cell period, which tends to minimize queuing, and thus the cell memories required, by attempting to dequeue a cell from a given queue just after a cell for that queue is likely to have arrived. MPBPC 1150 must make sure that all of the possible competitions between differing bandwidth queues turn out as predicted.
For example, if cells arrive only in the 1/8th and 1/4th queues, then the 1/8th queue should get 1/3rd of the available bandwidth on the output channel, and the 1/4th queue should get 2/3rds of the bandwidth. Similar results should apply for all of the possible x-way competitions. These issues have been addressed by a careful design of the service order table 1154 stored within MPBPC 1150. This table 1154 provides each of the participants in the possible competitions with approximately the proper bandwidth while reducing the overhead processing that must be done by the MPBPC
1150 to determine dequeuing order. Additionally, the MPBPC 1150 can update the service order table 1154 on the fly so that moment-by-moment adjustments may be made by certain types of switching systems that will use these devices to enhance fairness in competition.
As a further detail, Fig. 14 shows a service order table 1154 stored in MPBPC 1150 for determining dequeuing from cell buffer pool 1100 when more than one proportional bandwidth queue is queued for a given output. The MPBPC 1150 defines a service order interval of eight cell transmission cycles for determining the priority of serving the proportional bandwidth queues. These cycles are represented by the eight columns labelled 0 to 7 in Fig. 14. During any given cycle, the MPBPC examines which queues for a given output wish to transmit data to that output.
It will be seen that during any given cycle there is a queue service order listing the priority with which the bandwidth queues will be serviced. During any cycle, only one queue is serviced, and that queue is the one having the highest priority, 1st through 3rd, as listed in the service order table for that cycle. For example, during cell cycle 4, the priority list is 3, 4 and 2. Should cells from two proportional bandwidth queues both be ready to transmit during cycle 4, the cell from the queue having the higher priority during that cycle will be transmitted. During the next cell cycle, cycle 5, if both of those queues wish to transmit, the other bandwidth queue will be serviced, because the queue service order table shows that it has the higher priority during that cell cycle.
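The selection logic can be sketched as below. The table contents here are an illustrative pattern in which the 5/8, 1/4 and 1/8 queues receive first priority in five, two and one of the eight cycles respectively; they are not the values of Fig. 14.

#include <stdbool.h>

enum { Q_5_8 = 0, Q_1_4 = 1, Q_1_8 = 2, NUM_PBQ = 3 };

/* service_order[cycle][rank]: queue served at that rank (1st, 2nd, 3rd). */
static const int service_order[8][NUM_PBQ] = {
    {Q_5_8, Q_1_4, Q_1_8}, {Q_5_8, Q_1_8, Q_1_4},
    {Q_1_4, Q_5_8, Q_1_8}, {Q_5_8, Q_1_4, Q_1_8},
    {Q_1_8, Q_1_4, Q_5_8}, {Q_5_8, Q_1_4, Q_1_8},
    {Q_1_4, Q_5_8, Q_1_8}, {Q_5_8, Q_1_8, Q_1_4},
};

/* Returns the queue to service this cycle, or -1 if none has a cell ready. */
int select_queue(int cell_cycle, const bool ready[NUM_PBQ]) {
    const int *order = service_order[cell_cycle % 8];
    for (int rank = 0; rank < NUM_PBQ; rank++)
        if (ready[order[rank]]) return order[rank];
    return -1;
}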
Multicast
One data transmission application which the present ATM switch fabric architecture may be employed to support is multicast transmission.
During multicast transmission, data from one source is distributed to several destinations, i.e., the multicast group, which comprise some, but not necessarily all, of the possible switch fabric outputs. An example of such an application is a video conference over a network in which several workstations are connected by a switch fabric, and voice and image data are transmitted from each workstation to each of the other workstations.
Generally, multicast may be supported in a variety of ways. Source cell duplication is a simple, but brute-force solution to the multicast support problem.
Fig. 15 is a simplified representation of the source cell duplication solution 1060 known in the prior art. With source cell duplication, the source 1062 of the data cells creates copies of each cell for transmission to each destination. This solution suffers from a number of significant disadvantages. Not only does the task of duplication place a severe load on the source 1062, it also places limits on the number of destinations 1064, 1066, 1068 in the multicast group connected by a switch element 1040. As a result, the size of the multicast group for a network which supports multicast may be drastically limited (e.g., to half the size of connection for the network). Additionally, expensive bandwidth is wasted. For example, in cases where more than one destination is at the same distant location, redundant copies of the information are transmitted over the entire distance, thereby unnecessarily contributing to system traffic.
Mid-switch cell duplication 1070 is an alternate multicast support solution. A simplified representation of a mid-switch duplication 1070 solution is shown in Fig. 16. According to a typical mid-switch duplication 1070 solution, a module 1072 is provided at some point in the switching system which duplicates the transmitted cells from a source 1074 as necessary for distribution to destinations 1064, 1066, 1068 in the multicast group. Although this solution does not suffer from all of the disadvantages of the source cell duplication solution 1060, bandwidth is still unnecessarily consumed by the transmission of the cell duplicates through the remainder of the system.
The optimal solution for supporting multicast, made practicable by the present ATM switch fabric architecture, is referred to as tree-based cell duplication.
A simplified representation of a tree-based cell duplication system 1076 is provided in Fig. 17. With a tree-based cell duplication system 1076, the transmitted cells are not duplicated until the last points of divergence 1077, 1079 to the destinations 1064, 1066, 1068, e.g., by means of cell replication within the switch element 1040, represented herein as a serial redirector 1078. This avoids the unnecessary consumption of bandwidth encountered with the previously described solutions.
One possible complication of this solution, however, is that all destinations 1064, 1066, 1068 of a multicast cell may not be reachable with a cell having the same address.
The present ATM switch fabric architecture implements a tree-based cell duplication system 1076 using a specific embodiment of the switch element described above. The solution is described with reference to Fig. 18. An eight-bit field in the routing tag of each transmitted cell determines what is called the multicast group for that cell. As described above, the routing tag, a twelve-nibble field placed on the front of a cell by the routing table circuit based on the content of a multicast group bit register 1081, dictates the path of the cell through the switch fabric. The multicast group consists of the group of network destinations to which the cell is to be transmitted. For each switch element, the multicast group field determines which switch element outputs upon which a received cell is to be placed in order to get the information to the desired destinations.
The switch element of the present ATM switch fabric architecture stores an array of multicast group bits in its RAM, the array including one eight-bit word for each of the multicast groups. Each bit in each word represents one switch element output. When the multicast queue of the switch element is selected and a data cell placed therein (as determined by a one nibble field in the routing tag), the multicast group field in the cell is used as an index into the multicast group bits array, pointing to a particular word in the array. Any bits which are set in the selected word correspond to the switch element outputs on which the cell in the multicast queue is to be placed.
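A sketch of this lookup follows; the array size of 256 is simply what an eight-bit group field allows, and the word width of eight bits matches the eight outputs described above.

#include <stdint.h>

static uint8_t multicast_group_bits[256];   /* one 8-bit word per multicast group */

/* Bitmap of outputs on which a cell in the multicast queue is to be placed. */
uint8_t outputs_for_group(uint8_t multicast_group_field) {
    return multicast_group_bits[multicast_group_field];
}

/* Convenience check: must this cell go out on output 'out' (0..7)? */
int needs_output(uint8_t multicast_group_field, int out) {
    return (multicast_group_bits[multicast_group_field] >> out) & 1;
}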
Multicast Completion
One difficulty encountered in multicast transmissions is that it is often not possible to place a given cell on all of the desired outputs simultaneously. This is referred to as the problem of multicast completion. Such a situation might arise, for example, if a cell from a higher priority queue has already been placed on the selected output. This situation can also occur if a subsequent switch element has exerted back-pressure on the selected output, thereby preventing the transmission of cells from that output. Some open loop switching systems simply allow cells to be dropped if congestion causes cell buffers to overflow. If this occurs with the transmission of video information, for example, the penalty incurred from such a drop is relatively high. An entire video frame might be lost due to the loss of just one cell.
Other penalties are incurred if the system protocol requires the retransmission of the entire frame or a series of frames.
The switch element 1040 of the present ATM switch fabric architecture solves this problem by keeping a record of the switch element outputs upon which the cell in the multicast queue has been successfully placed.
Referring to Fig. 19, multicast queue controller 1156 of switch element 1040 sets bits in a multicast queue completion register 1083 for each output on which the cell is actually placed. The ATM cell in the multicast queue is dequeued only when the bits in the completion register 1083 match the bits in the word selected from the multicast group bits array stored in the multicast group bits register 1081. As indicated by the multicast group bits word, cell M 1085 in the multicast queue is to be placed on outputs (2), (3), and (5) 1089, 1091, 1093. However, cell H 1087 in the higher priority queue has already been placed on output (3) 1091, thereby preventing immediate placement of cell M 1085 on that output 1091. This is reflected by the fact that bit number 3 in the completion register 1083 (corresponding to output (3) 1091) has not yet been set. When cell M 1085 is eventually placed on output (3) 1091, this bit 3 is set. The word in the completion register 1083 then matches the word from the multicast group bits array 1081, allowing cell M 1085 to be dequeued.
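The completion test reduces to a bitwise comparison, sketched here for a single head-of-queue multicast cell (bookkeeping for multiple queued multicast cells is omitted):

#include <stdbool.h>
#include <stdint.h>

static uint8_t group_word;       /* word selected from the multicast group bits array */
static uint8_t completion_reg;   /* bit set for each output actually served           */

/* Called each time the head multicast cell is placed on output 'out'. */
void mark_output_served(int out) {
    completion_reg |= (uint8_t)(1u << out);
}

/* The cell may be dequeued only once every required output has been served. */
bool multicast_cell_complete(void) {
    return (completion_reg & group_word) == group_word;
}

/* Reset the completion register when the cell is finally dequeued. */
void multicast_cell_dequeued(void) {
    completion_reg = 0;
}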
Per Priority Queuing with Per Connection Counts
As described above, the routing table circuit 1030 of the present ATM
switch fabric architecture receives a cell from a source, looks up the intended address in its RAM, adds the appropriate routing tag to the cell, and then puts the cell out onto the switch fabric via the switching elements. The routing table circuit 1030 also performs a queuing function in which it stores queued cells in an accompanying SRAM before placing them on the switch fabric. The routing table circuit 1030 of each source queues the cells on a per-priority basis, but also keeps track of how many cells from each connection are in the queue at any given time. Unlike with a strict per-connection queuing discipline, no transmission scheduler is required.
Fig. 20 is a table which illustrates the manner in which the routing table queues incoming cells. In the illustrated example, cells having priorities 0, 2, and 5 have been queued by the routing table. Within each of the priorities, cells from different connections have been queued. The count of queued cells per connection is maintained as shown in Fig. 21, showing the number of queued cells for each VPC.
The routing table uses the connection count to perform closed loop functions such as sending a back-pressure signal to a particular data cell source. Thus, with the present ATM switch fabric architecture, the simplicity of per priority queuing is enjoyed, while at the same time per connection queue depths are kept so that congestion management techniques can be employed.
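A sketch of this bookkeeping follows; the number of priorities, the number of connections, and the congestion depth are illustrative assumptions rather than figures from the architecture.

#include <stdbool.h>

#define NUM_PRIORITIES   8
#define NUM_CONNECTIONS  1024
#define CONGESTED_DEPTH  16

static int priority_queue_depth[NUM_PRIORITIES];   /* the only queues actually kept    */
static int connection_count[NUM_CONNECTIONS];      /* per-connection bookkeeping only  */

void rt_enqueue(int priority, int connection) {
    priority_queue_depth[priority]++;
    connection_count[connection]++;
}

void rt_dequeue(int priority, int connection) {
    priority_queue_depth[priority]--;
    connection_count[connection]--;
}

/* Closed-loop decision, e.g. back-pressure toward this connection's source. */
bool connection_congested(int connection) {
    return connection_count[connection] > CONGESTED_DEPTH;
}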
Marked Interrupt Linked List
It is one of the functions of the routing table to alert the external processor that a virtual channel is experiencing congestion. One method for doing this is to generate an interrupt signal each time a congestion condition is encountered.
However, it is not desirable to generate an interrupt every time a data cell is queued for a congested channel, especially if the processor has already been notified of the congestion on that channel. Also, more than one channel may experience congestion before the processor is able to respond to a congestion interrupt. It is therefore necessary to keep track of channels experiencing congestion so that the processor may take appropriate action for all such channels when it is ready to do so. One method for keeping track of congested channels includes assigning a bit for each of the channels, and setting the bits corresponding to channels which experience congestion.
The processor then checks the bits for all of the channels to determine which channels are congested. However, because of the number of channels made possible by the present ATM switch fabric architecture, such a solution is undesirably slow, consuming valuable processor time.
Therefore, according to a specific embodiment of the ATM switch fabric architecture, the routing table maintains a linked list of currently congested channels. Each channel in the list is also marked (i.e., a "congestion" bit is set) so that the queuing of further data cells for those channels does not generate additional interrupts. When a data cell is queued for a particular channel, the current queue depth for that channel is compared with the configured congested queue depth.
If the current queue depth is greater and the channel is not marked, the routing table generates an interrupt, and the channel is marked and added to the end of the linked list. If the channel is already marked, nothing further happens.
When the processor is able to respond to an interrupt, it first looks at the interrupt head pointer which points to an address which represents the first virtual channel in the linked list. The processor then reads from that channel the address for the next channel experiencing congestion. The processor continues to read the addresses for the channels experiencing congestion until it reaches the end of the linked list. The processor then takes appropriate action for each of the channels in the list to resolve the congestion. Such action might include, for example, sending an appropriate feedback message, or changing the queue depth which causes interrupts.
A congestion-relieved threshold is determined by multiplying the congestion threshold by a fractional constant (e.g., 0.75). Whenever a cell is dequeued and the current queue depth falls below the congestion-relieved threshold, a second interrupt is generated and the congestion indication is cleared.
A global "enable" for interrupts allows the system processor to read the linked list of congested channels atomically. If a channel becomes congested while the interrupts are disabled, once the interrupts are re-enabled, the next cell queued for that channel will cause an interrupt if the channel is still congested.
The exemplary ATM switch fabric architecture has now been explained with reference to specific embodiments. Other embodiments will be apparent to those of ordinary skill in the art upon review of this description. It is therefore not intended that the ATM switch fabric architecture be limited, except as indicated by the appended claims.
The invention has now been explained in accordance with specific embodiments; however, many variations will be obvious to those skilled in the art.
The invention should therefore not be limited except as provided in the attached claims.

Claims (17)

THE EMBODIMENTS OF THE INVENTION IN WHICH AN EXCLUSIVE
PROPERTY OR PRIVILEGE IS CLAIMED ARE DEFINED AS FOLLOWS:
1. A method for detecting congestion within an ATM device, the device having at least one buffer pool, a plurality of service classes and a plurality of virtual channels, the method comprising:
counting the number of cells in a buffer in a virtual channel and determining a virtual channel count;
comparing said virtual channel count to a virtual channel count threshold;
generating a virtual channel congestion signal if indicated by said virtual channel compare;
counting the number of cells in a buffer within a service class and determining a service class count;
comparing said service class count to a preset service class threshold;
generating a service class congestion signal if indicated by said service class compare;
determining a number of available cell buffers remaining in a buffer pool as a device count;
comparing said device count to a preset device threshold;
generating a device congestion signal if indicated by said device count compare;
receiving any generated congestion signals and initiating congestion management actions if one or more of said congestion signals is received.
2. The method according to claim 1 further comprising:
counting the number of cells in a buffer for a virtual output and determining a virtual output count;
comparing said virtual output count to a virtual output count threshold;
and generating a virtual output congestion signal if indicated by said virtual output compare.
3. The method according to claim 2 further comprising:
counting the number of cells in a service class queue and determining a service class queue count;
comparing said service class queue count to a service class queue threshold;
generating a service class queue congestion signal if said service class queue count exceeds said service class queue threshold.
4. The method according to claim 1 wherein for each threshold two threshold values are stored, the first being an initial threshold and the second being a maximum threshold.
5. The method according to claim 4 wherein when an initial threshold value is exceeded, a congestion management action is taken according to that threshold.
6. The method according to claim 4 wherein when a maximum threshold value is exceeded, cells are dropped unconditionally.
7. The method according to claim 1 wherein a threshold value is stored as a power of two.
8. The method according to claim 1 wherein a threshold value is stored as a power of two and at least one bit of mantissa to add to the resolution of the stored threshold value.
9. The method according to claim 1 wherein congestion management action can be one or more actions from the set: Cell Loss Priority Marked Cell Dropping, Early Packet Discard, Random Early Discard, Explicit Forward Congestion Indication Marking, Congestion Indication Marking, Explicit Rate Signaling.
10. The method according to claim 9 wherein a choice of congestion management actions is selected on a per virtual channel basis.
11. The method according to claim 1 wherein for at least one stored threshold, a state bit is kept to allow a hysteresis function to be implemented so that a congestion management action can be taken for an extended period until congestion is relieved.
12. An ATM device capable of detecting congestion in a plurality of dimensions comprising:
a cell buffer;
an input line for receiving ATM cells;
an output line for outputting ATM cells;
a controller for receiving congestion indications;
a virtual channel counter and a virtual channel count threshold;
a service class counter and a service class counter threshold;
a device counter and a device counter threshold;
a comparator for comparing a value in one of said counters to its corresponding threshold and generating a congestion indication; and a configuration memory.
13. The device according to claim 12 further comprising a service class queue counter and a service class queue counter threshold.
14. The device according to claim 12 further comprising a service class group counter and a service class group threshold.
15. The device according to claim 12 wherein count values and threshold values are stored in said configuration memory and loaded into said counters for the processing of a cell.
16. The device according to claim 12 wherein said counters are incorporated into independent arithmetic logic units, each able to simultaneously perform a count decrement or increment and threshold compare during processing of a cell.
17. The device according to claim 12 further comprising a virtual channel configuration entry containing a queue depth value for a virtual channel, at least one threshold value, and at least one congestion management action bit indicating appropriate congestion management action for a virtual channel.
CA002271883A 1996-12-12 1997-12-12 Many dimensional congestion detection system and method Expired - Fee Related CA2271883C (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US3302996P 1996-12-12 1996-12-12
US60/033,029 1996-12-12
US08/970,882 1997-11-14
US08/970,882 US6134218A (en) 1994-04-28 1997-11-14 Many dimensional congestion detection system and method
PCT/US1997/022863 WO1998026628A1 (en) 1996-12-12 1997-12-12 Many dimensional congestion detection system and method

Publications (2)

Publication Number Publication Date
CA2271883A1 CA2271883A1 (en) 1998-06-18
CA2271883C true CA2271883C (en) 2003-05-20

Family

ID=26709201

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002271883A Expired - Fee Related CA2271883C (en) 1996-12-12 1997-12-12 Many dimensional congestion detection system and method

Country Status (3)

Country Link
US (1) US6134218A (en)
CA (1) CA2271883C (en)
WO (1) WO1998026628A1 (en)

Families Citing this family (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DK174882B1 (en) * 1996-04-12 2004-01-19 Tellabs Denmark As Method and network element for transmitting data packets in a telephony transmission network
JP3801740B2 (en) * 1997-08-13 2006-07-26 富士通株式会社 Cell flow rate control method and cell exchange system using the same
US6185185B1 (en) * 1997-11-21 2001-02-06 International Business Machines Corporation Methods, systems and computer program products for suppressing multiple destination traffic in a computer network
US6633543B1 (en) * 1998-08-27 2003-10-14 Intel Corporation Multicast flow control
US6747951B1 (en) * 1999-09-20 2004-06-08 Nortel Networks Limited Method and apparatus for providing efficient management of resources in a multi-protocol over ATM (MPOA)
US6724776B1 (en) * 1999-11-23 2004-04-20 International Business Machines Corporation Method and system for providing optimal discard fraction
US6657960B1 (en) 1999-11-23 2003-12-02 International Business Machines Corporation Method and system for providing differentiated services in computer networks
US6771652B1 (en) 1999-11-23 2004-08-03 International Business Machines Corporation Method and system for controlling transmission of packets in computer networks
TW580813B (en) * 1999-11-23 2004-03-21 Ibm Method and system for controlling transmission of packets in computer networks
US6674718B1 (en) 2000-04-11 2004-01-06 International Business Machines Corporation Unified method and system for scheduling and discarding packets in computer networks
US6687224B1 (en) * 2000-02-28 2004-02-03 Orckit Communications, Ltd. Bandwidth sharing method
GB2360168B (en) * 2000-03-11 2003-07-16 3Com Corp Network switch including hysteresis in signalling fullness of transmit queues
US7016365B1 (en) * 2000-03-31 2006-03-21 Intel Corporation Switching fabric including a plurality of crossbar sections
IL143539A0 (en) * 2000-06-09 2002-04-21 Hughes Electronics Corp Queue and scheduler utilization metrics
US7075928B1 (en) * 2000-09-25 2006-07-11 Integrated Device Technology, Inc. Detection and recovery from connection failure in an ATM switch
FI20002320A (en) * 2000-10-20 2002-04-21 Nokia Corp Blocking Management in Wireless Telecommunication Networks
SE0003854D0 (en) * 2000-10-24 2000-10-24 Ericsson Telefon Ab L M Adaptive regulation in a mobile system
US6973032B1 (en) * 2000-12-04 2005-12-06 Cisco Technology, Inc. Selective backpressure control for multistage switches
US6963536B1 (en) * 2001-03-23 2005-11-08 Advanced Micro Devices, Inc. Admission control in a network device
US6954427B1 (en) 2001-03-28 2005-10-11 Advanced Micro Devices, Inc. Method and apparatus for performing priority-based admission control
US6950887B2 (en) * 2001-05-04 2005-09-27 Intel Corporation Method and apparatus for gathering queue performance data
US7277429B2 (en) * 2001-06-01 2007-10-02 4198638 Canada Inc. Cell-based switch fabric with distributed scheduling
US7197042B2 (en) * 2001-06-01 2007-03-27 4198638 Canada Inc. Cell-based switch fabric with cell-to-line-card control for regulating injection of packets
US7266612B1 (en) 2002-02-14 2007-09-04 At&T Corp. Network having overload control using deterministic early active drops
US7154853B2 (en) * 2002-05-02 2006-12-26 Intel Corporation Rate policing algorithm for packet flows
US8023950B2 (en) 2003-02-18 2011-09-20 Qualcomm Incorporated Systems and methods for using selectable frame durations in a wireless communication system
US8081598B2 (en) 2003-02-18 2011-12-20 Qualcomm Incorporated Outer-loop power control for wireless communication systems
US20040160922A1 (en) * 2003-02-18 2004-08-19 Sanjiv Nanda Method and apparatus for controlling data rate of a reverse link in a communication system
US7660282B2 (en) 2003-02-18 2010-02-09 Qualcomm Incorporated Congestion control in a wireless data network
US7155236B2 (en) 2003-02-18 2006-12-26 Qualcomm Incorporated Scheduled and autonomous transmission and acknowledgement
US8150407B2 (en) 2003-02-18 2012-04-03 Qualcomm Incorporated System and method for scheduling transmissions in a wireless communication system
US8391249B2 (en) 2003-02-18 2013-03-05 Qualcomm Incorporated Code division multiplexing commands on a code division multiplexed channel
US8705588B2 (en) 2003-03-06 2014-04-22 Qualcomm Incorporated Systems and methods for using code space in spread-spectrum communications
US7215930B2 (en) 2003-03-06 2007-05-08 Qualcomm, Incorporated Method and apparatus for providing uplink signal-to-noise ratio (SNR) estimation in a wireless communication
US7573827B2 (en) * 2003-05-06 2009-08-11 Hewlett-Packard Development Company, L.P. Method and apparatus for detecting network congestion
US8477592B2 (en) 2003-05-14 2013-07-02 Qualcomm Incorporated Interference and noise estimation in an OFDM system
US8489949B2 (en) 2003-08-05 2013-07-16 Qualcomm Incorporated Combining grant, acknowledgement, and rate control commands
GB2416647B (en) * 2004-07-26 2006-10-25 Motorola Inc Method and apparatus for resource allocation
US7457245B2 (en) * 2004-09-07 2008-11-25 Intel Corporation Directional and priority based flow control mechanism between nodes
US7536479B2 (en) * 2004-11-09 2009-05-19 Intel Corporation Local and remote network based management of an operating system-independent processor
US7733770B2 (en) * 2004-11-15 2010-06-08 Intel Corporation Congestion control in a network
US7526570B2 (en) * 2005-03-31 2009-04-28 Intel Corporation Advanced switching optimal unicast and multicast communication paths based on SLS transport protocol
US20070230369A1 (en) * 2006-03-31 2007-10-04 Mcalpine Gary L Route selection in a network
US9219686B2 (en) 2006-03-31 2015-12-22 Alcatel Lucent Network load balancing and overload control
US8917598B2 (en) * 2007-12-21 2014-12-23 Qualcomm Incorporated Downlink flow control
US8699487B2 (en) * 2008-02-04 2014-04-15 Qualcomm Incorporated Uplink delay budget feedback
US8656239B2 (en) * 2008-02-12 2014-02-18 Qualcomm Incorporated Control of data transmission based on HARQ in a wireless communication system
US20110261696A1 (en) * 2010-04-22 2011-10-27 International Business Machines Corporation Network data congestion management probe system
GB2485236B (en) 2010-11-08 2015-05-27 Sca Ipla Holdings Inc Infrastructure equipment and method
CN109391559B (en) * 2017-08-10 2022-10-18 华为技术有限公司 Network device

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4937817A (en) * 1988-12-29 1990-06-26 American Telephone And Telegraph Company Packet selection for packet distribution arrangements
US4998244A (en) * 1989-07-17 1991-03-05 Racal Data Communications Inc. High speed module interconnection bus
JP3128654B2 (en) * 1990-10-19 2001-01-29 富士通株式会社 Supervisory control method, supervisory control device and switching system
FR2670972A1 (en) * 1990-12-20 1992-06-26 Lmt Radio Professionelle TRANSIT SWITCH OF AN ASYNCHRONOUS NETWORK, IN PARTICULAR AN ATM NETWORK.
CA2068847C (en) * 1991-07-01 1998-12-29 Ronald C. Roposh Method for operating an asynchronous packet bus for transmission of asynchronous and isochronous information
US5233606A (en) * 1991-08-02 1993-08-03 At&T Bell Laboratories Arrangement for controlling shared-buffer-memory overflow in a multi-priority environment
JP2818505B2 (en) * 1991-08-14 1998-10-30 日本電気株式会社 Polishing equipment
FI91349C (en) * 1992-07-17 1994-06-10 Nokia Telecommunications Oy A method for implementing a connection in a time or space plane
GB9309468D0 (en) * 1993-05-07 1993-06-23 Roke Manor Research Improvements in or relating to asynchronous transfer mode communication systems
JPH0766820A (en) * 1993-08-24 1995-03-10 Matsushita Electric Ind Co Ltd Flow control system
US5483526A (en) * 1994-07-20 1996-01-09 Digital Equipment Corporation Resynchronization method and apparatus for local memory buffers management for an ATM adapter implementing credit based flow control
EP0712220A1 (en) * 1994-11-08 1996-05-15 International Business Machines Corporation Hop-by-hop flow control in an ATM network
JPH10126510A (en) * 1996-10-17 1998-05-15 Fujitsu Ltd Setting device for common signal line

Also Published As

Publication number Publication date
US6134218A (en) 2000-10-17
WO1998026628A1 (en) 1998-06-18
CA2271883A1 (en) 1998-06-18

Similar Documents

Publication Publication Date Title
CA2271883C (en) Many dimensional congestion detection system and method
US6151301A (en) ATM architecture and switching element
US5570348A (en) Method and apparatus for enqueueing data cells in an ATM switch fabric architecture
KR100229558B1 (en) The low-delay or low-loss switch for asynchronous transfer mode
EP0864244B1 (en) Apparatus and methods to change thresholds to control congestion in atm switches
WO1998026628A9 (en) Many dimensional congestion detection system and method
US6717912B1 (en) Fair discard system
AU730804B2 (en) Method and apparatus for per traffic flow buffer management
US5629928A (en) Dynamic fair queuing to support best effort traffic in an ATM network
JP3606565B2 (en) Switching device and method
US5983278A (en) Low-loss, fair bandwidth allocation flow control in a packet switch
JP4006205B2 (en) Switching arrangement and method with separate output buffers
US6768717B1 (en) Apparatus and method for traffic shaping in a network switch
WO1997031461A1 (en) High speed packet-switched digital switch and method
EP0973304A2 (en) Apparatus and method for bandwidth management
JP4504606B2 (en) Apparatus and method for shaping traffic in a network switch
KR100368439B1 (en) Method and apparatus for ensuring transfer order in a packet-switch with a dual-switching plane000
Shimojo et al. A 622 Mbps ATM switch access LSI with multicast capable per-VC queueing architecture
JP2005328578A (en) Common buffer type atm switch

Legal Events

Date Code Title Description
EEER Examination request
MKLA Lapsed

Effective date: 20131212