WO2014051778A1 - Adaptive packet deflection to achieve fair, low-cost, and/or energy-efficient quality of service in network on chip devices - Google Patents

Adaptive packet deflection to achieve fair, low-cost, and/or energy-efficient quality of service in network on chip devices

Info

Publication number
WO2014051778A1
Authority
WO
WIPO (PCT)
Prior art keywords
packet
port
agent
logic
link
Application number
PCT/US2013/047525
Other languages
French (fr)
Inventor
Ren Wang
Ahmad Samih
Christian Maciocco
Tsung-Yuan Tai
Original Assignee
Intel Corporation
Application filed by Intel Corporation
Priority to CN201380045432.8A (published as CN104583992A)
Priority to DE112013003733.5T (published as DE112013003733B4)
Publication of WO2014051778A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/02 Topology update or discovery
    • H04L 45/06 Deflection routing, e.g. hot-potato routing
    • H04L 45/12 Shortest path evaluation
    • H04L 45/122 Shortest path evaluation by minimising distances, e.g. by selecting a route with minimum of number of hops
    • H04L 45/125 Shortest path evaluation based on throughput or bandwidth
    • H04L 45/302 Route determination based on requested QoS
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/12 Avoiding congestion; Recovering from congestion
    • H04L 47/24 Traffic characterised by specific attributes, e.g. priority or QoS

Abstract

Methods and apparatus for provision of adaptive packet deflection to achieve fair, low-cost, and/or energy-efficient Quality of Service (QoS) in Network-on-Chip (NoC) devices are described. In some embodiments, it is determined whether a target port of a packet has reached a threshold utilization value and the packet is routed to an alternate port in response to a deflection probability value that is to be determined based on a utilization value of the target port and a priority level value of the packet. Other embodiments are also claimed and/or disclosed.

Description

ADAPTIVE PACKET DEFLECTION TO ACHIEVE FAIR, LOW-COST, AND/OR ENERGY-EFFICIENT QUALITY OF SERVICE IN NETWORK ON CHIP DEVICES
FIELD
The present disclosure generally relates to the field of electronics. More particularly, an embodiment of the invention relates to techniques for provision of adaptive packet deflection to achieve fair, low-cost, and/or energy-efficient Quality of Service (QoS) in Network-on-Chip (NoC) devices.
BACKGROUND
Some current interconnection networks are used to connect many computing components, such as many cores in Chip Multi Processors (CMPs) and many nodes in clustered systems. Some Network-on-Chip (NoC) prototypes with high core count show that NoCs consume a substantial portion of overall system power. Moreover, with such systems, there can be a diverse set of applications running simultaneously on multiple cores/nodes, and the interconnect acts as a shared medium, servicing requests from these cores. As a result, at any given instance, there may exist multiple packet classes (data and control) that belong to multiple applications, originating from different cores/nodes. Each of these packets can have different Quality-of-Service (QoS) requirements. This means that the interconnect policy should be able to support multiple traffic classes, so that the packets that belong to a higher priority class are served with a certain QoS requirement (e.g., faster delivery time).
Current NoC-QoS approaches can be lumped into two main categories. First, QoS can be achieved by introducing additional queues to the router (e.g., extra virtual channels), assigning different classes of packets to these different queues, and serving them with different priorities. Although adding additional buffering/queues guarantees (at least to some extent) that all packets are routed through minimum paths, it significantly increases the power budget and associated costs of the interconnect. The second category of NoC-QoS approaches, on the other hand, does not necessarily add additional buffering; however, it requires major changes to the router architecture such that the router, instead of maintaining FIFO (First In First Out) queues, is able to pull out any packet from the available queues in any order and service it based on its priority level. This, therefore, increases the complexity of the interconnect and, even worse, increases its power consumption and cost as well.
BRIEF DESCRIPTION OF THE DRAWINGS
The detailed description is provided with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items.
Fig. 1 illustrates a block diagram of an embodiment of a computing system, which may be utilized to implement various embodiments discussed herein.
Fig. 2 illustrates a block diagram of an embodiment of a computing system, which may be utilized to implement various embodiments discussed herein.
Fig. 3 is a block diagram of a routing and switching logic, in accordance with an embodiment.
Fig. 4 illustrates a flow diagram of a method for a selective deflection policy for QoS support, according to some embodiments.
Fig. 5 illustrates a block diagram of an embodiment of a computing system, which may be utilized to implement various embodiments discussed herein.
Fig. 6 illustrates a block diagram of an embodiment of a computing system, which may be utilized to implement various embodiments discussed herein.
DETAILED DESCRIPTION
In the following description, numerous specific details are set forth in order to provide a thorough understanding of various embodiments. However, some embodiments may be practiced without the specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the particular embodiments. Various aspects of embodiments of the invention may be performed using various means, such as integrated semiconductor circuits ("hardware"), computer-readable instructions organized into one or more programs ("software"), or some combination of hardware and software. For the purposes of this disclosure, reference to "logic" shall mean either hardware, software, or some combination thereof.
Some embodiments improve the quality and/or performance of high-speed serial I/O channels via various techniques. For example, such techniques are used to provide adaptive packet deflection to achieve fair, low-cost, and/or energy-efficient Quality of Service (QoS) in Network-on-Chip (NoC) devices. As a result, interconnects may be able to support QoS traffic without an increase in power consumption and/or silicon area. Furthermore, some embodiments provide QoS for Network-on-Chips, without changing the router architecture or requiring support of multiple queues, while at the same time maintaining high overall throughput.
Also, some embodiments are used for both many-core processors and systems with many nodes (such as μcluster-based systems), allowing energy efficient and high performance interconnects that could fit within a target power budget. Moreover, some embodiments provide QoS support (e.g., without the queues used in some current implementations), a relatively simple router architecture (resulting in no or little increase in silicon area and without major changes to the router architecture), and/or less buffering area and power consumption (e.g., since the additional queues used in some current implementations are not used).
In an embodiment, NoC and/or QoS support is provided via selectively deflecting low priority packets to avoid/reduce congestion and to guarantee the timely delivery of high priority packets. By selectively deflecting packets, one embodiment does not require any additional buffering, nor does it demand changes to the router architecture; thus, it is capable of achieving QoS with minimal buffering area and simple router architectures, in various systems such as NoC. This, in turn, reduces the interconnect cost and power consumption. Also, in an embodiment, the interconnect(s) discussed herein are implemented in accordance with PCI Express Base Specification 3.0, Revision 3.0, version 1.0, November 10, 2010 and Errata for the PCI Express Base Specification Revision 3.0, October 20, 2011.
Various embodiments are discussed herein with reference to a computing system component, such as the components discussed herein, e.g., with reference to Figs. 1-2 and 5-6. More particularly, Fig. 1 illustrates a block diagram of a computing system 100, according to an embodiment of the invention. The system 100 includes one or more agents 102-1 through 102-M (collectively referred to herein as "agents 102" or more generally "agent 102"). In an embodiment, the agents 102 are components of a computing system, such as the computing systems discussed with reference to Figs. 2 and 5-6.
As illustrated in Fig. 1, the agents 102 communicate via a network fabric 104. In an embodiment, the network fabric 104 can include one or more interconnects (or interconnection networks) that communicate via a serial (e.g., point-to-point) link and/or a shared communication network. For example, some embodiments can facilitate component debug or validation on links that allow communication with fully buffered dual in-line memory modules (FBD), e.g., where the FBD link is a serial link for coupling memory modules to a host controller device (such as a processor or memory hub). Debug information may be transmitted from the FBD channel host such that the debug information is observed along the channel by channel traffic trace capture tools (such as one or more logic analyzers).
In one embodiment, the system 100 can support a layered protocol scheme, which includes a physical layer, a link layer, a routing layer, a transport layer, and/or a protocol layer. The fabric 104 can further facilitate transmission of data (e.g., in form of packets) from one protocol (e.g., caching processor or caching aware memory controller) to another protocol for a point-to-point network. Also, in some embodiments, the network fabric 104 can provide communication that adheres to one or more cache coherent protocols.
Furthermore, as shown by the direction of arrows in Fig. 1, the agents 102 transmit and/or receive data via the network fabric 104. Hence, some agents utilize a unidirectional link while others utilize a bidirectional link for communication. For instance, one or more agents (such as agent 102-M) transmit data (e.g., via a unidirectional link 106), other agent(s) (such as agent 102-2) receive data (e.g., via a unidirectional link 108), while some agent(s) (such as agent 102-1) both transmit and receive data (e.g., via a bidirectional link 110).
Also, in accordance with an embodiment, one or more of the agents 102 include one or more routing and switching logic 300 to facilitate communication between an agent (e.g., agent 102-1 shown) and one or more Input/Output ("I/O" or "IO") devices 124 (such as Peripheral Component Interconnect Express (PCIe) I/O devices, which operate in accordance with PCI Express Base Specification 3.0, Revision 3.0, version 1.0, November 10, 2010 and Errata for the PCI Express Base Specification Revision 3.0, October 20, 2011) and/or other agents coupled via the fabric 104, as will be further discussed herein (e.g., with reference to Figs. 3-4). Also, while Fig. 1 illustrates logic 300 to be included in the agent 102-1, logic 300 can be located elsewhere in the system 100, such as within I/O device(s) 124, or as part of another device (such as a network router) coupled to the network fabric 104.
Fig. 2 is a block diagram of a computing system 200 in accordance with an embodiment.
System 200 includes a plurality of sockets 202-208 (four are shown, but some embodiments can have more or fewer sockets). Each socket includes a processor and one or more routing and switching logic 300. In some embodiments, one or more routing and switching logic 300 can be present in one or more components of system 200 (such as those shown in Fig. 2). Additionally, each socket is coupled to the other sockets via a point-to-point (PtP) link, or a differential interconnect, such as a Quick Path Interconnect (QPI), MIPI (Mobile Industry Processor Interface), etc. As discussed with respect to the network fabric 104 of Fig. 1, each socket is coupled to a local portion of system memory, e.g., formed by a plurality of Dual Inline Memory Modules (DIMMs) that include dynamic random access memory (DRAM).
In another embodiment, the network fabric may be utilized for any System on Chip (SoC) application, and can utilize custom or standard interfaces, such as ARM compliant interfaces for AMBA (Advanced Microcontroller Bus Architecture), OCP (Open Core Protocol), MIPI (Mobile Industry Processor Interface), PCI (Peripheral Component Interconnect), or PCIe (Peripheral Component Interconnect Express).
Some embodiments use a technique that enables use of heterogeneous resources, such as AXI/OCP technologies, in a PC (Personal Computer) based system such as a PCI-based system without making any changes to the IP resources themselves. Embodiments provide two very thin hardware blocks, referred to herein as a Yunit and a shim, that can be used to plug AXI/OCP IP into an auto-generated interconnect fabric to create PCI-compatible systems. In one embodiment a first (e.g., a north) interface of the Yunit connects to an adapter block that interfaces to a PCI-compatible bus such as a direct media interface (DMI) bus, a PCI bus, or a Peripheral Component Interconnect Express (PCIe) bus. A second (e.g., south) interface connects directly to a non-PC interconnect, such as an AXI/OCP interconnect. In various implementations, this bus may be an OCP bus.
In some embodiments, the Yunit implements PCI enumeration by translating PCI configuration cycles into transactions that the target IP can understand. This unit also performs address translation from re-locatable PCI addresses into fixed AXI/OCP addresses and vice versa. The Yunit may further implement an ordering mechanism to satisfy a producer-consumer model (e.g., a PCI producer-consumer model). In turn, individual IPs are connected to the interconnect via dedicated PCI shims. Each shim may implement the entire PCI header for the corresponding IP. The Yunit routes all accesses to the PCI header and the device memory space to the shim. The shim consumes all header read/write transactions and passes on other transactions to the IP. In some embodiments, the shim also implements all power management related features for the IP.
Thus, rather than being a monolithic compatibility block, embodiments that implement a Yunit take a distributed approach. Functionality that is common across all IPs, e.g., address translation and ordering, is implemented in the Yunit, while IP-specific functionality such as power management, error handling, and so forth, is implemented in the shims that are tailored to that IP.
In this way, a new IP can be added with minimal changes to the Yunit. For example, in one implementation the changes may occur by adding a new entry in an address redirection table. While the shims are IP-specific, in some implementations a large amount of the functionality (e.g., more than 90%) is common across all IPs. This enables a rapid reconfiguration of an existing shim for a new IP. Some embodiments thus also enable use of auto-generated interconnect fabrics without modification. In a point-to-point bus architecture, designing interconnect fabrics can be a challenging task. The Yunit approach described above leverages an industry ecosystem into a PCI system with minimal effort and without requiring any modifications to industry-standard tools.
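To illustrate the address redirection table mentioned above, the following minimal Python sketch shows how the Yunit's translation between relocatable PCI addresses and fixed AXI/OCP addresses could work. The table layout, names, and base/size window scheme are illustrative assumptions, not Intel's actual design.

```python
class AddressRedirectionTable:
    """Hypothetical sketch of a Yunit address redirection table that maps
    relocatable PCI windows to fixed AXI/OCP address bases."""

    def __init__(self):
        self.entries = []  # list of (pci_base, size, ocp_base) tuples

    def add_ip(self, pci_base, size, ocp_base):
        # Adding a new IP only requires adding a new table entry, as noted above.
        self.entries.append((pci_base, size, ocp_base))

    def pci_to_ocp(self, pci_addr):
        # Translate a relocatable PCI address into a fixed AXI/OCP address.
        for pci_base, size, ocp_base in self.entries:
            if pci_base <= pci_addr < pci_base + size:
                return ocp_base + (pci_addr - pci_base)
        raise KeyError(f"address {pci_addr:#x} not claimed by any IP")
```

The reverse translation (from a fixed AXI/OCP address back to a relocatable PCI address) would walk the same table in the opposite direction.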
As shown in Fig. 2, each socket is coupled to a Memory Controller (MC)/Home Agent (HA) (such as MC0/HA0 through MC3/HA3). Each memory controller is coupled to a corresponding local memory (labeled as MEM0 through MEM3), which can be a portion of system memory (such as memory 512 of Fig. 5). In some embodiments, the memory controller (MC)/Home Agent (HA) (such as MC0/HA0 through MC3/HA3) can be the same or similar to agent 102-1 of Fig. 1, and the memory, labeled as MEM0 through MEM3, is the same or similar to the memory devices discussed with reference to any of the figures herein. Generally, processing/caching agents send requests to a home node for access to a memory address with which a corresponding "home agent" is associated. Also, in one embodiment, MEM0 through MEM3 can be configured to mirror data, e.g., as master and slave. Also, one or more components of system 200 can be included on the same integrated circuit die in some embodiments.
Furthermore, one implementation (such as shown in Fig. 2) is for a socket glueless configuration with mirroring. For example, data assigned to a memory controller (such as MC0/HA0) is mirrored to another memory controller (such as MC3/HA3) over the PtP links.
Some current implementations of NoCs in CMPs or System Area Networks (SANs) have various topologies, including mesh, torus, and irregular mesh topologies. Such topologies usually feature a relatively large node connection degree (e.g., 4 in a torus topology). As a result, a flow or a packet can potentially choose one of multiple paths between a given source-destination pair. In adaptive routing, when a packet encounters a faulty or congested path, it can select another bypassing path, even if it is longer in some cases. This allows network traffic to be balanced, and can potentially improve throughput and latency. However, current adaptive routing approaches generally treat all packets equally, without consideration for QoS support for traffic with different service levels (priorities).
As mentioned before, an NoC can be used as a shared medium connecting and servicing all cores/nodes on the chip. This means that at any point in time, there can be multiple messages of different node origins and different types being communicated. For example, some messages can be related to control signaling, which has higher priority than other messages. Further, different applications can have different service level requirements (real time vs. best effort). Without loss of generality, some embodiments assume there are N classes of traffic, with priority 1 being the highest priority, which should be delivered in a very timely fashion.
Some current implementations provide QoS support for different classes by having separate queues in the router and serving, according to the service level agreement, different queues based on priority. For example, a higher priority queue receives more serving time and is able to preempt lower priority packets. One major reason why this class of methods introduces additional separate queues is the fact that traditional router architecture is not capable of fetching packets from a single FIFO queue in an out-of-order fashion; hence, the additional dedicated per-class queues are provided. The drawback of this category of techniques is that multiple-queue support usually increases the total required buffering, which in turn increases both the area (e.g., for the additional queues and any supporting logic) and the power consumption of the router (e.g., to operate the additional logic for the queues and their supporting logic). With the increasing number of nodes/cores in the system, this would become a severe problem. Some studies have also shown that in general the interconnect load can be relatively low, so larger buffering may waste energy and area.
Moreover, another class of methods addresses the problem of the router not being able to fetch packets from the queues in an out-of-order fashion by foregoing the traditional, simple router architecture and introducing a sophisticated one that is capable of pulling out packets from a given queue in no particular order (i.e., without respecting their FIFO order). Although these approaches do not introduce additional buffering, they still need a complex router design that by itself consumes a significant amount of power and increases the interconnect cost and area. By contrast, some embodiments provide for interconnect QoS support without requiring multiple queues or changing the router architecture.
Fig. 3 is a block diagram of a routing and switching logic 300, in accordance with an embodiment. The logic 300 selectively deflects routing of packets for QoS support. When a packet 302 arrives at an input port 303, the routing/switching logic 304 checks the utilization of the target port (e.g., output port 306) that is associated with a destination of the received packet 302 (e.g., where the destination is identified by a destination address in a header of the packet 302 or accompanying information, etc.). If the utilization is high (e.g., at output port 306), the highest priority traffic can still be sent to the target port 306. However, a lower priority packet is more likely to be deflected to a less utilized port (e.g., output port 308), even though this port is not the target port. Eventually, the lower priority packets will reach the destination on an alternative path, which takes more time to traverse than the minimal path. By doing so, the QoS for higher priority packets is supported.
In an embodiment, when the network is lightly loaded, all packets are routed through minimal paths to achieve minimal routing latency and energy consumption. When the utilization at certain ports increases (e.g., as compared with threshold utilization value(s), which can be based on information detected by one or more sensors proximate to the ports), the logic 304 will selectively deflect one or more lower priority packets to other ports to avoid or reduce further congestion, even if the other ports are not the ports on the minimal path to the destination. This approach in turn reduces or avoids further congestion for the higher priority traffic and supports their timely delivery. Moreover, since this approach balances the load on different ports, higher overall network throughput can be achieved as well. To avoid live-locks (i.e., where a packet is deflected repeatedly), the priority of packets being deflected is gradually increased with every packet deflection. This in turn can guarantee that even a low priority packet will eventually be delivered to its destination, instead of being deflected endlessly.
Fig. 4 illustrates a flow diagram of a method for a selective deflection policy for QoS support, according to some embodiments. In various embodiments, the operations discussed with reference to Fig. 4 are performed by one or more of the components discussed with reference to Figs. 1-3 and/or 5-6.
Referring to Fig. 4, when a packet is received at operation 402 and the utilization of the target port is above (or at) a certain threshold value as determined at operation 404, it indicates that some packets need to be deflected to avoid further congestion. In this case, the probability for deflecting a packet to a non-target port is calculated at operation 406, based on the following factors:
(1) utilization of the target port - the higher the utilization, the higher the deflection probability (the reason for this is to prevent saturation and congestion for the target port, and to ensure the timely delivery of the higher priority packets); and/or
(2) priority level of the packet - the lower priority packet is more likely to be deflected (while, on the other hand, a packet with higher priority is less likely to be deflected).
Accordingly, a probability-based deflection mechanism is used to provide for QoS in some embodiments. For example, the deflection probability p is calculated using the following equation:
p = a x (targetPortUtilization - utilizationThreshold) x (n / N)
where "a" is a scaling factor (which can be experimentally determined), "n" is the priority of the packet [1, 2, 3, ...], with 1 being the highest priority, and "N" is the total number of traffic classes.
Using this equation, a lower priority packet has a higher chance of being deflected when the utilization value of the target port is higher than the threshold value. When choosing an alternate port to deflect to, in an embodiment the alternate port with lower utilization that also does not increase the hop count to the destination is considered first. Another important aspect is to ensure that even lower priority packets are eventually delivered to the destination, e.g., by applying an aging mechanism. For example, every time a packet is deflected as determined at operation 408, its priority is increased by a certain value at an operation 410 such that it will be less likely to be deflected at the next hop; the size of the priority increment can be a design parameter. At an operation 412, the packet is sent to a non-target port. As shown in Fig. 4, if the threshold is not met at operation 404 or the packet is not deflected at operation 408, the packet is sent to the target port at operation 414. Also, after operations 412 and 414, the method resumes at operation 402 to receive the next packet.
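To make the flow of Fig. 4 concrete, the following Python sketch ties the operations together: the threshold check (operation 404), the deflection probability from the equation above (operation 406), the probabilistic deflection decision (operation 408), priority aging (operation 410), and forwarding (operations 412/414). The Packet/Port types, the tie-breaking rule for choosing the alternate port, and the priority_step parameter are illustrative assumptions, not the claimed implementation.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Packet:
    dest: int
    priority: int          # n in the equation: 1 = highest, N = lowest
    deflections: int = 0

@dataclass
class Port:
    utilization: float                        # observed utilization in [0, 1]
    hops: dict = field(default_factory=dict)  # destination -> hop count via this port

def route_packet(packet, target_port, all_ports, a=1.0,
                 utilization_threshold=0.5, num_classes=4, priority_step=1):
    """Choose an output port for `packet` (operations 402-414 of Fig. 4)."""
    u = target_port.utilization
    # Operation 404: below the threshold, always take the minimal path.
    if u < utilization_threshold:
        return target_port
    # Operation 406: p = a x (utilization - threshold) x (n / N).
    p = a * (u - utilization_threshold) * (packet.priority / num_classes)
    # Operation 408: probabilistic deflection decision.
    alternates = [pt for pt in all_ports if pt is not target_port]
    if alternates and random.random() < p:
        # Prefer alternates that do not increase the hop count to the
        # destination, then pick the least utilized one (assumed tie-breaking).
        no_detour = [pt for pt in alternates
                     if pt.hops.get(packet.dest, float("inf"))
                        <= target_port.hops.get(packet.dest, 0)]
        alternate = min(no_detour or alternates, key=lambda pt: pt.utilization)
        # Operation 410: "increasing" the priority moves n toward 1, so the
        # packet is less likely to be deflected at the next hop (aging).
        packet.priority = max(1, packet.priority - priority_step)
        packet.deflections += 1
        return alternate   # operation 412: send to a non-target port
    return target_port     # operation 414: send to the target port
```

As a worked example under these assumptions, with a = 2, a target-port utilization of 0.8, a threshold of 0.5, and a priority-3 packet out of N = 4 classes, p = 2 x 0.3 x 0.75 = 0.45, so the packet is deflected a little under half the time; a priority-1 packet under the same load sees only p = 0.15.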
In one embodiment, to avoid or at least reduce the chance of live-lock, a "deflection counter" counts the number of deflections a packet encounters, and this counter value is taken into consideration when calculating the deflection probability discussed above. For example, the higher the value of the deflection counter, the lower the deflection probability. Also, even where all packets have the same priority (N=1), some embodiments still serve as an efficient and low-overhead approach to disperse congestion and achieve overall high throughput.
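The disclosure states only that a higher deflection-counter value should yield a lower deflection probability; one way that could be realized, assuming a simple damping term with an illustrative factor b, is sketched below.

```python
def deflection_probability(utilization, threshold, n, N, deflections,
                           a=1.0, b=0.5):
    # Base probability from the equation above, damped by the number of
    # deflections this packet has already experienced (live-lock avoidance).
    base = a * (utilization - threshold) * (n / N)
    return max(0.0, base / (1.0 + b * deflections))
```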
Furthermore, the target QoS level can be represented by assigning priority values to packets. These priority values could either have a user-level meaning or implementation (e.g., a user selected QoS level for a given application can be propagated down to the interconnect through the OS), a hardware-level meaning or implementation (e.g., the hardware gives control packets higher priority than data packets), or a combination of both.
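As a small illustration, the sketch below shows one possible way such user-level and hardware-level inputs could be combined into a single packet priority; the combination rule is an assumption for illustration and is not specified by the disclosure.

```python
def packet_priority(user_qos_level: int, is_control_packet: bool, N: int = 4) -> int:
    # User-level input: a QoS level selected for the application (1 = highest),
    # propagated down to the interconnect through the OS. Hardware-level input:
    # control packets are given higher priority (a lower n) than data packets.
    hw_bias = 1 if is_control_packet else 0
    return max(1, min(N, user_qos_level - hw_bias))
```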
Fig. 5 illustrates a block diagram of a computing system 500 in accordance with an embodiment of the invention. The computing system 500 includes one or more central processing unit(s) (CPUs) 502-1 through 502-N or processors (collectively referred to herein as "processors 502" or more generally "processor 502") that communicate via an interconnection network (or bus) 504. The processors 502 include a general purpose processor, a network processor (that processes data communicated over a computer network 503), or other types of processors (including a reduced instruction set computer (RISC) processor or a complex instruction set computer (CISC) processor). Moreover, the processors 502 have a single or multiple core design. The processors 502 with a multiple core design can integrate different types of processor cores on the same integrated circuit (IC) die. Also, the processors 502 with a multiple core design can be implemented as symmetrical or asymmetrical multiprocessors.
Also, the operations discussed with reference to Figs. 1-4 can be performed by one or more components of the system 500. In some embodiments, the processors 502 can be the same or similar to the processors 202-208 of Fig. 2. Furthermore, the processors 502 (or other components of the system 500) include one or more routing and switching logic 300. Moreover, even though Fig. 5 illustrates some locations for logic 300, these components can be located elsewhere in system 500. For example, I/O device(s) 124 communicate via bus 522 through logic 300. A chipset 506 also communicates with the interconnection network 504. The chipset 506 includes a graphics and memory controller hub (GMCH) 508. The GMCH 508 includes a memory controller 510 that communicates with a memory 512. The memory 512 stores data, including sequences of instructions that are executed by the CPU 502, or any other device included in the computing system 500. For example, the memory 512 stores data corresponding to an operating system (OS) 513 and/or a device driver 511 as discussed with reference to the previous figures. In an embodiment, the memory 512 and memory 140 of Fig. 1 can be the same or similar. In one embodiment of the invention, the memory 512 can include one or more volatile storage (or memory) devices such as random access memory (RAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), static RAM (SRAM), or other types of storage devices. Nonvolatile memory can also be utilized, such as a hard disk. Additional devices can communicate via the interconnection network 504, such as multiple CPUs and/or multiple system memories.
Additionally, in some embodiments, one or more of the processors 502 have access to one or more caches (which can include private and/or shared caches in various embodiments) and associated cache controllers (not shown). The cache(s) can adhere to one or more cache coherent protocols. The cache(s) store data (e.g., including instructions) that are utilized by one or more components of the system 500. For example, the cache locally caches data stored in a memory 512 for faster access by the components of the processors 502. In an embodiment, the cache (that can be shared) can include a mid-level cache and/or a last level cache (LLC). Also, each processor 502 includes a level 1 (L1) cache. Various components of the processors 502 communicate with the cache directly, through a bus or interconnection network, and/or a memory controller or hub.
The GMCH 508 also includes a graphics interface 514 that communicates with a display device 516, e.g., via a graphics accelerator. In one embodiment of the invention, the graphics interface 514 can communicate with the graphics accelerator via an accelerated graphics port (AGP). In an embodiment of the invention, the display 516 (such as a flat panel display) can communicate with the graphics interface 514 through, for example, a signal converter that translates a digital representation of an image stored in a storage device such as video memory or system memory into display signals that are interpreted and displayed by the display 516. The display signals produced by the display device pass through various control devices before being interpreted by and subsequently displayed on the display 516.
A hub interface 518 allows the GMCH 508 and an input/output control hub (ICH) 520 to communicate. The ICH 520 provides an interface to I/O devices that communicate with the computing system 500. The ICH 520 communicates with a bus 522 through a peripheral bridge (or controller) 524, such as a peripheral component interconnect (PCI) bridge, a universal serial bus (USB) controller, or other types of peripheral bridges or controllers. The bridge 524 provides a data path between the CPU 502 and peripheral devices. Other types of topologies can be utilized. Also, multiple buses can communicate with the ICH 520, e.g., through multiple bridges or controllers. Moreover, other peripherals in communication with the ICH 520 include, in various embodiments of the invention, integrated drive electronics (IDE) or small computer system interface (SCSI) hard drive(s), USB port(s), a keyboard, a mouse, parallel port(s), serial port(s), floppy disk drive(s), digital output support (e.g., digital video interface (DVI)), or other devices.
The bus 522 communicates with an audio device 526, one or more disk drive(s) 528, and a network interface device 530 (which is in communication with the computer network 503). Other devices communicate via the bus 522. Also, various components (such as the network interface device 530) can communicate with the GMCH 508 in some embodiments of the invention. In an embodiment, the processor 502 and one or more components of the GMCH 508 and/or chipset 506 are combined to form a single integrated circuit chip (or be otherwise present on the same integrated circuit die).
Furthermore, the computing system 500 can include volatile and/or nonvolatile memory (or storage). For example, nonvolatile memory includes one or more of the following: read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically EPROM (EEPROM), a disk drive (e.g., 528), a floppy disk, a compact disk ROM (CD-ROM), a digital versatile disk (DVD), flash memory, a magneto-optical disk, or other types of nonvolatile machine-readable media that are capable of storing electronic data (e.g., including instructions).
Fig. 6 illustrates a computing system 600 that is arranged in a point-to-point (PtP) configuration, according to an embodiment of the invention. In particular, Fig. 6 shows a system where processors, memory, and input/output devices are interconnected by a number of point-to- point interfaces. The operations discussed with reference to Figs. 1-5 are performed by one or more components of the system 600.
As illustrated in Fig. 6, the system 600 includes several processors, of which only two, processors 602 and 604, are shown for clarity. The processors 602 and 604 each include a local memory controller hub (MCH) 606 and 608 to enable communication with memories 610 and 612. The memories 610 and/or 612 store various data such as those discussed with reference to the memory 512 of Fig. 5. In an embodiment, the processors 602 and 604 also include the cache(s) discussed with reference to Fig. 5. In an embodiment, the processors 602 and 604 can be one of the processors 502 discussed with reference to Fig. 5. The processors 602 and 604 exchange data via a point-to-point (PtP) interface 614 using PtP interface circuits 616 and 618, respectively. Also, the processors 602 and 604 each exchange data with a chipset 620 via individual PtP interfaces 622 and 624 using point-to-point interface circuits 626, 628, 630, and 632. The chipset 620 further exchanges data with a high-performance graphics circuit 634 via a high-performance graphics interface 636, e.g., using a PtP interface circuit 637.
At least one embodiment of the invention is provided within the processors 602 and 604 or chipset 620. For example, the processors 602 and 604 and/or chipset 620 include one or more routing and switching logic 300. Other embodiments of the invention, however, can exist in other circuits, logic units, or devices within the system 600 of Fig. 6. Furthermore, other embodiments of the invention can be distributed throughout several circuits, logic units, or devices illustrated in Fig. 6. Hence, location of logic 300 shown in Fig. 6 is exemplary and such components may or may not be provided in the illustrated locations.
The chipset 620 communicates with a bus 640 using a PtP interface circuit 641. The bus 640 has one or more devices that communicate with it, such as a bus bridge 642 and I/O devices 643. Via a bus 644, the bus bridge 642 communicates with other devices such as a keyboard/mouse 645, communication devices 646 (such as modems, network interface devices, or other communication devices that communicate through the computer network 503), an audio I/O device, and/or a data storage device 648. The data storage device 648 stores code 649 that may be executed by the processors 602 and/or 604.
In various embodiments of the invention, the operations discussed herein, e.g., with reference to Figs. 1-6, can be implemented as hardware (e.g., circuitry), software, firmware, microcode, or combinations thereof, which may be provided as a computer program product, e.g., including a (e.g., non-transitory) machine-readable or (e.g., non-transitory) computer-readable medium having stored thereon instructions (or software procedures) used to program a computer to perform a process discussed herein. Also, the term "logic" may include, by way of example, software, hardware, or combinations of software and hardware. The machine-readable medium may include a storage device such as those discussed with respect to Figs. 1-6. Additionally, such computer-readable media may be downloaded as a computer program product, wherein the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals transmitted via a carrier wave or other propagation medium via a communication link (e.g., a bus, a modem, or a network connection).
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least an implementation. The appearances of the phrase "in one embodiment" in various places in the specification may or may not be all referring to the same embodiment.
Also, in the description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. In some embodiments of the invention, "connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements may not be in direct contact with each other, but may still cooperate or interact with each other.
Thus, although embodiments of the invention have been described in language specific to structural features and/or methodological acts, it is to be understood that claimed subject matter may not be limited to the specific features or acts described. Rather, the specific features and acts are disclosed as sample forms of implementing the claimed subject matter.

Claims

1. An apparatus comprising:
logic to determine whether a target port of a packet has reached a threshold utilization value; and
logic to route the packet to an alternate port in response to a deflection probability value to be determined based on a utilization value of the target port and a priority level value of the packet.
2. The apparatus of claim 1, wherein the target port corresponds to a minimal path for the packet to reach a destination of the packet.
3. The apparatus of claim 2, wherein the destination is to be included in a header of the packet.
4. The apparatus of claim 1, wherein the logic to route the packet is to select the alternate port based on one of: a utilization value of the alternate port and a number of hops to a destination of the packet.
5. The apparatus of claim 1, wherein the deflection probability value is to be determined based on a number of deflections of the packet.
6. The apparatus of claim 1, further comprising logic to modify the priority level value of the packet in response to a determination that the packet is to be routed to the alternate port.
7. The apparatus of claim 1, wherein the target port is to be coupled to a link to transmit the packet.
8. The apparatus of claim 7, wherein the link is to couple a first agent to a second agent, wherein the second agent is to comprise an input/output device.
9. The apparatus of claim 8, wherein the link is to comprise a point-to-point coherent interconnect.
10. The apparatus of claim 8, wherein the link is to couple a first agent to a second agent, wherein one or more of the first agent, the second agent, and a memory are on a same integrated circuit chip.
11. The apparatus of claim 8, wherein the link comprises a Peripheral Component Interconnect Express (PCIe) link.
12. The apparatus of claim 1, further comprising an input port to receive the packet over a link.
13. A method comprising:
determining whether a target port of a packet has reached a threshold utilization value; and
routing the packet to an alternate port in response to a deflection probability value to be determined based on a utilization value of the target port and a priority level value of the packet.
14. The method of claim 13, further comprising selecting the alternate port based on one of: a utilization value of the alternate port and a number of hops to a destination of the packet.
15. The method of claim 13, comprising determining the deflection probability value based on a number of prior deflections of the packet.
16. A computing system comprising:
routing and switching logic to be capable of coupling a first agent and a second agent via a link, the routing and switching logic to comprise:
logic to determine whether a target port of a packet in the routing and switching logic has reached a threshold utilization value; and
logic to route the packet to an alternate port of the routing and switching logic in response to a deflection probability value to be determined based on a utilization value of the target port and a priority level value of the packet.
17. The system of claim 16, wherein the logic to route the packet is to select the alternate port based on one of: a utilization value of the alternate port and a number of hops to a destination of the packet.
18. The system of claim 16, wherein the deflection probability value is to be determined based on a number of prior deflections of the packet.
19. A non-transitory computer-readable medium comprising one or more instructions that when executed on a processor configure the processor to perform one or more operations of claims 13 to 15.
20. An apparatus comprising:
utilization logic to determine a utilization metric associated with a first port; and routing logic configured to route a first packet associated with the first port based on a minimal path to the first port in response to the utilization metric being below a utilization threshold, and to deflect a second packet associated with the first port based on a minimal path to a second port in response to the utilization metric exceeding the utilization threshold and the second packet being associated with a priority level below a threshold priority level.
21. The apparatus of claim 20, further comprising control logic to increase the threshold priority level in response to the routing logic deflecting the second packet associated with the first port to the second port.
22. The apparatus of claim 20, wherein the first port and the second port are to be coupled via a link.
23. The apparatus of claim 22, wherein the link is to couple a first agent to a second agent, wherein the second agent is to comprise an input/output device.
24. The apparatus of claim 22, wherein the link is to comprise a point-to-point coherent interconnect.
25. The apparatus of claim 22, wherein the link is to couple a first agent to a second agent, wherein the first agent is to comprise a plurality of processor cores and one or more sockets.
PCT/US2013/047525 2012-09-29 2013-06-25 Adaptive packet deflection to achieve fair, low-cost, and/or energy-efficient quality of service in network on chip devices WO2014051778A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201380045432.8A CN104583992A (en) 2012-09-29 2013-06-25 Adaptive packet deflection to achieve fair, low-cost, and/or energy-efficient quality of service in network on chip devices
DE112013003733.5T DE112013003733B4 (en) 2012-09-29 2013-06-25 Adaptive packet rerouting to achieve reasonable, low-cost, and/or power-efficient network quality of service on chip devices

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/631,878 2012-09-29
US13/631,878 US20140092740A1 (en) 2012-09-29 2012-09-29 Adaptive packet deflection to achieve fair, low-cost, and/or energy-efficient quality of service in network on chip devices

Publications (1)

Publication Number Publication Date
WO2014051778A1

Family

ID=50385073

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/047525 WO2014051778A1 (en) 2012-09-29 2013-06-25 Adaptive packet deflection to achieve fair, low-cost, and/or energy-efficient quality of service in network on chip devices

Country Status (4)

Country Link
US (1) US20140092740A1 (en)
CN (1) CN104583992A (en)
DE (1) DE112013003733B4 (en)
WO (1) WO2014051778A1 (en)

Families Citing this family (65)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9244880B2 (en) 2012-08-30 2016-01-26 Netspeed Systems Automatic construction of deadlock free interconnects
US8885510B2 (en) 2012-10-09 2014-11-11 Netspeed Systems Heterogeneous channel capacities in an interconnect
US8601423B1 (en) 2012-10-23 2013-12-03 Netspeed Systems Asymmetric mesh NoC topologies
US9253085B2 (en) 2012-12-21 2016-02-02 Netspeed Systems Hierarchical asymmetric mesh with virtual routers
US9774498B2 (en) 2012-12-21 2017-09-26 Netspeed Systems Hierarchical asymmetric mesh with virtual routers
US9185026B2 (en) 2012-12-21 2015-11-10 Netspeed Systems Tagging and synchronization for fairness in NOC interconnects
US9007920B2 (en) 2013-01-18 2015-04-14 Netspeed Systems QoS in heterogeneous NoC by assigning weights to NoC node channels and using weighted arbitration at NoC nodes
US9009648B2 (en) 2013-01-18 2015-04-14 Netspeed Systems Automatic deadlock detection and avoidance in a system interconnect by capturing internal dependencies of IP cores using high level specification
US9130856B2 (en) 2013-01-28 2015-09-08 Netspeed Systems Creating multiple NoC layers for isolation or avoiding NoC traffic congestion
US8934377B2 (en) 2013-03-11 2015-01-13 Netspeed Systems Reconfigurable NoC for customizing traffic and optimizing performance after NoC synthesis
US9160627B2 (en) 2013-04-04 2015-10-13 Netspeed Systems Multiple heterogeneous NoC layers
US9185023B2 (en) 2013-05-03 2015-11-10 Netspeed Systems Heterogeneous SoC IP core placement in an interconnect to optimize latency and interconnect performance
US9571402B2 (en) 2013-05-03 2017-02-14 Netspeed Systems Congestion control and QoS in NoC by regulating the injection traffic
US10027433B2 (en) 2013-06-19 2018-07-17 Netspeed Systems Multiple clock domains in NoC
US9781043B2 (en) 2013-07-15 2017-10-03 Netspeed Systems Identification of internal dependencies within system components for evaluating potential protocol level deadlocks
US9471726B2 (en) 2013-07-25 2016-10-18 Netspeed Systems System level simulation in network on chip architecture
US9054977B2 (en) 2013-08-05 2015-06-09 Netspeed Systems Automatic NoC topology generation
US9473388B2 (en) 2013-08-07 2016-10-18 Netspeed Systems Supporting multicast in NOC interconnect
US9223711B2 (en) 2013-08-13 2015-12-29 Netspeed Systems Combining associativity and cuckoo hashing
US9294354B2 (en) * 2013-10-24 2016-03-22 Netspeed Systems Using multiple traffic profiles to design a network on chip
US9830265B2 (en) 2013-11-20 2017-11-28 Netspeed Systems, Inc. Reuse of directory entries for holding state information through use of multiple formats
US9158882B2 (en) 2013-12-19 2015-10-13 Netspeed Systems Automatic pipelining of NoC channels to meet timing and/or performance
US9699079B2 (en) 2013-12-30 2017-07-04 Netspeed Systems Streaming bridge design with host interfaces and network on chip (NoC) layers
US9473415B2 (en) 2014-02-20 2016-10-18 Netspeed Systems QoS in a system with end-to-end flow control and QoS aware buffer allocation
US9319232B2 (en) 2014-04-04 2016-04-19 Netspeed Systems Integrated NoC for performing data communication and NoC functions
US9762474B2 (en) 2014-04-07 2017-09-12 Netspeed Systems Systems and methods for selecting a router to connect a bridge in the network on chip (NoC)
US9244845B2 (en) 2014-05-12 2016-01-26 Netspeed Systems System and method for improving snoop performance
US9473359B2 (en) 2014-06-06 2016-10-18 Netspeed Systems Transactional traffic specification for network-on-chip design
US9535848B2 (en) 2014-06-18 2017-01-03 Netspeed Systems Using cuckoo movement for improved cache coherency
WO2016015223A1 (en) * 2014-07-29 2016-02-04 华为技术有限公司 Dynamic equilibrium method and apparatus for qos of i/o channel
US10528682B2 (en) 2014-09-04 2020-01-07 Netspeed Systems Automatic performance characterization of a network-on-chip (NOC) interconnect
US9742630B2 (en) 2014-09-22 2017-08-22 Netspeed Systems Configurable router for a network on chip (NoC)
US9477280B1 (en) 2014-09-24 2016-10-25 Netspeed Systems Specification for automatic power management of network-on-chip and system-on-chip
US10042404B2 (en) 2014-09-26 2018-08-07 Netspeed Systems Automatic generation of power management sequence in a SoC or NoC
US9571341B1 (en) 2014-10-01 2017-02-14 Netspeed Systems Clock gating for system-on-chip elements
US9529400B1 (en) 2014-10-29 2016-12-27 Netspeed Systems Automatic power domain and voltage domain assignment to system-on-chip agents and network-on-chip elements
US9660942B2 (en) 2015-02-03 2017-05-23 Netspeed Systems Automatic buffer sizing for optimal network-on-chip design
US9444702B1 (en) 2015-02-06 2016-09-13 Netspeed Systems System and method for visualization of NoC performance based on simulation output
US9928204B2 (en) 2015-02-12 2018-03-27 Netspeed Systems, Inc. Transaction expansion for NoC simulation and NoC design
US9568970B1 (en) 2015-02-12 2017-02-14 Netspeed Systems, Inc. Hardware and software enabled implementation of power profile management instructions in system on chip
US10348563B2 (en) 2015-02-18 2019-07-09 Netspeed Systems, Inc. System-on-chip (SoC) optimization through transformation and generation of a network-on-chip (NoC) topology
US10050843B2 (en) 2015-02-18 2018-08-14 Netspeed Systems Generation of network-on-chip layout based on user specified topological constraints
US9864728B2 (en) 2015-05-29 2018-01-09 Netspeed Systems, Inc. Automatic generation of physically aware aggregation/distribution networks
US9825809B2 (en) 2015-05-29 2017-11-21 Netspeed Systems Dynamically configuring store-and-forward channels and cut-through channels in a network-on-chip
US10218580B2 (en) 2015-06-18 2019-02-26 Netspeed Systems Generating physically aware network-on-chip design from a physical system-on-chip specification
SG10201600224SA (en) * 2016-01-12 2017-08-30 Huawei Int Pte Ltd Dedicated SSR pipeline stage of router for express traversal (EXTRA) NoC
US10452124B2 (en) 2016-09-12 2019-10-22 Netspeed Systems, Inc. Systems and methods for facilitating low power on a network-on-chip
US10462046B2 (en) 2016-11-09 2019-10-29 International Business Machines Corporation Routing of data in network
US20180159786A1 (en) 2016-12-02 2018-06-07 Netspeed Systems, Inc. Interface virtualization and fast path for network on chip
US10313269B2 (en) 2016-12-26 2019-06-04 Netspeed Systems, Inc. System and method for network on chip construction through machine learning
US10063496B2 (en) 2017-01-10 2018-08-28 Netspeed Systems Inc. Buffer sizing of a NoC through machine learning
US10084725B2 (en) 2017-01-11 2018-09-25 Netspeed Systems, Inc. Extracting features from a NoC for machine learning construction
US10469337B2 (en) 2017-02-01 2019-11-05 Netspeed Systems, Inc. Cost management against requirements for the generation of a NoC
US10298485B2 (en) 2017-02-06 2019-05-21 Netspeed Systems, Inc. Systems and methods for NoC construction
US10983910B2 (en) 2018-02-22 2021-04-20 Netspeed Systems, Inc. Bandwidth weighting mechanism based network-on-chip (NoC) configuration
US10547514B2 (en) 2018-02-22 2020-01-28 Netspeed Systems, Inc. Automatic crossbar generation and router connections for network-on-chip (NOC) topology generation
US10896476B2 (en) 2018-02-22 2021-01-19 Netspeed Systems, Inc. Repository of integration description of hardware intellectual property for NoC construction and SoC integration
US11144457B2 (en) 2018-02-22 2021-10-12 Netspeed Systems, Inc. Enhanced page locality in network-on-chip (NoC) architectures
US11023377B2 (en) 2018-02-23 2021-06-01 Netspeed Systems, Inc. Application mapping on hardened network-on-chip (NoC) of field-programmable gate array (FPGA)
US11176302B2 (en) 2018-02-23 2021-11-16 Netspeed Systems, Inc. System on chip (SoC) builder
CN109150717B (en) * 2018-07-04 2022-03-22 东南大学 Combined routing method for optimizing network-on-chip power consumption
CN109818882B (en) * 2019-02-22 2022-11-01 杭州迪普科技股份有限公司 Method and device for executing QoS strategy
US10860762B2 (en) 2019-07-11 2020-12-08 Intel Corporation Subsystem-based SoC integration
CN113542140B (en) * 2021-07-26 2023-04-07 合肥工业大学 Reconfigurable high-energy-efficiency router in wireless network-on-chip and power gating method
WO2023225890A1 (en) * 2022-05-25 2023-11-30 Qualcomm Incorporated Quality of service (qos) differentiation in user-plane procedures

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5602839A (en) * 1995-11-09 1997-02-11 International Business Machines Corporation Adaptive and dynamic message routing system for multinode wormhole networks
US6741552B1 (en) 1998-02-12 2004-05-25 Pmc Sierra International, Inc. Fault-tolerant, highly-scalable cell switching architecture
US6519062B1 (en) * 2000-02-29 2003-02-11 The Regents Of The University Of California Ultra-low latency multi-protocol optical routers for the next generation internet
US7103059B2 (en) * 2001-06-15 2006-09-05 Industrial Technology Research Institute Scalable 2-stage interconnections
US7254138B2 (en) 2002-02-11 2007-08-07 Optimum Communications Services, Inc. Transparent, look-up-free packet forwarding method for optimizing global network throughput based on real-time route status
US7383349B2 (en) * 2002-06-04 2008-06-03 Lucent Technologies Inc. Controlling the flow of packets within a network node utilizing random early detection
US8037224B2 (en) * 2002-10-08 2011-10-11 Netlogic Microsystems, Inc. Delegating network processor operations to star topology serial bus interfaces
US8730965B2 (en) * 2011-01-05 2014-05-20 Google Inc. Systems and methods for dynamic routing in a multiprocessor network using local congestion sensing

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040066743A1 (en) * 1998-12-15 2004-04-08 Kabushiki Kaisha Toshiba Packet switch and packet switching method using priority control based on congestion status within packet switch
US7876686B1 (en) * 2007-12-28 2011-01-25 Marvell International, Ltd. Message processing
US20090274049A1 (en) * 2008-05-02 2009-11-05 Realtek Semiconductor Corp. Non-blocked network system and packet arbitration method thereof
US20110026400A1 (en) * 2009-07-29 2011-02-03 Kalray Network on chip with quality of service

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ANDREAS G. SAVVA ET AL.: "Intelligent On/Off Dynamic Link Management for On-Chip Networks", JOURNAL OF ELECTRICAL AND COMPUTER ENGINEERING - SPECIAL ISSUE ON NETWORKS-ON-CHIP: ARCHITECTURES, DESIGN METHODOLOGIES, AND CASE STUDIES, vol. 2012, January 2012 (2012-01-01), Retrieved from the Internet <URL:http://www.hindawi.com/journals/jece/2012/107)21> *

Also Published As

Publication number Publication date
US20140092740A1 (en) 2014-04-03
DE112013003733T5 (en) 2015-05-21
CN104583992A (en) 2015-04-29
DE112013003733B4 (en) 2022-11-10

Similar Documents

Publication Publication Date Title
US20140092740A1 (en) Adaptive packet deflection to achieve fair, low-cost, and/or energy-efficient quality of service in network on chip devices
US9294403B2 (en) Mechanism to control resource utilization with adaptive routing
US10380059B2 (en) Control messaging in multislot link layer flit
KR101861312B1 (en) Control messaging in multislot link layer flit
US9043526B2 (en) Versatile lane configuration using a PCIe PIe-8 interface
US7643477B2 (en) Buffering data packets according to multiple flow control schemes
US10749811B2 (en) Interface virtualization and fast path for Network on Chip
US10206175B2 (en) Communications fabric with split paths for control and data packets
EP1779609B1 (en) Integrated circuit and method for packet switching control
JP2008546298A (en) Electronic device and communication resource allocation method
US20150188797A1 (en) Adaptive admission control for on die interconnect
US20240048489A1 (en) Dynamic fabric reaction for optimized collective communication
US20240073129A1 (en) Peer-to-peer communication between reconfigurable dataflow units
US20240070106A1 (en) Reconfigurable dataflow unit having remote fifo management functionality
US20240073136A1 (en) Reconfigurable dataflow unit with remote read/write functionality
US20240070111A1 (en) Reconfigurable dataflow unit with streaming write functionality
Heisswolf et al. Efficient memory access in 2D Mesh NoC architectures using high bandwidth routers
CN117716676A (en) Router architecture for multidimensional topologies in networks on chip and on packets
Bianco et al. Boosting the performance of PC-based software routers with FPGA-enhanced line cards

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 13842577; Country of ref document: EP; Kind code of ref document: A1
WWE Wipo information: entry into national phase
    Ref document number: 112013003733; Country of ref document: DE
    Ref document number: 1120130037335; Country of ref document: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 13842577; Country of ref document: EP; Kind code of ref document: A1