WO1997043843A1 - Method and apparatus for controlling the flow of data via an ethernet network - Google Patents

Method and apparatus for controlling the flow of data via an Ethernet network

Info

Publication number
WO1997043843A1
Authority
WO
WIPO (PCT)
Prior art keywords
nodes
data
hub
node
current node
Application number
PCT/US1997/008111
Other languages
French (fr)
Inventor
Kenneth N. Fujimoto
Original Assignee
Medialink Technologies Corporation
Application filed by Medialink Technologies Corporation
Publication of WO1997043843A1 publication Critical patent/WO1997043843A1/en

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 - Data switching networks
    • H04L 12/28 - Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/44 - Star or tree networks
    • H04L 12/40 - Bus networks
    • H04L 12/403 - Bus networks with centralised control, e.g. polling
    • H04L 12/407 - Bus networks with decentralised control
    • H04L 12/413 - Bus networks with decentralised control with random access, e.g. carrier-sense multiple-access with collision detection (CSMA-CD)

Abstract

The present invention controls the flow of data via an Ethernet network (10) having a plurality of nodes (12) connected to a centralized hub (20). The hub (20) grants access to one node at a time and waits for a 9.6 microsecond transmission interval for that node (12) to transmit a data packet to the hub (20). While waiting, the hub (20) sends traffic signals to each of the other nodes (12) in order to prevent them from transmitting a data packet to the hub (20) during the 9.6 microsecond transmission interval. If the hub (20) receives a data packet from the node (12) granted access during the 9.6 microsecond transmission interval, the hub (20) ceases transmitting the traffic signal (32) and immediately retransmits the data packet to the remaining nodes. The packet is spliced onto the end (36) of the traffic signals that were being sent to the other nodes when the data packet began. The hub (20) then grants access to a new node (12) and waits for a 4.5 microsecond spacing interval to expire before transmitting traffic signals to the remaining nodes. If the hub (20) does not receive a data packet from the node during the 9.6 microsecond interval, the hub (20) transmits a synchronization signal (34) to each node before granting access to a new node (12) and repeating the process upon expiration of the 4.5 microsecond spacing interval.

Description

METHOD AND APPARATUS FOR CONTROLLING THE FLOW OF DATA
VIA AN ETHERNET NETWORK
Field of the Invention This invention relates to Ethernet networks and, more particularly, to a method and apparatus for controlling the flow of data in an Ethernet network having a star configuration, i.e., a plurality of nodes connected to a centralized hub.
Background of the Invention Ethernet is one of the most popular local area networking standards. Although originally developed to implement a bus topology, in which all nodes are connected to one another by a single coaxial cable, Ethernet networks have shifted to a star configuration, wherein each node of the network is connected to a centralized hub by a twisted-pair cable. When the hub receives a transmission from one of the nodes, the hub repeats the transmission to each of the other nodes.
Ethernet relies upon a network access protocol known as Carrier Sense Multiple Access with Collision Detection ("CSMA/CD") to regulate communication via the network. Under the CSMA/CD protocol, each node contends for access to the network. In an Ethernet network having a star configuration, the CSMA/CD protocol requires that a node wait until the network has been quiet for a predetermined period of time (9.6 microseconds) before it begins to transmit a data packet via the network. The hub then retransmits the packet to all of the other nodes. If more than one node attempts to transmit a packet at the same time, a collision occurs and the CSMA/CD protocol requires the colliding nodes to wait a random back-off time interval before transmitting again. A node that then succeeds in sending a data packet to the hub without a collision has a statistically better chance of being allowed to send another data packet, because it will most likely calculate a random back-off time that is less than the back-off time calculated by any other node of the network. If that node successfully transmits another packet or two, it becomes progressively more difficult for any of the other nodes to transmit. Consequently, the transmitting node may become the most favored node to transmit another packet. Eventually, the node may "capture" the network for the transmission of about sixteen data packets, during which time the remaining nodes are denied access to the network. After this point, all nodes are again given equal odds of gaining access to the network. Under heavy use of the network, the capture effect leads to large backlogs of unsent packets and additional protocol computation overhead due to the retry processing of discarded packets. This retry overhead can lead to unbounded latency in packet throughput.
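The capture effect follows directly from the truncated binary exponential back-off that CSMA/CD uses to compute the random back-off interval. The sketch below is not part of the patent; it assumes the standard 10 Mbps Ethernet parameters (back-off range doubling up to 10 collisions, frame discard after 16 attempts) and simply illustrates why the node with the lower collision count usually wins the next contention.

```python
import random

def backoff_slots(collisions: int) -> int:
    """Truncated binary exponential back-off: wait a random number of slots
    drawn from [0, 2^min(collisions, 10) - 1]; give up after 16 attempts."""
    if collisions > 16:
        raise RuntimeError("frame discarded after 16 attempts")
    return random.randrange(2 ** min(collisions, 10))

# Node A just won the wire (1 collision on its new frame); node B is backlogged
# with 5 collisions, so B usually draws a longer back-off and A keeps winning.
trials = 10_000
a_wins = sum(backoff_slots(1) < backoff_slots(5) for _ in range(trials))
print(f"node A transmits first in about {100 * a_wins / trials:.0f}% of trials")
```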
It is precisely this problem that prevents Ethernet from being used to carry streaming multimedia data. The original Ethernet standard provided for baseband transmission at 10 megabits per second ("Mbps"). Conceptually, at a data rate of 10 Mbps, Ethernet should be able to carry all types of multimedia data, such as voice (64 Kbps), CD quality audio (1.4 Mbps) and MPEG video (1-2 Mbps). However, it has been observed that as the utilization of an Ethernet network rises, the packet delay across the network becomes increasingly variable. Consequently, delivery of the voice/audio/video may become unacceptably choppy.
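As a rough capacity check (simple arithmetic only, ignoring framing and protocol overhead; the 2 Mbps figure is the upper end of the MPEG range quoted above), raw bandwidth is not the bottleneck:

```python
LINK_MBPS = 10.0
stream_rates_mbps = {"voice": 0.064, "CD-quality audio": 1.4, "MPEG video": 2.0}

for name, rate in stream_rates_mbps.items():
    print(f"{name}: roughly {int(LINK_MBPS // rate)} simultaneous streams")
# voice: roughly 156, CD-quality audio: roughly 7, MPEG video: roughly 5
```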
To solve the foregoing and other shortcomings of the Ethernet network, a method and apparatus are needed that can provide predictable packet latencies so that Ethernet can be used as a viable network for transmitting multimedia data. Furthermore, the method and apparatus should be implemented in a centralized location, such as the hub interconnecting the nodes, so that it is not necessary to modify or change existing network adapters, network cards, or driver software in any of the connected nodes. In addition, the method and apparatus should ensure fair access to the network and reduce the "capture effect." Finally, the method and apparatus should eliminate collisions between data packets, which cause ineffective bandwidth utilization. As explained in the following, the present invention provides a method and apparatus that meet these criteria and solve other problems in the prior art.
Summary of the Invention In accordance with the present invention, a method and apparatus are provided for controlling the flow of data in an Ethernet network having a plurality of nodes connected to a centralized hub. Only one node is selected to transmit data via the Ethernet network at a time. During a first predetermined time interval, a traffic signal is transmitted by the hub to all nodes except the selected node so as to prevent those nodes from transmitting data to the hub. If the selected node transmits data to the hub during the first predetermined time interval, the hub retransmits the data to each node except the selected node. However, if the selected node does not transmit data to the hub during the first predetermined time interval, the hub transmits a synchronization signal to each node including the selected node so as to synchronize the signals being received by each of the nodes. A new node is then selected to transmit data and the process is repeated upon the expiration of a second predetermined time interval.
In accordance with the other aspects of this invention, when the hub receives the data from the selected node and retransmits the data to each of the nodes, the hub ceases transmitting the traffic signal to each of the other nodes and transmits the data to each of the other nodes immediately following the traffic signal such that the data is "spliced" onto the traffic signal.
In accordance with still further aspects of this invention, the node selected to transmit data may be selected as a function of network bandwidth, time, or the number of ports included in the hub. Brief Description of the Drawings
The foregoing aspects and many of the attendant advantages of this invention will become more readily appreciated as the same becomes better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein: FIGURE 1 is a pictorial diagram of a plurality of computers, i.e., nodes, connected in a star Ethernet network to a centralized repeater hub implementing the present invention;
FIGURE 2 is a block diagram of the hub shown in FIGURE 1; FIGURES 3A and 3B are a flow diagram of the method of controlling the flow of data between the plurality of nodes connected to the hub shown in FIGURE 1; and
FIGURES 4A and 4B are waveform diagrams of the data transmissions into and out of the hub shown in FIGURE 1.
Detailed Description of the Preferred Embodiment FIGURE 1 illustrates an Ethernet network 10 in a star configuration that connects a plurality of personal computers 12 to a hub 20. Each of the computers connected to the star network 10 is hereinafter referred to as one of nodes 0 through 3. As will be appreciated by those familiar with networked computer systems, depending on system and hub capacity, any number of computers (i.e., nodes) could be connected via the star network 10. In addition, any type of computer, including but not limited to portable computers, personal digital assistants, etc., that is equipped with the necessary interface hardware may be connected to the star network 10. In addition to, or perhaps instead of, the computers 12, other electronic devices may also be connected to the star network 10, if equipped with the necessary interface hardware. Suitable electronic devices may include video cameras, speakers, television sets, telephones, lamps, etc.
The present invention provides a method and apparatus that control the flow of data via the Ethernet network 10. Specifically, the present invention determines which node will be granted access to the network so that it may transmit a data packet to the hub 20, and when the hub will repeat that data packet to the remaining nodes within the confines of the Ethernet network standard. In the actual embodiment of the present invention shown in FIGURES 1 and 2, the hub 20 interconnecting the nodes 0, 1, 2 and 3 is a repeater hub. Consequently, when the hub 20 receives a data packet from any one of the nodes 0, 1, 2 or 3, the hub retransmits or "repeats" the data packet to each of the other nodes. For example, if node 1 transmits a data packet to the hub 20, the hub will repeat the data packet to each of the remaining nodes 0, 2 and 3.
FIGURE 2 illustrates in greater detail the hub 20 shown in FIGURE 1. The hub 20 includes a microprocessor 14 and a data loopback and preamble generation state machine 16 that control the flow of data packets via the network 10, as will be described in more detail below. In general, the microprocessor determines which one of the nodes will be granted access to the network, determines the time at which a received data packet will be sent to the other nodes, and selects which nodes will be sent the data packet. The state machine 16, on the other hand, is a collection of flip-flops and NAND, AND and OR gates that generates a series of signals, based on instructions received from the microprocessor, that control the flow of data packets from the node granted access to the other nodes. From the following description, one of ordinary skill in the art will recognize that the circuitry of the state machine 16 can take on various configurations appropriate for implementing the present invention. A description of the specific state machine 16 circuitry is not provided because such a description is not necessary to an understanding of the present invention and may be unduly limiting. The hub 20 receives data packets from the nodes 12 via a plurality of ports 18. More specifically, each node 0, 1, 2 and 3 transmits and receives data through ports 0, 1, 2 and 3, respectively. The hub also includes a plurality of decoders 22, designated as decoders 0, 1, 2 and 3 and corresponding to ports 0, 1, 2 and 3, respectively. The decoders decode the data packet transmitted by the corresponding nodes via the corresponding ports into a receive data signal, a receive clock signal, and a receive carrier signal. These signals are then provided to a selector 30 that selects the signals to be processed, i.e., selects which node will be granted access to the network 10, based on a control signal received from the microprocessor 14. The selected receive data, receive clock, and receive carrier signals are then provided by the selector 30 to the state machine 16.
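A minimal sketch of the receive path just described, with the decoders feeding a selector that forwards only the chosen port's signals to the state machine; the class and function names are illustrative and are not taken from the patent:

```python
from typing import NamedTuple, Sequence

class DecodedSignals(NamedTuple):
    rx_data: int       # receive data signal
    rx_clock: int      # receive clock signal
    rx_carrier: bool   # receive carrier (asserted while the node is transmitting)

def selector(decoded_ports: Sequence[DecodedSignals], port_select: int) -> DecodedSignals:
    """Mirror of selector 30: forward only the signals of the port chosen by the
    microprocessor's control signal to the state machine; other ports are ignored."""
    return decoded_ports[port_select]

# Example: the microprocessor has granted access to node 2.
ports = [DecodedSignals(0, 0, False)] * 4
print(selector(ports, port_select=2))
```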
The hub 20 also includes a demultiplexer 24 and a plurality of encoders 26, designated as encoders 0, 1, 2 and 3, corresponding to ports 0, 1, 2 and 3 and, thus, nodes 0, 1, 2 and 3, respectively. Upon receipt of a transmit control signal from the microprocessor 14, the state machine 16 sends the demultiplexer a transmit enable signal and sends the encoders a transmit data signal and a transmit clock signal. Based on a port select signal received from the microprocessor 14, the demultiplexer 24 supplies the transmit enable signal to the encoders that are to encode the transmit data signal using the transmit clock signal. As noted above, the hub transmits received data to all of the nodes except the one sending the data to the hub. Thus, all of the encoders except one are enabled by the transmit enable signal. The demultiplexer 24 in essence decides which encoder is not to be enabled, based on the port select signal and the transmit enable signal. The enabled encoders then encode the data signal and send the encoded data signal to the corresponding nodes via the corresponding ports.
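On the transmit side, the demultiplexer's decision reduces to a simple mask over the ports. A sketch, assuming the four-port hub of FIGURE 2 (the function name is hypothetical):

```python
def encoder_enables(port_select: int, transmit_enable: bool, num_ports: int = 4) -> list:
    """One enable flag per encoder: when transmission is enabled, every encoder
    is driven except the one attached to the selected (transmitting) port."""
    return [transmit_enable and port != port_select for port in range(num_ports)]

print(encoder_enables(port_select=1, transmit_enable=True))   # [True, False, True, True]
print(encoder_enables(port_select=1, transmit_enable=False))  # [False, False, False, False]
```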
The flow of data through the Ethernet network 10 via the hub 20 is controlled in accordance with the logic illustrated in FIGURES 3A and 3B. Essentially, the hub 20 grants each node access to the network 10 in a round-robin fashion, i.e., the hub 20 only allows one node to transmit a data packet via the network at a time. Consequently, no node becomes favored and the "capture effect" is eliminated. Further, because only one node is allowed to transmit data at a time, packet collision is avoided and the distribution of backlogged packets is evenly spread out over the network. The overall result is predictable packet latencies over the network.
After startup in a block 100, a current node variable used to identify the node being granted access by the hub is initialized to zero at a block 102 of FIGURE 3A. Those of ordinary skill in the art will appreciate that, since the hub grants access to the network one node at a time, it is immaterial which node is selected first, as long as each node is ultimately granted access in a timely fashion. Next, at a block 104, a number of ports counter that is used to keep track of the number of ports 18 in the hub 20 is initialized. In the illustrated embodiment, the number of ports counter is set equal to 4 because there are four ports 0, 1, 2 and 3, one for each node 0, 1, 2 and 3. As will be described more fully below, the number of ports counter is used to determine the next node that will be granted access to the network 10.
Next, at a block 106, the hub 20 waits for a predetermined spacing interval, for example, 4.5 microseconds (identified as interval "B" in FIGURES 4A and 4B), to expire. Then, in a block 108, a timer is set to zero and started. As the timer runs, the logic proceeds to a decision block 110 where a test is made to determine if the hub 20 is receiving a data packet from the current node via the current port. If the result of decision block 110 is negative, the logic proceeds to a block 116 where the hub 20 begins transmitting a traffic signal 32 to all of the nodes connected to the hub, except the current node. As illustrated, the traffic signal is preferably comprised of alternating 1s and 0s, created by changing the state of a bit between zero and one each time a pass is made through block 116. As shown in FIGURES 4A and 4B, sending all nodes, except the current node, a traffic signal 32 prevents the other nodes from transmitting a data packet to the hub. The lack of a traffic signal being sent to the current node creates a gap (identified as interval "A" in FIGURE 4A) during which the current node can transmit a data packet to the hub.
In a decision block 118, the logic determines if a 9.6 microsecond transmission interval has passed. This is determined by testing the timer (started in block 108) to determine if the timer value is greater than or equal to 9.6 microseconds. If the transmission interval has not passed, the logic returns to decision block 110. If the result of decision block 110 is positive, meaning that the hub 20 is receiving a data packet from the current node, the logic proceeds to block 112. At block 112, the hub 20 transmits the received data packet to all of the nodes except the current node. Next, at a decision block 114, the logic determines if the hub has finished receiving the data packet from the current node via the current port. If the result is negative, the logic returns to block 112. Blocks 112 and 114 are repeated until the hub has finished receiving the data packet from the current node. Then the logic proceeds to decision block 118 where it determines if the 9.6 microsecond transmission interval has passed. It will be appreciated that the transmission interval may have long been exceeded, depending on the size of the data packet transmitted by the current node and the amount of time blocks 112 and 114 are repeated.
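Read as pseudocode, blocks 106-118 amount to the loop below. This is only one way to render the flow diagram in Python; timing granularity and signal encoding are abstracted away, and the helper names (`wait`, `start_timer`, `timer_us`, `receiving_from`, `repeat_to_others`, `send_bit_to_others`) are hypothetical.

```python
SPACING_US = 4.5          # interval "B" in FIGURES 4A and 4B
TRANSMIT_WINDOW_US = 9.6  # transmission interval checked in block 118

def transmission_window(hub, current_node) -> bool:
    """Blocks 106-118: give the current node a 9.6 microsecond window while
    holding the other nodes off with an alternating traffic signal."""
    hub.wait(SPACING_US)              # block 106: spacing interval
    hub.start_timer()                 # block 108
    traffic_bit, got_data = 0, False
    while True:
        if hub.receiving_from(current_node):             # block 110
            got_data = True
            while hub.receiving_from(current_node):      # blocks 112-114
                hub.repeat_to_others(current_node)       # data spliced after the traffic bits
        else:
            hub.send_bit_to_others(current_node, traffic_bit)  # block 116: traffic signal
            traffic_bit ^= 1                              # alternate 1s and 0s
        if hub.timer_us() >= TRANSMIT_WINDOW_US:          # block 118
            return got_data
```

As in the text, the loop can run past 9.6 microseconds if a long packet is still being repeated when the timer expires.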
It will be appreciated from the foregoing description and from FIGURE 4A that if the hub 20 has already begun to send traffic signals 32 to the remaining nodes when data is received from the current node, the hub will cease sending the traffic signal and transmit the data packet to the remaining nodes immediately following the bits of the traffic signal already sent, such that the packet appears "spliced" onto a portion 36 of the traffic signal as shown in FIGURE 4B. Under the Ethernet network standard, each data packet contains a preamble of at least fifty-six bits of alternating 1s and 0s. Since there is no maximum specification for the length of the preamble under the Ethernet standard, the extra alternating bits spliced onto the transmitted data packet are merely treated by the Ethernet network 10 as part of the preamble. Hence, the present invention seamlessly interfaces with the Ethernet network, and there is no need to modify or change the existing Ethernet network adapters, network cards or driver software in any of the connected nodes.
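The splice works because nothing in the receiving node can tell an extra alternating traffic bit from a preamble bit. A toy bit-level model follows; it ignores Manchester encoding and start-frame-delimiter detail, and it simply assumes the hub keeps the alternation intact at the seam.

```python
def wire_as_seen_by_receiver(traffic_bits_sent: int, preamble_bits: int = 56) -> list:
    """The partial traffic signal already sent, followed by the packet's own
    alternating preamble continuing the same 1/0 pattern."""
    traffic = [i & 1 for i in range(traffic_bits_sent)]       # 0,1,0,1,...
    first = (traffic[-1] ^ 1) if traffic else 0               # keep alternating at the seam
    preamble = [(first + i) & 1 for i in range(preamble_bits)]
    return traffic + preamble

frame_start = wire_as_seen_by_receiver(traffic_bits_sent=10)
assert all(a != b for a, b in zip(frame_start, frame_start[1:]))
print(f"receiver simply sees a {len(frame_start)}-bit preamble instead of 56 bits")
```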
Blocks 110-118 are repeated until the 9.6 microsecond transmission interval has expired, or longer if the retransmission of data received from the current node exceeds the transmission interval. As shown in FIGURE 3B, once the 9.6 microsecond transmission interval has expired (or been exceeded), the logic proceeds to a decision block 120 where a test is made to determine if the hub 20 received a data packet from the current node via the current port during the period of time the logic was cycling through blocks 106-118. If the result is negative, it will be appreciated that a traffic signal was transmitted to all of the nodes except the current node for exactly 9.6 microseconds. Further, no data was received by the hub from the current node for a total of 14.1 microseconds (4.5 microseconds plus 9.6 microseconds). Before the hub grants access to another node and repeats the process described above, beginning with the initiation of the 4.5 microsecond spacing interval, the hub must synchronize all of the nodes. This is done to ensure that all of the nodes begin the 4.5 microsecond spacing interval at the same point in time.
Thus, if the result of decision block 120 is negative, the logic proceeds to a block 122 where the timer is reset to 0. Next, at a block 124, the hub begins transmitting a synchronization signal 34 to all of the nodes, including the current node. As illustrated, the synchronization signal is preferably created by changing the state of a bit between zero and one each time a pass is made through this part of the logic. With respect to the current node, this first bit marks the end of the 9.6 microsecond interval during which the hub 20 gave the current node the opportunity to transmit a data packet.
Next, at a decision block 126, the logic determines if the hub 20 is now receiving a data packet from the current node via the current port. If the result is positive, the logic proceeds to block 128 where the alternate bit is sent to all nodes, e.g., if the bit sent at block 124 was a one, the bit sent at block 128 is a zero and vice versa. Blocks 126 and 128 are repeated as long as the hub continues to receive a data packet from the current node. Hence, rather than recognizing and retransmitting the received data packet to all the other nodes, the hub ignores the received data packet and continues sending the synchronization signal 34 comprising alternating 1s and 0s to all of the nodes, including the current node.
Next, at a decision block 130, the logic determines if a predetermined (preferably 3.2 microsecond) synchronization interval has passed, i.e., the timer is tested to determine if its value is greater than or equal to 3.2 microseconds. If the result is negative, the logic returns to block 124 where the hub 20 sends the opposite (alternate) bit to all of the ports. Blocks 124-130 are repeated until the 3.2 microsecond synchronization interval has expired. As shown in FIGURES 4A and 4B, sending a synchronization signal 34 to each of the ports at the end of the transmission interval results in all of the nodes being synchronized before the initiation of the next 4.5 microsecond spacing interval. More specifically, if the current node does not transmit a data packet to the hub during the 9.6 microsecond transmission interval set by the timer started in block 108 (after the original 9.6 microsecond traffic signal 32 is sent to all of the nodes other than the current node), the traffic signal essentially is extended another 3.2 microseconds to end at the same time as the 3.2 microsecond synchronization signal 34 transmitted to the current node. However, if the hub receives a data packet from the current node during the 3.2 microsecond interval, the hub ignores the incoming data and continues to send the synchronization signal 34 (comprised of alternating bits) to all of the nodes, including the current node, until the entire data packet has been received by the hub. Necessarily, the synchronization signal transmitted to all of the nodes will end at the same time, even though the 3.2 microsecond synchronization interval may have since been exceeded.
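Blocks 120-130 reduce to a second small loop in the same style as the earlier sketch (again with hypothetical helpers `start_timer`, `timer_us`, `send_bit_to_all`, `receiving_from`):

```python
SYNC_WINDOW_US = 3.2   # synchronization interval checked in block 130

def synchronization_window(hub, current_node) -> None:
    """Blocks 120-130: when the current node sent nothing, realign all of the
    nodes by sending the same alternating bit stream to every port."""
    hub.start_timer()                       # block 122: reset the timer
    sync_bit = 0
    while True:
        hub.send_bit_to_all(sync_bit)       # blocks 124 and 128
        sync_bit ^= 1
        # A late packet from the current node is ignored, but the synchronization
        # signal keeps running until that packet has been received in full.
        if hub.timer_us() >= SYNC_WINDOW_US and not hub.receiving_from(current_node):
            return                          # block 130 satisfied; all ports end together
```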
After the 3.2 microsecond synchronization interval has elapsed, or if the result of decision block 120 is positive, meaning that the hub 20 received a data packet from the current node before the 9.6 microsecond timer timed out, the logic proceeds to block 132 where the next node to be granted access is determined. Preferably, this is accomplished by setting the current node variable that identifies the node to be granted access equal to the remainder of the current node variable incremented by one and divided by the number of ports (Current Node = (Current Node + 1) MOD Number of Ports in block 132). Such a function results in nodes being granted access to the network in a defined order. The logic then returns to block 106 where the hub waits for a 4.5 microsecond spacing interval to expire. Blocks 106-132 are continuously repeated as the hub grants network access to each node, one node at a time.
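The rotation in block 132 is ordinary modular arithmetic, and the whole scheduler is the composition of the two loops sketched above (illustrative only; `transmission_window` and `synchronization_window` refer to the earlier sketches):

```python
def next_node(current_node: int, num_ports: int) -> int:
    """Block 132: Current Node = (Current Node + 1) MOD Number of Ports."""
    return (current_node + 1) % num_ports

def run_hub(hub, num_ports: int = 4) -> None:
    current_node = 0                                          # blocks 100-104
    while True:
        sent_data = transmission_window(hub, current_node)    # blocks 106-118
        if not sent_data:
            synchronization_window(hub, current_node)         # blocks 120-130
        current_node = next_node(current_node, num_ports)     # block 132
```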
As will be appreciated by those of ordinary skill in the art, after the hub receives a data packet, it repeats the data packet to each of the other nodes simultaneously. The data signals comprising the data packet received by each node and the data signals comprising the data packet transmitted by the current node end at the same time. Therefore, it is not necessary to resynchronize the initiation of the next 4.5 microsecond spacing interval. FIGURE 4B is an exemplary diagram that first depicts the traffic and data signals received and transmitted by the hub 20 when node 2 is granted access to the network 10 and transmits a data packet 38 to the hub via input port 2 following the expiration of the 4.5 microsecond spacing interval. The data packet 38 is repeated to the remaining nodes 0, 1, and 3 via ports 0, 1, and 3, respectively, spliced to the portion 36 of the traffic signals that occurred before the hub first received the data packet from node 2. Upon completion of the data packet, the hub grants access to the network 10 to node 3, connected to the hub via port 3. The hub then waits for another 4.5 microsecond spacing interval to expire and sends a traffic signal 32 comprising alternating 1s and 0s to each of the other nodes, namely, nodes 0, 1, and 2, via their corresponding ports. Since node 3 does not transmit a data packet during the next 9.6 microsecond transmission interval, the transmission of signals to each of the nodes is resynchronized by a synchronization signal 34 comprised of alternating 1s and 0s that is sent to all of the nodes by the hub for the next 3.2 microseconds. Once synchronized, the hub grants access to a new node, another 4.5 microsecond spacing interval begins, and so on.
While an actual embodiment of this invention has been illustrated and described, it will be appreciated that various changes can be made therein without departing from the spirit and scope of the invention. Rather than granting nodes access to the network in a defined, round-robin fashion, other grant ordering schemes can be implemented. For example, a timing scheme could be implemented that assigns a timer to each node of the hub. In this scheme, the hub would grant access in round-robin fashion to all of the nodes whose timers have timed out. If no node had timed out, the hub would grant access to each node in turn. The timers could also be adapted such that if a node sent data on every grant, its time-out would decrease. If the node did not send a packet on two consecutive grants, the time-out for that node would increase. Consequently, nodes that required more bandwidth would be granted more, as sketched below. The hub could also be configured so as to provide guaranteed bandwidth to each node. Further, although the 9.6 microsecond transmission interval, the 3.2 microsecond synchronization interval and the 4.5 microsecond spacing interval have been calibrated to provide a smooth interface with the Ethernet standard, those of ordinary skill in the art will recognize that under some circumstances it may be desirable to vary or modify the length of these time intervals. Accordingly, it is not intended that the scope of the invention be limited by the disclosure of the actual embodiment described above. Instead, the scope of the invention should be determined entirely by reference to the claims that follow.
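One way to read the adaptive per-node timer variant described above is sketched here; the concrete time-out values and the halving/doubling policy are purely illustrative assumptions, since the text gives no numbers.

```python
class NodeGrantTimer:
    """Per-node time-out for the alternative scheme: nodes that use their grants
    time out sooner (and so are offered the network more often); nodes that skip
    two consecutive grants time out later."""

    def __init__(self, timeout_us: float = 100.0):
        self.timeout_us = timeout_us   # illustrative starting value
        self.idle_grants = 0

    def after_grant(self, sent_data: bool) -> None:
        if sent_data:
            self.idle_grants = 0
            self.timeout_us = max(10.0, self.timeout_us / 2)        # assumed: halve, floor at 10 us
        else:
            self.idle_grants += 1
            if self.idle_grants >= 2:                               # two consecutive idle grants
                self.timeout_us = min(1000.0, self.timeout_us * 2)  # assumed: double, cap at 1 ms
```

The hub would then grant access round-robin among the nodes whose timers have expired, falling back to plain round-robin when none have.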

Claims

The embodiments of the invention in which an exclusive property or privilege is claimed are defined as follows:
1. A method for controlling the flow of data between a plurality of nodes connected to a centralized hub comprising:
(a) selecting one of the plurality of nodes to transmit data to the hub;
(b) during a first predetermined time interval, transmitting a traffic signal to all nodes other than the selected node so as to prevent all nodes other than the selected node from transmitting data to the hub;
(c) if the selected node transmits data to the hub during the first predetermined time interval or after the transmission of data to all nodes other than the selected node, transmitting the data to all nodes other than the selected node;
(d) if the selected node does not transmit data to the hub during the first predetermined time interval, transmitting a synchronization signal to each node including the selected node; and
(e) repeating (a)-(d) upon expiration of a second predetermined time interval.
2. The method of Claim 1, wherein transmitting the data to all nodes other than the selected node further comprises ceasing transmission of the traffic signal to all nodes other than the selected node and transmitting the data to all nodes other than the selected node.
3. The method of Claim 1, wherein the synchronization signal is sent to all nodes during a third predetermined time interval such that the synchronization signal ends simultaneously for all nodes.
4. The method of Claim 3, wherein if the selected node transmits data to the hub during the third predetermined time interval, the synchronization signal is sent to all nodes as long as the selected node transmits the data to the hub.
5. An apparatus connecting a plurality of nodes communicating with one another via an Ethernet network, for controlling the flow of data via the Ethernet network, the apparatus comprising:
(a) a plurality of ports, each port receiving data from and transmitting data to one of the nodes; (b) a state machine for generating and transmitting signals to all of the nodes via the ports; and
(c) a processing unit for selecting one of the plurality of nodes to transmit data via the Ethernet network at a time, and upon selection of the node, the processing unit causing the state machine to:
(i) during a first predetermined time interval, transmit a traffic signal to all nodes other than the selected node so as to prevent all nodes other than the selected node from transmitting data via the Ethernet network;
(ii) if the selected node transmits data via the Ethernet network during the first predetermined time interval, transmit the data to all nodes other than the selected node;
(iii) if the selected node does not transmit data via the Ethernet network during the first predetermined time interval or after the transmission of data to all nodes other than the selected node, transmit a synchronization signal to all nodes including the selected node; and
(iv) repeat (i)-(iii) upon expiration of a second predetermined time interval and selection by the processing unit of a new node to transmit data.
6. The apparatus of Claim 5, wherein the state machine transmits the data to all nodes other than the selected node following transmission of the traffic signal.
7. The apparatus of Claim 5, wherein the state machine transmits the synchronization signal to all nodes during a third predetermined time interval such that the synchronization signal ends simultaneously for all nodes.
8. The apparatus of Claim 7, wherein if the selected node transmits data via the network during the third predetermined time interval, the state machine sends the synchronization signal to all nodes instead of the data.
9. The apparatus of Claim 5, wherein the processing unit selects one of the plurality of nodes to transmit data as a function of the number of ports receiving and transmitting data.
10. A method of controlling a flow of data via an Ethernet network having a plurality of nodes connected to a centralized hub, wherein all nodes send data to the other nodes through the hub, the method comprising:
(a) granting access to the Ethernet network to a current node;
(b) creating a gap of time for the current node to transmit data to the hub by transmitting a traffic signal to all nodes except the current node so as to prevent all nodes except the current node from transmitting data to the hub;
(c) if the current node transmits data to the hub during the gap of time, transmitting the data to all nodes except the current node;
(d) if the current node does not transmit data to the hub during the gap of time, transmitting a synchronization signal to all nodes including the current node; and
(e) repeating (a)-(d) upon expiration of a spacing time interval.
11. The method of Claim 10, wherein transmitting the data to all nodes except the current node further comprises ceasing transmission of the traffic signal to all nodes except the current node and transmitting the data to all nodes except the current node immediately following the traffic signal such that the traffic signal is treated by the Ethernet network as a preamble to the data being transmitted.
12. The method of Claim 11, wherein the synchronization signal is sent to all nodes during a synchronization time interval such that the synchronization signal ends simultaneously for all nodes.
13. The method of Claim 12, wherein if the current node transmits data to the hub during the synchronization time interval, the synchronization signal is sent to all nodes instead of the data.
14. The method of Claim 10, wherein the current node is granted access to the network as a function of a bandwidth of the Ethernet network.
15. The method of Claim 10, wherein the current node is granted access to the Ethernet network as a function of a defined order.
16. The method of Claim 10, wherein the current node is granted access to the Ethernet network as a function of time.
17. A method of controlling a flow of data via an Ethernet network having a plurality of nodes connected to a centralized hub, wherein all nodes send data to the other nodes through the hub, the method comprising:
(a) granting access to the Ethernet network to a current node;
(b) creating a gap of time for the current node to transmit data to the hub by transmitting a traffic signal to all nodes except the current node so as to prevent all nodes except the current node from transmitting data to the hub;
(c) if the current node transmits data to the hub during the gap of time:
(i) ceasing transmission of the traffic signal to all nodes except the current node such that a fragment of the traffic signal is transmitted to all nodes except the current node; and
(ii) transmitting the data to all nodes except the current node such that the fragment of the traffic signal precedes the data transmitted, the fragment being treated by the Ethernet network as a portion of a preamble to the data;
(d) if the current node does not transmit data to the hub during the gap of time or after the transmission of data to all nodes except the current node, transmitting a synchronization signal to all nodes including the current node; and
(e) repeating (a)-(d) upon expiration of a spacing time interval.
18. The method of Claim 17, wherein the synchronization signal is sent to all nodes during a synchronization time interval such that the synchronization signal ends simultaneously for all nodes.
19. The method of Claim 18, wherein if the current node transmits data to the hub during the synchronization time interval, the synchronization signal is sent to all nodes instead of the data.
20. The method of Claim 17, wherein the current node is granted access to the Ethernet network as a function of a defined order.
21. The method of Claim 17, wherein the current node is granted access to the Ethernet network as a function of time.
22. The method of Claim 17, wherein the current node is granted access to the network as a function of a bandwidth of the Ethernet network.
PCT/US1997/008111 1996-05-15 1997-05-14 Method and apparatus for controlling the flow of data via an ethernet network WO1997043843A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US64855796A 1996-05-15 1996-05-15
US08/648,557 1996-05-15

Publications (1)

Publication Number Publication Date
WO1997043843A1 (en)

Family

ID=24601285

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1997/008111 WO1997043843A1 (en) 1996-05-15 1997-05-14 Method and apparatus for controlling the flow of data via an ethernet network

Country Status (1)

Country Link
WO (1) WO1997043843A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19835376A1 (en) * 1998-08-05 2000-02-10 Abb Research Ltd Device operating method for carrier sense multiple access network provides busy signal for all devices except one which is free to transmit signals, with cyclic selection of each device for signal transmission
WO2002054680A1 (en) * 2000-12-28 2002-07-11 Lanxpress Plc Control device for communications network
WO2003077479A1 (en) * 2002-03-14 2003-09-18 Wolfram Kress Method for the multi-directional exchange of data sets
CN114244773A (en) * 2020-09-09 2022-03-25 英业达科技有限公司 Packet processing system and packet processing method thereof

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0123507A1 (en) * 1983-04-21 1984-10-31 International Computers Limited Data communication system and apparatus
US5355375A (en) * 1993-03-18 1994-10-11 Network Systems Corporation Hub controller for providing deterministic access to CSMA local area network
US5469439A (en) * 1994-05-04 1995-11-21 Hewlett Packard Company Two priority fair distributed round robin protocol for a network having cascaded hubs

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CA JP US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 97541075

Format of ref document f/p: F

NENP Non-entry into the national phase

Ref country code: CA

122 Ep: pct application non-entry in european phase