WO1998006194A1 - Method and apparatus for network clock synchronization

Publication number: WO1998006194A1
Authority: WO (WIPO PCT)
Application number: PCT/US1997/013555
Other languages: French (fr)
Inventors: David J. Warman, Mark A. Lacas, Alexander Stoll
Original assignee: Medialink Technologies Corporation
Application filed by Medialink Technologies Corporation
Publication of WO1998006194A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04J: MULTIPLEX COMMUNICATION
    • H04J 3/00: Time-division multiplex systems
    • H04J 3/02: Details
    • H04J 3/06: Synchronising arrangements
    • H04J 3/0635: Clock or time synchronisation in a network
    • H04J 3/0638: Clock or time synchronisation among nodes; Internode synchronisation
    • H04J 3/0658: Clock or time synchronisation among packet nodes
    • H04J 3/0661: Clock or time synchronisation among packet nodes using timestamps
    • H04J 3/0664: Clock or time synchronisation among packet nodes using unidirectional timestamps


Abstract

Each node of a plurality of nodes communicating with one another via a network (20) is provided with a time synchronization circuit (43) for synchronizing the time values maintained by the nodes of the network (20). Each node includes a random access memory (40 or 62), an electrically erasable programmable read-only memory (38 or 63), an internal clock (42) and a processing unit (36 or 60) electronically coupled to the clock (42) for: (a) causing the node to determine if the node is a unique node based on a predetermined protocol and, if the node is a unique node, causing the node to generate and transmit synchronization packets (64) to the other nodes connected to the network (20); and (b) causing the node to process any received synchronization packets (64) in order to adjust the rate at which the time value maintained by the node is incremented in response to pulses from the clock (42) so that the time value maintained by the node is synchronized to the time value maintained by the unique node. The time synchronization circuit includes a synchronized time portion (99), a rate adjustment portion (92) and a timer portion (82). The synchronized time portion maintains the time value of the node. The rate adjustment portion receives a rate adjustment factor computed by the processor and uses the rate adjustment factor to adjust the rate at which the time value maintained by the synchronized time portion is incremented. The timer portion receives a local computation time interval computed by the processor and transfers the rate adjustment factor to the rate adjustment portion upon expiration of the local computation time interval.

Description

METHOD AND APPARATUS FOR NETWORK CLOCK SYNCHRONIZATION
Field of the Invention
This invention generally relates to a network for allowing an electronic device or computer to communicate with one or more other electronic devices or computers over a coupling medium and, more particularly, to a method and apparatus for synchronizing the time values produced by the internal clocks of each electronic device and/or computer connected to such a communication network.
Background of the Invention
Networks for connecting together a number of computers and associated electronic devices are now commonplace in a wide variety of environments. A network can be as small as a local area network consisting of a few computers and/or other electronic devices, or it can consist of many devices and computers distributed over a vast area. Virtually any electronic device equipped with the necessary hardware can be connected to a computer via a network. Suitable electronic devices include lamps, television sets, VCRs, video cameras, telephones, amplifiers, CD players, equalizers, etc.
Each computer and/or electronic device that is connected to the network and is capable of communicating with other electronic devices or computers connected to the network can be referred to as a node. An underlying network protocol is used to establish communication and ensure reliable transfer of information between the nodes of the network. A network can involve permanent connections, such as cables, or temporary connections made through telephone or other communications links. Each node connected to the network will include its own internal clock operating at nominally the same frequency as the internal clocks of the other nodes of the network. However, due to inevitable variances in manufacturing, materials, etc., the frequency of each clock actually differs slightly. Further, as time goes by, each clock will begin to diverge from its initialized frequency. This divergence is often referred to as "clock drift." As a result, the time values maintained by each node in accordance with its internal clock will vary. Such variances make it exceedingly difficult to synchronize network functions and schedule delivery of data, especially streaming, multimedia data.
Accordingly, a method and apparatus for synchronizing the time value produced by the clocks of each node connected to the network is needed. The method and apparatus should provide overlying applications with the ability to schedule the delivery of data with a high degree of accuracy. Further, the method and apparatus should neither depend on the type of underlying network protocol, nor interfere with the normal operation of the network. In addition, the method and apparatus should accommodate for any differences in frequency between the clocks of each node and converge upon a synchronized time value quickly. As explained in the following, the present invention provides a method and apparatus that meets these criteria and solves other problems in the prior art.
Summary of the Invention
In accordance with the present invention, a method and apparatus are provided for synchronizing the time values maintained by each node of a plurality of nodes connected to a network. Each node includes an internal clock used to increment the time value maintained by the node. Each node computes a rate adjustment factor that is used to adjust the rate at which the time value of the node is incremented so that the time value for the node is synchronized to the time value of a unique node connected to the network.
In this regard, a unique node connected to the network generates and transmits a stream of synchronization packets to the other nodes connected to the network. Each synchronization packet of the stream contains a time of transmission of the synchronization packet by the unique node. Each node receiving the stream of synchronization packets stores a time of reception for each synchronization packet of the stream received by the node. The node processes the stream of synchronization packets to determine a rate adjustment factor as a function of the time of transmission contained in each synchronization packet and the time of reception for each synchronization packet. The rate adjustment factor is then used to adjust the rate at which the time value maintained by the node is incremented so that the time value maintained by the node becomes synchronized to the time value maintained by the unique node.
The rate adjustment factor is determined by each node receiving the stream of synchronization packets by processing alternating synchronization packets. First, an actual rate adjustment factor representing the rate adjustment factor of the unique node is determined as a function of the time of transmission and the time of reception of the alternating synchronization packet, and the time of transmission and the time of reception of a prior synchronization packet. Next, a local computation time interval is determined for the receiving node that is the time interval during which the rate adjustment factor is to be computed. Finally, the rate adjustment factor is computed as a function of the actual rate adjustment factor, the local computation time interval and a predefined synchronization time interval.
In accordance with further aspects of the invention, a time synchronization circuit is provided for synchronizing time values maintained by each node of the network. The time synchronization circuit includes a synchronized time portion, a rate adjustment portion and a timer portion. The synchronized time portion maintains the time value of the node. The rate adjustment portion receives the rate adjustment factor computed by the processor and uses the rate adjustment factor to adjust the rate at which the time value maintained by the synchronized time portion is incremented. The timer portion receives the local computation time interval computed by the processor and transfers the rate adjustment factor to the rate adjustment portion upon expiration of the local computation time interval.
Brief Description of the Drawings
The foregoing aspects and many of the attendant advantages of this invention will become more readily appreciated as the same becomes better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein: FIGURE 1 is a pictorial diagram of a plurality of electronic devices and personal computers interconnected by a bus network of the type with which the present invention can be used;
FIGURE 2 is a block diagram of the bus network and the interconnected electronic devices and personal computers shown in FIGURE 1; FIGURE 3 is a block diagram illustrating the electronic devices and personal computers shown in FIGURES 1 and 2 as a plurality of nodes 1-6 interconnected via the bus network;
FIGURE 4 is a diagram of a synchronization packet generated by one of the nodes shown in FIGURE 3;
FIGURE 5 is a block diagram of a time synchronization circuit formed in accordance with the present invention, that is employed by each node to produce a synchronized time value;
FIGURE 6 is a flow chart illustrating the logic used by each node upon startup to initialize the values used by the time synchronization circuit shown in FIGURE 5 to produce the synchronized time value;
FIGURES 7A, 7B and 7C are flow charts illustrating the logic used by each node to process received synchronization packets in order to produce the synchronized time value; FIGURE 8 is a flow chart illustrating the logic used to reinitialize the values used by the time synchronization circuit to produce the next synchronized time value; and
FIGURE 9 is a graph illustrating the relationship between a synchronization packet event and the reception and transmission time values maintained by the time synchronization circuit at that synchronization packet event.
Detailed Description of the Preferred Embodiment
FIGURE 1 illustrates a bus network 20 interconnecting a plurality of personal computers 22 and a plurality of electronic devices 28. Each of these devices and computers connected to the bus network 20 is depicted as a node 1-6 in FIGURE 3. As will be appreciated by those familiar with networked computer systems from the following description, depending on system capacity, any number of electronic devices and computer systems could be connected via the bus network 20, if equipped with the necessary interface hardware. As discussed in copending U.S. Patent
Application Serial No. 08/334,416, filed November 4, 1994, and titled A Method and Apparatus for Controlling Non-Computer System Devices by Manipulating a Graphic
Representation, the subject matter of which is incorporated herein by reference, suitable electronic devices may include video cameras, speakers, television sets, telephones, lamps, etc. Even a simple light switch can form a node on bus network 20. In addition, any computer system including, but not limited to, portable computers, personal digital assistants, etc. that is equipped with the necessary interface hardware, e.g., bridge 26, may be connected to bus network 20. However, the present invention does not require that a computer system of any type be connected to the network. That is, it is possible for the bus network to connect only a plurality of electronic devices and/or bridges. As will be described in more detail below, each node connected to the bus network 20 is provided with a time synchronization circuit 43 that essentially manipulates the frequency of incoming clock pulses from an internal clock 42 to increase or decrease the rate at which a time value for the node is incremented. The rate at which the time value for the node is incremented is determined by the time synchronization circuit 43 in response to a stream of synchronization packets transmitted by a unique or "lowest numbered" node of the network so that the time value for the node becomes synchronized to the time value for the unique node. It follows that if the time value for each node is synchronized to the unique node, then the time values for all nodes will be synchronized across the bus network 20.
Inter-Device Communication
FIGURE 2 illustrates the bus network 20 interconnecting the same electronic devices 28 and personal computers 22 shown in FIGURE 1. Each electronic device 28 includes an interface 35 comprising an input output (I/O) circuit 32 and a processor circuit 34 which allow the devices to be directly connected to the bus network 20. The I/O circuit 32 is specifically constructed for use with the bus network configuration and a particular type of coupling medium, whereas the processor circuit 34 can be used with different communication configurations and coupling media.
The processor circuit 34 of the interface 35 controls the communication of the devices 28 over the bus network 20. The processor circuit 34 includes a processor 36, an electrically erasable programmable read-only memory (EEPROM) 38, a random access memory (RAM) 40, an internal clock 42 and a time synchronization circuit 43. The EEPROM 38 is used by the processor 36 to control the functionality of the electronic device 28 and stores the device's operating system. The RAM 40 is used by the processor 36 to temporarily store program code and data. In the illustrated embodiment, the internal clock 42 oscillates at a frequency of ten megahertz (10 MHz) and sends a signal to the time synchronization circuit 43 upon the rising edge of each clock pulse. As noted above, the time synchronization circuit 43 manipulates the frequency of incoming clock pulses to increase or decrease the rate at which a time value for the node is incremented such that the time value is ultimately synchronized to the time value maintained by a unique node of the bus network 20. One of ordinary skill in the art will recognize that the interface 35 includes many more components than those shown in FIGURE 2. Such components are not described because they are conventional, and a description of them is not necessary to an understanding of the present invention.
When an electronic device 28 sends data, a command, or program code (collectively referred to herein as data), the processor circuit 34 of the interface 35 housed in the device constructs a packet of data representing the relevant information. The processor circuit 34 determines the time at which the bus network 20 is available for sending the packet. The processor circuit 34 sends the packet to the I/O circuit 32 at the appropriate time, and upon receipt of the packet, the I/O circuit transmits the packet via the bus network 20 to the other devices and computers. When an electronic device 28 receives data from a data source, such as personal computer 22 or another device 28, the I/O circuit 32 receives the packet over the bus network 20, and sends the packet to the processor circuit 34. Upon receipt of the packet, the processor circuit 34 of the interface 35 processes the packet and performs any appropriate function, possibly including sending back a response packet or adjusting the time value maintained by the time synchronization circuit 43.
Also shown in FIGURE 2 is each of the personal computers 22 connected to the bus network 20 by way of a 10BaseT Ethernet interface 44 and a bridge 26. Standard personal computers typically include an Ethernet card 48 and a 10BaseT cable 50 for communicating with network devices such as servers. 10BaseT is an interface standard which specifies signal voltage levels, etc. Ethernet is a packet-level communication protocol that specifies framing, packet content, CRC error detection, etc. for sending packets of data. Generally, a higher level communication protocol is used on top of the Ethernet packet-level communication protocol, i.e., the personal computers 22 include software defining a point-to-point communication protocol that is used on top of the Ethernet packet-level protocol.
The bridge 26 provides the interface between the 10BaseT cables 50 and the bus network 20, on which a network communication protocol is used. The bridge 26 includes an Ethernet I/O circuit 52 for connecting the 10BaseT cable 50, and a second I/O circuit 54 that is constructed for connecting to the communication medium of the bus network 20. The bridge 26 includes a bridge processor circuit 56 connected to each of the I/O circuits 52, 54, to translate between the point-to-point protocol and the bus network communication protocol used on the bus network 20. However, to make the bridge 26 compatible with a different communication medium and/or communication protocol, the I/O circuits 52, 54 could be replaced with more appropriate I/O circuits.
The circuitry of the bridge 26 includes many of the same components as the interface 35 of an electronic device 28 and thus, receives and transmits packets of data in much the same manner. The bridge processor circuit 56 includes a processor 60, e.g., one of the i960 family of Intel processors. The bridge processor circuit 56 also includes an internal clock 42, the time synchronization circuit 43, a random access memory (RAM) 62, and an electrically erasable programmable read-only memory (EEPROM) 63. The EEPROM 63 is used by the bridge processor 60 to control the functionality of the bridge 26 and stores its own operating system. The RAM 62 is used by the bridge processor 60 to temporarily store program code and data. The internal clock 42 and the time synchronization circuit 43 are the same as those of the processor circuit 34 of the device interface 35 described above. One of ordinary skill in the art will recognize that the bridge 26 includes many more components than those shown in FIGURE 2. Such components are not described because they are conventional, and a description of them is not necessary to an understanding of the present invention.
The bus network 20 shown in FIGURES 1 and 2 can be formed of various coupling media such as glass or plastic fiber optic cables, coaxial cables, twisted wire paired cables, ribbon cables, etc. In addition, one of ordinary skill in the art will appreciate that the coupling medium can also include a radio frequency coupling medium or other coupling media. In view of the ubiquitous availability of preinstalled wiring in current commercial environments, twisted wire pair copper cables are used to form the bus network 20 in the preferred embodiment of the present invention. Accordingly, the I/O circuit 32 of interface 35 shown in FIGURE 2 is constructed for use with a twisted wire pair copper cable.
Network Communication Protocol
Various network communication protocols can be used to communicate over the bus network 20. The particular network communication protocol used is dictated by the program code utilized by the processor circuits 34 and I/O circuits 32 of the device interfaces 35 embodied in the electronic devices 28, and by the bridge processors 60 of the bridges 26 connected to the personal computers 22. In one actual embodiment of the present invention, the network communication protocol used to communicate over the bus network 20 is of the type disclosed in commonly assigned U.S. Patent No. 5,245,604, entitled "Communication System," the disclosure and drawings of which are specifically incorporated herein by reference. The network communication protocol described in U.S. Patent No. 5,245,604 is referred to herein as the MediaLink protocol. The advantage of the MediaLink protocol is that it provides an upper limit on the amount of time it takes to communicate over the bus network 20. This is important in real-time environments, such as a multimedia environment, where unpredictable delay would result in unacceptable distortion. As all network communication protocols must, the MediaLink protocol includes a network resource sharing and management algorithm such that only one device communicates over the bus network 20 at any one given time and such that each device has sufficient access to the bus network 20.
As shown in FIGURE 3, the personal computers 22 and electronic devices 28, including personal computer bridges 26 and device interfaces 35, may be depicted as a plurality of nodes numbered in increasing numerical order as 1, 2, 3, 4, 5, and 6, wherein nodes 1-6 are displaced from one another and are operative under the MediaLink protocol to become activated in a sequence. It is immaterial for purposes of this discussion which electronic device or personal computer comprises which node. It is only necessary that the electronic devices and personal computers are equipped with the time synchronization circuit 43 and the necessary hardware, e.g., device interface 35 or personal computer bridge 26, to connect to the bus network and embody the present invention. It will be appreciated that a vast array of electronic devices and computer systems equipped with such hardware may comprise individual nodes on the bus network. It will also be appreciated that the number of nodes in the sequence can vary from 2 to n, where n is theoretically any number, but in reality is limited by the ability of the network to transmit messages within an adequate period of time.
In normal operation, the nodes 1-6 become successively activated and communicate data packets to one another so that only one node is allowed to transmit data packets over the bus network at any one time. A particular node is "activated," i.e., a particular node is allowed to communicate data packets, after it receives an activating packet from another node in the sequence. Sending an activating packet to a particular node is referred to as "passing the vector" and is depicted in FIGURE 3 by solid directional arrows. Meanwhile, receiving the activating packet is referred to as "receiving the vector." Each node in the sequence identifies: (1) itself with a "current node number"; and (2) a previous node in the sequence, i.e., the node from which it received the vector, with a "previous node number." For example, the node designated with a number "4," or "node 4," in FIGURE 3 has a current node number equal to four and a previous node number equal to three. However, it is possible that node 3 has not been activated yet or that it has been deactivated. Consequently, node 4 receives the vector from node 2. Thus, node 4 has a previous node number equal to two, while node 2 has a next node number equal to four.
One node of the network is also identified as the "lowest numbered node." The lowest numbered node on the network is the node whose previous node number is greater than its current node number. For instance, in the bus network 20 illustrated in FIGURE 3, node 1 is the lowest numbered node because its previous node number is six and its current node number is one. It will be appreciated that if the original lowest numbered node is deactivated or malfunctions, a new lowest numbered node for bus network 20 shall be identified. For example, node 2 would become the lowest numbered node if node 1 were deactivated. Malfunctioning or non-functioning of a node or physical damage to the coupling media connecting two nodes to the bus network can cause the bus network 20 to break down and divide into multiple subnetworks. In this case, each newly formed subnetwork determines its own new lowest numbered node.
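The lowest numbered node rule described above lends itself to a one-line test. The following C sketch is purely illustrative; the function name and the representation of node numbers are assumptions, not part of the MediaLink protocol specification:

```c
/* Illustrative test of the lowest numbered node rule: a node is the
 * lowest numbered node when the node it received the vector from has
 * a higher number than its own (e.g., node 1 in FIGURE 3, whose
 * previous node number is six and current node number is one). */
int is_lowest_numbered_node(unsigned current_node_number,
                            unsigned previous_node_number)
{
    return previous_node_number > current_node_number;
}
```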
As will be better understood from the following description, the lowest numbered node is the node that controls synchronization of the time value maintained by each node connected to the network 20. The lowest numbered node transmits a particular type of data packet, i.e., a "synchronization packet," to each of the remaining nodes, i.e., "receiving nodes." In response to the synchronization packets transmitted by the lowest numbered node, the time synchronization circuit 43 of each receiving node synchronizes the time value for the receiving node to the time value maintained by the lowest numbered node. While the MediaLink protocol and the bus network as described above are presently preferred in implementing the present invention, it is to be understood that the present invention will also find use in other network protocols and networks other than the type shown in FIGURES 1, 2 and 3, wherein a unique node for sending synchronization packets is determined in a manner dictated by the protocol.
Network Clock Synchronization
As mentioned earlier, each node on the bus network 20, whether a bridge 26 of a personal computer 22 or a device interface 35 of an electronic device 28, is equipped with a time synchronization circuit 43 and an internal clock 42. In one actual embodiment of the present invention, the internal clock 42 generates clock pulses at a frequency of approximately 10 MHz. Although each clock 42 of each node in the network 20 operates at nominally the same frequency, i.e., 10 MHz, each clock may differ slightly in actual frequency and, further, may be experiencing clock drift. It follows that if the time value for each node was incremented at the same frequency as the node's internal clock, the time values for each node would vary.
In accordance with the present invention, the frequency of the incoming clock pulses from the internal clock 42 is manipulated by the time synchronization circuit 43 in order to increase or decrease the rate at which the time value maintained by the node is incremented so that it approaches the rate at which the time value for the lowest numbered node is incremented. Since each node will eventually increment its time value at the same rate as the lowest numbered node, the time values for each node connected to the bus network 20 will eventually synchronize.
The time synchronization circuit 43 synchronizes the time value of the node to the time value of the lowest numbered node in response to a synchronization packet 64 periodically broadcast by the lowest numbered node of the bus network 20 to each of the receiving nodes of the network. The synchronization packets 64 are generated by the lowest numbered node and processed by each receiving node using program code stored in the memory of the node, e.g., EEPROM 38 of device interface 35 or EEPROM 63 of personal computer bridge 26. As illustrated in FIGURE 4, the synchronization packet 64 comprises at least three fields. Those of ordinary skill in the art will recognize that the packet may contain additional fields for storing flags, start and stop bits, checksum bits, etc., that are used by any network protocol for transmission of the packet. However, these fields are not described because they are conventional and a description of them is not necessary to an understanding of the present invention. Returning to the synchronization packet 64, a first field 66 identifies the packet as a synchronization packet. A second field 68 identifies the source node sending the synchronization packet, i.e., the lowest numbered node. A third field 70 contains a snapshot of the time value in microseconds at which the synchronization packet 64 was transmitted by the lowest numbered node. Further, the third field 70 contains a synchronization interval, Sp, which is determined by the overlying network protocol to be the time interval between transmission of synchronization packets 64. The time synchronization circuit 43 of each receiving node uses the time snapshot and synchronization interval stored in the third field 70 of the synchronization packet 64 to adjust the rate at which its time value is incremented to approach the rate at which the time value of the lowest numbered node is incremented.
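For illustration, the three fields of the synchronization packet 64 might be represented as the following C structure. The field names, widths and ordering are assumptions made for this sketch; the patent specifies only the logical content of the fields, and an actual packet would also carry the conventional protocol fields noted above:

```c
#include <stdint.h>

/* Hypothetical layout of the synchronization packet 64. Field widths
 * and ordering are illustrative assumptions only. */
typedef struct {
    uint8_t  packet_type;   /* field 66: identifies a synchronization packet   */
    uint8_t  source_node;   /* field 68: node number of the lowest numbered node */
    uint32_t tx_time_us;    /* field 70: snapshot of the time value, in        */
                            /* microseconds, at the moment of transmission     */
    uint32_t sync_interval; /* field 70: synchronization interval Sp between   */
                            /* successive synchronization packets              */
} sync_packet_t;
```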
The time synchronization circuit 43 of each node, including the lowest numbered node, connected to the bus network 20 is illustrated in FIGURE 5. The time synchronization circuit 43 can be divided conceptually into three functional portions: a synchronized time portion 99; a rate adjustment portion 92; and a timer portion 82. The synchronized time portion 99 includes a master synchronization counter (MSC) 93, which contains the time value for the node, and a transmission/reception time snapshot register (TSR) 96 into which the time value stored in the MSC 93 is copied when a synchronization packet 64 is transmitted by the lowest numbered node or received by a receiving node, whichever the case may be. The synchronized time portion 99 also includes a scheduling target register (STR) 94. The STR 94 provides overlying applications with a means to schedule certain events. A value representing the scheduled time of a particular function will be stored in the STR. When the MSC 93 is eventually incremented to a time value equal to that stored in the STR, a software interrupt will be initiated indicating to the overlying network protocol that the particular function may be performed.
The rate adjustment portion 92 of the time synchronization circuit 43 includes a fractional increment register (FIR) 86 that receives a corrected rate adjustment factor M1 used to increase or decrease the rate at which the time value stored in the MSC 93 is incremented. A master increment register (MIR) 84 is used as a data pathway to set the corrected rate adjustment factor M1 into the FIR. A fractional accumulator (FA) 88 receives clock pulses from the internal clock 42 and is used in conjunction with the FIR 86 to increase or decrease the rate at which the MSC 93 is incremented. Specifically, the value of the FA 88 is added to the value of the FIR 86 by an adder 91 on the rising edge of every clock pulse from the internal clock 42. When the sum of the FIR value and the FA value overflows the adder 91, the MSC 93 is incremented by one. Otherwise, the sum is merely returned to the FA 88 and the process is repeated. For example, for the 10 MHz clock 42 and a value of hex 19999A (i.e., decimal 0.1) stored in the FIR, the MSC 93 will be incremented at a rate of once every 10 clock pulses, or once every microsecond.
In order to ensure synchronization, there must be precise control over when the FIR 86 is updated with the value stored in the MIR 84. Therefore, the timer portion 82 determines exactly when the corrected rate adjustment factor M1 initially stored in the MIR 84 is latched into the FIR 86. The timer portion 82 comprises an increment load register (ILR) 76 which contains a local computation time interval LCT, which marks the interval of time necessary for the processor of the receiving node to have calculated the corrected rate adjustment factor M1. Upon receipt or transmission of a synchronization packet 64, a packet start detect circuit 72 is triggered and sends a signal to a flip-flop 78. Once set, the flip-flop 78 enables an increment load counter (ILC) 74 to begin counting. As the ILC 74 counts, the value stored in the ILC is compared to the local computation time interval LCT stored in the ILR 76 by a comparator 80. When the value of the ILC 74 equals the local computation time interval LCT, i.e., when the local computation time interval LCT expires, the comparator sends a signal to the FIR 86 allowing the corrected rate adjustment factor M1 to be latched from the MIR 84 into the FIR 86. A firmware interrupt is simultaneously generated to initiate the program code illustrated in FIGURE 8 and described below that reinitializes the variables that will be used to recalculate the corrected rate adjustment factor M1 upon receipt of subsequent synchronization packets.
The overall operation of the time synchronization circuit 43 can be described as follows. In the case of the lowest numbered node, an actual rate adjustment factor Ma is directly loaded into the FIR 86 of the lowest numbered node's time synchronization circuit 43 upon startup of the lowest numbered node and periodically thereafter, as will be described in more detail below. Thus, the actual rate adjustment factor Ma is the rate adjustment factor being used to increment the MSC 93 of the lowest numbered node's time synchronization circuit 43. When the lowest numbered node transmits a synchronization packet 64, the packet start detect 72 is triggered, causing the time value stored in the MSC 93 plus the most significant eight bits of the FA 88 to be latched into the TSR 96.
The time value and the most significant eight bits are then copied from the TSR 96 into field 70 of the synchronization packet 64 as it is transmitted. The actual rate adjustment factor Ma in the FIR 86 is then added to the value stored in the FA 88 upon every clock pulse from the internal clock 42 of the lowest numbered node. When the sum of these values overflows the adder 91, a carry signal is generated which causes the MSC 93 to be incremented. The MSC 93 then continues to be incremented at the rate dictated by the actual rate adjustment factor Ma, until a new actual rate adjustment factor Ma is provided. One of ordinary skill in the art will recognize that as long as the lowest numbered node remains as such, the actual rate adjustment factor Ma will remain constant.
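The FIR/FA/adder mechanism just described is, in effect, a fractional-increment (phase accumulator) scheme and can be modeled in a few lines of C. This is a minimal software sketch only: the 24-bit fraction width is inferred from the hex 19999A example (0x19999A / 2^24 is approximately decimal 0.1), and the variable names stand in for the registers of FIGURE 5:

```c
#include <stdint.h>

#define FRACTION_BITS 24
#define FRACTION_MASK ((1u << FRACTION_BITS) - 1u)

static uint32_t fir = 0x19999A; /* rate adjustment factor, ~0.1 decimal (FIR 86) */
static uint32_t fa;             /* fractional accumulator (FA 88)                */
static uint64_t msc;            /* master synchronization counter (MSC 93)       */

/* Model of one rising edge of the 10 MHz internal clock 42: the FIR value
 * is added to the FA; a carry out of the adder increments the MSC,
 * otherwise the sum is simply returned to the FA. */
void clock_tick(void)
{
    uint32_t sum = fa + fir;
    if (sum > FRACTION_MASK)    /* adder overflow: carry signal */
        msc++;                  /* increment the node's time value */
    fa = sum & FRACTION_MASK;
}
```

With fir at 0x19999A, a carry occurs on average once per ten clock pulses, so the MSC advances once per microsecond; loading a slightly larger or smaller value into the FIR speeds up or slows down the node's time value accordingly.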
When a receiving node receives the transmitted synchronization packet, the packet start detect circuit 72 is triggered, sending a signal to set the flip-flop 78. The flip-flop then emits a signal enabling the ILC 74, which begins counting. The received synchronization packet is also stored in memory of the receiving node along with the time value stored in the MSC 93 of the receiving node's time synchronization circuit 43. When the value of the ILC 74 eventually equals the local computation time interval LCT stored in the ILR 76, the corrected rate adjustment factor M1 stored in the MIR 84 is latched into the FIR 86. The corrected rate adjustment factor M1 in the FIR 86 is then added to the value stored in the FA 88 upon every clock pulse from the internal clock 42. When the sum of these values overflows the adder 91, a carry signal is generated which causes the MSC 93 to be incremented. The MSC 93 then continues to be incremented at the rate dictated by the corrected rate adjustment factor M1, until a new corrected rate adjustment factor M1 is provided. Hence, the rate at which the MSC 93 is incremented depends upon the rate at which the carry signals are generated by the adder 91, which in turn depends upon the value of the corrected rate adjustment factor M1 stored in the FIR 86.
The corrected rate adjustment factor M1, as well as the local computation time interval LCT, are computed in accordance with the firmware routines illustrated in FIGURES 6, 7A-7C and 8. The routines illustrated in FIGURES 6, 7A-7C and 8 are processed by the bridge processor 60 of a personal computer bridge 26 or the processor 36 of a device interface 35, as the case may be, as required. However, it will be appreciated from the following description that in the case of the lowest numbered node, it is unnecessary to process the synchronization packet 64 that it is sending, since the receiving nodes are synchronizing their time values to that of the lowest numbered node. Hence, some portions of the firmware routines depicted in FIGURES 7A-7C and FIGURE 8 are not actually implemented by the lowest numbered node as long as the lowest numbered node remains such.
Now referring to FIGURE 6, a synchronization monitor task is employed by each node, including the lowest numbered node, connected to the bus network 20 upon startup of the node. The synchronization monitor task is performed by each node in order to initialize the values that will be used by the time synchronization circuit 43 to synchronize the time value for the node and to monitor the status of the node to determine if the node has become the lowest numbered node.
The logic begins in a block 100 and proceeds to a block 102 in which initialization is performed. First, a synchronization packet receipt variable (SyncRx) is initialized to zero, indicating that the node has not yet received a synchronization packet 64. Next, an actual rate adjustment factor Ma is initialized to a predetermined value. As will be described in more detail below, the actual rate adjustment factor Ma is the rate adjustment factor being used by the lowest numbered node to increment its own MSC 93, as viewed from the receiving node's frame of reference. In other words, Ma is the rate adjustment factor of the lowest numbered node as calculated by the receiving node. In the illustrated embodiment, the actual rate adjustment factor Ma is initialized equal to 0.1.
In addition to the actual rate adjustment factor Ma, an intermediary rate adjustment factor M0 and the corrected rate adjustment factor M1 are initialized in block 102. As described above, the corrected rate adjustment factor M1 is used to dictate the frequency at which the time value for the node is incremented. M0, on the other hand, is an intermediary rate adjustment factor used to ultimately calculate the corrected rate adjustment factor M1. In the illustrated embodiment, M0 and M1 are also initialized to 0.1. However, one of ordinary skill in the art will recognize that Ma, M0 and M1 could be initialized to any value deemed suitable by the computer programmer. Next, in block 102, a synchronization time interval S1 is also initialized as twice the allotted time a node takes to receive a packet in accordance with the overlying network protocol, e.g., four seconds in the illustrated embodiment. A synchronization monitor task timer (SMT timer) is then initialized equal to the synchronization time interval S1. Finally, in block 102, a variable representing a synchronization packet receipt state (SyncRxState) is initialized to zero.
As will be described in conjunction with the synchronization packet processing routine illustrated in FIGURES 7A-7C, the value of the SyncRxState variable determines whether a received synchronization packet will be processed to compute the corrected rate adjustment factor M1 or to reinitialize the values used to compute the corrected rate adjustment factor M1 upon receipt of a subsequent synchronization packet. Specifically, when the SyncRxState variable is set to one, the corrected rate adjustment factor M1 will be computed by the receiving node using the time snapshot, i.e., the time of transmission, stored in field 70 of the synchronization packet 64 received by the node. When the SyncRxState variable is set to two, the values used to calculate the corrected rate adjustment factor M1 are reinitialized using the time snapshot stored in field 70 of the received synchronization packet 64. However, upon startup, the SyncRxState is initialized to zero, because a synchronization packet 64 has not yet been received and the values used to compute the corrected rate adjustment factor M1 must be initialized for the first time. It will be appreciated that the SyncRxState will only be equal to zero after startup of the node and before reception of the first synchronization packet. After receipt of the first synchronization packet, the value of the SyncRxState variable will switch between one and two after receipt of every subsequent synchronization packet.
In a block 104, the FIR 86 in the time synchronization circuit 43 of the node is loaded directly with the actual rate adjustment factor Ma. In block 106 the SMT timer is initiated. In a decision block 108, the logic determines if the SMT timer has expired. If not, the logic proceeds to another decision block 109 where it determines if the node has received a synchronization packet since initiation of the SMT timer, i.e., if the SyncRx variable is equal to one. If so, the SyncRx variable is reset to zero in block 110, indicating that the node is waiting for another synchronization packet. The logic then proceeds directly to a block 118 where the SMT timer is reset. However, if the result of decision block 109 is negative, a synchronization packet has not been received by the node since initiation of the timer. Therefore, the logic returns to decision block 108 and repeats blocks 108 and 109 until the SMT timer has expired or a synchronization packet has been received, whichever occurs first.
If the node does not receive a synchronization packet 64 before the SMT timer expires, the logic will proceed to a decision block 111 where it determines if the node should be sending synchronization packets since it is not receiving any synchronization packets. In other words, the logic determines if the node is the lowest numbered node. If so, the node prepares and sends a synchronization packet 64 in its capacity as the lowest numbered node in block 112. In block 114, the SyncRxState variable is set equal to one, indicating that the next time a synchronization packet 64 is received, it will be processed to compute the corrected rate adjustment factor M1. In block 116, the FIR 86 of the lowest numbered node is loaded with the most recent value calculated by the lowest numbered node for the actual rate adjustment factor Ma. It will be appreciated by those of ordinary skill in the art that if the node is the first or original lowest numbered node of the bus network 20, the most recent actual rate adjustment factor Ma is merely the value to which it was initialized in block 102. However, if the node has only just become the lowest numbered node, due to a break in network communication as described above, the actual rate adjustment factor Ma will have the value most recently calculated by the node in its capacity as a receiving node. Calculation of the actual rate adjustment factor Ma will be discussed in more detail below. If the node is not the lowest numbered node, the logic skips blocks 112 through 116 and proceeds directly to a block 118. In block 118, the SMT timer is reset equal to the time value stored in the MSC 93 of the node, plus half of the synchronization time interval S1. The logic then returns to block 106 where the new SMT timer is initiated, and blocks 108 through 110 are repeated until the SMT timer expires or until a synchronization packet is received by the node. Thus, each node checks itself to determine if it has become the lowest numbered node each time the SMT timer expires.
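The flow of FIGURE 6 can be summarized in code form. The following C sketch is a non-authoritative paraphrase of blocks 100 through 118; the helper routines (load_fir, send_sync_packet, the SMT timer functions, and so on) are hypothetical stand-ins for the hardware and protocol operations described above:

```c
/* Hypothetical helpers standing in for hardware/protocol operations. */
void   load_fir(double factor);       /* write a rate factor into the FIR 86 */
void   send_sync_packet(void);        /* transmit a synchronization packet 64 */
void   start_smt_timer(void);         /* block 106 */
int    smt_timer_expired(void);       /* decision block 108 */
void   set_smt_timer(double expiry);  /* block 118 */
double msc_time(void);                /* current time value in the MSC 93 */
int    node_is_lowest_numbered(void); /* decision block 111 */

static int    sync_rx, sync_rx_state;
static double ma, m0, m1, s1;

void synchronization_monitor_task(void)
{
    /* block 102: initialization */
    sync_rx = 0;
    ma = m0 = m1 = 0.1;
    s1 = 4.0;                 /* seconds: twice the allotted packet receive time */
    sync_rx_state = 0;
    load_fir(ma);             /* block 104: load the FIR directly with Ma */

    for (;;) {
        start_smt_timer();    /* block 106 */
        /* blocks 108-109: wait for timer expiry or a synchronization packet */
        while (!smt_timer_expired() && !sync_rx)
            ;
        if (sync_rx) {
            sync_rx = 0;      /* block 110: await the next packet */
        } else if (node_is_lowest_numbered()) {   /* block 111 */
            send_sync_packet();    /* block 112 */
            sync_rx_state = 1;     /* block 114 */
            load_fir(ma);          /* block 116: most recent Ma */
        }
        /* block 118: reset the timer to the MSC time plus half of S1 */
        set_smt_timer(msc_time() + s1 / 2.0);
    }
}
```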
Whenever a packet receipt event occurs, i.e., whenever a node receives any type of packet, whether a synchronization packet or not, the synchronization monitor task illustrated in FIGURE 6 is interrupted and the packet processing routine illustrated in FIGURES 7A-7C is implemented by the node. Referring now to FIGURE 7A, the logic begins in a block 130. Upon a packet receipt event in block 132, the logic proceeds to a decision block 134 where it determines if the SyncRxState variable is equal to zero, i.e., if the node has yet to receive its first synchronization packet after startup. If the result is negative, the receiving node has already received a synchronization packet 64 and the initialization has already been completed. Therefore, the logic will skip initialization, i.e., blocks 136 through 142, and proceeds directly to a decision block 144 in FIGURE 7B to determine if the SyncRxState variable is equal to one. On the other hand, if the synchronization packet 64 is the first to be received, the logic proceeds from decision block 134 to a decision block 136 where it determines if the received packet is a synchronization packet 64. If not, the logic merely returns to event block 132 to await the next packet event. It will be appreciated by those of ordinary skill in the art that the program code implemented by the node to process the received packet is not depicted or described here because a description of such program code is not necessary for an understanding of the present invention. Those of ordinary skill in the art will also recognize that if the node is the lowest numbered node, a synchronization packet will not be received, and the logic will not proceed beyond block 136.
If the result of decision block 136 is positive, the node is a receiving node and has received its first synchronization packet 64. Therefore, the logic proceeds to block 138 where a number of values that will be required to calculate the corrected rate adjustment factor M1 are initialized for the first time. Specifically, the SyncRx variable is set equal to one, indicating that a synchronization packet 64 has been received. Next, the processor of the receiving node calculates a time T0 at which the first synchronization packet was transmitted by the lowest numbered node as seen from the receiving node's own frame of reference. In other words, the receiving node calculates T0 equal to the time snapshot Tp stored in field 70 of the synchronization packet 64 plus the propagation delay Pm between the lowest numbered node and the receiving node. Those of ordinary skill in the art will recognize that the propagation delay between the lowest numbered node and the receiving node can be determined in any one of several ways well known in the art, e.g., as a function of network topology, and stored in memory of the node upon startup of the node. Next, the receiving node initializes a time of reception R0 of the first synchronization packet 64. Since the receiving node has never before received a synchronization packet and thus, never attempted to synchronize its time value to that of the lowest numbered node, the node merely sets the time of reception R0 equal to the time of transmission T0. The intermediary rate adjustment factor M0 which will determine the rate at which the MSC 93 of the receiving node is to be incremented after receipt of the first synchronization packet is then initialized to a predetermined value.
In the illustrated embodiment, the intermediary rate adjustment factor M0 is set equal to decimal 0.1, so that once it is loaded in the FIR 86 of the receiving node's time synchronization circuit 43, the MSC 93 of the time synchronization circuit will be incremented approximately once every microsecond, depending on the actual frequency of the receiving node's internal clock 42. The synchronization time interval S1 is then reset equal to twice the synchronization packet interval Sp stored in field 70 of the first synchronization packet 64. Finally, the SyncRxState is set equal to one, indicating that the receiving node will calculate the corrected rate adjustment factor M1 upon receipt of the next synchronization packet.
After initialization in block 138, the logic proceeds to a block 140 where the appropriate components of the receiving node's time synchronization circuit 43 are loaded with the initialized values. More specifically, the MST 95 is set equal to the time of transmission T0 so that when the MSC 93 is next incremented, the MSC 93 will be loaded with a time value that at least approximates the time value of the lowest numbered node. Next, the FIR 86 is loaded with the intermediary rate adjustment factor M0 which will dictate the rate at which the MSC 93 of the receiving node is incremented until the reception of the next synchronization packet. The logic then proceeds to block 142 of FIGURE 7B to await another packet receipt event. The result of the initialization described above is shown in FIGURE 9.
Specifically, FIGURE 9 is a graph illustrating the relationship between the time values stored in the MSC 93 of a receiving node at particular transmission and reception times, and synchronization packet receipt events. The time values stored in the MSC 93 of the receiving node at particular synchronization packet transmission and reception times are found along the Y-axis, while synchronization packet events in absolute time are shown along the X-axis.
The actual rate at which the time value stored in the MSC 93 of the lowest numbered node's time synchronization circuit 43 is being incremented is represented as line 190. The slope of that line is thus directly proportional to the actual rate adjustment factor Ma being used by the time synchronization circuit 43 of the lowest numbered node. Before reception by a receiving node of the first synchronization packet 64, the MSC 93 of the receiving node will be incremented at a rate that differs from the actual rate at which the lowest numbered node's MSC 93 is being incremented. The rate at which the MSC 93 of the receiving node is incremented is depicted on the graph illustrated in FIGURE 9 as line segment 192. Obviously, line segment 192 has a slope that differs from line 190 since synchronization has yet to be attempted.
Upon receipt of the first synchronization packet 64, i.e., synchronization packet event zero, the MSC 93 of the receiving node is set equal to the time of transmission T0. The time value stored in the MSC 93 is then incremented at a rate determined by the intermediary rate adjustment factor M0 as represented by line segment 194 in the graph illustrated in FIGURE 9. The slope of the line segment 194 is directly proportional to the intermediary rate adjustment factor M0 being used by the time synchronization circuit 43 after receipt of the first synchronization packet 64. It is readily apparent from the graph illustrated in FIGURE 9 that in order for the time value of the receiving node to approach the time value of the lowest numbered node, the intermediary rate adjustment factor M0 must be adjusted to approach the actual rate adjustment factor Ma.
Returning to FIGURE 7B, after processing its first synchronization packet 64, the receiving node awaits another packet receipt event in block 142. Hence, upon receipt of a packet, the logic proceeds to a decision block 144 where it determines if the SyncRxState variable is equal to one. If not, the corrected rate adjustment factor M1 has just been computed. Therefore, the logic skips computation of the corrected rate adjustment factor M1, i.e., blocks 146 through 160, and proceeds directly to a decision block 162 in FIGURE 7C to determine if the SyncRxState variable is equal to two.
On the other hand, if the result of decision block 144 is positive, the portion of the firmware used to calculate the corrected rate adjustment factor M1 is executed. Accordingly, the logic proceeds to a block 146 where it determines if the received packet is a synchronization packet 64. If not, the logic proceeds to a decision block 148 where it determines if a variable indicating whether the ILC 74 is presently active and counting (ILCactive) is equal to one. If so, the receiving node has processed a prior synchronization packet and stored the corrected rate adjustment factor M1 in the MIR 84 of the time synchronization circuit 43. All that remains is for the local computation time interval LCT stored in the ILR 76 to expire. Therefore, the ILC 74 must be allowed to keep counting. If the ILCactive variable does not equal one, the ILC 74 is not actively counting toward the local computation time interval stored in the ILR 76. Therefore, in block 150 the ILC 74 is reset and continues to be inhibited from counting until receipt of the next synchronization packet.
If the received packet is indeed a synchronization packet, the logic proceeds from decision block 146 to block 152 to perform additional initialization. In block 152 the SyncRx variable is set equal to one, indicating a synchronization packet has been received. Next, the processor initializes the time of receipt R1 of the synchronization packet equal to the time Rp that was stored in memory of the node upon receipt of the synchronization packet. Next, the time T1 at which the synchronization packet 64 was transmitted by the lowest numbered node is calculated by the receiving node. Specifically, the time of transmission T1 is set equal to the time of transmission Tp found in field 70 of the received synchronization packet 64 plus the propagation delay Pm between the lowest numbered node and the receiving node. Finally, the synchronization time interval S1 is set equal to twice the synchronization packet interval Sp found in the synchronization packet 64.
In block 154, these values are used to calculate the corrected rate adjustment factor M1 to be loaded into the MIR 84 to adjust the rate at which the MSC 93 of the receiving node is incremented so that the time value stored in the receiving node is eventually synchronized to the time value of the lowest numbered node. In this regard, the receiving node first calculates the actual rate adjustment factor Ma being used by the time synchronization circuit 43 of the lowest numbered node to increment its own MSC 93.
As can be seen in FIGURE 9, for the period between reception of the first synchronization packet (R0) and the reception of the current synchronization packet (R1), the intermediary rate adjustment factor M0 is used to adjust the rate at which the MSC 93 of the receiving node is being incremented. Meanwhile, for the period between transmission of the prior synchronization packet (T0) and transmission of the current synchronization packet (T1), the actual rate adjustment factor used by the lowest numbered node to increment the MSC 93 of the lowest numbered node is Ma. Since the time interval between the receiving node's reception of the current and prior synchronization packets and the lowest numbered node's transmission of the current and prior synchronization packets should be the same if both the receiving node and the lowest numbered node are incrementing their respective time values at the same rate, the following relationship can be asserted:
(T1 - T0) ÷ Ma = (R1 - R0) ÷ M0    (1)
From this relationship, it follows that the actual rate adjustment factor being used by the lowest numbered node to increment its MSC 93 is represented by the following equation:
Ma = M0 * (T1 - T0) ÷ (R1 - R0)    (2)
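As a worked illustration of equation (2): if the lowest numbered node's transmission snapshots are 4,000,000 microseconds apart while the receiving node observed the two receptions 4,000,100 counts apart, then with M0 = 0.1 the estimate is Ma = 0.1 * 4,000,000 ÷ 4,000,100, or approximately 0.0999975. A direct C transcription of the equation follows; the function and parameter names are illustrative only:

```c
/* Equation (2): the receiving node's estimate of the actual rate
 * adjustment factor Ma used by the lowest numbered node. t0, t1 are
 * the times of transmission (snapshot plus propagation delay) of the
 * prior and current synchronization packets; r0, r1 are the locally
 * recorded times of reception. */
double actual_rate_adjustment(double m0,
                              double t0, double t1,
                              double r0, double r1)
{
    return m0 * (t1 - t0) / (r1 - r0);
}
```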
Thus, the actual rate adjustment factor Ma is calculated in block 154 in accordance with equation (2). However, there will inevitably be some error between the actual rate adjustment factor calculated using equation (2) and the actual rate adjustment factor Ma used by the lowest numbered node. More specifically, when the receiving node calculates the actual rate adjustment factor Ma at which the MSC 93 of the lowest numbered node should be incremented, the receiving node has no guarantee that its MSC 93 is counting the same time value in lock step with the MSC 93 of the lowest numbered node. This is clearly depicted in FIGURE 9, which shows a dotted line 196 having a slope directly proportional to the actual rate adjustment factor calculated by the receiving node using equation (2) and approximately equal to the slope of line 190. However, the dotted line 196 has a different Y-intercept than line 190, indicating that the time values produced are not equal, although they are incremented at the same rate. Thus, the receiving node must make an additional correction to the actual rate adjustment factor Ma calculated using equation (2). This corrected rate adjustment factor M1 is the factor used by the time synchronization circuit 43 of the receiving node to speed up or slow down incrementing of the receiving node's MSC 93 so that the time value of the receiving node's MSC 93 agrees with the time value of the lowest numbered node's MSC after receipt of two more synchronization packets. Therefore, the next time the FIR 86 of the receiving node is updated, the FIR 86 of the receiving node is set very close to the actual rate adjustment factor of the lowest numbered node. However, this requires predicting the time at which the receiving node will update its FIR 86 with the corrected rate adjustment factor M1 after reception and transmission of two more synchronization packets 64.
In order to predict when the FIR 86 will next be updated, precise control over when the corrected rate adjustment factor M1 is transferred from the MIR 84 to the FIR 86 of the time synchronization circuit 43 of the receiving node is necessary, and the transfer should occur as soon as possible after computation of the corrected rate adjustment factor M1. Obviously, the corrected rate adjustment factor M1 cannot be loaded into the MIR 84 until it has been computed. Therefore, a computation time interval must be set that is long enough to guarantee adequate time to make the necessary computations. A system wide constant for this time interval with respect to the lowest numbered node, i.e., with respect to transmission of synchronization packets, is initialized to a value CT. In the preferred embodiment of the present invention the constant computation interval is set to ten milliseconds. However, those of ordinary skill in the art will recognize that the constant computation interval is actually determined as a function of the speed of the processor calculating the corrected rate adjustment factor M1. Referring to FIGURE 9, this value can be represented by the equation:
CT = TU1 - T1 (3) wherein TU1 is the time at which the receiving node would update the FIR 86 of its time synchronization circuit 43 with the corrected rate adjustment factor M1 after transmission of the current synchronization packet.
Although a system wide constant computation time interval CT is established with respect to the lowest numbered node, each receiving node must compute its own such computation time interval, or local computation time interval, LCT. As shown in FIGURE 9, this value can be represented by the following equation: LCT = RU1 - R1 (4) wherein RU1 is the time at which the receiving node updates the FIR 86 of its time synchronization circuit 43 after reception of the current synchronization packet. By analogy to equation (1), the following relationship can be asserted:
CT ÷ Ma = LCT ÷ M0 (5)
From equation (5) it follows that:
LCT = (M0 * CT) ÷ Ma (6)
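Under the same illustrative assumptions as the earlier sketch (hypothetical names, floating-point arithmetic), equation (6) amounts to a one-line rescaling of the system wide constant CT into the receiving node's local time base:

    /* Equation (6): the local computation time interval LCT is the constant
     * computation time interval CT rescaled by the ratio of the receiver's
     * intermediary factor m0 to the sender's actual factor ma. */
    double local_computation_interval(double m0, double ma, double ct)
    {
        return (m0 * ct) / ma;              /* LCT = (M0 * CT) / Ma */
    }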
Thus, without actually determining the values TU1 and RU1, the local computation time interval LCT is calculated in block 154 using equation (6) and a predefined value for CT. In the present invention, the local computation time interval LCT is not only used to determine when the corrected rate adjustment factor M1 is to be transferred from the MIR 84 to the FIR 86 of the time synchronization circuit 43 of the receiving node, it is also used to determine the corrected rate adjustment factor M1 itself. As shown in FIGURE 9, TU3 equals the time at which the receiving node next updates its FIR 86 after transmission of two more synchronization packets 64, while RU3 is the time at which the receiving node updates its FIR 86 after receipt of two more synchronization packets. By definition, the following equation is asserted:
RU3 = TU3 (7) By definition of the local computation time interval LCT, the following equations can be asserted:
RU1 = R1 + LCT ; and (8)
TU1 = T1 + CT . (9)
Finally, since the time interval between synchronization packets is defined as SP, it follows that the time interval spanned by two synchronization packets is two times SP, i.e., the synchronization time interval SI. Therefore, the time interval between updating the FIR 86 of a receiving node after transmission of the current synchronization packet and its update after transmission of two more synchronization packets is also SI. Hence, the following equation can be asserted: TU3 = TU1 + SI (10) By analogy to equation (1) noted above, the following equation can be established:
(TU3 - TU1) ÷ Ma = (RU3 - RU1) ÷ M1 (11)
From this equation, it follows that:
M1 = Ma * (RU3 - RU1) ÷ (TU3 - TU1) (12)
Since it is known from equation (10) that TU3 - TU1 equals SI, it follows that:
M1 = Ma * (RU3 - RU1) ÷ SI (13)
In addition, since it is known from equation (7) that RU3 equals TU3, the following equation for the corrected rate adjustment factor M1 can ultimately be asserted:
M1 = Ma * (TU3 - RU1) ÷ SI (14)
Therefore, in block 154, the corrected rate adjustment factor M1 is calculated in accordance with equations (8), (9), (10) and (14). As can be recognized from FIGURE 9, if the MSC 93 of the receiving node is incremented at a rate determined by the corrected rate adjustment factor M1 (as represented by a line 198 in FIGURE 9), the MSC 93 will be incremented quickly enough to reach a time value approximately equal to that of the MSC 93 of the lowest numbered node within the reception of only two more synchronization packets. After the local computation time interval LCT and the corrected rate adjustment factor M1 are calculated, the values are then loaded into the ILR 76 and MIR 84 of the receiving node's time synchronization circuit 43, respectively, in block 156.
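Again for illustration only, equations (8), (9), (10) and (14) can be collected into a single C sketch of the block 154 computation; all names are hypothetical, and SI denotes the synchronization time interval:

    /* Equations (8), (9), (10) and (14): predict the next two FIR update
     * times and derive the corrected rate adjustment factor M1 that brings
     * the receiver's MSC into agreement with the lowest numbered node
     * within two more synchronization packets. */
    double corrected_rate_factor(double t1, double r1, double ct,
                                 double lct, double ma, double si)
    {
        double ru1 = r1 + lct;     /* (8)  receiver-side FIR update time   */
        double tu1 = t1 + ct;      /* (9)  sender-side FIR update time     */
        double tu3 = tu1 + si;     /* (10) update after two more packets   */
        return ma * (tu3 - ru1) / si;       /* (14) corrected factor M1    */
    }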
Returning to FIGURE 7B, after the ILR 76 has been loaded with the local computation time interval LCT and the MIR 84 has been set to the corrected rate adjustment factor M1, the logic proceeds to a block 158 where the ILCactive variable is set equal to one to indicate that the ILC 74 is now active and counting. Next, the SyncRxState variable is set equal to two, indicating that the receiving node will reinitialize the values used to compute the corrected rate adjustment factor M1 upon receipt of the next synchronization packet 64.
As discussed above, the corrected rate adjustment factor M1 is latched from the MIR 84 into the FIR 86 upon expiration of the local computation time interval LCT. The corrected rate adjustment factor is then added to the value stored in the FA 88 until the adder 91 overflows and increments the MSC 93. Thus, the MSC 93 will continue to be incremented at the rate dictated by the value stored in the FIR 86, e.g., the corrected rate adjustment factor M1, until a newly corrected rate adjustment factor is latched into the FIR 86. When the corrected rate adjustment factor M1 is latched into the FIR 86 of the time synchronization circuit 43 upon expiration of the local computation time interval LCT, a software interrupt is simultaneously provided for reinitializing the values that will be used upon reception of subsequent synchronization packets 64 to calculate a new corrected rate adjustment factor M1 and reset the ILC 74. The ILC interrupt is handled by the firmware routine depicted in FIGURE 8. After the interrupt occurs in block 180, the logic proceeds to a block 182 where the intermediary rate adjustment factor M0 is set equal to the corrected rate adjustment factor M1 and the corrected factor M1 is set equal to the actual factor Ma. The ILCactive variable is then set to zero, indicating that the ILC 74 is no longer active and counting. Accordingly, in block 184 the ILC 74 is reset and inhibited from counting until receipt of the next synchronization packet by resetting the flip-flop 78 using a software-initiated signal and thus clearing the ILC 74. The logic then ends in block 186.
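The rate adjustment datapath just described (FIR 86, FA 88, adder 91 and MSC 93) behaves as a fractional phase accumulator, and can be modeled in C as follows. This sketch is an assumption-laden behavioral model rather than the disclosed hardware; the 16-bit register widths are chosen arbitrarily for the example:

    #include <stdint.h>

    typedef struct {
        uint16_t fir;  /* fractional increment register (rate factor)   */
        uint16_t fa;   /* fractional accumulator                        */
        uint32_t msc;  /* master synchronization counter (time value)   */
    } rate_adjust_t;

    /* On each pulse of the internal clock, the adder sums the FIR into
     * the FA; the carry out of the addition increments the MSC, so a
     * larger rate adjustment factor speeds up the node's time value. */
    void on_clock_pulse(rate_adjust_t *ra)
    {
        uint32_t sum = (uint32_t)ra->fa + ra->fir;
        ra->fa = (uint16_t)sum;             /* keep fractional residue   */
        if (sum >> 16)                      /* adder overflow            */
            ra->msc++;                      /* increment time value      */
    }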
After processing of the current synchronization packet 64, the logic proceeds to a block 160 in FIGURE 7C to await another packet receipt event. Upon such receipt, the logic proceeds to a decision block 162 where it determines if the SyncRxState variable is equal to two. If not, reinitialization of the values used to compute the corrected rate adjustment factor M1 has just occurred. Therefore, the logic skips reinitialization, i.e., blocks 164 through 174, and returns to decision block 144 in FIGURE 7B to determine if the SyncRxState variable is equal to one. On the other hand, if the result of decision block 162 is positive, the logic proceeds to a decision block 164 where it determines if the received packet is a synchronization packet 64. If not, the logic proceeds to a decision block 166 where it determines whether the ILC 74 is presently active and counting, i.e., whether the ILCactive variable equals one. If so, the receiving node has processed a prior synchronization packet and loaded the corrected rate adjustment factor M1 into the MIR 84 of the time synchronization circuit 43; all that remains is for the local computation time interval LCT stored in the ILR 76 to expire. Therefore, the ILC 74 must be allowed to keep counting. If the ILCactive variable does not equal one, the ILC 74 is not active and counting. Therefore, in block 168 the ILC 74 is reset and inhibited from counting until receipt of the next synchronization packet. On the other hand, if the received packet is indeed a synchronization packet 64, the logic proceeds from decision block 164 to a block 170 to perform additional reinitialization. Specifically, in block 170 a number of variables, namely R0, T0, and SI, are reinitialized so that the process of determining a local computation time interval LCT and a corrected rate adjustment factor M1 can begin again upon receipt of the next synchronization packet (i.e., when the SyncRxState variable equals one).
In block 172 the ILC 74 of the time synchronization circuit 43 is reset and inhibited from counting until receipt of another synchronization packet 64 by resetting the flip-flop 78 and clearing the ILC 74. In block 174, the ILCactive variable is set equal to zero, indicating that the ILC is not counting. In addition, the SyncRxState variable is set equal to one, indicating that the next synchronization packet to be received is to be processed so as to produce a newly corrected rate adjustment factor M1 and a new local computation time interval LCT. The logic then returns to block 140 in FIGURE 7B to await another packet receipt event. Consequently, blocks 140 through 172 are repeated as long as the receiving node remains active. Thus, every other synchronization packet is processed to determine a newly corrected rate adjustment factor M1 and a new local computation time interval LCT. It will be appreciated from the foregoing description that the corrected rate adjustment factor M1 converges upon the actual rate adjustment factor used by the lowest numbered node to increment its MSC 93 within only a few synchronization packets. However, the receiving node continues to recalculate the corrected rate adjustment factor M1 so as to continuously compensate for clock drift and variations in frequency, which may vary greatly over time. While the preferred embodiment of the invention has been illustrated and described, it will be appreciated that various changes can be made therein without departing from the spirit and scope of the invention.

Claims

The embodiments of the invention in which an exclusive property or privilege is claimed are defined as follows:
1. A method for synchronizing time values in a network comprising a plurality of nodes, each node maintaining a time value incremented in response to pulses from an internal clock, the method comprising:
(a) causing a unique node to transmit a stream of synchronization packets to the other nodes connected to the network, each synchronization packet of the stream containing a time of transmission of the synchronization packet by the unique node;
(b) for each node receiving the stream of synchronization packets, (i) storing a time of reception for each synchronization packet of the stream received by the node; and
(ii) processing the stream of synchronization packets to determine a rate adjustment factor as a function of the time of transmission contained in each synchronization packet and the time of reception for each synchronization packet, wherein the rate adjustment factor adjusts the rate at which the time value maintained by the node is incremented so that the time value maintained by the node becomes synchronized to the time value maintained by the unique node.
2. The method of Claim 1, wherein causing a unique node of the network to transmit a first stream of synchronization packets to the other nodes of the network comprises:
(a) determining which node of the network is the unique node of the network based on a predetermined protocol;
(b) causing the unique node to generate synchronization packets, wherein each synchronization packet contains the time of transmission of the synchronization packet; and
(c) causing the unique node to transmit the synchronization packets to the other nodes of the network.
3. The method of Claim 2, wherein processing the stream of synchronization packets comprises: for a current synchronization packet being processed by the receiving node, determining if the current synchronization packet is the first synchronization packet to be received by the receiving node, and if so:
(a) initializing an intermediary rate adjustment factor to a predetermined value; and
(b) initializing a synchronization time interval as a function of a time interval between transmission of synchronization packets.
4. The method of Claim 3, wherein processing the stream of synchronization packets further comprises: for the current synchronization packet being processed by the receiving node, determining if the rate adjustment factor was computed using a prior synchronization packet, and if so:
(a) reinitializing the intermediary rate adjustment factor to the rate adjustment factor;
(b) adjusting the time of transmission stored in the synchronization packet to compensate for propagation delay between the unique node and the receiving node; and
(c) reinitializing the synchronization time interval as a function of the time interval between transmission of synchronization packets.
5. The method of Claim 4, wherein processing the stream of synchronization packets further comprises: if the rate adjustment factor was not computed using the prior synchronization packet,
(a) determining an actual rate adjustment factor as a function of the time of transmission and the time of reception of the current synchronization packet, and the time of transmission and the time of reception of the prior synchronization packet, wherein the actual rate adjustment factor represents the rate adjustment factor of the unique node;
(b) determining a local computation time interval for the receiving node during which the rate adjustment factor is to be computed; and
(c) computing the rate adjustment factor as a function of the actual rate adjustment factor, the local computation time interval and the synchronization time interval.
6. The method of Claim 5, wherein determining the local computation time interval comprises:
(a) initializing a constant computation time interval for the plurality of nodes; and
(b) computing the local computation time interval as a function of the intermediary rate adjustment factor determined by processing the prior synchronization packet, the actual rate adjustment factor and the constant computation time interval.
7. The method of Claim 2, wherein the rate adjustment factor of the unique node is initialized to a predetermined value upon startup of the unique node, if the unique node is the first unique node to transmit synchronization packets to the other nodes of the network.
8. The method of Claim 6, wherein the rate adjustment factor of the unique node is reinitialized to the actual rate adjustment factor determined by the unique node prior to becoming the unique node, if the unique node is not the first unique node to transmit synchronization packets to the other nodes of the network.
9. A method for synchronizing time values maintained by each node of a plurality of nodes connected to a communication medium, each node incrementing the time value it maintains in response to pulses from an internal clock, the method comprising:
(a) causing each node to periodically determine if it is a unique node based on a predetermined protocol;
(b) causing the unique node to generate and transmit a stream of synchronization packets to the other nodes connected to the communication medium, each of the synchronization packets including a transmission time value;
(c) causing each of the nodes receiving the stream of synchronization packets to store a reception time value for each synchronization packet of the stream and process the stream of synchronization packets to determine a rate adjustment factor based on the transmission time values included in the synchronization packets and the stored reception time values for the synchronization packets; and (d) causing each node receiving the stream of synchronization packets to adjust a rate of incoming clock pulses used to increment the time value maintained by the node by the rate adjustment factor.
10. The method of Claim 9, wherein the synchronization packets making up the stream of synchronization packets generated and transmitted by the unique node are generated at predefined synchronization packet time intervals.
11. The method of Claim 10, wherein causing each of the nodes receiving the stream of synchronization packets to process the stream of synchronization packets comprises processing alternating synchronization packets to determine the rate adjustment factor.
12. The method of Claim 11, wherein processing alternating synchronization packets to determine the rate adjustment factor comprises:
(a) determining an actual rate adjustment factor as a function of the transmission time value and the reception time value of the alternating synchronization packet, and the transmission time value and the reception time value of a prior synchronization packet, wherein the actual rate adjustment factor represents the rate adjustment factor of the unique node;
(b) determining a local computation time interval for the receiving node during which the rate adjustment factor is to be computed; and
(c) computing the rate adjustment factor as a function of the actual rate adjustment factor, the local computation time interval and the predefined synchronization time intervals.
13. The method of Claim 12, wherein the prior synchronization packet is the first packet to be received by the node and the node processes the prior synchronization packet by:
(a) initializing an intermediary rate adjustment factor to a predetermined value;
(b) adjusting the transmission time value to compensate for propagation delay between the unique node and the receiving node; and
(c) resetting the reception time value to the adjusted transmission time value.
14. The method of Claim 13, wherein the prior synchronization packet is not the first packet to be received by the node and the node processes the prior synchronization packet by:
(a) reinitializing the intermediary rate adjustment factor to the rate adjustment factor; and
(b) adjusting the transmission time value contained in the prior synchronization packet to compensate for propagation delay between the unique node and the receiving node.
15. The method of Claim 14, wherein determining the local computation time interval comprises:
(a) initializing a constant computation time interval; and
(b) computing the local computation time interval as a function of the intermediary rate adjustment factor determined by processing the prior synchronization packet, the actual rate adjustment factor and the constant computation time interval.
16. The method of Claim 8, wherein the rate adjustment factor of the unique node is initialized to a predetermined value upon startup of the unique node, if the unique node is the first unique node to transmit synchronization packets to the other nodes connected to the communication medium.
17. The method of Claim 16, wherein the rate adjustment factor of the unique node is reinitialized to the actual rate adjustment factor determined by the unique node prior to becoming the unique node, if the unique nodes is not the first unique node to transmit synchronization packets to the other nodes connected to the communication medium.
18. Apparatus, included in each of the nodes of a plurality of nodes communicating with one another via a network, for synchronizing time values maintained by each node of the network, each node having a memory, an internal clock and a processor electronically coupled to the clock for: (a) causing the node to determine if the node is a unique node based on a predetermined protocol and, if the node is a unique node, causing the node to generate and transmit synchronization packets to the other nodes connected to the network; and (b) processing any received synchronization packets in order to adjust the rate at which the time value maintained by the node is incremented in response to pulses from the internal clock so that the time value maintained by the node is synchronized to the time value maintained by the unique node, the apparatus comprising:
(a) a synchronized time component for maintaining the time value of the node;
(b) a rate adjustment component electronically coupled to the internal clock and the synchronized time component, the rate adjustment component receiving a rate adjustment factor computed by the processor, the rate adjustment factor being used by the rate adjustment component to adjust the rate at which the time value maintained by the synchronized time component is incremented; and
(c) a timer component electronically coupled to the rate adjustment component, the timer component receiving a local computation time interval computed by the processor, the timer component transferring the rate adjustment factor to the rate adjustment component upon expiration of the local computation time interval.
19. The apparatus of Claim 18, wherein the synchronized time component comprises:
(a) a master synchronization counter containing the time value for the node; and
(b) a time snapshot register into which the time value stored in the master synchronization counter is copied upon transmission of a synchronization packet, if the node is the unique node, and upon receipt of a synchronization packet if the node is not the unique node.
20. The apparatus of Claim 19, wherein the synchronized time component further comprises:
(a) a scheduling target register which receives a scheduled time value from the processor; and
(b) a scheduling comparator electronically coupled to the scheduling target register and the master synchronization counter, the scheduling comparator comparing the scheduled time value stored in the scheduling target register and the time value stored in the master synchronization counter such that when the scheduled time value and the time value stored in the master synchronization counter are equal, the scheduling comparator sends an interrupt signal to the processor.
21. The apparatus of Claim 20, wherein the rate adjustment component comprises:
(a) a fractional increment register that receives the rate adjustment factor;
(b) a fractional accumulator which receives clock pulses from the internal clock; and
(c) an adder electronically coupled to the fractional increment register, the fractional accumulator and the master synchronization counter, the adder computing a sum of a value stored in the fractional accumulator with the rate adjustment factor stored in the fractional increment register upon each pulse received from the internal clock by the fractional accumulator, the adder generating a signal which increments the master synchronization counter when the sum of the value stored in the fractional accumulator and the rate adjustment factor overflows the adder.
22. The apparatus of Claim 21, wherein the timer component comprises:
(a) an increment load register which receives the local computation time interval computed by the processor;
(b) an increment load counter which contains a counting value that begins incrementing upon transmission of a synchronization packet if the node is the unique node, and upon reception of a synchronization packet if the node is not the unique node; and
(c) a comparator which compares the counting value stored in the increment load counter to the local computation time interval stored in the increment load register such that when the counting value equals the local computation time interval, the comparator generates a signal which transfers the rate adjustment factor into the fractional increment register of the rate adjustment component.
23. The apparatus of Claim 22, wherein the rate adjustment component further comprises a master increment register which receives the rate adjustment factor computed by the processor.
24. The apparatus of Claim 23, wherein the rate adjustment factor is transferred from the master increment register to the fractional increment register of the rate adjustment component when the counting value stored in the increment load counter is equal to the local computation time interval stored in the increment load register of the timer component.
25. The apparatus of Claim 23, wherein the rate adjustment factor received by the fractional accumulator is computed by the processor by causing each of the nodes receiving the stream of synchronization packets to store a reception time value for each synchronization packet of the stream in memory and process the stream of synchronization packets to determine the rate adjustment factor based on the transmission time values included in the synchronization packets and the stored reception time values for the synchronization packets.
26. The apparatus of Claim 25, wherein each of the nodes receiving synchronization packets processes alternating synchronization packets to determine the rate adjustment factor by:
(a) determining an actual rate adjustment factor representing the rate adjustment factor of the unique node, as a function of the transmission time value and the reception time value of the alternating synchronization packet, and the transmission time value and the reception time value of a prior synchronization packet; and
(b) computing the rate adjustment factor as a function of the actual rate adjustment factor, the local computation time interval and the predefined synchronization time intervals.
27. The apparatus of Claim 26, wherein the prior synchronization packet is the first packet to be received by the node and the node processes the prior synchronization packet by:
(a) reinitializing an intermediary rate adjustment factor to a predetermined value;
(b) adjusting the transmission time value to compensate for propagation delay between the unique node and the receiving node; and
(c) resetting the reception time value stored in memory to the adjusted transmission time value.
28. The apparatus of Claim 27, wherein the prior synchronization packet is not the first packet to be received by the node and the node processes the prior synchronization packet by:
(a) reinitializing the intermediary rate adjustment factor to the rate adjustment factor; and (b) adjusting the transmission time value contained in the prior synchronization packet to compensate for propagation delay between the unique node and the receiving node.
29. The apparatus of Claim 28, wherein the local computation time interval is determined by:
(a) initializing a constant computation time interval to a predefined value; and
(b) computing the local computation time interval as a function of the intermediary rate adjustment factor determined by processing the prior synchronization packet, the actual rate adjustment factor and the constant computation time interval.
30. The apparatus of Claim 29, wherein the rate adjustment factor of the unique node is initialized to a predetermined value and stored in the fractional increment register upon startup of the unique node, if the unique node is the first unique node to transmit synchronization packets to the other nodes connected to the communication medium.
31. The apparatus of Claim 30, wherein the rate adjustment factor of the unique node is reinitialized to the actual rate adjustment factor determined by the unique node prior to becoming the unique node and is stored in the fractional increment register of the node when it becomes the unique node, if the unique node is not the first unique node to transmit synchronization packets to the other nodes connected to the communication medium.
PCT/US1997/013555 1996-08-02 1997-08-01 Method and apparatus for network clock synchronization WO1998006194A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US69148796A 1996-08-02 1996-08-02
US08/691,487 1996-08-02

Publications (1)

Publication Number Publication Date
WO1998006194A1 true WO1998006194A1 (en) 1998-02-12

Family

ID=24776730

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1997/013555 WO1998006194A1 (en) 1996-08-02 1997-08-01 Method and apparatus for network clock synchronization

Country Status (1)

Country Link
WO (1) WO1998006194A1 (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4713807A (en) * 1985-05-24 1987-12-15 Stc Plc Intelligence transmission system of the local area network type
US4893318A (en) * 1988-01-26 1990-01-09 Computer Sports Medicine, Inc. Method for referencing multiple data processors to a common time reference
US5023871A (en) * 1988-04-28 1991-06-11 Hitachi, Ltd. Method of controlling the operation of stations in a ring network
US4939752A (en) * 1989-05-31 1990-07-03 At&T Company Distributed timing recovery for a distributed communication system
EP0450879A1 (en) * 1990-04-04 1991-10-09 Hunting Communication Technology Limited Ring communication system
EP0722233A2 (en) * 1994-12-21 1996-07-17 Hewlett-Packard Company Timing in a data communications network

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2341749A (en) * 1998-09-19 2000-03-22 Nokia Telecommunications Oy Digital network synchronisation
GB2356774A (en) * 1999-11-26 2001-05-30 Roke Manor Research Synchronisation of base stations
US7613212B1 (en) * 2003-06-10 2009-11-03 Atrica Israel Ltd. Centralized clock synchronization for time division multiplexed traffic transported over Ethernet networks
EP2034642A1 (en) * 2007-09-07 2009-03-11 Siemens Aktiengesellschaft Method for transmitting synchronisation messages in a communications network
US7848360B2 (en) 2007-09-07 2010-12-07 Siemens Aktiengesellschaft Method for transmitting synchronization messages in a communication network
CN111343039A (en) * 2018-12-18 2020-06-26 西蒙兹精密产品公司 Distributed time synchronization protocol for asynchronous communication systems
CN117289754A (en) * 2023-08-22 2023-12-26 北京辉羲智能科技有限公司 Time-synchronous chip architecture and software control method thereof

Similar Documents

Publication Publication Date Title
KR100614424B1 (en) Network Node Synchronization Method
US7441048B2 (en) Communications system and method for synchronizing a communications cycle
US6032261A (en) Bus bridge with distribution of a common cycle clock to all bridge portals to provide synchronization of local buses, and method of operation thereof
US6665317B1 (en) Method, system, and computer program product for managing jitter
EP1198085B1 (en) Cycle synchronization between interconnected sub-networks
JP3698074B2 (en) Network synchronization method, LSI, bus bridge, network device, and program
US5386542A (en) System for generating a time reference value in the MAC layer of an ISO/OSI communications model among a plurality of nodes
WO1998006194A1 (en) Method and apparatus for network clock synchronization
US8145787B1 (en) Adaptive bandwidth utilization over fabric links
US6952137B2 (en) System and method for automatic parameter adjustment within a phase locked loop system
US20050100006A1 (en) Adaptive clock recovery
US7433986B2 (en) Minimizing ISR latency and overhead
KR100216374B1 (en) Apparatus and method of operation administration and maintenance in atm system
JP2001308868A (en) Ieee1394 bus connection device, and medium and information aggregate
JP2997492B2 (en) Network system
GB2392589A (en) Adaptive clock recovery using a packet delay variation buffer and packet count
WO2001022202A1 (en) Method for synchronizing clocks in electronic units connected to a multi processor data bus
KR100295412B1 (en) Apparatus for producing data request signal of bit unit
JP3214468B2 (en) Broadcast system, broadcast control method and recording medium
JPS637063A (en) Data backup system
CN115065688A (en) Data transmission method, device, equipment and computer readable storage medium
JP2003037620A (en) Method, equipment and system for data communication
EP0639910A1 (en) Resequencing system
JPH10207846A (en) Load controller
JPH0433433A (en) Illegal address management method

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CA JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 98508086

Format of ref document f/p: F

NENP Non-entry into the national phase

Ref country code: CA

122 Ep: pct application non-entry in european phase