WO1994029983A1 - Performance monitoring and failure isolation in a point-to-multipoint communication network - Google Patents

Performance monitoring and failure isolation in a point-to-multipoint communication network

Info

Publication number
WO1994029983A1
Authority
WO
WIPO (PCT)
Prior art keywords
error
remote units
errors
distribution
isolating
Prior art date
Application number
PCT/US1993/011048
Other languages
French (fr)
Inventor
Adam Opoczynski
Original Assignee
Adc Telecommunications, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Adc Telecommunications, Inc. filed Critical Adc Telecommunications, Inc.
Priority to AU56686/94A priority Critical patent/AU687370B2/en
Priority to DE69326257T priority patent/DE69326257T2/en
Priority to EP94902250A priority patent/EP0702870B1/en
Publication of WO1994029983A1 publication Critical patent/WO1994029983A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00: Arrangements for monitoring or testing data switching networks
    • H04L 1/00: Arrangements for detecting or preventing errors in the information received
    • H04L 1/22: Arrangements for detecting or preventing errors in the information received using redundant apparatus to increase reliability
    • H04L 1/24: Testing correct operation
    • H04L 2001/0092: Error control systems characterised by the topology of the transmission link
    • H04L 2001/0093: Point-to-multipoint

Abstract

A system monitors the performance of a communications network and isolates the location of equipment failures therein through analysis of performance error data. The system passively monitors the performance of the network, operating in the background during normal data and/or voice transmission. The head end generates and inserts multiple error codes, one for each remote unit, into the downstream traffic path. Each remote unit extracts and decodes its respective error code to determine whether a downstream error occurred. Each remote unit then calculates a new error code based on the result of the downstream decode and the respective upstream data message. The head end receives the upstream error codes from all the remote units, decodes them, and accumulates the error data from all remote units over a period of time. An error distribution is generated, and an analysis is performed thereon to isolate the location of error causing equipment failures. The information thus obtained can then be used to initiate facility and equipment protection and/or appropriate maintenance procedures. The invention results in an increased quality of transmission with minimal network down time and minimal impact on overall system performance.

Description

PERFORMANCE MONITORING AND FAILURE ISOLATION IN A POINT-TO-MULTIPOINT COMMUNICATION NETWORK
Field of the Invention
This invention relates generally to the field of communications systems, and more particularly to a system for monitoring the performance of a passive distribution network connected in a point-to-multipoint configuration and for isolating the location of equipment failures therein.
Background of the Invention
Deployment of passive distribution systems, such as optical fiber, in the local telecommunications loop has opened opportunities for new types of services. Most of the new services target data transmission instead of voice. The major difference in the network requirements for digital data transmission versus voice transmission is in the quality of the transmission. The demand for high quality transmission and low outage time is forcing the new generation of systems to protect equipment as well as the passive distribution facilities.
The term "performance monitoring" is related to the quality of transmission over a network. One measure of a network's performance or transmission quality is the bit error rate (BER) . Because of imperfections in the network and environmental conditions some data errors inevitably occur. However, equipment failures such as breakages, power loss, etc., can also cause data transmission errors to be introduced. BER is a measure of the number of errors which occur in a certain number of bits of transmission. For example, in a typical network the maximum acceptable BER is 10"10. This value means that one transmission error is allowed every 1010 bits. If the BER is greater than 10"10 the quality of transmission is not acceptable.
In order to determine the BER, means must be provided to detect errors that occur. Another desirable feature is a mechanism to pinpoint the location of the equipment failure which caused the errors so that equipment and protection facility switching can be achieved, or appropriate maintenance procedures performed. A point-to-multipoint passive distribution network configuration consists of a head end connected to a single data path which splits into multiple branches, each branch associated with a unique remote unit. Communication between the head end and the multiple remote units is multiplexed on a passive distribution network (PDN) and each remote unit is programmed to extract and send data in a unique time slot. This means that all remote units share the single, or "feeder", section of the network, and that each has a dedicated branch, or "distribution", section of the network associated with it. One exemplary passive optical system is described in U.S. Patent Number 4,977,593, to Ballance, issued December 11, 1990 and assigned to British Telecommunications, which is incorporated herein by reference.
Performance monitoring of such a configuration represents a major challenge because the indication of an error in the error code does not itself identify the particular equipment which is causing the errors. Because the multiple remote units share the feeder section of the PDN, in existing systems it is not possible to identify whether an error was introduced in the shared feeder, in a dedicated distribution branch or in one of the remote units themselves. Typically, the location of an equipment failure is determined by downing all or part of the system and performing interactive diagnostic tests between the head end and the multiple remote units. This method, however, results in an undesirable and severe degradation in performance of the system as a whole. There is therefore a need in the art for a means of monitoring the performance of a PDN connected in a point-to-multipoint configuration, which can isolate the location of equipment failures with minimal impact on overall system performance.
Summary of the Invention
To achieve the goals described above, the present invention provides a system which monitors the performance and isolates the location of equipment failures in a PDN network arranged in a point-to- multipoint configuration. The system passively monitors the performance of the network, operating in the background during normal data and/or voice transmission. The system unintrusively monitors the system for errors and isolates whether an equipment failure responsible for generating the errors is located in the shared feeder section of the PDN, in one of the dedicated distribution sections of the PDN or in one of the remote units themselves.
To isolate the location of equipment failures in the network, the present invention recognizes that different equipment failures in a point-to-multipoint configuration result in unique error distributions as seen across all remote units over time. The invention uses these error distributions to isolate the location of error causing equipment failures in the network.
To generate the error distributions, the head end generates and inserts multiple downstream error codes, one for each remote unit, into the downstream traffic path. The remote units extract and decode their respective error code to determine whether a downstream error occurred. Each remote unit then sets an internal error flag indicating the result of the downstream decode. Upstream error codes are generated by each remote unit from the respective upstream data message. The upstream error codes are then set to indicate an error if a downstream error was received. The head end receives the upstream error codes from all the remote units, decodes them, and accumulates the error data from all remote units over a predetermined period of time. An error distribution representing the distribution of errors occurring across the entire system is generated from the accumulated error data. The error distribution is analyzed via software data processing methods to identify the type of error distribution occurring and to isolate therefrom the location of equipment failures which introduce the errors into the system. The location is isolated to either the shared feeder section of the PDN, to one of the distribution sections of the PDN or to one of the remote units. The information thus obtained can then be used to initiate facility and equipment protection and/or maintenance procedures. The system therefore results in an increased quality of transmission with minimal system down time and minimal impact on overall system performance.
Brief Description of the Drawings
In the drawings, where like numerals refer to like elements throughout the several views:
Figure 1 shows a representative telecommunications system in block diagram form;
Figure 2 shows a simplified block diagram of a representative telecommunications system, showing the locations of the feeder and distribution sections of the PDN network;
Figure 3 shows a block diagram of the relevant portions of HDT 300;
Figure 4 shows a block diagram of the relevant portions of RU 600;
Figure 5 shows a first type of error distribution which can occur in a point-to-multipoint system configuration;
Figure 6 shows a second type of error distribution;
Figure 7 shows a third type of error distribution;
Figure 8 shows a fourth type of error distribution;
Figure 9 shows a flow diagram of the error data processing methods used to isolate the location of equipment failures; and
Figure 10 shows a flow diagram of the polling scheme used to determine whether an error was caused by a failure in the upstream or downstream data traffic path.
Detailed Description of the Preferred Embodiment
In the following detailed description of the preferred embodiment, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration a specific embodiment in which the invention may be practiced. This embodiment is described in sufficient detail to enable one skilled in the art to make and use the invention. It will be understood that other embodiments may be utilized and that structural changes may be made without departing from the spirit and scope of the present invention. The following detailed description is therefore not to be taken in a limiting sense, and the scope of the invention is to be defined by the appended claims.
A general description of a telecommunications system will now be given. Figure 1 shows a representative telecommunications system 100 in block diagram form. It shall be understood that other configurations are possible without departing from the scope of the present invention. The system includes a head end, or Host Digital Terminal (HDT) 300, which acts as an interface between a Local Digital Switch (LDS) 200 and multiple Passive Distribution Networks (PDN's) 500. In the preferred embodiment, HDT 300 receives and transmits signals to LDS 200 using the well-known transmission format described in Bellcore document TR-TSY-000008, known as the TR-8 transmission format. The primary signal carried over this format is a DS1 signal, and the transmission system is the T1 digital transmission system. A DS1 signal consists of 24 groups of multiplexed eight-bit samples (DS0's) and one framing bit. Each eight-bit sample or DS0 represents an individual channel (a telephone conversation, for example). It shall be understood, however, that the performance monitoring scheme of the present invention could also be used in systems utilizing other interface formats such as OC-X, STS-X, etc., or a multitude of other transmission formats, without departing from the scope of the present invention.
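As a worked check of that framing arithmetic (a sketch: the 8,000 frames-per-second sampling rate is standard T1 practice but is not stated in the text):

```python
# Worked arithmetic for the DS1 frame described above. The 8000
# frames-per-second rate is an assumption from standard T1 practice,
# not a figure taken from the patent text.

CHANNELS = 24           # DS0 channels per DS1 frame
BITS_PER_SAMPLE = 8     # one eight-bit sample per channel
FRAMING_BITS = 1        # one framing bit per frame
FRAMES_PER_SECOND = 8000

frame_bits = CHANNELS * BITS_PER_SAMPLE + FRAMING_BITS
line_rate = frame_bits * FRAMES_PER_SECOND
print(frame_bits)  # 193 bits per frame
print(line_rate)   # 1544000 bits/s, the familiar 1.544 Mb/s T1 rate
```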
Each HDT 300 includes termination points for the DS1 lines coming into the HDT 300, circuitry for converting from TR-8 format to PDN format, and circuitry which routes the incoming channels to the appropriate remote unit. The routed channels are then multiplexed to form the downstream traffic signals which are sent over PDN's 500 to their respective destinations.
The downstream optical signals are transmitted over PDN's 500, which in the case of a passive optical network consist of a network of optical fiber and passive optical splitters, and which terminate at a series of Remote Units (RU's) 600, which in the preferred optical system embodiment are Optical Network Units, or ONU's. In the preferred embodiment, which uses optical fiber as the transmission medium, each HDT 300 can interface to up to 128 RU's 600. However, it shall be understood that any greater or lesser number could be used without departing from the scope of the present invention. Each RU 600 includes terminations for copper and fiber cables, electronics for signal conversion between PDN media (e.g. optical fiber, coaxial cable or other passive transmission media) and subscriber in-house wiring, and electronics for multiplexing, digital-to-analog conversion, signalling and testing. RU's 600 can be physically located at each individual subscriber premise or in a curbside unit which is shared between multiple subscribers (as is shown in Figure 1), in which case each RU would house the interface to multiple subscriber telephone lines.
Figure 2 shows a simplified view of a representative telecommunications system connected in a point-to-multipoint configuration, including an HDT 300, PDN 500, and multiple RU's 600. Each PDN 500 is comprised of a shared feeder 520 which carries multiplexed data messages between HDT 300 and all RU's 600. Each RU has an associated distribution path 540, or branch of the PDN, dedicated to delivering and transmitting data messages to and from that RU.
Downstream 522 (from HDT 300 to RU's 600) and upstream 524 (from RU's 600 to HDT 300) data messages between HDT 300 and all RU's 600 are multiplexed over the shared feeder 520 section of the PDN 500. The present invention can be used with either Time Division Multiplexing (TDM), Code Division Multiplexing (CDM), or any other appropriate multiplexing scheme. For the sake of illustration the preferred embodiment will be described with respect to TDM. Splitter 530 splits PDN 500 into multiple branches or distribution 540 sections, each associated with a particular RU 600. The RU's are programmed to extract downstream messages and insert upstream messages into the appropriate TDM slot for that RU as is well-known in the art. HDT 300 is programmed to insert data bound for a particular RU into the appropriate downstream time slot. Upstream messages received at HDT 300 from the RU's 600 are identified as to their source by the location of the messages in the TDM frame.
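A minimal sketch of this TDM slot discipline follows; the slot width, frame layout, and helper names are invented for illustration and are not taken from the patent:

```python
# Minimal sketch of the TDM slot discipline described above: HDT 300
# writes each RU's downstream message into that RU's slot, and each RU
# reads only its own slot. Slot width and frame layout are assumptions.

SLOT_BYTES = 8    # assumed payload width per RU slot
NUM_RUS = 128     # up to 128 RU's per HDT in the preferred embodiment

def insert_slot(frame: bytearray, slot: int, payload: bytes) -> None:
    """HDT side: place a downstream message into the slot for one RU."""
    frame[slot * SLOT_BYTES:(slot + 1) * SLOT_BYTES] = payload.ljust(SLOT_BYTES, b"\x00")

def extract_slot(frame: bytes, slot: int) -> bytes:
    """RU side: read only the slot assigned to this RU."""
    return frame[slot * SLOT_BYTES:(slot + 1) * SLOT_BYTES]

frame = bytearray(NUM_RUS * SLOT_BYTES)
insert_slot(frame, slot=56, payload=b"hello")
assert extract_slot(bytes(frame), slot=56).rstrip(b"\x00") == b"hello"
```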
Performance monitoring in a point-to-multipoint system such as that shown in Figures 1 and 2 represents a major challenge. Equipment failures in the PDN network can occur which cause errors to be introduced in the downstream and upstream data messages. Each RU in a point-to-multipoint system does not have a dedicated communication link to HDT 300. Instead, all RU's have a common path to HDT 300, that is, shared feeder 520. This makes it difficult in a point-to-multipoint network configuration to identify the part of the network in which the equipment failure is located. Specifically, it is not readily discernible from the error code alone whether a particular error or group of errors was introduced by shared feeder 520, by one of the dedicated distribution sections 540 of PDN 500, or by one of the RU's themselves.
The present system provides a nonintrusive means to passively monitor the performance of a PDN system and to isolate the location of equipment failures in a PDN system, with minimal impact on the overall performance of the network. To do so, the present system recognizes and makes use of the fact that different equipment failures and the errors they produce manifest themselves in unique error distributions across all RU's over time. For example, a certain type of error distribution reveals that an equipment failure is located in the shared feeder section of the PDN, a different type of error distribution reveals that an equipment failure is located in one of the dedicated distribution sections of the PDN, and a still different type of error distribution reveals that a failure lies within one of the RU's themselves.
Referring now to Figure 3, the HDT hardware used to generate and analyze the error distributions will now be described. To generate the error distribution, HDT 300 includes an error code generator 310, which generates multiple error codes, one for each RU, which are inserted into the downstream traffic path via multiplexor 320. Each RU has a unique error code associated with it, and extracts and decodes the appropriate error code from the downstream traffic path. Each downstream error code corresponds to a data packet or message sent to an RU in the downstream traffic path and is generated using conventional error code generating techniques. It shall be understood that the present invention is not limited to the use of any particular type of error code; however, some example error codes known in the art which can be used with the present invention include parity, Cyclic Redundancy Check (CRC), and Single Error Correction Double Error Detection (SECDED) codes, among others.
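As an illustration, the sketch below uses the simplest of those example codes, a single even-parity bit per message; the message layout and function names are invented, and the patent deliberately leaves the code type open:

```python
# Minimal sketch of downstream error code generation using single-bit
# even parity, one of the example codes named above. Message framing
# and all names here are assumptions.

def parity_bit(message: bytes) -> int:
    """Even parity over all bits: 0 if the count of 1-bits is even."""
    ones = sum(bin(byte).count("1") for byte in message)
    return ones & 1

def encode_downstream(messages: dict[int, bytes]) -> dict[int, tuple[bytes, int]]:
    """HDT side: attach one error code per RU message, as error code
    generator 310 does before multiplexor 320 inserts them."""
    return {ru: (msg, parity_bit(msg)) for ru, msg in messages.items()}

codes = encode_downstream({56: b"\x01\x02", 72: b"\xff"})
print(codes[72])  # (b'\xff', 0) since eight 1-bits give even parity
```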
Figure 4 shows the relevant hardware of an RU 600. RU 600 receives the downstream traffic and the respective error code is decoded by decoder 610. Error code generator 640 receives the upstream data messages from the subscribers and generates a corresponding upstream error code using any of the known error code techniques. Error code generator 640 also receives the result of the downstream decode and modifies the generated upstream error code to indicate an error if a downstream error was received. Mux 630 inserts the upstream error code into a position in the upstream traffic path that is unique to that RU.
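A minimal sketch of that RU-side behavior, again assuming the single-parity-bit code and invented names:

```python
# Minimal sketch of the RU behavior described above: decoder 610 checks
# the downstream code, and error code generator 640 folds the downstream
# result into the upstream code so HDT 300 sees an error for this RU.
# The parity code and all names are assumptions.

def parity_bit(message: bytes) -> int:  # as in the HDT sketch above
    return sum(bin(b).count("1") for b in message) & 1

def ru_process(downstream_msg: bytes, downstream_code: int,
               upstream_msg: bytes) -> tuple[int, bool]:
    """Return (upstream_error_code, internal_error_flag)."""
    downstream_error = parity_bit(downstream_msg) != downstream_code
    upstream_code = parity_bit(upstream_msg)
    if downstream_error:
        upstream_code ^= 1  # corrupt the code so the HDT decodes an error
    return upstream_code, downstream_error  # flag kept for later polling
```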
Referring again to Figure 3, the upstream error codes from each of the RU's are received at HDT 300 and decoded by decoder 340. HDT 300 includes processor 330, which receives and accumulates the upstream decode information from all the RU's over a specified period of time. Software data processing techniques shown and described below with respect to Figure 9 are used to generate an error distribution from the accumulated error data. Analysis and interpretation of the resulting error distribution via the software data processing techniques shown in Figure 9, as discussed below, reveals the number and frequency of errors that occurred, and identifies the relative type, or "shape", of the resulting error distribution. Depending upon the type of error distribution identified, the location of the equipment failure is isolated to either the shared feeder 520 section of PDN 500, one of the distribution 540 sections of PDN 500, or one of the RU's 600.
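One plausible form of that accumulation step is sketched below; the record format, observation window, and per-decode bit count are assumptions, not the patent's:

```python
# Minimal sketch of the accumulation performed by processor 330: tally
# per-RU error counts from decoder 340 over the observation window and
# normalize to a BER estimate. The (ru, errored) record format and the
# bits observed per decode are assumptions.

from collections import Counter

def error_distribution(observations: list[tuple[int, bool]],
                       bits_per_observation: int = 193) -> dict[int, float]:
    """observations: (ru_number, errored) pairs from decoder 340."""
    errors = Counter(ru for ru, errored in observations if errored)
    seen = Counter(ru for ru, _ in observations)
    return {ru: errors[ru] / (seen[ru] * bits_per_observation)
            for ru in sorted(seen)}
```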
To isolate the location of equipment failures, the present invention makes use of the discovery that four commonly occurring equipment failures in a point-to-multipoint PDN configuration result in four different types, or "shapes", of error distributions as seen across all RU's over time.
The process by which the present system analyzes and isolates the location of equipment failures in a point-to-multipoint configured network will now be described with respect to Figures 5-8 and the flow diagram shown in Figure 9. Figures 5-8 show representative error distributions associated with four commonly occurring equipment failures in a point-to-multipoint PDN network. It shall be understood that the example error distributions shown in Figures 5-8 are presented herein to show the general shape of the error distributions under certain equipment failure conditions and are not necessarily to scale.
The control software shown in flow diagram form in Figure 9 runs in processor 330 (shown in Figure 3). As shown in the flow diagram of Figure 9, the first processing step in the present system is to receive user settable parameters. In the preferred embodiment of the present invention, certain parameters are settable by the user to achieve the desired degree of accuracy required with a particular network application. For example, the minimum acceptable BER and an alarm triggering BER are both settable by the user for customizing the present system for a particular user or network needs and requirements. Assume for purposes of discussion, and not by way of limitation, that the minimum acceptable Bit Error Rate, or BER, in the examples of Figures 5-8 is set at 10⁻¹⁰, and that the alarm triggering BER is set at 10⁻³.

Referring again to Figure 9, after the user settable parameters are received and programmed, the upstream error codes are received and decoded. The error data is accumulated over a predetermined period of time. After this time period is complete, an error distribution across all RU's is generated from the accumulated error data in accordance with conventional data processing techniques. Next, the resulting error distribution is analyzed to identify the type or "shape" of the error distribution. The type of error distribution identified indicates an isolated portion of the PDN network where a particular equipment failure is located. The four types of error distributions which commonly occur in a point-to-multipoint network, and the associated equipment failures indicated and isolated thereby, will now be discussed.

Figure 5 shows a first type of error distribution. It shows an example of an even distribution of errors reported across all RU's. Notice that the BER is greater than 10⁻¹⁰, the minimum acceptable BER in this example. This error distribution reveals that the shared feeder 520 section of PDN 500 is the source of the errors. This is because feeder 520 is common to all RU's 600 associated with that shared PDN. Therefore, an equipment failure in shared feeder 520 will cause errors to be randomly distributed across all RU's. Thus, the resulting distribution, such as that shown in Figure 5, shows an evenly distributed, unacceptably high level of errors across all RU's. The error handling procedure which occurs upon identifying this first type of error distribution and isolating the equipment failure shown in Figure 5 is shown in Figure 9. Preferably shared feeder 520 is equipment protected with a standby unit, and the recognition of the equipment failure as determined by the present performance monitoring and failure isolation system can be used to cause a protection switch to the standby unit, thus minimizing network downtime. A type 1 alarm is raised by processor 330 to alert maintenance personnel that a type 1 equipment failure was detected and that a protection switch occurred, so that the failed equipment can be appropriately repaired or replaced.

Figure 6 shows a second type of error distribution, having an acceptable BER (e.g., less than 10⁻¹⁰ in this example) for all RU's except one (RU number 56 in this example), which has an unacceptably high BER of almost 10⁻³. This error distribution reveals that some equipment associated with that RU, i.e., either the RU itself or the associated dedicated distribution path 540, is the location of the equipment failure.
Because each distribution path 540 is dedicated to a single RU, equipment failures in a distribution section of PDN 500 cause errors to occur only in data messages traveling to or from that particular RU. Thus, the resulting error distribution such as that shown in Figure 6 shows an acceptable rate of errors for all RU's with an error peak at the particular RU with which the failure is associated. The error handling procedure for a type 2 error distribution is shown in Figure 9. An alarm is raised by processor 330 to notify maintenance personnel of the type 2 error found and to identify the suspect RU. In addition, appropriate interactive maintenance routines known in the art are run by maintenance personnel to determine whether the equipment failure is located in the RU itself or in the dedicated distribution path.
For certain types of RU equipment failures, a failed RU erroneously reads from or writes into a TDM time slot assigned to a different RU. Such an equipment failure results in a dually peaked error distribution such as that shown in Figure 7. One of the BER peaks represents the failed RU and the other BER peak represents the overwritten RU. This dually peaked error distribution indicates that an equipment failure is located in one of the RU's having a BER peak. The two peaks must be of the same magnitude to ensure that the problem is appropriately identified. If the problem is that an RU is reading or writing into another RU's time slot, the resulting errors apparent in the two RU's error distributions will be of the same magnitude. This magnitude is, in the preferred embodiment, settable by the user and is shown as 10⁻³ in the example of Figure 7. When this type 3 error distribution occurs, the error handling procedure shown in Figure 9 raises an alarm to alert maintenance personnel that a type 3 error occurred. Appropriate interactive maintenance routines known in the art can then be run by maintenance personnel between HDT 300 and the suspect RU's to determine which of the suspect RU's contains the error causing equipment failure.

A fourth type of equipment failure results in an RU randomly reading from or writing to the TDM time slots of all the other RU's. Figure 8 shows a typical error distribution for this fourth type of equipment failure. Figure 8 reveals an unacceptably heavy BER across all RU's, with a BER error peak at one particular RU (72 in this example). If the errors represented by the error peak reach a certain level, then this error distribution reveals that the peaking RU itself contains an equipment failure. In Figure 8, for example, the number of errors occurring over all RU's represented by the error curve must be of the same magnitude as the number of errors represented by the error peak at RU 72. In other words, the total errors represented by shaded area 810 should be of the same magnitude as the total number of errors represented by shaded area 812. Checking the magnitude of these errors ensures that the correct problem is identified. If the magnitudes are comparable then the problem can be properly identified as one RU (72 in the example of Figure 8) randomly writing to or reading from the time slots of other RU's. If the magnitudes are not comparable, then the error distribution shown in Figure 8 may simply be a variation of the distribution shown in Figure 5. For the fourth type of equipment failure identified by the error distribution of Figure 8, an alarm is raised to alert maintenance personnel of the type of failure which occurred and to identify the suspect RU, so that appropriate interactive maintenance routines known in the art can be performed between the RU and the head end, as shown in Figure 9.
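Pulling the four cases together, a classifier along the following lines could implement the Figure 9 analysis. This is a sketch only: the peak threshold and the "same magnitude" test (agreement within a factor of ten) are assumptions, since the patent leaves the exact data processing open:

```python
# Minimal sketch of the shape classification of Figures 5-8, operating
# on a {ru_number: ber} distribution such as error_distribution()
# returns above. Thresholds and the magnitude test are assumptions.

def same_magnitude(a: float, b: float) -> bool:
    return max(a, b) <= 10 * min(a, b)

def classify(dist: dict[int, float], min_ber: float = 1e-10,
             alarm_ber: float = 1e-3) -> str:
    high = {ru for ru, ber in dist.items() if ber > min_ber}
    peaks = {ru for ru, ber in dist.items() if ber >= alarm_ber / 10}

    if len(peaks) == 2:  # Figure 7: two peaks of comparable magnitude
        a, b = (dist[ru] for ru in peaks)
        if same_magnitude(a, b):
            return "type 3: one of the two peaked RU's misusing a slot"
    if len(peaks) == 1:
        peak = next(iter(peaks))
        if high == {peak}:  # Figure 6: only the peaked RU is errored
            return "type 2: RU %d or its distribution path" % peak
        # Figure 8: elevated floor whose total is comparable to the peak
        floor = sum(ber for ru, ber in dist.items() if ru != peak)
        if len(high) == len(dist) and same_magnitude(floor, dist[peak]):
            return "type 4: RU %d corrupting other RU's slots" % peak
    if len(high) == len(dist):  # Figure 5: uniformly unacceptable BER
        return "type 1: shared feeder failure"
    return "unclassified"
```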
The above-described performance monitoring and failure isolation system enjoys several advantages over existing methods. For example, the present method raises an immediate alarm condition to report errors. The present invention also passively and unobtrusively monitors the signal integrity across the PDN with only minimal impact on overall network performance. This is in contrast to existing schemes, which down the entire PDN network and initiate interactive polling of all RU's to determine the location of the problem, resulting in extreme and undesirable reductions in network performance. The information obtained by the performance monitoring and failure isolation system of the present invention is very useful for protection switching. For example, if the error distribution reveals that the error source is located in shared feeder section 520, that information can be used to initiate a switch to a standby shared feeder unit as described above.
In most PDN systems, the downstream and upstream passive distribution media are switched together as pairs. Thus for most applications the present invention need only locate which section, either shared feeder or distribution, contains the equipment failure and not whether the error source is in the downstream or upstream data traffic path.
However, the present invention does provide a means for determining whether the error source is in the downstream or upstream data traffic path for those applications, such as maintenance and troubleshooting, where such information is required. Upon receipt and decode of the downstream error code, each RU sets an internal error flag, shown in Figure 4, indicating whether a downstream error was received. Later, if HDT 300 receives an error indication from the RU, HDT 300 can interactively poll the respective RU according to the flow diagram shown in Figure 10 to determine whether a downstream error occurred. The HDT 300 polls the RU to get the value of the RU internal error flag. If the flag is set, the error occurred in the downstream traffic path. If the error flag is not set, no downstream error occurred, meaning that the error was introduced in the upstream traffic path. Maintenance procedures appropriate to the given problem can then be performed by maintenance personnel.
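A sketch of that polling decision follows, with the poll transport left abstract; the callback name is invented:

```python
# Minimal sketch of the Figure 10 polling decision. `poll_error_flag`
# stands in for the HDT-to-RU poll transport, which this sketch does
# not specify; it should return the RU's internal error flag.

from typing import Callable

def attribute_error(ru_number: int,
                    poll_error_flag: Callable[[int], bool]) -> str:
    """Called after HDT 300 sees an upstream error indication for an RU."""
    if poll_error_flag(ru_number):
        return "downstream"  # flag set: the RU saw a downstream error
    return "upstream"        # flag clear: the error was introduced upstream

# Example with a stubbed poll that reports RU 56's flag as set:
print(attribute_error(56, lambda ru: ru == 56))  # downstream
```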
It shall be understood that the present performance monitoring and failure isolation system is not dependent on any particular type of error detection code. The type of error detection selected will typically depend on the network's available bandwidth. For example, the typical network has little additional bandwidth available for insertion of error detection codes. In that case, a parity error detection scheme, i.e., a single bit error detection code, is advantageous. It is conceivable, however, that some PDN networks have more bandwidth available. In such a case, a more complex error detection/correction scheme requiring more bits, such as Single Error Correction Double Error Detection (SECDED), Cyclic Redundancy Check (CRC), or other more complex error codes, could be used. Moreover, the type of error detection mechanism implemented does not change the fundamental nature of the point-to-multipoint performance monitoring and failure isolation system of the present invention. The use of both simple error schemes, such as parity type codes, and more complex error detection/correction codes is anticipated and within the scope of the present invention.
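For contrast with the single parity bit sketched earlier, here is what one of those more complex codes might look like; the CRC-8 polynomial 0x07 is a common choice, not one the patent prescribes:

```python
# Minimal CRC-8 sketch, illustrating the kind of multi-bit error code
# the text contemplates when bandwidth permits. The polynomial
# x^8 + x^2 + x + 1 (0x07), zero initial value, and no reflection are
# assumptions of this sketch.

def crc8(message: bytes, poly: int = 0x07) -> int:
    crc = 0
    for byte in message:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

print(hex(crc8(b"123456789")))  # 0xf4, the standard CRC-8 check value
```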
The specific type of error distribution analysis or data processing method used on the received error codes is also not a limiting factor in the present invention. Although the preferred embodiment uses, among other methods, an analysis which compares the proportion of total errors to the proportion of errors associated with a particular RU against a predetermined threshold, many different types of analysis could be performed on the error data to arrive at the same result without departing from the spirit and scope of the present invention.
It shall also be understood that various aspects of the performance monitoring and failure isolation system of the present invention may be used individually if desired to achieve differing levels of functionality in a particular system. For example, if it is not necessary to differentiate downstream from upstream equipment failures, the system could be assembled without the RU error flag. Also, generation, decoding, and analysis of the upstream error codes alone could be used to achieve a reduced level of performance monitoring and failure isolation.
Although a specific embodiment of the present invention has been illustrated and described herein, it will be readily apparent to those of skill in the art that many modifications and alterations to the preferred embodiment as described are possible without deviating from the scope and spirit of the present invention. Special conditions employed for the implementation of the preferred embodiment are not intended to be limiting and are easily adaptable to alternate implementations. For example, the control structure of the present invention could be implemented using microprocessor-based architectures and logic functions, including the use of external computer control, RAM microcode control, PLA or PAL logic structures, and hardwired or software-controlled state machines. Furthermore, the present invention is in no way limited to a particular analytical method used to arrive at and interpret the error distribution across the PDN. Thus, it will be understood that many modifications will be readily apparent to those of ordinary skill in the art, and that this application is intended to cover any adaptations or variations thereof. Therefore, it is manifestly intended that this invention be limited only by the claims and the equivalents thereof.

Claims

WHAT IS CLAIMED IS:
1. A system for performance monitoring a network and isolating the location of equipment failures therein, the network including a head end and a plurality of remote units connected with a distribution means for transmitting data messages between the head end and the plurality of remote units, comprising: first generating means in the head end for generating a plurality of first error codes based on a plurality of downstream data messages; a plurality of first decoding means, each in a different one of the plurality of remote units, each for decoding one of the plurality of first error codes and determining therefrom whether a downstream error occurred; a plurality of generating means, each in a different one of the plurality of remote units, each for generating one of a plurality of error codes corresponding to an upstream data message; decoding means in the head end for decoding the plurality of error codes received from the plurality of remote units and producing therefrom a plurality of error data; processing means in the head end for producing an error distribution from the plurality of error data; and means for analyzing the error distribution to isolate the location of equipment failures.
2. The system of claim 1 wherein the distribution means further includes: a shared feeder connected to the head end; and a plurality of distribution branches, each connected to the shared feeder and to a different one of the plurality of remote units.
3. The system of claim 2 wherein the means for analyzing further includes means for isolating the equipment failure to the shared feeder, to one of the distribution branches, or to one or more of the plurality of remote units.
4. The system of claim 2 wherein the means for analyzing further includes means for identifying when the error distribution takes the form of a uniform and unacceptably high level of errors across all remote units.
5. The system of claim 4 wherein the means for analyzing further includes means for isolating the equipment failure to the shared feeder.
6. The system of claim 2 wherein the means for analyzing further includes means for identifying when the error distribution takes the form of an unacceptably high level of errors for two of the plurality of remote units and an acceptable level of errors for the remaining plurality of remote units.
7. The system of claim 6 wherein the means for analyzing further includes means for isolating the equipment failure to the two remote units having the unacceptably high level of errors.
8. The system of claim 7 wherein the means for analyzing further includes means for identifying when the magnitudes of the unacceptably high level of errors for the two of the plurality of remote units are equivalent.
9. The system of claim 2 wherein the means for analyzing further includes means for identifying when the error distribution takes the form of an unacceptably high level of errors for one of the plurality of remote units and an acceptable level of errors for the remaining plurality of remote units.
10. The system of claim 9 wherein the means for analyzing further includes means for isolating the equipment failure to the remote unit having the unacceptably high level of errors or to the distribution branch connected thereto.
11. The system of claim 2 wherein the means for analyzing further includes means for identifying when the error distribution takes the form of a first unacceptably high level of errors for one of the plurality of remote units and a second unacceptably high level of errors for the remaining plurality of remote units, wherein the second unacceptably high level of errors is less than the first unacceptably high level of errors.
12. The system of claim 11 wherein the means for analyzing further includes means for isolating the equipment failure to the remote unit having the first unacceptably high level of errors.
13. The system of claim 12 wherein the means for analyzing further includes means for identifying when the first unacceptably high level of errors is of the same magnitude as the total number of errors occurring across all the remaining plurality of remote units.
14. The system according to claim 4 wherein the means for analyzing further includes means for determining whether the unacceptably high level of errors is reached by comparison to a user settable maximum acceptable bit error rate.
15. The system according to claim 11 wherein the means for analyzing further includes means for determining whether the first unacceptably high level of errors is reached by comparison to a user settable alarm triggering bit error rate, and for determining whether the second unacceptably high level of errors is reached by comparison to a user settable maximum acceptable bit error rate.
16. The system of claim 1 wherein the distribution means is comprised of optical fiber.
17. A system for isolating the location of equipment failures in a telecommunications system including a head end, a plurality of remote units and a distribution means for transmitting messages between the head end and the plurality of remote units, the distribution means including a shared feeder section connected to the head end and a plurality of distribution sections each connected to the shared feeder and to a different one of the plurality of remote units, the system comprising: means in the head end for generating a plurality of downstream error codes, each based on a different one of a plurality of downstream messages; means in each of the remote units for receiving a different one of the plurality of downstream messages and the respective one of the plurality of downstream error codes; means in each of the remote units for decoding the received downstream error code and determining therefrom whether a downstream error occurred; error flag means in each of the remote units for indicating whether a downstream error occurred; means in each of the remote units for generating one of a plurality of upstream error codes each based on one of a plurality of upstream messages and on the decoded downstream error code; means in the head end for receiving the plurality of upstream messages and the plurality of upstream error codes; means in the head end for decoding the plurality of upstream error codes, determining therefrom whether any upstream errors occurred, and producing therefrom upstream error data; accumulating means for accumulating the upstream error data over a selected period of time and producing therefrom accumulated error data; means for producing an error distribution across the plurality of remote units from the accumulated error data; and means for analyzing the error distribution and identifying therefrom a shape of the error distribution indicative of the location of an equipment failure.
18. The system according to claim 17 further including means for setting a maximum acceptable bit error rate.
19. The system according to claim 18 further including means for setting an alarm triggering bit error rate.
20. The system according to claim 18 wherein the maximum acceptable bit error rate is user settable.
21. The system according to claim 19 wherein the alarm triggering bit error rate is user settable.
22. The system according to claim 18 wherein the means for analyzing further includes means for identifying when the error distribution shows a uniform level of errors across the plurality of remote units that is higher than the maximum acceptable bit error rate.
23. The system according to claim 22 wherein the means for analyzing further includes means for isolating the equipment failure to the shared feeder section.
24. The system according to claim 19 wherein the means for analyzing further includes means for identifying when the error distribution shows a first level of errors associated with one of the plurality of remote units that is higher than the alarm triggering bit error rate and a uniform level of errors across the remaining plurality of remote units at a second level that is lower than the maximum acceptable bit error rate.
25. The system according to claim 24 wherein the means for analyzing further includes means for isolating the equipment failure to the one of the plurality of remote units having the first level of errors or to the distribution section connected thereto.
26. The system according to claim 19 wherein the means for analyzing further includes means for identifying when the error distribution shows a first level of errors associated with two of the plurality of remote units that is higher than the alarm triggering bit error rate and a uniform level of errors across the remaining plurality of remote units at a second level that is lower than the maximum acceptable bit error rate.
27. The system according to claim 26 wherein the means for analyzing further includes means for isolating the equipment failure to the two of the plurality of remote units having the first level of errors or to the distribution sections connected thereto.
28. The system according to claim 18 wherein the means for analyzing further includes means for identifying when the error distribution shows a first level of errors associated with one of the plurality of remote units that is higher than the maximum acceptable bit error rate and a uniform level of errors across the remaining plurality of remote units at a second level that is higher than the maximum acceptable bit error rate and lower than the first level of errors.
29. The system according to claim 28 wherein the means for analyzing further includes means for isolating the equipment failure to the one of the plurality of remote units having the first level of errors or to the distribution section connected thereto.
30. The system according to claim 17 further including means for polling the error flag means to determine whether a downstream or an upstream error occurred.
31. A method of analyzing an error distribution to isolate the location of equipment failures in a communications system, the communications system including a distribution network including a shared feeder connected to a head end and a plurality of distribution sections, each connected to the shared feeder and to a different one of a plurality of remote units, the method comprising the steps of: (a) setting a maximum acceptable bit error rate;
(b) setting an alarm triggering bit error rate, wherein the alarm triggering bit error rate is higher than the maximum acceptable bit error rate;
(c) detecting errors received from the plurality of remote units and generating therefrom a set of error data;
(d) generating an error distribution from the set of error data;
(e) comparing the error distribution with the maximum acceptable bit error rate and the alarm triggering bit error rate; and (f) isolating an equipment failure based on the outcome of comparing step (e).
32. The method according to claim 31 wherein isolating step (f) further includes the step of isolating a first equipment failure if the result of comparison step (e) shows an even distribution of errors across all remote units at a level that is higher than the maximum acceptable bit error rate.
33. The method according to claim 32 wherein isolating step (f) further includes the step of isolating the equipment failure to the shared feeder section.
34. The method according to claim 31 wherein isolating step (f) further includes the step of isolating a second equipment failure if the result of comparison step (e) shows a level of errors that is higher than the alarm triggering bit error rate for one of the plurality of remote units and an even distribution of errors across the remaining plurality of remote units at a level that is lower than the maximum acceptable bit error rate.
35. The method according to claim 34 wherein isolating step (f) further includes the step of isolating the equipment failure to the one of the plurality of remote units.
36. The method according to claim 31 wherein isolating step (f) further includes the step of isolating a third equipment failure if the result of comparison step (e) shows a level of errors that is higher than the alarm triggering bit error rate for two of the plurality of remote units and an even distribution of errors across the remaining plurality of remote units at a level that is lower than the maximum acceptable bit error rate.
37. The method according to claim 36 wherein isolating step (f) further includes the step of isolating the equipment failure to the two of the plurality of remote units.
38. The method according to claim 31 wherein isolating step (f) further includes the step of isolating a fourth equipment failure if the result of comparison step (e) shows a first rate of errors for one of the plurality of remote units that is higher than the maximum acceptable bit error rate and an even distribution of errors across the remaining plurality of remote units at a second rate of errors that is higher than the maximum acceptable bit error rate, wherein the second rate of errors is lower than the first rate of errors.
39. The method according to claim 38 wherein isolating step (f) further includes the step of isolating the equipment failure to the one of the plurality of remote units.
40. A method for monitoring the performance of a telecommunications system connected in a point-to-multipoint configuration, and for isolating the location of equipment failures therein, the telecommunications system including a distribution network including a shared feeder section connected to a head end and a plurality of distribution sections each connected to the shared feeder section and to a different one of a plurality of remote units, the method comprising the steps of:
(a) generating at the head end a plurality of first error codes based on a plurality of downstream messages;
(b) transmitting each of the plurality of first error codes to a different one of the plurality of remote units over the distribution network;
(c) decoding at each remote unit each of the error codes and downstream messages to determine whether any downstream errors occurred;
(d) generating at each remote unit a plurality of second error codes based on the decoded downstream error codes and on a plurality of upstream messages, wherein each of the plurality of second error codes is generated in a different one of the plurality of remote units;
(e) transmitting the plurality of second error codes and corresponding upstream messages to the head end over the distribution network; (f) accumulating the plurality of second error codes received from the plurality of remote units over a selected period of time;
(g) generating an error distribution from the accumulated error codes; and (h) analyzing the error distribution and isolating therefrom the location of equipment failures.
41. The method according to claim 40 further including the step of setting a downstream error flag in the remote unit if the result of decoding step (c) indicates that a downstream error occurred.
42. The method according to claim 41 further including the step of polling the downstream error flag from the head end to determine whether an equipment failure is in a downstream traffic path or an upstream traffic path.
43. The method according to claim 40 further including the steps of:
(i) setting a maximum acceptable bit error rate; (j) setting an alarm triggering bit error rate, wherein the alarm triggering bit error rate is higher than the maximum acceptable bit error rate;
(k) comparing the error distribution with the maximum acceptable bit error rate and the alarm triggering bit error rate;
(l) isolating an equipment failure based on the outcome of comparing step (k).
44. The method according to claim 43 wherein said isolating step (l) further includes the step of isolating a first equipment failure if the result of comparison step (k) shows an even distribution of errors across all remote units at a rate that is higher than the maximum acceptable bit error rate.
45. The method according to claim 44 wherein isolating step (l) further includes the step of isolating the equipment failure to the shared feeder section.
46. The method according to claim 43 wherein said isolating step (l) further includes the step of isolating a second equipment failure if the result of comparison step (k) shows a rate of errors that is higher than the alarm triggering bit error rate for one of the plurality of remote units and an even distribution of errors across the remaining plurality of remote units at a rate that is lower than the maximum acceptable bit error rate.
47. The method according to claim 46 wherein isolating step (l) further includes the step of isolating the equipment failure to the one of the plurality of remote units.
48. The method according to claim 43 wherein said isolating step (l) further includes the step of isolating a third equipment failure if the result of comparison step (k) shows a rate of errors that is higher than the alarm triggering bit error rate for two of the plurality of remote units and an even distribution of errors across the remaining plurality of remote units at a rate that is lower than the maximum acceptable bit error rate.
49. The method according to claim 48 wherein isolating step (l) further includes the step of isolating the equipment failure to the two of the plurality of remote units.
50. The method according to claim 43 wherein said isolating step (l) further includes the step of isolating a fourth equipment failure if the result of comparison step (k) shows a first rate of errors that is higher than the maximum acceptable bit error rate for one of the plurality of remote units and an even distribution of errors across the remaining plurality of remote units at a second rate of errors that is higher than the maximum acceptable bit error rate, wherein the second rate of errors is lower than the first rate of errors.
51. The method according to claim 50 wherein isolating step (l) further includes the step of isolating the equipment failure to the one of the plurality of remote units.
52. A system for monitoring the performance of and isolating equipment failures in a telecommunications network, the telecommunications network including a head end connected to a plurality of remote units by a passive optical network, the passive optical network comprised of a first optical fiber connected to the head end and a plurality of second optical fibers, each connected to the first optical fiber and to a different one of the plurality of remote units, the system comprising: a first error code generator in the head end; a plurality of first decoders, each in a different one of the plurality of remote units, each connected to receive and decode one of a plurality of first error codes; a plurality of second error code generators, each in a different one of the plurality of remote units; a second decoder in the head end connected to receive and decode a plurality of second error codes; processing means in the head end, connected to receive the plurality of second error codes, for producing an error distribution from the plurality of second error codes; the processing means further for identifying a particular type of error distribution and isolating therefrom the location of equipment failures in the telecommunications network.
53. A method of analyzing an error distribution to isolate the location of equipment failures in a point-to-multipoint telecommunications network, comprising the steps of: (a) identifying a type of error distribution;
(b) isolating from the type of error distribution identified in step (a) the location of equipment failures in the telecommunications network.
Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USRE44460E1 (en) 1994-09-26 2013-08-27 Htc Corporation Systems for synchronous multipoint-to-point orthogonal frequency division multiplexing communication
US7069577B2 (en) 1995-02-06 2006-06-27 Adc Telecommunications, Inc. Dynamic bandwidth allocation
USRE41771E1 (en) 1995-02-06 2010-09-28 Adc Telecommunications, Inc. System for multiple use subchannels
USRE42236E1 (en) 1995-02-06 2011-03-22 Adc Telecommunications, Inc. Multiuse subcarriers in multipoint-to-point communication using orthogonal frequency division multiplexing
WO1997027550A2 (en) * 1996-01-24 1997-07-31 Adc Telecommunications, Inc. Communication system with multicarrier telephony transport
WO1997027550A3 (en) * 1996-01-24 1998-03-05 Adc Telecommunications Inc Communication system with multicarrier telephony transport
WO1997048197A3 (en) * 1996-05-20 1997-12-18 Adc Telecommunications Inc Communication system with multicarrier telephony transport
WO1997048197A2 (en) * 1996-05-20 1997-12-18 Adc Telecommunications, Inc. Communication system with multicarrier telephony transport
US6603822B2 (en) 1996-05-20 2003-08-05 Adc Telecommunications, Inc. Communicating errors in a telecommunications system
WO2000031957A2 (en) * 1998-11-23 2000-06-02 Trilithic, Inc. Multi-user access of reverse path ingress
WO2000031957A3 (en) * 1998-11-23 2000-11-09 Trilithic Inc Multi-user access of reverse path ingress
US7076573B2 (en) 2003-11-20 2006-07-11 International Business Machines Corporation Method, apparatus, and program for detecting sequential and distributed path errors in MPIO

Also Published As

Publication number Publication date
ES2135561T3 (en) 1999-11-01
AU687370B2 (en) 1998-02-26
US5655068A (en) 1997-08-05
AU5668694A (en) 1995-01-03
US5519830A (en) 1996-05-21
DE69326257D1 (en) 1999-10-07
ATE184141T1 (en) 1999-09-15
EP0702870A1 (en) 1996-03-27
EP0702870B1 (en) 1999-09-01
DE69326257T2 (en) 2000-01-05

Similar Documents

Publication Publication Date Title
AU687370B2 (en) Performance monitoring and failure isolation in a point-to-multipoint communication network
US8285139B2 (en) Method, system, and apparatus for managing alarms in long-reach passive optical network system
US5299201A (en) Method and apparatus for isolating faults in a network having serially connected links
EP0948858B1 (en) Method and apparatus for storing and retrieving performance data collected by a network interface unit
US5926303A (en) System and apparatus for optical fiber interface
US8879905B2 (en) Performance monitoring in passive optical networks
US6061328A (en) Integrated multi-fabric digital cross-connect integrated office links
US7434139B2 (en) Remote module for a communications network
JPH09233448A (en) Cable television data path error analyzer
US5627837A (en) Apparatus and method for suppressing protection switching in a digital communication system in the event of an error burst
US6654375B1 (en) Method and apparatus for time-profiling T-carrier framed service
EP1036483A2 (en) Redundancy termination for dynamic fault isolation
CN110518966B (en) ONU positioning system and positioning method based on orthogonal coding
US6831927B1 (en) Fault protection for hitless and errorless switching of telecommunications signals
US5247690A (en) Method for detecting transmitting control code using M out of N detection scheme for initiating a latching loopback test procedure
US20060245366A1 (en) Method and device for optimized ADSL data transmission
US5402479A (en) Method and apparatus for translating signaling information
JP3326760B2 (en) Monitoring method and monitoring system for optical transmission line
Irvin Monitoring the performance of commercial T1-rate transmission service
Hajbandeh T1, T3, and SONET Networks
Ueda et al. ATM advanced operation and management functions for B-ISDN
JPH0319439A (en) Monitoring system for multiplex transmission line
JPH10200514A (en) Transmission quality supervisory device

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AT AU BB BG BR BY CA CH CZ DE DK ES FI GB HU JP KP KR KZ LK LU LV MG MN MW NL NO NZ PL PT RO RU SD SE SK UA UZ VN

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (PCT application filed before 20040101)
121 Ep: The EPO has been informed by WIPO that EP was designated in this application
WWE WIPO information: entry into national phase

Ref document number: 1994902250

Country of ref document: EP

WWP WIPO information: published in national office

Ref document number: 1994902250

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: CA

WWG WIPO information: grant in national office

Ref document number: 1994902250

Country of ref document: EP