US20030048501A1 - Metropolitan area local access service system - Google Patents

Metropolitan area local access service system

Info

Publication number
US20030048501A1
Authority
US
United States
Prior art keywords
distribution network
network
collector
subscriber
feeder
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US09/952,284
Inventor
Michael Guess
Paul Niezgoda
Fraser Street
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qwest Communications International Inc
Original Assignee
OnFiber Communications Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by OnFiber Communications Inc filed Critical OnFiber Communications Inc
Priority to US09/952,284 (published as US20030048501A1)
Assigned to ONFIBER COMMUNICATIONS, INC. reassignment ONFIBER COMMUNICATIONS, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: STREET, FRASER, GUESS, MICHAEL, NIEZGODA, PAUL
Priority to US09/975,474 (published as US20030048746A1)
Priority to PCT/US2002/028457 (published as WO2003024029A2)
Priority to EP02768815A (published as EP1425881A2)
Publication of US20030048501A1
Assigned to SILICON VALLEY BANK reassignment SILICON VALLEY BANK SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ONFIBER COMMUNICATIONS, INC.
Priority to US11/087,938 (published as US8031589B2)
Assigned to ONFIBER COMMUNICATIONS, INC. reassignment ONFIBER COMMUNICATIONS, INC. RELEASE Assignors: SILICON VALLEY BANK
Assigned to COMERICA BANK reassignment COMERICA BANK SECURITY AGREEMENT Assignors: INFO-TECH COMMUNICATIONS, ONFIBER CARRIER SERVICES - VIRGINIA, INC., ONFIBER CARRIER SERVICES, INC., ONFIBER COMMUNICATIONS, INC.
Assigned to INFO-TECH COMMUNICATIONS, ONFIBER CARRIER SERVICES, INC., ONFIBER CARRIER SERVICES-VIRGINIA, INC., ONFIBER COMMUNICATIONS, INC. reassignment INFO-TECH COMMUNICATIONS RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: COMERICA BANK
Assigned to QWEST COMMUNICATIONS INTERNATIONAL INC. reassignment QWEST COMMUNICATIONS INTERNATIONAL INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ONFIBER COMMUNICATIONS, INC.
Current legal status: Abandoned

Classifications

    • H: ELECTRICITY
        • H04: ELECTRIC COMMUNICATION TECHNIQUE
            • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
                • H04L 12/00: Data switching networks
                    • H04L 12/28: Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
                        • H04L 12/2852: Metropolitan area networks
                        • H04L 12/46: Interconnection of networks
                            • H04L 12/4637: Interconnected ring systems
                • H04L 49/00: Packet switching elements
                    • H04L 49/35: Switches specially adapted for specific applications
                        • H04L 49/351: Switches specially adapted for specific applications for local area network [LAN], e.g. Ethernet switches
                        • H04L 49/354: Switches specially adapted for specific applications for supporting virtual local area networks [VLAN]
                        • H04L 49/356: Switches specially adapted for specific applications for storage area networks
                            • H04L 49/357: Fibre channel switches
                    • H04L 49/60: Software-defined switches
                        • H04L 49/602: Multilayer or multiprotocol switching, e.g. IP switching
                • H04L 69/00: Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
                    • H04L 69/40: Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
            • H04Q: SELECTING
                • H04Q 11/00: Selecting arrangements for multiplex systems
                    • H04Q 11/0001: Selecting arrangements for multiplex systems using optical switching
                        • H04Q 11/0062: Network aspects
                            • H04Q 11/0067: Provisions for optical access or distribution networks, e.g. Gigabit Ethernet Passive Optical Network (GE-PON), ATM-based Passive Optical Network (A-PON), PON-Ring
                            • H04Q 11/0071: Provisions for the electrical-optical layer interface
                            • H04Q 2011/0073: Provisions for forwarding or routing, e.g. lookup tables
                            • H04Q 2011/0079: Operation or maintenance aspects
                                • H04Q 2011/0081: Fault tolerance; Redundancy; Recovery; Reconfigurability

Definitions

  • The invention generally relates to the field of fiber optic communications networks, and more particularly to a new system and method for deploying and operating a metropolitan area local access distribution network.
  • Fiber deployment in Metro Area Networks (“MANs”) has been primarily to carrier and service provider locations, or to a relatively small number of very large commercial office building sites. At the current time, it is estimated that as few as 10% of all commercial buildings in the United States are served with fiber-optic networks.
  • A service network provides customers with a highly-available transparent Layer 2 network connection between their edge IP equipment and their subscribers' edge IP equipment.
  • Layer 2, known as the bridging or switching layer, allows edge IP equipment addressing and attachment. It forwards packets based on the unique Media Access Control (“MAC”) address of each end station. Data packets consist of both infrastructure content, such as MAC addresses and other information, and end-user content.
  • At Layer 2, generally no modification is required to packet infrastructure content when going between like Layer 1 interfaces, such as Ethernet to Fast Ethernet. However, minor changes to infrastructure content—not end-user data content—may occur when bridging between unlike types such as FDDI and Ethernet. Additionally, the Ethernet service can inter-connect customers to create an “extended” LAN service.
  • Layer 3, known as the routing layer, provides logical partitioning of subnetworks, scalability, security, and Quality of Service (“QoS”). Therefore, it is desirable that the network remain transparent to Layer 3 protocols such as IP. This is accomplished by combining a particular network topology with failure detection/recovery mechanisms, as more fully described herein.
  • Embodiments of the present invention may include the following advantages: (1) in the BDN, a dedicated pair of diversely routed optical fibers for each customer; (2) in the core, a dual physical overlay ring core topology; (3) working and protection logical path connectivity; (4) no 802.1D Spanning Tree for recovery; (5) resilience to any single network failure in any device or link; (6) quick recovery times from failure relative to mechanisms based on Spanning Tree; and (7) a failure detection/recovery protocol that is not “active” on any devices other than the devices directly attached to the subscriber.
  • FIG. 1 is a schematic diagram of a local distribution portion of an overall fiber optic network, illustrating the relationship between multiple subscribers disposed on collection loops connected to a hub facility via a feeder loop;
  • FIG. 2 is a schematic diagram illustrating a typical longest path around an access distribution network;
  • FIG. 3 is a schematic diagram illustrating an alternative design with nested feeders;
  • FIG. 4 is a schematic diagram of a dual overlay ring topology within the core;
  • FIG. 5 is a schematic diagram of a working path and a protection path across the core connecting a subscriber's Layer 3 switch to its carrier/ISP;
  • FIG. 6 is a simplified logical diagram of the end-to-end Ethernet service indicating where ESRP is utilized.
  • A fiber optic transport network can generally be described in terms of three primary components: (i) a leased transport network (LTN), (ii) a leased distribution network (LDN); and (iii) a built distribution network (BDN), which may be a distribution network in accordance with the present invention (see FIGS. 1-6).
  • The LTN is the main transport layer of each metropolitan system. It typically consists of a high-bandwidth, flexible DWDM transport pipe used to connect customer locations (such as data centers, co-location hotels, and large customer POPs) to distribution networks.
  • The distribution networks may comprise both LDN and BDN designs, though either may be excluded. Although similar in general purpose, an LDN and a BDN may use differing architectural approaches to bring traffic to the LTN. While the LDN typically relies on TDM (and sometimes WDM) electronics to multiplex traffic onto limited quantities of fiber, the distribution network according to the present invention uses larger quantities of fiber, enabling a reduced reliance upon multiplexing electronics.
  • The following description will focus specifically on the architectural design and operation of a distribution network especially suitable for a BDN, though it may have other applications, particularly to an LDN. Detailed discussions of LTN and LDN designs may be found in other publicly available documents.
  • The distribution network architecture maximizes the saturation of the potential subscriber base at minimal expense and is designed with the following criteria in mind.
  • Each subscriber should have access to a route-diverse connection to the LTN hub.
  • In a preferred embodiment, these connections are capable of supporting: (1) SONET services that require Line Overhead termination and Automatic Protection Switching (APS) controlled by the distribution network; (2) data devices with SONET interfaces that require Line Overhead termination, but may lack APS functionality; (3) Ethernet services (10, 100, and 1000 Base); and (4) wavelength services (1000-LX/LH/ZX, OC-48/OC-48c).
  • In a preferred embodiment, the distribution design is scalable and flexible enough to adapt to the eventual traffic needs of the network. Circuits from multiple subscribers should be reasonably segregated. Where feasible, the distribution architecture should ensure that work requested by one subscriber seldom impacts other subscribers.
  • Referring now to FIG. 1, the distribution network comprises a major feeder ring 10 with a series of smaller, subtending collector rings 11-13.
  • In a common metropolitan-wide network design, collector rings are installed to follow city streets.
  • Feeder ring 10 accesses at least one LTN Hub 20 , where the distribution network fiber may be terminated to high-density fiber distribution panels (FDPs).
  • One particular feature of any local distribution architecture is the quantity of fiber run on the distribution network.
  • Although fiber counts will vary based on the logistics of the distribution area, a typical feeder ring 10 will contain 432 fibers, and typical collectors 11-13 each will contain 144 fibers. Laterals (e.g., 15) extend from the collector rings 11-13 to subscriber buildings (e.g., 17), and will typically contain 48 fibers.
  • As shown, each collector (11-13) is preferably deployed with two splice points to the feeder 10.
  • A person of ordinary skill in the art will readily appreciate that fiber counts may be varied upwardly or downwardly without deviation from the present invention.
  • The overall goal of the preferred embodiment is to provide, for each subscriber, optical service with at least one diversely-routed, dedicated fiber pair.
  • Circuit protection: Isolating each subscriber's optical service on a dedicated fiber pair reduces the possibility that work requested by one subscriber affects other subscribers. This represents a significant advantage in network accessibility when compared to designs that rely on multiple subscribers sharing a TDM resource.
  • Although a primary goal of the preferred embodiment of the BDN design is to reduce the use of electronics at each subscriber site, electronic components will still be required for subscribers who elect to use electrical circuits (e.g., DS-3, 10-base, and 100-base). Electrical circuits must still be converted into optical circuits for transport around the BDN. Due to the distances within the BDN, single-mode fiber connectivity is the preferred embodiment to support the connection between the subscriber site and the hub location. Therefore, additional electronics may be required for subscribers who desire optical circuits when these subscribers occupy locations or operate equipment with an embedded base of Multi-Mode Fiber (“MMF”).
  • FIG. 2 illustrates the longest optical path 25 around the distribution network. This calculation is the sum of the length of the longest collector (shown as 11) and the length of the feeder 10.
  • The longest optical path 25 is a significant limitation to be considered in the design of the distribution network, as discussed in greater detail below.
  • At LTN Hub 20 locations, distribution network fiber can be terminated to high-density Fiber Distribution Panels (FDPs).
  • From these locations, subscriber circuits may be cross-connected to ADM equipment, Ethernet switches, or directly to an LTN DWDM system.
  • The ADMs and Ethernet switches aggregate circuits with common destinations (e.g., customer locations) and transfer them to the LTN for transport around the metropolitan network.
  • In a single-tenant subscriber facility, a lateral fiber offshoot can be deployed to connect the appropriate feeder 10 fibers to a low-density FDP on the subscriber's premises.
  • For optical services, this FDP will serve as a demarcation point between the distribution network and the subscriber equipment.
  • For electrical services, an additional component can be placed at the subscriber's site. This component typically will be a media converter capable of converting an electrical signal into a higher-rate optical signal for transport over the distribution network.
  • This converter equipment can usually be powered by the subscriber's AC power facilities, although a small UPS (Uninterruptible Power Supply) device may be required in cases where brownout protection is lacking from the subscriber's AC feed.
  • Access to multiple-tenant facilities may be similarly designed. A primary difference will often be the equipment location. Any necessary auxiliary electrical equipment (FDP, DSX, patch panel, SONET TDM, Ethernet switch, media converter) may be located either within a Minimum Point of Entry (MPOE) facility inside the building or within the subscriber's location. When it is located within the MPOE, such equipment preferably should be within a protected enclosure (e.g., a cage or locked cabinet). DC power (e.g., −48V regulated with battery reserve) may be provided as an option in larger MPOE facilities. However, AC power with a UPS reserve is also feasible.
  • At present, the majority of optical circuits transported over the distribution network preferably will utilize 1310 nm lasers and, therefore, Non-Dispersion Shifted Fiber (NDSF) is the preferred fiber for such distribution network deployment. Non-Zero Dispersion Shifted Fiber (NZ-DSF) and Multi-Mode Fiber (MMF), though not presently preferred, may be used in alternative embodiments.
  • Normally, a 48-count fiber bundle can be run in a single 1.5″ conduit between the collectors 11-13 and subscriber facilities. As a result, most laterals will be single-threaded. A person of ordinary skill in the art will readily appreciate that dual-threaded laterals, and laterals of different fiber counts, may also be run.
  • Depending on system requirements, fusion or mechanical splices may be utilized. Mechanical splices are preferably used between the lateral and the Collector fibers. High quality mechanical splices can be obtained that provide typical insertion loss below 0.10 dB.
  • Fusion splices are preferably utilized between the lateral and the FDP within the subscriber site. Fusion splices can routinely introduce insertion losses of less than 0.05 dB.
  • In a preferred embodiment, a collector loop will consist of a 144-count fiber bundle run in a single 4″ conduit.
  • The 4″ collector can be compartmentalized, such as with individual 1.0″ conduits or “MaxCell”® fabric inner ducts.
  • In cases where a single Collector runs in the same trench as a Feeder loop, it is expected that the Collector fibers will utilize one of the Feeder's expansion conduits instead of the 4″ conduit discussed above.
  • Both ends of a Collector loop will not necessarily intersect the Feeder at the same physical location. Fusion splices are preferably utilized between the Collector and Feeder loops.
  • In most cases, feeder loop 10 will consist of a 288 or 432-count fiber bundle run in a single 1.5″ conduit.
  • A person of ordinary skill in the art will readily appreciate that fiber bundles of greater or lesser count may be used as appropriate.
  • Additional conduits preferably will be included along the Feeder path to accommodate future growth.
  • In cases where a Collector loop runs parallel to a Feeder loop, it is expected that the Collector will utilize one of the Feeder's surplus 1.5″ conduits instead of the Collector's usual 4″ conduit.
  • Fusion splices should be utilized for all connections to and from Feeder loops. All fusion splices should introduce an insertion loss of no greater than 0.05 dB.
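  • As a rough sanity check on these loss figures, the sketch below adds fiber attenuation and splice losses for one fiber of a subscriber pair and compares the total against a typical IR-1 link budget (see the glossary). The splice losses and the 13 dB IR-1 budget come from this description; the fiber attenuation, connector loss, and example path length are assumed typical values, not figures from the document.

        # Rough optical link budget check for one fiber of a subscriber pair on the BDN.
        # Splice losses and the IR-1 budget are taken from this description; the fiber
        # attenuation, connector loss, and example path length are assumed values.

        FIBER_LOSS_DB_PER_KM = 0.35      # assumed NDSF attenuation at 1310 nm
        FUSION_SPLICE_DB = 0.05          # fusion splice guideline from the text
        MECHANICAL_SPLICE_DB = 0.10      # lateral-to-Collector mechanical splice
        CONNECTOR_DB = 0.5               # assumed loss per FDP connector
        IR1_BUDGET_DB = 13.0             # typical IR-1 link budget (see glossary)

        def link_loss_db(path_km, fusion_splices, mechanical_splices, connectors=2):
            """Total end-to-end loss for one fiber of a subscriber pair."""
            return (path_km * FIBER_LOSS_DB_PER_KM
                    + fusion_splices * FUSION_SPLICE_DB
                    + mechanical_splices * MECHANICAL_SPLICE_DB
                    + connectors * CONNECTOR_DB)

        # Example: a ~9 mile (14.5 km) longest path with 6 fusion and 2 mechanical splices.
        loss = link_loss_db(14.5, fusion_splices=6, mechanical_splices=2)
        print(f"loss = {loss:.1f} dB, margin = {IR1_BUDGET_DB - loss:.1f} dB")
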
  • Feeder 10 fibers can be spliced to pigtails and terminated in the Hub 20 location on initial installation. This reduces the frequency of adding new splices on the feeder loop 10 and reduces the interval required for service activation.
  • Additional electronic equipment can be deployed at either the subscriber facility or the hub 20 to provide intermediate-reach optics on both sides of the transmission link.
  • The distribution network provider deploys a pair of nested Feeder rings 30 in each distribution network.
  • The collectors 31, 32 and 33 closest to the hub 20 are placed on the nested feeder 30, while the collectors 40, 41 and 42 located farther out are placed on the longer feeder 40.
  • FIG. 3 displays a generic example of this configuration.
  • The longer feeder 40 can remain longer (e.g., more than 7 miles in circumference) without stranding capacity because the collectors closest to the LTN hub 20 have a shorter path available to them.
  • Although the additional cross-section of fiber that completes the interior Feeder may increase the cost of the distribution network, it may also provide the opportunity to place one or more additional Collectors that would have otherwise been difficult to attach to the single Feeder design.
  • The distribution network design can be directed based on the guidelines below.
  • The longest subscriber path is calculated as follows.
  • Each Collector has a corresponding longest circuit path.
  • The longest circuit path can be defined as the sum of the circumference of the Collector and the longest route around the Feeder between the Collector and the Hub. This value represents the maximum distance that a subscriber circuit on that Collector can possibly travel en route to the Hub. This value is unique to each Collector on the distribution network. After calculating this value for each collector on the distribution network, the largest of these values would represent the longest subscriber path on that distribution network.
  • Any distribution network that falls in this category will encounter complications based on the optical link budget.
  • The distribution network should be examined in detail to determine whether a nested feeder approach is appropriate.
  • The nested feeder architecture is desirable when a significant portion of potential subscribers must traverse more than nine miles of fiber (longest route around the Feeder) to access the Hub, or the additional cross section of fiber added to create an Interior Feeder allows the addition of a new, desirable Collector that would have otherwise been inaccessible.
  • Any distribution network in this classification gives rise to design problems as one begins to exceed the limits of both Gigabit Ethernet and SONET IR-1 optics.
  • Either the distribution network may be configured to utilize the nested feeder architecture, or it can be redesigned to shorten the longest subscriber path.
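  • The calculation and the nested-feeder guideline above can be written out as the short sketch below. Only the formula (Collector circumference plus the longest Feeder route to the Hub) and the nine-mile figure come from the text; the collector names and mileages are hypothetical.

        # Sketch of the longest-subscriber-path calculation described above.
        # Collector names and distances are hypothetical; only the formula and the
        # nine-mile nested-feeder guideline come from the text.

        def longest_circuit_path(collector_circumference_mi, feeder_route_to_hub_mi):
            """Maximum distance a circuit on this Collector can travel to reach the Hub."""
            return collector_circumference_mi + feeder_route_to_hub_mi

        # Hypothetical network: collector -> (circumference, longest Feeder route to Hub), in miles.
        collectors = {
            "collector-11": (3.0, 6.5),
            "collector-12": (2.5, 8.0),
            "collector-13": (4.0, 9.5),
        }

        paths = {name: longest_circuit_path(c, f) for name, (c, f) in collectors.items()}
        longest_subscriber_path = max(paths.values())

        # Guideline from the text: consider a nested Feeder when a significant portion of
        # subscribers must traverse more than nine miles of Feeder to reach the Hub.
        needs_nested_feeder = any(feeder > 9.0 for _, feeder in collectors.values())

        print(paths, longest_subscriber_path, needs_nested_feeder)
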
  • Synchronization: Subscribers purchasing SONET services can synchronize their equipment with the Network by line-timing from the optics of the ADM at the Hub.
  • Subscribers purchasing Ethernet services can line-time from the optics of the system Ethernet Switch at the Hub facility.
  • This option is not available for wavelength services, as these circuits bypass any equipment that can connect to a BITS clock.
  • Subscribers who desire wavelength services must therefore either provide their own clock source or line-time from the customer equipment that they logically attach to on the far end of the distribution network. Should either of these options be unavailable for a given subscriber circuit, there is still a likely option available to provide error-free service.
  • Telcordia compliant devices should contain an internal SONET Minimum Clock (SMC) source or Stratum 3 clock source. Either should provide adequate synchronization for SONET signals. Any equipment free-running on a Stratum 3 or SMC source should operate error-free under normal conditions. The major perceptible difference will be an increase in the frequency of pointer justification events between interconnected devices.
  • SONET equipment installed at a subscriber site may be owned and maintained either by the distribution network operator or by the individual subscriber.
  • Ethernet equipment installed at a subscriber site will generally be owned and maintained by the subscriber.
  • All distribution network electronics installed at subscriber locations that are owned and maintained by the distribution network operator should be remotely manageable, and should be capable of forwarding alarm messages to the system NOC.
  • SONET equipment will commonly utilize the SONET Section Data Communications Channel (SDCC) to communicate with the ADM equipment installed at the Hub.
  • It is desirable that a network of the type described herein be substantially always available.
  • A desirable network architecture will provide fast recovery from failure to meet uptime objectives. Taking Ethernet as the local loop technology as an example, it is an objective that Ethernet services be highly available. This objective makes the elimination of any Spanning Tree Protocol (“STP”) from the architecture desirable.
  • STP is not used because, otherwise, network recovery times may be on the order of minutes per failure.
  • The network elements which provide redundancy need not be co-located with the primary network elements. This design technique reduces the probability that problems with the physical environment will interrupt service. Problems with software bugs, upgrades, configuration errors, or configuration changes can often be dealt with separately in the primary and secondary forwarding paths without completely interrupting service. Therefore, network-level redundancy can also reduce the impact of non-hardware failure mechanisms. With the redundancy provided by the network, each network device no longer needs to be configured for the ultimate in standalone fault tolerance. Redundant networks can be configured to fail-over automatically from primary to secondary facilities without operator intervention. The duration of service interruption is equal to the time it takes for fail-over to occur. Fail-over times as low as a few seconds are possible in this manner.
  • The local services network (e.g., Ethernet) according to the preferred embodiment of the present invention comprises a dual overlay ring topology within the core. This topology is shown in FIG. 4. As can be seen, the dual overlay ring topology is a physical topology in which two complete physical paths are disposed to ensure that two data channels are available during normal periods of use, so that at least one is available to communicate information in the event the other becomes unavailable.
  • This physical topology allows the creation of a working path 50 and a protection path 52 across the network connecting each subscriber (L3 Switch 54) to their carrier/ISP (L3 Switches 56, 58).
  • The working path 50 can be provisioned on one ring while the protection path 52 can be provisioned on the other ring, creating the logical connectivity topology shown in FIG. 5.
  • Logical connectivity may be accomplished in many ways, such as by using Ethernet Virtual LAN (VLAN) tagging, as defined in the IEEE 802.1Q standard.
  • A VLAN can be roughly equated to a broadcast domain. More specifically, VLANs can be seen as analogous to a group of end-stations, perhaps on multiple physical LAN segments, which are not constrained by their physical location and can communicate as if they were on a common LAN.
  • The 802.1Q header adds four octets to the standard Ethernet frame: a two-octet Tag Protocol Identifier followed by a two-octet Tag Control Information field carrying the VLAN identifier.
  • Ports on the Ethernet switches (e.g., 54) are configured with the appropriate VLANs so that the logical connectivity paths are created through the network. This process is somewhat analogous to creating a Permanent Virtual Circuit (“PVC”) in the Frame Relay or ATM environment.
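  • A minimal sketch of 802.1Q tagging as it could be used to keep the working and protection logical paths separate is shown below. The VLAN IDs, MAC addresses, and payload are hypothetical; only the use of 802.1Q tags to build the two logical paths comes from the text.

        import struct

        # Build an 802.1Q-tagged Ethernet frame. VLAN IDs and addresses are hypothetical;
        # only the use of per-path VLAN tags comes from the text.

        def dot1q_tag(vlan_id: int, priority: int = 0) -> bytes:
            """Return the 4-byte 802.1Q tag: TPID 0x8100 followed by the TCI field."""
            tci = (priority << 13) | (vlan_id & 0x0FFF)
            return struct.pack("!HH", 0x8100, tci)

        def tagged_frame(dst_mac: bytes, src_mac: bytes, vlan_id: int,
                         ethertype: int, payload: bytes) -> bytes:
            """Insert the 802.1Q tag between the source MAC and the original EtherType."""
            return dst_mac + src_mac + dot1q_tag(vlan_id) + struct.pack("!H", ethertype) + payload

        WORKING_VLAN = 100       # provisioned on the first core ring (hypothetical ID)
        PROTECTION_VLAN = 200    # provisioned on the second core ring (hypothetical ID)

        frame = tagged_frame(b"\xaa\xbb\xcc\xdd\xee\xff", b"\x11\x22\x33\x44\x55\x66",
                             WORKING_VLAN, 0x0800, b"IP payload")
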
  • In a preferred embodiment, failure detection and recovery is provided using Extreme Networks' Standby Router Protocol (“ESRP”), described below.
  • Additional protocols may be implemented to support detection and recovery of failures that occur at the Carrier/ISP connection.
  • Some of these protocols are Hot Standby Router Protocol (“HSRP”) and Virtual Router Redundancy Protocol (“VRRP”).
  • Standard Layer 2 protection protocols such as 802.1D Spanning Tree are not required in some embodiments of the present invention.
  • ESRP is a feature of the Extreme OS (operating system) that allows multiple switches to provide redundant services to users. In addition to providing Layer 3 routing redundancy for IP, ESRP also provides Layer 2 redundancy. The Layer 2 redundancy features of ESRP offer fast failure recovery and provide for a dual-homed system design generally independent of end-user attached equipment.
  • ESRP is configured on a per-VLAN basis on each switch.
  • This system utilizes ESRP in a two-switch configuration, one master and one standby.
  • The switches exchange keep-alive packets for each VLAN independently. Only one switch can actively provide Layer 2 switching for each VLAN.
  • The switch performing the forwarding for a particular VLAN is considered the “master” for that VLAN.
  • The other participating switch for the VLAN is in “standby” mode.
  • Each participating switch uses the same MAC address and must be configured with the same IP address. It is possible for one switch to be master for one or more VLANs while being in standby for others, thus allowing the load to be split across participating switches.
  • ESRP makes use of the Extreme Discovery Protocol (“EDP”).
  • If a switch is master, it actively provides Layer 2 switching between all the ports of that VLAN. Additionally, the switch exchanges ESRP packets with other switches that are in standby mode.
  • If a switch is in standby mode, it exchanges ESRP packets with other switches on that same VLAN. When a switch is in standby, it does not perform Layer 2 switching services for the VLAN. From a Layer 2 switching perspective, no forwarding occurs between the member ports of the VLAN. This prevents loops and maintains redundancy.
  • ESRP can be configured to track connectivity to one or more specified VLANs as criteria for fail-over.
  • The switch that has the greatest number of active ports for a particular VLAN takes highest precedence and will become master. If at any time the number of active ports for a particular VLAN on the master switch becomes less than that of the standby switch, the master switch automatically relinquishes master status and remains in standby mode.
  • ESRP can be configured to track connectivity using a simple ping to any outside responder (ping tracking).
  • The responder may represent the default route of the switch, or any device meaningful to network connectivity of the master ESRP switch. It should be noted that the responder must reside on a different VLAN than the one running ESRP. The switch automatically relinquishes master status and remains in standby mode if a ping keep-alive fails three consecutive times.
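  • The two fail-over criteria just described (active-port counting and ping tracking) can be modeled with a few lines of code, sketched below. This is only an illustration of the described behavior, not Extreme Networks' actual ESRP implementation; the switch names and port counts are hypothetical.

        # Toy model of ESRP master election per VLAN using the two criteria in the text:
        # most active ports wins, and three consecutive ping failures force a switch
        # to give up master status. Not the real ESRP implementation.

        class EsrpSwitch:
            def __init__(self, name):
                self.name = name
                self.active_ports = {}     # VLAN name -> number of active member ports
                self.ping_failures = 0     # consecutive ping keep-alive failures

            def record_ping(self, success):
                self.ping_failures = 0 if success else self.ping_failures + 1

            def eligible(self):
                # Relinquish master status after three consecutive ping failures.
                return self.ping_failures < 3

        def elect_master(vlan, a, b):
            """Pick the master for one VLAN: most active ports wins, subject to ping tracking."""
            candidates = [s for s in (a, b) if s.eligible()]
            if not candidates:
                return None
            return max(candidates, key=lambda s: s.active_ports.get(vlan, 0))

        sw_62, sw_63 = EsrpSwitch("switch-62"), EsrpSwitch("switch-63")
        sw_62.active_ports["subscriber-vlan"] = 2
        sw_63.active_ports["subscriber-vlan"] = 2
        for _ in range(3):
            sw_62.record_ping(False)       # the core responder stops answering
        print(elect_master("subscriber-vlan", sw_62, sw_63).name)   # -> switch-63
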
  • FIG. 6 depicts ESRP enabled in the switches ( 62 , 63 ) directly attached to the subscriber 60 .
  • Port tracking is used to detect local failure of a link directly connected to these switches, while ping tracking is used to detect core network failures. If a failure is detected anywhere along the active path 64, ESRP will fail over to allow traffic to flow on the standby path 65.
  • ESRP port count can be used to protect dual customer connections to the network.
  • ESRP ping tracking is used to protect the core VLAN.
  • VRRP or HSRP protects the Carrier/ISP L3 switch.
  • A preferred embodiment of the network includes network enhancements, including Extreme Networks' ESRP, to support rapid failover of subscriber equipment when a network or core failure occurs.
  • ESRP Failover Link Transition Enhancement: This enhancement refers to the ability of a “Master” ESRP switch, when transitioning to standby state, to “bounce” or restart auto-negotiation on a set of physical ports. This enhancement will cause an end device to flush its Layer 2 forwarding database and cause it to re-broadcast immediately for a new path through the network. This provides the end station the ability to switch from the primary to the secondary path in a very short time.
  • This is useful in this architecture to inform an end-user Layer 2 device of a failure farther within the network that does not directly impact the end-user Layer 2 device.
  • Typical Layer 3 switches use the Address Resolution Protocol (ARP) to populate their forwarding databases. This forwarding database determines which port packets are sent out on based on destination MAC address. Once this information is learned through the ARP process, typical Layer 2 devices will not modify this forwarding information unless one of two events occurs:
  • 1) a Loss of Signal (“LOS”) occurs on the port, or 2) the ARP max age timer expires.
  • Typically, the ARP max age value is set to 5 minutes.
  • When either event occurs, the Layer 2 device will re-ARP to update its forwarding database information. Therefore, if a failure occurs within the core of the network that does not cause a LOS on the end-user device, that device will continue to forward packets into the network even though they cannot reach their ultimate destination until the ARP max age timer expires. This is known as a black hole situation.
  • The enhancement proposed here prevents a black hole situation by notifying the end device of the core failure by “bouncing” the port to force the equipment to re-ARP to update its forwarding database information immediately.
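  • The black-hole scenario and the effect of the link-bounce enhancement can be illustrated with the small model sketched below. The class and its methods are hypothetical; only the 5-minute max-age figure and the idea that a bounced port looks like an LOS and forces an immediate re-ARP come from the text.

        import time

        ARP_MAX_AGE_S = 5 * 60      # 5-minute ARP max age, per the text

        class EdgeDevice:
            """Toy model of an end-user device's ARP-learned forwarding database."""

            def __init__(self):
                self.fdb = {}           # destination IP -> (next hop, time learned)

            def learn(self, dst_ip, next_hop):
                self.fdb[dst_ip] = (next_hop, time.time())

            def forward(self, dst_ip):
                entry = self.fdb.get(dst_ip)
                if entry is None or time.time() - entry[1] > ARP_MAX_AGE_S:
                    return self.re_arp(dst_ip)     # stale or missing: ARP again
                # Otherwise keep using the learned path, even if a core failure has
                # silently made it unreachable (the black-hole situation).
                return entry[0]

            def on_loss_of_signal(self):
                # The enhancement: bouncing the port looks like an LOS to this device,
                # which flushes the forwarding data and re-ARPs immediately instead of
                # black-holing traffic for up to ARP_MAX_AGE_S.
                self.fdb.clear()

            def re_arp(self, dst_ip):
                """Issue a new ARP request and relearn the path (not modeled here)."""
                return None
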
  • The terms “comprises,” “comprising,” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
  • ADM Add/Drop Multiplexer. A SONET component capable of inserting and removing traffic to/from the SONET line payload. ADMs also commonly perform other functions, such as generating/processing APS commands and synchronizing the transport optics to an external clock source.
  • APS Automatic Protection Switching.
  • BDN Built Distribution Network. A portion of a network dedicated to the aggregation of multiple subscribers. A BDN typically utilizes fiber to provide dedicated fiber links between individual subscribers and a Hub facility.
  • BITS Building Integrated Timing Source. A highly accurate and precise clock source used to synchronize multiple nodes on a SONET transport system.
  • Chromatic Dispersion A linear effect that causes pulse broadening or compression within an optical transmission system. Chromatic dispersion occurs because different wavelengths of light travel at different velocities through the transmission media.
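  • As a quick worked example of this effect, pulse broadening can be estimated as the dispersion coefficient times the path length times the source spectral width. The coefficient and spectral width below are typical assumed values, not figures from this document.

        # Pulse broadening (ps) = D (ps/nm/km) x length (km) x source spectral width (nm).
        # The dispersion coefficient and linewidth are assumed typical values.

        D_PS_PER_NM_KM = 17.0      # assumed NDSF dispersion near 1550 nm
        LENGTH_KM = 15.0           # roughly the longest BDN path discussed earlier
        SPECTRAL_WIDTH_NM = 0.1    # assumed laser spectral width

        broadening_ps = D_PS_PER_NM_KM * LENGTH_KM * SPECTRAL_WIDTH_NM
        print(f"pulse broadening ~ {broadening_ps:.1f} ps")    # ~25.5 ps
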
  • Collector Loop A fiber loop (typically 144 ct) used to connect multiple subscribers to the larger Feeder loop on a BDN.
  • Customer A business entity (such as an ISP or LEC) that provides telecommunications service within a Metropolitan area. The BDN operator typically will serve as an intermediate transport mechanism to connect subscribers to various customers.
  • DMD Differential Mode Delay. A linear effect that degrades the quality of laser transmissions across MMF. A single laser transmission can inadvertently become subdivided upon ingress to MMF. These identical signals traverse unique transmission paths within the large core of MMF and leave the fiber offset in time.
  • DS3 Digital Signal 3. A digital signal rate of approximately 44.736 Mb/s corresponding to the North American T3 designator. A plesiochronous transport protocol equivalent to 672 voice lines at 64 kb/s each.
  • DWDM Dense Wavelength Division Multiplexing. A method of allowing multiple transmission signals to be transmitted simultaneously over a single fiber by giving each a unique frequency range (or wavelength) within the transmission spectrum. DWDM wavelengths within the C-Band are standardized by the ITU-T.
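  • For illustration, the standardized C-Band grid mentioned here places channels on a fixed frequency grid anchored at 193.1 THz. The sketch below converts a few grid points to wavelengths; the channel numbers and the 100 GHz spacing are illustrative choices, not figures from this document.

        # Convert ITU-T C-Band grid points (anchored at 193.1 THz) to wavelengths.
        # The channel numbers and the 100 GHz spacing are illustrative choices.

        C_M_PER_S = 299_792_458

        def itu_grid_frequency_thz(n, spacing_ghz=100.0):
            """Frequency of grid point n relative to the 193.1 THz anchor."""
            return 193.1 + n * spacing_ghz / 1000.0

        for n in (-2, 0, 2):
            f_thz = itu_grid_frequency_thz(n)
            wavelength_nm = C_M_PER_S / (f_thz * 1e12) * 1e9
            print(f"n={n:+d}: {f_thz:.2f} THz ~ {wavelength_nm:.2f} nm")
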
  • Ethernet A standardized (IEEE 802.3) packet-based data transport protocol developed by Xerox Corporation.
  • Ethernet Switch A device used to route data packets to their proper destination in an Ethernet-based transport network.
  • FDP Fiber Distribution Panel. An enclosure built to organize, manage, and protect physical cross-connections between multiple fiber-optic cables.
  • Feeder Loop A fiber loop (typically 432 ct) used to connect multiple collector loops to a Hub facility on a BDN.
  • Fusion Splice The process of joining two discrete fiber-optic cables via localized heating of the fiber ends. Fusion splices are typically characterized as permanent in nature and exhibit relatively minor loss (<0.05 dB) at the fusion point.
  • Hub Facility A facility used to connect a distribution network (BDN or LDN) to a transport network (LTN) within a Metropolitan area.
  • IR-1 A specification for transmission lasers and receiver photodiodes standardized by Telcordia. IR-1 optics typically provide a 13.0 dB link budget and are optimized for NDSF.
  • ITU-T International Telecommunication Union, Telecommunication Standardization Sector.
  • Lateral A fiber run containing multiple fibers (e.g., 48 ct) used to connect a subscriber building to a Collector loop on a BDN.
  • LDN Leased Distribution Network. A portion of a network dedicated to the aggregation of multiple subscribers. An LDN typically utilizes fiber to provide fiber links between individual subscribers and a Hub facility.
  • LR-1 A specification for transmission lasers and receiver photodiodes standardized by Telcordia. LR-1 optics typically provide a 25 dB link budget and are optimized for NDSF.
  • LTN Leased Transport Network. A portion of a network dedicated to connecting various Customer sites to Hub facilities. An LTN will typically utilize TDM and DWDM equipment over a small quantity of leased fiber.
  • MCP Modal Conditioning Patch cord.
  • Mechanical Splice The process of joining two discrete fiber-optic cables by aligning them within a mechanical enclosure or adhesive media. Mechanical splices typically utilize an index-matching gel to reduce reflection at the splice point. A moderate power loss (0.10 to 0.20 dB) is expected at the splice point.
  • Media Converter A generic classification of devices used to alter protocols and/or media of a transmitted signal.
  • MMF Multi-Mode Fiber.
  • MMF is typically utilized with LED-based optical transmission systems.
  • Modal Distortion A linear effect that causes pulse broadening of transmission signals over MMF. Rays taking more direct paths (fewer reflections in the core) through the MMF core traverse the fiber more quickly than rays taking less direct paths. Modal distortion limits the bandwidth and distance of transmission links over MMF.
  • MPOE Minimum Point of Entry. A common space within a multi-tenant building used to interconnect multiple tenants with common external telecommunications facilities.
  • NDSF Non Dispersion-Shifted Fiber. Single-mode optical fiber with a nominal zero-dispersion wavelength within the conventional 1310 nm transmission window.
  • NZ-DSF Non-Zero Dispersion-Shifted Fiber. Single-mode optical fiber with a nominal zero-dispersion wavelength shifted to reduce chromatic dispersion within the 1530 nm to 1560 nm transmission window.
  • OC-3 Optical Carrier 3. A synchronous transport protocol equivalent to 2016 voice lines at 64 kb/s each. Protocol is specified by Telcordia standards.
  • OC-3c Optical Carrier 3, Concatenated.
  • OC-12 Optical Carrier 12.
  • OC-12c Optical Carrier 12, Concatenated.
  • OC-48 Optical Carrier 48.
  • OC-48c Optical Carrier 48, Concatenated. A non-channelized variant of the OC-48 primarily utilized for data transmissions over SONET. Protocol is specified by Telcordia standards.
  • Plesiochronous The relationship between two transmission devices, where each is timed from similar, yet diverse clock sources. A slight difference in either frequency or phase must exist between the diverse clocks.
  • POP Point of Presence. The physical facility in which interexchange carriers and local exchange carriers provide access services.
  • SMF Single Mode Fiber. A type of optical fiber in which only a single transport path (mode) is available through the core at a given wavelength.
  • SONET Synchronous Optical NETwork.
  • Use of the SONET TDM protocol is primarily limited to North America.
  • Splice Box An enclosure built to organize, manage, and protect physical splices between multiple fiber-optic cables.
  • SR A specification for transmission lasers and receiver photodiodes standardized by Telcordia. SR optics typically provide an 8 dB link budget and are optimized for NDSF.
  • Subscriber An end-user (or desired end-user) of a Customer's telecommunications service. A BDN operator typically will serve as an intermediate transport mechanism between subscribers and customers.
  • Synchronous The relationship between two transmission devices, where both are timed from identical clock sources. The clocks must be identical in frequency and phase.
  • TDM Time-Division Multiplexing. Combining multiple transmission signals into a common, higher-frequency bit-stream.
  • WDM Wavelength-Division Multiplexing. A method of allowing multiple transmission signals to be transmitted simultaneously over a single fiber by giving each a unique frequency range (or wavelength) within the transmission spectrum.

Abstract

A local access fiber optical distribution network is disclosed in which a dedicated pair of diversely routed optical fibers is routed in the distribution network for each customer. In a preferred embodiment, a dual physical overlay ring core topology is used in the core. The distribution network includes working and protection logical path connectivity. No 802.1D Spanning Tree is required for recovery, providing resilience to any single network failure in any device or link, quick recovery times from failure, and a failure detection/recovery protocol that is not active on any devices other than the devices directly attached to the subscriber.

Description

    FIELD OF THE INVENTION
  • The invention generally relates to the field of fiber optic communications networks, and more particularly to a new system and method for deploying and operating a metropolitan area local access distribution network. [0001]
  • RELATED ART
  • The growth of the Internet has created unprecedented demand for high-speed broadband connectivity in telecommunications networks. However, access connections between corporate Local Area Networks (“LANs”) and existing service provider networks, such as those operated by long-haul carriers and Internet Service Providers (“ISPs”), generally have been limited to relatively slow, hard-to-provision T1 (1.5 Mbps) or DS-3 (45 Mbps) data speeds due to infrastructure limitations in most metropolitan areas. [0002]
  • The lack of bandwidth throughout metropolitan areas is a function, principally, of two independent factors. First, there is a deficiency in high speed fiber optic access rings and/or fiber optic “tails” into major buildings in metro areas. Second, the existing metropolitan area carriers continue to use older, installed SONET (Synchronous Optical NETwork) architecture which, although it allows data streams of different formats to be combined onto a single high speed fiber optic synchronous data stream, cannot be scaled to meet future bandwidth requirements. Although customer demand for increased bandwidth has been growing at exponential rates, there is a mismatch between carrier long-haul backbones and metro area backbones on the one hand and local loop access on the other hand. Despite the aggressive deployment of fiber-optic networks nationwide, relatively little fiber has been deployed in the local access market or “last mile.” Fiber deployment in Metro Area Networks (“MANs”) has been primarily to carrier and service provider locations, or to a relatively small number of very large commercial office building sites. At the current time, it is estimated that as few as 10% of all commercial buildings in the United States are served with fiber-optic networks. [0003]
  • Currently, most local connectivity service providers are primarily providing SONET-based services and are investing little in the services required for expanded local connectivity—e.g., Ethernet and Wavelength services. In general, existing local service providers have not moved forward to upgrade local fiber infrastructure to support these latter services. [0004]
  • In order to provide compatibility and easy upgrading from existing services to new services, it is desirable to provide SONET, Ethernet and Wavelength services in the metropolitan and access segments of the communications infrastructure, making use of a common interface system and fiber optic cables. In this way, it is possible for customers to migrate smoothly and at an opportune time from the traditional SONET-based circuits to Ethernet circuits and, possibly, to transparent wavelengths. Such an evolutionary connectivity path enables customers to access the right amount of bandwidth at the right time. [0005]
  • SUMMARY OF THE INVENTION
  • In accordance with the present invention, a service network provides customers with a highly-available transparent Layer 2 network connection between their edge IP equipment and their subscribers' edge IP equipment. [0006]
  • Layer 2, known as the bridging or switching layer, allows edge IP equipment addressing and attachment. It forwards packets based on the unique Media Access Control (“MAC”) address of each end station. Data packets consist of both infrastructure content, such as MAC addresses and other information, and end-user content. At Layer 2, generally no modification is required to packet infrastructure content when going between like Layer 1 interfaces, such as Ethernet to Fast Ethernet. However, minor changes to infrastructure content—not end-user data content—may occur when bridging between unlike types such as FDDI and Ethernet. Additionally, the Ethernet service can inter-connect customers to create an “extended” LAN service. [0007]
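  • For illustration, MAC-based forwarding at Layer 2 behaves like the small learning-switch sketch below: the switch records the port on which each source MAC address is seen and uses the destination MAC address to pick the output port, flooding when the destination is unknown. This is generic textbook behavior offered as an illustration, not the specific switch implementation used in the service network.

        # Minimal learning-switch sketch: forward on destination MAC, learn from source MAC.

        class LearningSwitch:
            def __init__(self, ports):
                self.ports = ports
                self.mac_table = {}                 # MAC address -> port

            def handle_frame(self, in_port, src_mac, dst_mac):
                self.mac_table[src_mac] = in_port   # learn where the sender lives
                out = self.mac_table.get(dst_mac)
                if out is not None and out != in_port:
                    return [out]                    # known destination: forward on one port
                return [p for p in self.ports if p != in_port]   # unknown: flood

        sw = LearningSwitch(ports=[1, 2, 3])
        sw.handle_frame(1, "00:11:22:33:44:55", "ff:ff:ff:ff:ff:ff")   # floods to ports 2 and 3
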
  • Layer 3, known as the routing layer, provides logical partitioning of subnetworks, scalability, security, and Quality of Service (“QoS”). Therefore, it is desirable that the network remain transparent to Layer 3 protocols such as IP. This is accomplished by combining a particular network topology with failure detection/recovery mechanisms, as more fully described herein. [0008]
  • Embodiments of the present invention may include the following advantages: (1) in the BDN, a dedicated pair of diversely routed optical fibers for each customer; (2) in the core, a dual physical overlay ring core topology; (3) working and protection logical path connectivity; (4) no 802.1D Spanning Tree for recovery; (5) resilience to any single network failure in any device or link; (6) quick recovery times from failure relative to mechanisms based on Spanning Tree; and (7) a failure detection/recovery protocol that is not “active” on any devices other than the devices directly attached to the subscriber. [0009]
  • Further features and advantages of the invention will appear more clearly from a reading of the detailed description of the preferred embodiments of the invention, which are given below by way of example only, and with reference to the accompanying drawings, in which like references indicate similar elements, and in which: [0010]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a schematic diagram of a local distribution portion of an overall fiber optic network, illustrating the relationship between multiple subscribers disposed on collection loops connected to a hub facility via a feeder loop; [0011]
  • FIG. 2 is a schematic diagram illustrating a typical longest path around an access distribution network; [0012]
  • FIG. 3 is a schematic diagram illustrating an alternative design with nested feeders; [0013]
  • FIG. 4 is a schematic diagram of a dual overlay ring topology within the core; [0014]
  • FIG. 5 is a schematic diagram of a working path and a protection path across the core connecting a subscriber's Layer 3 switch to its carrier/ISP; and [0015]
  • FIG. 6 is a simplified logical diagram of the end-to-end Ethernet service indicating where ESRP is utilized.[0016]
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
  • A fiber optic transport network can generally be described in terms of three primary components: (i) a leased transport network (LTN), (ii) a leased distribution network (LDN); and (iii) a built distribution network (BDN), which may be a distribution network in accordance with the present invention (see FIGS. 1-6). [0017]
  • The LTN is the main transport layer of each metropolitan system. It typically consists of a high-bandwidth, flexible DWDM transport pipe used to connect customer locations (such as data centers, co-location hotels, and large customer POPs) to distribution networks. [0018]
  • The distribution networks may comprise both LDN and BDN designs, though either may be excluded. Although similar in general purpose, an LDN and a BDN may use differing architectural approaches to bring traffic to the LTN. While the LDN typically relies on TDM (and sometimes WDM) electronics to multiplex traffic onto limited quantities of fiber, the distribution network according to the present invention uses larger quantities of fiber, enabling a reduced reliance upon multiplexing electronics. The following description will focus specifically on the architectural design and operation of a distribution network especially suitable for a BDN, though it may have other applications, particularly to an LDN. Detailed discussions of LTN and LDN designs may be found in other publicly available documents. The distribution network architecture maximizes the saturation of the potential subscriber base at minimal expense and is designed with the following criteria in mind. [0019]
  • Each subscriber should have access to a route-diverse connection to the LTN hub. In a preferred embodiment, these connections are capable of supporting: [0020]
  • (1) SONET services that require Line Overhead termination and Automatic Protection Switching (APS) controlled by the distribution network (DS-3, OC-3/OC-3c, OC-12/OC-12c, and OC-48/OC-48c). [0021]
  • (2) Data devices with SONET interfaces that require Line Overhead termination, but may lack APS functionality (DS-3, OC-3c, OC-12c, and OC-48c). [0022]
  • (3) Ethernet services (10, 100, and 1000 base). [0023]
  • (4) Wavelength services (1000-LX/LH/ZX, OC-48/OC-48c). [0024]
  • In a preferred embodiment, the distribution design is scalable and flexible enough to adapt to the eventual traffic needs of the network. Circuits from multiple subscribers should be reasonably segregated. Where feasible, the distribution architecture should ensure that work requested by one subscriber seldom impacts other subscribers. [0025]
  • Referring now to FIG. 1, the distribution network comprises a major feeder ring 10 with a series of smaller, subtending collector rings 11-13. In a common metropolitan-wide network design, collector rings are installed to follow city streets. Feeder ring 10 accesses at least one LTN Hub 20, where the distribution network fiber may be terminated to high-density fiber distribution panels (FDPs). [0026]
  • One particular feature of any local distribution architecture is the quantity of fiber run on the distribution network. Although fiber counts will vary based on the logistics of the distribution area, a typical feeder ring 10 will contain 432 fibers, and typical collectors 11-13 each will contain 144 fibers. Laterals (e.g., 15) extend from the collector rings 11-13 to subscriber buildings (e.g., 17), and will typically contain 48 fibers. As shown, each collector (11-13) is preferably deployed with two splice points to the feeder 10. A person of ordinary skill in the art will readily appreciate that fiber counts may be varied upwardly or downwardly without deviation from the present invention. The overall goal of the preferred embodiment is to provide, for each subscriber, optical service with at least one diversely-routed, dedicated fiber pair. [0027]
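  • A back-of-the-envelope capacity check for these fiber counts is sketched below, assuming each subscriber consumes one dedicated pair (two fibers) and ignoring spares, splice allocation, and exactly how diverse routing consumes fiber positions; those simplifications are illustrative assumptions, not requirements stated in the text.

        # Rough capacity arithmetic for the fiber counts given above, under the
        # simplifying assumption of two fibers (one dedicated pair) per subscriber.

        FEEDER_FIBERS = 432
        COLLECTOR_FIBERS = 144
        LATERAL_FIBERS = 48
        FIBERS_PER_SUBSCRIBER = 2     # one dedicated, diversely routed pair

        subscribers_per_collector = COLLECTOR_FIBERS // FIBERS_PER_SUBSCRIBER    # 72
        subscribers_per_lateral = LATERAL_FIBERS // FIBERS_PER_SUBSCRIBER        # 24
        collectors_fully_carried_by_feeder = FEEDER_FIBERS // COLLECTOR_FIBERS   # 3

        print(subscribers_per_collector, subscribers_per_lateral, collectors_fully_carried_by_feeder)
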
  • Select embodiments of the presently proposed distribution network architecture have the following advantages over conventional TDM and WDM distribution networks. [0028]
  • (i) Lower cost—At the present time, the major cost associated with any new fiber run is the cost of opening and closing the trench. Since this cost is substantially independent of the number of fibers being run, the comparison between a bulk-fiber distribution network and a TDM-based distribution network (similar to the LDN design), for example, becomes mainly a comparison between costs of fiber versus electronics. On relatively short fiber runs (like a distribution network), additional fiber is generally less expensive than TDM electronics. When the costs associated with space, power, operation, maintenance, and management of the TDM electronics are added, the cost advantages of a bulk-fiber approach increase dramatically. [0029]
  • (ii) Manageability—Connecting subscribers to a distribution network becomes a relatively simple task of splicing fibers between the subscriber building and the collector. This design eliminates an extra layer of TDM circuit provisioning and management. Requirements of TDM software upgrades and equipment failures are likewise reduced. [0030]
  • (iii) Scalability—Since each subscriber's optical service may be on a dedicated fiber pair, significant capacity exists at the outset and there is no concern regarding TDM circuit fill ratios or provisioning anomalies. This design also minimizes the need for TDM reconfigurations to support capacity expansions. [0031]
  • (iv) Circuit protection—Isolating each subscriber's optical service on a dedicated fiber pair reduces the possibility that work requested by one subscriber affects other subscribers. This represents a significant advantage in network accessibility when compared to designs that rely on multiple subscribers sharing a TDM resource. [0032]
  • Although a primary goal of the preferred embodiment of the BDN design is to reduce the use of electronics at each subscriber site, electronic components will still be required for subscribers who elect to use electrical circuits (e.g., DS-3, 10-base, and 100-base). Electrical circuits must still be converted into optical circuits for transport around the BDN. Due to the distances within the BDN, single-mode fiber connectivity is the preferred embodiment to support the connection between the subscriber site and the hub location. Therefore, additional electronics may be required for subscribers who desire optical circuits when these subscribers occupy locations or operate equipment with an embedded base of Multi-Mode Fiber (“MMF”). [0033]
  • FIG. 2 illustrates the longest optical path 25 around the distribution network. This calculation is the sum of the length of the longest collector (shown as 11) and the length of the feeder 10. The longest optical path 25 is a significant limitation to be considered in the design of the distribution network, as discussed in greater detail below. [0034]
  • The physical connections of circuits and facilities on the distribution network are described in greater detail below. Exemplary subscriber connections can be found by reference to FIGS. 3-7, discussed below. [0035]
  • LTN Hubs [0036]
  • At LTN Hub 20 locations, distribution network fiber can be terminated to high-density Fiber Distribution Panels (FDPs). From these locations, subscriber circuits may be cross-connected to ADM equipment, Ethernet switches, or directly to an LTN DWDM system. The ADMs and Ethernet switches aggregate circuits with common destinations (e.g., customer locations) and transfer them to the LTN for transport around the metropolitan network. [0037]
  • Single-Tenant Subscriber Facilities [0038]
  • In a single-tenant subscriber facility, a lateral fiber offshoot can be deployed to connect the appropriate feeder 10 fibers to a low-density FDP on the subscriber's premises. For optical services, this FDP will serve as a demarcation point between the distribution network and the subscriber equipment. For electrical services, an additional component can be placed at the subscriber's site. This component typically will be a media converter capable of converting an electrical signal into a higher-rate optical signal for transport over the distribution network. This converter equipment can usually be powered by the subscriber's AC power facilities, although a small UPS (Uninterruptible Power Supply) device may be required in cases where brownout protection is lacking from the subscriber's AC feed. [0039]
  • Multiple-Tenant Subscriber Facilities [0040]
  • Access to multiple-tenant facilities may be designed in a similar manner. A primary difference will often be the equipment location. Any necessary auxiliary electrical equipment (FDP, DSX, patch panel, SONET TDM, Ethernet switch, media converter) may be located either within a Minimum Point of Entry (MPOE) facility inside the building or within the subscriber's location. When it is located within the MPOE, such equipment preferably should be within a protected enclosure (e.g., a cage or locked cabinet). DC power (e.g., −48V regulated with battery reserve) may be provided as an option in larger MPOE facilities. However, AC power with a UPS reserve is also feasible. [0041]
  • Fiber Plant [0042]
  • At present, the majority of optical circuits transported over the distribution network preferably will utilize 1310 nm lasers and, therefore, Non-Dispersion Shifted Fiber (NDSF) is the preferred fiber for such distribution network deployment. Non-Zero Dispersion Shifted Fibers (NZ-DSF) and Multi-Mode Fiber (MMF), though not presently preferred, may be used in alternative embodiments. [0043]
  • Subscriber Laterals [0044]
  • Normally, a 48-count fiber bundle can be run in a single 1.5″ conduit between the collectors 11-13 and subscriber facilities. As a result, most laterals will be single-threaded. A person of ordinary skill in the art will readily appreciate that dual-threaded laterals, and laterals of different fiber counts, may also be run. Depending on system requirements, fusion or mechanical splices may be utilized. Mechanical splices are preferably used between the lateral and the Collector fibers. High-quality mechanical splices can be obtained that provide typical insertion loss below 0.10 dB. Fusion splices are preferably utilized between the lateral and the FDP within the subscriber site. Fusion splices can routinely introduce insertion losses of less than 0.05 dB. [0045]
  • Collector Loops [0046]
  • In a preferred embodiment, a collector loop will consist of a 144-count fiber bundle run in a single 4″ conduit. The 4″ collector conduit can be compartmentalized, such as with individual 1.0″ conduits or “MaxCell”® fabric inner ducts. In cases where a single Collector runs in the same trench as a Feeder loop, it is expected that the Collector fibers will utilize one of the Feeder's expansion conduits instead of the 4″ conduit discussed above. Both ends of a Collector loop will not necessarily intersect the Feeder at the same physical location. Fusion splices are preferably utilized between the Collector and Feeder loops. [0047]
  • In order to minimize the frequency of adding new splices between Collector and Feeder loops, a reasonable quantity of splices will be generated at the outset to cover the near-term growth of traffic on the distribution network. [0048]
  • Feeder Loops [0049]
  • In most cases, feeder loop 10 will consist of a 288- or 432-count fiber bundle run in a single 1.5″ conduit. A person of ordinary skill in the art will readily appreciate that fiber bundles of greater or lesser count may be used as appropriate. Additional conduits preferably will be included along the Feeder path to accommodate future growth. In cases where a Collector loop runs parallel to a Feeder loop, it is expected that the Collector will utilize one of the Feeder's surplus 1.5″ conduits instead of the Collector's usual 4″ conduit. Fusion splices should be utilized for all connections to and from Feeder loops. All fusion splices should introduce an insertion loss of no greater than 0.05 dB. [0050]
  • Feeder 10 fibers can be spliced to pigtails and terminated in the Hub 20 location on initial installation. This reduces the frequency of adding new splices on the feeder loop 10 and reduces the interval required for service activation. [0051]
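  • For illustration only, the following Python sketch estimates the one-way optical loss for a subscriber circuit that traverses a lateral, a Collector, and the Feeder to the Hub, using the splice losses cited above. The fiber attenuation figure, the per-connector loss, and the IR-1 comparison are assumptions made for the sketch, not values taken from this specification.

    # Illustrative sketch (not part of the specification): estimate one-way loss for a
    # subscriber circuit from the subscriber FDP to the Hub FDP. Splice values follow
    # the text above; fiber attenuation and connector losses are assumed figures.

    ATTEN_DB_PER_MILE = 0.56      # assumed NDSF attenuation near 1310 nm (~0.35 dB/km)
    FUSION_SPLICE_DB = 0.05       # lateral-to-FDP, Collector-to-Feeder, Feeder-to-pigtail
    MECHANICAL_SPLICE_DB = 0.10   # lateral-to-Collector
    CONNECTOR_DB = 0.5            # assumed loss per FDP connector (one at each end)

    def circuit_loss_db(path_miles: float) -> float:
        """Estimate one-way loss for a circuit traversing path_miles of fiber."""
        fiber = path_miles * ATTEN_DB_PER_MILE
        splices = MECHANICAL_SPLICE_DB + 3 * FUSION_SPLICE_DB
        connectors = 2 * CONNECTOR_DB
        return fiber + splices + connectors

    IR1_BUDGET_DB = 13.0          # typical IR-1 link budget (see Glossary)

    if __name__ == "__main__":
        for miles in (5, 9, 16):
            print(f"{miles:>2} mi: {circuit_loss_db(miles):4.1f} dB vs {IR1_BUDGET_DB} dB IR-1 budget")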
  • An Alternative Embodiment—Additional Equipment [0052]
  • In this embodiment, in addition to the conventional feeder/collector architecture, additional electronic equipment can be deployed at either the subscriber facility or the hub 20 to provide intermediate-reach optics on both sides of the transmission link. [0053]
  • For example, with respect to SONET equipment, a series of ADMs will already exist at the hub locations to aggregate subscriber traffic, and IR-1 optics can be supported on each optical interface of the ADMs. Wavelength services pose a more complex problem. Since these services enter the DWDM directly at the Hub, they are limited by the current SR client-side interface on the DWDM equipment. Since it is unlikely that any wavelength service below an OC-48 or Gigabit Ethernet data rate will be used in this context (as this would require dedicating a DWDM wavelength to an OC-3 or OC-12 rate circuit), this would only pose a problem for OC-48 or Gigabit Ethernet wavelength services. [0054]
  • In the case of Gigabit Ethernet services, upgrading a subscriber to a GBIC equivalent to the Finisar 1319-5A-30 would improve the optical reach to roughly 16.3 miles. This is less than one mile shorter than the range of a bi-directional IR-1 link. The OC-48/OC-48c case is more difficult. To support this service, a subscriber positioned near a Hub either should use LR-1 optics (assuming they are available on the subscriber equipment), or place an OC-48 regenerator at the Hub location. [0055]
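  • As a rough companion to the figures above, the following Python sketch converts a transceiver link budget into an approximate one-way reach. The attenuation, fixed passive losses, and safety margin are assumptions chosen only to make the arithmetic concrete; they are not taken from this specification.

    # Illustrative sketch: approximate reach supported by an optical link budget.
    # All constants below are assumptions, not values from the specification.

    ATTEN_DB_PER_MILE = 0.56   # assumed NDSF attenuation near 1310 nm (~0.35 dB/km)
    FIXED_LOSSES_DB = 1.25     # assumed total splice and connector loss along the path
    MARGIN_DB = 2.0            # assumed repair / end-of-life margin

    def approx_reach_miles(link_budget_db: float) -> float:
        """Rough one-way reach for a given link budget after fixed losses and margin."""
        usable = link_budget_db - FIXED_LOSSES_DB - MARGIN_DB
        return max(usable, 0.0) / ATTEN_DB_PER_MILE

    if __name__ == "__main__":
        for name, budget_db in (("SR", 8.0), ("IR-1", 13.0), ("LR-1", 25.0)):
            print(f"{name:>5}: ~{approx_reach_miles(budget_db):.1f} miles")

  • Under these assumed figures, the IR-1 case lands near 17 miles, roughly consistent with the 16-mile boundary used in the decision rules below.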
  • Second Alternative Embodiment—Nested Feeders [0056]
  • In this scenario, the distribution network provider deploys a pair of nested Feeder rings 30 in each distribution network. The collectors 31, 32 and 33 closest to the hub 20 are placed on the nested feeder 30, while the collectors 40, 41 and 42 located farther out are placed on the longer feeder 40. FIG. 3 displays a generic example of this configuration. With the FIG. 3 type of configuration, the longer feeder 40 can remain longer (e.g., more than 7 miles in circumference) without stranding capacity because the collectors closest to the LTN hub 20 have a shorter path available to them. [0057]
  • Although the additional cross-section of fiber that completes the interior Feeder may increase the cost of the distribution network, it may also provide the opportunity to place one or more additional Collectors that would have otherwise been difficult to attach to the single Feeder design. [0058]
  • Decision Rule for Distribution Network Variants [0059]
  • In significant part, the distribution network design can be guided by the rules below (a calculation sketch follows the three cases). In each case, the longest subscriber path is calculated as follows. Each Collector has a corresponding longest circuit path. The longest circuit path can be defined as the sum of the circumference of the Collector and the longest route around the Feeder between the Collector and the Hub. This value represents the maximum distance that a subscriber circuit on that Collector can possibly travel en route to the Hub. This value is unique to each Collector on the distribution network. After calculating this value for each collector on the distribution network, the largest of these values represents the longest subscriber path on that distribution network. [0060]
  • Longest Subscriber Path is Less than 9 Miles [0061]
  • Any distribution network that meets this requirement can be designed using the conventional single Feeder, multiple Collector architecture. [0062]
  • Longest Subscriber Path is Between 9 and 16 Miles [0063]
  • Any distribution network that falls in this category will encounter complications based on the optical link budget. With this in mind, the distribution network should be examined in detail to determine whether a nested feeder approach is appropriate. In most cases, the nested feeder architecture is desirable either when a significant portion of potential subscribers must traverse more than nine miles of fiber (longest route around the Feeder) to access the Hub, or when the additional cross-section of fiber added to create an interior Feeder allows the addition of a new, desirable Collector that would otherwise have been inaccessible. [0064]
  • Longest Subscriber Path is Greater than 16 Miles [0065]
  • Any distribution network in this classification gives rise to design problems as one begins to exceed the limits of both Gigabit Ethernet and SONET IR-1 optics. In this case, either the distribution network may be configured to utilize the nested feeder architecture, or it can be redesigned to shorten the longest subscriber path. [0066]
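  • The following Python sketch restates the calculation and the three cases above. The data structure, function names, and example figures are illustrative only; the 9- and 16-mile thresholds come from the guidelines above.

    # Illustrative sketch of the decision rule: compute each Collector's longest
    # circuit path, take the maximum as the longest subscriber path, and pick a
    # distribution network variant.

    from dataclasses import dataclass

    @dataclass
    class Collector:
        name: str
        circumference_miles: float
        feeder_route_miles: float  # longest route around the Feeder to the Hub

        @property
        def longest_circuit_path(self) -> float:
            return self.circumference_miles + self.feeder_route_miles

    def longest_subscriber_path(collectors: list) -> float:
        return max(c.longest_circuit_path for c in collectors)

    def recommend_variant(collectors: list) -> str:
        path = longest_subscriber_path(collectors)
        if path < 9:
            return "single Feeder, multiple Collector architecture"
        if path <= 16:
            return "review optical link budget; consider the nested Feeder architecture"
        return "use the nested Feeder architecture or redesign to shorten the longest path"

    if __name__ == "__main__":
        bdn = [Collector("A", 3.0, 4.5), Collector("B", 2.5, 7.5)]
        print(longest_subscriber_path(bdn), "miles ->", recommend_variant(bdn))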
  • Synchronization
  • Subscribers purchasing SONET services can synchronize their equipment with the Network by line-timing from the optics of the ADM at the Hub. Similarly, subscribers purchasing Ethernet services can line-time from the optics of the system Ethernet Switch at the Hub facility. However, this option is not available for wavelength services, as these circuits bypass any equipment that can connect to a BITS clock. Subscribers who desire wavelength services must therefore either provide their own clock source or line-time from the customer equipment that they logically attach to on the far end of the distribution network. Should either of these options be unavailable for a given subscriber circuit, there is still a likely option available to provide error-free service. Depending on the age of the equipment, Telcordia-compliant devices should contain an internal SONET Minimum Clock (SMC) source or Stratum 3 clock source. Either should provide adequate synchronization for SONET signals. Any equipment free-running on a Stratum 3 or SMC source should operate error-free under normal conditions. The major perceptible difference will be an increase in the frequency of pointer justification events between interconnected devices. [0067]
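  • A minimal Python sketch of the synchronization choices just described appears below. The service labels and the order of preference are illustrative assumptions, not configuration syntax for any particular equipment.

    # Illustrative sketch: choose a synchronization source for a subscriber circuit
    # following the options described above. Labels are illustrative only.

    def select_sync_source(service: str, has_own_clock: bool, far_end_timed: bool) -> str:
        if service in ("SONET", "Ethernet"):
            # Line-time from the ADM or Ethernet switch optics at the Hub.
            return "line-time from Hub equipment"
        if service == "wavelength":
            # Wavelength circuits bypass any BITS-connected equipment at the Hub.
            if has_own_clock:
                return "subscriber-provided clock source"
            if far_end_timed:
                return "line-time from far-end customer equipment"
            # Fall back to the device's internal SMC or Stratum 3 source.
            return "free-run on internal SMC or Stratum 3 clock"
        raise ValueError(f"unknown service type: {service}")

    if __name__ == "__main__":
        print(select_sync_source("wavelength", has_own_clock=False, far_end_timed=False))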
  • Depending on the situation, SONET equipment installed at a subscriber site may be owned and maintained either by the distribution network operator or by the individual subscriber. Ethernet equipment installed at a subscriber site will generally be owned and maintained by the subscriber. [0068]
  • All distribution network electronics installed at subscriber locations that are owned and maintained by the distribution network operator should be remotely manageable, and should be capable of forwarding alarm messages to the system NOC. SONET equipment will commonly utilize the SONET Section Data Communications Channel (SDCC) to communicate with the ADM equipment installed at the Hub. [0069]
  • The Ethernet Services Network [0070]
  • Resiliency [0071]
  • It is desirable that a network of the type described herein be substantially always available. In addition, a desirable network architecture will provide fast recovery from failure to meet uptime objectives. Taking Ethernet as an example local loop technology, it is an objective that Ethernet services be highly available. This objective makes the elimination of any Spanning Tree Protocol (“STP”) from the architecture desirable. In a preferred embodiment, STP is not used because, otherwise, network recovery times may be on the order of minutes per failure. [0072]
  • The network elements which provide redundancy need not be co-located with the primary network elements. This design technique reduces the probability that problems with the physical environment will interrupt service. Problems with software bugs or upgrades or configuration errors or changes can often be dealt with separately in the primary and secondary forwarding paths without completely interrupting service. Therefore, network-level redundancy can also reduce the impact of non-hardware failure mechanisms. With the redundancy provided by the network, each network device no longer needs to be configured for the ultimate in standalone fault tolerance. Redundant networks can be configured to fail-over automatically from primary to secondary facilities without operator intervention. The duration of service interruption is equal to the time it takes for fail-over to occur. Fail-over times as low as a few seconds are possible in this manner. [0073]
  • Dual Physical Overlay Ring Core Topology [0074]
  • The local services network (e.g., Ethernet) according to the preferred embodiment of the present invention comprises a dual overlay ring topology within the core. This topology is shown in FIG. 4. As can be seen, the dual overlay ring topology is a physical topology in which two complete physical paths are disposed to ensure that two data channels are available during normal periods of use so that at least one is available to communicate information in the event the other becomes unavailable. [0075]
  • This physical topology allows the creation of a working [0076] path 50 and a protection path 52 across the network connecting each subscriber (L3 Switch 54) to their carrier/ISP (L3 Switches 56, 58). The working path 50 can be provisioned on one ring while the protection path 52 can be provisioned on the other ring shown, creating the logical connectivity topology shown in FIG. 5.
  • Logical connectivity may be accomplished in many ways, such as by using Ethernet Virtual LAN (VLAN) tagging, as defined in the IEEE 802.1Q standard. A VLAN can be roughly equated to a broadcast domain. More specifically, VLANs can be seen as analogous to a group of end-stations, perhaps on multiple physical LAN segments, which are not constrained by their physical location and can communicate as if they were on a common LAN. The 802.1Q header adds four octets to the standard Ethernet frame. By configuring ports on the Ethernet switches (e.g., 54) to be part of the specific customer's VLAN, the logical connectivity paths are created through the network. This process is somewhat analogous to creating a Permanent Virtual Circuit (“PVC”) in the Frame Relay or ATM environment. [0077]
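  • As an illustration of the tagging described above, the following Python sketch builds the four-octet 802.1Q tag and inserts it into a frame. The constants follow the IEEE 802.1Q standard; the framing helper and example addresses are illustrative assumptions.

    # Illustrative sketch: build the 4-byte IEEE 802.1Q tag (TPID + TCI) that carries
    # a customer's VLAN ID, and insert it between the source MAC and the EtherType.

    import struct

    TPID_8021Q = 0x8100  # Tag Protocol Identifier defined by 802.1Q

    def dot1q_tag(vlan_id: int, priority: int = 0, dei: int = 0) -> bytes:
        """Return the 4-byte 802.1Q tag for the given VLAN."""
        if not 1 <= vlan_id <= 4094:
            raise ValueError("customer VLAN ID must be 1-4094")
        tci = (priority & 0x7) << 13 | (dei & 0x1) << 12 | (vlan_id & 0xFFF)
        return struct.pack("!HH", TPID_8021Q, tci)

    def tagged_frame(dst: bytes, src: bytes, vlan_id: int, ethertype: int, payload: bytes) -> bytes:
        """Assemble an illustrative tagged frame (FCS omitted)."""
        return dst + src + dot1q_tag(vlan_id) + struct.pack("!H", ethertype) + payload

    if __name__ == "__main__":
        frame = tagged_frame(b"\xff" * 6, b"\x00\x11\x22\x33\x44\x55",
                             vlan_id=101, ethertype=0x0800, payload=b"\x00" * 46)
        print(len(frame), frame[:18].hex())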
  • Extreme Networks' Extreme Standby Router Protocol (“ESRP”) may be used to detect and recover from failures that occur within the Ethernet Network. Additional protocols may be implemented to support detection and recovery of failures that occur at the Carrier/ISP connection. Some of these protocols are Hot Standby Router Protocol (“HSRP”) and Virtual Router Redundancy Protocol (“VRRP”). Note that standard Layer 2 protection protocols such as 802.1D Spanning Tree are not required in some embodiments of the present invention. [0078]
  • Overview of ESRP [0079]
  • ESRP is a feature of the Extreme OS (operating system) that allows multiple switches to provide redundant services to users. In addition to providing Layer 3 routing redundancy for IP, ESRP also provides Layer 2 redundancy. The Layer 2 redundancy features of ESRP offer fast failure recovery and provide for a dual-homed system design generally independent of end-user attached equipment. [0080]
  • ESRP is configured on a per-VLAN basis on each switch. This system utilizes ESRP in a two-switch configuration, one master and one standby. The switches exchange keep-alive packets for each VLAN independently. Only one switch can actively provide Layer 2 switching for each VLAN. The switch performing the forwarding for a particular VLAN is considered the “master” for that VLAN. The other participating switch for the VLAN is in “standby” mode. [0081]
  • For a VLAN with ESRP enabled, each participating switch uses the same MAC address and must be configured with the same IP address. It is possible for one switch to be master for one or more VLANs while being in standby for others, thus allowing the load to be split across participating switches. [0082]
  • To have two or more switches participate in ESRP, the following must be true. For each VLAN to be made redundant, the switches must have the ability to exchange packets on the same Layer 2 broadcast domain for that VLAN. Multiple paths of exchange can be used, and typically exist in most network system designs that take advantage of ESRP. In order for a VLAN to be recognized as participating in ESRP, the assigned IP address for the separate switches must be identical. ESRP must be enabled on the desired VLANs for each switch. Extreme Discovery Protocol (EDP) must be enabled on the ports that are members of the ESRP VLANs. [0083]
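  • The participation conditions just listed can be summarized in a short Python sketch. The data model below is an assumption made for illustration; it is not an Extreme OS configuration interface.

    # Illustrative sketch: verify the conditions above before two switches can
    # participate in ESRP for a VLAN. The data model is an assumption, not an API.

    from dataclasses import dataclass

    @dataclass
    class VlanConfig:
        name: str
        ip_address: str
        esrp_enabled: bool
        shares_l2_domain: bool = True          # can exchange packets on the same L2 domain
        member_ports: frozenset = frozenset()
        edp_enabled_ports: frozenset = frozenset()

    def can_participate(a: VlanConfig, b: VlanConfig) -> bool:
        return (
            a.shares_l2_domain and b.shares_l2_domain       # common broadcast domain
            and a.ip_address == b.ip_address                # identical assigned IP address
            and a.esrp_enabled and b.esrp_enabled           # ESRP enabled on the VLAN
            and a.member_ports <= a.edp_enabled_ports       # EDP on all member ports
            and b.member_ports <= b.edp_enabled_ports
        )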
  • Master Switch Behavior [0084]
  • If a switch is master, it actively provides Layer 2 switching between all the ports of that VLAN. Additionally, the switch exchanges ESRP packets with other switches that are in standby mode. [0085]
  • Standby Switch Behavior [0086]
  • If a switch is in standby mode, it exchanges ESRP packets with other switches on that same VLAN. When a switch is in standby, it does not perform Layer 2 switching services for the VLAN. From a Layer 2 switching perspective, no forwarding occurs between the member ports of the VLAN. This prevents loops and maintains redundancy. [0087]
  • ESRP Tracking [0088]
  • ESRP can be configured to track connectivity to one or more specified VLANs as criteria for fail-over. The switch that has the greatest number of active ports for a particular VLAN takes highest precedence and will become master. If at any time the number of active ports for a particular VLAN on the master switch becomes less than that of the standby switch, the master switch automatically relinquishes master status and remains in standby mode. [0089]
  • Additionally, ESRP can be configured to track connectivity using a simple ping to any outside responder (ping tracking). The responder may represent the default route of the switch, or any device meaningful to network connectivity of the master ESRP switch. It should be noted that the responder must reside on a different VLAN than ESRP. The switch automatically relinquishes master status and remains in standby mode if a ping keep-alive fails three consecutive times. [0090]
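  • The port-count and ping-tracking behavior described above can be modeled with the short Python sketch below. The state names, failure counter, and election helper are illustrative assumptions; only the port-count precedence and the three-failure ping threshold come from the description.

    # Illustrative sketch: per-VLAN master election driven by active-port count and
    # ping tracking. Not an implementation of any vendor's ESRP software.

    from dataclasses import dataclass

    PING_FAILURE_LIMIT = 3  # three consecutive keep-alive failures force fail-over

    @dataclass
    class EsrpSwitch:
        name: str
        active_ports: int
        ping_failures: int = 0
        state: str = "standby"

    def record_ping(switch: EsrpSwitch, success: bool) -> None:
        """Track consecutive ping keep-alive failures to the outside responder."""
        switch.ping_failures = 0 if success else switch.ping_failures + 1

    def elect_master(a: EsrpSwitch, b: EsrpSwitch) -> EsrpSwitch:
        """Pick the master for one VLAN; the other switch stays in standby."""
        def eligible(s: EsrpSwitch) -> bool:
            return s.ping_failures < PING_FAILURE_LIMIT
        candidates = [s for s in (a, b) if eligible(s)] or [a, b]
        master = max(candidates, key=lambda s: s.active_ports)
        for s in (a, b):
            s.state = "master" if s is master else "standby"
        return master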
  • A simplified drawing of the logical topology is shown in FIG. 6, indicating where ESRP is utilized in the present distribution network design. FIG. 6 depicts ESRP enabled in the switches (62, 63) directly attached to the subscriber 60. Port tracking is used to detect local failure of a link directly connected to these switches, while ping tracking is used to detect core network failures. If a failure is detected anywhere along the active path 64, ESRP will fail over to allow traffic to flow on the standby path 65. As shown, the ESRP port count can be used to protect dual customer connections to the network. ESRP ping tracking is used to protect the core VLAN. In the exemplary embodiment shown, VRRP or HSRP protects the Carrier/ISP L3 switch. [0091]
  • ESRP Enhancements [0092]
  • A preferred embodiment of the network includes network enhancements, including Extreme Networks' ESRP, to support rapid failover of subscriber equipment when a network or core failure occurs. In the context of ESRP, this is referred to as the “ESRP Failover Link Transition Enhancement.” This enhancement refers to the ability of a “Master” ESRP switch, when transitioning to the standby state, to “bounce” or restart auto-negotiation on a set of physical ports. This enhancement will cause an end device to flush its Layer 2 forwarding database and cause it to re-broadcast immediately for a new path through the network. This provides the end station the ability to switch from the primary to the secondary path in a very short time. [0093]
  • Restarting auto-negotiation on a set of physical ports when a “Master” ESRP switch transitions to the standby state is useful in this architecture to inform an end-user Layer 2 device of a failure farther within the network that does not directly impact that device. As background, typical Layer 3 switches use the Address Resolution Protocol (ARP) to populate their forwarding databases. This forwarding database determines, based on destination MAC address, which port packets are sent out on. Once this information is learned through the ARP process, typical Layer 2 devices will not modify this forwarding information unless one of two events occurs: (1) a Loss of Signal (LOS) occurs on the port, or (2) the ARP max age timer expires. Typically, the ARP max age value is set to 5 minutes. When this timer expires, the Layer 2 device will re-ARP to update its forwarding database information. Therefore, if a failure occurs within the core of the network that does not cause a LOS on the end-user device, that device will continue to forward packets into the network until the ARP max age timer expires, even though they cannot reach their ultimate destination. This is known as a black hole situation. The enhancement proposed here prevents a black hole situation by notifying the end device of the core failure: “bouncing” the port forces the equipment to re-ARP and update its forwarding database information immediately. [0094]
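  • The difference between waiting for the ARP max age timer and bouncing the port can be made concrete with the following Python sketch. The event model and timer handling are illustrative assumptions; only the five-minute max age figure comes from the description above.

    # Illustrative sketch: how long an end device keeps forwarding into a dead path
    # ("black hole") after a core failure, with and without the port bounce.

    ARP_MAX_AGE_SECONDS = 300  # typical 5-minute ARP max age cited above

    def black_hole_seconds(core_failure_at: float, entry_learned_at: float,
                           port_bounced: bool) -> float:
        """Seconds during which the end device forwards toward an unreachable destination."""
        if port_bounced:
            # Restarting auto-negotiation produces a loss of signal, so the device
            # flushes its forwarding entry and re-ARPs almost immediately.
            return 0.0
        entry_expires_at = entry_learned_at + ARP_MAX_AGE_SECONDS
        return max(entry_expires_at - core_failure_at, 0.0)

    if __name__ == "__main__":
        print(black_hole_seconds(core_failure_at=120.0, entry_learned_at=0.0, port_bounced=False))
        print(black_hole_seconds(core_failure_at=120.0, entry_learned_at=0.0, port_bounced=True))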
  • Although certain preferred embodiments of the present invention have been described above by way of example, it will be understood that modifications may be made to the disclosed embodiments without departing from the scope of the invention, which is defined by the appended claims. The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature or element of any or all the claims. As used herein, the terms “comprises,” “comprising,” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. [0095]
  • Glossary of Terms
  • ADM—Add/Drop Multiplexer. A SONET component capable of inserting and removing traffic to/from the SONET line payload. ADMs also commonly perform other functions, such as generating/processing APS commands and synchronizing the transport optics to an external clock source. [0096]
  • APS—Automatic Protection Switching. A SONET fault recovery protocol standardized by Telcordia. APS will generally provide fault recovery in less than 50 ms. [0097]
  • BDN—Built Distribution Network. A portion of a network dedicated to the aggregation of multiple subscribers. A BDN typically utilizes fiber to provide dedicated fiber links between individual subscribers and a Hub facility. [0098]
  • BITS—Building Integrated Timing Supply. A highly accurate and precise clock source used to synchronize multiple nodes on a SONET transport system. [0099]
  • Chromatic Dispersion—A linear effect that causes pulse broadening or compression within an optical transmission system. Chromatic Dispersion occurs because different wavelengths of light travel at different velocities through the transmission media. [0100]
  • Collector Loop—A fiber loop (typically 144 ct) used to connect multiple subscribers to the larger Feeder loop on a BDN. [0101]
  • Customer—A business entity (such as an ISP or LEC) that provides telecommunications service within a Metropolitan area. The BDN operator typically will serve as an intermediate transport mechanism to connect subscribers to various customers. [0102]
  • DMD—Differential Mode Delay. A linear effect that degrades the quality of laser transmissions across MMF. A single laser transmission can inadvertently become subdivided upon ingress to MMF. These identical signals traverse unique transmission paths within the large core of MMF and leave the fiber offset in time. [0103]
  • DS3—Digital Signal 3. A digital signal rate of approximately 44.736 Mb/s corresponding to the North American T3 designator. A plesiochronous transport protocol equivalent to 672 voice lines at 64 kb/s each. [0104]
  • DWDM—Dense Wavelength Division Multiplexing. A method of allowing multiple transmission signals to be transmitted simultaneously over a single fiber by giving each a unique frequency range (or wavelength) within the transmission spectrum. DWDM wavelengths within the C-Band are standardized by the ITU-T. [0105]
  • Ethernet—A standardized (IEEE 802.3) packet-based data transport protocol developed by Xerox Corporation. [0106]
  • Ethernet Switch—A device used to route data packets to their proper destination in an Ethernet-based transport network. [0107]
  • FDP—Fiber Distribution Panel. An enclosure built to organize, manage, and protect physical cross-connections between multiple fiber-optic cables. [0108]
  • Feeder Loop—A fiber loop (typically 432 ct) used to connect multiple collector loops to a Hub facility on a BDN. [0109]
  • Fusion Splice—The process of joining two discrete fiber-optic cables via localized heating of the fiber ends. Fusion splices are typically characterized as permanent in nature and exhibit relatively minor loss (<0.05 dB) at the fusion point. [0110]
  • Hub Facility—A facility used to connect a distribution network (BDN or LDN) to a transport network (LTN) within a Metropolitan area. [0111]
  • IR-1—A specification for transmission lasers and receiver photodiodes standardized by Telcordia. IR-1 optics typically provide a 13.0 dB link budget and are optimized for NDSF. [0112]
  • ITU-T—International Telecommunication Union—Telecommunication Standardization Sector. [0113]
  • Lateral—A fiber spur containing multiple fibers (e.g., 48 ct) used to connect a Collector loop to a subscriber site. [0114]
  • LDN—Leased Distribution Network. A portion of a network dedicated to the aggregation of multiple subscribers. An LDN typically utilizes fiber to provide fiber links between individual subscribers and a Hub facility. [0115]
  • LR-1—A specification for transmission lasers and receiver photodiodes standardized by Telcordia. LR-1 optics typically provide a 25 dB link budget and are optimized for NDSF. [0116]
  • LTN—Leased Transport Network. A portion of a network dedicated to connecting various Customer sites to Hub facilities. An LTN will typically utilize TDM and DWDM equipment over a small quantity of leased fiber. [0117]
  • MCP—Modal Conditioning Patch cord. A hybrid fiber-optic cable used to overcome DMD problems by allowing a laser to mimic the overfilled launch characteristics of an LED. [0118]
  • Mechanical Splice—The process of joining two discrete fiber-optic cables by aligning them within a mechanical enclosure or adhesive media. Mechanical splices typically utilize an index-matching gel to reduce reflection at the splice point. A moderate power loss (0.10 to 0.20 dB) can be expected at the splice point. [0119]
  • Media Converter—A generic classification of devices used to alter protocols and/or media of a transmitted signal. [0120]
  • MMF—Multi-Mode Fiber. A fiber-optic cable with a relatively large (50 to 62.5 μm) transmission core that allows signals to traverse multiple, discrete transmission paths (modes) within the cable. MMF is typically utilized with LED-based optical transmission systems. [0121]
  • Modal Distortion—A linear effect that causes pulse broadening of transmission signals over MMF. Rays taking more direct paths (fewer reflections in the core) through the MMF core traverse the fiber more quickly than rays taking less direct paths. Modal distortion limits the bandwidth and distance of transmission links over MMF. [0122]
  • MPOE—Minimum Point of Entry. A common space within a multi-tenant building used to interconnect multiple tenants with common external telecommunications facilities. [0123]
  • NDSF—Non Dispersion-Shifted Fiber. Single-mode optical fiber with a nominal zero-dispersion wavelength within the conventional 1310 nm transmission window. [0124]
  • NZ-DSF—Non-Zero Dispersion-Shifted Fiber. Single-mode optical fiber with a nominal zero-dispersion wavelength shifted to reduce chromatic dispersion within the 1530 nm to 1560 nm transmission window. [0125]
  • OC-3—Optical Carrier 3. The optical equivalent to an STS-3, with a digital signal rate of approximately 155.52 Mb/s. A synchronous transport protocol equivalent to 2016 voice lines at 64 kb/s each. Protocol is specified by Telcordia standards. [0126]
  • OC-3c—Optical Carrier 3, Concatenated. A non-channelized variant of the OC-3, primarily utilized for data transmissions over SONET. Protocol is specified by Telcordia standards. [0127]
  • OC-12—Optical Carrier 12. The optical equivalent to an STS-12, with a digital signal rate of approximately 622.08 Mb/s. A synchronous transport protocol equivalent to 8064 voice lines at 64 kb/s each. Protocol is specified by Telcordia standards. [0128]
  • OC-12c—Optical Carrier 12, Concatenated. A non-channelized variant of the OC-12, primarily utilized for data transmissions over SONET. Protocol is specified by Telcordia standards. [0129]
  • OC-48—Optical Carrier 48. The optical equivalent to an STS-48, with a digital signal rate of approximately 2.488 Gb/s. A synchronous transport protocol equivalent to 32256 voice lines at 64 kb/s each. Protocol is specified by Telcordia standards. [0130]
  • OC-48c—Optical Carrier 48, Concatenated. A non-channelized variant of the OC-48, primarily utilized for data transmissions over SONET. Protocol is specified by Telcordia standards. [0131]
  • Plesiochronous—The relationship between two transmission devices, where each is timed from similar, yet diverse clock sources. A slight difference in either frequency or phase must exist between the diverse clocks. [0132]
  • POP—Point of Presence. The physical facility in which interexchange carriers and local exchange carriers provide access services. [0133]
  • SMF—Single Mode Fiber. A type of optical fiber in which only a single transport path (mode) is available through the core at a given wavelength. [0134]
  • SONET—Synchronous Optical NETwork. A circuit-based transmission/restoration protocol defined by Telcordia standards. Use of the SONET TDM protocol is primarily limited to North America. [0135]
  • Splice Box—An enclosure built to organize, manage, and protect physical splices between multiple fiber-optic cables. [0136]
  • SR—A specification for transmission lasers and receiver photodiodes standardized by Telcordia. SR optics typically provide an 8 dB link budget and are optimized for NDSF. [0137]
  • Subscriber—An end-user (or desired end-user) of a Customer's telecommunications service. A BDN operator typically will serve as an intermediate transport mechanism between subscribers and customers. [0138]
  • Synchronous—The relationship between two transmission devices, where both are timed from identical clock sources. The clocks must be identical in frequency and phase. [0139]
  • TDM—Time-Division Multiplexing. Combining multiple transmission signals into a common, higher-frequency bit-stream. [0140]
  • WDM—Wavelength-Division Multiplexing. A method of allowing multiple transmission signals to be transmitted simultaneously over a single fiber by giving each a unique frequency range (or wavelength) within the transmission spectrum. [0141]

Claims (23)

What is claimed is:
1. A fiber optic distribution network comprising:
a network feeder loop, said network feeder loop connected to a hub communicating with a metropolitan area network; and
a plurality of collector loops extending from said network feeder loop, each comprising a plurality of optical fibers, such that each of said collector loops forms a plurality of intersections with said feeder loop.
2. The distribution network of claim 1, wherein at least some of said plurality of intersections are disposed at different physical locations on said feeder loop.
3. The distribution network of claim 1, wherein said intersections of said collector loops with said network feeder loop are formed by splices.
4. The distribution network of claim 3, wherein said splices are fusion splices.
5. The distribution network of claim 4, wherein said network feeder loop comprises 288 optical fibers.
6. The distribution network of claim 4, wherein said network feeder loop comprises 432 optical fibers.
7. The distribution network of claim 4, wherein said network feeder loop comprises an amount of optical fibers equal to one half the sum of the count of the optical fibers comprising all of said collector loops.
8. The distribution network of claim 1, wherein a plurality of subscriber laterals are disposed on said collector loops and form connections therewith, and wherein said subscriber laterals extend to a plurality of subscriber facilities.
9. The distribution network of claim 8, wherein said laterals each comprise a plurality of optical fibers.
10. The distribution network of claim 8, wherein at least some of said connections between said subscriber laterals and said collector loops are formed with mechanical splices.
11. The distribution network of claim 8, wherein at least one of said laterals comprises 48 optical fibers.
12. The distribution network of claim 8, wherein all of said laterals on one of said collector loops comprise a number of optical fibers substantially equal to the count of the optical fibers comprising that collector loop.
13. A fiber optic distribution network comprising:
an exterior feeder loop, said exterior feeder loop connected to a hub, wherein said hub communicates with a wide area distribution network;
a plurality of exterior collector loops connected to said exterior feeder loop, wherein said exterior collector loops form a plurality of intersections with said exterior feeder loop;
an interior feeder loop, said interior feeder loop connected to said hub, and wherein said interior feeder loop is disposed at least in substantial part within said exterior feeder loop; and
an interior collector loop connected to said interior feeder loop, wherein said interior collector loop forms a plurality of intersections with said interior feeder loop.
14. The distribution network of claim 13, wherein said intersections between said exterior collector loops with said exterior feeder loops are formed by splices.
15. The distribution network of claim 14, wherein at least some of said splices are fusion splices.
16. The distribution network of claim 13, further comprising a plurality of subscriber laterals forming connections with said exterior collector loops and extending from said exterior collector loops to a plurality of subscriber facilities.
17. The distribution network of claim 16, wherein at least one of said laterals comprises a plurality of optical fibers.
18. The distribution network of claim 16, wherein at least some of said connections are formed with mechanical splices.
19. The distribution network of claim 13, wherein said interior collector loop is closer to said hub facility than said exterior feeder loops.
20. The distribution network of claim 13, wherein said interior feeder loop is disposed to reduce a longest subscriber path of the distribution network.
21. A core service network comprising:
a dual overlay ring comprising a plurality of complete physical paths from a carrier party to a subscriber party,
wherein one of said plurality of complete physical paths is allocated as a primary virtual line; and
wherein one of said complete physical paths is allocated as a secondary virtual line.
22. The core service network of claim 21, wherein said primary virtual line is configured to serve as a data channel.
23. The core service network of claim 22, wherein said secondary virtual line is configured to serve as a back-up channel to said primary virtual line.
US09/952,284 2001-09-12 2001-09-12 Metropolitan area local access service system Abandoned US20030048501A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US09/952,284 US20030048501A1 (en) 2001-09-12 2001-09-12 Metropolitan area local access service system
US09/975,474 US20030048746A1 (en) 2001-09-12 2001-10-11 Metropolitan area local access service system
PCT/US2002/028457 WO2003024029A2 (en) 2001-09-12 2002-09-06 Metropolitan area local access service system
EP02768815A EP1425881A2 (en) 2001-09-12 2002-09-06 Metropolitan area local access service system
US11/087,938 US8031589B2 (en) 2001-09-12 2005-03-23 Metropolitan area local access service system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US09/952,284 US20030048501A1 (en) 2001-09-12 2001-09-12 Metropolitan area local access service system

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US09/975,474 Continuation US20030048746A1 (en) 2001-09-12 2001-10-11 Metropolitan area local access service system
US11/087,938 Division US8031589B2 (en) 2001-09-12 2005-03-23 Metropolitan area local access service system

Publications (1)

Publication Number Publication Date
US20030048501A1 true US20030048501A1 (en) 2003-03-13

Family

ID=25492743

Family Applications (3)

Application Number Title Priority Date Filing Date
US09/952,284 Abandoned US20030048501A1 (en) 2001-09-12 2001-09-12 Metropolitan area local access service system
US09/975,474 Abandoned US20030048746A1 (en) 2001-09-12 2001-10-11 Metropolitan area local access service system
US11/087,938 Expired - Fee Related US8031589B2 (en) 2001-09-12 2005-03-23 Metropolitan area local access service system

Family Applications After (2)

Application Number Title Priority Date Filing Date
US09/975,474 Abandoned US20030048746A1 (en) 2001-09-12 2001-10-11 Metropolitan area local access service system
US11/087,938 Expired - Fee Related US8031589B2 (en) 2001-09-12 2005-03-23 Metropolitan area local access service system

Country Status (3)

Country Link
US (3) US20030048501A1 (en)
EP (1) EP1425881A2 (en)
WO (1) WO2003024029A2 (en)


Families Citing this family (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030048501A1 (en) * 2001-09-12 2003-03-13 Michael Guess Metropolitan area local access service system
US8451711B1 (en) * 2002-03-19 2013-05-28 Cisco Technology, Inc. Methods and apparatus for redirecting traffic in the presence of network address translation
US7145865B1 (en) * 2002-06-26 2006-12-05 Bellsouth Intellectual Property Corp. Method for moving network elements with minimal network outages in an active ATM network
US7120336B2 (en) * 2002-08-29 2006-10-10 Micron Technology, Inc. Resonator for thermo optic device
US7020365B2 (en) * 2002-08-29 2006-03-28 Micron Technology, Inc. Resistive heater for thermo optic device
US7006746B2 (en) * 2002-08-29 2006-02-28 Micron Technology, Inc. Waveguide for thermo optic device
US20050147027A1 (en) * 2002-08-30 2005-07-07 Nokia Corporation Method, system and hub for loop initialization
WO2004027628A1 (en) * 2002-09-17 2004-04-01 Broadcom Corporation System and method for access point (ap) aggregation and resiliency in a hybrid wired/wireless local area network
US8051211B2 (en) 2002-10-29 2011-11-01 Cisco Technology, Inc. Multi-bridge LAN aggregation
US20040160895A1 (en) * 2003-02-14 2004-08-19 At&T Corp. Failure notification method and system in an ethernet domain
US20040165595A1 (en) * 2003-02-25 2004-08-26 At&T Corp. Discovery and integrity testing method in an ethernet domain
US7292542B2 (en) * 2003-03-05 2007-11-06 At&T Bls Intellectual Property, Inc. Method for traffic engineering of connectionless virtual private network services
US7345991B1 (en) * 2003-05-28 2008-03-18 Atrica Israel Ltd. Connection protection mechanism for dual homed access, aggregation and customer edge devices
US7911937B1 (en) * 2003-05-30 2011-03-22 Sprint Communications Company L.P. Communication network architecture with diverse-distributed trunking and controlled protection schemes
US7463579B2 (en) * 2003-07-11 2008-12-09 Nortel Networks Limited Routed split multilink trunking
ATE387659T1 (en) * 2003-07-21 2008-03-15 Alcatel Lucent METHOD AND SYSTEM FOR REPLACING THE SOFTWARE
JP4438367B2 (en) * 2003-10-01 2010-03-24 日本電気株式会社 Network, relay transmission apparatus, and optical signal control method used therefor
DE10358344A1 (en) * 2003-12-12 2005-07-14 Siemens Ag Method for the replacement switching of spatially separated switching systems
KR100735239B1 (en) * 2004-05-28 2007-07-03 삼성전자주식회사 Optical fiber for metro network
US8218434B1 (en) * 2004-10-15 2012-07-10 Ciena Corporation Ethernet facility and equipment protection
JP4523382B2 (en) * 2004-11-02 2010-08-11 富士通株式会社 Multimode fiber transmission system
CN100352223C (en) * 2004-12-31 2007-11-28 华为技术有限公司 Method for protecting data service in metropolitan area transmission network
DE102005004151A1 (en) * 2005-01-28 2006-08-10 Siemens Ag Method and device for assigning packet addresses of a plurality of devices
FR2882166B1 (en) * 2005-02-16 2007-04-27 Alcatel Sa DEVICE FOR CONVERTING ATM-BASED IP DATA TO ETHERNET IP DATA WITH HIGH AVAILABILITY FOR A COMMUNICATION NETWORK
US7688716B2 (en) * 2005-05-02 2010-03-30 Cisco Technology, Inc. Method, apparatus, and system for improving ethernet ring convergence time
US8264947B1 (en) * 2005-06-15 2012-09-11 Barclays Capital, Inc. Fault tolerant wireless access system and method
US7792017B2 (en) * 2005-06-24 2010-09-07 Infinera Corporation Virtual local area network configuration for multi-chassis network element
EP1763204B1 (en) * 2005-09-13 2013-12-04 Unify GmbH & Co. KG System and method for redundant switches taking into account learning bridge functionality
US8204064B2 (en) * 2005-09-16 2012-06-19 Acme Packet, Inc. Method and system of session media negotiation
EP1768307B1 (en) * 2005-09-23 2008-11-05 Nokia Siemens Networks Gmbh & Co. Kg Method for augmenting a network
EP1865662A1 (en) * 2006-06-08 2007-12-12 Koninklijke KPN N.V. Connection method and system for delivery of services to customers
US20080042536A1 (en) 2006-08-21 2008-02-21 Afl Telecommunications Llc. Strain relief system
US20080144634A1 (en) * 2006-12-15 2008-06-19 Nokia Corporation Selective passive address resolution learning
DE102007015449B4 (en) * 2007-03-30 2009-09-17 Siemens Ag Method for reconfiguring a communication network
DE102007015539B4 (en) * 2007-03-30 2012-01-05 Siemens Ag Method for reconfiguring a communication network
US7836360B2 (en) * 2007-04-09 2010-11-16 International Business Machines Corporation System and method for intrusion prevention high availability fail over
EP2206325A4 (en) * 2007-10-12 2013-09-04 Nortel Networks Ltd Multi-point and rooted multi-point protection switching
US9473382B2 (en) * 2008-06-27 2016-10-18 Telefonaktiebolaget Lm Ericsson (Publ) Method and system for link aggregation
JP5088281B2 (en) * 2008-09-19 2012-12-05 沖電気工業株式会社 Packet synchronous switching method and gateway device
GB0906290D0 (en) * 2009-04-09 2009-05-20 Nomura Internat Plc Ultra low latency securities trading infrastructure
CN101645750B (en) * 2009-09-02 2013-09-11 中兴通讯股份有限公司 Distributed electrical cross device and system and method thereof for realizing SNC cascade protection
CN101980488B (en) * 2010-10-22 2015-09-16 中兴通讯股份有限公司 The management method of ARP and three-tier switch
US8902734B2 (en) * 2011-01-31 2014-12-02 Telefonaktiebolaget L M Ericsson (Publ) System and method for providing communication connection resilience
US9491041B2 (en) * 2011-03-07 2016-11-08 Tejas Networks Limited Ethernet chain protection switching
US9264303B2 (en) 2011-03-11 2016-02-16 Tejas Networks Limited Protection switching method and system provision by a distributed protection group
DE102013110784B4 (en) * 2012-10-05 2021-03-18 Zte (Usa) Inc. SERVICE-SENSITIVE FLEXIBLE IPOWDM NETWORK AND OPERATING PROCEDURES
US10211944B2 (en) * 2015-05-26 2019-02-19 Nippon Telegraph And Telephone Corporation Station-side device and communication method


Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3071007B2 (en) * 1991-10-22 2000-07-31 富士通株式会社 Communication network control method
US5608726A (en) * 1995-04-25 1997-03-04 Cabletron Systems, Inc. Network bridge with multicast forwarding table
US5835696A (en) * 1995-11-22 1998-11-10 Lucent Technologies Inc. Data router backup feature
US6108300A (en) * 1997-05-02 2000-08-22 Cisco Technology, Inc Method and apparatus for transparently providing a failover network device
US6859438B2 (en) * 1998-02-03 2005-02-22 Extreme Networks, Inc. Policy based quality of service
US7430164B2 (en) * 1998-05-04 2008-09-30 Hewlett-Packard Development Company, L.P. Path recovery on failure in load balancing switch protocols
US6370110B1 (en) * 1998-09-16 2002-04-09 At&T Corp Back-up restoration technique for SONET/SHD rings
CA2285101A1 (en) * 1998-10-08 2000-04-08 Wayne D. Grover Integrated ring-mesh network
US6330229B1 (en) * 1998-11-09 2001-12-11 3Com Corporation Spanning tree with rapid forwarding database updates
US7246168B1 (en) * 1998-11-19 2007-07-17 Cisco Technology, Inc. Technique for improving the interaction between data link switch backup peer devices and ethernet switches
US6088328A (en) * 1998-12-29 2000-07-11 Nortel Networks Corporation System and method for restoring failed communication services
US6856627B2 (en) * 1999-01-15 2005-02-15 Cisco Technology, Inc. Method for routing information over a network
US6751191B1 (en) * 1999-06-29 2004-06-15 Cisco Technology, Inc. Load sharing and redundancy scheme
US6415323B1 (en) * 1999-09-03 2002-07-02 Fastforward Networks Proximity-based redirection system for robust and scalable service-node location in an internetwork
EP1132844A3 (en) * 2000-03-02 2002-06-05 Telseon IP Services Inc. E-commerce system facilitating service networks including broadband communication service networks
US6963575B1 (en) * 2000-06-07 2005-11-08 Yipes Enterprise Services, Inc. Enhanced data switching/routing for multi-regional IP over fiber network
US7009933B2 (en) * 2001-01-30 2006-03-07 Broadcom Corporation Traffic policing of packet transfer in a dual speed hub
US7058296B2 (en) * 2001-03-12 2006-06-06 Lucent Technologies Inc. Design method for WDM optical networks including alternate routes for fault recovery
US6834056B2 (en) * 2001-06-26 2004-12-21 Occam Networks Virtual local area network protection switching
US7274656B2 (en) * 2001-07-10 2007-09-25 Tropic Networks Inc. Protection system and method for resilient packet ring (RPR) interconnection
US7289428B2 (en) * 2001-08-13 2007-10-30 Tellabs Operations, Inc. Inter-working mesh telecommunications networks
CN100550715C (en) * 2001-09-04 2009-10-14 朗米·谢尔雅·冈达 On Ethernet, support synchronous digital hierarchy/synchronous optical network to protect the method for exchange automatically
US6766482B1 (en) * 2001-10-31 2004-07-20 Extreme Networks Ethernet automatic protection switching
US6826056B2 (en) * 2001-12-11 2004-11-30 Hewlett-Packard Development Company, L.P. Systems for use with data storage devices
US20030223379A1 (en) * 2002-05-28 2003-12-04 Xuguang Yang Method and system for inter-domain loop protection using a hierarchy of loop resolving protocols

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4871225A (en) * 1984-08-24 1989-10-03 Pacific Bell Fiber optic distribution network
US5107490A (en) * 1985-04-24 1992-04-21 Artel Communications Corporation Ring-type communication network
US5418785A (en) * 1992-06-04 1995-05-23 Gte Laboratories Incorporated Multiple-channel token ring network with single optical fiber utilizing subcarrier multiplexing with a dedicated control channel
US5530782A (en) * 1993-10-22 1996-06-25 Sumitomo Electric Industries, Ltd. Intermediate branching method for optical path
US5920410A (en) * 1994-06-08 1999-07-06 British Telecommunications Public Limited Company Access network
US5796501A (en) * 1995-07-12 1998-08-18 Alcatel N.V. Wavelength division multiplexing optical communication network
US6205144B1 (en) * 1995-07-18 2001-03-20 Siemens Aktiengesellschaft Program unit, particularly for digital, data-compressed video distribution signals
US6236640B1 (en) * 1997-02-03 2001-05-22 Siemens Aktiengesellschaft Method for alternate circuiting of transmission equipment in ring architectures for bidirectional transmission of ATM cells
US6351582B1 (en) * 1999-04-21 2002-02-26 Nortel Networks Limited Passive optical network arrangement
US6512614B1 (en) * 1999-10-12 2003-01-28 At&T Corp. WDM-based architecture for flexible switch placement in an access network
US7164698B1 (en) * 2000-03-24 2007-01-16 Juniper Networks, Inc. High-speed line interface for networking devices
US20030012238A1 (en) * 2000-04-01 2003-01-16 Qi Wu Apparatus and method for heating a small area of an object to a high temperature and for accurately maintaining this temperature
US6565269B2 (en) * 2001-02-07 2003-05-20 Fitel Usa Corp. Systems and methods for low-loss splicing of optical fibers having a high concentration of fluorine to other types of optical fiber
US6519399B2 (en) * 2001-02-19 2003-02-11 Corning Cable Systems Llc Fiber optic cable with profiled group of optical fibers
US6470032B2 (en) * 2001-03-20 2002-10-22 Alloptic, Inc. System and method for synchronizing telecom-related clocks in ethernet-based passive optical access network
US20030011852A1 (en) * 2001-07-16 2003-01-16 Lemoff Brian E. All-optical network distribution system
US20030048746A1 (en) * 2001-09-12 2003-03-13 Michael Guess Metropolitan area local access service system

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6717922B2 (en) * 2002-03-04 2004-04-06 Foundry Networks, Inc. Network configuration protocol and method for rapid traffic recovery and loop avoidance in ring topologies
US8593987B2 (en) 2002-04-16 2013-11-26 Brocade Communications Systems, Inc. System and method for providing network route redundancy across layer 2 devices
US8014301B2 (en) 2002-04-16 2011-09-06 Brocade Communications Systems, Inc. System and method for providing network route redundancy across layer 2 devices
US9450893B2 (en) 2002-04-16 2016-09-20 Brocade Communications Systems, Inc. System and method for providing network route redundancy across layer 2 devices
US7558195B1 (en) 2002-04-16 2009-07-07 Foundry Networks, Inc. System and method for providing network route redundancy across layer 2 devices
US20090296565A1 (en) * 2002-04-16 2009-12-03 Foundry Networks, Inc. System and method for providing network route redundancy across layer 2 devices
US7489867B1 (en) * 2002-05-06 2009-02-10 Cisco Technology, Inc. VoIP service over an ethernet network carried by a DWDM optical supervisory channel
US20040208558A1 (en) * 2002-07-15 2004-10-21 Innovance, Inc. Wavelength routing on an optical metro network subtended off an agile core optical network
US7200331B2 (en) * 2002-07-15 2007-04-03 Lucent Technologies Inc. Wavelength routing on an optical metro network subtended off an agile core optical network
US20090274153A1 (en) * 2002-10-01 2009-11-05 Andrew Tai-Chin Kuo System and method for implementation of layer 2 redundancy protocols across multiple networks
US8462668B2 (en) 2002-10-01 2013-06-11 Foundry Networks, Llc System and method for implementation of layer 2 redundancy protocols across multiple networks
US9391888B2 (en) 2002-10-01 2016-07-12 Foundry Networks, Llc System and method for implementation of layer 2 redundancy protocols across multiple networks
WO2006071131A1 (en) * 2004-12-27 2006-07-06 Fragoso Freitas Simoes Fernand Implementation method for a fixed optical communication network
US7864782B2 (en) * 2006-01-16 2011-01-04 Samsung Electronics Co., Ltd. Packet processing apparatus and method
US20070165648A1 (en) * 2006-01-16 2007-07-19 Min-Kyu Joo Packet processing apparatus and method
US8654630B2 (en) * 2010-03-19 2014-02-18 Brocade Communications Systems, Inc. Techniques for link redundancy in layer 2 networks
US20110228669A1 (en) * 2010-03-19 2011-09-22 Brocade Communications Systems, Inc. Techniques for link redundancy in layer 2 networks
US20110262141A1 (en) * 2010-04-21 2011-10-27 Lorenzo Ghioni Innovative architecture for fully non blocking service aggregation without o-e-o conversion in a dwdm multiring interconnection node
US8412042B2 (en) * 2010-04-21 2013-04-02 Cisco Technology, Inc. Innovative architecture for fully non blocking service aggregation without O-E-O conversion in a DWDM multiring interconnection node
US20140341012A1 (en) * 2013-05-17 2014-11-20 Ciena Corporation Resilient dual-homed data network hand-off
US9769058B2 (en) * 2013-05-17 2017-09-19 Ciena Corporation Resilient dual-homed data network hand-off
US10122619B2 (en) 2013-05-17 2018-11-06 Ciena Corporation Resilient dual-homed data network hand-off

Also Published As

Publication number Publication date
WO2003024029A3 (en) 2003-07-31
WO2003024029A2 (en) 2003-03-20
EP1425881A2 (en) 2004-06-09
US20030048746A1 (en) 2003-03-13
US8031589B2 (en) 2011-10-04
US20050180339A1 (en) 2005-08-18

Similar Documents

Publication Publication Date Title
US8031589B2 (en) Metropolitan area local access service system
US7660238B2 (en) Mesh with protection channel access (MPCA)
US6532088B1 (en) System and method for packet level distributed routing in fiber optic rings
US7606886B1 (en) Method and system for providing operations, administration, and maintenance capabilities in packet over optics networks
US7315511B2 (en) Transmitter, SONET/SDH transmitter, and transmission system
US7649847B2 (en) Architectures for evolving traditional service provider networks and methods of optimization therefor
EP1302035B1 (en) Joint IP/optical layer restoration after a router failure
US20070086364A1 (en) Methods and system for a broadband multi-site distributed switch
US7606224B2 (en) Transmission apparatus for making ring switching at SONET/SDH and RPR levels
US7170851B1 (en) Systems and methods for automatic topology provisioning for SONET networks
US20040109408A1 (en) Fast protection for TDM and data services
WO2004010653A1 (en) Metropolitan area local access service system
US20060062246A1 (en) Multi-service transport apparatus for integrated transport networks
EP2472778A1 (en) Method for time division multiplex service protection
EP1435153B1 (en) Metropolitan area local access service system
EP1246408B1 (en) Mapping of data frames from a local area network into a synchronous digital telecommunications system
Cisco Overview
Cisco Cisco ONS 15454 SDH Product Overview, Release 3.3
US7315695B2 (en) Method and apparatus for defining optical broadband services on an optical communication network
Jones et al. Sprint long distance network survivability: today and tomorrow
Lam et al. Optical Ethernet: Protocols, management, and 1–100 G technologies
Alegria et al. The WaveStar™ BandWidth Manager: The key building block in the next-generation transport network
Graber et al. Multi-service switches and the Service Intelligent™ optical architecture for SONET/SDH metro networks
Hamad Performance analysis and management of RPR (resilient packet ring) rings attached to an new large layer 2 (L2) networks (NLL2N)

Legal Events

Date | Code | Title | Description
AS Assignment

Owner name: ONFIBER COMMUNICATIONS, INC., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GUESS, MICHAEL;NIEZGODA, PAUL;STREET, FRASER;REEL/FRAME:012174/0948;SIGNING DATES FROM 20010830 TO 20010910

AS Assignment

Owner name: SILICON VALLEY BANK, CALIFORNIA

Free format text: SECURITY INTEREST;ASSIGNOR:ONFIBER COMMUNICATIONS, INC.;REEL/FRAME:014514/0548

Effective date: 20030815

AS Assignment

Owner name: ONFIBER COMMUNICATIONS, INC., TEXAS

Free format text: RELEASE;ASSIGNOR:SILICON VALLEY BANK;REEL/FRAME:017093/0859

Effective date: 20050925

AS Assignment

Owner name: COMERICA BANK, MICHIGAN

Free format text: SECURITY AGREEMENT;ASSIGNORS:ONFIBER COMMUNICATIONS, INC.;ONFIBER CARRIER SERVICES - VIRGINIA, INC.;INFO-TECH COMMUNICATIONS;AND OTHERS;REEL/FRAME:017379/0215

Effective date: 20051006

AS Assignment

Owner name: ONFIBER CARRIER SERVICES, INC., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:COMERICA BANK;REEL/FRAME:018847/0033

Effective date: 20070202

Owner name: ONFIBER CARRIER SERVICES-VIRGINIA, INC., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:COMERICA BANK;REEL/FRAME:018847/0033

Effective date: 20070202

Owner name: ONFIBER COMMUNICATIONS, INC., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:COMERICA BANK;REEL/FRAME:018847/0033

Effective date: 20070202

Owner name: INFO-TECH COMMUNICATIONS, TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:COMERICA BANK;REEL/FRAME:018847/0033

Effective date: 20070202

AS Assignment

Owner name: QWEST COMMUNICATIONS INTERNATIONAL INC., COLORADO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ONFIBER COMMUNICATIONS, INC.;REEL/FRAME:019781/0759

Effective date: 20070830

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION