US5606664A - Apparatus and method for automatically determining the topology of a local area network - Google Patents

Apparatus and method for automatically determining the topology of a local area network

Info

Publication number
US5606664A
US5606664A
Authority
US
United States
Prior art keywords
hubs
hub
topology
communication network
concentrator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US08/046,405
Inventor
Brian Brown
Shabbir A. Chowdhury
Jean-Luc Fontaine
Chao-Yu Liang
Ronald V. Schmidt
Chang-Jung Wang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nortel Networks NA Inc
Constellation Technologies LLC
Original Assignee
Bay Networks Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bay Networks Inc filed Critical Bay Networks Inc
Priority to US08/046,405
Application granted
Publication of US5606664A
Assigned to NORTEL NETWORKS LIMITED reassignment NORTEL NETWORKS LIMITED CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: NORTEL NETWORKS CORPORATION
Assigned to NORTEL NETWORKS GROUP INC. reassignment NORTEL NETWORKS GROUP INC. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: BAY NETWORKS GROUP, INC.
Assigned to SYNOPTICS COMMUNICATIONS, INC. reassignment SYNOPTICS COMMUNICATIONS, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BROWN, BRIAN, SCHMIDT, RONALD V., WANG, CHANG-JUNG, CHOUDHURY, SHABBIR AHMED, LIANG, Chao-yu, FONTAINE, JEAN-LUC
Assigned to BAY NETWORKS GROUP, INC. reassignment BAY NETWORKS GROUP, INC. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: SYNOPTICS COMMUNICATIONS, INC.
Assigned to NORTEL NETWORKS CORPORATION reassignment NORTEL NETWORKS CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: NORTEL NETWORKS GROUP INC.
Assigned to Rockstar Bidco, LP reassignment Rockstar Bidco, LP ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: NORTEL NETWORKS LIMITED
Assigned to ROCKSTAR CONSORTIUM US LP reassignment ROCKSTAR CONSORTIUM US LP ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: Rockstar Bidco, LP
Assigned to CONSTELLATION TECHNOLOGIES LLC reassignment CONSTELLATION TECHNOLOGIES LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ROCKSTAR CONSORTIUM US LP
Anticipated expiration
Status: Expired - Lifetime

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/12 Discovery or management of network topologies
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/3003 Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F11/3006 Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is distributed, e.g. networked systems, clusters, multiprocessor systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/3003 Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F11/3048 Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the topology of the computing system or computing system component explicitly influences the monitoring activity, e.g. serial, hierarchical systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/3055 Monitoring arrangements for monitoring the status of the computing system or of the computing system component, e.g. monitoring if the computing system is on, off, available, not available
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/22 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/24 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using dedicated network management hardware
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/26 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using dedicated tools for LAN [Local Area Network] management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/32 Monitoring with visual or acoustical indication of the functioning of the machine
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/34 Signalling channels for network management communication
    • H04L41/344 Out-of-band transfers
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S715/00 Data processing: presentation processing of document, operator interface processing, and screen saver display processing
    • Y10S715/961 Operator interface with visual structure or function dictated by intended use
    • Y10S715/965 Operator interface with visual structure or function dictated by intended use for process control and configuration
    • Y10S715/969 Network layout and operation interface
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S715/00 Data processing: presentation processing of document, operator interface processing, and screen saver display processing
    • Y10S715/961 Operator interface with visual structure or function dictated by intended use
    • Y10S715/965 Operator interface with visual structure or function dictated by intended use for process control and configuration
    • Y10S715/97 Instrumentation and component modelling, e.g. interactive control panel

Definitions

  • the present invention relates generally to local area networks and more particularly to apparatus and methods for monitoring the status of a local area network by producing a topology map of the network configuration and by producing a control console display image depicting the appearance of selected network hubs.
  • Local area networks for interconnecting data terminal equipment such as computers are well known in the art. Such networks may include a large number of components which may be configured in a variety of ways.
  • the network may include a large number of hubs or concentrators, each of which forms the center of a star configuration.
  • the concentrators may each be capable of servicing a large number of data terminal equipment such as personal computers.
  • the network medium may be shielded twisted pair cable, unshielded twisted pair cable or fiber optic cable or a combination of all three.
  • each type of cabling may be supported by various types of modules located in each of the concentrators.
  • None of the conventional apparatus for monitoring and displaying the status of a network is capable of conveying the actual status of the network in a manner which can be easily comprehended by a user.
  • the disclosed apparatus and method overcome such limitations and allow the actual status of the network to be automatically monitored and displayed.
  • the information displayed depicts the status of the network in great detail, in a manner that can be easily comprehended by individuals with a minimum amount of training, even if the network is relatively complex. Further, the status of the network is automatically updated.
  • the network typically includes a plurality of hubs, such as concentrators, with each hub having data ports for coupling the hub in a star configuration to either data terminal equipment, such as personal computers, or for coupling the hub to another hub of the network.
  • the network is of the type which utilizes network contention control such as the well known Carrier Sense Multiple Access With Collision Detection (CSMA/CD).
  • the apparatus automatically determines the overall topology of the network, with the hubs having at least three data ports each.
  • the apparatus includes a transmit means associated with each of the hubs having both originate and repeat means.
  • the originate means functions to transmit messages over the network which originate at the associated hub and which contain an identifying address of the associated hub.
  • the repeat means functions to transmit messages received by the associated hub over the network which originated from other hubs of the network.
  • Each of the hubs further includes port identifying means for identifying which of the data ports has received one of the messages transmitted by another hub of the network. In this manner, topology data regarding the connection of the various ports of the associated hub to other hubs of the network are obtained.
  • the topology data from a single hub usually does not contain enough information to ascertain the overall network topology.
  • the apparatus further includes control means coupled to the network for receiving the topology data from each of the hubs in the network.
  • the topology data identify a particular one of the data ports of the hub reporting the topology data and the addresses of the other ones of the hubs which originated network messages received by the reporting hub over that particular port.
  • the apparatus includes processing means for determining the overall topology of the network utilizing the received topology data.
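  • One way to picture how the received topology data can be combined is the sketch below. It is an illustration only, not the procedure claimed in the patent: it assumes each hub reports, for every data port, the set of hub addresses heard on that port; it infers a direct link between two hubs when the hubs each one hears through the other are disjoint and together account for the whole network; and it then assigns levels by walking down from a chosen root hub. All function and variable names are invented for the example.

```python
# Minimal sketch (not the patent's exact procedure): infer a hub tree from
# per-port topology reports of the kind described above.  Each report maps a
# reporting hub address to {port: set of hub addresses heard on that port}.

from collections import deque


def infer_topology(reports, root):
    """Return ({hub: (parent, parent_port)}, {hub: level}) for a tree of hubs."""
    hubs = set(reports)

    def port_hearing(hub, other):
        # The single port of `hub` on which messages from `other` arrive.
        for port, heard in reports[hub].items():
            if other in heard:
                return port
        return None

    # Two hubs are directly linked exactly when the sets of hubs each one
    # hears "through" the other are disjoint and together cover the network.
    links = {}  # (a, b) -> port of a facing b
    for a in hubs:
        for b in hubs - {a}:
            pa, pb = port_hearing(a, b), port_hearing(b, a)
            if pa is None or pb is None:
                continue
            side_a, side_b = reports[a][pa], reports[b][pb]
            if not (side_a & side_b) and (side_a | side_b) == hubs:
                links[(a, b)] = pa

    # Walk down from the chosen root to assign display levels.
    parent, level = {root: (None, None)}, {root: 0}
    queue = deque([root])
    while queue:
        a = queue.popleft()
        for (x, b), port in links.items():
            if x == a and b not in level:
                parent[b] = (a, port)
                level[b] = level[a] + 1
                queue.append(b)
    return parent, level


if __name__ == "__main__":
    # Three hubs in a chain A - B - C; addresses and port numbers are made up.
    reports = {
        "A": {1: {"B", "C"}},
        "B": {0: {"A"}, 1: {"C"}},
        "C": {0: {"A", "B"}},
    }
    print(infer_topology(reports, root="A"))
```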
  • the apparatus monitors the status of each of the hubs of a star configured network by producing an image, on a control console display for example, which depicts the appearance of the actual hub.
  • Each hub of the network includes a chassis for receiving a plurality of modules.
  • the modules have at least one port for connecting the data terminal equipment such as a computer to the hub, with the modules being of varying types. For example, some modules may be adapted for use with unshielded twisted pair cables and other modules may be adapted for use with optical cables.
  • the apparatus includes location means for producing location data indicative of the location of each of the modules in the hub chassis.
  • An exemplary location means would include hard-wired slot identification bits located on the chassis which are transferred to any module inserted into the chassis slot associated with the hard-wired bits.
  • Type means are further included for producing type data indicative of the type of each of the modules in the hub.
  • An exemplary type means would include hard-wired bits on the module which indicate the type of module.
  • the apparatus includes display means for producing an image of the hub utilizing the location data and the type data, with the image depicting the location of the modules in the hub and the type of modules.
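  • The sketch below illustrates, under stated assumptions, how such location data and type data could drive the display step: it maps hypothetical hard-wired type codes to the module models named later in this description and prints one line per chassis slot as a stand-in for the front-panel image. The numeric codes and port counts are illustrative only.

```python
# Minimal sketch of the display step described above: given location data
# (which chassis slot each module occupies) and type data (a hard-wired
# module type code), build a textual stand-in for the front-panel image.
# Type-code values and port counts are assumptions for this example.

MODULE_TYPES = {
    0x31: ("3314M-ST", "network management module", 1),
    0x32: ("3314-ST",  "backup network management module", 1),
    0x04: ("3304-ST",  "fiber optic host module", 6),
    0x02: ("3302",     "shielded twisted pair host module", 6),
    0x05: ("3305",     "unshielded twisted pair host module", 12),
}


def render_hub(slot_to_type, total_slots=13):
    """Return one line per chassis slot, mimicking the detailed view."""
    lines = []
    for slot in range(1, total_slots + 1):
        code = slot_to_type.get(slot)
        if code is None:
            lines.append(f"slot {slot:2}: (empty)")
            continue
        model, desc, ports = MODULE_TYPES[code]
        lines.append(f"slot {slot:2}: Model {model} - {desc}, {ports} port(s)")
    return "\n".join(lines)


if __name__ == "__main__":
    # Example: NMM in slot 1, backup NMM in slot 2, two host modules.
    print(render_hub({1: 0x31, 2: 0x32, 3: 0x04, 4: 0x05}))
```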
  • FIG. 1 is a diagram of an exemplary local area network of the type in which the subject invention can be used and which includes three concentrators or hubs and associated data terminal equipment.
  • FIG. 2 is a schematic diagram of a local area network, having twenty-four concentrators, of the type in which the subject invention can be used.
  • FIG. 3 is an exemplary display produced in accordance with the present invention depicting a selected portion of the topology of the network of FIG. 2.
  • FIG. 4 is a schematic diagram of a local area network with the upper level concentrators connected to a common coaxial cable.
  • FIG. 5 is an exemplary display produced in accordance with the present invention depicting a selected portion of the topology of the network of FIG. 4.
  • FIG. 6 is a section of a display menu showing a portion of a main menu bar and an exemplary selected submenu.
  • FIG. 7 is a section of a display showing a detailed view image which depicts the actual appearance of the front panel of a selected network concentrator, including the location of modules in the concentrator and the type of modules.
  • FIGS. 8A-8F are enlarged views of selected portions of the FIG. 7 image showing details of the various types of modules.
  • FIG. 9 is similar to FIG. 7 except that another style of concentrator is depicted.
  • FIG. 10 is a block diagram of one of the network concentrators showing the network management module and host modules all connected to a common concentrator backplane together with various data terminal equipment in the network management control console connected to the concentrator.
  • FIG. 11 is a block diagram showing the network management interface for the host modules for interfacing the modules to the concentrator backplane.
  • FIG. 12 is a block diagram of a further exemplary network showing the interconnection of the concentrators of the network.
  • FIG. 13 is a Network Management Module List showing the various ports of each of the concentrators and the addresses of the other concentrators which transmit messages received over the ports.
  • FIG. 14 is a flow chart depicting the process whereby the link data are obtained from the concentrators to construct the FIG. 13 List.
  • FIG. 15 is a flow chart depicting the process whereby the link data of the FIG. 13 List are processed to form the Ancestor Table of FIG. 16.
  • FIG. 16 is an Ancestor Table constructed from the data contained in the FIG. 13 List.
  • FIG. 17 is a block diagram of a network where the up ports of the highest level concentrators are connected together so that no concentrator will be assigned the Level 0 position of the topology display.
  • FIG. 18 is a simplified display image of the overall topology of a network based upon the data of the FIG. 16 Ancestor Table.
  • FIG. 19 is a functional block diagram of the network management module located in each of the network concentrators.
  • FIG. 20 is a functional block diagram of the control console adapter, the adapter being an expansion card used to convert a personal computer to a network management control console.
  • FIGS. 21A-21C are flow charts depicting the process for producing the detailed view of the concentrators such as depicted in FIGS. 7 and 9.
  • FIG. 1 is a diagram showing the physical connection of a typical simplified local area network.
  • the depicted network functions to interconnect six personal computers or PCs 20a-f.
  • the network includes three concentrators 22a, 22b and 22c.
  • the concentrators function as hubs in the star network topology and provide basic Ethernet functions.
  • Many of the details which will be provided regarding the network are exemplary only, it being understood that the present invention may be utilized in connection with a wide variety of communication networks.
  • Each of the concentrators includes several plug-in modules 26 which connect to a backplane (not depicted) of each concentrator 22.
  • The modules include host modules which have ports for connecting the associated module to data terminal equipment (DTE).
  • concentrator 22b includes a host module 26c having a port (not designated) connected to personal computer 20a by way of an interface device 24a.
  • Device 24a is a transceiver (transmitter/receiver) used to link the computer 20a (DTE) or node to the network cable.
  • Module 26c will typically have several other ports (not depicted) for connecting to other DTEs.
  • One of the DTEs such as personal computer 20d, is designated as the network management control console (NMCC).
  • the designated computer 20d is provided with a control console adapter (CCA), which is an expansion board which adapts the computer for use as a control console.
  • a user can perform various network monitoring and control functions at the NMCC.
  • a pointer device such as a mouse 23 having primary and secondary control buttons 23a and 23b, respectively, is used for carrying out these functions.
  • Each concentrator 22a, 22b and 22c is provided with a network management module (NMM).
  • the network management module NMM gathers data received on a port of a host module and transmits the data to other modules in the concentrator. Further, the network management module NMM will forward the received data to other concentrators in the network that may be connected to the concentrator.
  • the foregoing can be further illustrated by way of example.
  • the personal computer 20e has a message for computer 20b.
  • Each computer or node in the network has an associated address. Messages directed to a particular computer will be decoded by a conventional network controller card installed in the computer and, if the destination address in the message matches the computer address, the message will be processed by the computer.
  • the message originating from computer 20e will include a destination address of computer 20b.
  • the message will be transmitted to the associated transceiver 24e and will be received by a port (not designated) on host module 26g of concentrator 22c. Module 26g will transmit the received message to the network management module NMM in concentrator 22c by way of the concentrator backplane (not depicted).
  • the NMM will transmit the received message to each host module in concentrator 22c, including modules 26f, 26g and 26e.
  • the message will exit each port of each module and will be received by transceivers 24d and 24f (but not 24e, which received the message originating from computer 20e).
  • because the destination address does not match their addresses, the network controller cards installed in computers 20d and 20f will not process the message.
  • the NMM in concentrator 22c will also forward the received message to concentrator 22a by way of transceiver 24g.
  • the message will be received by a port in module 26d and by the NMM in concentrator 22a.
  • the NMM of concentrator 22a will transmit the message to the NMM of concentrator 22b.
  • the NMM of concentrator 22b will transmit the message to each module in concentrator 22b so that the message will be received by transceivers 24a, 24b and 24c. Since transceiver 24b has an address which matches the destination address, transceiver 24b will forward the message to the associated computer 20b.
  • the other two transceivers 24a and 24c connected to concentrator 22b will refrain from forwarding the message.
  • Because each message received by a concentrator is retransmitted, only a single message can be transmitted over the network at one time. In the event two computers attempt to transmit at the same time, the messages will interfere and cause a collision in the network. As is well known, when a collision on the network occurs, the concentrator connected to the two cables on which the collision occurred will detect the presence of the collision.
  • a "jam" signal When a collision is detected by an NMM, a "jam" signal will be transmitted by the NMM of the concentrator over the network.
  • the computers involved in the collision will detect the presence of the collided signals and will resort to statistical contention for the network. Other computers not involved in the collision will sense the carrier signal and refrain from transmitting on the network.
  • a computer wishing to transmit first listens for message traffic on the network and transmits only if there is no traffic and only in the absence of any other carrier signal.
  • This well known method of providing access to a common local area network medium is referred to as Carrier Sense Multiple Access with Collision Detection or CSMA/CD.
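  • The following sketch is a conceptual illustration of that access rule (listen before transmitting, jam and back off on collision); the channel callbacks and timing are stand-ins, and real Ethernet defines the slot times and backoff limits precisely.

```python
# Conceptual sketch of the CSMA/CD access rule summarized above; the channel
# is abstracted into two callbacks and the backoff delay is only symbolic.

import random


def csma_cd_send(channel_busy, transmit, max_attempts=16):
    """Send one frame using listen-before-transmit with collision backoff.

    `channel_busy()` reports carrier sense; `transmit()` returns True on
    success and False if a collision was detected during transmission.
    """
    for attempt in range(max_attempts):
        while channel_busy():            # defer while another station talks
            pass
        if transmit():
            return True                  # frame went out without a collision
        # Collision: the concentrator propagates a jam signal; back off a
        # random number of slot times (truncated binary exponential backoff).
        slots = random.randrange(2 ** min(attempt + 1, 10))
        for _ in range(slots):
            pass                         # stand-in for waiting slot times
    return False


if __name__ == "__main__":
    # A toy channel: never busy, collides on the first attempt only.
    outcomes = iter([False, True])
    print(csma_cd_send(channel_busy=lambda: False,
                       transmit=lambda: next(outcomes)))   # True
```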
  • the network depicted in FIG. 1 is relatively small and includes two levels of concentrators.
  • Concentrator 22a is at the top level (level "0") and concentrators 22b and 22c are at the next from top level (level "1"). It would be possible to connect several additional DTEs, including computers, workstations, servers, and the like to the concentrators 22a, 22b and 22c. Further, additional concentrators could be connected to the existing concentrators.
  • FIG. 2 shows a much larger network consisting of twenty-four concentrators. None of the port connections to the individual host modules in the concentrators are shown.
  • Concentrator 28 is the top level, or level "0" concentrator. Concentrator 28 is connected to the next from top level, or level “1" concentrators 30a-30f. The six level 1 concentrators are connected to a total of seventeen level 2 concentrators 32a-32g. Note that a lower level concentrator can be connected to a higher level concentrator by way of a connection to the network management module NMM of the lower level concentrator. It is also possible to connect the higher level concentrator to a host module port in the lower level concentrator.
  • the network management module NMM performs monitoring and controlling functions within the concentrator in which it is located. In addition, the NMM sends status and diagnostic reports to the network management control console NMCC. Further, the NMM executes commands issued by the control console.
  • the network management control console NMCC is a designated computer of the network which includes a control console adapter CCA in the form of an expansion board which is installed in the computer.
  • the designated computer uses a graphical user interface, such as a commercially-available software package called Microsoft Windows sold by Microsoft Corporation of Redmond, Wash. Other commercially available software packages which provide a window environment similar to Microsoft Windows could be used for the present application.
  • a principal function of the network management control console NMCC is to monitor the status of the network topology.
  • An important feature of the present invention is the ability to automatically acquire information regarding the topology of the network so that a display of the topology can be generated and automatically updated to reflect changes in the network.
  • FIG. 3 is an image, generally designated by the numeral 36, which will be produced on the NMCC video display terminal showing the topology of the exemplary network depicted in FIG. 2.
  • the display is a menu-driven graphics display which uses a pointing device such as a mouse, light pen or the like.
  • the rectangular boxes depicted in the display are concentrator icons 34a-34e which represent the concentrators in the network.
  • the screen is only capable of displaying a relatively limited number of concentrator icons at a time. Accordingly, it is necessary to scroll the display to depict all twenty four of the concentrators, as will be explained.
  • Concentrator icon 34a corresponds to concentrator 28 in FIG. 2.
  • Icons 34b-34g represent concentrators 30a-30f of FIG. 2.
  • the icons representing the remaining concentrators 32a-32g can be viewed only by scrolling the display both horizontally and vertically.
  • Display 36 is split between a "Level 0" and a "Level 1".
  • Concentrator icon 34a is shown in the upper “level 0", with the remainder of the icons located in the lower half or “level 1” portion of the screen.
  • the display can be scrolled vertically by placing the cursor icon or mouse pointer 38 over one of the triangle-shaped elements 40 and "clicking" by actuating the mouse control button.
  • the display will replace the "Level 0" icon at the top with the "Level 1" icons and replace the "Level 1" icons at the bottom with "Level 2" icons. Since there are a total of seventeen "Level 2" concentrators 32a-32g (FIG. 2), it will be necessary to scroll the display horizontally to view all of these concentrators. Scrolling to the left is accomplished by "clicking" the left arrow symbol 40a using the mouse and scrolling to the right is accomplished by "clicking" the right arrow symbol 40b using the mouse.
  • Concentrator icon 34a displays various information regarding the status of concentrator 28 (FIG. 2).
  • the chart recorder image displays the amount of message traffic received by the concentrator over time.
  • the designation "000081000002" represents the concentrator identification or address.
  • the designation "Normal” indicates the overall status of the concentrator. The designation will change to indicate a fault or warning condition.
  • the designation "2 Levels Below” on icon 34a indicates that two levels of concentrators are located below the "Level 0" of concentrator 28, namely “Level 1" and “Level 2".
  • the designation "3000” indicates the type of concentrator.
  • Other concentrators, such as the concentrator 34e are Model “1000” type concentrators, which have fewer capabilities than do Model “3000” concentrators.
  • the designation "23 Concentrators Below” indicates that there are twenty three concentrators connected either directly to concentrator 28 or indirectly to concentrator 28 through other concentrators.
  • Each concentrator icon in the lower part of the display 36, in this case the "Level 1" part of the display, has a vertical bar referred to as a linkage bar.
  • the linkage bar such as bar 42 above icon 34b indicates the connection between the concentrator and the parent of the concentrator located in the next higher level.
  • linkage bar 42 depicts the connection between the Level 1 Model 3000 concentrator 30a (FIG. 2) and the Level 0 Model 3000 concentrator 28.
  • the upper tag "2-1" indicates the slot number in which the module is located in the concentrator and the port number on that module.
  • concentrator 28 (FIG. 2) is connected to concentrator 30a by way of port number 1 of a host module which is located in slot 2 of concentrator 28.
  • the lower tag "2" of linkage bar 42 indicates the slot number of the Model 3000 module which provides the connection.
  • Linkage indicator 44 shows the manner in which a Model 1000 concentrator 30d is connected to concentrator 28.
  • Model 1000 concentrators are interconnected by way of the concentrator backplane, abbreviated “BkPl” and by way of an up-port, abbreviated “UpPt”. Accordingly, indicator 44 shows that concentrator 30d (FIG. 2) is connected to concentrator 28 by way of a cable connected between the up-port of concentrator 30d and port 4 of a module located in slot 2 of concentrator 28.
  • the network branches out from one central unit in an inverted tree hierarchy.
  • the upper level usually displays a concentrator icon.
  • the topology may have two or more concentrators linked in parallel at the top level of the network. In that case, there will be no concentrator icon in the Level 0 position.
  • FIG. 4 shows a network topology with seven concentrators 46 connected together at the top level in parallel.
  • the common connection may be, for example, a coaxial cable 48 which forms a "backbone" of the network.
  • no single concentrator occupies the Level 0 position of the display, as can be seen in the display 50 of FIG. 5.
  • Display 50 reflects this topology by leaving Level 0 empty and places the concentrators linked in parallel in Level 1. Note that the linkage bars carry a top tag with the description "??-??" to reflect the fact that there are no concentrators located above the Level 1 concentrators.
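  • The sketch below shows, purely as an illustration, how the linkage-bar tags described above could be composed from a link record: the upper tag is the parent's slot and port (for example "2-1"), the lower tag is the child-side connection, and "??-??" is used when no concentrator sits above a Level 1 concentrator. The record layout is an assumption, not the patent's data format.

```python
# Small sketch of composing the linkage-bar tags described above.

def linkage_tags(link):
    """Return (upper_tag, lower_tag) for one parent-to-child link record."""
    upper = (f"{link['parent_slot']}-{link['parent_port']}"
             if link.get("parent_slot") is not None else "??-??")
    lower = link.get("child_connection", "?")
    return upper, lower


if __name__ == "__main__":
    # Model 3000 child connected through its slot-2 module.
    print(linkage_tags({"parent_slot": 2, "parent_port": 1,
                        "child_connection": "2"}))
    # Model 1000 child connected through its up-port.
    print(linkage_tags({"parent_slot": 2, "parent_port": 4,
                        "child_connection": "UpPt"}))
    # Parallel top-level concentrators: nothing above Level 1.
    print(linkage_tags({"parent_slot": None, "parent_port": None,
                        "child_connection": "BkPl"}))
```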
  • One of the network concentrators is connected to the personal computer which functions as the Network Management Control Console NMCC. As shown in FIGS. 3 and 5, the concentrator which is connected to the NMCC is designated on the display screen with a small icon 51 which resembles a personal computer.
  • the interface is a menu-driven display running in a Microsoft Windows environment.
  • Menu selections are made using a selection technique which is standard for Microsoft Windows pull-down menus.
  • the mouse cursor 38 is placed over the name of the desired selection on the menu bar.
  • the menu bar 52 is located at the top of the display and includes selections or functions "FAULT", "CONFIGURATION", "PERFORMANCE", "SECURITY", and "LOG".
  • the mouse cursor 38 is positioned over the desired selection or function on the menu bar 52 and the primary mouse button is actuated.
  • This action causes a submenu to be displayed.
  • the submenu 53 shown in FIG. 6 is displayed immediately below the main menu selection.
  • a particular subfunction is selected by dragging the mouse with the primary button still actuated. This action causes sequential subfunctions in submenu 53 to be highlighted.
  • the mouse button is released thereby selecting the subfunction.
  • a check mark (not depicted) will appear in the display next to the selected subfunction and will remain there until another function/subfunction is chosen.
  • the next step is to select a target object for the previously-selected function or subfunction. This is accomplished by positioning the mouse cursor over the target object on the screen. For example, if it is desired to monitor message traffic for a particular concentrator in the network, the DIAGNOSTIC subfunction depicted in FIG. 6 is selected.
  • the mouse cursor is positioned over the target object, such as the concentrator icon 34b in FIG. 3.
  • the cursor is positioned within the identifier button 43 of icon 34c which contains the concentrator identifier "000081001001".
  • the primary mouse button is then actuated, thereby causing a pop-up window to appear on a portion of the display depicting a detailed view of the front panel of the selected concentrator.
  • FIG. 7 is an exemplary pop-up detailed view window 56 which is displayed when a concentrator is selected.
  • Window 56 occupies a relatively small portion of the display screen, and the position of the window on the display can be changed as desired.
  • the image which appears in window 56 represents the physical appearance of the front panel of the actual concentrator represented by icon 43.
  • the concentrator image 56 shows that the concentrator includes thirteen plug-in modules, the front panels of which are represented by image sections 60a-60m.
  • the modules depicted are exemplary only and it is possible to interchange modules and delete modules as required by the local area network.
  • Each concentrator must include at least one Network Management Module NMM. If one or more modules are deleted, one or more empty concentrator slots will be depicted.
  • Concentrator image section 60a depicts the front panel of a primary Network Management Module NMM of the concentrator which is shown to be located in the left most position in the actual concentrator. This position is referred to as slot 1 of the concentrator.
  • Image section 60b depicts the front panel of another type of Network Management Module NMM which functions as a backup in the event the primary NMM fails and is located in slot 2.
  • Image sections 60c-60l depict internetworking and host module front panels located in slots 3 through 12, respectively.
  • image section 60m depicts the front panel of the power supply for the concentrator.
  • FIG. 8A is an enlarged view of the Network Management Module image section 60a.
  • the image includes a depiction of the front panel mounting screws 62a. Also depicted is the inserter/extractor bar 62b and the model designation "3314M-ST" at location 62c.
  • the model designation represents the Model 3314M-ST Network Management Module.
  • the model designation indicates that the NMM, and hence the concentrator in which the NMM is installed, is a Model 3000 rather than a Model 1000.
  • Image section 60a also depicts three light emitting diodes (LEDs) labeled “STA”, “PAR” and “NMC” at location 62d.
  • LED “STA” represents a green LED which is illuminated when the associated module is functioning properly. Should the module lose power or experience another type of monitored function failure, the LED will be turned off. The image of the "STA” will be a rectangle filled with green to indicate that the actual LED is illuminated and will be filled with black when the LED is off.
  • the LED labeled "PAR" is yellow and is illuminated (shown in yellow) when the NMM has been disconnected or partitioned from the concentrator backplane. If the module is partitioned, the backup NMM depicted in image section 60b will function as the primary NMM. In the event there is no backup NMM, a partition of the NMM will cause the entire concentrator to be removed from the network.
  • Image section 60a further depicts a pair of fiber optic connectors at location 62e which are standard ST-type bayonet connectors.
  • the top connector is for receiving data and the bottom connector is for transmitting data.
  • the two connectors function together as a port for interconnecting concentrators.
  • the concentrators can also be interconnected by way of host module ports if the concentrator is a Model 3000 type concentrator.
  • Two LEDs are depicted at location 62f, with a yellow LED labeled "P" being illuminated when the port has been partitioned or disconnected.
  • the LED labeled "L” is a green link status indicator which is illuminated when the receiving terminal of the port is connected to a transmitting device in another module. The LED "L” will not be illuminated (will be shown in black) if the transmit and receive optical cables are reversed.
  • Section 62g includes a depiction of seven LEDs, two of which are labeled "ONL" and "P/S".
  • the LED labeled "P/S" indicates whether the NMM is acting as the primary or secondary NMM.
  • the "P/S" LED is green and is illuminated when the NMM is in the primary mode.
  • the LED labeled "ONL” is green and indicates when the NMM is on line.
  • the “ONL” LED image flashes when the NMM has not received software downloaded from the CCA and is illuminated steadily when the software download has succeeded.
  • the top three LEDs relate to network traffic. In the actual module, the top LED, which is yellow, is illuminated for 250 ms when a collision is detected in the concentrator.
  • the second from top LED which is green, is illuminated while data are present in the concentrator.
  • the third from top LED is green and is illuminated for 250 ms after each data transmission.
  • the fourth from the top LED is a green microprocessor fault indicator which shows the status of the microprocessor in the NMM.
  • a nine pin male type DB-9 connector is depicted at location 62g of section 60a which functions as a service port.
  • Location 62j includes a circular element which represents a microprocessor reset button on the actual module.
  • Location 62k includes a rectangular element which represents a switch which allows termination of the connector at image section 62i.
  • locations 62h and 62i represent type RJ45 female connectors for connection to unshielded twisted pair (UTP) cable.
  • the connector at location 62h is for a serial port for out-of-band communication between concentrators (NMMs) and the connector at location 62i is for connection to an internal modem for out-of-band communication.
  • Out-of-band communication is communication separate from the primary network communication paths and may use, for example, a telephone line.
  • FIG. 8B is an enlarged view of image section 60b of FIG. 7 which depicts the backup or secondary Network Management Module.
  • the image is substantially identical to image section 60a with a few minor exceptions.
  • the designation "3314-ST" appears which represents the Model 3314-ST Network Management Module.
  • the Model 3314-ST includes an additional connector, the image of which is depicted at area 64b.
  • the connector is a DB-25 type connector and functions as a standard RS-232 serial port for out-of-band connection to a telephone network.
  • the telephone network can be used to communicate out-of-band with the NMM in lieu of the standard network (CSMA/CD) communication path.
  • the image section 60c of FIG. 8C depicts an internetworking module.
  • the designation "3323" of area 66a of the image indicates that the module is Model 3323.
  • the module is a local bridge which functions to interconnect two local area networks of the same type. The local bridge will only pass traffic that originates in one segment of the network and is intended for the other network segment.
  • Front panel image 60c includes a region 66c which depicts a fifteen pin D type female connector for connecting the module to an Attachment Unit Interface (AUI) device.
  • An AUI is a standard logical, electrical and mechanical interface for connecting Data Terminal Equipment (DTE) such as a personal computer, server and the like to the network.
  • Region 66d includes ten LEDs which provide static and dynamic status conditions of the internetworking module.
  • the image section 60f of the FIG. 7 image is shown enlarged in FIG. 8D depicting the front panel of a host module.
  • the upper region includes the designation "3304-ST” which indicates that the module is a Model 3304-ST host module.
  • the image depicts a total of six ports for connection to up to six DTEs. As shown in region 68b, each port is represented by the image of two ST-type bayonet optical fiber connectors, with the top connector for receiving data and the bottom connector for transmitting data.
  • the designation "P" and “L” and the image of two LEDs in region 68c correspond to the image and designation in regions 62e and 62f of the image section 60a of FIG. 8A.
  • FIG. 8E is an enlargement of the image section 60d of FIG. 7.
  • the designation "3302" at region 70a of the image indicates that the depicted module is a Model 3302 host module.
  • a total of six ports are depicted for connection to up to six DTEs.
  • each port is represented by an image of a nine pin type DB-9 connector for connecting to a shielded twisted pair (STP) cable.
  • Region 70b includes the standard LEDs "STA”, “PAR” and “NMI”.
  • the region includes six pairs of LEDs, with one pair associated with one of the six ports.
  • the LEDs labeled "P" are yellow and indicate, when illuminated (shown in yellow), that the associated port has been partitioned or disconnected from the network.
  • the LEDs labeled "L" are green and show the link status. If the port is connected to a compatible transceiver or network interface card, the LED is illuminated (shown in green). If the port is not connected, the LED turns off (shown in black) and an autopartition takes place, automatically partitioning the port so that the associated LED "P" will turn on.
  • An enlargement of image section 60g of FIG. 7 is shown in FIG. 8F.
  • the designation "3305" at region 72a of the image section indicates that the module is a Model 3305 host module.
  • the host module has twelve ports depicted by twelve images of a connector.
  • the image of the connector associated with the first port is at region 72c.
  • the image is of a standard RJ-45 modular female connector for connection to unshielded twisted pair (UTP) cable.
  • the function of the twelve LED pairs labeled "P" and "L” is the same as the LED pairs bearing a similar label in region 70b of FIG. 8E.
  • the image 60m of the front panel of the concentrator power supply also includes an image of a status LED which is illuminated (shown in green) when the power supply is producing the specified voltages.
  • There are three categories of concentrator objects which can be selected: the overall concentrator, a particular module in the concentrator, or a particular port of a particular module.
  • the overall concentrator object is selected by placing the mouse cursor 38 over the image 60a of the primary network management module NMM of the concentrator of the detailed view concentrator image 56 of FIG. 7 and actuating the primary mouse button. Assuming that the subfunction DIAGNOSTIC had been previously selected, diagnostic information regarding the entire concentrator will be displayed. This includes, for example, specific data packet errors, such as alignment errors, in the concentrator. Data packets are a form of data structure for Ethernet communication and alignment errors are errors which occur when a received frame does not contain an integer number of bytes. A frame is a packaging structure for Ethernet data and control information.
  • the mouse cursor 38 is positioned over the top portion of the desired module.
  • selecting the primary network management module is equivalent to selecting the entire concentrator, as previously noted.
  • a window 74 will enclose the image of the model number and the status LEDs for the module if the mouse cursor is positioned in that image area and the secondary button actuated. If the primary mouse button is then actuated, the module is selected as an object.
  • diagnostic information regarding the module represented by image section 60d will be displayed. Such information may include, for example, packet alignment errors received by all ports on the module.
  • the mouse cursor is positioned over the image of the port.
  • a window will then appear, such as window 75 (FIG. 7) associated with port number 6 of the module depicted by image section 60f.
  • a window 76 will appear as shown. If the user then actuates the primary mouse button, diagnostic information regarding the selected port will be displayed. Such information includes, for example, packet alignment errors received by the port.
  • FIG. 9 shows an image 78 of a different style of concentrator, referred to as the Model 3030, which can accommodate up to four plug-in modules.
  • image 78 depicts the physical appearance of the front panel of the actual concentrator.
  • Image section 80c shows the power supply section of the concentrator, with two status LEDs with the associated designation "Power” and "FAN” being depicted.
  • the "Power” LED is shown green when the D.C. power is at the proper voltages.
  • the “Fan” LED is shown yellow in the event the fan speed falls below a minimum rate of rotation.
  • Image section 80b represents a Model 3314M-ST Network Management Module which is the same module depicted in image section 60a in FIG. 7.
  • Image sections 80c, 80d and 80e of FIG. 9 depict modules Models 3314-ST, 3304-ST and 3305 which are the same modules depicted in image sections 60b, 60f and 60g, respectively, of FIG. 7.
  • the user can select the entire concentrator, a particular module (slot) and a particular port using a mouse in the same manner previously described in connection with the image shown in FIG. 7.
  • each concentrator includes a chassis which receives several plug-in modules which are inserted in adjacent concentrator slots. Each module is provided with a rear connector which engages a common concentrator backplane 82 which is located along the entire rear portion of the concentrator.
  • Backplane 82 includes a set of electrical connections which form a CSMA/CD bus 84.
  • the backplane further includes several electrical connections which form a control bus 86.
  • the CSMA/CD bus is similar to an Ethernet network coaxial cable which carries conventional 10 Megabit Manchester encoded digital signals of the type distributed throughout the entire network.
  • the control bus 86 of the concentrator backplane enables the concentrator Network Management Module 88 to communicate with the other modules in the concentrator, including host modules 90a and 90b (only two host modules are shown).
  • the host modules 90a and 90b each have one or more ports which can be connected to Data Terminal Equipment DTE 91 as shown in FIG. 10.
  • the CSMA/CD signal is received by a port and is transferred by the host module associated with the port to the CSMA/CD bus 84 of backplane 82.
  • NMM 88 receives the CSMA/CD signal by way of data steering logic (DSL) represented by block 90.
  • the logic is capable of receiving data from CSMA/CD bus 84 and transmitting data onto the bus.
  • the other modules in the concentrator do not receive the CSMA/CD data at this time.
  • the received CSMA/CD data are transferred to a repeater and retiming unit (RRU) represented by block 92.
  • the RRU retransmits the CSMA/CD data. In doing so it is necessary to retime the data to account for the distortion inherent in the transmission link.
  • the RRU 92 transfers the retimed CSMA/CD data back to the data steering logic DSL 90 which places the CSMA/CD signal back on bus 84.
  • the other modules in the concentrator are configured to receive the repeated CSMA/CD data.
  • the repeated CSMA/CD data are also transferred to the network by way of media dependent adapter MDA represented by block 94.
  • the output of the MDA on line 102 will connect the CSMA/CD signal to the appropriate transmission media, such as fiber optic cable, unshielded twisted pair (UTP) or shielded twisted pair (STP).
  • Line 102 can be connected to another concentrator of the network so that the entire network will receive the CSMA/CD data.
  • Other concentrators can also be connected to the subject concentrator by way of the host module ports.
  • Each module in the concentrator is connected to the control bus 86 of the backplane by way of a Network Module Interface (NMI), including NMI 106 in the Network Management Module (NMM) 88 and NMIs 107 in the various host modules 90a-90b.
  • the NMM 88 which includes a central processor unit and associated memory, as will be described, monitors and controls the other modules in the concentrator by way of control bus 86.
  • the other modules in the concentrator are not required to contain any form of processor, with the "intelligence" of the concentrator management function residing almost exclusively in the NMM 88.
  • a Network Management Control Console NMCC 93 is connected to port 2 of host module 90b.
  • the NMCC is typically a personal computer having a graphic user interface 93b and a control console adapter CCA 93c in the form of an expansion card located in the computer.
  • the NMM can control the various modules either in response to commands received by the NMM over the network which originate from the Network Management Control Console NMCC 93, or in response to commands originating in the NMM itself.
  • the user at the NMCC can command the NMM in a particular concentrator to reset the entire concentrator or reset a particular module or port.
  • the RESET subfunction depicted in FIG. 6 is first selected by the user.
  • the expanded view image of the desired concentrator is displayed as depicted in FIG. 7 or FIG. 9.
  • the user can then select the desired object including the entire concentrator, a particular module or a particular port using the mouse.
  • the previously-described LEDs on the concentrator modules will indicate in some predetermined manner whether the reset command was successful.
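  • A minimal sketch of how such a reset request might be represented on its way from the console to the NMM is shown below; the command encoding is entirely illustrative and not taken from the patent.

```python
# Sketch of the reset command path described above: the console names a
# concentrator and, optionally, a slot and port; the concentrator's NMM turns
# that into a control-bus command for the addressed module or port.

def build_reset_command(concentrator, slot=None, port=None):
    """Build an illustrative reset request for a concentrator, module or port."""
    if slot is None:
        return {"target": concentrator, "scope": "concentrator", "op": "reset"}
    if port is None:
        return {"target": concentrator, "scope": "module", "slot": slot, "op": "reset"}
    return {"target": concentrator, "scope": "port", "slot": slot,
            "port": port, "op": "reset"}


if __name__ == "__main__":
    print(build_reset_command("000081000002"))                   # whole concentrator
    print(build_reset_command("000081000002", slot=4))           # one module
    print(build_reset_command("000081000002", slot=4, port=6))   # one port
```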
  • the user can also use the NMCC to initiate a loopback test wherein a test packet is transmitted over the network to a selected concentrator and the concentrator is instructed to transmit the test packet back to the NMCC.
  • the user first selects the "LOOP BACK" subfunction shown in FIG. 6. Next, the user selects the concentrator to be tested.
  • the NMCC verifies communication between itself and the selected concentrator by transmitting a data packet to the concentrator which is echoed back by the concentrator to the NMCC.
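  • The echo check can be pictured with the small sketch below; the send-and-receive callback is a stand-in for the actual CSMA/CD path through the selected concentrator's NMM.

```python
# Minimal sketch of the loopback verification described above: the console
# sends a test payload to a selected concentrator and checks that the same
# payload comes back.

import os


def loopback_test(send_and_receive, concentrator_address, payload_size=64):
    """Return True if the concentrator echoes the test payload unchanged."""
    payload = os.urandom(payload_size)
    echoed = send_and_receive(concentrator_address, payload)
    return echoed == payload


if __name__ == "__main__":
    # A perfect echo path for illustration.
    print(loopback_test(lambda addr, data: data, "000081001001"))
    # A path that corrupts every byte of the payload.
    print(loopback_test(lambda addr, data: bytes(b ^ 0xFF for b in data),
                        "000081001001"))
```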
  • the NMCC 93 can also monitor the status of a particular concentrator, module or port.
  • the user first selects the STATUS subfunction as depicted in FIG. 6. Next, the user selects the object using the expanded view of the concentrator as shown, for example, in FIG. 7 or 9.
  • a status report regarding the selected object is then displayed in a pop-up window. If the selected object is a concentrator, the report will indicate, among other things, whether the retiming unit, such as RRU 92 in FIG. 10, is functioning properly. If the selected object is a module, the status report will indicate whether the module power supply is functioning properly, whether the module has been enabled or disabled, and whether the Network Monitor Interface NMI, such as NMI 107 in FIG. 10, is functioning properly. If a port has been selected, the status report will indicate whether the port is active or has been partitioned. Other information can be included in the status reports, if desired.
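  • The sketch below casts the status reports just described as simple data structures; the field names are assumptions chosen to mirror the items listed in the text, not the patent's report format.

```python
# Sketch of the status reports described above, as simple data structures.

from dataclasses import dataclass


@dataclass
class ConcentratorStatus:
    retiming_unit_ok: bool


@dataclass
class ModuleStatus:
    power_supply_ok: bool
    enabled: bool
    nmi_ok: bool


@dataclass
class PortStatus:
    active: bool          # False means the port has been partitioned


def format_report(obj):
    """Render a pop-up style status report line for any selected object."""
    fields = ", ".join(f"{k}={v}" for k, v in vars(obj).items())
    return f"{type(obj).__name__}: {fields}"


if __name__ == "__main__":
    print(format_report(ModuleStatus(power_supply_ok=True, enabled=True, nmi_ok=True)))
    print(format_report(PortStatus(active=False)))
```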
  • A functional block diagram of the Network Management Interface NMI 107 for interfacing the host modules to the control bus 86 of the concentrator backplane 82 is shown in FIG. 11.
  • the NMI 107 includes a state machine for transferring network management interface data between the host module and the control bus 86.
  • the NMM issues a command by way of the NMM Network Management Interface NMI 106 to one of the host modules over the control bus and the recipient host module transmits a response back to the NMM over the control bus.
  • Block 92 represents a sequential state machine which transfers eight bits of interface data between the host module and the control bus.
  • State machine 92 is preferably a gate array although other types of circuitry could be used.
  • a bidirectional buffer 94 is connected to eight lines of the control bus 86 through eight pins (not depicted) on the rear electrical connector of the host module.
  • the eight bidirectional lines 96 each carry one of the eight bits of Network Management Interface Data (NMIDAT).
  • Lines 96 are connected to the input/output of the bidirectional buffer 94.
  • Another set of buffer input/output lines are connected to the state machine by way of eight bidirectional data lines 98. Data are transferred through buffer 94 either from the control bus 86 to the state machine or from the state machine to the control bus, depending on the status of the direction line 100 controlled by the state machine.
  • Timing and control signals are produced by the Network Management Module, NMM, on three lines, collectively designated by numeral 102.
  • Timing and control signals DENL, RD/WRL and DAT/CMDL are used for transferring the interface data on lines 96 between the NMM and the host modules over the control bus 86.
  • Each slot of the concentrator which is capable of receiving a plug-in module is assigned a unique slot identification number.
  • the four bits of slot identification (Slot ID) on lines 104 are produced by hard wiring (strapping) selected ones of four connector pins of the slot to either high or low logic levels.
  • the four slot identification bits are transferred to the state machine 92 on four lines designated by the numeral 104.
  • the state machine 92 uses the slot identification bits to decode commands from the NMM transmitted over the control bus which are directed to the particular module which is inserted in the identified slot.
  • A programmable array logic device (PAL) 106 also receives signals from each of the twelve ports of the module over twelve separate lines designated by the numeral 108. If a particular port is active (receiving data over the network), the appropriate one of the twelve lines will go to a logic high level. If the particular module in the slot has fewer than twelve ports, not all of the lines will be used.
  • PAL 106 includes circuitry for encoding the signals from the port and producing a four bit port identification code on four lines designated by the numeral 112. The code uniquely identifies a particular port receiving data over the network.
  • When one of the port activity lines 108 is active, the PAL will produce the appropriate port identification code on lines 112 and will also transmit the slot identification code IDOUT on four lines 110. As will be described later, this information is received by the NMM and eventually transferred to the Network Management Control Console NMCC and is used to automatically generate the topology of the network.
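  • A small sketch of the identification data just described is given below: four strapped pins yield the slot identification number, and the twelve port-activity lines are reduced to a four-bit port identification code. The particular encoding (lowest-numbered active line wins, ports numbered from 1, pins ordered most-significant first) is an assumption for the example.

```python
# Sketch of the slot and port identification just described.

def slot_id_from_pins(pins):
    """pins: four booleans strapped high/low; pins[0] is the most significant bit."""
    value = 0
    for bit in pins:
        value = (value << 1) | int(bit)
    return value


def encode_active_port(activity_lines):
    """Return a 4-bit port ID for the lowest-numbered active line, or 0 if none."""
    for index, active in enumerate(activity_lines, start=1):
        if active:
            return index & 0xF
    return 0


if __name__ == "__main__":
    print(slot_id_from_pins([False, False, True, False]))    # slot 2
    lines = [False] * 12
    lines[5] = True                                           # port 6 active
    print(encode_active_port(lines))
```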
  • State machine 92 receives eight bits of module identification code on lines 134 and four bits of module revision code on line 132.
  • the identification code and revision code identify the module model number and revision number.
  • the codes are produced in the module by hard wiring the appropriate lines in the module itself to either a logic low or logic high signal.
  • the model and revision identification codes are transferred to the NMM, when requested by the NMM, over data lines 96. This information is eventually forwarded by the NMM to the Network Management Control Console and is used to produce the expanded view images, such as depicted in FIGS. 7 and 9.
  • the Network Management Control Console NMCC can command the concentrator, modules and ports to provide certain status information.
  • the NMM in the appropriate concentrator receives the commands over the network and issues commands to particular modules over the control bus 86 in response to the NMCC command or in response to commands originating in the NMM itself.
  • One command requests the status of a particular module, with the command containing the slot identification number or address of the module.
  • the state machine of the Network Management Interface 107 in the host module receives and decodes the command and provides the requested status information.
  • This information includes the module identification and module revision (module type information) originating on lines 132 and 134.
  • Lines 130 carry four status bits which can also be provided to the NMM.
  • One of the four status bits is the state of the status LED located at the top of the front panel of the majority of the plug in modules, as shown in region 62d of FIG. 8A.
  • the green status LED is illuminated (depicted in green) if the module is powered and if other basic module functions are proper.
  • the NMM can also issue commands in response to the Network Management Control Console to reset the module as previously described. If the state machine detects a reset command on the control bus 86 directed to the module, the state machine will produce a reset signal on line 128.
  • a module disable command can be issued by the NMM to disconnect or partition the entire module from the network. As previously described, this command can be issued by the NMM in response to a command originating from the Network Management Control Console. When this command is detected by the state machine on the control bus 86, a disable signal is produced on line 126 and the module is partitioned. As will be described later, the other commands can be used to disable or partition individual ports.
  • the NMM can also issue a watchdog activity pulse command.
  • a watch dog signal is produced on line 124.
  • the signal is typically used to reset a count-down timer on the module.
  • the NMM is programmed to periodically issue the watchdog command at a sufficient frequency so as to prevent the counter from counting down completely. If the counter does count down completely, this is typically an indication of a fault condition. If this condition is detected, the NMM is reset.
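The watchdog arrangement described above can be sketched as a simple count-down timer that the NMM must refresh before it expires. The counts and refresh period below are illustrative, not taken from the patent.

```python
class WatchdogTimer:
    """Simplified model of the module's count-down timer.  Reaching zero is
    treated as a fault condition, as described above."""
    def __init__(self, initial_count=5):
        self.initial_count = initial_count
        self.count = initial_count

    def watchdog_pulse(self):
        # Corresponds to the watchdog signal on line 124: reload the counter.
        self.count = self.initial_count

    def tick(self):
        # Called once per timer period; returns True when a fault is declared.
        if self.count > 0:
            self.count -= 1
        return self.count == 0

wd = WatchdogTimer()
for cycle in range(12):
    if cycle % 3 == 0:
        wd.watchdog_pulse()        # NMM issues the watchdog command periodically
    if wd.tick():
        print("fault: watchdog expired")
```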
  • the state machine 92 is also capable of detecting commands which are unique to the particular type or kind of module. For example, a command can be issued which will disable or partition a particular port. Since the number of ports on host modules differs, it is necessary to construct unique port partition commands for different module types. Also, different types of status commands can be transmitted by the NMM requesting status data unique to a particular type of module, such as the status of various LEDs located on the front panel of the module.
  • Eight bidirectional data lines represented by numeral 114 are connected to the state machine 92 for providing data to support the unique commands. Data are read out of the state machine on lines 114 when the read/write signal RD/WR on line 118 is high. Data can be transferred to the machine if the signal on line 118 is low.
  • the five lines designated by the numeral 116 are network management function NMFC decodes which are produced in response to various unique commands received by the state machine.
  • the decodes are used to control logic circuits on the module in a predetermined manner, depending upon the type of command and the module type.
  • the decodes are ten bits, with the signal HBEN on line 120 indicating when the five bits on lines 116 are the high or most significant bits.
  • the signal LBEN on line 122 indicates when the five bits on lines 116 are the low or least significant bits.
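The ten-bit decode carried over the five lines 116 can be illustrated as two five-bit transfers qualified by HBEN and LBEN. This is a behavioural sketch of the signalling only, not the actual logic circuits.

```python
def transfer_nmfc_decode(decode_value):
    """Split a 10-bit NMFC decode into two 5-bit transfers on lines 116.
    HBEN marks the most significant five bits, LBEN the least significant."""
    assert 0 <= decode_value < 2 ** 10
    return [
        {"lines_116": (decode_value >> 5) & 0x1F, "HBEN": 1, "LBEN": 0},
        {"lines_116": decode_value & 0x1F,        "HBEN": 0, "LBEN": 1},
    ]

def reassemble_decode(transfers):
    """Inverse operation on the receiving side."""
    value = 0
    for t in transfers:
        if t["HBEN"]:
            value |= t["lines_116"] << 5
        elif t["LBEN"]:
            value |= t["lines_116"]
    return value

assert reassemble_decode(transfer_nmfc_decode(0b1010110011)) == 0b1010110011
```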
  • the network topology is displayed at the Network Management Console 93.
  • the topology is produced automatically and is updated automatically in the event the network configuration is altered.
  • FIG. 12 is a block diagram of one exemplary network depicting the topology of the network.
  • the exemplary network includes a total of eight interconnected concentrators 100, 102, 104, 106, 108, 110, 112 and 114. Each concentrator is provided with a Network Management Module.
  • the network is monitored and controlled by a Network Management Control Console NMCC (not depicted) which can be a part of a DTE associated with any one of the network concentrators.
  • NMCC Network Management Control Console
  • the depicted network includes a mix of two types of concentrators including both Models 1000 and 3000.
  • the Model 1000 is less flexible than the Model 3000 type.
  • the Model 1000 includes an NMM which can distinguish only two ports. There is an Up Port which refers to the connection of the NMM to the Media Dependent Adapter (MDA). There is the Down Port which collectively refers to all of the ports on the host modules of the concentrator.
  • MDA Media Dependent Adapter
  • the Model 1000 NMM cannot distinguish between different host module ports.
  • the Model 3000 concentrator includes the capability of distinguishing between the individual host module ports and can identify the particular port number and slot number on which a message is received on the network. The operation of this feature was previously described in connection with the Network Management Interface (NMI) of FIG. 11.
  • NMI Network Management Interface
  • the Model 1000 NMMs can be connected Up Port to Up Port, but Down Port to Down Port connections are not allowed.
  • the Model 3000 NMMs are more flexible and can be connected by way of any of the ports of the concentrator.
  • the Control Console Adaptor (CCA) of the Network Management Control Console (NMCC) is responsible for building and maintaining the topology of the network.
  • the CCA interacts with all of the Network Management Modules over the network through a sequence of protocol frames referred to as Protocol Data Units (PDUs).
  • PDUs Protocol Data Units
  • the CCA also exchanges messages with the User Interface (UI) 93c (FIG. 10) of the personal computer of the Network Management Control Console.
  • the User Interface (UI) includes the Windows graphic user interface.
  • the CCA communicates with the User Interface by way of a Memory Resident Driver (MRI) on the personal computer and converts the PDU formats to and from the User Interface.
  • MRI Memory Resident Driver
  • Each of the Network Management Modules includes a processor and associated memory, as will be described.
  • the NMM processor executes code in a local RAM which is downloaded from the CCA of the NMCC to each of the NMMs.
  • the Control Console Adapter transmits a Protocol Data Unit (PDU) over the network to each of the network NMMs.
  • PDU Protocol Data Unit
  • Loadserver Ready is transmitted every five seconds and indicates to the NMMs that the CCA is ready to download code to each of the NMMs.
  • the Hello message is transmitted with a predefined group of multicast addresses.
  • Each NMM in the network should receive the Hello message which contains the address of the NMM which originated the message.
  • the Hello message contains information regarding the model and revision number of the originating NMM as described in connection with FIG. 11 (lines 134, 132). This latter information will be used to distinguish between Model 1000 and 3000 NMMs (and concentrators).
  • Each concentrator should receive a Hello message from each of the other concentrators in the network.
  • a Model 3000 has the capability of monitoring the particular port over which the Hello message is received, as previously described. Model 1000 concentrators can only distinguish between Up Port and Down Port messages.
  • Each NMM maintains an internal list or table of the port-slot and NMM address for each of the received Hello messages. If the NMM is a Model 1000 the "port-slot" will be either Up Port or Down Port.
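The way an NMM folds received Hello messages into its internal list can be sketched as follows. The field names and data structures are hypothetical; only the slot-port versus Up/Down Port distinction and the recording of the originating address are taken from the description above.

```python
def record_hello(nmm_list, hello, model_3000=True):
    """Record one received Hello message in an NMM's internal list.

    A Model 3000 NMM resolves the receiving slot and port; a Model 1000 NMM
    only distinguishes Up Port from Down Port."""
    if model_3000:
        key = (hello["slot"], hello["port"])       # e.g. (3, 2) for slot-port 3-2
    else:
        key = hello["up_or_down"]                   # "Up" or "Down"
    nmm_list.setdefault(key, set()).add(hello["source_address"])
    return nmm_list

# The direct-connection entries for concentrator 106 (address 05) in the
# example that follows:
nmm_list = {}
record_hello(nmm_list, {"source_address": "06", "slot": 3, "port": 2})
record_hello(nmm_list, {"source_address": "08", "slot": 6, "port": 1})
record_hello(nmm_list, {"source_address": "03", "slot": 4, "port": 2})
print(nmm_list)   # {(3, 2): {'06'}, (6, 1): {'08'}, (4, 2): {'03'}}
```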
  • concentrator 106, having NMM address 05, has direct connections to three other concentrators.
  • Concentrator 112, having address 06, is connected to concentrator 106 at slot 3, port 2 (3-2).
  • Concentrator 114, having address 08, is connected to slot 6, port 1 (6-1) of concentrator 106.
  • concentrator 100, having address 03, is connected to slot 4, port 2 (4-2) of concentrator 106.
  • Once concentrator 106 has received a Hello message originating from each of the other concentrators in the network, it will construct an NMM list. The list will have a total of seven entries since there are seven other concentrators in the network. The NMM list will reflect that a Hello message from concentrator address 06 was received over slot 3, port 2 (3-2), and that another Hello message originating from concentrator address 08 was received over slot 6, port 1 (6-1). Finally, the list should further reflect that Hello messages originating from the remaining five concentrators were received on slot 4, port 2 (4-2) of concentrator 106.
  • FIG. 13 is an NMM List table containing the NMM list for all of the concentrators of the FIG. 12 network.
  • the left column, the "Reporting Concentrator Address" column, contains the address of each of the reporting concentrators.
  • the NMM entry in the NMM List table for concentrator 106 which has address 05, indicates that Hello messages originating from the remaining seven concentrators were received over three separate ports (3-2, 6-1 and 4-2), as previously described. This information is collected and stored in concentrator 106, to be eventually forwarded to the CCA 93c. Similarly, concentrator 104, having address 01 received all seven messages over a single port, namely slot-port 2-1. This information is also collected and stored in concentrator 104 to be forwarded to the CCA. The other concentrators collect and store similar information as can be seen in the NMM List table of FIG. 13.
  • Each concentrator has only limited information concerning the topology of the network. For example, concentrator 106 (address 05), can ascertain that it received Hello messages from five concentrators over slot-port 4-2. However, the concentrator is unable to ascertain the actual path taken over the network by the Hello messages. For example, concentrator address 05 cannot ascertain that the Hello message originating from concentrator 104 (address 01) was forwarded by concentrator 100 (address 03) instead of taking some other path, such as a direct path.
  • the NMM list of a particular concentrator does not contain sufficient information to create the network topology.
  • the total information in the NMM tables created by each of the concentrators does contain sufficient information. As will be described, this information is collected from the NMMs by the CCA in a format such as the FIG. 13 NMM List table to create the overall network topology.
  • the CCA monitors the various Hello messages transmitted by each NMM in the network.
  • the CCA creates a table, such as the FIG. 16 Ancestor Table, which initially only includes the NMM addresses of all of the concentrators in the Network. The NMM addresses are used to allocate memory for creating the Ancestor Table.
  • the CCA also determines which NMM and associated concentrator will become the root of the topology tree.
  • the NMM List table will also contain information (not depicted) regarding the particular type of NMM so that the CCA can distinguish between Model 1000 and 3000 NMMs.
  • FIG. 14 is a block diagram of the process after the Hello PDUs have been received by the CCA which contain the addresses of the reporting concentrators.
  • the diagram illustrates the manner in which the CCA obtains information from the concentrators to complete the Link Table of FIG. 13.
  • Element 118 of FIG. 14 represents the block request transmit process where the CCA transmits separate messages N1-Nn, represented by lines 122, to each of the concentrators in the network 120 for which a Hello PDU was received.
  • the messages request that the concentrators provide the CCA with the NMM list associated with the concentrator.
  • Each of the messages of the block request is referred to as a Get NMM List message and the messages are transmitted sequentially over the network, preferably back-to-back.
  • the concentrators in the network 120 respond to the Get NMM List messages by transmitting back to the CCA an NMM List PDU, represented by lines 124.
  • the NMM List PDU contains the NMM List of the reporting concentrator. As previously described, the list contains the addresses of the concentrators heard by the reporting concentrator and the slot-port over which the messages were received.
  • the NMM List PDUs may be received and processed by the CCA, as represented by block 126, in any order. It is possible that some of the concentrators will not respond to the Get NMM List PDU for some reason. In that case the returned messages Nr will be fewer than the transmitted messages N1-Nn. As indicated in block 126, the CCA monitors the number of responses by incrementing an NMM List received counter.
  • the CCA will monitor the received NMM Lists. If the CCA fails to receive an NMM List from one or more concentrators, the CCA will wait a predetermined period of time. Once the period of time has expired, as represented by element 128, the CCA will issue a further block request comprising Get NMM List PDUs separately directed to the concentrators which have not responded or whose responses have been dropped in the CCA or somewhere in the network. This is a connectionless protocol, so delivery of a message is not assured. The message could have been involved in a collision, for example. This process is repeated a maximum of five times, as indicated by element 130. If the maximum number of retries is exceeded, an error message is issued as shown by block 132.
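The collection procedure of FIG. 14 (block requests, counting responses, and retrying over a connectionless transport) can be sketched as follows. The transport hooks `send_get_nmm_list` and `poll_responses` are placeholders for the actual PDU exchange, and the timing values are illustrative.

```python
import time

MAX_BLOCK_REQUESTS = 5      # the retry limit described above

def collect_nmm_lists(addresses, send_get_nmm_list, poll_responses,
                      wait_seconds=2.0):
    """Collect an NMM List from every concentrator address, re-requesting
    from non-responders since delivery is not assured."""
    received = {}                                   # address -> NMM List
    outstanding = set(addresses)
    for _ in range(MAX_BLOCK_REQUESTS):
        for addr in outstanding:                    # block request, back-to-back
            send_get_nmm_list(addr)
        time.sleep(wait_seconds)                    # predetermined waiting period
        for addr, nmm_list in poll_responses():     # responses may arrive in any order
            received[addr] = nmm_list
        outstanding = set(addresses) - set(received)
        if not outstanding:
            return received
    raise RuntimeError(f"no NMM List received from: {sorted(outstanding)}")
```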
  • the process carried out by the CCA utilizing the NMM List data is represented by the flow diagram of FIG. 15. The results of the process are shown in the Ancestor Table of FIG. 16, as will be explained.
  • Each NMM sends its link information to the CCA in the format given in FIG. 13.
  • the link data is also known as the NMM List.
  • the link data are arranged by slot/port group. For example, in FIG. 13, the NMM with address 05 sends three groups of link data, one group associated with slot-port 3-2, another associated with slot-port 6-1, and the last associated with slot-port 4-2.
  • the CCA processes each list using the following method.
  • the CCA searches through each slot-port group for the presence of the root NMM, which had been previously selected. If the Root NMM exists in that group, the slot-port number is entered in the "Own-Slot-Port" field of the Ancestor Table, such as the NMM at concentrator address 05 in FIG. 16.
  • the CCA accesses the "Ancestor" field of each group member in FIG. 16 and enters the address and slot-port numbers of the reporting NMM. That is, the reporting NMM becomes one of the ancestors for NMMs existing in slot-port groups not containing the Root NMM.
  • the CCA processes these lists on-the-fly, and discards the data. There is no requirement to save individual NMM link data, which can exceed 1200 bytes for a large network.
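The per-list processing just described can be sketched as a fold of each reporting NMM's list into the Ancestor Table, after which the list is discarded. The table layout below is a simplified stand-in for the Ancestor Table of FIG. 16.

```python
def new_ancestor_table(addresses):
    return {a: {"own_slot_port": None, "ancestors": [], "level": 0}
            for a in addresses}

def process_nmm_list(table, reporting_addr, nmm_list, root):
    """Fold one reporting NMM's list (slot-port -> addresses heard) into the
    Ancestor Table.  The group containing the root gives the reporter's own
    slot-port; every other group makes the reporter an ancestor of each
    member and bumps that member's level."""
    for slot_port, heard in nmm_list.items():
        if root in heard:
            table[reporting_addr]["own_slot_port"] = slot_port
        else:
            for addr in heard:
                table[addr]["ancestors"].append((reporting_addr, slot_port))
                table[addr]["level"] += 1
    return table

# Reporting concentrator address 05 from the FIG. 13 example:
table = new_ancestor_table(["01", "02", "03", "04", "05", "06", "07", "08"])
process_nmm_list(table, "05",
                 {(3, 2): {"06"}, (6, 1): {"08"},
                  (4, 2): {"01", "02", "03", "04", "07"}}, root="02")
```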
  • Block 138 of FIG. 14 is executed after all NMM lists are received and processed by the CCA. This block checks the Ancestor Table of FIG. 16, and resolves immediate parent-child relationships of each NMM.
  • the parent of an NMM is the ancestor whose level number is one less than that of the NMM, i.e., the ancestor immediately above it in the hierarchy. Note that the Root NMM does not have any ancestors.
  • the topology depicted in the Network Management Control Console NMCC display includes various concentrators connected in an inverse tree hierarchy.
  • the highest level of the hierarchy, Level 0, contains a single root concentrator.
  • Level 1 contains the concentrators in the next-from-highest level, and so forth.
  • concentrator 100 shown in FIG. 12 is shown at the top of the network hierarchy but the topology could be drawn with another concentrator as the root concentrator.
  • the CCA selects one of the concentrators to be the root concentrator before requesting the NMM List for the concentrator.
  • the CCA selects the concentrator with the largest number of Down Port links as the root concentrator. This will also be the one concentrator with no Up Port links, with one exception. It is possible that more than one concentrator having a Model 1000 is connected by way of the Up Port. For example, in FIG. 17, there are three concentrators, 116a, 116b and 116c, connected by way of the Up Port link. In that event none of the concentrators will be selected as the root. A dummy concentrator will be assigned the root position (a virtual root) and the actual concentrators 116 will be assigned to the Level 1 and lower hierarchy levels.
  • any of the concentrators can be selected as the root.
  • the CCA selects the concentrator with the maximum number of links.
  • a concentrator having a Model 1000 NMM, with no other concentrator having a Model 1000 NMM linked to its Up Port, will be selected as the root. Although any concentrator having a Model 3000 NMM located on the Up Port side of the highest level Model 1000 NMM could also be selected, this information is not available to the CCA from the network.
  • at this early stage of the processing, concentrator 102, having concentrator address 02, is selected as the root of the network because it is the only concentrator having a Model 1000 NMM.
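The root-selection rules above can be summarized in a short sketch. The per-concentrator fields are invented for illustration; the patent derives the equivalent information from the Hello messages and NMM Lists.

```python
def select_root(concentrators):
    """Pick a root for the topology display.

    `concentrators` maps address -> {"model": 1000 or 3000,
    "down_links": int, "up_port_to_model_1000": bool}."""
    model_1000s = {a: c for a, c in concentrators.items() if c["model"] == 1000}
    if not model_1000s:
        # No Model 1000 present: any concentrator may serve as the root;
        # here the one with the most links is taken.
        return max(concentrators, key=lambda a: concentrators[a]["down_links"])
    if any(c["up_port_to_model_1000"] for c in model_1000s.values()):
        # Model 1000s joined Up Port to Up Port: assign a dummy (virtual) root.
        return "VIRTUAL_ROOT"
    # Otherwise: the Model 1000 with the most Down Port links (no Up Port link).
    return max(model_1000s, key=lambda a: model_1000s[a]["down_links"])
```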
  • one block of entries is taken from the NMM List of FIG. 13 as indicated by element 146 of the chart.
  • An entry is defined herein as the address of a particular concentrator heard over a particular slot-port of the reporting concentrator.
  • a block of concentrator entries means all of the concentrator addresses heard over a particular slot-port.
  • reporting concentrator address 03 has three blocks of entries. The first block includes the addresses of the concentrators heard over slot-port 2-1, namely addresses 02, 07 and 04.
  • the next step is to determine whether the block contains the address of the previously-selected root concentrator, as indicated by element 148.
  • Each reporting concentrator will receive messages from all of the other concentrators, including the root concentrator.
  • the block of entries for slot-port 2-1 of concentrator address 03 includes root concentrator address 02. As indicated by element 150 in the flow chart, the slot-port over which the reporting concentrator receives messages from the root concentrator is added to the Root column of the FIG. 16 Ancestor Table. Thus, entry "2-1" is added to the Root column for reporting concentrator address 03.
  • the next block includes one entry, address 01, which indicates that the reporting concentrator address 03 received messages from concentrator address 01 over slot-port 3-2.
  • This information indicates that reporting concentrator address 03 must be used by concentrator address 01 to communicate with the root concentrator.
  • reporting concentrator address 03 is an "ancestor" of concentrator address 01. If concentrator 01 is connected directly to concentrator 03, concentrator addresses 03 and 01 have a "parent"-"child" relationship, respectively. At this point in the processing, there is only enough information to indicate that an ancestor relationship exists between the two concentrators.
  • the address of the entry is obtained, address 01, so that the appropriate location in the Ancestor Table of FIG. 16 is located.
  • the address of the reporting concentrator, 03 is entered in the Ancestor column associated with concentrator 01, thereby indicating that concentrator address 03 is an ancestor of concentrator address 01.
  • the slot-port showing the connection to the ancestor concentrator, 3-2, is added to the slot-port column of the Table.
  • the Level column in the Ancestor Table indicates the level of the associated concentrator in the network hierarchy. The level is initially set to zero for all of the concentrators. Only the root concentrator will remain at Level 0.
  • Since concentrator address 01 is below concentrator 03 in the hierarchy of the network, it is known that concentrator address 01 cannot be at Level 0. The value of the level entry for concentrator address 01 is increased by one, as indicated by block 156. Eventually, the level for address 01 will be increased to two, as shown in the Table.
  • the next block of entries for reporting concentrator 03 does not contain the root concentrator address 02 (element 148). Accordingly, the address for the first entry of the block, address 05, is read so that the appropriate location in the Topology Table is found. Next, the reporting address 03 is entered in the Ancestor column of the Table together with the slot-port 4-1. The level number for address 05 is increased by one (block 156) and a determination is made as to whether all entries for the block have been processed (element 158).
  • Concentrator address 03 is entered as an ancestor to concentrator 06 together with the slot-port 4-1 (block 154) and the level is increased by one. Concentrator address 03 is then entered as an ancestor for concentrator address 08 together with slot-port 4-1. Again, the level number for address 08 is increased by one.
  • the next block of entries is obtained (block 146). There is one block of entries associated with reporting concentrator 04 in the NMM List. Since the block contains the root concentrator address 02, the slot-port 3-1 is entered in the Root column of the Topology Table (block 150). All blocks of entries for that reporting concentrator have then been processed (element 160); therefore, the next block is obtained.
  • the process is repeated for the three blocks of entries for reporting concentrator addresses 01, 08 and 06.
  • the block contains the root concentrator so the slot-port over which the root concentrator is heard is added to the "Own Slot-Port" column of the Ancestor Table.
  • slot-port 2-1, 3-1, and 2-6 are inserted in the Root column for reporting concentrators 01, 08 and 06, respectively.
  • Reporting concentrator address 05 in the NMM List includes three blocks of entries. The process is repeated with concentrator address 05 being inserted in the Ancestor Table as an ancestor to concentrators 06 and 08 together with slot-ports 3-2 and 6-1, respectively (block 154). The level numbers for the two addresses are also increased by one (block 156).
  • the next entry for reporting concentrator 05 indicates that the root concentrator address 02 is heard by concentrator address 05 over slot-port 4-2. That entry is made (block 150) in the Root column of the Ancestor Table at concentrator address 05.
  • The next block of entries is for root concentrator address 02.
  • all concentrators are descendants of the root concentrator. Accordingly, the root concentrator address 02 will be inserted as an ancestor for each of the seven other concentrators, together with the associated slot-ports. Since the root concentrator is a Model 1000 concentrator, the slot-port will be either Down or Up. In addition, the level for each of the concentrators, other than the root concentrator, will be increased by one.
  • the last block of entries is for concentrator address 07.
  • the block contains the root concentrator, so the slot-port, 4-7, is entered (block 150) in the Root column of the Topology Table.
  • At element 160, all blocks of the Link Table will have been processed or updated. Accordingly, the information in the Link Table is no longer needed and the Table is discarded, as represented by block 164.
  • the Ancestor Table is then used to create a display of the actual network topology.
  • the Level column of the Table will have been increased each time an ancestor is added so that the final value accurately reflects the level of the concentrator in the network. It is then possible to distinguish parent concentrators from other ancestor concentrators.
  • concentrator address 01 has two ancestors, and is, therefore, at level 2.
  • the ancestors include addresses 03 and 02.
  • Concentrator address 03 is at Level 1 and address 02, the root concentrator, is at Level 0. Accordingly, concentrator 03 is the parent concentrator of concentrator address 01.
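Using the table structure from the earlier sketch, the parent of a concentrator is simply the ancestor whose final level number is one less than its own; this is a sketch of that lookup, not the patent's implementation.

```python
def find_parent(table, addr):
    """Return (parent_address, slot_port) for `addr`, or None for the root."""
    own_level = table[addr]["level"]
    for ancestor_addr, slot_port in table[addr]["ancestors"]:
        if table[ancestor_addr]["level"] == own_level - 1:
            return ancestor_addr, slot_port
    return None

# Concentrator 01 (level 2) has ancestors 03 (level 1) and 02 (level 0),
# so 03 is reported as its parent.
```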
  • A somewhat simplified display topology image is depicted in FIG. 18, generally designated by the numeral 137.
  • the topology image is based upon the exemplary network of FIG. 12. Note that the actual display will show only two levels of concentrators, rather than the three levels depicted.
  • Level 0 shows the icon for the root concentrator address 02.
  • the icon indicates that there are three levels of concentrators below Level 0 and there are a total of seven concentrators below.
  • the Level 1 icons depict the concentrator addresses 03, 04 and 07.
  • the linkage symbol for address 03 indicates that the concentrator is connected from port 1, slot 2 of the Model 3000 concentrator to the Backplane (BK PL) of the Model 1000 root concentrator at address 02.
  • the up link for address 03 is from a particular port, port 1, of the module located in slot 2 of the concentrator.
  • the up link could also have been made from the Network Management Module NMM of the concentrator. Since an NMM has only a single port, only the slot number of the NMM in the concentrator would be depicted in the linkage symbol.
  • the CCA will automatically update the Network Topology Table of FIG. 16 should the configuration of the network change. For example, if a concentrator is added by connecting it to slot-port 4-8 of concentrator address 07 of FIG. 12, the presence of the new concentrator would be detected and the topology display updated.
  • the CCA monitors the Hello PDUs from every NMM in the network. If three consecutive Hello PDUs are missed from a particular NMM, the NMM is treated as missing and a warning message is sent to the User Interface for display.
  • the NMM is aged out of the NMM list in the Link Table.
  • the NMM is also issued a Kill NMM PDU by the other NMMs. If the NMM of the concentrator responds, it will transmit what is called a New Hello PDU and will rejoin the network as a new concentrator. This procedure ensures that the CCA can detect any changes in concentrator connectivity.
  • An NMM in the network will transmit a Hello PDU in any of the following three situations.
  • a Hello PDU will be transmitted after the NMM has received down loaded code from the CCA.
  • an NMM will issue a Hello PDU if the NMM fails to receive three consecutive CCA Loadserver Ready PDUs which, as previously noted, indicate to the NMMs that the CCA is ready to download code to the NMM.
  • an NMM will transmit the Hello PDU after it receives a Kill NMM PDU.
  • When the CCA detects a New Hello PDU, it commences an update of the network topology. If the number of new concentrators is relatively small, less than ten, the CCA updates each NMM location in the topology tree by using a binary search method. First, the CCA compiles a table of new concentrators. The CCA then requests the root concentrator for its NMM list.
  • the NMMs in the next level down are requested to provide the associated NMM list. This process is repeated until the NMM list from the reporting concentrator indicates the presence of only the new NMM over that port.
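The Hello monitoring and aging behaviour can be sketched as a per-NMM counter of consecutive missed Hello PDUs. The data structure and interval handling are illustrative only.

```python
MISS_LIMIT = 3          # three consecutive missed Hello PDUs

def age_hellos(hello_state, heard_this_interval):
    """Update miss counters after one Hello interval; return addresses that
    have just been declared missing (to be aged out and reported)."""
    newly_missing = []
    for addr in list(hello_state):
        if addr in heard_this_interval:
            hello_state[addr] = 0
        else:
            hello_state[addr] += 1
            if hello_state[addr] == MISS_LIMIT:
                newly_missing.append(addr)      # warn the User Interface, age out
    for addr in heard_this_interval:
        hello_state.setdefault(addr, 0)         # a New Hello rejoins as a new NMM
    return newly_missing
```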
  • NMMs Network Management Modules
  • CCA Control Console Adapter
  • FIG. 19 is a functional block diagram of the Network Management Module (NMM).
  • the NMM includes a microprocessor 166, such as a microprocessor manufactured by NEC having the designation V35.
  • the microprocessor includes a core 168 and service port 170.
  • a serial port 172 is provided for connecting the NMM to a modem for out of band (external to the network) control of the NMM.
  • the microprocessor is coupled to a multiplexed address output bus 176 and a data bus 178.
  • An address buffer 177 is provided for storing the upper byte of address so that the full address can be used for dynamic random access memory (DRAM) 180, programmable read only memory (PROM) 182 and static random access memory (SRAM) 184.
  • DRAM dynamic random access memory
  • PROM programmable read only memory
  • SRAM static random access memory
  • PROM 182 is a boot memory and DRAM 180 holds the code for the individual NMMs which is downloaded by the CCA to the NMM over the network.
  • SRAM 184 is used to hold frames of the data either received from the network or to be transmitted over the network.
  • Block 192 collectively represents miscellaneous control circuitry, including SRAM access control, data bus control, memory I/O decodes, various counters, special functions and various glue logic.
  • the data and address busses are coupled to a conventional network interface controller (NIC) 188, such as the controller marketed by National Semiconductor under the designation DP8390/NS32490.
  • the NIC 188 is for interfacing with CSMA/CD type local area networks such as Ethernet.
  • the NIC functions to receive and transmit data packets to and from the network and includes all bus arbitration and memory support logic on a single chip.
  • the NIC has a single bus 189 which serves both as an address bus and a data bus.
  • Latch 186 is used to hold the addresses for the NIC and addresses from the NIC.
  • a data buffer 179 functions to isolate the data bus 178 of the processor from the NIC bus 187 and also functions to provide some bus arbitration features.
  • the NMM further includes a serial network interface (SNI) 190.
  • SNI 190 can be an interface device marketed by National Semiconductor under the designation DP8391/NS32491.
  • the CSMA/CD data on the network and back plane bus 84 are Manchester encoded, and the SNI performs Manchester encoding and decoding functions.
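Manchester coding maps each data bit onto a pair of half-bit levels with a guaranteed mid-bit transition. The sketch below uses the IEEE 802.3 convention (a 1 is sent low-then-high, a 0 high-then-low); it illustrates the coding only, not the SNI hardware.

```python
def manchester_encode(bits):
    """IEEE 802.3 Manchester: 1 -> (low, high), 0 -> (high, low)."""
    halves = []
    for b in bits:
        halves.extend((0, 1) if b else (1, 0))
    return halves

def manchester_decode(halves):
    """Read each pair of half-bit levels back into a data bit."""
    return [1 if pair == (0, 1) else 0
            for pair in zip(halves[0::2], halves[1::2])]

bits = [1, 0, 1, 1, 0]
assert manchester_decode(manchester_encode(bits)) == bits
```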
  • the SNI receives the encoded data from the data steering logic (DSL) 90 which receives the CSMA/CD data from the CSMA/CD backplane bus 84 for forwarding to SNI 190.
  • DSL 90 transmits CSMA/CD data from the SNI to the CSMA/CD bus.
  • the data steering logic DSL 90 responds to and controls a status line 93 which is used to control the mode of the NMM.
  • the NMM has various modes of operation: it can act as the primary NMM for the concentrator, as a backup NMM for the concentrator, or it can be placed in a partition mode. In the partition mode the NMM can be isolated (partitioned) from the CSMA/CD bus 84 or isolated (partitioned) from the NMM up port connection in response to commands from the CCA.
  • the NMM further includes a repeater and retiming unit (RRU) 92, as previously explained, which repeats the CSMA/CD data received on bus 84 by way of the data steering logic 90.
  • RRU also retimes the data for transmission back to the CSMA/CD bus 84 and for transmission on the NMM Up Port by way of a medium dependent adapter (MDA) 198.
  • MDA 198 also receives CSMA/CD data from the Up Port connection for retransmission on the CSMA/CD bus 84.
  • the data bus 178 is also connected to the Network Management Interface (NMI) 106 which provides the interface between the NMM and the backplane control bus. As previously described in connection with the description of FIGS. 10 and 11, the NMI 106 cooperates with the NMIs 107 in the host modules.
  • NMI Network Management Interface
  • the NMI 106 of the NMM provides two functions. First, the NMI 106 is implemented to have the same circuitry as the NMI 107 used in the host modules as depicted in FIG. 11. This circuitry is used to provide backup NMM status, similar to host module status, to the primary NMM. Second, the NMI 106 is implemented to provide control signals for control bus 86. The control bus delivers commands sent by the NMM to the host module and further provides the response from the host module back to the NMM.
  • the NMM can issue either control commands or status commands to a particular host module in the concentrator.
  • the specific module (slot location) will decode the command and take action.
  • the commands transmitted over the control bus from the NMM NMI 106 and the responses from the host module NMI 107 are carried on eight lines of the bus which carry the NMIDAT signals (FIG. 11).
  • NMI 106 also produces the three control signals RD/WRL, DAT/CMDL and DENL which are used by the host module NMI 107 for receiving the commands and transmitting the response back to the NMM NMI 106.
  • the control console adapter CCA is similar to the NMM as can be seen in the functional block diagram of FIG. 20. Functional elements common to the CCA and NMM are designated with the same numerals.
  • the CCA is functionally an intelligent Ethernet card.
  • the serial network interface SNI 190 is connected to a fifteen pin electrical connector 191.
  • the interface of the CCA with the network is simpler than the network interface of the NMM since the NMM must function with a CSMA/CD bus 84 whereas the CCA need only connect to a single network port.
  • the connector 191 is typically coupled to an off card transceiver or media access unit (MAU) (not depicted) by way of an off card attachment unit interface (AUI).
  • MAU media access unit
  • the data bus 179 of the CCA is connected to personal computer (PC) I/O circuitry 183.
  • I/O circuitry 183 is coupled to the PC bus 97 and to the graphical user interface UI (the windows interface) of the PC by way of a memory resident interface (MRI) which is not depicted.
  • MRI memory resident interface
  • Although the CCA provides processing capability, it is possible to use a conventional Ethernet card, with the processing function of the CCA being carried out by the processor and associated memory in the personal computer of the NMCC.
  • the User Interface in the Network Management Control Console also generates the expanded view of a particular concentrator front panel, as shown in FIGS. 7 and 9 utilizing data provided by the individual NMMs.
  • Each NMM is capable of determining the configuration of the concentrator in which it is located by way of the control bus 86 (FIG. 10) of the concentrator back plane. As previously described, the NMM can ascertain which slots in the concentrator are occupied and, if occupied, the model and revision of the module which is located in the slot.
  • the CCA When the user selects a particular concentrator to be displayed in the expanded view window, the CCA requests status information regarding the concentrator from the associated NMM. Data regarding the model type and revision of each module in the concentrator, including the reporting NMM, and the location of the modules in the concentrator, are forwarded to the CCA.
  • the bit mapped graphics are stored in the personal computer data base of the NMCC. The appropriate bit map data can then be used to produce the concentrator image utilizing a conventional windows graphic user interface having graphics capability such as Microsoft Windows.
  • the various modules have one or more LEDs which provide miscellaneous status information.
  • the expanded view image of the concentrator will reflect the actual status of an LED by either displaying a particular color area at the LED image location (to show illumination) or by displaying a black area (to show lack of illumination).
  • the CCA will request that the NMM of a particular concentrator issue a command to the host modules in the concentrator, soliciting LED status information. The information is forwarded to the CCA and used by the interface to control the appearance of the LED images to reflect the actual status of the LEDs.
  • The foregoing can be further illustrated by the flow charts depicted in FIGS. 21A-21C.
  • block 200 indicates that the user first utilizes the mouse to select the particular concentrator to be displayed.
  • block 202 shows that the window graphic user interface creates the expanded view process/window.
  • the User Interface UI then obtains the module type data from the data attached to the concentrator child window as shown by block 204.
  • the UI then obtains from its own data base, using the module type and revision number, the detailed module data including, for example, data for depicting the indicia of module model number, LED location, port location, and so forth, such as depicted in FIGS. 7 and 9.
  • the graphics bit map for the concentrator image is loaded and displayed on the NMCC screen, as indicated by block 208. Finally, a message is sent to the expanded view process to update itself with LED status information thereby concluding the process, as shown by elements 210 and 212.
  • the flow charts of FIGS. 21B and 21C relate to the LED status update sequence.
  • Block 214 of FIG. 21B indicates that a message is received by the expanded view process to update itself with LED status.
  • the NMCC then transmits an LED STATUS PDU over the network to the appropriate concentrator (Block 216).
  • the concentrator NMM will respond with 32 bits of encoded data for each module in the concentrator. Each bit of the data is capable of indicating the status of an LED in the module, with a "1" indicating that the LED is illuminated and a "0" indicating that the LED is off.
  • Block 220 of FIG. 21C indicates that the LED status data are received by the NMCC from the NMM of the concentrator over the network.
  • the rectangular locations in the bit mapped graphics for the LED are then filled with the appropriate color (or black) based upon the received data thereby concluding the update (elements 222 and 224).
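The LED update described in FIGS. 21B and 21C amounts to unpacking a 32-bit status word per module into per-LED on/off states. In the sketch below the bit-to-LED mapping is purely positional; in practice it depends on the module type.

```python
def decode_led_status(status_word, led_count=32):
    """Bit i of the 32-bit status word == 1 means LED i is illuminated."""
    return {i: bool((status_word >> i) & 1) for i in range(led_count)}

# Example: a status word with LEDs 0 and 4 lit
lit = [i for i, on in decode_led_status(0b10001).items() if on]
print(lit)   # [0, 4]
```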

Abstract

Apparatus for monitoring and displaying the status of a local area network. The network includes a hub with ports for connection to various data terminal equipment in a star configuration and for connection to other hubs of the network. The hubs each have different types of plug-in modules which have ports for connecting the hub to different types of network cable such as fiber optic cable, unshielded twisted pair cable and shielded twisted pair cable. Information is automatically provided to a control console identifying the types of modules and the location of the modules in the hub so that an image of the actual hub can be displayed on the screen of the control console. The actual hub image shows the location and types of modules installed in the hub. In addition, information regarding the connection of each of the hubs to other hubs of the network is obtained and provided to the control console. The information is processed so as to automatically produce a topology map on the control console display showing the overall topology of the network.

Description

This is a divisional of application Ser. No. 07/526,567, filed May 21, 1990, now U.S. Pat. No. 5,226,120.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to local area networks and more particularly to apparatus and methods for monitoring the status of a local area network by producing a topology map of the network configuration and by producing a control console display image depicting the appearance of selected network hubs.
2. Background Art
Local area networks for interconnecting data terminal equipment such as computers are well known in the art. Such networks may include a large number of components which may be configured in a variety of ways.
Although equipment exists for monitoring the status of local area networks, such equipment is not capable of accurately monitoring and reporting network status in a manner which may be readily interpreted. For example, the network may include a large number of hubs or concentrators, each of which form the center of a star configuration. The concentrators may each be capable of servicing a large number of data terminal equipment such as personal computers. The network medium may be shielded twisted pair cable, unshielded twisted pair cable or fiber optic cable or a combination of all three. Further, each type of cabling may be supported by various types of modules located in each of the concentrators.
None of the conventional apparatus for monitoring and displaying the status of a network are capable of conveying the actual status of the network in a manner which can be easily comprehended by a user. The disclosed apparatus and method overcome such limitations and allow the actual status of the network to be automatically monitored and displayed. The information displayed depicts in great detail the status of a network which can be easily comprehended by individuals with a minimum amount of training even if the network is relatively complex. Further, the status of the network is automatically updated. These and other advantages of the present invention will become apparent to those skilled in the art upon a reading of the Detailed Description of the Preferred Embodiment together with the drawings.
SUMMARY OF THE INVENTION
Apparatus and a method of monitoring the status of a local area network are disclosed. The network typically includes a plurality of hubs, such as concentrators, with each hub having data ports for coupling the hub in a star configuration to either data terminal equipment, such as personal computers, or for coupling the hub to another hub of the network. The network is of the type which utilizes network contention control such as the well known Carrier Sense Multiple Access With Collision Detection (CSMA/CD).
In one embodiment of the invention, the apparatus automatically determines the overall topology of the network, with the hubs having at least three data ports each. The apparatus includes a transmit means associated with each of the hubs having both originate and repeat means. The originate means functions to transmit messages over the network which originate at the associated hub and which contain an identifying address of the associated hub. The repeat means functions to transmit messages received by the associated hub over the network which originated from other hubs of the network.
Each of the hubs further includes port identifying means for identifying which of the data ports has received one of the messages transmitted by another hub of the network. In this manner, topology data regarding the connection of the various ports of the associated hub to other hubs of the network are obtained. The topology data from a single hub usually does not contain enough information to ascertain the overall network topology.
The apparatus further includes control means coupled to the network for receiving the topology data from each of the hubs in the network. The topology data identify a particular one of the data ports of the hub reporting the topology data and address of the other ones of the hubs which originated network messages received by the reporting hub over that particular port. Finally, the apparatus includes processing means for determining the overall topology of the network utilizing the received topology data.
In another embodiment of the invention, the apparatus monitors the status of each of the hubs of a star configured network by producing an image, on a control console display for example, which depicts the appearance of the actual hub.
Each hub of the network includes a chassis for receiving a plurality of modules. The modules have at least one port for connecting the data terminal equipment such as a computer to the hub, with the modules being of varying types. For example, some modules may be adapted for use with unshielded twisted pair cables and other modules may be adapted for use with optical cables.
The apparatus includes location means for producing location data indicative of the location of each of the modules in the hub chassis. An exemplary location means would include hard-wired slot identification bits located on the chassis which are transferred to any module inserted into the chassis slot associated with the hard-wired bits. Type means are further included for producing type data indicative of the type of each of the modules in the hub. An exemplary type means would include hard-wired bits on the module which indicate the type of module.
Finally, the apparatus includes display means for producing an image of the hub utilizing the location data and the type data, with the image depicting the location of the modules in the hub and the type of modules.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a diagram of an exemplary local area network of the type in which the subject invention can be used and which includes three concentrators or hubs and associated data terminal equipment.
FIG. 2 is a schematic diagram of a local area network, having twenty-four concentrators, of the type in which the subject invention can be used.
FIG. 3 is an exemplary display produced in accordance with the present invention depicting a selected portion of the topology of the network of FIG. 2.
FIG. 4 is a schematic diagram of a local area network with the upper level concentrators connected to a common coaxial cable.
FIG. 5 is an exemplary display produced in accordance with the present invention depicting a selected portion of the topology of the network of FIG. 4.
FIG. 6 is a section of a display menu showing a portion of a main menu bar and an exemplary selected submenu.
FIG. 7 is a section of a display showing a detailed view image which depicts the actual appearance of the front panel of a selected network concentrator, including the location of modules in the concentrator and the type of modules.
FIGS. 8A-8F are enlarged views of selected portions of the FIG. 7 image showing details of the various type of modules.
FIG. 9 is similar to FIG. 7 except that another style of concentrator is depicted.
FIG. 10 is a block diagram of one of the network concentrators showing the network management module and host modules all connected to a common concentrator backplane together with various data terminal equipment in the network management control console connected to the concentrator.
FIG. 11 is a block diagram showing the network management interface for the host modules for interfacing the modules to the concentrator backplane.
FIG. 12 is a block diagram of a further exemplary network showing the interconnection of the concentrators of the network.
FIG. 13 is a Network Management Module List showing the various ports of each of the concentrators and the addresses of the other concentrators which transmit messages received over the ports.
FIG. 14 is a flow chart depicting the process whereby the link data are obtained from the concentrators to construct the FIG. 13 List.
FIG. 15 is a flow chart depicting the process whereby the link data of the FIG. 13 List are processed to form the Ancestor Table of FIG. 16.
FIG. 16 is an Ancestor Table constructed from the data contained in the FIG. 13 List.
FIG. 17 is a block diagram of a network where the up port of the highest level concentrators are connected together so that no concentrator will be assigned the Level 0 position of the topology display.
FIG. 18 is a simplified display image of the overall topology of a network based upon the data of the FIG. 16 Ancestor Table.
FIG. 19 is a functional block diagram of the network management module located in each of the network concentrators.
FIG. 20 is a functional block diagram of the control console adapter, the adapter being an expansion card used to convert a personal computer to a network management control console.
FIGS. 21A-21C are flow charts depicting the process for producing the detailed view of the concentrators such as depicted in FIGS. 7 and 9.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENT
Referring to the drawings, FIG. 1 is a diagram showing the physical connection of a typical simplified local area network. The depicted network functions to interconnect six personal computers or PCs 20a-f. The network includes three concentrators 22a, 22b and 22c. The concentrators function as hubs in the star network topology and provide basic Ethernet functions. Many of the details which will be provided regarding the network are exemplary only, it being understood that the present invention may be utilized in connection with a wide variety of communication networks.
Each of the concentrators includes several plug-in modules 26 which connect to a backplane (not depicted) of each concentrator 22. There are various types of modules including host modules which have ports for connecting the associated module to data terminal equipment (DTE). For example, concentrator 22b includes a host module 26c having a port (not designated) connected to personal computer 20a by way of an interface device 24a. Device 24a is a transceiver (transmitter/receiver) used to link the computer 20a (DTE) or node to the network cable. Module 26c will typically have several other ports (not depicted) for connecting to other DTEs.
One of the DTEs, such as personal computer 20d, is designated as the network management control console (NMCC). The designated computer 20d is provided with a control console adapter (CCA), which is an expansion board which adapts the computer for use as a control console. As will be explained, a user can perform various network monitoring and control functions at the NMCC. A pointer device, such as a mouse 23 having primary and secondary control buttons 23a and 23b, respectively, is used for carrying out these functions.
Each concentrator 22a, 22b and 22c is provided with a network management module (NMM). The network management module NMM gathers data received on a port of a host module and transmits the data to other modules in the concentrator. Further, the network management module NMM will forward the received data to other concentrators in the network that may be connected to the concentrator.
The foregoing can be further illustrated by way of example. Assume the personal computer 20e has a message for computer 20b. Each computer or node in the network has an associated address. Messages directed to a particular computer will be decoded by a conventional network controller card installed in the computer and, if the destination address in the message matches the computer address, the message will be processed by the computer. The message originating from computer 20e will include a destination address of computer 20b. The message will be transmitted to the associated transceiver 24e and will be received by a port (not designated) on host module 26g of concentrator 22c. Module 26g will transmit the received message to the network management module NMM in concentrator 22c by way of the concentrator backplane (not depicted).
The NMM will transmit the received message to each host module in concentrator 22c, including modules 26f, 26g and 26e. The message will exit each port of each module and will be received by transceiver 24d and 24f (but not 24e which received the message originating from computer 20e). However, since the destination address does not match the address of computers 20d and 20f, the network controller cards installed in computer 20d and 20f will not process the messages.
The NMM in concentrator 22c will also forward the received message to concentrator 22a by way of transceiver 24g. The message will be received by a port in module 26d and by the NMM in concentrator 22a. The NMM of concentrator 22a will transmit the message to the NMM of concentrator 22b. The NMM of concentrator 22b will transmit the message to each module in concentrator 22b so that the message will be received by transceivers 24a, 24b and 24c. Since transceiver 24b has an address which matches the destination address, transceiver 24b will forward the message to the associated computer 20b. The other two transceivers 24a and 24c connected to concentrator 22b will refrain from forwarding the message.
Since each message received by a concentrator is retransmitted, only a single message can be transmitted over the network at one time. In the event two computers attempt to transmit at the same time, the messages will interfere and cause a collision in the network. As is well known, when a collision on the network occurs, the concentrator connected to the two cables on which the collision occurred will detect the presence of the collision.
When a collision is detected by an NMM, a "jam" signal will be transmitted by the NMM of the concentrator over the network. The computers involved in the collision will detect the presence of the collided signals and will resort to statistical contention for the network. Other computers not involved in the collision will sense the carrier signal and refrain from transmitting on the network.
Eventually, the jam signal will disappear and the computers will contend for access to the network. A computer wishing to transmit first listens for message traffic on the network and transmits only if there is no traffic and only in the absence of any other carrier signal. This well known method of providing access to a common local area network medium is referred to as Carrier Sense Multiple Access with Collision Detection or CSMA/CD.
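The access method described in the preceding paragraphs can be sketched as listen-before-transmit with random backoff after a collision. The sketch below uses the standard truncated binary exponential backoff of IEEE 802.3, not text from the patent; the hooks `channel_idle` and `send_frame` are placeholders for carrier sense and the actual transmission.

```python
import random
import time

SLOT_TIME_US = 51.2        # slot time for 10 Mb/s Ethernet
MAX_ATTEMPTS = 16          # give up after 16 attempts
BACKOFF_CAP = 10           # backoff exponent is capped at 10

def backoff_delay_us(attempt):
    """After the Nth collision, wait a random whole number of slot times
    in the range [0, 2**min(N, 10) - 1]."""
    k = min(attempt, BACKOFF_CAP)
    return random.randint(0, 2 ** k - 1) * SLOT_TIME_US

def try_transmit(channel_idle, send_frame):
    """Carrier sense, transmit, and back off on collision (send_frame
    returns False when a collision is detected)."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while not channel_idle():                      # defer while the medium is busy
            time.sleep(0)
        if send_frame():
            return True
        time.sleep(backoff_delay_us(attempt) / 1e6)    # wait out the backoff interval
    return False                                       # excessive collisions: give up
```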
The network depicted in FIG. 1 is relatively small and includes two levels of concentrators. Concentrator 22a is at the top level (level "0") and concentrators 22b and 22c are at the next from top level (level "1"). It would be possible to connect several additional DTEs, including computers, work stations, servers, and the like to the concentrators 22a, 22b and 22c. Further, additional concentrators could be connected to the existing concentrators.
FIG. 2 shows a much larger network consisting of twenty-four concentrators. None of the port connections to the individual host modules in the concentrators are shown. Concentrator 28 is the top level, or level "0" concentrator. Concentrator 28 is connected to the next from top level, or level "1" concentrators 30a-30f. The six level 1 concentrators are connected to a total of seventeen level 2 concentrators 32a-32g. Note that a lower level concentrator can be connected to a higher level concentrator by way of a connection to the network management module NMM of the lower level concentrator. It is also possible to connect the higher level concentrator to a host module port in the lower level concentrator.
The network management module NMM performs monitoring and controlling functions within the concentrator in which it is located. In addition, the NMM sends status and diagnostic reports to the network management control console NMCC. Further, the NMM executes commands issued by the control console.
One important function of the network management control console NMCC is to monitor the network topology. As previously noted, the NMCC is a designated computer of the network which includes a control console adapter CCA in the form of an expansion board which is installed in the computer. The designated computer uses a graphical user interface, such as a commercially-available software package called Microsoft Windows sold by Microsoft Corporation of Redmond, Wash. Other commercially available software packages which provide a window environment similar to Microsoft Windows could be used for the present application.
A principal function of the network management control console NMCC is to monitor the status of the network topology. An important feature of the present invention is the ability to automatically acquire information regarding the topology of the network so that a display of the topology can be generated and automatically updated to reflect changes in the network.
FIG. 3 is an image, generally designated by the numeral 36, which will be produced on the NMCC video display terminal showing the topology of the exemplary network depicted in FIG. 2. The display is a menu-driven graphics display which uses a pointing device such as a mouse, light pen or the like. Referring to FIG. 3, the rectangular boxes depicted in the display are concentrator icons 34a-34e which represent the concentrators in the network. The screen is only capable of displaying a relatively limited number of concentrator icons at a time. Accordingly, it is necessary to scroll the display to depict all twenty four of the concentrators, as will be explained.
Concentrator icon 34a corresponds to concentrator 28 in FIG. 2. Icons 34b-34g represent concentrators 30a-30f of FIG. 2. The icons representing the remaining concentrators 32a-32g can be viewed only by scrolling the display both horizontally and vertically.
Display 36 is split between a "Level 0" and a "Level 1". Concentrator icon 34a is shown in the upper "level 0", with the remainder of the icons located in the lower half or "level 1" portion of the screen. The display can be scrolled vertically by placing the cursor icon or mouse pointer 38 over one of the triangle-shaped elements 40 and "clicking" or actuating the control button of the mouse. When the mouse is clicked, the display will replace the "Level 0" icon at the top with the level "1" icons and replace the "Level 1" icons at the bottom with "Level 2" icons. Since there are a total of seventeen "Level 2" concentrators 32a-32g (FIG. 2), it will be necessary to scroll the display horizontally to view all of these concentrators. Scrolling to the left is accomplished by "clicking" left arrow symbol 40a using the mouse and scrolling to the right is accomplished by "clicking" right arrow symbol 40b using the mouse.
Concentrator icon 34a displays various information regarding the status of concentrator 28 (FIG. 2). The chart recorder image displays the amount of message traffic received by the concentrator over time. The designation "000081000002" represents the concentrator identification or address. The designation "Normal" indicates the overall status of the concentrator. The designation will change to indicate a fault or warning condition.
The designation "2 Levels Below" on icon 34a indicates that two levels of concentrators are located below the "Level 0" of concentrator 28, namely "Level 1" and "Level 2". The designation "3000" indicates the type of concentrator. Other concentrators, such as the concentrator 34e, are Model "1000" type concentrators, which have fewer capabilities than do Model "3000" concentrators. Finally, the designation "23 Concentrators Below" indicates that there are twenty three concentrators connected either directly to concentrator 28 or indirectly to concentrator 28 through other concentrators.
Each concentrator icon in the lower part of the display 36, in this case the "Level 1" part of the display, has a vertical bar referred to as a linkage bar. The linkage bar such as bar 42 above icon 34b indicates the connection between the concentrator and the parent of the concentrator located in the next higher level.
The information depicted by the linkage bar and associated text depends upon the type of concentrator. For example, linkage bar 42 depicts the connection between the Level 1 Model 3000 concentrator 30a (FIG. 2) and the Level 0 Model 3000 concentrator 28. For the Model 3000 concentrator linkage bar 42, the upper tag "2-1" indicates the slot number in which the module is located in the concentrator and the port number on that module. In other words, concentrator 28 (FIG. 2) is connected to concentrator 30a by way of port number 1 of a host module which is located in slot 2 of concentrator 28. The lower tag "2" of linkage bar 42 indicates the slot number of the Model 3000 module which provides the connection. In the event the module is a network management module (NMM), there is only one port; therefore, only the slot number of the NMM is depicted. If the module is a host module, both the slot and port numbers are depicted.
Linkage indicator 44 shows the manner in which a Model 1000 concentrator 30d is connected to concentrator 28. Model 1000 concentrators are interconnected by way of the concentrator backplane, abbreviated "BkPl" and by way of an up-port, abbreviated "UpPt". Accordingly, indicator 44 shows that concentrator 30d (FIG. 2) is connected to concentrator 28 by way of a cable connected between the up-port of concentrator 30d and port 4 of a module located in slot 2 of concentrator 28.
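By way of illustration only, the tag text carried by a linkage bar can be thought of as the result of a simple formatting rule. The following Python sketch is not part of the patented apparatus; the function and parameter names are hypothetical.

    def upper_tag(parent_slot, parent_port):
        # Parent side of the link: slot and port on the parent concentrator, e.g. "2-1".
        return f"{parent_slot}-{parent_port}"

    def lower_tag(connection):
        # connection: hypothetical dictionary describing the child-side module providing the link.
        if connection["model"] == 1000:
            return connection["link"]            # Model 1000: "BkPl" or "UpPt"
        if connection["is_nmm"]:
            return str(connection["slot"])       # an NMM has only one port, so slot only
        return f"{connection['slot']}-{connection['port']}"   # host module: slot and port

    # Linkage bar 42 of FIG. 3: upper_tag(2, 1) -> "2-1";
    # lower_tag({"model": 3000, "is_nmm": True, "slot": 2}) -> "2".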
Only one concentrator icon at a time can occupy the Level 0 position of the display. In the typical "star" network topology, the network branches out from one central unit in an inverted tree hierarchy. The upper level usually displays a concentrator icon. However, the topology may have two or more concentrators linked in parallel at the top level of the network. In that case, there will be no concentrator icon in the Level 0 position.
FIG. 4 shows a network topology with seven concentrators 46 connected together at the top level in parallel. The common connection may be, for example, a coaxial cable 48 which forms a "backbone" of the network. In this case, no single concentrator occupies the Level 0 position of the display, as can be seen in the display 50 of FIG. 5. Display 50 reflects this topology by leaving Level 0 empty and places the concentrators linked in parallel in Level 1. Note that the linkage bars carry a top tag with the description "??-??" to reflect the fact that there are no concentrators located above the Level 1 concentrators.
One of the network concentrators is connected to the personal computer which functions as the Network Management Control Console NMCC. As shown in FIGS. 3 and 5, the concentrator which is connected to the NMCC is designated on the display screen with a small icon 51 which resembles a personal computer.
As previously noted, the Network Management Control Console (NMCC) provides a user interface for monitoring and controlling network operations. The interface is a menu-driven display running in a Microsoft Windows environment.
Menu selections are made using a selection technique which is standard to Microsoft Windows pull-down menus. First, the mouse cursor 38 is placed over the name of the desired selection on the menu bar. As can be seen in FIGS. 3 and 5, the menu bar 52 is located at the top of the display and includes selections or functions "FAULT", "CONFIGURATION", "PERFORMANCE", "SECURITY", and "LOG".
The mouse cursor 38 is positioned over the desired selection or function on the menu bar 52 and the primary mouse button is actuated. This action causes a submenu to be displayed. For example, if the "FAULT" function is selected, the submenu 53 shown in FIG. 6 is displayed immediately below the main menu selection. As can be seen, there are seven subfunctions or selections in the "FAULT" submenu. A particular subfunction is selected by dragging the mouse with the primary button still actuated. This action causes sequential subfunctions in submenu 53 to be highlighted. When the desired subfunction is highlighted, the mouse button is released thereby selecting the subfunction. A check mark (not depicted) will appear in the display next to the selected subfunction and will remain there until another function/subfunction is chosen.
The next step is to select a target object for the previously-selected function or subfunction. This is accomplished by positioning the mouse cursor over the target object on the screen. For example, if it is desired to monitor message traffic for a particular concentrator in the network, the DIAGNOSTIC subfunction depicted in FIG. 6 is selected. Next, the mouse cursor is positioned over the target object, such as the concentrator icon 34b in FIG. 3. In particular, the cursor is positioned within the identifier button 43 of icon 34c which contains the concentrator identifier "000081001001". The primary mouse button is then actuated, thereby causing a pop-up window to appear on a portion of the display depicting a detailed view of the front panel of the selected concentrator.
FIG. 7 is an exemplary pop-up detailed view window 56 which is displayed when a concentrator is selected. Window 56 occupies a relatively small portion of the display screen, and the position of the window on the display can be changed as desired. The image which appears in window 56 represents the physical appearance of the front panel of the actual concentrator represented by icon 43.
The concentrator image 56 shows that the concentrator includes thirteen plug-in modules the front panels of which are represented by image sections 60a-60m. The modules depicted are exemplary only and it is possible to interchange modules and delete modules as required by the local area network. Each concentrator must include at least one Network Management Module NMM. If one or more modules are deleted, one or more empty concentrator slots will be depicted.
Concentrator image section 60a depicts the front panel of a primary Network Management Module NMM of the concentrator which is shown to be located in the leftmost position in the actual concentrator. This position is referred to as slot 1 of the concentrator. Image section 60b depicts the front panel of another type of Network Management Module NMM which functions as a backup in the event the primary NMM fails and is located in slot 2. Image sections 60c-60l depict internetworking and host module front panels located in slots 3 through 12, respectively. Finally, image section 60m depicts the front panel of the power supply for the concentrator.
FIG. 8A is an enlarged view of the Network Management Module image section 60a. The image includes a depiction of the front panel mounting screws 62a. Also depicted is the inserter/extractor bar 62b and the model designation "3314M-ST" at location 62c. The model designation represents the Model 3314M-ST Network Management Module. The model designation indicates that the NMM, and hence the concentrator in which the NMM is installed, is a Model 3000 rather than a Model 1000.
Image section 60a also depicts three light emitting diodes (LEDs) labeled "STA", "PAR" and "NMC" at location 62d. LED "STA" represents a green LED which is illuminated when the associated module is functioning properly. Should the module lose power or experience another type of monitored function failure, the LED will be turned off. The image of the "STA" will be a rectangle filled with green to indicate that the actual LED is illuminated and will be filled with black when the LED is off.
The LED labeled "PAR" is yellow and is illuminated (shown in yellow) when the NMM has been disconnected or partitioned from the concentrator backplane. If the module is partitioned, the backup NMM depicted in image section 60b will function as the primary NMM. In the event there is no backup NMM, a partition of the NMM will cause the entire concentrator to be removed from the network.
Image section 60a further depicts a pair of fiber optic connectors at location 62e which are standard ST-type bayonet connectors. The top connector is for receiving data and the bottom connector is for transmitting data. The two connectors function together as a port for interconnecting concentrators. The concentrators can also be interconnected by way of host module ports if the concentrator is Model 3000 type concentrator. Two LEDs are depicted at location 62f, with a yellow LED labeled "P" being illuminated when the port has been partitioned or disconnected. The LED labeled "L" is a green link status indicator which is illuminated when the receiving terminal of the port is connected to a transmitting device in another module. The LED "L" will not be illuminated (will be shown in black) if the transmit and receive optical cables are reversed.
Section 62g includes a depiction of seven LEDs, two of which are labeled "ONL" and "P/S". The LED labeled "P/S" is green and indicates whether the NMM is acting as the primary or secondary NMM; it is illuminated when the NMM is in the primary mode. The LED labeled "ONL" is green and indicates when the NMM is on line. The "ONL" LED image flashes when the NMM has not received software downloaded from the CCA and is illuminated steadily when the software download has succeeded. The top three LEDs relate to network traffic. In the actual module, the top LED, which is yellow, is illuminated for 250 ms when a collision is detected in the concentrator. The second from top LED, which is green, is illuminated while data are present in the concentrator. The third from top LED is green and is illuminated for 250 ms after each data transmission. The fourth from the top LED is a green microprocessor fault indicator which shows the status of the microprocessor in the NMM.
A nine pin male type DB-9 connector is depicted at location 62g of section 60a which functions as a service port. Location 62j includes a circular element which represents a microprocessor reset button on the actual module. Location 62k includes a rectangular element which represents a switch which allows termination of the connector at image section 62i. Finally, locations 62h and 62i represent type RJ45 female connectors for connection to unshielded twisted pair (UTP) cable. The connector at location 62h is for a serial port for out-of-band communication between concentrators (NMMs) and the connector at location 62i is for connection to an internal modem for out-of-band communication. Out-of-band communication is communication separate from the primary network communication paths and may be carried, for example, over a telephone line.
FIG. 8B is an enlarged view of image section 60b of FIG. 7 which depicts the backup or secondary Network Management Module. The image is substantially identical to image section 60a with a few minor exceptions. At location 64a, the designation "3314-ST" appears which represents the Model 3314-ST Network Management Module. The Model 3314-ST includes an additional connector, the image of which is depicted at area 64b. The connector is a DB-25 type connector and functions as a standard RS-232 serial port for out-of-band connection to a telephone network. The telephone network can be used to communicate out-of-band with the NMM in lieu of the standard network (CSMA/CD) communication path.
The image section 60c of FIG. 8C depicts an internetworking module. The designation "3323" of area 66a of the image indicates that the module is Model 3323. The module is a local bridge which functions to interconnect two local area networks of the same type. The local bridge will only pass traffic that originates in one segment of the network and is intended for the other network segment.
Front panel image 60c includes a region 66c which depicts a fifteen pin D type female connector for connecting the module to an Attachment Unit Interface (AUI) device. An AUI is a standard logical, electrical and mechanical interface for connecting Data Terminal Equipment (DTE) such as a personal computer, server and the like to the network. Region 66d includes ten LEDs which provide static and dynamic status conditions of the internetworking module.
Image section 60f of FIG. 7 is shown enlarged in FIG. 8D, which depicts the front panel of a host module. The upper region includes the designation "3304-ST" which indicates that the module is a Model 3304-ST host module. The image depicts a total of six ports for connection to up to six DTEs. As shown in region 68b, each port is represented by the image of two ST-type bayonet optical fiber connectors, with the top connector for receiving data and the bottom connector for transmitting data. The designations "P" and "L" and the image of two LEDs in region 68c correspond to the image and designations in regions 62e and 62f of the image section 60a of FIG. 8A.
FIG. 8E is an enlargement of the image section 60d of FIG. 7. The designation "3302" at region 70a of the image indicates that the depicted module is a Model 3302 host module. A total of six ports are depicted for connection to up to six DTEs. As shown in section 70c, each port is represented by an image of a nine pin type DB-9 connector for connecting to a shielded twisted pair (STP) cable.
Region 70b includes the standard LEDs "STA", "PAR" and "NMI". In addition, the region includes six pairs of LEDs, with one pair associated with each of the six ports. The LED labeled "P" in each pair is yellow and indicates, when illuminated (shown in yellow), that the associated port has been partitioned or disconnected from the network. The LED labeled "L" is green and shows the link status. If the port is connected to a compatible transceiver or network interface card, the LED is illuminated (shown in green). If the port is not connected, the LED turns off (shown in black) and an autopartition takes place, automatically partitioning the port so that the associated LED "P" will turn on.
An enlargement of image section 60g is shown in FIG. 8F. The designation "3305" at region 72a of the image section indicates that the module is a Model 3305 host module. The host module has twelve ports depicted by twelve images of a connector. The image of the connector associated with the first port is at region 72c. The image is of a standard RJ-45 modular female connector for connection to unshielded twisted pair (UTP) cable. There are twelve pairs of LEDs depicted in region 72b, with one pair of LEDs associated with each of the twelve ports. The function of the twelve LED pairs labeled "P" and "L" is the same as the LED pairs bearing a similar label in region 70b of FIG. 8E.
The image 60m of the front panel of the concentrator power supply (FIG. 7) also includes an image of a status LED which is illuminated (shown in green) when the power supply is producing the specified voltages.
There are various specific objects in the concentrator image which a user can select using the mouse pointer. In doing so, the user selects the object to which the previously-selected function or subfunction, as depicted in FIG. 6, will be applied.
There are three categories of concentrator objects which can be selected, namely the overall concentrator, a particular module in the concentrator, or a particular port of a particular module.
The overall concentrator object is selected by placing the mouse cursor 38 over the image 60a of the primary network management module NMM of the concentrator of the detailed view concentrator image 56 of FIG. 7 and actuating the primary mouse button. Assuming that the subfunction DIAGNOSTIC had been previously selected, diagnostic information regarding the entire concentrator will be displayed. This includes, for example, specific data packet errors, such as alignment errors, in the concentrator. Data packets are a form of data structure for Ethernet communication and alignment errors are errors which occur when a received frame does not contain an integer number of bytes. A frame is a packaging structure for Ethernet data and control information.
If a particular module is to be selected, the mouse cursor 38 is positioned over the top portion of the desired module. Note that selecting the primary network management module is equivalent to selecting the entire concentrator, as previously noted. When this occurs a small window formed from dashed lines automatically appears thereby indicating to the user that the cursor has been positioned in an active image location. As can be seen in the upper portion of image 60d of FIG. 7, a window 74 will enclose the image of the model number and the status LEDs for the module if the mouse cursor is positioned in that image area and the secondary button actuated. If the primary mouse button is then actuated, the module is selected as an object. Thus, if the subfunction is DIAGNOSTIC, diagnostic information regarding the module represented by image section 60d will be displayed. Such information may include, for example, packet alignment errors received by all ports on the module.
If a particular port is desired, the mouse cursor is positioned over the image of the port. A window will then appear, such as window 75 (FIG. 7) associated with port number 6 of the module depicted by image section 60f. As another example, if port number 1 of the module depicted by image section 60l is selected, a window 76 will appear as shown. If the user then actuates the primary mouse button, diagnostic information regarding the selected port will be displayed. Such information includes, for example, packet alignment errors received by the port.
FIG. 9 shows an image 78 of a different style of concentrator, referred to as the Model 3030, which can accommodate up to four plug-in modules. Again, image 78 depicts the physical appearance of the front panel of the actual concentrator. Image section 80c shows the power supply section of the concentrator, with two status LEDs bearing the designations "Power" and "FAN" being depicted. The "Power" LED is shown green when the D.C. power is at the proper voltages. The "Fan" LED is shown yellow in the event the fan speed falls below a minimum rate of rotation.
The four exemplary plug-in modules are mounted horizontally in the Model 3030 concentrator, as can be seen in FIG. 9. Image section 80b represents a Model 3314M-ST Network Management Module which is the same module depicted in image section 60a in FIG. 7. Image sections 80c, 80d and 80e of FIG. 9 depict Model 3314-ST, 3304-ST and 3305 modules which are the same modules depicted in image sections 60b, 60f and 60g, respectively, of FIG. 7. The user can select the entire concentrator, a particular module (slot) and a particular port using a mouse in the same manner previously described in connection with the image shown in FIG. 7.
A functional block diagram of a typical concentrator 28 is shown in FIG. 10. As previously described, each concentrator includes a chassis which receives several plug-in modules which are inserted in adjacent concentrator slots. Each module is provided with a rear connector which engages a common concentrator backplane 82 which is located along the entire rear portion of the concentrator. Backplane 82 includes a set of electrical connections which form a CSMA/CD bus 84. The backplane further includes several electrical connections which form a control bus 86. The CSMA/CD bus is similar to an Ethernet network coaxial cable which carries conventional 10 Megabit per second Manchester encoded digital signals of the type distributed throughout the entire network. The control bus 86 of the concentrator backplane enables the concentrator Network Management Module 88 to communicate with the other modules in the concentrator, including host modules 90a through 90b (only two host modules are shown).
Each of the host modules 90a-90b has one or more ports which can be connected to Data Terminal Equipment DTE 91 as shown in FIG. 10. When a particular DTE transmits data over the network, the CSMA/CD signal is received by a port and is transferred by the host module associated with the port to the CSMA/CD bus 84 of backplane 82. NMM 88 receives the CSMA/CD signal by way of data steering logic (DSL) represented by block 90. The logic is capable of receiving data from CSMA/CD bus 84 and transmitting data onto the bus. The other modules in the concentrator do not receive the CSMA/CD data at this time.
The received CSMA/CD data are transferred to a repeater and retiming unit (RRU) represented by block 92. As is well known, the RRU retransmits the CSMA/CD data. In doing so, it is necessary to retime the data to account for the distortion inherent in the transmission link.
The RRU 92 transfers the retimed CSMA/CD data back to the data steering logic DSL 90 which places the CSMA/CD signal back on bus 84. The other modules in the concentrator are configured to receive the repeated CSMA/CD data. The repeated CSMA/CD data are also transferred to the network by way of media dependent adapter MDA represented by block 94. The output of the MDA on line 102 will connect the CSMA/CD signal to the appropriate transmission media, such as fiber optic cable, unshielded twisted pair (UTP) or shielded twisted pair (STP). Line 102 can be connected to another concentrator of the network so that the entire network will receive the CSMA/CD data. Other concentrators can also be connected to the subject concentrator by way of the host module ports.
Each module in the concentrator is connected to the control bus 86 of the backplane by way of a Network Management Interface (NMI), including NMI 106 in the Network Management Module (NMM) 88 and NMIs 107 in the various host modules 90a-90b.
The NMM 88, which includes a central processor unit and associated memory, as will be described, monitors and controls the other modules in the concentrator by way of control bus 86. Typically, the other modules in the concentrator are not required to contain any form of processor, with the "intelligence" of the concentrator management function residing almost exclusively in the NMM 88.
A Network Management Control Console NMCC 93 is connected to port 2 of host module 90b. As previously explained, the NMCC is typically a personal computer having a graphic user interface 93b and a control console adapter CCA 93c in the form of an expansion card located in the computer.
The NMM can control the various modules either in response to commands received by the NMM over the network, which originate from the Network Management Control Console NMCC 93, or in response to commands originating in the NMM itself. For example, the user at the NMCC can command the NMM in a particular concentrator to reset the entire concentrator or reset a particular module or port. The RESET subfunction depicted in FIG. 6 is first selected by the user. Next, the expanded view image of the desired concentrator is displayed as depicted in FIG. 7 or FIG. 9. The user can then select the desired object including the entire concentrator, a particular module or a particular port using the mouse. The previously-described LEDs on the concentrator modules will indicate in some predetermined manner whether the reset command was successful.
The user can also use the NMCC to initiate a loopback test wherein a test packet is transmitted over the network to a selected concentrator and the concentrator is instructed to transmit the test packet back to the NMCC. The user first selects the "LOOP BACK" subfunction shown in FIG. 6. Next, the user selects the concentrator to be tested. The NMCC then verifies communication between itself and the selected concentrator by transmitting a data packet to the concentrator which is echoed back by the concentrator to the NMCC.
The NMCC 93 can also monitor the status of a particular concentrator, module or port. The user first selects the STATUS subfunction as depicted in FIG. 6. Next, the user selects the object using the expanded view of the concentrator as shown, for example, in FIG. 7 or 9. A status report regarding the selected object is then displayed in a pop-up window. If the selected object is a concentrator, the report will indicate, among other things, whether the retiming unit, such as RRU 92 in FIG. 10, is functioning properly. If the selected object is a module, the status report will indicate whether the module power supply is functioning properly, whether the module has been enabled or disabled, and whether the Network Management Interface NMI, such as NMI 107 in FIG. 10, is functioning properly. If a port has been selected, the status report will indicate whether the port is active or has been partitioned. Other information can be included in the status reports, if desired.
A functional block diagram of the Network Management Interface NMI 107 for interfacing the host modules to the control bus 86 of the concentrator backplane 82 is shown in FIG. 11. The NMI 107 includes a state machine for transferring network management interface data between the host module and the control bus 86. Typically, the NMM issues a command by way of the NMM Network Management Interface NMI 106 to one of the host modules over the control bus and the recipient host module transmits a response back to the NMM over the control bus.
Block 92 represents a sequential state machine which transfers eight bits of interface data between the host module and the control bus. State machine 92 is preferably a gate array although other types of circuitry could be used. A bidirectional buffer 94 is connected to eight lines of the control bus 86 through eight pins (not depicted) on the rear electrical connector of the host module. The eight bidirectional lines 96 each carry one of the eight bits of Network Management Interface Data (NMIDAT).
Lines 96 are connected to the input/output of the bidirectional buffer 94. Another set of buffer input/output lines are connected to the state machine by way of eight bidirectional data lines 98. Data are transferred through buffer 94 either from the control bus 86 to the state machine or from the state machine to the control bus, depending on the status of the direction line 100 controlled by the state machine.
Various timing and control signals are produced by the Network Management Module, NMM, on three lines, collectively designated by numeral 102. Timing and control signals DENL, RD/WRL and DAT/CMDL are used for transferring the interface data on lines 96 between the NMM and the host modules over the control bus 86.
Each slot of the concentrator, which is capable of receiving a plug-in module, is assigned a unique slot identification number. The four bits of slot identification (Slot ID) on lines 104 are produced by hard wiring (strapping) selected ones of four connector pins of the slot to either high or low logic levels. The four slot identification bits are transferred to the state machine 92 on four lines designated by the numeral 104. The state machine 92 uses the slot identification bits to decode commands from the NMM transmitted over the control bus which are directed to the particular module which is inserted in the identified slot.
The four bits of slot identification are also transferred to programmable array logic device (PAL) 106. PAL 106 also receives signals from each of the twelve ports of the modules over twelve separate lines designated by the numeral 108. If a particular port is active (receiving data over the network), the appropriate one of the twelve lines will go to a logic high level. If the particular module in the slot has fewer than twelve ports, not all of the lines will be used. PAL 106 includes circuitry for encoding the signals from the ports and producing a four bit port identification code on four lines designated by the numeral 112. The code uniquely identifies a particular port receiving data over the network.
When one of the port activity lines 108 is active, PAL 106 will produce the appropriate port identification on lines 112 and will also transmit the slot identification code IDOUT on four lines 110. As will be described later, this information is received by the NMM and eventually transferred to the Network Management Control Console NMCC and is used to automatically generate the topology of the network.
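Functionally, the port-activity encoding performed by PAL 106 can be approximated by the following Python sketch. The actual device is combinational logic; the function name and argument layout here are hypothetical and serve only to illustrate the slot and port identification described above.

    def encode_port_activity(port_activity_lines, slot_id):
        # port_activity_lines: twelve booleans, one per port activity line 108.
        # slot_id: the four-bit slot identification strapped on lines 104.
        # Returns the codes driven onto lines 110 (IDOUT) and 112, or None if idle.
        for port_number, active in enumerate(port_activity_lines, start=1):
            if active:
                return slot_id & 0xF, port_number & 0xF
        return None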
State machine 92 receives eight bits of module identification code on lines 134 and four bits of module revision code on line 132. The identification code and revision code identify the module model number and revision number. The codes are produced in the module by hard wiring the appropriate lines in the module itself to either a logic low or logic high signal. The model and revision identification codes are transferred to the NMM, when requested by the NMM, over data lines 96. This information is eventually forwarded by the NMM to the Network Management Control Console and is used to produce the expanded view images, such as depicted in FIGS. 7 and 9.
As previously described, the Network Management Control Console NMCC can command the concentrator, modules and ports to provide certain status information. The NMM in the appropriate concentrator receives the commands over the network and issues commands to particular modules over the control bus 86 in response to the NMCC command or in response to commands originating in the NMM itself.
One command requests the status of a particular module, with the command containing the slot identification number or address of the module. The state machine of the Network Management Interface 107 in the host module receives and decodes the command and provides the requested status information. This information includes the module identification and module revision (module type information) originating on lines 132 and 134. Lines 130 carry four status bits which can also be provided to the NMM. One of the four status bits is the state of the status LED located at the top of the front panel of the majority of the plug-in modules, as shown in region 62d of FIG. 8A. The green status LED is illuminated (depicted in green) if the module is powered and if other basic module functions are proper.
The NMM can also issue commands in response to the Network Management Control Console to reset the module as previously described. If the state machine detects a reset command on the control bus 86 directed to the module, the state machine will produce a reset signal on line 128.
A module disable command can be issued by the NMM to disconnect or partition the entire module from the network. As previously described, this command can be issued by the NMM in response to a command originating from the Network Management Control Console. When this command is detected by the state machine on the control bus 86, a disable signal is produced on line 126 and the module is partitioned. As will be described later, the other commands can be used to disable or partition individual ports.
The NMM can also issue a watchdog activity pulse command. When this command is received, a watchdog signal is produced on line 124. The signal is typically used to reset a count-down timer on the module. The NMM is programmed to periodically issue the watchdog command at a sufficient frequency so as to prevent the counter from counting down completely. If the counter does count down completely, this is typically an indication of a fault condition. If this condition is detected, the NMM is reset.
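The watchdog mechanism amounts to a count-down timer that is periodically reloaded by the watchdog pulse. A minimal Python sketch, assuming a hypothetical initial count and method names, is:

    class WatchdogTimer:
        def __init__(self, initial_count=1000):      # count value is illustrative only
            self.initial_count = initial_count
            self.count = initial_count
            self.fault = False

        def watchdog_pulse(self):
            # Issued by the NMM often enough that the counter never reaches zero
            # during normal operation.
            self.count = self.initial_count

        def tick(self):
            # Advanced by the module's local clock.
            if self.count > 0:
                self.count -= 1
                if self.count == 0:
                    self.fault = True                # expiry is treated as a fault condition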
The state machine 92 is also capable of detecting commands which are unique to the particular type or kind of module. For example, a command can be issued which will disable or partition a particular port. Since the number of ports on host modules differ, it is necessary to construct unique port partition commands for different module types. Also, different types of status commands can be transmitted by the NMM requesting status data unique to a particular type of module, such as the status of various LEDs located on the front panel of the module.
Eight bidirectional data lines represented by numeral 114 are connected to the state machine 92 for providing data to support the unique commands. Data are read out of the state machine on lines 114 when the read/write signal RD/WR on line 118 is high. Data can be transferred to the machine if the signal on line 118 is low.
The five lines designated by the numeral 116 are network management function NMFC decodes which are produced in response to various unique commands received by the state machine. The decodes are used to control logic circuits on the module in a predetermined manner, depending upon the type of command and the module type. The decodes are ten bits, with the signal HBEN on line 120 indicating when the five bits on lines 116 are the high or most significant bits. The signal LBEN on line 122 indicates when the five bits on lines 116 are the low or least significant bits.
As previously described, the network topology is displayed at the Network Management Control Console NMCC 93. The topology is produced automatically and is updated automatically in the event the network configuration is altered.
FIG. 12 is a block diagram of one exemplary network depicting the topology of the network. The exemplary network includes a total of eight interconnected concentrators 100, 102, 104, 106, 108, 110, 112 and 114. Each concentrator is provided with a Network Management Module. The network is monitored and controlled by a Network Management Control Console NMCC (not depicted) which can be a part of a DTE associated with any one of the network concentrators.
The depicted network includes a mix of two types of concentrators including both Models 1000 and 3000. As previously described, the Model 1000 is less flexible than the Model 3000 type. The Model 1000 includes an NMM which can distinguish only two ports. There is an Up Port which refers to the connection of the NMM to the Media Dependent Adapter (MDA). There is the Down Port which collectively refers to all of the ports on the host modules of the concentrator. The Model 1000 NMM cannot distinguish between different host module ports.
The Model 3000 concentrator includes the capability of distinguishing between the individual host module ports and can identify the particular port number and slot number on which a message is received on the network. The operation of this feature was previously described in connection with the Network Management Interface (NMI) of FIG. 11. The Model 1000 NMMs can be connected Up Port to Up Port, but Down Port to Down Port connections are not allowed. The Model 3000 NMMs are more flexible and can be connected by way of any of the ports of the concentrator.
The Control Console Adapter (CCA) of the Network Management Control Console (NMCC) is responsible for building and maintaining the topology of the network. The CCA interacts with all of the Network Management Modules over the network through a sequence of protocol frames referred to as Protocol Data Units (PDUs).
The CCA also exchanges messages with the User Interface (UI) 93b (FIG. 10) of the personal computer of the Network Management Control Console. The User Interface (UI) includes the Windows graphic user interface. The CCA communicates with the User Interface by way of a Memory Resident Driver (MRI) on the personal computer and converts the PDU formats to and from the User Interface.
Each of the Network Management Modules (NMMs) includes a processor and associated memory, as will be described. The NMM processor executes code in a local RAM which is downloaded from the CCA of the NMCC to each of the NMMs.
The sequence for producing the initial network topology will now be described. Initially, the Control Console Adapter (CCA) transmits a Protocol Data Unit (PDU) over the network to each of the network NMMs. The PDU, called Loadserver Ready, is transmitted every five seconds and indicates to the NMMs that the CCA is ready to download code to each of the NMMs.
After power up, all of the NMMs listen for the Loadserver Ready message. Once the message is received, the NMMs request the CCA to download their respective code.
Following download, all of the NMMs transmit what is referred to as a Hello PDU or message over the network. The Hello message is transmitted with a predefined group of multicast addresses. Each NMM in the network should receive the Hello message which contains the address of the NMM which originated the message. The Hello message also contains information regarding the model and revision number of the originating NMM as described in connection with FIG. 11 (lines 134, 132). This latter information will be used to distinguish between Model 1000 and 3000 NMMs (and concentrators).
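The start-up exchange between the CCA and an NMM can be summarized by the following Python sketch. The Loadserver Ready and Hello PDU names follow the description above, but the "DOWNLOAD_REQUEST" and "DOWNLOAD_COMPLETE" labels and the callable arguments are hypothetical simplifications.

    def nmm_startup(incoming_pdus, send_pdu, my_address, model, revision):
        # incoming_pdus: iterator of PDU type strings heard on the network.
        # send_pdu: callable used to transmit a PDU onto the network.
        for pdu in incoming_pdus:
            if pdu == "LOADSERVER_READY":
                # Broadcast by the CCA every five seconds; request our code.
                send_pdu(("DOWNLOAD_REQUEST", my_address))
            elif pdu == "DOWNLOAD_COMPLETE":
                # After the download, announce ourselves to every other NMM with a
                # multicast Hello carrying our address, model and revision number.
                send_pdu(("HELLO", my_address, model, revision))
                return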
Each concentrator should receive a Hello message from each of the other concentrators in the network. A Model 3000 has the capability of monitoring the particular port over which the Hello message is received, as previously described. Model 1000 concentrators can only distinguish between Up Port and Down Port messages.
Each NMM maintains an internal list or table of the port-slot and NMM address for each of the received Hello messages. If the NMM is a Model 1000, the "port-slot" will be either Up Port or Down Port. For example, for the configuration shown in FIG. 12, concentrator 106, having NMM address "05", has direct connections to three other concentrators. Concentrator 112, having address 06, is connected to concentrator 106 at slot 3, port 2 (3-2). Concentrator 114, having address 08, is connected to slot 6, port 1 (6-1) of concentrator 106. Finally, concentrator 100, having address 03, is connected to slot 4, port 2 (4-2) of concentrator 106.
Once concentrator 106 has received a Hello message originating from each of the other concentrators in the network, it will construct an NMM list. The list will have a total of seven entries since there are seven other concentrators in the network. The NMM list will reflect that a Hello message from concentrator address 06 was received over slot 3, port 2 (3-2), and that another Hello message originating from concentrator address 08 was received over slot 6, port 1 (6-1). Finally, the list should further reflect that Hello messages originating from the remaining five concentrators were received on slot 4, port 2 (4-2) of concentrator 106.
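The NMM list kept by each concentrator can be modeled as a mapping from slot-port to the set of NMM addresses heard over that slot-port. The following Python sketch, with hypothetical names, reproduces the entries described above for concentrator 106 (address 05).

    from collections import defaultdict

    def build_nmm_list(received_hellos):
        # received_hellos: iterable of (slot_port, source_address) pairs; slot_port is
        # e.g. "3-2" for a Model 3000 or "Up"/"Down" for a Model 1000.
        nmm_list = defaultdict(set)
        for slot_port, source_address in received_hellos:
            nmm_list[slot_port].add(source_address)
        return nmm_list

    # Hello messages as concentrator 106 (address 05) would record them:
    heard = [("3-2", "06"), ("6-1", "08"), ("4-2", "03"),
             ("4-2", "01"), ("4-2", "02"), ("4-2", "04"), ("4-2", "07")]
    # build_nmm_list(heard) -> {"3-2": {"06"}, "6-1": {"08"},
    #                           "4-2": {"03", "01", "02", "04", "07"}}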
FIG. 13 is an NMM List table containing the NMM list for all of the concentrators of the FIG. 12 network. The left column, the "Reporting Concentrator Address" column, contains the address of each of the reporting concentrators. The middle column, the "Slot-Port" column, contains the slot and port over which Hello messages were received. The right column, the "Concentrators Heard" column, contains the concentrator address which indicates the address of the concentrator which originated the Hello message.
The NMM entry in the NMM List table for concentrator 106, which has address 05, indicates that Hello messages originating from the remaining seven concentrators were received over three separate ports (3-2, 6-1 and 4-2), as previously described. This information is collected and stored in concentrator 106, to be eventually forwarded to the CCA 93c. Similarly, concentrator 104, having address 01, received all seven messages over a single port, namely slot-port 2-1. This information is also collected and stored in concentrator 104 to be forwarded to the CCA. The other concentrators collect and store similar information as can be seen in the NMM List table of FIG. 13.
Each concentrator has only limited information concerning the topology of the network. For example, concentrator 106 (address 05), can ascertain that it received Hello messages from five concentrators over slot-port 4-2. However, the concentrator is unable to ascertain the actual path taken over the network by the Hello messages. For example, concentrator address 05 cannot ascertain that the Hello message originating from concentrator 104 (address 01) was forwarded by concentrator 100 (address 03) instead of taking some other path, such as a direct path.
Although the NMM list of a particular concentrator does not contain sufficient information to create the network topology, the total information in the NMM tables created by each of the concentrators does contain sufficient information. As will be described, this information is collected from the NMMs by the CCA in a format such as the FIG. 13 NMM List table to create the overall network topology.
The CCA monitors the various Hello messages transmitted by each NMM in the network. The CCA creates a table, such as the FIG. 16 Ancestor Table, which initially only includes the NMM addresses of all of the concentrators in the Network. The NMM addresses are used to allocate memory for creating the Ancestor Table. The CCA also determines which NMM and associated concentrator will become the root of the topology tree. The NMM List table will also contain information (not depicted) regarding the particular type of NMM so that the CCA can distinguish between Model 1000 and 3000 NMMs.
FIG. 14 is a block diagram of the process after the Hello PDUs have been received by the CCA which contain the addresses of the reporting concentrators. The diagram illustrates the manner in which the CCA obtains information from the concentrators to complete the Link Table of FIG. 13. Element 118 of FIG. 14 represents the block request transmit process where the CCA transmits a separate message N1-Nn, represented by lines 122, to each of the concentrators in the network 120 for which a Hello PDU was received. The messages request that the concentrators provide the CCA with the NMM list associated with the concentrator.
Each of the messages of the block request is referred to as a Get NMM List message and the messages are transmitted sequentially over the network, preferably back-to-back. The concentrators in the network 120 respond to the Get NMM List messages by transmitting back to the CCA an NMM List PDU, represented by lines 124. The NMM List PDU contains the NMM List of the reporting concentrator. As previously described, the list contains the addresses of the concentrators heard by the reporting concentrator and the slot-port over which the messages were received.
The NMM List PDUs may be received and processed by the CCA, as represented by block 126, in any order. It is possible that some of the concentrators will not respond to the Get NMM List PDU for some reason. In that case the returned messages Nr will be fewer than the transmitted messages N1-Nn. As indicated in block 126, the CCA monitors the number of responses by incrementing an NMM List received counter.
The CCA will monitor the received NMM Lists. If the CCA fails to receive an NMM List from one or more concentrators, the CCA will wait a predetermined period of time. Once the period of time has expired, as represented by element 128, the CCA will issue a further block request comprising Get NMM List PDUs separately directed to the concentrators which have not responded or whose messages have been dropped in the CCA or dropped somewhere in the network. This is a connectionless protocol, so delivery of a message is not assured. The message could have been involved in a collision, for example. This process is repeated a maximum of five times, as indicated by element 130. If the maximum number of retries is exceeded, an error message is issued as shown by block 132.
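The block request and retry behavior of FIG. 14 can be sketched in Python as follows. The helper callables send_get_nmm_list and poll_responses are hypothetical stand-ins for the Get NMM List transmission and the response collection; only the retry structure reflects the description above.

    MAX_RETRIES = 5

    def collect_nmm_lists(addresses, send_get_nmm_list, poll_responses):
        # addresses: NMM addresses learned from the Hello PDUs.
        received = {}
        outstanding = set(addresses)
        for attempt in range(MAX_RETRIES):
            for addr in sorted(outstanding):        # block request, back-to-back
                send_get_nmm_list(addr)
            received.update(poll_responses())       # wait a predetermined period of time
            outstanding = set(addresses) - set(received)
            if not outstanding:
                break
        if outstanding:                             # maximum number of retries exceeded
            print("error: no NMM List received from", sorted(outstanding))
        return received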
A determination is then made, as indicated by element 136, as to whether the CCA has received an NMM List from all of the concentrators in the network which are to be included in the topology. The process carried out by the CCA utilizing the NMM List data is represented by the flow diagram of FIG. 15. The results of the process are shown in the Ancestor Table of FIG. 16, as will be explained.
Each NMM sends its link information to the CCA in the format given in FIG. 13. The link data are also known as the NMM List and are arranged by slot-port group. For example, in FIG. 13, the NMM with address 05 sends three groups of link data, one group associated with slot-port 3-2, another associated with slot-port 6-1, and the last associated with slot-port 4-2. The CCA processes each list using the following method. The CCA searches through each slot-port group for the presence of the root NMM, which had been previously selected. If the Root NMM exists in that group, the slot-port number is entered in the "Own-Slot-Port" field of the Ancestor Table, such as the entry for the NMM at concentrator address 05 in FIG. 16. However, if the Root NMM is not present in the slot-port group, the CCA accesses the "Ancestor" field of each group member in FIG. 16 and enters the address and slot-port numbers of the reporting NMM. That is, the reporting NMM becomes one of the ancestors for NMMs existing in slot-port groups not containing the Root NMM.
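A minimal Python sketch of this per-list processing is given below. The dictionary layout standing in for the Ancestor Table of FIG. 16 is hypothetical; the logic follows the method just described, with the level count incremented each time an ancestor is recorded.

    def process_nmm_list(reporting_addr, nmm_list, root_addr, ancestor_table):
        # nmm_list: {slot_port: set of addresses heard}, as reported by one NMM.
        # ancestor_table: {addr: {"own_slot_port": None, "ancestors": [], "level": 0}}
        for slot_port, heard in nmm_list.items():
            if root_addr in heard:
                # The root was heard over this slot-port: record it for the reporter.
                ancestor_table[reporting_addr]["own_slot_port"] = slot_port
            else:
                # Every NMM in this group reaches the root through the reporter, so
                # the reporter becomes one of its ancestors and its level increases.
                for addr in heard:
                    ancestor_table[addr]["ancestors"].append((reporting_addr, slot_port))
                    ancestor_table[addr]["level"] += 1
        # The list can then be discarded; only the Ancestor Table is retained.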
The CCA processes these lists on-the-fly, and discards the data. There is no requirement to save individual NMM link data, which can exceed 1200 bytes for a large network.
Block 138 of FIG. 14 is executed after all NMM lists are received and processed by the CCA. This block checks the Ancestor Table of FIG. 16 and resolves the immediate parent-child relationships of each NMM. The parent of an NMM is the ancestor of the NMM whose level is one lower than that of the NMM. Note that the Root NMM does not have any ancestors.
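The parent resolution of block 138 can likewise be sketched in a few lines of Python, again using the hypothetical table layout of the previous sketch.

    def resolve_parents(ancestor_table):
        # The parent of an NMM is the ancestor whose final level value is exactly
        # one less than the NMM's own level; the root NMM has no ancestors.
        parents = {}
        for addr, entry in ancestor_table.items():
            for ancestor_addr, slot_port in entry["ancestors"]:
                if ancestor_table[ancestor_addr]["level"] == entry["level"] - 1:
                    parents[addr] = (ancestor_addr, slot_port)
                    break
        return parents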
As previously described, the topology depicted in the Network Management Control Console NMCC display includes various concentrators connected in an inverse tree hierarchy. The highest level of the hierarchy, Level 0, contains a single root concentrator. Level 1 contains the concentrators in the next highest level, and so forth.
It should be noted that there is not necessarily a unique topology display for a particular network configuration. For example, concentrator 100 shown in FIG. 12 is shown at the top of the network hierarchy but the topology could be drawn with another concentrator as the root concentrator.
The CCA selects one of the concentrators to be the root concentrator before requesting the NMM Lists from the concentrators.
If the network is comprised exclusively of Model 1000 NMMs, the CCA selects the concentrator with the largest number of Down Port links as the root concentrator. This will also be the one concentrator with no Up Port links, with one exception. It is possible that more than one concentrator having a Model 1000 is connected by way of the Up Port. For example, in FIG. 17, there are three concentrators, 116a, 116b and 116c, connected by way of the Up Port link. In that event none of the concentrators will be selected as the root. A dummy concentrator will be assigned the root position (a virtual root) and the actual concentrators 116 will be assigned to the Level 1 and lower hierarchy levels.
If the network includes only concentrators having Model 3000 NMMs, any of the concentrators can be selected as the root. However, the CCA selects the concentrator with the maximum number of links.
Assuming that the network includes concentrators having both Model 1000 and 3000 NMMs, a concentrator having a Model 1000 NMM, with no other concentrator having a Model 1000 NMM linked to the Up Port, will be selected as the root. Although any concentrator having a Model 3000 NMM located on the Up Port side of the highest level Model 1000 NMM could also be selected, this information is not available to the CCA from the network.
In the present example, concentrator 102 having a concentrator address 02 is selected as the root of the network because it is the only concentrator having a Model 1000 NMM at this early stage of the processing.
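The root selection rules described above can be summarized in the following Python sketch. The nmm_info structure is a hypothetical summary of what the CCA learns from the Hello PDUs; returning None corresponds to assigning a dummy (virtual) root.

    def select_root(nmm_info):
        # nmm_info: {addr: {"model": 1000 or 3000,
        #                   "up_port_1000_neighbors": int,   # Model 1000s heard on the Up Port
        #                   "down_links": int, "links": int}}
        model_1000s = {a: i for a, i in nmm_info.items() if i["model"] == 1000}
        if model_1000s:
            # Prefer a Model 1000 with no other Model 1000 linked to its Up Port.
            candidates = [a for a, i in model_1000s.items()
                          if i["up_port_1000_neighbors"] == 0]
            if not candidates:
                return None      # several Model 1000s joined Up Port to Up Port: virtual root
            return max(candidates, key=lambda a: nmm_info[a]["down_links"])
        # All Model 3000s: any concentrator could serve; take the one with the most links.
        return max(nmm_info, key=lambda a: nmm_info[a]["links"])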
Returning to the FIG. 15 flow chart, once the root concentrator is selected, one block of entries is taken from the NMM List of FIG. 13 as indicated by element 146 of the chart. An entry is defined herein as the address of a particular concentrator heard over a particular slot-port of the reporting concentrator. A block of entries means all of the concentrator addresses heard over a particular slot-port. For example, in the NMM List of FIG. 13, reporting concentrator address 03 has three blocks of entries. The first block includes the addresses of the concentrators heard over slot-port 2-1, namely addresses 02, 07 and 04.
Assume that the first block of entries for concentrator 03 has been obtained. The next step is to determine whether the block contains the address of the previously-selected root concentrator, as indicated by element 148.
Each reporting concentrator will receive messages from all of the other concentrators, including the root concentrator. The block of entries for slot-port 2-1 of concentrator address 03 includes root concentrator address 02. As indicated by element 150 in the flow chart, the slot-port over which the reporting concentrator receives messages from the root concentrator is added to the Root column of the FIG. 16 Ancestor Table. Thus, entry "2-1" is added to the Root column for reporting concentrator address 03.
Once the slot-port has been added, a determination is made as to whether all blocks have been processed or updated, as shown by element 160. In the present case, only a single block has been processed; therefore the next block of entries from the Link Table of FIG. 13 is processed, as represented by blocks 162 and 146.
The next block includes one entry, address 01, which indicates that the reporting concentrator address 03 received messages from concentrator address 01 over slot-port 3-2. This information indicates that reporting concentrator address 03 must be used by concentrator address 01 to communicate with the root concentrator. In other words, reporting concentrator address 03 is an "ancestor" of concentrator address 01. If concentrator 01 is connected directly to concentrator 03, concentrator addresses 03 and 01 have a "parent"-"child" relationship, respectively. At this point in the processing, there is only enough information to indicate that an ancestor relationship exists between the two concentrators.
As shown by block 152, the address of the entry is obtained, address 01, so that the appropriate location in the Ancestor Table of FIG. 16 is located. Next, the address of the reporting concentrator, 03, is entered in the Ancestor column associated with concentrator 01, thereby indicating that concentrator address 03 is an ancestor of concentrator address 01. The slot-port showing the connection to the ancestor concentrator, 3-2, is added to the slot-port column of the Table.
The Level column in the Ancestor Table indicates the level of the associated concentrator in the network hierarchy. The level is initially set to zero for all of the concentrators. Only the root concentrator will remain at Level 0.
Since concentrator address 01 is below concentrator 03 in the hierarchy of the network, it is known that the concentrator address 01 cannot be at Level 0. The value of the level entry for concentrator address 01 is increased by one, as indicated by block 156. Eventually, the level for address 01 will be increased to two, as shown in the Table.
Next, a determination is made as to whether all the entries for the block have been processed or updated as shown by element 158. In the present example, there was only one entry in the second block. Accordingly, the next block of entries for reporting concentrator address 03 is obtained. This block contains three entries, including concentrator addresses 05, 06 and 08 which transmitted messages received by reporting concentrator 03 over slot-port 4-1.
This block does not contain the root concentrator address 02 (element 148). Accordingly, the address for the first entry of the block, address 05, is read so that the appropriate location in the Ancestor Table is found. Next, the reporting address 03 is entered in the Ancestor column of the Table together with the slot-port 4-1. The level number for address 05 is increased by one (block 156) and a determination is made as to whether all entries for the block have been processed (element 158).
Since the block contains two additional entries, addresses 06 and 08, these entries are then processed. Concentrator address 03 is entered as an ancestor to concentrator 06 together with the slot-port 4-1 (block 154) and the level is increased by one. Concentrator address 03 is then entered as an ancestor for concentrator address 08 together with slot-port 4-1. Again, the level number for address 08 is increased by one.
Once all of the entries for the block have been processed (element 160), the next block of entries is obtained (block 146). There is one block of entries associated with reporting concentrator 04 in the NMM List. Since the block contains the root concentrator address 02, the slot-port 3-1 is entered in the Root column of the Ancestor Table (block 150). All blocks of entries for this reporting concentrator have then been processed (element 160); therefore, the next block is obtained.
The process is repeated for the three blocks of entries for reporting concentrator addresses 01, 08 and 06. In each case, the block contains the root concentrator so the slot-port over which the root concentrator is heard is added to the "Own Slot-Port" column of the Ancestor Table. Thus, slot-ports 2-1, 3-1 and 2-6 are inserted in that column for reporting concentrators 01, 08 and 06, respectively.
Reporting concentrator address 05 in the NMM List includes three blocks of entries. The process is repeated with concentrator address 05 being inserted in the Ancestor Table as an ancestor to concentrators 06 and 08 together with slot-ports 3-2 and 6-1, respectively (block 154). The level numbers for the two addresses are also increased by one (block 156).
The next entry for reporting concentrator 05 indicates that the root concentrator address 02 is heard by concentrator address 05 over slot-port 4-2. That entry is made (block 150) in the Root column of the Ancestor Table at concentrator address 05.
The next block of entries is for root concentrator address 02. By definition, all concentrators are descendants of the root concentrator. Accordingly, the root concentrator address 02 will be inserted as an ancestor for each of the seven other concentrators, together with the associated slot-ports. Since the root concentrator is a Model 1000 concentrator, the slot-port will be either Down or Up. In addition, the level for each of the concentrators, other than the root concentrator, will be increased by one.
The last block of entries is for concentrator address 07. The block contains the root concentrator, so the slot-port, 4-7, is entered (block 150) in the Root column of the Ancestor Table. As indicated by element 160, all blocks of the Link Table will have been processed or updated. Accordingly, the information in the Link Table is no longer needed and the Table is discarded, as represented by block 164.
Information in the FIG. 16 Ancestor Table is then used to create a display of the actual network topology. The Level value in the Table will have been increased each time an ancestor is added so that the final value accurately reflects the level of the concentrator in the network. It is then possible to distinguish parent concentrators from other ancestor concentrators. For example, concentrator address 01 has two ancestors and is, therefore, at Level 2. The ancestors include addresses 03 and 02. Concentrator address 03 is at Level 1 and address 02, the root concentrator, is at Level 0. Accordingly, concentrator 03 is the parent concentrator of concentrator address 01.
The information in the Ancestor Table, together with the information regarding the model number of the concentrator, is then forwarded to the User Interface of the Network Management Control Console to create the network topology display. A somewhat simplified display topology image is depicted in FIG. 18 which is generally designated by the numeral 137. The topology image is based upon the exemplary network of FIG. 12. Note that the actual display will show only two levels of concentrators, rather than the three levels depicted.
Level 0 shows the icon for the root concentrator address 02. The icon indicates that there are three levels of concentrators below Level 0 and that there are a total of seven concentrators below.
The Level 1 icons depict the concentrator addresses 03, 04 and 07. The linkage symbol for address 03 indicates that the concentrator is connected from port 1, slot 2 of the Model 3000 concentrator to the Backplane (BK PL) of the Model 1000 root concentrator at address 02. The up link for address 03 is from a particular port, port 1, of the module located in slot 2 of the concentrator. The up link could also have been made from the Network Management Module (NMM) of the concentrator. Since an NMM has only a single port, only the slot number of the NMM in the concentrator would be depicted in the linkage symbol.
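A drastically simplified, text-only stand-in for such a display can be produced from the same ancestor table, reusing the parent_of sketch above. The rendering below is purely illustrative; the actual User Interface draws icons and linkage symbols rather than indented text.

    def print_topology(table, root):
        # Group each concentrator under its parent, then print the tree with
        # the slot-port (or backplane/NMM designation) of each up link.
        children = {}
        for addr in table:
            parent, up_port = parent_of(addr, table)
            if parent is not None:
                children.setdefault(parent, []).append((addr, up_port))

        def walk(addr, indent):
            for child, up_port in sorted(children.get(addr, [])):
                print(" " * indent + f"{child}  (up link: {up_port})")
                walk(child, indent + 4)

        print(f"{root}  (root concentrator)")
        walk(root, 4)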
The CCA will automatically update the Network Topology Table of FIG. 16 should the configuration of the network change. For example, if a concentrator is added by connecting it to slot-port 4-8 of concentrator address 07 of FIG. 12, the presence of the new concentrator would be detected and the topology display updated.
As previously described, the CCA monitors the Hello PDUs from every NMM in the network. If three consecutive Hello PDUs are missed from a particular NMM, the NMM is treated as missing and a warning message is sent to the User Interface for display.
Similarly, if three consecutive Hello PDUs are missed from a particular NMM of a concentrator, the NMM is aged out of the NMM list in the Link Table. The NMM is also issued a Kill NMM PDU by the other NMMs. If the NMM of the concentrator responds, it will transmit what is called a New Hello PDU and will rejoin the network as a new concentrator. This procedure ensures that the CCA can detect any changes in concentrator connectivity.
An NMM in the network will transmit a Hello PDU in any of three situations. First, a Hello PDU will be transmitted after the NMM has received downloaded code from the CCA. Second, an NMM will issue a Hello PDU if the NMM fails to receive three consecutive CCA Loadserver Ready PDUs which, as previously noted, indicate to the NMMs that the CCA is ready to download code to the NMM. Finally, an NMM will transmit the Hello PDU after it receives a Kill NMM PDU.
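The aging rule can be expressed compactly. The sketch below assumes a per-NMM counter of consecutive missed Hello PDUs maintained by the CCA; the constant, class, and method names are hypothetical and only illustrate the three-miss rule described above.

    MISSED_HELLO_LIMIT = 3

    class HelloTracker:
        """Tracks consecutive missed Hello PDUs per NMM, as kept by the CCA."""

        def __init__(self):
            self.missed = {}                 # NMM address -> consecutive misses

        def hello_received(self, nmm_addr):
            self.missed[nmm_addr] = 0        # any Hello PDU resets the count

        def hello_missed(self, nmm_addr):
            # Called for each Hello interval in which no Hello PDU arrived.
            self.missed[nmm_addr] = self.missed.get(nmm_addr, 0) + 1
            if self.missed[nmm_addr] >= MISSED_HELLO_LIMIT:
                self.age_out(nmm_addr)

        def age_out(self, nmm_addr):
            # The NMM is treated as missing: a warning goes to the User
            # Interface, the NMM is aged out of the NMM list, and the other
            # NMMs issue a Kill NMM PDU (not modeled in this sketch).
            print(f"warning: NMM {nmm_addr} missed {MISSED_HELLO_LIMIT} "
                  "consecutive Hello PDUs")
            self.missed.pop(nmm_addr, None)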
When the CCA detects a New Hello PDU, it commences an update of the network topology. If the number of new concentrators is relatively small, less than ten, the CCA updates each NMM location in the topology tree by using a binary search method. First, the CCA compiles a table of new concentrators. The CCA then requests the root concentrator for its NMM list.
If a new NMM is present in the root NMM list, the NMMs in the next level down are requested to provide the associated NMM list. This process is repeated until the NMM list from the reporting concentrator indicates the presence of only the new NMM over that port.
If more than ten new NMMs have been added to the network, it is more efficient to simply rebuild the topology as previously described.
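Under the assumptions below (hypothetical helper callables for querying an NMM list, looking up the known child hub on a port, and rebuilding the topology from scratch), the update policy described above might be sketched as follows. The ten-concentrator cutoff follows the "less than ten" rule in the text; everything else is illustrative.

    REBUILD_THRESHOLD = 10

    def locate_new_nmm(new_addr, root, request_nmm_list, child_on_port):
        # Walk down the existing tree from the root until a hub reports the
        # new NMM as the only NMM heard over one of its ports; that hub and
        # port are the new concentrator's attachment point.
        hub = root
        while True:
            nmm_list = request_nmm_list(hub)      # port -> addresses heard
            port = next(p for p in nmm_list if new_addr in nmm_list[p])
            if nmm_list[port] == [new_addr]:
                return hub, port                  # direct attachment found
            hub = child_on_port(hub, port)        # descend one level

    def update_topology(new_nmms, root, request_nmm_list, child_on_port,
                        rebuild_topology):
        if len(new_nmms) >= REBUILD_THRESHOLD:
            return rebuild_topology()             # cheaper to rebuild outright
        return {addr: locate_new_nmm(addr, root, request_nmm_list, child_on_port)
                for addr in new_nmms}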
The Network Management Modules (NMMs) and the Control Console Adapter (CCA) can be implemented in a variety of ways to carry out the previously described functions. Since the particular details regarding the construction of the NMM and the CCA form no part of the present invention, the NMM and the CCA will only be briefly described.
FIG. 19 is a functional block diagram of the Network Management Module (NMM). The NMM includes a microprocessor 166, such as a microprocessor manufactured by NEC having the designation V35. The microprocessor includes a core 168 and service port 170. A serial port 172 is provided for connecting the NMM to a modem for out of band (external to the network) control of the NMM.
The microprocessor is coupled to a multiplexed address output bus 176 and a data bus 178. An address buffer 177 is provided for storing the upper byte of address so that the full address can be used for dynamic random access memory (DRAM) 180, programmable read only memory (PROM) 182 and static random access memory (SRAM) 184.
PROM 182 is a boot memory and DRAM 180 holds the code for the individual NMMs which is downloaded by the CCA to the NMM over the network. SRAM 184 is used to hold frames of the data either received from the network or to be transmitted over the network.
Block 192 collectively represents miscellaneous control circuitry, including SRAM access control, data bus control, memory I/O decodes, various counters, special functions and various glue logic.
The data and address busses are coupled to a conventional network interface controller (NIC) 188, such as the controller marketed by National Semiconductor under the designation DP8390/NS32490. The NIC 188 is for interfacing with CSMA/CD type local area networks such as Ethernet. The NIC functions to receive and transmit data packets to and from the network and includes all bus arbitration and memory support logic on a single chip.
The NIC has a single bus 189 which serves both as an address bus and a data bus. Latch 186 is used to hold the addresses for the NIC and addresses from the NIC. A data buffer 179 functions to isolate the data bus 178 of the processor from the NIC bus 187 and also functions to provide some bus arbitration features.
The NMM further includes a serial network interface (SNI) 190. SNI 190 can be an interface device marketed by National Semiconductor under the designation DP8391/NS32491. The CSMA/CD data on the network and back plane bus 84 are Manchester encoded, and the SNI performs Manchester encoding and decoding functions.
The SNI receives the encoded data from the data steering logic (DSL) 90 which receives the CSMA/CD data from the CSMA/CD backplane bus 84 for forwarding to SNI 190. In addition, DSL 90 transmits CSMA/CD data from the SNI to the CSMA/CD bus. The data steering logic DSL 90 responds to and controls a status line 93 which is used to control the mode of the NMM. The NMM has various modes of operation: it can act as the primary NMM for the concentrator, act as a backup NMM for the concentrator, or operate in a partition mode. In the partition mode the NMM can be isolated (partitioned) from the CSMA/CD bus 84 or isolated (partitioned) from the NMM up port connection in response to commands from the CCA.
The NMM further includes a repeater and retiming unit (RRU) 92, as previously explained, which repeats the CSMA/CD data received on bus 84 by way of the data steering logic 90. The RRU also retimes the data for transmission back to the CSMA/CD bus 84 and for transmission on the NMM Up Port by way of a medium dependent adapter (MDA) 198. MDA 198 also receives CSMA/CD data from the Up Port connection for retransmission on the CSMA/CD bus 84.
The data bus 178 is also connected to the Network Management Interface (NMI) 106 which provides the interface between the NMM and the backplane control bus. As previously described in connection with the description of FIGS. 10 and 11, the NMI 106 cooperates with the NMIs 107 in the host modules.
The NMI 106 of the NMM provides two functions. First, the NMI 106 is implemented to have the same circuitry as the NMI 107 used in the host modules as depicted in FIG. 11. This circuitry is used to provide backup NMM status, similar to host module status, to the primary NMM. Second, the NMI 106 is implemented to provide control signals for control bus 86. The control bus delivers commands sent by the NMM to the host module and further provides the response from the host module back to the NMM.
As previously described, the NMM can issue either control commands or status commands to a particular host module in the concentrator. The specific module (slot location) will decode the command and take action. The commands transmitted over the control bus from the NMM NMI 106 and the responses from the host module NMI 107 are carried on eight lines of the bus which carry the signals NMIDAT (FIG. 11). NMI 106 also produces the three control signals RD/WRL, DAT/CMDL and DENL which are used by the host module NMI 107 for receiving the commands and transmitting the response back to the NMM NMI 106.
The control console adapter CCA is similar to the NMM as can be seen in the functional block diagram of FIG. 20. Functional elements common to the CCA and NMM are designated with the same numerals. The CCA is functionally an intelligent Ethernet card. The serial network interface SNI 190 is connected to a fifteen pin electrical connector 191. The interface of the CCA with the network is simpler than the network interface of the NMM since the NMM must function with a CSMA/CD bus 84 whereas the CCA need only connect to a single network port. The connector 191 is typically coupled to an off card transceiver or media access unit (MAU) (not depicted) by way of an off card attachment unit interface (AUI).
The data bus 179 of the CCA is connected to personal computer (PC) I/O circuitry 183. I/O circuitry 183 is coupled to the PC bus 97 and to the graphical user interface UI (the windows interface) of the PC by way of a memory resident interface (MRI) which is not depicted.
Although the CCA provides processing capability, it is possible to use a conventional Ethernet card with the processing function of the CCA being carried out by the processor and associated memory in the personal computer of the NMCC.
The User Interface in the Network Management Control Console also generates the expanded view of a particular concentrator front panel, as shown in FIGS. 7 and 9 utilizing data provided by the individual NMMs. Each NMM is capable of determining the configuration of the concentrator in which it is located by way of the control bus 86 (FIG. 10) of the concentrator back plane. As previously described, the NMM can ascertain which slots in the concentrator are occupied and, if occupied, the model and revision of the module which is located in the slot.
When the user selects a particular concentrator to be displayed in the expanded view window, the CCA requests status information regarding the concentrator from the associated NMM. Data regarding the model type and revision of each module in the concentrator, including the reporting NMM, and the location of the modules in the concentrator, are forwarded to the CCA. The bit mapped graphics are stored in the personal computer data base of the NMCC. The appropriate bit map data can then be used to produce the concentrator image utilizing a conventional windows graphical user interface having graphics capability, such as Microsoft Windows.
As previously explained, the various modules have one or more LEDs which provide miscellaneous status information. The expanded view image of the concentrator will reflect the actual status of an LED by either displaying a particular color area at the LED image location (to show illumination) or by displaying a black area (to show lack of illumination). The CCA will request that the NMM of a particular concentrator issue a command to the host modules in the concentrator, soliciting LED status information. The information is forwarded to the CCA and used by the interface to control the appearance of the LED images to reflect the actual status of the LEDs.
The foregoing can be further illustrated by the flow charts depicted in FIGS. 21A-21C. In FIG. 21A, block 200 indicates that the user first utilizes the mouse to select the particular concentrator to be displayed. Next, block 202 shows that the window graphic user interface creates the expanded view process/window.
The User Interface UI then obtains the module type data from the data attached to the concentrator child window as shown by block 204. The UI then obtains from its own data base, using the module type and revision number, the detailed module data including, for example, data for depicting the indicia of module model number, LED location, port location, and so forth, such as depicted in FIGS. 7 and 9.
The graphics bit map for the concentrator image is loaded and displayed on the NMCC screen, as indicated by block 208. Finally, a message is sent to the expanded view process to update itself with LED status information thereby concluding the process, as shown by elements 210 and 212.
The FIGS. 21B and 21C flow charts relate to the LED status update sequence. Block 214 of FIG. 21B indicates that a message is received by the expanded view process to update itself with LED status. The NMCC then transmits an LED STATUS PDU over the network to the appropriate concentrator (block 216). The concentrator NMM will respond with 32 bits of encoded data for each module in the concentrator. Each bit of the data is capable of indicating the status of an LED in the module, with a "1" indicating that the LED is illuminated and a "0" indicating that the LED is off.
Block 220 of FIG. 21C indicates that the LED status data are received by the NMCC from the NMM of the concentrator over the network. The rectangular locations in the bit mapped graphics for the LED are then filled with the appropriate color (or black) based upon the received data thereby concluding the update (elements 222 and 224).
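The per-module encoding lends itself to a simple decoding loop. In the sketch below the colors and the fill_rectangle callback are assumptions; only the 32-bits-per-module, one-bit-per-LED convention comes from the description above.

    LED_ON_COLOR = "green"       # the text only says "a particular color"
    LED_OFF_COLOR = "black"

    def paint_led_status(status_words, fill_rectangle):
        # status_words: one 32-bit integer per module in the concentrator;
        # bit n set to 1 means LED n of that module is illuminated.
        for slot, word in enumerate(status_words):
            for bit in range(32):
                lit = (word >> bit) & 1
                fill_rectangle(slot, bit, LED_ON_COLOR if lit else LED_OFF_COLOR)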
Thus, a novel apparatus and method have been disclosed to generate the topology of a network and to display an image of a concentrator which accurately depicts the state of the concentrator. While the invention has been described in some detail it is to be understood that those skilled in the art can make certain changes in the actual implementation without departing from the spirit and scope of the invention as defined in the following claims.

Claims (31)

We claim:
1. Apparatus for automatically determining the topology of a local area network of interconnected hubs which utilize contention control, with individual hubs having at least three data ports, each of which is for coupling the hub in a star configuration to either a data terminal device or another hub in the local area network, said apparatus comprising:
(1) transmit means at each of the hubs for transmitting hub messages over the local area network, said transmit means including
(a) originate means for transmitting said hub messages which originate at an associated hub and contain an identifying address of said associated hub;
(b) repeat means for transmitting said hub messages received by said associated hub over the local area network which originated from other ones of said hubs of the network, said repeat means comprising a timing unit for retiming data to account for transmission distortion;
(2) port identifying means at each of the hubs for identifying which of said data ports of said associated hub has received which of said hub messages transmitted by other of said hubs of the local area network;
(3) control means coupled to said local area network for receiving topology data reported from each of said hubs, said topology data reported for each data port of a particular reporting hub, said topology data identifying a particular one of said data ports of said particular reporting hub and said topology data identifying addresses associated with other hubs which originated network messages received by said particular reporting hub over said particular one of said data ports; and
(4) processing means for determining the overall topology of the local area network by utilizing and combining said received topology data from each of said reporting hubs.
2. The apparatus of claim 1 wherein said originate means periodically transmits one of said hub messages originating at said associated hub.
3. The apparatus of claim 1 wherein said control means receives said topology data in response to topology request messages which the control means transmits over the local area network to the hubs.
4. The apparatus of claim 3 wherein said control means transmits separate ones of said topology request messages to each of said hubs.
5. The apparatus of claim 4 wherein said control means monitors which of said hubs has responded to said topology request message and transmits additional topology request messages directed to any of the hubs for which a response in said topology data is not received by said control means.
6. The apparatus of claim 1 wherein each of said hubs includes a plurality of modules, with each of said modules having at least one of said data ports and wherein said hub includes monitoring means for identifying a particular one of said modules and a particular data port of said modules over which said hub has received said hub messages originating from other one of said hubs in the local area network.
7. The apparatus of claim 6 wherein said modules are of different types having varying capabilities and wherein said monitoring means is also a means for identifying said type of module and wherein said topology data further includes type data indicative of the type of modules in said particular reporting hub.
8. The apparatus of claim 7 wherein said monitoring means identifies a particular one of said modules and a particular one of said ports by determining a physical location of said module in said hub.
9. The apparatus of claim 8 wherein said hub includes a chassis having an electrical backplane for interconnecting said modules and said modules may be inserted in said chassis in any one of predetermined locations along said backplane, with said monitoring means determining said physical location of said module in said hub by sensing the predetermined location where said modules are inserted.
10. In a communication network arrangement having a plurality of communication hubs coupled together wherein each of said hubs contains a plurality of communication ports for coupling with other hubs of said plurality of hubs, an apparatus for determining a topology of said communication network, said apparatus comprising:
circuitry for transmitting a plurality of first identifying messages, each of said first identifying messages originating from an originator hub and destined to other of said plurality of hubs, each of said plurality of first identifying messages comprising an address of said originator hub;
logic for recording into a link identification dataset for said each hub, a list associating individual ports of said each hub with addresses of originator hubs that originated identifying messages that were received over said individual ports; and
logic for generating said topology of said communication network based on said link identification datasets, said logic for generating said topology comprising logic for analyzing said identification datasets to construct an ancestor dataset comprising a topology level number and an ancestor list for individual hubs.
11. An apparatus for determining a topology of said communication network as described in claim 10 further comprising link transmit circuitry for transmitting said link identification dataset from said each hub to said logic for generating said topology.
12. An apparatus for determining a topology of said communication network as described in claim 11 wherein said link transmit circuitry is initiated in response to an activation message from a control console adapter coupled to said communication network.
13. In a communication network arrangement having a plurality of communication hubs coupled together wherein individual hubs contain a plurality of communication ports for coupling with other hubs of said plurality of hubs, a computer implemented method for determining a topology of said communication network, said method comprising the steps of:
transmitting a plurality of first identifying messages, each of said first identifying messages originating from an originator hub and destined to other of said plurality of hubs, each of said plurality of first identifying messages comprising an address of said originator hub;
recording into a link identification dataset for said each hub, a list associating each particular port of said each hub with addresses of originator hubs that originated identifying messages that were received over said each particular port; and
generating said topology of said communication network based on link identification datasets of said plurality of hubs, said step of generating said topology comprising the further step of analyzing said identification datasets to construct an ancestor dataset comprising a topology level number and an ancestor list for said each hub.
14. A method for determining a topology of said communication network as described in claim 13 further comprising the step of transmitting, for said each hub, said link identification dataset to said step of generating said topology.
15. A method for determining a topology of said communication network as described in claim 13 wherein said step of transmitting is initiated in response to an activation message from a control console adapter coupled to said communication network.
16. In a communication network arrangement having a plurality of communication hubs coupled together wherein individual hubs contain a plurality of coupling ports for coupling with other of said plurality of hubs, an apparatus for determining a topology of said communication network, said apparatus comprising:
means for transmitting a plurality of first identifying messages, each of said first identifying messages originating from an originator hub and destined to other of said plurality of hubs, each of said plurality of first identifying messages comprising an address of said originator hub;
means for receiving said plurality of first identifying messages over particular ports of said hubs;
means for recording, into a link identification dataset for each hub, a list associating particular ports of said each hub with addresses of originator hubs that originated identifying messages that were received over said particular ports of said hubs; and
means for generating said topology of said communication network based on said link identification datasets comprising:
means for receiving an individual link identification dataset from individual hubs of said communication network;
means for determining a root hub of said individual hubs of said communication network;
means for constructing an ancestor table based on each link identification dataset received, said ancestor table including, for each individual hub, a topology level number and a list of ancestor hubs; and
means for constructing a topology of said communication network based on said ancestor table and said root hub.
17. An apparatus for determining a topology of said communication network as described in claim 16 wherein said means for recording comprises means for generating an individual link identification dataset for individual hubs, of said plurality of hubs.
18. An apparatus for determining a topology of said communication network as described in claim 16 wherein said means for generating said topology comprises:
means for receiving said link identification dataset;
means for determining a root hub of said plurality of hubs;
means for constructing an ancestor table based on said link identification dataset, said ancestor table including, for individual hubs, a topology level and a list of ancestors; and
means for constructing a topology of said communication network based on said ancestor table.
19. In a communication network arrangement having a plurality of communication hubs coupled together wherein each of said hubs contains a plurality of communication ports for coupling with other hubs of said plurality of hubs, an apparatus for determining a topology of said communication network, said apparatus comprising:
(1) means for constructing a link dataset for each hub of said plurality of hubs, said means for constructing a link dataset operable within said each hub and comprising:
(a) means for generating identification messages from said each hub and to other of said plurality of hubs, each identification message comprising an origination address specifying said each hub that originated said identification message;
(b) means for receiving identification messages that originated from other hubs through particular ports of said each hub; and
(c) means for recording a listing, for said each hub, associating port numbers of said particular ports with origination addresses of identification messages that were received over said particular ports of said each hub; and
(2) means for generating said topology of said communication network based on each link dataset constructed comprising:
(a) means for generating an ancestor table based on said transmitted link datasets wherein said ancestor table comprises, for said each hub, an ancestor list and a topology level number; and
(b) means for generating said topology of said hubs of said communication network based on said ancestor table.
20. An apparatus for determining a topology of said communication network as described in claim 19 where said means for constructing a link dataset and said means for transmitting said link dataset are operable within a network management module of said each hub.
21. An apparatus for determining a topology of said communication network as described in claim 19 wherein said means for generating said topology is operable within a control console adapter of said communication network.
22. In a communication network arrangement having a plurality of communication hubs coupled together wherein each of said hubs contains a plurality of coupling ports for coupling with other of said plurality of hubs, a computer implemented method for determining a topology of said communication network, said method comprising the steps of:
transmitting a plurality of first identifying messages, each of said first identifying messages originating from an originator hub and destined to other of said plurality of hubs, each of said plurality of first identifying messages comprising an address of said originating hub;
receiving said plurality of first identifying messages over particular ports of said hubs;
recording, into a link identification dataset, a list associating particular ports of said each hub with addresses of originator hubs that originated identifying messages that were received over said particular ports of said hubs; and
generating said topology of said communication network based on said link identification datasets by:
(a) receiving an individual link identification dataset from individual hubs of said communication network;
(b) determining a root hub of said plurality of hubs of said communication network;
(c) constructing an ancestor table based on each link identification dataset received, said ancestor table including, for individual hubs, a topology level number and a list of ancestor hubs; and
(d) constructing a topology of said communication network based on said ancestor table and said root hub.
23. A method as described in claim 22 wherein said step of recording is operable within individual hubs of said plurality of hubs.
24. A method for determining a topology of said communication network as described in claim 22 wherein said step of recording comprises the step of generating an individual link identification dataset for individual hubs of said plurality of hubs.
25. In a communication network arrangement having a plurality of communication hubs coupled together wherein each of said hubs contains a plurality of coupling ports for coupling with other of said plurality of hubs, a computer implemented method for determining a topology of said communication network, said method comprising the steps of:
transmitting a plurality of first identifying messages, each of said first identifying messages originating from an originator hub and destined to other of said plurality of hubs, each of said plurality of first identifying messages comprising an address of said originating hub;
receiving said plurality of first identifying messages over particular ports of said hubs;
recording, into a link identification dataset, a list associating particular ports of said each hub with addresses of originator hubs that originated identifying messages that were received over said particular ports of said hubs; and
generating said topology of said communication network based on said link identification datasets by:
(a) receiving said link identification dataset;
(b) determining a root hub of said plurality of hubs; and
(c) constructing an ancestor table based on said link identification dataset, said ancestor table including, for each hub, a topology level and a list of ancestors; and
(d) constructing a topology of said communication network based on said ancestor table.
26. In a communication network arrangement having a plurality of communication hubs coupled together wherein individual hubs contain a plurality of communication ports for coupling with other hubs of said plurality of hubs, a method for determining a topology of said communication network, said method comprising the steps of:
(1) constructing a link dataset for each hub of said plurality of hubs, said step of constructing a link dataset operable within said each hub and comprising the further steps of:
(a) generating identification messages from said each hub that are destined to other of said plurality of hubs, each identification message comprising an origination address specifying a hub that originated said identification message;
(b) receiving identification messages that originated from other hubs through particular ports of said each hub; and
(c) recording a listing, for said each hub, associating port numbers of said particular ports with origination addresses of said identification messages that were received over said particular ports; and
(2) generating said topology of said communication network based on each link dataset by:
(a) generating an ancestor table based on said transmitted link datasets wherein said ancestor table comprises, for said each hub, an ancestor list and a topology level number; and
(b) generating said topology of said hubs of said communication network based on said ancestor table.
27. A method for determining a topology of said communication network as described in claim 26 wherein said step of generating said topology is operable within a control console adapter of said communication network.
28. The method as recited by claim 26 wherein said step of constructing a link dataset and said step of transmitting said link dataset are operable within a network management module of each hub.
29. A method for determining a topology of said communication network as described in claim 26 wherein said step of generating said topology is operable within a control console adapter of said communication network.
30. A method of generating a topology representing a configuration of hubs coupled within a communication network, individual hubs having a plurality of ports, said method comprising the steps of:
generating and sending identification messages from individual hubs to other hubs in said network, said identification messages identifying the generating hub;
receiving, at individual hubs, identification messages generated by other hubs from the network, said identification messages received via ports of said individual hubs;
creating individual listing datasets for individual hubs by associating each port of an individual hub with identifications of all generating hubs that generated identifying messages which were received over said each port;
collecting each listing dataset from individual hubs over the network; and
generating a topology based on each listing dataset by:
(a) creating an ancestor dataset based on each listing dataset for individual hubs, said ancestor dataset comprised of a listing of ancestor hubs and topology level for said individual hubs; and
(b) generating a topology of said network based on said ancestor dataset.
31. A method of generating a topology as described in claim 30 further comprising the step of displaying on a computer display screen a generated topology of said network.
US08/046,405 1990-05-21 1993-04-12 Apparatus and method for automatically determining the topology of a local area network Expired - Lifetime US5606664A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US08/046,405 US5606664A (en) 1990-05-21 1993-04-12 Apparatus and method for automatically determining the topology of a local area network

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US07/526,567 US5226120A (en) 1990-05-21 1990-05-21 Apparatus and method of monitoring the status of a local area network
US08/046,405 US5606664A (en) 1990-05-21 1993-04-12 Apparatus and method for automatically determining the topology of a local area network

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US07/526,567 Division US5226120A (en) 1990-05-21 1990-05-21 Apparatus and method of monitoring the status of a local area network

Publications (1)

Publication Number Publication Date
US5606664A true US5606664A (en) 1997-02-25

Family

ID=24097856

Family Applications (2)

Application Number Title Priority Date Filing Date
US07/526,567 Expired - Lifetime US5226120A (en) 1990-05-21 1990-05-21 Apparatus and method of monitoring the status of a local area network
US08/046,405 Expired - Lifetime US5606664A (en) 1990-05-21 1993-04-12 Apparatus and method for automatically determining the topology of a local area network

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US07/526,567 Expired - Lifetime US5226120A (en) 1990-05-21 1990-05-21 Apparatus and method of monitoring the status of a local area network

Country Status (1)

Country Link
US (2) US5226120A (en)

Cited By (148)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5734824A (en) * 1993-02-10 1998-03-31 Bay Networks, Inc. Apparatus and method for discovering a topology for local area networks connected via transparent bridges
US5742760A (en) * 1992-05-12 1998-04-21 Compaq Computer Corporation Network packet switch using shared memory for repeating and bridging packets at media rate
WO1998018306A2 (en) * 1996-10-28 1998-05-07 Switchsoft Systems, Inc. Method and apparatus for generating a network topology
US5751965A (en) * 1996-03-21 1998-05-12 Cabletron System, Inc. Network connection status monitor and display
US5774669A (en) * 1995-07-28 1998-06-30 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Scalable hierarchical network management system for displaying network information in three dimensions
US5793366A (en) * 1996-11-12 1998-08-11 Sony Corporation Graphical display of an animated data stream between devices on a bus
US5793975A (en) * 1996-03-01 1998-08-11 Bay Networks Group, Inc. Ethernet topology change notification and nearest neighbor determination
US5793362A (en) * 1995-12-04 1998-08-11 Cabletron Systems, Inc. Configurations tracking system using transition manager to evaluate votes to determine possible connections between ports in a communications network in accordance with transition tables
US5805819A (en) * 1995-04-24 1998-09-08 Bay Networks, Inc. Method and apparatus for generating a display based on logical groupings of network entities
US5838904A (en) * 1993-10-21 1998-11-17 Lsi Logic Corp. Random number generating apparatus for an interface unit of a carrier sense with multiple access and collision detect (CSMA/CD) ethernet data network
US5841981A (en) * 1995-09-28 1998-11-24 Hitachi Software Engineering Co., Ltd. Network management system displaying static dependent relation information
US5845062A (en) * 1996-06-25 1998-12-01 Mci Communications Corporation System and method for monitoring network elements organized in data communication channel groups with craft interface ports
US5848243A (en) * 1995-11-13 1998-12-08 Sun Microsystems, Inc. Network topology management system through a database of managed network resources including logical topolgies
US5850397A (en) * 1996-04-10 1998-12-15 Bay Networks, Inc. Method for determining the topology of a mixed-media network
US5864640A (en) * 1996-10-25 1999-01-26 Wavework, Inc. Method and apparatus for optically scanning three dimensional objects using color information in trackable patches
WO1999007111A1 (en) * 1997-07-30 1999-02-11 Sony Electronics, Inc. Method for describing the human interface features and functionality of av/c-based devices
US5887132A (en) * 1995-12-05 1999-03-23 Asante Technologies, Inc. Network hub interconnection circuitry
US5930476A (en) * 1996-05-29 1999-07-27 Sun Microsystems, Inc. Apparatus and method for generating automatic customized event requests
US5949976A (en) * 1996-09-30 1999-09-07 Mci Communications Corporation Computer performance monitoring and graphing tool
US5949818A (en) * 1997-08-27 1999-09-07 Winbond Electronics Corp. Expandable ethernet network repeater unit
US5956665A (en) * 1996-11-15 1999-09-21 Digital Equipment Corporation Automatic mapping, monitoring, and control of computer room components
US5999174A (en) * 1997-07-02 1999-12-07 At&T Corporation Reusable sparing cell software component for a graphical user interface
US6003081A (en) * 1998-02-17 1999-12-14 International Business Machines Corporation Data processing system and method for generating a detailed repair request for a remote client computer system
US6014697A (en) * 1994-10-25 2000-01-11 Cabletron Systems, Inc. Method and apparatus for automatically populating a network simulator tool
US6049828A (en) * 1990-09-17 2000-04-11 Cabletron Systems, Inc. Method and apparatus for monitoring the status of non-pollable devices in a computer network
US6055267A (en) * 1997-10-17 2000-04-25 Winbond Electronics Corp. Expandable ethernet network repeater unit
US6079034A (en) * 1997-12-05 2000-06-20 Hewlett-Packard Company Hub-embedded system for automated network fault detection and isolation
US6088665A (en) * 1997-11-03 2000-07-11 Fisher Controls International, Inc. Schematic generator for use in a process control network having distributed control functions
US6100887A (en) * 1997-12-05 2000-08-08 At&T Corporation Reusable reversible progress indicator software component for a graphical user interface
US6112241A (en) * 1997-10-21 2000-08-29 International Business Machines Corporation Integrated network interconnecting device and probe
US6131119A (en) * 1997-04-01 2000-10-10 Sony Corporation Automatic configuration system for mapping node addresses within a bus structure to their physical location
US6148241A (en) * 1998-07-01 2000-11-14 Sony Corporation Of Japan Method and system for providing a user interface for a networked device using panel subunit descriptor information
GB2349962A (en) * 1999-05-10 2000-11-15 3Com Corp Network supervision with visual display
US6157378A (en) * 1997-07-02 2000-12-05 At&T Corp. Method and apparatus for providing a graphical user interface for a distributed switch having multiple operators
US6205122B1 (en) * 1998-07-21 2001-03-20 Mercury Interactive Corporation Automatic network topology analysis
US6233611B1 (en) 1998-05-08 2001-05-15 Sony Corporation Media manager for controlling autonomous media devices within a network environment and managing the flow and format of data between the devices
US6243411B1 (en) * 1997-10-08 2001-06-05 Winbond Electronics Corp. Infinitely expandable Ethernet network repeater unit
WO2001055832A1 (en) * 2000-01-26 2001-08-02 Vyyo, Ltd. Graphical interface for management of a broadband access network
US6295479B1 (en) 1998-07-01 2001-09-25 Sony Corporation Of Japan Focus in/out actions and user action pass-through mechanism for panel subunit
US6308207B1 (en) * 1997-09-09 2001-10-23 Ncr Corporation Distributed service subsystem architecture for distributed network management
US20010036841A1 (en) * 2000-01-26 2001-11-01 Vyyo Ltd. Power inserter configuration for wireless modems
US20010051512A1 (en) * 2000-01-26 2001-12-13 Vyyo Ltd. Redundancy scheme for the radio frequency front end of a broadband wireless hub
US20010053180A1 (en) * 2000-01-26 2001-12-20 Vyyo, Ltd. Offset carrier frequency correction in a two-way broadband wireless access system
US6339798B1 (en) * 1996-10-25 2002-01-15 Somfy Process for hooking up a group control module with a control module and/or an action module and/or a measurement module
US6349131B1 (en) 1998-07-07 2002-02-19 Samsung Electronics Co., Ltd. Apparatus and method for graphically outputting status of trunk in switching system
US20020024975A1 (en) * 2000-03-14 2002-02-28 Hillel Hendler Communication receiver with signal processing for beam forming and antenna diversity
US20020035460A1 (en) * 2000-09-21 2002-03-21 Hales Robert J. System and method for network infrastructure management
US6381507B1 (en) 1998-07-01 2002-04-30 Sony Corporation Command pass-through functionality in panel subunit
US20020052205A1 (en) * 2000-01-26 2002-05-02 Vyyo, Ltd. Quality of service scheduling scheme for a broadband wireless access system
US20020056132A1 (en) * 2000-01-26 2002-05-09 Vyyo Ltd. Distributed processing for optimal QOS in a broadband access system
US6393483B1 (en) * 1997-06-30 2002-05-21 Adaptec, Inc. Method and apparatus for network interface card load balancing and port aggregation
US20020060695A1 (en) * 2000-07-19 2002-05-23 Ashok Kumar System and method for providing a graphical representation of a frame inside a central office of a telecommunications system
US6404861B1 (en) 1999-10-25 2002-06-11 E-Cell Technologies DSL modem with management capability
US6418070B1 (en) * 1999-09-02 2002-07-09 Micron Technology, Inc. Memory device tester and method for testing reduced power states
US6421069B1 (en) 1997-07-31 2002-07-16 Sony Corporation Method and apparatus for including self-describing information within devices
US6433903B1 (en) 1999-12-29 2002-08-13 Sycamore Networks, Inc. Optical management channel for wavelength division multiplexed systems
US20020138608A1 (en) * 2001-03-23 2002-09-26 International Business Machines Corp. System and method for mapping a network
US20020159511A1 (en) * 2000-01-26 2002-10-31 Vyyo Ltd. Transverter control mechanism for a wireless modem in a broadband access system
US20020165934A1 (en) * 2001-05-03 2002-11-07 Conrad Jeffrey Richard Displaying a subset of network nodes based on discovered attributes
US20020188709A1 (en) * 2001-05-04 2002-12-12 Rlx Technologies, Inc. Console information server system and method
US20020188718A1 (en) * 2001-05-04 2002-12-12 Rlx Technologies, Inc. Console information storage system and method
US6498821B2 (en) 2000-01-26 2002-12-24 Vyyo, Ltd. Space diversity method and system for broadband wireless access
US6502130B1 (en) 1999-05-27 2002-12-31 International Business Machines Corporation System and method for collecting connectivity data of an area network
US6526442B1 (en) * 1998-07-07 2003-02-25 Compaq Information Technologies Group, L.P. Programmable operational system for managing devices participating in a network
US6549949B1 (en) 1999-08-31 2003-04-15 Accenture Llp Fixed format stream in a communication services patterns environment
US6556221B1 (en) 1998-07-01 2003-04-29 Sony Corporation Extended elements and mechanisms for displaying a rich graphical user interface in panel subunit
US6571282B1 (en) 1999-08-31 2003-05-27 Accenture Llp Block-based communication in a communication services patterns environment
US6578068B1 (en) 1999-08-31 2003-06-10 Accenture Llp Load balancer in environment services patterns
US6601192B1 (en) 1999-08-31 2003-07-29 Accenture Llp Assertion component in environment services patterns
US6601234B1 (en) 1999-08-31 2003-07-29 Accenture Llp Attribute dictionary in a business logic services environment
US6601097B1 (en) 2000-01-10 2003-07-29 International Business Machines Corporation Method and system for determining the physical location of computers in a network by storing a room location and MAC address in the ethernet wall plate
US20030154276A1 (en) * 2002-02-14 2003-08-14 Caveney Jack E. VOIP telephone location system
US6614785B1 (en) 2000-01-05 2003-09-02 Cisco Technology, Inc. Automatic propagation of circuit information in a communications network
US6615253B1 (en) 1999-08-31 2003-09-02 Accenture Llp Efficient server side data retrieval for execution of client side applications
US6615293B1 (en) 1998-07-01 2003-09-02 Sony Corporation Method and system for providing an exact image transfer and a root panel list within the panel subunit graphical user interface mechanism
US6636242B2 (en) * 1999-08-31 2003-10-21 Accenture Llp View configurer in a presentation services patterns environment
US6640249B1 (en) 1999-08-31 2003-10-28 Accenture Llp Presentation services patterns in a netcentric environment
US6640238B1 (en) 1999-08-31 2003-10-28 Accenture Llp Activity component in a presentation services patterns environment
US6639900B1 (en) 1999-12-15 2003-10-28 International Business Machines Corporation Use of generic classifiers to determine physical topology in heterogeneous networking environments
US6640244B1 (en) 1999-08-31 2003-10-28 Accenture Llp Request batcher in a transaction services patterns environment
US6646996B1 (en) 1999-12-15 2003-11-11 International Business Machines Corporation Use of adaptive resonance theory to differentiate network device types (routers vs switches)
US6650342B1 (en) 1998-06-30 2003-11-18 Samsung Electronics Co., Ltd. Method for operating network management system in a graphic user interface enviroment and network management system
US6664985B1 (en) 1997-07-02 2003-12-16 At&T Corporation Method and apparatus for supervising a processor within a distributed platform switch through graphical representations
US20030231205A1 (en) * 1999-07-26 2003-12-18 Sony Corporation/Sony Electronics, Inc. Extended elements and mechanisms for displaying a rich graphical user interface in panel subunit
US6697338B1 (en) 1999-10-28 2004-02-24 Lucent Technologies Inc. Determination of physical topology of a communication network
EP1398906A1 (en) * 2002-09-13 2004-03-17 FITEL USA CORPORATION (a Delaware Corporation) Self-registration systems and methods for dynamically updating information related to a network
US20040059850A1 (en) * 2002-09-19 2004-03-25 Hipp Christopher G. Modular server processing card system and method
US6715145B1 (en) 1999-08-31 2004-03-30 Accenture Llp Processing pipeline in a base services pattern environment
US20040073597A1 (en) * 2002-01-30 2004-04-15 Caveney Jack E. Systems and methods for managing a network
US20040090925A1 (en) * 2000-12-15 2004-05-13 Thomas Schoeberl Method for testing a network, and corresponding network
US6742015B1 (en) 1999-08-31 2004-05-25 Accenture Llp Base services patterns in a netcentric environment
US6741568B1 (en) 1999-12-15 2004-05-25 International Business Machines Corporation Use of adaptive resonance theory (ART) neural networks to compute bottleneck link speed in heterogeneous networking environments
US6747878B1 (en) 2000-07-20 2004-06-08 Rlx Technologies, Inc. Data I/O management system and method
WO2004053694A1 (en) * 2002-12-12 2004-06-24 Koninklijke Philips Electronics N.V. Communication system with display of the actual network topology
US6757748B1 (en) * 2000-07-20 2004-06-29 Rlx Technologies, Inc. Modular network interface system and method
US20040153568A1 (en) * 2003-01-31 2004-08-05 Yong Boon Ho Method of storing data concerning a computer network
US6842906B1 (en) 1999-08-31 2005-01-11 Accenture Llp System and method for a refreshable proxy pool in a communication services patterns environment
US20050111491A1 (en) * 2003-10-23 2005-05-26 Panduit Corporation System to guide and monitor the installation and revision of network cabling of an active jack network
US20050114478A1 (en) * 2003-11-26 2005-05-26 George Popescu Method and apparatus for providing dynamic group management for distributed interactive applications
US20050141431A1 (en) * 2003-08-06 2005-06-30 Caveney Jack E. Network managed device installation and provisioning technique
US20050159036A1 (en) * 2003-11-24 2005-07-21 Caveney Jack E. Communications patch panel systems and methods
US6954220B1 (en) 1999-08-31 2005-10-11 Accenture Llp User context component in environment services patterns
US6954437B1 (en) 2000-06-30 2005-10-11 Intel Corporation Method and apparatus for avoiding transient loops during network topology adoption
US20050245127A1 (en) * 2004-05-03 2005-11-03 Nordin Ronald A Powered patch panel
US6970433B1 (en) * 1996-04-29 2005-11-29 Tellabs Operations, Inc. Multichannel ring and star networks with limited channel conversion
US20050267638A1 (en) * 2001-02-12 2005-12-01 The Stanley Works System and architecture for providing a modular intelligent assist system
US20050281190A1 (en) * 2004-06-17 2005-12-22 Mcgee Michael S Automated recovery from a split segment condition in a layer2 network for teamed network resources of a computer systerm
US6985967B1 (en) 2000-07-20 2006-01-10 Rlx Technologies, Inc. Web server network system and method
US6987754B2 (en) 2000-03-07 2006-01-17 Menashe Shahar Adaptive downstream modulation scheme for broadband wireless access systems
US20060023671A1 (en) * 2004-07-28 2006-02-02 International Business Machines Corporation Determining the physical location of resources on and proximate to a network
US20060041660A1 (en) * 2000-02-28 2006-02-23 Microsoft Corporation Enterprise management system
US20060047800A1 (en) * 2004-08-24 2006-03-02 Panduit Corporation Systems and methods for network management
US20060050656A1 (en) * 2002-09-12 2006-03-09 Sebastien Perrot Method for determining a parent portal in a wireless network and corresponding portal device
US20060190587A1 (en) * 2000-06-30 2006-08-24 Mikael Sylvest Network topology management
US7120128B2 (en) 1998-10-23 2006-10-10 Brocade Communications Systems, Inc. Method and system for creating and implementing zones within a fibre channel system
EP1710958A3 (en) * 2005-04-07 2006-10-25 Samsung Electronics Co., Ltd. Method and apparatus for detecting topology of network
US20060262802A1 (en) * 2005-05-20 2006-11-23 Martin Greg A Ethernet repeater with local link status that reflects the status of the entire link
US20070013384A1 (en) * 2003-07-03 2007-01-18 Alcatel Method for single ended line testing and single ended line testing device
US20070055740A1 (en) * 2005-08-23 2007-03-08 Luciani Luis E System and method for interacting with a remote computer
US20070076632A1 (en) * 2005-10-05 2007-04-05 Hewlett-Packard Development Company, L.P. Network port for tracing a connection topology
US20070081469A1 (en) * 2005-10-11 2007-04-12 Sbc Knowledge Ventures L.P. System and methods for wireless fidelity (WIFI) venue utilization monitoring and management
US7289964B1 (en) 1999-08-31 2007-10-30 Accenture Llp System and method for transaction services patterns in a netcentric environment
US20070297349A1 (en) * 2003-11-28 2007-12-27 Ofir Arkin Method and System for Collecting Information Relating to a Communication Network
US7359434B2 (en) 2000-01-26 2008-04-15 Vyyo Ltd. Programmable PHY for broadband wireless access systems
US20090135732A1 (en) * 2007-11-28 2009-05-28 Acterna Llc Characterizing Home Wiring Via AD HOC Networking
US20090154501A1 (en) * 2000-04-25 2009-06-18 Charles Scott Roberson Method and Apparatus For Transporting Network Management Information In a Telecommunications Network
US20090225667A1 (en) * 1997-11-17 2009-09-10 Adc Telecommunications, Inc. System and method for electronically identifying connections of a cross-connect system
US20090265318A1 (en) * 2008-04-21 2009-10-22 Alcatel Lucent Port Location Determination for Wired Intelligent Terminals
US7646722B1 (en) 1999-06-29 2010-01-12 Cisco Technology, Inc. Generation of synchronous transport signal data used for network protection operation
US20100211697A1 (en) * 2009-02-13 2010-08-19 Adc Telecommunications, Inc. Managed connectivity devices, systems, and methods
US20100275146A1 (en) * 2009-04-24 2010-10-28 Dell Products, Lp System and method for managing devices in an information handling system
US20110161877A1 (en) * 2009-12-31 2011-06-30 John Chapra System, method, and computer-readable medium for providing a dynamic view and testing tool of power cabling of a multi-chassis computer system
US20110185012A1 (en) * 2010-01-27 2011-07-28 Colley Matthew D System and method for generating a notification mailing list
US20120066606A1 (en) * 2000-01-21 2012-03-15 Zavgren Jr John Richard Systems and methods for visualizing a communications network
US8832503B2 (en) 2011-03-25 2014-09-09 Adc Telecommunications, Inc. Dynamically detecting a defective connector at a port
US8874814B2 (en) 2010-06-11 2014-10-28 Adc Telecommunications, Inc. Switch-state information aggregation
US9038141B2 (en) 2011-12-07 2015-05-19 Adc Telecommunications, Inc. Systems and methods for using active optical cable segments
US9081537B2 (en) 2011-03-25 2015-07-14 Adc Telecommunications, Inc. Identifier encoding scheme for use with multi-path connectors
US9207417B2 (en) 2012-06-25 2015-12-08 Adc Telecommunications, Inc. Physical layer management for an active optical module
US9380874B2 (en) 2012-07-11 2016-07-05 Commscope Technologies Llc Cable including a secure physical layer management (PLM) whereby an aggregation point can be associated with a plurality of inputs
US9407510B2 (en) 2013-09-04 2016-08-02 Commscope Technologies Llc Physical layer system with support for multiple active work orders and/or multiple active technicians
US9473361B2 (en) 2012-07-11 2016-10-18 Commscope Technologies Llc Physical layer management at a wall plate device
US9497098B2 (en) 2011-03-25 2016-11-15 Commscope Technologies Llc Event-monitoring in a system for automatically obtaining and managing physical layer information using a reliable packet-based communication protocol
US20160349312A1 (en) * 2015-05-28 2016-12-01 Keysight Technologies, Inc. Automatically Generated Test Diagram
US9544058B2 (en) 2013-09-24 2017-01-10 Commscope Technologies Llc Pluggable active optical module with managed connectivity support and simulated memory table
US10153954B2 (en) 2013-08-14 2018-12-11 Commscope Technologies Llc Inferring physical layer connection status of generic cables from planned single-end connection events
US10756984B2 (en) * 2015-04-13 2020-08-25 Wirepath Home Systems, Llc Method and apparatus for creating and managing network device port VLAN configurations
US11113642B2 (en) 2012-09-27 2021-09-07 Commscope Connectivity Uk Limited Mobile application for assisting a technician in carrying out an electronic work order

Families Citing this family (250)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2938104B2 (en) * 1989-11-08 1999-08-23 株式会社日立製作所 Shared resource management method and information processing system
US5301303A (en) * 1990-04-23 1994-04-05 Chipcom Corporation Communication system concentrator configurable to different access methods
JP3159979B2 (en) * 1990-05-01 2001-04-23 株式会社日立製作所 Network management display processing system and method
US5463731A (en) * 1990-06-27 1995-10-31 Telefonaktiebolaget L M Ericsson Monitor screen graphic value indicator system
JPH0727445B2 (en) * 1990-09-04 1995-03-29 インターナショナル・ビジネス・マシーンズ・コーポレイション User interface for computer processor operation
US5559955A (en) * 1990-09-17 1996-09-24 Cabletron Systems, Inc. Method and apparatus for monitoring the status of non-pollable device in a computer network
US5295244A (en) * 1990-09-17 1994-03-15 Cabletron Systems, Inc. Network management system using interconnected hierarchies to represent different network dimensions in multiple display views
US5751933A (en) * 1990-09-17 1998-05-12 Dev; Roger H. System for determining the status of an entity in a computer network
US5768552A (en) * 1990-09-28 1998-06-16 Silicon Graphics, Inc. Graphical representation of computer network topology and activity
US5687313A (en) * 1991-03-14 1997-11-11 Hitachi, Ltd. Console apparatus for information processing system
US5421024A (en) * 1991-04-30 1995-05-30 Hewlett-Packard Company Detection of a relative location of a network device using a multicast packet processed only by hubs
JPH0522556A (en) * 1991-07-11 1993-01-29 Canon Inc Multiple address display method in plural display device
JP3160017B2 (en) * 1991-08-28 2001-04-23 株式会社日立製作所 Network management display device
US5500934A (en) * 1991-09-04 1996-03-19 International Business Machines Corporation Display and control system for configuring and monitoring a complex system
JPH0575628A (en) * 1991-09-13 1993-03-26 Fuji Xerox Co Ltd Network resource monitor system
DE69319757T2 (en) * 1992-01-10 1999-04-15 Digital Equipment Corp Method for connecting a line card to an address recognition unit
US5687315A (en) * 1992-04-16 1997-11-11 Hitachi, Ltd. Support system for constructing an integrated network
US5483467A (en) * 1992-06-10 1996-01-09 Rit Technologies, Ltd. Patching panel scanner
US5355452A (en) * 1992-07-02 1994-10-11 Hewlett-Packard Company Dual bus local area network interfacing system
JP2502914B2 (en) * 1992-07-31 1996-05-29 International Business Machines Corporation Data transfer method and device
US5491808A (en) * 1992-09-30 1996-02-13 Conner Peripherals, Inc. Method for tracking memory allocation in network file server
US5535335A (en) * 1992-12-22 1996-07-09 International Business Machines Corporation Method and system for reporting the status of an aggregate resource residing in a network of interconnected real resources
US5428806A (en) * 1993-01-22 1995-06-27 Pocrass; Alan L. Computer networking system including central chassis with processor and input/output modules, remote transceivers, and communication links between the transceivers and input/output modules
DE4321458A1 (en) * 1993-06-29 1995-01-12 Alcatel Network Services Network management support method and network management facility therefor
US6269398B1 (en) 1993-08-20 2001-07-31 Nortel Networks Limited Method and system for monitoring remote routers in networks for available protocols and providing a graphical representation of information received from the routers
US5594426A (en) * 1993-09-20 1997-01-14 Hitachi, Ltd. Network station and network management system
US5446726A (en) * 1993-10-20 1995-08-29 Lsi Logic Corporation Error detection and correction apparatus for an asynchronous transfer mode (ATM) network device
US5640399A (en) * 1993-10-20 1997-06-17 Lsi Logic Corporation Single chip network router
US5708659A (en) * 1993-10-20 1998-01-13 Lsi Logic Corporation Method for hashing in a packet network switching system
US6481005B1 (en) * 1993-12-20 2002-11-12 Lucent Technologies Inc. Event correlation feature for a telephone network operations support system
EP0662664B1 (en) * 1994-01-05 2001-10-31 Hewlett-Packard Company, A Delaware Corporation Self-describing data processing system
US5568605A (en) * 1994-01-13 1996-10-22 International Business Machines Corporation Resolving conflicting topology information
US5485576A (en) * 1994-01-28 1996-01-16 Fee; Brendan Chassis fault tolerant system management bus architecture for a networking
US5522042A (en) 1994-01-28 1996-05-28 Cabletron Systems, Inc. Distributed chassis agent for distributed network management
US5485455A (en) * 1994-01-28 1996-01-16 Cabletron Systems, Inc. Network having secure fast packet switching and guaranteed quality of service
AU1835895A (en) * 1994-01-31 1995-08-15 Lannet Inc. Application and method for communication switching
WO1995022106A1 (en) * 1994-02-10 1995-08-17 Elonex Technologies, Inc. I/o decoder map
JPH07245614A (en) * 1994-03-04 1995-09-19 Fujitsu Ltd Method for measuring inter-equipments distance on lan and equipment therefor
US5509123A (en) * 1994-03-22 1996-04-16 Cabletron Systems, Inc. Distributed autonomous object architectures for network layer routing
US5572652A (en) * 1994-04-04 1996-11-05 The United States Of America As Represented By The Secretary Of The Navy System and method for monitoring and controlling one or more computer sites
US5509006A (en) * 1994-04-18 1996-04-16 Cisco Systems Incorporated Apparatus and method for switching packets using tree memory
US5519704A (en) * 1994-04-21 1996-05-21 Cisco Systems, Inc. Reliable transport protocol for internetwork routing
US5708772A (en) * 1994-04-29 1998-01-13 Bay Networks, Inc. Network topology determination by dissecting unitary connections and detecting non-responsive nodes
US5432789A (en) * 1994-05-03 1995-07-11 Synoptics Communications, Inc. Use of a single central transmit and receive mechanism for automatic topology determination of multiple networks
GB9412264D0 (en) * 1994-06-18 1994-08-10 Int Computers Ltd Console facility for a computer system
US5481674A (en) * 1994-06-20 1996-01-02 Mahavadi; Manohar R. Method and apparatus mapping the physical topology of EDDI networks
US5655140A (en) * 1994-07-22 1997-08-05 Network Peripherals Apparatus for translating frames of data transferred between heterogeneous local area networks
US5619615A (en) * 1994-07-22 1997-04-08 Bay Networks, Inc. Method and apparatus for identifying an agent running on a device in a computer network
PT693837E (en) * 1994-07-22 2001-04-30 Koninkl Kpn Nv METHOD FOR ESTABLISHING LINKS IN A COMMUNICATION NETWORK
US5684988A (en) * 1994-07-22 1997-11-04 Bay Networks, Inc. MIB database and generic popup window architecture
US6061505A (en) * 1994-07-22 2000-05-09 Nortel Networks Corporation Apparatus and method for providing topology information about a network
US6195095B1 (en) * 1994-09-20 2001-02-27 International Business Machines Corporation Method and apparatus for displaying attributes of a computer work station on a graphical user interface
GB2295299B (en) * 1994-11-16 1999-04-28 Network Services Inc Enterpris Enterprise network management method and apparatus
US5867666A (en) * 1994-12-29 1999-02-02 Cisco Systems, Inc. Virtual interfaces with dynamic binding
US5832503A (en) * 1995-02-24 1998-11-03 Cabletron Systems, Inc. Method and apparatus for configuration management in communications networks
US6044400A (en) * 1995-03-25 2000-03-28 Lucent Technologies Inc. Switch monitoring system having a data collection device using filters in parallel orientation and filter counter for counting combination of filtered events
US5715432A (en) * 1995-04-04 1998-02-03 U S West Technologies, Inc. Method and system for developing network analysis and modeling with graphical objects
US5684959A (en) * 1995-04-19 1997-11-04 Hewlett-Packard Company Method for determining topology of a network
US6487513B1 (en) * 1995-06-07 2002-11-26 Toshiba America Medical Systems, Inc. Diagnostic test unit network and system
US6456306B1 (en) 1995-06-08 2002-09-24 Nortel Networks Limited Method and apparatus for displaying health status of network devices
US5793974A (en) * 1995-06-30 1998-08-11 Sun Microsystems, Inc. Network navigation and viewing system for network management system
US6097718A (en) 1996-01-02 2000-08-01 Cisco Technology, Inc. Snapshot routing with route aging
US6147996A (en) 1995-08-04 2000-11-14 Cisco Technology, Inc. Pipelined multiple issue packet switch
US5706440A (en) * 1995-08-23 1998-01-06 International Business Machines Corporation Method and system for determining hub topology of an ethernet LAN segment
US5878420A (en) * 1995-08-31 1999-03-02 Compuware Corporation Network monitoring and management system
US6182224B1 (en) 1995-09-29 2001-01-30 Cisco Systems, Inc. Enhanced network services using a subnetwork of communicating processors
US7246148B1 (en) 1995-09-29 2007-07-17 Cisco Technology, Inc. Enhanced network services using a subnetwork of communicating processors
US6917966B1 (en) 1995-09-29 2005-07-12 Cisco Technology, Inc. Enhanced network services using a subnetwork of communicating processors
US5966163A (en) 1995-10-20 1999-10-12 Scientific-Atlanta, Inc. Providing constant bit rate upstream data transport in a two way cable system by scheduling preemptive grants for upstream data slots using selected fields of a plurality of grant fields
US6230203B1 (en) 1995-10-20 2001-05-08 Scientific-Atlanta, Inc. System and method for providing statistics for flexible billing in a cable environment
US5726993A (en) * 1995-10-25 1998-03-10 Siemens Telecom Networks Signal detector for telephone line repeater remote loopback system
US5590120A (en) * 1995-10-31 1996-12-31 Cabletron Systems, Inc. Port-link configuration tracking method and apparatus
US5734842A (en) * 1995-12-18 1998-03-31 Asante Technologies, Inc. Network hub interconnection circuitry having power reset feature
US5734642A (en) * 1995-12-22 1998-03-31 Cabletron Systems, Inc. Method and apparatus for network synchronization
US6091725A (en) 1995-12-29 2000-07-18 Cisco Systems, Inc. Method for traffic management, traffic prioritization, access control, and packet forwarding in a datagram computer network
US6035105A (en) * 1996-01-02 2000-03-07 Cisco Technology, Inc. Multiple VLAN architecture system
US6199172B1 (en) 1996-02-06 2001-03-06 Cabletron Systems, Inc. Method and apparatus for testing the responsiveness of a network device
US6047321A (en) * 1996-02-23 2000-04-04 Nortel Networks Corporation Method and apparatus for monitoring a dedicated communications medium in a switched data network
US6441931B1 (en) 1996-02-23 2002-08-27 Nortel Networks Limited Method and apparatus for monitoring a dedicated communications medium in a switched data network
US5898837A (en) * 1996-02-23 1999-04-27 Bay Networks, Inc. Method and apparatus for monitoring a dedicated communications medium in a switched data network
US7028088B1 (en) 1996-04-03 2006-04-11 Scientific-Atlanta, Inc. System and method for providing statistics for flexible billing in a cable environment
US5793951A (en) * 1996-05-10 1998-08-11 Apple Computer, Inc. Security and report generation system for networked multimedia workstations
US6131112A (en) 1996-05-17 2000-10-10 Cabletron Systems, Inc. Method and apparatus for integrated network and systems management
US6308148B1 (en) 1996-05-28 2001-10-23 Cisco Technology, Inc. Network flow data export
US6243667B1 (en) 1996-05-28 2001-06-05 Cisco Systems, Inc. Network flow switching and flow data export
KR100463618B1 (en) * 1996-06-21 2005-02-28 소니 일렉트로닉스 인코포레이티드 Device user interface with topology map
US5883621A (en) * 1996-06-21 1999-03-16 Sony Corporation Device control with topology map in a digital network
EP0909508B1 (en) * 1996-06-21 2004-01-21 Sony Electronics Inc. Device user interface with topology map
AU3502597A (en) * 1996-06-25 1998-01-14 Mci Communications Corporation Intranet graphical user interface for sonet network management
US6061332A (en) * 1996-06-25 2000-05-09 Mci Communications Corporation System and method for the automated configuration of network elements in a telecommunications network
US5870558A (en) * 1996-06-25 1999-02-09 Mciworldcom, Inc. Intranet graphical user interface for SONET network management
US6212182B1 (en) 1996-06-27 2001-04-03 Cisco Technology, Inc. Combined unicast and multicast scheduling
US6434120B1 (en) 1998-08-25 2002-08-13 Cisco Technology, Inc. Autosensing LMI protocols in frame relay networks
GB9615423D0 (en) * 1996-07-23 1996-09-04 3Com Ireland Distributed stack management
GB2330741B (en) * 1996-07-23 2001-03-14 3Com Ireland Distributed stack management
IL119062A0 (en) * 1996-08-13 1996-11-14 Madge Networks Israel Ltd Apparatus and method for detecting a layout of a switched local network
US5910803A (en) * 1996-08-14 1999-06-08 Novell, Inc. Network atlas mapping tool
US6067093A (en) * 1996-08-14 2000-05-23 Novell, Inc. Method and apparatus for organizing objects of a network map
US6108637A (en) 1996-09-03 2000-08-22 Nielsen Media Research, Inc. Content display monitor
US5935209A (en) * 1996-09-09 1999-08-10 Next Level Communications System and method for managing fiber-to-the-curb network elements
US5909550A (en) * 1996-10-16 1999-06-01 Cisco Technology, Inc. Correlation technique for use in managing application-specific and protocol-specific resources of heterogeneous integrated computer network
US5802319A (en) * 1996-10-23 1998-09-01 Hewlett-Packard Company Method and apparatus for employing an intelligent agent to cause a packet to be sent to update a bridge's filtering database when a station is moved in a network
JP2001505731A (en) * 1996-10-29 2001-04-24 レクロイ・コーポレーション Physical layer monitoring provided by computer network cross-connect panel
US6031528A (en) * 1996-11-25 2000-02-29 Intel Corporation User based graphical computer network diagnostic tool
US6304546B1 (en) 1996-12-19 2001-10-16 Cisco Technology, Inc. End-to-end bidirectional keep-alive using virtual circuits
US6308328B1 (en) 1997-01-17 2001-10-23 Scientific-Atlanta, Inc. Usage statistics collection for a cable data delivery system
US6272150B1 (en) 1997-01-17 2001-08-07 Scientific-Atlanta, Inc. Cable modem map display for network management of a cable data delivery system
US6182132B1 (en) * 1997-03-17 2001-01-30 Mallikarjuna Gupta Bilakanti Process for determining in service status
US6643696B2 (en) 1997-03-21 2003-11-04 Owen Davis Method and apparatus for tracking client interaction with a network resource and creating client profiles and resource database
US5796952A (en) * 1997-03-21 1998-08-18 Dot Com Development, Inc. Method and apparatus for tracking client interaction with a network resource and creating client profiles and resource database
US6157956A (en) * 1997-03-28 2000-12-05 Global Maintech, Inc. Heterogeneous computing interface apparatus and method using a universal character set
US6286058B1 (en) 1997-04-14 2001-09-04 Scientific-Atlanta, Inc. Apparatus and methods for automatically rerouting packets in the event of a link failure
US6122272A (en) * 1997-05-23 2000-09-19 Cisco Technology, Inc. Call size feedback on PNNI operation
US6356530B1 (en) 1997-05-23 2002-03-12 Cisco Technology, Inc. Next hop selection in ATM networks
US6862284B1 (en) 1997-06-17 2005-03-01 Cisco Technology, Inc. Format for automatic generation of unique ATM addresses used for PNNI
US6122276A (en) * 1997-06-30 2000-09-19 Cisco Technology, Inc. Communications gateway mapping internet address to logical-unit name
US5983371A (en) * 1997-07-11 1999-11-09 Marathon Technologies Corporation Active failure detection
US6078590A (en) 1997-07-14 2000-06-20 Cisco Technology, Inc. Hierarchical routing knowledge for multicast packet routing
US6330599B1 (en) 1997-08-05 2001-12-11 Cisco Technology, Inc. Virtual interfaces with dynamic binding
US6512766B2 (en) 1997-08-22 2003-01-28 Cisco Systems, Inc. Enhanced internet packet routing lookup
US6157641A (en) * 1997-08-22 2000-12-05 Cisco Technology, Inc. Multiprotocol packet recognition and switching
US6212183B1 (en) 1997-08-22 2001-04-03 Cisco Technology, Inc. Multiple parallel packet routing lookup
US6049833A (en) * 1997-08-29 2000-04-11 Cisco Technology, Inc. Mapping SNA session flow control to TCP flow control
US6128662A (en) * 1997-08-29 2000-10-03 Cisco Technology, Inc. Display-model mapping for TN3270 client
US6782087B1 (en) 1997-09-19 2004-08-24 Mci Communications Corporation Desktop telephony application program for a call center agent
US6466663B1 (en) * 1997-09-30 2002-10-15 Don Ravenscroft Monitoring system client for a call center
US6490350B2 (en) 1997-09-30 2002-12-03 Mci Communications Corporation Monitoring system for telephony resources in a call center
US5909217A (en) * 1997-09-30 1999-06-01 International Business Machines Corporation Large scale system status map
US6343072B1 (en) 1997-10-01 2002-01-29 Cisco Technology, Inc. Single-chip architecture for shared-memory router
US7570583B2 (en) 1997-12-05 2009-08-04 Cisco Technology, Inc. Extending SONET/SDH automatic protection switching
US6195339B1 (en) * 1997-12-23 2001-02-27 Hyperedge Corp. Method and apparatus for local provisioning of telecommunications network interface unit
US6424649B1 (en) 1997-12-31 2002-07-23 Cisco Technology, Inc. Synchronous pipelined switch using serial transmission
US6111877A (en) * 1997-12-31 2000-08-29 Cisco Technology, Inc. Load sharing across flows
US6182135B1 (en) 1998-02-05 2001-01-30 3Com Corporation Method for determining whether two pieces of network equipment are directly connected
US6367018B1 (en) 1998-02-05 2002-04-02 3Com Corporation Method for detecting dedicated link between an end station and a network device
US6134666A (en) * 1998-03-12 2000-10-17 Cisco Technology, Inc. Power supervisor for electronic modular system
US6853638B2 (en) 1998-04-01 2005-02-08 Cisco Technology, Inc. Route/service processor scalability via flow-based distribution of traffic
WO1999053627A1 (en) 1998-04-10 1999-10-21 Chrimar Systems, Inc. Doing Business As Cms Technologies System for communicating with electronic equipment on a network
US6370121B1 (en) 1998-06-29 2002-04-09 Cisco Technology, Inc. Method and system for shortcut trunking of LAN bridges
US6920112B1 (en) 1998-06-29 2005-07-19 Cisco Technology, Inc. Sampling packets for network monitoring
US6377577B1 (en) 1998-06-30 2002-04-23 Cisco Technology, Inc. Access control list processing in hardware
US7756986B2 (en) * 1998-06-30 2010-07-13 Emc Corporation Method and apparatus for providing data management for a storage system coupled to a network
US6308219B1 (en) 1998-07-31 2001-10-23 Cisco Technology, Inc. Routing table lookup implemented using M-trie having nodes duplicated in multiple memory banks
US6182147B1 (en) 1998-07-31 2001-01-30 Cisco Technology, Inc. Multicast group routing using unidirectional links
US6389506B1 (en) 1998-08-07 2002-05-14 Cisco Technology, Inc. Block mask ternary cam
US6101115A (en) * 1998-08-07 2000-08-08 Cisco Technology, Inc. CAM match line precharge
US6229538B1 (en) * 1998-09-11 2001-05-08 Compaq Computer Corporation Port-centric graphic representations of network controllers
US6349306B1 (en) * 1998-10-30 2002-02-19 Aprisma Management Technologies, Inc. Method and apparatus for configuration management in communications networks
US8266266B2 (en) 1998-12-08 2012-09-11 Nomadix, Inc. Systems and methods for providing dynamic network authorization, authentication and accounting
US8713641B1 (en) 1998-12-08 2014-04-29 Nomadix, Inc. Systems and methods for authorizing, authenticating and accounting users having transparent computer access to a network using a gateway device
US7194554B1 (en) 1998-12-08 2007-03-20 Nomadix, Inc. Systems and methods for providing dynamic network authorization authentication and accounting
US6433802B1 (en) * 1998-12-29 2002-08-13 Ncr Corporation Parallel programming development environment
US6771642B1 (en) 1999-01-08 2004-08-03 Cisco Technology, Inc. Method and apparatus for scheduling packets in a packet switch
US7065762B1 (en) 1999-03-22 2006-06-20 Cisco Technology, Inc. Method, apparatus and computer program product for borrowed-virtual-time scheduling
US6757791B1 (en) 1999-03-30 2004-06-29 Cisco Technology, Inc. Method and apparatus for reordering packet data units in storage queues for reading and writing memory
US6760331B1 (en) 1999-03-31 2004-07-06 Cisco Technology, Inc. Multicast routing with nearest queue first allocation and dynamic and static vector quantization
US6603772B1 (en) 1999-03-31 2003-08-05 Cisco Technology, Inc. Multicast routing with multicast virtual output queues and shortest queue first allocation
US6397248B1 (en) * 1999-05-04 2002-05-28 Nortel Networks Limited System and method to discover end node physical connectivity to networking devices
US6654914B1 (en) * 1999-05-28 2003-11-25 Teradyne, Inc. Network fault isolation
US6487521B1 (en) * 1999-07-07 2002-11-26 International Business Machines Corporation Method, system, and program for monitoring a device to determine a power failure at the device
GB2352112B (en) * 1999-07-14 2002-02-13 3Com Corp Improved network device image
AUPQ206399A0 (en) 1999-08-06 1999-08-26 Imr Worldwide Pty Ltd. Network user measurement system and method
EP1204241A1 (en) * 1999-08-09 2002-05-08 Fujitsu Limited Package control device and method
US6718282B1 (en) 1999-10-20 2004-04-06 Cisco Technology, Inc. Fault tolerant client-server environment
US6381642B1 (en) * 1999-10-21 2002-04-30 Mcdata Corporation In-band method and apparatus for reporting operational statistics relative to the ports of a fibre channel switch
US7197556B1 (en) * 1999-10-22 2007-03-27 Nomadix, Inc. Location-based identification for use in a communications network
US8661111B1 (en) 2000-01-12 2014-02-25 The Nielsen Company (Us), Llc System and method for estimating prevalence of digital content on the world-wide-web
WO2001055854A1 (en) * 2000-01-28 2001-08-02 Telcordia Technologies, Inc. Physical layer auto-discovery for management of network elements
FI20000342A (en) * 2000-02-16 2001-08-16 Nokia Networks Oy network control
US6243510B1 (en) 2000-03-13 2001-06-05 Apcon, Inc. Electronically-controllable fiber optic patch panel
US7512894B1 (en) * 2000-09-11 2009-03-31 International Business Machines Corporation Pictorial-based user interface management of computer hardware components
US6453361B1 (en) * 2000-10-27 2002-09-17 Ipac Acquisition Subsidiary I, Llc Meta-application architecture for integrating photo-service websites
DE10055250A1 (en) * 2000-11-08 2002-06-06 Siemens Ag Software tool for monitoring an automation device for faults
US7197531B2 (en) 2000-12-29 2007-03-27 Fotomedia Technologies, Llc Meta-application architecture for integrating photo-service websites for browser-enabled devices
US7272788B2 (en) * 2000-12-29 2007-09-18 Fotomedia Technologies, Llc Client-server system for merging of metadata with images
US7028087B2 (en) * 2001-02-23 2006-04-11 Panduit Corp. Network documentation system with electronic modules
WO2002075587A1 (en) * 2001-03-15 2002-09-26 Andrew Killick Mapping system and method
US20020169869A1 (en) * 2001-05-08 2002-11-14 Shugart Technology, Inc. SAN monitor incorporating a GPS receiver
US20030033463A1 (en) * 2001-08-10 2003-02-13 Garnett Paul J. Computer system storage
US7069343B2 (en) * 2001-09-06 2006-06-27 Avaya Technology Corp. Topology discovery by partitioning multiple discovery techniques
US7200122B2 (en) * 2001-09-06 2007-04-03 Avaya Technology Corp. Using link state information to discover IP network topology
US20030135593A1 (en) * 2001-09-26 2003-07-17 Bernard Lee Management system
US7660886B2 (en) * 2001-09-27 2010-02-09 International Business Machines Corporation Apparatus and method of representing real-time distributed command execution status across distributed systems
US20030069953A1 (en) * 2001-09-28 2003-04-10 Bottom David A. Modular server architecture with high-availability management capability
US7116642B2 (en) * 2001-11-16 2006-10-03 Alcatel Canada Inc. SONET/SDH data link administration and management
US7571239B2 (en) 2002-01-08 2009-08-04 Avaya Inc. Credential management and network querying
US7656903B2 (en) 2002-01-30 2010-02-02 Panduit Corp. System and methods for documenting networks with electronic modules
US7076543B1 (en) 2002-02-13 2006-07-11 Cisco Technology, Inc. Method and apparatus for collecting, aggregating and monitoring network management information
IL148224A0 (en) * 2002-02-18 2002-09-12 Ofer Givaty A network-information device
US6978395B2 (en) * 2002-04-10 2005-12-20 Adc Dsl Systems, Inc. Protection switching of interface cards in communication system
WO2003098874A1 (en) * 2002-05-17 2003-11-27 Allied Telesis Kabushiki Kaisha Concentrator and its power supply reset management method
US8271778B1 (en) 2002-07-24 2012-09-18 The Nielsen Company (Us), Llc System and method for monitoring secure data on a network
US7007023B2 (en) * 2002-08-27 2006-02-28 International Business Machines Corporation Method for flagging differences in resource attributes across multiple database and transaction systems
US20040042416A1 (en) * 2002-08-27 2004-03-04 Ngo Chuong Ngoc Virtual Local Area Network auto-discovery methods
US7457234B1 (en) 2003-05-14 2008-11-25 Adtran, Inc. System and method for protecting communication between a central office and a remote premises
US7426577B2 (en) * 2003-06-19 2008-09-16 Avaya Technology Corp. Detection of load balanced links in internet protocol networks
IL158030A0 (en) * 2003-09-21 2004-03-28 Rit Techn Ltd Modular scanning system for cabling systems
US20050086368A1 (en) * 2003-10-15 2005-04-21 Dell Products L.P. System and method for determining nearest neighbor information
US7475351B1 (en) 2003-12-02 2009-01-06 Sun Microsystems, Inc. Interactive drag and snap connection tool
US7380244B1 (en) 2004-05-18 2008-05-27 The United States Of America As Represented By The Secretary Of The Navy Status display tool
US7366934B1 (en) * 2004-09-08 2008-04-29 Stryker Corporation Method of remotely controlling devices for endoscopy
US7304586B2 (en) 2004-10-20 2007-12-04 Electro Industries / Gauge Tech On-line web accessed energy meter
US9080894B2 (en) 2004-10-20 2015-07-14 Electro Industries/Gauge Tech Intelligent electronic device for receiving and sending data at high speeds over a network
US7747733B2 (en) * 2004-10-25 2010-06-29 Electro Industries/Gauge Tech Power meter having multiple ethernet ports
JP4790722B2 (en) * 2004-11-03 2011-10-12 Panduit Corp. Patch panel and method and apparatus for patch panel patch cord documentation and revision
US7613124B2 (en) 2005-05-19 2009-11-03 Panduit Corp. Method and apparatus for documenting network paths
US20060282529A1 (en) * 2005-06-14 2006-12-14 Panduit Corp. Method and apparatus for monitoring physical network topology information
EP1734692A1 (en) * 2005-06-14 2006-12-20 Panduit Corporation Method and apparatus for monitoring physical network topology information
KR100654837B1 (en) * 2005-08-04 2006-12-08 삼성전자주식회사 Control method for display apparatus
US7636050B2 (en) * 2005-08-08 2009-12-22 Panduit Corp. Systems and methods for detecting a patch cord end connection
US7234944B2 (en) * 2005-08-26 2007-06-26 Panduit Corp. Patch field documentation and revision systems
US7978845B2 (en) 2005-09-28 2011-07-12 Panduit Corp. Powered patch panel
US7811119B2 (en) * 2005-11-18 2010-10-12 Panduit Corp. Smart cable provisioning for a patch cord management system
US7768418B2 (en) * 2005-12-06 2010-08-03 Panduit Corp. Power patch panel with guided MAC capability
US8849980B2 (en) * 2006-02-06 2014-09-30 International Business Machines Corporation Apparatus, system, and method for monitoring computer system components
US7488206B2 (en) * 2006-02-14 2009-02-10 Panduit Corp. Method and apparatus for patch panel patch cord documentation and revision
US20070288207A1 (en) * 2006-06-12 2007-12-13 Autodesk, Inc. Displaying characteristics of a system of interconnected components at different system locations
US20080175159A1 (en) * 2006-12-13 2008-07-24 Panduit Corp. High Performance Three-Port Switch for Managed Ethernet Systems
JP4938488B2 (en) * 2007-02-13 2012-05-23 パナソニック株式会社 Power line communication device, power line communication system, connection state confirmation method, and connection processing method
US10845399B2 (en) 2007-04-03 2020-11-24 Electro Industries/Gaugetech System and method for performing data transfers in an intelligent electronic device
US20090100174A1 (en) * 2007-09-07 2009-04-16 Sushma Annareddy Method and system for automatic polling of multiple device types
EP2288174A3 (en) * 2007-10-19 2011-04-27 Panduit Corporation Communication port identification system
TWI376597B (en) * 2007-10-26 2012-11-11 Adlink Technology Inc System management apparatus and method for multi-shelf modular computing system
US7936671B1 (en) * 2007-11-12 2011-05-03 Marvell International Ltd. Cable far end port identification using repeating link state patterns
WO2009105632A1 (en) 2008-02-21 2009-08-27 Panduit Corp. Intelligent inter-connect and cross-connect patching system
US8306935B2 (en) 2008-12-22 2012-11-06 Panduit Corp. Physical infrastructure management system
WO2010078080A1 (en) * 2008-12-31 2010-07-08 Panduit Corp. Patch cord with insertion detection and light illumination capabilities
US8128428B2 (en) * 2009-02-19 2012-03-06 Panduit Corp. Cross connect patch guidance system
US8964601B2 (en) 2011-10-07 2015-02-24 International Business Machines Corporation Network switching domains with a virtualized control plane
US9088477B2 (en) 2012-02-02 2015-07-21 International Business Machines Corporation Distributed fabric management protocol
US9077651B2 (en) 2012-03-07 2015-07-07 International Business Machines Corporation Management of a distributed fabric system
US9077624B2 (en) 2012-03-07 2015-07-07 International Business Machines Corporation Diagnostics in a distributed fabric system
WO2014011898A1 (en) * 2012-07-11 2014-01-16 Anderson David J Managed fiber connectivity systems
CA2821726C (en) * 2012-07-23 2016-02-09 Brian William Karam Entertainment, lighting and climate control system
US9202304B1 (en) * 2012-09-28 2015-12-01 Emc Corporation Path performance mini-charts
US20140372809A1 (en) * 2013-06-12 2014-12-18 Ge Medical Systems Global Technology Company Llc Graphic self-diagnostic system and method
US9477276B2 (en) 2013-06-13 2016-10-25 Dell Products L.P. System and method for switch management
US9219928B2 (en) 2013-06-25 2015-12-22 The Nielsen Company (Us), Llc Methods and apparatus to characterize households with media meter data
US20150016277A1 (en) * 2013-07-10 2015-01-15 Dell Products L.P. Interconnect error notification system
US20150082176A1 (en) * 2013-09-16 2015-03-19 Alcatel-Lucent Usa Inc. Visual simulator for wireless systems
WO2015123201A1 (en) 2014-02-11 2015-08-20 The Nielsen Company (Us), Llc Methods and apparatus to calculate video-on-demand and dynamically inserted advertisement viewing probability
US10250464B2 (en) * 2014-10-15 2019-04-02 Accedian Networks Inc. Area efficient traffic generator
US10219039B2 (en) 2015-03-09 2019-02-26 The Nielsen Company (Us), Llc Methods and apparatus to assign viewers to media meter data
US10210068B2 (en) * 2015-04-13 2019-02-19 Leviton Manufacturing Co., Inc. Device topology definition system
USD769293S1 (en) * 2015-06-29 2016-10-18 International Business Machines Corporation Display screen with graphical user interface for network topology display
US9848224B2 (en) 2015-08-27 2017-12-19 The Nielsen Company(Us), Llc Methods and apparatus to estimate demographics of a household
US10791355B2 (en) 2016-12-20 2020-09-29 The Nielsen Company (Us), Llc Methods and apparatus to determine probabilistic media viewing metrics
USD829753S1 (en) * 2017-06-30 2018-10-02 Adp, Llc Display screen or a portion thereof with a graphical user interface
USD829752S1 (en) * 2017-06-30 2018-10-02 Adp, Llc Display screen or a portion thereof with a graphical user interface
CN108880932B (en) * 2018-05-31 2022-06-03 广东美的制冷设备有限公司 Interface display method, terminal device and computer readable storage medium

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4055808A (en) * 1976-05-20 1977-10-25 Intertel, Inc. Data communications network testing system
US4545013A (en) * 1979-01-29 1985-10-01 Infinet Inc. Enhanced communications network testing and control system
US4347498A (en) * 1979-11-21 1982-08-31 International Business Machines Corporation Method and means for demand accessing and broadcast transmission among ports in a distributed star network
US4578773A (en) * 1983-09-27 1986-03-25 Four-Phase Systems, Inc. Circuit board status detection system
US4644532A (en) * 1985-06-10 1987-02-17 International Business Machines Corporation Automatic update of topology in a hybrid network
US4750136A (en) * 1986-01-10 1988-06-07 American Telephone And Telegraph, At&T Information Systems Inc. Communication system having automatic circuit board initialization capability
US4827411A (en) * 1987-06-15 1989-05-02 International Business Machines Corporation Method of maintaining a topology database
US4937743A (en) * 1987-09-10 1990-06-26 Intellimed Corporation Method and system for scheduling, monitoring and dynamically managing resources
US5049873A (en) * 1988-01-29 1991-09-17 Network Equipment Technologies, Inc. Communications network state and topology monitor
US5202985A (en) * 1988-04-14 1993-04-13 Racal-Datacom, Inc. Apparatus and method for displaying data communication network configuration after searching the network
US4943998A (en) * 1988-04-22 1990-07-24 U.S. Philips Corp. Intermeshed communication network
US4937825A (en) * 1988-06-15 1990-06-26 International Business Machines Method and apparatus for diagnosing problems in data communication networks
US5101348A (en) * 1988-06-23 1992-03-31 International Business Machines Corporation Method of reducing the amount of information included in topology database update messages in a data communications network
US5051987A (en) * 1988-07-20 1991-09-24 Racal-Milgo Limited Information transmission network including a plurality of nodes interconnected by links and methods for the transmission of information through a network including a plurality of nodes interconnected by links
US5138615A (en) * 1989-06-22 1992-08-11 Digital Equipment Corporation Reconfiguration system and method for high-speed mesh connected local area network
US5136690A (en) * 1989-08-07 1992-08-04 At&T Bell Laboratories Dynamic graphical analysis of network data

Non-Patent Citations (14)

* Cited by examiner, † Cited by third party
Title
Andrew Fraley, "HP Open View Bridge Manager," Hewlett-Packard Journal, Apr. 1990, v41 n2, p. 66(5), Fulltext Copy. *
Brown, Laure, "Graphical User Interface: a Developer's Quandary," Patricia Seybold's Unix in the Office, 1989. *
Floyd Backes, "Spanning Tree Bridges--Transparent Bridges for Interconnection of IEEE 802 LAN," IEEE Network, Jan. 1988, pp. 5-9. *
Frank J. Derfler, "Building Workgroup Solutions--LAN Management Systems," PC Magazine, Nov. 1989, pp. 285-306. *
IEEE Std 802.1D, "IEEE Standards for Local and Metropolitan Area Networks--Media Access Control (MAC) Bridges," 1990, pp. 1-141. *
Irwin Greenstein, "Management system designed for Synoptics Ethernet," MIS Week, v9 n49, p. 29, Dec. 5, 1988. *
Kara, et al., "An Architecture for Integrated Network Management," IEEE, 1989, pp. 191-195. *
Kayrl, Scott, "Cabletron Enhances Remote LANview; New Tools Include Remote Bridging, Graphical User Interface," PC Week, May 1989, p. 42, 2 pages (Full Text). *
Kayrl, Scott, "Taking Care of Business with SNMP," Data Communications, Mar. 1990, pp. 31-41. *
Michael M. Clair, "Physical Layer Network Management: The Missing Link," Telecommunications, Feb. 1989, pp. 63-64. *
Radia Perlman, "An Algorithm for Distributed Computation of a Spanning Tree in an Extended LAN," Ninth Data Comm. Symp., ACM SIGCOMM, v.15 n.4, Sep. 1985, pp. 44-53. *
Sandberg, Russell, "The Psychology of Network Management," UNIX Review, 1990. *
Schnaidt, Patricia, "Keep It Simple; SNMP Lets You Manage a Heterogeneous Network Today," LAN Magazine, 1990. *
Wilbur, et al., "MAC Layer Security Measures in Local Area Networks," Lecture Notes in Computer Science, Workshop LANSEC 1989, pp. 53-65. *

Cited By (227)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6049828A (en) * 1990-09-17 2000-04-11 Cabletron Systems, Inc. Method and apparatus for monitoring the status of non-pollable devices in a computer network
US5742760A (en) * 1992-05-12 1998-04-21 Compaq Computer Corporation Network packet switch using shared memory for repeating and bridging packets at media rate
US5734824A (en) * 1993-02-10 1998-03-31 Bay Networks, Inc. Apparatus and method for discovering a topology for local area networks connected via transparent bridges
US5838904A (en) * 1993-10-21 1998-11-17 Lsi Logic Corp. Random number generating apparatus for an interface unit of a carrier sense with multiple access and collision detect (CSMA/CD) ethernet data network
US6014697A (en) * 1994-10-25 2000-01-11 Cabletron Systems, Inc. Method and apparatus for automatically populating a network simulator tool
US5805819A (en) * 1995-04-24 1998-09-08 Bay Networks, Inc. Method and apparatus for generating a display based on logical groupings of network entities
US5774669A (en) * 1995-07-28 1998-06-30 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Scalable hierarchical network management system for displaying network information in three dimensions
US5841981A (en) * 1995-09-28 1998-11-24 Hitachi Software Engineering Co., Ltd. Network management system displaying static dependent relation information
US5848243A (en) * 1995-11-13 1998-12-08 Sun Microsystems, Inc. Network topology management system through a database of managed network resources including logical topologies
US5793362A (en) * 1995-12-04 1998-08-11 Cabletron Systems, Inc. Configurations tracking system using transition manager to evaluate votes to determine possible connections between ports in a communications network in accordance with transition tables
US5887132A (en) * 1995-12-05 1999-03-23 Asante Technologies, Inc. Network hub interconnection circuitry
US5793975A (en) * 1996-03-01 1998-08-11 Bay Networks Group, Inc. Ethernet topology change notification and nearest neighbor determination
US5751965A (en) * 1996-03-21 1998-05-12 Cabletron System, Inc. Network connection status monitor and display
US5850397A (en) * 1996-04-10 1998-12-15 Bay Networks, Inc. Method for determining the topology of a mixed-media network
US6970433B1 (en) * 1996-04-29 2005-11-29 Tellabs Operations, Inc. Multichannel ring and star networks with limited channel conversion
US20100054263A1 (en) * 1996-04-29 2010-03-04 Tellabs Operations, Inc. Multichannel ring and star networks with limited channel conversion
US8134939B2 (en) 1996-04-29 2012-03-13 Tellabs Operations, Inc. Multichannel ring and star networks with limited channel conversion
US7606180B2 (en) 1996-04-29 2009-10-20 Tellabs Operations, Inc. Multichannel ring and star networks with limited channel conversion
US5930476A (en) * 1996-05-29 1999-07-27 Sun Microsystems, Inc. Apparatus and method for generating automatic customized event requests
US5845062A (en) * 1996-06-25 1998-12-01 Mci Communications Corporation System and method for monitoring network elements organized in data communication channel groups with craft interface ports
US5949976A (en) * 1996-09-30 1999-09-07 Mci Communications Corporation Computer performance monitoring and graphing tool
US6339798B1 (en) * 1996-10-25 2002-01-15 Somfy Process for hooking up a group control module with a control module and/or an action module and/or a measurement module
US5864640A (en) * 1996-10-25 1999-01-26 Wavework, Inc. Method and apparatus for optically scanning three dimensional objects using color information in trackable patches
WO1998018306A3 (en) * 1996-10-28 1998-07-16 Switchsoft Systems Inc Method and apparatus for generating a network topology
WO1998018306A2 (en) * 1996-10-28 1998-05-07 Switchsoft Systems, Inc. Method and apparatus for generating a network topology
US5793366A (en) * 1996-11-12 1998-08-11 Sony Corporation Graphical display of an animated data stream between devices on a bus
US5956665A (en) * 1996-11-15 1999-09-21 Digital Equipment Corporation Automatic mapping, monitoring, and control of computer room components
US6188973B1 (en) * 1996-11-15 2001-02-13 Compaq Computer Corporation Automatic mapping, monitoring, and control of computer room components
US6131119A (en) * 1997-04-01 2000-10-10 Sony Corporation Automatic configuration system for mapping node addresses within a bus structure to their physical location
USRE41397E1 (en) * 1997-06-30 2010-06-22 Adaptec, Inc. Method and apparatus for network interface card load balancing and port aggregation
US6393483B1 (en) * 1997-06-30 2002-05-21 Adaptec, Inc. Method and apparatus for network interface card load balancing and port aggregation
US6664985B1 (en) 1997-07-02 2003-12-16 At&T Corporation Method and apparatus for supervising a processor within a distributed platform switch through graphical representations
US6157378A (en) * 1997-07-02 2000-12-05 At&T Corp. Method and apparatus for providing a graphical user interface for a distributed switch having multiple operators
US5999174A (en) * 1997-07-02 1999-12-07 At&T Corporation Reusable sparing cell software component for a graphical user interface
CN100367713C (en) * 1997-07-30 2008-02-06 索尼电子有限公司 Method for describing human interface features and functionality of AV/C based device
WO1999007111A1 (en) * 1997-07-30 1999-02-11 Sony Electronics, Inc. Method for describing the human interface features and functionality of av/c-based devices
US6421069B1 (en) 1997-07-31 2002-07-16 Sony Corporation Method and apparatus for including self-describing information within devices
US5949818A (en) * 1997-08-27 1999-09-07 Winbond Electronics Corp. Expandable ethernet network repeater unit
US6308207B1 (en) * 1997-09-09 2001-10-23 Ncr Corporation Distributed service subsystem architecture for distributed network management
US6243411B1 (en) * 1997-10-08 2001-06-05 Winbond Electronics Corp. Infinitely expandable Ethernet network repeater unit
US6055267A (en) * 1997-10-17 2000-04-25 Winbond Electronics Corp. Expandable ethernet network repeater unit
US6112241A (en) * 1997-10-21 2000-08-29 International Business Machines Corporation Integrated network interconnecting device and probe
US6088665A (en) * 1997-11-03 2000-07-11 Fisher Controls International, Inc. Schematic generator for use in a process control network having distributed control functions
US9742633B2 (en) 1997-11-17 2017-08-22 Commscope Technologies Llc System and method for electronically identifying connections of a system used to make connections
US8804540B2 (en) 1997-11-17 2014-08-12 Adc Telecommunications, Inc. System and method for electronically identifying connections of a cross-connect system
US20110188383A1 (en) * 1997-11-17 2011-08-04 Adc Telecommunications, Inc. System and method for electronically identifying connections of a cross-connect system
US7907537B2 (en) 1997-11-17 2011-03-15 Adc Telecommunications, Inc. System and method for electronically identifying connections of a cross-connect system
US20090225667A1 (en) * 1997-11-17 2009-09-10 Adc Telecommunications, Inc. System and method for electronically identifying connections of a cross-connect system
US6100887A (en) * 1997-12-05 2000-08-08 At&T Corporation Reusable reversible progress indicator software component for a graphical user interface
US6079034A (en) * 1997-12-05 2000-06-20 Hewlett-Packard Company Hub-embedded system for automated network fault detection and isolation
US6003081A (en) * 1998-02-17 1999-12-14 International Business Machines Corporation Data processing system and method for generating a detailed repair request for a remote client computer system
US6496860B2 (en) * 1998-05-08 2002-12-17 Sony Corporation Media manager for controlling autonomous media devices within a network environment and managing the flow and format of data between the devices
US6233611B1 (en) 1998-05-08 2001-05-15 Sony Corporation Media manager for controlling autonomous media devices within a network environment and managing the flow and format of data between the devices
US6493753B2 (en) * 1998-05-08 2002-12-10 Sony Corporation Media manager for controlling autonomous media devices within a network environment and managing the flow and format of data between the devices
US6650342B1 (en) 1998-06-30 2003-11-18 Samsung Electronics Co., Ltd. Method for operating network management system in a graphic user interface environment and network management system
US6381507B1 (en) 1998-07-01 2002-04-30 Sony Corporation Command pass-through functionality in panel subunit
US6148241A (en) * 1998-07-01 2000-11-14 Sony Corporation Of Japan Method and system for providing a user interface for a networked device using panel subunit descriptor information
US6556221B1 (en) 1998-07-01 2003-04-29 Sony Corporation Extended elements and mechanisms for displaying a rich graphical user interface in panel subunit
US6615293B1 (en) 1998-07-01 2003-09-02 Sony Corporation Method and system for providing an exact image transfer and a root panel list within the panel subunit graphical user interface mechanism
US6295479B1 (en) 1998-07-01 2001-09-25 Sony Corporation Of Japan Focus in/out actions and user action pass-through mechanism for panel subunit
US20030221004A1 (en) * 1998-07-07 2003-11-27 Stupek Richard A. Programmable operational system for managing devices participating in a network
US6349131B1 (en) 1998-07-07 2002-02-19 Samsung Electronics Co., Ltd. Apparatus and method for graphically outputting status of trunk in switching system
CN100344136C (en) * 1998-07-07 2007-10-17 三星电子株式会社 Method for delivering trunk line state by means of graphics mode in exchange system
US6526442B1 (en) * 1998-07-07 2003-02-25 Compaq Information Technologies Group, L.P. Programmable operational system for managing devices participating in a network
US6205122B1 (en) * 1998-07-21 2001-03-20 Mercury Interactive Corporation Automatic network topology analysis
US7120128B2 (en) 1998-10-23 2006-10-10 Brocade Communications Systems, Inc. Method and system for creating and implementing zones within a fibre channel system
US6771287B1 (en) 1999-05-10 2004-08-03 3Com Corporation Graphically distinguishing a path between two points on a network
GB2349962B (en) * 1999-05-10 2001-07-11 3Com Corp Supervising a network
GB2349962A (en) * 1999-05-10 2000-11-15 3Com Corp Network supervision with visual display
US6502130B1 (en) 1999-05-27 2002-12-31 International Business Machines Corporation System and method for collecting connectivity data of an area network
US7646722B1 (en) 1999-06-29 2010-01-12 Cisco Technology, Inc. Generation of synchronous transport signal data used for network protection operation
US7865832B2 (en) 1999-07-26 2011-01-04 Sony Corporation Extended elements and mechanisms for displaying a rich graphical user interface in panel subunit
US20030231205A1 (en) * 1999-07-26 2003-12-18 Sony Corporation/Sony Electronics, Inc. Extended elements and mechanisms for displaying a rich graphical user interface in panel subunit
US7289964B1 (en) 1999-08-31 2007-10-30 Accenture Llp System and method for transaction services patterns in a netcentric environment
US6601192B1 (en) 1999-08-31 2003-07-29 Accenture Llp Assertion component in environment services patterns
US6601234B1 (en) 1999-08-31 2003-07-29 Accenture Llp Attribute dictionary in a business logic services environment
US6578068B1 (en) 1999-08-31 2003-06-10 Accenture Llp Load balancer in environment services patterns
US6571282B1 (en) 1999-08-31 2003-05-27 Accenture Llp Block-based communication in a communication services patterns environment
US6954220B1 (en) 1999-08-31 2005-10-11 Accenture Llp User context component in environment services patterns
US6615253B1 (en) 1999-08-31 2003-09-02 Accenture Llp Efficient server side data retrieval for execution of client side applications
US6549949B1 (en) 1999-08-31 2003-04-15 Accenture Llp Fixed format stream in a communication services patterns environment
US6636242B2 (en) * 1999-08-31 2003-10-21 Accenture Llp View configurer in a presentation services patterns environment
US6640249B1 (en) 1999-08-31 2003-10-28 Accenture Llp Presentation services patterns in a netcentric environment
US6640238B1 (en) 1999-08-31 2003-10-28 Accenture Llp Activity component in a presentation services patterns environment
US6842906B1 (en) 1999-08-31 2005-01-11 Accenture Llp System and method for a refreshable proxy pool in a communication services patterns environment
US6640244B1 (en) 1999-08-31 2003-10-28 Accenture Llp Request batcher in a transaction services patterns environment
US6715145B1 (en) 1999-08-31 2004-03-30 Accenture Llp Processing pipeline in a base services pattern environment
US6742015B1 (en) 1999-08-31 2004-05-25 Accenture Llp Base services patterns in a netcentric environment
US7161866B2 (en) 1999-09-02 2007-01-09 Micron Technology, Inc. Memory device tester and method for testing reduced power states
US6674677B2 (en) 1999-09-02 2004-01-06 Micron Technology, Inc. Memory device tester and method for testing reduced power states
US20050243638A1 (en) * 1999-09-02 2005-11-03 Micron Technology, Inc. Memory device tester and method for testing reduced power states
US6914843B2 (en) 1999-09-02 2005-07-05 Micron Technology, Inc. Memory device tester and method for testing reduced power states
US6775192B2 (en) 1999-09-02 2004-08-10 Micron Technology, Inc. Memory device tester and method for testing reduced power states
US6418070B1 (en) * 1999-09-02 2002-07-09 Micron Technology, Inc. Memory device tester and method for testing reduced power states
US6404861B1 (en) 1999-10-25 2002-06-11 E-Cell Technologies DSL modem with management capability
US6697338B1 (en) 1999-10-28 2004-02-24 Lucent Technologies Inc. Determination of physical topology of a communication network
US6646996B1 (en) 1999-12-15 2003-11-11 International Business Machines Corporation Use of adaptive resonance theory to differentiate network device types (routers vs switches)
US6639900B1 (en) 1999-12-15 2003-10-28 International Business Machines Corporation Use of generic classifiers to determine physical topology in heterogeneous networking environments
US6741568B1 (en) 1999-12-15 2004-05-25 International Business Machines Corporation Use of adaptive resonance theory (ART) neural networks to compute bottleneck link speed in heterogeneous networking environments
US6433903B1 (en) 1999-12-29 2002-08-13 Sycamore Networks, Inc. Optical management channel for wavelength division multiplexed systems
US7639665B1 (en) 2000-01-05 2009-12-29 Cisco Technology, Inc. Automatic propagation of circuit information in a communications network
US6614785B1 (en) 2000-01-05 2003-09-02 Cisco Technology, Inc. Automatic propagation of circuit information in a communications network
US6601097B1 (en) 2000-01-10 2003-07-29 International Business Machines Corporation Method and system for determining the physical location of computers in a network by storing a room location and MAC address in the ethernet wall plate
US20120066606A1 (en) * 2000-01-21 2012-03-15 Zavgren Jr John Richard Systems and methods for visualizing a communications network
US9077634B2 (en) * 2000-01-21 2015-07-07 Verizon Corporate Services Group Inc. Systems and methods for visualizing a communications network
US6941119B2 (en) 2000-01-26 2005-09-06 Vyyo Ltd. Redundancy scheme for the radio frequency front end of a broadband wireless hub
US20020052205A1 (en) * 2000-01-26 2002-05-02 Vyyo, Ltd. Quality of service scheduling scheme for a broadband wireless access system
US20010051512A1 (en) * 2000-01-26 2001-12-13 Vyyo Ltd. Redundancy scheme for the radio frequency front end of a broadband wireless hub
US6856786B2 (en) 2000-01-26 2005-02-15 Vyyo Ltd. Quality of service scheduling scheme for a broadband wireless access system
US6876834B2 (en) 2000-01-26 2005-04-05 Vyyo, Ltd. Power inserter configuration for wireless modems
US20010053180A1 (en) * 2000-01-26 2001-12-20 Vyyo, Ltd. Offset carrier frequency correction in a two-way broadband wireless access system
WO2001055832A1 (en) * 2000-01-26 2001-08-02 Vyyo, Ltd. Graphical interface for management of a broadband access network
US7123650B2 (en) 2000-01-26 2006-10-17 Vyyo, Inc. Offset carrier frequency correction in a two-way broadband wireless access system
US7359434B2 (en) 2000-01-26 2008-04-15 Vyyo Ltd. Programmable PHY for broadband wireless access systems
US6498821B2 (en) 2000-01-26 2002-12-24 Vyyo, Ltd. Space diversity method and system for broadband wireless access
US7027776B2 (en) 2000-01-26 2006-04-11 Vyyo, Inc. Transverter control mechanism for a wireless modem in a broadband access system
US20020056132A1 (en) * 2000-01-26 2002-05-09 Vyyo Ltd. Distributed processing for optimal QOS in a broadband access system
US7149188B2 (en) 2000-01-26 2006-12-12 Vyyo, Inc. Distributed processing for optimal QOS in a broadband access system
US20020159511A1 (en) * 2000-01-26 2002-10-31 Vyyo Ltd. Transverter control mechanism for a wireless modem in a broadband access system
US20010036841A1 (en) * 2000-01-26 2001-11-01 Vyyo Ltd. Power inserter configuration for wireless modems
US20060041660A1 (en) * 2000-02-28 2006-02-23 Microsoft Corporation Enterprise management system
US7873719B2 (en) * 2000-02-28 2011-01-18 Microsoft Corporation Enterprise management system
US8230056B2 (en) 2000-02-28 2012-07-24 Microsoft Corporation Enterprise management system
US6987754B2 (en) 2000-03-07 2006-01-17 Menashe Shahar Adaptive downstream modulation scheme for broadband wireless access systems
US7298715B2 (en) 2000-03-14 2007-11-20 Vyyo Ltd Communication receiver with signal processing for beam forming and antenna diversity
US20020024975A1 (en) * 2000-03-14 2002-02-28 Hillel Hendler Communication receiver with signal processing for beam forming and antenna diversity
US20090154501A1 (en) * 2000-04-25 2009-06-18 Charles Scott Roberson Method and Apparatus For Transporting Network Management Information In a Telecommunications Network
US7929573B2 (en) 2000-04-25 2011-04-19 Cisco Technology, Inc. Method and apparatus for transporting network management information in a telecommunications network
US6954437B1 (en) 2000-06-30 2005-10-11 Intel Corporation Method and apparatus for avoiding transient loops during network topology adoption
US20060190587A1 (en) * 2000-06-30 2006-08-24 Mikael Sylvest Network topology management
US20060179460A1 (en) * 2000-07-19 2006-08-10 Verizon Corporate Services Group Inc. System and method for providing a graphical representation of a frame inside a central office of a telecommunications system
US20020060695A1 (en) * 2000-07-19 2002-05-23 Ashok Kumar System and method for providing a graphical representation of a frame inside a central office of a telecommunications system
US7386795B2 (en) * 2000-07-19 2008-06-10 Verizon Corporate Services Group Inc. System and method for providing a graphical representation of a frame inside a central office of a telecommunications system
US7024627B2 (en) * 2000-07-19 2006-04-04 Verizon Corporate Services Group Inc. System and method for providing a graphical representation of a frame inside a central office of a telecommunications system
US6985967B1 (en) 2000-07-20 2006-01-10 Rlx Technologies, Inc. Web server network system and method
US6747878B1 (en) 2000-07-20 2004-06-08 Rlx Technologies, Inc. Data I/O management system and method
US6757748B1 (en) * 2000-07-20 2004-06-29 Rlx Technologies, Inc. Modular network interface system and method
US20020035460A1 (en) * 2000-09-21 2002-03-21 Hales Robert J. System and method for network infrastructure management
WO2002025505A1 (en) * 2000-09-21 2002-03-28 Hal-Tec Corporation System and method for network infrastructure management
US20040090925A1 (en) * 2000-12-15 2004-05-13 Thomas Schoeberl Method for testing a network, and corresponding network
US7602736B2 (en) * 2000-12-15 2009-10-13 Robert Bosch Gmbh Method for testing a network, and corresponding network
US20050267638A1 (en) * 2001-02-12 2005-12-01 The Stanley Works System and architecture for providing a modular intelligent assist system
US7120508B2 (en) * 2001-02-12 2006-10-10 The Stanley Works System and architecture for providing a modular intelligent assist system
US20020138608A1 (en) * 2001-03-23 2002-09-26 International Business Machines Corp. System and method for mapping a network
US7676567B2 (en) 2001-03-23 2010-03-09 International Business Machines Corporation System and method for mapping a network
US20020165934A1 (en) * 2001-05-03 2002-11-07 Conrad Jeffrey Richard Displaying a subset of network nodes based on discovered attributes
US20020188718A1 (en) * 2001-05-04 2002-12-12 Rlx Technologies, Inc. Console information storage system and method
US20020188709A1 (en) * 2001-05-04 2002-12-12 Rlx Technologies, Inc. Console information server system and method
US7519000B2 (en) 2002-01-30 2009-04-14 Panduit Corp. Systems and methods for managing a network
US20040073597A1 (en) * 2002-01-30 2004-04-15 Caveney Jack E. Systems and methods for managing a network
US20030154276A1 (en) * 2002-02-14 2003-08-14 Caveney Jack E. VOIP telephone location system
US7376734B2 (en) 2002-02-14 2008-05-20 Panduit Corp. VOIP telephone location system
US20060050656A1 (en) * 2002-09-12 2006-03-09 Sebastien Perrot Method for determining a parent portal in a wireless network and corresponding portal device
US20040054761A1 (en) * 2002-09-13 2004-03-18 Colombo Bruce A. Self-registration systems and methods for dynamically updating information related to a network
US7081808B2 (en) 2002-09-13 2006-07-25 Fitel Usa Corp. Self-registration systems and methods for dynamically updating information related to a network
EP1398906A1 (en) * 2002-09-13 2004-03-17 FITEL USA CORPORATION (a Delaware Corporation) Self-registration systems and methods for dynamically updating information related to a network
CN100492983C (en) * 2002-09-13 2009-05-27 菲特尔美国公司 Automatic registration system for dynamically-updating information related to network and its method
US20040059850A1 (en) * 2002-09-19 2004-03-25 Hipp Christopher G. Modular server processing card system and method
WO2004053694A1 (en) * 2002-12-12 2004-06-24 Koninklijke Philips Electronics N.V. Communication system with display of the actual network topology
US7512703B2 (en) * 2003-01-31 2009-03-31 Hewlett-Packard Development Company, L.P. Method of storing data concerning a computer network
US20040153568A1 (en) * 2003-01-31 2004-08-05 Yong Boon Ho Method of storing data concerning a computer network
US7307428B2 (en) * 2003-07-03 2007-12-11 Alcatel Method for single ended line testing and single ended line testing device
US20070013384A1 (en) * 2003-07-03 2007-01-18 Alcatel Method for single ended line testing and single ended line testing device
US20050141431A1 (en) * 2003-08-06 2005-06-30 Caveney Jack E. Network managed device installation and provisioning technique
US20080113560A1 (en) * 2003-08-06 2008-05-15 Panduit Corp. Network Managed Device Installation and Provisioning Technique
US8325770B2 (en) 2003-08-06 2012-12-04 Panduit Corp. Network managed device installation and provisioning technique
US20050111491A1 (en) * 2003-10-23 2005-05-26 Panduit Corporation System to guide and monitor the installation and revision of network cabling of an active jack network
US7207846B2 (en) 2003-11-24 2007-04-24 Panduit Corp. Patch panel with a motherboard for connecting communication jacks
US20050159036A1 (en) * 2003-11-24 2005-07-21 Caveney Jack E. Communications patch panel systems and methods
US7761514B2 (en) * 2003-11-26 2010-07-20 International Business Machines Corporation Method and apparatus for providing dynamic group management for distributed interactive applications
US7886039B2 (en) 2003-11-26 2011-02-08 International Business Machines Corporation Method and apparatus for providing dynamic group management for distributed interactive applications
US20050114478A1 (en) * 2003-11-26 2005-05-26 George Popescu Method and apparatus for providing dynamic group management for distributed interactive applications
US20070297349A1 (en) * 2003-11-28 2007-12-27 Ofir Arkin Method and System for Collecting Information Relating to a Communication Network
US20050245127A1 (en) * 2004-05-03 2005-11-03 Nordin Ronald A Powered patch panel
US7455527B2 (en) 2004-05-03 2008-11-25 Panduit Corp. Powered patch panel
US20050281190A1 (en) * 2004-06-17 2005-12-22 Mcgee Michael S Automated recovery from a split segment condition in a layer2 network for teamed network resources of a computer systerm
US7990849B2 (en) * 2004-06-17 2011-08-02 Hewlett-Packard Development Company, L.P. Automated recovery from a split segment condition in a layer2 network for teamed network resources of a computer system
US7257108B2 (en) 2004-07-28 2007-08-14 Lenovo (Singapore) Pte. Ltd. Determining the physical location of resources on and proximate to a network
US20060023671A1 (en) * 2004-07-28 2006-02-02 International Business Machines Corporation Determining the physical location of resources on and proximate to a network
US20060047800A1 (en) * 2004-08-24 2006-03-02 Panduit Corporation Systems and methods for network management
US20060250984A1 (en) * 2005-04-07 2006-11-09 Samsung Electronics Co., Ltd. Method and apparatus for detecting topology of network
EP1710958A3 (en) * 2005-04-07 2006-10-25 Samsung Electronics Co., Ltd. Method and apparatus for detecting topology of network
US20060262802A1 (en) * 2005-05-20 2006-11-23 Martin Greg A Ethernet repeater with local link status that reflects the status of the entire link
US20070055740A1 (en) * 2005-08-23 2007-03-08 Luciani Luis E System and method for interacting with a remote computer
US20070076632A1 (en) * 2005-10-05 2007-04-05 Hewlett-Packard Development Company, L.P. Network port for tracing a connection topology
US20070081469A1 (en) * 2005-10-11 2007-04-12 Sbc Knowledge Ventures L.P. System and methods for wireless fidelity (WIFI) venue utilization monitoring and management
US20090135732A1 (en) * 2007-11-28 2009-05-28 Acterna Llc Characterizing Home Wiring Via AD HOC Networking
US8223663B2 (en) 2007-11-28 2012-07-17 Ben Maxson Characterizing home wiring via AD HOC networking
US20090265318A1 (en) * 2008-04-21 2009-10-22 Alcatel Lucent Port Location Determination for Wired Intelligent Terminals
US8982715B2 (en) 2009-02-13 2015-03-17 Adc Telecommunications, Inc. Inter-networking devices for use with physical layer information
US20100215049A1 (en) * 2009-02-13 2010-08-26 Adc Telecommunications, Inc. Inter-networking devices for use with physical layer information
US20100211664A1 (en) * 2009-02-13 2010-08-19 Adc Telecommunications, Inc. Aggregation of physical layer information related to a network
US20100211665A1 (en) * 2009-02-13 2010-08-19 Adc Telecommunications, Inc. Network management systems for use with physical layer information
US9491119B2 (en) 2009-02-13 2016-11-08 Commscope Technologies Llc Network management systems for use with physical layer information
US9674115B2 (en) 2009-02-13 2017-06-06 Commscope Technologies Llc Aggregation of physical layer information related to a network
US20100211697A1 (en) * 2009-02-13 2010-08-19 Adc Telecommunications, Inc. Managed connectivity devices, systems, and methods
US9667566B2 (en) 2009-02-13 2017-05-30 Commscope Technologies Llc Inter-networking devices for use with physical layer information
US10554582B2 (en) 2009-02-13 2020-02-04 CommScope Technologies LLC System including management system to determine configuration for inter-networking device based on physical layer information of a network
US9742696B2 (en) 2009-02-13 2017-08-22 Commscope Technologies Llc Network management systems for use with physical layer information
US10129179B2 (en) 2009-02-13 2018-11-13 Commscope Technologies Llc Managed connectivity devices, systems, and methods
US20100275146A1 (en) * 2009-04-24 2010-10-28 Dell Products, Lp System and method for managing devices in an information handling system
US9646112B2 (en) * 2009-12-31 2017-05-09 Teradata Us, Inc. System, method, and computer-readable medium for providing a dynamic view and testing tool of power cabling of a multi-chassis computer system
US20110161877A1 (en) * 2009-12-31 2011-06-30 John Chapra System, method, and computer-readable medium for providing a dynamic view and testing tool of power cabling of a multi-chassis computer system
US20110185012A1 (en) * 2010-01-27 2011-07-28 Colley Matthew D System and method for generating a notification mailing list
US8874814B2 (en) 2010-06-11 2014-10-28 Adc Telecommunications, Inc. Switch-state information aggregation
US9497098B2 (en) 2011-03-25 2016-11-15 Commscope Technologies Llc Event-monitoring in a system for automatically obtaining and managing physical layer information using a reliable packet-based communication protocol
US9081537B2 (en) 2011-03-25 2015-07-14 Adc Telecommunications, Inc. Identifier encoding scheme for use with multi-path connectors
US8949496B2 (en) 2011-03-25 2015-02-03 Adc Telecommunications, Inc. Double-buffer insertion count stored in a device attached to a physical layer medium
US8832503B2 (en) 2011-03-25 2014-09-09 Adc Telecommunications, Inc. Dynamically detecting a defective connector at a port
USRE47365E1 (en) 2011-12-07 2019-04-23 Commscope Technologies Llc Systems and methods for using active optical cable segments
US9038141B2 (en) 2011-12-07 2015-05-19 Adc Telecommunications, Inc. Systems and methods for using active optical cable segments
US9207417B2 (en) 2012-06-25 2015-12-08 Adc Telecommunications, Inc. Physical layer management for an active optical module
US9602897B2 (en) 2012-06-25 2017-03-21 Commscope Technologies Llc Physical layer management for an active optical module
US9742704B2 (en) 2012-07-11 2017-08-22 Commscope Technologies Llc Physical layer management at a wall plate device
US9380874B2 (en) 2012-07-11 2016-07-05 Commscope Technologies Llc Cable including a secure physical layer management (PLM) whereby an aggregation point can be associated with a plurality of inputs
US9473361B2 (en) 2012-07-11 2016-10-18 Commscope Technologies Llc Physical layer management at a wall plate device
US11113642B2 (en) 2012-09-27 2021-09-07 Commscope Connectivity Uk Limited Mobile application for assisting a technician in carrying out an electronic work order
US10819602B2 (en) 2013-08-14 2020-10-27 Commscope Technologies Llc Inferring physical layer connection status of generic cables from planned single-end connection events
US10153954B2 (en) 2013-08-14 2018-12-11 Commscope Technologies Llc Inferring physical layer connection status of generic cables from planned single-end connection events
US9407510B2 (en) 2013-09-04 2016-08-02 Commscope Technologies Llc Physical layer system with support for multiple active work orders and/or multiple active technicians
US9905089B2 (en) 2013-09-04 2018-02-27 Commscope Technologies Llc Physical layer system with support for multiple active work orders and/or multiple active technicians
US9544058B2 (en) 2013-09-24 2017-01-10 Commscope Technologies Llc Pluggable active optical module with managed connectivity support and simulated memory table
US10205519B2 (en) 2013-09-24 2019-02-12 Commscope Technologies Llc Pluggable active optical module with managed connectivity support and simulated memory table
US10700778B2 (en) 2013-09-24 2020-06-30 Commscope Technologies Llc Pluggable active optical module with managed connectivity support and simulated memory table
US10756984B2 (en) * 2015-04-13 2020-08-25 Wirepath Home Systems, Llc Method and apparatus for creating and managing network device port VLAN configurations
US10429437B2 (en) * 2015-05-28 2019-10-01 Keysight Technologies, Inc. Automatically generated test diagram
US20160349312A1 (en) * 2015-05-28 2016-12-01 Keysight Technologies, Inc. Automatically Generated Test Diagram

Also Published As

Publication number Publication date
US5226120A (en) 1993-07-06

Similar Documents

Publication Publication Date Title
US5606664A (en) Apparatus and method for automatically determining the topology of a local area network
US6915466B2 (en) Method and system for multi-user channel allocation for a multi-channel analyzer
US5179554A (en) Automatic association of local area network station addresses with a repeater port
CA1315008C (en) System permitting peripheral interchangeability
EP0495575B1 (en) Repeater interface controller
Lee et al. The principles and performance of Hubnet: A 50 Mbit/s glass fiber local area network
US20050002415A1 (en) High performance digital loop diagnostic technology
US5687319A (en) Method and system for determining maximum cable segments between all possible node to node paths on a serial bus
JPH088958A (en) Access provision system and connection automation judgement method
JP4087179B2 (en) Subscriber line terminal equipment
AU7868994A (en) Determination of network topology
WO1997037292A2 (en) Network connection status monitor and display
JPH073973B2 (en) Method of communicating between two control units
EP0537040B1 (en) Apparatus and method to test ring-shaped, high-throughput network
Cisco Introduction
Cisco Introduction
Cisco Introduction
Cisco Introduction
Cisco Introduction
Cisco Introduction
Cisco Introduction
Cisco Introduction
Cisco Introduction
Cisco Introduction
Cisco Introduction

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: APPLICATION UNDERGOING PREEXAM PROCESSING

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: NORTEL NETWORKS LIMITED, CANADA

Free format text: CHANGE OF NAME;ASSIGNOR:NORTEL NETWORKS CORPORATION;REEL/FRAME:011195/0706

Effective date: 20000830

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12

AS Assignment

Owner name: NORTEL NETWORKS CORPORATION, CANADA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NORTEL NETWORKS GROUP INC.;REEL/FRAME:026138/0420

Effective date: 19991128

Owner name: SYNOPTICS COMMUNICATIONS, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BROWN, BRIAN;CHOUDHURY, SHABBIR AHMED;FONTAINE, JEAN-LUC;AND OTHERS;SIGNING DATES FROM 19900707 TO 19900910;REEL/FRAME:026132/0396

Owner name: NORTEL NETWORKS GROUP INC., CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:BAY NETWORKS GROUP, INC.;REEL/FRAME:026132/0472

Effective date: 19990420

Owner name: BAY NETWORKS GROUP, INC., CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:SYNOPTICS COMMUNICATIONS, INC.;REEL/FRAME:026132/0405

Effective date: 19950110

AS Assignment

Owner name: ROCKSTAR BIDCO, LP, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NORTEL NETWORKS LIMITED;REEL/FRAME:027164/0356

Effective date: 20110729

AS Assignment

Owner name: ROCKSTAR CONSORTIUM US LP, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ROCKSTAR BIDCO, LP;REEL/FRAME:032099/0853

Effective date: 20120509

AS Assignment

Owner name: CONSTELLATION TECHNOLOGIES LLC, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ROCKSTAR CONSORTIUM US LP;REEL/FRAME:032162/0524

Effective date: 20131113