US20070294410A1 - Software, systems and methods for managing a distributed network

Info

Publication number: US20070294410A1
Application number: US11/894,526
Authority: US (United States)
Prior art keywords: socket, bandwidth, network, agent, application
Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Inventors: Suketu Pandya, Varad Joshi
Current assignee: Centrisoft Corp (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Centrisoft Corp
Priority claimed from: US09/532,101 (US6671724B1)
Application filed by: Centrisoft Corp
Assignment: to Centrisoft Corporation; assignor Pandya, Suketu J. (assignment of assignors interest; see document for details)

Classifications

    • H04L 9/40 Network security protocols
    • H04L 41/046 Network management architectures or arrangements comprising network management agents or mobile agents therefor
    • H04L 41/0893 Assignment of logical groups to network elements
    • H04L 41/0894 Policy-based network configuration management
    • H04L 41/0895 Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
    • H04L 41/0896 Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
    • H04L 41/22 Network management arrangements comprising specially adapted graphical user interfaces [GUI]
    • H04L 41/5003 Managing SLA; interaction between SLA and QoS
    • H04L 41/5009 Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
    • H04L 41/5096 Network service management wherein the managed service relates to distributed or central networked applications
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/06 Generation of reports
    • H04L 43/0817 Monitoring availability by checking functioning
    • H04L 43/0829 Packet loss
    • H04L 43/0852 Delays
    • H04L 43/087 Jitter
    • H04L 43/0882 Utilisation of link capacity
    • H04L 43/0888 Throughput
    • H04L 43/0894 Packet rate
    • H04L 43/106 Active monitoring using time related information in packets, e.g. by adding timestamps
    • H04L 43/16 Threshold monitoring
    • H04L 47/10 Flow control; congestion control
    • H04L 47/11 Identifying congestion
    • H04L 47/20 Traffic policing
    • H04L 47/50 Queue scheduling
    • H04L 47/70 Admission control; resource allocation
    • H04L 47/803 Application-aware actions related to the user profile or the type of traffic
    • H04L 47/828 Allocation of resources per group of connections, e.g. per group of users
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1008 Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H04L 67/101 Server selection for load balancing based on network conditions
    • H04L 67/1014 Server selection for load balancing based on the content of a request
    • H04L 67/1023 Server selection for load balancing based on a hash applied to IP addresses or costs
    • H04L 67/1029 Server selection using data related to the state of servers by a load balancer
    • H04L 67/10015 Access to distributed or replicated servers, e.g. using brokers
    • H04L 67/306 User profiles
    • H04L 67/61 Scheduling or organising the servicing of application requests taking into account QoS or priority requirements
    • H04L 69/162 Implementation details of TCP/IP or UDP/IP stack architecture involving adaptations of sockets based mechanisms
    • G06F 11/3495 Performance evaluation by tracing or monitoring for systems

Definitions

  • the present invention relates generally to distributed network systems, and more particularly to software, systems and methods for managing resources of a distributed network.
  • a broad, conceptual class of management solutions may be thought of as attempts to increase “awareness” in a distributed networking environment.
  • the concept is that where the network is more aware of applications or other tasks running on networked devices, and vice versa, then steps can be taken to make more efficient use of network resources. For example, if network management software becomes aware that a particular user is running a low priority application, then the software could block or limit that user's access to network resources. If management software becomes aware that the network population at a given instance includes a high percentage of outside customers, bandwidth preferences and priorities could be modified to ensure that the customers had a positive experience with the network.
  • increasing application and network awareness is a desirable goal; however, application vendors largely ignore these considerations and tend to focus not on network infrastructure, but rather on enhancing application functionality.
  • quality of service (QoS) and policy-based management techniques represent efforts to bridge the gap between networks, applications and users in order to more efficiently manage the use of network resources.
  • QoS is a term referring to techniques which allow network-aware applications to request and receive a predictable level of service in terms of performance specifications such as bandwidth, jitter, delay and loss.
  • Known QoS methods include disallowing certain types of packets, slowing transmission rates, establishing distinct classes of services for certain types of packets, marking packets with a priority value, and various queuing methods.
  • in a distributed environment having scarce resources, QoS techniques necessarily introduce unfairness into the system by giving preferential treatment to certain network traffic.
  • Policy-based network management uses policies, or rules, to define how network resources are to be used.
  • a policy includes a condition and an action.
  • An example of a policy could be to block access or disallow packets (action) if the IP source address of the data is included on a list of disallowed addresses (condition).
  • One use of policy-based network management techniques is to determine when and how the unfairness introduced by QoS methods should apply.
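To make the condition/action structure concrete, here is a minimal sketch in Python of such a rule; the names and data layout are illustrative assumptions, not the implementation described in this document.

```python
# A minimal sketch of a condition/action policy rule; all names are
# illustrative, not taken from the patent.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    condition: Callable[[dict], bool]  # tested against a network event
    action: Callable[[dict], None]     # taken when the condition is satisfied

DISALLOWED_SOURCES = {"10.0.0.5", "192.168.1.77"}

block_disallowed = Policy(
    condition=lambda event: event["ip_src"] in DISALLOWED_SOURCES,
    action=lambda event: event.update(verdict="blocked"),
)

event = {"ip_src": "10.0.0.5", "ip_dst": "172.16.0.9", "verdict": "allowed"}
if block_disallowed.condition(event):
    block_disallowed.action(event)
print(event["verdict"])  # -> blocked
```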
  • the classification process can occur at various levels of data abstraction, and may be described in terms of layered communication protocols that network devices use to communicate across a network link.
  • the layers of the OSI model are: application (layer 7), presentation (layer 6), session (layer 5), transport (layer 4), network (layer 3), data link (layer 2) and physical (layer 1).
  • the second major model forms the basis for the TCP/IP protocol suite. Its layers are application, transport, network, data link and hardware, as also depicted in FIG. 1 .
  • the TCP/IP layers correspond in function to the OSI layers, but without a presentation or session layer. In both models, data is processed and changes form as it is sequentially passed between the layers.
  • Known policy based management solutions and QoS methods typically classify data by monitoring data flows at the transport layer and below.
  • a common multi-parameter classifier is the well known “five-tuple” consisting of (IP source address, IP destination address, IP protocol, TCP/UDP source port and TCP/UDP destination port). These parameters are all obtained at the transport and network layers of the models.
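For illustration, the sketch below shows the classifier key such transport/network-layer solutions typically build from the five-tuple; the packet field names are hypothetical.

```python
# Illustrative five-tuple classifier key built from transport- and
# network-layer header fields; the field names are hypothetical.
from collections import namedtuple

FiveTuple = namedtuple(
    "FiveTuple", ["ip_src", "ip_dst", "protocol", "src_port", "dst_port"]
)

def classify(packet: dict) -> FiveTuple:
    # Everything needed is available at layers 3 and 4 -- nothing about the
    # application, user, or URL is visible at this level.
    return FiveTuple(
        packet["ip_src"], packet["ip_dst"], packet["protocol"],
        packet["src_port"], packet["dst_port"],
    )

print(classify({"ip_src": "10.0.0.1", "ip_dst": "10.0.0.2",
                "protocol": "TCP", "src_port": 49152, "dst_port": 80}))
```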
  • the large majority of existing policy-based, QoS solutions are implemented by monitoring and classifying network activity at these protocol layers. However, the higher the protocol layer, the more definitive and specific the available data and classifiers.
  • in known solutions, bandwidth allocations are static and are not adjusted dynamically in response to changing network conditions.
  • the present description provides systems and methods for managing network bandwidth consumption, which may include use of an agent module loadable on a networked computer.
  • the agent module is configured to obtain an allocation of network bandwidth usable by the networked computer, and is further configured to sub-allocate such allocation among multiple bandwidth-consuming components associated with the networked computer.
  • the system may further include multiple such agent modules loaded on plural networked computers, and a control module configured to interact with each of the agent modules to dynamically manage bandwidth usage by the networked computers.
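A minimal sketch of that two-level allocation scheme, assuming a simple proportional split by component weight; the description does not prescribe a particular sub-allocation algorithm.

```python
# Sketch of an agent sub-allocating a machine-level bandwidth grant among
# local bandwidth-consuming components, proportionally to assumed weights.
def sub_allocate(machine_allocation_bps: float, weights: dict) -> dict:
    total = sum(weights.values())
    return {name: machine_allocation_bps * w / total
            for name, w in weights.items()}

# e.g. a 1 Mbps grant from the control module, split among three consumers
print(sub_allocate(1_000_000, {"voip": 4, "http": 2, "backup": 1}))
```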
  • FIG. 1 is a conceptual depiction of the OSI and TCP/IP layered protocol models.
  • FIG. 2 is a view of a distributed network system in which the software, systems and methods described herein may be deployed.
  • FIG. 3 is a schematic view of a computing device that may be deployed in the distributed network system of FIG. 2 .
  • FIG. 4 is a block diagram view depicting exemplary agent modules and control modules that may be used to manage resources in a distributed network such as that depicted in FIG. 2 .
  • FIG. 5 is a block diagram view depicting various exemplary components that may be employed in connection with the described software, systems and methods, including two agent modules, a control module and a configuration utility.
  • FIG. 6 is a block diagram depicting an exemplary deployment of an agent module in relation to a layered protocol stack of a computing device.
  • FIG. 7 is a block diagram depicting another exemplary deployment of an agent module in relation to a layered protocol stack of a computing device.
  • FIG. 8 is a block diagram depicting yet another exemplary deployment of an agent module in relation to a layered protocol stack of a computing device.
  • FIG. 9 is a block diagram depicting exemplary component parts of an agent module embodiment.
  • FIG. 10 is a block diagram depicting exemplary component parts of a control module embodiment.
  • FIG. 11A is a flowchart depicting a method for allocating bandwidth among a plurality of computers.
  • FIG. 11B is a flowchart depicting another method for allocating bandwidth among a plurality of computers.
  • FIG. 11C is a flowchart depicting yet another method for allocating bandwidth among a plurality of computers.
  • FIG. 11D is a flowchart depicting yet another method for allocating bandwidth among a plurality of computers.
  • FIG. 12 is a flowchart depicting a method for monitoring the status of network resources.
  • FIG. 13 is a view of a main configuration screen of a configuration utility that may be employed in connection with the software, systems and methods described herein.
  • FIG. 14 is a view of another configuration screen of the configuration utility depicted in FIG. 13 .
  • FIG. 15 is a view of yet another configuration screen of the configuration utility depicted in FIG. 13 .
  • FIG. 16 is a view of yet another configuration screen of the configuration utility depicted in FIG. 13 .
  • FIG. 17 schematically depicts a multi-tier control architecture that may be employed in connection with the management software, systems and methods described herein.
  • FIG. 18 depicts an exemplary network configuration allowing for centralized definition of network policies, and dissemination of those policies to pertinent distributed locations.
  • FIG. 19 depicts an exemplary multi-tier system and method for providing granular control over bandwidth usage of individual bandwidth-consuming components.
  • FIG. 20 depicts an exemplary computing device loaded with an embodiment of an agent software module configured to manage bandwidth consumption of an individual application socket or transaction.
  • FIG. 21 depicts an exemplary computing device loaded with alternate embodiments of an agent module configured to monitor and manage network traffic to and from applications running on the computing device.
  • the present description provides a system and method for managing network resources in a distributed networking environment, such as distributed network 10 depicted in FIG. 2 .
  • the software, system and methods increase productivity and customer/user satisfaction, minimize frustration associated with using the network, and ultimately ensure that network resources are used in a way consistent with underlying business or other objectives.
  • the systems and methods may employ two main software components, an agent and a control module, also referred to as a control point.
  • the agents and control points may be deployed throughout distributed network 10 , and may interact with each other to manage network resources.
  • a plurality of agents may be deployed to intelligently couple clients, servers and other computing devices to the underlying network.
  • the deployed agents monitor, analyze and act upon network events relating to the networked devices with which they are associated.
  • the agents typically are centrally coordinated and/or controlled by one or more control points.
  • the agents and control points may interact to control and monitor network events, track operational and congestion status of network resources, select optimum targets for network requests, dynamically manage bandwidth usage, and share information about network conditions with customers, users and IT personnel.
  • distributed network 10 may include a local network 12 and a plurality of remote networks 14 linked by a public network 16 such as the Internet.
  • the local network and remote networks may be connected to the public network with network infrastructure devices such as routers 18 .
  • Local network 12 typically includes servers 20 and client devices such as client computers 22 interconnected by network link 24 . Additionally, local network 12 may include any number and variety of devices, including file servers, applications servers, mail servers, WWW servers, databases, client computers, remote access devices, storage devices, printers and network infrastructure devices such as routers, bridges, gateways, switches, hubs and repeaters. Remote networks 14 may similarly include any number and variety of networked devices.
  • Computer system 40 includes a processor 42 that processes digital data.
  • the processor may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing a combination of instruction sets, a microcontroller, or virtually any other processor/controller device.
  • the processor may be a single device or a plurality of devices.
  • processor 42 is coupled to a bus 44 which transmits signals between the processor and other components in the computer system.
  • bus 44 may be a single bus or a plurality of buses.
  • a memory 46 is coupled to bus 44 and comprises a random access memory (RAM) device 47 (referred to as main memory) that stores information or other intermediate data during execution by processor 42 .
  • Memory 46 also includes a read only memory (ROM) and/or other static storage device 48 coupled to the bus that stores information and instructions for processor 42 .
  • a basic input/output system (BIOS) containing the basic routines that help to transfer information between elements of the computer system, such as during start-up, is stored in ROM 48.
  • a data storage device 50 also is coupled to bus 44 and stores information and instructions.
  • the data storage device may be a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device or any other mass storage device.
  • a network interface 52 also is coupled to bus 44 . The network interface operates to connect the computer system to a network (not shown).
  • Computer system 40 may also include a display device controller 54 coupled to bus 44 .
  • the display device controller allows coupling of a display device to the computer system and operates to interface the display device to the computer system.
  • the display device controller 54 may be, for example, a monochrome display adapter (MDA) card, a color graphics adapter (CGA) card, or other display device controller.
  • the display device (not shown) may be a television set, a computer monitor, a flat panel display or other display device.
  • the display device receives information and data from processor 42 through display device controller 54 and displays the information and data to the user of computer system 40 .
  • An input device 56 typically is coupled to bus 44 for communicating information and command selections to processor 42 .
  • in an alternative configuration, input device 56 is not directly coupled to bus 44, but interfaces with the computer system via infra-red coded signals transmitted from the input device to an infra-red receiver in the computer system (not shown).
  • the input device may also be a remote control unit having keys that select characters or command selections on the display device.
  • FIG. 1 depicts the OSI layered protocol model and a layered model based on the TCP/IP suite of protocols. These two models dominate the field of network communications software.
  • the OSI model has seven layers, including an application layer, a presentation layer, a session layer, a transport layer, a network layer, a data link layer and a physical layer.
  • the TCP/IP-based model includes an application layer, a transport layer, a network layer, a data link layer and a physical layer.
  • Each layer in the models plays a different role in network communications.
  • all of the protocol layers lie in a data transmission path that is “between” an application program running on the particular networked device and the network link, with the application layer being closest to the application program.
  • the data is transferred down through the protocol layers of the first computer, across the network link, and then up through the protocol layers on the second computer.
  • the application layer is responsible for interacting with an operating system of the networked device and for providing a window for application programs running on the device to access the network.
  • the transport layer is responsible for providing reliable, end-to-end data transmission between two end points on a network, such as between a client device and a server computer, or between a web server and a DNS server.
  • transport functionality may be realized using either connection-oriented or connectionless data transfer.
  • the network layer typically is not concerned with end-to-end delivery, but rather with forwarding and routing data to and from nodes between endpoints.
  • the layers below the transport and network layers perform other functions, with the lowest levels addressing the physical and electrical issues of transmitting raw bits across a network link.
  • the systems and methods described herein are applicable to a wide variety of network environments employing communications protocols adhering to either of the layered models depicted in FIG. 1 , or to any other layered model. Furthermore, the systems and methods are applicable to any type of network topology, and to networks using both physical and wireless connections.
  • the present description provides software, systems and methods for managing the resources of an enterprise network, such as that depicted in FIG. 2 . This may be accomplished using two interacting software components, an agent and a control point, both of which may be adapted to run on, or be associated with, computing devices such as the computing device described with reference to FIG. 3 . As seen in FIG. 4 , a plurality of agents 70 and one or more control points 72 may be deployed throughout distributed network 74 by loading the agent and control point software modules on networked computing devices such as clients 22 and server 20 .
  • the agents and control points may be adapted and configured to enforce system policies; to monitor and analyze network events, and take appropriate action based on these events; to provide valuable information to users of the network; and ultimately to ensure that network resources are efficiently used in a manner consistent with underlying business or other goals.
  • the system may also include a configuration utility; typically, this configuration utility is a platform-independent application that provides a graphical user interface for centrally managing configuration information for the control points and agents.
  • the configuration utility may be adapted to communicate and interface with other management systems, including management platforms supplied by other vendors.
  • each control point 72 is typically associated with multiple agents 70 , and the associated agents are referred to as being within a domain 76 of the particular control point.
  • the control points coordinate and control the activity of the distributed agents within their domains.
  • the control points may monitor the status of network resources, and share this information with management and support systems and with the agents.
  • Control points 72 and agents 70 may be flexibly deployed in a variety of configurations.
  • each agent may be associated with a primary control point and one or more backup control points that will assume primary control if necessary.
  • Such a configuration is illustrated in FIG. 4 , where control points 72 within the dashed lines function as primary connections, with the control point associated with server device 20 functioning as a backup connection for all of the depicted agents.
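The failover arrangement can be sketched as an ordered walk over a primary control point and its backups; the selection logic below is an illustrative assumption.

```python
# Illustrative primary/backup failover: the agent tries its primary control
# point first, then each backup in order. The selection logic is assumed.
def connect(control_points, is_reachable):
    for cp in control_points:          # primary first, then backups
        if is_reachable(cp):
            return cp
    raise ConnectionError("no control point reachable")

cps = ["cp-primary", "cp-backup-1", "cp-backup-2"]
print(connect(cps, is_reachable=lambda cp: cp != "cp-primary"))
# -> cp-backup-1 (a backup assumes control while the primary is down)
```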
  • the described exemplary systems may be configured so that one control point coordinates and controls the activity of a single domain, or of multiple domains. Alternatively, one domain may be controlled and coordinated by the cooperative activity of multiple control points.
  • agents may be configured to have embedded control point functionality, and may therefore operate without an associated control point entity.
  • the agents monitor network resources and the activity of the device with which they are associated, and communicate this information to the control points.
  • the control points may alter the behavior of particular agents in order to provide the desired network services.
  • the control points and agents may be loaded on a wide variety of devices, including general purpose computers, servers, routers, hubs, palm computers, pagers, cellular telephones, and virtually any other networked device having a processor and memory. Agents and control points may reside on separate devices, or simultaneously on the same device.
  • FIG. 5 illustrates an example of the way in which the various components of the described software, systems and methods may be physically interconnected with a network link 90 .
  • the components are all connected to network link 90 by means of layered communications protocol software 92 .
  • the components communicate with each other via the communications software and network link.
  • network link 90 may be a physical or wireless connection, or a series of links including physical and wireless segments.
  • the depicted system includes an agent 70 associated with a client computing device 22 , including an application program 98 .
  • Another agent is associated with server computing device 20 .
  • the agents monitor the activity of their associated computing devices and communicate with control point 72 .
  • Configuration utility 106 communicates with all of the other components, and with other management systems, to configure the operation of the various components and monitor the status of the network.
  • the system policies that define how network resources are to be used may be centrally defined and tailored to most efficiently achieve underlying goals. Defined policies are accessed by the control points, which in turn communicate various elements and parameters associated with the policies to the agents within their domain.
  • a policy contains rules about how network resources are to be used, with the rules containing conditions and actions to be taken when the conditions are satisfied.
  • the agents and control points monitor the network and devices connected to the network to determine when various rules apply and whether the conditions accompanying those rules are satisfied. Once the agents and/or control points determine that action is required, they take the necessary action(s) to enforce the system policies.
  • in an example involving policies that give preference to outside customers, control points 72 would access these policies and provide policy data to agents 70.
  • Agents 70 and control points 72 would communicate with each other and monitor the network to determine how many customers were accessing the network, what computers the customer(s) were accessing, and what applications were being accessed by the customers. Once the triggering conditions were detected, the agents and control points would interact to re-allocate bandwidth, provide specified service levels, block or restrict various non-customer activities, etc.
  • another use of policy-based management would be to define an optimum specification of network resources or service levels for particular types of network tasks.
  • the particular policies would direct the management entities to determine whether the particular task was permitted, and if permitted, the management entities would interact to ensure that the desired level of resources was provided to accomplish the task. If the optimum resources were not available, the applicable policies could further specify that the requested task be blocked, and that the requesting user be provided with an informative message detailing the reason why the request was denied. Alternatively, the policies could specify that the user be provided with various options, such as proceeding with the requested task, but with sub-optimal resources, or waiting to perform the task until a later time.
  • continuous media applications such as IP telephony have certain bandwidth requirements for optimum performance, and are particularly sensitive to network jitter and delay.
  • Policies could be written to specify a desired level of service, including bandwidth requirements and threshold levels for jitter and delay, for client computers attempting to run IP telephony applications. The policies would further direct the agents and control modules to attempt to provide the specified level of service. Security checking could also be included to ensure that the particular user or client computer was permitted to run the application.
  • if the specified level of service could not be provided, the requesting user could be provided with a message indicating that the resources for the request were not available.
  • the user could also be offered various options, including proceeding with a sub-optimal level of service, placing a conventional telephone call, waiting to perform the task until a later time, etc.
  • the software, system and methods of the present description may be used to implement a wide variety of system policies.
  • the policy rules and conditions may be based on any number of parameters, including IP source address, IP destination address, source port, destination port, protocol, application identity, user identity, device identity, URL, available device bandwidth, application profile, server profile, gateway identity, router identity, time-of-day, network congestion, network load, network population, available domain bandwidth and resource status, to name but a partial list.
  • the actions taken when the policy conditions are satisfied can include blocking network access, adjusting service levels and/or bandwidth allocations for networked devices, blocking requests to particular URLs, diverting network requests away from overloaded or underperforming resources, redirecting network requests to alternate resources and gathering network statistics.
  • some of the parameters listed above may be thought of as “client parameters,” because they are normally evaluated by an agent monitoring a single networked client device. These include IP source address, IP destination address, source port, destination port, protocol, application identity, user identity, available device bandwidth and URL. Other parameters, such as application profile, server profile, gateway identity, router identity, time-of-day, network congestion, network load, network population, available domain bandwidth and resource status, may be thought of as “system parameters” because they pertain to shared resources or aggregate network conditions, or require evaluation of data from multiple agent modules. The distinction is not precise, however; certain parameters, such as time-of-day, may be considered either a client parameter or a system parameter, or both.
  • Policy-based network management, QoS implementation, and the other functions of the agents and control points depend on obtaining real-time information about the network.
  • certain described embodiments and implementations provide improvements over known policy-based QoS management solutions because of the enhanced ability to obtain detailed information about network conditions and the activity of networked devices.
  • Many of the policy parameters and conditions discussed above are accessible due to the particular way the agent module embodiments may be coupled to the communications software of their associated devices.
  • managing bandwidth and ensuring its availability for core applications is an increasingly important consideration in managing networks.
  • Certain embodiments described herein provide for improved dynamic allocation of bandwidth and control of resource consumption in response to changing network conditions.
  • each agent module may monitor the status and activities of its associated client, server, pervasive computing device or other computing device; communicate this information to one or more control points; enforce system policies under the direction of the control points; and provide messages to network users and administrators concerning network conditions.
  • FIGS. 6-8 are conceptual depictions of networked computing devices, and show how the agent software may be associated with the networked devices relative to layered protocol software used by the devices for network communication.
  • agent 70 is interposed between application program 122 and a communications protocol layer for providing end-to-end data transmission, such as transport layer 124 of communications protocol stack 92 .
  • the agent modules described herein may be used with network devices that employ layered communications software adhering to either the OSI or TCP/IP-based protocol models.
  • agent 70 is depicted as “interposed,” i.e. in a data path, between an application program and a transport protocol layer.
  • the various agent module embodiments may be used with protocol software not adhering to either the OSI or TCP/IP models, but that nonetheless includes a protocol layer providing transport functionality, i.e. providing for end-to-end data transmission.
  • agent 70 is able to monitor network traffic and obtain information that is not available by hooking into transport layer 124 or the layers below the transport layer. At the higher layers, the available data is richer and more detailed. Hooking into the stack at higher layers allows the network to become more “application-aware” than is possible when monitoring occurs at the transport and lower layers.
  • agent 70 may be associated with a client computer so that it is adjacent an application programming interface (API) adapted to provide a standardized interface for application program 122 to access a local operating system (not shown) and communications stack 92 .
  • agent 70 is adjacent a winsock API 128 and interposed between application program 122 and the winsock interface.
  • FIG. 8 shows an alternate configuration, in which agent 70 again hooks into a socket object, such as API 128 , but downstream of the socket interface. With either configuration, the agent is interposed between the application and transport layer 124 of communications stack 92 , and is adapted to directly monitor data received by or sent from the winsock interface.
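The interception pattern can be sketched as a wrapper sitting between the application and the real socket; this is an illustrative Python analogy only, since an actual agent hooks the winsock interface itself rather than wrapping an object.

```python
# Sketch of the interception pattern: a wrapper between the application and
# the real socket lets an agent observe, reject or reshape traffic before it
# reaches the transport layer. Hypothetical; a real agent hooks winsock.
class MonitoredSocket:
    def __init__(self, sock, on_send):
        self._sock = sock
        self._on_send = on_send        # agent callback: may veto the call

    def send(self, data: bytes) -> int:
        if not self._on_send(data):    # rejected by policy
            raise PermissionError("blocked by policy")
        return self._sock.send(data)   # passed on to the real stack

class FakeSocket:                      # stand-in transport endpoint
    def send(self, data: bytes) -> int:
        return len(data)

def agent_callback(data: bytes) -> bool:
    print(f"agent saw {len(data)} bytes above the transport layer")
    return True

ms = MonitoredSocket(FakeSocket(), agent_callback)
print(ms.send(b"GET / HTTP/1.1\r\n"))  # -> 16
```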
  • agent 70 may be configured to hook into lower layers of communications stack 92 . This allows the agent to accurately monitor network traffic volumes by providing a correction mechanism to account for data compression or encryption occurring at protocol layers below transport layer 124 . For example, if compression or encryption occurs within transport layer 124 , monitoring at a point above the transport layer would yield an inaccurate measure of the network traffic associated with the computing device. Hooking into lower layers with agent 70 allows network traffic to be accurately measured in the event that compression, encryption or other data processing that qualitatively or quantitatively affects network traffic occurs at lower protocol layers.
  • agent 70 may include a redirector module 130 , a traffic control module 132 , an administrator module 134 , a DNS module 136 , a popapp module 138 , a message broker module 140 , a system services module 142 , and a popapp 144 .
  • Redirector module 130 intercepts winsock API calls made by applications running on networked devices such as the client computers depicted in FIGS. 2 and 3. Redirector module 130 then hands these calls to one or more of the other agent components for processing. As discussed with reference to FIGS. 6-8, the redirector module is positioned to allow the agent to monitor data at a data transmission point between an application program running on the device and the transport layer of the communications stack.
  • the intercepted winsock calls may be rejected, changed, or passed on by agent 70 .
  • Traffic control module 132 implements QoS and system policies and assists in monitoring network conditions. Traffic control module 132 implements QoS methods by controlling the network traffic flow between applications running on the agent device and the network link. The traffic flow is controlled to deliver a specified network service level, which may include specifications of bandwidth, data throughput, jitter, delay and data loss.
  • traffic control module 132 may maintain a queue or plurality of queues.
  • as data is sent or received by an application, redirector module 130 intercepts the data, and traffic module 132 places the individual units of data in the appropriate queue.
  • the control points may be configured to periodically provide traffic control commands, which may include the QoS parameters and service specifications discussed above.
  • traffic control module 132 controls the passing of data into, through or out of the queues in order to provide the specified service level.
  • the outgoing traffic rate may be controlled using a plurality of priority-based transmission queues, such as transmission queues 132 a .
  • a priority level is assigned to the application, based on centrally defined policies and priority data supplied by the control point.
  • the control points maintain user profiles, applications profiles and network resource profiles. These profiles include priority data which is provided to the agents.
  • Transmission queues 132 a may be configured to release data for transmission to the network at regular intervals. Using the parameters specified in traffic control commands issued by a control point, traffic module 132 calculates how much data can be released from the transmission queues in a particular interval. For example, if the specified average traffic rate is 100 Kbps and the queue release interval is 1 ms, then the total amount of data that the queues can release in a given interval is 100 bits. The relative priorities of the queues containing data to be transmitted determine how much of the allotment may be released by each individual queue.
  • for example, given two queues Q1 and Q2, Q1 will be permitted to release 66.66% of the overall interval allotment if its priority is twice that of Q2, while Q2 would only be permitted to release 33.33% of the allotment. If their priorities were equal, each queue would be permitted to release 50% of the interval allotment for forwarding to the network link.
  • if waiting data is packaged into units that are larger than the amount a given queue is permitted to release in one interval, the queue accumulates “credits” for intervals in which it does not release any waiting data. When enough credits are accumulated, the waiting message is released for forwarding to the network.
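A minimal sketch of this interval-release discipline, including the credit mechanism; the data structures, rates and priorities are illustrative, not prescribed by the description.

```python
# Sketch of priority-weighted queue release per interval, with credit
# accumulation for messages larger than one interval's share.
from collections import deque

class TxQueue:
    def __init__(self, priority):
        self.priority = priority
        self.credits = 0.0
        self.items = deque()           # waiting messages, sizes in bits

def release_interval(queues, allotment_bits):
    """One release interval: split the allotment by relative priority."""
    total_priority = sum(q.priority for q in queues)
    released = []
    for q in queues:
        q.credits += allotment_bits * q.priority / total_priority
        while q.items and q.items[0] <= q.credits:
            size = q.items.popleft()   # enough credits: release the message
            q.credits -= size
            released.append(size)
    return released

# 100 Kbps average rate with a 1 ms interval -> 100 bits per interval;
# Q1 has twice Q2's priority, so it gets ~66.66 bits of each allotment.
q1, q2 = TxQueue(priority=2), TxQueue(priority=1)
q1.items.extend([60, 60])
q2.items.append(120)                   # larger than one interval's share
for interval in range(4):
    print(interval, release_interval([q1, q2], 100))
# Q2 accumulates credits across intervals until its 120-bit message fits.
```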
  • traffic control module 132 may be configured to maintain a plurality of separate receive queues, such as receive queues 132 b .
  • various other methods may be employed to control the rate at which network traffic is sent and received by the queues.
  • the behavior of the transmit and receive queues may be controlled through various methods to control jitter, delay, loss and response time for network connections.
  • the transmit and receive queues may also be configured to detect network conditions such as congestion and slow responding applications or servers. For example, for each application, transmitted packets or other data units may be timestamped when passed out of a transmit queue. When corresponding packets are received for a particular application, the receive and send times may be compared to detect network congestion and/or slow response times for various target resources. This information may be reported to the control points and shared with other agents within the domain. The response time and other performance information obtained by comparing transmit and receive times may also be used to compile and maintain statistics regarding various network resources.
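A sketch of the timestamp comparison, under the assumption that transmitted and received packets can be matched by an identifier; the matching scheme and threshold are not specified here.

```python
# Sketch of detecting slow responses by timestamping transmitted packets and
# comparing against receive times. Matching by packet id is an assumption.
import time

sent_at = {}            # packet id -> transmit timestamp
SLOW_THRESHOLD = 0.5    # seconds; illustrative

def on_transmit(packet_id):
    sent_at[packet_id] = time.monotonic()

def on_receive(packet_id):
    rtt = time.monotonic() - sent_at.pop(packet_id)
    if rtt > SLOW_THRESHOLD:
        print(f"{packet_id}: {rtt:.3f}s -- report congestion / slow target")
    return rtt

on_transmit("req-1")
time.sleep(0.01)        # stand-in for the network round trip
print(f"rtt = {on_receive('req-1'):.3f}s")
```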
  • a control point may be configured to reduce network loads by instructing traffic control module 132 to close low priority sessions and block additional sessions whenever heavy network congestion is reported by one of the agents.
  • popapp 138 module may provide a message to the user explaining why sessions are being closed.
  • if necessary, the control point may be configured to instruct the agents to block any further sessions. This action may also be accompanied by a user message in response to attempts to launch a new application or network process. When the network load is reduced, the control point will send a message to the agents allowing new sessions.
  • traffic control module 132 may be more generally configured to aid in identifying downed or under-performing network resources.
  • when a downed or under-performing resource is detected, traffic module 132 notifies popapp module 138, which in turn launches an executable to perform a root-cause analysis of the problem.
  • Agent 70 then provides the control point with a message identifying the resource and its status, if possible.
  • popapp module 138 may be configured to provide a message to the user, including an option to initiate an autoconnect routine targeting the unavailable resource. Enabling autoconnect causes the agent to periodically retry the unavailable resource. This feature may be disabled, if desired, to allow the control point to assume responsibility for determining when the resource becomes available again. As will be later discussed, the described system may be configured so that the control modules assume responsibility for monitoring unavailable resources in order to minimize unnecessary network traffic.
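The autoconnect behavior amounts to a periodic retry loop; here is a sketch assuming a fixed retry interval and a boolean reachability probe.

```python
# Sketch of the autoconnect routine: periodically retry an unavailable
# resource until it answers. The interval and probe are illustrative; the
# feature may be disabled so the control point monitors the resource instead.
import time

def autoconnect(probe, retry_interval=5.0, max_attempts=10):
    """Periodically retry an unavailable resource until it responds."""
    for _ in range(max_attempts):
        if probe():
            return True                 # resource is back
        time.sleep(retry_interval)      # wait before the next retry
    return False

# simulated probe: fails twice, then succeeds
answers = iter([False, False, True])
print(autoconnect(lambda: next(answers), retry_interval=0.01))  # -> True
```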
  • agent components also monitor network conditions and resource usage for the purpose of compiling statistics.
  • An additional function of traffic control module 132 is to aid in performing these functions by providing information to other agent components regarding accessed resources, including resource performance and frequency of access.
  • popapp module 138 stores and is responsible for launching a variety of small application modules such as application 144 , known as popapps, to perform various operations and enhance the functioning of the described system.
  • Popapps detect and diagnose network conditions such as downed resources, provide specific messages to users and IT personnel regarding errors and network conditions, and interface with other information management, reporting or operational support systems, such as policy managers, service level managers, and network and system management platforms.
  • Popapps may be customized to add features to existing products, to tailor products for specific customer needs, and to integrate the software, systems and methods with technology supplied by other vendors.
  • Administrator module 134 interacts with various other agent modules, maintains and provides network statistics, and provides an interface for centrally configuring agents and other components of the system.
  • for agent configuration, administrator module 134 interfaces with configuration utility 106 (shown in FIGS. 5 and 13-16) in order to configure various agent parameters.
  • Administrator module 134 also serves as a repository for local reporting and statistics information to be communicated to the control points. Based on information obtained by other agent modules, administrator module 134 maintains local information regarding accessed servers, DNS servers, gateways, routers, switches, applications and other resources. This information is communicated on request to the control point, and may be used for network planning or to dynamically alter the behavior of agents.
  • administrator module 134 stores system policies and/or components of policies, and provides policy data to various agent components as needed to implement and enforce the policies. Administrator module 134 also includes support for interfacing the described software and systems with standardized network management protocols and platforms.
  • DNS module 136 provides the agent with configurable address resolving services.
  • DNS module 136 may include a local cache of DNS information, and may be configured to first resolve address requests using this local cache. If the request cannot be resolved locally, the request is submitted to a control point, which resolves the address with its own cache, provided the address is in the control point cache and the user has permission to access the address. If the request cannot be resolved with the control point cache, the connected control point submits the request to a DNS server for resolution. If the address is still not resolved at this point, the control point sends a message to the agent, and the agent then submits the request directly to its own DNS server for resolution.
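  • The four-stage resolving cascade described above may be sketched as follows. All cache and resolver names in this Python sketch are hypothetical stand-ins for the agent cache, the control point cache and the respective DNS servers.

```python
def resolve(hostname, agent_cache, cp_cache, cp_dns_lookup, agent_dns_lookup,
            user_has_permission):
    """Resolve an address via the agent -> control point -> DNS server cascade."""
    # 1. Try the agent's local DNS cache first.
    if hostname in agent_cache:
        return agent_cache[hostname]
    # 2. Submit to the control point, which answers from its own cache if the
    #    address is present and the user is permitted to access it.
    if hostname in cp_cache and user_has_permission(hostname):
        return cp_cache[hostname]
    # 3. The control point submits the request to a DNS server on the agent's behalf.
    address = cp_dns_lookup(hostname)
    if address is not None:
        return address
    # 4. Still unresolved: the agent submits the request directly to its own DNS server.
    return agent_dns_lookup(hostname)

print(resolve("server.example.com",
              agent_cache={},
              cp_cache={"server.example.com": "10.0.0.5"},
              cp_dns_lookup=lambda h: None,
              agent_dns_lookup=lambda h: None,
              user_has_permission=lambda h: True))
```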
  • DNS module 136 also monitors address requests and shares the content of the requests with administrator module 134 .
  • the requests are locally compiled and ultimately provided to the control points, which maintain dynamically updated lists of the most popular DNS servers.
  • DNS module 136 is adapted to interact with control point 72 in order to redirect address resolving requests and other network requests to alternate targets, if necessary.
  • Message broker module 140 creates and maintains connections to the one or more control points with which the agent interacts.
  • the various agent components use the message broker module to communicate with each other and with a connected control point.
  • Message broker module 140 includes message creator and message dispatcher processes for creating and sending messages to the control points.
  • the message creator process includes member functions, which create control point messages by receiving message contents as parameters and encoding the contents in a standard network format.
  • the message creator process also includes a member function to decode the messages received from the control point and return the contents in a format usable by the various components of the agent.
  • control point messages are added to a transmission queue and extracted by the message dispatcher function for transmission to the control point. Messages extracted from the queue are sent to the agent's active control point.
  • the dispatcher may be configured to ensure delivery of the message using a sequence numbering scheme or other error detection and recovery methods.
  • Messages and communications from an agent to a control point are made using a unicast addressing mechanism. Communications from the control point to an agent or agents may be made using unicast or a multicast addressing scheme. When configured for multicast operation, the control point and agents may be set to revert to unicast to allow for communication with devices that do not support IP multicast.
  • message broker module 140 monitors the status of the connection and switches over to a backup control point upon detecting a connection failure. If neither the active nor the backup connection is available, network traffic is passed on transparently.
  • System services module 142 provides various support functions to the other agent components. First, system services module maintains dynamic lists of user profiles, server profiles, DNS server profiles, control point connections and other data. The system services module also provides a tracing capability for debugging, and timer services for use by other agent components. System services module may also be configured with a library of APIs to interface the agent with the operating systems and other components of the device that the agent is associated with.
  • control point 72 may include a traffic module 160 , a server profile module 162 , a DNS server profile module 164 , a gateway profile module 166 , an administrator module 168 , a message broker module 170 , a popapp interface 172 and a popapp 174 .
  • Control point traffic module 160 implements policy-based, QoS techniques by coordinating the service-level enforcement activities of the agents. As part of this function, traffic module 160 dynamically allocates bandwidth among the agents in its domain by regularly obtaining allocation data from the agents, calculating bandwidth allocations for each agent based on this data, and communicating the calculated allocations to the agents for enforcement. For example, control point 72 can be configured to recalculate bandwidth allocations every five seconds. During each cycle, between re-allocations, the agents restrict bandwidth usage by their associated devices to the allocated amounts and monitor the amount of bandwidth actually used. At the end of the cycle, each agent reports the bandwidth usage and other allocation data to the control point to be used in re-allocating bandwidth.
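  • One plausible shape for this report/recalculate/distribute cycle is sketched below. The DemoAgent class and the placeholder equal-shares policy are illustrative only; a real control point would weight allocations by the reported priority data.

```python
class DemoAgent:
    """Stand-in for an agent module; real agents enforce and meter bandwidth."""
    def __init__(self, name, usage):
        self.name, self._usage, self.allocation = name, usage, 0
    def report_usage(self):
        return self._usage   # UB observed over the last cycle
    def set_allocation(self, ab):
        self.allocation = ab  # AB to enforce during the next cycle

def equal_shares(reports, total):
    # Placeholder policy: real control points weight shares by priority data.
    share = total // len(reports)
    return {name: share for name in reports}

def allocation_cycle(agents, total_bandwidth, recalculate=equal_shares):
    reports = {a.name: a.report_usage() for a in agents}  # gather UB from agents
    allocations = recalculate(reports, total_bandwidth)   # compute new AB values
    for a in agents:
        a.set_allocation(allocations[a.name])             # distribute for enforcement

agents = [DemoAgent("A", 40_000), DemoAgent("B", 90_000)]
allocation_cycle(agents, total_bandwidth=256_000)  # would repeat every ~5 s
```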
  • traffic module 160 divides the total bandwidth available for the upcoming cycle among the agents within the domain according to the priority data reported by the agents.
  • the result is a configured bandwidth CB particular to each individual agent, corresponding to that agent's fair share of the available bandwidth.
  • the priorities and configured bandwidths are a function of system policies, and may be based on a wide variety of parameters, including application identity, user identity, device identity, source address, destination address, source port, destination port, protocol, URL, time of day, network load, network population, and virtually any other parameter concerning network resources that can be communicated to, or obtained by the control point.
  • the detail and specificity of client-side parameters that may be supplied to the control point is greatly enhanced by the position of agent redirector module 130 relative to the layered communications protocol stack. The high position within the stack allows bandwidth allocation and, more generally, policy implementation, to be performed based on very specific triggering criteria. This may greatly enhance the flexibility and power of the described software, systems and methods.
  • the priority data reported by the agents may include priority data associated with multiple application programs running on a single networked device.
  • the associated agent may be configured to report an “effective application priority,” which is a function of the individual application priorities. For example, if device A were running two application programs and device B were running a single application program, device A's effective application priority would be twice that of device B, assuming that the individual priorities of all three applications were the same.
  • the reported priority data for a device running multiple application programs may be further refined by weighting the reported priority based on the relative degree of activity for each application program.
  • the relative degree of activity for an application may be measured in terms of bandwidth usage, transmitted packets, or any other activity-indicating criteria.
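  • A minimal sketch of one plausible way to compute such an effective application priority is shown below; the activity-based weighting is an assumption consistent with the description, not a formula quoted from it.

```python
def effective_application_priority(apps, activity_scale):
    """Sum per-application priorities, each weighted by its degree of activity.

    `apps` maps application -> (priority, activity); `activity_scale` is a
    hypothetical normalization constant (e.g., the bandwidth usage of a fully
    active application). With all applications fully active this reduces to a
    plain sum, so a device running two equal-priority applications reports
    twice the effective priority of a device running one, as in the example.
    """
    return sum(priority * min(activity / activity_scale, 1.0)
               for priority, activity in apps.values())

# Device A: two applications, both priority 3 and fully active -> 6.0
print(effective_application_priority(
    {"app1": (3, 50_000), "app2": (3, 50_000)}, activity_scale=50_000))
# Device B: one such application -> 3.0
print(effective_application_priority({"app1": (3, 50_000)}, activity_scale=50_000))
```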
  • each agent may be configured to report the amount of bandwidth UB used by its associated device during the prior period, as discussed above. Data is also available for each device's allocated bandwidth AB for the previous cycle. Traffic module 160 may compare configured bandwidth CB, allocated bandwidth AB or utilized bandwidth UB for each device, or any combination of those three parameters to determine the allocations for the upcoming cycle.
  • UB is the amount of bandwidth the networked device used in the prior cycle;
  • AB is the maximum amount the device was allowed to use in that cycle; and
  • CB specifies the device's "fair share" of available bandwidth for the upcoming cycle.
  • Both utilized bandwidth UB and allocated bandwidth AB may be greater than, equal to, or less than configured bandwidth CB. This may happen, for example, when there are a number of networked devices using less than their configured share CB. To efficiently utilize the available bandwidth, these unused amounts are allocated to devices requesting additional bandwidth, with the result being that some devices are allocated amount AB that exceeds their configured fair share CB. Though AB and UB may exceed CB, utilized bandwidth UB cannot normally exceed allocated bandwidth AB, because the agent traffic control module enforces the allocation.
  • traffic module 160 may be configured to first reduce allocations of clients or other devices where the associated agent reports bandwidth usage UB below the allocated amount AB. Presumably, these devices won't be affected if their allocation is reduced. Generally, traffic module 160 should not reduce any other allocation until all the unused allocations, or portions of allocations, have been reduced. The traffic module may be configured to then reduce allocations that are particularly high, or make adjustments according to some other criteria.
  • Traffic module 160 may also be configured so that when bandwidth becomes available, the newly-available bandwidth is provisioned according to generalized preferences. For example, the traffic module can be configured to provide surplus bandwidth first to agents that have low allocations and that are requesting additional bandwidth. After these requests are satisfied, surplus bandwidth may be apportioned according to priorities or other criteria.
  • FIGS. 11A, 11B, 11C and 11D depict examples of various methods that may be implemented by traffic module 160 to dynamically allocate bandwidth.
  • FIG. 11A depicts a process by which traffic module 160 determines whether any adjustments to bandwidth allocations AB are necessary. Allocated bandwidths AB for certain agents are adjusted in at least the following circumstances. First, as seen in steps S4 and S10, certain allocated bandwidths AB are modified if the sum of all the allocated bandwidths ABtotal exceeds the sum of the configured bandwidths CBtotal. This situation may occur where, for some reason, a certain portion of the total bandwidth available to the agents in a previous cycle becomes unavailable, perhaps because it has been reserved for another purpose. In such a circumstance, it is important to reduce certain allocations AB to prevent the total allocations from exceeding the total bandwidth available during the upcoming cycle.
  • the allocation for those agents is modified, as seen in steps S6 and S10.
  • the allocations for any such agent are typically increased.
  • an agent has an allocation AB that is less than their configured bandwidth CB, i.e. their existing allocation is less than their fair share of the bandwidth that will be available in the upcoming cycle.
  • the reported usage UB for the prior cycle is at or near the enforced allocation AB, and it can thus be assumed that more bandwidth would be consumed by the associated device if its allocation AB were increased.
  • at step S8, the allocation AB for such an agent is reduced for the upcoming period to free up the unused bandwidth.
  • Steps S4, S6 and S8 may be performed in any suitable order. Collectively, these three steps ensure that certain bandwidth allocations are modified, i.e. increased or reduced, if one or more of the following three conditions are true: (1) ABtotal > CBtotal, (2) AB < CB and UB ≈ AB for any agent, or (3) UB < AB for any agent. If none of these are true, the allocations AB from the prior period are not adjusted. Traffic module 160 modifies allocations AB as necessary at step S10. After all necessary modifications are made, the control point communicates the new allocations to the agents for enforcement during the upcoming cycle.
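  • The three conditions summarized above may be checked as in the following sketch; the tolerance used to decide that usage is "at or near" an allocation is an assumed parameter, not one specified in the description.

```python
def needs_adjustment(agents, tolerance=0.05):
    """Check the three adjustment conditions of FIG. 11A.

    `agents` is a list of dicts with keys 'AB', 'CB' and 'UB'.
    """
    ab_total = sum(a["AB"] for a in agents)
    cb_total = sum(a["CB"] for a in agents)
    if ab_total > cb_total:                           # condition (1)
        return True
    for a in agents:
        near_limit = a["UB"] >= a["AB"] * (1 - tolerance)
        if a["AB"] < a["CB"] and near_limit:          # condition (2): wants more
            return True
        if not near_limit:                            # condition (3): unused AB
            return True
    return False

print(needs_adjustment([{"AB": 64, "CB": 64, "UB": 63},
                        {"AB": 32, "CB": 64, "UB": 32}]))  # True, via condition (2)
```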
  • FIG. 11B depicts re-allocation of bandwidth to ensure that total allocations AB do not exceed the total bandwidth available for the upcoming cycle.
  • traffic module 160 has determined that the sum of allocations AB from the prior period exceeds the available bandwidth for the upcoming period, i.e. ABtotal > CBtotal. In this situation, certain allocations AB must be reduced.
  • traffic module 160 may be configured to first reduce allocations of agents that report bandwidth usage levels below their allocated amounts, i.e. UB < AB for a particular agent. These agents are not using a portion of their allocations, and thus are unaffected or only minimally affected when the unused portion of the allocation is removed.
  • the traffic module first determines whether there are any such agents.
  • the allocations AB for some or all of these agents are reduced. These reductions may be gradual, or the entire unused portion of the allocation may be removed at once.
  • at step S24, further reductions are taken from agents with existing allocations AB that are greater than configured bandwidth CB, i.e. AB > CB.
  • in other words, bandwidth is removed at step S24 from devices with existing allocations that exceed the calculated "fair share" for the upcoming cycle.
  • as shown at step S26, the reductions taken at steps S22 and S24 may be performed until the total allocations ABtotal are less than or equal to the total available bandwidth CBtotal for the upcoming cycle.
  • FIG. 11C depicts a method for increasing the allocation of certain agents.
  • as determined at step S40, an agent may have an allocation AB below its configured bandwidth CB while consuming essentially all of that allocation; the allocation AB for such an agent should be increased. To provide these agents with additional bandwidth, the allocations for certain other agents typically need to be reduced. Similar to steps S20 and S22 of FIG. 11B, unutilized bandwidth is first identified and removed (steps S42 and S44). Again, the control point may be configured to vary the rate at which unused allocation portions are removed. If reported data does not reflect unutilized bandwidth, traffic module 160 may be configured to then reduce allocations for agents having an allocation AB higher than their respective configured share CB, as seen in step S46.
  • the bandwidth recovered in steps S44 and S46 is then provided to the agents requesting additional bandwidth. Any number of methods may be used to provision the recovered bandwidth. For example, preference may be given to agents reporting the largest discrepancy between their allocation AB and their configured share CB. Alternatively, preferences may be based on application identity, user identity, priority data, other client or system parameters, or any other suitable criteria.
  • FIG. 11D depicts a general method for reallocating unused bandwidth.
  • at step S60, it has been determined that certain allocations AB are not being fully used by the respective agents, i.e. UB < AB for at least one agent.
  • at step S62, the allocations AB for these agents are reduced.
  • the rate of the adjustment may be varied through configuration changes to the control point. For example, it may be desired that only a fraction of unused bandwidth be removed during a single reallocation cycle. Alternatively, the entire unused portion may be removed and reallocated during the reallocation cycle.
  • the recovered amounts are provisioned as necessary.
  • the recovered bandwidth may be used to eliminate a discrepancy between the total allocations ABtotal and the available bandwidth, as in FIG. 11B , or to increase allocations of agents who are requesting additional bandwidth and have relatively low allocations, as in FIG. 11C .
  • allocations may be increased for agents requesting additional bandwidth, i.e. UB ≈ AB, even where the current allocation AB for such an agent is fairly high, e.g. AB > CB.
  • the recovered bandwidth may be reallocated using a variety of methods and according to any suitable criteria.
  • traffic module 160 can be configured to vary the rate at which the above allocation adjustments are made. For example, assume that a particular device is allocated 64 KBps (AB) and reports usage during the prior cycle of 62 KBps (UB). Traffic module 160 cannot determine how much additional bandwidth the device would use. Thus, if the allocation were dramatically increased, say doubled, it is possible that a significant portion of the increase would go unused. However, because the device is using an amount roughly equal to the enforced allocation AB, it can be assumed that the device would use more if the allocation were increased. Thus, it is often preferable to provide small, incremental increases. The amount of these incremental adjustments and the rate at which they are made may be configured with the configuration utility, as will be discussed with reference to FIG. 16 . If the device consumes the additional amounts, successive increases can be provided if additional bandwidth is available.
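  • The incremental ramp-up described in this example might look like the following sketch; the step size and nearness threshold are illustrative stand-ins for values set through the configuration utility.

```python
def ramp_allocation(ab, ub, surplus, step_fraction=0.10, near=0.95):
    """Incrementally grow an allocation that is being fully consumed.

    If usage UB is at or near the enforced allocation AB, grant a small
    increment (here 10% of AB, capped by the available surplus) rather than a
    large jump that might go unused.
    """
    if ub >= ab * near and surplus > 0:
        increment = min(ab * step_fraction, surplus)
        return ab + increment
    return ab

# 64 KBps allocated, 62 KBps used -> a modest increase to 70.4 KBps.
print(ramp_allocation(ab=64_000, ub=62_000, surplus=100_000))
```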
  • bandwidth allocations and calculations may be performed separately for the transmit and receive rates for the networked devices.
  • the methods described with reference to FIGS. 11A-11D may be used to calculate a transmit allocation for a particular device, as well as a separate receive allocation.
  • the calculations may be combined to yield an overall bandwidth allocation.
  • Server profile module 162 , DNS server profile module 164 , gateway profile module 166 and administrator module 168 all interact with the agents to monitor the status of network resources. More specifically, FIG. 12 provides an illustrative example of how the control points and agents may be configured to monitor the status of resources on the network.
  • the monitored resource(s) may be a server, a DNS server, a router, gateway, switch, application, etc.
  • a resource status change has occurred. For example, a server has gone down, traffic through a router has dramatically increased, a particular application is unavailable, or the performance of a particular gateway has degraded beyond a predetermined threshold specified in a system policy.
  • a networked device attempts to access the resource or otherwise engages in activity on the network involving the particular resource. If the accessing or requesting device is an agent, an executable spawned by popapp module 138 (FIG. 9) analyzes the resource, and reports the identity and status of the resource to the control point connected to the agent, as indicated at step S104.
  • Launch of the popapp may be triggered by connection errors, or by triggering criteria specified in system policies.
  • system policies may include performance benchmarks for various network resources, and may further specify that popapp analysis is to be performed when resource performance deviates by a certain amount from the established benchmark.
  • the control points may similarly be configured to launch popapps to analyze network resources.
  • the control point reports the information to all of the agents in its domain, and instructs the agents how to handle further client requests involving the resource, as indicated at steps S108 and S110.
  • the instructions given to the agents will depend on whether an alternate resource is available.
  • the control point stores dynamically updated lists of alternate available resources. If an alternate resource is available, the instructions provided to the agent may include an instruction to transparently redirect the request to an alternate resource, as shown in step S108. For example, if the control point knows of a server that mirrors the data of another server that has gone down, client requests to the down server can simply be redirected to the mirror server.
  • if no alternate resource is available, the agents may instead be instructed to respond to requests involving the resource with an appropriate error message, provided via the agent popapp module 138.
  • popapp functionality may be employed by the control point to report status information to other control points and management platforms supplied by other vendors.
  • messages concerning resource status or network conditions may be provided via email or paging to IT personnel.
  • the control point may be configured to assume responsibility for tracking the status of the resource in order to determine when it again becomes available, as shown in step S112.
  • a slow polling technique is used to minimize unnecessary traffic on the network.
  • the agents either redirect requests to the resources or provide error messages, based on the instructions provided by the control point, as shown in step S116.
  • the control point shares this information with the agents and disables the instructions provided in steps S108 and S110, as shown in step S118.
  • This method of tracking and monitoring resource status has important advantages. First, it reduces unnecessary and frustrating access attempts to unavailable resources. Instead of repeatedly attempting to perform a task, a user's requests are redirected so that the request can be serviced successfully, or the user is provided with information about why the attempt(s) was unsuccessful. With this information in hand, the user is less likely to generate wasteful network traffic with repeated access attempts in a short period of time. In addition, network traffic is also reduced by having only one entity, usually a control point, assume responsibility for monitoring a resource that is unavailable.
  • server profile module 162 maintains a dynamically updated list of the servers accessed by agents within its domain.
  • the server statistics may be retrieved using the configuration utility, or with a variety of other existing management platforms.
  • the server statistics may be used for network planning, or may be implemented into various system policies for dynamic enforcement by the agents and control points. For example, the control points and agents can be configured to divert traffic from heavily used servers or other resources.
  • DNS module 164 also performs certain particularized functions in addition to aiding the resource monitoring and tracking described with reference to FIG. 12 . Specifically, the DNS module maintains a local DNS cache for efficient local address resolution. As discussed with reference to agent DNS module 136 , the agents and control points interact to resolve address requests, and may be configured to resolve addresses by first referencing local DNS data maintained by the agents and/or control points. Similar to server profile module 162 , DNS module 164 also maintains statistics for use in network planning and dynamic system policies.
  • administrator module 168 maintains control point configuration parameters and distributes this information to the agents within the domain. Similar to the server, DNS and gateway modules, administrator module 168 also aids in collecting and maintaining statistical data concerning network resources. In addition, administrator module 168 retrieves policy data from centralized policy repositories, and stores the policy data locally for use by the control points and agents in enforcing system policies.
  • Control point 72 also includes a synchronization interface (not shown) for synchronizing information among multiple control points within the same domain.
  • message broker module 170 performs various functions to enable the control point to communicate with the agents. Similar to agent message broker module 140 , message broker module 170 includes message creator and message dispatcher processes.
  • the message creator process includes member functions that receive message contents as parameters and return encoded messages for transmission to agents and other network entities. Functions for decoding received messages are also included with the message creator process.
  • the dispatcher process transmits messages and ensures reliable delivery through a retry mechanism and error detection and recovery methods.
  • configuration utility 106 is a platform-independent application that provides a graphical user interface for centrally managing configuration information for the control points and agents.
  • the configuration utility interfaces with administrator module 134 of agent 70 and with administrator module 168 of control point 72.
  • configuration utility 106 may interface with administrator module 168 of control point 72 , and the control point in turn may interface with administrator module 134 of agent 70 .
  • FIG. 13 depicts a main configuration screen 188 of configuration utility 106 .
  • main configuration screen 188 can be used to view various managed objects, including users, applications, control points, agents and other network entities and resources.
  • screen frame 190 on the left side of the main configuration screen 188 may be used to present an expandable representation of the control points that are configured for the network.
  • When a particular control point is selected in the main configuration screen 188, various settings for the control point may be configured. For example, the name of the control point may be edited, agents and other entities may be added to the control point's domain, and the control point may be designated as a secondary connection for particular agents or groups of agents.
  • the system administrator may specify the total bandwidth available to agents within the control point's domain for transmitting and receiving, as shown in FIG. 14 . This bandwidth specification will affect the configured bandwidths CB and allocated bandwidths AB discussed with reference to control point traffic module 160 and the method depicted in FIG. 11 .
  • Configuration utility 106 also provides for configuration of various settings relating to users, applications and resources associated with a particular control point. For example, users may be grouped together for collective treatment, lists of prohibited URLs may be specified for particular users or groups of users, and priorities for applications may be specified, as shown in FIG. 15 . Priorities may also be assigned to users or groups of users. As discussed above, this priority data plays a role in determining bandwidth allocations for the agents and their associated devices.
  • optimum and minimum performance levels may be established for applications or other tasks using network resources.
  • the configuration utility may be used to specify a minimum threshold performance level for a networked device running the IP telephony application. This performance level may be specified in terms of QoS performance parameters such as bandwidth, throughput, jitter, delay and loss.
  • the agent module associated with the networked device would then monitor the network traffic associated with the IP telephony application to ensure that performance was above the minimum threshold. If the minimum level was not met, the control points and agents could interact to reallocate resources and provide the specified minimum service level.
  • an optimum service level may be specified for various network applications and tasks.
  • configuration utility 106 may be configured to manage system policies by providing functionality for authoring, maintaining and storing system policies, and for managing retrieval of system policies from other locations on a distributed network, such as a dedicated policy server.
  • configuration utility 106 may be used to configure the interval at which resource reallocation is performed.
  • the default interval for recalculating bandwidth allocations is 5000 milliseconds, or 5 seconds.
  • the rate at which resource allocation occurs may be specified in order to prevent overcompensation, unnecessary adjustments to allocations, and inefficient reconfigurations.
  • the percentage of over-utilized bandwidth that is removed from a client device and reallocated elsewhere may be specified with the configuration utility, as seen in FIG. 16 .
  • the rate at which agents provide feedback to the control points regarding network conditions or activities of their associated devices may be configured.
  • the systems and methods are implemented architecturally in two tiers.
  • the first tier may include one or more control modules, such as control points 72 . Because the control points control and coordinate operation of agent modules 70 , the control points may be referred to as “upstream” or “overlying” components, relative to the agent modules that they control. By contrast, the agents, which form the second tier of the system, may be referred to as “downstream” or “underlying” components, relative to the control points they are controlled by.
  • An upstream component may control one or more downstream components.
  • An upstream component can also be configured to be a downstream component of some other controlling upstream entity.
  • a downstream component may be configured to be an upstream component to control its downstream delegates.
  • FIG. 17 depicts an exemplary multi-tier implementation.
  • the Tier 1 component is upstream relative to three components in Tier 2. Because the Tier 2 components underlie and are controlled by the Tier 1 component, the Tier 2 components are downstream components relative to the Tier 1 component. As indicated, the components within Tiers 2, 3 and 4 may be configured to function as both downstream and upstream components.
  • the Tier 4 component may include a control module 72 , as shown, that is configured to control agent modules 70 that may be running on the underlying Tier 5 components.
  • the arrangement depicted in FIG. 17 typically is implemented in a one-to-many configuration moving downstream.
  • an upstream component may control multiple downstream components in the tier immediately below it, and those downstream components may control multiple components in further downstream tiers.
  • a given downstream component typically reports to and is controlled by only a single upstream component. It should be appreciated that a wide variety of configurations are possible, with any desired number of tiers and components or groupings of components within the tiers.
  • FIG. 18 depicts an exemplary networking environment employing such mechanisms.
  • various branch offices are connected to a centralized data center 200 via network 202 .
  • the system may be configured to enable an administrator to define enterprise wide policies on a central server such as an Enterprise Policy Server (EPS) 204 .
  • the control point modules described herein may be implemented in connection with a Controlled Location Policy Server (CLPS) 206 , as shown in the exemplary branch office 208 at the bottom of FIG. 18 .
  • a given CLPS 206 retrieves policies pertinent to its branch office from its controlling EPS 204 .
  • the CLPS which may include a control module 72 , then distributes pertinent policies to the relevant agent modules 70 within its domain.
  • the retrieved policies are then distributed to the various agents that are controlled by that control point module.
  • This policy definition and distribution scheme may easily be adapted and scaled to manage widely varying enterprise configurations.
  • the distributed policies may, among other things, be used to facilitate the bandwidth management techniques described herein.
  • FIG. 19 depicts an exemplary system 300 for managing bandwidth on network link 302 .
  • system 300 may include various components referred to as bandwidth managers, which may be implemented within the control modules (e.g., control points 72 ) and agent modules (e.g., agent modules 70 ) described herein.
  • the depicted system includes a controlled location bandwidth manager (CL-BWMGR) 304 , which may be implemented within previously described control point 72 , and/or within a Controlled Location Policy Server 206 , such as shown in FIG. 18 .
  • CL-BWMGR 304 typically is implemented as a software program running on a computing device, such as a server (not shown), connected to network link 302 .
  • the CL-BWMGR typically communicates with agent software (e.g., agent modules 70 ) running on computing devices connected to network link 302 , so as to manage bandwidth usage by those devices.
  • plural computing devices 22, also referred to as agent computers, may be interconnected via network link 302.
  • Each agent computer may be running an agent bandwidth manager (AG-BWMGR) 306 , one or more application bandwidth managers (APP-BWMGR) 308 , and one or more socket bandwidth managers (SOCK-BWMGR) 310 .
  • for a given computing device 22 connected to network link 302, there is typically one AG-BWMGR 306 and a variable number of APP-BWMGRs 308 and SOCK-BWMGRs 310, depending on the applications and sockets being utilized.
  • the agent module typically is adapted to launch an APP-BWMGR 308 for each application 320 running on computer 22, and a SOCK-BWMGR 310 for each socket 322 opened by each application 320.
  • a given computing device may be running multiple applications 320 , and a given application 320 may have multiple open sockets 322 .
  • Computing devices 22 transmit and receive data on network link 302 via transmit and receive control sections 324 and 326 . Sections 324 and 326 may respectively include transmit and receive queues 328 and 330 , as will be explained below.
  • CL-BWMGR 304 manages overall bandwidth on network link 302
  • a given AG-BWMGR 306 manages bandwidth usage by its associated agent computer 22
  • a given APP-BWMGR 308 manages bandwidth usage by its associated application 320
  • a given SOCK-BWMGR 310 manages bandwidth usage by its associated socket 322 .
  • the bandwidth managers may be arranged in a hierarchical control configuration, in which an upstream component interacts with and/or controls one or more underlying downstream components.
  • the CL-BWMGR manages bandwidth usage on network link 302 by interacting with underlying AG-BWMGRs, so as to manage bandwidth usage by the particular downstream computing devices 22, applications 320 and sockets 322 that underlie the CL-BWMGR. It should be appreciated that the CL-BWMGR may manage bandwidth usage on any type of interconnection between computers. For example, in the arrangements described earlier, a given remote network 14 could include a server computer running a CL-BWMGR 304 interconnected via a local network segment with plural client computers, where each client computer is loaded with an agent module 70.
  • in such a case, the control computer could control the clients' bandwidth usage not only on the local network segment (e.g., an Ethernet-based LAN), but also on the connection through router 18 to public network 16.
  • a given AG-BWMGR 306 manages bandwidth usage by its associated computing device 22 by interacting with underlying APP-BWMGRs, so as to manage bandwidth usage by the particular downstream applications 320 and sockets 322 that underlie the AG-BWMGR.
  • a given APP-BWMGR 308 manages bandwidth usage by its associated application 320 by interacting with underlying SOCK-BWMGRs, so as to manage bandwidth usage by the particular downstream sockets 322 that underlie the APP-BWMGR.
  • downstream components typically facilitate control by reporting certain data, such as bandwidth consumption, upstream to overlying upstream components.
  • bandwidth may be managed by taking a bandwidth allocation existing at a particular hierarchical level, and sub-allocating the allocation for apportionment amongst downstream components. Where there are multiple downstream components, sub-allocation typically involves dividing the allocation into portions for each of the downstream components.
  • CL-BWMGR 304 may have an allocation corresponding to the available bandwidth on network link 302 .
  • the available bandwidth may be configured according to a system policy specifying parameters for network link 302 , or may be determined or configured through other methods.
  • the available bandwidth on link 302 may be sub-allocated by CL-BWMGR 304 into one or more agent allocations, depending on the number of agent computers 22 underlying the CL-BWMGR.
  • Each agent allocation represents the amount of bandwidth allotted to the respective agent computer. For example, if one hundred agent computers 22 are controlled by the CL-BWMGR, then the CL-BWMGR would typically sub-allocate available link bandwidth into one hundred individualized agent allocations for the respective underlying agent computers. Each individualized agent allocation would then be provided to the respective AG-BWMGR at each agent computer 22 for enforcement and further sub-allocation.
  • the associated AG-BWMGR 306 may sub-allocate its agent allocation into individualized application allocations for each application 320 running on the agent computer.
  • a given application allocation represents the amount of bandwidth allotted to the corresponding application.
  • These application allocations typically are provided to the respective APP-BWMGRs for enforcement and further sub-allocation.
  • the associated APP-BWMGR 308 may sub-allocate its application allocation into individual socket allocations for the sockets 322 that are open for that application. The individual socket allocations typically are provided to the respective SOCK-BWMGRs 310 .
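  • The chain of sub-allocations described above (link to agents, agent to applications, application to sockets) can be illustrated with a single proportional-division helper. The weights below stand in for the effective priorities discussed later and are purely illustrative.

```python
def sub_allocate(allocation, weights):
    """Divide an allocation among downstream components in proportion to weights."""
    total = sum(weights.values()) or 1
    return {name: allocation * w / total for name, w in weights.items()}

# Link level: the CL-BWMGR splits link bandwidth among agent computers.
agent_allocs = sub_allocate(1_000_000, {"agent1": 2, "agent2": 1})
# Agent level: each AG-BWMGR splits its share among running applications.
app_allocs = sub_allocate(agent_allocs["agent1"], {"voip": 3, "mail": 1})
# Application level: each APP-BWMGR splits its share among open sockets.
sock_allocs = sub_allocate(app_allocs["voip"], {"sock1": 1, "sock2": 1})
print(sock_allocs)
```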
  • bandwidth managers are configured to communicate upstream in order to facilitate the various sub-allocations discussed above.
  • data may be provided upstream from each group of SOCK-BWMGRs 310 to their overlying APP-BWMGR 308 , from each group of APP-BWMGRs 308 to their overlying AG-BWMGR 306 , and from each group of AG-BWMGRs 306 to their overlying CL-BWMGR 304 .
  • the data provided upstream may be used to calculate sub-allocations, which are then sent back downstream for enforcement, and/or further sub-allocation.
  • it will be desirable that the data which is sent upstream pertain to socket activity, so as to efficiently adjust future allocations to take bandwidth from where it is less needed and provide it to where it is more needed.
  • the allocation functions of the bandwidth managers typically are performed so as to efficiently allocate network bandwidth among the various bandwidth-consuming components.
  • these consuming components may include agent computers 22, applications 320 running on those computers, and sockets 322 open for those applications.
  • One way in which the systems described herein may efficiently allocate bandwidth is by shifting bandwidth allocations toward high priority uses and away from uses of relatively lower priority.
  • Another allocation criterion that may be employed involves providing future allocations based on consumption of past allocations. For example, allocations to a particular bandwidth manager (e.g., an application allocation provided from an AG-BWMGR 306 to an APP-BWMGR 308) may be reduced if past allocations were only partially consumed, or if there is a trend of diminished usage.
  • CL-BWMGR 304 may hand out agent allocations for computers 22 at regular intervals, or only when a changed condition within the network is detected.
  • Application allocations and socket allocations may be disseminated downstream periodically, but at different rates. In any case, it will often be advantageous to repeatedly and dynamically update the allocations and sub-allocations, as will be discussed in more detail below.
  • FIG. 20 shows an agent computing device 22 coupled to other computers (not shown) via network link 302 .
  • Computing device 22 is running an application program 320 which communicates over network link 302 via a layered protocol stack (depicted in part at 92 ), as described with reference to earlier examples.
  • Agent module 70 may be interposed between application program 320 and lower layers of stack 92 , typically at a point where flow control may be achieved. Specifically, as described with reference to FIGS.
  • agent module 70 typically is positioned in a relatively high location within the protocol stack and interfaced with a socket object, to allow network traffic to be intercepted, or hooked into, at a point between application program 320 and transport layer 124 of stack 92 .
  • the agent module 70 depicted in FIG. 20 may include some or all of the components described with reference to FIG. 9 .
  • a redirector 130 (FIG. 9) may be employed to allow the agent module to intercept, or hook into, the socket data flow between application 320 and network link 302.
  • traffic between application program 320 and network link 302 flows through transmit control section 324 and receive control section 326 , which may include transmit queues 328 and receive queues 330 , respectively.
  • Data flows through control sections 324 and 326 may be monitored and controlled via SOCK-BWMGR 310 of agent module 70 .
  • SOCK-BWMGR 310 may interact with upstream bandwidth managers (not shown in FIG. 20 ) implemented within agent module 70 .
  • socket allocations may be provided to SOCK-BWMGR 310 , which controls transmit control section 324 and receive control section 326 to ensure that those allocations are enforced.
  • SOCK-BWMGR 310 may provide feedback (e.g., concerning socket activity) to various upstream bandwidth managers. This feedback may be used to facilitate future allocations.
  • Socket allocations may be provided to SOCK-BWMGR 310 periodically at regular intervals.
  • the allocation typically is in the form of a number of bytes that may be transmitted, and/or a number of bytes that may be received, during a set interval (there may be separate transmit and receive allocations).
  • data passes through control sections 324 and 326 , and SOCK-BWMGR 310 monitors the amount of data sent to, or received from, network link 302 .
  • SOCK-BWMGR 310 may simply passively monitor the traffic.
  • queues 328 and 330 may be provided to hold overflow, until the socket is replenished with a new allocation, at which point the queued data may be transmitted and/or received.
  • during normal operation, queues 328 and 330 typically are not used, such that transmitted or received data passes through sections 324 and/or 326 without any queuing of data; queuing is triggered only when the socket allocation for a given interval is exceeded.
  • the upstream feedback for the socket (e.g., the feedback sent upstream by SOCK-BWMGR 310) includes a specification of how much bandwidth was consumed by the socket (e.g., how many bytes were sent or received). This consumption data may be used by upstream bandwidth managers in performing sub-allocations, as will be explained below.
  • the term "socket" should be broadly understood to mean network conversations or connections carried out by applications above the transport protocol layer (e.g., layer 124 in FIGS. 6, 7, 8 and 20). In most cases, these conversations or connections are characterized at least in part by the ability to implement flow control. Accordingly, it will be appreciated that the agent modules are not limited to any one particular standard that is employed within the upper layers of the protocol stack. Indeed, the agent modules described herein may hook into network conversations carried on by applications that do not use the pervasive Winsock standard.
  • exemplary agent computer 22 is shown as running applications 320a, 320b, and 320c.
  • Application 320a employs the Winsock standard, and thus communicates with network link 302 via Winsock API 128 and lower layers of the protocol stack (e.g., the layers shown at 340 and 342).
  • agent module 70a may be interposed as shown relative to API 128 to hook into the network communications and carry out the monitoring and management functions described at length herein.
  • applications 320b and 320c do not work with the Winsock standard.
  • An example of such an application is an Active Directory application that employs the NetBios standard.
  • for applications such as these, an alternate agent module 70b may be provided. Similar to module 70a and the previously discussed agent module embodiments, agent module 70b is able to hook into network conversations above the transport layer. The module does not require Winsock, though it is still active at a high protocol layer where flow control may be employed and where a rich variety of parameters may be accessed to enhance monitoring and policy-based management.
  • Priority may be assigned based on a variety of parameters, including IP source address, IP destination address, source port, destination port, protocol, application identity, user identity, device identity, URL, available device bandwidth, application profile, server profile, gateway identity, router identity, time-of-day, network congestion, network load, network population, available domain bandwidth and status of network resources. Other parameters are possible. As discussed above, the position of the described agent modules relative to the protocol stack allows access to many different parameters. The ability to predicate bandwidth management on such varying criteria allows for increased control over bandwidth allocations, and thereby increases efficiency.
  • the assigned priorities discussed above may be converted into or expressed in terms of a priority level that is specific to a given network “conversation,” or connection, such as a socket data flow. Indeed, many of the sub-allocation schemes discussed below depend on socket priorities for individual sockets.
  • the socket priorities typically are configured values derived from various parameters such as those listed above. For example, the configured priority for a given socket might be determined by a combination of user identity, the type of application being run, and the source address of the requested data, to name but one example.
  • in the socket sub-allocation calculation, APALLOC is the application allocation being apportioned by the APP-BWMGR among the various sockets, and ESP(s) is the effective priority of socket s; the result may be multiplied by a convenient factor to avoid handling of fractional amounts. It should thus be appreciated that the apportionment is derived via a weighted average of the effective socket priorities.
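  • Since equations (1) and (2) themselves are not reproduced in this text, the following sketch merely implements the relationships the description states: an effective socket priority ESP(s) formed from the configured priority SP(s) scaled by consumption of past allocations over N cycles, and a socket allocation apportioned from APALLOC by a weighted average of the ESPs, with an equal split as the fallback for the undefined (all-idle) case noted further below.

```python
def effective_socket_priority(sp, consumed, allocated):
    """ESP(s): configured priority scaled by consumption of past allocations.

    `consumed` and `allocated` are lists covering the last N allocation cycles.
    This form is inferred from the surrounding description, not quoted from it.
    """
    total_alloc = sum(allocated)
    if total_alloc == 0:
        return 0.0  # idle socket: no positive allocation via the weighted average
    return sp * (sum(consumed) / total_alloc)

def socket_allocations(apalloc, sockets):
    """Apportion APALLOC via a weighted average of effective socket priorities."""
    esps = {s: effective_socket_priority(*v) for s, v in sockets.items()}
    total_esp = sum(esps.values())
    if total_esp == 0:
        # Undefined weighted average; fall back to equal apportionment.
        share = apalloc / len(sockets)
        return {s: share for s in sockets}
    return {s: apalloc * esp / total_esp for s, esp in esps.items()}

# Equal configured priorities, but sock1 consumed all of its past allocations
# while sock2 consumed half -> sock1 receives the larger share.
print(socket_allocations(100_000, {
    "sock1": (4, [10_000, 10_000], [10_000, 10_000]),
    "sock2": (4, [5_000, 5_000], [10_000, 10_000]),
}))
```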
  • socket allocations will tend to be higher for sockets having higher priorities, and for sockets consuming larger percentages of prior allocations.
  • calculations are based on consumption data from multiple past allocation cycles (N cycles), though a single cycle calculation may be used. In some cases, multiple cycle calculations may smooth transitions and stabilize adjustment of allocations among the various sockets.
  • the calculations described above are performed by the overlying APP-BWMGR based on bandwidth consumption data received from underlying SOCK-BWMGRs.
  • an effective application priority may be calculated for each application.
  • the calculations described above are performed by the overlying AG-BWMGR based on bandwidth consumption data received from underlying APP-BWMGRs 308 .
  • application allocations will tend to be higher for applications having more sockets open, higher priority sockets, and sockets consuming greater percentages of prior socket allocations.
  • at the control point level, the link bandwidth (e.g., the overall bandwidth available on network link 302) is apportioned among the agent computers in similar fashion: an effective priority may be calculated for each agent, and then bandwidth may be apportioned according to a weighted average of the effective priorities.
  • a similar calculation may be performed for all of the other agent computers underlying the CL-BWMGR, in order to obtain all of the effective agent priorities.
  • the activity of individual sockets typically is communicated upstream (e.g., from a SOCK-BWMGR to an overlying APP-BWMGR, to an overlying AG-BWMGR, and to an overlying CL-BWMGR), and has an effect on future allocations and sub-allocations throughout the system. Consumption at a particular socket may affect future allocations to that socket and other sockets on the same application. In addition, because the consumption data is communicated upstream and used in other allocations and sub-allocations, the same socket can potentially affect future allocations to all sockets, applications and agent computers within the system. In certain implementations this feedback effect can greatly increase the efficiency of bandwidth management.
  • allocations and sub-allocations may be performed separately for transmitted data and received data.
  • the upstream feedback and downstream sub-allocations may occur at any desired frequency, or at irregular intervals.
  • the sub-allocations typically are performed at regular intervals, though it will often be desirable to re-calculate socket allocations at a greater frequency than the application allocations.
  • agent allocations may be updated every 2 seconds, application allocations every half second, and socket allocations every 100 milliseconds.
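  • These staggered update rates can be pictured as nested timers driven off the fastest (socket-level) interval, as in this sketch; the tick arithmetic is illustrative only.

```python
# Hypothetical tick-based scheduler; one tick = 100 ms (the socket interval).
SOCK_PERIOD_TICKS = 1    # 100 ms
APP_PERIOD_TICKS = 5     # 500 ms
AGENT_PERIOD_TICKS = 20  # 2 s

for tick in range(21):
    if tick % SOCK_PERIOD_TICKS == 0:
        pass  # re-calculate socket allocations (fastest loop)
    if tick % APP_PERIOD_TICKS == 0:
        pass  # re-calculate application allocations
    if tick % AGENT_PERIOD_TICKS == 0:
        print(f"tick {tick}: re-calculate agent allocations")
```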
  • the exemplary equations above apply primarily during steady state operation, when all involved sockets are open, have non-zero allocations, and are at least somewhat active (e.g., consuming some bandwidth).
  • some provision or modification is made for startup conditions, and for conditions where sockets are idle and/or have socket allocations that are negligible or zero.
  • an agent computer may have launched applications that are not communicating with the network.
  • the effective application priorities would all be zero because there is no socket activity, leading to an undefined result in equation (4).
  • a straight average may be used to address this condition, in which case available bandwidth is apportioned among the applications equally. Equal apportioning may also be used in the case of undefined results in equations (2) and (6).
  • the APP-BWMGRs and/or other bandwidth managers may be configured to over-provision bandwidth. This may be done to ensure that truly idle sockets maintain an effective socket priority of zero, and thus do not obtain positive allocations (e.g., through applying the weighted average of equation (2)). Over-provisioning may also be used to provide internal allocations to idle sockets that are becoming active, so as to allow them to achieve a non-zero effective socket priority and thereby obtain a share of the application allocation being sub-allocated by the overlying APP-BWMGR 308 .
  • bandwidth may be apportioned among entities or components at a particular level based on socket status of a given component, or of components underlying that component.
  • Socket status may include the number of sockets open within the domain of a given control module, on a given computing device, or for a given application. Equations (3) and (4) above provide an example of apportioning bandwidth among applications based partly on the number of sockets open at each application. All other things being equal, in these equations an application with more sockets open will have a higher effective priority and will receive a greater share of the allocated bandwidth.
  • socket status may also include assigned priorities of sockets.
  • assigned priority may be derived from a nearly limitless array of parameters, and may be set via network policies.
  • the above equations provide several examples of allocation and sub-allocation being performed based on socket priority. All other things being equal, a socket with a higher socket priority, or an application or computer with higher priority sockets, will receive larger allocations in several of the above examples.
  • Socket status may also include consumption of past allocations, as should be appreciated from the above exemplary equations.
  • the above examples provide numerous examples of shifting bandwidth allocations away from sockets, applications and computers consuming a lower portion of past allocations, relative to counterpart sockets, applications and computers.
  • Implementation of the above exemplary systems and methods may enable bandwidth resources to be allocated to high priority, critical tasks. Bandwidth may be shifted away from low priority uses, and/or consuming entities that are relatively idle in comparison to their counterparts.
  • the different embodiments above may be implemented to provide a more granular, or finer, level of control over bandwidth usage of a particular component within the system.
  • the bandwidth manager scheme described above may be configured to control socket and/or transaction level allocation for multiple conversations carried out by an application. For example, a given application may carry out a number of functions having varying levels of priority. The increased granularity described above allows system operators to assign relative bandwidth priorities to the various tasks carried out by a given application.
  • socket priorities (e.g., socket priorities SP in equations (1) and (3) above) typically are statically configured values. However, the agent module may be alternately configured to dynamically vary the socket priority when certain mission critical network tasks are being performed by the socket.
  • mission critical tasks may be defined through the policy mechanisms discussed above, using any desirable combination of policy parameters.
  • upon detection of such a task, the assigned priority for the socket is modified (e.g., by overriding the assigned value with a higher priority).
  • This modification typically is employed to ensure that a desired level of service or resources is available for the predefined task (e.g., a desired allocation of bandwidth).
  • the predefined task may be a low priority task, such that the socket priority may be dynamically downgraded, to ensure that bandwidth is available for more important tasks. Any of the above embodiments or method implementations may be adapted to provide for such dynamic modification of priorities.
  • the granular control at the transaction level enables the system to detect the commencement of critical transactions. Upon detection of such a transaction, the system is able to communicate through a chain of upstream components to ensure that a sufficient amount of bandwidth is reserved to complete the transaction within an acceptable amount of time. For example, a dynamic increase of socket priority typically will lead to increased allocations for not only the socket, but also for the application associated with the socket, and for the computing device on which the application is running (e.g., through application of the exemplary equations discussed above). Alternatively, bandwidth may be dynamically decreased for low priority transactions, which may also be communicated upstream to affect allocations at the various tiers.
  • the above exemplary systems may be configured so that mission-critical bandwidth priority is assigned to only a specified portion of the data requested from a given web server application.
  • the network policy could specify that, when a browser on a client computer attempts to access a particular web page, socket priorities are overridden to ensure that critical data is provided at a high level of service. Other data (e.g., trivial graphical information) could be assigned lower priority.
  • loading a single web page may involve several “get” commands issued by a client browser application to obtain all of the data for the web page.
  • the agent modules described herein may be configured with a policy that specifies that the agent module is to monitor its associated computer to look for “get” commands seeking predefined high priority portions of data on the web page. Upon detection of such a get command, the socket level priority would be dynamically adjusted for that portion of the web page data.
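  • A sketch of such transaction-level detection is given below; the path list and priority values are hypothetical stand-ins for what a network policy would actually supply.

```python
HIGH_PRIORITY_PATHS = {"/checkout", "/order-status"}  # hypothetical policy list
DEFAULT_PRIORITY = 2
CRITICAL_PRIORITY = 8

def socket_priority_for_request(request_line, current_priority=DEFAULT_PRIORITY):
    """Dynamically upgrade a socket's priority when a critical "get" is seen.

    The override would then propagate upstream, increasing allocations for the
    socket, its application, and its computing device.
    """
    parts = request_line.split()
    if len(parts) >= 2 and parts[0] == "GET":
        path = parts[1].split("?")[0]
        if path in HIGH_PRIORITY_PATHS:
            return CRITICAL_PRIORITY  # override the statically assigned value
    return current_priority

print(socket_priority_for_request("GET /checkout HTTP/1.1"))    # -> 8
print(socket_priority_for_request("GET /banner.gif HTTP/1.1"))  # -> 2
```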
  • Another example illustrating the benefit of increased granularity is the use of a multimedia application to deliver audio, video and other data over a wide-area network. Where network congestion is high, it typically will be desirable to assign a higher priority to the audio stream.
  • The increased granularity of the described system allows this prioritization through application of an appropriate network policy, and ensures that the video stream does not significantly degrade the audio delivery.
  • The systems described herein may be further provided with a layered communication architecture to facilitate communication between components.
  • Typical implementations involve use of a communication library to present a communication abstraction layer to various communicating components.
  • The abstraction layer is architecturally configured to hide the details of the communications mechanism and/or the location of the other communicating components. This allows the underlying transport mechanism to remain transparent to the components that use it.
  • The transport mechanism can be TCP, UDP or any other transport protocol. Because of the abstraction, the transport protocol can be replaced with a new protocol without affecting the components or their operations.
  • The components communicate with each other by “passing objects” at the component interface.
  • The communication layer may convert these objects into binary data, serialize them to XML format, compress them, and/or encrypt the information to be transmitted over any medium. These operations typically are performed transparently to the communicating components. When applicable, the communications layer also increases network efficiency by multiplexing several communication streams onto a single connection.
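  • The following minimal sketch suggests the shape of such a layer; the object type, the toy XML encoding and the lambda transport are assumptions for illustration. Components simply pass a plain object, while conversion, compression and the choice of transport stay hidden inside the layer:

      import zlib
      from dataclasses import dataclass

      @dataclass
      class AgentReport:                 # hypothetical object passed at the interface
          agent_id: str
          used_bandwidth: int

      def to_xml(obj):
          # Toy XML serialization; a real layer would use a schema-driven encoder.
          fields = "".join(f"<{k}>{v}</{k}>" for k, v in vars(obj).items())
          return f"<{type(obj).__name__}>{fields}</{type(obj).__name__}>"

      class CommunicationLayer:
          def send(self, obj, transport):
              # Encoding details are hidden from the communicating components.
              payload = zlib.compress(to_xml(obj).encode("utf-8"))
              transport(payload)         # pluggable transport: TCP, UDP, ...

      layer = CommunicationLayer()
      layer.send(AgentReport("agent-17", 4096),
                 transport=lambda data: print(len(data), "bytes on the wire"))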

Abstract

A system and method for managing network bandwidth consumption. The system may include an agent module loadable on a networked computer and configured to aid in managing bandwidth consumption within a network. The agent module is configured to obtain an allocation of network bandwidth usable by the networked computer, and is further configured to sub-allocate such allocation among multiple bandwidth-consuming components associated with the networked computer. The system may further include multiple such agent modules loadable on plural networked computers, and a control module configured to interact with each of the agent modules to dynamically manage bandwidth usage by the networked computers.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of copending U.S. patent application Ser. No. 10/369,259, filed Feb. 19, 2003, which is a continuation-in-part of U.S. patent application Ser. No. 09/532,101, filed Mar. 21, 2000, which is based upon and claims the benefit under 35 U.S.C. § 119 of U.S. provisional patent application Ser. No. 60/357,731, filed Feb. 18, 2002. All aforementioned applications to which the present application claims priority are hereby incorporated by reference.
  • TECHNICAL FIELD
  • The present invention relates generally to distributed network systems, and more particularly to software, systems and methods for managing resources of a distributed network.
  • BACKGROUND
  • Both public and private networks have shifted toward a predominantly distributed computing model, and have grown steadily in size, power and complexity. This growth has been accompanied by a corresponding increase in demands placed on information technology to increase enterprise-level productivity, operations and customer/user support. To achieve interoperability in increasingly complex network systems, TCP/IP and other standardized communication protocols have been aggressively deployed. Although many of these protocols have been effective at achieving interoperability, their widespread deployment has not been accompanied by a correspondingly aggressive development of management solutions for networks using these protocols.
  • Indeed, conventional computer networks provide little in the way of solutions for managing network resources, and instead typically provide what is known as “best efforts” service to all network traffic. Best efforts is the default behavior of TCP/IP networks, in which network nodes simply drop packets indiscriminately when faced with excessive network congestion. With best efforts service, no mechanism is provided to avoid the congestion that leads to dropped packets, and network traffic is not categorized to ensure reliable delivery of more important data. Also, users are not provided with information about network conditions or underperforming resources. This lack of management frequently results in repeated, unsuccessful network requests, user frustration and diminished productivity.
  • Problems associated with managing network resources are intensified by the dramatic increase in the demand for these resources. New applications for use in distributed networking environments are being developed at a rapid pace. These applications have widely varying performance requirements. Multimedia applications, for example, have a very high sensitivity to jitter, loss and delay. By contrast, other types of applications can tolerate significant lapses in network performance. Many applications, particularly continuous media applications, have very high bandwidth requirements, while others have bandwidth requirements that are comparatively modest. A further problem is that many bandwidth-intensive applications are used for recreation or other low priority tasks.
  • In the absence of effective management tools, the result of this increased and varied competition for network resources is congestion, application unpredictability, user frustration and loss of productivity. When networks are unable to distinguish unimportant tasks or requests from those that are mission critical, network resources are often used in ways that are inconsistent with business objectives. Bandwidth may be wasted or consumed by low priority tasks. Customers may experience unsatisfactory network performance as a result of internal users placing a high load on the network.
  • Various solutions have been employed, with limited success, to address these network management problems. For example, to alleviate congestion, network managers often add more bandwidth to congested links. This solution is expensive and can be temporary—network usage tends to shift and grow such that the provisioned link soon becomes congested again. This often happens where the underlying cause of the congestion is not addressed. Usually, it is desirable to intelligently manage existing resources, as opposed to “over-provisioning,” i.e. simply providing more resources to reduce scarcity.
  • A broad, conceptual class of management solutions may be thought of as attempts to increase “awareness” in a distributed networking environment. The concept is that where the network is more aware of applications or other tasks running on networked devices, and vice versa, then steps can be taken to make more efficient use of network resources. For example, if network management software becomes aware that a particular user is running a low priority application, then the software could block or limit that user's access to network resources. If management software becomes aware that the network population at a given instance includes a high percentage of outside customers, bandwidth preferences and priorities could be modified to ensure that the customers had a positive experience with the network. In the abstract, increasing application and network awareness is a desirable goal; however, application vendors largely ignore these considerations and tend to focus not on network infrastructure, but rather on enhancing application functionality.
  • Quality of service (“QoS”) and policy-based management techniques represent efforts to bridge the gap between networks, applications and users in order to more efficiently manage the use of network resources. QoS is a term referring to techniques which allow network-aware applications to request and receive a predictable level of service in terms of performance specifications such as bandwidth, jitter, delay and loss. Known QoS methods include disallowing certain types of packets, slowing transmission rates, establishing distinct classes of services for certain types of packets, marking packets with a priority value, and various queuing methods. In a distributed environment having scarce resources, QoS techniques necessarily introduce unfairness into the system by giving preferential treatment to certain network traffic.
  • Policy-based network management uses policies, or rules, to define how network resources are to be used. In a broad sense, a policy includes a condition and an action. An example of a policy could be to block access or disallow packets (action) if the IP source address of the data is included on a list of disallowed addresses (condition). One use of policy-based network management techniques is to determine when and how the unfairness introduced by QoS methods should apply.
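  • In code, such a policy reduces to a condition paired with an action. The short sketch below (all names are illustrative) expresses the disallowed-address example just described:

      # A policy as a (condition, action) pair; names invented for the example.
      DISALLOWED = {"203.0.113.7", "203.0.113.9"}

      policy = {
          "condition": lambda packet: packet["src_ip"] in DISALLOWED,
          "action": lambda packet: "drop",
      }

      def apply_policy(policy, packet):
          # Take the policy's action only when its condition is satisfied.
          if policy["condition"](packet):
              return policy["action"](packet)
          return "forward"

      print(apply_policy(policy, {"src_ip": "203.0.113.7"}))   # -> drop
      print(apply_policy(policy, {"src_ip": "198.51.100.2"}))  # -> forward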
  • Policy-based management solutions typically require that network traffic be classified before it is acted upon. The classification process can occur at various levels of data abstraction, and may be described in terms of layered communication protocols that network devices use to communicate across a network link. There are two protocol layering models which dominate the field. The first is the OSI reference model, depicted in FIG. 1. The layers of the OSI model are: application (layer 7), presentation (layer 6), session (layer 5), transport (layer 4), network (layer 3), data link (layer 2) and physical (layer 1). The second major model forms the basis for the TCP/IP protocol suite. Its layers are application, transport, network, data link and hardware, as also depicted in FIG. 1. The TCP/IP layers correspond in function to the OSI layers, but without a presentation or session layer. In both models, data is processed and changes form as it is sequentially passed between the layers.
  • Known policy-based management solutions and QoS methods typically classify data by monitoring data flows at the transport layer and below. For example, a common multi-parameter classifier is the well-known “five-tuple” consisting of (IP source address, IP destination address, IP protocol, TCP/UDP source port and TCP/UDP destination port). These parameters are all obtained at the transport and network layers of the models. The large majority of existing policy-based QoS solutions are implemented by monitoring and classifying network activity at these protocol layers. However, the higher the protocol layer, the more definitive and specific the available data and classifiers. Because conventional policy-based QoS systems do not employ classifiers above the transport layer, they cannot employ policy-based techniques or QoS methods using the richer and more detailed data available at the higher layers. The conventional systems are thus limited in their ability to make the network more application-aware and vice versa.
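  • The sketch below (with invented classification rules) shows what a transport-level classifier has to work with: nothing richer than the five-tuple's addresses, ports and protocol, which is precisely the limitation described above:

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class FiveTuple:
          src_ip: str
          dst_ip: str
          protocol: str      # e.g. "TCP" or "UDP"
          src_port: int
          dst_port: int

      def classify(ft):
          # Illustrative rules; user identity, URL, etc. are invisible here.
          if ft.protocol == "TCP" and ft.dst_port == 80:
              return "web"
          if ft.protocol == "UDP" and 16384 <= ft.dst_port <= 32767:
              return "voice"
          return "best-effort"

      print(classify(FiveTuple("10.0.0.8", "192.0.2.1", "TCP", 51515, 80)))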
  • In addition, the known systems for managing network resources do not effectively address the problem of bandwidth management. Bandwidth is often consumed by low priority tasks at the expense of business critical applications. In systems that do provide for priority based bandwidth allocations, the bandwidth allocations are static and are not adjusted dynamically in response to changing network conditions.
  • SUMMARY
  • Accordingly, the present description provides systems and methods for managing network bandwidth consumption, which may include use of an agent module loadable on a networked computer. The agent module is configured to obtain an allocation of network bandwidth usable by the networked computer, and is further configured to sub-allocate such allocation among multiple bandwidth-consuming components associated with the networked computer. The system may further include multiple such agent modules loaded on plural networked computers, and a control module configured to interact with each of the agent modules to dynamically manage bandwidth usage by the networked computers.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a conceptual depiction of the OSI and TCP/IP layered protocol models.
  • FIG. 2 is a view of a distributed network system in which the software, systems and methods described herein may be deployed.
  • FIG. 3 is a schematic view of a computing device that may be deployed in the distributed network system of FIG. 2.
  • FIG. 4 is a block diagram view depicting exemplary agent modules and control modules that may be used to manage resources in a distributed network such as that depicted in FIG. 2.
  • FIG. 5 is a block diagram view depicting various exemplary components that may be employed in connection with the described software, systems and methods, including two agent modules, a control module and a configuration utility.
  • FIG. 6 is a block diagram depicting an exemplary deployment of an agent module in relation to a layered protocol stack of a computing device.
  • FIG. 7 is a block diagram depicting another exemplary deployment of an agent module in relation to a layered protocol stack of a computing device.
  • FIG. 8 is a block diagram depicting yet another exemplary deployment of an agent module in relation to a layered protocol stack of a computing device.
  • FIG. 9 is a block diagram depicting exemplary component parts of an agent module embodiment.
  • FIG. 10 is a block diagram depicting exemplary component parts of a control module embodiment.
  • FIG. 11A is a flowchart depicting a method for allocating bandwidth among a plurality of computers.
  • FIG. 11B is a flowchart depicting another method for allocating bandwidth among a plurality of computers.
  • FIG. 11C is a flowchart depicting yet another method for allocating bandwidth among a plurality of computers.
  • FIG. 11D is a flowchart depicting yet another method for allocating bandwidth among a plurality of computers.
  • FIG. 12 is a flowchart depicting a method for monitoring the status of network resources.
  • FIG. 13 is a view of a main configuration screen of a configuration utility that may be employed in connection with the software, systems and methods described herein.
  • FIG. 14 is a view of another configuration screen of the configuration utility depicted in FIG. 13.
  • FIG. 15 is a view of yet another configuration screen of the configuration utility depicted in FIG. 13.
  • FIG. 16 is a view of yet another configuration screen of the configuration utility depicted in FIG. 13.
  • FIG. 17 schematically depicts a multi-tier control architecture that may be employed in connection with the management software, systems and methods described herein.
  • FIG. 18 depicts an exemplary network configuration allowing for centralized definition of network policies, and dissemination of those policies to pertinent distributed locations.
  • FIG. 19 depicts an exemplary multi-tier system and method for providing granular control over bandwidth usage of individual bandwidth-consuming components.
  • FIG. 20 depicts an exemplary computing device loaded with an embodiment of an agent software module configured to manage bandwidth consumption of an individual application socket or transaction.
  • FIG. 21 depicts an exemplary computing device loaded with alternate embodiments of an agent module configured to monitor and manage network traffic to and from applications running on the computing device.
  • DETAILED DESCRIPTION
  • The present description provides a system and method for managing network resources in a distributed networking environment, such as distributed network 10 depicted in FIG. 2. The software, systems and methods increase productivity and customer/user satisfaction, minimize frustration associated with using the network, and ultimately ensure that network resources are used in a way consistent with underlying business or other objectives.
  • The systems and methods may employ two main software components, an agent and a control module, also referred to as a control point. The agents and control points may be deployed throughout distributed network 10, and may interact with each other to manage network resources. A plurality of agents may be deployed to intelligently couple clients, servers and other computing devices to the underlying network. The deployed agents monitor, analyze and act upon network events relating to the networked devices with which they are associated. The agents typically are centrally coordinated and/or controlled by one or more control points. The agents and control points may interact to control and monitor network events, track operational and congestion status of network resources, select optimum targets for network requests, dynamically manage bandwidth usage, and share information about network conditions with customers, users and IT personnel.
  • As indicated, distributed network 10 may include a local network 12 and a plurality of remote networks 14 linked by a public network 16 such as the Internet. The local network and remote networks may be connected to the public network with network infrastructure devices such as routers 18.
  • Local network 12 typically includes servers 20 and client devices such as client computers 22 interconnected by network link 24. Additionally, local network 12 may include any number and variety of devices, including file servers, applications servers, mail servers, WWW servers, databases, client computers, remote access devices, storage devices, printers and network infrastructure devices such as routers, bridges, gateways, switches, hubs and repeaters. Remote networks 14 may similarly include any number and variety of networked devices.
  • Indeed, virtually any type of computing device may be connected to the networks depicted in FIG. 2, including general purpose computers, laptop computers, handheld computers, wireless computing devices, mobile telephones, pagers, pervasive computing devices and various other specialty devices. Typically, many of the connected devices are general purpose computers which have at least some of the elements shown in FIG. 3, a block diagram depiction of a computer system 40. Computer system 40 includes a processor 42 that processes digital data. The processor may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing a combination of instruction sets, a microcontroller, or virtually any other processor/controller device. The processor may be a single device or a plurality of devices.
  • Referring still to FIG. 3, it will be noted that processor 42 is coupled to a bus 44 which transmits signals between the processor and other components in the computer system. Those skilled in the art will appreciate that the bus may be a single bus or a plurality of buses. A memory 46 is coupled to bus 44 and comprises a random access memory (RAM) device 47 (referred to as main memory) that stores information or other intermediate data during execution by processor 42. Memory 46 also includes a read only memory (ROM) and/or other static storage device 48 coupled to the bus that stores information and instructions for processor 42. A basic input/output system (BIOS) 49, containing the basic routines that help to transfer information between elements of the computer system, such as during start-up, is stored in ROM 48. A data storage device 50 also is coupled to bus 44 and stores information and instructions. The data storage device may be a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device or any other mass storage device. In the depicted computer system, a network interface 52 also is coupled to bus 44. The network interface operates to connect the computer system to a network (not shown).
  • Computer system 40 may also include a display device controller 54 coupled to bus 44. The display device controller allows coupling of a display device to the computer system and operates to interface the display device to the computer system. The display device controller 54 may be, for example, a monochrome display adapter (MDA) card, a color graphics adapter (CGA) card, or other display device controller. The display device (not shown) may be a television set, a computer monitor, a flat panel display or other display device. The display device receives information and data from processor 42 through display device controller 54 and displays the information and data to the user of computer system 40.
  • An input device 56, including alphanumeric and other keys, typically is coupled to bus 44 for communicating information and command selections to processor 42. Alternatively, input device 56 is not directly coupled to bus 44, but interfaces with the computer system via infra-red coded signals transmitted from the input device to an infra-red receiver in the computer system (not shown). The input device may also be a remote control unit having keys that select characters or command selections on the display device.
  • The various computing devices coupled to the networks of FIG. 2 typically communicate with each other across network links using communications software employing various communications protocols. The communications software for each networked device typically consists of a number of protocol layers, through which data is sequentially transferred as it is exchanged between devices across a network link. FIG. 1 respectively depicts the OSI layered protocol model and a layered model based on the TCP/IP suite of protocols. These two models dominate the field of network communications software. As seen in the figure, the OSI model has seven layers, including an application layer, a presentation layer, a session layer, a transport layer, a network layer, a data link layer and a physical layer. The TCP/IP-based model includes an application layer, a transport layer, a network layer, a data link layer and a physical layer.
  • Each layer in the models plays a different role in network communications. Conceptually, all of the protocol layers lie in a data transmission path that is “between” an application program running on the particular networked device and the network link, with the application layer being closest to the application program. When data is transferred from an application program running on one computer across the network to an application program running on another computer, the data is transferred down through the protocol layers of the first computer, across the network link, and then up through the protocol layers on the second computer.
  • In both of the depicted models, the application layer is responsible for interacting with an operating system of the networked device and for providing a window for application programs running on the device to access the network. The transport layer is responsible for providing reliable, end-to-end data transmission between two end points on a network, such as between a client device and a server computer, or between a web server and a DNS server. Depending on the particular transport protocol, transport functionality may be realized using either connection-oriented or connectionless data transfer. The network layer typically is not concerned with end-to-end delivery, but rather with forwarding and routing data to and from nodes between endpoints. The layers below the transport and network layers perform other functions, with the lowest levels addressing the physical and electrical issues of transmitting raw bits across a network link.
  • The systems and methods described herein are applicable to a wide variety of network environments employing communications protocols adhering to either of the layered models depicted in FIG. 1, or to any other layered model. Furthermore, the systems and methods are applicable to any type of network topology, and to networks using both physical and wireless connections.
  • The present description provides software, systems and methods for managing the resources of an enterprise network, such as that depicted in FIG. 2. This may be accomplished using two interacting software components, an agent and a control point, both of which may be adapted to run on, or be associated with, computing devices such as the computing device described with reference to FIG. 3. As seen in FIG. 4, a plurality of agents 70 and one or more control points 72 may be deployed throughout distributed network 74 by loading the agent and control point software modules on networked computing devices such as clients 22 and server 20. As will be discussed in detail, the agents and control points may be adapted and configured to enforce system policies; to monitor and analyze network events, and take appropriate action based on these events; to provide valuable information to users of the network; and ultimately to ensure that network resources are efficiently used in a manner consistent with underlying business or other goals.
  • The described software, systems and methods may be configured using a third software component, to be later discussed in more detail with reference to FIGS. 5 and 13-16. Typically, this configuration utility is a platform-independent application that provides a graphical user interface for centrally managing configuration information for the control points and agents. In addition, the configuration utility may be adapted to communicate and interface with other management systems, including management platforms supplied by other vendors.
  • As indicated in FIG. 4, each control point 72 is typically associated with multiple agents 70, and the associated agents are referred to as being within a domain 76 of the particular control point. The control points coordinate and control the activity of the distributed agents within their domains. In addition, the control points may monitor the status of network resources, and share this information with management and support systems and with the agents.
  • Control points 72 and agents 70 may be flexibly deployed in a variety of configurations. For example, each agent may be associated with a primary control point and one or more backup control points that will assume primary control if necessary. Such a configuration is illustrated in FIG. 4, where control points 72 within the dashed lines function as primary connections, with the control point associated with server device 20 functioning as a backup connection for all of the depicted agents. In addition, the described exemplary systems may be configured so that one control point coordinates and controls the activity of a single domain, or of multiple domains. Alternatively, one domain may be controlled and coordinated by the cooperative activity of multiple control points. In addition, agents may be configured to have embedded control point functionality, and may therefore operate without an associated control point entity.
  • Typically, the agents monitor network resources and the activity of the device with which they are associated, and communicate this information to the control points. In response to monitored network conditions and data reported by agents, the control points may alter the behavior of particular agents in order to provide the desired network services. The control points and agents may be loaded on a wide variety of devices, including general purpose computers, servers, routers, hubs, palm computers, pagers, cellular telephones, and virtually any other networked device having a processor and memory. Agents and control points may reside on separate devices, or simultaneously on the same device.
  • FIG. 5 illustrates an example of the way in which the various components of the described software, systems and methods may be physically interconnected with a network link 90. The components are all connected to network link 90 by means of layered communications protocol software 92. The components communicate with each other via the communications software and network link. As will be appreciated by those skilled in the art, network link 90 may be a physical or wireless connection, or a series of links including physical and wireless segments. More specifically, the depicted system includes an agent 70 associated with a client computing device 22, including an application program 98. Another agent is associated with server computing device 20. The agents monitor the activity of their associated computing devices and communicate with control point 72. Configuration utility 106 communicates with all of the other components, and with other management systems, to configure the operation of the various components and monitor the status of the network.
  • The system policies that define how network resources are to be used may be centrally defined and tailored to most efficiently achieve underlying goals. Defined policies are accessed by the control points, which in turn communicate various elements and parameters associated with the policies to the agents within their domain. At a very basic level, a policy contains rules about how network resources are to be used, with the rules containing conditions and actions to be taken when the conditions are satisfied. The agents and control points monitor the network and devices connected to the network to determine when various rules apply and whether the conditions accompanying those rules are satisfied. Once the agents and/or control points determine that action is required, they take the necessary action(s) to enforce the system policies.
  • For example, successful businesses often strive to provide excellent customer services. This underlying business goal can be translated into many different policies defining how network resources are to be used. One example of such a policy would be to prevent or limit access to non-business critical applications when performance of business critical applications is degraded beyond a threshold point. Another example would be to use QoS techniques to provide a guaranteed or high level of service to e-commerce applications. Yet another example would be to dynamically increase the network bandwidth allocated to a networked computer whenever it is accessed by a customer. Also, bandwidth for various applications might be restricted during times when there is heavy use of network resources by customers.
  • Control points 72 would access these policies and provide policy data to agents 70. Agents 70 and control points 72 would communicate with each other and monitor the network to determine how many customers were accessing the network, what computers the customer(s) were accessing, and what applications were being accessed by the customers. Once the triggering conditions were detected, the agents and control points would interact to re-allocate bandwidth, provide specified service levels, block or restrict various non-customer activities, etc.
  • Another example of policy-based management would be to define an optimum specification of network resources or service levels for particular types of network tasks. The particular policies would direct the management entities to determine whether the particular task was permitted, and if permitted, the management entities would interact to ensure that the desired level of resources was provided to accomplish the task. If the optimum resources were not available, the applicable policies could further specify that the requested task be blocked, and that the requesting user be provided with an informative message detailing the reason why the request was denied. Alternatively, the policies could specify that the user be provided with various options, such as proceeding with the requested task, but with sub-optimal resources, or waiting to perform the task until a later time.
  • For example, continuous media applications such as IP telephony have certain bandwidth requirements for optimum performance, and are particularly sensitive to network jitter and delay. Policies could be written to specify a desired level of service, including bandwidth requirements and threshold levels for jitter and delay, for client computers attempting to run IP telephony applications. The policies would further direct the agents and control modules to attempt to provide the specified level of service. Security checking could also be included to ensure that the particular user or client computer was permitted to run the application. In the event that the specified service level could not be provided, the requesting user could be provided with a message indicating that the resources for the request were not available. The user could also be offered various options, including proceeding with a sub-optimal level of service, placing a conventional telephone call, waiting to perform the task until a later time, etc.
  • The software, systems and methods of the present description may be used to implement a wide variety of system policies. The policy rules and conditions may be based on any number of parameters, including IP source address, IP destination address, source port, destination port, protocol, application identity, user identity, device identity, URL, available device bandwidth, application profile, server profile, gateway identity, router identity, time-of-day, network congestion, network load, network population, available domain bandwidth and resource status, to name but a few. The actions taken when the policy conditions are satisfied can include blocking network access, adjusting service levels and/or bandwidth allocations for networked devices, blocking requests to particular URLs, diverting network requests away from overloaded or underperforming resources, redirecting network requests to alternate resources and gathering network statistics.
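  • As a sketch of how such multi-parameter rules might be expressed (the parameter names, values and rule structure below are illustrative assumptions), each rule pairs conditions over the listed parameters with one of the listed actions:

      from datetime import time

      RULES = [
          {   # throttle recreational streaming during business hours
              "conditions": {"application": "media-player",
                             "after": time(9, 0), "before": time(17, 0)},
              "action": ("limit_bandwidth_kbps", 64),
          },
          {   # always block traffic to a disallowed destination port
              "conditions": {"dst_port": 6699},
              "action": ("block", None),
          },
      ]

      def matching_action(event):
          # Return the action of the first rule whose conditions all hold.
          for rule in RULES:
              c = rule["conditions"]
              if "application" in c and event.get("application") != c["application"]:
                  continue
              if "dst_port" in c and event.get("dst_port") != c["dst_port"]:
                  continue
              if "after" in c and not (c["after"] <= event["time"] <= c["before"]):
                  continue
              return rule["action"]
          return ("allow", None)

      print(matching_action({"application": "media-player", "time": time(10, 30)}))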
  • Some of the parameters listed above may be thought of as “client parameters,” because they are normally evaluated by an agent monitoring a single networked client device. These include IP source address, IP destination address, source port, destination port, protocol, application identity, user identity, available device bandwidth and URL. Other parameters, such as application profile, server profile, gateway identity, router identity, time-of-day, network congestion, network load, network population, available domain bandwidth and resource status, may be thought of as “system parameters” because they pertain to shared resources, aggregate network conditions, or require evaluation of data from multiple agent modules. Despite this grouping, there is not a precise distinction between client parameters and system parameters; certain parameters, such as time-of-day, may be considered either a client parameter or a system parameter, or both.
  • Policy-based network management, QoS implementation, and the other functions of the agents and control points depend on obtaining real-time information about the network. As will be discussed, certain described embodiments and implementations provide improvements over known policy-based QoS management solutions because of the enhanced ability to obtain detailed information about network conditions and the activity of networked devices. Many of the policy parameters and conditions discussed above are accessible due to the particular way the agent module embodiments may be coupled to the communications software of their associated devices. Also, as the above examples suggest, managing bandwidth and ensuring its availability for core applications is an increasingly important consideration in managing networks. Certain embodiments described herein provide for improved dynamic allocation of bandwidth and control of resource consumption in response to changing network conditions.
  • The ability of the systems described herein to flexibly deploy policy-based QoS management solutions based on detailed information about network conditions has a number of significant benefits. These benefits include reducing frustration associated with using the network, reducing help calls to IT personnel, increasing productivity, lowering business costs associated with managing and maintaining enterprise networks, and increasing customer/user loyalty and satisfaction. Ultimately, the systems and methods ensure that network resources are used in a way that is consistent with underlying goals and objectives.
  • Implementation of policy-based QoS between the application and transport layers has another advantage: it allows support for encryption and other security implementations carried out using Virtual Private Networking (VPN) or the IPSec protocol.
  • Referring now to FIGS. 6-9, illustrative embodiments of the agent module will be more particularly described. Each agent module may monitor the status and activities of its associated client, server, pervasive computing device or other computing device; communicate this information to one or more control points; enforce system policies under the direction of the control points; and provide messages to network users and administrators concerning network conditions. FIGS. 6-8 are conceptual depictions of networked computing devices, and show how the agent software may be associated with the networked devices relative to the layered protocol software used by the devices for network communication.
  • As seen in FIG. 6, agent 70 is interposed between application program 122 and a communications protocol layer for providing end-to-end data transmission, such as transport layer 124 of communications protocol stack 92. Typically, the agent modules described herein may be used with network devices that employ layered communications software adhering to either the OSI or TCP/IP-based protocol models. Thus, agent 70 is depicted as “interposed,” i.e. in a data path, between an application program and a transport protocol layer. However, it will be appreciated by those skilled in the art that the various agent module embodiments may be used with protocol software not adhering to either the OSI or TCP/IP models, but that nonetheless includes a protocol layer providing transport functionality, i.e. providing for end-to-end data transmission.
  • Because of the depicted position within the data path, agent 70 is able to monitor network traffic and obtain information that is not available by hooking into transport layer 124 or the layers below the transport layer. At the higher layers, the available data is richer and more detailed. Hooking into the stack at higher layers allows the network to become more “application-aware” than is possible when monitoring occurs at the transport and lower layers.
  • The agent modules may be interposed at a variety of points between application program 122 and transport layer 124. Specifically, as shown in FIGS. 7 and 8, agent 70 may be associated with a client computer so that it is adjacent an application programming interface (API) adapted to provide a standardized interface for application program 122 to access a local operating system (not shown) and communications stack 92. In FIG. 7, agent 70 is adjacent a winsock API 128 and interposed between application program 122 and the winsock interface. FIG. 8 shows an alternate configuration, in which agent 70 again hooks into a socket object, such as API 128, but downstream of the socket interface. With either configuration, the agent is interposed between the application and transport layer 124 of communications stack 92, and is adapted to directly monitor data received by or sent from the winsock interface.
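  • A rough Python analogue of this interposition (not the winsock-specific deployment described above) wraps a socket object so that a monitoring hook sees the full application-level data before the transport layer processes it:

      import socket

      class MonitoredSocket:
          # The wrapper sits "above" the transport layer: it observes the
          # application's data exactly as the application submitted it.
          def __init__(self, sock, on_send):
              self._sock = sock
              self._on_send = on_send

          def sendall(self, data):
              self._on_send(data)       # monitoring hook: full application data
              return self._sock.sendall(data)

          def __getattr__(self, name):
              return getattr(self._sock, name)   # pass everything else through

      def log_send(data):
          print(f"outbound application data: {len(data)} bytes")

      raw = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      monitored = MonitoredSocket(raw, on_send=log_send)
      # monitored.connect(("192.0.2.10", 80)); monitored.sendall(b"GET / HTTP/1.1\r\n")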
  • As shown in FIG. 8, agent 70 may be configured to hook into lower layers of communications stack 92. This allows the agent to accurately monitor network traffic volumes by providing a correction mechanism to account for data compression or encryption occurring at protocol layers below transport layer 124. For example, if compression or encryption occurs within transport layer 124, monitoring at a point above the transport layer would yield an inaccurate measure of the network traffic associated with the computing device. Hooking into lower layers with agent 70 allows network traffic to be accurately measured in the event that compression, encryption or other data processing that qualitatively or quantitatively affects network traffic occurs at lower protocol layers.
  • An embodiment of the agent module is depicted in FIG. 9. As shown, agent 70 may include a redirector module 130, a traffic control module 132, an administrator module 134, a DNS module 136, a popapp module 138, a message broker module 140, a system services module 142, and a popapp 144. Redirector module 130 intercepts winsock API calls made by applications running on networked devices such as the client computers depicted in FIGS. 2 and 3. Redirector module 130 then hands these calls to one or more of the other agent components for processing. As discussed with reference to FIGS. 6-8, redirector module is positioned to allow the agent to monitor data at a data transmission point between an application program running on the device and the transport layer of the communications stack. Depending on the configuration of the agent and control point, the intercepted winsock calls may be rejected, changed, or passed on by agent 70.
  • Traffic control module 132 implements QoS and system policies and assists in monitoring network conditions. Traffic control module 132 implements QoS methods by controlling the network traffic flow between applications running on the agent device and the network link. The traffic flow is controlled to deliver a specified network service level, which may include specifications of bandwidth, data throughput, jitter, delay and data loss.
  • To provide the specified network service level, traffic control module 132 may maintain a queue or plurality of queues. When data is sent from the client to the network, or from the network to the client, redirector module 130 intercepts the data, and traffic module 132 places the individual units of data in the appropriate queue. The control points may be configured to periodically provide traffic control commands, which may include the QoS parameters and service specifications discussed above. In response, traffic control module 132 controls the passing of data into, through or out of the queues in order to provide the specified service level.
  • More specifically, the outgoing traffic rate may be controlled using a plurality of priority-based transmission queues, such as transmission queues 132 a. When an application or process is invoked by a computing device with which agent 70 is associated, a priority level is assigned to the application, based on centrally defined policies and priority data supplied by the control point. Specifically, as will be discussed, the control points maintain user profiles, applications profiles and network resource profiles. These profiles include priority data which is provided to the agents.
  • Transmission queues 132 a may be configured to release data for transmission to the network at regular intervals. Using the parameters specified in traffic control commands issued by a control point, traffic module 132 calculates how much data can be released from the transmission queues in a particular interval. For example, if the specified average traffic rate is 100 Kbps and the queue release interval is 1 ms, then the total amount of data that the queues can release in a given interval is 100 bits. The relative priorities of the queues containing data to be transmitted determine how much of the allotment may be released by each individual queue. For example, assuming there are only two queues, Q1 and Q2, that have data queued for transmission, Q1 will be permitted to release 66.66% of the overall interval allotment if its priority is twice that of Q2; Q2 would only be permitted to release 33.33% of the allotment. If their priorities were equal, each queue would be permitted to release 50% of the interval allotment for forwarding to the network link.
  • If waiting data is packaged into units that are larger than the amount a given queue is permitted to release, the queue accumulates “credits” for intervals in which it does not release any waiting data. When enough credits are accumulated, the waiting message is released for forwarding to the network.
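  • The interval-release, priority-weighting and credit mechanisms just described might be sketched as follows (the queue contents, priorities and the 100-bit allotment are illustrative assumptions):

      from collections import deque

      class TransmitQueue:
          def __init__(self, priority):
              self.priority = priority
              self.packets = deque()   # sizes of waiting data units, in bits
              self.credits = 0.0       # accrues while waiting data is too large

      def release_interval(queues, allotment_bits):
          # Only queues with waiting data share in the interval's allotment.
          active = [q for q in queues if q.packets]
          total_priority = sum(q.priority for q in active)
          released = []
          for q in active:
              # Each queue's share is proportional to its relative priority.
              q.credits += allotment_bits * q.priority / total_priority
              while q.packets and q.packets[0] <= q.credits:
                  size = q.packets.popleft()
                  q.credits -= size
                  released.append(size)
          return released

      # 100 Kbps at a 1 ms release interval yields a 100-bit interval allotment.
      q1, q2 = TransmitQueue(priority=2), TransmitQueue(priority=1)
      q1.packets.extend([60, 60])   # each fits q1's per-interval share over time
      q2.packets.extend([150])      # oversized: released once credits accumulate
      for tick in range(4):
          print(tick, release_interval([q1, q2], 100))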
  • Similarly, to control the rate at which network traffic is received, traffic control module 132 may be configured to maintain a plurality of separate receive queues, such as receive queues 132 b. In addition to the methods discussed above, various other methods may be employed to control the rate at which network traffic is sent and received by the queues. Also, the behavior of the transmit and receive queues may be controlled through various methods to control jitter, delay, loss and response time for network connections.
  • The transmit and receive queues may also be configured to detect network conditions such as congestion and slow responding applications or servers. For example, for each application, transmitted packets or other data units may be timestamped when passed out of a transmit queue. When corresponding packets are received for a particular application, the receive and send times may be compared to detect network congestion and/or slow response times for various target resources. This information may be reported to the control points and shared with other agents within the domain. The response time and other performance information obtained by comparing transmit and receive times may also be used to compile and maintain statistics regarding various network resources.
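  • A minimal sketch of this timestamping mechanism (the class name and threshold value are invented for the example) might look like:

      import time

      class ResponseMonitor:
          SLOW_THRESHOLD_S = 2.0

          def __init__(self):
              self.pending = {}    # data unit id -> transmit timestamp
              self.samples = {}    # target resource -> list of response times

          def on_transmit(self, unit_id):
              self.pending[unit_id] = time.monotonic()

          def on_receive(self, unit_id, target):
              sent = self.pending.pop(unit_id, None)
              if sent is None:
                  return None      # unmatched response; nothing to measure
              elapsed = time.monotonic() - sent
              self.samples.setdefault(target, []).append(elapsed)
              if elapsed > self.SLOW_THRESHOLD_S:
                  # In the described system this condition would be reported to
                  # the control point and shared with other agents in the domain.
                  print(f"slow response from {target}: {elapsed:.2f}s")
              return elapsed

      monitor = ResponseMonitor()
      monitor.on_transmit("pkt-1")
      print(monitor.on_receive("pkt-1", "db-server"))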
  • Using this detection and reporting mechanism, a control point may be configured to reduce network loads by instructing traffic control module 132 to close low-priority sessions and block additional sessions whenever heavy network congestion is reported by one of the agents. In conjunction, as will be explained, popapp module 138 may provide a message to the user explaining why sessions are being closed. In addition to closing the existing sessions, the control point may be configured to instruct the agents to block any further sessions. This action may also be accompanied by a user message in response to attempts to launch a new application or network process. When the network load is reduced, the control point will send a message to the agents allowing new sessions.
  • In addition to identifying congestion and slow response times, traffic control module 132 may be more generally configured to aid in identifying downed or under-performing network resources. When a connection to a target resource fails, traffic module 132 notifies popapp module 138, which in turn launches an executable to perform a root-cause analysis of the problem. Agent 70 then provides the control point with a message identifying the resource and, if possible, its status.
  • In addition, when a connection fails, popapp module 138 may be configured to provide a message to the user, including an option to initiate an autoconnect routine targeting the unavailable resource. Enabling autoconnect causes the agent to periodically retry the unavailable resource. This feature may be disabled, if desired, to allow the control point to assume responsibility for determining when the resource becomes available again. As will be later discussed, the described system may be configured so that the control modules assume responsibility for monitoring unavailable resources in order to minimize unnecessary network traffic.
  • As discussed below, various agent components also monitor network conditions and resource usage for the purpose of compiling statistics. An additional function of traffic control module 132 is to aid in performing these functions by providing information to other agent components regarding accessed resources, including resource performance and frequency of access.
  • As suggested in the above discussion of traffic control module 132, popapp module 138 stores and is responsible for launching a variety of small application modules such as application 144, known as popapps, to perform various operations and enhance the functioning of the described system. Popapps detect and diagnose network conditions such as downed resources, provide specific messages to users and IT personnel regarding errors and network conditions, and interface with other information management, reporting or operational support systems, such as policy managers, service level managers, and network and system management platforms. Popapps may be customized to add features to existing products, to tailor products for specific customer needs, and to integrate the software, systems and methods with technology supplied by other vendors.
  • Administrator module 134 interacts with various other agent modules, maintains and provides network statistics, and provides an interface for centrally configuring agents and other components of the system. With regard to agent configuration, administrator module 134 interfaces with configuration utility 106 (shown in FIGS. 5 and 13-16), in order to configure various agent parameters. Administrator module 134 also serves as a repository for local reporting and statistics information to be communicated to the control points. Based on information obtained by other agent modules, administrator module 134 maintains local information regarding accessed servers, DNS servers, gateways, routers, switches, applications and other resources. This information is communicated on request to the control point, and may be used for network planning or to dynamically alter the behavior of agents. In addition, administrator module 134 stores system policies and/or components of policies, and provides policy data to various agent components as needed to implement and enforce the policies. Administrator module 134 also includes support for interfacing the described software and systems with standardized network management protocols and platforms.
  • DNS module 136 provides the agent with configurable address resolving services. DNS module 136 may include a local cache of DNS information, and may be configured to first resolve address requests using this local cache. If the request cannot be resolved locally, the request is submitted to a control point, which resolves the address with its own cache, provided the address is in the control point cache and the user has permission to access the address. If the request cannot be resolved with the control point cache, the connected control point submits the request to a DNS server for resolution. If the address is still not resolved at this point, the control point sends a message to the agent, and the agent then submits the request directly to its own DNS server for resolution.
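  • The resolution cascade described above might be sketched as follows; the stub control point and function names are assumptions, and permission checking is omitted for brevity:

      def resolve(name, local_cache, control_point, agent_dns_lookup):
          # 1. Try the agent's local DNS cache first.
          if name in local_cache:
              return local_cache[name]
          # 2. Ask the control point, which consults its own cache (subject to
          #    the user's permissions) and then its DNS server.
          address = control_point.resolve(name)
          if address is not None:
              local_cache[name] = address
              return address
          # 3. The control point failed: the agent falls back to its own DNS server.
          return agent_dns_lookup(name)

      class ControlPointStub:
          # Stand-in for the control point's cache-then-DNS-server lookup.
          def __init__(self, cache):
              self.cache = cache
          def resolve(self, name):
              return self.cache.get(name)

      print(resolve("intranet.example", {},
                    ControlPointStub({"intranet.example": "10.0.0.5"}),
                    agent_dns_lookup=lambda n: None))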
  • DNS module 136 also monitors address requests and shares the content of the requests with administrator module 134. The requests are locally compiled and ultimately provided to the control points, which maintain dynamically updated lists of the most popular DNS servers. In addition, DNS module 136 is adapted to interact with control point 72 in order to redirect address resolving requests and other network requests to alternate targets, if necessary.
  • Message broker module 140 creates and maintains connections to the one or more control points with which the agent interacts. The various agent components use the message broker module to communicate with each other and with a connected control point. Message broker module 140 includes message creator and message dispatcher processes for creating and sending messages to the control points. The message creator process includes member functions, which create control point messages by receiving message contents as parameters and encoding the contents in a standard network format. The message creator process also includes a member function to decode the messages received from the control point and return the contents in a format usable by the various components of the agent.
  • After encoding by the creator process, control point messages are added to a transmission queue and extracted by the message dispatcher function for transmission to the control point. Messages extracted from the queue are sent to the agent's active control point. In addition, the dispatcher may be configured to ensure delivery of the message using a sequence numbering scheme or other error detection and recovery methods.
  • Messages and communications from an agent to a control point are made using a unicast addressing mechanism. Communications from the control point to an agent or agents may be made using unicast or a multicast addressing scheme. When configured for multicast operation, the control point and agents may be set to revert to unicast to allow for communication with devices that do not support IP multicast.
  • Once a connection with a control point is established, message broker module 140 monitors the status of the connection and switches over to a backup control point upon detecting a connection failure. If both the active and backup connections are not active, network traffic is passed on transparently.
  • System services module 142 provides various support functions to the other agent components. The system services module maintains dynamic lists of user profiles, server profiles, DNS server profiles, control point connections and other data. It also provides a tracing capability for debugging and timer services for use by other agent components, and may be configured with a library of APIs to interface the agent with the operating system and other components of the device with which the agent is associated.
  • Referring now to FIGS. 10-12, control point 72 and its functions will be more particularly described. As seen in FIG. 10, control point may include a traffic module 160, a server profile module 162, a DNS server profile module 164, a gateway profile module 166, an administrator module 168, a message broker module 170, a popapp interface 172 and a popapp 174.
  • Control point traffic module 160 implements policy-based QoS techniques by coordinating the service-level enforcement activities of the agents. As part of this function, traffic module 160 dynamically allocates bandwidth among the agents in its domain by regularly obtaining allocation data from the agents, calculating bandwidth allocations for each agent based on this data, and communicating the calculated allocations to the agents for enforcement. For example, control point 72 can be configured to recalculate bandwidth allocations every five seconds. During each cycle between re-allocations, the agents restrict bandwidth usage by their associated devices to the allocated amount and monitor the amount of bandwidth actually used. At the end of the cycle, each agent reports the bandwidth usage and other allocation data to the control point to be used in re-allocating bandwidth.
  • During re-allocation, traffic module 160 divides the total bandwidth available for the upcoming cycle among the agents within the domain according to the priority data reported by the agents. The result is a configured bandwidth CB particular to each individual agent, corresponding to that agent's fair share of the available bandwidth. The priorities and configured bandwidths are a function of system policies, and may be based on a wide variety of parameters, including application identity, user identity, device identity, source address, destination address, source port, destination port, protocol, URL, time of day, network load, network population, and virtually any other parameter concerning network resources that can be communicated to, or obtained by, the control point. The detail and specificity of client-side parameters that may be supplied to the control point are greatly enhanced by the position of agent redirector module 130 relative to the layered communications protocol stack. The high position within the stack allows bandwidth allocation and, more generally, policy implementation, to be performed based on very specific triggering criteria. This may greatly enhance the flexibility and power of the described software, systems and methods.
  • The priority data reported by the agents may include priority data associated with multiple application programs running on a single networked device. In such a situation, the associated agent may be configured to report an “effective application priority,” which is a function of the individual application priorities. For example, if device A were running two application programs and device B were running a single application program, device A's effective application priority would be twice that of device B, assuming that the individual priorities of all three applications were the same. The reported priority data for a device running multiple application programs may be further refined by weighting the reported priority based on the relative degree of activity for each application program. Thus, in the previous example, if one of the applications running on device A was dormant or idle, the contribution of that application to the effective priority of device A would be discounted such that, in the end, device A and device B would have nearly the same effective priority. To determine effective application priority using this weighted method, the relative degree of activity for an application may be measured in terms of bandwidth usage, transmitted packets, or any other activity-indicating criteria.
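  • A sketch of this weighted calculation follows; full_activity, the activity level of a fully active application, is a modeling assumption introduced for the example:

      def effective_priority(apps, full_activity):
          # apps: (priority, activity) pairs for the applications on one device.
          # An application's contribution is discounted by how active it is.
          return sum(priority * min(activity / full_activity, 1.0)
                     for priority, activity in apps)

      # Device A runs two applications of equal priority, one nearly idle;
      # device B runs a single, fully active application of the same priority.
      device_a = effective_priority([(5, 100), (5, 2)], full_activity=100)
      device_b = effective_priority([(5, 100)], full_activity=100)
      print(device_a, device_b)   # nearly equal, as described above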
• In addition to priority data, each agent may be configured to report the amount of bandwidth UB used by its associated device during the prior period, as discussed above. Data is also available for each device's allocated bandwidth AB for the previous cycle. Traffic module 160 may compare configured bandwidth CB, allocated bandwidth AB or utilized bandwidth UB for each device, or any combination of those three parameters, to determine the allocations for the upcoming cycle. To summarize the three parameters: UB is the amount the networked device used in the prior cycle, AB is the maximum amount it was allowed to use, and CB specifies the device's "fair share" of available bandwidth for the upcoming cycle.
• Both utilized bandwidth UB and allocated bandwidth AB may be greater than, equal to, or less than configured bandwidth CB. An allocation may exceed the configured share when, for example, a number of other networked devices are using less than their configured shares CB. To efficiently utilize the available bandwidth, these unused amounts are allocated to devices requesting additional bandwidth, with the result being that some devices receive an allocation AB that exceeds their configured fair share CB. Though AB and UB may exceed CB, utilized bandwidth UB cannot normally exceed allocated bandwidth AB, because the agent traffic control module enforces the allocation.
• Any number of processing algorithms may be used to compare CB, AB and UB for each agent in order to calculate a new allocation; however, there are some general principles that are often employed. For example, when bandwidth is taken away from devices, it is often desirable to first reduce allocations for devices that will be least affected by the downward adjustment. Thus, traffic module 160 may be configured to first reduce allocations of clients or other devices where the associated agent reports bandwidth usage UB below the allocated amount AB. Presumably, these devices won't be affected if their allocation is reduced. Generally, traffic module 160 should not reduce any other allocation until all the unused allocations, or portions of allocations, have been reduced. The traffic module may be configured to then reduce allocations that are particularly high, or to make adjustments according to some other criteria, as sketched below.
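• A minimal sketch of that reduction ordering, using the UB/AB/CB quantities defined above over an invented per-device record layout:

```python
def reduction_order(devices):
    """Order devices for allocation cuts: unused allocations first.

    Devices reporting UB < AB presumably won't notice a cut, so they are
    trimmed first (largest slack first); only then are devices with
    allocations above their configured share CB considered.
    """
    unused = sorted((d for d in devices if d["UB"] < d["AB"]),
                    key=lambda d: d["AB"] - d["UB"], reverse=True)
    over_share = sorted((d for d in devices
                         if d["UB"] >= d["AB"] and d["AB"] > d["CB"]),
                        key=lambda d: d["AB"] - d["CB"], reverse=True)
    return [d["name"] for d in unused + over_share]

devices = [
    {"name": "a", "UB": 40, "AB": 64, "CB": 50},   # 24 units unused
    {"name": "b", "UB": 64, "AB": 64, "CB": 50},   # fully used, over CB
    {"name": "c", "UB": 32, "AB": 32, "CB": 50},   # fully used, under CB
]
print(reduction_order(devices))  # ['a', 'b'] -- 'c' is never cut here
```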
  • Traffic module 160 may also be configured so that when bandwidth becomes available, the newly-available bandwidth is provisioned according to generalized preferences. For example, the traffic module can be configured to provide surplus bandwidth first to agents that have low allocations and that are requesting additional bandwidth. After these requests are satisfied, surplus bandwidth may be apportioned according to priorities or other criteria.
  • FIGS. 11A, 11B, 11C and 11D depict examples of various methods that may be implemented by traffic module 160 to dynamically allocate bandwidth. FIG. 11A depicts a process by which traffic module 160 determines whether any adjustments to bandwidth allocations AB are necessary. Allocated bandwidths AB for certain agents are adjusted in at least the following circumstances. First, as seen in steps S4 and S10, certain allocated bandwidths AB are modified if the sum of all the allocated bandwidths ABtotal exceeds the sum of the configured bandwidths CBtotal. This situation may occur where, for some reason, a certain portion of the total bandwidth available to the agents in a previous cycle becomes unavailable, perhaps because it has been reserved for another purpose. In such a circumstance, it is important to reduce certain allocations AB to prevent the total allocations from exceeding the total bandwidth available during the upcoming cycle.
• Second, if there are any agents for which AB&lt;CB and UB≅AB, the allocation for those agents is modified, as seen in steps S6 and S10. The allocations for any such agent are typically increased. In this situation, an agent has an allocation AB that is less than its configured bandwidth CB, i.e., its existing allocation is less than its fair share of the bandwidth that will be available in the upcoming cycle. Also, the reported usage UB for the prior cycle is at or near the enforced allocation AB, and it can thus be assumed that more bandwidth would be consumed by the associated device if its allocation AB were increased.
  • Third, if there are any agents reporting bandwidth usage UB that is less than their allocation AB, as determined at step S8, then the allocation AB for such an agent is reduced for the upcoming period to free up the unused bandwidth. Steps S4, S6 and S8 may be performed in any suitable order. Collectively, these three steps ensure that certain bandwidth allocations are modified, i.e. increased or reduced, if one or more of the following three conditions are true: (1) ABtotal>CBtotal, (2) AB<CB and UB≅AB for any agent, or (3) UB<AB for any agent. If none of these are true, the allocations AB from the prior period are not adjusted. Traffic module 160 modifies allocations AB as necessary at step S10. After all necessary modifications are made, the control point communicates the new allocations to the agents for enforcement during the upcoming cycle.
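• The three triggering conditions of FIG. 11A reduce to a short test. A sketch, with a small tolerance standing in for the "≅" (approximately equal) comparison; the names are illustrative:

```python
NEAR = 0.95  # UB within 5% of AB is treated as "using its full allocation"

def needs_reallocation(devices, cb_total):
    """Return True if any FIG. 11A condition holds for the upcoming cycle."""
    ab_total = sum(d["AB"] for d in devices)
    if ab_total > cb_total:                          # condition (1)
        return True
    for d in devices:
        if d["AB"] < d["CB"] and d["UB"] >= NEAR * d["AB"]:
            return True                              # condition (2)
        if d["UB"] < d["AB"]:
            return True                              # condition (3)
    return False
```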
• FIG. 11B depicts re-allocation of bandwidth to ensure that total allocations AB do not exceed the total bandwidth available for the upcoming cycle. At step S18, traffic module 160 has determined that the sum of allocations AB from the prior period exceeds the available bandwidth for the upcoming period, i.e. ABtotal&gt;CBtotal. In this situation, certain allocations AB must be reduced. As seen in steps S20 and S22, traffic module 160 may be configured to first reduce allocations of agents that report bandwidth usage levels below their allocated amounts, i.e. UB&lt;AB for a particular agent. These agents are not using a portion of their allocations, and thus are unaffected or only minimally affected when the unused portion of the allocation is removed. At step S20, the traffic module first determines whether there are any such agents. At step S22, the allocations AB for some or all of these agents are reduced. These reductions may be gradual, or the entire unused portion of the allocation may be removed at once.
  • After any and all unused allocation portions have been removed, it is possible that further reductions may be required to appropriately reduce the overall allocations ABtotal. As seen in step S24, further reductions are taken from agents with existing allocations AB that are greater than configured bandwidth CB, i.e. AB>CB. In contrast to step S22, where allocations were reduced due to unused bandwidth, bandwidth is removed at step S24 from devices with existing allocations that exceed the calculated “fair share” for the upcoming cycle. As seen at step S26, the reductions taken at steps S22 and S24 may be performed until the total allocations ABtotal are less than or equal to the total available bandwidth CBtotal for the upcoming cycle.
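• A sketch of the FIG. 11B procedure: unused portions are reclaimed first, and only then are above-fair-share allocations cut, with a fractional trim rate modeling the gradual reductions described above (names and defaults are illustrative):

```python
def fit_allocations(devices, cb_total, trim=0.5, max_passes=50):
    """Reduce AB values until their sum fits within cb_total.

    trim sets how much of the removable amount is taken per pass (1.0
    removes it all at once); max_passes bounds the loop for safety.
    """
    for _ in range(max_passes):
        if sum(d["AB"] for d in devices) <= cb_total:
            break                                    # step S26: done
        if any(d["AB"] > d["UB"] for d in devices):
            # Steps S20/S22: reclaim unused allocation portions first.
            for d in devices:
                slack = d["AB"] - d["UB"]
                if slack > 0:
                    d["AB"] -= slack * trim
        else:
            # Step S24: no unused portions remain, so cut allocations
            # exceeding the configured fair share CB.
            for d in devices:
                excess = d["AB"] - d["CB"]
                if excess > 0:
                    d["AB"] -= excess * trim
    return devices
```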
  • FIG. 11C depicts a method for increasing the allocation of certain agents. As discussed with reference to FIG. 11A, where AB<CB and UB≅AB for any agent, the allocation AB for such an agent should be increased. The existence of this circumstance has been determined at step S40. To provide these agents with additional bandwidth, the allocations for certain other agents typically need to be reduced. Similar to steps S20 and S22 of FIG. 11B, unutilized bandwidth is first identified and removed (steps S42 and S44). Again, the control point may be configured to vary the rate at which unused allocation portions are removed. If reported data does not reflect unutilized bandwidth, traffic module 160 may be configured to then reduce allocations for agents having an allocation AB higher than their respective configured share CB, as seen in step S46. The bandwidth recovered in steps S44 and S46 is then provided to the agents requesting additional bandwidth. Any number of methods may be used to provision the recovered bandwidth. For example, preference may be given to agents reporting the largest discrepancy between their allocation AB and their configured share CB. Alternatively, preferences may be based on application identity, user identity, priority data, other client or system parameters, or any other suitable criteria.
  • FIG. 11D depicts a general method for reallocating unused bandwidth. At step S60, it has been determined that certain allocations AB are not being fully used by the respective agents, i.e. UB<AB for at least one agent. At step S62, the allocations AB for these agents are reduced. As with the reductions and modifications described with reference to FIGS. 11A, 11B and 11C, the rate of the adjustment may be varied through configuration changes to the control point. For example, it may be desired that only a fraction of unused bandwidth be removed during a single reallocation cycle. Alternatively, the entire unused portion may be removed and reallocated during the reallocation cycle.
  • In step S64 of FIG. 11D, the recovered amounts are provisioned as necessary. The recovered bandwidth may be used to eliminate a discrepancy between the total allocations ABtotal and the available bandwidth, as in FIG. 11B, or to increase allocations of agents who are requesting additional bandwidth and have relatively low allocations, as in FIG. 11C. In addition, if there is enough bandwidth recovered, allocations may be increased for agents requesting additional bandwidth, i.e. UB≅AB, even where the current allocation AB for such an agent is fairly high, e.g. AB>CB. As with the methods depicted in FIGS. 11B and 11C, the recovered bandwidth may be reallocated using a variety of methods and according to any suitable criteria.
  • As indicated, traffic module 160 can be configured to vary the rate at which the above allocation adjustments are made. For example, assume that a particular device is allocated 64 KBps (AB) and reports usage during the prior cycle of 62 KBps (UB). Traffic module 160 cannot determine how much additional bandwidth the device would use. Thus, if the allocation were dramatically increased, say doubled, it is possible that a significant portion of the increase would go unused. However, because the device is using an amount roughly equal to the enforced allocation AB, it can be assumed that the device would use more if the allocation were increased. Thus, it is often preferable to provide small, incremental increases. The amount of these incremental adjustments and the rate at which they are made may be configured with the configuration utility, as will be discussed with reference to FIG. 16. If the device consumes the additional amounts, successive increases can be provided if additional bandwidth is available.
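• A sketch of this incremental ramp-up, with the step size and the "roughly equal" threshold as configurable parameters (all values illustrative):

```python
def next_allocation(ab_kbps, ub_kbps, surplus_kbps, step_kbps=8, near=0.95):
    """Grow an allocation in small steps while the device keeps using it.

    64 KBps allocated with 62 KBps used suggests the device would use
    more, so grant a small increment rather than, say, doubling the
    allocation and risking that most of the increase goes unused.
    """
    if ub_kbps >= near * ab_kbps and surplus_kbps >= step_kbps:
        return ab_kbps + step_kbps
    return ab_kbps

alloc = 64
for usage in (62, 70, 79):                # device consumes each increase
    alloc = next_allocation(alloc, usage, surplus_kbps=100)
print(alloc)  # 64 -> 72 -> 80 -> 88 over three cycles
```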
  • In addition, the bandwidth allocations and calculations may be performed separately for the transmit and receive rates for the networked devices. In other words, the methods described with reference to FIGS. 11A-11D may be used to calculate a transmit allocation for a particular device, as well as a separate receive allocation. Alternatively, the calculations may be combined to yield an overall bandwidth allocation.
  • Server profile module 162, DNS server profile module 164, gateway profile module 166 and administrator module 168 all interact with the agents to monitor the status of network resources. More specifically, FIG. 12 provides an illustrative example of how the control points and agents may be configured to monitor the status of resources on the network. The monitored resource(s) may be a server, a DNS server, a router, gateway, switch, application, etc. At step S100, a resource status change has occurred. For example, a server has gone down, traffic through a router has dramatically increased, a particular application is unavailable, or the performance of a particular gateway has degraded beyond a predetermined threshold specified in a system policy.
  • At step S102, a networked device attempts to access the resource or otherwise engages in activity on the network involving the particular resource. If the accessing or requesting device is an agent, an executable spawned by popapp module 138 (FIG. 9) analyzes the resource, and reports the identity and status of the resource to the control point connected to the agent, as indicated at step S104. Launch of the popapp may be triggered by connection errors, or by triggering criteria specified in system policies. For example, system policies may include performance benchmarks for various network resources, and may further specify that popapp analysis is to be performed when resource performance deviates by a certain amount from the established benchmark. In addition, the control points may similarly be configured to launch popapps to analyze network resources.
• Once the control point obtains the status information, the control point reports the information to all of the agents in its domain, and instructs the agents how to handle further client requests involving the resource, as indicated at steps S108 and S110. In the event that the target resource is down, underperforming or otherwise unavailable, the instructions given to the agents will depend on whether an alternate resource is available. The control point stores dynamically updated lists of alternate available resources. If an alternate resource is available, the instructions provided to the agent may include an instruction to transparently redirect the request to an alternate resource, as shown in step S108. For example, if the control point knows of a server that mirrors the data of another server that has gone down, client requests to the down server can simply be redirected to the mirror server. Alternatively, if no alternate resource is available, the agent can be instructed to provide a user message in the event of an access attempt, as seen in step S110. The messaging function is handled by agent popapp module 138. Popapp functionality may also be employed by the control point to report status information to other control points and to management platforms supplied by other vendors. Further, messages concerning resource status or network conditions may be provided via email or paging to IT personnel.
• Still referring to FIG. 12, the control point may be configured to assume responsibility for tracking the status of the resource in order to determine when it again becomes available, as shown in step S112. A slow polling technique is used to minimize unnecessary traffic on the network. During the interval in which the resource is unavailable, the agents either redirect requests to alternate resources or provide error messages, based on the instructions provided by the control point, as shown in step S116. Once the control point determines that the resource is again available, the control point shares this information with the agents and disables the instructions provided in steps S108 and S110, as shown in step S118.
  • This method of tracking and monitoring resource status has important advantages. First, it reduces unnecessary and frustrating access attempts to unavailable resources. Instead of repeatedly attempting to perform a task, a user's requests are redirected so that the request can be serviced successfully, or the user is provided with information about why the attempt(s) was unsuccessful. With this information in hand, the user is less likely to generate wasteful network traffic with repeated access attempts in a short period of time. In addition, network traffic is also reduced by having only one entity, usually a control point, assume responsibility for monitoring a resource that is unavailable.
  • In addition to assisting these resource monitoring functions, server profile module 162 maintains a dynamically updated list of the servers accessed by agents within its domain. The server statistics may be retrieved using the configuration utility, or with a variety of other existing management platforms. The server statistics may be used for network planning, or may be implemented into various system policies for dynamic enforcement by the agents and control points. For example, the control points and agents can be configured to divert traffic from heavily used servers or other resources.
  • DNS module 164 also performs certain particularized functions in addition to aiding the resource monitoring and tracking described with reference to FIG. 12. Specifically, the DNS module maintains a local DNS cache for efficient local address resolution. As discussed with reference to agent DNS module 136, the agents and control points interact to resolve address requests, and may be configured to resolve addresses by first referencing local DNS data maintained by the agents and/or control points. Similar to server profile module 162, DNS module 164 also maintains statistics for use in network planning and dynamic system policies.
  • In addition to the functions described above, administrator module 168 maintains control point configuration parameters and distributes this information to the agents within the domain. Similar to the server, DNS and gateway modules, administrator module 168 also aids in collecting and maintaining statistical data concerning network resources. In addition, administrator module 168 retrieves policy data from centralized policy repositories, and stores the policy data locally for use by the control points and agents in enforcing system policies.
  • Control point 72 also includes a synchronization interface (not shown) for synchronizing information among multiple control points within the same domain.
  • Message broker module 170 performs various functions to enable the control point to communicate with the agents. Similar to agent message broker module 140, message broker module 170 includes message creator and message dispatcher processes. The message creator process includes member functions that receive message contents as parameters and return encoded messages for transmission to agents and other network entities. Functions for decoding received messages are also included with the message creator process. The dispatcher process transmits messages and ensures reliable delivery through a retry mechanism and error detection and recovery methods.
• Referring now to FIGS. 13-16, both the agents and control points may be configured using configuration utility 106. Typically, configuration utility 106 is a platform-independent application that provides a graphical user interface for centrally managing configuration information for the control points and agents. To configure the control points and agents, the configuration utility interfaces with administrator module 134 of agent 70 and with administrator module 168 of control point 72. Alternatively, configuration utility 106 may interface with administrator module 168 of control point 72, and the control point in turn may interface with administrator module 134 of agent 70.
  • FIG. 13 depicts a main configuration screen 188 of configuration utility 106. As indicated, main configuration screen 188 can be used to view various managed objects, including users, applications, control points, agents and other network entities and resources. For example, screen frame 190 on the left side of the main configuration screen 188 may be used to present an expandable representation of the control points that are configured for the network.
• When a particular control point is selected in the main configuration screen 188, various settings for the control point may be configured. For example, the name of the control point may be edited, agents and other entities may be added to the control point's domain, and the control point may be designated as a secondary connection for particular agents or groups of agents. In addition, the system administrator may specify the total bandwidth available to agents within the control point's domain for transmitting and receiving, as shown in FIG. 14. This bandwidth specification will affect the configured bandwidths CB and allocated bandwidths AB discussed with reference to control point traffic module 160 and the methods depicted in FIGS. 11A-11D.
  • Configuration utility 106 also provides for configuration of various settings relating to users, applications and resources associated with a particular control point. For example, users may be grouped together for collective treatment, lists of prohibited URLs may be specified for particular users or groups of users, and priorities for applications may be specified, as shown in FIG. 15. Priorities may also be assigned to users or groups of users. As discussed above, this priority data plays a role in determining bandwidth allocations for the agents and their associated devices.
  • In addition, optimum and minimum performance levels may be established for applications or other tasks using network resources. Referring again to the IP telephony example discussed above, the configuration utility may be used to specify a minimum threshold performance level for a networked device running the IP telephony application. This performance level may be specified in terms of QoS performance parameters such as bandwidth, throughput, jitter, delay and loss. The agent module associated with the networked device would then monitor the network traffic associated with the IP telephony application to ensure that performance was above the minimum threshold. If the minimum level was not met, the control points and agents could interact to reallocate resources and provide the specified minimum service level. Similarly, an optimum service level may be specified for various network applications and tasks. More generally, configuration utility 106 may be configured to manage system policies by providing functionality for authoring, maintaining and storing system policies, and for managing retrieval of system policies from other locations on a distributed network, such as a dedicated policy server.
  • Referring now to FIG. 16, the configuration of various other control point and agent parameters will be discussed. As seen in the figure, configuration utility 106 may be used to configure the interval at which resource reallocation is performed. For example, the default interval for recalculating bandwidth allocations is 5000 milliseconds, or 5 seconds. Also, as discussed above, the rate at which resource allocation occurs may be specified in order to prevent overcompensation, unnecessary adjustments to allocations, and inefficient reconfigurations. Specifically, the percentage of over-utilized bandwidth that is removed from a client device and reallocated elsewhere may be specified with the configuration utility, as seen in FIG. 16. In addition, the rate at which agents provide feedback to the control points regarding network conditions or activities of their associated devices may be configured.
  • In many of the examples discussed above, the systems and methods are implemented architecturally in two tiers. The first tier may include one or more control modules, such as control points 72. Because the control points control and coordinate operation of agent modules 70, the control points may be referred to as “upstream” or “overlying” components, relative to the agent modules that they control. By contrast, the agents, which form the second tier of the system, may be referred to as “downstream” or “underlying” components, relative to the control points they are controlled by.
  • It will be appreciated that the systems and methods described herein are extremely flexible and scalable, and may be applied to networks of widely varying size. In some settings, scaling is achieved by extending hierarchical implementations to three or more tiers. Indeed, a very large architectural model may be built by extending the two-tier example above on the upstream and/or downstream side.
  • Such a model can be used to deliver consistent application performance in a very large and complex network configuration. An upstream component may control one or more downstream components. An upstream component can also be configured to be a downstream component of some other controlling upstream entity. Similarly, a downstream component may be configured to be an upstream component to control its downstream delegates.
  • FIG. 17 depicts an exemplary multi-tier implementation. In this example, the Tier 1 component is upstream relative to three components in Tier 2. Because the Tier 2 components underlie and are controlled by the Tier 1 component, the Tier 2 components are downstream components relative to the Tier 1 component. As indicated, the components within Tiers 2, 3 and 4 may be configured to function as both downstream and upstream components.
  • The agent modules and control points described herein may be implemented at any level within a multi-tier environment such as that shown in FIG. 17. For example, the Tier 4 component may include a control module 72, as shown, that is configured to control agent modules 70 that may be running on the underlying Tier 5 components.
  • Hierarchically, the structure of FIG. 17 typically is implemented in a one-to-many configuration moving downstream. In other words, an upstream component may control multiple downstream components in the tier immediately below it, and those downstream components may control multiple components in further downstream tiers. However, a given downstream component typically reports to and is controlled by only a single upstream component. It should be appreciated that a wide variety of configurations are possible, with any desired number of tiers and components or groupings of components within the tiers.
• In multi-tier environments such as that described above, it will often be desirable to provide mechanisms for centrally defining and distributing policies relating to management of bandwidth and other resources. FIG. 18 depicts an exemplary networking environment employing such mechanisms. In the example, various branch offices are connected to a centralized data center 200 via network 202. The system may be configured to enable an administrator to define enterprise-wide policies on a central server such as an Enterprise Policy Server (EPS) 204. In such an environment, the control point modules described herein may be implemented in connection with a Controlled Location Policy Server (CLPS) 206, as shown in the exemplary branch office 208 at the bottom of FIG. 18.
• Typically, a given CLPS 206 retrieves policies pertinent to its branch office from its controlling EPS 204. The CLPS, which may include a control module 72, then distributes the retrieved policies to the relevant agent modules 70 within its domain. This policy definition and distribution scheme may easily be adapted and scaled to manage widely varying enterprise configurations. The distributed policies may, among other things, be used to facilitate the bandwidth management techniques described herein.
  • In addition to or instead of the previously described bandwidth management features, bandwidth management may be effected using tiered implementations such as that described above. FIG. 19 depicts an exemplary system 300 for managing bandwidth on network link 302.
  • As shown, system 300 may include various components referred to as bandwidth managers, which may be implemented within the control modules (e.g., control points 72) and agent modules (e.g., agent modules 70) described herein. For example, the depicted system includes a controlled location bandwidth manager (CL-BWMGR) 304, which may be implemented within previously described control point 72, and/or within a Controlled Location Policy Server 206, such as shown in FIG. 18. In any case, CL-BWMGR 304 typically is implemented as a software program running on a computing device, such as a server (not shown), connected to network link 302.
  • The CL-BWMGR typically communicates with agent software (e.g., agent modules 70) running on computing devices connected to network link 302, so as to manage bandwidth usage by those devices. For example, as shown, plural computing devices 22, also referred to as agent computers, may be interconnected via network link 302. Each agent computer may be running an agent bandwidth manager (AG-BWMGR) 306, one or more application bandwidth managers (APP-BWMGR) 308, and one or more socket bandwidth managers (SOCK-BWMGR) 310. The bandwidth managers loaded on agent computer 22 typically are sub-components of agent module 70, discussed above.
• Within a given computing device 22 connected to network link 302, typically there is one AG-BWMGR 306, and a variable number of APP-BWMGRs 308 and SOCK-BWMGRs 310, depending on the applications and sockets being utilized. For example, when the bandwidth managers are implemented within an agent module 70, the agent module typically is adapted to launch an APP-BWMGR 308 for each application 320 running on computer 22, and a SOCK-BWMGR 310 for each socket 322 opened by each application 320. As indicated, a given computing device may be running multiple applications 320, and a given application 320 may have multiple open sockets 322. Computing devices 22 transmit and receive data on network link 302 via transmit and receive control sections 324 and 326. Sections 324 and 326 may respectively include transmit and receive queues 328 and 330, as will be explained below.
  • In the depicted example, CL-BWMGR 304 manages overall bandwidth on network link 302, a given AG-BWMGR 306 manages bandwidth usage by its associated agent computer 22, a given APP-BWMGR 308 manages bandwidth usage by its associated application 320, and a given SOCK-BWMGR 310 manages bandwidth usage by its associated socket 322.
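• Structurally, this hierarchy can be pictured as simple containment, with consumption data flowing up and allocations flowing down. A skeletal sketch follows; the class names mirror the manager abbreviations, and everything else is invented for illustration:

```python
class SockBWMgr:                # one per open socket
    def __init__(self):
        self.allocation = 0     # bytes permitted in the current interval
        self.consumed = 0       # reported upstream as feedback

class AppBWMgr:                 # one per running application
    def __init__(self):
        self.sockets = []       # underlying SOCK-BWMGRs it sub-allocates to

class AgentBWMgr:               # one per agent computer
    def __init__(self):
        self.apps = []          # underlying APP-BWMGRs it sub-allocates to

class ClBWMgr:                  # one per managed network link
    def __init__(self):
        self.agents = []        # underlying AG-BWMGRs it sub-allocates to
```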
  • As indicated by the arrows interconnecting the different bandwidth managers, the bandwidth managers may be arranged in a hierarchical control configuration, in which an upstream component interacts with and/or controls one or more underlying downstream components. In the depicted example, the CL-BWMGR manages bandwidth usage on network link 302 by interacting with underlying AG-BWMGRs, so as to manage bandwidth usage by the particular downstream computing devices 22, applications 320 and sockets 322 that underlie the CL-BWMGR. It should be appreciated that the CL-BWMGR may manage bandwidth usage on any type of interconnection between computers. For example, in FIG. 2, a given remote network 14 could include a server computer running a CL-BWMGR 304 interconnected via a local network segment with plural client computers, where each client computer was loaded with an agent module 70. In such a case, the control computer could control the clients' bandwidth usage on not only the local network segment (e.g., an Ethernet-based LAN), but also on the connection through router 18 to public network 16.
  • Referring again to FIG. 19, a given AG-BWMGR 306 manages bandwidth usage by its associated computing device 22 by interacting with underlying APP-BWMGRs, so as to manage bandwidth usage by the particular downstream applications 320 and sockets 322 that underlie the AG-BWMGR. Likewise, a given APP-BWMGR 308 manages bandwidth usage by its associated application 320 by interacting with underlying SOCK-BWMGRs, so as to manage bandwidth usage by the particular downstream sockets 322 that underlie the APP-BWMGR. As explained below, downstream components typically facilitate control by reporting certain data, such as bandwidth consumption, upstream to overlying upstream components.
  • In the depicted example, bandwidth may be managed by taking a bandwidth allocation existing at a particular hierarchical level, and sub-allocating the allocation for apportionment amongst downstream components. Where there are multiple downstream components, sub-allocation typically involves dividing the allocation into portions for each of the downstream components.
• For example, CL-BWMGR 304 may have an allocation corresponding to the available bandwidth on network link 302. The available bandwidth may be configured according to a system policy specifying parameters for network link 302, or may be determined or configured through other methods. The available bandwidth on link 302 may be sub-allocated by CL-BWMGR 304 into one or more agent allocations, depending on the number of agent computers 22 underlying the CL-BWMGR. Each agent allocation represents the amount of bandwidth allotted to the respective agent computer. For example, if one hundred agent computers 22 are controlled by the CL-BWMGR, then the CL-BWMGR would typically sub-allocate available link bandwidth into one hundred individualized agent allocations for the respective underlying agent computers. The individualized agent allocations would then be provided to the respective AG-BWMGRs at each agent computer 22 for enforcement and further sub-allocation.
  • Similarly, at each agent computer, the associated AG-BWMGR 306 may sub-allocate its agent allocation into individualized application allocations for each application 320 running on the agent computer. A given application allocation represents the amount of bandwidth allotted to the corresponding application. These application allocations typically are provided to the respective APP-BWMGRs for enforcement and further sub-allocation. Finally, at each application 320, the associated APP-BWMGR 308 may sub-allocate its application allocation into individual socket allocations for the sockets 322 that are open for that application. The individual socket allocations typically are provided to the respective SOCK-BWMGRs 310.
  • Typically, at least some of the bandwidth managers are configured to communicate upstream in order to facilitate the various sub-allocations discussed above. Indeed, data may be provided upstream from each group of SOCK-BWMGRs 310 to their overlying APP-BWMGR 308, from each group of APP-BWMGRs 308 to their overlying AG-BWMGR 306, and from each group of AG-BWMGRs 306 to their overlying CL-BWMGR 304. The data provided upstream may be used to calculate sub-allocations, which are then sent back downstream for enforcement, and/or further sub-allocation. Often, it will be desirable that the data which is sent upstream pertain to socket activity, so as to efficiently adjust future allocations to take bandwidth from where it is less needed and provide it to where it is more needed.
• Indeed, the interaction between the various bandwidth managers typically is performed to efficiently allocate network bandwidth among the various bandwidth-consuming components. As indicated above, consuming components may include agent computers 22, applications 320 running on those computers, and sockets 322 open for those applications. One way in which the systems described herein may efficiently allocate bandwidth is by shifting bandwidth allocations toward high priority uses and away from uses of relatively lower priority. Another allocation criterion that may be employed involves providing future allocations based on consumption of past allocations. For example, allocations to a particular bandwidth manager (e.g., an application allocation provided from an AG-BWMGR 306 to an APP-BWMGR 308) may be reduced if past allocations were only partially consumed, or if there is a trend of diminished usage.
  • The interactions discussed above need not occur in any particular order, and may occur periodically or non-periodically, and/or at different rates for different levels. For example, CL-BWMGR 304 may hand out agent allocations for computers 22 at regular intervals, or only when a changed condition within the network is detected. Application allocations and socket allocations may be disseminated downstream periodically, but at different rates. In any case, it will often be advantageous to repeatedly and dynamically update the allocations and sub-allocations, as will be discussed in more detail below.
  • Bandwidth management for a representative socket will now be described with reference to FIG. 20, which shows an agent computing device 22 coupled to other computers (not shown) via network link 302. Computing device 22 is running an application program 320 which communicates over network link 302 via a layered protocol stack (depicted in part at 92), as described with reference to earlier examples. Agent module 70 may be interposed between application program 320 and lower layers of stack 92, typically at a point where flow control may be achieved. Specifically, as described with reference to FIGS. 6, 7 and 8, agent module 70 typically is positioned in a relatively high location within the protocol stack and interfaced with a socket object, to allow network traffic to be intercepted, or hooked into, at a point between application program 320 and transport layer 124 of stack 92. The agent module 70 depicted in FIG. 20 may include some or all of the components described with reference to FIG. 9. In particular, a redirector 130 (FIG. 9) may be employed to allow agent module to intercept, or hook into, the socket data flow between application 320 and network link 302.
  • For the depicted representative socket, traffic between application program 320 and network link 302 flows through transmit control section 324 and receive control section 326, which may include transmit queues 328 and receive queues 330, respectively. Data flows through control sections 324 and 326 may be monitored and controlled via SOCK-BWMGR 310 of agent module 70. As described above, SOCK-BWMGR 310 may interact with upstream bandwidth managers (not shown in FIG. 20) implemented within agent module 70. Specifically, socket allocations may be provided to SOCK-BWMGR 310, which controls transmit control section 324 and receive control section 326 to ensure that those allocations are enforced. Also, as indicated, SOCK-BWMGR 310 may provide feedback (e.g., concerning socket activity) to various upstream bandwidth managers. This feedback may be used to facilitate future allocations.
  • Socket allocations may be provided to SOCK-BWMGR 310 periodically at regular intervals. In such a case, the allocation typically is in the form of a number of bytes that may be transmitted, and/or a number of bytes that may be received, during a set interval (there may be separate transmit and receive allocations). During the interval, data passes through control sections 324 and 326, and SOCK-BWMGR 310 monitors the amount of data sent to, or received from, network link 302. Provided that the allocation for a given interval is not exceeded, SOCK-BWMGR 310 may simply passively monitor the traffic. However, once the allocation for a particular interval is used up, further transmission and reception is prevented, until a new allocation is obtained (e.g., during a succeeding interval). As indicated, queues 328 and 330 may be provided to hold overflow, until the socket is replenished with a new allocation, at which point the queued data may be transmitted and/or received. When an allocation is not exceeded, queues 328 and 330 typically are not used, such that transmitted or received data passes through sections 324 and/or 326 without any queuing of data. In these implementations, queuing is triggered only when the socket allocation for a given interval is exceeded.
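• A sketch of that per-interval enforcement for the transmit side (the receive side mirrors it), with wire_send standing in for the lower protocol layers; the class and field names are invented:

```python
from collections import deque

class SocketTransmitControl:
    """Enforce a per-interval byte allocation on one socket's transmit path;
    overflow is queued until the next allocation arrives."""
    def __init__(self):
        self.allocation = 0            # bytes permitted this interval
        self.sent = 0                  # bytes already sent this interval
        self.queue = deque()           # overflow, used only when exhausted

    def send(self, packet, wire_send):
        if not self.queue and self.sent + len(packet) <= self.allocation:
            wire_send(packet)          # normal path: no queuing at all
            self.sent += len(packet)
        else:
            self.queue.append(packet)  # allocation used up: hold packet

    def replenish(self, new_allocation, wire_send):
        """Called each interval with the socket's fresh allocation."""
        self.allocation = new_allocation
        self.sent = 0
        while self.queue:              # drain queued overflow first
            if self.sent + len(self.queue[0]) > self.allocation:
                break
            packet = self.queue.popleft()
            wire_send(packet)
            self.sent += len(packet)
```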
  • In many cases, it will be desirable that the upstream feedback for the socket (e.g., the feedback sent upstream by SOCK-BWMGR 310) include specification of how much bandwidth was consumed by the socket (e.g., how many bytes were sent or received). This consumption data may be used by upstream bandwidth managers in performing sub-allocations, as will be explained below.
  • In connection with the agent device management described herein, the term socket should be broadly understood to mean network conversations or connections carried out by applications above the transport protocol layer (e.g., layer 124 in FIGS. 6, 7, 8 and 20). In most cases, these conversations or connections are characterized at least in part by the ability to implement flow control. Accordingly, it will be appreciated that the agent modules are not limited to any one particular standard that is employed within the upper layers of the protocol stack. Indeed, the agent modules described herein may hook into network conversations carried on by applications that do not use the pervasive Winsock standard.
• Referring particularly to FIG. 21, exemplary agent computer 22 is shown as running applications 320 a, 320 b, and 320 c. Application 320 a employs the Winsock standard, and thus communicates with network link 302 via Winsock API 128 and lower layers of the protocol stack (e.g., the layers shown at 340 and 342). Accordingly, agent module 70 a may be interposed as shown relative to API 128 to hook into the network communications and carry out the monitoring and management functions described at length herein.
• In contrast, applications 320 b and 320 c do not work with the Winsock standard. An example of such an application is an Active Directory application that employs the NetBios standard. For these non-Winsock applications, an alternate agent module 70 b may be provided. Similar to module 70 a and the previously discussed agent module embodiments, agent module 70 b is able to hook into network conversations above the transport layer. The module does not require Winsock, though it is still active at a high protocol layer where flow control may be employed and where a rich variety of parameters may be accessed to enhance monitoring and policy-based management.
  • Returning to the discussion of bandwidth management, various different methodologies may be employed to provide downstream sub-allocations of bandwidth. Many of these methodologies employ a priority scheme, in which priorities may be assigned via network policies. Priority may be assigned based on a variety of parameters, including IP source address, IP destination address, source port, destination port, protocol, application identity, user identity, device identity, URL, available device bandwidth, application profile, server profile, gateway identity, router identity, time-of-day, network congestion, network load, network population, available domain bandwidth and status of network resources. Other parameters are possible. As discussed above, the position of the described agent modules relative to the protocol stack allows access to many different parameters. The ability to predicate bandwidth management on such varying criteria allows for increased control over bandwidth allocations, and thereby increases efficiency.
  • In typical implementations, the assigned priorities discussed above may be converted into or expressed in terms of a priority level that is specific to a given network “conversation,” or connection, such as a socket data flow. Indeed, many of the sub-allocation schemes discussed below depend on socket priorities for individual sockets. In the policy-based systems described herein, the socket priorities typically are configured values derived from various parameters such as those listed above. For example, the configured priority for a given socket might be determined by a combination of user identity, the type of application being run, and the source address of the requested data, to name but one example.
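• As an illustration of how a configured socket priority might be derived from such parameters, the sketch below matches a socket's attributes against ordered policy rules. The rules, fields and values are entirely invented for illustration:

```python
POLICY_RULES = [   # (conditions, priority); first match wins
    ({"user": "cfo", "app": "ip_telephony"}, 90),
    ({"app": "ip_telephony"}, 70),
    ({"dest_port": 80}, 30),
]
DEFAULT_PRIORITY = 10

def configured_socket_priority(sock_attrs):
    """sock_attrs: parameters visible at the agent's position in the stack
    (user identity, application, addresses, ports, URL, time of day, ...)."""
    for conditions, priority in POLICY_RULES:
        if all(sock_attrs.get(k) == v for k, v in conditions.items()):
            return priority
    return DEFAULT_PRIORITY

print(configured_socket_priority({"user": "cfo", "app": "ip_telephony"}))  # 90
```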
• Referring again to FIG. 19, various exemplary methods for sub-allocating bandwidth will now be discussed. Beginning with the sub-allocation of an application allocation into one or more socket allocations, as may be performed by APP-BWMGR 308, assume that there are k sockets 322 open for a given application 320. For a given socket s (where s = 1 through k), an effective socket priority ESP(s) may be calculated as follows:

$$ESP(s) = SP(s) \cdot \frac{cons\_last\_N(s)}{alloc\_last\_N(s)} \cdot factor \qquad (1)$$

where SP(s) is the configured priority for the socket (e.g., as determined by network policies); cons_last_N(s) is the bandwidth consumed by the socket during the last N allocation cycles; alloc_last_N(s) is the bandwidth allocated to the socket over the last N cycles; and factor is a convenient multiplier used to avoid handling of fractional amounts.
• Then, the socket allocation SALLOC(s) for a given socket may be calculated as:

$$SALLOC(s) = APALLOC \cdot \frac{ESP(s)}{\sum_{s=1}^{k} ESP(s)} \qquad (2)$$

where APALLOC is the application allocation being apportioned by the APP-BWMGR among the various sockets, and ESP(s) is the effective priority of the socket. It should thus be appreciated that the apportionment is derived via a weighted average of the effective socket priorities.
  • Accordingly, in the above example, socket allocations will tend to be higher for sockets having higher priorities, and for sockets consuming larger percentages of prior allocations. In the above example, calculations are based on consumption data from multiple past allocation cycles (N cycles), though a single cycle calculation may be used. In some cases, multiple cycle calculations may smooth transitions and stabilize adjustment of allocations among the various sockets. Typically, the calculations described above are performed by the overlying APP-BWMGR based on bandwidth consumption data received from underlying SOCK-BWMGRs.
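• Equations (1) and (2) translate almost directly into code. A sketch assuming per-socket records of configured priority and the last N cycles' consumption and allocation; this is an illustration of the calculation, not the described system's implementation:

```python
FACTOR = 1000   # scaling factor to avoid handling fractional amounts

def esp(sock):
    """Equation (1): effective socket priority."""
    if sock["alloc_last_N"] == 0:
        return 0
    return sock["SP"] * sock["cons_last_N"] * FACTOR // sock["alloc_last_N"]

def socket_allocations(apalloc, sockets):
    """Equation (2): weighted-average sub-allocation of APALLOC."""
    priorities = [esp(s) for s in sockets]
    total = sum(priorities)
    if total == 0:
        # Degenerate case (all sockets idle): fall back to equal shares.
        return [apalloc / len(sockets)] * len(sockets)
    return [apalloc * p / total for p in priorities]

sockets = [{"SP": 10, "cons_last_N": 900, "alloc_last_N": 1000},
           {"SP": 10, "cons_last_N": 300, "alloc_last_N": 1000}]
print(socket_allocations(12000, sockets))  # [9000.0, 3000.0]
```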
• Turning now to the sub-allocation of agent allocations into one or more application allocations, as may be performed by AG-BWMGR 306, assume that there are j applications 320 running on a given computer 22. First, an effective application priority may be calculated for each application. Using the above application as an example, the effective application priority EAPP may be calculated as follows:

$$EAPP = \sum_{s=1}^{k} SP(s) \cdot \frac{cons\_last\_N(s)}{alloc\_last\_N(s)} \cdot factor \qquad (3)$$

Again, a factor may be employed to avoid handling of fractions, or to otherwise facilitate processing. A similar calculation is performed for all of the other applications 320 underlying the AG-BWMGR. Then, similar to the socket sub-allocations above, the individual application allocations may be produced with a weighted average. Specifically, for a given application ap (where ap = 1 through j), an application allocation APALLOC(ap) may be calculated as follows:

$$APALLOC(ap) = AGALLOC \cdot \frac{EAPP(ap)}{\sum_{ap=1}^{j} EAPP(ap)} \qquad (4)$$

where AGALLOC is the agent allocation being apportioned by the AG-BWMGR among the various applications, and EAPP(ap) is the effective priority of the application for which the allocation is being calculated. Typically, these calculations are performed by the overlying AG-BWMGR based on bandwidth consumption data received from underlying APP-BWMGRs 308. In this scheme, application allocations will tend to be higher for applications having more sockets open, higher priority sockets, and sockets consuming greater percentages of prior socket allocations.
• Turning now to the sub-allocation of link bandwidth (e.g., overall bandwidth available on network link 302) into one or more agent allocations, as may be performed by CL-BWMGR 304, assume that there are i agent computers 22 interconnected by network link 302. As with the above sub-allocations, an effective priority may be calculated and then bandwidth may be apportioned according to a weighted average of the effective priorities. Specifically, using the above agent as an example, the effective agent priority EAGP may be calculated by summing the underlying effective application priorities as follows:

$$EAGP = \sum_{ap=1}^{j} EAPP(ap) \qquad (5)$$
• A similar calculation may be performed for all of the other agent computers underlying the CL-BWMGR, in order to obtain all of the effective agent priorities. In some cases, it may be desirable to weight the newly calculated value with the prior value to obtain the effective priority, in order to smooth adjustments and bandwidth reallocations among the agent computers. In certain implementations, for example, it has proved advantageous to calculate the effective agent priority by blending the new value (derived from the above equation) with the most recent value in a 60-40 ratio. Any other desirable weighting may be employed, and/or other methods may be used to smooth allocation transitions. In any case, the sub-allocation to the agent computers may be effected with a weighted average:

$$AGALLOC(ag) = BWCL \cdot \frac{EAGP(ag)}{\sum_{ag=1}^{i} EAGP(ag)} \qquad (6)$$

where AGALLOC(ag) is the agent allocation for a given agent computer ag (where ag = 1 through i); BWCL is the bandwidth available to be allocated among all the agent computers; and EAGP(ag) is the effective agent priority for the given agent computer.
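• Continuing the sketch up the hierarchy, equations (3) through (6) follow the same pattern, with the 60-40 blending noted above applied when updating agent priorities. Here esp() is the routine from the previous sketch, and all record layouts remain illustrative:

```python
def eapp(sockets):
    """Equation (3): effective application priority, i.e. the sum of the
    per-socket weighted priorities of equation (1)."""
    return sum(esp(s) for s in sockets)

def application_allocations(agalloc, apps):
    """Equation (4): sub-allocate the agent allocation AGALLOC."""
    priorities = [eapp(a["sockets"]) for a in apps]
    total = sum(priorities)
    if total == 0:
        return [agalloc / len(apps)] * len(apps)   # straight average
    return [agalloc * p / total for p in priorities]

def eagp(apps, previous_eagp, blend=0.6):
    """Equation (5), smoothed: blend the new sum of application
    priorities with the prior value in a 60-40 ratio."""
    new_value = sum(eapp(a["sockets"]) for a in apps)
    return blend * new_value + (1 - blend) * previous_eagp

def agent_allocations(bwcl, agent_priorities):
    """Equation (6): sub-allocate link bandwidth BWCL among agents."""
    total = sum(agent_priorities)
    if total == 0:
        return [bwcl / len(agent_priorities)] * len(agent_priorities)
    return [bwcl * p / total for p in agent_priorities]
```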
  • In the above examples, the activity of individual sockets typically is communicated upstream (e.g., from a SOCK-BWMGR to an overlying APP-BWMGR, to an overlying AG-BWMGR, and to an overlying CL-BWMGR), and has an effect on future allocations and sub-allocations throughout the system. Consumption at a particular socket may affect future allocations to that socket and other sockets on the same application. In addition, because the consumption data is communicated upstream and used in other allocations and sub-allocations, the same socket can potentially affect future allocations to all sockets, applications and agent computers within the system. In certain implementations this feedback effect can greatly increase the efficiency of bandwidth management.
• Those skilled in the art will appreciate that allocations and sub-allocations may be performed separately for transmitted data and received data. In addition, the upstream feedback and downstream sub-allocations may occur at any desired frequency, or at irregular intervals. Within agent computer 22, the sub-allocations typically are performed at regular intervals, though it will often be desirable to re-calculate socket allocations at a greater frequency than the application allocations. As an example, agent allocations may be updated every 2 seconds, application allocations every half second, and socket allocations every 100 milliseconds.
  • The exemplary equations above apply primarily during steady state operation, when all involved sockets are open, have non-zero allocations, and are at least somewhat active (e.g., consuming some bandwidth). Typically, some provision or modification is made for startup conditions, and for conditions where sockets are idle and/or have socket allocations that are negligible or zero. For example, on occasion an agent computer may have launched applications that are not communicating with the network. In this case, the effective application priorities would all be zero because there is no socket activity, leading to an undefined result in equation (4). A straight average may be used to address this condition, in which case available bandwidth is apportioned among the applications equally. Equal apportioning may also be used in the case of undefined results in equations (2) and (6).
  • Referring now to equation (1), the APP-BWMGRs and/or other bandwidth managers may be configured to over-provision bandwidth. This may be done to ensure that truly idle sockets maintain an effective socket priority of zero, and thus do not obtain positive allocations (e.g., through applying the weighted average of equation (2)). Over-provisioning may also be used to provide internal allocations to idle sockets that are becoming active, so as to allow them to achieve a non-zero effective socket priority and thereby obtain a share of the application allocation being sub-allocated by the overlying APP-BWMGR 308.
  • Regardless of the particular methods employed, bandwidth may be apportioned among entities or components at a particular level based on socket status of a given component, or of components underlying that component. Socket status may include the number of sockets open within the domain of a given control module, on a given computing device, or for a given application. Equations (3) and (4) above provide an example of apportioning bandwidth among applications based partly on the number of sockets open at each application. All other things being equal, in these equations an application with more sockets open will have a higher effective priority and will receive a greater share of the allocated bandwidth.
  • From the above, it will be appreciated that socket status may also include assigned priorities of sockets. As discussed at length above, assigned priority may be derived from a nearly limitless array of parameters, and may be set via network policies. The above equations provide several examples of allocation and sub-allocation being performed based on socket priority. All other things being equal, a socket with a higher socket priority, or an application or computer with higher priority sockets, will receive larger allocations in several of the above examples.
• Socket status may also include consumption of past allocations, as should be appreciated from the above exemplary equations. Those equations provide numerous examples of shifting bandwidth allocations away from sockets, applications and computers consuming a lower portion of past allocations, relative to counterpart sockets, applications and computers.
  • Implementation of the above exemplary systems and methods may enable bandwidth resources to be allocated to high priority, critical tasks. Bandwidth may be shifted away from low priority uses, and/or consuming entities that are relatively idle in comparison to their counterparts. The different embodiments above may be implemented to provide a more granular, or finer, level of control over bandwidth usage of a particular component within the system. In particular, the bandwidth manager scheme described above may be configured to control socket and/or transaction level allocation for multiple conversations carried out by an application. For example, a given application may carry out a number of functions having varying levels of priority. The increased granularity described above allows system operators to assign relative bandwidth priorities to the various tasks carried out by a given application.
• Finer control over bandwidth usage may be obtained by providing for dynamic modification of socket priorities after they have been assigned. As discussed above, socket priorities (e.g., socket priorities SP in equations (1) and (3) above) typically are assigned as sockets are opened on the associated computer, and the assigned priority can be derived from virtually any variety of policy parameters. Though the socket priorities may remain static while the socket is open, the agent module may alternatively be configured to dynamically vary the socket priority when certain mission critical network tasks are being performed by the socket. These mission critical tasks may be defined through the policy mechanisms discussed above, using any desirable combination of policy parameters. Upon detection of a predefined task, the assigned priority for the socket is modified (e.g., by overriding the assigned value with a higher priority). This modification typically is employed to ensure that a desired level of service or resources is available for the predefined task (e.g., a desired allocation of bandwidth). Alternatively, the predefined task may be a low priority task, such that the socket priority may be dynamically downgraded, to ensure that bandwidth is available for more important tasks. Any of the above embodiments or method implementations may be adapted to provide for such dynamic modification of priorities.
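• A sketch of such a dynamic override, with a predicate list standing in for whatever policy parameters define the mission-critical (or low-priority) tasks; the matching rules are invented for illustration:

```python
PRIORITY_OVERRIDES = [
    # (predicate over an observed socket transaction, override priority)
    (lambda tx: tx.get("verb") == "GET" and
                tx.get("url", "").endswith("/order-entry"), 95),  # critical
    (lambda tx: tx.get("app") == "nightly_backup", 5),            # downgrade
]

def adjust_socket_priority(sock, transaction):
    """Override the assigned priority while a predefined task is running;
    the change feeds equations (1) and (3) and so propagates upstream."""
    for matches, override in PRIORITY_OVERRIDES:
        if matches(transaction):
            sock["SP"] = override
            return
    sock["SP"] = sock["assigned_SP"]   # revert once no rule matches
```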
  • The granular control at the transaction level (e.g., by the agent module embodiments discussed herein) enables the system to detect the commencement of critical transactions. Upon detection of such a transaction, the system is able to communicate through a chain of upstream components to ensure that a sufficient amount of bandwidth is reserved to complete the transaction within an acceptable amount of time. For example, a dynamic increase of socket priority typically will lead to increased allocations for not only the socket, but also for the application associated with the socket, and for the computing device on which the application is running (e.g., through application of the exemplary equations discussed above). Alternatively, bandwidth may be dynamically decreased for low priority transactions, which may also be communicated upstream to affect allocations at the various tiers.
• For example, the above exemplary systems may be configured so that mission-critical bandwidth priority is assigned to only a specified portion of the data requested from a given web server application. In this example, the network policy could specify that when a browser on a client computer attempts to access a particular web page, socket priorities are overridden to ensure that critical data is provided at a high level of service. Other data (e.g., trivial graphical information) could be assigned lower priority. More particularly, in commonly employed protocols, loading a single web page may involve several "get" commands issued by a client browser application to obtain all of the data for the web page. The agent modules described herein may be configured with a policy that specifies that the agent module is to monitor its associated computer to look for "get" commands seeking predefined high priority portions of data on the web page. Upon detection of such a get command, the socket level priority would be dynamically adjusted for that portion of the web page data.
  • Another example that illustrates the benefit of increased granularity is the use of a multimedia application to deliver audio, video and other data over a wide-area network. Where network congestion is high, it typically will be desirable to assign a higher priority to the audio stream. The increased granularity of the described system allows this prioritization through application of an appropriate network policy, and ensures that the video stream does not significantly degrade delivery of the audio.
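A compact sketch of such a policy, with hypothetical priority weights, shows how a congested link share would be divided so that the audio stream is protected:

```python
# Hypothetical stream weights; the audio stream is favored under congestion.

link_bw = 1_500_000                               # congested WAN share, bits/sec
stream_priority = {"audio": 8, "video": 2, "metadata": 1}

total = sum(stream_priority.values())
alloc = {s: link_bw * p // total for s, p in stream_priority.items()}
# audio keeps ~8/11 of the link, so video bursts cannot starve it
```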
  • The systems described herein may further be provided with a layered communication architecture to facilitate communication between components. Typical implementations employ a communication library that presents a communication abstraction layer to the various communicating components. For a given communicating component, the abstraction layer is architecturally configured to hide the details of the communications mechanism and/or the location of the other communicating components. This makes the underlying transport mechanism transparent to the components: the transport mechanism may be TCP, UDP or any other transport protocol, and the protocol may be replaced with a new one without affecting the components or their operation.
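A minimal sketch of such an abstraction layer, with hypothetical class and host names: components are written against a Transport interface, so TCP or UDP (or a future protocol) can be substituted without modifying the components themselves.

```python
# Sketch of a communication abstraction layer; names are hypothetical.

import socket
from abc import ABC, abstractmethod

class Transport(ABC):
    @abstractmethod
    def send(self, data: bytes, addr): ...

class TcpTransport(Transport):
    def send(self, data, addr):
        with socket.create_connection(addr) as s:
            s.sendall(data)

class UdpTransport(Transport):
    def send(self, data, addr):
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.sendto(data, addr)

def report_status(transport: Transport, payload: bytes):
    # A communicating component: unaware of which protocol carries its data.
    transport.send(payload, ("control.example.net", 9000))
```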
  • The components communicate with each other by “passing objects” at the component interface. The communication layer may convert these objects into binary data, serialize them to XML format, compress them, and/or encrypt the information to be transmitted over any medium. These operations typically are performed so that they are transparent to the communicating components. When applicable, the communication layer also increases network efficiency by multiplexing several communication streams onto a single connection.
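The sketch below illustrates one possible object pipeline of this kind (names hypothetical, not the patent's specific wire format): an object is serialized to XML, compressed, and framed with a stream identifier so several conversations can share one connection. An encryption stage, if enabled, would wrap the compressed bytes; it is omitted here.

```python
# Hypothetical object pipeline: serialize -> compress -> frame for multiplexing.

import struct
import zlib
import xml.etree.ElementTree as ET

def encode(stream_id: int, obj: dict) -> bytes:
    root = ET.Element("object")
    for key, value in obj.items():
        ET.SubElement(root, key).text = str(value)
    body = zlib.compress(ET.tostring(root))        # serialize, then compress
    return struct.pack("!HI", stream_id, len(body)) + body   # mux frame header

def decode(frame: bytes):
    stream_id, length = struct.unpack("!HI", frame[:6])
    root = ET.fromstring(zlib.decompress(frame[6:6 + length]))
    return stream_id, {el.tag: el.text for el in root}

frame = encode(7, {"socket": "42", "priority": "9"})
assert decode(frame) == (7, {"socket": "42", "priority": "9"})
```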
  • While the present embodiments and method implementations have been particularly shown and described, those skilled in the art will understand that many variations may be made therein without departing from the spirit and scope defined in the following claims. The description should be understood to include all novel and non-obvious combinations of elements described herein, and claims may be presented in this or a later application to any novel and non-obvious combination of these elements. Where the claims recite “a” or “a first” element or the equivalent thereof, such claims should be understood to include incorporation of one or more such elements, neither requiring nor excluding two or more such elements.

Claims (20)

1. An agent module configured to be loaded onto a computer to manage network bandwidth usage by such computer, the agent module comprising:
an agent bandwidth manager; and
an application bandwidth manager, the agent module being configured to launch one such application bandwidth manager for each of multiple applications running on the computer,
where the agent bandwidth manager and application bandwidth managers are configured to dynamically interact so as to sub-allocate network bandwidth available to the computer into individualized application allocations of network bandwidth for each of the applications, and
where each application allocation depends on a socket status of the corresponding application, relative to other applications which are to receive a portion of the network bandwidth available to the computer.
2. The agent module of claim 1, where each application allocation of network bandwidth depends on how many sockets are open for the corresponding application, relative to other applications which are to receive a portion of the network bandwidth available to the computer.
3. The agent module of claim 1, where each application allocation of network bandwidth depends on assigned priorities for sockets of the corresponding application, relative to other applications which are to receive a portion of the network bandwidth available to the computer.
4. The agent module of claim 1, where each application allocation of network bandwidth depends on consumption of prior allocations for the corresponding application, relative to other applications which are to receive a portion of the network bandwidth available to the computer.
5. The agent module of claim 1, where each application allocation of network bandwidth depends on how many sockets are open for the corresponding application, assigned priorities for sockets of the corresponding application, and consumption of prior allocations for the corresponding application, relative to other applications which are to receive a portion of the network bandwidth available to the computer.
6. The agent module of claim 1, further comprising a socket bandwidth manager, the agent module being configured to launch one such socket bandwidth manager for each of multiple sockets open on the computer.
7. The agent module of claim 6, where the agent module is configured so that, when one of the applications running on the computer has multiple associated sockets, the application bandwidth manager associated with such application dynamically interacts with the socket bandwidth managers of those sockets so as to sub-allocate the application allocation of network bandwidth into a socket allocation of network bandwidth for each of the sockets.
8. The agent module of claim 7, where the application bandwidth manager and socket bandwidth managers are configured to dynamically and repeatedly update the socket allocations of network bandwidth in response to changing conditions at the sockets.
9. The agent module of claim 1, where the agent bandwidth manager and application bandwidth managers are configured to dynamically and repeatedly update the individualized application allocations of network bandwidth in response to changing conditions at the applications.
10. The agent module of claim 1, where the agent module is adapted to:
assign a socket priority to a socket opened on the computer on which the agent module is loaded;
provide a socket allocation of network bandwidth to the socket based on the socket priority; and
detect whether the socket is being used to perform a predefined network transaction and, after such detection, modify the socket priority for such socket and update the socket allocation of network bandwidth to account for such modification.
11. An agent module configured to be loaded onto a computer to manage network bandwidth usage by such computer, the agent module comprising:
an agent bandwidth manager; and
a socket bandwidth manager, the agent module being configured to launch one such socket bandwidth manager for each of multiple sockets open on the computer, where the agent bandwidth manager and the socket bandwidth managers are configured to interact so as to divide network bandwidth available to the computer into individualized socket allocations of network bandwidth for each of the sockets, and
where the agent bandwidth manager and socket bandwidth managers are configured to dynamically and repeatedly update the socket allocations based on consumption activity of the sockets relative to each other.
12. The agent module of claim 11 further comprising:
an application bandwidth manager, the agent module being configured to launch one such application bandwidth manager for each of multiple applications running on the computer, where the agent bandwidth manager and application bandwidth managers are configured to interact so as to divide network bandwidth available to the computer into individualized application allocations of network bandwidth for each of the applications;
where, for an application with multiple sockets, the corresponding application bandwidth manager and socket bandwidth managers are configured to interact so as to sub-allocate the application allocation of network bandwidth into a socket allocation of network bandwidth for each of the sockets.
13. A system for dynamically managing bandwidth consumption on a network link comprising:
an agent computer;
a software-based agent module configured to run on the agent computer, the agent module being further configured to:
assign a socket priority to a socket opened on the agent computer;
provide a socket allocation of network bandwidth to the socket based on the socket priority; and
detect whether the socket is being used to perform a predefined network transaction and, after such detection, modify the socket priority for such socket and update the socket allocation of network bandwidth to account for such modification.
14. The system of claim 13 wherein the agent module is further configured to:
assign a socket priority to each of a plurality of sockets as those sockets are opened on the agent computer;
provide a socket allocation of network bandwidth for each of the plurality of sockets, where the socket allocation for each of the plurality of sockets depends on the socket priority of such socket relative to the socket priorities of the other sockets; and
detect when one of the sockets is being used to perform a predefined network transaction and, after such detection, modify the socket priority for such socket and update the socket allocations of network bandwidth to account for such modification.
15. The system of claim 14, where the predefined network transaction pertains to only a portion of data being requested by the agent computer from a network resource, such that the socket priority is modified while the socket is being used to obtain such portion of data, and is unmodified while the socket is being used to obtain other portions of the data being requested from the network resource.
16. The system of claim 13, where the socket priority is restored to an unmodified value after completion of the predefined network transaction.
17. The system of claim 13, wherein the network link interconnects a plurality of computers, including a control computer and multiple agent computers, where each agent computer is configured to run one or more applications employing one or more sockets, and wherein the system further comprises:
plural agent modules, each being adapted to run on one of the agent computers and monitor and repeatedly report a socket status of its associated agent computer;
a control module adapted to run on the control computer, where the control module is adapted to dynamically allocate bandwidth on the network link among the agent computers based on relative socket statuses of the agent computers as reported by the agent modules.
18. A method of dynamically managing bandwidth in a distributed network system with a network link interconnecting a plurality of computers, comprising:
assigning a socket priority to a socket, where the socket is associated with an application running on one of the plurality of computers;
providing a socket allocation to the socket, where the socket allocation defines network bandwidth usable by the socket;
monitoring network transactions performed using the socket to identify whether the socket is being used to perform a predefined network transaction; and
dynamically modifying the socket priority after detecting that the socket is being used to perform the predefined network transaction, and updating the socket allocation to account for such modification of the socket priority.
19. The method of claim 18, further comprising:
providing an application allocation to the application, where the application allocation defines network bandwidth usable by the application and is dependent upon the socket allocation; and
dynamically modifying the application allocation to account for modification of the socket priority.
20. The method of claim 18, further comprising:
providing a computer allocation to the computer on which the application is running, where the computer allocation defines network bandwidth usable by such computer and is dependent upon the socket allocation; and
dynamically modifying the computer allocation to account for modification of the socket priority.
US11/894,526 2000-03-21 2007-08-20 Software, systems and methods for managing a distributed network Abandoned US20070294410A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/894,526 US20070294410A1 (en) 2000-03-21 2007-08-20 Software, systems and methods for managing a distributed network

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US09/532,101 US6671724B1 (en) 2000-03-21 2000-03-21 Software, systems and methods for managing a distributed network
US35773102P 2002-02-18 2002-02-18
US10/369,259 US7260635B2 (en) 2000-03-21 2003-02-18 Software, systems and methods for managing a distributed network
US11/894,526 US20070294410A1 (en) 2000-03-21 2007-08-20 Software, systems and methods for managing a distributed network

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/369,259 Continuation US7260635B2 (en) 2000-03-21 2003-02-18 Software, systems and methods for managing a distributed network

Publications (1)

Publication Number Publication Date
US20070294410A1 true US20070294410A1 (en) 2007-12-20

Family

ID=38862812

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/369,259 Expired - Fee Related US7260635B2 (en) 2000-03-21 2003-02-18 Software, systems and methods for managing a distributed network
US11/894,526 Abandoned US20070294410A1 (en) 2000-03-21 2007-08-20 Software, systems and methods for managing a distributed network

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/369,259 Expired - Fee Related US7260635B2 (en) 2000-03-21 2003-02-18 Software, systems and methods for managing a distributed network

Country Status (1)

Country Link
US (2) US7260635B2 (en)

Cited By (105)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040107212A1 (en) * 2002-12-02 2004-06-03 Michael Friedrich Centralized access and management for multiple, disparate data repositories
US20060045096A1 (en) * 2004-09-01 2006-03-02 International Business Machines Corporation Method, system, and computer product for controlling input message priority
US20060072457A1 (en) * 2004-10-06 2006-04-06 Netpriva Pty Ltd. Peer signaling protocol and system for decentralized traffic management
US20080056165A1 (en) * 2000-11-08 2008-03-06 Yevgeniy Petrovykh Method And Apparatus For Anticipating And Planning Communication-Center Resources Based On Evaluation Of Events Waiting In A Communication Center Master Queue
US20080312151A1 (en) * 2007-02-08 2008-12-18 Aspenbio Pharma, Inc. Compositions and methods including expression and bioactivity of bovine follicle stimulating hormone
US20090003229A1 (en) * 2007-06-30 2009-01-01 Kai Siang Loh Adaptive Bandwidth Management Systems And Methods
US20090248875A1 (en) * 2008-03-28 2009-10-01 Seiko Epson Corporation Socket Management Device and Socket Management Method
US20090252172A1 (en) * 2008-04-03 2009-10-08 Robert Lawrence Hare System and method for scheduling reservation requests for a communication network
US20090279568A1 (en) * 2003-02-26 2009-11-12 Xue Li Class-based bandwidth allocation and admission control for virtual private networks with differentiated service
US20090318235A1 (en) * 2006-07-26 2009-12-24 Hiroyuki Ashida Game system, game terminal therefor, and server device therefor
US20100029388A1 (en) * 2006-07-26 2010-02-04 Konami Digital Entertainment Co. Ltd Game system, game terminal therefor, and server device therefor
US20100095021A1 (en) * 2008-10-08 2010-04-15 Samuels Allen R Systems and methods for allocating bandwidth by an intermediary for flow control
US20100138565A1 (en) * 2008-12-02 2010-06-03 At&T Mobility Ii Llc Automatic qos determination with i/o activity logic
US20100134423A1 (en) * 2008-12-02 2010-06-03 At&T Mobility Ii Llc Automatic soft key adaptation with left-right hand edge sensing
US20100134424A1 (en) * 2008-12-02 2010-06-03 At&T Mobility Ii Llc Edge hand and finger presence and motion sensor
US20100138680A1 (en) * 2008-12-02 2010-06-03 At&T Mobility Ii Llc Automatic display and voice command activation with hand edge sensing
US20100191612A1 (en) * 2009-01-28 2010-07-29 Gregory G. Raleigh Verifiable device assisted service usage monitoring with reporting, synchronization, and notification
US20100278327A1 (en) * 2009-05-04 2010-11-04 Avaya, Inc. Efficient and cost-effective distribution call admission control
US20110131331A1 (en) * 2009-12-02 2011-06-02 Avaya Inc. Alternative bandwidth management algorithm
US20110202646A1 (en) * 2010-02-14 2011-08-18 Bhatia Randeep S Policy controlled traffic offload via content smart-loading
US20110209194A1 (en) * 2010-02-22 2011-08-25 Avaya Inc. Node-based policy-enforcement across mixed media, mixed-communications modalities and extensible to cloud computing such as soa
US8307111B1 (en) * 2010-04-13 2012-11-06 Qlogic, Corporation Systems and methods for bandwidth scavenging among a plurality of applications in a network
US8620989B2 (en) 2005-12-01 2013-12-31 Firestar Software, Inc. System and method for exchanging information among exchange applications
US20140108496A1 (en) * 2012-10-11 2014-04-17 Browsium, Inc. Systems and methods for browser redirection and navigation control
US8718261B2 (en) 2011-07-21 2014-05-06 Avaya Inc. Efficient and cost-effective distributed call admission control
US8831658B2 (en) 2010-11-05 2014-09-09 Qualcomm Incorporated Controlling application access to a network
US8838086B2 (en) 2011-08-29 2014-09-16 Qualcomm Incorporated Systems and methods for management of background application events
US8875277B2 (en) * 2012-06-04 2014-10-28 Google Inc. Forcing all mobile network traffic over a secure tunnel connection
US9008673B1 (en) * 2010-07-02 2015-04-14 Cellco Partnership Data communication device with individual application bandwidth reporting and control
US9031087B2 (en) 2000-11-08 2015-05-12 Genesys Telecommunications Laboratories, Inc. Method and apparatus for optimizing response time to events in queue
US9072092B1 (en) * 2013-04-05 2015-06-30 Time Warner Cable Enterprises Llc Resource allocation in a wireless mesh network environment
US9094311B2 (en) 2009-01-28 2015-07-28 Headwater Partners I, Llc Techniques for attribution of mobile device data traffic to initiating end-user application
US9106592B1 (en) * 2008-05-18 2015-08-11 Western Digital Technologies, Inc. Controller and method for controlling a buffered data transfer device
US9137701B2 (en) 2009-01-28 2015-09-15 Headwater Partners I Llc Wireless end-user device with differentiated network access for background and foreground device applications
US9154826B2 (en) 2011-04-06 2015-10-06 Headwater Partners Ii Llc Distributing content and service launch objects to mobile devices
US9178965B2 (en) 2011-03-18 2015-11-03 Qualcomm Incorporated Systems and methods for synchronization of application communications
US9198042B2 (en) 2009-01-28 2015-11-24 Headwater Partners I Llc Security techniques for device assisted services
US9204282B2 (en) 2009-01-28 2015-12-01 Headwater Partners I Llc Enhanced roaming services and converged carrier networks with device assisted services and a proxy
US9225797B2 (en) 2009-01-28 2015-12-29 Headwater Partners I Llc System for providing an adaptive wireless ambient service to a mobile device
US9247450B2 (en) 2009-01-28 2016-01-26 Headwater Partners I Llc Quality of service for device assisted services
US9253663B2 (en) 2009-01-28 2016-02-02 Headwater Partners I Llc Controlling mobile device communications on a roaming network based on device state
US9264868B2 (en) * 2011-01-19 2016-02-16 Qualcomm Incorporated Management of network access requests
US9351193B2 (en) 2009-01-28 2016-05-24 Headwater Partners I Llc Intermediate networking devices
US9386165B2 (en) 2009-01-28 2016-07-05 Headwater Partners I Llc System and method for providing user notifications
US9392462B2 (en) 2009-01-28 2016-07-12 Headwater Partners I Llc Mobile end-user device with agent limiting wireless data communication for specified background applications based on a stored policy
USRE46174E1 (en) 2001-01-18 2016-10-04 Genesys Telecommunications Laboratories, Inc. Method and apparatus for intelligent routing of instant messaging presence protocol (IMPP) events among a group of customer service representatives
US9491199B2 (en) 2009-01-28 2016-11-08 Headwater Partners I Llc Security, fraud detection, and fraud mitigation in device-assisted services systems
US9532261B2 (en) 2009-01-28 2016-12-27 Headwater Partners I Llc System and method for wireless network offloading
US9557889B2 (en) 2009-01-28 2017-01-31 Headwater Partners I Llc Service plan design, user interfaces, application programming interfaces, and device management
US9565543B2 (en) 2009-01-28 2017-02-07 Headwater Partners I Llc Device group partitions and settlement platform
US9565707B2 (en) 2009-01-28 2017-02-07 Headwater Partners I Llc Wireless end-user device with wireless data attribution to multiple personas
US9572019B2 (en) 2009-01-28 2017-02-14 Headwater Partners LLC Service selection set published to device agent with on-device service selection
US9571952B2 (en) 2011-04-22 2017-02-14 Qualcomm Incorporated Offloading of data to wireless local area network
US9571559B2 (en) 2009-01-28 2017-02-14 Headwater Partners I Llc Enhanced curfew and protection associated with a device group
US9578182B2 (en) 2009-01-28 2017-02-21 Headwater Partners I Llc Mobile device and service management
US9591474B2 (en) 2009-01-28 2017-03-07 Headwater Partners I Llc Adapting network policies based on device service processor configuration
US9603085B2 (en) 2010-02-16 2017-03-21 Qualcomm Incorporated Methods and apparatus providing intelligent radio selection for legacy and non-legacy applications
US9609510B2 (en) 2009-01-28 2017-03-28 Headwater Research Llc Automated credential porting for mobile devices
US9647918B2 (en) 2009-01-28 2017-05-09 Headwater Research Llc Mobile device and method attributing media services network usage to requesting application
US9706061B2 (en) 2009-01-28 2017-07-11 Headwater Partners I Llc Service design center for device assisted services
CN107071698A (en) * 2013-07-29 2017-08-18 奇卡有限公司 Data bandwidth management system and method
US9755842B2 (en) 2009-01-28 2017-09-05 Headwater Research Llc Managing service user discovery and service launch object placement on a device
US9769207B2 (en) 2009-01-28 2017-09-19 Headwater Research Llc Wireless network service interfaces
US9819808B2 (en) 2009-01-28 2017-11-14 Headwater Research Llc Hierarchical service policies for creating service usage data records for a wireless end-user device
US9858559B2 (en) 2009-01-28 2018-01-02 Headwater Research Llc Network service plan design
USRE46776E1 (en) 2002-08-27 2018-04-03 Genesys Telecommunications Laboratories, Inc. Method and apparatus for optimizing response time to events in queue
US9955332B2 (en) 2009-01-28 2018-04-24 Headwater Research Llc Method for child wireless device activation to subscriber account of a master wireless device
US9954975B2 (en) 2009-01-28 2018-04-24 Headwater Research Llc Enhanced curfew and protection associated with a device group
US9980146B2 (en) 2009-01-28 2018-05-22 Headwater Research Llc Communications device with secure data path processing agents
US10044633B2 (en) * 2013-11-27 2018-08-07 Thomson Licensing Method for distributing available bandwidth of a network amongst ongoing traffic sessions run by devices of the network, corresponding device
US10057775B2 (en) 2009-01-28 2018-08-21 Headwater Research Llc Virtualized policy and charging system
US10064055B2 (en) 2009-01-28 2018-08-28 Headwater Research Llc Security, fraud detection, and fraud mitigation in device-assisted services systems
US10070305B2 (en) 2009-01-28 2018-09-04 Headwater Research Llc Device assisted services install
US10200541B2 (en) 2009-01-28 2019-02-05 Headwater Research Llc Wireless end-user device with divided user space/kernel space traffic policy system
US10237757B2 (en) 2009-01-28 2019-03-19 Headwater Research Llc System and method for wireless network offloading
US10248996B2 (en) 2009-01-28 2019-04-02 Headwater Research Llc Method for operating a wireless end-user device mobile payment agent
US10264138B2 (en) 2009-01-28 2019-04-16 Headwater Research Llc Mobile device and service management
US10326800B2 (en) 2009-01-28 2019-06-18 Headwater Research Llc Wireless network service interfaces
US10341208B2 (en) * 2013-09-26 2019-07-02 Taiwan Semiconductor Manufacturing Co., Ltd. File block placement in a distributed network
US10492102B2 (en) 2009-01-28 2019-11-26 Headwater Research Llc Intermediate networking devices
US10715342B2 (en) 2009-01-28 2020-07-14 Headwater Research Llc Managing service user discovery and service launch object placement on a device
US10779177B2 (en) 2009-01-28 2020-09-15 Headwater Research Llc Device group partitions and settlement platform
US10783581B2 (en) 2009-01-28 2020-09-22 Headwater Research Llc Wireless end-user device providing ambient or sponsored services
US10798252B2 (en) 2009-01-28 2020-10-06 Headwater Research Llc System and method for providing user notifications
US10841839B2 (en) 2009-01-28 2020-11-17 Headwater Research Llc Security, fraud detection, and fraud mitigation in device-assisted services systems
US11077366B2 (en) 2006-04-12 2021-08-03 Winview, Inc. Methodology for equalizing systemic latencies in television reception in connection with games of skill played in connection with live television programming
US11082746B2 (en) * 2006-04-12 2021-08-03 Winview, Inc. Synchronized gaming and programming
US11148050B2 (en) 2005-10-03 2021-10-19 Winview, Inc. Cellular phone games based upon television archives
US11154775B2 (en) 2005-10-03 2021-10-26 Winview, Inc. Synchronized gaming and programming
US11218854B2 (en) 2009-01-28 2022-01-04 Headwater Research Llc Service plan design, user interfaces, application programming interfaces, and device management
US11232033B2 (en) 2019-08-02 2022-01-25 Apple Inc. Application aware SoC memory cache partitioning
US11266896B2 (en) 2006-01-10 2022-03-08 Winview, Inc. Method of and system for conducting multiple contests of skill with a single performance
US11298621B2 (en) 2006-01-10 2022-04-12 Winview, Inc. Method of and system for conducting multiple contests of skill with a single performance
US11308765B2 (en) 2018-10-08 2022-04-19 Winview, Inc. Method and systems for reducing risk in setting odds for single fixed in-play propositions utilizing real time input
US11358064B2 (en) 2006-01-10 2022-06-14 Winview, Inc. Method of and system for conducting multiple contests of skill with a single performance
US20220237039A1 (en) * 2019-05-28 2022-07-28 Micron Technology, Inc. Throttle Memory as a Service based on Connectivity Bandwidth
US11400379B2 (en) 2004-06-28 2022-08-02 Winview, Inc. Methods and apparatus for distributed gaming over a mobile device
US11412366B2 (en) 2009-01-28 2022-08-09 Headwater Research Llc Enhanced roaming services and converged carrier networks with device assisted services and a proxy
US11451883B2 (en) 2005-06-20 2022-09-20 Winview, Inc. Method of and system for managing client resources and assets for activities on computing devices
US11551529B2 (en) 2016-07-20 2023-01-10 Winview, Inc. Method of generating separate contests of skill or chance from two independent events
US11601727B2 (en) 2008-11-10 2023-03-07 Winview, Inc. Interactive advertising system
US11654368B2 (en) 2004-06-28 2023-05-23 Winview, Inc. Methods and apparatus for distributed gaming over a mobile device
US11786813B2 (en) 2004-07-14 2023-10-17 Winview, Inc. Game of skill played by remote participants utilizing wireless devices in connection with a common game event
US11838851B1 (en) * 2014-07-15 2023-12-05 F5, Inc. Methods for managing L7 traffic classification and devices thereof
US11954042B2 (en) 2019-05-28 2024-04-09 Micron Technology, Inc. Distributed computing based on memory as a service

Families Citing this family (160)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6792455B1 (en) * 2000-04-28 2004-09-14 Microsoft Corporation System and method for implementing polling agents in a client management tool
US7216179B2 (en) * 2000-08-16 2007-05-08 Semandex Networks Inc. High-performance addressing and routing of data packets with semantically descriptive labels in a computer network
WO2003014927A2 (en) * 2001-08-08 2003-02-20 Trivium Systems Inc. Scalable messaging platform for the integration of business software components
EP1436719A1 (en) * 2001-10-15 2004-07-14 Semandex Networks Inc. Dynamic content based multicast routing in mobile networks
GB0126649D0 (en) * 2001-11-06 2002-01-02 Mitel Knowledge Corp System and method for facilitating the selection of electronic services using infrared and a network address identification
JP3852752B2 (en) * 2001-11-16 2006-12-06 パイオニア株式会社 Apparatus and method for bandwidth control of communication information
US7835365B2 (en) * 2002-09-26 2010-11-16 Sharp Laboratories Of America, Inc. Connection management in a centralized communication system
US7961736B2 (en) * 2002-09-26 2011-06-14 Sharp Laboratories Of America, Inc. Convergence and classification of data packets in a centralized communication system
WO2004088662A1 (en) * 2003-03-31 2004-10-14 Samsung Electronics Co., Ltd. Apparatus for use with information storage medium containing enhanced av (enav) buffer configuration information, reproducing method thereof and method for managing the buffer
JP4271478B2 (en) * 2003-04-08 2009-06-03 パナソニック株式会社 Relay device and server
US7581224B2 (en) * 2003-07-10 2009-08-25 Hewlett-Packard Development Company, L.P. Systems and methods for monitoring resource utilization and application performance
US9525566B2 (en) * 2003-07-31 2016-12-20 Cloudsoft Corporation Limited Self-managed mediated information flow
US7331019B2 (en) * 2003-08-02 2008-02-12 Pathway Technologies, Inc. System and method for real-time configurable monitoring and management of task performance systems
US7853640B2 (en) * 2003-09-05 2010-12-14 Texas Instruments Incorporated Key distribution
US20050128995A1 (en) * 2003-09-29 2005-06-16 Ott Maximilian A. Method and apparatus for using wireless hotspots and semantic routing to provide broadband mobile serveices
US20050100000A1 (en) * 2003-11-07 2005-05-12 Foursticks Pty Ltd Method and system for windows based traffic management
US8726278B1 (en) 2004-07-21 2014-05-13 The Mathworks, Inc. Methods and system for registering callbacks and distributing tasks to technical computing works
US7502745B1 (en) * 2004-07-21 2009-03-10 The Mathworks, Inc. Interfaces to a job manager in distributed computing environments
US8332483B2 (en) * 2003-12-15 2012-12-11 International Business Machines Corporation Apparatus, system, and method for autonomic control of grid system resources
US7680933B2 (en) * 2003-12-15 2010-03-16 International Business Machines Corporation Apparatus, system, and method for on-demand control of grid system resources
US7739346B1 (en) 2004-01-20 2010-06-15 Marvell International Ltd. Method and apparatus for specification of transaction failures in a network management protocol
US7706782B1 (en) 2004-03-01 2010-04-27 Adobe Systems Incorporated System and method for developing information for a wireless information system
US7478158B1 (en) * 2004-03-01 2009-01-13 Adobe Systems Incorporated Bandwidth management system
US7822428B1 (en) 2004-03-01 2010-10-26 Adobe Systems Incorporated Mobile rich media information system
DE102004011201B4 (en) * 2004-03-04 2006-10-12 Siemens Ag Method for managing and monitoring the operation of multiple distributed in at least one communication network integrated hardware and / or software systems and system for performing the method
US8046464B2 (en) * 2004-03-10 2011-10-25 The Boeing Company Quality of service resource management apparatus and method for middleware services
US8966498B2 (en) 2008-01-24 2015-02-24 Oracle International Corporation Integrating operational and business support systems with a service delivery platform
US8321498B2 (en) 2005-03-01 2012-11-27 Oracle International Corporation Policy interface description framework
US9038082B2 (en) 2004-05-28 2015-05-19 Oracle International Corporation Resource abstraction via enabler and metadata
US9245236B2 (en) 2006-02-16 2016-01-26 Oracle International Corporation Factorization of concerns to build a SDP (service delivery platform)
US8458703B2 (en) * 2008-06-26 2013-06-04 Oracle International Corporation Application requesting management function based on metadata for managing enabler or dependency
US9565297B2 (en) 2004-05-28 2017-02-07 Oracle International Corporation True convergence with end to end identity management
US20050289531A1 (en) * 2004-06-08 2005-12-29 Daniel Illowsky Device interoperability tool set and method for processing interoperability application specifications into interoperable application packages
US20060029106A1 (en) * 2004-06-14 2006-02-09 Semandex Networks, Inc. System and method for providing content-based instant messaging
FI20040888A0 (en) * 2004-06-28 2004-06-28 Nokia Corp Management of services in a packet switching data network
US7908313B2 (en) * 2004-07-21 2011-03-15 The Mathworks, Inc. Instrument-based distributed computing systems
US7490153B1 (en) * 2004-07-23 2009-02-10 International Business Machines Corporation Smart nodes for providing interoperability among a plurality of web services in a chain and dynamically orchestrating associated processes
US7546308B1 (en) * 2004-09-17 2009-06-09 Symantec Operating Corporation Model and method of an n-tier quality-of-service (QoS)
FI20041528A0 (en) * 2004-11-26 2004-11-26 Nokia Corp Contextual profile for data communication
NL1028507C2 (en) * 2005-03-10 2006-09-12 W W T S World Wide Technical S Method for dynamically connecting a computer, an update server, software and a system.
US7558858B1 (en) * 2005-08-31 2009-07-07 At&T Intellectual Property Ii, L.P. High availability infrastructure with active-active designs
US8248965B2 (en) * 2005-11-03 2012-08-21 Motorola Solutions, Inc. Method and apparatus regarding use of a service convergence fabric
US7623548B2 (en) * 2005-12-22 2009-11-24 At&T Intellectual Property, I,L.P. Methods, systems, and computer program products for managing access resources in an internet protocol network
US8149771B2 (en) * 2006-01-31 2012-04-03 Roundbox, Inc. Reliable event broadcaster with multiplexing and bandwidth control functions
US8311048B2 (en) 2008-05-09 2012-11-13 Roundbox, Inc. Datacasting system with intermittent listener capability
US20070220002A1 (en) * 2006-03-15 2007-09-20 Musicnet Dynamic server configuration for managing high volume traffic
KR101225374B1 (en) * 2006-06-09 2013-01-22 삼성전자주식회사 Apparatus and method for device management in mobile communication terminal
US8914493B2 (en) 2008-03-10 2014-12-16 Oracle International Corporation Presence-based event driven architecture
US8289965B2 (en) 2006-10-19 2012-10-16 Embarq Holdings Company, Llc System and method for establishing a communications session with an end-user based on the state of a network connection
US8194643B2 (en) 2006-10-19 2012-06-05 Embarq Holdings Company, Llc System and method for monitoring the connection of an end-user to a remote network
US8488447B2 (en) 2006-06-30 2013-07-16 Centurylink Intellectual Property Llc System and method for adjusting code speed in a transmission path during call set-up due to reduced transmission performance
US9094257B2 (en) 2006-06-30 2015-07-28 Centurylink Intellectual Property Llc System and method for selecting a content delivery network
US8000318B2 (en) 2006-06-30 2011-08-16 Embarq Holdings Company, Llc System and method for call routing based on transmission performance of a packet network
US8717911B2 (en) 2006-06-30 2014-05-06 Centurylink Intellectual Property Llc System and method for collecting network performance information
US8184549B2 (en) 2006-06-30 2012-05-22 Embarq Holdings Company, LLP System and method for selecting network egress
US7948909B2 (en) 2006-06-30 2011-05-24 Embarq Holdings Company, Llc System and method for resetting counters counting network performance information at network communications devices on a packet network
US7805529B2 (en) * 2006-07-14 2010-09-28 International Business Machines Corporation Method and system for dynamically changing user session behavior based on user and/or group classification in response to application server demand
US8743703B2 (en) 2006-08-22 2014-06-03 Centurylink Intellectual Property Llc System and method for tracking application resource usage
US9479341B2 (en) 2006-08-22 2016-10-25 Centurylink Intellectual Property Llc System and method for initiating diagnostics on a packet network node
US7684332B2 (en) 2006-08-22 2010-03-23 Embarq Holdings Company, Llc System and method for adjusting the window size of a TCP packet through network elements
US8144587B2 (en) 2006-08-22 2012-03-27 Embarq Holdings Company, Llc System and method for load balancing network resources using a connection admission control engine
US8619600B2 (en) 2006-08-22 2013-12-31 Centurylink Intellectual Property Llc System and method for establishing calls over a call path having best path metrics
US7940735B2 (en) 2006-08-22 2011-05-10 Embarq Holdings Company, Llc System and method for selecting an access point
US8064391B2 (en) 2006-08-22 2011-11-22 Embarq Holdings Company, Llc System and method for monitoring and optimizing network performance to a wireless device
US8015294B2 (en) 2006-08-22 2011-09-06 Embarq Holdings Company, LP Pin-hole firewall for communicating data packets on a packet network
US8223655B2 (en) 2006-08-22 2012-07-17 Embarq Holdings Company, Llc System and method for provisioning resources of a packet network based on collected network performance information
US8274905B2 (en) 2006-08-22 2012-09-25 Embarq Holdings Company, Llc System and method for displaying a graph representative of network performance over a time period
US8199653B2 (en) 2006-08-22 2012-06-12 Embarq Holdings Company, Llc System and method for communicating network performance information over a packet network
US8189468B2 (en) 2006-10-25 2012-05-29 Embarq Holdings, Company, LLC System and method for regulating messages between networks
US8407765B2 (en) 2006-08-22 2013-03-26 Centurylink Intellectual Property Llc System and method for restricting access to network performance information tables
US8549405B2 (en) 2006-08-22 2013-10-01 Centurylink Intellectual Property Llc System and method for displaying a graphical representation of a network to identify nodes and node segments on the network that are not operating normally
US8224255B2 (en) 2006-08-22 2012-07-17 Embarq Holdings Company, Llc System and method for managing radio frequency windows
US7808918B2 (en) 2006-08-22 2010-10-05 Embarq Holdings Company, Llc System and method for dynamically shaping network traffic
US8576722B2 (en) 2006-08-22 2013-11-05 Centurylink Intellectual Property Llc System and method for modifying connectivity fault management packets
US8238253B2 (en) 2006-08-22 2012-08-07 Embarq Holdings Company, Llc System and method for monitoring interlayer devices and optimizing network performance
US8098579B2 (en) 2006-08-22 2012-01-17 Embarq Holdings Company, LP System and method for adjusting the window size of a TCP packet through remote network elements
US8223654B2 (en) 2006-08-22 2012-07-17 Embarq Holdings Company, Llc Application-specific integrated circuit for monitoring and optimizing interlayer network performance
US8228791B2 (en) 2006-08-22 2012-07-24 Embarq Holdings Company, Llc System and method for routing communications between packet networks based on intercarrier agreements
US8537695B2 (en) 2006-08-22 2013-09-17 Centurylink Intellectual Property Llc System and method for establishing a call being received by a trunk on a packet network
US8125897B2 (en) 2006-08-22 2012-02-28 Embarq Holdings Company Lp System and method for monitoring and optimizing network performance with user datagram protocol network performance information packets
US8307065B2 (en) 2006-08-22 2012-11-06 Centurylink Intellectual Property Llc System and method for remotely controlling network operators
US8040811B2 (en) 2006-08-22 2011-10-18 Embarq Holdings Company, Llc System and method for collecting and managing network performance information
US8531954B2 (en) 2006-08-22 2013-09-10 Centurylink Intellectual Property Llc System and method for handling reservation requests with a connection admission control engine
WO2008024387A2 (en) 2006-08-22 2008-02-28 Embarq Holdings Company Llc System and method for synchronizing counters on an asynchronous packet communications network
US8130793B2 (en) 2006-08-22 2012-03-06 Embarq Holdings Company, Llc System and method for enabling reciprocal billing for different types of communications over a packet network
US8144586B2 (en) 2006-08-22 2012-03-27 Embarq Holdings Company, Llc System and method for controlling network bandwidth with a connection admission control engine
US8750158B2 (en) 2006-08-22 2014-06-10 Centurylink Intellectual Property Llc System and method for differentiated billing
US8194555B2 (en) 2006-08-22 2012-06-05 Embarq Holdings Company, Llc System and method for using distributed network performance information tables to manage network communications
US7843831B2 (en) 2006-08-22 2010-11-30 Embarq Holdings Company Llc System and method for routing data on a packet network
US8107366B2 (en) 2006-08-22 2012-01-31 Embarq Holdings Company, LP System and method for using centralized network performance tables to manage network communications
US8984579B2 (en) * 2006-09-19 2015-03-17 The Innovation Science Fund I, LLC Evaluation systems and methods for coordinating software agents
US8601530B2 (en) * 2006-09-19 2013-12-03 The Invention Science Fund I, Llc Evaluation systems and methods for coordinating software agents
US8607336B2 (en) * 2006-09-19 2013-12-10 The Invention Science Fund I, Llc Evaluation systems and methods for coordinating software agents
US8627402B2 (en) 2006-09-19 2014-01-07 The Invention Science Fund I, Llc Evaluation systems and methods for coordinating software agents
JP4785710B2 (en) * 2006-11-14 2011-10-05 富士通株式会社 Importance calculation method and apparatus for resources
US8146102B2 (en) * 2006-12-22 2012-03-27 Sap Ag Development environment for groupware integration with enterprise applications
US20080159139A1 (en) * 2006-12-29 2008-07-03 Motorola, Inc. Method and system for a context manager for a converged services framework
JP4758362B2 (en) * 2007-01-30 2011-08-24 株式会社日立製作所 Relay device, program, and relay method
US8214503B2 (en) * 2007-03-23 2012-07-03 Oracle International Corporation Factoring out dialog control and call control
US8041743B2 (en) * 2007-04-17 2011-10-18 Semandex Networks, Inc. Systems and methods for providing semantically enhanced identity management
US20090164387A1 (en) * 2007-04-17 2009-06-25 Semandex Networks Inc. Systems and methods for providing semantically enhanced financial information
US7958155B2 (en) 2007-04-17 2011-06-07 Semandex Networks, Inc. Systems and methods for the management of information to enable the rapid dissemination of actionable information
US8111692B2 (en) 2007-05-31 2012-02-07 Embarq Holdings Company Llc System and method for modifying network traffic
FR2919084A1 (en) * 2007-07-17 2009-01-23 Open Plug Sa METHOD FOR MANAGING SHARED RESOURCES OF A COMPUTER SYSTEM AND SUPERVISOR MODULE FOR IMPLEMENTATION, AND COMPUTER SYSTEM COMPRISING SUCH A MODULE
US8656449B1 (en) * 2007-07-30 2014-02-18 Sprint Communications Company L.P. Applying policy attributes to events
US8438301B2 (en) * 2007-09-24 2013-05-07 Microsoft Corporation Automatic bit rate detection and throttling
US8539097B2 (en) 2007-11-14 2013-09-17 Oracle International Corporation Intelligent message processing
US8161171B2 (en) 2007-11-20 2012-04-17 Oracle International Corporation Session initiation protocol-based internet protocol television
US8370508B2 (en) * 2007-12-20 2013-02-05 Intel Corporation Method, system and apparatus for main memory access subsystem usage to different partitions in a socket with sub-socket partitioning
US8635380B2 (en) * 2007-12-20 2014-01-21 Intel Corporation Method, system and apparatus for handling events for partitions in a socket with sub-socket partitioning
US8854966B2 (en) 2008-01-10 2014-10-07 Apple Inc. Apparatus and methods for network resource allocation
US9654515B2 (en) 2008-01-23 2017-05-16 Oracle International Corporation Service oriented architecture-based SCIM platform
US8589338B2 (en) 2008-01-24 2013-11-19 Oracle International Corporation Service-oriented architecture (SOA) management of data repository
US8401022B2 (en) 2008-02-08 2013-03-19 Oracle International Corporation Pragmatic approaches to IMS
US8180716B2 (en) * 2008-03-24 2012-05-15 At&T Intellectual Property I, L.P. Method and device for forecasting computational needs of an application
US7523206B1 (en) * 2008-04-07 2009-04-21 International Business Machines Corporation Method and system to dynamically apply access rules to a shared resource
US8068425B2 (en) 2008-04-09 2011-11-29 Embarq Holdings Company, Llc System and method for using network performance information to determine improved measures of path states
US8903981B2 (en) * 2008-05-05 2014-12-02 International Business Machines Corporation Method and system for achieving better efficiency in a client grid using node resource usage and tracking
US8239564B2 (en) * 2008-06-20 2012-08-07 Microsoft Corporation Dynamic throttling based on network conditions
US20100023622A1 (en) * 2008-07-25 2010-01-28 Electronics And Telecommunications Research Institute Method for assigning resource of united system
US10819530B2 (en) 2008-08-21 2020-10-27 Oracle International Corporation Charging enabler
US8285900B2 (en) 2009-02-17 2012-10-09 The Board Of Regents Of The University Of Texas System Method and apparatus for congestion-aware routing in a computer interconnection network
US8879547B2 (en) 2009-06-02 2014-11-04 Oracle International Corporation Telephony application services
EP2282479A1 (en) 2009-07-23 2011-02-09 Net Transmit & Receive, S.L. Controlling peer-to-peer programs
US20110029670A1 (en) * 2009-07-31 2011-02-03 Microsoft Corporation Adapting pushed content delivery based on predictiveness
CN102656579A (en) * 2009-11-04 2012-09-05 塞德克西斯公司 Internet infrastructure survey
US8583830B2 (en) 2009-11-19 2013-11-12 Oracle International Corporation Inter-working with a walled garden floor-controlled system
US8533773B2 (en) 2009-11-20 2013-09-10 Oracle International Corporation Methods and systems for implementing service level consolidated user information management
US8489722B2 (en) * 2009-11-24 2013-07-16 International Business Machines Corporation System and method for providing quality of service in wide area messaging fabric
US20110141924A1 (en) * 2009-12-16 2011-06-16 Tektronix Inc. System and Method for Filtering High Priority Signaling and Data for Fixed and Mobile Networks
US9509790B2 (en) 2009-12-16 2016-11-29 Oracle International Corporation Global presence
US9503407B2 (en) 2009-12-16 2016-11-22 Oracle International Corporation Message forwarding
US9125184B2 (en) * 2010-01-13 2015-09-01 Telefonaktiebolaget L M Ericsson (Publ) Compromise resource allocation field size when aggregating component carriers of differing size
US8495205B2 (en) * 2010-05-03 2013-07-23 At&T Intellectual Property I, L.P. Method and apparatus for mitigating an overload in a network
EP2469756A1 (en) 2010-12-24 2012-06-27 British Telecommunications Public Limited Company Communications network management
JP2012135381A (en) * 2010-12-24 2012-07-19 Fujifilm Corp Portable radiation imaging apparatus, photographing controller, and radiation imaging system
WO2013070346A2 (en) * 2011-10-05 2013-05-16 Freeband Technologies, Inc. Application enabled bandwidth billing system and method
KR102001215B1 (en) * 2012-07-20 2019-07-17 삼성전자주식회사 Method and system for sharing content, device and computer readable recording medium thereof
US9680559B1 (en) * 2012-09-05 2017-06-13 RKF Engineering Solutions, LLC Hierarchichal beam management
CN103841040A (en) * 2012-11-20 2014-06-04 英业达科技有限公司 Network system and load balance method
CN103067308A (en) * 2012-12-26 2013-04-24 中兴通讯股份有限公司 Method and system for bandwidth distribution
US10187520B2 (en) * 2013-04-24 2019-01-22 Samsung Electronics Co., Ltd. Terminal device and content displaying method thereof, server and controlling method thereof
US20150098390A1 (en) * 2013-10-04 2015-04-09 Vonage Network Llc Prioritization of data traffic between a mobile device and a network access point
US20150207742A1 (en) * 2014-01-22 2015-07-23 Wipro Limited Methods for optimizing data for transmission and devices thereof
WO2015152891A1 (en) * 2014-03-31 2015-10-08 Hewlett-Packard Development Company, L.P. Credit distribution to clients
US20170104633A1 (en) * 2014-05-29 2017-04-13 Startimes Communication Network Technology Co., Ltd. Wifi gateway control and interface
GB2536402A (en) * 2014-11-14 2016-09-21 Mastercard International Inc Workflow integration
US10182013B1 (en) 2014-12-01 2019-01-15 F5 Networks, Inc. Methods for managing progressive image delivery and devices thereof
US11895138B1 (en) 2015-02-02 2024-02-06 F5, Inc. Methods for improving web scanner accuracy and devices thereof
US10353762B2 (en) * 2015-06-11 2019-07-16 Instana, Inc. Hierarchical fault determination in an application performance management system
US10623258B2 (en) * 2015-06-22 2020-04-14 Arista Networks, Inc. Data analytics on internal state
US10742738B2 (en) * 2015-07-17 2020-08-11 The Boeing Company Flexible deterministic communications network
US10171589B2 (en) 2015-10-23 2019-01-01 International Business Machines Corporation Reducing interference from management and support applications to functional applications
US10667256B2 (en) * 2015-11-12 2020-05-26 Cisco Technology, Inc. System and method to provide dynamic bandwidth allocation over wide area networks
US10917255B2 (en) 2016-05-10 2021-02-09 Huawei Technologies Co., Ltd. Packet switched service identification method and terminal
US10095422B2 (en) * 2016-10-28 2018-10-09 Veritas Technologies, LLC Systems and methods for allocating input/output bandwidth in storage systems
US10764155B2 (en) * 2016-11-29 2020-09-01 CloudSmartz, Inc. System and method for on-demand network communication
US11151150B2 (en) 2019-09-13 2021-10-19 Salesforce.Com, Inc. Adjustable connection pool mechanism
US11636067B2 (en) 2019-10-04 2023-04-25 Salesforce.Com, Inc. Performance measurement mechanism
US11165857B2 (en) 2019-10-23 2021-11-02 Salesforce.Com, Inc. Connection pool anomaly detection mechanism

Family Cites Families (67)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5317568A (en) * 1991-04-11 1994-05-31 Galileo International Partnership Method and apparatus for managing and facilitating communications in a distributed hetergeneous network
CA2077061C (en) * 1991-11-22 1998-04-21 Mark J. Baugher Scheduling system for distributed multimedia resources
CA2095755C (en) * 1992-08-17 1999-01-26 Mark J. Baugher Network priority management
US5694548A (en) * 1993-06-29 1997-12-02 International Business Machines Corporation System and method for providing multimedia quality of service sessions in a communications network
FR2712411B1 (en) * 1993-11-08 1995-12-22 Bull Sa Communication system with a network including a set of administration.
WO1995027249A1 (en) 1994-04-05 1995-10-12 Intel Corporation Method and appartus for monitoring and controlling programs in a network
JPH07302236A (en) * 1994-05-06 1995-11-14 Hitachi Ltd Information processing system, method therefor and service providing method in the information processing system
US5461611A (en) * 1994-06-07 1995-10-24 International Business Machines Corporation Quality of service management for source routing multimedia packet networks
EP0689321A1 (en) * 1994-06-23 1995-12-27 International Business Machines Corporation System for high availablility of network-wide bandwidth allocation
US5978594A (en) * 1994-09-30 1999-11-02 Bmc Software, Inc. System for managing computer resources across a distributed computing environment by first reading discovery information about how to determine system resources presence
US5881313A (en) * 1994-11-07 1999-03-09 Digital Equipment Corporation Arbitration system based on requester class and relative priority including transmit descriptor valid bit for a shared resource having multiple requesters
US5655081A (en) * 1995-03-08 1997-08-05 Bmc Software, Inc. System for monitoring and managing computer resources and applications across a distributed computing environment using an intelligent autonomous agent architecture
US5764920A (en) * 1995-03-17 1998-06-09 Sprint Communications Co. L.P. System and method for routing administrative data over a telecommunications network to a remote processor
US5777549A (en) * 1995-03-29 1998-07-07 Cabletron Systems, Inc. Method and apparatus for policy-based alarm notification in a distributed network management environment
US5696486A (en) * 1995-03-29 1997-12-09 Cabletron Systems, Inc. Method and apparatus for policy-based alarm notification in a distributed network management environment
US5689708A (en) * 1995-03-31 1997-11-18 Showcase Corporation Client/server computer systems having control of client-based application programs, and application-program control means therefor
US5734865A (en) * 1995-06-07 1998-03-31 Bull Hn Information Systems Inc. Virtual local area network well-known port routing mechanism for mult--emulators in an open system environment
US5907324A (en) * 1995-06-07 1999-05-25 Intel Corporation Method for saving and accessing desktop conference characteristics with a persistent conference object
US5941947A (en) * 1995-08-18 1999-08-24 Microsoft Corporation System and method for controlling access to data entities in a computer network
US5657452A (en) * 1995-09-08 1997-08-12 U.S. Robotics Corp. Transparent support of protocol and data compression features for data communication
CA2162188C (en) * 1995-11-06 1999-05-25 Harold Jeffrey Gartner Location transparency of distributed objects over multiple middlewares
US5802146A (en) * 1995-11-22 1998-09-01 Bell Atlantic Network Services, Inc. Maintenance operations console for an advanced intelligent network
US5706437A (en) * 1995-12-29 1998-01-06 Mci Communications Corporation System and method for accessing a service on a services network
JP3282652B2 (en) * 1996-02-19 2002-05-20 日本電気株式会社 OSI multi-layer management system
US5968116A (en) * 1996-03-27 1999-10-19 Intel Corporation Method and apparatus for facilitating the management of networked devices
US6279039B1 (en) * 1996-04-03 2001-08-21 Ncr Corporation Resource management method and apparatus for maximizing multimedia performance of open systems
US5956482A (en) * 1996-05-15 1999-09-21 At&T Corp Multimedia information service access
US5892754A (en) * 1996-06-07 1999-04-06 International Business Machines Corporation User controlled adaptive flow control for packet networks
GB9612363D0 (en) 1996-06-13 1996-08-14 British Telecomm ATM network management
US6400687B1 (en) * 1996-06-13 2002-06-04 British Telecommunications Public Limited Company ATM network management
US5848266A (en) * 1996-06-20 1998-12-08 Intel Corporation Dynamic data rate adjustment to maintain throughput of a time varying signal
US5983261A (en) * 1996-07-01 1999-11-09 Apple Computer, Inc. Method and apparatus for allocating bandwidth in teleconferencing applications using bandwidth control
US5799002A (en) * 1996-07-02 1998-08-25 Microsoft Corporation Adaptive bandwidth throttling for network services
US5761428A (en) * 1996-07-05 1998-06-02 Ncr Corporation Method and aparatus for providing agent capability independent from a network node
US5944795A (en) * 1996-07-12 1999-08-31 At&T Corp. Client-server architecture using internet and guaranteed quality of service networks for accessing distributed media sources
US5996010A (en) * 1996-08-29 1999-11-30 Nortel Networks Corporation Method of performing a network management transaction using a web-capable agent
US5781703A (en) * 1996-09-06 1998-07-14 Candle Distributed Solutions, Inc. Intelligent remote agent for computer performance monitoring
US5901142A (en) * 1996-09-18 1999-05-04 Motorola, Inc. Method and apparatus for providing packet data communications to a communication unit in a radio communication system
US6097722A (en) * 1996-12-13 2000-08-01 Nortel Networks Corporation Bandwidth management processes and systems for asynchronous transfer mode networks using variable virtual paths
US6085243A (en) 1996-12-13 2000-07-04 3Com Corporation Distributed remote management (dRMON) for networks
US5953338A (en) * 1996-12-13 1999-09-14 Northern Telecom Limited Dynamic control processes and systems for asynchronous transfer mode networks
US5889958A (en) * 1996-12-20 1999-03-30 Livingston Enterprises, Inc. Network access control system and process
US5987611A (en) * 1996-12-31 1999-11-16 Zone Labs, Inc. System and methodology for managing internet access on a per application basis for client computers connected to the internet
US5958010A (en) * 1997-03-20 1999-09-28 Firstsense Software, Inc. Systems and methods for monitoring distributed applications including an interface running in an operating system kernel
US5948065A (en) * 1997-03-28 1999-09-07 International Business Machines Corporation System for managing processor resources in a multisystem environment in order to provide smooth real-time data streams while enabling other types of applications to be processed concurrently
US6216163B1 (en) * 1997-04-14 2001-04-10 Lucent Technologies Inc. Method and apparatus providing for automatically restarting a client-server connection in a distributed network
US6578077B1 (en) * 1997-05-27 2003-06-10 Novell, Inc. Traffic monitoring tool for bandwidth management
AU735024B2 (en) * 1997-07-25 2001-06-28 British Telecommunications Public Limited Company Scheduler for a software system
US5944783A (en) * 1997-07-29 1999-08-31 Lincom Corporation Apparatus and method for data transfers through software agents using client-to-server and peer-to-peer transfers
US6029201A (en) * 1997-08-01 2000-02-22 International Business Machines Corporation Internet application access server apparatus and method
US6763395B1 (en) * 1997-11-14 2004-07-13 National Instruments Corporation System and method for connecting to and viewing live data using a standard user agent
WO1999027684A1 (en) * 1997-11-25 1999-06-03 Packeteer, Inc. Method for automatically classifying traffic in a packet communications network
US6148337A (en) * 1998-04-01 2000-11-14 Bridgeway Corporation Method and system for monitoring and manipulating the flow of private information on public networks
US6317438B1 (en) 1998-04-14 2001-11-13 Harold Herman Trebes, Jr. System and method for providing peer-oriented control of telecommunications services
US6269400B1 (en) * 1998-07-22 2001-07-31 International Business Machines Corporation Method for discovering and registering agents in a distributed network
US6085241A (en) * 1998-07-22 2000-07-04 Amplify. Net, Inc. Internet user-bandwidth management and control tool
US6088796A (en) * 1998-08-06 2000-07-11 Cianfrocca; Francis Secure middleware and server control system for querying through a network firewall
US6631122B1 (en) * 1999-06-11 2003-10-07 Nortel Networks Limited Method and system for wireless QOS agent for all-IP network
US6466980B1 (en) * 1999-06-17 2002-10-15 International Business Machines Corporation System and method for capacity shaping in an internet environment
US6701324B1 (en) * 1999-06-30 2004-03-02 International Business Machines Corporation Data collector for use in a scalable, distributed, asynchronous data collection mechanism
US6466978B1 (en) * 1999-07-28 2002-10-15 Matsushita Electric Industrial Co., Ltd. Multimedia file systems using file managers located on clients for managing network attached storage devices
US7028264B2 (en) 1999-10-29 2006-04-11 Surfcast, Inc. System and method for simultaneous display of multiple information sources
US6681232B1 (en) * 2000-06-07 2004-01-20 Yipes Enterprise Services, Inc. Operations and provisioning systems for service level management in an extended-area data communications network
US6785889B1 (en) * 2000-06-15 2004-08-31 Aurema, Inc. System and method for scheduling bandwidth resources using a Kalman estimator with active feedback
US7660902B2 (en) 2000-11-20 2010-02-09 Rsa Security, Inc. Dynamic file access control and management
US20020156900A1 (en) 2001-03-30 2002-10-24 Brian Marquette Protocol independent control module
US20040103193A1 (en) * 2002-11-08 2004-05-27 Pandya Suketu J. Response time and resource consumption management in a distributed network environment

Cited By (260)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9031087B2 (en) 2000-11-08 2015-05-12 Genesys Telecommunications Laboratories, Inc. Method and apparatus for optimizing response time to events in queue
US20080056165A1 (en) * 2000-11-08 2008-03-06 Yevgeniy Petrovykh Method And Apparatus For Anticipating And Planning Communication-Center Resources Based On Evaluation Of Events Waiting In A Communication Center Master Queue
US7586859B2 (en) * 2000-11-08 2009-09-08 Genesys Telecommunications Laboratories, Inc. Method and apparatus for anticipating and planning communication-center resources based on evaluation of events waiting in a communication center master queue
USRE46174E1 (en) 2001-01-18 2016-10-04 Genesys Telecommunications Laboratories, Inc. Method and apparatus for intelligent routing of instant messaging presence protocol (IMPP) events among a group of customer service representatives
USRE46625E1 (en) 2001-08-17 2017-12-05 Genesys Telecommunications Laboratories, Inc. Method and apparatus for intelligent routing of instant messaging presence protocol (IMPP) events among a group of customer service representatives
US9648168B2 (en) 2002-08-27 2017-05-09 Genesys Telecommunications Laboratories, Inc. Method and apparatus for optimizing response time to events in queue
USRE47138E1 (en) 2002-08-27 2018-11-20 Genesys Telecommunications Laboratories, Inc. Method and apparatus for anticipating and planning communication-center resources based on evaluation of events waiting in a communication center master queue
USRE46853E1 (en) * 2002-08-27 2018-05-15 Genesys Telecommunications Laboratories, Inc. Method and apparatus for anticipating and planning communication-center resources based on evaluation of events waiting in a communication center master queue
USRE46852E1 (en) 2002-08-27 2018-05-15 Genesys Telecommunications Laboratories, Inc. Method and apparatus for anticipating and planning communication-center resources based on evaluation of events waiting in a communication center master queue
USRE46776E1 (en) 2002-08-27 2018-04-03 Genesys Telecommunications Laboratories, Inc. Method and apparatus for optimizing response time to events in queue
US20040107212A1 (en) * 2002-12-02 2004-06-03 Michael Friedrich Centralized access and management for multiple, disparate data repositories
US7599959B2 (en) * 2002-12-02 2009-10-06 Sap Ag Centralized access and management for multiple, disparate data repositories
US8027361B2 (en) * 2003-02-26 2011-09-27 Alcatel Lucent Class-based bandwidth allocation and admission control for virtual private networks with differentiated service
US20090279568A1 (en) * 2003-02-26 2009-11-12 Xue Li Class-based bandwidth allocation and admission control for virtual private networks with differentiated service
US11654368B2 (en) 2004-06-28 2023-05-23 Winview, Inc. Methods and apparatus for distributed gaming over a mobile device
US11400379B2 (en) 2004-06-28 2022-08-02 Winview, Inc. Methods and apparatus for distributed gaming over a mobile device
US11786813B2 (en) 2004-07-14 2023-10-17 Winview, Inc. Game of skill played by remote participants utilizing wireless devices in connection with a common game event
US20060045096A1 (en) * 2004-09-01 2006-03-02 International Business Machines Corporation Method, system, and computer product for controlling input message priority
US7519067B2 (en) * 2004-09-01 2009-04-14 International Business Machines Corporation Method, system, and computer product for controlling input message priority
US20060072457A1 (en) * 2004-10-06 2006-04-06 Netpriva Pty Ltd. Peer signaling protocol and system for decentralized traffic management
US8799472B2 (en) * 2004-10-06 2014-08-05 Riverbed Technology, Inc. Peer signaling protocol and system for decentralized traffic management
US11451883B2 (en) 2005-06-20 2022-09-20 Winview, Inc. Method of and system for managing client resources and assets for activities on computing devices
US11154775B2 (en) 2005-10-03 2021-10-26 Winview, Inc. Synchronized gaming and programming
US11148050B2 (en) 2005-10-03 2021-10-19 Winview, Inc. Cellular phone games based upon television archives
US8838737B2 (en) * 2005-12-01 2014-09-16 Firestar Software, Inc. System and method for exchanging information among exchange applications
US8620989B2 (en) 2005-12-01 2013-12-31 Firestar Software, Inc. System and method for exchanging information among exchange applications
US8838668B2 (en) 2005-12-01 2014-09-16 Firestar Software, Inc. System and method for exchanging information among exchange applications
US9742880B2 (en) 2005-12-01 2017-08-22 Firestar Software, Inc. System and method for exchanging information among exchange applications
US9860348B2 (en) 2005-12-01 2018-01-02 Firestar Software, Inc. System and method for exchanging information among exchange applications
US11358064B2 (en) 2006-01-10 2022-06-14 Winview, Inc. Method of and system for conducting multiple contests of skill with a single performance
US11338189B2 (en) 2006-01-10 2022-05-24 Winview, Inc. Method of and system for conducting multiple contests of skill with a single performance
US11298621B2 (en) 2006-01-10 2022-04-12 Winview, Inc. Method of and system for conducting multiple contests of skill with a single performance
US11918880B2 (en) 2006-01-10 2024-03-05 Winview Ip Holdings, Llc Method of and system for conducting multiple contests of skill with a single performance
US11951402B2 (en) 2006-01-10 2024-04-09 Winview Ip Holdings, Llc Method of and system for conducting multiple contests of skill with a single performance
US11266896B2 (en) 2006-01-10 2022-03-08 Winview, Inc. Method of and system for conducting multiple contests of skill with a single performance
US11825168B2 (en) 2006-04-12 2023-11-21 Winview Ip Holdings, Llc Methodology for equalizing systemic latencies in television reception in connection with games of skill played in connection with live television programming
US20210360325A1 (en) * 2006-04-12 2021-11-18 Winview, Inc. Synchronized gaming and programming
US11179632B2 (en) 2006-04-12 2021-11-23 Winview, Inc. Methodology for equalizing systemic latencies in television reception in connection with games of skill played in connection with live television programming
US11678020B2 (en) 2006-04-12 2023-06-13 Winview, Inc. Methodology for equalizing systemic latencies in television reception in connection with games of skill played in connection with live television programming
US11185770B2 (en) 2006-04-12 2021-11-30 Winview, Inc. Methodology for equalizing systemic latencies in television reception in connection with games of skill played in connection with live television programming
US11716515B2 (en) 2006-04-12 2023-08-01 Winview, Inc. Methodology for equalizing systemic latencies in television reception in connection with games of skill played in connection with live television programming
US11722743B2 (en) * 2006-04-12 2023-08-08 Winview, Inc. Synchronized gaming and programming
US11736771B2 (en) 2006-04-12 2023-08-22 Winview, Inc. Methodology for equalizing systemic latencies in television reception in connection with games of skill played in connection with live television programming
US11235237B2 (en) 2006-04-12 2022-02-01 Winview, Inc. Methodology for equalizing systemic latencies in television reception in connection with games of skill played in connection with live television programming
US11889157B2 (en) 2006-04-12 2024-01-30 Winview Ip Holdings, Llc Methodology for equalizing systemic latencies in television reception in connection with games of skill played in connection with live television programming
US11917254B2 (en) 2006-04-12 2024-02-27 Winview Ip Holdings, Llc Methodology for equalizing systemic latencies in television reception in connection with games of skill played in connection with live television programming
US11083965B2 (en) 2006-04-12 2021-08-10 Winview, Inc. Methodology for equalizing systemic latencies in television reception in connection with games of skill played in connection with live television programming
US11082746B2 (en) * 2006-04-12 2021-08-03 Winview, Inc. Synchronized gaming and programming
US11077366B2 (en) 2006-04-12 2021-08-03 Winview, Inc. Methodology for equalizing systemic latencies in television reception in connection with games of skill played in connection with live television programming
US8219617B2 (en) * 2006-07-26 2012-07-10 Konami Digital Entertainment Co., Ltd. Game system, game terminal therefor, and server device therefor
US20090318235A1 (en) * 2006-07-26 2009-12-24 Hiroyuki Ashida Game system, game terminal therefor, and server device therefor
US8280960B2 (en) 2006-07-26 2012-10-02 Konami Digital Entertainment Co., Ltd. Game system, game terminal therefor, and server device therefor
US20100029388A1 (en) * 2006-07-26 2010-02-04 Konami Digital Entertainment Co. Ltd Game system, game terminal therefor, and server device therefor
US20080312151A1 (en) * 2007-02-08 2008-12-18 Aspenbio Pharma, Inc. Compositions and methods including expression and bioactivity of bovine follicle stimulating hormone
US20090003229A1 (en) * 2007-06-30 2009-01-01 Kai Siang Loh Adaptive Bandwidth Management Systems And Methods
US8024445B2 (en) * 2008-03-28 2011-09-20 Seiko Epson Corporation Socket management device and socket management method
US20090248875A1 (en) * 2008-03-28 2009-10-01 Seiko Epson Corporation Socket Management Device and Socket Management Method
WO2009124147A3 (en) * 2008-04-03 2009-12-30 Entropic Communications, Inc. System and method for scheduling reservation requests for a communication network
US8948189B2 (en) 2008-04-03 2015-02-03 Entropic Communications, Inc. System and method for scheduling reservation requests for a communication network
US20090252172A1 (en) * 2008-04-03 2009-10-08 Robert Lawrence Hare System and method for scheduling reservation requests for a communication network
US9106592B1 (en) * 2008-05-18 2015-08-11 Western Digital Technologies, Inc. Controller and method for controlling a buffered data transfer device
US8504716B2 (en) * 2008-10-08 2013-08-06 Citrix Systems, Inc. Systems and methods for allocating bandwidth by an intermediary for flow control
US20100095021A1 (en) * 2008-10-08 2010-04-15 Samuels Allen R Systems and methods for allocating bandwidth by an intermediary for flow control
US11601727B2 (en) 2008-11-10 2023-03-07 Winview, Inc. Interactive advertising system
US20100134423A1 (en) * 2008-12-02 2010-06-03 At&T Mobility Ii Llc Automatic soft key adaptation with left-right hand edge sensing
US8078769B2 (en) * 2008-12-02 2011-12-13 At&T Mobility Ii Llc Automatic QoS determination with I/O activity logic
US20100138565A1 (en) * 2008-12-02 2010-06-03 At&T Mobility Ii Llc Automatic QoS determination with I/O activity logic
US20100134424A1 (en) * 2008-12-02 2010-06-03 At&T Mobility Ii Llc Edge hand and finger presence and motion sensor
US8368658B2 (en) 2008-12-02 2013-02-05 At&T Mobility Ii Llc Automatic soft key adaptation with left-right hand edge sensing
US20100138680A1 (en) * 2008-12-02 2010-06-03 At&T Mobility Ii Llc Automatic display and voice command activation with hand edge sensing
US8497847B2 (en) 2008-12-02 2013-07-30 At&T Mobility Ii Llc Automatic soft key adaptation with left-right hand edge sensing
US9769207B2 (en) 2009-01-28 2017-09-19 Headwater Research Llc Wireless network service interfaces
US10320990B2 (en) 2009-01-28 2019-06-11 Headwater Research Llc Device assisted CDR creation, aggregation, mediation and billing
US9198075B2 (en) 2009-01-28 2015-11-24 Headwater Partners I Llc Wireless end-user device with differential traffic control policy list applicable to one of several wireless modems
US9198074B2 (en) 2009-01-28 2015-11-24 Headwater Partners I Llc Wireless end-user device with differential traffic control policy list and applying foreground classification to roaming wireless data service
US9204374B2 (en) 2009-01-28 2015-12-01 Headwater Partners I Llc Multicarrier over-the-air cellular network activation server
US9204282B2 (en) 2009-01-28 2015-12-01 Headwater Partners I Llc Enhanced roaming services and converged carrier networks with device assisted services and a proxy
US9215613B2 (en) 2009-01-28 2015-12-15 Headwater Partners I Llc Wireless end-user device with differential traffic control policy list having limited user control
US20100191612A1 (en) * 2009-01-28 2010-07-29 Gregory G. Raleigh Verifiable device assisted service usage monitoring with reporting, synchronization, and notification
US9215159B2 (en) 2009-01-28 2015-12-15 Headwater Partners I Llc Data usage monitoring for media data services used by applications
US9220027B1 (en) 2009-01-28 2015-12-22 Headwater Partners I Llc Wireless end-user device with policy-based controls for WWAN network usage and modem state changes requested by specific applications
US11923995B2 (en) 2009-01-28 2024-03-05 Headwater Research Llc Device-assisted services for protecting network capacity
US9225797B2 (en) 2009-01-28 2015-12-29 Headwater Partners I Llc System for providing an adaptive wireless ambient service to a mobile device
US9232403B2 (en) 2009-01-28 2016-01-05 Headwater Partners I Llc Mobile device with common secure wireless message service serving multiple applications
US9247450B2 (en) 2009-01-28 2016-01-26 Headwater Partners I Llc Quality of service for device assisted services
US9253663B2 (en) 2009-01-28 2016-02-02 Headwater Partners I Llc Controlling mobile device communications on a roaming network based on device state
US9258735B2 (en) 2009-01-28 2016-02-09 Headwater Partners I Llc Device-assisted services for protecting network capacity
US11757943B2 (en) 2009-01-28 2023-09-12 Headwater Research Llc Automated device provisioning and activation
US9270559B2 (en) 2009-01-28 2016-02-23 Headwater Partners I Llc Service policy implementation for an end-user device having a control application or a proxy agent for routing an application traffic flow
US9271184B2 (en) 2009-01-28 2016-02-23 Headwater Partners I Llc Wireless end-user device with per-application data limit and traffic control policy list limiting background application traffic
US9277445B2 (en) 2009-01-28 2016-03-01 Headwater Partners I Llc Wireless end-user device with differential traffic control policy list and applying foreground classification to wireless data service
US9277433B2 (en) 2009-01-28 2016-03-01 Headwater Partners I Llc Wireless end-user device with policy-based aggregation of network activity requested by applications
US9319913B2 (en) 2009-01-28 2016-04-19 Headwater Partners I Llc Wireless end-user device with secure network-provided differential traffic control policy list
US9351193B2 (en) 2009-01-28 2016-05-24 Headwater Partners I Llc Intermediate networking devices
US9386165B2 (en) 2009-01-28 2016-07-05 Headwater Partners I Llc System and method for providing user notifications
US9386121B2 (en) 2009-01-28 2016-07-05 Headwater Partners I Llc Method for providing an adaptive wireless ambient service to a mobile device
US9392462B2 (en) 2009-01-28 2016-07-12 Headwater Partners I Llc Mobile end-user device with agent limiting wireless data communication for specified background applications based on a stored policy
US9198042B2 (en) 2009-01-28 2015-11-24 Headwater Partners I Llc Security techniques for device assisted services
US9491199B2 (en) 2009-01-28 2016-11-08 Headwater Partners I Llc Security, fraud detection, and fraud mitigation in device-assisted services systems
US9491564B1 (en) 2009-01-28 2016-11-08 Headwater Partners I Llc Mobile device and method with secure network messaging for authorized components
US9521578B2 (en) 2009-01-28 2016-12-13 Headwater Partners I Llc Wireless end-user device with application program interface to allow applications to access application-specific aspects of a wireless network access policy
US9532261B2 (en) 2009-01-28 2016-12-27 Headwater Partners I Llc System and method for wireless network offloading
US9532161B2 (en) 2009-01-28 2016-12-27 Headwater Partners I Llc Wireless device with application data flow tagging and network stack-implemented network access policy
US9544397B2 (en) 2009-01-28 2017-01-10 Headwater Partners I Llc Proxy server for providing an adaptive wireless ambient service to a mobile device
US9557889B2 (en) 2009-01-28 2017-01-31 Headwater Partners I Llc Service plan design, user interfaces, application programming interfaces, and device management
US9565543B2 (en) 2009-01-28 2017-02-07 Headwater Partners I Llc Device group partitions and settlement platform
US9565707B2 (en) 2009-01-28 2017-02-07 Headwater Partners I Llc Wireless end-user device with wireless data attribution to multiple personas
US9572019B2 (en) 2009-01-28 2017-02-14 Headwater Partners LLC Service selection set published to device agent with on-device service selection
US11750477B2 (en) 2009-01-28 2023-09-05 Headwater Research Llc Adaptive ambient services
US9571559B2 (en) 2009-01-28 2017-02-14 Headwater Partners I Llc Enhanced curfew and protection associated with a device group
US9578182B2 (en) 2009-01-28 2017-02-21 Headwater Partners I Llc Mobile device and service management
US9591474B2 (en) 2009-01-28 2017-03-07 Headwater Partners I Llc Adapting network policies based on device service processor configuration
US11665186B2 (en) 2009-01-28 2023-05-30 Headwater Research Llc Communications device with secure data path processing agents
US9609544B2 (en) 2009-01-28 2017-03-28 Headwater Research Llc Device-assisted services for protecting network capacity
US9609510B2 (en) 2009-01-28 2017-03-28 Headwater Research Llc Automated credential porting for mobile devices
US9609459B2 (en) 2009-01-28 2017-03-28 Headwater Research Llc Network tools for analysis, design, testing, and production of services
US9615192B2 (en) 2009-01-28 2017-04-04 Headwater Research Llc Message link server with plural message delivery triggers
US9641957B2 (en) 2009-01-28 2017-05-02 Headwater Research Llc Automated device provisioning and activation
US9647918B2 (en) 2009-01-28 2017-05-09 Headwater Research Llc Mobile device and method attributing media services network usage to requesting application
US9198117B2 (en) 2009-01-28 2015-11-24 Headwater Partners I Llc Network system with common secure wireless message service serving multiple applications on multiple wireless devices
US9674731B2 (en) 2009-01-28 2017-06-06 Headwater Research Llc Wireless device applying different background data traffic policies to different device applications
US9705771B2 (en) 2009-01-28 2017-07-11 Headwater Partners I Llc Attribution of mobile device data traffic to end-user application based on socket flows
US9706061B2 (en) 2009-01-28 2017-07-11 Headwater Partners I Llc Service design center for device assisted services
US11665592B2 (en) 2009-01-28 2023-05-30 Headwater Research Llc Security, fraud detection, and fraud mitigation in device-assisted services systems
US9179316B2 (en) 2009-01-28 2015-11-03 Headwater Partners I Llc Mobile device with user controls and policy agent to control application access to device location data
US9749899B2 (en) 2009-01-28 2017-08-29 Headwater Research Llc Wireless end-user device with network traffic API to indicate unavailability of roaming wireless connection to background applications
US9749898B2 (en) 2009-01-28 2017-08-29 Headwater Research Llc Wireless end-user device with differential traffic control policy list applicable to one of several wireless modems
US11589216B2 (en) 2009-01-28 2023-02-21 Headwater Research Llc Service selection set publishing to device agent with on-device service selection
US9755842B2 (en) 2009-01-28 2017-09-05 Headwater Research Llc Managing service user discovery and service launch object placement on a device
US9179359B2 (en) 2009-01-28 2015-11-03 Headwater Partners I Llc Wireless end-user device with differentiated network access status for different device applications
US9819808B2 (en) 2009-01-28 2017-11-14 Headwater Research Llc Hierarchical service policies for creating service usage data records for a wireless end-user device
US11582593B2 (en) 2009-01-28 2023-02-14 Headwater Research Llc Adapting network policies based on device service processor configuration
US9179315B2 (en) 2009-01-28 2015-11-03 Headwater Partners I Llc Mobile device with data service monitoring, categorization, and display for different applications and networks
US9858559B2 (en) 2009-01-28 2018-01-02 Headwater Research Llc Network service plan design
US9866642B2 (en) 2009-01-28 2018-01-09 Headwater Research Llc Wireless end-user device with wireless modem power state control policy for background applications
US9173104B2 (en) 2009-01-28 2015-10-27 Headwater Partners I Llc Mobile device with device agents to detect a disallowed access to a requested mobile data service and guide a multi-carrier selection and activation sequence
US9942796B2 (en) 2009-01-28 2018-04-10 Headwater Research Llc Quality of service for device assisted services
US9955332B2 (en) 2009-01-28 2018-04-24 Headwater Research Llc Method for child wireless device activation to subscriber account of a master wireless device
US9954975B2 (en) 2009-01-28 2018-04-24 Headwater Research Llc Enhanced curfew and protection associated with a device group
US9154428B2 (en) 2009-01-28 2015-10-06 Headwater Partners I Llc Wireless end-user device with differentiated network access selectively applied to different applications
US11570309B2 (en) 2009-01-28 2023-01-31 Headwater Research Llc Service design center for device assisted services
US9973930B2 (en) 2009-01-28 2018-05-15 Headwater Research Llc End user device that secures an association of application to service policy with an application certificate check
US9980146B2 (en) 2009-01-28 2018-05-22 Headwater Research Llc Communications device with secure data path processing agents
US11563592B2 (en) 2009-01-28 2023-01-24 Headwater Research Llc Managing service user discovery and service launch object placement on a device
US10028144B2 (en) 2009-01-28 2018-07-17 Headwater Research Llc Security techniques for device assisted services
US11538106B2 (en) 2009-01-28 2022-12-27 Headwater Research Llc Wireless end-user device providing ambient or sponsored services
US10057141B2 (en) 2009-01-28 2018-08-21 Headwater Research Llc Proxy system and method for adaptive ambient services
US10057775B2 (en) 2009-01-28 2018-08-21 Headwater Research Llc Virtualized policy and charging system
US10064033B2 (en) 2009-01-28 2018-08-28 Headwater Research Llc Device group partitions and settlement platform
US10064055B2 (en) 2009-01-28 2018-08-28 Headwater Research Llc Security, fraud detection, and fraud mitigation in device-assisted services systems
US10070305B2 (en) 2009-01-28 2018-09-04 Headwater Research Llc Device assisted services install
US10080250B2 (en) 2009-01-28 2018-09-18 Headwater Research Llc Enterprise access control and accounting allocation for access networks
US11533642B2 (en) 2009-01-28 2022-12-20 Headwater Research Llc Device group partitions and settlement platform
US9143976B2 (en) 2009-01-28 2015-09-22 Headwater Partners I Llc Wireless end-user device with differentiated network access and access status for background and foreground device applications
US10165447B2 (en) 2009-01-28 2018-12-25 Headwater Research Llc Network service plan design
US11516301B2 (en) 2009-01-28 2022-11-29 Headwater Research Llc Enhanced curfew and protection associated with a device group
US10171990B2 (en) 2009-01-28 2019-01-01 Headwater Research Llc Service selection set publishing to device agent with on-device service selection
US10171988B2 (en) 2009-01-28 2019-01-01 Headwater Research Llc Adapting network policies based on device service processor configuration
US10171681B2 (en) 2009-01-28 2019-01-01 Headwater Research Llc Service design center for device assisted services
US10200541B2 (en) 2009-01-28 2019-02-05 Headwater Research Llc Wireless end-user device with divided user space/kernel space traffic policy system
US10237757B2 (en) 2009-01-28 2019-03-19 Headwater Research Llc System and method for wireless network offloading
US10237146B2 (en) 2009-01-28 2019-03-19 Headwater Research Llc Adaptive ambient services
US10237773B2 (en) 2009-01-28 2019-03-19 Headwater Research Llc Device-assisted services for protecting network capacity
US10248996B2 (en) 2009-01-28 2019-04-02 Headwater Research Llc Method for operating a wireless end-user device mobile payment agent
US10264138B2 (en) 2009-01-28 2019-04-16 Headwater Research Llc Mobile device and service management
US10321320B2 (en) 2009-01-28 2019-06-11 Headwater Research Llc Wireless network buffered message system
US9198076B2 (en) 2009-01-28 2015-11-24 Headwater Partners I Llc Wireless end-user device with power-control-state-based wireless network access policy for background applications
US10326800B2 (en) 2009-01-28 2019-06-18 Headwater Research Llc Wireless network service interfaces
US10326675B2 (en) 2009-01-28 2019-06-18 Headwater Research Llc Flow tagging for service policy implementation
US11494837B2 (en) 2009-01-28 2022-11-08 Headwater Research Llc Virtualized policy and charging system
US11477246B2 (en) 2009-01-28 2022-10-18 Headwater Research Llc Network service plan design
US10462627B2 (en) 2009-01-28 2019-10-29 Headwater Research Llc Service plan design, user interfaces, application programming interfaces, and device management
US10492102B2 (en) 2009-01-28 2019-11-26 Headwater Research Llc Intermediate networking devices
US10536983B2 (en) 2009-01-28 2020-01-14 Headwater Research Llc Enterprise access control and accounting allocation for access networks
US10582375B2 (en) 2009-01-28 2020-03-03 Headwater Research Llc Device assisted services install
US10681179B2 (en) 2009-01-28 2020-06-09 Headwater Research Llc Enhanced curfew and protection associated with a device group
US10694385B2 (en) 2009-01-28 2020-06-23 Headwater Research Llc Security techniques for device assisted services
US10716006B2 (en) 2009-01-28 2020-07-14 Headwater Research Llc End user device that secures an association of application to service policy with an application certificate check
US10715342B2 (en) 2009-01-28 2020-07-14 Headwater Research Llc Managing service user discovery and service launch object placement on a device
US10749700B2 (en) 2009-01-28 2020-08-18 Headwater Research Llc Device-assisted services for protecting network capacity
US10771980B2 (en) 2009-01-28 2020-09-08 Headwater Research Llc Communications device with secure data path processing agents
US10779177B2 (en) 2009-01-28 2020-09-15 Headwater Research Llc Device group partitions and settlement platform
US10783581B2 (en) 2009-01-28 2020-09-22 Headwater Research Llc Wireless end-user device providing ambient or sponsored services
US10791471B2 (en) 2009-01-28 2020-09-29 Headwater Research Llc System and method for wireless network offloading
US10798558B2 (en) 2009-01-28 2020-10-06 Headwater Research Llc Adapting network policies based on device service processor configuration
US10798254B2 (en) 2009-01-28 2020-10-06 Headwater Research Llc Service design center for device assisted services
US10798252B2 (en) 2009-01-28 2020-10-06 Headwater Research Llc System and method for providing user notifications
US10803518B2 (en) 2009-01-28 2020-10-13 Headwater Research Llc Virtualized policy and charging system
US10834577B2 (en) 2009-01-28 2020-11-10 Headwater Research Llc Service offer set publishing to device agent with on-device service selection
US11425580B2 (en) 2009-01-28 2022-08-23 Headwater Research Llc System and method for wireless network offloading
US10841839B2 (en) 2009-01-28 2020-11-17 Headwater Research Llc Security, fraud detection, and fraud mitigation in device-assisted services systems
US11412366B2 (en) 2009-01-28 2022-08-09 Headwater Research Llc Enhanced roaming services and converged carrier networks with device assisted services and a proxy
US10848330B2 (en) 2009-01-28 2020-11-24 Headwater Research Llc Device-assisted services for protecting network capacity
US10855559B2 (en) 2009-01-28 2020-12-01 Headwater Research Llc Adaptive ambient services
US10869199B2 (en) 2009-01-28 2020-12-15 Headwater Research Llc Network service plan design
US10985977B2 (en) 2009-01-28 2021-04-20 Headwater Research Llc Quality of service for device assisted services
US11039020B2 (en) 2009-01-28 2021-06-15 Headwater Research Llc Mobile device and service management
US11405429B2 (en) 2009-01-28 2022-08-02 Headwater Research Llc Security techniques for device assisted services
US9137701B2 (en) 2009-01-28 2015-09-15 Headwater Partners I Llc Wireless end-user device with differentiated network access for background and foreground device applications
US9137739B2 (en) 2009-01-28 2015-09-15 Headwater Partners I Llc Network based service policy implementation with network neutrality and user privacy
US11096055B2 (en) 2009-01-28 2021-08-17 Headwater Research Llc Automated device provisioning and activation
US11134102B2 (en) * 2009-01-28 2021-09-28 Headwater Research Llc Verifiable device assisted service usage monitoring with reporting, synchronization, and notification
US9094311B2 (en) 2009-01-28 2015-07-28 Headwater Partners I Llc Techniques for attribution of mobile device data traffic to initiating end-user application
US11405224B2 (en) 2009-01-28 2022-08-02 Headwater Research Llc Device-assisted services for protecting network capacity
US11363496B2 (en) 2009-01-28 2022-06-14 Headwater Research Llc Intermediate networking devices
US11337059B2 (en) 2009-01-28 2022-05-17 Headwater Research Llc Device assisted services install
US11190645B2 (en) 2009-01-28 2021-11-30 Headwater Research Llc Device assisted CDR creation, aggregation, mediation and billing
US11228617B2 (en) 2009-01-28 2022-01-18 Headwater Research Llc Automated device provisioning and activation
US11190545B2 (en) 2009-01-28 2021-11-30 Headwater Research Llc Wireless network service interfaces
US11190427B2 (en) 2009-01-28 2021-11-30 Headwater Research Llc Flow tagging for service policy implementation
US11218854B2 (en) 2009-01-28 2022-01-04 Headwater Research Llc Service plan design, user interfaces, application programming interfaces, and device management
US11219074B2 (en) 2009-01-28 2022-01-04 Headwater Research Llc Enterprise access control and accounting allocation for access networks
US20100278327A1 (en) * 2009-05-04 2010-11-04 Avaya, Inc. Efficient and cost-effective distribution call admission control
US8311207B2 (en) 2009-05-04 2012-11-13 Avaya Inc. Efficient and cost-effective distribution call admission control
US8010677B2 (en) * 2009-12-02 2011-08-30 Avaya Inc. Alternative bandwidth management algorithm
US20110131331A1 (en) * 2009-12-02 2011-06-02 Avaya Inc. Alternative bandwidth management algorithm
US20110202646A1 (en) * 2010-02-14 2011-08-18 Bhatia Randeep S Policy controlled traffic offload via content smart-loading
US9603085B2 (en) 2010-02-16 2017-03-21 Qualcomm Incorporated Methods and apparatus providing intelligent radio selection for legacy and non-legacy applications
GB2507941B (en) * 2010-02-22 2018-10-31 Avaya Inc Secure, policy-based communications security and file sharing across mixed media, mixed-communications modalities and extensible to cloud computing such as soa
US20110209194A1 (en) * 2010-02-22 2011-08-25 Avaya Inc. Node-based policy-enforcement across mixed media, mixed-communications modalities and extensible to cloud computing such as soa
US10015169B2 (en) * 2010-02-22 2018-07-03 Avaya Inc. Node-based policy-enforcement across mixed media, mixed-communications modalities and extensible to cloud computing such as SOA
US9215236B2 (en) 2010-02-22 2015-12-15 Avaya Inc. Secure, policy-based communications security and file sharing across mixed media, mixed-communications modalities and extensible to cloud computing such as SOA
US20110209195A1 (en) * 2010-02-22 2011-08-25 Avaya Inc. Flexible security boundaries in an enterprise network
US20110209196A1 (en) * 2010-02-22 2011-08-25 Avaya Inc. Flexible security requirements in an enterprise network
GB2507941A (en) * 2010-02-22 2014-05-21 Avaya Inc Secure, policy-based communications security and file sharing across mixed media, mixed-communications modalities and extensible to cloud computing such as soa
US20110209193A1 (en) * 2010-02-22 2011-08-25 Avaya Inc. Secure, policy-based communications security and file sharing across mixed media, mixed-communications modalities and extensible to cloud computing such as soa
US8607325B2 (en) 2010-02-22 2013-12-10 Avaya Inc. Enterprise level security system
WO2011103385A1 (en) * 2010-02-22 2011-08-25 Avaya Inc. Secure, policy-based communications security and file sharing across mixed media, mixed-communications modalities and extensible to cloud computing such as soa
US8434128B2 (en) 2010-02-22 2013-04-30 Avaya Inc. Flexible security requirements in an enterprise network
US9003038B1 (en) * 2010-04-13 2015-04-07 Qlogic, Corporation Systems and methods for bandwidth scavenging among a plurality of applications in a network
US8307111B1 (en) * 2010-04-13 2012-11-06 Qlogic, Corporation Systems and methods for bandwidth scavenging among a plurality of applications in a network
US9008673B1 (en) * 2010-07-02 2015-04-14 Cellco Partnership Data communication device with individual application bandwidth reporting and control
US8831658B2 (en) 2010-11-05 2014-09-09 Qualcomm Incorporated Controlling application access to a network
US9264868B2 (en) * 2011-01-19 2016-02-16 Qualcomm Incorporated Management of network access requests
US9178965B2 (en) 2011-03-18 2015-11-03 Qualcomm Incorporated Systems and methods for synchronization of application communications
US9154826B2 (en) 2011-04-06 2015-10-06 Headwater Partners Ii Llc Distributing content and service launch objects to mobile devices
US9571952B2 (en) 2011-04-22 2017-02-14 Qualcomm Incorporated Offloading of data to wireless local area network
US8718261B2 (en) 2011-07-21 2014-05-06 Avaya Inc. Efficient and cost-effective distributed call admission control
US8838086B2 (en) 2011-08-29 2014-09-16 Qualcomm Incorporated Systems and methods for management of background application events
US9137737B2 (en) 2011-08-29 2015-09-15 Qualcomm Incorporated Systems and methods for monitoring of background application events
US9225685B2 (en) 2012-06-04 2015-12-29 Google Inc. Forcing all mobile network traffic over a secure tunnel connection
US8875277B2 (en) * 2012-06-04 2014-10-28 Google Inc. Forcing all mobile network traffic over a secure tunnel connection
US20140108496A1 (en) * 2012-10-11 2014-04-17 Browsium, Inc. Systems and methods for browser redirection and navigation control
US11743717B2 (en) 2013-03-14 2023-08-29 Headwater Research Llc Automated credential porting for mobile devices
US10171995B2 (en) 2013-03-14 2019-01-01 Headwater Research Llc Automated credential porting for mobile devices
US10834583B2 (en) 2013-03-14 2020-11-10 Headwater Research Llc Automated credential porting for mobile devices
US11206665B2 (en) 2013-04-05 2021-12-21 Time Warner Cable Enterprises Llc Resource allocation in a wireless mesh network environment
US9072092B1 (en) * 2013-04-05 2015-06-30 Time Warner Cable Enterprises Llc Resource allocation in a wireless mesh network environment
US10440713B2 (en) 2013-04-05 2019-10-08 Time Warner Cable Enterprises Llc Resource allocation in a wireless mesh network environment
US9750025B2 (en) 2013-04-05 2017-08-29 Time Warner Cable Enterprises Llc Resource allocation in a wireless mesh network environment
CN107071698A (en) * 2013-07-29 2017-08-18 奇卡有限公司 Data bandwidth management system and method
US10341208B2 (en) * 2013-09-26 2019-07-02 Taiwan Semiconductor Manufacturing Co., Ltd. File block placement in a distributed network
US10848433B2 (en) 2013-11-27 2020-11-24 Interdigital Vc Holdings, Inc. Method for distributing available bandwidth of a network amongst ongoing traffic sessions run by devices of the network, corresponding device
US10044633B2 (en) * 2013-11-27 2018-08-07 Thomson Licensing Method for distributing available bandwidth of a network amongst ongoing traffic sessions run by devices of the network, corresponding device
US11838851B1 (en) * 2014-07-15 2023-12-05 F5, Inc. Methods for managing L7 traffic classification and devices thereof
US11551529B2 (en) 2016-07-20 2023-01-10 Winview, Inc. Method of generating separate contests of skill or chance from two independent events
US11308765B2 (en) 2018-10-08 2022-04-19 Winview, Inc. Method and systems for reducing risk in setting odds for single fixed in-play propositions utilizing real time input
US20220237039A1 (en) * 2019-05-28 2022-07-28 Micron Technology, Inc. Throttle Memory as a Service based on Connectivity Bandwidth
US11954042B2 (en) 2019-05-28 2024-04-09 Micron Technology, Inc. Distributed computing based on memory as a service
US11232033B2 (en) 2019-08-02 2022-01-25 Apple Inc. Application aware SoC memory cache partitioning

Also Published As

Publication number Publication date
US20040153545A1 (en) 2004-08-05
US7260635B2 (en) 2007-08-21

Similar Documents

Publication Publication Date Title
US7260635B2 (en) Software, systems and methods for managing a distributed network
US6671724B1 (en) Software, systems and methods for managing a distributed network
US20040103193A1 (en) Response time and resource consumption management in a distributed network environment
US11026106B2 (en) Method and system of performance assurance with conflict management in provisioning a network slice service
US7564872B1 (en) Apparatus and methods for providing event-based data communications device configuration
US6816903B1 (en) Directory enabled policy management tool for intelligent traffic management
US6502131B1 (en) Directory enabled policy management tool for intelligent traffic management
KR101947354B1 (en) System to share network bandwidth among competing applications
US6208640B1 (en) Predictive bandwidth allocation method and apparatus
US6459682B1 (en) Architecture for supporting service level agreements in an IP network
US20040054766A1 (en) Wireless resource control system
US20020059274A1 (en) Systems and methods for configuration of information management systems
US20030135638A1 (en) Dynamic modification of application behavior in response to changing environmental conditions
US20070177604A1 (en) Network system for managing QoS
WO2002039275A2 (en) Systems and methods for using distributed interconnects in information management environments
JPH1093655A (en) Method for designating with route incoming message and system
WO2002039261A2 (en) Systems and methods for prioritization in information management environments
WO2002041575A2 (en) Systems and method for managing differentiated service in inform ation management environments
WO2007140337A2 (en) Systems and methods for wireless resource management
WO2000035130A1 (en) Directory enabled policy management tool for intelligent traffic management
Volpato et al. An autonomic QoS management architecture for software-defined networking environments
Kamboj et al. A policy based framework for quality of service management in software defined networks
EP1317109B1 (en) System and method for controlling the adaptation of adaptive distributed multimedia applications
WO2003071743A1 (en) Software, systems and methods for managing a distributed network
JP6981367B2 (en) Network system and network bandwidth control management method

Legal Events

Date Code Title Description
AS Assignment

Owner name: CENTRISOFT CORPORATION, OREGON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:PANDYA, SUKETU J.;REEL/FRAME:019782/0049

Effective date: 20030403

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION