WO2006028547A1 - Method for managing resources of ad hoc wireless networks - Google Patents


Info

Publication number
WO2006028547A1
Authority
WO
WIPO (PCT)
Prior art keywords
node
cluster
tasks
resource
nodes
Prior art date
Application number
PCT/US2005/021574
Other languages
French (fr)
Inventor
Ionut E. Cardei
Lee B. Graba
Srivatsan Varadarajan
Allalaghatta Pavan
Original Assignee
Honeywell International Inc.
Priority date
Filing date
Publication date
Application filed by Honeywell International Inc. filed Critical Honeywell International Inc.
Publication of WO2006028547A1 publication Critical patent/WO2006028547A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W28/00Network traffic management; Network resource management
    • H04W28/16Central resource management; Negotiation of resources or communication parameters, e.g. negotiating bandwidth or QoS [Quality of Service]
    • H04W28/26Resource reservation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W72/00Local resource management
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W84/00Network topologies
    • H04W84/18Self-organising networks, e.g. ad-hoc networks or sensor networks

Definitions

  • the following description relates to telecommunications in general and to providing quality of service in a wireless network in particular.
  • One type of telecommunication network is a wireless network.
  • two or more devices communicate over a wireless communication link (for example, over a radio frequency (RF) communication link).
  • one or more remote nodes communicate with a central node (also referred to here as a "base station") over respective wireless communication links.
  • pre-existing network infrastructure is typically provided.
  • a network of base stations, each of which is coupled to one or more wired networks, is provided.
  • the remote nodes typically do not communicate with one another directly.
  • one example of such a network is a cellular telephone network.
  • an ad hoc network is made up of a dynamic group of nodes that communicate over wireless communication links.
  • examples of such applications include real-time and mission critical applications such as search and rescue, wireless multimedia, command and control, and combat support systems.
  • a system includes a wireless network comprising a plurality of clusters. Each cluster comprises a set of nodes including a cluster head node. Each node includes at least one resource.
  • an initiator node in a local cluster included in the wireless network receives an admission request to execute an application comprising a set of tasks, the initiator node forwards the admission request to a local cluster head node for the local cluster.
  • the local cluster head node requests that at least one of the set of nodes included in the local cluster provide resource availability information to the initiator node.
  • the initiator node attempts to map the set of tasks to a subset of the nodes included in the local cluster using the resource availability information received from nodes in the local cluster. If the initiator node is unable to map the set of tasks to the subset of nodes included in the local cluster, the local cluster head node forwards the admission request to the cluster head node of successive clusters in the wireless network in order to have at least one node in each of the successive clusters send resource availability information to the initiator node until the initiator node is able to map the set of tasks to a subset of the nodes in the wireless network or until there are no additional clusters to forward the admission request to. The initiator node attempts to map the set of tasks to a subset of the nodes from which resource availability information has been received.
  • a method in another embodiment, includes attempting to map a set of tasks to at least one node within a first cluster of the wireless network based on resource availability of the nodes within the first cluster.
  • the wireless network has a plurality of clusters. Each cluster includes at least one of a plurality of nodes.
  • the method further includes, if unable to map the set of tasks to said at least one node in the first cluster, attempting to map the set of tasks to at least one node in at least one of the first cluster and at least one of the other clusters in the wireless network based on resource availability of the nodes within the first cluster and the at least one of the other clusters in the wireless network.
  • a system in another embodiment, includes a wireless network comprising a plurality of clusters. Each cluster includes a set of nodes including a cluster head node. Each node includes at least one resource.
  • an initiator node in a local cluster included in the wireless network receives an admission request to execute an application comprising a set of tasks, the initiator node forwards the admission request to a local cluster head node for the local cluster.
  • the local cluster head node requests that at least one of the set of nodes included in the local cluster provide resource availability information to the initiator node.
  • the initiator node attempts to map the set of tasks to a subset of the nodes included in the local cluster using the resource availability received from nodes in the local cluster. If the initiator node is unable to map the set of tasks to the subset of nodes included in the local cluster, the initiator node requests that the local cluster head node forward the admission request to at least one remote cluster head node of at least one remote cluster included in the wireless network. When the admission request is forwarded to the at least one remote cluster head node, the at least one remote cluster head node requests that at least one of the set of nodes included in the at least one remote cluster provide resource availability information to the initiator node. The initiator node attempts to map the set of tasks to a subset of the nodes included in at least one of the local cluster and the at least one remote cluster using the resource availability received from nodes in the local cluster and the at least one remote cluster.
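The cluster-by-cluster admission search described in the bullets above can be sketched in a few lines of Python. Everything here (the function names, the `try_map` callback, the shape of the node data) is an illustrative assumption, not a term from the patent; the sketch only shows the control flow: try the local cluster first, then widen to remote clusters one at a time.

```python
# Sketch of the cluster-by-cluster admission search described above.
# All names (try_admit, try_map) are illustrative assumptions.

def try_admit(tasks, local_cluster, remote_clusters, try_map):
    """Widen the search one cluster at a time until the tasks can be
    mapped, or until every cluster's availability has been collected."""
    # The local cluster head asks its member nodes to report availability.
    available_nodes = list(local_cluster)
    mapping = try_map(tasks, available_nodes)
    if mapping is not None:
        return mapping  # admitted using local-cluster resources only
    # On failure, the request is forwarded to remote cluster heads,
    # accumulating availability information until a mapping succeeds
    # or no clusters remain to be asked.
    for cluster in remote_clusters:
        available_nodes.extend(cluster)
        mapping = try_map(tasks, available_nodes)
        if mapping is not None:
            return mapping
    return None  # admission request rejected
```

Here `try_map` stands in for the mapping step (for example, a greedy fit over the reported availability, as the later bullets describe); it returns a task-to-node assignment or `None`.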
  • a first node includes a wireless transceiver to send and receive data over a wireless network, a processor in communication with the wireless transceiver, and a tangible medium, in communication with the processor, in which program instructions are embodied.
  • the program instructions when executed by the processor, cause the first node to receive an admission request from a client.
  • the admission request requesting that a set of tasks be executed.
  • the program instructions when executed by the processor, cause the first node to forward the admission request to a local cluster head node for a local cluster in order to have at least one node in the local cluster send resource availability information to the first node.
  • the first node is a member of the local cluster.
  • the program instructions when executed by the processor, cause the first node to receive resource availability information from the at least one node in the local cluster and attempt to map the set of tasks to at least a subset of the nodes included in the local cluster using the resource availability information received from the at least one node in the local cluster.
  • the program instructions when executed by the processor, cause the first node to, if unable to map the set of tasks to the subset of nodes included in the local cluster, request that the local cluster head node of the local cluster forward the admission request to at least one remote cluster head node of at least one remote cluster included in the wireless network in order to have at least one node in the at least one remote cluster send resource availability information to the first node.
  • the program instructions when executed by the processor, cause the first node to, if unable to map the set of tasks to the subset of nodes included in the local cluster, attempt to map the set of tasks to at least a subset of the nodes included in at least one of the local cluster and the at least one remote cluster using the resource availability received from the at least one node in at least one of the local cluster and the at least one remote cluster.
  • software embodied on a tangible medium readable by a programmable processor included in a first node of a wireless network.
  • the wireless network includes a plurality of clusters.
  • the software includes program instructions executable on at least one programmable processor included in the first node.
  • the program instructions are operable to cause the first node to receive an admission request from a client, the admission request requesting that a set of tasks be executed.
  • the program instructions are operable to cause the first node to forward the admission request to a local cluster head node for the local cluster in order to have at least one node in the local cluster send resource availability information to the first node.
  • the first node is a member of the local cluster.
  • the program instructions are operable to cause the first node to receive resource availability information from the at least one node in the local cluster and attempt to map the set of tasks to at least a subset of the nodes included in the local cluster using the resource availability information received from the at least one node in the local cluster.
  • the program instructions are operable to cause the first node to, if unable to map the set of tasks to the subset of nodes included in the local cluster, request that the local cluster head node of the local cluster forward the admission request to at least one remote cluster head node of at least one remote cluster included in the wireless network in order to have at least one node in the at least one remote cluster send resource availability information to the first node.
  • the program instructions are operable to cause the first node to, if unable to map the set of tasks to the subset of nodes included in the local cluster, attempt to map the set of tasks to at least a subset of the nodes included in at least one of the first cluster and the at least one remote cluster using the resource availability received from the at least one node in at least one of the first cluster and the at least one remote cluster.
  • a first node includes means for sending and receiving data over a wireless network, means for receiving an admission request from a client, the admission request requesting that a set of tasks be executed, and means for forwarding the admission request to a local cluster head node for a local cluster in order to have at least one node in the local cluster send resource availability information to the first node.
  • the first node is a member of the local cluster.
  • the first node further includes means for receiving resource availability information from the at least one node in the local cluster and means for attempting to map the set of tasks to at least a subset of the nodes included in the local cluster using the resource availability information received from the at least one node in the local cluster.
  • the first node further includes means for requesting that the local cluster head node of the local cluster forward the admission request to at least one remote cluster head node of at least one remote cluster included in the wireless network in order to have at least one node in the at least one remote cluster send resource availability information to the first node, if unable to map the set of tasks to the subset of nodes included in the local cluster.
  • the first node further includes means for attempting to map the set of tasks to at least a subset of the nodes included in at least one of the local cluster and the at least one remote cluster using the resource availability received from the at least one node in at least one of the local cluster and the at least one remote cluster, if unable to map the set of tasks to the subset of nodes included in the local cluster.
  • FIG. 1 is a block diagram of one exemplary embodiment of an ad hoc wireless network.
  • FIG. 2 is a block diagram of one embodiment of a combat support system.
  • FIG. 3 is a block diagram illustrating one embodiment of a system for resource management.
  • FIGS. 4A-4B, 5A-5B, and 6A-6B are flow diagrams of one embodiment of methods of admitting a distributed application in an ad hoc wireless network having a cluster topology.
  • FIGS. 7A-7D are block diagrams illustrating the operation of the embodiment of the application admission protocol shown in FIGS. 4A-4B, 5A-5B, and 6A-6B.
  • FIG. 8 is a simplified block diagram of one embodiment of a node.
  • FIG. 1 is a block diagram of one exemplary embodiment of an ad hoc wireless network 100.
  • network 100 is a mobile ad hoc wireless network 100 (also referred to here as a "MANET").
  • Network 100 is an ad hoc wireless network that includes a dynamic set of nodes 102. Over time, various nodes typically will join and leave the network.
  • the nodes 102 are organized in clusters 108.
  • One of the nodes 102 in each cluster is designated as the "cluster head" node 102.
  • the clusters 108 are formed based on traffic locality and node mobility. In another implementation, the clusters 108 are formed based on logical membership and mobility patterns. In the embodiment shown in FIG. 1, the cost of communication within a cluster is typically lower than between clusters, though in other embodiments this is not necessarily the case.
  • one or more distributed applications 104 are executed by the nodes 102. Two distributed applications 104-1 and 104-2, respectively, are shown in FIG. 1.
  • Each distributed application 104 comprises one or more tasks 106 that are executed by a subset of the nodes 102 in the network 100.
  • the distributed applications 104-1 and 104-2 comprise tasks 106-1 and tasks 106-2, respectively.
  • Each distributed application 104 uses various resources in the course of being executed.
  • one type of resource is provided by, and is characterized relative to, a single node 102.
  • This type of resource is referred to here as a "node resource."
  • node resources include processing time, memory usage, and energy.
  • Another type of resource is characterized relative to a pair of nodes 102 and is referred to here as a "network resource."
  • Network bandwidth between two nodes 102 is one example of a network resource and is specified as a source-destination pair.
  • network 100 supports periodic distributed applications 104 with a pipeline topology comprising a chain of communication tasks.
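For illustration, the two resource types and the pipeline topology described above might be modeled as follows. All class and field names are assumptions made for this sketch, not terms from the patent: node resources attach to a single task, while the network resource (bandwidth) attaches to each source-destination pair along the pipeline.

```python
# Minimal data model for the resource types described above.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    cpu_load: float   # node resource: fraction of a CPU required
    memory_kb: int    # node resource

@dataclass
class Link:
    src: str              # task producing the data
    dst: str              # task consuming the data
    bandwidth_kbps: int   # network resource: per source-destination pair

def pipeline(tasks, bandwidth_kbps):
    """Build the chain of links for a pipeline-topology application:
    each task feeds the next one in the list."""
    return [Link(a.name, b.name, bandwidth_kbps)
            for a, b in zip(tasks, tasks[1:])]
```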
  • FIG. 2 is a block diagram of one embodiment of a combat support system 200.
  • An ad hoc wireless network is used to link the various devices (that is, nodes) that are included in the network.
  • a first unmanned air vehicle 202 (for example, a PREDATOR drone) monitors an enemy target 204.
  • the first unmanned air vehicle 202 delivers real-time (that is, time-critical data) surveillance data (for example, high data rate video and/or infrared data) to a fire control terminal 206 operated by one or more soldiers.
  • the surveillance data from the first unmanned air vehicle 202 is routed to the fire control terminal 206 via a second unmanned air vehicle 208.
  • the fire control terminal 206, in such an embodiment, is used to control a weapon 210 (for example, to fire a howitzer at the enemy target 204).
  • Such control information is time-critical.
  • Control information from the fire control terminal 206 is routed to the weapon 210 via the second unmanned air vehicle 208.
  • This type of mission-critical application demands strict limits on end-to-end latency and requires significant bandwidth for network connections.
  • Embodiments of the methods, devices, and systems described here are suitable for use in such an embodiment, though it is to be understood that such methods, devices, and systems are suitable for use with other types of applications and networks.
  • FIG. 3 is a block diagram illustrating one embodiment of a system 300 for resource management.
  • the embodiment of system 300 is described here as being implemented on each of the nodes 102 of the wireless network 100 of FIG. 1, though it is to be understood that other embodiments of system 300 are implemented in other ways and/or using other networks 100.
  • the system 300 includes an application service manager 302 and one or more resource managers 304.
  • Each of the resource managers 304 manages one or more resources available to an application 104 executing on the node 102.
  • Resources that are available to the node 102 include, for example, node resources such as CPUs, memory, storage, energy and network resources such as buffers and communication bandwidth.
  • all the resource managers 304 export a common interface for admission, adaptation, and feedback adaptation that allows resource managers 304 for different resources and/or policies to be "plugged into" the system 300 relatively simply.
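The common plug-in interface described above might look like the following sketch. The method names (`available`, `admit`, `adapt`) and the single-resource model are assumptions chosen for illustration, not the patent's actual interface; the point is that every resource manager exposes the same admission/adaptation surface, so new resource types can be added without changing the framework.

```python
# Sketch of a common "plug-in" interface for resource managers.
# Names and signatures are illustrative assumptions.
from abc import ABC, abstractmethod

class ResourceManager(ABC):
    @abstractmethod
    def available(self) -> float:
        """Report the unused capacity of the managed resource."""

    @abstractmethod
    def admit(self, task: str, amount: float) -> bool:
        """Reserve capacity for a task; return False if it does not fit."""

    @abstractmethod
    def adapt(self, task: str, new_amount: float) -> None:
        """Lower (or raise) an already-admitted task's allocation."""

class CpuResourceManager(ResourceManager):
    """Manages the local CPU load as a single shared capacity."""
    def __init__(self, capacity: float = 1.0):
        self._capacity = capacity
        self._allocations = {}  # task name -> current allocation

    def available(self) -> float:
        return self._capacity - sum(self._allocations.values())

    def admit(self, task: str, amount: float) -> bool:
        if amount > self.available():
            return False
        self._allocations[task] = amount
        return True

    def adapt(self, task: str, new_amount: float) -> None:
        self._allocations[task] = new_amount
```

A network resource manager (bandwidth per source-destination pair) could implement the same three methods and be registered alongside this one.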
  • the system 300 includes a resource manager that manages CPU load available at that node 102.
  • This resource manager is also referred to here as the "CPU resource manager" 304.
  • the CPU resource manager 304 administers the local (that is, local relative to the node 102) CPU resource.
  • based on the current CPU resource allocation for the node 102, the CPU resource manager 304 builds a process scheduler and controls the utilization of the CPU for the node 102 by applications 104 (and the tasks 106 comprising such applications 104) executing on that node 102.
  • the CPU resource manager 304 is implemented as a middleware layer wrapped on top of the local scheduler of the operating system executing on the node 102.
  • the CPU resource manager 304 implements a real-time scheduling policy, such as the rate monotonic algorithm.
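A rate-monotonic scheduler is commonly paired with the classic Liu and Layland schedulability test, which a CPU resource manager could use as its admission check. Pairing the patent's rate monotonic policy with this specific test is an illustration, not a detail from the document:

```python
# Liu & Layland admission test for rate-monotonic scheduling: n periodic
# tasks are guaranteed schedulable if their total CPU utilization does
# not exceed n * (2**(1/n) - 1). This is a sufficient (not necessary)
# condition.

def rm_schedulable(tasks):
    """tasks: list of (compute_time, period) pairs in the same time unit."""
    n = len(tasks)
    if n == 0:
        return True
    utilization = sum(c / p for c, p in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)
```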
  • the system 300 also includes a resource manager that controls communication bandwidth and delay.
  • This resource manager is also referred to here as the "network resource manager" 304.
  • the network resource manager 304 controls bandwidth allocation, enforces traffic shaping and extracts network topology from the routing layer.
  • the embodiment of an admission protocol described below in connection with FIGS. 4A-4B, 5A-5B, and 6A-6B works in cooperation with a cluster-based ad hoc routing protocol.
  • nodes 102 are organized in clusters 108 where the cost of communication within a cluster 108 may be lower than between clusters 108.
  • the application admission protocol described below in connection with FIGS. 4A-4B, 5A-5B, and 6A-6B attempts to improve admission quality by decreasing the cost of communication, based on the assumption that communication in a MANET is less reliable while processing resources are plentiful.
  • the application service manager 302 is responsible for the end-to-end resource management for each distributed application.
  • the application service manager 302 handles end-to-end QoS negotiation, admission, and adaptation by breaking end-to-end requests into individual contracts for basic resources that are passed to the appropriate resource managers 304 and to other application service managers 302 executing on other nodes 102 in the network 100.
  • Application service managers 302 receive admission requests from clients 306.
  • Clients 306, as used here, include users, applications 104, or other application service managers 302 executing on other nodes 102 in the network 100.
  • Each admission request for a particular distributed application comprises a minimum and maximum range of acceptable QoS (CPU load, network bandwidth) for the tasks 106 of that particular application 104.
  • the distributed applications 104 comprise distributed periodic tasks 106 that are connected (that is, communicate) in a pipeline topology.
  • tasks 106 from the same application 104 may be mapped to and executed on the same node 102, on different nodes 102 in the same cluster, or on nodes 102 from different clusters.
  • each case incurs an increasing cost of intra-application communication.
  • such task constraints are addressed by defining special resources appropriate to the particular constraint.
  • a "sensor" resource and a "target display" resource are defined.
  • the first unmanned air vehicle 202 executes a sensor resource manager 304 that manages access to the sensor resource available on the first unmanned air vehicle 202.
  • the fire control terminal 206 executes a target display resource manager that manages access to the target display resource available on the fire control terminal 206.
  • Application service managers 302 in the network 100 match requests made by tasks 106 for the sensor resource and the target display resource to the sensor resource manager 304 and the target display resource manager 304, respectively, as appropriate.
  • FIGS. 4A-4B, 5A-5B, and 6A-6B are flow diagrams of one embodiment of methods 400, 500, and 600, respectively, of admitting a distributed application in an ad hoc wireless network having a cluster topology.
  • the embodiments of method 400, 500, and 600 are described here as being implemented using the embodiment of network 100 shown in FIG. 1 and the embodiment of system 300 shown in FIG. 3, though it is to be understood that other embodiments are implemented in other ways.
  • the embodiment of method 400 shown in FIGS. 4A-4B is performed by an application service manager 302.
  • the application service manager 302 listens for an admission request sent from a client 306 executing on the same node 102 as the ASM 302 (block 402).
  • the admission request indicates that the client 306 wishes to have a distributed application 104 admitted and executed on one or more nodes 102 in the network 100.
  • the application service manager 302 receives the admission request (block 404).
  • the client 306 that sends the admission request is referred to here as the "initiator client."
  • the admission request is received by the application service manager 302 executing on the same node 102 as the initiator client 306.
  • the receiving application service manager 302 is also referred to here as the “initiator application service manager” or “initiator ASM.
  • the node 102 on which the initiator client 306 and the initiator ASM 302 are executing is referred to here as the "initiator node 102."
  • the cluster 108 that the initiator node 102 is a member of is referred to here as the "local cluster 108."
  • the cluster head node 102 of the local cluster 108 is referred to here as the "local cluster head node" or "local cluster head."
  • the admission request identifies the distributed application 104 that the client 306 wishes to have admitted (which is also referred to here as the "pending application"), the tasks 106 that comprise the distributed application 104 (which are also referred to here as the "pending tasks"), and minimum and maximum resource allocations for each resource that is needed by the pending tasks 106.
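As an illustration, such an admission request might be represented like this. The class and field names are assumptions for the sketch, not the patent's message format; the essential content is that each pending task carries a minimum and maximum acceptable allocation for every resource it needs.

```python
# Sketch of the admission request described above. Names are
# illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ResourceRange:
    minimum: float  # below this the task cannot run acceptably
    maximum: float  # above this, extra resource brings no benefit

@dataclass
class AdmissionRequest:
    application: str
    # task name -> {resource name -> acceptable allocation range}
    tasks: dict = field(default_factory=dict)

    def add_task(self, name, **resources):
        """Register a pending task with (min, max) per resource."""
        self.tasks[name] = {r: ResourceRange(lo, hi)
                            for r, (lo, hi) in resources.items()}
```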
  • the initiator ASM forwards the admission request on to the local cluster head node 102 (block 406).
  • the initiator ASM 302 forwards the admission request to the local cluster head node 102.
  • the cluster head node 102 receives the admission request and forwards the admission request on to each of the nodes 102 in the local cluster 108. Such forwarding is done in accordance with the underlying routing protocol used in the network 100.
  • the nodes 102 in the local cluster 108 are also referred to here as the "local nodes" 102.
  • the cluster head node 102 to which the admission request was most recently forwarded is also referred to here as the "current cluster head node."
  • the cluster associated with the current cluster head node is also referred to here as the "current cluster."
  • the local cluster head node 102 is the current cluster head node 102 and the local cluster 108 is the current cluster.
  • each of the nodes 102 that receives an admission message, in response, sends a message to or otherwise informs the initiator ASM 302 of the resource availability of that node 102.
  • this process also includes the initiator ASM 302 obtaining the resource availability for the initiator node 102, though not from the current cluster head node 102.
  • the current cluster head node 102, however, does provide its own resource availability to the initiator ASM 302.
  • the resource availability for a given node includes two parts -- the unused resource availability and the adaptation resource availability.
  • the unused resource availability of a given resource for a given node 102 includes the amount of that resource that is not currently being used by any task 106 executing on the particular node 102.
  • the adaptation resource availability of a given resource for a given node includes the amount of that resource that could be freed up by having one or more tasks adapt (that is, lower) their resource utilization of that resource.
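For one resource on one node, the two-part availability described above can be computed as follows. This is a sketch under the assumption that each running task records its current allocation and the minimum allocation it could adapt down to; the function name is illustrative.

```python
# Two-part resource availability as described above: unused capacity,
# plus capacity running tasks could give back by adapting down to
# their minimum allocations.

def resource_availability(capacity, allocations):
    """allocations: list of (current_allocation, minimum_allocation)
    pairs for the tasks currently using this resource.
    Returns (unused availability, adaptation availability)."""
    used = sum(current for current, _ in allocations)
    unused = capacity - used
    adaptation = sum(current - minimum for current, minimum in allocations)
    return unused, adaptation
```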
  • After the initiator ASM 302 has received the resource availability from the nodes 102 in the current cluster (block 408), the initiator ASM 302 attempts to map the pending tasks to one or more nodes from which resource availability has been received (block 410). In one implementation, the initiator ASM 302 attempts to map the pending tasks after either the initiator ASM 302 has received the resource availability from all of the nodes 102 in the current cluster 108 or a predetermined timeout period has elapsed.
  • a greedy admission process is used to map the pending tasks to one or more nodes 102 from which resource availability has been received.
  • the greedy admission process uses the resource availability of each node 102 that provided resource availability information.
  • the initiator ASM 302 attempts to map the pending tasks 106 to the nodes 102 from which resource availability has been received based on the unused resource availability of each such node 102 using a best fit/first fit algorithm (block 412).
  • the best fit/first fit algorithm attempts to map as many of the pending tasks 106 as possible on one node 102. If the greedy admission process does not result in enough resources being provided to the pending application 104 (checked in block 414), then the initiator ASM 302 attempts to map the pending tasks 106 to the nodes 102 from which resource availability has been received based on the unused resource availability and adaptation resource availability of such nodes 102 (block 416).
  • a best fit/first fit algorithm that attempts to fit as many of the pending tasks 106 on one node 102 as possible is used.
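A simplified single-resource version of that greedy step might look like the following. The real algorithm considers multiple resource types and both best-fit and first-fit orderings, so this is only a sketch of the packing idea: keep placing tasks on the current node until it is full, then move on, and fail if any task cannot be placed.

```python
# Sketch of the greedy first-fit mapping described above, for a single
# resource type. Names are illustrative assumptions.

def greedy_map(tasks, nodes):
    """Map each (task_name, demand) pair onto nodes, packing as many
    tasks as possible onto one node before moving to the next.
    nodes: {node_name: unused availability}.
    Returns {task_name: node_name}, or None if the tasks do not fit."""
    remaining = dict(nodes)  # don't mutate the caller's view
    mapping = {}
    for task, demand in tasks:
        for node, free in remaining.items():
            if free >= demand:
                remaining[node] = free - demand
                mapping[task] = node
                break
        else:
            return None  # admission fails at this cluster scope
    return mapping
```

On failure, the protocol described above retries with the adaptation resource availability added to each node's unused availability, and only then widens the search to remote clusters.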
  • the initiator ASM 302 attempts to map the pending tasks 106 using the adaptation resource availability by determining if the pending tasks 106 can be admitted by reducing the resource utilization of one or more tasks 106 executing on one or more nodes 102 that have previously been admitted and are already executing at the time the mapping process occurs.
  • a task 106 that should have its resource utilization lowered is referred to here as an "adapted task" 106.
  • a distributed application 104 that comprises at least one adapted task 106 is also referred to here as an "adapted application" 104.
  • the initiator ASM 302 sends a commit message to each node 102 having at least one pending task 106 mapped to that node 102 for execution thereon (block 420).
  • the commit message that is sent to each node 102 informs the node 102 which of the pending tasks 106 are to be executed on that node 102.
  • the commit message also indicates, for each pending task 106 to be executed on that node 102, the amount of each resource that should be used by that task 106.
  • the commit message indicates which, if any, of the tasks 106 currently running on that node 102 must be adapted and how they should be adapted (for example, by indicating which resources to reduce the utilization of and by how much).
  • the nodes 102 that receive the commit messages "commit" the resources identified in the commit message to the pending tasks 106 identified in the commit message.
  • the initiator ASM 302 sends an adapt message to the other nodes 102 on which the distributed application 104 executes that have not received a commit message (block 424).
  • Those nodes 102 that do not receive a commit message and on which the distributed application 104 executes need to be informed that the distributed application 104 has been adapted.
  • when a node 102 receives an adapt message, the node 102 is able to adjust the resource utilization for the distributed application 104 and notify its tasks, if appropriate.
  • the resource utilization for the adapted application 104 on that receiving node 102 is adjusted to be compatible with the application resource utilization on the other nodes 102 on which the application 104 executes (for example, as described below in connection with FIG. 6B).
  • the initiator ASM 302 requests that the local cluster head 102 forward the admission request on to another cluster 108 in the network 100 to check for resource availability (block 426).
  • Such other cluster 108 is referred to here as a "remote cluster." If there is no other remote cluster 108 left in the network 100 to check for resource availability (for example, when all the remote clusters 108 in the network 100 have previously been checked) (checked in block 428), the pending application 104 is not admitted for execution in the network 100 (block 430). This fact is communicated by the initiator ASM 302 to the initiator client 306.
  • the local cluster head 102 selects one such remote cluster 108 and forwards the admission request to the cluster head node 102 of the selected remote cluster 108 as described below in connection with FIG. 5B.
  • the cluster head node 102 for the selected remote cluster 108 receives the admission request and forwards the admission request on to each of the nodes 102 in the selected remote cluster 108.
  • the admission request is forwarded in accordance with the underlying routing protocol used in the network 100.
  • each node 102 in the remote cluster 108 sends a message to or otherwise informs the initiator ASM 302 of the resource availability for that node 102.
  • Method 400 then loops back to block 408, where the initiator ASM 302 receives the resource availability information communicated from the nodes 102 in the current cluster 108 and attempts to map the pending tasks to one or more nodes from which resource availability has been received.
  • FIGS. 5A-5B are flow diagrams of a method 500 of forwarding admission requests on to nodes 102 in a cluster.
  • the embodiment of method 500 shown in FIGS. 5A-5B is performed by an application service manager 302 executing on a cluster head node 102 in the network 100.
  • an admission request is sent to the cluster head node 102 (checked in block 502 of FIG. 5A)
  • the cluster head node 102 receives the admission request (block 504) and forwards the admission request on to the nodes 102 in the cluster 108 of which the cluster head node 102 is a member (block 506) .
  • the local cluster head node 102 receives an admission request from an initiator ASM 302 executing on the initiator node 102 in the local cluster 108
  • the local cluster head node 102 forwards the admission request on to the local nodes 102 in the local cluster 108.
  • Each cluster head node 102 also listens for a request from an initiator ASM 302 to forward the admission request on to another cluster 108 in the network 100 (block 508 shown in FIG. 5B) .
  • When the cluster head node 102 receives such a request, the cluster head node 102 is acting as the local cluster head node 102 for the initiator node 102 that sent the request.
  • the initiator ASM 302 makes such a request when the resources available from the nodes 102 in the local cluster 108 are not sufficient to admit the pending application 104.
  • the local cluster head node 102 selects another cluster 108 to check for resource availability (block 510) .
  • Such other clusters 108 are referred to here as "remote clusters" 108.
  • the local cluster head node 102 selects a remote cluster 108 that has not previously had its resource availability checked for that particular admission request.
  • each cluster head node 102 maintains a "preference list" that identifies the list of remote clusters 108 in the network 100.
  • the local cluster head node 102 selects the remote cluster 108 to check next using, for example, a round-robin policy. If there are no other remote clusters 108 left to check for resource availability for that particular admission request (for example, when all the remote clusters 108 in the network 100 have previously been checked) (checked in block 512) , the local cluster head node 102 communicates that fact to the initiator node 102 (block 514) .
  • the local cluster head node 102 forwards the admission request on to the cluster head node 102 of the selected remote cluster 108 (block 516) .
  • the cluster head node 102 of the selected remote cluster 108 receives the admission request and forwards the admission request to the nodes 102 in the selected remote cluster 108.
  • the nodes 102 in the selected remote cluster 108 communicate their resource availability to the initiator node 102 in response to the admission request.
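The cluster-head selection logic of blocks 508-516 might be sketched as below; the `ClusterHead` class, its method names, and the round-robin cursor are assumptions for illustration, not patent text.

```python
# Sketch of a local cluster head's remote-cluster selection (FIG. 5B):
# keep a "preference list" of remote clusters and pick the next one not
# yet checked for this admission request, round-robin.

class ClusterHead:
    def __init__(self, preference_list):
        self.preference_list = list(preference_list)  # remote cluster ids
        self._next = 0                                # round-robin cursor

    def select_remote(self, already_checked):
        """Pick the next remote cluster not yet checked for this request
        (block 510); return None when all have been checked (block 512)."""
        for _ in range(len(self.preference_list)):
            candidate = self.preference_list[self._next]
            self._next = (self._next + 1) % len(self.preference_list)
            if candidate not in already_checked:
                return candidate      # block 516: forward the request to it
        return None                   # block 514: tell initiator "no more"
```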
  • FIGS. 6A-6B are flow diagrams of a method 600 of processing admission requests received from the cluster head node.
  • method 600 is performed by each node 102 in the network 100 that receives an admission request from a cluster head node and that is to supply its resource availability to the initiator ASM 302 in response to the admission request.
  • the node 102 that performs method 600 is referred to here as the "receiving node.”
  • the functionality of method 600 is implemented as a part of the application service manager 302 executing on the receiving node 102, which interacts with the resource managers 304 on that node 102 as appropriate.
  • a receiving node 102 listens for an admission request originating from a cluster head node (block 602) .
  • the receiving node 102 receives the admission request (block 604) and, in response thereto, determines the availability on the receiving node 102 for each type of resource specified in the admission request (block 606) .
  • the application service manager 302 of the receiving node 102 contacts the resource manager 304 for each type of resource specified in the admission request.
  • Each contacted resource manager 304 determines the resource availability for the one or more resources managed by that resource manager 304.
  • the resources that a particular resource manager 304 manages are also referred to here as the "managed resources.”
  • each resource manager 304 determines, for each of its managed resources, the amount of that managed resource that is currently not being used (block 608) . This amount is also referred to here as the "unused resources" or "unused resource availability." Also, each resource manager 304 determines, for each of its managed resources, any additional amount of that resource that could be freed up by having one or more tasks 106 adapt (that is, lower) their resource utilization of that resource (block 610) . This amount is referred to here as the "adaptation resources" or "adaptation resource availability."
  • the adaptation resource availability determination is made based on the relative priority of the various tasks 106 that are using each managed resource.
  • each application 104 is assigned a priority level. If a first application 104 has a lower assigned priority level than the priority level assigned to a second application 104, the first application 104 and the tasks 106 that comprise the first application 104 have a lower priority than the second application 104 and the tasks that comprise the second application 104.
  • each resource manager 304 determines the adaptation resource availability, for each of its managed resources, by identifying those tasks 106 executing on the receiving node 102 that have a lower priority than the pending application 104 and that are utilizing that managed resource. For each such identified lower priority task 106, it is determined how much of that managed resource would be freed up if the lower priority task 106 reduced its resource utilization of that resource to the minimum level permitted under the lower priority task's QoS contract.
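A minimal sketch of the two determinations in blocks 608-610, assuming each task is described by a (priority, current use, contract minimum) triple; the field layout and function name are invented for illustration:

```python
# Compute the two availability figures a resource manager reports:
# the unused amount, plus the amount that could be freed by adapting
# tasks of lower priority than the pending application down to the
# minimum their QoS contracts permit.

def availability(capacity, tasks, pending_priority):
    """tasks: list of (priority, current_use, contract_minimum) triples."""
    unused = capacity - sum(use for _, use, _ in tasks)
    adaptation = sum(use - minimum
                     for prio, use, minimum in tasks
                     if prio < pending_priority)  # lower-priority tasks only
    return unused, adaptation
```

A class-based policy (blocks described next) would replace the priority comparison with a per-class predicate deciding whether adaptation is permitted under current circumstances.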
  • the adaptation resource availability determination is made based on, at least in part, a class assigned to the tasks 106 for a given managed resource.
  • each application 104 is assigned a QoS class (such as best efforts, essential, critical, etc.) for a given managed resource (such as a network resource) .
  • Each class defines a policy or other relationship between the tasks assigned to that class and the given managed resource. The policy determines under what circumstances and by how much the utilization of the given managed resource by such tasks can be adapted.
  • each resource manager 304 determines the adaptation resource availability, for each of its managed resources, by identifying those tasks 106 executing on the receiving node 102 that have an assigned class that permits adaptation under the circumstances existing at that moment. For each such identified task 106, it is determined how much of that managed resource would be freed up if that task 106 reduced its resource utilization of that resource to the minimum level permitted under that task's QoS contract and assigned class.
  • the total resource availability for the receiving node 102 is then sent to the initiator node 102 (block 612) .
  • the initiator node 102 is identified in the admission request received by the receiving node 102.
  • the node 102 reserves those resources for those pending tasks (block 614) .
  • the receiving node 102, for each such pending task 106, reserves the maximum amount of each such resource that is available, up to the maximum resource level specified in the admission request for the pending task 106.
  • the receiving node 102 treats the reserved resources, for the purposes of determining resource availability for subsequent admission requests, as if the associated pending task 106 has actually been committed on the receiving node 102.
  • both unused resources and adaptation resources are reserved in this manner.
  • the reserved resources remain in the reserved state until the receiving node 102 receives a commit message from the initiator node 102 related to the previously received admission request (checked in block 616) or until a timeout period has elapsed since the reserved resources were reserved (checked in block 618) .
  • a timeout period of 120 seconds is used.
  • the commit message sent from the initiator node 102 will specify which of the resources reserved on the receiving node 102 should actually be used to execute the associated pending tasks 106 on the receiving node 102.
  • the receiving node 102 commits each reserved resource specified in the commit message and starts execution of the associated pending task 106 (block 620) . Also, the receiving node 102 releases all the other reserved resources, if any (block 622) . If a commit message related to the previously received admission request is not received within the timeout period, the receiving node 102 releases all the reserved resources for that admission request (block 622) . After the reserved resources have been released, those resources are again available for subsequent admission requests. The overhead associated with "rolling back" such resource reservations when the reserved resources are not ultimately going to be used for the pending admission request is reduced in such an embodiment (for example, as compared to sending additional messages indicating that the reserved resources should be released) .
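The reserve/commit/timeout behavior of blocks 614-622 could be modeled as below. The `ReservationTable` class and its API are assumptions for illustration; only the 120-second timeout figure comes from the text.

```python
# Sketch of a receiving node's reservation bookkeeping: resources stay
# reserved until a commit message names the ones actually needed, or
# until the timeout elapses and everything is rolled back implicitly.

import time

class ReservationTable:
    TIMEOUT = 120.0                       # seconds, per one embodiment

    def __init__(self):
        self._reserved = {}               # request id -> (resources, t0)

    def reserve(self, request_id, resources):
        self._reserved[request_id] = (dict(resources), time.monotonic())

    def commit(self, request_id, needed):
        """Commit the resources named in the commit message and release
        the rest (blocks 620-622). Returns the committed subset."""
        resources, _ = self._reserved.pop(request_id)
        return {r: amt for r, amt in resources.items() if r in needed}

    def expire(self, now=None):
        """Release reservations older than TIMEOUT (blocks 618/622)."""
        now = time.monotonic() if now is None else now
        stale = [rid for rid, (_, t0) in self._reserved.items()
                 if now - t0 > self.TIMEOUT]
        for rid in stale:
            del self._reserved[rid]
        return stale
```

Expiring rather than messaging is what reduces the rollback overhead the text mentions: no release message is ever needed.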
  • In addition to listening for and processing admission requests, each receiving node 102 also listens for and processes adapt messages that are sent out by an initiator node 102 as described above in connection with FIGS. 4A-4B. As noted above, an adapt message notifies the receiving node 102 that a distributed application 104 executing on the receiving node 102 has been adapted. An adapted application 104 is running on the receiving node 102 if one or more of the tasks 106 comprising the adapted application 104 are executing on the receiving node 102. When a receiving node 102 receives an adapt message, the receiving node 102 adapts the tasks 106 for the adapted application 104 that are executing on the receiving node 102, if appropriate (block 632) .
  • When a distributed application 104 has been adapted, at least one of the tasks 106 that comprise the adapted application 104 has had its resource utilization lowered on the node 102 on which that task 106 executes.
  • the nodes 102 on which such tasks 106 execute are sent a commit message.
  • an adapt message is sent to the other nodes 102 on which the adapted application 104 executes, if any, that have not received a commit message.
  • Those nodes 102 that do not receive a commit message and on which the adapted application 104 executes need to be informed that the application 104 has been adapted.
  • When such a node 102 receives an adapt message, the node 102 is able to adjust the resource utilization for the adapted application 104 to be compatible with the application's resource utilization on other nodes 102 (for example, as described below in connection with FIG. 6B) .
  • a distributed application 104 has a pipeline topology. At some point during execution of the distributed application 104, one late-stage task 106 in the application 104 is adapted so that the rate at which the task 106 processes input is reduced. The input that is processed by such an adapted task 106 is the output from other earlier-stage tasks 106 in the same distributed application 104.
  • the nodes 102 executing such earlier-stage tasks 106 receive an adapt message, the nodes 102 are able to reduce the output of such earlier-stage tasks 106 to match the rate at which the late-stage task 106 can process such output. In this way, resources can be more efficiently used.
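The rate matching described above can be illustrated with a hypothetical helper that caps every stage at or before the adapted stage to the adapted stage's new rate; the function and its arguments are invented for illustration:

```python
# When a late-stage task in a pipeline is adapted to a lower processing
# rate, upstream stages cap their output to match, so no resources are
# spent producing output the adapted stage cannot consume.

def matched_rates(stage_rates, adapted_stage, new_rate):
    """Cap every stage at or before adapted_stage to new_rate."""
    return [min(r, new_rate) if i <= adapted_stage else r
            for i, r in enumerate(stage_rates)]
```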
  • each node 102 that receives an adapt message releases any resources reserved for the pending application 104, if any (block 634) . That is, if the node 102 that received the adapt message had previously reserved resources for the pending application, those reserved resources are released since that node 102 knows, by virtue of receiving the adapt message, that it will not receive a commit message. In other embodiments and implementations, however, this may not be the case.
  • FIGS. 7A-7D are block diagrams illustrating the operation of the embodiment of the application admission protocol shown in FIGS. 4A-4B, 5A-5B, and 6A-6B.
  • the initiator client 306 sends an admission request to the initiator application service manager 302 that is executing on the initiator node 102 (shown using a solid line in FIG. 7A) .
  • the initiator client 306 and the initiator ASM 302 are executing on the same node, the initiator node 102.
  • the initiator ASM 302 receives the admission request and forwards the admission request to the local cluster head node 102 (shown using a dashed line in FIG. 7A) .
  • the cluster head node 102 (more specifically, the ASM 302 executing on the cluster head node 102) receives the admission request and forwards the admission request to all the nodes 102 in the local cluster 108 (shown using dotted lines in FIG. 7A) .
  • Each of the nodes 102 (more specifically, the ASM 302 executing on each node 102) in the local cluster 108 determines the resource availability for that node 102.
  • the resource availability includes both the unused resource availability and the adaptation resource availability.
  • Each node 102 in the local cluster 108 sends or otherwise informs the initiator ASM 302 of that node's resource availability (shown using solid lines in FIG. 7B) . Also, each such node 102 in the local cluster 108 reserves the resources (both unused resources and adaptation resources) needed for the pending application 104.
  • After the initiator ASM 302 has received the resource availability from the nodes 102 in the local cluster 108 (or after a predetermined period has elapsed) , the initiator ASM 302 attempts to map the pending tasks 106 that comprise the pending application 104 on to the nodes 102 that provided resource availability information to the initiator ASM 302. In this example, there are not enough available resources on the nodes 102 in the local cluster 108. Therefore, the initiator ASM 302 is not able to successfully map all pending tasks 106 to the nodes 102 in the local cluster 108. As a result, the initiator ASM 302 requests (shown using a solid line in FIG. 7C) that the local cluster head 102 forward the admission request on to another cluster 108 in the network to check for resource availability in that other cluster 108.
  • the local cluster head 102 receives the request and selects a remote cluster 108 in the network 100 that has not previously been checked for resource availability for the current pending application 104.
  • the local cluster head 102 forwards the admission request to the cluster head node 102 for the selected remote cluster 108 (shown using a dashed line in FIG. 7C) .
  • the cluster head node 102 of the selected remote cluster 108 forwards the admission request on to all the nodes 102 in the selected remote cluster 108 (shown using dotted lines in FIG. 7C) .
  • Each of the nodes 102 (more specifically, the ASM 302 on each of the nodes 102) in the selected remote cluster 108 determines the resource availability for that node 102. As noted above, this determination includes determining both the unused resource availability and the adaptation resource availability for each node 102.
  • Each node 102 in the selected remote cluster 108 sends or otherwise informs the initiator ASM 302 of that node's resource availability (shown with solid lines in FIG. 7D) . Also, each such node 102 in the selected remote cluster 108 reserves the resources (both unused resources and adaptation resources) needed for the pending application 104.
  • After the initiator ASM 302 has received the resource availability from the nodes 102 in the selected remote cluster 108 (or after a predetermined period has elapsed) , the initiator ASM 302 attempts to map the pending tasks 106 that comprise the pending application 104 on to the nodes 102 that provided resource availability information to the initiator ASM 302 (that is, the nodes 102 in the local cluster 108 and the selected remote cluster 108) . In this example, there are enough available resources in the local cluster 108 and the remote cluster 108 to admit the pending application.
  • the initiator ASM 302 on the initiator node 102 sends commit messages to those nodes 102 in the local cluster 108 and the selected remote cluster 108 on which a pending task has been mapped and will execute (shown using a solid line in FIG. 7E) .
  • Each node 102 that receives a commit message commits those resources and executes those pending tasks 106 identified in the commit message received by that node 102. Also, if the commit message indicates that another task executing on the receiving node 102 should be adapted, the receiving node 102 adapts the indicated task as specified in the commit message.
  • Each node 102 that receives a commit message also releases those resources not needed to execute any pending task 106.
  • one distributed application 104 needs to be adapted in order to admit the pending application 104.
  • the initiator ASM 302 sends an adapt message to those nodes 102 in the network 100 on which the adapted application 104 executes and that have not received a commit message (shown using a solid line in FIG. 7F) .
  • Each node 102 that receives an adapt message adapts the tasks 106 identified in the adapt message received by that node 102 as appropriate.
  • each node 102 that receives an adapt message releases any resources reserved for the pending application 104, if any.
  • FIG. 8 is a simplified block diagram of one embodiment of a node 800.
  • the node 800 is suitable for use in the ad hoc wireless network 100 shown in FIG. 1 and is suitable for implementing the methods and techniques described here.
  • the node 800 includes a wireless transceiver subsystem 802.
  • the wireless transceiver subsystem 802 is a radio frequency (RF) transceiver subsystem.
  • the wireless transceiver subsystem 802 includes appropriate components (for example, antenna, amplifiers, modulators, demodulators, analog-to-digital (A/D) converters, digital-to-analog (D/A) converters, etc.) to handle the transmission and reception of wireless data over a wireless network.
  • the node 800 also includes a control subsystem 804.
  • the control subsystem includes a programmable processor 806.
  • Programmable processor 806 is coupled to the wireless transceiver subsystem 802 in order to monitor and control the transmission and reception of wireless data over a wireless network.
  • the control subsystem 804 also includes a memory 808 in which program instructions and data used by the programmable processor 806 are stored and from which they are retrieved.
  • One or more of the methods and techniques described here, in one embodiment, are implemented using software executed on the programmable processor 806.
  • Such software comprises appropriate program instructions 810 that are stored in a tangible medium readable by the programmable processor 806 (for example, in memory 808) .
  • the instructions when executed by the programmable processor 806, cause the node 800 to carry out at least a portion of the functionality of the methods and techniques described here as being performed by a node.
  • the software creates and/or interacts with appropriate data structures 812 stored in memory 808.
  • the following describes an exemplary resource allocation model for a single distributed application 104.
  • a single-application resource allocation model characterizes the mapping of the one or more tasks 106 of the single distributed application to one or more nodes 102 in the network 100.
  • Such a resource allocation model also characterizes the allocation of resources among the one or more tasks 106 that comprise the distributed application 104. Examples of such resources are node resources and network resources.
  • Another assumption is that a network resource is modeled as a limited bucket associated with a pair of nodes 102 that communicate over one or more wireless network links.
  • the example of a network resource used in the exemplary single- application resource allocation model described here is communication bandwidth between two nodes 102 in the network 100.
  • the single-application model defines node resources and network resources and formalizes allocation constraints.
  • the allocation problem is formulated as an optimization problem.
  • An admission request (also referred to here as a "QoS request") for distributed application T is described, in such a model, by a set of quality-of-service descriptors, one for each QoS dimension.
  • the QoS request is described by matrices Q_m and Q_M.
  • Matrix R^0 describes the available resources before application admission.
  • the admission control admits the s tasks in the system.
  • the mapping of the s tasks on the n nodes is given by matrix:
  • the resource management system for example, an initiator application service manager
  • the single-application resource allocation described here assumes that network resources are allocated independently for each communication link between any two nodes.
  • the network resource is modeled by a limited bucket for each bi-directional link established between a pair of nodes (i, j).
  • Matrix NR^0 defines the available network resource at admission time, where NR^0 ∈ [0, ∞)^(n×n) and NR^0_ij defines the network resource available to the (i, j) communication link.
  • NQ_m and NQ_M define the minimum and maximum network resource requirement, respectively, for each communication link (i, j) for which tasks T_i and T_j communicate, where NQ_m, NQ_M ∈ [0, ∞)^(s×s).
  • the set TC contains all required connections between tasks.
  • TC = {(i, j) | T_i communicates with T_j, i ≠ j}.
  • Matrix NQ^a defines the allocated network resource: NQ^a_ij is the network resource allocated for communication link (T_i, T_j), where NQ^a ∈ [0, ∞)^(s×s).
  • This resource model assumes there is a (possibly multi-hop) path in the network between any two nodes and that all resource allocations for connections are independent.
  • the resource management system maps the tasks to nodes and allocates resources for each connection between two tasks.
  • a connection between tasks T_i and T_j is mapped to a connection between nodes Map_i and Map_j.
  • the system allocates NQ^a_ij resource to the (i, j) connection by subtracting the same resource amount NQ^a_ij from the available network resource for connection (Map_i, Map_j), where after allocation:
  • NR^0[Map_i, Map_j]' = NR^0[Map_i, Map_j] - NQ^a_ij.
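The allocation update above can be sketched with plain nested dictionaries standing in for the NR^0 matrix; `allocate_link` and its error handling are illustrative assumptions, not the patented implementation:

```python
# Network-resource bookkeeping for the single-application model:
# allocating NQa_ij to the task connection (Ti, Tj) subtracts the same
# amount from the bucket of the node pair (Map_i, Map_j).

def allocate_link(nr0, mapping, i, j, nqa_ij):
    """Subtract nqa_ij from the (Map_i, Map_j) bucket; raise if the
    availability constraint (allocation cannot exceed NR0) would fail."""
    a, b = mapping[i], mapping[j]
    if nr0[a][b] < nqa_ij:
        raise ValueError("insufficient network resource on link")
    nr0[a][b] -= nqa_ij          # NR0[Map_i, Map_j]' = NR0[...] - NQa_ij
    return nr0[a][b]
```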
  • Matrix NR a defines the allocated network resources for an application (T 1 , . . . , T s , TC) :
  • this resource model can accommodate alternate allocation strategies by adjusting the equations for resource constraints and resource availability update.
  • network resources are modeled based on node communication capacity and explicit end-to-end path information that would be available from the routing protocol.
  • the single-application resource allocation model described here also specifies various conditions that any resource allocation must meet in such an embodiment. These conditions are also referred to here as “constraints.” One type of constraint relates to node resources and such constraints are referred to here as “node resource constraints.” In the exemplary single-application resource allocation model described here, one node resource constraint relates to task mapping and specifies that each task is admitted on exactly one node as specified by:
  • Another node constraint relates to application quality-of-service and specifies that allocated resources satisfy QoS requirements:
  • Another node constraint relates to resource availability and specifies that a particular resources allocation is limited by availability:
  • Another type of constraint relates to network resources and such constraints are referred to here as "network resource constraints.”
  • one network resource constraint relates to application QoS and specifies that all connections between tasks must be allocated network resources between minimum required and maximum needed.
  • Map maps from NR^a to NQ^a, where NQ_m ≤ NQ^a ≤ NQ_M.
  • Another network constraint relates to resource availability and specifies that the allocated network resource for all connections cannot exceed the available limit. That is, NR^a ≤ NR^0.
  • the optimal resource allocation is designed to maximize the application QoS utility defined as a function of the combined QoS satisfaction ratio for all tasks.
  • the application's overall utility is a linear combination of the node task utility and the network utility.
  • the node utility of task T_i is a weighted sum of resource utilities:
  • Weights W_j ≥ 0, j = 1, . . . , m.
  • a network utility is defined.
  • the network utility nu for application (T, TC) is defined as:
  • an application utility is defined.
  • the application utility for a multi-staged admitted application is defined as the weighted sum of node task utilities and the network utility:
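The utility formulas themselves are elided in this text; the sketch below uses one plausible form consistent with the surrounding description (per-resource utility as the QoS satisfaction ratio, node utility as a weighted sum of those ratios, application utility as a weighted sum of task utilities plus the weighted network utility). Treat every formula and name here as an assumption:

```python
# Hypothetical utility computation for one admitted application.

def satisfaction(qa, qm, qM):
    """QoS satisfaction ratio for one resource: 0 at the minimum, 1 at
    the maximum requested level."""
    return 1.0 if qM == qm else (qa - qm) / (qM - qm)

def node_utility(weights, qa, qm, qM):
    """Weighted sum of per-resource satisfaction ratios for one task."""
    return sum(w * satisfaction(a, m, M)
               for w, a, m, M in zip(weights, qa, qm, qM))

def application_utility(a_weights, task_utils, net_util):
    """a_weights has one entry per task plus a final network weight."""
    *task_w, net_w = a_weights
    return sum(w * u for w, u in zip(task_w, task_utils)) + net_w * net_util
```

Raising a resource weight W_j (or a task weight a_i) steers the optimizer toward satisfying that resource or task first, which is the user-policy mechanism discussed below.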
  • the optimal allocation problem is to determine the task mapping X, a node resource allocation Q^a, and a network resource allocation NQ^a, so that the node resource constraints and network resource constraints described above are obeyed and the application utility v described above is maximized.
  • This is a mixed integer programming optimization problem with a fairly complex structure.
  • the resource allocation problem is NP-hard by reduction to the integer bin-packing problem.
  • the technique for admitting a distributed application described above in connection with FIGS. 1-8 attempts to overcome the complexity of the optimization problem.
  • the admission utility function uses weighted sums to inject user-defined policies and application semantics into the allocation process.
  • the relation between the resource type weights W_j may impact the contribution of the task to the overall application utility, thus being a factor in the final mapping of tasks to nodes.
  • the weight W j can be directly correlated to the relative importance the user assigns to a resource type j.
  • the weights a_i from the application utility formula can be adjusted by users to express preference towards maximizing utility of specific tasks or of the network bandwidth allocation (a_(s+1)) .
  • Such a multiple-application resource allocation model characterizes the mapping of the one or more tasks 106 of multiple distributed applications to one or more nodes 102 in the network 100.
  • Such resource allocation model also characterizes the allocation of resources among the one or more tasks 106 that comprise each of the multiple distributed applications. Examples of such resources are node resources and network resources.
  • each of the multiple distributed applications is characterized by a priority, common for all the application's stages (that is, tasks) . It is also assumed that each of the distributed applications is admitted if all of its stages are admitted to the system. A constraint that applies to this model is that higher priority applications are never preempted by lower priority applications.
  • the objective of this exemplary multi- application model is to maximize the QoS satisfaction level for higher-priority applications. In other embodiments, the optimization goal is to maximize the overall number of admitted applications.
  • Each application A_i is characterized by a priority p_i > 0.
  • the resource management system maps application tasks and allocates resources.
  • the vector Y = (y_1, . . . , y_t) indicates the application admission:
  • the global objective function for this exemplary multi-application model is normalized for individual application utility values.
  • The utility contributed by each application to the overall objective should be proportional to the QoS (that is, amount of resources) received from the system and not to the number of stages.
  • the application node utility for application i is therefore normalized on the number of stages s_i, and the network utility is also normalized on the number of connections.
  • the global objective function assigns more weight to utility contributed by higher priority applications.
  • the goal in this exemplary model is to maximize v*:
  • the set of constraints for the optimization problems relates to resource allocation and priority-based preemption.
  • the resource constraints are an extension of those listed above in connection with the single-application resource application model.
  • the constraints specify that the resources allocated to all applications should not exceed initial availability:
  • application admissions come at different times.
  • an alternative goal for the admission procedure is to maximize the overall number of accepted applications, where higher priority applications have precedence. Later, after admission, a QoS expansion phase would increase the QoS for admitted applications from remaining resources.
  • the large optimization problem that results from such an embodiment can be broken into two smaller pieces with fewer unknowns.
  • the admission process includes two phases.
  • the first phase is admission with minimal QoS, so that the overall number of admitted applications is maximized.
  • the optimization problem can be formalized as maximizing the objective function:
  • the second phase of the two-phase admission process involves QoS expansion.
  • the system allocates remaining resources to admitted applications, with preference to higher priority applications.
  • the QoS expansion can be formulated as a linear program as maximizing the objective function (the QoS satisfaction ratio) :
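The two-phase process might be sketched as a greedy approximation over a single scalar resource (the actual formulation is an optimization over the full resource matrices); all names and the greedy strategy itself are assumptions for illustration:

```python
# Phase 1: admit as many applications as possible at minimal QoS,
# higher priority first. Phase 2 (QoS expansion): hand remaining
# capacity to admitted applications, again in priority order.

def two_phase_admission(capacity, apps):
    """apps: list of (name, priority, min_demand, max_demand) tuples."""
    admitted, remaining = [], capacity
    # Phase 1: minimal-QoS admission, higher priority applications first.
    for name, prio, lo, hi in sorted(apps, key=lambda a: -a[1]):
        if lo <= remaining:
            admitted.append((name, prio, lo, hi))
            remaining -= lo
    # Phase 2: QoS expansion from whatever capacity is left.
    allocation = {}
    for name, prio, lo, hi in sorted(admitted, key=lambda a: -a[1]):
        extra = min(hi - lo, remaining)
        remaining -= extra
        allocation[name] = lo + extra
    return allocation
```

Splitting admission from expansion keeps each subproblem small, mirroring the motivation given above for breaking the optimization into two pieces.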
  • the methods and techniques described here may be implemented in digital electronic circuitry, or with a programmable processor (for example, a special-purpose processor or a general-purpose processor such as a computer) , firmware, software, or in combinations of them.
  • Apparatus embodying these techniques may include appropriate input and output devices, a programmable processor, and a storage medium tangibly embodying program instructions for execution by the programmable processor.
  • a process embodying these techniques may be performed by a programmable processor executing a program of instructions to perform desired functions by operating on input data and generating appropriate output.
  • the techniques may advantageously be implemented in one or more programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device.
  • a processor will receive instructions and data from a read-only memory and/or a random access memory.
  • Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and DVD disks. Any of the foregoing may be supplemented by, or incorporated in, specially-designed application-specific integrated circuits (ASICs) .

Abstract

An initiator node in a local cluster included in a wireless network receives an admission request to execute an application comprising a set of tasks. If the initiator node is unable to map the set of tasks to nodes included in the local cluster, the local cluster head node forwards the admission request to the cluster head node of successive clusters in the wireless network in order to have at least one node in each of the successive clusters send resource availability information to the initiator node. The initiator node attempts to map the set of tasks to a subset of the nodes from which resource availability information has been received. This is repeated until the initiator node is able to map the set of tasks to a subset of the nodes in the wireless network or until there are no additional clusters to forward the admission request to.

Description

METHOD FOR MANAGING RESOURCES OF AD HOC WIRELESS NETWORKS
TECHNICAL FIELD
[0001] The following description relates to telecommunications in general and to providing quality of service in a wireless network in particular.
BACKGROUND
[0002] One type of telecommunication network is a wireless network. In a wireless network, two or more devices communicate over a wireless communication link (for example, over a radio frequency (RF) communication link). In one wireless network topology, one or more remote nodes communicate with a central node (also referred to here as a "base station") over respective wireless communication links. In such a topology, pre-existing network infrastructure is typically provided. In one example, a network of base stations, each of which is coupled to one or more wired networks, is provided. In such a topology, the remote nodes typically do not communicate with one another directly. One example of such a network is a cellular telephone network.
[0003] In another wireless network topology (referred to here as "ad hoc"), no predetermined infrastructure is provided. Typically, an ad hoc network is made up of a dynamic group of nodes that communicate over wireless communication links. Because wireless communication links used in ad hoc wireless networks are typically prone to a large variation in quality, providing quality of service (QoS) is important in applications that have demanding availability, bandwidth, and delay requirements. Examples of such applications include real-time and mission critical applications such as search and rescue, wireless multimedia, command and control, and combat support systems.
SUMMARY
[0004] In one embodiment, a system includes a wireless network comprising a plurality of clusters. Each cluster comprises a set of nodes including a cluster head node. Each node includes at least one resource. When an initiator node in a local cluster included in the wireless network receives an admission request to execute an application comprising a set of tasks, the initiator node forwards the admission request to a local cluster head node for the local cluster. When the admission request is forwarded to the local cluster head node, the local cluster head node requests that at least one of the set of nodes included in the local cluster provide resource availability information to the initiator node. The initiator node attempts to map the set of tasks to a subset of the nodes included in the local cluster using the resource availability information received from nodes in the local cluster. If the initiator node is unable to map the set of tasks to the subset of nodes included in the local cluster, the local cluster head node forwards the admission request to the cluster head node of successive clusters in the wireless network in order to have at least one node in each of the successive clusters send resource availability information to the initiator node until the initiator node is able to map the set of tasks to a subset of the nodes in the wireless network or until there are no additional clusters to forward the admission request to. The initiator node attempts to map the set of tasks to a subset of the nodes from which resource availability information has been received.
[0005] In another embodiment, a method includes attempting to map a set of tasks to at least one node within a first cluster of the wireless network based on resource availability of the nodes within the first cluster. The wireless network has a plurality of clusters. Each cluster includes at least one of a plurality of nodes. The method further includes, if unable to map the set of tasks to said at least one node in the first cluster, attempting to map the set of tasks to at least one node in at least one of the first cluster and at least one of the other clusters in the wireless network based on resource availability of the nodes within the first cluster and the at least one of the other clusters in the wireless network.
[0006] In another embodiment, a system includes a wireless network comprising a plurality of clusters. Each cluster includes a set of nodes including a cluster head node. Each node includes at least one resource. When an initiator node in a local cluster included in the wireless network receives an admission request to execute an application comprising a set of tasks, the initiator node forwards the admission request to a local cluster head node for the local cluster. When the admission request is forwarded to the local cluster head node, the local cluster head node requests that at least one of the set of nodes included in the local cluster provide resource availability information to the initiator node. The initiator node attempts to map the set of tasks to a subset of the nodes included in the local cluster using the resource availability received from nodes in the local cluster. If the initiator node is unable to map the set of tasks to the subset of nodes included in the local cluster, the initiator node requests that the local cluster head node forward the admission request to at least one remote cluster head node of at least one remote cluster included in the wireless network. When the admission request is forwarded to the at least one remote cluster head node, the at least one remote cluster head node requests that at least one of the set of nodes included in the at least one remote cluster provide resource availability information to the initiator node. The initiator node attempts to map the set of tasks to a subset of the nodes included in at least one of the local cluster and the at least one remote cluster using the resource availability received from nodes in the local cluster and the at least one remote cluster.
[0007] In another embodiment, a first node includes a wireless transceiver to send and receive data over a wireless network, a processor in communication with the wireless transceiver, and a tangible medium, in communication with the processor, in which program instructions are embodied. The program instructions, when executed by the processor, cause the first node to receive an admission request from a client. The admission request requesting that a set of tasks be executed. The program instructions, when executed by the processor, cause the first node to forward the admission request to a local cluster head node for a local cluster in order to have at least one node in the local cluster send resource availability information to the first node. The first node is a member of the local cluster. The program instructions, when executed by the processor, cause the first node to receive resource availability information from the at least one node in the local cluster and attempt to map the set of tasks to at least a subset of the nodes included in the local cluster using the resource availability information received from the at least one node in the local cluster. The program instructions, when executed by the processor, cause the first node to, if unable to map the set of tasks to the subset of nodes included in the local cluster, request that the local cluster head node of the local cluster forward the admission request to at least one remote cluster head node of at least one remote cluster included in the wireless network in order to have at least one node in the at least one remote cluster send resource availability information to the first node. 
The program instructions, when executed by the processor, cause the first node to, if unable to map the set of tasks to the subset of nodes included in the local cluster, attempt to map the set of tasks to at least a subset of the nodes included in at least one of the local cluster and the at least one remote cluster using the resource availability received from the at least one node in at least one of the local cluster and the at least one remote cluster.
[0008] In another embodiment, software embodied on a tangible medium readable by a programmable processor included in a first node of a wireless network. The wireless network includes a plurality of clusters. The software includes program instructions executable on at least one programmable processor included in the first node. The program instructions are operable to cause the first node to receive an admission request from a client, the admission request requesting that a set of tasks be executed. The program instructions are operable to cause the first node to forward the admission request to a local cluster head node for the local cluster in order to have at least one node in the local cluster send resource availability information to the first node. The first node is a member of the local cluster. The program instructions are operable to cause the first node to receive resource availability information from the at least one node in the local cluster and attempt to map the set of tasks to at least a subset of the nodes included in the local cluster using the resource availability information received from the at least one node in the local cluster. The program instructions are operable to cause the first node to, if unable to map the set of tasks to the subset of nodes included in the local cluster, request that the local cluster head node of the local cluster forward the admission request to at least one remote cluster head node of at least one remote cluster included in the wireless network in order to have at least one node in the at least one remote cluster send resource availability information to the first node. 
The program instructions are operable to cause the first node to, if unable to map the set of tasks to the subset of nodes included in the local cluster, attempt to map the set of tasks to at least a subset of the nodes included in at least one of the first cluster and the at least one remote cluster using the resource availability received from the at least one node in at least one of the first cluster and the at least one remote cluster.
[0009] In another embodiment, a first node includes means for sending and receiving data over a wireless network, means for receiving an admission request from a client, the admission request requesting that a set of tasks be executed, and means for forwarding the admission request to a local cluster head node for a local cluster in order to have at least one node in the local cluster send resource availability information to the first node. The first node is a member of the local cluster. The first node further includes means for receiving resource availability information from the at least one node in the local cluster and means for attempting to map the set of tasks to at least a subset of the nodes included in the local cluster using the resource availability information received from the at least one node in the local cluster. The first node further includes means for requesting that the local cluster head node of the local cluster forward the admission request to at least one remote cluster head node of at least one remote cluster included in the wireless network in order to have at least one node in the at least one remote cluster send resource availability information to the first node, if unable to map the set of tasks to the subset of nodes included in the local cluster. The first node further includes means for attempting to map the set of tasks to at least a subset of the nodes included in at least one of the local cluster and the at least one remote cluster using the resource availability received from the at least one node in at least one of the local cluster and the at least one remote cluster, if unable to map the set of tasks to the subset of nodes included in the local cluster.
[0010] The details of one or more embodiments of the claimed invention are set forth in the accompanying drawings and the description below. Other features and advantages will become apparent from the description, the drawings, and the claims.
DRAWINGS
[0011] FIG. 1 is a block diagram of one exemplary embodiment of an ad hoc wireless network.
[0012] FIG. 2 is a block diagram of one embodiment of a combat support system.
[0013] FIG. 3 is a block diagram illustrating one embodiment of a system for resource management.
[0014] FIGS. 4A-4B, 5A-5B, and 6A-6B are flow diagrams of one embodiment of methods of admitting a distributed application in an ad hoc wireless network having a cluster topology.
[0015] FIGS. 7A-7D are block diagrams illustrating the operation of the embodiment of the application admission protocol shown in FIGS. 4A-4B, 5A-5B, and 6A-6B.
[0016] FIG. 8 is a simplified block diagram of one embodiment of a node.
[0017] Like reference numbers and designations in the various drawings indicate like elements.
DETAILED DESCRIPTION
[0018] FIG. 1 is a block diagram of one exemplary embodiment of an ad hoc wireless network 100. In one implementation of such an embodiment, network 100 is a mobile ad hoc wireless network 100 (also referred to here as a "MANET"). Network 100 is an ad hoc wireless network that includes a dynamic set of nodes 102. Over time, various nodes typically will join and leave the network.
[0019] In the embodiment shown in FIG. 1, the nodes 102 are organized in clusters 108. One of the nodes 102 in each cluster is designated as the "cluster head" node 102. In one implementation, the clusters 108 are formed based on traffic locality and node mobility. In another implementation, the clusters 108 are formed based on logical membership and mobility patterns. In the embodiment shown in FIG. 1, the cost of communication within a cluster is typically lower than between clusters, though in other embodiments this is not necessarily the case.
[0020] In the embodiment shown in FIG. 1, one or more distributed applications 104 are executed by the nodes 102. Two distributed applications 104-1 and 104-2, respectively, are shown in FIG. 1. Each distributed application 104 comprises one or more tasks 106 that are executed by a subset of the nodes 102 in the network 100. In FIG. 1, the distributed applications 104-1 and 104-2 comprise tasks 106-1 and tasks 106-2, respectively.
[0021] Each distributed application 104 uses various resources in the course of being executed. In the embodiment shown in FIG. 1, one type of resource is provided by, and is characterized relative to, a single node 102. This type of resource is referred to here as a "node resource." Examples of node resources include processing time, memory usage, and energy. Another type of resource is characterized relative to a pair of nodes 102 and is referred to here as a "network resource." Network bandwidth between two nodes 102 is one example of a network resource and is specified as a source-destination pair.
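The distinction between node resources and network resources can be sketched as two small record types. This is an illustrative model only; the class and field names below are assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NodeResource:
    # Provided by, and characterized relative to, a single node:
    # e.g. processing time, memory usage, or energy.
    node_id: str
    kind: str          # e.g. "cpu", "memory", "energy"
    capacity: float

@dataclass(frozen=True)
class NetworkResource:
    # Characterized relative to a pair of nodes and specified as a
    # source-destination pair, e.g. bandwidth between two nodes.
    source: str
    destination: str
    kind: str          # e.g. "bandwidth"
    capacity: float

cpu = NodeResource("node-1", "cpu", 100.0)
link = NetworkResource("node-1", "node-2", "bandwidth", 2.0e6)
```

A resource manager for a node resource needs only the owning node, while one for a network resource must track both endpoints of the pair.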
[0022] In one implementation, network 100 supports periodic distributed applications 104 with a pipeline topology comprising a chain of communication tasks. One example of such a distributed application 104 is illustrated in FIG. 2. FIG. 2 is a block diagram of one embodiment of a combat support system 200. An ad hoc wireless network is used to link the various devices (that is, nodes) that are included in the network. A first unmanned air vehicle 202 (for example, a PREDATOR drone) monitors an enemy target 204. The first unmanned air vehicle 202 delivers real-time (that is, time-critical) surveillance data (for example, high data rate video and/or infrared data) to a fire control terminal 206 operated by one or more soldiers.
[0023] In the embodiment shown in FIG. 2, the surveillance data from the first unmanned air vehicle 202 is routed to the fire control terminal 206 via a second unmanned air vehicle 208.
[0024] The fire control terminal 206, in such an embodiment, is used to control a weapon 210 (for example, to fire a HOWITZER at the enemy target 204). Such control information is time-critical. Control information from the fire control terminal 206 is routed to the weapon 210 via the second unmanned air vehicle 208. This type of mission-critical application demands strict limits on end-to-end latency and requires significant bandwidth for network connections. Embodiments of the methods, devices, and systems described here are suitable for use in such an embodiment, though it is to be understood that such methods, devices, and systems are suitable for use with other types of applications and networks.
[0025] FIG. 3 is a block diagram illustrating one embodiment of a system 300 for resource management. The embodiment of system 300 is described here as being implemented on each of the nodes 102 of the wireless network 100 of FIG. 1, though it is to be understood that other embodiments of system 300 are implemented in other ways and/or using other networks 100. The system 300 includes an application service manager 302 and one or more resource managers 304. Each of the resource managers 304 manages one or more resources available to an application 104 executing on the node 102. Resources that are available to the node 102 include, for example, node resources such as CPUs, memory, storage, and energy, and network resources such as buffers and communication bandwidth. In one implementation of such an embodiment, all the resource managers 304 export a common interface for admission, adaptation, and feedback adaptation that allows resource managers 304 for different resources and/or policies to be "plugged into" the system 300 relatively simply.
[0026] For example, in the embodiment shown in FIG. 3, the system 300 includes a resource manager that manages CPU load available at that node 102. This resource manager is also referred to here as the "CPU resource manager" 304. The CPU resource manager 304 administers the local (that is, local relative to the node 102) CPU resource. In one implementation of such an embodiment, the CPU resource manager 304, based on the current CPU resource allocation for the node 102, builds a process scheduler and controls the utilization of the CPU for the node 102 by applications 104 (and the tasks 106 comprising such applications 104) executing on that node 102. In such an implementation, the CPU resource manager 304 is implemented as a middleware layer wrapped on top of the local scheduler of the operating system executing on the node 102. In another implementation, the CPU resource manager 304 implements a real-time scheduling policy, such as the rate monotonic algorithm (RMA).
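The patent names RMA but does not spell out a schedulability check. As background, the classic Liu and Layland sufficient test for rate monotonic scheduling can be sketched as follows; the function name and task representation are illustrative assumptions.

```python
def rma_schedulable(tasks):
    """Liu & Layland sufficient test for rate monotonic scheduling.

    tasks: list of (computation_time, period) pairs for periodic tasks.
    Returns True if total utilization is within the n * (2**(1/n) - 1) bound.
    A True result guarantees schedulability; False is inconclusive (the
    bound is sufficient but not necessary).
    """
    n = len(tasks)
    if n == 0:
        return True
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilization <= bound

# Two tasks with utilization 0.25 + 0.25 = 0.5, under the two-task
# bound of about 0.828, so the set passes the test.
print(rma_schedulable([(1, 4), (2, 8)]))   # True
```

A CPU resource manager built as a middleware layer could run such a check before committing a new CPU allocation.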
[0027] As shown in FIG. 3, the system 300 also includes a resource manager that controls communication bandwidth and delay. This resource manager is also referred to here as the "network resource manager" 304. The network resource manager 304 controls bandwidth allocation, enforces traffic shaping, and extracts network topology from the routing layer. The embodiment of an admission protocol described below in connection with FIGS. 4A-4B, 5A-5B, and 6A-6B works in cooperation with a cluster-based ad hoc routing protocol. In one embodiment, nodes 102 are organized in clusters 108 where the cost of communication within a cluster 108 may be lower than between clusters 108. The application admission protocol described below in connection with FIGS. 4A-4B, 5A-5B, and 6A-6B attempts to improve admission quality by decreasing the cost of communication, based on the assumption that communication in a MANET is less reliable while processing resources are plentiful.
[0028] The application service manager 302 is responsible for the end-to-end resource management for each distributed application. The application service manager 302 handles end-to-end QoS negotiation, admission, and adaptation by breaking end-to-end requests into individual contracts for basic resources that are passed to the appropriate resource managers 304 and to other application service managers 302 executing on other nodes 102 in the network 100. Application service managers 302 receive admission requests from clients 306. Clients 306, as used here, include users, applications 104, or other application service managers 302 executing on other nodes 102 in the network 100.
[0029] Each admission request for a particular distributed application comprises a minimum and maximum range of acceptable QoS (CPU load, network bandwidth) for the tasks 106 of that particular application 104. In one implementation, the distributed applications 104 comprise distributed periodic tasks 106 that are connected (that is, communicate) in a pipeline topology. Depending on resource availability, tasks 106 from the same application 104 may be mapped to and executed on the same node 102, on different nodes 102 in the same cluster, or on nodes 102 from different clusters. Typically, each case incurs an increasing cost of intra-application communication.
[0030] During operation of such an embodiment, it may be the case that some application tasks 106 must be admitted on a specific node 102 or on a node 102 that is close to a particular geographical location and/or physical item. One example is shown in FIG. 2. An application 104 that performs automatic target recognition of the target 204 requires that a sensor task run on the first unmanned air vehicle 202, which includes an imaging and/or infra-red sensor. A target display task is run on the fire control terminal 206. Intermediary processing and recognition tasks, however, can be allocated on any node 102 in the network with necessary and sufficient resources.
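An admission request of the kind described in paragraph [0029] (a min/max QoS range per resource, for each task in a pipeline) can be sketched as follows. All class, field, resource, and task names here are illustrative assumptions rather than structures defined by the patent.

```python
from dataclasses import dataclass

@dataclass
class ResourceRange:
    # Minimum and maximum acceptable allocation for one resource,
    # e.g. CPU load or network bandwidth.
    kind: str
    minimum: float
    maximum: float

@dataclass
class PendingTask:
    name: str
    requirements: list  # list of ResourceRange

@dataclass
class AdmissionRequest:
    # Tasks are ordered: the pipeline topology connects task i to task i+1.
    application: str
    tasks: list  # list of PendingTask

request = AdmissionRequest(
    application="target-recognition",
    tasks=[
        PendingTask("sensor", [ResourceRange("cpu", 5.0, 10.0)]),
        PendingTask("recognize", [ResourceRange("cpu", 20.0, 40.0),
                                  ResourceRange("bandwidth", 1e6, 4e6)]),
        PendingTask("display", [ResourceRange("cpu", 5.0, 15.0)]),
    ],
)
```

The min/max range is what makes adaptation possible later: an already-admitted task can be pushed down toward its minimum to free resources for a new application.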
[0031] In one implementation, such task constraints are addressed by defining special resources appropriate to the particular constraint. In the previous example, a "sensor" resource and a "target display" resource are defined. In such an implementation, the first unmanned air vehicle 202 executes a sensor resource manager 304 that manages access to the sensor resource available on the first unmanned air vehicle 202. The fire control terminal 206 executes a target display resource manager that manages access to the target display resource available on the fire control terminal 206. Application service managers 302 in the network 100 match requests made by tasks 106 for the sensor resource and the target display resource to the sensor resource manager 304 and the target display resource manager 304, respectively, as appropriate. Moreover, in other implementations, the system 300 is adapted to account for general location constraints by preloading a matrix X (described below), which defines the task mapping, with Xij values reflecting a desired mapping of a particular task i onto a particular node j.
[0032] FIGS. 4A-4B, 5A-5B, and 6A-6B are flow diagrams of one embodiment of methods 400, 500, and 600, respectively, of admitting a distributed application in an ad hoc wireless network having a cluster topology. The embodiments of methods 400, 500, and 600 are described here as being implemented using the embodiment of network 100 shown in FIG. 1 and the embodiment of system 300 shown in FIG. 3, though it is to be understood that other embodiments are implemented in other ways.
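The matrix preload described in paragraph [0031] can be sketched as follows: X[i][j] = 1 pins task i to node j before the general mapping step runs. The function name and the particular task/node indices are illustrative assumptions.

```python
def preload_mapping(num_tasks, num_nodes, pinned):
    """Build the task-mapping matrix X with location constraints preloaded.

    pinned: dict mapping task index -> required node index.
    Unconstrained rows are left all zero for the mapping step to fill in.
    """
    X = [[0] * num_nodes for _ in range(num_tasks)]
    for task, node in pinned.items():
        X[task][node] = 1
    return X

# Task 0 (sensor) must run on node 0 (the UAV); task 2 (display) must run
# on node 3 (the fire control terminal); intermediary task 1 is left free.
X = preload_mapping(num_tasks=3, num_nodes=4, pinned={0: 0, 2: 3})
```

The mapping step then only needs to place the tasks whose rows are still all zero, which is how a location constraint coexists with the greedy admission process described below.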
[0033] The embodiment of method 400 shown in FIGS. 4A-4B is performed by an application service manager 302. The application service manager 302 listens for an admission request sent from a client 306 executing on the same node 102 as the application service manager 302 (block 402). The admission request indicates that the client 306 wishes to have a distributed application 104 admitted and executed on one or more nodes 102 in the network 100. When an admission request is sent to an application service manager 302, the application service manager 302 receives the admission request (block 404).
[0034] The client 306 that sends the admission request is referred to here as the "initiator client." The admission request is received by the application service manager 302 executing on the same node 102 as the initiator client 306. The receiving application service manager 302 is also referred to here as the "initiator application service manager" or "initiator ASM." The node 102 on which the initiator client 306 and the initiator ASM 302 are executing is referred to here as the "initiator node 102." Also, the cluster 108 that the initiator node 102 is a member of is referred to here as the "local cluster 108." The cluster head node 102 of the local cluster 108 is referred to here as the "local cluster head node" or "local cluster head." In one embodiment, the admission request identifies the distributed application 104 that the client 306 wishes to have admitted (which is also referred to here as the "pending application"), the tasks 106 that comprise the distributed application 104 (which are also referred to here as the "pending tasks"), and minimum and maximum resource allocations for each resource that is needed by the pending tasks 106.
[0035] The initiator ASM 302 forwards the admission request on to the local cluster head node 102 (block 406). The local cluster head node 102 receives the admission request and forwards the admission request on to each of the nodes 102 in the local cluster 108. Such forwarding is done in accordance with the underlying routing protocol used in the network 100. The nodes 102 in the local cluster 108 are also referred to here as the "local nodes" 102. The cluster head node 102 to which the admission request was most recently forwarded is also referred to here as the "current cluster head node." The cluster associated with the current cluster head node is also referred to here as the "current cluster." For example, when the admission request is forwarded to the local cluster head node 102, the local cluster head node 102 is the current cluster head node 102 and the local cluster 108 is the current cluster.
[0036] As described below in connection with FIG. 6A, each of the nodes 102 that receives an admission request, in response, sends a message to or otherwise informs the initiator ASM 302 of the resource availability of that node 102. In one embodiment, this process also includes the initiator ASM 302 obtaining the resource availability for the initiator node 102 but not from the current cluster head node 102. In another embodiment, the current cluster head node 102 does provide its resource availability to the initiator ASM 302.
[0037] As described below in connection with FIG. 6A, in the embodiment shown in FIGS. 4A-4B, 5A-5B, and 6A-6B, the resource availability for a given node includes two parts: the unused resource availability and the adaptation resource availability. The unused resource availability of a given resource for a given node 102 includes the amount of that resource that is not currently being used by any task 106 executing on the particular node 102. The adaptation resource availability of a given resource for a given node includes the amount of that resource that could be freed up by having one or more tasks adapt (that is, lower) their resource utilization of that resource.
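The two-part availability just described can be computed directly from a node's capacity and the current/minimum allocations of its admitted tasks. This is an illustrative sketch; the function name and argument shapes are assumptions.

```python
def resource_availability(capacity, allocations):
    """Split a node's availability for one resource into two parts:
    unused (capacity not currently used by any task) and adaptation
    (amount freeable by lowering tasks to their minimum allocations).

    allocations: list of (current_use, minimum_use) per admitted task.
    """
    used = sum(current for current, _ in allocations)
    unused = capacity - used
    adaptation = sum(current - minimum for current, minimum in allocations)
    return unused, adaptation

# Capacity 100; two admitted tasks using 40 and 30, adaptable down
# to minimums of 25 and 20 respectively.
unused, adaptation = resource_availability(100.0, [(40.0, 25.0), (30.0, 20.0)])
print(unused, adaptation)  # 30.0 25.0
```

The admission process below first tries to place tasks using only the unused part, and falls back to the unused plus adaptation parts when that fails.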
[0038] After the initiator ASM 302 has received the resource availability from the nodes 102 in the current cluster (block 408), the initiator ASM 302 attempts to map the pending tasks to one or more nodes from which resource availability has been received (block 410). In one implementation, the initiator ASM 302 attempts to map the pending tasks after either the initiator ASM 302 has received the resource availability from all of the nodes 102 in the current cluster 108 or a predetermined timeout period has elapsed.
[0039] In one implementation (shown in FIGS. 4A-4B using dashed lines), a greedy admission process is used to map the pending tasks to one or more nodes 102 from which resource availability has been received. The greedy admission process uses the resource availability of each node 102 that provided resource availability information.
[0040] In this implementation, during the greedy admission process, the initiator ASM 302 attempts to map the pending tasks 106 to the nodes 102 from which resource availability has been received based on the unused resource availability of each such node 102 using a best fit/first fit algorithm (block 412). The best fit/first fit algorithm attempts to map as many of the pending tasks 106 as possible on one node 102. If the greedy admission process does not result in enough resources being provided to the pending application 104 (checked in block 414), then the initiator ASM 302 attempts to map the pending tasks 106 to the nodes 102 from which resource availability has been received based on the unused resource availability and adaptation resource availability of such nodes 102 (block 416). A best fit/first fit algorithm that attempts to fit as many of the pending tasks 106 on one node 102 as possible is used. The initiator ASM 302 attempts to map the pending tasks 106 using the adaptation resource availability by determining if the pending tasks 106 can be admitted by reducing the resource utilization of one or more tasks 106 executing on one or more nodes 102 that have previously been admitted and are already executing at the time the mapping process occurs. A task 106 that should have its resource utilization lowered is referred to here as an "adapted task" 106. A distributed application 104 that comprises at least one adapted task 106 is also referred to here as an "adapted application" 104.
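A minimal first-fit sketch of the greedy step, assuming a single scalar demand per task: tasks are packed onto the first node with sufficient availability, which keeps as many tasks as possible co-located before spilling to the next node. The function name, the single-resource simplification, and the example values are all assumptions for illustration.

```python
def greedy_map(tasks, nodes):
    """First-fit sketch of the greedy admission step.

    tasks: list of (task_name, demand) pairs.
    nodes: dict of node_name -> available capacity (e.g. unused, or
           unused plus adaptation availability for the fallback pass).
    Returns a dict task_name -> node_name, or None if admission fails.
    """
    remaining = dict(nodes)
    mapping = {}
    for name, demand in tasks:
        for node, free in remaining.items():
            if demand <= free:
                mapping[name] = node
                remaining[node] = free - demand
                break
        else:
            return None  # not enough availability among these nodes
    return mapping

tasks = [("sensor", 10.0), ("recognize", 30.0), ("display", 10.0)]
print(greedy_map(tasks, {"n1": 45.0, "n2": 20.0}))
# → {'sensor': 'n1', 'recognize': 'n1', 'display': 'n2'}
```

Running the same routine twice, first with only unused availability and then with unused plus adaptation availability, mirrors the two passes of blocks 412 and 416.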
[0041] If there are enough available resources to admit the pending application (checked in block 418 of FIG. 4B), the initiator ASM 302 sends a commit message to each node 102 having at least one pending task 106 mapped to that node 102 for execution thereon (block 420). In one implementation, the commit message that is sent to each node 102 informs the node 102 which of the pending tasks 106 are to be executed on that node 102. In such an implementation, the commit message also indicates, for each pending task 106 to be executed on that node 102, the amount of each resource that should be used by that task 106. Also, the commit message indicates which, if any, of the tasks 106 currently running on that node 102 must be adapted and how they should be adapted (for example, by indicating which resources to reduce the utilization of and by how much).
[0042] As is described below in connection with FIG. 6A, the nodes 102 that receive the commit messages "commit" the resources identified in the commit message to the pending tasks 106 identified in the commit message.
[0043] When adaptation is required to admit a distributed application 104 (checked in block 422), the initiator ASM 302 sends an adapt message to the other nodes 102 on which the distributed application 104 executes that have not received a commit message (block 424). Those nodes 102 that do not receive a commit message and on which the distributed application 104 executes need to be informed that the distributed application 104 has been adapted. When such a node 102 receives an adapt message, the node 102 is able to adjust the resource utilization for the distributed application 104 and notify its tasks, if appropriate. The resource utilization for the adapted application 104 on that receiving node 102 is adjusted to be compatible with the application resource utilization on the other nodes 102 on which the application 104 executes (for example, as described below in connection with FIG. 6B).
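The commit and adapt messages described in paragraphs [0041] and [0043] can be sketched as two record types: a commit carries new task allocations plus any adaptations for the receiving node, while an adapt carries only the adaptation information for nodes that got no commit. The class and field names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class CommitMessage:
    # Which pending tasks run on the receiving node, the amount of each
    # resource they should use, and which running tasks must be adapted.
    task_allocations: dict   # task name -> {resource kind: amount}
    adaptations: dict        # running task name -> {resource kind: reduced amount}

@dataclass
class AdaptMessage:
    # Tells a node that did not receive a commit message that the named
    # application was adapted, so it can adjust utilization and notify tasks.
    application: str
    adaptations: dict        # task name -> {resource kind: reduced amount}

commit = CommitMessage(
    task_allocations={"recognize": {"cpu": 30.0}},
    adaptations={"old-task": {"cpu": 10.0}},
)
adapt = AdaptMessage("target-recognition", {"old-task": {"cpu": 10.0}})
```

Keeping the adaptation payload identical in both message types is one way to ensure the resource utilization of an adapted application stays compatible across all the nodes on which it executes.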
[0044] If there are not enough available resources to admit the pending application (checked in block 418), the initiator ASM 302 requests that the local cluster head 102 forward the admission request on to another cluster 108 in the network 100 to check for resource availability (block 426). Such other cluster 108 is referred to here as a "remote cluster." If there is no other remote cluster 108 left in the network 100 to check for resource availability (for example, when all the remote clusters 108 in the network 100 have previously been checked) (checked in block 428), the pending application 104 is not admitted for execution in the network 100 (block 430). This fact is communicated by the initiator ASM 302 to the initiator client 306.
[0045] If there is at least one other remote cluster 108 in the network left to check for resource availability, the local cluster head 102 selects one such remote cluster 108 and forwards the admission request to the cluster head node 102 of the selected remote cluster 108 as described below in connection with FIG. 5B.
[0046] The cluster head node 102 for the selected remote cluster 108 receives the admission request and forwards the admission request on to each of the nodes 102 in the selected remote cluster 108. The admission request is forwarded in accordance with the underlying routing protocol used in the network 100. As described below in connection with FIGS. 5A- 5B, and 6A-6B, each node 102 in the remote cluster 108 sends a message to or otherwise informs the initiator ASM 302 of the resource availability for that node 102.
[0047] Method 400 then loops back to block 408, where the initiator ASM 302 receives the resource availability information communicated from the nodes 102 in the current cluster 108 and attempts to map the pending tasks to one or more nodes from which resource availability has been received
(for example, the local nodes 102 and the nodes in the selected remote cluster 108). Such processing is repeated until the initiator ASM 302 has located sufficient resources to admit the pending application 104 or until the resource availability of all clusters 108 in the network 100 has been checked. [0048] FIGS. 5A-5B are flow diagrams of a method 500 of forwarding admission requests on to nodes 102 in a cluster. The embodiment of method 500 shown in FIGS. 5A-5B is performed by an application service manager 302 executing on a cluster head node 102 in the network 100. When an admission request is sent to the cluster head node 102 (checked in block 502 of FIG. 5A), the cluster head node 102 receives the admission request
(block 504) and forwards the admission request on to the nodes 102 in the cluster 108 of which the cluster head node 102 is a member (block 506) . For example, when the local cluster head node 102 receives an admission request from an initiator ASM 302 executing on the initiator node 102 in the local cluster 108, the local cluster head node 102 forwards the admission request on to the local nodes 102 in the local cluster 108.
[0049] Each cluster head node 102 also listens for a request from an initiator ASM 302 to forward the admission request on to another cluster 108 in the network 100 (block 508 shown in FIG. 5B). When the cluster head node 102 receives such a request, the cluster head node 102 is acting as the local cluster head node 102 for the initiator node 102 that sent the request. The initiator ASM 302 makes such a request when the resources available from the nodes 102 in the local cluster 108 are not sufficient to admit the pending application 104. The local cluster head node 102 selects another cluster 108 to check for resource availability (block 510). Such other clusters 108 are referred to here as "remote clusters" 108. The local cluster head node 102 selects a remote cluster 108 that has not previously had its resource availability checked for that particular admission request. In one embodiment, each cluster head node 102 maintains a "preference list" that identifies the list of remote clusters 108 in the network 100. The local cluster head node 102 selects the remote cluster 108 to check next using, for example, a round-robin policy. If there are no other remote clusters 108 left to check for resource availability for that particular admission request (for example, when all the remote clusters 108 in the network 100 have previously been checked) (checked in block 512), the local cluster head node 102 communicates that fact to the initiator node 102 (block 514).
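The preference-list round-robin selection described above might look like the following sketch; the function signature and the bookkeeping of the last-used index are illustrative assumptions.

```python
def select_remote_cluster(preference_list, already_checked, last_index=0):
    """Round-robin selection of the next remote cluster to check for a
    given admission request (a sketch of the policy described above)."""
    n = len(preference_list)
    for offset in range(n):
        idx = (last_index + offset) % n
        cluster = preference_list[idx]
        if cluster not in already_checked:
            return cluster, idx
    return None, last_index  # all remote clusters already checked

clusters = ["C1", "C2", "C3"]
print(select_remote_cluster(clusters, {"C1"}, last_index=0))  # ('C2', 1)
print(select_remote_cluster(clusters, {"C1", "C2", "C3"}))    # (None, 0)
```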
[0050] If there is another remote cluster 108 to check, the local cluster head node 102 forwards the admission request on to the cluster head node 102 of the selected remote cluster 108 (block 516) . The cluster head node 102 of the selected remote cluster 108, as described above in connection with blocks 502 through 506, receives the admission request and forwards the admission request to the nodes 102 in the selected remote cluster 108. The nodes 102 in the selected remote cluster 108 communicate their resource availability to the initiator node 102 in response to the admission request.
[0051] FIGS. 6A-6B are flow diagrams of a method 600 of processing admission requests received from the cluster head node. In the embodiment shown in FIGS. 6A-6B, method 600 is performed by each node 102 in the network 100 that receives an admission request from a cluster head node and that is to supply its resource availability to the initiator ASM 302 in response to the admission request. In the following description of FIGS. 6A-6B, the node 102 that performs method 600 is referred to here as the "receiving node." In one implementation of such an embodiment, the functionality of method 600 is implemented as a part of the application service manager 302 executing on the receiving node 102, which interacts with the resource managers 304 on that node 102 as appropriate.
[0052] As shown in FIG. 6A, a receiving node 102 listens for an admission request originating from a cluster head node (block 602). When the receiving node 102 receives an admission request forwarded from a cluster head node, the receiving node 102 receives the admission request (block 604) and, in response thereto, determines the availability on the receiving node 102 for each type of resource specified in the admission request (block 606). In one implementation, the application service manager 302 of the receiving node 102 contacts the resource manager 304 for each type of resource specified in the admission request. Each contacted resource manager 304 determines the resource availability for the one or more resources managed by that resource manager 304. The resources that a particular resource manager 304 manages are also referred to here as the "managed resources."
[0053] In the embodiment shown in FIG. 6A, the resource availability determination performed by each resource manager 304 includes two separate determinations. Each resource manager 304 determines, for each of its managed resources, the amount of that managed resource that is currently not being used (block 608) . This amount is also referred to here as the "unused resources" or "unused resource availability." Also, each resource manager 304 determines, for each of its managed resources, any additional amount of that resource that could be freed up by having one or more tasks 106 adapt (that is, lower) their resource utilization of that resource (block 610) . This amount is referred to here as the "adaptation resources" or "adaptation resource availability. "
[0054] In one implementation, the adaptation resource availability determination is made based on the relative priority of the various tasks 106 that are using each managed resource. For example in one such implementation, each application 104 is assigned a priority level. If a first application 104 has a lower assigned priority level than the priority level assigned to a second application 104, the first application 104 and the tasks 106 that comprise the first application 104 have a lower priority than the second application 104 and the tasks that comprise the second application 104. In such an implementation, each resource manager 304 determines the adaptation resource availability, for each of its managed resources, by identifying those tasks 106 executing on the receiving node 102 that have a lower priority than the pending application 104 and that are utilizing that managed resource. For each such identified lower priority task 106, it is determined how much of that managed resource would be freed up if the lower priority task 106 reduced its resource utilization of that resource to the minimum level permitted under the lower priority task's QoS contract.
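A minimal sketch of the priority-based determination described above, assuming each task records its current usage and the minimum level permitted under its QoS contract (the data layout is an illustrative assumption):

```python
def adaptation_availability(tasks, pending_priority, resource):
    """Amount of `resource` that could be freed by adapting lower-priority
    tasks down to the minimum level of their QoS contracts."""
    freed = 0.0
    for t in tasks:
        if t["priority"] < pending_priority and resource in t["usage"]:
            # the contract minimum is the floor a task may be adapted down to
            freed += t["usage"][resource] - t["min"][resource]
    return freed

tasks = [
    {"priority": 1, "usage": {"bw": 5.0}, "min": {"bw": 2.0}},  # lower priority
    {"priority": 3, "usage": {"bw": 4.0}, "min": {"bw": 1.0}},  # higher priority
]
print(adaptation_availability(tasks, pending_priority=2, resource="bw"))  # 3.0
```

Only the lower-priority task contributes: its usage can drop from 5.0 to its contract minimum of 2.0, freeing 3.0 units.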
[0055] In another implementation, the adaptation resource availability determination is made based, at least in part, on a class assigned to the tasks 106 for a given managed resource. For example, in one such implementation each application 104 is assigned a QoS class (such as best-effort, essential, critical, etc.) for a given managed resource (such as a network resource). Each class defines a policy or other relationship between the tasks assigned to that class and the given managed resource. The policy determines under what circumstances and by how much the utilization of the given managed resource by such tasks can be adapted. In such an implementation, each resource manager 304 determines the adaptation resource availability, for each of its managed resources, by identifying those tasks 106 executing on the receiving node 102 that have an assigned class that permits adaptation under the circumstances existing at that moment. For each such identified task 106, it is determined how much of that managed resource would be freed up if that task 106 reduced its resource utilization of that resource to the minimum level permitted under that task's QoS contract and assigned class.
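The class-based variant can be sketched similarly. The three class names come from the text; the concrete policies (for example, adapting "essential" tasks only under high load) are illustrative assumptions:

```python
# Illustrative adaptation policies per QoS class; the predicates are assumptions.
POLICIES = {
    "best-effort": lambda load: True,        # may always be adapted
    "essential":   lambda load: load > 0.8,  # adapt only under high load
    "critical":    lambda load: False,       # never adapted
}

def class_based_availability(tasks, resource, load):
    """Resource freed by adapting tasks whose class permits it right now."""
    freed = 0.0
    for t in tasks:
        if resource in t["usage"] and POLICIES[t["class"]](load):
            freed += t["usage"][resource] - t["min"][resource]
    return freed

tasks = [
    {"class": "best-effort", "usage": {"cpu": 3.0}, "min": {"cpu": 1.0}},
    {"class": "critical",    "usage": {"cpu": 4.0}, "min": {"cpu": 2.0}},
]
print(class_based_availability(tasks, "cpu", load=0.5))  # 2.0
```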
[0056] The total resource availability for the receiving node 102 is then sent to the initiator node 102 (block 612) . In one embodiment, for example, the initiator node 102 is identified in the admission request received by the receiving node 102. For those pending tasks 106 for which the receiving node 102 has sufficient resources to satisfy the QoS requirements specified in the admission request, the node 102 reserves those resources for those pending tasks (block 614) . In one implementation, the receiving node 102, for each such pending task 106, reserves the maximum amount of each such resource that is available, up to the maximum resource level specified in the admission request for the pending task 106. While a portion of a resource is in the reserved state, the receiving node 102 treats the reserved resources, for the purposes of determining resource availability for subsequent admission requests, as if the associated pending task 106 has actually been committed on the receiving node 102. In such an implementation, both unused resources and adaptation resources are reserved in this manner.
[0057] The reserved resources remain in the reserved state until the receiving node 102 receives a commit message from the initiator node 102 related to the previously received admission request (checked in block 616) or until a timeout period has elapsed since the reserved resources were reserved (checked in block 618) . For example, in one implementation, a timeout period of 120 seconds is used. The commit message sent from the initiator node 102 will specify which of the resources reserved on the receiving node 102 should actually be used to execute the associated pending tasks 106 on the receiving node 102. Thus when the receiving node 102 receives such a commit message, the receiving node 102 commits each reserved resource specified in the commit message and starts execution of the associated pending task 106 (block 620) . Also, the receiving node 102 releases all the other reserved resources, if any (block 622) . If a commit message related to the previously received admission request is not received within the timeout period, the receiving node 102 releases all the reserved resources for that admission request (block 622) . After the reserved resources have been released, those resources are again available for subsequent admission requests. The overhead associated with "rolling back" such resource reservations when the reserved resources are not ultimately going to be used for the pending admission request is reduced in such an embodiment (for example, as compared to sending additional messages indicating that the reserved resources should be released) .
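The reserve/commit/timeout behavior described above can be sketched as a small reservation table. The 120-second timeout comes from the text; the class and method names are illustrative assumptions:

```python
import time

class ReservationTable:
    """Sketch of node-side reservation state for admission requests."""

    def __init__(self, timeout=120.0):
        self.timeout = timeout
        self.reservations = {}  # admission request id -> (resources, reserve time)

    def reserve(self, request_id, resources, now=None):
        now = time.time() if now is None else now
        self.reservations[request_id] = (resources, now)

    def commit(self, request_id, committed):
        """Commit the listed resources; release the rest of the reservation."""
        resources, _ = self.reservations.pop(request_id)
        released = {r: a for r, a in resources.items() if r not in committed}
        return committed, released

    def expire(self, now=None):
        """Release reservations whose timeout elapsed; no rollback message needed."""
        now = time.time() if now is None else now
        expired = [rid for rid, (_, t) in self.reservations.items()
                   if now - t >= self.timeout]
        for rid in expired:
            del self.reservations[rid]
        return expired

table = ReservationTable(timeout=120.0)
table.reserve("req1", {"cpu": 2.0, "bw": 1.0}, now=0.0)
print(table.expire(now=130.0))  # ['req1'] -- released without any rollback message
```

The timeout-based release is what avoids the "rolling back" overhead mentioned above: a node that never hears back simply lets the reservation lapse.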
[0058] In addition to listening for and processing admission requests, each receiving node 102 also listens for and processes adapt messages that are sent out by an initiator node 102 as described above in connection with FIGS. 4A-4B. As noted above, an adapt message notifies the receiving node 102 that a distributed application 104 executing on the receiving node 102 has been adapted. An adapted application 104 is running on the receiving node 102 if one or more of the tasks 106 comprising the adapted application 104 are executing on the receiving node 102. When a receiving node 102 receives an adapt message
(checked in block 630 of FIG. 6B) , the receiving node 102 adapts the tasks 106 for the adapted application 104 that are executing on the receiving node 102 if appropriate (block 632) .
[0059] As noted above, when a distributed application 104 has been adapted, at least one of the tasks 106 that comprise the adapted application 104 has had its resource utilization lowered on the node 102 on which that task 106 executes. The nodes 102 on which such tasks 106 execute are sent a commit message. As noted above, an adapt message is sent to the other nodes 102 on which the adapted application 104 executes, if any, that have not received a commit message. Those nodes 102 that do not receive a commit message and on which the adapted application 104 executes need to be informed that the application 104 has been adapted.
[0060] When such a node 102 receives an adapt message, the node 102 is able to adjust the resource utilization for the adapted application 104 to be compatible with the application's resource utilization on other nodes 102 (for example, as described below in connection with FIG. 6B) . For example in one exemplary usage scenario, a distributed application 104 has a pipeline topology. At some point during execution of the distributed application 104, one late-stage task 106 in the application 104 is adapted so that the rate at which the task 106 processes input is reduced. The input that is processed by such an adapted task 106 is the output from other earlier-stage tasks 106 in the same distributed application 104. When the nodes 102 executing such earlier-stage tasks 106 receive an adapt message, the nodes 102 are able to reduce the output of such earlier-stage tasks 106 to match the rate at which the late-stage task 106 can process such output. In this way, resources can be more efficiently used.
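The pipeline scenario above, where earlier stages cut their output rate to match an adapted late stage, can be sketched as follows (stage indices and rate units are illustrative):

```python
def propagate_rate(stage_rates, adapted_stage, new_rate):
    """Reduce earlier pipeline stages' output rates so they do not produce
    faster than an adapted late stage can consume."""
    rates = dict(stage_rates)
    rates[adapted_stage] = new_rate
    for stage in rates:
        if stage < adapted_stage:
            rates[stage] = min(rates[stage], new_rate)
    return rates

# three-stage pipeline; stage 2 is adapted down to 5 units/s
print(propagate_rate({0: 10.0, 1: 8.0, 2: 10.0}, adapted_stage=2, new_rate=5.0))
# {0: 5.0, 1: 5.0, 2: 5.0}
```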
[0061] Also, each node 102 that receives an adapt message releases any resources reserved for the pending application 104, if any (block 634). That is, if the node 102 that received the adapt message had previously reserved resources for the pending application, those reserved resources are released since that node 102 knows, by virtue of receiving the adapt message, that it will not receive a commit message. In other embodiments and implementations, however, this may not be the case.
[0062] FIGS. 7A-7F are block diagrams illustrating the operation of the embodiment of the application admission protocol shown in FIGS. 4A-4B, 5A-5B, and 6A-6B. As shown in FIG. 7A, when the initiator client 306 wishes to have a distributed application 104 admitted and executed on one or more nodes 102 in the network 100, the initiator client 306 sends an admission request to the initiator application service manager 302 that is executing on the initiator node 102 (shown using a solid line in FIG. 7A). In this example, the initiator client 306 and the initiator ASM 302 are executing on the same node, the initiator node 102. The initiator ASM 302 receives the admission request and forwards the admission request to the local cluster head node 102 (shown using a dashed line in FIG. 7A).
[0063] The cluster head node 102 (more specifically, the ASM 302 executing on the cluster head node 102) receives the admission request and forwards the admission request to all the nodes 102 in the local cluster 108 (shown using dotted lines in FIG. 7A) . Each of the nodes 102 (more specifically, the ASM 302 executing on each node 102) in the local cluster 108 determines the resource availability for that node 102. In this embodiment, the resource availability includes both the unused resource availability and the adaptation resource availability. Each node 102 in the local cluster 108 sends or otherwise informs the initiator ASM 302 of that node's resource availability (shown using solid lines in FIG. 7B) . Also, each such node 102 in the local cluster 108 reserves the resources
(both unused resources and adaptation resources) needed for the pending application 104. [0064] After the initiator ASM 302 has received the resource availability from the nodes 102 in the local cluster 108 (or after a predetermined period has elapsed), the initiator ASM 302 attempts to map the pending tasks 106 that comprise the pending application 104 on to the nodes 102 that provided resource availability information to the initiator ASM 302. In this example, there are not enough available resources on the nodes 102 in the local cluster 108. Therefore, the initiator ASM 302 is not able to successfully map all pending tasks 106 to the nodes 102 in the local cluster 108. As a result, the initiator ASM 302 requests (shown using a solid line in FIG. 7C) that the local cluster head 102 forward the admission request on to another cluster 108 in the network to check for resource availability in that other cluster 108. The local cluster head 102 receives the request and selects a remote cluster 108 in the network 100 that has not previously been checked for resource availability for the current pending application 104.
[0065] Then, the local cluster head 102 forwards the admission request to the cluster head node 102 for the selected remote cluster 108 (shown using a dashed line in FIG. 7C). The cluster head node 102 of the selected remote cluster 108 forwards the admission request on to all the nodes 102 in the selected remote cluster 108 (shown using dotted lines in FIG. 7C). Each of the nodes 102 (more specifically, the ASM 302 on each of the nodes 102) in the selected remote cluster 108 determines the resource availability for that node 102. As noted above, this determination includes determining both the unused resource availability and the adaptation resource availability for each node 102. Each node 102 in the selected remote cluster 108 sends or otherwise informs the initiator ASM 302 of that node's resource availability (shown with solid lines in FIG. 7D). Also, each such node 102 in the selected remote cluster 108 reserves the resources (both unused resources and adaptation resources) needed for the pending application 104.
[0066] After the initiator ASM 302 has received the resource availability from the nodes 102 in the selected remote cluster 108 (or after a predetermined period has elapsed), the initiator ASM 302 attempts to map the pending tasks 106 that comprise the pending application 104 on to the nodes 102 that provided resource availability information to the initiator ASM 302 (that is, the nodes 102 in the local cluster 108 and the selected remote cluster 108). In this example, there are enough available resources in the local cluster 108 and the remote cluster 108 to admit the pending application. Therefore, the initiator ASM 302 on the initiator node 102 sends commit messages to those nodes 102 in the local cluster 108 and the selected remote cluster 108 on which a pending task has been mapped and will execute (shown using a solid line in FIG. 7E). Each node 102 that receives a commit message commits those resources and executes those pending tasks 106 identified in the commit message received by that node 102. Also, if the commit message indicates that another task executing on the receiving node should be adapted, the receiving node 102 adapts the indicated task as specified in the commit message. Each node 102 that receives a commit message also releases those resources not needed to execute any pending task 106.
[0067] In this example, one distributed application 104 needs to be adapted in order to admit the pending application 104. The initiator ASM 302 sends an adapt message to those nodes 102 in the network 100 on which the adapted application 104 executes that have not received a commit message (shown using a solid line in FIG. 7F). Each node 102 that receives an adapt message adapts the tasks 106 identified in the adapt message received by that node 102 as appropriate. Also, each node 102 that receives an adapt message releases any resources reserved for the pending application 104, if any.
[0068] In addition, for those nodes 102 that reserved resources for the pending application 104 during the admission process but did not receive a commit message or an adapt message, those nodes 102 release any resources reserved for the pending application 104 after the relevant timeout period elapses.
[0069] FIG. 8 is a simplified block diagram of one embodiment of a node 800. The node 800 is suitable for use in the ad hoc wireless network 100 shown in FIG. 1 and is suitable for implementing the methods and techniques described here. The node 800 includes a wireless transceiver subsystem 802. In one embodiment, the wireless transceiver subsystem 802 is a radio frequency (RF) transceiver subsystem. The wireless transceiver subsystem 802 includes appropriate components (for example, antenna, amplifiers, modulators, demodulators, analog-to-digital (A/D) converters, digital-to-analog (D/A) converters, etc.) to handle the transmission and reception of wireless data over a wireless network. The node 800 also includes a control subsystem 804. In the embodiment shown in FIG. 8, the control subsystem includes a programmable processor 806. Programmable processor 806 is coupled to the wireless transceiver subsystem 802 in order to monitor and control the transmission and reception of wireless data over a wireless network. The control subsystem 804 also includes a memory 808 in which program instructions and data used by the programmable processor 806 are stored and from which they are retrieved. One or more of the methods and techniques described here, in one embodiment, are implemented using software executed on the programmable processor 806. Such software comprises appropriate program instructions 810 that are stored in a tangible medium readable by the programmable processor 806 (for example, in memory 808). The instructions, when executed by the programmable processor 806, cause the node 800 to carry out at least a portion of the functionality of the methods and techniques described here as being performed by a node. The software creates and/or interacts with appropriate data structures 812 stored in memory 808.
[0070] The following describes an exemplary resource allocation model for a single distributed application 104. Such a single-application resource allocation model characterizes the mapping of the one or more tasks 106 of the single distributed application to one or more nodes 102 in the network 100. Such a resource allocation model also characterizes the allocation of resources among the one or more tasks 106 that comprise the distributed application 104. Examples of such resources are node resources and network resources.
[0071] The single-application resource model described here is based on the following assumptions that simplify the problem formulation. First, it is assumed that the quality-of-service
(QoS) dimensions have a one-to-one correspondence to system resource types. A QoS request for an application specifies, for each resource that is needed by that application, a minimum resource value ("min") and a maximum resource value ("max"). The definition of the min and max values defines a range of acceptable allocations for that resource.
[0072] Another assumption is that node resources are modeled as limited buckets of capacity r^max with the admission condition Σ_i r_i ≤ r^max, where r_i is the resource amount allocated for task i. The total resource utilization in the network 100 cannot exceed the total amount of resources available in the network 100.
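The bucket admission condition from this assumption can be expressed directly (the function name is an assumption):

```python
def can_admit(allocated, request, capacity):
    """Bucket admission condition: the sum of existing allocations plus the
    new request must not exceed the bucket capacity r^max."""
    return sum(allocated) + request <= capacity

print(can_admit([3.0, 2.0], request=4.0, capacity=10.0))  # True
print(can_admit([3.0, 2.0], request=6.0, capacity=10.0))  # False
```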
[0073] Another assumption is that a network resource is modeled as a limited bucket associated with a pair of nodes 102 that communicate over one or more wireless network links. The example of a network resource used in the exemplary single- application resource allocation model described here is communication bandwidth between two nodes 102 in the network 100.
[0074] Another assumption is that the wireless network links established between any two nodes 102 in the network 100 are bi-directional. Also, it is assumed that each of the bi-directional connections in each such network link shares the same network resource. That is, in this exemplary resource allocation model, each of the bi-directional connections in each network link shares the same communication bandwidth.
[0075] Another assumption that is made in the exemplary single-application model described here is that resources are independent of each other. Also, it is assumed that resources are not probabilistic and the system guarantees the contracted QoS.
[0076] The single-application model defines node resources and network resources and formalizes allocation constraints. In such a model, the allocation problem is formulated as an optimization problem. There are n nodes 102 and m types of node resources and one type of network resource. A distributed application T comprises s communicating tasks, T = {T1, T2, ..., Ts}.
[0077] An admission request (also referred to here as a "QoS request") for distributed application T is described, in such a model, by a set of quality-of-service descriptors, one for each QoS dimension. The QoS request is described by matrices Qm and QM. Matrices Qm = (q^m_ij) and QM = (q^M_ij), i = 1, ..., m, j = 1, ..., s, define the minimum and maximum QoS requirements, respectively, for application tasks T1, ..., Ts, where m is the number of QoS dimensions (that is, the number of node resource types) and q^m_ij ≤ q^M_ij. In addition, q1 is considered "a better QoS" than q2 if q1 > q2. [0078] Matrix R0 describes the available resources before application admission. R0 = (r^0_ij), i = 1, ..., m, j = 1, ..., n, where r^0_ij is the available amount of resource of type i on node j and R0 ∈ [0, ∞)^(m×n).
[0079] The admission control admits the s tasks in the system. The mapping of the s tasks on the n nodes is given by matrix:
X = (x_ij), i = 1, ..., s, j = 1, ..., n, where x_ij = 1 if task Ti is mapped on node j and x_ij = 0 otherwise.
[0080] The vector Map is defined as Map_i = j if task Ti was mapped on node j. The resource management system (for example, an initiator application service manager) allocates resources Ra = (r^a_ij), i = 1, ..., m, j = 1, ..., s, to the s tasks, where the amount r^a_ij > 0 of resource i has been assigned to task j.
[0081] The single-application resource allocation described here assumes that network resources are allocated independently for each communication link between any two nodes. The network resource is modeled by a limited bucket for each bi-directional link established between a pair of nodes (i, j). Matrix NR0 defines the available network resource at admission time, where NR0 ∈ [0, ∞)^(n×n) and NR^0_ij defines the network resource available to the (i, j) communication link.
[0082] Matrices NQm and NQM define the minimum and maximum network resource requirement, respectively, for each communication link (i, j) for which tasks Ti and Tj communicate, where NQm, NQM ∈ [0, ∞)^(s×s).
[0083] The set TC contains all required connections between tasks. Thus, TC = {(i, j) | Ti communicates with Tj, i < j}.
[0084] Matrix NQa defines the allocated network resource. Thus, NQ^a_ij = the network resource allocated for communication link (Ti, Tj), where NQa ∈ [0, ∞)^(s×s).
[0085] This resource model assumes there is a (possibly multi-hop) path in the network between any two nodes and that all resource allocations for connections are independent. The resource management system maps the tasks to nodes and allocates resources for each connection between two tasks. A connection between tasks Ti and Tj is mapped to a connection between nodes Map_i and Map_j. The system allocates NQ^a_ij resource to the (i, j) connection by subtracting the same resource amount NQ^a_ij from the available network resource for connection (Map_i, Map_j), where after allocation:
NR0[Map_i, Map_j]' = NR0[Map_i, Map_j] − NQ^a_ij.
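The update rule above amounts to subtracting the allocated amount from the bucket of the mapped node pair. A sketch, with matrices represented as nested lists (names are illustrative):

```python
def allocate_link(nr0, map_, nq_a, i, j):
    """Subtract the network resource allocated to task connection (i, j)
    from the bucket of node pair (Map_i, Map_j), per the update rule above."""
    x, y = map_[i], map_[j]
    nr0[x][y] -= nq_a[i][j]
    return nr0

# two tasks mapped to nodes 0 and 1; allocate 3.0 bandwidth units to their link
nr0 = [[0.0, 10.0], [10.0, 0.0]]
nq_a = [[0.0, 3.0], [0.0, 0.0]]
print(allocate_link(nr0, map_=[0, 1], nq_a=nq_a, i=0, j=1))
# [[0.0, 7.0], [10.0, 0.0]]
```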
[0086] Matrix NRa defines the allocated network resources for an application (T1, . . . , Ts , TC) :
NRa = (nr^a_xy), x, y = 1, ..., n, where nr^a_xy = Σ NQ^a_ij taken over all connections (i, j) ∈ TC with (Map_i, Map_j) = (x, y), and nr^a_xy = 0 when no task connection is mapped to the node pair (x, y).
[0087] In other embodiments, this resource model can accommodate alternate allocation strategies by adjusting the equations for resource constraints and resource availability update. For example, in another embodiment, network resources are modeled based on node communication capacity and explicit end-to-end path information that would be available from the routing protocol.
[0088] The single-application resource allocation model described here also specifies various conditions that any resource allocation must meet in such an embodiment. These conditions are also referred to here as "constraints." One type of constraint relates to node resources and such constraints are referred to here as "node resource constraints." In the exemplary single-application resource allocation model described here, one node resource constraint relates to task mapping and specifies that each task is admitted on exactly one node as specified by:
Σ_{j=1,...,n} x_ij = 1, for all tasks i = 1, ..., s.
[0089] Another node constraint relates to application quality-of-service and specifies that allocated resources satisfy QoS requirements:
q^m_ij ≤ r^a_ij ≤ q^M_ij, for all resources i = 1, ..., m and all tasks j = 1, ..., s.
[0090] Another node constraint relates to resource availability and specifies that a particular resource allocation is limited by availability:
Σ_{k : Map_k = j} r^a_ik ≤ r^0_ij,
for all resources i = 1, . . . , m and all nodes j = 1, . . . , n.
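The three node resource constraints above (task mapping, QoS bounds, and availability) can be checked mechanically. The following sketch uses nested lists for the matrices; the function name and argument layout are assumptions:

```python
def check_node_constraints(x, r_a, q_min, q_max, r0, mapping):
    """Return True iff the mapping and allocation satisfy all three
    node resource constraints described in the model."""
    s, n = len(x), len(x[0])
    m = len(r_a)
    # (1) task mapping: each task is admitted on exactly one node
    if any(sum(row) != 1 for row in x):
        return False
    # (2) QoS: q_min[i][j] <= r_a[i][j] <= q_max[i][j] for each resource/task
    for i in range(m):
        for j in range(s):
            if not (q_min[i][j] <= r_a[i][j] <= q_max[i][j]):
                return False
    # (3) availability: per-node totals bounded by the available resources r0
    for i in range(m):
        for node in range(n):
            used = sum(r_a[i][j] for j in range(s) if mapping[j] == node)
            if used > r0[i][node]:
                return False
    return True

# one resource type, two tasks mapped to two nodes
x = [[1, 0], [0, 1]]  # task 0 -> node 0, task 1 -> node 1
print(check_node_constraints(x, r_a=[[2.0, 3.0]], q_min=[[1.0, 1.0]],
                             q_max=[[4.0, 4.0]], r0=[[5.0, 5.0]],
                             mapping=[0, 1]))  # True
```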
[0091] Another type of constraint relates to network resources and such constraints are referred to here as "network resource constraints." In the exemplary single-application resource allocation model described here, one network resource constraint relates to application QoS and specifies that all connections between tasks must be allocated network resources between the minimum required and the maximum needed. Note that, in such an exemplary model, Map maps from NRa to NQa, where NQm ≤ NQa ≤ NQM. Another network constraint relates to resource availability and specifies that the allocated network resource for all connections cannot exceed the available limit. That is, NRa ≤ NR0.
[0092] The optimal resource allocation is designed to maximize the application QoS utility defined as a function of the combined QoS satisfaction ratio for all tasks. The application's overall utility is a linear combination of the node task utility and the network utility.
[0093] The node utility of task Ti for resource j is normalized to:
u_ij = (r^a_ji − q^m_ji) / (q^M_ji − q^m_ji)
[0094] Matrix U = (u_ij), i = 1, ..., s, j = 1, ..., m. The node utility of task Ti is a weighted sum of resource utilities:
u_i = Σ_{j=1,...,m} w_j · u_ij
[0095] Weights w_j ≥ 0, j = 1, ..., m, and Σ_{j=1,...,m} w_j = 1. The application node utility vector is V = (u_1, ..., u_s)^T = UW, where W = (w_1, ..., w_m)^T.
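Putting the node utility formulas above together: normalize each allocated resource within its [min, max] range, then weight by w. A sketch with illustrative data:

```python
def node_utility(r_a, q_min, q_max, w):
    """Per-task node utility: normalized resource utilities weighted by w
    (the weights are assumed to sum to 1)."""
    s, m = len(r_a[0]), len(r_a)
    # u[i][j]: utility of task i for resource j, normalized to [0, 1]
    u = [[(r_a[j][i] - q_min[j][i]) / (q_max[j][i] - q_min[j][i])
          for j in range(m)] for i in range(s)]
    # V = U W: weighted sum over resource types for each task
    return [sum(w[j] * u[i][j] for j in range(m)) for i in range(s)]

# two tasks, two resource types, equal weights
# rows index resource types; columns index tasks (as in matrices Ra, Qm, QM)
r_a  = [[2.0, 3.0], [4.0, 2.0]]
q_mn = [[1.0, 1.0], [2.0, 2.0]]
q_mx = [[3.0, 3.0], [6.0, 6.0]]
print(node_utility(r_a, q_mn, q_mx, w=[0.5, 0.5]))  # [0.5, 0.5]
```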
[0096] In the exemplary single-application model described here, a network utility is defined. The network utility nu for application (T, TC) is defined as:
nu = (1 / |TC|) Σ_{(i,j)∈TC} (NQ^a_ij − NQ^m_ij) / (NQ^M_ij − NQ^m_ij)
[0097] In the exemplary single-application model described here, an application utility is defined. The application utility for a multi-staged admitted application is defined as the weighted sum of node task utilities and the network utility:
v = Σ_{i=1,...,s} a_i · u_i + a_(s+1) · nu
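The application utility is then a weighted combination of per-task node utilities and the network utility. A sketch, under the assumption that the weight vector a has s + 1 entries with the last one weighting the network utility:

```python
def application_utility(task_utils, net_util, a):
    """Weighted sum of s node task utilities and the network utility;
    a[s] is the weight on the network utility term."""
    s = len(task_utils)
    return sum(a[i] * task_utils[i] for i in range(s)) + a[s] * net_util

# two tasks with utilities 0.5 and 1.0, network utility 0.8, equal weights
print(application_utility([0.5, 1.0], 0.8, a=[1/3, 1/3, 1/3]))  # ≈ 0.767
```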
[0098] In the exemplary single-application model described here, the optimal allocation problem is to determine the task mapping X, a node resource allocation Ra, and a network resource allocation NQa, so that the node resource constraints and network resource constraints described above are obeyed and the application utility v described above is maximized. This is a mixed integer programming optimization problem with a fairly complex structure. The resource allocation problem is NP-hard by reduction to the integer bin-packing problem. The technique for admitting a distributed application described above in connection with FIGS. 1-8 attempts to overcome the complexity of the optimization problem.
[0099] Formulating the admission utility function using weighted sums allows us to inject user-defined policies and application semantics into the allocation process. For the node task utility function, the relation between the resource type weights wj may impact the contribution of the task to the overall application utility, thus being a factor in the final mapping of tasks to nodes. In essence, the weight wj can be directly correlated to the relative importance the user assigns to a resource type j. The admission process will map tasks to nodes where allocation of specific resources contributes maximum utility. For instance, if the fraction wCPU/wmemory = 1/3, then the admission algorithm, in the process of optimizing the total application utility value, will be more likely to map tasks to nodes where the memory allocation is closer to the maximum required. Similarly, the weights ai from the application utility formula can be adjusted by users to express preference towards maximizing utility of specific tasks or of the network bandwidth allocation (as+1).
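The wCPU/wmemory = 1/3 example above can be made concrete with a small calculation. The per-resource utilities below are illustrative values assumed to be already normalized to [0, 1] as described earlier; they are not taken from the patent.

```python
# Candidate node A offers CPU near the task's maximum (utility 0.9) but
# memory near its minimum (0.2); node B offers the reverse.
w_cpu, w_memory = 0.25, 0.75           # ratio wCPU/wmemory = 1/3

utility_A = w_cpu * 0.9 + w_memory * 0.2   # CPU-rich node
utility_B = w_cpu * 0.2 + w_memory * 0.9   # memory-rich node
# With memory weighted 3x, node B yields the higher task utility, so the
# admission process prefers it, exactly as the text describes.
```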
[00100] The following describes another exemplary resource allocation model for multiple distributed applications 104. Such a multiple-application resource allocation model characterizes the mapping of the one or more tasks 106 of multiple distributed applications to one or more nodes 102 in the network 100. Such a resource allocation model also characterizes the allocation of resources among the one or more tasks 106 that comprise each of the multiple distributed applications. Examples of such resources are node resources and network resources.
[00101] In this exemplary multi-application model, each of the multiple distributed applications is characterized by a priority, common for all the application's stages (that is, tasks). It is also assumed that each of the distributed applications is admitted if all of its stages are admitted to the system. A constraint that applies to this model is that higher priority applications are never preempted by lower priority applications. The objective of this exemplary multi-application model is to maximize the QoS satisfaction level for higher-priority applications. In other embodiments, the optimization goal is to maximize the overall number of admitted applications.
[00102] In this exemplary multi-application resource allocation model, the system admits and allocates resources for the set A = (A1, . . ., At) of t applications, ordered increasingly on priority. The system comprises n nodes with resource availability R0 = (rij), i = 1, . . ., m, j = 1, . . ., n, for m resources distributed on the n nodes, and NR0 for network resources. Each application Ai is characterized by a priority pi > 0. In this exemplary multi-application model, it is assumed that pi ≤ pi+1. Also, for each application Ai, a minimum requested QoS Qim and a maximum requested QoS QiM for node resources are defined. Note that, in this embodiment, QoS dimensions map one-to-one to resources. Further, for each application Ai, a minimum requested QoS NQim and a maximum requested QoS NQiM for network resources are defined.
[00103] The resource management system (for example, an initiator application service manager) maps application tasks and allocates resources. The vector Y = (y1, . . ., yt) indicates the application admission:
Figure imgf000041_0001
[00104] Matrices Xi define the individual task mapping for application i.
[00105] The global objective function for this exemplary multi-application model is normalized for individual application utility values. In this model, it is desired that the utility contributed by each application to the overall objective be proportional to the QoS (that is, the amount of resources) received from the system and not to the number of stages.
[00106] The application node utility for application i is therefore normalized on the number of stages si, and the network utility is also normalized on the number of connections |TCi|:
Figure imgf000042_0001
[00107] The global objective function assigns more weight to utility contributed by higher priority applications. The goal, in this exemplary model, is to maximize v*:
Figure imgf000042_0002
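The priority-weighted objective described above can be sketched as follows. The published formula for v* appears only as an image, so the form used here, a priority-scaled sum over admitted applications, is one plausible reading of "assigns more weight to utility contributed by higher priority applications"; the per-application utilities are assumed already normalized by stage and connection counts as in paragraph [00106], and all names are illustrative.

```python
def global_objective(priorities, admitted, app_utils):
    """One plausible form of the global objective v* (the published formula
    is an image): each admitted application (admitted[i] is the 0/1 value
    yi) contributes its normalized utility scaled by its priority pi."""
    return sum(p * y * v for p, y, v in zip(priorities, admitted, app_utils))
```

Under this form, raising an application's priority directly raises the payoff for satisfying its QoS, steering the optimizer toward higher-priority applications.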
[00108] The set of constraints for the optimization problem relates to resource allocation and priority-based preemption. The resource constraints are an extension of those listed above in connection with the single-application resource allocation model. The constraints specify that the resources allocated to all applications should not exceed initial availability:
Figure imgf000043_0001
[00109] The constraints specify that the allocated resources should satisfy QoS demands for admitted applications. For all i = 1, . . . , t:
Figure imgf000043_0002
[00110] The constraint stating that higher priority applications cannot be preempted by lower priority applications is formulated as yi ≤ yi+1, for all i = 1, . . . , t - 1, where it is assumed that pi ≤ pi+1.
[00111] In the multi-application resource allocation model described here, the allocation optimization problem is specified by maximizing the global objective function:
Figure imgf000043_0003
for each application i:
Figure imgf000044_0001
[00112] Also, for each application i, if nqM = nqm or i = j (that is, tasks on the same node), then nu = 1. Otherwise, if nqM > nqm, then:
Figure imgf000044_0002
with the following constraints (t is the number of applications):
Figure imgf000044_0003
Also, for all i = 1, . . . , t:
Figure imgf000044_0004
[00113] The resource allocation problem asks for computing the following matrices: Y for admitted applications, Xi for the task mapping of application i, Ri for allocated node resources, and NRa for the network resources, for each application i = 1, . . . , t.

[00114] In one usage scenario, application admissions come at different times. The technique for admitting a distributed application described above in connection with FIGS. 1-8 considers QoS adaptation for existing applications, adding the extra resources to the available resource pool. This can be modeled by the above multi-application admission method by forcing selected critical applications (assumed to be in execution before this admission) to be admitted, that is, yi = 1.
[00115] The large number of variables (t + tsn + tsm + tnn) for this mixed integer program makes a real-time approximation with branch-and-bound, or even with a linear program, infeasible. The technique for admitting a distributed application described above in connection with FIGS. 1-8 attempts to overcome the complexity of the optimization problem.
[00116] In another embodiment making use of such a multi-application model, an alternative goal for the admission procedure is to maximize the overall number of accepted applications, where higher priority applications have precedence. Later, after admission, a QoS expansion phase would increase the QoS for admitted applications from remaining resources. The large optimization problem that results from such an embodiment can be broken into two smaller pieces with fewer unknowns. In other words, in such an embodiment, the admission process includes two phases.
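The two-phase scheme just described can be sketched with a small example. A single scalar "capacity" stands in for the node and network resource vectors, and all class and function names are illustrative assumptions, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class App:
    name: str
    priority: int
    q_min: float     # minimum requested QoS (one aggregate resource)
    q_max: float     # maximum requested QoS

def two_phase_admission(apps, capacity):
    """Sketch of the two-phase scheme: phase 1 admits applications at their
    minimum QoS in priority order (maximizing admissions under precedence);
    phase 2 expands admitted applications toward their maximum requested
    QoS from whatever capacity remains, again in priority order."""
    order = sorted(apps, key=lambda a: a.priority, reverse=True)
    grants, admitted = {}, []
    for app in order:                        # phase 1: minimal QoS
        if app.q_min <= capacity:
            capacity -= app.q_min
            grants[app.name] = app.q_min
            admitted.append(app)
    for app in admitted:                     # phase 2: QoS expansion
        extra = min(app.q_max - app.q_min, capacity)
        grants[app.name] += extra
        capacity -= extra
    return grants
```

Splitting admission from expansion is what shrinks the search space: phase 1 is a pure 0/1 admission problem at fixed (minimum) QoS, and phase 2 is a continuous allocation over only the admitted applications.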
[00117] The first phase is admission with minimal QoS, so that the overall number of admitted applications is maximized. The optimization problem can be formalized as maximizing the objective function:
Figure imgf000045_0001
with the following constraints:
Figure imgf000046_0001
[00118] By admitting applications at their minimum requested QoS, the solution space can be significantly decreased. Only the unknown matrices Xi and Y = (y1, . . ., yt), with elements in {0, 1}, have to be determined for this integer program.
[00119] After the first phase of the two-phase admission process, the indicators yi and Xi have been determined. The second phase of the two-phase admission process involves QoS expansion. The system allocates remaining resources to admitted applications, with preference to higher priority applications. The QoS expansion can be formulated as a linear program maximizing the objective function (the QoS satisfaction ratio):
Figure imgf000046_0002
for each application i:
Figure imgf000047_0001
with the following constraints (t is the number of applications):
Figure imgf000047_0002
Also, for all i = k, . . . , t:
Figure imgf000047_0003
[00120] The ts(m+n) remaining unknowns for this linear program are the matrices Ria and NQia, for i = 1, . . . , t.
[00121] The methods and techniques described here may be implemented in digital electronic circuitry, or with a programmable processor (for example, a special-purpose processor or a general-purpose processor such as a computer), firmware, software, or in combinations of them. Apparatus embodying these techniques may include appropriate input and output devices, a programmable processor, and a storage medium tangibly embodying program instructions for execution by the programmable processor. A process embodying these techniques may be performed by a programmable processor executing a program of instructions to perform desired functions by operating on input data and generating appropriate output. The techniques may advantageously be implemented in one or more programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. Generally, a processor will receive instructions and data from a read-only memory and/or a random access memory. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and DVD disks. Any of the foregoing may be supplemented by, or incorporated in, specially-designed application-specific integrated circuits (ASICs).
[00122] A number of embodiments of the invention defined by the following claims have been described. Nevertheless, it will be understood that various modifications to the described embodiments may be made without departing from the spirit and scope of the claimed invention. Accordingly, other embodiments are within the scope of the following claims.

Claims

What is claimed is:
1. A method comprising: in a wireless network having a plurality of clusters, each cluster comprising at least one of a plurality of nodes: attempting to map a set of tasks to at least one node within a first cluster of the wireless network based on resource availability of the nodes within the first cluster; and if unable to map the set of tasks to said at least one node in the first cluster, attempting to map the set of tasks to at least one node in at least one of the first cluster and at least one of the other clusters in the wireless network based on resource availability of the nodes within the first cluster and the at least one of the other clusters in the wireless network.
2. The method of claim 1, further comprising receiving a request to admit an application for execution on at least one node in the network, the application comprising the set of tasks.
3. The method of claim 2, wherein the admission request is received at a first node.
4. The method of claim 3, wherein the request is received from a client executing on the first node.
5. The method of claim 3, further comprising forwarding the request to a head node of the first cluster and receiving, at the first node in the first cluster, first resource availability information from at least one node in the first cluster.
6. The method of claim 5, wherein the request is forwarded to the first head node based on a routing protocol.
7. The method of claim 5, further comprising, if unable to map the set of tasks to said at least one node in the first cluster, requesting that the head node of the first cluster communicate the request to a head node of the at least one of the other clusters in the wireless network and receiving, at the first node in the first cluster, resource availability information from at least one node in the at least one of the other clusters in the wireless network.
8. The method of claim 1, wherein each cluster includes a cluster head node.
9. The method of claim 8, wherein the first cluster comprises a local cluster.
10. The method of claim 9, wherein attempting to map the set of tasks to the at least one node within the first cluster comprises: when an initiator node in the local cluster receives an admission request to execute an application comprising the set of tasks, having the initiator node forward the admission request to a local cluster head node for the local cluster; when the admission request is forwarded to the local cluster head node, having the local cluster head node request that at least one of the set of nodes included in the local cluster provide resource availability information to the initiator node; and having the initiator node attempt to map the set of tasks to a subset of the nodes included in the local cluster using the resource availability information received from nodes in the local cluster.
11. The method of claim 9, wherein attempting to map the set of tasks to the at least one node in the at least one of the first cluster and the at least one of the other clusters in the wireless network if unable to map the set of tasks to said at least one node in the first cluster comprises: if the initiator node is unable to map the set of tasks to the subset of nodes included in the local cluster: having the local cluster head node forward the admission request to the cluster head node of successive clusters in the wireless network in order to have at least one node in each of the successive clusters send resource availability information to the initiator node until the initiator node is able to map the set of tasks to a subset of the nodes in the wireless network or until there are no additional clusters to forward the admission request to; and having the initiator node attempt to map the set of tasks to a subset of the nodes from which resource availability information has been received based on the received resource availability information.
12. The method of claim 11, wherein each cluster head node that receives the admission request requests that the nodes within the same cluster as that cluster head node send resource availability information to the initiator node.
13. The method of claim 12, wherein each node that receives a request to send resource availability information to the initiator node: determines resource availability information for that node; sends the resource availability information to the initiator node; and reserves resources for the admission request from available resources; when the node receives a commit message sent from the initiator node to that node when the initiator node maps at least one of the tasks to that node, commits the reserved resources for the at least one of the tasks mapped to that node and executes the at least one of the tasks mapped to that node; and when a predetermined amount of time elapses after reserving the reserved resources without receiving the commit message, releases the reserved resources.
14. The method of claim 13, wherein the predetermined amount of time is determined using a timer.
15. The method of claim 1, wherein an adapted application comprises a set of executing tasks and wherein at least one node in the wireless network to which a task is mapped reduces an amount of at least one resource that is utilized by at least one of the set of executing tasks of the adapted application in order to execute the task mapped to that node.
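The node-side reserve/commit/timeout behavior recited in claims 13-14 can be sketched as follows. This is an illustrative model only; the patent does not prescribe any particular implementation, and the class, method names, and single-resource simplification are assumptions.

```python
import time

class NodeReservation:
    """Sketch of a node's reserve/commit/timeout behavior (claims 13-14)."""

    def __init__(self, available, timeout_s=5.0):
        self.available = available        # free units of one resource
        self.reserved = {}                # request id -> (amount, deadline)
        self.timeout_s = timeout_s

    def reserve(self, req_id, amount):
        """Hold resources for the initiator while it maps tasks."""
        self._expire()
        if amount > self.available:
            return False                  # cannot satisfy the request
        self.available -= amount
        self.reserved[req_id] = (amount, time.monotonic() + self.timeout_s)
        return True

    def commit(self, req_id):
        """Commit message arrived: the reservation becomes an allocation."""
        self._expire()
        return self.reserved.pop(req_id, None) is not None

    def _expire(self):
        """Release reservations whose timer elapsed without a commit."""
        now = time.monotonic()
        for req_id, (amount, deadline) in list(self.reserved.items()):
            if now >= deadline:
                self.available += amount
                del self.reserved[req_id]
```

The timeout is what keeps a failed admission attempt from permanently stranding resources on nodes the initiator never commits.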
PCT/US2005/021574 2004-06-18 2005-06-17 Method for managing resources of ad hoc wireless networks WO2006028547A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/872,257 2004-06-18
US10/872,257 US7460549B1 (en) 2004-06-18 2004-06-18 Resource management for ad hoc wireless networks with cluster organizations

Publications (1)

Publication Number Publication Date
WO2006028547A1 true WO2006028547A1 (en) 2006-03-16

Family

ID=35657213

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2005/021574 WO2006028547A1 (en) 2004-06-18 2005-06-17 Method for managing resources of ad hoc wireless networks

Country Status (2)

Country Link
US (1) US7460549B1 (en)
WO (1) WO2006028547A1 (en)



Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6718394B2 (en) * 2002-04-29 2004-04-06 Harris Corporation Hierarchical mobile ad-hoc network and methods for performing reactive routing therein using ad-hoc on-demand distance vector routing (AODV)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07298340A (en) * 1994-03-02 1995-11-10 Fujitsu Ltd Mobile communication system and mobile station
US20050240928A1 (en) * 2004-04-09 2005-10-27 Brown Theresa M Resource reservation


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
DURAN-LIMON H A ET AL: "A resource and QoS management framework for a real-time event system in mobile ad hoc environments", PROCEEDINGS. NINTH IEEE INTERNATIONAL WORKSHOP ON OBJECT-ORIENTED REAL-TIME DEPENDABLE SYSTEMS IEEE COMPUT. SOC LOS ALAMITOS, CA, USA, 2003, pages 217 - 224, XP010783612, ISBN: 0-7695-2054-5 *
ESTRIN D ET AL: "NEXT CENTURY CHALLENGES: SCALABLE COORDINATION IN SENSOR NETWORKS", MOBICOM '99. PROCEEDINGS OF THE 5TH ANNUAL ACM/IEEE INTERNATIONAL CONFERENCE ON MOBILE COMPUTING AND NETWORKING. SEATTLE, WA, AUG. 15 - 20, 1999, ANNUAL ACM/IEEE INTERNATIONAL CONFERENCE ON MOBILE COMPUTING AND NETWORKING, NEW YORK, NY : ACM, US, vol. CONF. 5, 15 August 1999 (1999-08-15), pages 263 - 270, XP000896093, ISBN: 1-58113-142-9 *
ZHANG J ET AL: "Task-oriented self-organization of ad hoc sensor systems", PROCEEDINGS OF IEEE SENSORS 2002. ORLANDO, FL, JUNE 12 - 14, 2002, IEEE INTERNATIONAL CONFERENCE ON SENSORS, NEW YORK, NY : IEEE, US, vol. VOL. 1 OF 2. CONF. 1, 12 June 2002 (2002-06-12), pages 1485 - 1490, XP010605341, ISBN: 0-7803-7454-1 *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8644396B2 (en) 2006-04-18 2014-02-04 Qualcomm Incorporated Waveform encoding for wireless applications
JP2009534945A (en) * 2006-04-18 2009-09-24 クゥアルコム・インコーポレイテッド Offload processing for wireless applications
WO2007121476A1 (en) * 2006-04-18 2007-10-25 Qualcomm Incorporated Offloaded processing for wireless applications
WO2007121477A1 (en) * 2006-04-18 2007-10-25 Qualcomm Incorporated Waveform encoding for wireless applications
TWI384823B (en) * 2006-04-18 2013-02-01 Qualcomm Inc Offloaded processing for wireless applications
US8654868B2 (en) 2006-04-18 2014-02-18 Qualcomm Incorporated Offloaded processing for wireless applications
CN101433052B (en) * 2006-04-26 2013-04-24 高通股份有限公司 Dynamic distribution of device functionality and resource management
WO2007127878A1 (en) * 2006-04-26 2007-11-08 Qualcomm Incorporated Dynamic distribution of device functionality and resource management
US8600373B2 (en) * 2006-04-26 2013-12-03 Qualcomm Incorporated Dynamic distribution of device functionality and resource management
TWI405444B (en) * 2006-04-26 2013-08-11 Qualcomm Inc Method, electronic device, computer-program product, handset, watch, medical device and sensor for wireless communications
US8289159B2 (en) 2006-04-26 2012-10-16 Qualcomm Incorporated Wireless localization apparatus and method
US8406794B2 (en) 2006-04-26 2013-03-26 Qualcomm Incorporated Methods and apparatuses of initiating communication in wireless networks
US8024596B2 (en) 2008-04-29 2011-09-20 Bose Corporation Personal wireless network power-based task distribution
US7995964B2 (en) 2008-06-24 2011-08-09 Bose Corporation Personal wireless network capabilities-based task portion distribution
WO2009158056A1 (en) * 2008-06-24 2009-12-30 Bose Corporation Personal wireless network capabilities-based task portion distribution
US8090317B2 (en) 2008-08-01 2012-01-03 Bose Corporation Personal wireless network user behavior based topology
CN102132595A (en) * 2008-08-15 2011-07-20 高通股份有限公司 Adaptive clustering framework in frequency-time for network mimo systems
US9521554B2 (en) 2008-08-15 2016-12-13 Qualcomm Incorporated Adaptive clustering framework in frequency-time for network MIMO systems
US10028332B2 (en) 2008-08-15 2018-07-17 Qualcomm Incorporated Hierarchical clustering framework for inter-cell MIMO systems
US9288690B2 (en) 2010-05-26 2016-03-15 Qualcomm Incorporated Apparatus for clustering cells using neighbor relations

Also Published As

Publication number Publication date
US20080279167A1 (en) 2008-11-13
US7460549B1 (en) 2008-12-02

Similar Documents

Publication Publication Date Title
US7460549B1 (en) Resource management for ad hoc wireless networks with cluster organizations
CN110647391B (en) Edge computing method and system for satellite-ground cooperative network
CN108777852B (en) Internet of vehicles content edge unloading method and mobile resource distribution system
CN108494612B (en) Network system for providing mobile edge computing service and service method thereof
CN110493360B (en) Mobile edge computing unloading method for reducing system energy consumption under multiple servers
Zhou et al. Delay-aware IoT task scheduling in space-air-ground integrated network
US7933249B2 (en) Grade of service and fairness policy for bandwidth reservation system
Samanta et al. Latency-oblivious distributed task scheduling for mobile edge computing
CN111552564A (en) Task unloading and resource optimization method based on edge cache
CN113055487B (en) VMEC service network selection-based migration method
CN109756912B (en) Multi-user multi-base station joint task unloading and resource allocation method
CA2392574A1 (en) System, apparatus and method for uplink resource allocation
Corradi et al. Adaptive context data distribution with guaranteed quality for mobile environments
US9307389B2 (en) Method, system, and equipments for mobility management of group terminals
US20080186942A1 (en) Wireless base station apparatus capable of effectively using wireless resources according to sorts of data
Feng et al. HVC: A hybrid cloud computing framework in vehicular environments
Mazza et al. A cluster based computation offloading technique for mobile cloud computing in smart cities
Yang et al. Optimal task scheduling in communication-constrained mobile edge computing systems for wireless virtual reality
Arfaoui et al. Minimization of delays in multi-service cloud-RAN BBU pools
Nguyen et al. EdgePV: collaborative edge computing framework for task offloading
CN110177383A (en) The efficiency optimization method of task based access control scheduling and power distribution in mobile edge calculations
US20020168940A1 (en) Predictive fair polling mechanism in a wireless access scheme
US8059616B1 (en) Parallel wireless scheduler
US20230153142A1 (en) System and method for enabling an execution of a plurality of tasks in a heterogeneous dynamic environment
CN110856215B (en) Resource allocation algorithm for multi-access edge calculation

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 EP: The EPO has been informed by WIPO that EP was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

WWW WIPO information: Withdrawn in national office

Country of ref document: DE

122 EP: PCT application non-entry in European phase