WO2016040743A1 - Dynamic virtual resource request rate control for utilizing physical resources - Google Patents

Dynamic virtual resource request rate control for utilizing physical resources

Info

Publication number
WO2016040743A1
Authority
WO
WIPO (PCT)
Prior art keywords
resource
virtual
physical
virtual compute
instance
Prior art date
Application number
PCT/US2015/049587
Other languages
French (fr)
Inventor
William John EARL
John Merrill PHILLIPS
Deepak Singh
Original Assignee
Amazon Technologies, Inc.
Priority date
Filing date
Publication date
Priority claimed from US14/484,200 external-priority patent/US9626210B2/en
Priority claimed from US14/484,197 external-priority patent/US9529633B2/en
Priority claimed from US14/483,952 external-priority patent/US9635103B2/en
Application filed by Amazon Technologies, Inc. filed Critical Amazon Technologies, Inc.
Publication of WO2016040743A1 publication Critical patent/WO2016040743A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5077Logical partitioning of resources; Management or configuration of virtualized resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5011Pool
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/548Queue

Definitions

  • virtualization technologies may allow a single physical computing machine to be shared among multiple users by providing each user with one or more virtual machines hosted by the single physical computing machine, with each such virtual machine being a software simulation acting as a distinct logical computing system that provides users with the illusion that they are the sole operators and administrators of a given hardware computing resource, while also providing application isolation and security among the various virtual machines.
  • virtualization technologies may allow data storage hardware to be shared among multiple users by providing each user with a virtualized data store which may be distributed across multiple data storage devices, with each such virtualized data store acting as a distinct logical data store that provides users with the illusion that they are the sole operators and administrators of the data storage resource.
  • Virtualization technologies may be leveraged to create many different types of services or perform different functions for client systems or devices.
  • virtual machines may be used to implement a network-based service for external customers, such as an e-commerce platform.
  • Virtual machines may also be used to implement a service or tool for internal customers, such as an information technology (IT) service implemented as part of an internal network for a corporation.
  • multiple virtual machines may be hosted together on a single host, creating the possibility for contention and conflicts when utilizing different virtual computing resources that may rely upon the same physical computer resources.
  • FIG. 1 is a block diagram illustrating a dynamic virtual resource request rate control for physical resources, according to some embodiments.
  • FIG. 2 is a block diagram illustrating a provider network that provides virtual compute instances for which dynamic virtual resource request rate controls are implemented, according to some embodiments.
  • FIG. 3 is a block diagram illustrating a virtualization host that implements dynamic virtual resource request rate controls for physical resources, according to some embodiments.
  • FIG. 4 is a block diagram illustrating a resource credit balance scheduler that implements dynamic virtual resource request rate controls, according to some embodiments.
  • FIG. 5 is a high-level flowchart illustrating various methods and techniques for implementing dynamic virtual resource request rate control for physical resources, according to some embodiments.
  • FIG. 6 is a high-level flowchart illustrating various methods and techniques for determining a delay for a dynamic resource rate control, according to some embodiments.
  • FIG. 7 is a diagram illustrating a resource credit pool for replenishing resource credit balances for virtual compute instances, according to some embodiments.
  • FIG. 8 is a block diagram illustrating a provider network that provides resource credit pools for replenishing resource credit balances of virtual compute instances, according to some embodiments.
  • FIG. 9 is a block diagram illustrating a virtualization host that implements resource credits for scheduling virtual computer resources, according to some embodiments.
  • FIG. 10 illustrates interactions between a client and a provider network that implements resource credit pools for replenishing instance resource credit balances, according to some embodiments.
  • FIGS. 11A and 11B are block diagrams illustrating virtual compute instance migrations as part of replenishing instance resource credit balances from a resource credit pool, according to some embodiments.
  • FIG. 12 is a high-level flowchart illustrating various methods and techniques for implementing resource credit pools for replenishing resource credit balances of virtual compute instances, according to some embodiments.
  • FIG. 13 is a high-level flowchart illustrating various methods and techniques for migrating instances in a provider network as part of replenishing instance resource credit balances from a resource credit pool, according to some embodiments.
  • FIG. 14 is a high-level flowchart illustrating various methods and techniques for replenishing a resource credit pool, according to some embodiments.
  • FIG. 15 is a high-level flowchart illustrating various methods and techniques for requesting resource credits from a resource credit pool for a particular instance, according to some embodiments.
  • FIG. 16 is a timeline illustrating variable timeslices for processing latency-dependent workloads at a virtualization host, according to some embodiments.
  • FIG. 17 is a high-level flowchart illustrating various methods and techniques for implementing variable timeslices for processing latency-dependent workloads, according to some embodiments.
  • FIG. 18 is a high-level flowchart illustrating various methods and techniques for updating resource credit balances for virtual compute instances for providing preemption compensations, according to some embodiments.
  • FIG. 19 is a block diagram illustrating an example computing system, according to some embodiments.
  • the systems and methods described herein may implement dynamic virtual resource request rate control for physical resources, according to some embodiments.
  • the systems and methods described herein may implement resource credit pools for replenishing individual resource credit balances of virtual compute instances, according to some embodiments.
  • the systems and methods described herein may implement variable timeslices for latency-dependent workloads at a virtualization host, according to some embodiments.
  • Virtualization hosts may provide virtualized devices or resources as part of implementing virtual compute instances. These virtualized devices may provide a virtual compute instance with access to an underlying physical resource corresponding to the virtual resource.
  • For example, a virtual central processing unit (vCPU) may act as the virtual proxy for a physical central processing unit (CPU) implemented at the virtualization host.
  • Work requests may be submitted to individual virtual resource queues, which may correspond to a particular compute instance, from which they are then placed into a common physical resource queue for the physical computer resource performing the work request.
  • Multiple different physical computer resources may have different resource request queues and corresponding individual virtual resource requests queues for compute instances that utilize the different physical computer resources.
  • the utilization of underlying physical computer resources may differ as well.
  • Some instance workloads may be throughput sensitive, submitting a high volume of work requests to utilize physical computer resources, in various embodiments.
  • Other instance workloads may be latency sensitive, submitting smaller numbers of work requests to utilize physical computer resources that may be dependent upon a response from the physical computer resources to continue performing, such as sending out requests via a network and receiving responses via the network.
  • dynamic virtual resource request rate controls for physical computer resources may be implemented to provide statistical fair-sharing among different virtual compute instances utilizing the same physical computer resource, without maintaining large in-memory data structures for scheduling or ordering work requests for submission to the physical resource request queue.
  • dynamic virtual resource request rate controls may provide consistent performance for performing individual work requests, so that a physical resource request queue for an underlying physical computer resource may not be overloaded with work requests.
  • FIG. 1 is a block diagram illustrating a dynamic virtual resource request rate control for physical resources, according to some embodiments.
  • a virtualization host such as virtualization hosts 234 and 310 described below with regard to FIGS. 2 and 3 may implement multiple virtual compute instances, such as virtual compute instances 102, 104, 106 and 108.
  • Virtual compute instances may utilize virtual devices or other interfaces which may submit work requests 110 for a physical resource to an individual instance request queue for that resource, such as instance request queues 112, 114, 116, and 118.
  • Dynamic rate controls 122, 124, 126, and 128 may place work requests from instance request queues into physical resource request queue 150 in order to ultimately be removed from physical resource request queue 150 and performed by the underlying physical computer resource.
  • Dynamic rate controls may, in various embodiments, impose delays between placing work requests from an instance request queue into physical resource request queue 150. Delays between work requests may be dynamically determined based on the workload of physical resource request queue 150 (e.g., based on the number of work requests in physical resource request queue 150). For example, workload metrics indicating the number of requests in queue 150 at particular points in time may be reported back (as illustrated by the loop back from queue 150) to physical resource workload module 140 which may determine a workload value or indicator, in some embodiments. The workload value or indicator may be provided (synchronously or asynchronously) to dynamic rate controls 122, 124, 126, and 128 for determining the delay between requests. In at least some embodiments, a random delay may be determined between work requests.
  • the random delay may be added to an initial or baseline delay, in some embodiments, based on a probability determined using the workload value or indicator. Introducing random delays may prevent congestion in physical resource request queue 150 due to synchronized submissions of work requests (e.g., troops marching in time problem).
  • FIG. 6, discussed below, provides further examples of adding random delays as part of dynamically determining a delay between work requests.
  • a delay may be determined for each instance request queue according to the utilization of the underlying physical resource allotted to the instance.
  • instance resource utilization 130 may provide indicators of the allocated, purchased, or otherwise assigned utilization of the underlying physical computer resource to dynamic rate controls 122, 124, 126, and 128, which may identify an initial delay to provide in between work requests.
  • the delay based on utilization may be provided between work requests, in some embodiments, whether or not a random delay is added to the delay.
  • the delay for an instance request queue may be dynamic (changing between individual work requests or multiple work requests), as utilization allotted to a virtual compute instance may change.
  • resource credit balances may be used to determine utilization of physical computer resources, as discussed below with regard to FIGS.
  • Delays between work requests may also be determined to ensure that work requests are not forced to wait out of proportion with respect to the number of requests in a respective instance request queue. For example, instance request queues 112 and 116 have more work requests to submit than instance request queues 114 and 118. Delays between work requests may be determined so that work requests for an instance with fewer work requests may be submitted during the delay between requests of an instance with a greater number of work requests queued. For example, after submitting a first work request, dynamic rate control 122 may delay another work request from instance request queue 112 for an amount of time so that a work request from instance request queue 114, a work request from instance request queue 116, and a work request from instance request queue 118 may be submitted.
  • Imposing dynamic delays between work requests from individual instance request queues based, at least in part, on workload of physical resource request queue 150 may reduce or eliminate congestion at physical resource request queue 150. As the workload of physical resource request queue 150 increases, more delays may be added or increased between work request submissions, throttling back the number of work requests placed in physical resource request queue 150. Similarly, if the workload of physical resource request queue 150 decreases, fewer delays may be added or delays may be decreased between work request submissions, increasing the number of work requests placed in physical resource request queue 150.
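  • The arrangement of FIG. 1 can be summarized in a short sketch. The following Python fragment is purely illustrative (the class and method names, the threading model, and the capacity figure of 100 requests are assumptions, not taken from the disclosure); it models per-instance request queues feeding a shared physical resource queue, with a per-instance dynamic rate control that sleeps for a dynamically determined delay between submissions and that becomes more likely to add a random delay as the shared queue's reported workload grows.

```python
import queue
import random
import threading
import time

class PhysicalResourceQueue:
    """Shared queue feeding the underlying physical resource (cf. queue 150)."""
    def __init__(self):
        self._queue = queue.Queue()

    def submit(self, work_request):
        self._queue.put(work_request)

    def workload(self):
        # Current queue depth, reported back to the dynamic rate controls.
        return self._queue.qsize()

class DynamicRateControl(threading.Thread):
    """Per-instance control (cf. controls 122-128) that moves work requests from
    an instance request queue into the shared physical resource queue."""
    def __init__(self, instance_queue, physical_queue, allotted_rate):
        super().__init__(daemon=True)
        self.instance_queue = instance_queue      # cf. instance request queues 112-118
        self.physical_queue = physical_queue
        self.allotted_rate = allotted_rate        # requests/second allotted to the instance

    def delay(self):
        base = 1.0 / self.allotted_rate           # initial delay from allotted utilization
        # Probability of adding a random delay grows with the shared queue's workload
        # (100 is an assumed capacity figure for illustration only).
        p = min(1.0, self.physical_queue.workload() / 100.0)
        extra = random.uniform(0.0, base) if random.random() < p else 0.0
        return base + extra

    def run(self):
        while True:
            work_request = self.instance_queue.get()   # next queued request for this instance
            self.physical_queue.submit(work_request)
            time.sleep(self.delay())                   # impose the delay before the next request
```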
  • This specification next includes a general description of a provider network, which may implement dynamic virtual resource request rate controls for physical resources. Then various examples of a provider network are discussed, including different components/modules, or arrangements of components/modules that may be employed as part of the provider network. A number of different methods and techniques to implement dynamic virtual resource request rate controls for physical resources at a virtualization host are then discussed, some of which are illustrated in accompanying flowcharts. Various examples are provided throughout the specification.
  • Different clients implementing virtual computing resources have different resource demands. For example, some clients' workloads are not predictable and may not utilize fixed resources efficiently. Virtual compute instances implementing resource credits for scheduling virtual computing resources may provide dynamic utilization of resources to provide flexible high performance, without wasting unutilized fixed resources.
  • Resource credits may be accumulated for individual virtual compute instances and maintained as part of an individual resource credit balance. When a virtual compute instance needs to perform work at high performance, the resource credits may be applied to the work, effectively providing full utilization of underlying physical resources for the duration of the resource credits. When a virtual compute instance is using less than its share of resources (e.g., little or no work is being performed), credits may be accumulated and used for a subsequent task.
  • Resources may, in various embodiments, be any virtualized computer resource that is implemented or performed by a managed physical computer resource, including, but not limited to, processing resources, communication or networking resources, and storage resources.
  • FIG. 2 is a block diagram illustrating a provider network that provides virtual compute instances for which variable timeslices for processing latency-dependent workloads are implemented, according to some embodiments.
  • Provider network 200 may be set up by an entity such as a company or a public sector organization to provide one or more services (such as various types of cloud-based computing or storage) accessible via the Internet and/or other networks to clients 202.
  • Provider network 200 may include numerous data centers hosting various resource pools, such as collections of physical and/or virtualized computer servers, storage devices, networking equipment and the like, needed to implement and distribute the infrastructure and services offered by the provider network 200.
  • provider network 200 may provide computing resources. These computing resources may in some embodiments be offered to clients in units called "instances" 234, such as virtual compute instances.
  • provider network 200 may implement a control plane 210 in order to manage the computing resource offerings provided to clients 202 by provider network 200.
  • Control plane 210 may implement various different components to manage the computing resource offerings.
  • Control plane 210 may be implemented across a variety of servers, nodes, or other computing systems or devices (such as computing system 2000 described below with regard to FIG. 19). It is noted that where one or more instances of a given component may exist, reference to that component herein may be made in either the singular or the plural. However, usage of either form is not intended to preclude the other.
  • control plane 210 may implement interface 212. Interface 212 may be configured to process incoming requests received via network 260 and direct them to the appropriate component for further processing.
  • interface 212 may be a network-based interface and may be implemented as a graphical interface (e.g., as part of an administration control panel or web site) and/or as a programmatic interface (e.g., handling various Application Programming Interface (API) commands).
  • interface 212 may be implemented as part of a front end module or component dispatching requests to the various other components, such as resource management 214, reservation management 216, resource monitoring 218, and billing 220.
  • Clients 202, in various embodiments, may not directly provision, launch, or configure resources but may instead send requests to control plane 210 such that the illustrated components (or other components, functions or services not illustrated) may perform the requested actions.
  • Control plane 210 may implement resource management module 214 to manage the access to, capacity of, mappings to, and other control or direction of computing resources offered by provider network.
  • resource management module 214 may provide both a direct sell and 3rd party resell market for capacity reservations (e.g., reserved compute instances).
  • resource management module 214 may allow clients 202 via interface 212 to learn about, select, purchase access to, and/or reserve capacity for computing resources, either from an initial sale marketplace or a resale marketplace, via a web page or via an API.
  • resource management component may, via interface 212, provide listings of different available compute instance types, each with a different credit accumulation rate.
  • resource management module 214 may be configured to offer credits for purchase (in addition to credits provided via the credit accumulation rate for an instance type) for a specified purchase amount or scheme (e.g., lump sum, additional periodic payments, etc.). For example, resource management module 214 may be configured to receive a credit purchase request (e.g., an API request) and credit the virtual instance balance with the purchased credits. Similarly, resource management module 214 may be configured to handle a request to increase a credit accumulation rate for a particular instance. Resource management 214 may also offer and/or implement a flexible set of resource reservation, control and access interfaces for clients 202 via interface 212. For example resource management module 214 may provide credentials or permissions to clients 202 such that compute instance control operations/interactions between clients and in-use computing resources may be performed.
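  • As a rough illustration of how the credit purchase and accumulation-rate requests described above might be dispatched, consider the sketch below. The request fields, action names, and in-memory stores are assumptions introduced for the example; they are not the service's actual API.

```python
# Hypothetical request dispatch for a resource management component; the field
# names, action names, and storage layout are illustrative assumptions.
credit_balances = {}        # instance_id -> current resource credit balance
accumulation_rates = {}     # instance_id -> resource credits accumulated per hour

def handle_request(request):
    """Credit purchased resource credits to an instance balance, or change the
    credit accumulation rate for a particular instance."""
    if request["action"] == "PurchaseCredits":
        instance = request["instance_id"]
        credit_balances[instance] = credit_balances.get(instance, 0) + request["credits"]
    elif request["action"] == "SetCreditAccumulationRate":
        accumulation_rates[request["instance_id"]] = request["credits_per_hour"]
    else:
        raise ValueError("unsupported action: " + request["action"])

# Example: purchase 50 additional credits for a (hypothetical) instance.
handle_request({"action": "PurchaseCredits", "instance_id": "i-1234", "credits": 50})
```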
  • reservation management module 216 may be configured to handle the various pricing schemes of instances 234 (at least for the initial sale marketplace) in various embodiments.
  • network-based virtual computing service 200 may support several different purchasing modes (which may also be referred to herein as reservation modes) in some embodiments: for example, term reservations (i.e. reserved compute instances), on- demand resource allocation, or spot-price-based resource allocation.
  • a client may make a low, one-time, upfront payment for a compute instance or other computing resource, reserve it for a specified duration such as a one or three year term, and pay a low hourly rate for the instance; the client would be assured of having the reserved instance available for the term of the reservation.
  • a client could pay for capacity by the hour (or some appropriate time unit), without any long-term commitments or upfront payments.
  • In spot-price mode, a client could specify the maximum price per unit time that it is willing to pay for a particular type of compute instance or other computing resource, and if the client's maximum price exceeded a dynamic spot price determined at least in part by supply and demand, that type of resource would be provided to the client.
  • the spot price may become significantly lower than the price for on-demand mode.
  • a resource allocation may be interrupted - i.e., a resource instance that was previously allocated to the client may be reclaimed by the resource management module 214 and may be allocated to some other client that is willing to pay a higher price.
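  • The spot-price behavior above reduces to a simple rule: allocate while the client's maximum bid exceeds the dynamic spot price, and reclaim a previously allocated instance when it no longer does. A hedged sketch follows (the function and argument names are illustrative; how the spot price itself is set is outside this example).

```python
def spot_decision(client_max_price, current_spot_price, currently_allocated):
    """Decide what to do with a spot-mode resource request or allocation."""
    if client_max_price > current_spot_price:
        # The bid exceeds the dynamic spot price, so the resource may be provided.
        return "keep" if currently_allocated else "allocate"
    # The bid no longer exceeds the spot price: an existing allocation may be
    # reclaimed and offered to another client willing to pay a higher price.
    return "reclaim" if currently_allocated else "deny"
```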
  • Resource capacity reservations may also update control plane data store 222 to reflect changes in ownership, client use, client accounts, or other resource information.
  • control plane 210 may implement resource monitoring module 218.
  • Resource monitoring module 218 may track the consumption of various computing instances (e.g., resource credit balances, resource credit consumption) for different virtual computer resources, clients, user accounts, and/or specific instances.
  • resource monitoring module 218 may implement various administrative actions to stop, heal, manage, or otherwise respond to various different scenarios in the fleet of virtualization hosts 230 and instances 234.
  • Resource monitoring module 218 may also provide access to various metric data for client(s) 202 as well as manage client configured alarms.
  • control plane 210 may implement billing management module 220.
  • Billing management module 220 may be configured to detect billing events (e.g., specific dates, times, usages, requests for bill, or any other cause to generate a bill for a particular user account or payment account linked to user accounts). In response to detecting the billing event, billing management module may be configured to generate a bill for a user account or payment account linked to user accounts.
  • a virtual compute instance 234 may, for example, comprise one or more servers with a specified computational capacity (which may be specified by indicating the type and number of CPUs, the main memory size, and so on) and a specified software stack (e.g., a particular version of an operating system, which may in turn run on top of a hypervisor).
  • a number of different types of computing devices may be used singly or in combination to implement the compute instances 234 of network-based virtual computing service 200 in different embodiments, including general purpose or special purpose computer servers, storage devices, network devices and the like.
  • instance clients 202 or any other user may be configured (and/or authorized) to direct network traffic to a compute instance 234.
  • Compute instances 234 may operate or implement a variety of different platforms, such as application server instances, Java™ virtual machines (JVMs), general purpose or special-purpose operating systems, platforms that support various interpreted or compiled programming languages such as Ruby, Perl, Python, C, C++ and the like, or high-performance computing platforms suitable for performing client 202 applications, without for example requiring the client 202 to access an instance 234.
  • Resource credits may be provided at launch of an instance, and may be defined as utilization time (e.g., CPU time, such as CPU-minutes), which may represent the time an instance's virtual resources can spend on underlying physical resources performing a task.
  • resource credits may represent time or utilization of resources in excess of a baseline utilization guarantee.
  • a compute instance may have a baseline utilization guarantee of 10% for a resource, and thus resource credits may increase the utilization for the resource above 10%. Even if no resource credits remain, utilization may still be granted to the compute instance at the 10% baseline. Credit consumption may only happen when the instance needs the physical resources to perform the work above the baseline performance. In some embodiments credits may be refreshed or accumulated to the resource credit balance whether or not a compute instance submits work requests that consume the baseline utilization guarantee of the resource.
  • Different types of compute instances implementing resource credits for scheduling computer resources may be offered. Different compute instances may have a particular number of virtual CPU cores, memory, cache, storage, networking, as well as any other performance characteristic. Configurations of compute instances may also include their location in a particular data center, availability zone, geographic location, etc., and (in the case of reserved compute instances) reservation term length. Different compute instances may have different resource credit accumulation rates for different virtual resources, which may be a number of resource credits that accumulate to the current balance of resource credits maintained for a compute instance. For example, one type of compute instance may accumulate 6 credits per hour for one virtual computer resource, while another type of compute instance may accumulate 24 credits per hour for the same type of virtual computer resource, in some embodiments.
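  • To make the interaction between credit accumulation rates, baseline guarantees, and credit consumption concrete, here is a small illustrative sketch. The instance type names are invented, the 6 and 24 credits-per-hour rates and the 10% baseline echo the examples in the surrounding bullets, the 20% baseline for the second type is an assumption, and one credit is treated as one minute of full utilization, as in the credit definition given later in this document.

```python
# Illustrative instance-type configurations (the names and the 20% baseline are assumed).
INSTANCE_TYPES = {
    "type.small":  {"credits_per_hour": 6,  "baseline_utilization": 0.10},
    "type.medium": {"credits_per_hour": 24, "baseline_utilization": 0.20},
}

def grantable_utilization(instance_type, credit_balance):
    """The baseline guarantee is always available; remaining credits allow
    bursting up to full utilization of the physical computer resource."""
    baseline = INSTANCE_TYPES[instance_type]["baseline_utilization"]
    return 1.0 if credit_balance > 0 else baseline

def consume_credits(credit_balance, utilization, minutes, baseline):
    """Deduct credits only for utilization above the baseline guarantee,
    treating one credit as one minute of full utilization."""
    above_baseline = max(0.0, utilization - baseline)
    return max(0.0, credit_balance - above_baseline * minutes)

# Example: 30 minutes at 60% utilization against a 10% baseline costs 15 credits.
remaining = consume_credits(credit_balance=20, utilization=0.6, minutes=30, baseline=0.10)
```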
  • the resource credit accumulation rate for one resource may be different than the resource credit accumulation rate for a different virtual computer resource (e.g., networking channel) for the same virtual compute instance.
  • multiple different resource credit balances may be maintained for a virtual compute instance for the multiple different virtual computer resources used by the virtual compute instances.
  • a baseline performance guarantee may also be implemented for each of the virtual computer resources, which may be different for each respective virtual computer resource, as well as for the different instance types.
  • Baseline performance guarantees may be included along with the resource credit accumulation rates, in some embodiments.
  • an instance type may include a specific resource credit accumulation rate and guaranteed baseline performance for processing, and another specific resource credit accumulation rate and guaranteed baseline performance rate for networking channels.
  • provider network 200 may offer many different types of instances with different combinations of resource credit accumulation rates and baseline guarantees for different virtual computer resources. These different configurations may be priced differently, according to the resource credit accumulation rates and baseline performance rates, in addition to the various physical and/or virtual capabilities.
  • a virtual compute instance may be reserved and/or utilized for an hourly price, while a long-term reserved instance configuration may utilize a different pricing scheme, but still include the credit accumulation rates and baseline performance guarantees.
  • a virtualization host 230, such as virtualization hosts 230a, 230b through 230n, may implement and/or manage multiple compute instances 234, in some embodiments.
  • a virtualization host 230 may include a virtualization management module 232, such as virtualization management modules 232a, 232b through 232n, capable of instantiating and managing a number of different client-accessible virtual machines or compute instances 234.
  • the virtualization management module 232 may include, for example, a hypervisor and an administrative instance of an operating system, which may be termed a "domain-zero" or "dom0" operating system in some implementations.
  • the dom0 operating system may not be accessible by clients on whose behalf the compute instances 234 run, but may instead be responsible for various administrative or control-plane operations of the network provider, including handling the network traffic directed to or from the compute instances 234.
  • Virtualization management module 232 may be configured to implement dynamic virtual resource request rate controls for physical resources utilized by different instances 234.
  • Client(s) 202 may encompass any type of client configurable to submit requests to provider network 200.
  • a given client 202 may include a suitable version of a web browser, or may include a plug-in module or other type of code module configured to execute as an extension to or within an execution environment provided by a web browser.
  • a client 202 may encompass an application such as a dashboard application (or user interface thereof), a media application, an office application or any other application that may make use of compute instances 234 to perform various operations.
  • an application may include sufficient protocol support (e.g., for a suitable version of Hypertext Transfer Protocol (HTTP)) for generating and processing network-based services requests without necessarily implementing full browser support for all types of network-based data.
  • clients 202 may be configured to generate network-based services requests according to a Representational State Transfer (REST)-style network-based services architecture, a document- or message-based network-based services architecture, or another suitable network- based services architecture.
  • Clients 202 may convey network-based services requests to network-based virtual computing service 200 via network 260.
  • network 260 may encompass any suitable combination of networking hardware and protocols necessary to establish network- based communications between clients 202 and provider network 200.
  • a network 260 may generally encompass the various telecommunications networks and service providers that collectively implement the Internet.
  • a network 260 may also include private networks such as local area networks (LANs) or wide area networks (WANs) as well as public or private wireless networks.
  • both a given client 202 and network-based virtual computing service 200 may be respectively provisioned within enterprises having their own internal networks.
  • a network 260 may include the hardware (e.g., modems, routers, switches, load balancers, proxy servers, etc.) and software (e.g., protocol stacks, accounting software, firewall/security software, etc.) necessary to establish a networking link between given client 202 and the Internet as well as between the Internet and provider network 200. It is noted that in some embodiments, clients 202 may communicate with provider network 200 using a private network rather than the public Internet.
  • FIG. 3 is a block diagram illustrating a virtualization host that implements dynamic virtual resource request rate controls for physical resources, according to some embodiments.
  • virtualization hosts may serve as a host platform for one or more virtual compute instances. These virtual compute instances may utilize virtualized hardware interfaces to perform various tasks, functions, services and/or applications. As part of performing these tasks, virtual compute instances may utilize virtualized computer resources (e.g., virtual central processing unit(s) (vCPU(s)) which may act as the virtual proxy for the physical CPU(s)) implemented at the virtualization host in order to perform work on respective physical computer resources for the respective compute instance.
  • FIG. 3 illustrates virtualization host 310.
  • Virtualization host 310 may host compute instances 330a, 330b, 330c, through 330n.
  • the compute instances 330 may be the same type of compute instance.
  • compute instances 330 are compute instances that implement resource credits for scheduling virtual computer resources.
  • Virtualization host 310 may also implement virtualization management module 320, which may handle the various interfaces between the virtual compute instances 330 and physical computing resource(s) 340 (e.g., various hardware components, processors, I/O devices, networking devices, etc.).
  • virtualization management module 320 may implement resource credit balance scheduler 324.
  • Resource credit balance scheduler 324 may act as a meta-scheduler, managing, tracking, applying, deducting, and/or otherwise handling all resource credit balances for each of compute instances 330.
  • resource credit balance scheduler 324 may be configured to receive virtual compute resource work requests 332 from compute instances. Each work request 332 may be directed toward the virtual computer resource corresponding to the compute instance that submitted the work. For each request 332, resource credit balance scheduler 324 may be configured to determine a current resource credit balance for the requesting compute instance 330, and generate scheduling instructions to apply resource credits when performing the work request.
  • resource credit balance scheduler 324 may perform or direct the performance of the scheduling instructions, directing or sending the work request to the underlying physical computing resources 340 to be performed. For example, in some embodiments different hardware queues may be implemented and resource credit balance scheduler 324 may be used to place tasks for performing work requests in the queues according to the applied resource credits, such as described below with regard to FIG. 4. However, in some embodiments the resource scheduling instructions may be sent 334 to virtual compute resource scheduler 322, which may be a scheduler for the physical resources 340, such as CPU(s), implemented at virtualization host 310. Resource credit balance scheduler 324 and/or virtual compute resource scheduler 322 may be configured to perform the various techniques described below with regard to FIGS. 5 - 6, in order to provide dynamic resource rate controls to schedule/submit work requests for instances 330, apply resource credits, deduct resource credits, and/or otherwise ensure that work requests are performed according to the applied resource credits.
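  • A highly simplified sketch of the flow just described for resource credit balance scheduler 324 follows. The class, method, and parameter names (including the work request's estimated credit cost) are assumptions for illustration; a real implementation would also cover hardware queues, baseline guarantees, and error handling.

```python
class ResourceCreditBalanceScheduler:
    """Receives work requests from compute instances, applies available resource
    credits, and forwards scheduling instructions to the resource scheduler."""

    def __init__(self, resource_scheduler):
        self.resource_scheduler = resource_scheduler   # e.g., a CPU scheduler (cf. 322)
        self.balances = {}                             # instance_id -> resource credit balance

    def handle_work_request(self, instance_id, work_request, estimated_credits):
        """Determine the current balance, apply credits, and schedule the work."""
        balance = self.balances.get(instance_id, 0)
        applied = min(balance, estimated_credits)
        self.balances[instance_id] = balance - applied  # deduct the applied credits
        instructions = {
            "instance": instance_id,
            "request": work_request,
            "applied_credits": applied,   # credits buy full utilization for their duration
        }
        # Send the scheduling instructions on to the physical resource scheduler (cf. 334).
        self.resource_scheduler.schedule(instructions)
```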
  • virtual compute resource scheduler 322 may provide physical scheduling instructions for work requests 336 to physical computing resources, such as physical CPU(s), in various embodiments.
  • virtual compute resource scheduler 322 may be a credit-based scheduler for one or more CPUs.
  • Resource credit balance scheduler 324 may also report credit balance and usage metrics 362 to monitoring agent 326, which may in turn report these metrics along with any other host metrics 364 (health information, etc.) to resource monitoring module 218.
  • FIG. 4 is a block diagram illustrating a resource credit balance scheduler that implements dynamic virtual resource request rate controls, according to some embodiments.
  • resource credit balance scheduler 324 may implement dynamic virtual resource request rate controls for physical resources. Utilization of multiple different physical computer resources may be provided to compute instances 330.
  • resource A 400, resource B 410, and resource C 420 may represent processing, networking and/or storage physical computer resources (or any other physical computer resource). Different resource request queues may be implemented from which work requests may be pulled and performed.
  • Resource A request queue 409 provides work requests to resource A 400.
  • resource B request queue 419 and resource C request queue 429 provide requests to be pulled and performed for resource B 410 and resource C 420 respectively.
  • a resource request queue may be implemented as part of another scheduling component.
  • resource A 400 may be processing resources
  • resource A request queue 409 may be implemented as part of a CPU scheduler that pulls requests from the request queue 409 for processing (similar to virtual compute resource scheduler 322 in FIG. 3).
  • dynamic request rate controls 407 may be implemented for work requests for resource A 400
  • dynamic request rate controls 417 may be implemented for work requests for resource B 410
  • dynamic request rate controls 427 may be implemented for work requests for resource C 420.
  • Dynamic resource request controls may be configured to dynamically determine a delay to be imposed before placing a next resource request into a resource request queue for the physical computer resource.
  • FIGS. 5 and 6, described below, provide various example methods and techniques that dynamic rate controls may implement.
  • a resource credit balance 403 of a particular instance (e.g., 330c) for resource A may be obtained to determine an initial delay between work requests (using the number of resource credits in the credit balance to identify an allotted utilization for instance 330c).
  • the workload for resource A may also be obtained 405 and provided to the dynamic rate control.
  • a probability for adding delay may be calculated using the workload for resource A, and depending on the probability calculated, a delay may be randomly added (or not added) to the initial delay.
  • the delay may then be imposed before the dynamic request rate control places another work request from the individual virtual resource A request queue 401 for instance 330c into resource A request queue 409. Similar techniques may be applied by dynamic request rate controls 417 and 427 for resources B 410 and C 420.
  • a delay may be determined before placing new requests from individual virtual resource B request queues 411 and individual resource C request queues 421 utilizing resource B credit balances 413 and resource C credit balances 423 respectively.
  • Resource B queue workload 415 and resource C queue workload 425 may also be used to dynamically determine the delay. Delays for individual virtual resource requests queues may be performed contemporaneously, in various embodiments.
  • dynamic request rate controls 407 may individually determine delays for the respective individual virtual resource A request queues 401 from which they pull work requests.
  • FIG. 5 is a high-level flowchart illustrating various methods and techniques for implementing dynamic virtual resource request rate control for physical resources, according to some embodiments. These techniques may be implemented using various components of network-based virtual computing service as described above with regard to FIGS. 2 - 4 or other virtual computing resource hosts.
  • a work request for a virtual computer resource may be placed from an individual resource request queue maintained for a virtual compute instance into a physical resource request queue, in various embodiments.
  • a delay may be dynamically determined based, at least in part, on a workload of the physical request queue, as indicated at 520. For example, if the number of work requests in the physical resource request queue is high, then a greater probability exists that a random delay may be imposed. The delay may also be determined so as to maintain a particular utilization of the physical computer resource for the virtual compute instance.
  • the delay may be determined such that 500 I/O work requests may be placed into the physical resource request queue between delays.
  • FIG. 6, discussed below, provides further examples of dynamically determining a delay. After the delay is imposed, as indicated by the positive exit from 530, a next work request from the individual virtual resource request queue may be placed into the physical resource request queue, as indicated at 540.
  • the techniques described above with regard to FIG. 5 may be implemented across multiple individual virtual resource request queues for different virtual compute instances submitting work requests to utilize the same physical computer resource.
  • the various delays determined between requests may be different for some or all of the different individual virtual resource request queues.
  • another work request may be submitted for another individual virtual resource queue, in some embodiments.
  • Random delays may be added to requests from different individual virtual resource request queues at different times. Delays based on allotted utilization for the physical computer resource (e.g., based on a resource credit balance) may create different delay times as well.
  • the aggregate effect provided by inserting dynamic delays between placing work requests at each individual virtual resource request queue may be to provide a consistent throughput for performing the work requests at the physical computer resource.
  • work requests submitted by a virtual compute instance that is latency sensitive may be provided with a consistent amount of time or latency to perform the work request.
  • delays may be determined to ensure that work requests submitted by a virtual compute instance that is throughput sensitive are performed according to an expected amount of throughput.
  • FIG. 6 is a high-level flowchart illustrating various methods and techniques for determining a delay for a dynamic resource rate control, according to some embodiments.
  • an initial delay may be identified for an individual resource request queue of a virtual compute instance based, at least in part, on a determined utilization of a physical computer resource for the virtual compute instance. For instance, in some embodiments utilization of a physical computer resource may be evenly divided between the virtual compute instances implemented on a virtualization host. If, for example, 4 compute instances are implemented on a host and processing resources of the host are evenly divided, then each virtual compute instance may expect a 25% utilization of the processing resources. In some embodiments, the determined utilization between virtual compute instances may be different and/or change over time.
  • resource credits may be accrued and applied for a virtual compute instance to utilize a physical computer resource.
  • a determined utilization of a physical computer resource for the virtual compute instance may be, in such embodiments, based on a resource credit balance for the virtual compute instance for the physical computer resource.
  • the initial delay may be determined so as to ensure that the allotted utilization is not exceeded by a virtual compute instance. If, for instance, a virtual compute instance has a determined utilization for network traffic of 2000 packets per second, then the initial delay, if inserted between submissions of traffic requests over a second, would limit the number of requests to a maximum of 2000 packets.
  • a workload of the physical resource request queue for the physical computer resource may be determined, in various embodiments.
  • Workload metrics for a physical resource request queue may be tracked indicating the number of requests in the queue at a point in time, for example.
  • the workload requests metrics may be smoothed to determine a workload. For instance, a weighted average may be taken of the workload metrics.
  • the same workload may be used for determining multiple different delays. For example, the workload for determining a first delay may be 100 requests, and the same workload of 100 requests may be used again to determine a subsequent delay.
  • a probability for adding a random delay may be calculated based, at least in part, on the workload of the physical resource request queue. For example, a probability calculation such as may be determined when applying a Random Early Detection (RED) technique may be used to calculate the probability for the delay. Various different random number generation techniques, such as a uniform random variable technique or a geometric random variable technique, may be applied as part of calculating the probability. In general, the calculated probability may be proportional to the offered load divided by the available throughput at the physical resource request queue. As the workload of the physical resource request queue increases, the probability or likelihood that a delay may be randomly added increases.
  • whether a random delay is added to the initial delay is determined according to the calculated probability, at 630. If, for example, the probability indicates that a random delay may be added for 1 in every 10 work requests submitted, then the initial delay has a 1 in 10 chance of being increased with an additional delay.
  • the amount of time added in the random delay may be a default amount of time, or may be determined to achieve a particular throughput or workload at the physical resource request queue, in some embodiments. Thus, as indicated at 650 and 660, the random delay will either be added to the initial delay or not, according to the determined probability.
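  • Putting the steps of FIG. 6 together, a hedged sketch of the delay determination might look like the following. The smoothing weight, the queue capacity used to scale the probability, and the maximum random delay are placeholder values; the disclosure does not fix them.

```python
import random

def smoothed_workload(queue_depth_samples, weight=0.2):
    """One form of smoothing: an exponentially weighted average of the tracked
    queue-depth metrics for the physical resource request queue."""
    workload = 0.0
    for sample in queue_depth_samples:
        workload = (1.0 - weight) * workload + weight * sample
    return workload

def determine_delay(determined_utilization, workload,
                    queue_capacity=1000, max_random_delay=0.001):
    """determined_utilization is the allotted rate for the instance (e.g., derived
    from its resource credit balance); 2000 packets per second yields an initial
    delay that limits submissions to at most 2000 requests per second."""
    initial_delay = 1.0 / determined_utilization
    # RED-style probability, proportional to offered load vs. available throughput.
    probability = min(1.0, workload / queue_capacity)
    if random.random() < probability:
        return initial_delay + random.uniform(0.0, max_random_delay)
    return initial_delay
```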
  • a provider network may also implement resource credit pools for replenishing individual resource credit balances of virtual compute instances, according to some embodiments.
  • Different clients implementing virtual computing resources have different resource demands. For example, some clients' workloads are not predictable and may not utilize fixed resources efficiently.
  • Virtual compute instances implementing resource credits for scheduling virtual computing resources may provide dynamic utilization of resources creating flexible high performance, without wasting unutilized fixed resources. Resource credits may be accumulated for individual virtual compute instances and maintained as part of an individual resource credit balance. When a virtual compute instance needs to perform work at high performance, the resource credits may be applied to the work, effectively providing full utilization of underlying physical resources for the duration of the resource credits.
  • Resources may, in various embodiments, be any virtualized computer resource that is implemented or performed by a managed physical computer resource, including, but not limited to, processing resources, communication or networking resources, and storage resources.
  • while scheduling utilization of physical computer resources according to individual resource credit balances may allow individual virtual compute instances to handle some bursts or large changes in instance workloads, the workload that may be directed to any one particular instance may be difficult to predict. If, for instance, a group of instances is used to provide some kind of service for which different instances may randomly experience burst workloads, the overall workload of many instances may be relatively low.
  • a resource credit pool may be implemented to provide additional resource credits to one or more instances in a group of virtual compute instances.
  • the aggregate workload for a large group of instances may be more easily determined (based on various statistical techniques).
  • the resource credit pool may be filled with sufficient resource credits to process the aggregate workload in a more cost-effective manner.
  • FIG. 7 is a block diagram illustrating a resource credit pool for replenishing resource credit balances for virtual compute instances, according to some embodiments.
  • Provider network 700 may be a distributed system or service that provides virtual compute instances 720a, 720b, 720c, through 720n for use by clients of provider network 700.
  • Each of these virtual compute instances 720 may be implemented on a virtualization host, which as described above with regard to FIGS. 2 and 3, may provide a platform for executing a virtual compute instance.
  • Physical computer resources implemented as part of a virtualization host may be shared among multiple virtual compute instances implemented on the same host.
  • Credit-based scheduling may be implemented to determine the utilization of physical computer resources to perform work requests for the compute instances hosted thereon according to individual resource credit balances, such as balances 722a, 722b, 722c through 722n.
  • an individual balance of processing resource credits for a virtual compute instance may be applied to determine the utilization of a processing resource (e.g., central processing unit (CPU)) for the virtual compute instance.
  • virtual compute instances may be provisioned with an initial individual resource credit balance (e.g., 30 credits) which may be used immediately. Over time, the compute instance may accumulate more resource credits according to a fixed rate.
  • a limit may be implemented for accumulating resource credits according to the instance refill rate. This limit may be enforced by excluding accumulated resource credits after a period of time (e.g., 24 hours).
  • a resource credit may provide full utilization of a resource for a particular time (e.g., a computer resource credit may equal 1 minute of full central processing unit (CPU) utilization, 30 seconds for a particular networking channel, or some other period of use that may be guaranteed), in some embodiments. Resource credits may be deducted from the resource credit balance when used.
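  • As one concrete (and purely illustrative) reading of the accumulation rules above, the sketch below assumes an initial balance of 30 credits, a refill rate of 6 credits per hour, credits that stop counting toward the balance 24 hours after they accrue, and one credit equal to one minute of full utilization; the refill figure is borrowed from the earlier instance-type example, and the other numbers follow the examples in the surrounding bullets.

```python
import time

class ResourceCreditBalance:
    """Tracks resource credits for one virtual compute instance and one resource."""

    def __init__(self, initial_credits=30, refill_per_hour=6, expiry_hours=24):
        # Each entry records when a batch of credits accrued and how many remain.
        self.entries = [(time.time(), float(initial_credits))]
        self.refill_per_hour = refill_per_hour
        self.expiry_hours = expiry_hours

    def accrue(self, hours):
        """Accumulate credits at the fixed refill rate for the elapsed hours."""
        self.entries.append((time.time(), self.refill_per_hour * hours))

    def balance(self):
        """Credits older than the expiry window no longer count toward the balance."""
        cutoff = time.time() - self.expiry_hours * 3600
        return sum(credits for accrued_at, credits in self.entries if accrued_at >= cutoff)

    def deduct(self, credits_used):
        """Deduct used credits (each ~ one minute of full utilization), oldest first."""
        remaining = credits_used
        for i, (accrued_at, credits) in enumerate(self.entries):
            spend = min(credits, remaining)
            self.entries[i] = (accrued_at, credits - spend)
            remaining -= spend
            if remaining <= 0:
                break
```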
  • a virtual compute instance may utilize sufficient resources (e.g., CPU cores, network interface card functions, etc.) to obtain high performance when needed.
  • the individual resource credit balance may be insufficient to complete the work requests at a high performance level. For example, if no resource credits are available when performing a work request, a baseline utilization guarantee may still be applied to perform the work request.
  • a provider network may implement a resource credit pool 710, which may replenish resource credits 712 to individual resource credit balances 722. For example, resource credit requests may be made to the resource credit pool 710 to obtain additional resource credits when it may be determined that additional resource credits are needed to complete one or more work requests for a virtual compute instance.
  • the utilization of underlying physical resources when credits are applied may trigger migration events for some virtualization hosts (as described below with regard to FIGS. 11A, 11B, and 13), which may migrate virtual compute instances from one virtualization host to another in order to provide capacity to apply the additional resource credits for the virtual compute instance's work requests.
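  • A rough sketch of such a migration check follows; the host records and the capacity bookkeeping are assumptions for illustration only, since the disclosure states only that a migration event may be triggered to provide capacity for applying the additional credits.

```python
def select_migration_target(current_host, additional_credits, candidate_hosts):
    """Return a host that can absorb the extra utilization, or None if the
    current host already can (or no candidate has room)."""
    if current_host["committed"] + additional_credits <= current_host["capacity"]:
        return None   # current host can absorb the burst; no migration event needed
    for host in candidate_hosts:
        if host["committed"] + additional_credits <= host["capacity"]:
            return host["host_id"]   # migrate the instance to this host
    return None       # no capacity found; credits may have to be applied more gradually
```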
  • Different resource credit pools 710 may correspond to different types of physical computer resources.
  • virtual compute instances may be authorized to access multiple different resource credit pools corresponding to different physical computer resources.
  • Resource credit pools may also be linked to a single user or payment account from which funds may be drawn to obtain additional resource credit(s) 702 to replenish the resource credit pool.
  • Different replenishment policies for resource credit pool 710 may be implemented, providing automated or manually requested replenishment.
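  • A rough sketch of how a shared pool might top up individual balances is shown below; the class and method names are assumptions, and replenishment of the pool itself could be automated or manually requested, per the preceding bullet.

```python
class ResourceCreditPool:
    """Shared pool of resource credits for a group of virtual compute instances
    (cf. resource credit pool 710)."""

    def __init__(self, initial_credits):
        self.credits = initial_credits

    def request_credits(self, amount):
        """Grant up to `amount` credits to replenish an individual instance balance."""
        granted = min(amount, self.credits)
        self.credits -= granted
        return granted

    def replenish(self, amount):
        """Add credits obtained for the pool (cf. additional resource credits 702)."""
        self.credits += amount

# Example: an instance needing 10 more credits to finish its work requests asks the
# pool and applies whatever was granted to its own resource credit balance.
pool = ResourceCreditPool(initial_credits=500)
instance_balance = 2
instance_balance += pool.request_credits(10)
```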
  • This specification next includes a general description of a provider network, which may implement resource credit pools for replenishing individual resource credit balances of virtual compute instances. Then various examples of a provider network are discussed, including different components/modules, or arrangements of components/modules that may be employed as part of the provider network. A number of different methods and techniques to implement a resource credit pool for replenishing individual resource credit balances are then discussed, some of which are illustrated in accompanying flowcharts. Various examples are provided throughout the specification.
  • FIG. 8 is a block diagram illustrating a provider network that provides resource credit pools for replenishing individual resource credit balances of virtual compute instances, according to some embodiments.
  • Provider network 800 may be a provider network like provider network 200 discussed above with regard to FIG. 2, and may be set up by an entity such as a company or a public sector organization to provide one or more services (such as various types of cloud-based computing or storage) accessible via the Internet and/or other networks to clients 802.
  • Provider network 800 may include numerous data centers hosting various resource pools, such as collections of physical and/or virtualized computer servers, storage devices, networking equipment and the like, needed to implement and distribute the infrastructure and services offered by the provider network 800.
• provider network 800 may provide computing resources. These computing resources may in some embodiments be offered to clients in units called "instances" 834, such as virtual compute instances.
  • a provider network may, in some embodiments be a network-based service providing virtual compute instances.
  • provider network 800 may implement a control plane 810 in order to manage the computing resource offerings provided to clients 802 by provider network 800.
  • Control plane 810 may implement various different components to manage the computing resource offerings.
  • Control plane 810 may be implemented across a variety of servers, nodes, or other computing systems or devices (such as computing system 2000 described below with regard to FIG. 19). It is noted that where one or more instances of a given component may exist, reference to that component herein may be made in either the singular or the plural. However, usage of either form is not intended to preclude the other.
  • control plane 810 may implement interface 812.
  • Interface 812 may be configured to process incoming requests received via network 860 and direct them to the appropriate component for further processing.
  • interface 812 may be a network-based interface and may be implemented as a graphical interface (e.g., as part of an administration control panel or web site) and/or as a programmatic interface (e.g., handling various Application Programming Interface (API) commands).
  • interface 812 may be implemented as part of a front end module or component dispatching requests to the various other components, such as resource management 814, reservation management 816, resource credit pool management 818, and resource monitoring 820.
• Clients 802, in various embodiments, may not directly provision, launch or configure resources but may send requests to control plane 810 such that the illustrated components (or other components, functions or services not illustrated) may perform the requested actions.
• FIG. 4, discussed below, provides various examples of requests that may be processed via interface 812.
  • Control plane 810 may implement resource management module 814 to manage the access to, capacity of, mappings to, and other control or direction of computing resources offered by provider network.
• resource management module 814 may provide both a direct sell and 3rd party resell market for capacity reservations (e.g., reserved compute instances).
  • resource management module 814 may allow clients 802 via interface 812 to learn about, select, purchase access to, and/or reserve capacity for computing resources, either from an initial sale marketplace or a resale marketplace, via a web page or via an API.
• resource management component may, via interface 812, provide a listing of different available compute instance types, each with a different credit accumulation rate.
• resource management module 814 may be configured to offer credits for purchase (in addition to credits provided via the credit accumulation rate for an instance type) for a specified purchase amount or scheme (e.g., lump sum, additional periodic payments, etc.). For example, resource management module 814 may be configured to receive a credit purchase request (e.g., an API request) and credit the resource credit pool with the purchased credits. Similarly, resource management module 814 may be configured to handle a request to reconfigure an instance, such as increasing a credit accumulation rate for a particular instance. Resource management 814 may also offer and/or implement a flexible set of resource reservation, control and access interfaces for clients 802 via interface 812.
  • resource management module 814 may provide credentials or permissions to clients 802 such that compute instance control operations/interactions between clients and in-use computing resources may be performed.
  • resource management modules may be configured to perform various migrations of virtual compute instances from one virtualization host to another in response to detecting migration events (as discussed below with regard to FIGS. 11A, 11B, and 13).
  • reservation management module 816 may be configured to handle the various pricing schemes of instances 834 (at least for the initial sale marketplace) in various embodiments.
• network-based virtual computing service 800 may support several different purchasing modes (which may also be referred to herein as reservation modes) in some embodiments: for example, term reservations (i.e., reserved compute instances), on-demand resource allocation, or spot-price-based resource allocation.
  • a client may make a low, one-time, upfront payment for a compute instance or other computing resource, reserve it for a specified duration such as a one or three year term, and pay a low hourly rate for the instance; the client would be assured of having the reserved instance available for the term of the reservation.
• in on-demand mode, a client could pay for capacity by the hour (or some appropriate time unit), without any long-term commitments or upfront payments.
• in spot-price mode, a client could specify the maximum price per unit time that it is willing to pay for a particular type of compute instance or other computing resource, and if the client's maximum price exceeded a dynamic spot price determined at least in part by supply and demand, that type of resource would be provided to the client.
  • the spot price may become significantly lower than the price for on-demand mode.
  • a resource allocation may be interrupted - i.e., a resource instance that was previously allocated to the client may be reclaimed by the resource management module 816 and may be allocated to some other client that is willing to pay a higher price.
  • Resource capacity reservations may also update control plane data store 822 to reflect changes in ownership, client use, client accounts, or other resource information.
  • control plane 810 may implement resource credit pool management 818.
  • Resource credit pool management 818 may, in various embodiments, be configured to manage and handle requests to create, configure, add instances or remove instances, or any other management operation as part of providing resource credit pools.
  • Resource credit pool management 818 may store resource credit pool balances, authorized instances, or any other information in control plane data store 822.
  • Resource credit pool management 818 may, in various embodiments, handle resource credit requests, determine the number of resource credits to provide, send responses to add credits or deny the resource request, and update the resource credit pool based on replenishment actions to individual resource credit balances or acquisitions of new resource credits for the resource credit pool.
  • Resource credit pool management 818 may request resource migrations from resource management module 814 and perform evaluations of virtualization hosts to detect migration events.
  • control plane 810 may implement resource monitoring module 820.
• Resource monitoring module 820 may track the consumption of various computing instances (e.g., resource credit balances, resource credit consumption) for different virtual computer resources, clients, user accounts, and/or specific instances.
  • resource monitoring module 820 may implement various administrative actions to stop, heal, manage, or otherwise respond to various different scenarios in the fleet of virtualization hosts 830 and instances 834.
  • Resource monitoring module 820 may also provide access to various metric data for client(s) 802 as well as manage client configured alarms. Information collected by monitoring module 820 may be used to detect migration events for virtualization hosts, in some embodiments.
  • control plane 810 may implement a billing management module (not illustrated).
• the billing management module may be configured to detect billing events (e.g., specific dates, times, usages, requests for a bill, or any other cause to generate a bill for a particular user account or payment account linked to user accounts). In response to detecting the billing event, the billing management module may be configured to generate a bill for a user account or payment account linked to user accounts.
  • a virtual compute instance 834 may, for example, comprise one or more servers with a specified computational capacity (which may be specified by indicating the type and number of CPUs, the main memory size, and so on) and a specified software stack (e.g., a particular version of an operating system, which may in turn run on top of a hypervisor).
  • a number of different types of computing devices may be used singly or in combination to implement the compute instances 834 of provider network 800 in different embodiments, including general purpose or special purpose computer servers, storage devices, network devices and the like.
• instance clients 802 or any other user may be configured (and/or authorized) to direct network traffic to a compute instance 834.
• Compute instances 834 may operate or implement a variety of different platforms, such as application server instances, Java™ virtual machines (JVMs), general purpose or special-purpose operating systems, platforms that support various interpreted or compiled programming languages such as Ruby, Perl, Python, C, C++ and the like, or high-performance computing platforms suitable for performing client 802 applications, without for example requiring the client 802 to access an instance 834.
  • This type of instance may perform based on resource credits, where resource credits represent time an instance can spend on a physical resource doing work (e.g., processing time on a physical CPU, time utilizing a network communication channel, etc.).
  • Resource credits may be provided at launch of an instance, and may be defined as utilization time (e.g., CPU time, such as CPU-minutes), which may represent the time an instance's virtual resources can spend on underlying physical resources performing a task.
  • resource credits may represent time or utilization of resources in excess of a baseline utilization guarantee.
  • a compute instance may have a baseline utilization guarantee of 10% for a resource, and thus resource credits may increase the utilization for the resource above 10%. Even if no resource credits remain, utilization may still be granted to the compute instance at the 10% baseline. Credit consumption may only happen when the instance needs the physical resources to perform the work above the baseline performance. In some embodiments credits may be refreshed or accumulated to the resource credit balance whether or not a compute instance submits work requests that consume the baseline utilization guarantee of the resource.
• Different types of compute instances may be offered. Different compute instances may have a particular number of virtual CPU cores, memory, cache, storage, networking, as well as any other performance characteristic. Configurations of compute instances may also include their location, in a particular data center, availability zone, geographic location, etc., and (in the case of reserved compute instances) reservation term length. Different compute instances may have different resource credit accumulation rates for different virtual resources, which may be a number of resource credits that accumulate to the current balance of resource credits maintained for a compute instance. For example, one type of compute instance may accumulate 6 credits per hour for one virtual computer resource, while another type of compute instance may accumulate 24 credits per hour for the same type of computer resource, in some embodiments.
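• As a rough illustration of the accumulation rates described above, the following sketch refills a balance at a per-instance-type rate and caps it at an accumulation limit (echoing the earlier note that accumulated credits may expire after, e.g., 24 hours). The rates, cap, and names are illustrative assumptions only.

    # Hypothetical credit accumulation for two illustrative instance types.
    ACCUMULATION_RATE = {"type_a": 6, "type_b": 24}   # credits per hour (made-up rates)
    MAX_ACCUMULATION_HOURS = 24                       # cap: keep at most 24 hours of refills

    def accumulate(balance, instance_type, hours=1):
        rate = ACCUMULATION_RATE[instance_type]
        cap = rate * MAX_ACCUMULATION_HOURS
        # Add the refill for the elapsed hours, but never exceed the accumulation limit.
        return min(balance + rate * hours, cap)

    balance = 0
    for hour in range(30):
        balance = accumulate(balance, "type_a")
    print(balance)   # capped at 6 * 24 = 144 credits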
  • the resource credit accumulation rate for one resource may be different than the resource credit accumulation rate for a different computer resource (e.g., networking channel) for the same virtual compute instance.
  • multiple different resource credit balances may be maintained for a virtual compute instance for the multiple different physical resources used by the virtual compute instances.
  • a baseline performance guarantee may also be implemented for each of the computer resources, which may be different for each virtual computer resource, as well as for the different instance types.
  • Baseline performance guarantees may be included along with the resource credit accumulation rates, in some embodiments.
  • an instance type may include a specific resource credit accumulation rate and guaranteed baseline performance for processing, and another specific resource credit accumulation rate and guaranteed baseline performance rate for networking channels.
  • provider network 800 may offer many different types of instances with different combinations of resource credit accumulation rates and baseline guarantees for different virtual computer resources. These different configurations may be priced differently, according to the resource credit accumulation rates and baseline performance rates, in addition to the various physical and/or virtual capabilities.
• a virtual compute instance may be reserved and/or utilized for an hourly price, while a long-term reserved instance configuration may utilize a different pricing scheme, but still include the credit accumulation rates and baseline performance guarantees.
  • a virtualization host 830 may implement and/or manage multiple compute instances 834, in some embodiments, and may be one or more computing devices, such as computing system 2000 described below with regard to FIG. 19.
  • a virtualization host 830 may include a virtualization management module 832, such as virtualization management modules 832a, 832b through 832n, capable of instantiating and managing a number of different client-accessible virtual machines or compute instances 834.
• the virtualization management module 832 may include, for example, a hypervisor and an administrative instance of an operating system, which may be termed a "domain-zero" or "dom0" operating system in some implementations.
• the dom0 operating system may not be accessible by clients on whose behalf the compute instances 834 run, but may instead be responsible for various administrative or control-plane operations of the network provider, including handling the network traffic directed to or from the compute instances 834.
  • Client(s) 802 may encompass any type of client configurable to submit requests to network-based virtual computing service 800.
  • a given client 802 may include a suitable version of a web browser, or may include a plug-in module or other type of code module configured to execute as an extension to or within an execution environment provided by a web browser.
  • a client 802 may encompass an application such as a dashboard application (or user interface thereof), a media application, an office application or any other application that may make use of compute instances 834 to perform various operations.
  • such an application may include sufficient protocol support (e.g., for a suitable version of Hypertext Transfer Protocol (HTTP)) for generating and processing network-based services requests without necessarily implementing full browser support for all types of network- based data.
  • clients 802 may be configured to generate network-based services requests according to a Representational State Transfer (REST)-style network-based services architecture, a document- or message-based network-based services architecture, or another suitable network-based services architecture.
  • Clients 802 may convey network-based services requests to network-based virtual computing service 800 via network 860.
• network 860 may encompass any suitable combination of networking hardware and protocols necessary to establish network-based communications between clients 802 and network-based virtual computing service 800.
  • a network 860 may generally encompass the various telecommunications networks and service providers that collectively implement the Internet.
  • a network 860 may also include private networks such as local area networks (LANs) or wide area networks (WANs) as well as public or private wireless networks.
  • both a given client 802 and network-based virtual computing service 800 may be respectively provisioned within enterprises having their own internal networks.
  • a network 860 may include the hardware (e.g., modems, routers, switches, load balancers, proxy servers, etc.) and software (e.g., protocol stacks, accounting software, firewall/security software, etc.) necessary to establish a networking link between given client 802 and the Internet as well as between the Internet and network-based virtual computing service 800. It is noted that in some embodiments, clients 802 may communicate with network-based virtual computing service 800 using a private network rather than the public Internet.
  • FIG. 9 is a block diagram illustrating a virtualization host that implements resource credits for scheduling virtual computer resources, according to some embodiments.
• virtualization hosts may serve as a host platform for one or more virtual compute instances. These virtual compute instances may utilize virtualized hardware interfaces to perform various tasks, functions, services and/or applications. As part of performing these tasks, virtual compute instances may utilize physical computer resources via a virtual proxy (e.g., virtual central processing unit(s) (vCPU(s)) which may act as the virtual proxy for the physical CPU(s)) implemented at the virtualization host 910 in order to perform work on respective physical computer resources for the respective compute instance.
  • the compute instances may be operated for different clients of a provider network such that the virtualization host is multi-tenant.
  • FIG. 9 illustrates virtualization host 910.
  • Virtualization host 910 may host compute instances 930a, 930b, 930c, through 930n.
  • the compute instances 930 may be the same type of compute instance.
  • compute instances 930 are compute instances that implement rolling resource credits for scheduling virtual computer resources.
  • Virtualization host 910 may also implement virtualization management module 920, which may handle the various interfaces between the virtual compute instances 930 and physical computing resource(s) 940 (e.g., various hardware components, processors, I/O devices, networking devices, etc.).
  • virtualization management module 920 may implement resource credit balance scheduler 924.
  • Resource credit balance scheduler 924 may act as a meta-scheduler, managing, tracking, applying, deducting, and/or otherwise handling all individual resource credit balances for each of compute instances 930 for the different respective physical resources 940.
• resource credit balance scheduler 924 may be configured to receive virtual resource work requests 932 from compute instances. Each work request 932 may be directed toward the virtual computer resource corresponding to the compute instance that submitted the work. For each request 932, resource credit balance scheduler may be configured to determine a current resource credit balance for the requesting compute instance 930, and generate scheduling instructions to apply resource credits when performing the work request.
  • resource credit balance scheduler 924 may perform or direct the performance of the scheduling instructions, directing or sending the work request to the underlying physical computing resources 940 to be performed (as illustrated by the arrow between 924 and 940). For example, in some embodiments different hardware queues may be implemented and resource credit balance scheduler 924 may be used to place tasks for performing work requests in the queues according to the applied resource credits (e.g., queuing tasks according to the amount of time of applied resource credits). However, in some embodiments the resource scheduling instructions may be sent 934 to virtual compute resource scheduler 922, which may be a scheduler for the physical resources 940, such as CPU(s), implemented at virtualization host 910.
  • virtual compute resource scheduler 922 may provide physical scheduling instructions for work requests 936 to physical computing resources, such as physical CPU(s), in various embodiments.
  • virtual compute resource scheduler 922 may be a credit-based scheduler for one or more CPUs.
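• A minimal sketch, under assumed details, of a meta-scheduler in the spirit of resource credit balance scheduler 924: it deducts applied credits from the requesting instance's balance and queues the task with the amount of resource time those credits purchase. The credit-to-time conversion, queue ordering, and class names are assumptions for illustration, not the scheduler specified above.

    import heapq

    # Hypothetical meta-scheduler: tasks carry the resource time purchased by applied credits.
    CREDIT_SECONDS = 60   # assumption: one credit buys 60 seconds of full utilization

    class CreditScheduler:
        def __init__(self, balances):
            self.balances = balances          # instance_id -> available resource credits
            self.queue = []                   # heap of (-seconds, seq, instance_id, seconds)
            self.seq = 0

        def submit(self, instance_id, credits_requested):
            # Apply no more credits than the instance's individual balance holds.
            credits = min(credits_requested, self.balances.get(instance_id, 0))
            self.balances[instance_id] = self.balances.get(instance_id, 0) - credits
            seconds = credits * CREDIT_SECONDS
            # Toy policy: tasks with more purchased time are dispatched first.
            heapq.heappush(self.queue, (-seconds, self.seq, instance_id, seconds))
            self.seq += 1

        def dispatch(self):
            _, _, instance_id, seconds = heapq.heappop(self.queue)
            return instance_id, seconds

    sched = CreditScheduler({"instance-a": 4, "instance-b": 1})
    sched.submit("instance-a", 2)
    sched.submit("instance-b", 2)
    print(sched.dispatch())   # ('instance-a', 120): 2 credits applied, 120 seconds of resource time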
• Resource credit balance scheduler 924 may also report credit balance and usage metrics as part of performance metrics 972, along with any other host metrics (health information, etc.), to resource monitoring module 820.
  • the individual resource credit balances may be insufficient to complete work requests 932.
  • credit requests 962 may be sent via credit pool agent 926 (which handles communications between virtualization host 910 and resource credit manager 818) to request 964 a number of resource credits from a particular resource credit pool.
  • Resource credit manager 818 may send a response authorizing additional resource credits 966 to credit pool agent 926 which in turn may inform the scheduler 924 of the additional resource credits 968.
  • scheduling instructions (which may apply the additionally granted credits to an individual resource account according to a schedule or in response to events such as the completion of a migration) for applying additional resource credits 968 may be enforced.
  • FIG. 10 illustrates interactions between a client and a provider network that implements resource credit pools for replenishing instance resource credit balances, according to some embodiments.
  • Client 1000 (similar to client(s) 802 in FIG. 8) may interact with control plane 810 via interface 812.
  • interface 812 may be implemented as a graphical user interface (e.g., at a network-based site) or programmatically (e.g., an API).
  • Client 1000 may submit a request to create a resource credit pool 1010 to control plane 810.
  • Creation request 1010 may indicate the type of physical computer resource for the resource credit pool to maintain resource credits.
  • the resource credit pool creation request may also include a replenishment policy (e.g., on-demand, periodic refill, manual refill). Replenishment policies for individual resource credit balances may also be included.
  • a separate request, 1020, to configure these replenishment policies or change these replenishment policies may also be sent.
  • the creation request may also identify the virtual compute instances authorized to obtain resource credits from the resource credit pool (e.g., including a list of instance identifiers, a zone, region, or other indication of instances that are authorized). Requests to add compute instances 1030 to those authorized to replenish credits from the resource credit pool may be sent, as well as requests to remove authorization 1040 for particular compute instances.
  • requests to add resource credits 1070 to the resource credit pool may also be sent.
  • requests for pricing information 1050 may be sent to obtain resource credit pricing 1060 when making purchasing decisions.
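• The client interactions of FIG. 10 could be modeled, purely for illustration, as operations against an in-memory stand-in for the control plane. Every method and field name below is a hypothetical placeholder rather than an actual provider interface.

    # Hypothetical in-memory stand-in for the control plane operations shown in FIG. 10.
    class ResourceCreditPoolControlPlane:
        def __init__(self):
            self.pools = {}

        def create_pool(self, pool_id, resource_type, replenishment_policy):
            # resource_type might be e.g. "cpu", "network", or "io"; all fields are placeholders.
            self.pools[pool_id] = {
                "resource_type": resource_type,
                "replenishment_policy": replenishment_policy,
                "credits": 0,
                "authorized_instances": set(),
            }

        def configure_replenishment(self, pool_id, policy):
            self.pools[pool_id]["replenishment_policy"] = policy

        def add_instance(self, pool_id, instance_id):
            self.pools[pool_id]["authorized_instances"].add(instance_id)

        def remove_instance(self, pool_id, instance_id):
            self.pools[pool_id]["authorized_instances"].discard(instance_id)

        def add_credits(self, pool_id, amount):
            self.pools[pool_id]["credits"] += amount

    cp = ResourceCreditPoolControlPlane()
    cp.create_pool("pool-1", "cpu", {"mode": "on_demand"})
    cp.add_instance("pool-1", "instance-a")
    cp.add_credits("pool-1", 100)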
  • FIG. 11A illustrates a virtual compute instance migration performed as part of replenishing an individual resource credit balance for a physical computer resource, according to some embodiments.
• Virtualization host 1100 may send a resource credit request 1132 for resource pool credits to control plane 810 for instance 1102b.
• Control plane 810 may identify the correct resource credit pool and verify the authority of instance 1102b to receive pool credits.
• Control plane 810 may perform an evaluation of virtualization host 1100 and detect a migration event for virtualization host 1100. For example, adding the additional resource credits to the local credit balance for instance 1102b may cause utilization of a physical computer resource authorized by the type of resource credits to exceed an actual capacity of the physical computer resource at virtualization host 1100.
• control plane 810 may select one or more instances to migrate to destination virtualization hosts. As illustrated in FIG. 11A, instance 1102c and instance 1102d may be selected for instance migration 1138. Instances may be selected for migration for various reasons. For example, typical utilization of the physical computer resource by instances 1102c and 1102d may offset the increased utilization provided by the additional resource credits for instance 1102b. Destination virtualization hosts 1110 and 1120 may be selected to host instances 1102c and 1102d respectively. FIG. 13, discussed below, provides further examples and techniques for selecting instances and destination virtualization hosts for migration.
  • Control plane 810 may send a response 1134 authorizing a number of resource credits to be added to the local credit balance for instance 1102b.
• the response may include a scheduling instruction which may allow only a portion of the resource credits to be applied until instances 1102c and 1102d are migrated to virtualization hosts 1110 and 1120.
  • Control plane 810 may also direct the instance migration 1136, performing various operations to re-instantiate instances 1102c and 1102d at virtualization hosts 1110 and 1120.
• control plane 810 may provision a replica instance on virtualization host 1110 of instance 1102c, synchronize the state of the two instances, and redirect traffic to the new instance at virtualization host 1110 acting as instance 1102c.
• the individual resource credit balances for instances 1102c and 1102d may also be replicated to virtualization hosts 1110 and 1120. Migration may be performed in such a way as to be transparent to a user or client of instances 1102c and 1102d (as the virtualization hosts may be multi-tenant, utilization changes due to resource credit requests may be hidden from view). Once migration 1138 is complete, virtualization host 1100 may make the physical computer resources utilized by instances 1102c and 1102d available to other instances.
  • FIG. 11B illustrates a virtual compute instance migration performed as part of replenishing an individual resource credit balance for a physical computer resource, where the requesting virtual compute instance is migrated, according to some embodiments.
• Virtualization host 1140 may send a resource credit request 1162 for pool credits to control plane 810 for instance 1142a.
• Control plane 810 may evaluate virtualization host 1140.
  • control plane 810 may evaluate usage and performance data for utilization of the physical computer resource for which the pool credits are requested. If, for instance, the resource credit request is for processing credits, then past processor utilization of instances 1142a, 1142b, 1142c and 1142d may be examined. Excess processing capacity or bandwidth may be identified based on the evaluation.
• Instance 1142a may be selected for migration 1168 to a destination virtualization host 1150 (which may also be selected based on the resource requirements of instance 1142a, including the additional resource credits). Instance 1142a may be selected based on multiple factors. For example, instance 1142a may be a "small" instance (based on workload and/or utilization), and thus may be easy to migrate. In another example, the destination virtualization host 1150 may have different hardware providing different physical computer resource capabilities that meet the requirements of the virtual compute instance post credit replenishment. FIG. 13, discussed below, provides further discussion on the selection of instances for migration.
• Control plane 810 may send a response 1164 authorizing a number of credits to be added to the individual resource credit balance for instance 1142a.
• the response may include a scheduling instruction which may allow only a portion of the resource credits to be applied until instance 1142a is migrated to virtualization host 1150.
• Control plane 810 may also direct the instance migration 1166, performing various operations to re-instantiate instance 1142a at virtualization host 1150.
• control plane 810 may provision a replica instance on virtualization host 1150 of instance 1142a, synchronize the state of the two instances, and redirect traffic to the new instance at virtualization host 1150 acting as instance 1142a.
  • the individual resource credit balances for instance 1142a may also be replicated to virtualization host 1150. Migration may be performed in such a way as to be transparent to a user or client of instance 1142a.
  • migration 1168 is complete, virtualization host 1140 may make the physical computer resources utilized by instance 1142a available to other instances.
  • FIG. 12 is a high-level flowchart illustrating various methods and techniques for implementing resource credit pools for replenishing resource credit balances of virtual compute instances, according to some embodiments. These techniques may be implemented using various components of provider network as described above with regard to FIGS. 8 - 11B or other system or service providing virtual computing instances.
  • a resource credit pool of resource credits may be maintained to replenish individual resource credit balances of authorized compute instances, in various embodiments.
• Resource credit pools may pertain to a particular type of physical computer resource (e.g., processing, network, I/O or storage). Accordingly, in some embodiments multiple different resource credit pools may be accessible to a virtual compute instance, corresponding to different physical computer resources that the virtual compute instance utilizes to perform work requests. The resource credits in the resource credit pool may be individually applicable to increase utilization of the physical computer resource for the virtual compute instance for which the resource credits are applied.
  • One or multiple different virtual compute instances may be authorized to obtain resource credits from the resource credit pool. As illustrated above in FIG. 10, virtual compute instances may be added or removed from the group of virtual compute instances authorized to obtain resource credits.
• Various enforcement mechanisms (e.g., an access list of authorized instances) may restrict replenishment from the resource credit pool to the authorized virtual compute instances, in some embodiments.
  • a common set of virtual compute instances may be authorized to obtain resource credits from multiple different resource credit pools (e.g., a pool for networking, a pool for processing, a pool for I/O, etc.), while in other embodiments the authorized virtual compute instances may vary from one resource credit pool to another.
  • Resource credit pools may be replenished in various ways by obtaining more resource credits from a provider network.
  • a provider network may offer resource credits for purchase, either individually or in batches of resource credits.
  • Resource credit pools may be refilled in automated fashion (as discussed below with regard to FIG. 14), either on demand or according to a scheduled or periodic refill rate.
  • resource credits may be purchased or added on demand from instances authorized to access the resource credit pool.
  • resource credit pools may authorize access to any virtual compute instance of a provider network. Resource credits may also be manually purchased by submitting a purchase request for resource credits (as illustrated above in FIG. 4) to refill a resource credit pool.
  • Resource credit pricing may be determined according to a fixed pricing scheme, such as price per individual resource credit, which may also be discounted as larger numbers of resource credits are purchased. In some embodiments, resource credit pricing may be determined according to market or otherwise variable rate.
  • a resource credit request may be received for an authorized virtual compute instance to replenish the individual resource credit balance for the authorized virtual compute instance, in various embodiments.
  • the resource credit request may specify a number of resource credits, in some embodiments.
  • a number of resource credits to add to the individual resource credit balance for the authorized compute instance may be determined, as indicated at 1230.
• the number of resource credits may be the same as a requested number of resource credits, while in some embodiments resource credits may be replenished to individual resource credit balances according to an individual resource credit replenishment scheme (e.g., providing a pre-determined number of resource credits to a virtual compute instance in response to a request).
  • a response may be sent indicating the number of resource credits to be added to the individual resource credit balance for the authorized compute instance.
  • the response may include a scheduling instruction or other information directing the addition or application of the resource credits.
  • the virtualization host implementing the virtual compute instance may add the resource credits to the individual resource credit balance and apply them to work requests utilizing the underlying physical computer resource.
  • the resource credit pool may be updated to remove the number of resource credits from the resource credit pool, as indicated at 1250, in various embodiments.
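• Putting elements 1210 through 1250 together, a request handler might check authorization, decide how many credits to grant, and deduct that amount from the pool. The grant policy below (the lesser of the requested amount and the pool balance) is an assumption for the sketch, not the determination scheme described above.

    # Hypothetical handler for a resource credit request against a resource credit pool.
    def handle_credit_request(pool, instance_id, credits_requested):
        if instance_id not in pool["authorized_instances"]:
            return {"granted": 0, "reason": "instance not authorized for this pool"}
        # Assumed grant policy: give what was asked, bounded by the pool balance (element 1230).
        granted = min(credits_requested, pool["credits"])
        pool["credits"] -= granted          # element 1250: remove the granted credits from the pool
        return {"granted": granted}         # element 1240: response with credits to add to the balance

    pool = {"credits": 50, "authorized_instances": {"instance-a"}}
    print(handle_credit_request(pool, "instance-a", 20))   # {'granted': 20}
    print(pool["credits"])                                 # 30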
• Credit-based scheduling for virtual compute instances may allow virtual compute instances to handle workloads that are irregular or unpredictable. For multiple virtual compute instances located on the same virtualization host, credit-based scheduling distributes utilization of underlying physical resources according to the individual resource credit balances for the instances. Some increased utilization for a virtual compute instance may exceed what the virtualization host has the capacity or capability to provide (or may not be provided without reducing the performance of other virtual compute instances located at the virtualization host). When an individual resource credit balance for a virtual compute instance is replenished, it may be that the virtualization host is unable to meet the various performance commitments of the virtual compute instances located at the virtualization host. In such scenarios, migrating one or more virtual compute instances to another virtualization host may allow for the additional resource credits added to an individual resource credit balance to be applied.
  • FIG. 13 is high-level flowchart illustrating various methods and techniques for migrating instances in a provider network as part of replenishing instance resource credit balances from a resource credit pool, according to some embodiments.
  • a resource credit request may be received for an authorized virtual compute instance to replenish an individual resource credit balance for the authorized virtual compute instance from a resource credit pool.
  • the increase in utilization provided by applying the additional resource credits may cause the overall utilization of the underlying physical resource to exceed the capacity of the underlying physical resource.
• virtualization hosts may be evaluated and/or monitored.
  • a virtualization host implementing an authorized virtual compute instance that receives additional resource credits from a resource credit pool may be evaluated to detect a migration event for the virtualization host, as indicated at 1310, as a result of replenishing the individual resource credit balance. For instance, in at least some embodiments, credit usage, and other information or performance statistics for the virtualization host and the instances located on the virtualization host (including the virtual compute instance for which the resource credits are requested as well as other virtual compute instances) may be collected (as illustrated in FIG. 9).
  • the virtualization host may be evaluated based on current utilization of the underlying physical resource for the requested resource credits, as well as historical utilization trends, based on the virtual compute instances located on the virtualization host.
  • the evaluation may also include adding the increase in utilization of the physical resource when the requested resource credits are applied, to determine whether the utilization increase exceeds the capabilities of the virtualization host triggering a migration event. If, for instance, the evaluation determines that based on historical trends and current utilization information, a physical resource (e.g., one or more central processing units) is at 80% utilization and the requested resource credits provide a 30% increase to the utilization of the physical resource, then a migration event may be detected as the estimated utilization (110%) exceeds the capacity of the virtualization host.
• migration events may be triggered when utilization or workload for an underlying physical resource exceeds a threshold above which the performance of a virtual compute instance is slowed or impacted in violation of a service level agreement.
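• The worked example above (80% current utilization plus a 30% increase from the requested credits exceeding capacity) reduces to a simple projection check, sketched below with illustrative inputs and an assumed capacity threshold.

    # Hypothetical migration-event check based on projected utilization of one physical resource.
    def migration_event_detected(current_utilization, credit_utilization_increase, capacity=1.0):
        projected = current_utilization + credit_utilization_increase
        return projected > capacity

    # CPUs at 80% utilization, requested credits add 30%: 110% exceeds capacity, so a migration event
    print(migration_event_detected(0.80, 0.30))   # True
    print(migration_event_detected(0.50, 0.30))   # False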
• if a migration event is not detected, as indicated by the negative exit from 1320, then replenishment of the individual resource credit balance for the authorized virtual compute instance may be completed, as indicated at 1322 (as discussed above with regard to FIG. 12).
• if a migration event is detected, as indicated by the positive exit from 1320, then a migration of one or more virtual compute instances located on the virtualization host may be performed so that sufficient utilization capacity exists for the replenished virtual compute instance.
• Many different methods for selecting virtual compute instance(s) to migrate from a virtualization host, as well as a destination virtualization host, may be implemented. In some embodiments, one or more virtual compute instances implemented on the virtualization host may be selected to migrate so as to provide the utilization capacity. For instance, if an originating virtualization host has high CPU utilization and low memory utilization, then it may be desirable to locate a virtualization host with the reverse utilization for a low CPU/high memory instance.
  • Selecting virtual compute instances to migrate may depend upon various factors. For example, migration burden or workload may be assessed for the virtual compute instances, in some embodiments. Migrating a larger virtual compute instance (by resource utilization and/or workload) may be, for instance, more difficult or costly to perform. If the movement of multiple smaller virtual compute instances achieves the same effect, then in some cases multiple virtual compute instances may be moved. In addition to the cost of migrating virtual compute instances, the impact of migration on the operation of virtual compute instances may be assessed. For example, the performance of the various virtual compute instances on a virtualization host may be subject to respective service level agreements (SLAs). If a migration operation may cause a virtual compute instance to violate an SLA, then the virtual compute instance may be less likely to be selected for migration.
  • SLAs service level agreements
  • virtualization hosts may be multi-tenant, hosting virtual compute instances for different clients.
  • the impact of a resource credit performance on those virtual compute instances that did not request the resource credits may be minimized when selecting instances to migrate.
  • the impact or effect of performing a migration may be examined upon the one or more virtualization hosts selected as destinations for virtual compute instances (as discussed below).
  • virtualization hosts in a provider network may be analyzed to determine whether utilization capacity exists to perform migration and host one or more of the instances selected for migration.
• a possible destination virtualization host may be evaluated based on current utilization of underlying physical resources utilized by the compute instances selected for migration, as well as historical utilization trends, based on the virtual compute instances located on the possible destination virtualization host.
• the analysis may also include adding the increase in utilization of the physical resources of the one or more of the selected instances to be hosted, to determine whether hosting one or more of the selected instances exceeds the capabilities of the virtualization host (or negatively impacts the performance of currently hosted instances in violation of an SLA).
• while resource credits obtained from the resource credit pool may increase utilization of one physical resource implemented at a virtualization host, the utilization of many different physical computer resources at the virtualization host may also be considered when selecting virtual compute instances to migrate and destination virtualization hosts.
  • a placement technique for migrating instances may be implemented to balance utilization of resources across the virtualization hosts of a provider network.
  • One such technique is described below with regard to elements 1330 through 1360.
  • a set of candidate destination virtualization hosts may be selected for consideration when performing a migration of a virtual compute instance from the virtualization host.
• a provider network may, for instance, implement large numbers of virtualization hosts, distributed across multiple data centers. It may be computationally less expensive to reduce the number of hosts considered for hosting a migrated virtual compute instance.
  • a set of virtualization hosts may be randomly selected. Some biases may be included when performing the selection, such as those virtualization hosts that have unbalanced utilization among different underlying physical computer resources, as well as those virtualization hosts that are similarly located as the originating host for the migrated instance.
• the virtual compute instances located on the virtualization host for which the migration event is detected may be scored for migration, in some embodiments. For example, a score may be calculated by computing the standard deviation of the mean of the utilization percentages of resources for the virtualization host, and then determining a score for each instance according to how much a migration of the instance from the virtualization host reduces the standard deviation. A similar calculation may be performed for each candidate destination virtualization host.
• the set of candidate virtualization hosts may be scored individually for each of the virtual compute instances located on the host for which a migration event has been detected. Thus, if the virtualization host implements 4 instances (as in FIG. 11A), then a candidate destination virtualization host may be scored for each of the 4 instances.
• a score for a candidate virtualization host may be determined in various ways. For example, similar to the calculation above for the virtual compute instances on the originating host, the standard deviation of the mean of the utilization percentages of resources for the candidate destination virtualization host may be determined, and then a score may be determined for each instance added to the host according to how much a migration of the instance from the virtualization host reduces/improves the standard deviation. As indicated at 1360, based, at least in part, on the scores (determined at 1340 and 1350), one or more instances may be selected to migrate to a particular destination virtualization host. For example, the instances and destinations may be selected based on determining which migrations improve the standard deviations of utilization of physical computer resources at the originating and destination hosts respectively. In some embodiments, minimum improvement thresholds or criteria may be implemented such that a new set of candidate destination virtualization hosts may be selected if a migration does not satisfy the criteria.
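• One reading of the scoring just described is sketched below: an instance's score on the originating host is how much removing it reduces the standard deviation of the host's per-resource utilization percentages, and a candidate destination is scored by how much adding the instance changes its standard deviation. The utilization model and example numbers are simplifying assumptions.

    from statistics import pstdev

    # Hypothetical scoring: each host is a dict of resource -> utilization fraction,
    # and each instance contributes a per-resource utilization footprint.
    def score_removal(host_util, instance_util):
        before = pstdev(host_util.values())
        after = pstdev(host_util[r] - instance_util.get(r, 0.0) for r in host_util)
        return before - after          # positive: removing the instance balances the host

    def score_addition(candidate_util, instance_util):
        before = pstdev(candidate_util.values())
        after = pstdev(candidate_util[r] + instance_util.get(r, 0.0) for r in candidate_util)
        return before - after          # positive: hosting the instance balances the candidate

    origin = {"cpu": 0.9, "memory": 0.3, "network": 0.4}
    instance = {"cpu": 0.3, "memory": 0.05}
    candidate = {"cpu": 0.2, "memory": 0.6, "network": 0.5}
    print(score_removal(origin, instance), score_addition(candidate, instance))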
  • migration of the virtual compute instances from the virtualization host to a selected destination virtualization host may be directed, in various embodiments.
  • Migration operations may provide a "live" migration experience, in some embodiments.
  • users, clients, or other systems that interact with the migrated instances may experience little or no impact (e.g., downtime) as a result of the migration.
  • migration may include provisioning and configuring a destination virtual compute instance based on the selected virtual compute instance for migration. Operation at the destination virtual compute instance may be started and may be synchronized with the currently operating virtual compute instance selected for migration, in some embodiments. For instance, tasks, operations, or other functions performed at the selected virtual compute instance may be replicated at the destination virtual compute instance.
  • a stream of messages or indications of these tasks may be sent from the selected virtual compute instance to the destination virtual compute instance so that they may be replicated, for example.
  • Access to other computing resources (e.g., a data volume) or systems that are utilized by the selected virtual compute instance may be provided to the destination virtual compute instance (in order to replicate or be aware of the current state of operations at the selected virtual compute instance), in some embodiments.
  • Individual resource credit balances for the virtual compute instance may be transferred to the destination virtualization host.
  • requests for the selected virtual compute instance may be directed to the destination virtual compute instance.
  • a network endpoint, or other network traffic component may be modified or programmed to now direct traffic for the selected virtual compute instance to the destination virtual compute instance. Operation of the selected virtual compute instance that is currently operating may then be stopped, allowing the virtualization host to use physical computer resources once used by the selected virtual compute instance.
  • a response to replenish the individual resource credit balance for the authorized virtual compute instance may be sent that is configured according to the migration performed in element 1370 above, in various embodiments.
• resources freed by migrating the virtual compute instances, or newly acquired resources, may also not be fully available until the completion of the migration.
  • a scheduling instruction or other indication may be included in responses sent to replenish individual resource credit balances indicating how and/or when the additional resource credits may be consumed.
• for example, if 20 resource credits are granted, the scheduling instruction may indicate that 10 resource credits may be immediately available, while the remaining 10 resource credits may not be applied until the migration is complete.
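• A response carrying such a scheduling instruction might simply partition the grant into an immediately usable portion and a portion held back until a migration-complete event, as in the hypothetical sketch below (field names and the 50/50 split are assumptions).

    # Hypothetical scheduling instruction partitioning a credit grant during a pending migration.
    def build_grant_response(total_credits, migration_pending, immediate_fraction=0.5):
        immediate = int(total_credits * immediate_fraction) if migration_pending else total_credits
        return {
            "credits_immediate": immediate,
            "credits_on_migration_complete": total_credits - immediate,
        }

    print(build_grant_response(20, migration_pending=True))
    # {'credits_immediate': 10, 'credits_on_migration_complete': 10}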
  • the response may be sent to a current and destination virtualization host if the virtual compute instance is itself migrating in response to replenishing the individual resource credit balance.
  • the large computing resources of a provider network may allow for increased utilization of computing resources via resource credits in a manner that makes the resource credits that may be added to or included in a resource credit pool appear unlimited to a customer of a provider network that implements resource credit pools.
• resource credits may be acquired for a resource credit pool in a manner commensurate with the type of work performed by the virtual compute instances that replenish resource credits from the resource credit pool.
  • virtual compute instances may perform work that provides revenue or otherwise adds value as a result of performance. Therefore, in such a scenario, a replenishment scheme or technique for acquiring additional resource credits may provide automatic resource credit acquisitions as needed.
  • virtual compute instances may perform work that is a cost to be constrained or budgeted for (e.g., support functions such as Information Technology (IT) services).
  • scheduled or manual resource credit acquisitions so as to remain within constraints for performing the work may be implemented as part of a replenishment scheme or technique.
• FIG. 14 is a high-level flowchart illustrating various methods and techniques for replenishing a resource credit pool, according to some embodiments.
  • available resource credits in a resource credit pool may be monitored, in various embodiments.
  • a resource credit pool balance or other indicator of resource credits may be maintained and/or updated in response to resource credit acquisitions or deductions for replenishing individual resource credit balances.
  • the available resource credits may, in some embodiments, be compared to a replenishment threshold, as indicated at 1420. If, as indicated by the negative exit from 1420, the available resource credits are above the replenishment threshold, then monitoring 1420 of the available resource credits may continue. However, if the available resource credits are below the replenishment threshold, as indicated by the positive exit from 1420, then a replenishment action may be necessary for the resource credit pool.
  • a replenishment policy or scheme may be implemented for a resource credit pool, in various embodiments.
  • the replenishment policy may be configured at the creation of or during the existence of the resource credit pool.
  • the replenishment policy for the resource credit pool may provide various instructions or actions to take as part of replenishing the resource credit pool.
  • the replenishment policy may indicate the replenishment threshold or other triggering event that determines when new resource credits should be acquired for the resource credit pool.
• the replenishment policy may indicate pricing limits or spending limits which may be determinative as to the number of resource credits acquired.
  • the replenishment policy may, in some embodiments, describe a schedule (e.g., monthly, weekly or daily) of resource credit acquisitions or indicate that resource credit acquisitions are manually performed, while other replenishment policies may refill the resource credit pool on demand from authorized virtual compute instances.
  • the replenishment policy for the resource credit pool may provide for an automated replenishment of resource credits.
  • resource credits may be obtained from the provider network to replenish the resource credit pool according to the automated replenishment policy, in some embodiments.
  • the replenishment policy may describe a fixed number of resource credits to purchase or fixed amount of purchasing funds.
  • the number of resource credits may be determined based on the replenishment threshold (e.g., how many resource credits to be acquired in order to exceed the replenishment threshold).
  • resource credits may be acquired at a pre-determined price.
• a provider network may offer resource credits for purchase to replenish resource credit pools according to a market value for the resource credits.
  • prices for resource credits may vary (e.g., depending on the type of underlying physical computer resource).
  • the current credit price may be determined (as illustrated above in FIG. 10).
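• Combining the threshold check, refill sizing, and spending limits described above, an automated replenishment policy might be sketched as follows; the threshold, prices, and limits are placeholders rather than values from the specification.

    # Hypothetical automated replenishment of a resource credit pool.
    def replenish_if_needed(pool_credits, threshold, credit_price_cents, spending_limit_cents):
        if pool_credits >= threshold:
            return 0                                  # above the replenishment threshold: nothing to do
        needed = threshold - pool_credits             # refill enough to reach the threshold
        affordable = spending_limit_cents // credit_price_cents
        return min(needed, affordable)                # bounded by the policy's spending limit

    # 40 credits left, 100-credit threshold, credits priced at 5 cents, $10.00 spending limit
    print(replenish_if_needed(40, 100, credit_price_cents=5, spending_limit_cents=1000))   # 60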
  • a low resource credit notification for the resource credit pool may be sent (e.g., to a client associated with a user account), as indicated at 1432.
  • resource credit balances may be applied to perform work requests at the type of underlying physical computer resource to which the resource credits correspond.
  • work requests to utilize processing resources may be performed by applying processing resource credits from the processing resource credit balance of a virtual compute instance requesting the work.
  • a credit-based scheduler, or other component of a virtualization host or other system implementing a virtual compute instance may be configured to perform work requests in such a manner.
• FIG. 15 is a high-level flowchart illustrating various methods and techniques for requesting resource credits from a resource credit pool for a particular instance, according to some embodiments.
  • an individual resource credit balance for a virtual compute instance implemented at a virtualization host may be maintained. As resource credits are expended or added, a table entry or other set of metadata describing resource credit balances may be updated, for example. In at least some embodiments multiple individual resource credit balances for different types of physical computer resources may be maintained (e.g., processing, network, I/O or storage). As virtualization hosts may also implement other virtual compute instances, other individual resource credit balances for those other virtual compute instances may also be maintained, in some embodiments.
• Work requests may be received and/or instigated at the virtual compute instance. These work requests may be requests to perform a certain amount of processing, data transfer over a network, or any other utilization of a physical computer resource implemented at the virtualization host. For some work requests, resource credits maintained in the individual resource credit balance may be sufficient to perform the work request. However, in some cases work requests for a virtual compute instance may exceed the individual resource credit balance. As indicated at 1520, a number of resource credits to perform work request(s) at the virtual compute instance in addition to the available resource credits in the individual resource credit balance may be determined, in various embodiments.
• the amount of resource credits to operate at full utilization of the physical computer resource until completion of the work request(s) may be determined by calculating the number of resource credits necessary to provide utilization of the physical computer resource for the duration of or amount of work in the work requests. If, for instance, an application running on a virtual compute instance needs to perform 500 I/O operations per second (IOPS), then a corresponding number of I/O resource credits to provide utilization of the physical I/O channel that achieves 500 IOPS may be calculated based on the utilization value of individual I/O resource credits.
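• The 500 IOPS example is simple arithmetic once a per-credit utilization value is assumed; the sketch below uses made-up figures (channel capacity, credit duration) to show the calculation.

    import math

    # Hypothetical calculation: I/O credits needed to sustain a target rate for a work request.
    def io_credits_needed(target_iops, duration_seconds, iops_at_full_utilization,
                          seconds_per_credit=60):
        # Fraction of the physical I/O channel needed to hit the target rate.
        utilization_fraction = target_iops / iops_at_full_utilization
        full_utilization_seconds = utilization_fraction * duration_seconds
        return math.ceil(full_utilization_seconds / seconds_per_credit)

    # 500 IOPS for 10 minutes on a channel assumed to deliver 2000 IOPS at full utilization
    print(io_credits_needed(500, 600, 2000))   # 3 credits (150 full-utilization seconds)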
  • a resource credit request may be sent to obtain the number of additional resource credits from a resource credit pool, as indicated at 1530, in various embodiments.
  • authorization and/or identification credentials may be included in the resource credit request.
  • Other information may also be included, such as the individual resource credit balance for the virtual compute instance (which may be used, for example, to prioritize replenishment requests) in some embodiments.
  • the request may be formatted according to an API or other protocol for resource credit pool manager or other system or device that manages the resource credit pool.
  • a response may be received, in various embodiments, to add at least one resource credit to update the individual resource credit balance, as indicated at 1540.
  • the response may indicate that fewer resource credits are to be added than were requested (e.g., only 5 resource credits), if, for example, the resource credit pool manager implements prioritization or replenishment schemes for replenishing individual resource credit balances.
  • the response may include a scheduling or other application instruction for the credits (e.g., a rate or event in which some or all of the additional resource credits may be added to a resource credit pool, such as at the completion of a migration operation).
  • the updated individual resource credit balance may be applied to perform the work requests, as indicated at 1550, in various embodiments.
  • the credit-based scheduler, or other component that applies/enforces physical computer resource utilization according to the individual resource credit balances may consider/apply the additional resource credits when determining utilization of the physical computer resource for the virtual compute instance.
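The following is a minimal, self-contained sketch of the request/response exchange described above, with the resource credit pool manager modeled as a local object; the class, method, and field names, and the grant-what-remains policy, are assumptions made only for illustration.

```python
class ResourceCreditPool:
    """Toy stand-in for a resource credit pool manager. The grant policy
    (hand out whatever the pool still holds, up to the amount requested)
    is an assumption used only to make the example concrete; an actual
    manager might prioritize requests by the reported current balance.
    """
    def __init__(self, credits_available):
        self.credits_available = credits_available

    def request_credits(self, instance_id, credits_requested, current_balance):
        # current_balance is carried along because, as noted above, it may be
        # used to prioritize replenishment; this toy policy ignores it.
        granted = min(credits_requested, self.credits_available)
        self.credits_available -= granted
        return {"instance": instance_id, "credits_granted": granted}

# An instance that is 1,800 credits short asks the pool, which can only cover part of it.
pool = ResourceCreditPool(credits_available=1000)
reply = pool.request_credits("instance-a", 1800, current_balance=1200)
updated_balance = 1200 + reply["credits_granted"]
print(reply["credits_granted"], updated_balance)  # -> 1000 2200
```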
  • a provider network may implement variable timeslices for latency- dependent workloads at a virtualization host, according to some embodiments. Because differing virtual compute instances may perform different tasks or functions, vCPUs implemented for the virtual compute instances may process different types of workloads. Some processing workloads may be processing intensive, and thus may be performed without waiting on another component or device to perform, in various embodiments. Other processing workloads may enter wait states until the completion of some other operation, such as an input/output (I/O) operation. Implementing scheduling techniques to handle these different workloads often optimizes one type of workload at the expense of another. Variable timeslices may be implemented to provide for optimal handling of different workloads.
  • Timeslices may be implemented to determine an amount of time up to which a vCPU may utilize a processing resource, such as a central processing unit (CPU).
  • a scheduling technique may be implemented to select a vCPU to utilize the processing resource according to a timeslice. If, for example, a vCPU is selected that performs intensive processing operations, that vCPU may utilize the entire timeslice and still not finish performing the processing operations. If another vCPU is selected that performs only a few operations and then waits on a response, that vCPU may spend the rest of the timeslice waiting (if no response is received, then a subsequent timeslice may provide sufficient time for the response). Instead of waiting, the other vCPU may yield the remaining timeslice and resume processing when the response is received. In such a scenario, the other vCPU may preempt another vCPU utilizing the processing resource in order to continue processing based on the response.
  • Preempting a running vCPU to allow another vCPU to utilize a processing resource may trigger a context switch, in various embodiments.
  • a context switch may involve changing register values for the processing resource as well as loading different data into a cache which is used for performing processing for the vCPU taking over the processing resource.
  • Context switching consumes a portion of a timeslice allotted to a vCPU.
  • As the preempting vCPU performs tasks, the data in the cache may be changed (from the data used by the preempted vCPU).
  • preemption compensation may be provided to vCPUs that are preempted to allow latency-dependent vCPUs to utilize the processing resource.
  • a preemption compensation may increase the timeslice for a preempted vCPU. The preemption compensation may be determined based, at least in part, on a reduction in throughput of the preempted vCPU as a result of performing the preemption.
  • FIG. 16 is a timeline illustrating variable timeslices for processing latency-dependent workloads at a virtualization host, according to some embodiments.
  • a physical CPU may be utilized by different virtual CPUs, such as vCPU 1610, vCPU 1620, vCPU 1630, and vCPU 1640.
  • the timeline illustrates utilization of the physical CPU by vCPUs 1610, 1620, 1630, and 1640 over time 1600.
  • vCPU 1610 is illustrated as utilizing the physical CPU from 0 to T8.
  • a scheduled timeslice 1652 may be used to determine the duration for which vCPU 1610 may utilize the physical CPU. As vCPU 1610 is not preempted during this timeslice 1652, no preemption compensation is determined to increase timeslice 1652.
  • vCPU 1620 begins utilizing the physical CPU.
  • a preemption event 1622 occurs at T10, switching utilization of the physical CPU to vCPU 1630.
  • vCPU 1630 may be latency-dependent.
  • the utilization of the physical CPU by vCPU 1630 is small relative to the utilization of vCPU 1620 (e.g., completing utilization at T11).
  • a preemption compensation 1624 may be determined. The preemption compensation 1624 may be used to increase the timeslice 1654 for vCPU 1620 (e.g., increasing the timeslice to end at T18).
  • multiple preemptions may occur for a vCPU during a single timeslice.
  • increased timeslice 1656 for vCPU 1610 illustrates two different preemption events, 1612 and 1614, to allow vCPU 1640 to utilize the physical CPU.
  • Preemption compensation for a timeslice may be dynamic, increasing as the number of preemption events increases. For example, preemption compensation 1616 for vCPU 1610 appears larger than preemption compensation 1624 for vCPU 1620 (as the number of preemptions for vCPU 1610 was greater).
  • Increasing a timeslice for a vCPU may be limited to the timeslice in which a preemption event occurs. For example, the next time vCPU 1610 utilizes the physical CPU, a default timeslice (e.g., timeslice 1652) may be again scheduled for vCPU 1610. Yet, in some embodiments, preemption compensation may be provided to increase the timeslice for multiple timeslices for a particular vCPU (e.g., based on analysis of historical preemption events for a vCPU, increasing the timeslice may be performed proactively).
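A minimal sketch of the timeslice accounting suggested by the FIG. 16 timeline follows; the default timeslice length, the per-event compensation amounts, and the choice to reset to the default on each new timeslice are assumptions for illustration only.

```python
class TimesliceTracker:
    """Per-vCPU timeslice accounting in the spirit of the FIG. 16 timeline:
    each preemption event adds compensation to the current timeslice only,
    and the next scheduling round starts again from the default length.
    The default length and the compensation amounts are assumptions.
    """
    def __init__(self, default_timeslice_ms=20.0):
        self.default_timeslice_ms = default_timeslice_ms
        self.current_timeslice_ms = default_timeslice_ms

    def on_preemption(self, compensation_ms):
        # Compensation accumulates with each preemption event in the same timeslice.
        self.current_timeslice_ms += compensation_ms

    def on_timeslice_start(self):
        # The increase is not carried over to later timeslices by default.
        self.current_timeslice_ms = self.default_timeslice_ms

# Two preemption events during one timeslice (as for vCPU 1610 in FIG. 16).
tracker = TimesliceTracker()
tracker.on_preemption(1.5)
tracker.on_preemption(1.5)
print(tracker.current_timeslice_ms)  # -> 23.0
tracker.on_timeslice_start()
print(tracker.current_timeslice_ms)  # -> 20.0
```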
  • This specification next includes a description of a provider network, which may implement variable timeslices for latency-dependent workloads at a virtualization host.
  • a number of different methods and techniques to implement variable timeslices for latency-dependent workloads at a virtualization host are then discussed, some of which are illustrated in accompanying flowcharts. Various examples are provided throughout the specification.
  • Variable timeslices for latency-dependent workloads may be implemented at virtualization hosts, such as virtualization host 310 discussed above with regard to FIG. 3, which may also be implemented as part of a provider network, such as provider network 200 discussed above with regard to FIG. 2.
  • a virtualization host may host multiple compute instances, which may be different compute instances or may be the same type of compute instance. Resource credits may be implemented for scheduling virtual computer resources.
  • a virtualization host may also implement a virtualization management module (e.g., virtualization management module 320), which may handle the various interfaces between virtual compute instances and physical computing resource(s) (e.g., resources 340 in FIG. 3, such as various hardware components, processors, I/O devices, networking devices, etc.).
  • a virtualization management module may implement a resource credit balance scheduler to act as a meta-scheduler, managing, tracking, applying, deducting, and/or otherwise handling all resource credit balances for compute instances at the virtualization host.
  • the resource credit balance scheduler may be configured to receive virtual compute resource work requests from compute instances. Each work request may be directed toward the virtual computer resource corresponding to the compute instance that submitted the work. For each request, the resource credit balance scheduler may be configured to determine a current resource credit balance for the requesting compute instance, and generate scheduling instructions to apply resource credits when performing the work request.
  • the resource credit balance scheduler may perform or direct the performance of the scheduling instructions, directing or sending the work request to the underlying physical computing resources to be performed.
  • the resource scheduling instructions may be sent to a virtual compute resource scheduler (e.g., such as virtual compute resource scheduler 322 in FIG. 3), which may be a scheduler for the physical resources 340, such as CPU(s), implemented at the virtualization host.
  • the resource credit balance scheduler and/or the virtual compute resource scheduler may be configured to perform the various techniques described below with regard to FIGS.
  • the resource credit balance scheduler and/or virtual compute resource scheduler may determine preemption compensation for a vCPU that has been preempted by a latency-dependent vCPU. A scheduled timeslice for the preempted vCPU may be increased according to the determined preemption compensation. Resource credits for the preemption may be deducted from a resource credit balance for the compute instance associated with the latency-dependent vCPU that preempted the vCPU.
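The following sketch illustrates, under stated assumptions, how a resource credit balance scheduler of the kind described above might track per-instance balances and turn work requests into scheduling decisions; the class name, the baseline share, and the credits-to-utilization mapping are hypothetical.

```python
from collections import defaultdict

class ResourceCreditBalanceScheduler:
    """Hypothetical meta-scheduler: it tracks per-instance resource credit
    balances and turns each work request into a scheduling decision for the
    underlying resource scheduler. The baseline share and the credits-to-
    utilization mapping are illustrative assumptions.
    """
    def __init__(self, baseline_share=0.1):
        self.balances = defaultdict(float)    # instance_id -> resource credits
        self.baseline_share = baseline_share  # utilization granted without spending credits

    def add_credits(self, instance_id, credits):
        self.balances[instance_id] += credits

    def schedule(self, instance_id, credits_requested):
        """Return (utilization share, credits applied) for one work request."""
        applied = min(self.balances[instance_id], credits_requested)
        self.balances[instance_id] -= applied
        if applied > 0:
            return 1.0, applied               # full utilization while credits last
        return self.baseline_share, 0.0       # otherwise fall back to the baseline

scheduler = ResourceCreditBalanceScheduler()
scheduler.add_credits("instance-a", 5.0)
print(scheduler.schedule("instance-a", 2.0))   # -> (1.0, 2.0)
print(scheduler.schedule("instance-a", 10.0))  # -> (1.0, 3.0)  remaining credits applied
print(scheduler.schedule("instance-a", 1.0))   # -> (0.1, 0.0)  balance exhausted
```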
  • FIG. 17 is a high-level flowchart illustrating various methods and techniques for implementing variable timeslices for processing latency-dependent workloads, according to some embodiments. These techniques may be implemented using various components of a network-based virtual computing service as described above with regard to FIGS. 2 - 3 or other virtual computing resource hosts.
  • utilization of a central processing unit (CPU) for a virtual central processing unit (vCPU) may be initiated according to a scheduled timeslice for the vCPU, in various embodiments.
  • a scheduler, or similar component, may be implemented as part of a virtualization host and may, for instance, evaluate multiple vCPUs implemented for virtual compute instances at a virtualization host and select a vCPU to utilize the CPU.
  • Different scheduling policies or techniques may be implemented, such as fair-share scheduling, round-robin scheduling, or any other scheduling technique.
  • a credit-based scheduler may select vCPUs to utilize the CPU based on a resource credit balance maintained for a virtual compute instance for processing resources.
  • resource credits may be applied to increase utilization (e.g., above a baseline utilization) of a physical computer resource for a virtual compute instance.
  • resource credits may be applied by a scheduler when determining which vCPU to select for utilizing the CPU.
  • a given vCPU When selected for utilization of the CPU, a given vCPU may have a scheduled timeslice (e.g., 20 ms) during which the vCPU may utilize the CPU. In at least some embodiments, a default-sized timeslice may be provided for each vCPU selected to begin utilizing the CPU. As workloads for vCPUs may vary, with some vCPU workloads being processing intensive while other CPU workloads perform smaller tasks, a given vCPU may or may not utilize all of the scheduled timeslice.
  • Some vCPUs may utilize the CPU to perform tasks that are complete without dependence on any other physical computer resource, whereas some vCPUs may perform tasks that depend upon operations performed by other physical computer resources to complete (e.g., various input/output (I/O) operations for storage, input devices, or networking resources).
  • Latency-dependent workloads for vCPUs may be dependent upon the performance of an I/O operation or other physical computer resource in order to continue to make progress with the performance of tasks.
  • a latency-dependent vCPU may, in various embodiments, enter a wait state prior to the completion of a scheduled timeslice for the latency-dependent vCPU until the performance of the I/O operation or other physical computer resource is complete.
  • vCPUs that perform tasks to send out requests via a network to another computing system, and do not take further action until a response is received back, may be considered latency-dependent.
  • a latency-dependent vCPU may be I/O bound.
  • the processing workloads of some vCPUs may utilize the CPU for the entire scheduled timeslice (and beyond if not limited to the scheduled timeslice) and may be sensitive to providing a certain level of throughput for performing tasks.
  • a vCPU workload may be performing various calculations as part of an analysis task (which may not be dependent upon another physical computer resource to be performed).
  • vCPUs that utilize the entire scheduled timeslice may be CPU bound.
  • the amount of time utilized by latency-dependent vCPUs may be relatively small when compared with vCPUs that utilize the entire scheduled timeslice.
  • preemption may be performed when, for example, the I/O or other physical computer resource operation for which the latency-dependent vCPU was waiting to complete is finished.
  • Preemption may, in various embodiments, be performed to switch utilization of the CPU from one vCPU to another vCPU (e.g., a latency dependent vCPU).
  • a preemption event may be detected, for instance, when a latency-dependent vCPU is ready to begin utilizing the CPU again (e.g., the latency-dependent vCPU is no longer in a wait state).
  • a latency-dependent vCPU may be identified when it is determined that a vCPU did not utilize all of the immediately previous timeslice for the vCPU (e.g., the last time the vCPU utilized the CPU, the vCPU only utilized the CPU for 3 ms out of a 20 ms timeslice).
  • a latency-processing option may be maintained for each vCPU.
  • if the latency-processing option is enabled, preemption may be performed for a vCPU that is identified as latency-dependent. If the latency-processing option is not enabled for a vCPU, then preemption may not be performed for that vCPU (whether or not the vCPU utilized all of the immediately previous timeslice for the vCPU). In at least some embodiments, a latency-dependent vCPU may be I/O bound.
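A minimal sketch of the identification heuristic described above follows, assuming a tunable usage threshold and a per-vCPU latency-processing flag; both are illustrative choices rather than values from the specification.

```python
def may_preempt(vcpu, used_ms_last_slice, last_slice_ms, usage_threshold=0.25):
    """Treat a vCPU as latency-dependent (and allowed to preempt) only if its
    latency-processing option is enabled and it left most of its immediately
    previous timeslice unused. The 25% threshold is an assumed tunable.
    """
    if not vcpu.get("latency_processing_enabled", False):
        return False
    return used_ms_last_slice < usage_threshold * last_slice_ms

print(may_preempt({"latency_processing_enabled": True}, 3.0, 20.0))   # True  (used 3 ms of 20 ms)
print(may_preempt({"latency_processing_enabled": True}, 18.0, 20.0))  # False (CPU bound)
print(may_preempt({"latency_processing_enabled": False}, 3.0, 20.0))  # False (option disabled)
```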
  • the vCPU may continue utilizing the CPU for processing. If the timeslice for the vCPU expires, as indicated by the positive exit from 1732, then a new vCPU may be selected and utilization of the CPU may begin for the selected vCPU. It follows that for some vCPUs a scheduled timeslice may not be increased (in contrast with the scheduled timeslice for some preempted vCPUs as discussed below). If the scheduled timeslice has not expired, as indicated by the negative exit from 1722, then utilization of the CPU by the vCPU may continue until preemption (at 1720) or expiration of the timeslice (at 1722).
  • if the vCPU is preempted by a latency-dependent vCPU, as indicated by the positive exit from 1720, then utilization of the CPU may be paused for the vCPU, as indicated at 1730. Preemption may be performed by storing a state of the tasks, processes, or other operations performed for the vCPU (e.g., storing register values).
  • the latency-dependent vCPU may utilize the CPU for processing within a scheduled timeslice, which may or may not be the same as the scheduled timeslice for the vCPU that was preempted. In at least some embodiments, the timeslice for the latency-dependent vCPU may be decreased (so as to leave room in the overall utilization of the CPU for a preemption compensation as discussed below).
  • a preemption compensation may be determined for the scheduled timeslice of the vCPU.
  • a preemption compensation may be a pre-defined value (e.g., 1 ms).
  • the preemption compensation may be determined based, at least in part, on a reduction in throughput of the given vCPU as a result of the preemption. For example, the number of CPU cycles to perform operations to restore register values and reload data for performing the processes, tasks, or other operations of the vCPU into a cache may be calculated or timed as they are performed.
  • a linear function may be implemented such that the preemption compensation is determined based, at least in part, on the amount of time the latency-dependent vCPU utilized the CPU.
  • Other compensation models or functions, such as exponential decay, may be implemented.
  • the cache miss counter for the given vCPU may be monitored (e.g., indicating the amount of time spent reloading data into the cache, which reduces throughput of the given vCPU relative to when the values still remained in the cache).
  • Preemption compensation may be determined dynamically or on-the-fly such that additional time may be added to the timeslice as the effects of the preemption become known (e.g., more cache misses occur).
  • the scheduled timeslice for the vCPU may then be increased according to the preemption compensation determined for the scheduled timeslice, as indicated at 1760. For instance, if the preemption compensation is determined to be 3 ms, then 3 ms may be added to a timer, tracker, or other component that determines the amount of a timeslice consumed for a vCPU and to increase the amount of time that the vCPU may utilize the CPU. In a credit-based scheduler, such as discussed above, resource credits may not be deducted for additional time provided by preemption compensation, in some embodiments.
  • the utilization of the vCPU may be preempted again by another latency-dependent vCPU (either the same or a different latency-dependent vCPU). For instance, some vCPUs may be preempted multiple times; however, the corresponding preemption compensations for the preemptions may allow the vCPU to achieve the same throughput for a single timeslice as if no preemptions had occurred during the timeslice (reducing or eliminating the impact of multiple context switches and/or other operations when a preemption occurs). If no further preemptions occur and/or the increased timeslice expires (as indicated by the positive exit from 1722), then a next vCPU may begin utilization of the CPU.
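The following sketch combines the compensation options discussed above (a fixed context-switch cost, a term linear in the preempting vCPU's runtime, and a measured cache-miss penalty) into one illustrative function; the constants and this particular combination are assumptions, not the specification's model.

```python
def preemption_compensation_ms(latency_vcpu_runtime_ms,
                               cache_miss_penalty_ms=0.0,
                               fixed_overhead_ms=0.1,
                               linear_factor=0.2):
    """Illustrative preemption compensation: a small fixed context-switch
    cost, a term linear in how long the preempting latency-dependent vCPU
    ran, and an optional measured penalty (e.g., derived from a cache-miss
    counter). The constants are assumptions, not values from the text.
    """
    return (fixed_overhead_ms
            + linear_factor * latency_vcpu_runtime_ms
            + cache_miss_penalty_ms)

# A latency-dependent vCPU ran for 2 ms and caused roughly 0.5 ms of cache refill work.
extra_ms = preemption_compensation_ms(2.0, cache_miss_penalty_ms=0.5)
print(round(extra_ms, 2))  # -> 1.0 ms added to the preempted vCPU's scheduled timeslice
```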
  • FIGS. 8 and 9, discussed above, provide examples of a credit-based scheduler that may be implemented for utilizing physical computer resources at a virtualization host for a virtual compute instance.
  • Credit-based scheduling may apply credits from a resource credit balance for the virtual compute instance in order to increase utilization of the underlying physical computer resources for the virtual compute instance.
  • a resource credit balance for processing resources such as a CPU may be maintained for each virtual compute instance.
  • resource credits may be deducted from the resource credit balance for processing for the virtual compute instance.
  • FIG. 18 is a high-level flowchart illustrating various methods and techniques for updating resource credit balances for virtual compute instances for providing preemption compensations, according to some embodiments.
  • an interrupt to resume processing for a latency-dependent vCPU may be detected, in various embodiments.
  • a network packet may be received, a storage device may return data or an acknowledgment of a write, or any other I/O operation or other physical computer resource operation upon which the latency-dependent vCPU depends may complete and trigger an interrupt or event, which may place the latency-dependent vCPU into a ready to process state.
  • Various scheduling techniques may be used to bump or increase the priority of the latency-dependent vCPU to trigger a preemption event.
  • a vCPU currently utilizing the CPU may be preempted to allow the latency-dependent vCPU to utilize the CPU, in various embodiments.
  • the resource credit balance for the latency-dependent vCPU may be updated, as indicated at 1830, to deduct credit(s) for the utilization of the CPU by the latency-dependent vCPU and credit(s) for providing a preemption compensation to the preempted vCPU, in various embodiments.
  • Latency processing may, in such embodiments, be effectively more costly in terms of resource credits than non-latency processing. However, in this way latency processing may provide a faster (and therefore lower latency) response for latency-dependent vCPUs, while allowing the cost of such processing to be borne by the vCPU initiating the preemption instead of the vCPU being preempted.
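A minimal sketch of the FIG. 18 accounting follows, assuming a hypothetical credits-per-millisecond conversion rate; it shows the preempting instance being charged both for its own CPU time and for the compensation granted to the preempted vCPU.

```python
def charge_for_preemption(balances, preempting_instance,
                          latency_runtime_ms, compensation_ms,
                          credits_per_ms=1.0):
    """Deduct credits from the instance whose latency-dependent vCPU caused
    the preemption: it pays both for its own CPU time and for the preemption
    compensation granted to the preempted vCPU. The credits-per-millisecond
    conversion is an assumed rate.
    """
    cost = (latency_runtime_ms + compensation_ms) * credits_per_ms
    balances[preempting_instance] -= cost
    return cost

balances = {"instance-b": 10.0}
print(charge_for_preemption(balances, "instance-b", 2.0, 1.0))  # -> 3.0 credits deducted
print(balances["instance-b"])                                   # -> 7.0
```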
  • the methods described herein may in various embodiments be implemented by any combination of hardware and software.
  • the methods may be implemented by a computer system (e.g., a computer system as in FIG. 19) that includes one or more processors executing program instructions stored on a computer-readable storage medium coupled to the processors.
  • the program instructions may be configured to implement the functionality described herein (e.g., the functionality of various servers and other components that implement the network-based virtual computing resource provider described herein).
  • the various methods as illustrated in the figures and described herein represent example embodiments of methods. The order of any method may be changed, and various elements may be added, reordered, combined, omitted, modified, etc.
  • Embodiments of dynamic virtual resource request rate task controls for physical resources as described herein may be executed on one or more computer systems, which may interact with various other devices.
  • Embodiments of resource credit pools for replenishing resource credit balances of virtual compute instances as described herein may be executed on one or more computer systems, which may interact with various other devices.
  • Embodiments of variable timeslices for processing latency-dependent workloads as described herein may be executed on one or more computer systems, which may interact with various other devices.
  • FIG. 19 is a block diagram illustrating an example computer system, according to various embodiments.
  • computer system 2000 may be configured to implement nodes of a compute cluster, a distributed key value data store, and/or a client, in different embodiments.
  • Computer system 2000 may be any of various types of devices, including, but not limited to, a personal computer system, desktop computer, laptop or notebook computer, mainframe computer system, handheld computer, workstation, network computer, a consumer device, application server, storage device, telephone, mobile telephone, or in general any type of computing device.
  • Computer system 2000 includes one or more processors 2010 (any of which may include multiple cores, which may be single or multi-threaded) coupled to a system memory 2020 via an input/output (I/O) interface 2030.
  • Computer system 2000 further includes a network interface 2040 coupled to I/O interface 2030.
  • computer system 2000 may be a uniprocessor system including one processor 2010, or a multiprocessor system including several processors 2010 (e.g., two, four, eight, or another suitable number).
  • Processors 2010 may be any suitable processors capable of executing instructions.
  • processors 2010 may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors 2010 may commonly, but not necessarily, implement the same ISA.
  • the computer system 2000 also includes one or more network communication devices (e.g., network interface 2040) for communicating with other systems and/or components over a communications network (e.g. Internet, LAN, etc.).
  • a client application executing on system 2000 may use network interface 2040 to communicate with a server application executing on a single server or on a cluster of servers that implement one or more of the components of the provider network described herein.
  • an instance of a server application executing on computer system 2000 may use network interface 2040 to communicate with other instances of the server application (or another server application) that may be implemented on other computer systems (e.g., computer systems 2090).
  • computer system 2000 also includes one or more persistent storage devices 2060 and/or one or more I/O devices 2080.
  • persistent storage devices 2060 may correspond to disk drives, tape drives, solid state memory, other mass storage devices, or any other persistent storage device.
  • Computer system 2000 (or a distributed application or operating system operating thereon) may store instructions and/or data in persistent storage devices 2060, as desired, and may retrieve the stored instructions and/or data as needed.
  • computer system 2000 may host a storage system server node, and persistent storage 2060 may include the SSDs attached to that server node.
  • Computer system 2000 includes one or more system memories 2020 that are configured to store instructions and data accessible by processor(s) 2010.
  • system memories 2020 may be implemented using any suitable memory technology (e.g., one or more of cache, static random access memory (SRAM), DRAM, RDRAM, EDO RAM, DDR 10 RAM, synchronous dynamic RAM (SDRAM), Rambus RAM, EEPROM, non-volatile/Flash-type memory, or any other type of memory).
  • System memory 2020 may contain program instructions 2025 that are executable by processor(s) 2010 to implement the methods and techniques described herein.
  • program instructions 2025 may be encoded in platform native binary, any interpreted language such as Java™ byte-code, or in any other language such as C/C++, Java™, etc., or in any combination thereof.
  • program instructions 2025 include program instructions executable to implement the functionality of a provider network and/or virtualization host, in different embodiments.
  • program instructions 2025 may implement multiple separate clients, server nodes, and/or other components.
  • program instructions 2025 may include instructions executable to implement an operating system (not shown), which may be any of various operating systems, such as UNIX, LINUX, Solaris™, MacOS™, Windows™, etc. Any or all of program instructions 2025 may be provided as a computer program product, or software, that may include a non-transitory computer-readable storage medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to various embodiments.
  • a non-transitory computer-readable storage medium may include any mechanism for storing information in a form (e.g., software, processing application) readable by a machine (e.g., a computer).
  • a non-transitory computer-accessible medium may include computer-readable storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM coupled to computer system 2000 via I/O interface 2030.
  • a non-transitory computer-readable storage medium may also include any volatile or non-volatile media such as RAM (e.g. SDRAM, DDR SDRAM, RDRAM, SRAM, etc.), ROM, etc., that may be included in some embodiments of computer system 2000 as system memory 2020 or another type of memory.
  • program instructions may be communicated using optical, acoustical or other form of propagated signal (e.g., carrier waves, infrared signals, digital signals, etc.) conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface 2040.
  • system memory 2020 may include data store 2045, which may be configured as described herein.
  • system memory 2020 (e.g., data store 2045 within system memory 2020), persistent storage 2060, and/or remote storage 2070 may store data blocks, replicas of data blocks, metadata associated with data blocks and/or their state, configuration information, and/or any other information usable in implementing the methods and techniques described herein.
  • I/O interface 2030 may be configured to coordinate I/O traffic between processor 2010, system memory 2020 and any peripheral devices in the system, including through network interface 2040 or other peripheral interfaces.
  • I/O interface 2030 may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory 2020) into a format suitable for use by another component (e.g., processor 2010).
  • I/O interface 2030 may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example.
  • I/O interface 2030 may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments, some or all of the functionality of I/O interface 2030, such as an interface to system memory 2020, may be incorporated directly into processor 2010.
  • Network interface 2040 may be configured to allow data to be exchanged between computer system 2000 and other devices attached to a network, such as other computer systems 2090 (which may implement one or more components of the distributed system described herein), for example.
  • network interface 2040 may be configured to allow communication between computer system 2000 and various I/O devices 2050 and/or remote storage 2070.
  • Input/output devices 2050 may, in some embodiments, include one or more display terminals, keyboards, keypads, touchpads, scanning devices, voice or optical recognition devices, or any other devices suitable for entering or retrieving data by one or more computer systems 2000.
  • Multiple input/output devices 2050 may be present in computer system 2000 or may be distributed on various nodes of a distributed system that includes computer system 2000.
  • similar input/output devices may be separate from computer system 2000 and may interact with one or more nodes of a distributed system that includes computer system 2000 through a wired or wireless connection, such as over network interface 2040.
  • Network interface 2040 may commonly support one or more wireless networking protocols (e.g., Wi-Fi/IEEE 802.11, or another wireless networking standard). However, in various embodiments, network interface 2040 may support communication via any suitable wired or wireless general data networks, such as other types of Ethernet networks, for example. Additionally, network interface 2040 may support communication via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks, via storage area networks such as Fibre Channel SANs, or via any other suitable type of network and/or protocol.
  • computer system 2000 may include more, fewer, or different components than those illustrated in FIG. 19 (e.g., displays, video cards, audio cards, peripheral devices, other network interfaces such as an ATM interface, an Ethernet interface, a Frame Relay interface, etc.)
  • any of the distributed system embodiments described herein, or any of their components may be implemented as one or more network-based services.
  • a compute cluster within a computing service may present computing services and/or other types of services that employ the distributed computing systems described herein to clients as network- based services.
  • a network-based service may be implemented by a software and/or hardware system designed to support interoperable machine-to-machine interaction over a network.
  • a network-based service may have an interface described in a machine-processable format, such as the Web Services Description Language (WSDL).
  • Other systems may interact with the network-based service in a manner prescribed by the description of the network-based service's interface.
  • the network-based service may define various operations that other systems may invoke, and may define a particular application programming interface (API) to which other systems may be expected to conform when requesting the various operations.
  • a network-based service may be requested or invoked through the use of a message that includes parameters and/or data associated with the network- based services request.
  • a message may be formatted according to a particular markup language such as Extensible Markup Language (XML), and/or may be encapsulated using a protocol such as Simple Object Access Protocol (SOAP).
  • a network-based services client may assemble a message including the request and convey the message to an addressable endpoint (e.g., a Uniform Resource Locator (URL)) corresponding to the network-based service, using an Internet-based application layer transfer protocol such as Hypertext Transfer Protocol (HTTP).
  • network-based services may be implemented using Representational State Transfer (“RESTful”) techniques rather than message-based techniques.
  • a network-based service implemented according to a RESTful technique may be invoked through parameters included within an HTTP method such as PUT, GET, or DELETE, rather than encapsulated within a SOAP message.
  • a system comprising:
  • a memory comprising program instructions that when executed by the at least one processor cause the at least one processor to implement a virtualization host for a plurality of virtual compute instances;
  • the virtualization host is configured to:
  • the dynamic rate control configured to:
  • the dynamic rate control is configured to: identify an initial delay for the individual virtual resource request queue based, at least in part, on a determined utilization of the physical computer resource for the virtual compute instance;
  • the virtualization host is implemented as part of a provider network that offers a network-based virtual computing service, wherein the virtualization host is multi-tenant such that at least one of the plurality of virtual compute instances implemented at the virtualization host is maintained for a client of the provider network that is different than another client of the provider network maintaining another one of the plurality of virtual compute instances at the virtualization host.
  • a method comprising:
  • dynamically determining the delay comprises: identifying an initial delay for the individual virtual resource request queue based, at least in part, on a determined utilization of the physical computer resource for the virtual compute instance;
  • determining the workload of the physical resource request queue comprises smoothing one or more workload metrics indicating workload of the physical resource request queue at different points in time.
  • a non-transitory, computer-readable storage medium storing program instructions that when executed by one or more computing devices cause the one or more computing devices to implement a virtualization host for a plurality of compute instances, wherein the virtualization host implements:
  • determining the workload of the physical resource request queue comprises smoothing one or more workload metrics indicating workload of the physical resource request queue at different points in time.
  • the plurality of compute nodes implement respective virtualization hosts for a plurality of virtual compute instances implemented as part of the provider network;
  • control plane for the provider network, the control plane configured to:
  • a resource credit pool comprising a plurality of resource credits available to replenish an individual resource credit balance for one or more virtual compute instances that are authorized to obtain resource credits from the resource credit pool of the plurality of virtual compute instances, wherein the plurality of resource credits are individually applicable to increase utilization of a physical computer resource for an individual one of the one or more authorized virtual compute instances at one of the respective virtualization hosts;
  • control plane is further configured to:
  • control plane is further configured to:
  • a method comprising:
  • a resource credit pool comprising a plurality of resource credits available to replenish an individual resource credit balance for one or more virtual compute instances implemented as part of a provider network, wherein the one or more virtual compute instances are authorized to obtain resource credits from the resource credit pool, wherein the plurality of resource credits are individually applicable to increase utilization of a physical computer resource at a virtualization host implementing one of the one or more virtual compute instances;
  • evaluating a virtualization host that implements the virtual compute instance based, at least in part, on the resource credit request comprises detecting a migration event for the virtualization host; in response to detecting the migration event for the virtualization host:
  • the provider network is implemented as a network-based virtual computing service, wherein the resource credit pool and the authorized one or more virtual compute instances are linked to a client account of the network-based virtual computing service, and wherein at least one of the one or more virtual compute instances selected to migrate is linked to a client account different than the client account.
  • the one or more computing devices together implement a control plane for the provider network, and wherein the method further comprises: performing, by at least one computing device:
  • the resource credit pool is one of a plurality of resource credit pools, wherein individual ones of the plurality of resource credit pools correspond to different types of physical computer resources, and wherein maintaining the resource credit pool, receiving the resource credit requests, determining the number of resource credits to add, sending the response, and updating the resource credit pool are performed for different ones of the plurality of resource credit pools.
  • the plurality of resource credit pools are linked to a user account for the provider network.
  • a non-transitory, computer-readable storage medium storing program instructions that when executed by one or more computing devices cause the one or more computing devices to implement:
  • a resource credit pool comprising a plurality of resource credits available to replenish an individual resource credit balance for one or more virtual compute instances implemented as part of a provider network, wherein the one or more virtual compute instances are authorized to obtain resource credits from the resource credit pool, wherein the plurality of resource credits are individually applicable to increase utilization of a physical computer resource at a virtualization host implementing one of the one or more virtual compute instances;
  • the resource credit pool is one of a plurality of resource credit pools, wherein individual ones of the plurality of resource credit pools correspond to different types of physical computer resources, and wherein maintaining the resource credit pool, receiving the resource credit requests, determining the number of resource credits to add, sending the response, and updating the resource credit pool are performed for different ones of the plurality of resource credit pools.
  • the provider network is implemented as a network-based virtual computing service, wherein the resource credit pool and the authorized one or more virtual compute instances are linked to a client account of the network-based virtual computing service, wherein another resource credit pool is maintained for one or more other virtual compute instances authorized to obtain resource credits from the other resource credit pool, wherein the other resource credit pool and the authorized one or more other virtual compute instances are linked to a different client account of the network-based virtual computing service, and wherein at least one of the other virtual compute instances is implemented on a same virtualization host as one of the one or more virtual compute instances.
  • a system comprising:
  • a memory comprising program instructions that when executed by the at least one processor cause the at least one processor to implement a virtualization host for a plurality of virtual compute instances;
  • the virtualization host configured to:
  • preempt the given vCPU to utilize the processor for a latency-dependent vCPU of a different virtual compute instance of the plurality of virtual compute instances, wherein the preemption pauses the utilization of the at least one processor for the given vCPU prior to completion of the scheduled timeslice for the given vCPU;
  • the virtualization host implements a credit-based scheduler for scheduling utilization of physical computer resources including the at least one processor among the plurality of virtual compute instances, wherein the virtualization host maintains a respective resource credit balance for the given vCPU and the latency-dependent vCPU, wherein the utilization of the at least one processor for the given vCPU and the latency-dependent vCPU is deducted from the respective resource credit balance, and wherein the virtualization host is further configured to: update the respective resource credit balance for the latency-dependent vCPU to deduct one or more resource credits corresponding to the preemption compensation for the given vCPU.
  • a method comprising:
  • determining the preemption compensation for the scheduled timeslice of the given vCPU is based, at least in part, on an amount of time that the latency-dependent vCPU utilized the CPU.
  • a non-transitory, computer-readable storage medium storing program instructions that when executed by one or more computing devices cause the one or more computing devices to implement a virtualization host for a plurality of compute instances, wherein the virtualization host implements:
  • determining the preemption compensation for the scheduled timeslice of the given vCPU is based, at least in part, on a reduced throughput as a result of the preemption of the given vCPU.
  • non-transitory, computer-readable storage medium of clause 54 wherein the virtualization host implements a credit-based scheduler for scheduling utilization of physical computer resources including the CPU among the plurality of virtual compute instances, wherein the virtualization host maintains a respective resource credit balance for the given vCPU and the latency-dependent vCPU, and wherein the utilization of the CPU for the given vCPU and the latency-dependent vCPU is deducted from the respective resource credit balances.
  • non-transitory, computer-readable storage medium of clause 54 wherein the virtualization host is implemented as part of a provider network that offers a network-based virtual computing service, wherein the virtualization host is multi-tenant such that at least one of the plurality of virtual compute instances implemented at the virtualization host is maintained for a client of the provider network that is different than another client of the provider network maintaining another one of the plurality of virtual compute instances at the virtualization host.

Abstract

A virtualization host may implement dynamic virtual resource request rate controls for physical resources. Individual virtual resource request queues may be maintained for different virtual compute instances implemented at a virtualization host for a particular physical computer resource. After placing a work request from one of the individual virtual resource request queues into a physical resource request queue to be performed at the physical computer resource, a delay may be dynamically determined based, at least in part, on the workload of the physical resource request queue. After imposing the delay, a next work request from the individual virtual resource request queue may be placed into the physical resource request queue. In at least some embodiments, the dynamically determined delay may include a randomly added delay.

Description

DYNAMIC VIRTUAL RESOURCE REQUEST RATE CONTROL FOR UTILIZING
PHYSICAL RESOURCES
BACKGROUND
[0001] The advent of virtualization technologies for commodity hardware has provided benefits with respect to managing large-scale computing resources for many customers with diverse needs, allowing various computing resources to be efficiently and securely shared by multiple customers. For example, virtualization technologies may allow a single physical computing machine to be shared among multiple users by providing each user with one or more virtual machines hosted by the single physical computing machine, with each such virtual machine being a software simulation acting as a distinct logical computing system that provides users with the illusion that they are the sole operators and administrators of a given hardware computing resource, while also providing application isolation and security among the various virtual machines. As another example, virtualization technologies may allow data storage hardware to be shared among multiple users by providing each user with a virtualized data store which may be distributed across multiple data storage devices, with each such virtualized data store acting as a distinct logical data store that provides users with the illusion that they are the sole operators and administrators of the data storage resource.
[0002] Virtualization technologies may be leveraged to create many different types of services or perform different functions for client systems or devices. For example, virtual machines may be used to implement a network-based service for external customers, such as an e-commerce platform. Virtual machines may also be used to implement a service or tool for internal customers, such as an information technology (IT) service implemented as part of an internal network for a corporation. Utilizing these virtual resources efficiently, however, may require flexible utilization options for many different types of virtual resource workloads. In some environments multiple virtual machines may be hosted together on a single host, creating the possibility for contention and conflicts when utilizing different virtual computing resources that may rely upon the same physical computer resources.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] FIG. 1 is a block diagram illustrating a dynamic virtual resource request rate control for physical resources, according to some embodiments.
[0004] FIG. 2 is a block diagram illustrating a provider network that provides virtual compute instances for which dynamic virtual resource request rate controls are implemented, according to some embodiments.
[0005] FIG. 3 is a block diagram illustrating a virtualization host that implements dynamic virtual resource request rate controls for physical resources, according to some embodiments.
[0006] FIG. 4 is a block diagram illustrating a resource credit balance scheduler that implements dynamic virtual resource request rate controls, according to some embodiments.
[0007] FIG. 5 is a high-level flowchart illustrating various methods and techniques for implementing dynamic virtual resource request rate control for physical resources, according to some embodiments.
[0008] FIG. 6 is a high-level flowchart illustrating various methods and techniques for determining a delay for a dynamic resource rate control, according to some embodiments.
[0009] FIG. 7 is a diagram illustrating a resource credit pool for replenishing resource credit balances for virtual compute instances, according to some embodiments.
[0010] FIG. 8 is a block diagram illustrating a provider network that provides resource credit pools for replenishing resource credit balances of virtual compute instances, according to some embodiments.
[0011] FIG. 9 is a block diagram illustrating a virtualization host that implements resource credits for scheduling virtual computer resources, according to some embodiments.
[0012] FIG. 10 illustrates interactions between a client and a provider network that implements resource credit pools for replenishing instance resource credit balances, according to some embodiments.
[0013] FIGS. 11A and 11B are block diagrams illustrating virtual compute instance migrations as part of replenishing instance resource credit balances from a resource credit pool, according to some embodiments.
[0014] FIG. 12 is a high-level flowchart illustrating various methods and techniques for implementing resource credit pools for replenishing resource credit balances of virtual compute instances, according to some embodiments.
[0015] FIG. 13 is a high-level flowchart illustrating various methods and techniques for migrating instances in a provider network as part of replenishing instance resource credit balances from a resource credit pool, according to some embodiments.
[0016] FIG. 14 is a high-level flowchart illustrating various methods and techniques for replenishing a resource credit pool, according to some embodiments.
[0017] FIG. 15 is a high-level flowchart illustrating various methods and techniques for requesting resource credits from a resource credit pool for a particular instance, according to some embodiments.
[0018] FIG. 16 is a timeline illustrating variable timeslices for processing latency-dependent workloads at a virtualization host, according to some embodiments.
[0019] FIG. 17 is a high-level flowchart illustrating various methods and techniques for implementing variable timeslices for processing latency-dependent workloads, according to some embodiments.
[0020] FIG. 18 is a high-level flowchart illustrating various methods and techniques for updating resource credit balances for virtual compute instances for providing preemption compensations, according to some embodiments.
[0021] FIG. 19 is a block diagram illustrating an example computing system, according to some embodiments.
[0022] While embodiments are described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that the embodiments are not limited to the embodiments or drawings described. It should be understood, that the drawings and detailed description thereto are not intended to limit embodiments to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words "include", "including", and "includes" mean including, but not limited to.
DETAILED DESCRIPTION
[0023] The systems and methods described herein may implement dynamic virtual resource request rate control for physical resources, according to some embodiments. The systems and methods described herein may implement resource credit pools for replenishing individual resource credit balances of virtual compute instances, according to some embodiments. The systems and methods described herein may implement variable timeslices for latency-dependent workloads at a virtualization host, according to some embodiments.
[0024] Virtualization hosts may provide virtualized devices or resources as part of implementing virtual compute instances. These virtualized devices may provide a virtual compute instance with access to an underlying physical resource corresponding to the virtual resource. For example, a virtual central processing unit (vCPU) may be implemented for a compute instance, which can in turn be utilized to access a physical central processing unit (CPU). Work requests may be submitted to individual virtual resource queues, which may correspond to a particular compute instance, from which they are then placed into a common physical resource queue for the physical computer resource performing the work request. Multiple different physical computer resources may have different resource request queues and corresponding individual virtual resource requests queues for compute instances that utilize the different physical computer resources.
[0025] As differing virtual compute instances may perform different tasks or functions, the utilization of underlying physical computer resources may differ as well. Some instance workloads may be throughput sensitive, submitting a high volume of work requests to utilize physical computer resources, in various embodiments. Other instance workloads may be latency sensitive, submitting smaller numbers of work requests to utilize physical computer resources that may be dependent upon a response from the physical computer resources to continue performing, such as sending out requests via a network and receiving responses via the network. Oftentimes, the smaller number of latency sensitive work requests may be blocked or forced to wait behind large numbers of work requests submitted by throughput sensitive instance workloads, increasing latency for the latency sensitive work requests (e.g., if the work requests are processed as they are received for the underlying physical computer resource, then instances that submit a large number of requests may force an instance that submits a single request to wait until the larger number of requests have been performed). In various embodiments, dynamic virtual resource request rate controls for physical computer resources may be implemented to provide statistical fair-sharing among different virtual compute instances utilizing the same physical computer resource, without maintaining large in-memory data structures for scheduling or ordering work requests for submission to the physical resource request queue. Moreover, dynamic virtual resource request rate controls may provide consistent performance for performing individual work requests, so that a physical resource request queue for an underlying physical computer resource may not be overloaded with work requests.
[0026] FIG. 1 is a block diagram illustrating a dynamic virtual resource request rate control for physical resources, according to some embodiments. A virtualization host, such as virtualization hosts 234 and 310 described below with regard to FIGS. 2 and 3 may implement multiple virtual compute instances, such as virtual compute instances 102, 104, 106 and 108. Virtual compute instances may utilize virtual devices or other interfaces which may submit work requests 110 for a physical resource to an individual instance request queue for that resource, such as instance request queues 112, 114, 116, and 118. Dynamic rate controls 122, 124, 126, and 128 may place work requests from instance request queues into physical resource request queue 150 in order to ultimately be removed from physical resource request queue 150 and performed by the underlying physical computer resource.
[0027] Dynamic rate controls may, in various embodiments, impose delays between placing work requests from an instance request queue into physical resource request queue 150. Delays between work requests may be dynamically determined based on the workload of physical resource request queue 150 (e.g., based on the number of work requests in physical resource request queue 150). For example, workload metrics indicating the number of requests in queue 150 at particular points in time may be reported back (as illustrated by the loop back from queue 150) to physical resource workload module 140 which may determine a workload value or indicator, in some embodiments. The workload value or indicator may be provided (synchronously or asynchronously) to dynamic rate controls 122, 124, 126, and 128 for determining the delay between requests. In at least some embodiments, a random delay may be determined between work requests. The random delay may be added to an initial or baseline delay, in some embodiments, based on a probability determined using the workload value or indicator. Introducing random delays may prevent congestion in physical resource request queue 150 due to synchronized submissions of work requests (e.g., troops marching in time problem). FIG. 6, discussed below, provides further examples of adding random delays as part of dynamically determining a delay between work requests.
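As one possible way to turn the reported queue-depth metrics into a single workload value, an exponentially weighted moving average could be used; the following sketch and its smoothing factor are illustrative assumptions rather than the scheme required by the embodiments.

```python
def smooth_workload(queue_depth_samples, alpha=0.3):
    """Exponentially weighted moving average over reported physical resource
    request queue depths, producing a single workload value. The smoothing
    factor alpha is an assumed tunable.
    """
    value = 0.0
    for depth in queue_depth_samples:
        value = alpha * depth + (1.0 - alpha) * value
    return value

# Queue depths reported at successive points in time; the spike to 40 is damped.
print(round(smooth_workload([4, 12, 3, 40, 8]), 2))  # -> 12.76
```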
[0028] In various embodiments, a delay may be determined for each instance request queue according to the utilization of the underlying physical resource allotted to the instance. For example, instance resource utilization 130 may provide indicators of the allocated, purchased, or otherwise assigned utilization of the underlying physical computer resource to dynamic rate controls 122, 124, 126, and 128, which may identify an initial delay to provide in between work requests. The delay based on utilization may be provided between work requests, in some embodiments, whether or not a random delay is added to the delay. The delay for an instance request queue may be dynamic (changing between individual work requests or multiple work requests), as utilization allotted to a virtual compute instance may change. For example, in some embodiments, resource credit balances may be used to determine utilization of physical computer resources, as discussed below with regard to FIGS. 2 - 4. As a resource credit balance is depleted, utilization of the physical computer resource may decrease. Delays determined based on instance resource utilization may allow for sharing of the underlying physical computer resource among different instances, even those instances with different types of workloads (e.g., latency sensitive vs. throughput sensitive), such that an instance with higher or lower utilization than another instance may have work requests submitted during the delay of the other instance. Initial delays may be identified so that the number of work requests in-flight or placed in physical resource request queue 150 does not exceed the allotted utilization.
[0029] Delays between work requests may also be determined to ensure that work requests are not forced to wait out of proportion with respect to the number of requests in a respective instance request queue. For example, instance request queues 112 and 116 have more work requests to submit than instance request queues 114 and 118. Delays between work requests may be determined so that work requests for an instance with fewer work requests may be submitted during the delay between requests of an instance with a greater number of work requests queued. For example, after submitting a first work request, dynamic rate control 122 may delay another work request from instance request queue 112 for an amount of time so that a work request from instance request queue 114, a work request from instance request queue 116, and a work request from instance request queue 118 may be submitted.
[0030] Imposing dynamic delays between work requests from individual instance request queues based, at least in part, on workload of physical resource request queue 150 may reduce or eliminate congestion at physical resource request queue 150. As the workload of physical resource request queue 150 increases, more delays may be added or increased between work request submissions, throttling back the number of work requests placed in physical resource request queue 150. Similarly, if the workload of physical resource request queue 150 decreases, fewer delays may be added or delays may be decreased between work request submissions, increasing the number of work requests placed in physical resource request queue 150.
[0031] Please note that previous descriptions are not intended to be limiting, but are merely provided as an example of providing dynamic virtual resource request rate control for physical resources. The number and/or arrangement of different components, modules, or requests may all be different. Multiple physical computer resources, as illustrated below in FIG. 4, may be implemented, in at least some embodiments.
[0032] This specification next includes a general description of a provider network, which may implement dynamic virtual resource request rate controls for physical resources. Then various examples of a provider network are discussed, including different components/modules, or arrangements of components/modules that may be employed as part of the provider network. A number of different methods and techniques to implement dynamic virtual resource request rate controls for physical resources at a virtualization host are then discussed, some of which are illustrated in accompanying flowcharts. Various examples are provided throughout the specification. [0033] Different clients implementing virtual computing resources have different resource demands. For example, some clients' workloads are not predictable and may not utilize fixed resources efficiently. Virtual compute instances implementing resource credits for scheduling virtual computing resources may provide dynamic utilization of resources for flexible high performance, without wasting unutilized fixed resources. Resource credits may be accumulated for individual virtual compute instances and maintained as part of an individual resource credit balance. When a virtual compute instance needs to perform work at high performance, the resource credits may be applied to the work, effectively providing full utilization of underlying physical resources for the duration of the resource credits. When a virtual compute instance is using less than its share of resources (e.g., little or no work is being performed), credits may be accumulated and used for a subsequent task. Resources may, in various embodiments, be any virtualized computer resource that is implemented or performed by a managed physical computer resource, including, but not limited to, processing resources, communication or networking resources, and storage resources.
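As a non-limiting illustration of the resource credit balance concept described above, the following Python sketch (the class name ResourceCreditBalance and its methods are hypothetical) shows how credits might accumulate at a fixed rate while an instance is idle and be drawn down when work is performed at high utilization:

    class ResourceCreditBalance:
        # Minimal sketch of a per-instance resource credit balance; the units
        # and rates are illustrative assumptions only.
        def __init__(self, accumulation_rate_per_hour, initial_credits=0.0, limit=None):
            self.rate = accumulation_rate_per_hour
            self.balance = float(initial_credits)
            self.limit = limit  # optional cap on accumulated credits

        def accrue(self, hours_elapsed):
            # Credits accumulate at the fixed rate, up to any configured limit.
            self.balance += self.rate * hours_elapsed
            if self.limit is not None:
                self.balance = min(self.balance, self.limit)

        def consume(self, credits_needed):
            # Apply credits to a burst of work; return the credits actually applied.
            applied = min(self.balance, credits_needed)
            self.balance -= applied
            return applied

    # Example: an instance accruing 6 credits per hour sits idle for two hours,
    # then spends credits on a burst of work.
    balance = ResourceCreditBalance(accumulation_rate_per_hour=6)
    balance.accrue(hours_elapsed=2)      # balance is now 12 credits
    balance.consume(credits_needed=5)    # balance drops to 7 credits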
[0034] FIG. 2 is a block diagram illustrating a provider network that provides virtual compute instances for which variable timeslices for processing latency-dependent workloads are implemented, according to some embodiments. Provider network 200 may be set up by an entity such as a company or a public sector organization to provide one or more services (such as various types of cloud-based computing or storage) accessible via the Internet and/or other networks to clients 202. Provider network 200 may include numerous data centers hosting various resource pools, such as collections of physical and/or virtualized computer servers, storage devices, networking equipment and the like, needed to implement and distribute the infrastructure and services offered by the provider network 200. In some embodiments, provider network 200 may provide computing resources. These computing resources may in some embodiments be offered to clients in units called "instances" 234, such as virtual compute instances.
[0035] In various embodiments, provider network 200 may implement a control plane 210 in order to manage the computing resource offerings provided to clients 202 by provider network 200. Control plane 210 may implement various different components to manage the computing resource offerings. Control plane 210 may be implemented across a variety of servers, nodes, or other computing systems or devices (such as computing system 2000 described below with regard to FIG. 19). It is noted that where one or more instances of a given component may exist, reference to that component herein may be made in either the singular or the plural. However, usage of either form is not intended to preclude the other. [0036] In at least some embodiments, control plane 210 may implement interface 212. Interface 212 may be configured to process incoming requests received via network 260 and direct them to the appropriate component for further processing. In at least some embodiments, interface 212 may be a network-based interface and may be implemented as a graphical interface (e.g., as part of an administration control panel or web site) and/or as a programmatic interface (e.g., handling various Application Programming Interface (API) commands). In various embodiments, interface 212 may be implemented as part of a front end module or component dispatching requests to the various other components, such as resource management 214, reservation management 216, resource monitoring 218, and billing 220. Clients 202, in various embodiments, may not directly provision, launch or configure resources but may send requests to control plane 210 such that the illustrated components (or other components, functions or services not illustrated) may perform the requested actions.
[0037] Control plane 210 may implement resource management module 214 to manage the access to, capacity of, mappings to, and other control or direction of computing resources offered by provider network. In at least some embodiments, resource management module 214 may provide both a direct sell and 3rd party resell market for capacity reservations (e.g., reserved compute instances). For example, resource management module 214 may allow clients 202 via interface 212 to learn about, select, purchase access to, and/or reserve capacity for computing resources, either from an initial sale marketplace or a resale marketplace, via a web page or via an API. For example, resource management component may, via interface 212, provide listings of different available compute instance types, each with a different credit accumulation rate. Additionally, in some embodiments, resource management module 214 may be configured to offer credits for purchase (in addition to credits provided via the credit accumulation rate for an instance type) for a specified purchase amount or scheme (e.g., lump sum, additional periodic payments, etc.). For example, resource management module 214 may be configured to receive a credit purchase request (e.g., an API request) and credit the virtual instance balance with the purchased credits. Similarly, resource management module 214 may be configured to handle a request to increase a credit accumulation rate for a particular instance. Resource management 214 may also offer and/or implement a flexible set of resource reservation, control and access interfaces for clients 202 via interface 212. For example resource management module 214 may provide credentials or permissions to clients 202 such that compute instance control operations/interactions between clients and in-use computing resources may be performed.
[0038] In various embodiments, reservation management module 216 may be configured to handle the various pricing schemes of instances 234 (at least for the initial sale marketplace) in various embodiments. For example network-based virtual computing service 200 may support several different purchasing modes (which may also be referred to herein as reservation modes) in some embodiments: for example, term reservations (i.e. reserved compute instances), on-demand resource allocation, or spot-price-based resource allocation. Using the long-term reservation mode, a client may make a low, one-time, upfront payment for a compute instance or other computing resource, reserve it for a specified duration such as a one or three year term, and pay a low hourly rate for the instance; the client would be assured of having the reserved instance available for the term of the reservation. Using on-demand mode, a client could pay for capacity by the hour (or some appropriate time unit), without any long-term commitments or upfront payments. In the spot-price mode, a client could specify the maximum price per unit time that it is willing to pay for a particular type of compute instance or other computing resource, and if the client's maximum price exceeded a dynamic spot price determined at least in part by supply and demand, that type of resource would be provided to the client.
[0039] During periods when the supply of the requested resource type exceeded the demand, the spot price may become significantly lower than the price for on-demand mode. In some implementations, if the spot price increases beyond the maximum bid specified by a client, a resource allocation may be interrupted - i.e., a resource instance that was previously allocated to the client may be reclaimed by the resource management module 214 and may be allocated to some other client that is willing to pay a higher price. Resource capacity reservations may also update control plane data store 222 to reflect changes in ownership, client use, client accounts, or other resource information.
[0040] In various embodiments, control plane 210 may implement resource monitoring module 218. Resource monitoring module 218 may track the consumption of various computing resources (e.g., resource credit balances, resource credit consumption) for different virtual computer resources, clients, user accounts, and/or specific instances. In at least some embodiments, resource monitoring module 218 may implement various administrative actions to stop, heal, manage, or otherwise respond to various different scenarios in the fleet of virtualization hosts 230 and instances 234. Resource monitoring module 218 may also provide access to various metric data for client(s) 202 as well as manage client configured alarms.
[0041] In various embodiments, control plane 210 may implement billing management module 220. Billing management module 220 may be configured to detect billing events (e.g., specific dates, times, usages, requests for bill, or any other cause to generate a bill for a particular user account or payment account linked to user accounts). In response to detecting the billing event, billing management module may be configured to generate a bill for a user account or payment account linked to user accounts.
[0042] A virtual compute instance 234 may, for example, comprise one or more servers with a specified computational capacity (which may be specified by indicating the type and number of CPUs, the main memory size, and so on) and a specified software stack (e.g., a particular version of an operating system, which may in turn run on top of a hypervisor). A number of different types of computing devices may be used singly or in combination to implement the compute instances 234 of network-based virtual computing service 200 in different embodiments, including general purpose or special purpose computer servers, storage devices, network devices and the like. In some embodiments, instance clients 202 or any other user may be configured (and/or authorized) to direct network traffic to a compute instance 234.
[0043] Compute instances 234 may operate or implement a variety of different platforms, such as application server instances, Java™ virtual machines (JVMs), general purpose or special-purpose operating systems, platforms that support various interpreted or compiled programming languages such as Ruby, Perl, Python, C, C++ and the like, or high-performance computing platforms suitable for performing client 202 applications, without for example requiring the client 202 to access an instance 234. There may be various different types of compute instances. In at least some embodiments, there may be compute instances that implement resource credit balances for scheduling virtual computer resource operations. This type of instance may perform based on resource credits, where resource credits represent time an instance can spend on a physical resource doing work (e.g., processing time on a physical CPU, time utilizing a network communication channel, etc.). The more resource credits an instance has for computer resources, the more time it may spend on the physical resources executing work (increasing performance). Resource credits may be provided at launch of an instance, and may be defined as utilization time (e.g., CPU time, such as CPU-minutes), which may represent the time an instance's virtual resources can spend on underlying physical resources performing a task.
[0044] In various embodiments, resource credits may represent time or utilization of resources in excess of a baseline utilization guarantee. For example, a compute instance may have a baseline utilization guarantee of 10% for a resource, and thus resource credits may increase the utilization for the resource above 10%. Even if no resource credits remain, utilization may still be granted to the compute instance at the 10% baseline. Credit consumption may only happen when the instance needs the physical resources to perform the work above the baseline performance. In some embodiments credits may be refreshed or accumulated to the resource credit balance whether or not a compute instance submits work requests that consume the baseline utilization guarantee of the resource.
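One possible way to reason about the baseline utilization guarantee discussed above is sketched below in Python; the function granted_utilization and the assumption that one resource credit buys one interval of full utilization are illustrative only and are not required by the techniques described herein:

    def granted_utilization(baseline_share, credit_balance, requested_share):
        # baseline_share: guaranteed fraction of the resource (e.g., 0.10 for 10%).
        # credit_balance: resource credits currently available (illustrative units).
        # requested_share: fraction of the resource the instance is asking for.
        # Returns (share granted this interval, credits consumed).
        if requested_share <= baseline_share:
            # Work within the baseline never consumes credits.
            return requested_share, 0.0
        # Only utilization above the baseline draws on the credit balance.
        burst = requested_share - baseline_share
        credits_used = min(burst, max(credit_balance, 0.0))
        return baseline_share + credits_used, credits_used

    # Example: with a 10% baseline and 2.5 credits available, a request for 60%
    # of the resource is granted in full and consumes roughly 0.5 credits; with
    # no credits remaining, the same request is limited to the 10% baseline.
    full_grant = granted_utilization(0.10, 2.5, 0.60)     # ~ (0.60, 0.50)
    baseline_only = granted_utilization(0.10, 0.0, 0.60)  # (0.10, 0.0)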
[0045] Different types of compute instances implementing resource credits for scheduling computer resources may be offered. Different compute instances may have a particular number of virtual CPU cores, memory, cache, storage, networking, as well as any other performance characteristic. Configurations of compute instances may also include their location in a particular data center, availability zone, geographic location, etc., and (in the case of reserved compute instances) reservation term length. Different compute instances may have different resource credit accumulation rates for different virtual resources, which may be a number of resource credits that accumulate to the current balance of resource credits maintained for a compute instance. For example, one type of compute instance may accumulate 6 credits per hour for one virtual computer resource, while another type of compute instance may accumulate 24 credits per hour for the same type of virtual computer resource, in some embodiments. In another example, the resource credit accumulation rate for one resource (e.g., vCPU) may be different than the resource credit accumulation rate for a different virtual computer resource (e.g., networking channel) for the same virtual compute instance. In some embodiments, multiple different resource credit balances may be maintained for a virtual compute instance for the multiple different virtual computer resources used by the virtual compute instances. A baseline performance guarantee may also be implemented for each of the virtual computer resources, which may be different for each respective virtual computer resource, as well as for the different instance types.
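To illustrate the idea of different accumulation rates for different virtual computer resources of the same instance, a short Python sketch follows; the instance type names, resource names, and rates in ACCUMULATION_RATES are hypothetical values chosen only for this example:

    # Hypothetical credits-per-hour accumulation rates, per instance type and
    # per virtual computer resource.
    ACCUMULATION_RATES = {
        "burst.small":  {"vCPU": 6,  "network": 12},
        "burst.medium": {"vCPU": 24, "network": 12},
    }

    def accrue_all(balances, instance_type, hours_elapsed):
        # Accrue credits into each of the separate resource credit balances
        # maintained for a single virtual compute instance.
        for resource, rate in ACCUMULATION_RATES[instance_type].items():
            balances[resource] = balances.get(resource, 0.0) + rate * hours_elapsed
        return balances

    # After one hour, a "burst.small" instance has 6 vCPU credits and 12
    # networking credits in its respective balances.
    print(accrue_all({}, "burst.small", hours_elapsed=1))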
[0046] Baseline performance guarantees may be included along with the resource credit accumulation rates, in some embodiments. Thus, in one example, an instance type may include a specific resource credit accumulation rate and guaranteed baseline performance for processing, and another specific resource credit accumulation rate and guaranteed baseline performance rate for networking channels. In this way, provider network 200 may offer many different types of instances with different combinations of resource credit accumulation rates and baseline guarantees for different virtual computer resources. These different configurations may be priced differently, according to the resource credit accumulation rates and baseline performance rates, in addition to the various physical and/or virtual capabilities. In some embodiments, a virtual compute instance may be reserved and/or utilized for an hourly price, while a long-term reserved instance configuration may utilize a different pricing scheme but still include the credit accumulation rates and baseline performance guarantees. [0047] As illustrated in FIG. 2, a virtualization host 230, such as virtualization hosts 230a,
230b, through 230n, may implement and/or manage multiple compute instances 234, in some embodiments, and may be one or more computing devices, such as computing system 2000 described below with regard to FIG. 19. A virtualization host 230 may include a virtualization management module 232, such as virtualization management modules 232a, 232b through 232n, capable of instantiating and managing a number of different client-accessible virtual machines or compute instances 234. The virtualization management module 232 may include, for example, a hypervisor and an administrative instance of an operating system, which may be termed a
"domain-zero" or "domO" operating system in some implementations. The domO operating system may not be accessible by clients on whose behalf the compute instances 234 run, but may instead be responsible for various administrative or control-plane operations of the network provider, including handling the network traffic directed to or from the compute instances 234. Virtualization management module 232 may be configured to implement dynamic virtual resource request rate controls for physical resources utilized by different instances 234.
[0048] Client(s) 202 may encompass any type of client configurable to submit requests to provider network 200. For example, a given client 202 may include a suitable version of a web browser, or may include a plug-in module or other type of code module configured to execute as an extension to or within an execution environment provided by a web browser. Alternatively, a client 202 may encompass an application such as a dashboard application (or user interface thereof), a media application, an office application or any other application that may make use of compute instances 234 to perform various operations. In some embodiments, such an application may include sufficient protocol support (e.g., for a suitable version of Hypertext Transfer Protocol (HTTP)) for generating and processing network-based services requests without necessarily implementing full browser support for all types of network-based data. In some embodiments, clients 202 may be configured to generate network-based services requests according to a Representational State Transfer (REST)-style network-based services architecture, a document- or message-based network-based services architecture, or another suitable network-based services architecture. In some embodiments, a client 202 (e.g., a computational client) may be configured to provide access to a compute instance 234 in a manner that is transparent to applications implemented on the client 202 utilizing computational resources provided by the compute instance 234.
[0049] Clients 202 may convey network-based services requests to network-based virtual computing service 200 via network 260. In various embodiments, network 260 may encompass any suitable combination of networking hardware and protocols necessary to establish network-based communications between clients 202 and provider network 200. For example, a network 260 may generally encompass the various telecommunications networks and service providers that collectively implement the Internet. A network 260 may also include private networks such as local area networks (LANs) or wide area networks (WANs) as well as public or private wireless networks. For example, both a given client 202 and network-based virtual computing service 200 may be respectively provisioned within enterprises having their own internal networks. In such an embodiment, a network 260 may include the hardware (e.g., modems, routers, switches, load balancers, proxy servers, etc.) and software (e.g., protocol stacks, accounting software, firewall/security software, etc.) necessary to establish a networking link between a given client 202 and the Internet as well as between the Internet and provider network 200. It is noted that in some embodiments, clients 202 may communicate with provider network 200 using a private network rather than the public Internet.
[0050] FIG. 3 is a block diagram illustrating a virtualization host that implements dynamic virtual resource request rate controls for physical resources, according to some embodiments. As noted above in FIG. 2, virtualization hosts may serve as a host platform for one or more virtual compute instances. These virtual compute instances may utilize virtualized hardware interfaces to perform various tasks, functions, services and/or applications. As part of performing these tasks, virtual compute instances may utilize virtualized computer resources (e.g., virtual central processing unit(s) (vCPU(s)) which may act as the virtual proxy for the physical CPU(s)) implemented at the virtualization host in order to perform work on respective physical computer resources for the respective compute instance.
[0051] FIG. 3 illustrates virtualization host 310. Virtualization host 310 may host compute instances 330a, 330b, 330c, through 330n. In at least some embodiments, the compute instances 330 may be the same type of compute instance. In FIG. 3, compute instances 330 are compute instances that implement resource credits for scheduling virtual computer resources. Virtualization host 310 may also implement virtualization management module 320, which may handle the various interfaces between the virtual compute instances 330 and physical computing resource(s) 340 (e.g., various hardware components, processors, I/O devices, networking devices, etc.).
[0052] In FIG. 3, virtualization management module 320 may implement resource credit balance scheduler 324. Resource credit balance scheduler 324 may act as a meta-scheduler, managing, tracking, applying, deducting, and/or otherwise handling all resource credit balances for each of compute instances 330. In various embodiments, resource credit balance scheduler 324 may be configured to receive virtual compute resource work requests 332 from compute instances. Each work request 332 may be directed toward the virtual computer resource corresponding to the compute instance that submitted the work. For each request 332, resource credit balance scheduler 324 may be configured to determine a current resource credit balance for the requesting compute instance 330, and generate scheduling instructions to apply resource credits when performing the work request. In some embodiments, resource credit balance scheduler 324 may perform or direct the performance of the scheduling instructions, directing or sending the work request to the underlying physical computing resources 340 to be performed. For example, in some embodiments different hardware queues may be implemented and resource credit balance scheduler 324 may be used to place tasks for performing work requests in the queues according to the applied resource credits, such as described below with regard to FIG. 4. However, in some embodiments the resource scheduling instructions may be sent 334 to virtual compute resource scheduler 322, which may be a scheduler for the physical resources 340, such as CPU(s), implemented at virtualization host 310. Resource credit balance scheduler 324 and/or virtual compute resource scheduler 322 may be configured to perform the various techniques described below with regard to FIGS. 5 - 6, in order to provide dynamic resource rate controls to schedule/submit work requests for instances 330, apply resource credits, deduct resource credits, and/or otherwise ensure that work requests are performed according to the applied resource credits.
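The following non-limiting Python sketch suggests one way a meta-scheduler in the spirit of resource credit balance scheduler 324 might turn an incoming work request into a scheduling instruction; the class and field names are hypothetical, and the assumption that each work request consumes at most one credit is made only to keep the example short:

    from collections import namedtuple

    SchedulingInstruction = namedtuple(
        "SchedulingInstruction",
        ["instance_id", "resource", "credits_applied", "work_request"],
    )

    class CreditBalanceScheduler:
        # Sketch of a meta-scheduler that applies resource credits to work
        # requests before handing them to the underlying resource scheduler.
        def __init__(self, balances, credits_per_request=1.0):
            self.balances = balances  # {(instance_id, resource): credit balance}
            self.credits_per_request = credits_per_request

        def handle(self, instance_id, resource, work_request):
            key = (instance_id, resource)
            available = self.balances.get(key, 0.0)
            applied = min(available, self.credits_per_request)
            self.balances[key] = available - applied
            # credits_applied == 0 indicates baseline-only service; the
            # instruction would be forwarded to the physical resource
            # scheduler (e.g., a CPU scheduler) for execution.
            return SchedulingInstruction(instance_id, resource, applied, work_request)

    scheduler = CreditBalanceScheduler({("instance-a", "vCPU"): 3.0})
    instruction = scheduler.handle("instance-a", "vCPU", work_request="run-task")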
[0053] In some embodiments, in response to receiving the scheduling instructions, virtual compute resource scheduler 322 may provide physical scheduling instructions for work requests 336 to physical computing resources, such as physical CPU(s), in various embodiments. In at least some embodiments, virtual compute resource scheduler 322 may be a credit-based scheduler for one or more CPUs.
[0054] Resource credit balance scheduler 324 may also report credit balance and usage metrics 362 to monitoring agent 326, which may in turn report these metrics along with any other host metrics 364 (health information, etc.) to resource monitoring module 218.
[0055] FIG. 4 is a block diagram illustrating a resource credit balance scheduler that implements dynamic virtual resource request rate controls, according to some embodiments. As noted above, in various embodiments, resource credit balance scheduler 324 may implement dynamic virtual resource request rate controls for physical resources. Utilization of multiple different physical computer resources may be provided to compute instances 330. For example, resource A 400, resource B 410, and resource C 420 may represent processing, networking and/or storage physical computer resources (or any other physical computer resource). Different resource request queues may be implemented from which work requests may be pulled and performed. Resource A request queue 409 provides work requests to resource A 400. Similarly resource B request queue 419 and resource C request queue 429 provide requests to be pulled and performed for resource B 410 and resource C 420 respectively. In at least some embodiments, a resource request queue may be implemented as part of another scheduling component. For example, resource A 400 may be processing resources, and resource A request queue 409 may be implemented as part of a CPU scheduler that pulls requests from the request queue 409 for processing (similar to virtual compute resource scheduler 322 in FIG. 3).
[0056] Different dynamic request rate controls may be implemented for different resources, in some embodiments. For example, dynamic request rate controls 407 may be implemented for work requests for resource A 400, dynamic request rate controls 417 may be implemented for work requests for resource B 410, and dynamic request rate controls 427 may be implemented for work requests for resource C 420. Dynamic resource request controls may be configured to dynamically determine a delay to be imposed before placing a next resource request into a resource request queue for the physical computer resource. FIGS. 5 and 6 described below provide various example methods and techniques that dynamic rate controls may implement. For example, in at least some embodiments, a resource credit balance 403 of a particular instance (e.g., 330c) for resource A may be obtained to determine an initial delay between work requests (using the number of resource credits in the credit balance to identify an allotted utilization for instance 330c). The workload for resource A may also be obtained 405 and provided to the dynamic rate control. A probability for adding delay may be calculated using the workload for resource A, and depending on the probability calculated a delay may be randomly added or may not be added to the initial delay. The delay may then be imposed before the dynamic request rate control places another work request from the individual virtual resource A request queue 401 for instance 330c into resource A request queue 409. Similar techniques may be applied by dynamic request rate controls 417 and 427 for resources B 410 and C 420. A delay may be determined before placing new requests from individual virtual resource B request queues 411 and individual resource C request queues 421 utilizing resource B credit balances 413 and resource C credit balances 423 respectively. Resource B queue workload 415 and resource C queue workload 425 may also be used to dynamically determine the delay. Delays for individual virtual resource request queues may be performed contemporaneously, in various embodiments. Thus, dynamic request rate controls 407 may be individually determining delays for the respective individual virtual resource A request queue 401 from which they pull work requests.
[0057] The examples of implementing dynamic virtual resource request rate controls for physical resources discussed above with regard to FIGS. 2 - 4 have been given in regard to virtual computing resources offered by a provider network. Various other types or configurations of virtualization hosts or other virtualization platforms may implement these techniques, which may or may not be offered as part of a network-based service. For example, other scheduling techniques different than a credit-based scheduling technique may be implemented to determine utilization of a physical computer resource. FIG. 5 is a high-level flowchart illustrating various methods and techniques for implementing dynamic virtual resource request rate control for physical resources, according to some embodiments. These techniques may be implemented using various components of network-based virtual computing service as described above with regard to FIGS. 2 - 4 or other virtual computing resource hosts.
[0058] As indicated at 510, a work request for a virtual computer resource may be placed from an individual resource request queue maintained for a virtual compute instance into a physical resource request queue, in various embodiments. In response to placing the work request in the physical resource request queue, a delay may be dynamically determined based, at least in part, on a workload of the physical request queue, as indicated at 520. For example, if the number of work requests in the physical resource request queue is high, then a greater probability exists that a random delay may be imposed. The delay may also be determined so as to maintain a particular utilization of the physical computer resource for the virtual compute instance. If, for example, a virtual compute instance is allotted 500 input/output operations per second (IOPs), then the delay may be determined such that 500 I/O work requests may be placed into the physical resource request queue between delays. FIG. 6, discussed below, provides further examples of dynamically determining a delay. After the delay is imposed, as indicated by the positive exit from 530, a next work request from the individual virtual resource request queue may be placed into the physical resource request queue, as indicated at 540.
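As a worked illustration of the utilization-based delay mentioned above (for example, an allotment of 500 IOPs), the following short Python sketch computes the spacing between submissions; the function name is hypothetical:

    def initial_delay_seconds(allotted_ops_per_second):
        # Space submissions so the allotted utilization is not exceeded,
        # e.g., 500 IOPs -> 1/500 s = 2 ms between placements into the
        # physical resource request queue.
        if allotted_ops_per_second <= 0:
            raise ValueError("allotted utilization must be positive")
        return 1.0 / allotted_ops_per_second

    print(initial_delay_seconds(500))   # 0.002 seconds (2 ms)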
[0059] The techniques described above with regard to FIG. 5 may be implemented across multiple individual virtual resource request queues for different virtual compute instances submitting work requests to utilize the same physical computer resource. For example, the various delays determined between requests may be different for some or all of the different individual virtual resource request queues. During a delay for one individual virtual resource queue, another work request may be submitted for another individual virtual resource queue, in some embodiments. Random delays may be added to requests from different individual virtual resource request queues at different times. Delays based on allotted utilization for the physical computer resource (e.g., based on a resource credit balance) may create different delay times as well. The aggregate effect provided by inserting dynamic delays between placing work requests at each individual virtual resource request queue may be to provide a consistent throughput for performing the work requests at the physical computer resource. Thus, work requests submitted by a virtual compute instance that is latency sensitive, for example, may be provided with a consistent amount of time or latency to perform the work request. Moreover, delays may be determined to ensure that work requests submitted by a virtual compute instance that is throughput sensitive are performed according to an expected amount of throughput.
[0060] FIG. 6 is a high-level flowchart illustrating various methods and techniques for determining a delay for a dynamic resource rate control, according to some embodiments. As indicated at 610, an initial delay may be identified for an individual resource request queue of a virtual compute instance based, at least in part, on a determined utilization of a physical computer resource for the virtual compute instance. For instance, in some embodiments utilization of a physical computer resource may be evenly divided between the virtual compute instances implemented on a virtualization host. If, for example, 4 compute instances are implemented on a host and processing resources of the host are evenly divided, then each virtual compute instance may expect a 25% utilization of the processing resources. In some embodiments, the determined utilization between virtual compute instances may be different and/or change over time. For instance, as discussed above with regard to FIGS. 2 - 4, resource credits may be accrued and applied for a virtual compute instance to utilize a physical computer resource. Thus, a determined utilization of a physical computer resource for the virtual compute instance may be, in such embodiments, based on a resource credit balance for the virtual compute instance for the physical computer resource. The initial delay may be determined so as to ensure that the allotted utilization is not exceeded by a virtual compute instance. If, for instance, a virtual compute instance has a determined utilization for network traffic at 2000 packets per second, then the initial delay, if inserted between submitted traffic requests over the course of a second, would limit the number of requests to a maximum of 2000 packets.
[0061] As indicated at 620, a workload of the physical resource request queue for the physical computer resource may be determined, in various embodiments. Workload metrics for a physical resource request queue may be tracked indicating the number of requests in the queue at a point in time, for example. In some embodiments, the workload metrics may be smoothed to determine a workload. For instance, a weighted average may be taken of the workload metrics. In some embodiments, the same workload may be used for determining multiple different delays. For example, the workload for determining a first delay may be 100 requests, and the same workload of 100 requests may be used again to determine a subsequent delay. [0062] In at least some embodiments, a probability for adding a random delay may be calculated based, at least in part, on the workload of the physical resource request queue. For example, a probability calculation such as one determined when applying a Random Early Detection (RED) technique may be used to calculate the probability for the delay. Various different random number generation techniques, such as a uniform random variable technique or a geometric random variable technique, may be applied as part of calculating the probability. In general, the calculated probability may be proportional to the offered load divided by the available throughput at the physical resource request queue. As the workload of the physical resource request queue increases, the probability or likelihood that a delay may be randomly added increases.
[0063] As indicated at 640, whether a random delay is added to the initial delay is determined according to the calculated probability at 630. If, for example, the probability indicates that a random delay may be added for 1 out of every 10 work requests submitted, then the initial delay has a 1 in 10 chance of being increased with an additional delay. The amount of time added in the random delay may be a default amount of time, or may be determined to achieve a particular throughput or workload at the physical resource request queue, in some embodiments. Thus, as indicated at 650 and 660, either the random delay will be added to the initial delay or not added to the initial delay according to the determined probability.
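Putting the elements of FIG. 6 together, the following Python sketch shows one possible delay determination; the parameters queue_capacity and added_delay are assumptions introduced only for this example (queue_capacity is assumed to be a positive number), and the probability calculation is a simplified stand-in for a RED-style policy:

    import random

    def dynamic_delay(allotted_ops_per_second, queue_depth, queue_capacity,
                      added_delay=0.001, rng=random.random):
        # Element 610: initial delay keeps submissions within the instance's
        # allotted utilization of the physical computer resource.
        initial_delay = 1.0 / allotted_ops_per_second
        # Element 620: workload of the physical resource request queue
        # (smoothing, such as a weighted average, could be applied here).
        workload = min(queue_depth / float(queue_capacity), 1.0)
        # Element 630: probability of adding a random delay grows with the
        # workload (offered load relative to available throughput).
        add_probability = workload
        # Elements 640-660: randomly add the extra delay, or not.
        if rng() < add_probability:
            return initial_delay + added_delay
        return initial_delay

    # An instance allotted 500 IOPs facing a half-full queue of capacity 128:
    # roughly half of its submissions receive the extra millisecond of delay.
    print(dynamic_delay(500, queue_depth=64, queue_capacity=128))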
[0064] As noted above, a provider network may also implement resource credit pools for replenishing individual resource credit balances of virtual compute instances, according to some embodiments. Different clients implementing virtual computing resources have different resource demands. For example, some clients' workloads are not predictable and may not utilize fixed resources efficiently. Virtual compute instances implementing resource credits for scheduling virtual computing resources may provide dynamic utilization of resources, creating flexible high performance, without wasting unutilized fixed resources. Resource credits may be accumulated for individual virtual compute instances and maintained as part of an individual resource credit balance. When a virtual compute instance needs to perform work at high performance, the resource credits may be applied to the work, effectively providing full utilization of underlying physical resources for the duration of the resource credits. When a virtual compute instance is using less than its share of resources (e.g., little or no work is being performed), credits may be accumulated and used for a subsequent task. Resources may, in various embodiments, be any virtualized computer resource that is implemented or performed by a managed physical computer resource, including, but not limited to, processing resources, communication or networking resources, and storage resources. [0065] While scheduling utilization of physical computer resources according to individual resource credit balances may allow individual virtual compute instances to handle some bursts or large changes in instance workloads, the workload that may be directed to any one particular instance may be difficult to predict. If, for instance, a group of instances is used to provide some kind of service for which different instances may randomly experience burst workloads, the overall workload of many instances may be relatively low. Yet, a few instances may receive workloads that may even be in excess of the burst capacity handled by individual resource credit balances. Instead of trying to predict which particular instances may receive such high workloads, a resource credit pool may be implemented to provide additional resource credits to one or more instances in a group of virtual compute instances. The aggregate workload for a large group of instances may be more easily determined (based on various statistical techniques). Thus, the resource credit pool may be filled with sufficient resource credits to process the aggregate workload in a more cost-effective manner.
[0066] FIG. 7 is a block diagram illustrating a resource credit pool for replenishing resource credit balances for virtual compute instances, according to some embodiments. Provider network 700 may be a distributed system or service that provides virtual compute instances 720a, 720b, 720c, through 720n for use by clients of provider network 700. Each of these virtual compute instances 720 may be implemented on a virtualization host, which, as described above with regard to FIGS. 2 and 3, may provide a platform for executing a virtual compute instance. Physical computer resources implemented as part of a virtualization host may be shared among multiple virtual compute instances implemented on the same host. Credit-based scheduling may be implemented to determine the utilization of physical computer resources to perform work requests for the compute instances hosted thereon according to individual resource credit balances, such as balances 722a, 722b, 722c through 722n. For example, an individual balance of processing resource credits for a virtual compute instance may be applied to determine the utilization of a processing resource (e.g., central processing unit (CPU)) for the virtual compute instance. The greater the individual balance of resource credits 722, the higher the utilization of the underlying physical computer resource the virtual compute instance may receive to perform work requests. In at least some embodiments, virtual compute instances may be provisioned with an initial individual resource credit balance (e.g., 30 credits) which may be used immediately. Over time, the compute instance may accumulate more resource credits according to a fixed rate. In at least some embodiments, a limit may be implemented for accumulating resource credits according to the instance refill rate. This limit may be enforced by excluding accumulated resource credits after a period of time (e.g., 24 hours). When applied, a resource credit may provide full utilization of a resource for a particular time (e.g., a computer resource credit may equal 1 minute of full central processing unit (CPU) utilization, 30 seconds for a particular networking channel, or some other period of use that may be guaranteed), in some embodiments. Resource credits may be deducted from the resource credit balance when used.
[0067] Consuming resource credits, a virtual compute instance may utilize sufficient resources (e.g., CPU cores, network interface card functions, etc.) to obtain high performance when needed. However, to perform some work requests, the individual resource credit balance may be insufficient to complete the work requests at a high performance level. For example, if no resource credits are available when performing a work request, a baseline utilization guarantee may still be applied to perform the work request. A provider network may implement a resource credit pool 710, which may replenish resource credits 712 to individual resource credit balances 722. For example, resource credit requests may be made to the resource credit pool 710 to obtain additional resource credits when it may be determined that additional resource credits are needed to complete one or more work requests for a virtual compute instance. The utilization of underlying physical resources when credits are applied, such as when credits obtained from resource credit pool 710 are applied, may trigger migration events for some virtualization hosts (as described below with regard to FIGS. 5A, 5B, and 7), which may migrate virtual compute instances from one virtualization host to another in order to provide capacity to apply the additional resource credits for the virtual compute instance's work requests.
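A non-limiting Python sketch of the replenishment interaction described above follows; the class ResourceCreditPool, the function replenish_if_needed, and the quantities used are hypothetical and illustrate only one of many possible pool designs:

    class ResourceCreditPool:
        # Sketch of a shared credit pool that replenishes individual balances.
        def __init__(self, pool_credits):
            self.pool_credits = float(pool_credits)

        def request(self, credits_requested):
            # Grant up to the requested number of credits, drawing down the pool.
            granted = min(self.pool_credits, max(credits_requested, 0.0))
            self.pool_credits -= granted
            return granted

    def replenish_if_needed(pool, balances, instance_id, credits_needed):
        # If the instance's individual balance cannot cover pending work,
        # request the shortfall from the resource credit pool.
        shortfall = credits_needed - balances.get(instance_id, 0.0)
        if shortfall > 0:
            balances[instance_id] = balances.get(instance_id, 0.0) + pool.request(shortfall)
        return balances.get(instance_id, 0.0)

    # An instance holding 2 credits needs 10; it receives 8 more from the pool
    # (assuming the pool holds at least 8), leaving the pool drawn down by 8.
    pool = ResourceCreditPool(pool_credits=100)
    balances = {"instance-a": 2.0}
    print(replenish_if_needed(pool, balances, "instance-a", credits_needed=10))  # 10.0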
[0068] Different resource credit pools 710 may correspond to different types of physical computer resources. In some embodiments, virtual compute instances may be authorized to access multiple different resource credit pools corresponding to different physical computer resources. Resource credit pools may also be linked to a single user or payment account from which funds may be drawn to obtain additional resource credit(s) 702 to replenish the resource credit pool. Different replenishment policies for resource credit pool 710 may be implemented, providing automated or manually requested replenishment.
[0069] Please note that previous descriptions with regard to FIG. 7 are not intended to be limiting, but are merely provided as an example of a resource credit pool for replenishing individual resource credit balances of virtual compute instances. Accumulation rates, initial balances, and balance limits may all be different, as may be the various amounts in which resource credits may be used.
[0070] This specification next includes a general description of a provider network, which may implement resource credit pools for replenishing individual resource credit balances of virtual compute instances. Then various examples of a provider network are discussed, including different components/modules, or arrangements of components/module that may be employed as part of the provider network. A number of different methods and techniques to implement a resource credit pool for replenishing individual resource credit balances are then discussed, some of which are illustrated in accompanying flowcharts. Various examples are provided throughout the specification.
[0071] FIG. 8 is a block diagram illustrating a provider network that implements resource credit pools for replenishing individual resource credit balances of virtual compute instances, according to some embodiments. Provider network 800 may be a provider network like provider network 200 discussed above with regard to FIG. 2, and may be set up by an entity such as a company or a public sector organization to provide one or more services (such as various types of cloud-based computing or storage) accessible via the Internet and/or other networks to clients 802. Provider network 800 may include numerous data centers hosting various resource pools, such as collections of physical and/or virtualized computer servers, storage devices, networking equipment and the like, needed to implement and distribute the infrastructure and services offered by the provider network 800. In some embodiments, provider network 800 may provide computing resources. These computing resources may in some embodiments be offered to clients in units called "instances" 834, such as virtual compute instances. A provider network may, in some embodiments, be a network-based service providing virtual compute instances.
[0072] In various embodiments, provider network 800 may implement a control plane 810 in order to manage the computing resource offerings provided to clients 802 by provider network 800. Control plane 810 may implement various different components to manage the computing resource offerings. Control plane 810 may be implemented across a variety of servers, nodes, or other computing systems or devices (such as computing system 2000 described below with regard to FIG. 19). It is noted that where one or more instances of a given component may exist, reference to that component herein may be made in either the singular or the plural. However, usage of either form is not intended to preclude the other.
[0073] In at least some embodiments, control plane 810 may implement interface 812. Interface 812 may be configured to process incoming requests received via network 860 and direct them to the appropriate component for further processing. In at least some embodiments, interface 812 may be a network-based interface and may be implemented as a graphical interface (e.g., as part of an administration control panel or web site) and/or as a programmatic interface (e.g., handling various Application Programming Interface (API) commands). In various embodiments, interface 812 may be implemented as part of a front end module or component dispatching requests to the various other components, such as resource management 814, reservation management 816, resource credit pool management 818, and resource monitoring 820. Clients 802, in various embodiments, may not directly provision, launch or configure resources but may send requests to control plane 810 such that the illustrated components (or other components, functions or services not illustrated) may perform the requested actions. FIG. 4, discussed below, provides various examples of requests that may be processed via interface 812.
[0074] Control plane 810 may implement resource management module 814 to manage the access to, capacity of, mappings to, and other control or direction of computing resources offered by provider network. In at least some embodiments, resource management module 814 may provide both a direct sell and 3rd party resell market for capacity reservations (e.g., reserved compute instances). For example, resource management module 814 may allow clients 802 via interface 812 to learn about, select, purchase access to, and/or reserve capacity for computing resources, either from an initial sale marketplace or a resale marketplace, via a web page or via an API. For example, resource management module 814 may, via interface 812, provide listings of different available compute instance types, each with a different credit accumulation rate. Additionally, in some embodiments, resource management module 814 may be configured to offer credits for purchase (in addition to credits provided via the credit accumulation rate for an instance type) for a specified purchase amount or scheme (e.g., lump sum, additional periodic payments, etc.). For example, resource management module 814 may be configured to receive a credit purchase request (e.g., an API request) and credit the resource credit pool with the purchased credits. Similarly, resource management module 814 may be configured to handle a request to reconfigure an instance, such as increasing a credit accumulation rate for a particular instance. Resource management 814 may also offer and/or implement a flexible set of resource reservation, control and access interfaces for clients 802 via interface 812. For example resource management module 814 may provide credentials or permissions to clients 802 such that compute instance control operations/interactions between clients and in-use computing resources may be performed. In at least some embodiments, resource management module 814 may be configured to perform various migrations of virtual compute instances from one virtualization host to another in response to detecting migration events (as discussed below with regard to FIGS. 11A, 11B, and 13).
[0075] In various embodiments, reservation management module 816 may be configured to handle the various pricing schemes of instances 834 (at least for the initial sale marketplace) in various embodiments. For example network-based virtual computing service 800 may support several different purchasing modes (which may also be referred to herein as reservation modes) in some embodiments: for example, term reservations (i.e. reserved compute instances), on-demand resource allocation, or spot-price-based resource allocation. Using the long-term reservation mode, a client may make a low, one-time, upfront payment for a compute instance or other computing resource, reserve it for a specified duration such as a one or three year term, and pay a low hourly rate for the instance; the client would be assured of having the reserved instance available for the term of the reservation. Using on-demand mode, a client could pay for capacity by the hour (or some appropriate time unit), without any long-term commitments or upfront payments. In the spot-price mode, a client could specify the maximum price per unit time that it is willing to pay for a particular type of compute instance or other computing resource, and if the client's maximum price exceeded a dynamic spot price determined at least in part by supply and demand, that type of resource would be provided to the client.
[0076] During periods when the supply of the requested resource type exceeded the demand, the spot price may become significantly lower than the price for on-demand mode. In some implementations, if the spot price increases beyond the maximum bid specified by a client, a resource allocation may be interrupted - i.e., a resource instance that was previously allocated to the client may be reclaimed by the resource management module 814 and may be allocated to some other client that is willing to pay a higher price. Resource capacity reservations may also update control plane data store 822 to reflect changes in ownership, client use, client accounts, or other resource information.
[0077] In various embodiments, control plane 810 may implement resource credit pool management 818. Resource credit pool management 818 may, in various embodiments, be configured to manage and handle requests to create, configure, add instances or remove instances, or any other management operation as part of providing resource credit pools. Resource credit pool management 818 may store resource credit pool balances, authorized instances, or any other information in control plane data store 822. Resource credit pool management 818 may, in various embodiments, handle resource credit requests, determine the number of resource credits to provide, send responses to add credits or deny the resource request, and update the resource credit pool based on replenishment actions to individual resource credit balances or acquisitions of new resource credits for the resource credit pool. Resource credit pool management 818 may request resource migrations from resource management module 814 and perform evaluations of virtualization hosts to detect migration events.
[0078] In various embodiments, control plane 810 may implement resource monitoring module 820. Resource monitoring module 820 may track the consumption of various computing resources (e.g., resource credit balances, resource credit consumption) for different virtual computer resources, clients, user accounts, and/or specific instances. In at least some embodiments, resource monitoring module 820 may implement various administrative actions to stop, heal, manage, or otherwise respond to various different scenarios in the fleet of virtualization hosts 830 and instances 834. Resource monitoring module 820 may also provide access to various metric data for client(s) 802 as well as manage client configured alarms. Information collected by monitoring module 820 may be used to detect migration events for virtualization hosts, in some embodiments.
[0079] In various embodiments, control plane 810 may implement a billing management module (not illustrated). The billing management module may be configured to detect billing events (e.g., specific dates, times, usages, requests for bill, or any other cause to generate a bill for a particular user account or payment account linked to user accounts). In response to detecting the billing event, billing management module may be configured to generate a bill for a user account or payment account linked to user accounts.
[0080] A virtual compute instance 834 may, for example, comprise one or more servers with a specified computational capacity (which may be specified by indicating the type and number of CPUs, the main memory size, and so on) and a specified software stack (e.g., a particular version of an operating system, which may in turn run on top of a hypervisor). A number of different types of computing devices may be used singly or in combination to implement the compute instances 834 of provider network 800 in different embodiments, including general purpose or special purpose computer servers, storage devices, network devices and the like. In some embodiments, instance clients 802 or any other user may be configured (and/or authorized) to direct network traffic to a compute instance 834.
[0081] Compute instances 834 may operate or implement a variety of different platforms, such as application server instances, Java™ virtual machines (JVMs), general purpose or special-purpose operating systems, platforms that support various interpreted or compiled programming languages such as Ruby, Perl, Python, C, C++ and the like, or high-performance computing platforms suitable for performing client 802 applications, without for example requiring the client 802 to access an instance 834. There may be various different types of compute instances. In at least some embodiments, there may be compute instances that implement rolling resource credit balances for scheduling virtual computer resource operations. This type of instance may perform based on resource credits, where resource credits represent time an instance can spend on a physical resource doing work (e.g., processing time on a physical CPU, time utilizing a network communication channel, etc.). The more resource credits an instance has for computer resources, the more time it may spend on the physical resources executing work (increasing performance). Resource credits may be provided at launch of an instance, and may be defined as utilization time (e.g., CPU time, such as CPU-minutes), which may represent the time an instance's virtual resources can spend on underlying physical resources performing a task.
[0082] In various embodiments, resource credits may represent time or utilization of resources in excess of a baseline utilization guarantee. For example, a compute instance may have a baseline utilization guarantee of 10% for a resource, and thus resource credits may increase the utilization for the resource above 10%. Even if no resource credits remain, utilization may still be granted to the compute instance at the 10% baseline. Credit consumption may only happen when the instance needs the physical resources to perform the work above the baseline performance. In some embodiments credits may be refreshed or accumulated to the resource credit balance whether or not a compute instance submits work requests that consume the baseline utilization guarantee of the resource.
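As a minimal illustration of the relationship just described (the 10% baseline is taken from the example above, while the function name and the per-minute accounting are illustrative assumptions rather than details from the specification), the following Python sketch deducts credits only for utilization above the baseline guarantee:

```python
# Hypothetical sketch: credits are consumed only for utilization above a baseline guarantee.
BASELINE_UTILIZATION = 0.10  # 10% of the physical resource is always granted

def credits_consumed(requested_utilization: float, interval_minutes: float) -> float:
    """Return the resource credits (in resource-minutes) consumed over an interval.

    Utilization at or below the baseline consumes no credits; only the portion
    above the baseline draws down the instance's resource credit balance.
    """
    above_baseline = max(0.0, requested_utilization - BASELINE_UTILIZATION)
    return above_baseline * interval_minutes

# Example: running at 35% utilization for 10 minutes consumes 2.5 credit-minutes.
print(credits_consumed(0.35, 10))
```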
[0083] Different types of compute instances may be offered. Different compute instances may have a particular number of virtual CPU cores, memory, cache, storage, networking, as well as any other performance characteristic. Configurations of compute instances may also include their location, in a particular data center, availability zone, geographic location, etc., and (in the case of reserved compute instances) reservation term length. Different compute instances may have different resource credit accumulation rates for different virtual resources, which may be a number of resource credits that accumulate to the current balance of resource credits maintained for a compute instance. For example, one type of compute instance may accumulate 6 credits per hour for one virtual computer resource, while another type of compute instance may accumulate 24 credits per hour for the same type of computer resource, in some embodiments. In another example, the resource credit accumulation rate for one resource (e.g., CPU) may be different than the resource credit accumulation rate for a different computer resource (e.g., networking channel) for the same virtual compute instance. In some embodiments, multiple different resource credit balances may be maintained for a virtual compute instance for the multiple different physical resources used by the virtual compute instance. A baseline performance guarantee may also be implemented for each of the computer resources, which may be different for each virtual computer resource, as well as for the different instance types.
[0084] Baseline performance guarantees may be included along with the resource credit accumulation rates, in some embodiments. Thus, in one example, an instance type may include a specific resource credit accumulation rate and guaranteed baseline performance for processing, and another specific resource credit accumulation rate and guaranteed baseline performance rate for networking channels. In this way, provider network 800 may offer many different types of instances with different combinations of resource credit accumulation rates and baseline guarantees for different virtual computer resources. These different configurations may be priced differently, according to the resource credit accumulation rates and baseline performance rates, in addition to the various physical and/or virtual capabilities. In some embodiments, a virtual compute instance may be reserved and/or utilized for an hourly price, while a long-term reserved instance configuration may utilize a different pricing scheme but still include the credit accumulation rates and baseline performance guarantees.
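A rough Python sketch of how per-resource accumulation rates and baseline guarantees might be represented for an instance type is shown below; the 6 and 24 credit-per-hour rates echo the example above, but the class names, field names, balance caps, and baseline values are illustrative assumptions:

```python
from dataclasses import dataclass, field

# Hypothetical per-resource configuration for an instance type; the rates and
# caps shown are illustrative, not values defined by the specification.
@dataclass
class ResourceCreditConfig:
    accumulation_rate_per_hour: float   # credits accrued to the balance each hour
    baseline_utilization: float         # guaranteed fraction of the physical resource
    max_balance: float                  # cap on accumulated (rolled-over) credits

@dataclass
class InstanceCreditState:
    configs: dict                                  # resource name -> ResourceCreditConfig
    balances: dict = field(default_factory=dict)   # resource name -> current credits

    def accrue(self, hours: float) -> None:
        """Accumulate credits for every tracked physical resource, up to its cap."""
        for resource, cfg in self.configs.items():
            current = self.balances.get(resource, 0.0)
            self.balances[resource] = min(
                cfg.max_balance, current + cfg.accumulation_rate_per_hour * hours
            )

# One instance type might accrue CPU credits and network credits at different rates.
state = InstanceCreditState(configs={
    "cpu": ResourceCreditConfig(6.0, 0.10, 144.0),
    "network": ResourceCreditConfig(24.0, 0.20, 576.0),
})
state.accrue(hours=2)
print(state.balances)  # {'cpu': 12.0, 'network': 48.0}
```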
[0085] As illustrated in FIG. 8, a virtualization host 830, such as virtualization hosts 830a, 830b, through 830n, may implement and/or manage multiple compute instances 834, in some embodiments, and may be one or more computing devices, such as computing system 2000 described below with regard to FIG. 19. A virtualization host 830 may include a virtualization management module 832, such as virtualization management modules 832a, 832b through 832n, capable of instantiating and managing a number of different client-accessible virtual machines or compute instances 834. The virtualization management module 832 may include, for example, a hypervisor and an administrative instance of an operating system, which may be termed a "domain-zero" or "dom0" operating system in some implementations. The dom0 operating system may not be accessible by clients on whose behalf the compute instances 834 run, but may instead be responsible for various administrative or control-plane operations of the network provider, including handling the network traffic directed to or from the compute instances 834.
[0086] Client(s) 802 may encompass any type of client configurable to submit requests to network-based virtual computing service 800. For example, a given client 802 may include a suitable version of a web browser, or may include a plug-in module or other type of code module configured to execute as an extension to or within an execution environment provided by a web browser. Alternatively, a client 802 may encompass an application such as a dashboard application (or user interface thereof), a media application, an office application or any other application that may make use of compute instances 834 to perform various operations. In some embodiments, such an application may include sufficient protocol support (e.g., for a suitable version of Hypertext Transfer Protocol (HTTP)) for generating and processing network-based services requests without necessarily implementing full browser support for all types of network-based data. In some embodiments, clients 802 may be configured to generate network-based services requests according to a Representational State Transfer (REST)-style network-based services architecture, a document- or message-based network-based services architecture, or another suitable network-based services architecture. In some embodiments, a client 802 (e.g., a computational client) may be configured to provide access to a compute instance 834 in a manner that is transparent to applications implemented on the client 802 utilizing computational resources provided by the compute instance 834.
[0087] Clients 802 may convey network-based services requests to network-based virtual computing service 800 via network 860. In various embodiments, network 860 may encompass any suitable combination of networking hardware and protocols necessary to establish network-based communications between clients 802 and network-based virtual computing service 800. For example, a network 860 may generally encompass the various telecommunications networks and service providers that collectively implement the Internet. A network 860 may also include private networks such as local area networks (LANs) or wide area networks (WANs) as well as public or private wireless networks. For example, both a given client 802 and network-based virtual computing service 800 may be respectively provisioned within enterprises having their own internal networks. In such an embodiment, a network 860 may include the hardware (e.g., modems, routers, switches, load balancers, proxy servers, etc.) and software (e.g., protocol stacks, accounting software, firewall/security software, etc.) necessary to establish a networking link between a given client 802 and the Internet as well as between the Internet and network-based virtual computing service 800. It is noted that in some embodiments, clients 802 may communicate with network-based virtual computing service 800 using a private network rather than the public Internet.
[0088] FIG. 9 is a block diagram illustrating a virtualization host that implements resource credits for scheduling virtual computer resources, according to some embodiments. As noted above in FIG. 2, virtualization hosts may serve as a host platform for one or more virtual compute instances. These virtual compute instances may utilize virtualized hardware interfaces to perform various tasks, functions, services and/or applications. As part of performing these tasks, virtual compute instances may utilize physical computer resources via a virtual proxy (e.g., virtual central processing unit(s) (vCPU(s)) which may act as the virtual proxy for the physical CPU(s)) implemented at the virtualization host 310 in order to perform work on respective physical computer resources for the respective compute instance. In some embodiments, the compute instances may be operated for different clients of a provider network such that the virtualization host is multi-tenant.
[0089] FIG. 9 illustrates virtualization host 910. Virtualization host 910 may host compute instances 930a, 930b, 930c, through 930n. In at least some embodiments, the compute instances 930 may be the same type of compute instance. In FIG. 9, compute instances 930 are compute instances that implement rolling resource credits for scheduling virtual computer resources. Virtualization host 910 may also implement virtualization management module 920, which may handle the various interfaces between the virtual compute instances 930 and physical computing resource(s) 940 (e.g., various hardware components, processors, I/O devices, networking devices, etc.).
[0090] In FIG. 9, virtualization management module 920 may implement resource credit balance scheduler 924. Resource credit balance scheduler 924 may act as a meta-scheduler, managing, tracking, applying, deducting, and/or otherwise handling all individual resource credit balances for each of compute instances 930 for the different respective physical resources 940. In various embodiments, resource credit balance scheduler 924 may be configured to receive virtual resource work requests 932 from compute instances. Each work request 932 may be directed toward the virtual computer resource corresponding to the compute instance that submitted the work. For each request 932, resource credit balance scheduler may be configured to determine a current resource credit balance for the requesting compute instance 930, and generate scheduling instructions to apply resource credits when performing the work request. In some embodiments, resource credit balance scheduler 924 may perform or direct the performance of the scheduling instructions, directing or sending the work request to the underlying physical computing resources 940 to be performed (as illustrated by the arrow between 924 and 940). For example, in some embodiments different hardware queues may be implemented and resource credit balance scheduler 924 may be used to place tasks for performing work requests in the queues according to the applied resource credits (e.g., queuing tasks according to the amount of time of applied resource credits). However, in some embodiments the resource scheduling instructions may be sent 934 to virtual compute resource scheduler 922, which may be a scheduler for the physical resources 940, such as CPU(s), implemented at virtualization host 910.
[0091] In some embodiments, in response to receiving the scheduling instructions, virtual compute resource scheduler 922 may provide physical scheduling instructions for work requests 936 to physical computing resources, such as physical CPU(s), in various embodiments. In at least some embodiments, virtual compute resource scheduler 922 may be a credit-based scheduler for one or more CPUs.
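The following sketch, in Python for readability, shows one plausible shape for the credit balance scheduler's handling of a work request (look up the balance, apply available credits, report any shortfall); the class and field names are assumptions, not elements of FIG. 9:

```python
import collections

# Hypothetical sketch of a per-host resource credit balance scheduler: it checks
# the requesting instance's balance for the targeted physical resource, applies
# up to the needed credits, and emits a scheduling instruction for the work.
class ResourceCreditBalanceScheduler:
    def __init__(self):
        # balances[instance_id][resource] -> available resource credits
        self.balances = collections.defaultdict(lambda: collections.defaultdict(float))

    def handle_work_request(self, instance_id, resource, credits_needed):
        balance = self.balances[instance_id][resource]
        applied = min(balance, credits_needed)
        self.balances[instance_id][resource] = balance - applied
        # The instruction could be executed directly against hardware queues or
        # forwarded to an underlying (e.g., credit-based CPU) scheduler.
        return {
            "instance": instance_id,
            "resource": resource,
            "credits_applied": applied,
            "shortfall": credits_needed - applied,  # may trigger a pool credit request
        }

scheduler = ResourceCreditBalanceScheduler()
scheduler.balances["instance-930a"]["cpu"] = 5.0
print(scheduler.handle_work_request("instance-930a", "cpu", credits_needed=8.0))
```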
[0092] Rolling resource credit balance scheduler 924 may also report credit balance and usage metrics as part of performance metrics 972, along with any other host metrics (health information, etc.), to resource monitoring module 820.
[0093] In some instances, the individual resource credit balances may be insufficient to complete work requests 932. As described below with regard to FIG. 9, credit requests 962 may be sent via credit pool agent 926 (which handles communications between virtualization host 910 and resource credit manager 818) to request 964 a number of resource credits from a particular resource credit pool. Resource credit manager 818 may send a response authorizing additional resource credits 966 to credit pool agent 926 which in turn may inform the scheduler 924 of the additional resource credits 968. In some embodiments, scheduling instructions (which may apply the additionally granted credits to an individual resource account according to a schedule or in response to events such as the completion of a migration) for applying additional resource credits 968 may be enforced.
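A simplified sketch of the host-side interaction described above follows; the stub credit manager, function names, and grant policy are illustrative assumptions standing in for resource credit manager 818 and credit pool agent 926:

```python
# Hypothetical sketch of the host-side credit pool agent interaction: when a work
# request exceeds the local balance, the agent asks the resource credit manager
# for pool credits and applies whatever portion is authorized. The manager here
# is a stub standing in for the control-plane component.
class StubCreditManager:
    def __init__(self, pool_balance):
        self.pool_balance = pool_balance

    def authorize(self, instance_id, resource, credits_requested):
        granted = min(self.pool_balance, credits_requested)
        self.pool_balance -= granted
        return {"credits_authorized": granted, "scheduling_instruction": None}

def replenish_from_pool(credit_manager, local_balance, instance_id, resource, needed):
    """Request the shortfall from the pool and return the updated local balance."""
    shortfall = max(0.0, needed - local_balance)
    if shortfall == 0.0:
        return local_balance
    response = credit_manager.authorize(instance_id, resource, shortfall)
    return local_balance + response["credits_authorized"]

manager = StubCreditManager(pool_balance=100.0)
print(replenish_from_pool(manager, local_balance=2.0,
                          instance_id="instance-930b", resource="cpu", needed=10.0))
```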
[0094] Resource credit pools may be offered to clients of a provider network in order to allow resource utilization to be purchased for more predictable requirements than individual instance requirements. FIG. 10 illustrates interactions between a client and a provider network that implements resource credit pools for replenishing instance resource credit balances, according to some embodiments. Client 1000 (similar to client(s) 802 in FIG. 8) may interact with control plane 810 via interface 812. As noted above, interface 812 may be implemented as a graphical user interface (e.g., at a network-based site) or programmatically (e.g., an API).
[0095] Client 1000 may submit a request to create a resource credit pool 1010 to control plane 810. Creation request 1010 may indicate the type of physical computer resource for which the resource credit pool maintains resource credits. The resource credit pool creation request may also include a replenishment policy (e.g., on-demand, periodic refill, manual refill). Replenishment policies for individual resource credit balances may also be included. A separate request 1020 to configure or change these replenishment policies may also be sent. The creation request may also identify the virtual compute instances authorized to obtain resource credits from the resource credit pool (e.g., including a list of instance identifiers, a zone, region, or other indication of instances that are authorized). Requests to add compute instances 1030 to those authorized to replenish credits from the resource credit pool may be sent, as well as requests to remove authorization 1040 for particular compute instances.
[0096] While some replenishment policies or schemes for resource credit pools may provide for mechanisms to automatically acquire more resource credits for a resource credit pool, requests to add resource credits 1070 to the resource credit pool may also be sent. As the purchase price for different types of resource credits may vary, in some embodiments, requests for pricing information 1050 may be sent to obtain resource credit pricing 1060 when making purchasing decisions.
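The request shapes below are hypothetical illustrations of the interactions 1010 through 1070 described above; the field names and values are assumptions and do not correspond to an actual provider API:

```python
# Hypothetical shapes for the client requests described above; the field names
# are illustrative and do not correspond to any actual provider API.
create_pool_request = {
    "action": "CreateResourceCreditPool",
    "resource_type": "cpu",
    "replenishment_policy": {"mode": "on_demand", "refill_threshold": 50},
    "authorized_instances": ["instance-1", "instance-2"],
}

add_instance_request = {
    "action": "AddAuthorizedInstance",
    "pool_id": "pool-123",
    "instance_id": "instance-3",
}

add_credits_request = {
    "action": "AddResourceCredits",
    "pool_id": "pool-123",
    "credits": 500,
}
```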
[0097] FIG. 11A illustrates a virtual compute instance migration performed as part of replenishing an individual resource credit balance for a physical computer resource, according to some embodiments. Virtualization host 1100 may send a resource credit request 1132 for resource pool credits to control plane 810 for instance 1102b. Control plane 810 may identify the correct resource credit pool and verify the authority of instance 1102b to receive pool credits. Control plane 810 may perform an evaluation of virtualization host 1100 and detect a migration event for virtualization host 1100. For example, adding the additional resource credits to the local credit balance for instance 1102b may cause utilization of a physical computer resource authorized by the type of resource credits to exceed an actual capacity of the physical computer resource at virtualization host 1100.
[0098] In response to detecting the migration event, control plane 810 may select one or more instances to migrate to destination virtualization hosts. As illustrated in FIG. 11A, instance 1102c and instance 1102d may be selected for instance migration 1138. Instances may be selected for migration for various reasons. For example, typical utilization of the physical computer resource by instances 1102c and 1102d may offset the increased utilization provided by the additional resource credits for instance 1102b. Destination virtualization hosts 1110 and 1120 may be selected to host instances 1102c and 1102d respectively. FIG. 13, discussed below, provides further examples and techniques for selecting instances and destination virtualization hosts for migration.
[0099] Control plane 810 may send a response 1134 authorizing a number of resource credits to be added to the local credit balance for instance 1102b. The response may include a scheduling instruction which may allow only a portion of the resource credits to be applied until instances 1102c and 1102d are migrated to virtualization hosts 1110 and 1120. Control plane 810 may also direct the instance migration 1136, performing various operations to re-instantiate instances 1102c and 1102d at virtualization hosts 1110 and 1120. For example, control plane 810 may provision a replica instance on virtualization host 1110 of instance 1102c, synchronize the state of the two instances, and redirect traffic to the new instance at virtualization host 1110 acting as instance 1102c. The individual resource credit balances for instances 1102c and 1102d may also be replicated to virtualization hosts 1110 and 1120. Migration may be performed in such a way as to be transparent to a user or client of instances 1102c and 1102d (as the virtualization hosts may be multi-tenant, utilization changes due to resource credit requests may be hidden from view). Once migration 1138 is complete, virtualization host 1100 may make the physical computer resources utilized by instances 1102c and 1102d available to other instances.
[00100] FIG. 11B illustrates a virtual compute instance migration performed as part of replenishing an individual resource credit balance for a physical computer resource, where the requesting virtual compute instance is migrated, according to some embodiments. Virtualization host 1140 may send a resource credit request 1162 for pool credits to control plane 810 for instance 1142a. Control plane 810 may evaluate virtualization host 1140. For example, control plane 810 may evaluate usage and performance data for utilization of the physical computer resource for which the pool credits are requested. If, for instance, the resource credit request is for processing credits, then past processor utilization of instances 1142a, 1142b, 1142c and 1142d may be examined. Excess processing capacity or bandwidth may be identified based on the evaluation. If insufficient capacity exists to apply the additional resource credits for instance 1142a, then a migration event for virtualization host 1140 may be triggered. Instance 1142a may be selected for migration 1168 to a destination virtualization host 1150 (which may also be selected based on the resource requirements of instance 1142a, including the additional resource credits). Instance 1142a may be selected based on multiple factors. For example, instance 1142a may be a "small" instance (based on workload and/or utilization), and thus may be easy to migrate. In another example, the destination virtualization host 1150 may have different hardware providing different physical computer resource capabilities that meet the requirements of the virtual compute instance post credit replenishment. FIG. 13, discussed below, provides further discussion on the selection of instances for migration.
[00101] Control plane 810 may send a response 1164 authorizing a number of credits to be added to the individual resource credit balance for instance 1142a. The response may include a scheduling instruction which may allow only a portion of the resource credits to be applied until instance 1142a is migrated to virtualization host 1150. Control plane 810 may also direct the instance migration 1166, performing various operations to re-instantiate instance 1142a at virtualization host 1150. For example, control plane 810 may provision a replica instance on virtualization host 1150 of instance 1142a, synchronize the state of the two instances, and redirect traffic to the new instance at virtualization host 1150 acting as instance 1142a. The individual resource credit balances for instance 1142a may also be replicated to virtualization host 1150. Migration may be performed in such a way as to be transparent to a user or client of instance 1142a. Once migration 1168 is complete, virtualization host 1140 may make the physical computer resources utilized by instance 1142a available to other instances.
[00102] The examples of implementing resource credit pools for replenishing individual resource credit balances discussed above with regard to FIGS. 8 - 11B have been given in regard to virtual compute instances offered by a provider network. Various other types or configurations of virtual compute instances and/or a provider network may implement these techniques, which may or may not be offered as part of a network-based service. FIG. 12 is a high-level flowchart illustrating various methods and techniques for implementing resource credit pools for replenishing resource credit balances of virtual compute instances, according to some embodiments. These techniques may be implemented using various components of provider network as described above with regard to FIGS. 8 - 11B or other system or service providing virtual computing instances.
[00103] As indicated at 1210, a resource credit pool of resource credits may be maintained to replenish individual resource credit balances of authorized compute instances, in various embodiments. Resource credit pools, as discussed above with regard to FIG. 7, may pertain to a particular type of physical computer resource (e.g., processing, network, I/O or storage). Accordingly, in some embodiments multiple different resource credit pools may be accessible to a virtual compute instance, corresponding to different physical computer resources that the virtual compute instance utilizes to perform work requests. The resource credits in the resource credit pool may be individually applicable to increase utilization of the physical computer resource for the virtual compute instance for which the resource credits are applied.
[00104] One or multiple different virtual compute instances may be authorized to obtain resource credits from the resource credit pool. As illustrated above in FIG. 10, virtual compute instances may be added or removed from the group of virtual compute instances authorized to obtain resource credits. Various enforcement mechanisms (e.g., an access list of authorized instances) may be implemented to ensure that only authorized instances obtain resource credits from a resource credit pool. In some embodiments, a common set of virtual compute instances may be authorized to obtain resource credits from multiple different resource credit pools (e.g., a pool for networking, a pool for processing, a pool for I/O, etc.), while in other embodiments the authorized virtual compute instances may vary from one resource credit pool to another.
[00105] Resource credit pools may be replenished in various ways by obtaining more resource credits from a provider network. A provider network may offer resource credits for purchase, either individually or in batches of resource credits. Resource credit pools may be refilled in automated fashion (as discussed below with regard to FIG. 14), either on demand or according to a scheduled or periodic refill rate. In some embodiments, resource credits may be purchased or added on demand from instances authorized to access the resource credit pool. In at least some embodiments, resource credit pools may authorize access to any virtual compute instance of a provider network. Resource credits may also be manually purchased by submitting a purchase request for resource credits (as illustrated above in FIG. 4) to refill a resource credit pool. Resource credit pricing may be determined according to a fixed pricing scheme, such as price per individual resource credit, which may also be discounted as larger numbers of resource credits are purchased. In some embodiments, resource credit pricing may be determined according to a market or otherwise variable rate.

[00106] As indicated at 1220, a resource credit request may be received for an authorized virtual compute instance to replenish the individual resource credit balance for the authorized virtual compute instance, in various embodiments. The resource credit request may specify a number of resource credits, in some embodiments. In response to the resource credit request, a number of resource credits to add to the individual resource credit balance for the authorized compute instance may be determined, as indicated at 1230. The number of resource credits may be the same as a requested number of resource credits, while in some embodiments resource credits may be replenished to individual resource credit balances according to an individual resource credit replenishment scheme (e.g., providing a pre-determined number of resource credits to a virtual compute instance in response to a request).
[00107] As indicated at 1240, a response may be sent indicating the number of resource credits to be added to the individual resource credit balance for the authorized compute instance. In at least some embodiments, the response may include a scheduling instruction or other information directing the addition or application of the resource credits. As described above with regard to FIG. 9 and below with regard to FIG. 15, the virtualization host implementing the virtual compute instance may add the resource credits to the individual resource credit balance and apply them to work requests utilizing the underlying physical computer resource. The resource credit pool may be updated to remove the number of resource credits from the resource credit pool, as indicated at 1250, in various embodiments.
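As one possible concrete rendering of elements 1210 through 1250, the Python sketch below maintains a pool, checks authorization, determines a grant, responds, and deducts the granted credits; the simple grant policy and all names are assumptions:

```python
# Hypothetical sketch of the control-plane flow of FIG. 12: maintain a pool,
# check authorization, decide how many credits to grant, respond, and deduct
# the granted credits from the pool. Names and the grant policy are assumptions.
class ResourceCreditPool:
    def __init__(self, resource_type, credits, authorized_instances):
        self.resource_type = resource_type
        self.credits = credits
        self.authorized_instances = set(authorized_instances)

    def handle_credit_request(self, instance_id, credits_requested):
        if instance_id not in self.authorized_instances:
            return {"granted": 0, "reason": "instance not authorized for this pool"}
        # Element 1230: this grant simply honors the request up to the pool
        # balance; a real policy might prioritize or cap individual grants.
        granted = min(self.credits, credits_requested)
        self.credits -= granted                      # element 1250: update the pool
        return {"granted": granted, "resource_type": self.resource_type}  # element 1240

pool = ResourceCreditPool("cpu", credits=200, authorized_instances={"instance-a"})
print(pool.handle_credit_request("instance-a", credits_requested=30))
print(pool.handle_credit_request("instance-z", credits_requested=30))
```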
[00108] Credit-based scheduling for virtual compute instances may allow virtual compute instances to handle workloads that are irregular or unpredictable. For multiple virtual compute instances located on the same virtualization host, credit-based scheduling distributes utilization of underlying physical resources according to the individual resource credit balances for the instances. Some increased utilization for a virtual compute instance may exceed the capacity or capability of a virtualization host to provide (or may not be provided without reducing the performance of other virtual compute instances located at the virtualization host). When an individual resource credit balance for a virtual compute instance is replenished, it may be that the virtualization host is unable to meet the various performance commitments of the virtual compute instances located at the virtualization host. In such scenarios, migrating one or more virtual compute instances to another virtualization host may allow for the additional resource credits added to an individual resource credit balance to be applied. FIG. 13 is a high-level flowchart illustrating various methods and techniques for migrating instances in a provider network as part of replenishing instance resource credit balances from a resource credit pool, according to some embodiments.

[00109] Similar to the description above with regard to FIG. 12, a resource credit request may be received for an authorized virtual compute instance to replenish an individual resource credit balance for the authorized virtual compute instance from a resource credit pool. The increase in utilization provided by applying the additional resource credits may cause the overall utilization of the underlying physical resource to exceed the capacity of the underlying physical resource.
Due to the unpredictability with which resource credits may be obtained and applied at individual virtual compute instances, virtualization hosts may be evaluated and/or monitored. A virtualization host implementing an authorized virtual compute instance that receives additional resource credits from a resource credit pool may be evaluated to detect a migration event for the virtualization host, as indicated at 1310, as a result of replenishing the individual resource credit balance. For instance, in at least some embodiments, credit usage, and other information or performance statistics for the virtualization host and the instances located on the virtualization host (including the virtual compute instance for which the resource credits are requested as well as other virtual compute instances) may be collected (as illustrated in FIG. 9). The virtualization host may be evaluated based on current utilization of the underlying physical resource for the requested resource credits, as well as historical utilization trends, based on the virtual compute instances located on the virtualization host. The evaluation may also include adding the increase in utilization of the physical resource when the requested resource credits are applied, to determine whether the utilization increase exceeds the capabilities of the virtualization host, triggering a migration event. If, for instance, the evaluation determines that based on historical trends and current utilization information, a physical resource (e.g., one or more central processing units) is at 80% utilization and the requested resource credits provide a 30% increase to the utilization of the physical resource, then a migration event may be detected, as the estimated utilization (110%) exceeds the capacity of the virtualization host. Note that various other thresholds lower than 100% capacity may trigger migration detection events (especially as workloads for virtual compute instances may occur in bursts). For example, in some embodiments, migration events may be triggered when utilization or workload for an underlying physical resource exceeds a threshold above which the performance of a virtual compute instance is slowed or impacted in violation of a service level agreement.
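The evaluation described above can be sketched as a simple threshold check; the 90% trigger threshold below is an illustrative assumption (the text notes that thresholds lower than 100% may be used), while the 80% + 30% example mirrors the one in the paragraph:

```python
# Hypothetical sketch of the migration-event check: estimate the host's
# utilization if the requested credits were applied and compare it to a
# trigger threshold.
def migration_event_detected(current_utilization, credit_utilization_increase,
                             trigger_threshold=0.90):
    estimated = current_utilization + credit_utilization_increase
    return estimated > trigger_threshold

# 80% current utilization plus a 30% increase exceeds capacity, so a migration
# event is detected, matching the example in the text.
print(migration_event_detected(0.80, 0.30))  # True
```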
[00110] If a migration event is not detected, as indicated by the negative exit from 1320, then replenishment of the individual resource credit balance for the authorized virtual compute instance may be completed, as indicated at 1322 (as discussed above with regard to FIG. 12). However, if a migration event is detected, as indicated by the positive exit from 1320, then a migration of one or more virtual compute instances located on the virtualization host may be performed so that the utilization capacity for the replenished virtual compute instance exists. Many different methods for selecting virtual compute instance(s) to migrate from a virtualization host, as well as a destination virtualization host, may be implemented. In some embodiments, one or more virtual compute instances implemented on the virtualization host may be selected to migrate so as to provide the utilization capacity. For instance, if an originating virtualization host has high CPU utilization and low memory utilization, then it may be desirable to locate a destination virtualization host with the reverse utilization, or to migrate a low CPU/high memory instance.
[00111] Selecting virtual compute instances to migrate may depend upon various factors. For example, migration burden or workload may be assessed for the virtual compute instances, in some embodiments. Migrating a larger virtual compute instance (by resource utilization and/or workload) may be, for instance, more difficult or costly to perform. If the movement of multiple smaller virtual compute instances achieves the same effect, then in some cases multiple virtual compute instances may be moved. In addition to the cost of migrating virtual compute instances, the impact of migration on the operation of virtual compute instances may be assessed. For example, the performance of the various virtual compute instances on a virtualization host may be subject to respective service level agreements (SLAs). If a migration operation may cause a virtual compute instance to violate an SLA, then the virtual compute instance may be less likely to be selected for migration. As noted above, in various embodiments, virtualization hosts may be multi-tenant, hosting virtual compute instances for different clients. Thus, the impact of a resource credit replenishment on those virtual compute instances that did not request the resource credits may be minimized when selecting instances to migrate. Similarly, the impact or effect of performing a migration may be examined upon the one or more virtualization hosts selected as destinations for virtual compute instances (as discussed below). For example, virtualization hosts in a provider network may be analyzed to determine whether utilization capacity exists to perform migration and host one or more of the instances selected for migration. Thus, a possible destination virtualization host may be evaluated based on current utilization of underlying physical resources utilized by compute instances selected for migration, as well as historical utilization trends, based on the virtual compute instances located on the possible destination virtualization host. The analysis may also include adding the increase in utilization of the physical resources of the one or more of the selected instances to be hosted, to determine whether hosting one or more of the selected instances exceeds the capabilities of the virtualization host (or negatively impacts the performance of currently hosted instances in violation of an SLA). Please note that while resource credits obtained from the resource credit pool may increase utilization of one physical resource implemented at a virtualization host, the utilization of many different physical computer resources at the virtualization host may also be considered when selecting virtual compute instances to migrate and destination virtualization hosts.
[00112] In some embodiments a placement technique for migrating instances may be implemented to balance utilization of resources across the virtualization hosts of a provider network. One such technique is described below with regard to elements 1330 through 1360. As indicated at 1330, a set of candidate destination virtualization hosts may be selected for consideration when performing a migration of a virtual compute instance from the virtualization host. A provider network may, for instance, implement large numbers of virtualization hosts, distributed across multiple data centers. It may be computationally less expensive to reduce the number of hosts considered for hosting a migrated virtual compute instance. For example, a set of virtualization hosts may be randomly selected. Some biases may be included when performing the selection, such as those virtualization hosts that have unbalanced utilization among different underlying physical computer resources, as well as those virtualization hosts that are similarly located as the originating host for the migrated instance.
[00113] As indicated at 1340, the virtual compute instances located on the virtualization host for which the migration event is detected may be scored for migration, in some embodiments. For example, a score may be calculated by calculating the standard deviation of the mean of the utilization percentages of resources for the virtualization host, and then determining scores for each instance according to how much a migration of the instance from the virtualization host reduces the standard deviation. A similar calculation may be performed for each candidate destination virtualization host. As indicated at 1350, the set of candidate virtualization hosts may be scored individually for each of the virtual compute instances located on the host for which a migration event has been detected. Thus, if the virtualization host implements 4 instances (as in FIG. 11A), then a candidate destination virtualization host may be scored for each of the 4 instances. A score for a candidate virtualization host may be determined in various ways. For example, similar to the calculation above for the virtual compute instances on the originating host, the standard deviation of the mean of the utilization percentages of resources for the candidate destination virtualization host may be determined, and then a score may be determined for each instance added to the host according to how much a migration of the instance to the virtualization host reduces/improves the standard deviation. As indicated at 1360, based, at least in part, on the scores (determined at 1340 and 1350), one or more instances may be selected to migrate to a particular destination virtualization host. For example, the instances and destinations may be selected based on determining which migrations improve the standard deviations of utilization of physical computer resources at the originating and destination hosts, respectively. In some embodiments, minimum improvement thresholds or criteria may be implemented such that a new set of candidate destination virtualization hosts may be selected if a migration does not satisfy the criteria.
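The following sketch illustrates one way the standard-deviation-based scoring described at 1340 and 1350 might be computed; treating a host's imbalance as the population standard deviation of its per-resource utilization percentages is an interpretation of the text, and the utilization figures are made up:

```python
import statistics

# Hypothetical sketch of the scoring described above: a host's imbalance is the
# standard deviation of its per-resource utilization percentages, and an
# instance's migration score is how much removing it reduces that imbalance.
def host_imbalance(per_resource_utilization):
    return statistics.pstdev(per_resource_utilization.values())

def migration_score(host_utilization, instance_utilization):
    """Positive scores mean migrating the instance away makes the host more balanced."""
    before = host_imbalance(host_utilization)
    after = host_imbalance({
        resource: host_utilization[resource] - instance_utilization.get(resource, 0.0)
        for resource in host_utilization
    })
    return before - after

host = {"cpu": 0.85, "memory": 0.30, "network": 0.40}
instance = {"cpu": 0.25, "memory": 0.05, "network": 0.05}
print(round(migration_score(host, instance), 4))
```

A symmetric calculation (adding the instance's utilization to a candidate destination host and measuring the change in its standard deviation) could score candidate hosts for element 1350.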
[00114] As indicated at 1370, migration of the virtual compute instances from the virtualization host to a selected destination virtualization host may be directed, in various embodiments. Migration operations may provide a "live" migration experience, in some embodiments. Thus, users, clients, or other systems that interact with the migrated instances may experience little or no impact (e.g., downtime) as a result of the migration. For example, migration may include provisioning and configuring a destination virtual compute instance based on the selected virtual compute instance for migration. Operation at the destination virtual compute instance may be started and may be synchronized with the currently operating virtual compute instance selected for migration, in some embodiments. For instance, tasks, operations, or other functions performed at the selected virtual compute instance may be replicated at the destination virtual compute instance. A stream of messages or indications of these tasks may be sent from the selected virtual compute instance to the destination virtual compute instance so that they may be replicated, for example. Access to other computing resources (e.g., a data volume) or systems that are utilized by the selected virtual compute instance may be provided to the destination virtual compute instance (in order to replicate or be aware of the current state of operations at the selected virtual compute instance), in some embodiments. Individual resource credit balances for the virtual compute instance may be transferred to the destination virtualization host. Once synchronized, in some embodiments, requests for the selected virtual compute instance may be directed to the destination virtual compute instance. For example, a network endpoint, or other network traffic component may be modified or programmed to now direct traffic for the selected virtual compute instance to the destination virtual compute instance. Operation of the selected virtual compute instance that is currently operating may then be stopped, allowing the virtualization host to use physical computer resources once used by the selected virtual compute instance.
[00115] As indicated at 1380, a response to replenish the individual resource credit balance for the authorized virtual compute instance may be sent that is configured according to the migration performed in element 1370 above, in various embodiments. As the migration of one or more virtual compute instances may not occur instantaneously, resources freed by migrating the virtual compute instances or new resources acquired (in scenarios where the requesting virtual compute instance is moved to a different virtualization host) may also not be fully available until the completion of the migration. Thus, in some embodiments a scheduling instruction or other indication may be included in responses sent to replenish individual resource credit balances indicating how and/or when the additional resource credits may be consumed. For example, if 20 new resource credits are to be added, the scheduling instruction may indicate that 10 resource credits may be immediately available, while the remaining 10 resource credits may not be applied until the migration is complete. In some embodiments, the response may be sent to a current and destination virtualization host if the virtual compute instance is itself migrating in response to replenishing the individual resource credit balance.
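A hypothetical shape for such a response, splitting the 20-credit example above into an immediately applicable portion and a portion deferred until migration completes, might look like the following (field names are assumptions):

```python
# Hypothetical shape of a replenishment response whose scheduling instruction
# defers part of the grant until a migration completes, mirroring the example
# of 20 credits split 10/10 above.
replenishment_response = {
    "instance_id": "instance-1102b",
    "resource_type": "cpu",
    "credits_granted": 20,
    "scheduling_instruction": {
        "apply_immediately": 10,
        "apply_on_event": {"event": "migration_complete", "credits": 10},
    },
}

def immediately_usable_credits(response):
    return response["scheduling_instruction"]["apply_immediately"]

print(immediately_usable_credits(replenishment_response))  # 10
```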
[00116] The large computing resources of a provider network may allow for increased utilization of computing resources via resource credits in a manner that makes the resource credits that may be added to or included in a resource credit pool appear unlimited to a customer of a provider network that implements resource credit pools. In this way, resource credits may be acquired for a resource credit pool in a manner commensurate with the type of work performed by the virtual compute instances that replenish resource credits from the resource credit pool. For example, virtual compute instances may perform work that provides revenue or otherwise adds value as a result of performance. Therefore, in such a scenario, a replenishment scheme or technique for acquiring additional resource credits may provide automatic resource credit acquisitions as needed. In another example, virtual compute instances may perform work that is a cost to be constrained or budgeted for (e.g., support functions such as Information Technology (IT) services). In this scenario, scheduled or manual resource credit acquisitions, so as to remain within constraints for performing the work, may be implemented as part of a replenishment scheme or technique. FIG. 14 is a high-level flowchart illustrating various methods and techniques for replenishing a resource credit pool, according to some embodiments.
[00117] As indicated at 1410, available resource credits in a resource credit pool may be monitored, in various embodiments. A resource credit pool balance or other indicator of resource credits may be maintained and/or updated in response to resource credit acquisitions or deductions for replenishing individual resource credit balances. The available resource credits may, in some embodiments, be compared to a replenishment threshold, as indicated at 1420. If, as indicated by the negative exit from 1420, the available resource credits are above the replenishment threshold, then monitoring 1420 of the available resource credits may continue. However, if the available resource credits are below the replenishment threshold, as indicated by the positive exit from 1420, then a replenishment action may be necessary for the resource credit pool.

[00118] A replenishment policy or scheme may be implemented for a resource credit pool, in various embodiments. As illustrated above in FIG. 10, the replenishment policy may be configured at the creation of or during the existence of the resource credit pool. The replenishment policy for the resource credit pool may provide various instructions or actions to take as part of replenishing the resource credit pool. For example, the replenishment policy may indicate the replenishment threshold or other triggering event that determines when new resource credits should be acquired for the resource credit pool. In some embodiments, the replenishment policy may indicate pricing limits or spending limits which may be determinative as to the number of resource credits acquired. The replenishment policy may, in some embodiments, describe a schedule (e.g., monthly, weekly or daily) of resource credit acquisitions or indicate that resource credit acquisitions are manually performed, while other replenishment policies may refill the resource credit pool on demand from authorized virtual compute instances.
[00119] As indicated at 1430, in some embodiments, the replenishment policy for the resource credit pool may provide for an automated replenishment of resource credits. As indicated at 1440, resource credits may be obtained from the provider network to replenish the resource credit pool according to the automated replenishment policy, in some embodiments. For example, as noted above, the replenishment policy may describe a fixed number of resource credits to purchase or a fixed amount of purchasing funds. In some embodiments, the number of resource credits may be determined based on the replenishment threshold (e.g., how many resource credits need to be acquired in order to exceed the replenishment threshold). In some embodiments, resource credits may be acquired at a pre-determined price. In at least some embodiments, a provider network may offer resource credits for purchase to replenish resource credit pools according to a market value for the resource credits. Thus, prices for resource credits may vary (e.g., depending on the type of underlying physical computer resource). When obtaining resource credits, the current credit price may be determined (as illustrated above in FIG. 10).
[00120] If, as indicated by the negative exit from 1430, the replenishment policy for the resource credit pool is not automated, then a low resource credit notification for the resource credit pool may be sent (e.g., to a client associated with a user account), as indicated at 1432.
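Elements 1410 through 1440 (and the notification path at 1432) might be sketched as follows; the policy fields, the spending-limit handling, and the prices are illustrative assumptions:

```python
# Hypothetical sketch of the FIG. 14 loop: compare the pool balance to a
# replenishment threshold and either purchase credits automatically (possibly at
# a market price) or emit a low-credit notification.
def check_replenishment(pool_balance, threshold, policy, current_credit_price):
    if pool_balance >= threshold:
        return {"action": "none"}
    if policy.get("automated"):
        spending_limit = policy.get("spending_limit", float("inf"))
        credits_needed = threshold - pool_balance
        affordable = int(spending_limit // current_credit_price)
        return {"action": "purchase", "credits": min(credits_needed, affordable)}
    return {"action": "notify", "message": "resource credit pool below threshold"}

print(check_replenishment(40, 100, {"automated": True, "spending_limit": 30},
                          current_credit_price=0.75))
print(check_replenishment(40, 100, {"automated": False}, current_credit_price=0.75))
```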
[00121] As discussed above with regard to FIG. 9, resource credit balances may be applied to perform work requests at the type of underlying physical computer resource to which the resource credits correspond. Thus, work requests to utilize processing resources, for instance, may be performed by applying processing resource credits from the processing resource credit balance of a virtual compute instance requesting the work. In some embodiments, a credit-based scheduler, or other component of a virtualization host or other system implementing a virtual compute instance may be configured to perform work requests in such a manner.
[00122] While in some embodiments resource credit balances for compute instances may be replenished (according to a periodic refill rate and/or by carrying over unused resource credits), some workloads or numbers of work requests for a virtual compute instance may be sufficient to exhaust the individual resource credit balance for the instance (and type of physical computer resource). Virtualization hosts or other systems implementing a virtual compute instance may be able to determine when a resource credit balance needs replenishment from a resource credit pool. FIG. 15 is a high-level flowchart illustrating various methods and techniques for requesting resource credits from a resource credit pool for a particular instance, according to some embodiments.
[00123] As indicated at 1510, an individual resource credit balance for a virtual compute instance implemented at a virtualization host may be maintained. As resource credits are expended or added, a table entry or other set of metadata describing resource credit balances may be updated, for example. In at least some embodiments multiple individual resource credit balances for different types of physical computer resources may be maintained (e.g., processing, network, I/O or storage). As virtualization hosts may also implement other virtual compute instances, other individual resource credit balances for those other virtual compute instances may also be maintained, in some embodiments.
[00124] Work requests may be received and/or instigated at the virtual compute instance. These work requests may be requests to perform a certain amount of processing, data transfer over a network, or any other utilization of a physical computer resource implemented at the virtualization host. For some work requests, resource credits maintained in the individual resource credit balance may be sufficient to perform the work request. However, in some cases work requests for a virtual compute instance may exceed the individual resource credit balance. As indicated at 1520, a number of resource credits to perform work request(s) at the virtual compute instance, in addition to the available resource credits in the individual resource credit balance, may be determined, in various embodiments. For example, if the work request(s) utilize a resource for a certain duration or size of operation (e.g., sending network packets over a network to multiple destinations at a certain frequency), the amount of resource credits needed to operate at full utilization of the physical computer resource until completion of the work request(s) may be determined by calculating the number of resource credits necessary to provide utilization of the physical computer resource for the duration of or amount of work in the work requests. If, for instance, an application running on a virtual compute instance needs to perform 500 I/O operations per second (IOPS), then a corresponding number of I/O resource credits to provide utilization of the physical I/O channel that achieves 500 IOPS may be calculated based on the utilization value of individual I/O resource credits.
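The 500 IOPS example above can be made concrete with a small sizing helper; the credit-to-IOPS conversion factor and the local balance used here are illustrative assumptions:

```python
# Hypothetical sketch of sizing a pool credit request: given the work the
# instance needs to perform and the utilization each credit provides, compute
# the shortfall beyond the local balance.
def additional_credits_needed(required_iops, duration_seconds,
                              iops_per_credit_second, local_balance):
    total_io_operations = required_iops * duration_seconds
    credits_required = total_io_operations / iops_per_credit_second
    return max(0.0, credits_required - local_balance)

# 500 IOPS for 60 seconds, where one credit-second of the I/O channel provides
# 1000 operations, with 10 credits already on hand.
print(additional_credits_needed(500, 60, iops_per_credit_second=1000, local_balance=10))
```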
[00125] Once the number of additional resource credits for performing the work requests is determined, a resource credit request may be sent to obtain the number of additional resource credits from a resource credit pool, as indicated at 1530, in various embodiments. In various embodiments, authorization and/or identification credentials may be included in the resource credit request. Other information may also be included, such as the individual resource credit balance for the virtual compute instance (which may be used, for example, to prioritize replenishment requests) in some embodiments. The request may be formatted according to an API or other protocol for resource credit pool manager or other system or device that manages the resource credit pool.
[00126] A response may be received, in various embodiments, to add at least one resource credit to update the individual resource credit balance, as indicated at 1540. For example, although 10 resource credits may have been requested, the response may only indicate to add 5 resource credits (if, for example, the resource credit pool manager implements prioritization or replenishment schemes for replenishing individual resource credit balances). In some embodiments, as noted above with regard to FIG. 13, the response may include a scheduling or other application instruction for the credits (e.g., a rate or event at which some or all of the additional resource credits may be added to the individual resource credit balance, such as at the completion of a migration operation). The updated individual resource credit balance may be applied to perform the work requests, as indicated at 1550, in various embodiments. With the resource credits added to the resource credit balance for the virtual compute instance, for instance, the credit-based scheduler, or other component that applies/enforces physical computer resource utilization according to the individual resource credit balances, may consider/apply the additional resource credits when determining utilization of the physical computer resource for the virtual compute instance.
[00127] As noted above, a provider network may implement variable timeslices for latency- dependent workloads at a virtualization host, according to some embodiments. Because differing virtual compute instances may perform different tasks or functions, vCPUs implemented for the virtual compute instances may process different types of workloads. Some processing workloads may be processing intensive, and thus may be performed without waiting on another component or device to perform, in various embodiments. Other processing workloads may enter wait states until the completion of some other operation, such as an input/output (I/O) operation. Implementing scheduling techniques to handle these different workloads often optimizes one type of workload at the expense of another. Variable timeslices may be implemented to provide for optimal handling of different workloads.
[00128] Timeslices may be implemented to determine an amount of time up to which a vCPU may utilize a processing resource, such as a central processing unit (CPU). A scheduling technique may be implemented to select a vCPU to utilize the processing resource according to a timeslice. If, for example, a vCPU is selected that performs intensive processing operations, that vCPU may utilize the entire timeslice and not finish performing the processing operations. If another vCPU is selected that only performs a few operations and then waits on a response, then that vCPU may spend the rest of the timeslice waiting on the response (if none is received, then a subsequent time slice may provide sufficient time for the response). Instead of waiting, the other vCPU may yield the remaining timeslice, and resume processing when the response is received. In such a scenario, the other vCPU may preempt another vCPU utilizing the processing resource to continue processing based on the response.
[00129] Preempting a running vCPU to allow another vCPU to utilize a processing resource may trigger a context switch, in various embodiments. A context switch may involve changing register values for the processing resource as well as loading different data into a cache which is used for performing processing for the vCPU taking over the processing resource. Context switching consumes a portion of a timeslice allotted to a vCPU. Moreover, as the preempting vCPU performs tasks, the data in a cache may be changed (from the data used by the preempted vCPU). If a processing-intensive vCPU, like the first example vCPU given above, runs for an entire timeslice, multiple other vCPUs may potentially preempt the long-running vCPU, with each context switch performed reducing the amount of the timeslice available for processing and adding the time needed to return the state of the cache to include data for the long-running vCPU, reducing processing throughput. If preemption occurs too often, a preempted vCPU may make little progress due to side effects such as cache thrashing. In various embodiments, preemption compensation may be provided to vCPUs that are preempted to allow latency-dependent vCPUs to utilize the processing resource. A preemption compensation may increase the timeslice for a preempted vCPU. The preemption compensation may be determined based, at least in part, on a reduction in throughput of the preempted vCPU as a result of performing the preemption.
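One plausible way to determine a preemption compensation from the throughput lost to preemptions is sketched below; the per-event context-switch and cache re-warm costs are illustrative assumptions, not values given in the text:

```python
# Hypothetical sketch of determining a preemption compensation: each preemption
# costs the preempted vCPU a context switch plus time to re-warm its cache, so
# the compensation grows with the number of preemptions in the timeslice.
def preemption_compensation(preemptions_in_timeslice,
                            context_switch_cost_us=5.0,
                            cache_rewarm_cost_us=50.0):
    per_event_loss = context_switch_cost_us + cache_rewarm_cost_us
    return preemptions_in_timeslice * per_event_loss

def extended_timeslice(base_timeslice_us, preemptions_in_timeslice):
    return base_timeslice_us + preemption_compensation(preemptions_in_timeslice)

# A 10 ms timeslice preempted twice is extended by the estimated throughput loss.
print(extended_timeslice(10_000, preemptions_in_timeslice=2))  # 10110.0
```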
[00130] FIG. 16 is a timeline illustrating variable timeslices for processing latency-dependent workloads at a virtualization host, according to some embodiments. A physical CPU may be utilized by different virtual CPUs, such as vCPU 1610, vCPU 1620, vCPU 1630, and vCPU 1640. The timeline illustrates utilization of the physical CPU by vCPUs 1610, 1620, 1630, and 1640 over time 1600. For example, vCPU 1610 is illustrated as utilizing the physical CPU from 0 to T8. For this utilization, a scheduled timeslice of 1652 may be used to determine the duration for which vCPU 1610 may utilize the physical CPU. As vCPU 1610 is not preempted during this timeslice 1652, no preemption compensation is determined to increase timeslice 1652.
[00131] At time T8, vCPU 1620 begins utilizing the physical CPU. However, a preemption event 1622 occurs at T10, switching utilization of the physical CPU to vCPU 1630. As noted above, vCPU 1630 may be latency-dependent. Thus, as illustrated in FIG. 16, the utilization of the physical CPU by vCPU 1630 is small relative to the utilization of vCPU 1620 (e.g., completing utilization at T11). Upon resuming utilization of the physical CPU by vCPU 1620, a preemption compensation 1624 may be determined. The preemption compensation 1624 may be used to increase the timeslice 1654 for vCPU 1620 (e.g., increasing the timeslice to end at T18). Multiple preemptions may occur for a vCPU during a timeslice. For example, increased timeslice 1656 for vCPU 1610 illustrates two different preemption events, 1612 and 1614, to allow vCPU 1640 to utilize the physical CPU. Preemption compensation for a timeslice may be dynamic, increasing as the number of preemption events increases. For example, preemption compensation 1616 for vCPU 1610 appears larger than preemption compensation 1624 for vCPU 1620 (as the number of preemptions for vCPU 1610 was greater).
[00132] Increasing a timeslice for a vCPU may be limited to the timeslice in which a preemption event occurs. For example, the next time vCPU 1610 utilizes the physical CPU, a default timeslice (e.g., timeslice 1652) may be again scheduled for vCPU 1610. Yet, in some embodiments, preemption compensation may be provided to increase the timeslice for multiple timeslices for a particular vCPU (e.g., based on analysis of historical preemption events for a vCPU, increasing the timeslice may be performed proactively).
[00133] Please note that the previous descriptions are not intended to be limiting, but are merely provided as an example of variable timeslices for processing latency-dependent workloads. The timeslices, preemptions, and number of vCPUs or CPUs may all be different. Moreover, representations as to the length of utilization for long-running vCPUs or latency-dependent vCPUs are not necessarily drawn to scale. For example, the length of a preemption compensation may not be equivalent to the length of time a preempting vCPU utilizes the CPU. Moreover, although some vCPUs are depicted as latency-dependent and some are not, a vCPU may switch from being latency-dependent to not latency-dependent (or vice versa).
[00134] This specification next includes a description of a provider network, which may implement variable timeslices for latency-dependent workloads at a virtualization host. A number of different methods and techniques to implement variable timeslices for latency-dependent workloads at a virtualization host are then discussed, some of which are illustrated in accompanying flowcharts. Various examples are provided throughout the specification.
[00135] Variable timeslices for latency-dependent workloads may be implemented at virtualization hosts, such as virtualization host 310 discussed above with regard to FIG. 3, which may also be implemented as part of a provider network, such as provider network 200 discussed above with regard to FIG. 2. As discussed above, a virtualization host may host multiple different compute instances, which may also be the same type of compute instance. Resource credits may be implemented for scheduling virtual computer resources. As discussed above in FIG. 3, a virtualization host may also implement a virtualization management module (e.g., virtualization management module 320), which may handle the various interfaces between virtual compute instances and physical computing resource(s) (e.g., resources 340 in FIG. 3, such as various hardware components, processors, I/O devices, networking devices, etc.).
[00136] As discussed above in FIG. 3, a virtualization management module may implement a resource credit balance scheduler to act as a meta-scheduler, managing, tracking, applying, deducting, and/or otherwise handling all resource credit balances for compute instances at the virtualization host. In various embodiments, the resource credit balance scheduler may be configured to receive virtual compute resource work requests from compute instances. Each work request may be directed toward the virtual computer resource corresponding to the compute instance that submitted the work request. For each request, the resource credit balance scheduler may be configured to determine a current resource credit balance for the requesting compute instance, and generate scheduling instructions to apply resource credits when performing the work request. In some embodiments, the resource credit balance scheduler may perform or direct the performance of the scheduling instructions, directing or sending the work request to the underlying physical computing resources to be performed. However, in some embodiments the resource scheduling instructions may be sent to a virtual compute resource scheduler (e.g., such as virtual compute resource scheduler 322 in FIG. 3), which may be a scheduler for the physical resources 340, such as CPU(s), implemented at the virtualization host. The resource credit balance scheduler and/or the virtual compute resource scheduler may be configured to perform the various techniques described below with regard to FIGS. 17 - 18, in order to provide preemption compensation for work performed on behalf of different vCPUs for instances hosted at the virtualization host, apply resource credits, deduct resource credits, and/or otherwise ensure that work requests are performed according to the applied resource credits. For example, the resource credit balance scheduler and/or virtual compute resource scheduler may determine preemption compensation for a vCPU that has been preempted by a latency-dependent vCPU. A scheduled timeslice for the preempted vCPU may be increased according to the determined preemption compensation. Resource credits for the preemption may be deducted from a resource credit balance for the compute instance associated with the latency-dependent vCPU that preempted the vCPU.
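As a rough illustration of the work-request flow just described, the following sketch shows a meta-scheduler looking up an instance's credit balance, building scheduling instructions, and handing the request to a physical resource scheduler. The function name, the instruction dictionary, and the `physical_scheduler.submit` call are hypothetical placeholders, not interfaces defined by this specification.

```python
# Illustrative sketch only: meta-scheduling a work request with resource credits.
def handle_work_request(instance_id, work_request, credit_balances, physical_scheduler):
    """Determine the requesting instance's current credit balance, generate
    scheduling instructions that apply credits, and forward the request."""
    balance = credit_balances.get(instance_id, 0)
    credits_to_apply = min(balance, work_request.get("credits_needed", 0))
    instructions = {
        "instance": instance_id,
        "apply_credits": credits_to_apply,
        "request": work_request,
    }
    credit_balances[instance_id] = balance - credits_to_apply  # deduct applied credits
    physical_scheduler.submit(instructions)                    # hypothetical interface
    return instructions
```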
[00137] The examples of implementing variable timeslices for processing latency-dependent workloads discussed above with regard to a virtualization host (as described above in FIGS. 2 and 3) have been given in regard to virtual computing resources offered by a provider network. Various other types or configurations of virtualization hosts or other virtualization platforms may implement these techniques, which may or may not be offered as part of a network-based service. For example, other scheduling techniques different than a credit-based scheduling technique may be implemented to schedule vCPUs for utilizing a physical processing resource. FIG. 17 is a high-level flowchart illustrating various methods and techniques for implementing variable timeslices for processing latency-dependent workloads, according to some embodiments. These techniques may be implemented using various components of a network-based virtual computing service as described above with regard to FIGS. 2 - 3 or other virtual computing resource hosts.
[00138] As indicated at 1710, utilization of a central processing unit (CPU) for a virtual central processing unit (vCPU) may be initiated according to a scheduled timeslice for the vCPU, in various embodiments. A scheduler, or similar component, may be implemented as part of a virtualization host and may, for instance, evaluate multiple vCPUs implemented for virtual compute instances at a virtualization host and select a vCPU to utilize the CPU. Different scheduling policies or techniques may be implemented, such as fair-share scheduling, round-robin scheduling, or any other scheduling technique. In at least some embodiments, a credit-based scheduler may select vCPUs to utilize the CPU based on a resource credit balance maintained for a virtual compute instance for processing resources. As noted above, resource credits may be applied to increase utilization (e.g., above a baseline utilization) of a physical computer resource for a virtual compute instance. Thus, resource credits may be applied by a scheduler when determining which vCPU to select for utilizing the CPU.
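One possible credit-based selection policy consistent with this description is sketched below; the preference order and the data structures are assumptions made for illustration only.

```python
# Illustrative sketch only: prefer runnable vCPUs whose instances still hold
# processing resource credits, falling back to round-robin order otherwise.
def select_vcpu(runnable_vcpus, credit_balances):
    """runnable_vcpus: vCPU ids in round-robin order.
    credit_balances: mapping of vCPU id -> processing resource credits."""
    with_credits = [v for v in runnable_vcpus if credit_balances.get(v, 0) > 0]
    candidates = with_credits or runnable_vcpus
    return candidates[0] if candidates else None
```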
[00139] When selected for utilization of the CPU, a given vCPU may have a scheduled timeslice (e.g., 20 ms) during which the vCPU may utilize the CPU. In at least some embodiments, a default-sized timeslice may be provided for each vCPU selected to begin utilizing the CPU. As workloads for vCPUs may vary, with some vCPU workloads being processing intensive while other vCPU workloads perform smaller tasks, a given vCPU may or may not utilize all of the scheduled timeslice. Some vCPUs may utilize the CPU to perform tasks that are complete without dependence on any other physical computer resource, whereas some vCPUs may perform tasks that depend upon operations performed by other physical computer resources to complete (e.g., various input/output (I/O) operations for storage, input devices, or networking resources). Latency-dependent workloads for vCPUs may be dependent upon the performance of an I/O operation or other physical computer resource in order to continue to make progress with the performance of tasks. Thus, a latency-dependent vCPU may, in various embodiments, enter a wait state prior to the completion of a scheduled timeslice for the latency-dependent vCPU until the performance of the I/O operation or other physical computer resource is complete. For example, vCPUs that perform tasks to send out requests via a network to another computing system, and do not take further action until a response is received back, may be considered latency-dependent. In at least some embodiments, a latency-dependent vCPU may be I/O bound. The processing workloads of some vCPUs may utilize the CPU for the entire scheduled timeslice (and beyond, if not limited to the scheduled timeslice) and may be sensitive to providing a certain level of throughput for performing tasks. For example, a vCPU workload may be performing various calculations as part of an analysis task (which may not be dependent upon another physical computer resource to be performed). In at least some embodiments, vCPUs that utilize the entire scheduled timeslice may be CPU bound.
[00140] The amount of time utilized by latency-dependent vCPUs may be relatively small when compared with vCPUs that utilize the entire scheduled timeslice. Instead of blocking latency-dependent vCPUs from performing work behind vCPUs that utilize an entire scheduled timeslice, preemption may be performed when, for example, the I/O or other physical computer resource operation for which the latency-dependent vCPU was waiting to complete is finished. Preemption may, in various embodiments, be performed to switch utilization of the CPU from one vCPU to another vCPU (e.g., a latency-dependent vCPU). A preemption event may be detected, for instance, when a latency-dependent vCPU is ready to begin utilizing the CPU again (e.g., the latency-dependent vCPU is no longer in a wait state). In various embodiments, a latency-dependent vCPU may be identified when it is determined that a vCPU did not utilize all of the immediately previous timeslice for the vCPU (e.g., the last time the vCPU utilized the CPU, the vCPU only utilized the CPU for 3 ms out of a 20 ms timeslice). In some embodiments, a latency-processing option may be maintained for each vCPU. If the latency-processing option is enabled for a vCPU, then preemption may be performed for a vCPU that is identified as latency-dependent. If the latency-processing option is not enabled for a vCPU, then preemption may not be performed for that vCPU (whether or not the vCPU utilized all of the immediately previous timeslice for the vCPU).
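The identification and preemption decision described in this paragraph might look like the following sketch; the history format, option names, and example values are illustrative assumptions.

```python
# Illustrative sketch only: identify latency-dependent vCPUs and decide
# whether a waking vCPU should preempt the currently running one.
def is_latency_dependent(previous_used_ns, previous_timeslice_ns):
    # Did not use all of its immediately previous timeslice.
    return previous_used_ns < previous_timeslice_ns

def should_preempt(waking_vcpu, vcpu_options, usage_history):
    used_ns, timeslice_ns = usage_history[waking_vcpu]   # from its last run
    option_enabled = vcpu_options.get(waking_vcpu, {}).get("latency_processing", False)
    return option_enabled and is_latency_dependent(used_ns, timeslice_ns)

# Example: a vCPU that used 3 ms of a 20 ms slice with the option enabled.
history = {"vcpu-b": (3_000_000, 20_000_000)}
options = {"vcpu-b": {"latency_processing": True}}
assert should_preempt("vcpu-b", options, history)
```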
[00141] As indicated by the negative exit from 1720, if a preemption event is not detected, then the vCPU may continue utilizing the CPU for processing. If the timeslice for the vCPU expires, as indicated by the positive exit from 1722, then a new vCPU may be selected and utilization of the CPU may begin for the selected vCPU. It follows that for some vCPUs a scheduled timeslice may not be increased (in contrast with the scheduled timeslice for some preempted vCPUs as discussed below). If the scheduled timeslice has not expired, as indicated by the negative exit from 1722, then utilization of the CPU by the vCPU may continue until preemption (at 1720) or expiration of the timeslice (at 1722).
[00142] If the vCPU is preempted by a latency-dependent vCPU, as indicated by the positive exit from 1720, then utilization of the CPU may be paused for the vCPU, as indicated at 1730. Preemption may be performed by storing a state of the tasks, processes, or other operations performed for the vCPU (e.g., storing register values). The latency-dependent vCPU may utilize the CPU for processing within a scheduled timeslice, which may or may not be the same as the scheduled timeslice for the vCPU that was preempted. In at least some embodiments, the timeslice for the latency-dependent vCPU may be decreased (so as to leave room in the overall utilization of the CPU for a preemption compensation as discussed below).
[00143] Upon resuming utilization of the CPU for the vCPU (which may be after the latency-dependent vCPU has completed utilization of the CPU), a preemption compensation may be determined for the scheduled timeslice of the vCPU. In some embodiments, a preemption compensation may be a pre-defined value (e.g., 1 ms). In some embodiments, the preemption compensation may be determined based, at least in part, on a reduction in throughput of the given vCPU as a result of the preemption. For example, the number of CPU cycles to perform operations to restore register values and reload data for performing the processes, tasks, or other operations of the vCPU into a cache may be calculated or timed as they are performed. In some embodiments, a linear function may be implemented such that the preemption compensation is determined based, at least in part, on the amount of time the latency-dependent vCPU utilized the CPU. Other compensation models or functions, such as exponential decay, may be implemented. In some embodiments, a cache miss counter for the given vCPU may be monitored (e.g., indicating the amount of time spent reloading data into the cache, which reduces throughput of the given vCPU relative to if the values had remained in the cache). Preemption compensation may be determined dynamically or on-the-fly such that additional time may be added to the timeslice as the effects of the preemption become known (e.g., more cache misses occur). The scheduled timeslice for the vCPU may then be increased according to the preemption compensation determined for the scheduled timeslice, as indicated at 1760. For instance, if the preemption compensation is determined to be 3 ms, then 3 ms may be added to a timer, tracker, or other component that determines the amount of a timeslice consumed for a vCPU, increasing the amount of time that the vCPU may utilize the CPU. In a credit-based scheduler, such as discussed above, resource credits may not be deducted for additional time provided by preemption compensation, in some embodiments.
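Two of the compensation models mentioned above (a pre-defined value and a linear function of the preempting vCPU's runtime), along with a cache-miss-based estimate, are sketched below; the scaling factor and the per-miss penalty are illustrative assumptions, not values from this specification.

```python
# Illustrative sketch only: candidate preemption compensation models.
FIXED_COMPENSATION_NS = 1_000_000            # e.g., a pre-defined 1 ms

def linear_compensation(preemptor_runtime_ns, factor=0.5):
    # Compensation grows with how long the preempting vCPU held the CPU.
    return int(factor * preemptor_runtime_ns)

def cache_refill_compensation(cache_misses, miss_penalty_ns=100):
    # Estimate from cache misses observed after resuming, approximating the
    # time spent reloading the preempted vCPU's data into the cache.
    return cache_misses * miss_penalty_ns

# The chosen value is then added to the preempted vCPU's timeslice, for
# example: ts.compensate(linear_compensation(preemptor_runtime_ns=2_000_000)).
```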
[00144] As illustrated by the arrow from element 1760 to element 1720, the utilization of the vCPU may be preempted again by another latency-dependent vCPU (either the same or a different latency-dependent vCPU). For instance, some vCPUs may be preempted multiple times; however, the corresponding preemption compensations for the preemptions may allow the vCPU to achieve the same throughput for a single timeslice as if no preemptions had occurred during the timeslice (reducing or eliminating the impact of multiple context switches and/or other operations when a preemption occurs). If no further preemptions occur and/or the increased timeslice expires (as indicated by the positive exit from 1722), then a next vCPU may begin utilization of the CPU.
[00145] FIGS. 8 and 9, discussed above, provide examples of a credit-based scheduler that may be implemented for utilizing physical computer resources at a virtualization host for a virtual compute instance. Credit-based scheduling may apply credits from a resource credit balance for the virtual compute instance in order to increase utilization of the underlying physical computer resources for the virtual compute instance. In some embodiments, a resource credit balance for processing resources, such as a CPU, may be maintained for each virtual compute instance. As a vCPU for a virtual compute instance obtains utilization of the processing resources of a virtualization host, resource credits may be deducted from the resource credit balance for processing for the virtual compute instance. However, providing preemption compensations to some vCPUs may provide no actual compensation if resource credits of the preempted vCPU are applied when using a preemption compensation (as the preempted vCPU is still "paying" for the time used to perform the context switches). If no resource credits are deducted from any resource credit balances, then the "free" utilization time may result in utilization of the processing resources being oversold (e.g., the additional time given to the preempted vCPU may prevent another vCPU from receiving an amount of processing utilization according to the number of resource credits in the resource credit balance for the other vCPU). FIG. 18 is a high-level flowchart illustrating various methods and techniques for updating resource credit balances for virtual compute instances for providing preemption compensations, according to some embodiments.
[00146] As indicated at 1810, an interrupt to resume processing for a latency-dependent vCPU may be detected, in various embodiments. For example, a network packet may be received, a storage device may return data or an acknowledgment of a write, or any other I/O operation or other physical computer resource operation upon which the latency-dependent vCPU depends may complete and trigger an interrupt or event, which may place the latency-dependent vCPU into a ready-to-process state. Various scheduling techniques may be used to bump or increase the priority of the latency-dependent vCPU to trigger a preemption event. As indicated at 1820, a vCPU currently utilizing the CPU may be preempted to allow the latency-dependent vCPU to utilize the CPU, in various embodiments. When the latency-dependent vCPU is finished utilizing the CPU, the resource credit balance for the latency-dependent vCPU may be updated, as indicated at 1830, to deduct credit(s) for utilization by the latency-dependent vCPU and credit(s) for providing a preemption compensation to the preempted vCPU, in various embodiments. Latency processing may, in such embodiments, be effectively more costly in terms of resource credits than non-latency processing. However, in this way latency processing may provide a faster (and therefore lower latency) response for latency-dependent vCPUs, while allowing the cost for such processing to be borne by the vCPU initiating the preemption instead of the vCPU being preempted.
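The charging model in this paragraph, in which the preempting latency-dependent vCPU's instance pays both for its own CPU time and for the compensation granted to the preempted vCPU, might be sketched as follows; the credit conversion rate and balance values are assumptions for illustration.

```python
# Illustrative sketch only: charge the preempting instance for its own CPU
# time plus the preemption compensation it caused.
def charge_preemptor(balances, preemptor_instance, used_ns, compensation_ns,
                     ns_per_credit=10_000_000):
    own_cost = used_ns / ns_per_credit
    compensation_cost = compensation_ns / ns_per_credit
    balances[preemptor_instance] -= (own_cost + compensation_cost)
    return balances[preemptor_instance]

balances = {"instance-a": 10.0}
charge_preemptor(balances, "instance-a", used_ns=5_000_000, compensation_ns=3_000_000)
assert abs(balances["instance-a"] - 9.2) < 1e-9
```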
[00147] The methods described herein may in various embodiments be implemented by any combination of hardware and software. For example, in one embodiment, the methods may be implemented by a computer system (e.g., a computer system as in FIG. 19) that includes one or more processors executing program instructions stored on a computer-readable storage medium coupled to the processors. The program instructions may be configured to implement the functionality described herein (e.g., the functionality of various servers and other components that implement the network-based virtual computing resource provider described herein). The various methods as illustrated in the figures and described herein represent example embodiments of methods. The order of any method may be changed, and various elements may be added, reordered, combined, omitted, modified, etc.
[00148] Embodiments of dynamic virtual resource request rate controls for physical resources, resource credit pools for replenishing resource credit balances of virtual compute instances, and variable timeslices for processing latency-dependent workloads as described herein may be executed on one or more computer systems, which may interact with various other devices. FIG. 19 is a block diagram illustrating an example computer system, according to various embodiments. For example, computer system 2000 may be configured to implement nodes of a compute cluster, a distributed key value data store, and/or a client, in different embodiments. Computer system 2000 may be any of various types of devices, including, but not limited to, a personal computer system, desktop computer, laptop or notebook computer, mainframe computer system, handheld computer, workstation, network computer, a consumer device, application server, storage device, telephone, mobile telephone, or in general any type of computing device.
[00149] Computer system 2000 includes one or more processors 2010 (any of which may include multiple cores, which may be single or multi-threaded) coupled to a system memory 2020 via an input/output (I/O) interface 2030. Computer system 2000 further includes a network interface 2040 coupled to I/O interface 2030. In various embodiments, computer system 2000 may be a uniprocessor system including one processor 2010, or a multiprocessor system including several processors 2010 (e.g., two, four, eight, or another suitable number). Processors 2010 may be any suitable processors capable of executing instructions. For example, in various embodiments, processors 2010 may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors 2010 may commonly, but not necessarily, implement the same ISA. The computer system 2000 also includes one or more network communication devices (e.g., network interface 2040) for communicating with other systems and/or components over a communications network (e.g. Internet, LAN, etc.). For example, a client application executing on system 2000 may use network interface 2040 to communicate with a server application executing on a single server or on a cluster of servers that implement one or more of the components of the provider network described herein. In another example, an instance of a server application executing on computer system 2000 may use network interface 2040 to communicate with other instances of the server application (or another server application) that may be implemented on other computer systems (e.g., computer systems 2090).
[00150] In the illustrated embodiment, computer system 2000 also includes one or more persistent storage devices 2060 and/or one or more I/O devices 2080. In various embodiments, persistent storage devices 2060 may correspond to disk drives, tape drives, solid state memory, other mass storage devices, or any other persistent storage device. Computer system 2000 (or a distributed application or operating system operating thereon) may store instructions and/or data in persistent storage devices 2060, as desired, and may retrieve the stored instruction and/or data as needed. For example, in some embodiments, computer system 2000 may host a storage system server node, and persistent storage 2060 may include the SSDs attached to that server node.
[00151] Computer system 2000 includes one or more system memories 2020 that are configured to store instructions and data accessible by processor(s) 2010. In various embodiments, system memories 2020 may be implemented using any suitable memory technology, (e.g., one or more of cache, static random access memory (SRAM), DRAM, RDRAM, EDO RAM, DDR 10 RAM, synchronous dynamic RAM (SDRAM), Rambus RAM, EEPROM, non-volatile/Flash-type memory, or any other type of memory). System memory 2020 may contain program instructions 2025 that are executable by processor(s) 2010 to implement the methods and techniques described herein. In various embodiments, program instructions 2025 may be encoded in platform native binary, any interpreted language such as JavaTM byte-code, or in any other language such as C/C++, JavaTM, etc., or in any combination thereof. For example, in the illustrated embodiment, program instructions 2025 include program instructions executable to implement the functionality of a provider network and/or virtualization host, in different embodiments. In some embodiments, program instructions 2025 may implement multiple separate clients, server nodes, and/or other components.
[00152] In some embodiments, program instructions 2025 may include instructions executable to implement an operating system (not shown), which may be any of various operating systems, such as UNIX, LINUX, SolarisTM, MacOSTM, WindowsTM, etc. Any or all of program instructions 2025 may be provided as a computer program product, or software, that may include a non-transitory computer-readable storage medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to various embodiments. A non-transitory computer-readable storage medium may include any mechanism for storing information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). Generally speaking, a non- transitory computer-accessible medium may include computer-readable storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM coupled to computer system 2000 via I/O interface 2030. A non-transitory computer-readable storage medium may also include any volatile or non-volatile media such as RAM (e.g. SDRAM, DDR SDRAM, RDRAM, SRAM, etc.), ROM, etc., that may be included in some embodiments of computer system 2000 as system memory 2020 or another type of memory. In other embodiments, program instructions may be communicated using optical, acoustical or other form of propagated signal (e.g., carrier waves, infrared signals, digital signals, etc.) conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface 2040.
[00153] In some embodiments, system memory 2020 may include data store 2045, which may be configured as described herein. In general, system memory 2020 (e.g., data store 2045 within system memory 2020), persistent storage 2060, and/or remote storage 2070 may store data blocks, replicas of data blocks, metadata associated with data blocks and/or their state, configuration information, and/or any other information usable in implementing the methods and techniques described herein.
[00154] In one embodiment, I/O interface 2030 may be configured to coordinate I/O traffic between processor 2010, system memory 2020 and any peripheral devices in the system, including through network interface 2040 or other peripheral interfaces. In some embodiments, I/O interface 2030 may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory 2020) into a format suitable for use by another component (e.g., processor 2010). In some embodiments, I/O interface 2030 may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface 2030 may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments, some or all of the functionality of I/O interface 2030, such as an interface to system memory 2020, may be incorporated directly into processor 2010.
[00155] Network interface 2040 may be configured to allow data to be exchanged between computer system 2000 and other devices attached to a network, such as other computer systems 2090 (which may implement one or more components of the distributed system described herein), for example. In addition, network interface 2040 may be configured to allow communication between computer system 2000 and various I/O devices 2050 and/or remote storage 2070. Input/output devices 2050 may, in some embodiments, include one or more display terminals, keyboards, keypads, touchpads, scanning devices, voice or optical recognition devices, or any other devices suitable for entering or retrieving data by one or more computer systems 2000. Multiple input/output devices 2050 may be present in computer system 2000 or may be distributed on various nodes of a distributed system that includes computer system 2000. In some embodiments, similar input/output devices may be separate from computer system 2000 and may interact with one or more nodes of a distributed system that includes computer system 2000 through a wired or wireless connection, such as over network interface 2040. Network interface 2040 may commonly support one or more wireless networking protocols (e.g., Wi-Fi/IEEE 802.11, or another wireless networking standard). However, in various embodiments, network interface 2040 may support communication via any suitable wired or wireless general data networks, such as other types of Ethernet networks, for example. Additionally, network interface 2040 may support communication via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks, via storage area networks such as Fibre Channel SANs, or via any other suitable type of network and/or protocol. In various embodiments, computer system 2000 may include more, fewer, or different components than those illustrated in FIG. 19 (e.g., displays, video cards, audio cards, peripheral devices, other network interfaces such as an ATM interface, an Ethernet interface, a Frame Relay interface, etc.).
[00156] It is noted that any of the distributed system embodiments described herein, or any of their components, may be implemented as one or more network-based services. For example, a compute cluster within a computing service may present computing services and/or other types of services that employ the distributed computing systems described herein to clients as network-based services. In some embodiments, a network-based service may be implemented by a software and/or hardware system designed to support interoperable machine-to-machine interaction over a network. A network-based service may have an interface described in a machine-processable format, such as the Web Services Description Language (WSDL). Other systems may interact with the network-based service in a manner prescribed by the description of the network-based service's interface. For example, the network-based service may define various operations that other systems may invoke, and may define a particular application programming interface (API) to which other systems may be expected to conform when requesting the various operations.
[00157] In various embodiments, a network-based service may be requested or invoked through the use of a message that includes parameters and/or data associated with the network-based services request. Such a message may be formatted according to a particular markup language such as Extensible Markup Language (XML), and/or may be encapsulated using a protocol such as Simple Object Access Protocol (SOAP). To perform a network-based services request, a network-based services client may assemble a message including the request and convey the message to an addressable endpoint (e.g., a Uniform Resource Locator (URL)) corresponding to the network-based service, using an Internet-based application layer transfer protocol such as Hypertext Transfer Protocol (HTTP).

[00158] In some embodiments, network-based services may be implemented using Representational State Transfer ("RESTful") techniques rather than message-based techniques. For example, a network-based service implemented according to a RESTful technique may be invoked through parameters included within an HTTP method such as PUT, GET, or DELETE, rather than encapsulated within a SOAP message.
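For illustration only, a RESTful invocation of a hypothetical network-based service endpoint might look like the following sketch; the URL, resource path, and payload are made up and do not correspond to any interface defined in this specification.

```python
# Illustrative sketch only: RESTful GET and PUT requests with the standard library.
import json
import urllib.request

def get_resource(url):
    with urllib.request.urlopen(url) as resp:        # HTTP GET
        return json.loads(resp.read().decode("utf-8"))

def put_resource(url, payload):
    data = json.dumps(payload).encode("utf-8")
    req = urllib.request.Request(url, data=data, method="PUT",
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:        # HTTP PUT
        return resp.status

# e.g. put_resource("https://service.example.com/instances/i-123", {"state": "running"})
```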
[00159] Embodiments of the present disclosure can be described in view of the following clauses:
1. A system, comprising:
at least one processor;
a memory, comprising program instructions that when executed by the at least one processor cause the at least one processor to implement a virtualization host for a plurality of virtual compute instances;
wherein the virtualization host is configured to:
maintain a plurality of individual virtual resource request queues for respective virtual computer resources of the plurality of virtual compute instances that utilize a physical computer resource;
implement a dynamic rate control for individual ones of the plurality of individual virtual resource request queues;
the dynamic rate control, configured to:
place a work request for a virtual computer resource of a virtual compute instance from an individual virtual resource request queue into a physical resource request queue to perform the work request at the physical computer resource;
in response to the placement of the work request:
dynamically determine a delay based, at least in part, on a workload of the physical resource request queue; and
after imposition of the delay, place a next work request from the individual virtual resource request queue into the physical resource request queue;
wherein a work request from at least one other individual virtual resource request queue of the plurality of individual virtual resource request queues is placed in the physical resource request queue during the delay.
2. The system of clause 1, wherein, to dynamically determine the delay, the dynamic rate control is configured to: identify an initial delay for the individual virtual resource request queue based, at least in part, on a determined utilization of the physical computer resource for the virtual compute instance;
calculate a probability for adding a random delay based, at least in part, on the workload of the physical resource request queue; and
add the random delay to the initial delay according to the calculated probability.
3. The system of clause 2, wherein the determined utilization of the physical computer resource for the virtual compute instance is based, at least in part, on a resource credit balance for the physical resource maintained for the virtual compute instance.
4. The system of clause 1, wherein the virtualization host is implemented as part of a provider network that offers a network-based virtual computing service, wherein the virtualization host is multi-tenant such that at least one of the plurality of virtual compute instances implemented at the virtualization host is maintained for a client of the provider network that is different than another client of the provider network maintaining another one of the plurality of virtual compute instances at the virtualization host.
5. A method, comprising:
performing, by one or more computing devices that together implement a virtualization host for a plurality of virtual compute instances:
placing a work request for a virtual computer resource from an individual virtual resource request queue maintained for a virtual compute instance into a physical resource request queue to perform the work request at a physical computer resource, wherein the virtual compute instance is one of the plurality of virtual compute instances, wherein the individual virtual resource request queue is one of a plurality of respective individual virtual resource request queues for respective virtual computer resources including the virtual computer resource of the plurality of virtual compute instances;
in response to placing the work request:
dynamically determining a delay based, at least in part, on a workload of the physical resource request queue; and
after imposing the delay, placing a next work request from the individual virtual resource request queue into the physical resource request queue;
wherein a work request from at least one other individual virtual resource request queue of the plurality of individual virtual resource request queues is placed in the physical resource request queue during the delay.
6. The method of clause 5, wherein dynamically determining the delay comprises: identifying an initial delay for the individual virtual resource request queue based, at least in part, on a determined utilization of the physical computer resource for the virtual compute instance;
calculating a probability for adding a random delay based, at least in part, on the workload of the physical resource request queue; and
adding the random delay to the initial delay according to the calculated probability.
7. The method of clause 6, wherein the determined utilization of the physical computer resource for the virtual compute instance is based, at least in part, on a resource credit balance for the physical resource maintained for the virtual compute instance.
8. The method of clause 6, wherein determining the workload of the physical resource request queue comprises smoothing one or more workload metrics indicating workload of the physical resource request queue at different points in time.
9. The method of clause 6, further comprising:
in response to placing the next work request into the physical resource request queue: performing dynamically determining the delay, wherein according to the calculated probability the random delay is not added to the initial delay; and
imposing the initial delay prior to placing another work request from the individual virtual resource request queue into the physical resource request queue.
10. The method of clause 5, wherein placing the work request, dynamically determining the delay, and placing the next work request are performed for other individual virtual resource request queues for different virtual computer resources of the plurality of virtual compute instances that correspond to different physical computer resources.
11. The method of clause 5, wherein the different physical computer resources comprise:
processing resources;
networking resources; or
storage resources.
12. The method of clause 5, wherein a determined utilization of the physical computer resource for the virtual compute instance is higher than a determined utilization of the physical computer resource for a virtual compute instance for which the work request from the at least one other individual virtual resource request queue is placed in the physical resource request queue during the delay.
13. The method of clause 5, wherein the virtualization host is implemented as part of a provider network that offers a network-based virtual computing service, wherein the virtualization host is multi-tenant such that at least one of the plurality of virtual compute instances implemented at the virtualization host is maintained for a client of the provider network that is different than another client of the provider network maintaining another one of the plurality of virtual compute instances at the virtualization host.
14. A non-transitory, computer-readable storage medium, storing program instructions that when executed by one or more computing devices cause the one or more computing devices to implement a virtualization host for a plurality of compute instances, wherein the virtualization host implements:
placing a work request for a virtual computer resource from an individual virtual resource request queue maintained for a virtual compute instance into a physical resource request queue to perform the work request at a physical computer resource, wherein the virtual compute instance is one of the plurality of virtual compute instances, wherein the individual virtual resource request queue is one of a plurality of respective individual virtual resource request queues for respective virtual computer resources including the virtual computer resource of the plurality of virtual compute instances;
in response to placing the work request:
dynamically determining a delay based, at least in part, on a workload of the physical resource request queue; and
after imposing the delay, placing a next work request from the individual virtual resource request queue into the physical resource request queue;
wherein a work request from at least one other individual virtual resource request queue of the plurality of individual virtual resource request queues is placed in the physical resource request queue during the delay.
15. The non-transitory, computer-readable storage medium of clause 14, wherein, in dynamically determining the delay, the program instructions cause the virtualization host to further implement: identifying an initial delay for the individual virtual resource request queue based, at least in part, on a determined utilization of the physical computer resource for the virtual compute instance;
calculating a probability for adding a random delay based, at least in part, on the workload of the physical resource request queue; and
adding the random delay to the initial delay according to the calculated probability.
16. The non-transitory, computer-readable storage medium of clause 15, wherein the determined utilization of the physical computer resource for the virtual compute instance is based, at least in part, on a resource credit balance for the physical resource maintained for the virtual compute instance.
17. The non-transitory, computer-readable storage medium of clause 15, wherein determining the workload of the physical resource request queue comprises smoothing one or more workload metrics indicating workload of the physical resource request queue at different points in time.
18. The non-transitory, computer-readable storage medium of clause 14, wherein the program instructions cause the virtualization host to further implement:
in response to placing the next work request into the physical resource request queue: performing dynamically determining the delay, wherein according to the calculated probability the random delay is not added to the initial delay; and
imposing the initial delay prior to placing another work request from the individual virtual resource request queue into the physical resource request queue.
19. The non-transitory, computer-readable storage medium of clause 14, wherein a determined utilization of the physical computer resource for the virtual compute instance is lower than a determined utilization of the physical computer resource for a virtual compute instance for which the work request from the at least one other individual virtual resource request queue is placed in the physical resource request queue during the delay.
20. The non-transitory, computer-readable storage medium of clause 14, wherein the virtualization host is implemented as part of a provider network that offers a network-based virtual computing service, wherein the virtualization host is multi-tenant such that at least one of the plurality of virtual compute instances implemented at the virtualization host is maintained for a client of the provider network that is different than another client of the provider network maintaining another one of the plurality of virtual compute instances at the virtualization host.
21. A system, comprising:
a plurality of compute nodes that together implement a provider network;
at least some of the plurality of compute nodes implement respective virtualization hosts for a plurality of virtual compute instances implemented as part of the provider network;
a control plane for the provider network, the control plane configured to:
maintain a resource credit pool comprising a plurality of resource credits available to replenish an individual resource credit balance for one or more virtual compute instances that are authorized to obtain resource credits from the resource credit pool of the plurality of virtual compute instances, wherein the plurality of resource credits are individually applicable to increase utilization of a physical computer resource for an individual one of the one or more authorized virtual compute instances at one of the respective virtualization hosts;
receive a resource credit request for a virtual compute instance of the one or more authorized virtual compute instances to replenish the individual resource credit balance for the virtual compute instance;
in response to the receipt of the resource credit request:
determine a number of resource credits to add to the individual resource credit balance for the virtual compute instance;
send a response indicating the number of resource credits to be added to the individual resource credit balance for the virtual compute instance; and
remove the calculated number of resource credits from the resource credit pool.
22. The system of clause 21, wherein the control plane is further configured to:
in response to the receipt of the resource credit request:
evaluate a respective virtualization host that implements the virtual compute instance based, at least in part on the resource credit request, wherein the evaluation detects a migration event for the respective virtualization host; in response to the detection of the migration event for the respective virtualization host:
select one or more virtual compute instances implemented at the respective virtualization host to migrate; select a respective destination virtualization host of the respective virtualization hosts for the one or more selected virtual compute instances; and
direct migration of the one or more selected virtual compute instances from the virtualization host to the respective destination virtual host.
23. The system of clause 21, wherein the control plane is further configured to:
receive a request to add one or more resource credits to the resource credit pool; and in response to the receipt of the request to add the one or more resource credits to the resource credit pool, update the resource credit pool to include the one or more resource credits.
24. The system of clause 21, wherein a respective virtualization host that implements the virtual compute instance implemented at one of the at least some compute nodes is configured to:
maintain the individual resource credit balance for the virtual compute instance;
determine a number of resource credits in addition to the individual resource credit balance to perform one or more work requests at the physical computer resource implemented at the respective virtualization host for the virtual compute instance; send the resource credit request to the control plane, wherein the resource credit request indicates the determined number of resource credits to perform the one or more work requests;
receive from the control plane a response to add at least one of the requested number of resource credits to update the individual resource credit balance; and apply the updated individual resource credit balance to perform the one or more work requests.
25. A method, comprising:
performing, by one or more computing devices:
maintaining a resource credit pool comprising a plurality of resource credits available to replenish an individual resource credit balance for one or more virtual compute instances implemented as part of a provider network, wherein the one or more virtual compute instances are authorized to obtain resource credits from the resource credit pool, wherein the plurality of resource credits are individually applicable to increase utilization of a physical computer resource at a virtualization host implementing one of the one or more virtual compute instances;
receiving a resource credit request for a virtual compute instance of the one or more authorized virtual compute instances to replenish the individual resource credit balance for the virtual compute instance;
in response to receiving the resource credit request:
determining a number of resource credits to add to the individual resource credit balance for the virtual compute instance;
sending a response indicating the number of resource credits to be added to the individual resource credit balance for the virtual compute instance; and
updating the resource credit pool to remove the number of resource credits from the resource credit pool.
26. The method of clause 25, further comprising:
evaluating a virtualization host that implements the virtual compute instance based, at least in part on the resource credit request, wherein evaluating the virtualization host comprises detecting a migration event for the virtualization host; in response to detecting the migration event for the virtualization host:
selecting one or more virtual compute instances implemented at the virtualization host to migrate;
selecting a respective destination virtualization host in the provider network for the one or more selected virtual compute instances; and directing migration of the one or more selected virtual compute instances from the virtualization host to the respective destination virtual host.
27. The method of clause 26, wherein the response indicating the number of resource credits comprises a scheduling instruction for applying the resource credits according to the performance migration of the one or more selected virtual compute instances from the virtualization host to the respective destination virtual host.
28. The method of clause 26, wherein the provider network is implemented as a network-based virtual computing service, wherein the resource credit pool and the authorized one or more virtual compute instances are linked to a client account of the network-based virtual computing service, and wherein at least one of the one or more virtual compute instances selected to migrate is linked to a client account different than the client account.
29. The method of clause 25, wherein the one or more computing devices together implement a control plane for the provider network, and wherein the method further comprises: performing, by at least one computing device:
maintaining the individual resource credit balance for the virtual compute instance;
determining a number of resource credits in addition to the individual resource credit balance to perform one or more work requests at the physical computer resource implemented at a virtualization host implementing the virtual compute instance;
sending the resource credit request to the control plane, wherein the resource credit request indicates the determined number of resource credits to perform the one or more work requests;
receiving from the control plane a response to add at least one of the requested number of resource credits to update the individual resource credit balance; and
applying the updated individual resource credit balance to perform the one or more work requests.
30. The method of clause 25, further comprising:
receiving a request to add one or more resource credits to the resource credit pool; and in response to receiving the request, updating the resource credit pool to include the one or more resource credits.
31. The method of clause 25, further comprising:
monitoring the resource credit pool to determine a number of available resource credits; determining that the number of available resource credits is below a replenishment threshold for the resource credit pool; and
in response to determining that the number of available resource credits is below the replenishment threshold, obtaining one or more resource credits from the provider network to add to the resource credit pool.
32. The method of clause 25, wherein the resource credit pool is one of a plurality of resource credit pools, wherein individual ones of the plurality of resource credit pools correspond to different types of physical computer resources, and wherein maintaining the resource credit pool, receiving the resource credit requests, determining the number of resource credits to add, sending the response, and updating the resource credit pool are performed for different ones of the plurality of resource credit pools.
33. The method of clause 32, wherein the plurality of resource credit pools are linked to a user account for the provider network.
34. A non-transitory, computer-readable storage medium, storing program instructions that when executed by one or more computing devices cause the one or more computing devices to implement:
maintaining a resource credit pool comprising a plurality of resource credits available to replenish an individual resource credit balance for one or more virtual compute instances implemented as part of a provider network, wherein the one or more virtual compute instances are authorized to obtain resource credits from the resource credit pool, wherein the plurality of resource credits are individually applicable to increase utilization of a physical computer resource at a virtualization host implementing one of the one or more virtual compute instances;
receiving a resource credit request for a virtual compute instance of the one or more virtual compute instances to replenish an individual resource credit balance for the virtual compute instance;
in response to receiving the resource credit request:
identifying a number of resource credits to add to the individual resource credit balance for the virtual compute instance;
sending a response indicating the number of resource credits to be added to the individual resource credit balance for the virtual compute instance; and updating the resource credit pool to remove the number of resource credits from the resource credit pool.
35. The non-transitory, computer-readable storage medium of clause 34, wherein the program instructions cause the one or more computing devices to implement:
in response to receiving the resource credit request:
evaluating a virtualization host that implements the virtual compute instance based, at least in part on the resource credit request, wherein evaluating the virtualization host comprises detecting a migration event for the virtualization host;
in response to detecting the migration event for the virtualization host:
selecting one or more virtual compute instances implemented at the virtualization host to migrate; selecting a respective destination virtualization host in the provider network for the one or more selected virtual compute instances; and
directing migration of the one or more selected virtual compute instances from the virtualization host to the respective destination virtual host.
36. The non-transitory, computer-readable storage medium of clause 35, wherein at least one of the one or more virtual compute instances selected to migrate is the virtual compute instance.
37. The non-transitory, computer-readable storage medium of clause 34, wherein the program instructions cause the one or more computing devices to further implement:
in response to receiving the resource credit request, obtaining the number of resource credits from the provider network to add to the resource credit pool.
38. The non-transitory, computer-readable storage medium of clause 34, wherein the program instructions cause the one or more computing devices to further implement:
receiving a request to add one or more resource credits to the resource credit pool; and in response to receiving the request, updating the resource credit pool to include the one or more resource credits.
39. The non-transitory, computer-readable storage medium of clause 34, wherein the resource credit pool is one of a plurality of resource credit pools, wherein individual ones of the plurality of resource credit pools correspond to different types of physical computer resources, and wherein maintaining the resource credit pool, receiving the resource credit requests, determining the number of resource credits to add, sending the response, and updating the resource credit pool are performed for different ones of the plurality of resource credit pools.
40. The non-transitory, computer-readable storage medium of clause 34, wherein the provider network is implemented as a network-based virtual computing service, wherein the resource credit pool and the authorized one or more virtual compute instances are linked to a client account of the network-based virtual computing service, wherein another resource credit pool is maintained for one or more other virtual compute instances authorized to obtain resource credits from the other resource credit pool, wherein the other resource credit pool and the authorized one or more other virtual compute instances are linked to a different client account of the network-based virtual computing service, and wherein at least one of the other virtual compute instances is implemented on a same virtualization host as one of the one or more virtual compute instances.
41. A system, comprising:
at least one processor;
a memory, comprising program instructions that when executed by the at least one processor cause the at least one processor to implement a virtualization host for a plurality of virtual compute instances;
the virtualization host, configured to:
for a given virtual central processing unit (vCPU) of a virtual compute instance of the plurality of virtual compute instances, wherein the given vCPU currently utilizes the at least one processor according to a scheduled timeslice:
preempt the given vCPU to utilize the processor for a latency-dependent vCPU of a different virtual compute instance of the plurality of virtual compute instances, wherein the preemption pauses the utilization of the at least one processor for the given vCPU prior to completion of the scheduled timeslice for the given vCPU;
upon resumption of the utilization of the at least one processor for the given vCPU:
determine a preemption compensation for the scheduled timeslice of the given vCPU; and
increase the scheduled timeslice for the given vCPU such that the utilization of the at least one processor for the given vCPU is performed according to the increased scheduled timeslice.
42. The system of clause 41, wherein the determination of the preemption compensation for the scheduled timeslice of the given vCPU is based, at least in part, on a reduction in throughput as a result of the preemption of the given vCPU.
43. The system of clause 41, wherein the virtualization host implements a credit-based scheduler for scheduling utilization of physical computer resources including the at least one processor among the plurality of virtual compute instances, wherein the virtualization host maintains a respective resource credit balance for the given vCPU and the latency-dependent vCPU, wherein the utilization of the at least one processor for the given vCPU and the latency-dependent vCPU is deducted from the respective resource credit balance, and wherein the virtualization host is further configured to: update the respective resource credit balance for the latency-dependent vCPU to deduct one or more resource credits corresponding to the preemption exemption for the given vCPU.
44. The system of clause 41, wherein the virtualization host is implemented as part of a provider network that offers a network-based virtual computing service, wherein the virtualization host is multi-tenant such that at least one of the plurality of virtual compute instances implemented at the virtualization host is maintained for a client of the provider network that is different than another client of the provider network maintaining another one of the plurality of virtual compute instances at the virtualization host.
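As a non-limiting illustration of the timeslice compensation in clauses 41-44, the short Python sketch below measures how long a latency-dependent vCPU holds the CPU during a preemption and extends the preempted vCPU's scheduled timeslice by that amount; the function name and the use of elapsed wall-clock time as the compensation measure are assumptions for the example, not the claimed implementation.

import time

def compensated_timeslice(scheduled_timeslice_s, run_latency_dependent_vcpu):
    """Illustrative only: pause the given vCPU, let the latency-dependent vCPU
    run, then return an increased timeslice for the given vCPU."""
    preempt_start = time.monotonic()
    run_latency_dependent_vcpu()          # latency-dependent vCPU uses the CPU
    preempted_for_s = time.monotonic() - preempt_start

    # Preemption compensation: here simply the time lost to the preemption;
    # clause 42 ties it to the reduction in throughput caused by preemption.
    compensation_s = preempted_for_s
    return scheduled_timeslice_s + compensation_s

For instance, compensated_timeslice(0.030, lambda: time.sleep(0.002)) would return a timeslice of roughly 0.032 seconds.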
45. A method, comprising:
performing, by one or more computing devices that together implement a virtualization host for a plurality of virtual compute instances:
for a given virtual central processing unit (vCPU) of a virtual compute instance of the plurality of virtual compute instances that currently utilizes a central processing unit (CPU) of the virtualization host according to a scheduled timeslice:
preempting the given vCPU to utilize the CPU for a latency-dependent vCPU of a different virtual compute instance of the plurality of virtual compute instances, wherein the preempting pauses the utilization of the CPU for the given vCPU prior to completion of the scheduled timeslice for the given vCPU;
upon resuming the utilization of the CPU for the given vCPU:
determining a preemption compensation for the scheduled timeslice of the given vCPU; and
increasing the scheduled timeslice for the given vCPU such that the utilization of the CPU for the given vCPU is performed according to the increased scheduled timeslice.
46. The method of clause 45, wherein the given vCPU is CPU bound.
47. The method of clause 45, wherein determining the preemption compensation for the scheduled timeslice of the given vCPU is based, at least in part, on an amount of time that the latency-dependent vCPU utilized the CPU.
48. The method of clause 45, wherein the virtualization host implements a credit-based scheduler for scheduling utilization of physical computer resources including the CPU among the plurality of virtual compute instances, wherein the virtualization host maintains a respective resource credit balance for the given vCPU and the latency-dependent vCPU, and wherein the utilization of the CPU for the given vCPU and the latency-dependent vCPU is deducted from the respective resource credit balances.
49. The method of clause 48, wherein the method further comprises updating the respective resource credit balance for the latency-dependent vCPU to deduct one or more resource credits corresponding to the preemption exemption for the given vCPU.
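A minimal sketch of the credit deduction in clauses 48-49, assuming only that credit balances are kept in a plain dictionary and that one resource credit corresponds to one second of CPU time (both assumptions for the example):

def charge_latency_dependent_vcpu(credit_balances, latency_vcpu_id,
                                  compensation_s, credits_per_second=1.0):
    """Illustrative only: deduct credits from the latency-dependent vCPU's
    balance in an amount corresponding to the preemption of the given vCPU."""
    # The credits-per-second conversion rate is an assumed parameter.
    credit_balances[latency_vcpu_id] -= compensation_s * credits_per_second
    return credit_balances[latency_vcpu_id]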
50. The method of clause 45, further comprising:
prior to preempting the processing of the given vCPU for the latency-dependent vCPU, determining that the latency-dependent vCPU did not complete an immediately previous timeslice to utilize the CPU.
51. The method of clause 45, further comprising:
prior to preempting the processing of the given vCPU for the latency-dependent vCPU, determining that a latency-dependent processing option is enabled for the latency-dependent vCPU, wherein preemption is not performed for another vCPU of another virtual compute instance for which the latency-dependent processing option is not enabled.
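The checks in clauses 50-51 can be pictured as a simple gate evaluated before any preemption; in the sketch below the dictionary keys are assumed names, not claimed elements.

def may_preempt_for(vcpu_state):
    """Illustrative gate only; field names are assumptions."""
    # Clause 51: only vCPUs with the latency-dependent processing option
    # enabled may trigger preemption.
    # Clause 50: the candidate must not have completed its immediately
    # previous timeslice (suggesting it blocked, e.g. on I/O).
    return (vcpu_state.get("latency_dependent_option_enabled", False)
            and not vcpu_state.get("completed_previous_timeslice", True))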
52. The method of clause 45, wherein the latency-dependent vCPU is input/output (I/O) bound.
53. The method of clause 45, wherein the virtualization host is implemented as part of a provider network that offers a network-based virtual computing service, wherein the virtualization host is multi-tenant such that at least one of the plurality of virtual compute instances implemented at the virtualization host is maintained for a client of the provider network that is different than another client of the provider network maintaining another one of the plurality of virtual compute instances at the virtualization host.
54. A non-transitory, computer-readable storage medium, storing program instructions that when executed by one or more computing devices cause the one or more computing devices to implement a virtualization host for a plurality of virtual compute instances, wherein the virtualization host implements:
for a given virtual central processing unit (vCPU) of a virtual compute instance of the plurality of virtual compute instances that currently utilizes a central processing unit (CPU) of the virtualization host according to a scheduled timeslice:
preempting the given vCPU to utilize the CPU for a latency-dependent vCPU of a different virtual compute instance of the plurality of virtual compute instances, wherein the preempting pauses the utilization of the CPU for the given vCPU prior to completion of the scheduled timeslice for the given vCPU;
upon resuming the utilization of the CPU for the given vCPU:
determining a preemption compensation for the scheduled timeslice of the given vCPU; and
increasing the scheduled timeslice for the given vCPU such that the utilization of the CPU for the given vCPU is performed according to the increased scheduled timeslice.
55. The non-transitory, computer-readable storage medium of clause 54, wherein determining the preemption compensation for the scheduled timeslice of the given vCPU is based, at least in part, on a reduced throughput as a result of the preemption of the given vCPU.
56. The non-transitory, computer-readable storage medium of clause 54, wherein the virtualization host implements a credit-based scheduler for scheduling utilization of physical computer resources including the CPU among the plurality of virtual compute instances, wherein the virtualization host maintains a respective resource credit balance for the given vCPU and the latency-dependent vCPU, and wherein the utilization of the CPU for the given vCPU and the latency-dependent vCPU is deducted from the respective resource credit balances.
57. The non-transitory, computer-readable storage medium of clause 56, wherein the program instructions further cause the virtualization host to implement updating the respective resource credit balance for the latency-dependent vCPU to deduct one or more resource credits corresponding to the preemption exemption for the given vCPU.
58. The non-transitory, computer-readable storage medium of clause 54, wherein the program instructions cause the virtualization host to further implement:
prior to the completion of the increased scheduled timeslice, performing the preempting, the determining and the increasing for the latency-dependent vCPU or another latency-dependent vCPU.
59. The non-transitory, computer-readable storage medium of clause 54, wherein the program instructions cause the virtualization host to further implement:
prior to preempting the processing of the given vCPU for the latency-dependent vCPU, determining that a latency-dependent processing option is enabled for the latency-dependent vCPU, wherein preemption is not performed for another vCPU of another virtual compute instance for which the latency-dependent processing option is not enabled.
60. The non-transitory, computer-readable storage medium of clause 54, wherein the virtualization host is implemented as part of a provider network that offers a network-based virtual computing service, wherein the virtualization host is multi-tenant such that at least one of the plurality of virtual compute instances implemented at the virtualization host is maintained for a client of the provider network that is different than another client of the provider network maintaining another one of the plurality of virtual compute instances at the virtualization host.
[00160] Although the embodiments above have been described in considerable detail, numerous variations and modifications may be made as would become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such modifications and changes and, accordingly, the above description to be regarded in an illustrative rather than a restrictive sense.

Claims

WHAT IS CLAIMED IS:
1. A method, comprising:
performing, by one or more computing devices that together implement a virtualization host for a plurality of virtual compute instances:
placing a work request for a virtual computer resource from an individual virtual resource request queue maintained for a virtual compute instance into a physical resource request queue to perform the work request at a physical computer resource, wherein the virtual compute instance is one of the plurality of virtual compute instances, wherein the individual virtual resource request queue is one of a plurality of respective individual virtual resource request queues for respective virtual computer resources including the virtual computer resource of the plurality of virtual compute instances;
in response to placing the work request:
dynamically determining a delay based, at least in part, on a workload of the physical resource request queue; and
after imposing the delay, placing a next work request from the individual virtual resource request queue into the physical resource request queue;
wherein a work request from at least one other individual virtual resource request queue of the plurality of individual virtual resource request queues is placed in the physical resource request queue during the delay.
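As a non-limiting illustration of claim 1, the Python sketch below paces one instance's virtual resource request queue into a shared physical resource request queue, imposing a dynamically determined delay between placements so that other instances' queues can interleave their requests; the function names and the use of queue depth as the workload signal are assumptions for the example, not the claimed implementation.

import queue
import time

def drain_virtual_queue(virtual_q, physical_q, determine_delay):
    """Illustrative only: pace one instance's virtual resource request queue
    into the shared physical resource request queue."""
    while True:
        try:
            work_request = virtual_q.get_nowait()
        except queue.Empty:
            break
        physical_q.put(work_request)
        # Dynamically determine a delay from the physical queue's current
        # workload, then wait before placing the next request; other virtual
        # queues may place their requests into the physical queue meanwhile.
        time.sleep(determine_delay(physical_q.qsize()))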
2. The method of claim 1, wherein dynamically determining the delay comprises:
identifying an initial delay for the individual virtual resource request queue based, at least in part, on a determined utilization of the physical computer resource for the virtual compute instance;
calculating a probability for adding a random delay based, at least in part, on the workload of the physical resource request queue; and
adding the random delay to the initial delay according to the calculated probability.
3. The method of claim 2, wherein the determined utilization of the physical computer resource for the virtual compute instance is based, at least in part, on a resource credit balance for the physical resource maintained for the virtual compute instance.
4. The method of claim 2, wherein determining the workload of the physical resource request queue comprises smoothing one or more workload metrics indicating workload of the physical resource request queue at different points in time.
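The delay determination of claims 2-4 can be sketched in Python as follows; the smoothing constant, the linear utilization-to-delay mapping, and the workload-to-probability scaling are illustrative assumptions only, not claimed values.

import random

def smooth_workload(previous_smoothed, current_queue_depth, alpha=0.2):
    # Claim 4: smooth workload metrics sampled at different points in time
    # (an exponentially weighted moving average is one assumed choice).
    return alpha * current_queue_depth + (1 - alpha) * previous_smoothed

def determine_delay(utilization, smoothed_workload,
                    base_delay_s=0.001, max_random_s=0.004):
    # Claims 2-3: the initial delay grows with the instance's determined
    # utilization of the physical resource (which may itself be derived from
    # a resource credit balance); the linear mapping is purely illustrative.
    initial_delay = base_delay_s * utilization
    # Claim 2: the probability of adding a random delay grows with the
    # smoothed workload of the physical resource request queue.
    p_random = min(1.0, smoothed_workload / 100.0)
    if random.random() < p_random:
        return initial_delay + random.uniform(0.0, max_random_s)
    return initial_delay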
5. The method of claim 2, further comprising:
in response to placing the next work request into the physical resource request queue:
performing dynamically determining the delay, wherein according to the calculated probability the random delay is not added to the initial delay; and
imposing the initial delay prior to placing another work request from the individual virtual resource request queue into the physical resource request queue.
6. The method of claim 1, wherein placing the work request, dynamically determining the delay, and placing the next work request are performed for other individual virtual resource request queues for different virtual computer resources of the plurality of virtual compute instances that correspond to different physical computer resources.
7. The method of claim 1, wherein the different physical computer resources comprise:
processing resources;
networking resources; or
storage resources.
8. The method of claim 1, wherein a determined utilization of the physical computer resource for the virtual compute instance is higher than a determined utilization of the physical computer resource for a virtual compute instance for which the work request from the at least one other individual virtual resource request queue is placed in the physical resource request queue during the delay.
9. The method of claim 1, wherein the virtualization host is implemented as part of a provider network that offers a network-based virtual computing service, wherein the virtualization host is multi-tenant such that at least one of the plurality of virtual compute instances implemented at the virtualization host is maintained for a client of the provider network that is different than another client of the provider network maintaining another one of the plurality of virtual compute instances at the virtualization host.
10. A system, comprising:
at least one processor;
a memory, comprising program instructions that when executed by the at least one processor cause the at least one processor to implement a virtualization host for a plurality of virtual compute instances;
wherein the virtualization host is configured to:
place a work request for a virtual computer resource from an individual virtual resource request queue maintained for a virtual compute instance into a physical resource request queue to perform the work request at a physical computer resource, wherein the virtual compute instance is one of the plurality of virtual compute instances, wherein the individual virtual resource request queue is one of a plurality of respective individual virtual resource request queues for respective virtual computer resources including the virtual computer resource of the plurality of virtual compute instances;
in response to placement of the work request:
dynamically determine a delay based, at least in part, on a workload of the physical resource request queue; and
after imposing the delay, place a next work request from the individual virtual resource request queue into the physical resource request queue;
wherein a work request from at least one other individual virtual resource request queue of the plurality of individual virtual resource request queues is placed in the physical resource request queue during the delay.
11. The system of claim 10, wherein, to dynamically determine the delay, the virtualization host is configured to:
identify an initial delay for the individual virtual resource request queue based, at least in part, on a determined utilization of the physical computer resource for the virtual compute instance;
calculate a probability for adding a random delay based, at least in part, on the workload of the physical resource request queue; and
add the random delay to the initial delay according to the calculated probability.
12. The system of claim 11, wherein the determined utilization of the physical computer resource for the virtual compute instance is based, at least in part, on a resource credit balance for the physical resource maintained for the virtual compute instance.
13. The system of claim 11, wherein to determine the workload of the physical resource request queue, the virtualization host is configured to smooth one or more workload metrics indicating workload of the physical resource request queue at different points in time.
14. The system of claim 10, wherein the virtualization host is further configured to:
in response to placement of the next work request into the physical resource request queue:
perform the dynamic determination of the delay, wherein according to the calculated probability the random delay is not added to the initial delay; and
impose the initial delay prior to placing another work request from the individual virtual resource request queue into the physical resource request queue.
15. The system of claim 10, wherein a determined utilization of the physical computer resource for the virtual compute instance is lower than a determined utilization of the physical computer resource for a virtual compute instance for which the work request from the at least one other individual virtual resource request queue is placed in the physical resource request queue during the delay.
PCT/US2015/049587 2014-09-11 2015-09-11 Dynamic virtual resource request rate control for utilizing physical resources WO2016040743A1 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US14/484,200 2014-09-11
US14/484,197 2014-09-11
US14/483,952 2014-09-11
US14/484,200 US9626210B2 (en) 2014-09-11 2014-09-11 Resource credit pools for replenishing instance resource credit balances of virtual compute instances
US14/484,197 US9529633B2 (en) 2014-09-11 2014-09-11 Variable timeslices for processing latency-dependent workloads
US14/483,952 US9635103B2 (en) 2014-09-11 2014-09-11 Dynamic virtual resource request rate control for utilizing physical resources

Publications (1)

Publication Number Publication Date
WO2016040743A1 true WO2016040743A1 (en) 2016-03-17

Family

ID=54207755

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/049587 WO2016040743A1 (en) 2014-09-11 2015-09-11 Dynamic virtual resource request rate control for utilizing physical resources

Country Status (1)

Country Link
WO (1) WO2016040743A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001031861A1 (en) * 1999-10-22 2001-05-03 Nomadix, Inc. Systems and methods for dynamic bandwidth management on a per subscriber basis in a communications network
EP2637097A1 (en) * 2012-03-05 2013-09-11 Accenture Global Services Limited Differentiated service-based graceful degradation layer

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107465743A (en) * 2017-08-03 2017-12-12 郑州云海信息技术有限公司 A kind of method and apparatus for handling request
CN107465743B (en) * 2017-08-03 2020-10-16 苏州浪潮智能科技有限公司 Method and device for processing request
WO2020000724A1 (en) * 2018-06-29 2020-01-02 平安科技(深圳)有限公司 Method, electronic device and medium for processing communication load between hosts of cloud platform
EP3825854A1 (en) * 2019-11-19 2021-05-26 Huawei Technologies Co., Ltd. Compute instance scheduling method and apparatus
US11507425B2 (en) 2019-11-19 2022-11-22 Huawei Cloud Computing Technologies Co., Ltd. Compute instance provisioning based on usage of physical and virtual components

Similar Documents

Publication Publication Date Title
US9626210B2 (en) Resource credit pools for replenishing instance resource credit balances of virtual compute instances
US11487562B2 (en) Rolling resource credits for scheduling of virtual computer resources
US9635103B2 (en) Dynamic virtual resource request rate control for utilizing physical resources
US9529633B2 (en) Variable timeslices for processing latency-dependent workloads
US11080084B1 (en) Managing resources in virtualization systems
US10346775B1 (en) Systems, apparatus and methods for cost and performance-based movement of applications and workloads in a multiple-provider system
US9888067B1 (en) Managing resources in container systems
US8396807B1 (en) Managing resources in virtualization systems
US9830192B1 (en) Managing application performance in virtualization systems
US9858123B1 (en) Moving resource consumers in computer systems
US9805345B1 (en) Systems, apparatus, and methods for managing quality of service agreements
Yang et al. A cost-based resource scheduling paradigm in cloud computing
US11507417B2 (en) Job scheduling based on job execution history
CN111480145A (en) System and method for scheduling workloads according to a credit-based mechanism
US11150951B2 (en) Releasable resource based preemptive scheduling
USRE48680E1 (en) Managing resources in container systems
US10372501B2 (en) Provisioning of computing resources for a workload
USRE48714E1 (en) Managing application performance in virtualization systems
US9830566B1 (en) Managing resources in computer systems using action permits
US20160147687A1 (en) Arbitration in an sriov environment
WO2016040743A1 (en) Dynamic virtual resource request rate control for utilizing physical resources
US11886932B1 (en) Managing resource instances
USRE48663E1 (en) Moving resource consumers in computer systems
Patel et al. Resource optimization and cost reduction by dynamic virtual machine provisioning in cloud

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15771350

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15771350

Country of ref document: EP

Kind code of ref document: A1