
Computer Science Colloquia

Tuesday, December 4, 2012
Ming Mao
Advisor: Marty Humphrey
Attending Faculty: Sudhanva Gurumurthi, Chair; Jack Davidson; Jason Lawrence; and Teresa Culver, Minor Representative

3:00 PM, Rice Hall, Rm. 242

PhD dissertation presentation
Cloud Auto-Scaling with Deadline and Budget Constraints

ABSTRACT

The cloud has become an important computing platform, attracting many businesses and individual users by offering on-demand computing power and storage capacity. Its economies of scale and pay-as-you-go billing model can save users large up-front capital investments and long-term operating costs. A key feature of the cloud is elasticity, the ability to dynamically acquire and release computing resources in response to demand. We believe the key to successful cloud adoption is to first decide how much and what type of resources are needed in the cloud ("provisioning") and then decide how to place computing activities onto those resources ("allocation"). This is a challenging problem because the mapping from user objectives to resource provisioning and allocation plans is not trivial; it must carefully consider several factors: a performance goal can be achieved through different types of resources with different costs; a fixed budget can be used to rent a wide variety of resource configurations for varying durations; the structure of a cloud application can be complex; task precedence orders must be preserved within a job; the workload may experience unexpected peaks; and the performance requirements and cost constraints may change dynamically.
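To make the provisioning side of this trade-off concrete, the following minimal sketch (not taken from the dissertation; the VM type names, prices, and throughput figures are hypothetical) brute-forces the cheapest instance type and count that can finish a fixed amount of work before a deadline under whole-hour billing.

import math

# Hypothetical VM catalog: name -> (hourly price in dollars, throughput in work-units/hour).
# These numbers are illustrative only and do not come from the dissertation.
VM_TYPES = {"small": (0.10, 10), "medium": (0.20, 22), "large": (0.40, 48)}

def cheapest_plan(work_units, deadline_hours, max_instances=30):
    """Brute-force the cheapest (vm_type, count) that finishes the work by the deadline,
    assuming the work divides evenly across instances and billing is per whole hour."""
    best = None
    for name, (price, throughput) in VM_TYPES.items():
        for count in range(1, max_instances + 1):
            hours = math.ceil(work_units / (throughput * count))  # whole billed hours
            if hours > deadline_hours:
                continue  # this configuration misses the deadline
            cost = price * count * hours
            if best is None or cost < best["cost"]:
                best = {"vm_type": name, "count": count, "hours": hours, "cost": cost}
    return best

if __name__ == "__main__":
    # Example: 500 work-units due within 6 hours.
    print(cheapest_plan(work_units=500, deadline_hours=6))

Even in this toy setting, the cheapest feasible configuration depends jointly on the instance type, the instance count, and hour-rounding effects, which is why the mapping from objectives to provisioning plans is non-trivial.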

This dissertation solves the resource provisioning and allocation problem using an auto-scaling approach. For the batch-queue application model, it formulates the provisioning decision using integer programming. By ensuring that the acquired computing power is always sufficient to handle the workload across all VM types, our approach finishes more than 95% of jobs before their deadlines in our experiments and saves 20.2% - 40.1% in cost compared to a fixed machine-type choice. For the workflow application model, our approach contains several novel heuristics. In the unlimited-budget case, the presented solution, dynamic scaling-consolidation-scheduling (SCS), saves 9.4% - 40.4% in cost compared to two baseline approaches and works well under both light and heavy workloads. In the limited-budget case, our scheduling-first and scaling-first algorithms reduce job turnaround time by 9.8% - 45.2% compared to the standard machine choice, and they show good tolerance (between -10.2% and 16.7%) to inaccurate parameters (±20% error). Finally, this dissertation presents three job scheduling policies and a data prefetching strategy to manage the intermediate data of data-intensive applications running in the cloud. In particular, the cost-deadline-first (CDF) algorithm saves 13.5% - 33.7% in cost compared to the deadline-first (DF) algorithm, and the data prefetching strategy further improves the cost savings by up to 44.6% through data-locality-aware job placement.
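As a rough illustration of a deadline- and budget-aware auto-scaling decision (purely a sketch; this is not the SCS, scheduling-first, scaling-first, or CDF algorithm, and all job sizes, throughputs, and prices below are invented), a controller might periodically choose a fleet size large enough to drain the job queue before the tightest deadline while staying within the remaining budget.

from dataclasses import dataclass

@dataclass
class Job:
    work_units: float      # remaining work in the job
    deadline_hours: float  # time until the job is due

def target_fleet_size(jobs, vm_throughput, vm_hourly_price, budget_left, max_vms=50):
    """Pick a fleet size for the next hour: large enough to drain the queue before
    the tightest deadline, but capped by the remaining budget and a fleet limit.
    Purely illustrative; not the dissertation's algorithm."""
    total_work = sum(j.work_units for j in jobs)
    tightest = min((j.deadline_hours for j in jobs), default=float("inf"))
    needed = 1
    while needed < max_vms and total_work / (needed * vm_throughput) > tightest:
        needed += 1                                   # scale out until the deadline is met
    affordable = int(budget_left // vm_hourly_price)  # VM-hours we can still pay for
    return max(1, min(needed, affordable, max_vms))

# Example: three queued jobs, VMs process 20 work-units/hour at $0.25/hour, $3.00 budget left.
queue = [Job(120, 2.0), Job(60, 1.0), Job(200, 5.0)]
print(target_fleet_size(queue, vm_throughput=20, vm_hourly_price=0.25, budget_left=3.00))

In this toy example the budget cap binds before the deadline requirement does, which mirrors the tension between turnaround time and cost that the limited-budget algorithms in the dissertation are designed to manage.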