Sunday, February 25, 2018

EcoUp: Towards Economical Datacenter Upgrading




The rapid growth of cloud services demands increasingly powerful datacenters to maintain a high quality of service (QoS). It is common practice across virtually all tiers of datacenters to upgrade continuously, i.e., to replace outdated and failed servers with more advanced and efficient ones. However, how to upgrade a datacenter in the most cost-efficient way remains unclear, and the problem becomes increasingly challenging given the great diversity of applications. In practice, datacenter operators usually resort to simply expanding the number of servers.


The preferred servers are either expensive but high-performance or, by contrast, cheap but low-power. Whatever the server preference, how to justify its cost efficiency remains an open problem. We argue that a cost-efficient upgrading strategy should be fully aware not only of the capacity and cost of the candidate servers, but also of the resource demands of the target applications. We model this strategy as a recommendation problem: recommending the “best” servers to a datacenter.

We propose “EcoUp”, a model-based framework that faithfully rates the cost efficiency of server candidates, from which an optimal server portfolio can be derived. Performance prediction on candidate servers is realized with a sophisticated latent factor model (LFM). The cost mainly comprises the server purchase cost and the energy bill. Given the application distribution, EcoUp produces an optimal server portfolio under a given capital budget. We use the Google trace, a large profiling dataset released by Google, to validate the performance prediction.
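
To make the idea concrete, the sketch below illustrates, under assumed inputs, the two ingredients described above: a latent factor model that fills in a partially observed application-by-server performance matrix, and a budget-constrained portfolio selection. All names, parameters, and the greedy heuristic are illustrative assumptions, not the exact method used in EcoUp.

import numpy as np

def fit_lfm(R, observed, k=4, lr=0.01, reg=0.05, epochs=500, seed=0):
    # Factorize the partially observed rating matrix R ~= P @ Q.T by SGD,
    # so that the performance of every application on every candidate server
    # can be predicted from the observed entries alone.
    rng = np.random.default_rng(seed)
    n_apps, n_servers = R.shape
    P = 0.1 * rng.standard_normal((n_apps, k))      # application latent factors
    Q = 0.1 * rng.standard_normal((n_servers, k))   # server latent factors
    pairs = np.argwhere(observed)
    for _ in range(epochs):
        rng.shuffle(pairs)
        for a, s in pairs:
            p_old = P[a].copy()
            err = R[a, s] - p_old @ Q[s]
            P[a] += lr * (err * Q[s] - reg * p_old)
            Q[s] += lr * (err * p_old - reg * Q[s])
    return P @ Q.T                                   # dense prediction matrix

def pick_portfolio(perf, app_mix, price, energy, budget):
    # Greedy portfolio: repeatedly buy the server type with the best predicted
    # throughput (weighted by the application mix) per dollar of purchase
    # price plus lifetime energy cost, until the budget is exhausted.
    # Assumes every per-server cost is positive.
    total_cost = np.asarray(price, dtype=float) + np.asarray(energy, dtype=float)
    value = perf.T @ app_mix                         # expected throughput per server type
    portfolio = np.zeros(len(total_cost), dtype=int)
    remaining = float(budget)
    while True:
        ratio = np.where(total_cost <= remaining, value / total_cost, -np.inf)
        best = int(np.argmax(ratio))
        if ratio[best] == -np.inf:
            break
        portfolio[best] += 1
        remaining -= total_cost[best]
    return portfolio

The portfolio step could equally be formulated as an integer program (a knapsack over server types); the greedy value-per-dollar rule above is only the simplest stand-in.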

Experimental results show that the prediction error is below 8 percent on average. We also build a comprehensive upgrading procedure on a local cluster to evaluate the potential of EcoUp. The results show that our approach significantly outperforms two conventional upgrading strategies, by 12.3 and 33.6 percent respectively, in terms of system throughput.


Data Center: Situation Analysis 


Technology is dynamic by nature, and it plays a critical role in supporting organizational initiatives. Innovation in hardware and software often enables the tactics required to execute the business strategies deemed essential to success. The adoption of virtualization is a current, prominent example of such innovation. With the explosive growth of data center use in the 1990s and afterward, challenges emerged. The cost of supporting a sprawling physical infrastructure increased dramatically. With server sprawl, up to 85 percent of each server’s resources can go unused. The resulting excesses in hardware, power, cooling and management can lead to infrastructure instability and excess spending.

A less than robust economy is putting greater pressure on IT organizations to cut costs. Capital Expenditures (CAPEX) and Operating Expenditures (OPEX) have come under the axe, and budgets are also being reduced because of future uncertainties. Subsequent reductions in IT staffing demand greater efficiency. Productivity gains come from improving server uptime and flexibility, from speeding the availability of new servers, and from improving disaster recovery (DR) processes. It is evident that virtual environments have been rapidly adopted as a way to reduce data center hardware costs, improve energy efficiency and enhance operations. Experts believe this trend will only accelerate in 2010, increasing the deployment of server and client virtual machines.


Server Virtualization: The Basics

In its most basic sense, server virtualization removes physical barriers and decouples one technology from another, thereby removing intricate dependencies. From a practical standpoint, it allows running multiple independent virtual operating systems (OSs) and applications on a single physical computer. The technology permits combining and consolidating workloads on a smaller number of physical servers to maximize the investment in hardware.

It separates the physical resources from the applications that use them. The main goal is to reduce costs and increase hardware utilization. In addition, the technology offers solutions to many of the challenges that IT departments face today. In fact, according to “Computerworld’s 2010 Forecast” survey, 64 percent of the 312 professionals polled stated that their organizations are likely or very likely to virtualize more servers in 2010. What’s more, the tech research firm Gartner estimates that 55 percent of all new workloads will be deployed on virtual servers this year. This is up from 40 percent in 2009. 
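
As a toy illustration of the consolidation idea mentioned above (not taken from the post), the following Python sketch packs lightly utilized workloads, each expressed as a fraction of one physical host’s capacity, onto as few hosts as possible using a first-fit-decreasing heuristic; the workload numbers are made up.

def consolidate(vm_demands, host_capacity=1.0):
    # First-fit decreasing: place the largest workloads first, each on the
    # first physical host that still has enough spare capacity, opening a
    # new host only when nothing fits.
    hosts = []
    for demand in sorted(vm_demands, reverse=True):
        for host in hosts:
            if sum(host) + demand <= host_capacity:
                host.append(demand)
                break
        else:
            hosts.append([demand])
    return hosts

# Ten lightly loaded machines (each using 10-30 percent of a host)
# collapse onto two physical hosts in this made-up example.
vms = [0.15, 0.30, 0.10, 0.25, 0.20, 0.10, 0.30, 0.15, 0.20, 0.10]
print(len(consolidate(vms)), "physical hosts needed instead of", len(vms))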


IT Challenges 

It doesn’t take much to realize that the IT industry landscape has evolved dramatically over the last decade. Businesses have gained access to greater technological capabilities through inexpensive x86 server systems, as well as the applications and operating systems that run on this platform. However, adoption rates increased so rapidly that many businesses now face a myriad of difficulties. Fortunately, server virtualization can serve as a potential remedy.

These issues include: 

• Low server utilization 
• Complex server-storage migration 
• Inefficient server deployment
• High-availability/disaster recovery complexity 
• Power and cooling costs

