ANALYSIS

Corporate Priorities: Infrastructure Upgrades Now, Cloud Later

The ongoing buzz surrounding cloud computing — particularly public clouds — is far outpacing actual deployments by mainstream users. To date, only 14 percent of companies have deployed or plan to deploy a private cloud infrastructure within the next two calendar quarters.

Instead, as businesses slowly recover from the ongoing economic downturn, their most immediate priorities are to upgrade legacy desktop and server hardware, bring outmoded applications up to date, and expand their virtualization deployments. Those are the results of the latest ITIC 2010 Virtualization and High Availability survey, which polled C-level executives and IT managers at 400 organizations worldwide.

ITIC partnered with Stratus Technologies and Sunbelt Software to conduct the Web-based survey, which consisted of multiple-choice questions and essay comments. ITIC also conducted first-person interviews with over two dozen end users to obtain anecdotal responses on the primary accelerators or impediments to virtualization, high availability and reliability, and cloud computing.

The survey also queried customers on whether their current network infrastructure and mission-critical applications were adequate to handle new technologies and increasing business demands.

Although many mid-sized and large enterprises are contemplating a move to the cloud, especially a private cloud infrastructure, the technology and business model are still not essential for most businesses, based on survey results. Some 48 percent of survey participants said they have no plans to migrate to a private cloud architecture within the next 12 months, while another 33 percent said their companies are studying the issue but have no firm plans to deploy.

Private cloud deployments are outpacing public cloud deployments by a two-to-one margin, the study indicates. However, before businesses can begin to consider a private cloud deployment, they must first upgrade the “building block” components of their existing environments: server and desktop hardware, WAN infrastructure, storage, security and applications. Only 11 percent of businesses described their server and desktop hardware as leading-edge or state-of-the-art, and just 8 percent of respondents characterized their desktop and application environment as leading-edge.

The largest proportion of the survey participants — 52 percent — described their desktop and server hardware as working well, while 48 percent said their applications were up-to-date. However, 34 percent acknowledged that some of their server hardware needed to be updated. A higher percentage of users, 41 percent, admitted that their mission-critical software applications were due to be refreshed. And a small minority, 3 percent, said that a significant portion of both their hardware and mission-critical applications were outmoded and adversely impacting the performance and reliability of their networks.

Based on the survey data and customer interviews, ITIC anticipates that from now until October, companies’ primary focus will be on infrastructure improvements.

Reliability and Uptime Lag

The biggest surprise in this survey compared to the 2009 High Availability and Fault Tolerant survey, which ITIC and Stratus conducted nearly one year ago, was the decline in the number of survey participants who said their organizations required 99.99 percent uptime and reliability.

In this latest survey, the largest portion of respondents (38 percent, or nearly four out of 10 businesses) said that 99.9 percent uptime, the equivalent of 8.76 hours of downtime per server, per year, was the minimum acceptable level for their mission-critical line-of-business (LOB) applications. That is more than three times the 12 percent of respondents who deemed 99.9 percent uptime acceptable in the 2009 survey.
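As a quick check on these figures, here is a minimal Python sketch (our illustration, not part of the survey) that converts an availability percentage into the downtime it implies per server, per year:

    # Convert an availability percentage into implied downtime per server, per year.
    HOURS_PER_YEAR = 365 * 24  # 8,760 hours in a non-leap year

    def annual_downtime_hours(availability_pct: float) -> float:
        """Hours of downtime per year implied by a given availability percentage."""
        return HOURS_PER_YEAR * (1 - availability_pct / 100)

    print(round(annual_downtime_hours(99.0), 4))    # 87.6 hours (roughly 3.6 days)
    print(round(annual_downtime_hours(99.9), 4))    # 8.76 hours (the figure cited above)
    print(round(annual_downtime_hours(99.99), 4))   # 0.876 hours, about 53 minutes ("four nines")
    print(round(annual_downtime_hours(99.999), 4))  # 0.0876 hours, about 5 minutes ("five nines")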

Overall, 62 percent of survey participants, roughly three out of five, indicated their organizations were willing to live with higher levels of downtime than were considered acceptable in previous years.

Some 39 percent of survey respondents, almost four out of 10, indicated that their organizations demanded high availability, which ITIC defines as four nines (99.99 percent) of uptime or greater. Specifically, 27 percent said their organizations required 99.99 percent uptime; another 6 percent needed 99.999 percent uptime; and a 3 percent minority required the highest level of availability, 99.9999 percent.

The customer interviews disclosed that the ongoing economic downturn, aged/aging network infrastructures (server and desktop hardware and older applications), layoffs, hiring freezes and the new “do more with less” standard operating procedure (SOP) have made 99.9 percent uptime more palatable than in previous years.

Those firms that do not keep track of the number and severity of their outages have no way of gauging the financial and data losses to the business. Even a cursory comparison indicates substantial cost disparities between 99 percent and 99.9 percent uptime. The monetary costs, business impact and risks associated with downtime, as well as the duration and severity of individual outage incidents, will vary by company.

However, a small or mid-sized business that estimates the cost of downtime at a very conservative US$10,000 per hour would potentially incur losses of $876,000 per year at a data center with 99 percent application availability (87.6 hours of downtime). By contrast, a company whose data center operations delivered 99.9 percent uptime would incur losses of $87,600, one-tenth that of a firm with conventional 99 percent availability.
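The same arithmetic can be reproduced in a few lines of Python; the function name and rate below are illustrative assumptions, with the US$10,000-per-hour figure taken from the example above:

    # Annual downtime cost = hours of downtime per year x estimated cost per hour.
    HOURS_PER_YEAR = 365 * 24  # 8,760 hours

    def annual_downtime_cost(availability_pct: float, cost_per_hour: float) -> float:
        """Estimated yearly downtime cost for a given availability percentage."""
        downtime_hours = HOURS_PER_YEAR * (1 - availability_pct / 100)
        return downtime_hours * cost_per_hour

    COST_PER_HOUR = 10_000  # the "very conservative" US$10,000/hour estimate

    print(round(annual_downtime_cost(99.0, COST_PER_HOUR)))  # 876000 -> $876,000 per year
    print(round(annual_downtime_cost(99.9, COST_PER_HOUR)))  # 87600 -> $87,600, one-tenth as much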

Ironically, the need for rock-solid network reliability has never been greater. Web-based applications, new technologies like virtualization and service-oriented architecture (SOA), and emerging public or shared cloud computing models are all designed to maximize productivity. But without the proper safeguards, these new data center paradigms may raise the risk of downtime. The Association for Computer Operations Management (AFCOM) Data Center Institute forecasts that one in four data centers will experience a serious business disruption over the next five years.

At the same time, more than half of all businesses, 56 percent, lacked the budget for high availability technology, customer interviews revealed. Another ongoing challenge, acknowledged by 78 percent of survey participants, was that their companies either lacked the skills or simply did not attempt to quantify the monetary and business costs associated with hourly downtime.

The reasons for this are well documented. Some organizations do not routinely track outages at all, and those that attempt to calculate the costs and damages run into difficulty collecting data, because it resides with many individuals across the enterprise. Interdepartmental communication, cooperation and collaboration are sorely lacking at many firms.

Only 22 percent of survey respondents were able to assign a specific cost to one hour of downtime, and most of them gave conservative estimates of $1,000 to $25,000 for a one-hour network outage. Only 13 percent of that group indicated that their hourly losses would reach $175,000 or more.

Users Confident and Committed to Virtualization Technology

The news is more upbeat with respect to virtualization — especially server virtualization deployments. Organizations were both confident and comfortable with virtualization technology. Seventy-two percent of respondents indicated the number of desktop and server-based applications demanding high availability had increased over the past two years. A 77 percent majority of participants ran business-critical applications on virtual machines, the survey also found.

Not surprisingly, virtualization usage will continue to expand over the next 12 months, the survey data indicated. A 79 percent majority — approximately eight out of 10 respondents — said the number of business-critical applications running on virtual machines and virtual desktops would increase significantly over the next year. Server virtualization is very much a mainstream and accepted technology. The responses to this question indicated increased adoption, as well as confidence.

Nearly one-quarter of the respondents — 24 percent — said that more than 75 percent of their production servers were VMs. Overall, 44 percent of respondents said that more than 50 percent of their servers were VMs. However, none of the survey participants indicated that 100 percent of their servers were virtualized. Additionally, only 6 percent of survey respondents said that none of their servers were VMs.


Laura DiDio is principal at ITIC, a research and consulting firm that covers the high-tech industry.
