In the world of IT, it’s important to keep an open mind and be willing to consider new approaches to problems. People tend to get locked into a particular approach and hold onto it long past the point at which it has ceased to be a best practice. On the other hand, it’s also important to maintain a healthy level of skepticism, and a key trait many IT veterans have developed, out of necessity, is the ability to apply a “reality filter” to vendor technology claims.
For those needing assistance in this regard, one of the more widely used tools has been Gartner’s Hype Cycle, which the company uses to trace new technology life cycles, mapping them through a series of aptly labeled phases: upward from the Technology Trigger to the Peak of Inflated Expectations, then down to the Trough of Disillusionment, onward to the Slope of Enlightenment, and ultimately to the Plateau of Productivity. Not all technologies span the entire cycle; some don’t survive, while others morph into or are subsumed by newer options.
Right now, cloud computing is the technology initiative most would place at or near the peak of the hype cycle. Expectations are running high, vendors are busily retuning their marketing messages to brand their offerings as cloud-friendly, and as a result, nearly anything to do with network-based computing, storage and applications is being positioned in some way as “cloud.” The result, despite the initial enthusiasm of the IT community, is confusion. We’ve seen what happens in such situations: disillusionment follows.
Hype vs. Reality
The rush to claim “cloud” affinity dwarfs anything seen with earlier infrastructure initiatives. One example: Anyone remember Information Lifecycle Management (ILM)? A few years back, the home page of virtually every major storage player trumpeted the glories of ILM. Here was a perfectly reasonable technology concept that, in theory at least, resonated with many.
Unfortunately, the realization of the concept ended up falling far short of its initial promise of dynamically aligning data with its business value. To a large degree, the available technologies simply could not support the goals. At the risk of oversimplification, ILM became most closely associated with two practices: data classification and storage tiering. In many organizations, the former proved too difficult, so ultimately ILM came to mean the latter. So, while storage vendors got to sell lots of tiered storage, ILM by and large was never realized; in essence, the hype got ahead of the reality, and the reality never caught up.
Now, I’m not suggesting that cloud computing will go the way of ILM. While there is some similarity in how thoroughly vendors and the trade media have latched onto the idea, there are also some critical differences.
The most important of these differences is that cloud computing technology actually exists and is functioning. ILM, by contrast, began as a concept, with products that were, for the most part, nonexistent. ILM also had no “poster children,” readily identifiable examples of the concept in practice that could serve as models.
Cloud computing, on the other hand, has both: Internet-based companies that employ a cloud model, and cloud service providers open for business and offering an array of services. The challenge for organizations attempting to adopt a cloud model is ensuring that the promised efficiencies and service benefits are actually realized.
Two Approaches
There are two fundamental approaches that organizations may follow in adopting a cloud model. The first, and easier, is simply to subscribe to one of the many available cloud services, ranging from Software as a Service applications to compute and storage services. This is a relatively “clean” approach with the benefit of avoiding capital investment. Capacity is immediately available (at least to some extent) and can be expanded or shrunk as needed with little consequence. The potential risks relate primarily to whether the service meets the organization’s requirements for key attributes like performance, availability and security. Another significant concern is lock-in: how easily the organization can exit the service or replace it with another.
The second approach to the cloud model is to build it, with IT itself becoming a cloud services provider. Larger organizations are the most likely to adopt this model, or some hybrid of the two approaches, and this is where some of the largest potential pitfalls lie. From a technology perspective, most companies have implemented server virtualization to varying degrees, and in some ways a cloud model can be viewed as a large-scale expansion of virtual clusters. The challenge comes in the scale of the cloud and in the significant implications of such a model; this is as much a policy and operations challenge as a technology one.
Being a successful cloud services provider will require IT to improve on a number of things it historically has not done particularly well. Let me explain. At a high level, the key objectives of a cloud are to drive efficiency and to offer a very high degree of flexibility: fast deployment, dynamic relocation, and easy growth, reduction and reuse of resources. Traditionally, IT has been able to deliver one or the other, but not both. While server virtualization removes some of the obstacles and is a critical enabling technology for implementing a cloud, it’s not the total solution. In particular, it does not address organizational practices and behavior.
Hurdles to Overcome
Here are a few examples of what a company must do to effectively become a provider of internal cloud services:
- Develop a real demand-forecasting capability and evolve from a “just-in-case” infrastructure capacity model to a “just enough” approach (see the first sketch following this list);
- End the practice of over-provisioning resources;
- Develop effective business-based metrics: It’s about cost of capital, cost of operations and margin, not just utilization;
- End the project-based funding model for equipment purchases: This just doesn’t work in a world of shared resources;
- Focus on metrics, metrics, metrics: Information is critical. Understanding performance characteristics, resource capacities, per-unit costs and consumption patterns is an absolute key to success (see the second sketch following this list).
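To illustrate the “just enough” idea, here is a minimal sketch of a capacity plan driven by a demand forecast plus an explicit headroom buffer. The function name, the headroom figure and every demand number are hypothetical, invented purely for illustration.

```python
# Hypothetical illustration of "just enough" provisioning: forecast next
# month's demand from a trailing average, then add a stated headroom buffer
# instead of buying "just in case." All figures below are invented.

def just_enough_capacity(monthly_demand, headroom=0.15):
    """Forecast next month's demand as a three-month trailing average,
    then add a headroom buffer on top of the forecast."""
    recent = monthly_demand[-3:]
    forecast = sum(recent) / len(recent)
    return forecast * (1 + headroom)

# Made-up demand history, in VM-hours, for the last six months.
history = [38_000, 41_000, 45_000, 47_000, 52_000, 55_000]
plan = just_enough_capacity(history)
print(f"forecast ~{sum(history[-3:]) / 3:,.0f} VM-hours; "
      f"provision {plan:,.0f} with 15% headroom")
```

A real forecasting capability would account for trend and seasonality, of course; the point is simply the shift from buying for the worst case to provisioning against an explicit forecast plus a stated buffer.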
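To make the per-unit cost point concrete, here is an equally minimal sketch for an internal compute service billed in VM-hours. Again, all figures are made up; what matters is how strongly utilization drives the cost of each consumed unit.

```python
# Hypothetical per-unit cost model for an internal compute service.
# Every figure is invented for the example, not drawn from any vendor data.

def monthly_unit_cost(capital_cost, amort_months, monthly_opex,
                      provisioned_vm_hours, consumed_vm_hours):
    """Return cost per provisioned and per consumed VM-hour for one month."""
    monthly_total = capital_cost / amort_months + monthly_opex
    return (monthly_total / provisioned_vm_hours,
            monthly_total / consumed_vm_hours)

# A $600,000 cluster amortized over 36 months, $40,000/month in operations,
# offering 200 VMs x 720 hours = 144,000 VM-hours per month.
provisioned = 200 * 720
for utilization in (0.30, 0.70):  # "just in case" vs. "just enough"
    consumed = provisioned * utilization
    per_prov, per_cons = monthly_unit_cost(600_000, 36, 40_000,
                                           provisioned, consumed)
    print(f"utilization {utilization:.0%}: "
          f"${per_prov:.2f} per provisioned VM-hour, "
          f"${per_cons:.2f} per consumed VM-hour")
```

At 30% utilization, each consumed VM-hour costs roughly 2.3 times what it does at 70%; that is exactly the kind of per-unit number an internal provider needs at its fingertips.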
The companies commonly held up as examples of public cloud success have been able to drive both high service levels and high efficiency. The challenge for IT organizations seeking to provide cloud services internally is to achieve similar economies of scale. If they ignore this dimension, the risk is that a cloud initiative will result in significant investment with little to show for it; in other words, another IT boondoggle.
James Damoulakis is CTO at GlassHouse Technologies.