Those of us who have been around the IT block a few times have witnessed remarkable changes in hardware and software capabilities. The big ones are fairly obvious in hindsight: from mainframes to minicomputers in the 1960s; from minis to PCs in the 1980s; and from dumb terminals to client-server technology in the 1990s.
For some of us, these shifts seemed like a big deal the first time we learned of them. For most, however, they crept into our perception more slowly. Multiple innovations were needed on top of the “big idea” to make it self-evident that “things are different now.”
The ideas behind server virtualization — the decoupling of software services from the hardware required to run them — have been around for a long time. A very long time, if you count back to the early mainframe days. Even the current wave of server virtualization led by VMware has been a relatively slow build, though. (I first started using VMware products back in 2000.)
Over this period, successive innovations and expansions on the core idea have turned it into the powerful platform for workload consolidation and mobility that it is today.
In the case of the shift to a cloud-based model for computing, however, this change seems to be happening much more quickly — and I think there’s an interesting reason why this might be so. Unlike every previous major shift in information technology, this change has been led not by a technology vendor, but by a technology consumer: Amazon.
A Better Way
In a very real sense, Amazon came out of left field and demonstrated to the entire IT industry that there really was a better, more efficient, hugely scalable and extremely low-friction way to deliver IT.
Starting with massive, self-service virtualization, Amazon has layered broader and richer abstractions on top, piece by piece, so that now, with the announcement of the AWS CloudFormation service, customers can pretty much define, deploy and manage their own virtual data center from their desktop. At the very least, they can see that it’s very, very close to being possible.
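To give a flavor of what that looks like in practice, here is a minimal sketch of handing CloudFormation a declarative template from a desktop script. The template contents, the placeholder image ID, and the use of the boto3 Python SDK are illustrative assumptions, not a description of Amazon’s exact tooling:

```python
import json

import boto3  # AWS SDK for Python

# A declarative description of a tiny "virtual data center": one server.
# The ImageId is a placeholder, not a real machine image.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-00000000",  # placeholder
                "InstanceType": "t2.micro",
            },
        },
    },
}

# Hand the description to the service; CloudFormation works out how to
# create and wire up the resources the template describes.
cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(
    StackName="my-virtual-data-center",
    TemplateBody=json.dumps(template),
)
```

The point is less the specific resources than the shape of the interaction: the customer states what the data center should contain, and the service does the provisioning.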
Plenty of folks will argue that Amazon wasn’t the first, and that these ideas were also “out there” in various forms prior to the launch of AWS — but who else put them together and made them self-service?
The upshot of this is that Amazon’s leading example has raised the pressure on pretty much every other participant in the IT value chain, from internal IT infrastructure teams to the vendors that supply them.
One of the main reasons for this pressure is that the innovation originated from the demand side of the IT industry rather than the supply side — although it obviously helps enormously that the stuff actually works as well.
Amazon can do IT. So why can’t you? That’s the kind of question I’m hearing in conversations with enterprises today. And there’s no reason you can’t.
Deconstructing the Secret Sauce
As with any major change, plenty of folks will turn their attention back to what’s in front of their noses, pulled there by the daily grind: servers to provision, trouble tickets to resolve, and the interminable interdepartmental bickering that results from the massive inefficiencies in their current IT supply chain processes.
Yet others are stopping for a moment to ask the obvious next question: “So, if Amazon can do IT, what do we have to do differently?” This realization is forcing IT leaders to transform their delivery models to look more and more like public cloud services in their own right: self-service, on-demand and elastic.
The starting point for this journey is a private cloud. Eventually, IT will also embrace public clouds as part of an integrated fabric of computing, platform and application options. To start answering that question, however, a few clear elements first need to be distilled out of the secret sauce:
1. Automation: Nothing about this cloud thing should be more obvious than the role that automation plays in Amazon’s solution.
2. Self-service: Automation enables the big shift from IT-constrained application delivery to business-constrained application delivery.
3. Virtualization: Not just the compute, but the networks, firewalls, storage and location of all these resources.
4. Models: Less obvious, perhaps, is the fact that automation and virtualization both rely on machine-processable descriptions of what should be done, rather than on detailed instructions on how to do it — the sketch after this list illustrates the contrast.
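To make that last distinction concrete, here is a minimal sketch of the two styles. The provisioning API and the convergence engine are hypothetical, invented for this illustration rather than drawn from any particular product:

```python
# Imperative style: detailed instructions on HOW to build the environment,
# executed step by step in a fixed order.
def provision_imperatively(api):
    vm = api.create_vm(cpus=2, ram_gb=4)      # hypothetical API call
    api.attach_disk(vm, size_gb=100)
    api.open_firewall_port(vm, port=443)
    api.register_dns(vm, name="app.example.com")

# Declarative style: a machine-processable description of WHAT should exist.
desired_state = {
    "vm": {"cpus": 2, "ram_gb": 4},
    "disk": {"size_gb": 100},
    "firewall": {"open_ports": [443]},
    "dns": {"name": "app.example.com"},
}

def provision_declaratively(engine, state):
    # A convergence engine compares the desired state with what actually
    # exists and plans whatever create/update/delete steps close the gap.
    engine.converge(state)                    # hypothetical engine call
```

The second style is what makes large-scale automation tractable: because the description is data, it can be validated, versioned and replayed, while the engine, not a human, works out the ordering of the steps.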
The rise of on-demand computing will force IT to automate and industrialize, trading manual, ad hoc processes for automated, standardized ones. Those who carried out those manual tasks in the past will move up the stack to higher levels of contribution and innovation: defining the policies that govern automated processes, standardizing infrastructure, and building new applications and business solutions.
Some will be threatened by this evolution, while others will be inspired to ride the wave of change and make higher-level contributions to IT service delivery.
The architectural transformation we see, and will continue to see, taking hold in the enterprise won’t be the sort of discretionary, quasi-religious architectural shift we’ve seen in the past; it will be mandatory, driven by basic economic and market forces. IT is being forced to change. Public cloud services have proven the model and demonstrated what enterprise IT is expected to become.