Public Cloud Computing – From “We can’t…” to “We can”

From the first day of public cloud computing, there have been people saying “We can’t use public cloud computing, because…”, followed by a range of reasons, all perfectly legitimate but generally based on company policies or long-held fears about shared resources, security, and support, rather than on technical limitations.

Over the past few years, Amazon and the other public cloud providers have been chipping away at these reasons for not using public cloud computing, with Amazon recently upgrading their “Virtual Private Cloud” offering from a simple VPN connection to their servers into controllable, secure networking of their instances.

Now, Amazon have launched “Dedicated Instances”, an offering where you pay a flat rate of an extra $10 per hour per region when you launch any number of dedicated instances. By “dedicated instance”, Amazon mean an instance running on hardware that’s only running instances launched by you, no one else. No more multi-tenancy resource fears on the server: reduced worries about over-commitment of hardware resources, potential weaknesses in the Xen hypervisor, and so on.

You still get many of the benefits of public clouds – no up-front costs, the massive volumes of AWS leading to lower overheads, commodity services, and so on; you just pay a slightly higher per-hour price to remove one of the major hurdles in moving to public cloud computing.
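The interesting property of that pricing structure is that the $10/hour fee is flat per region, so its per-instance impact shrinks as your fleet grows. A minimal sketch of the arithmetic (the per-instance hourly rate below is a hypothetical placeholder – only the flat $10/hour regional fee comes from the announcement):

```python
# Sketch of the Dedicated Instances pricing model described above.
# The per-instance rate is a hypothetical placeholder; only the
# flat $10/hour-per-region fee comes from Amazon's announcement.

REGION_FEE_PER_HOUR = 10.00  # flat fee per region while any dedicated instance runs


def hourly_cost(instance_count: int, rate_per_instance: float) -> float:
    """Total hourly cost for a number of dedicated instances in one region."""
    if instance_count == 0:
        return 0.0  # no dedicated instances running, no regional fee
    return REGION_FEE_PER_HOUR + instance_count * rate_per_instance


# The flat fee dominates for a single instance but amortises quickly:
for n in (1, 10, 100):
    total = hourly_cost(n, rate_per_instance=0.42)  # hypothetical rate
    print(f"{n:3d} instances: ${total:7.2f}/hour (${total / n:.4f} per instance)")
```

For one instance the flat fee more than doubles the hourly cost, but at a hundred instances it adds only ten cents per instance – which is why the offering mainly makes sense at some scale.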

I’m sure that dedicated EBS will be coming along soon, and perhaps dedicated S3 storage for customers storing more than something like 10 TB of data – roughly the amount that would justify a dedicated shelf of storage replicated to multiple locations?

While these recent moves won’t let everyone use the public cloud to reduce their computing costs and improve their flexibility, they’re a big step in moving people from “We can’t do this because…” to “We can do this, now let’s get on with it”.

And that’s got to be good, hasn’t it?