Why invest in buying, housing and maintaining your own computers when you can outsource all that worry to someone else? This has long been the marketing pitch of cloud computing. And it works: it is far easier to hand off the hassle of running servers and focus your resources (especially if they are limited) on your core operations.
During their astounding growth in the middle of the last decade, technology companies such as Amazon and Google built huge infrastructures to power their ever-growing needs. Amazon, for example, is estimated to run more than two million servers around the world, while Google is thought to hold around 10 exabytes of data storage. That’s 10 million terabytes, or 10 billion gigabytes.
Over time, they learned how to manage all the software and hardware in these infrastructures without costs spiralling. They also realised that the same infrastructure could be leased out to external companies to use as and when they wished. This spares those companies much of the capital expenditure of building their own server set-ups, and allows them to scale their computing up and down as their needs dictate.
This was the birth of cloud computing, so called because computer specialists commonly draw a cloud symbol in schematic diagrams to stand for parts of the system whose inner workings are opaque. But while we know that it works – the global cloud computing market is forecast to reach $127 billion in the next two years – we are less sure exactly how it works.
