A few years ago, when the public cloud made its presence felt in the world of traditional data centers, the industry faced the dilemma of choosing one over the other. Experts debated whether the euphoria around the public cloud would last long enough to overshadow the private vendors. Today, however, as the likes of AWS, Google, and Microsoft rake in dizzying amounts of revenue on the back of sheer economies of scale, the real question is how long the smaller fish can hold out before losing the battle to the bigger sharks.
How did the mega clouds storm in to conquer?
Designing their own hardware rather than purchasing it from suppliers
Before the internet was born, companies that had not anticipated the need for scalability were content to source their hardware from traditional suppliers. The scale brought in by a globally connected network, however, tipped the balance in favor of pioneers like Google, which cashed in on the internet opportunity early. Rather than purchasing servers from supplier giants like IBM and HP, Google designed its own hardware and sourced it directly from ODMs, or original design manufacturers. Even so, Google is only the fifth-largest company designing its own hardware: Facebook and Microsoft are ahead in the race, with Amazon Web Services ruling the roost.
Scale-out and not scale-up
In the traditional scale-up model, hardware capacity grew in proportion to server workloads by upgrading individual machines, but this model could not solve the scalability problem: capacity was bounded by what a single machine could offer, and it left a single point of control. The larger players changed the game plan by scaling out instead, to keep step with the scale of the internet. The scale-out model worked very well because it adds nodes to the existing cluster to increase capacity as needed.
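The difference can be sketched with toy numbers (all figures here are hypothetical, chosen only for illustration): scaling up eventually hits the ceiling of a single machine, while scaling out grows capacity by adding commodity nodes.

```python
import math

def scale_up_capacity(requested_units, max_per_server=64):
    # Scale-up: upgrade one server, but it can only be upgraded so far,
    # so capacity is capped at the single machine's ceiling.
    return min(requested_units, max_per_server)

def scale_out_capacity(requested_units, units_per_node=8):
    # Scale-out: add as many commodity nodes as needed;
    # capacity grows linearly with the node count.
    nodes = math.ceil(requested_units / units_per_node)
    return nodes * units_per_node, nodes

print(scale_up_capacity(500))   # capped at 64
print(scale_out_capacity(500))  # (504, 63) -- 63 nodes of 8 units each
```

The node counts and per-server limits are assumptions; the point is only that one curve flattens while the other keeps growing.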
How does this strategy hold them in good stead?
The economies of scale helped them to lower their costs and thereby charge less for their services.
The cost of scaling out is amortized across the enormous customer base these providers serve, and variable costs are offset by large-scale purchases. The mega clouds can afford to sell their resources and servers at such low prices because of the economies that scale brings. A typical Amazon data center has about 50,000 hosts; a traditional data center would have just a few thousand. Yet the fixed costs incurred by both are broadly similar, so the cost per host is far lower at scale.
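A back-of-the-envelope calculation makes the amortization concrete. The dollar figure below is a hypothetical assumption, not a reported cost; only the host counts come from the comparison above.

```python
# Assumed annual fixed cost of running a facility (hypothetical figure).
fixed_cost = 100_000_000

mega_cloud_hosts = 50_000   # an AWS-scale data center
traditional_hosts = 3_000   # a typical traditional data center

# Similar fixed costs spread over very different host counts:
print(round(fixed_cost / mega_cloud_hosts))   # 2000 per host
print(round(fixed_cost / traditional_hosts))  # 33333 per host
```

Whatever the true facility cost, dividing it over roughly 17x more hosts is what lets the mega clouds undercut smaller operators on price.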
The economies of scale arise from several factors:
PUE: Larger facilities enjoy lower power costs through bulk purchasing and by locating their data centers in areas with lower electricity rates. Smaller data centers, in comparison, must pay the prevailing rates, which may not be cost-effective in the long run.
Furthermore, companies like Google have gone to great lengths to optimize their power usage effectiveness (PUE) and consistently reduce energy overheads.
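PUE is defined as total facility energy divided by the energy delivered to IT equipment, so 1.0 is the ideal and anything above it is overhead (cooling, power distribution, lighting). A minimal sketch with illustrative numbers (not figures reported by any provider):

```python
def pue(total_facility_kwh, it_equipment_kwh):
    # Power Usage Effectiveness: 1.0 means every kWh entering the
    # facility reaches the IT equipment; the excess is overhead.
    return total_facility_kwh / it_equipment_kwh

# Hypothetical energy draws for the same 1,000 kWh of IT load:
optimized = pue(total_facility_kwh=1_100, it_equipment_kwh=1_000)
typical = pue(total_facility_kwh=1_600, it_equipment_kwh=1_000)

print(optimized)  # 1.1 -- 10% overhead per unit of IT work
print(typical)    # 1.6 -- 60% overhead per unit of IT work
```

At mega-cloud scale, shaving even a fraction off the PUE translates into a large absolute saving on the power bill, which is why the big players invest so heavily here.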
Mega cloud players like AWS, understanding the need for superior infrastructure, have left no stone unturned in fine-tuning their efficiency and building a robust infrastructure.
Already late to the game, traditional data centers face an enormous uphill task in ramping up their efficiency and infrastructure to compete with the large players.
Lower labour costs:
The cloud automates a significant number of repetitive tasks, which lowers administrative costs for cloud providers. While traditional data centers reap similar benefits, the large players benefit more because of their scale.
Cost-effectiveness of large scale purchase:
Cloud computing has fostered a homogeneous infrastructure by standardizing and limiting the number of hardware and software architectures. This, in turn, enables the large providers to secure steep discounts on bulk hardware purchases.
The architecture in a traditional data center environment is far more heterogeneous, so this kind of cost benefit is not possible there.
Increasing workload on public cloud:
While annual global data center traffic is set to grow at a CAGR of 23% from 2014 to 2018, the workload handled by traditional data centers is expected to decline by about 2% per year over the same period.
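Compounding those two rates over the 2014-2018 window shows how quickly the gap widens. The calculation below uses index values (2014 = 100) rather than absolute traffic figures, which the source does not give.

```python
def compound(start, annual_rate, years):
    # Compound growth: apply the same annual rate for each year.
    return start * (1 + annual_rate) ** years

cloud_traffic = compound(100, 0.23, 4)   # 23% CAGR, 2014-2018
traditional = compound(100, -0.02, 4)    # -2% per year over the same span

print(round(cloud_traffic, 1))  # 228.9 -- more than doubles in four years
print(round(traditional, 1))    # 92.2  -- shrinks by roughly 8%
```

In relative terms, traffic growing at 23% a year ends the period at about 2.5x the level of a workload shrinking at 2% a year.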
The flexibility, agility, and scalability provided by the cloud are causing a lot of developers to move to the public cloud.
This trend is slowly diminishing the fears about security and privacy that were long seen as the major risk factors of the public cloud. Economies of scale apply here as well: the investment needed to build a reliable and secure system is substantial, and the bigger names in the industry can attract better expertise to tackle the problem.
Private cloud vendors are already bearing the brunt of the cost advantages and superior infrastructure enjoyed by the mega clouds. The growing trend in favor of the latter could well mean they are set to dominate the IT world.