The concept of cloud computing appears omnipresent in our modern world as we rely on on-demand computing to manage our digital lives across multiple devices – mobiles, tablets, laptops, and so on – whether at home, in the office, or on the move. This article introduces the key component of cloud computing – the servers that underpin each service and provide its computing resources – and describes how they deliver some of cloud computing’s most notable benefits.
The primary benefits of cloud computing are cost savings, flexibility, ease of access, and speed of deployment. The underlying platforms – servers, software, and management – are paid for by the provider, can be scaled in flexible increments to fit each customer’s needs, and can be accessed online from anywhere in the world. Subscribers pay only for the features they require, and services can be added or removed dynamically. Cloud-based applications can be deployed in hours, days, or weeks, compared with the much longer installation intervals that the same service would require if provided “in-house.”
As mentioned previously, the responsive scalability of pooled cloud servers means that cloud services can offer significant cost savings for the end user – most notably, the client need only pay for what they use. Because they are not bound by the fixed physical capacities of single servers, clients are not required to pay up front for capacity they may never use, whether in their initial outlay or in subsequent steps to cater for increases in demand. In addition, they avoid the setup costs that would otherwise be incurred by bringing individual servers online.
Instead, any setup costs generated when the underlying cloud servers were brought online are overheads for the cloud provider, diluted by economies of scale before they have any impact on the pricing model. This is particularly the case because many cloud services minimise the effort and expense of bespoke configurations by offering standardised services that the client can simply tap into.
Lastly, cloud models allow providers to do away with long-term lock-ins. Because the provider does not carry the long-term overheads of bringing individual servers online for individual clients and maintaining them, it has no need to tie those clients in to recoup that investment.
There are two common deployment models for cloud services that span the service level models (IaaS, PaaS, SaaS) described in part one: Public Cloud and Private Cloud.
Perhaps the most familiar to the general population, and also the most likely to deliver some of the features and benefits mentioned previously, is the typical public cloud model. This model utilises a large number of pooled cloud servers located in data centres to provide a service over the Internet which members of the public can sign up for and access.
However, the exact level of resource – and therefore capacity, scalability, and redundancy – underpinning each public cloud service will depend on each provider. The underlying infrastructure, including servers, will be shared across all of the service’s end users whilst the points of access are open to anyone, anywhere, on any device, as long as they have an Internet connection.
Consequently, one of the model’s key strengths, its accessibility, leads to its most prominent weakness: security.
By combining the computing power of a significant number of cloud servers, cloud providers can offer services that are massively scalable, with no practical capacity limits. With hypervisors pulling resources from the many underlying servers as and when needed, cloud services can respond to demand, so that increased requests on a client’s particular cloud service are met almost instantaneously with the computing power they need. Functions are no longer limited by the capacity of one server, so clients do not have to acquire and configure additional servers whenever demand rises. What’s more, because the cloud service has already been provisioned, the client can simply tap into it without the costs and delays of the initial server setup that would otherwise be incurred.
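To make the idea of pooled resources more concrete, the sketch below is a hypothetical toy example (not any particular provider’s implementation) that places workload requests on whichever host in a pool has spare capacity, rather than being limited to one fixed server; all of the class and function names are made up for illustration.

```python
# Illustrative sketch only: a toy "pool scheduler" that places workload
# requests on whichever host has spare capacity, mimicking how a hypervisor
# layer can draw on many servers rather than being capped by one machine.
# The Host class and place_workload() are hypothetical.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    capacity_gb: int       # total RAM available on this physical server
    allocated_gb: int = 0  # RAM already handed out to workloads

    def free_gb(self) -> int:
        return self.capacity_gb - self.allocated_gb

def place_workload(pool: list[Host], required_gb: int) -> Host | None:
    """Place a workload on the host with the most free capacity, if any."""
    candidates = [h for h in pool if h.free_gb() >= required_gb]
    if not candidates:
        return None  # the whole pool is exhausted, not just one server
    best = max(candidates, key=lambda h: h.free_gb())
    best.allocated_gb += required_gb
    return best

if __name__ == "__main__":
    pool = [Host("host-a", 64), Host("host-b", 128), Host("host-c", 96)]
    # Successive requests are spread over the pool, so headroom comes from
    # the combined fleet rather than from any single machine.
    for demand in (48, 80, 32, 60):
        host = place_workload(pool, demand)
        print(f"{demand} GB -> {host.name if host else 'rejected'}")
```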
For clients whose IT functions are subject to large fluctuations in use – for example, websites with varying traffic levels – pooled cloud server resources remove the risk of service failure when there are spikes in demand. On the flip side, they remove the need to invest in high-capacity setups – as contingency for those spikes – which would sit unused for much of the time. Indeed, if the client’s demands fall, the resource they use (and pay for) can be reduced accordingly.
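As a rough illustration of how this elasticity can be automated, the sketch below applies a simple threshold rule of the kind a cloud platform might use: add an instance when average utilisation is high, remove one when it is low. The thresholds, limits, and the desired_instances() function are hypothetical rather than any provider’s actual policy.

```python
# Illustrative sketch only: a simple threshold-based scaling rule.
# The thresholds and instance limits are hypothetical placeholders.

def desired_instances(current: int, cpu_utilisation: float,
                      high: float = 0.75, low: float = 0.25,
                      minimum: int = 1, maximum: int = 20) -> int:
    """Return how many instances should be running given average CPU load."""
    if cpu_utilisation > high:
        target = current + 1   # spike in demand: add capacity
    elif cpu_utilisation < low:
        target = current - 1   # demand has fallen: pay for less
    else:
        target = current       # within the comfortable band
    return max(minimum, min(maximum, target))

# A traffic spike pushes utilisation to 90%, so capacity grows;
# overnight it drops to 10%, so capacity (and cost) shrinks again.
print(desired_instances(current=3, cpu_utilisation=0.90))  # -> 4
print(desired_instances(current=3, cpu_utilisation=0.10))  # -> 2
```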
As previously mentioned, the high number of cloud servers used to form a cloud service means that services are less likely to suffer performance issues or downtime due to spikes in demand. The model also protects against single points of failure: if one server goes offline, it won’t disrupt the service to which it was contributing resources, because plenty of other servers seamlessly provide that resource in its place. In some cases, the physical servers are spread across different data centres and even different countries, so that even an extreme failure taking an entire data centre offline need not disrupt the cloud service. In some models, backups are specifically created in different data centres to guard against this risk.
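The sketch below gives a simplified picture of how a client or load balancer might fail over between replicas hosted in different data centres, so that one unreachable site does not take the service down. The endpoints and the fetch_from() helper are hypothetical stand-ins, not a real API.

```python
# Illustrative sketch only: failing over between replicas in different data
# centres. The endpoints and fetch_from() are hypothetical placeholders.

ENDPOINTS = [
    "https://eu-west.example-cloud.net/api",   # hypothetical replica, DC 1
    "https://us-east.example-cloud.net/api",   # hypothetical replica, DC 2
    "https://ap-south.example-cloud.net/api",  # hypothetical replica, DC 3
]

CURRENTLY_DOWN = {"https://eu-west.example-cloud.net/api"}  # simulated outage

def fetch_from(endpoint: str) -> str:
    """Stand-in for a real request; pretends one data centre is offline."""
    if endpoint in CURRENTLY_DOWN:
        raise ConnectionError(f"{endpoint} is unreachable")
    return f"response from {endpoint}"

def fetch_with_failover(endpoints: list[str]) -> str:
    """Try each replica in turn; fail only if every data centre is down."""
    last_error = None
    for endpoint in endpoints:
        try:
            return fetch_from(endpoint)
        except ConnectionError as exc:
            last_error = exc  # this replica is down, move to the next one
    raise RuntimeError("all replicas unavailable") from last_error

# The first replica is "offline", yet the caller still gets a response.
print(fetch_with_failover(ENDPOINTS))
```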
In addition to unforeseen failures, pooled server resources can also allow maintenance – for example, patching of operating systems – to be carried out on the servers and networks without any disruption or downtime for the cloud service. What’s more, that maintenance, as well as any other supporting activities optimising the performance, security, and stability of the cloud servers, will be performed by staff with the relevant expertise working for either the cloud service provider or the hosting provider. In other words, the end user has no need to invest in acquiring that expertise themselves and can instead focus on the performance of the end product.
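As a simplified picture of how such zero-downtime maintenance can work, the sketch below patches servers one at a time, draining each from the pool while the others continue to serve. The drain(), apply_patches(), and restore() hooks are hypothetical placeholders rather than a specific provider’s tooling.

```python
# Illustrative sketch only: rolling maintenance that patches one server at a
# time while the rest keep serving, so the service as a whole stays online.
# drain(), apply_patches(), and restore() are hypothetical hooks.

def drain(server: str) -> None:
    print(f"{server}: removed from the load balancer, traffic shifts to peers")

def apply_patches(server: str) -> None:
    print(f"{server}: operating system patched and rebooted")

def restore(server: str) -> None:
    print(f"{server}: health-checked and returned to the pool")

def rolling_maintenance(servers: list[str]) -> None:
    """Patch the fleet one node at a time; capacity never drops to zero."""
    for server in servers:
        drain(server)
        apply_patches(server)
        restore(server)

rolling_maintenance(["node-1", "node-2", "node-3"])
```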