Connecting clouds – Have your (layer) cake and eat it, too
Cloud computing offers businesses many benefits – not cost savings, a myth that has been well and truly busted, but flexibility of deployment and responsiveness to changing business demands.
In theory, says Matthew Parker, UK sales director & country manager, InterCloud, organisations can quickly adopt new applications, get users up and running and thus respond to business pressures to steal a march on the competition. In practice the idea is simple, but ensuring that users get everything they need – the right applications, secure data access and speed – whilst the IT function keeps the necessary control is not easy.
As more and more departments within an organisation adopt more and more different cloud-based approaches (Software as a Service, SaaS, or Infrastructure as a Service, IaaS), the connections between the Cloud-based components proliferate both in number and complexity.
As well as causing potential performance issues, this erodes the original benefits outlined above – the speed and agility of application deployment – and makes it difficult to manage the user experience, as organisations struggle to gain visibility and control of applications once they are up and running.
Why does this happen, and what can businesses do about it?
The big telecommunications companies (telcos) have an absolutely critical role in supporting global wide-area networks (WANs) and the internet itself. However, there’s something of a disconnect between what the telcos are good at (and want to sell more of) and the business – and hence communications – demands of a proliferating cloud computing environment.
The telcos clearly need to concern themselves with getting a return on the considerable investments they make in their huge networks. Selling long-term fixed-capacity contracts to organisations is their ideal. This is their history, this is their ethos; it drives their business structures, which tend to be rather departmentalised – each department focused solely on its part of the telco’s operation, rather than concerning itself with the customer’s end-to-end comms needs.
Cloud computing, on the other hand, is implicitly a more volatile environment: performance has peaks and troughs; users come and go; connections are needed swiftly to new geographies and/or new cloud services. This volatility drives the need for speed of deployment, which in turn implies the need for flexible communications contracts.
In terms of the OSI 7-layer model, the telcos’ home turf is Layer 2 – switching: the reliable transmission of data between two nodes. Cloud computing, however, places extra demands on networking at Layer 3 – routing: managing a multi-node network and directing traffic across it. By analogy with the postal service, Layer 2 delivers to the local post office, whereas Layer 3 delivers to the addressee.
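The distinction can be sketched in code. Below is a minimal, illustrative Python sketch – all tables, addresses and port names are invented for the example – contrasting a Layer 2 switch, which forwards frames by exact lookup of the destination MAC address within a single segment, with a Layer 3 router, which picks a next hop by longest-prefix match on the destination IP address.

```python
import ipaddress

# Layer 2: a switch forwards by exact match on the destination MAC address,
# within a single network segment (the "local post office").
mac_table = {
    "aa:bb:cc:00:00:01": "port1",
    "aa:bb:cc:00:00:02": "port2",
}

def switch_frame(dst_mac: str) -> str:
    # Frames to unknown destinations are flooded out of every port.
    return mac_table.get(dst_mac, "flood")

# Layer 3: a router forwards by longest-prefix match on the destination IP,
# steering traffic across many networks (delivery "to the addressee").
routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"), "wan-link"),
    (ipaddress.ip_network("10.1.0.0/16"), "cloud-a"),
    (ipaddress.ip_network("0.0.0.0/0"), "default-gw"),
]

def route_packet(dst_ip: str) -> str:
    dst = ipaddress.ip_address(dst_ip)
    matches = [(net, hop) for net, hop in routing_table if dst in net]
    # The most specific (longest) matching prefix wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(switch_frame("aa:bb:cc:00:00:01"))  # port1
print(route_packet("10.1.2.3"))           # cloud-a (the /16 is most specific)
print(route_packet("10.9.9.9"))           # wan-link (the /8 beats the default)
```

The point of the contrast: the switch only knows about addresses on its own segment, while the router holds a map of many networks and chooses among overlapping routes – which is exactly the kind of multi-cloud path selection discussed below.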
So, in the early stages of a company’s adoption of cloud computing, predominantly Layer 2 connectivity will likely suffice, as the user can organise the final routing themselves – in such circumstances telcos are happy to step up to the plate and provide it.
Inevitably, however, cloud-based applications will increase in number, and may start talking to, through and with other cloud-based infrastructures – this is cloud-to-cloud connectivity, which is increasingly common in the business world today.
Add to this the likelihood of adding more users across disparate geographies, and the picture emerges of demands on the sort of infrastructure – and the sort of service contract – that telcos are less able or willing to provide. At this point the organisation needs expert help with correct, efficient and secure routing through these labyrinths.
Otherwise, response times increase, user dissatisfaction increases and business responsiveness suffers. At InterCloud, we’ve seen this situation develop more and more frequently as cloud technology moves further into the mainstream.
A standard response from a telco might be to throw more bandwidth at the problem, but this creates more issues than benefits: (a) the cost to the business goes up, and (b) delivering the extra connectivity requires a longer lead time. The approach does not work for cloud computing because demand can go down as well as up, and such an inflexible response to a volatile environment is ineffective.
Simply increasing bandwidth, then, is not the answer. What is needed is some way of segregating the applications and routing the communications between them more efficiently – to use the Post Office analogy again, the better the sorting process (or, in this case, the network paths), the more quickly letters reach their final destination.
Meeting user requirements in a proliferating cloud-to-cloud environment is fast becoming the number one problem that organisations’ IT departments are having to deal with. If these IT functions had their druthers, the migration to cloud computing would be a careful, planned and controlled journey. Sadly, this is rarely the case and IT has to deal with the situation as they find it – “I wouldn’t start from here”, to quote the old joke.
So, what should the CIO be thinking about in plotting the course through the potential minefield of cloud computing?
- It’s important to look for solutions which can segregate the various cloud applications, giving greater clarity over the necessary communication paths, and then route those communications at Layer 3, which gives more flexibility in despatching packets to the right addressees.
- Aim to get performance SLAs in place which support your overall business – today and tomorrow.
- Also worth seeking is a solution which links SaaS and IaaS providers more directly, rather than increasing traffic across the company’s own network.
- Given the peaks and troughs of cloud computing, the CIO should also avoid committing to long-term bandwidth contracts and instead seek greater flexibility – some kind of tailored package that, importantly, can be provided quickly. Markets don’t give businesses long to respond to changing conditions, so new computing functions need to be up and running fast.
- There will also be specific requirements within these overall parameters – mixing private and public cloud, extending firewalls to secure communications, and maintaining control over governance in a multi-cloud environment.
- Finally, the CIO should be thinking long-term – understanding how business drivers will change over time and the impact this will have on the adoption and ubiquity of key applications.
CIOs should be looking at these areas as they deal with cloud service providers and telcos, and seeking a way to address all of them as part of ensuring that their organisations’ users get the service they need and their businesses get the benefits of cloud computing that enable them to thrive and survive in today’s volatile and competitive markets.
Matthew Parker is responsible for driving InterCloud’s growth in the UK through new customer acquisition. His team focuses on the challenges and pain points of large global enterprises – organisations consuming multiple clouds (IaaS and SaaS, public and private) and trying to manage the growing challenge of application performance across those clouds. “These challenges will only increase over time, and as a result, it is a really exciting time to be working for an organisation that is at the centre of attempting to address these issues for global companies.”
The author of this blog is Matthew Parker, UK sales director & country manager, InterCloud
Comment on this article below or via Twitter @IoTGN