As COVID-19 lockdowns swept across the globe, many organisations made a leap to the public cloud for the very first time, with minimal lead time to prepare, experiment, or strategise. Now the pendulum is swinging the other way.

For organisations previously conducting their business solely on-premises, the public cloud was the clearest path forward in a lockdown-driven world. Social distancing and remote work were quickly facilitated by a swift shift to public cloud providers, enabling safer working environments for the duration of the pandemic.

Now that the dust has settled, however, many organisations are examining the challenges and potential risks associated with the public cloud, with many initiating partial data repatriation activities in order to find a balance between convenience, compliance, and complexity.

The first driver: security unknowns

Just how much of your critical and sensitive data is hosted on public cloud infrastructure? The major public cloud providers, including Amazon Web Services, Google and Microsoft, now offer a vast range of productivity and analytics tools that can essentially enable an organisation to become 100% cloud-based, should they choose.

But businesses are increasingly, and correctly, identifying the risks of having the majority of their data managed by only a handful of providers, particularly when it comes to having full transparency over the infrastructure their data is processed by, or stored on. Customers of major public cloud providers must largely take it on trust that hardware and software risks are being appropriately managed, and that successful attacks are detected in time.

This sort of trust exposes organisations, and their clients, to a host of unknown risks.

In some cases, the risks may be considered acceptable – and clearly documented in disaster recovery and business continuity plans. But in other cases, data loss, data corruption, or data held hostage can cripple an organisation’s competitiveness, operations, and reputation.

The second driver: performance efficiencies

Latency issues are an unfortunate reality on the public cloud, particularly if your area of operation is far from your cloud provider’s nearest data centre. Much like the unknown security risks of opaque public cloud infrastructure, some organisations are willing to endure slower data transfer and retrieval times, along with lacklustre application performance. However, there is a broad variety of use cases where this simply isn’t acceptable for a business that relies on speed to maintain competitiveness. For these organisations, placing data where it is needed – processing and storing it locally, then backhauling the results later – is essential.
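That local-processing-then-backhaul pattern can be sketched in a few lines. The example below is illustrative only (the sensor readings and summary fields are invented for the sketch, not drawn from any particular platform): raw data stays on the edge node, and only a compact summary travels back to the cloud.

```python
from statistics import mean

def summarise_readings(readings):
    """Reduce raw sensor readings to a compact summary for backhaul.

    The raw list never leaves the edge node; only this small
    dictionary is sent over the (slow, metered) link to the cloud.
    """
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": mean(readings),
    }

# Hypothetical temperature samples collected locally at the edge.
raw = [20.1, 20.4, 19.8, 21.0, 20.6]
summary = summarise_readings(raw)
```

The payload shrinks from the full sample set to four numbers, which is the core trade-off edge computing makes: spend compute locally to avoid moving data.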

If your business streams video to customers only after several minutes of buffering, and a competitor offers similar content in a matter of seconds, maintaining a stable customer base will become increasingly difficult.

Industrial automation, public utility management, remote health services and intelligent transportation relying on cloud services all require efficient, highly available, low-latency access to function properly. And this is only scratching the surface of the vast number of corporate and government activities that increasingly rely on rapid access to cloud-based services. Organisations making use of machine learning, the internet of things (IoT) or artificial intelligence will find performance issues begin to outstrip the convenience of a public cloud solution.

This need for reliable, swift response times is why edge computing is one of the four major trends shaping data centres and cloud adoption in 2021.

The third driver: the true cost of full public cloud dependency

Shifting to the public cloud sounds like an obvious move towards cost-efficient service delivery. With infrastructure, software, storage and support all available on-demand, it’s a quick answer to the question ‘how do we remove our barriers to scale?’. Many organisations, whether they’re universities supporting collaborative research projects or businesses collecting real-time performance data from their products, will find demand for some cloud services varies considerably from quarter to quarter. Why manage your own infrastructure when you can simply pay for what you use month to month?

The truth is that the public cloud isn’t always the cheaper option, especially when it comes to managing workloads that require frequent shifts between local and cloud storage.

What businesses save in staffing costs and infrastructure overhead can wind up being well and truly negated when it comes to managing multi-cloud workloads. Additionally, there’s the disadvantage brought about by fluctuating pricing plans – certainly, organisations can regularly shift to whoever offers the best deal, but these shifts come with their own expenses, both financial and operational. On-premises workloads, on the other hand, can be budgeted for with a larger degree of certainty.

It’s cheap to upload data to the cloud – but frequently expensive to get it back again.
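A back-of-envelope calculation makes the point. The sketch below uses entirely hypothetical figures (the per-gigabyte rate and free allowance are placeholders; check your own provider’s pricing): even a modest monthly egress volume adds up quickly, while the equivalent ingress is typically free.

```python
def egress_cost(gb_per_month, rate_per_gb, free_tier_gb=0):
    """Estimate monthly data-egress charges for a given transfer volume.

    All figures are illustrative assumptions, not any provider's
    actual price list.
    """
    billable = max(gb_per_month - free_tier_gb, 0)
    return billable * rate_per_gb

# Hypothetical scenario: 5 TB out per month at $0.09/GB with 100 GB free.
monthly = egress_cost(5_000, rate_per_gb=0.09, free_tier_gb=100)
```

Under these assumed rates, pulling 5 TB back out each month costs hundreds of dollars, every month, while pushing the same data in cost nothing.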

For many use cases, particularly when it comes to data that doesn’t need to move outside of your own organisation terribly often, it’s sound financial sense to keep it in your own ICT infrastructure.

The first two drivers of this public cloud reconsideration, performance and security risk, are also worth weighing for their impact on your budget. What would it cost your organisation if each key process took an extra hour or more to run? What impact would losing access to your public cloud hosted data have on daily operations?

The public cloud still has much to offer: the challenge many businesses are currently facing is where to draw the line.

As life begins to return to pre-pandemic patterns, and organisations review their current approach to the cloud, the answer isn’t a complete exodus, but a shift towards the balance of a hybrid cloud approach. By examining which workloads are best suited to on-premises storage, reviewing performance requirements, and carefully assessing risks, organisations have the opportunity to enjoy the best of both worlds.

Where the public cloud is all about convenience at the cost of transparency and control, your private cloud design principles should focus on optimising for performance, density and total authority over every aspect of your solution.

Software-defined storage is the first step towards building your highly scalable, flexible hybrid cloud solution.

Software-defined storage (SDS) combines the flexibility of the public cloud with the control you need for running secure, efficient local workloads. By separating the functions that software performs from the hardware running it, an SDS approach allows for use-driven customisation, scalability and optimisation. What’s more, independence from storage hardware creates the opportunity for an agile, adaptable data centre. For edge computing, this is essential for minimising your expenses by provisioning what you need, when you need it, where you need it.

Of course, shifting to an SDS approach can seem challenging at first, its very flexibility and customisation potential daunting after a ‘decision-lite’ experience in the public cloud. Where do you even start?

Download our free eBook to start planning your SDS journey

In our eBook, Software-Defined Storage for the Enterprise, we outline four approaches to designing your own SDS solution, sharing the pros and cons of each, along with a framework for assessing vendor solutions, so you can make the best decision for your requirements and available resources.

A hybrid solution might seem challenging at first, but with a strategic, task-specific approach, your organisation can balance the benefits of the public cloud with the power and control required for your critical workloads and sensitive data.