By Andrew Moloney, VP Strategy

In the race “back” from the public cloud, making the right platform choice at this strategic “inflection point” could have a huge long-term impact.

In tech, we often talk of technology lifecycles, but sometimes that cycle can feel more like a swinging pendulum. Take, for example, the rush from on-premises data infrastructure to the full-throttle shift to the cloud. For a while there, if you weren’t moving all your workloads and data to AWS, Azure, or Google Cloud, your peers started to look at you like you were crazy. Why wouldn’t you? It switched Capex to Opex, it scaled elastically with demand, and the management overhead was (in theory) meant to be low.

More recently, though, we’re seeing that pendulum swing back to a more rational equilibrium: a hybrid world combining the best of both. In fact, IDC reports that 80% of organizations are now undertaking some level of cloud data repatriation.

It’s at these “inflection points” in technology that key strategic choices need to be made, and the quality of those decisions will have a lasting impact on your organization. And let’s be clear: you can be sure that Amazon, Microsoft, and Google have not missed that fact either.

Three Drivers That Underpin Data Repatriation

We see three key reasons underpinning such a big shift back from a “pure play” cloud strategy, and their relative priority differs from one organization to the next:

One: Cost

While moving workloads and data into the cloud may seem attractive at first, for many businesses the egress costs (the cost of moving the data back out again) can really stack up. If you can identify workloads where the data needs to be accessed locally on a regular basis, why pay the cost of moving it to and from the cloud? In addition, the much-vaunted ability to deploy a multi-cloud strategy has, for many, not been achievable in practice at a workload level, making it harder to switch dynamically between the big three vendors to keep their pricing keen.
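
To put the egress point in numbers, here is a back-of-envelope sketch. The per-GB rate, dataset size, and access frequency below are illustrative assumptions rather than any one provider’s actual pricing:

```python
# Back-of-envelope egress cost estimate. The $0.09/GB figure is an
# illustrative assumption (in the ballpark of published entry-tier
# internet egress rates); substitute your provider's current pricing.
EGRESS_RATE_PER_GB = 0.09  # USD per GB, assumed

def annual_egress_cost(dataset_gb, pulls_per_month):
    """Yearly cost of repeatedly pulling a dataset back out of the cloud."""
    return dataset_gb * pulls_per_month * 12 * EGRESS_RATE_PER_GB

# A 5 TB working set read back on-premises four times a month:
print(f"${annual_egress_cost(5_000, 4):,.0f} per year")  # -> $21,600 per year
```

Even at that modest scale, a seemingly small per-GB charge becomes a five-figure annual line item, and it grows linearly with every additional read.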

Two: Agility and Performance

The public cloud is great for managing elastic workloads, but not so great where latency is an issue. This performance challenge has been magnified further of late by new types of workload, such as AI and IoT, which have in turn driven the growth of Edge Computing, adding compute and storage closer to where the data is created.

Three: Data Governance and Security

As data breaches continue unabated and regulators respond, the third reason we see organizations repatriating data is to gain greater governance and security control over it, along with transparency in how those controls have been implemented.

The Fight for Control of Your Hybrid Infrastructure

As I’ve already mentioned, the big guys don’t want to lose your data from their cloud ecosystem, but if hybrid is now a reality, how do they respond? Enter AWS Outposts, Microsoft Azure Stack, and Google Anthos – three slightly different approaches, but each designed to keep your data in their cloud, even if it’s physically residing in your own data center.

If there were three reasons to move FROM the public cloud, how do any of these approaches measure up against the drivers we just identified?

The Choices Just Don’t Stack Up

Well, if you’re still in the same proprietary ecosystem, then you’re still locked into your chosen cloud vendor’s pricing plan. From an agility perspective, your ability to deploy a multi-cloud strategy is also impeded, giving you little room to negotiate. You may see improvements in performance by placing workloads back in your own data center, but if Edge Computing is in your stars, will you be able (and willing) to roll out that same architecture to every edge environment? Will the hardware prescribed by those vendors actually perform efficiently in the edge locations you need?

Lastly, have you really addressed security and governance concerns simply by placing a cloud vendor’s hardware and/or software in your location? What real controls are you able to place on that deployment? And how will those controls stand up to close inspection by regulators?

The Solution: Open Source and Task-Specific Hardware

In our view, the right strategic choice as you consider both hybrid cloud and any future shift toward Edge Computing is to use this “inflection point” to break the tie with the public cloud vendors and move to an open-source-based architecture able to integrate with them on YOUR terms.

The software stack is already well understood and widely deployed, consulting and support are readily available, and your hardware choices are broad, giving you the bargaining chips you need to drive a good deal and to select the right platform for your use case.

That said, in our opinion, you really make the best strategic choice if you couple that shift to open source with a move to task-specific hardware on which it will run: appliances designed with a single purpose in mind, giving you the best of all worlds – blistering performance and efficiency, manageability, and scalability – underpinned by transparent design and manufacture that gives you secure provenance for your hybrid cloud platform.

Remember, though: under the hood it’s still open source at its core, eliminating vendor lock-in for your own infrastructure while also facilitating a multi-cloud strategy, should you wish to implement one.
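
As an illustration of what integrating “on YOUR terms” can look like, here is a minimal sketch assuming an S3-compatible open-source object store such as Ceph’s RADOS Gateway. The endpoint URLs and bucket name are hypothetical placeholders, and credentials are assumed to be configured in the usual way:

```python
# Sketch: the same S3-compatible client code can target an on-prem
# open-source object store or a public cloud simply by swapping the
# endpoint. Endpoints and bucket name here are hypothetical.
import boto3

def object_client(endpoint_url):
    """Build a client; endpoint_url=None uses AWS's default endpoint."""
    return boto3.client("s3", endpoint_url=endpoint_url)

on_prem = object_client("https://rgw.example.internal:7480")  # e.g. Ceph RGW
cloud = object_client(None)                                   # public cloud S3

for client in (on_prem, cloud):
    client.put_object(Bucket="repatriated-data", Key="sample.txt", Body=b"hello")
```

Because the storage interface stays the same wherever the data lives, deciding where a workload runs becomes an operational choice rather than an architectural commitment.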

Make the right decision at this inflection point. Talk to us and see how SoftIron can help.