This blog is part 3 of 3. You can go back to read part 1 or 2 here. Or take a deep dive and download our white paper to get it all at once: Cloud Infrastructure and the Game of Tradeoffs
A quick recap
In part 1 of this series, we shared a bird's-eye view of the state of cloud: why it's so complex, the challenges this presents to the modern-day CIO, and why current cloud providers aren't actually making it any easier – despite their snappy marketing claims.
In part 2, we broke down how HyperCloud is deployed, and who it's designed for. In this blog, we'll explain what makes HyperCloud so unique, and how it differs from what else is currently available on the market.
Only part of the problem has been solved
In part 1 of this series, we touched briefly on the rise of the public cloud. One reason for that rise is that those vendors have successfully made the cloud so easy to use – on the surface. The user is given a neat set of interfaces and clearly defined services to interact with, and the vendor's engineering effort goes into optimizing those services. Replicating that experience is a huge undertaking for smaller organizations, especially if they aren't directly monetizing their IT footprint.
And yet there's incredible value in having this capability internally available as part of an IT estate. Features such as elasticity, self-service, and consistent orchestration help lines of business build apps and new features more quickly. And the significant price savings, performance benefits, and control are what keep IT a strategic resource that contributes to profitability.
Applications increasingly use a variety of infrastructure and data services across many different technologies. IT operators have to manage not only the underlying components but also the interconnections between them, so the complexity problem beneath the user experience only gets worse over time. In fact, it's often the connections between technologies that become the hardest parts to keep running as each component goes through its lifecycle. It's not just the bricks – it's the mortar that leads to pain, and it's a combinatorics problem. That complexity creates inefficiency, slow-downs, and risk.
With a few exceptions, the vendors who claim to sell some kind of private or hybrid cloud infrastructure are really only selling part of the story.
Under the covers, it's a complex combination of technologies that are loosely tied together – or not tied together at all – leaving IT to figure out how to weave it all into a working whole, which demands high levels of skill.
4 Approaches to cloud… and still too complex
Software Defined Infrastructure
The software-defined vendors only handle the middle part, leaving customers to integrate from the virtualization layers down into the hardware. Most don't go all the way up the stack either, missing critical aspects of the user experience that make cloud attractive to its consumers – or they require massive professional services engagements. You may get to choose your own hardware, but even that's limited by compatibility lists. Not to mention the costs.
Integrated Infrastructure
The integrated infrastructure vendors mainly solve the other half of the problem, focusing on the hardware and some primitive automation tooling. Dell/EMC might have done more until they sold off VMware. It's really a packaging exercise for delivering racks of gear in a standard set of configurations, with a single throat to choke for support. They don't solve the higher-level problems needed to deliver that consumption experience.
Single Vendor Edge to Cloud Offering
Recently, some of the traditional vendors have marketed combinations of their product portfolio as a turnkey cloud. But under the covers it’s mostly more of the same, with a management layer on top and a massive professional services wrapper. And since none of them owns the virtualization stack in a meaningful way, once you scratch the surface you’re back to old school element managers and CLIs.
Fully Turnkey Cloud Infrastructure
The only vendors that have really tackled the full integration problem on premises are public cloud providers – specifically Amazon with Outposts and Microsoft with Azure Stack Hub. In this case, users get the full cloud experience with all the bells and whistles – because these vendors have a bench of thousands of engineers to throw at it. But in this model, you're locked into their public cloud, and they're pretty clear that what they want is for you to migrate those workloads entirely into their facilities over time.
Finally, a radically different approach with HyperCloud
Instead of building, weaving, and operating independent components, HyperCloud provides a fully turnkey, fully integrated, intelligent cloud fabric optimized for enterprises building and growing their own private or hybrid clouds. It's incredibly simple to deploy and own, independent of any public cloud provider, and requires a fraction of the time and skills.
HyperCloud Intelligent Cloud Fabric
SoftIron's holistic approach and unified hardware and software technologies enable low-touch cloud operations and provide the highest level of resiliency – without requiring hyperscale, and without armies of operators to keep it running. The result: dramatic reductions in the time required to build and deliver the cloud consumption experience, from the core to the edge of your data footprint, at very small or very large scale.
If you started here, you might want to go back and read part 1 of this series “At last, a silver lining for private cloud with HyperCloud Intelligent Cloud Fabric™” or part 2 “What is HyperCloud, and who is it for?”.
Download the full white paper here Cloud Infrastructure and the Game of Tradeoffs.
Watch our Demo On Demand to see HyperCloud in action