
Building clouds from a basket of legacy technologies is incredibly complicated. Complicated to design and build, complicated to live with, complicated to evolve, scale and secure. It needs an army of experts siloed in their various fields of expertise, and a myriad of support, maintenance and spares agreements to keep the band on the road.

HyperCloud changes all of that. 

HyperCloud achieves something radical: a reductive technology architecture that, for the first time, strips away waste and complexity. An inseparable fabric of hardware and software at every layer, where resolving complexity deep in the technology’s DNA becomes the enabler – a lever that profoundly changes how IT deploys and owns its own cloud. Resources become stateless and easily consumable, and the intelligence becomes ingrained in the cloud’s network fabric.



Let’s get physical

We started by rethinking the physical infrastructure as a whole. What if we designed all the hardware needed to build a cloud – regardless of whether it was bringing compute, storage or networking to the party – to share a bunch of common design elements? One platform on which everything could efficiently be built. That would hugely simplify spares handling and the lifecycle management of upgrades and the like.

But then, what if we could design an architecture that made these hardware components consumable? Not in a throwaway, single-use kind of way, but in the ability to add more of them to the fabric, change them out, or remove them entirely – without the rest of the system caring, without impacting end users, and without the engineer installing them ever needing to configure them, or even much care what they were?

Want to add compute or storage resources? Simply plug them in and power them on. Within a few minutes, the resources are available to the entire cluster. Need to decommission a node? A single command migrates all resources from that node and powers it off for physical removal.
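That workflow can be sketched in a few lines. This is purely an illustrative model – the class and method names below are invented for the sketch and are not HyperCloud’s actual API – but it captures the idea: nodes join the pool with zero per-node configuration, and decommissioning is a single operation that migrates workloads before removal.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    capacity: int                     # e.g. vCPUs contributed to the pool
    workloads: list = field(default_factory=list)

class Cluster:
    """Toy model of a stateless-node fabric (hypothetical, for illustration)."""
    def __init__(self):
        self.nodes = {}

    def plug_in(self, node):
        # "Plug it in and power it on": resources join the shared pool,
        # with no per-node configuration required.
        self.nodes[node.name] = node

    @property
    def capacity(self):
        return sum(n.capacity for n in self.nodes.values())

    def decommission(self, name):
        # One operation: migrate workloads to surviving nodes, then remove.
        leaving = self.nodes.pop(name)
        survivors = list(self.nodes.values())
        for i, wl in enumerate(leaving.workloads):
            survivors[i % len(survivors)].workloads.append(wl)
        return leaving

cluster = Cluster()
cluster.plug_in(Node("node-a", 64, workloads=["vm-1", "vm-2"]))
cluster.plug_in(Node("node-b", 64))
print(cluster.capacity)                    # 128
cluster.decommission("node-a")
print(cluster.capacity)                    # 64
print(cluster.nodes["node-b"].workloads)   # ['vm-1', 'vm-2']
```

The point of the model is that the rest of the system only ever sees the pool, never the individual box.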

A true hardware fleet of everything needed to build a cloud in a data centre – not a selection box of flavours.

That would be transformational, so that’s what we did.


Networking is dumb

In the old world, data centre networking was a tax. Largely unchanged in decades – except for how fast it can shift packets – yet with an ever-increasing number of crucial but complex and temperamental parameters to configure, the plumbing of the data centre has become the single biggest headache for IT teams. Ask them. Nine times out of ten, it is the thing that screws up adds, moves and changes, impacts service availability and delays deployments.

HyperCloud handles networking configuration changes in a unified way: by integrating the networking into our Interconnects, changes to the virtual network layer automatically propagate to the physical networking layer. Whether a change is made via the API, CLI, or GUI, it instantly takes effect wherever it is needed physically. No more trunking VLANs to compute nodes and upstream switch ports.
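A minimal sketch of that propagation model, with invented names (this is not HyperCloud’s real control plane, just an illustration of the principle): a single controller owns the virtual-to-physical mapping, so one change fans out to every interconnect automatically.

```python
class Interconnect:
    """Toy physical layer: tracks which VLANs are live on which ports."""
    def __init__(self):
        self.port_vlans = {}

    def apply(self, port, vlan):
        self.port_vlans.setdefault(port, set()).add(vlan)

class FabricController:
    """Hypothetical single point of change: an edit to the virtual
    network layer is pushed to every interconnect, so nobody hand-trunks
    VLANs to compute nodes or upstream switch ports."""
    def __init__(self, interconnects):
        self.interconnects = interconnects

    def attach_network(self, vlan, ports):
        # One call, regardless of whether it arrived via API, CLI, or GUI.
        for ic in self.interconnects:
            for port in ports:
                ic.apply(port, vlan)

ic1, ic2 = Interconnect(), Interconnect()
ctl = FabricController([ic1, ic2])
ctl.attach_network(vlan=100, ports=["eth1", "eth2"])
print(ic1.port_vlans)   # {'eth1': {100}, 'eth2': {100}}
```

The design choice being illustrated: state lives in the controller and fabric, not in per-switch configuration files.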

HyperCloud even has IP Address Management (IPAM) functionality built in, automatically assigning IP addresses to guest workloads without requiring any guest VM or container configuration.
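Conceptually, built-in IPAM is a lease table over a subnet. The sketch below uses Python’s standard `ipaddress` module; the `Ipam` class and its methods are invented for illustration and are not HyperCloud’s actual interface.

```python
import ipaddress

class Ipam:
    """Illustrative IPAM: hands out the next free address in a subnet,
    so guests need no in-VM network configuration in this model."""
    def __init__(self, cidr):
        net = ipaddress.ip_network(cidr)
        self._free = iter(net.hosts())   # usable host addresses, in order
        self.leases = {}

    def assign(self, workload):
        ip = next(self._free)            # raises StopIteration when exhausted
        self.leases[workload] = ip
        return ip

ipam = Ipam("10.0.0.0/24")
print(ipam.assign("vm-1"))   # 10.0.0.1
print(ipam.assign("vm-2"))   # 10.0.0.2
```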

The network becomes the heart of the fabric.


Turn intelligent convention on its head

At a system level, though, we can change all of that. The network is the one place that can “see” everything. And if it can see everything, what if we placed all the intelligence to automatically configure and manage everything inside the network? Whole layers of networking complexity would immediately fall away.

What if we held the state of all these boxes – and the data within them – in this intelligent interconnect fabric? And if that fabric were controlled centrally, we could then manage the fabric, not the boxes. The dream of consumable, “stateless” hardware becomes a reality, and adds, moves, changes, upgrades – everything – becomes radically simpler, totally elastic, and totally organic.

Unique. Radical simplification. A game changer.


You’ve only got a cloud when you deliver a service

Convention dictates that you add more layers on top of the data infrastructure you run: to virtualise it, to administer and monitor the “tenants” you allow to use it, and to offer the templates, services and applications from which they can select to actually start getting value from your cloud. Another layer of complexity. Another silo. But if you’re building a complete cloud technology – a true product that delivers cloud, not just parts of it – then this capability is not optional; it should just be “there” the minute you install your cloud, regardless of whether it has external connectivity, and with no extra external dependencies. And, of course, since you’re operating a cloud, not a bunch of products and layers, its entire lifecycle management should also be part of the same fabric and single experience.

That would be unique. That would strip away the final layer of complexity.

Another game changer.

One Fabric, One Fleet – Secure Provenance Delivered

Complexity is the enemy of data security and service resilience. Multiple silos of hardware and software, and multiple vendors with obscure supply chains, make delivering secure, sovereign, resilient cloud services intensely difficult.

We manufacture it ourselves and integrate it into a single auditable fabric. Managing that fabric as a single entity, not as hundreds of discrete boxes, is transformative for security practitioners. It’s never been done before. It, again, is unique.

This is HyperCloud


Learn More About HyperCloud Architecture