By Andrew Moloney, VP Strategy

As data continues to grow exponentially and services continue to move online, are we reaching a tipping point in data center design?


A few weeks back, as the COVID-19 lockdown took hold, I took the opportunity to attend a number of sessions at the DCD New York conference, run by Data Center Dynamics and hastily reconvened as a virtual event. Virtual conferences, of course, aren't new, but I do wonder whether, in a post-lockdown world, we'll see them become more commonplace, either instead of or augmenting in-person events.


Four Immutable Truths in Data Center Design

No matter which part of the supply chain you occupy in building out and operating a data center, it strikes me that there are four immutable truths we must all confront if we are going to serve future data demands in anything like an efficient manner.

Truth One – Data is growing Exponentially, Data Centers are not.

Pick a report, any report, and it will tell a similar story. The amount of data we are creating, and the number of people and devices that want to access it, is growing exponentially. IDC, for example, predicts a 61% compound annual growth rate in data, accessed by an additional 1 billion people (bringing the total to 6 billion) by 2025. Total data in 2025? 175ZB – or 175 trillion gigabytes, if that helps. Let's just agree: it's a lot.
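To get a feel for what a 61% compound annual growth rate implies, here is a quick compounding sketch (the 100ZB starting figure is purely illustrative, not an IDC number):

```python
import math

def project(base_zb, cagr, years):
    """Compound growth: base * (1 + cagr) ** years."""
    return base_zb * (1 + cagr) ** years

# At 61% CAGR, the data volume more than doubles roughly every 17-18 months.
doubling_years = math.log(2) / math.log(1 + 0.61)
print(round(doubling_years, 2))  # ~1.46 years

# Illustrative: 100 ZB today becomes ~259 ZB two years out.
print(round(project(100.0, 0.61, 2), 2))  # 259.21
```

The point of the arithmetic: at that growth rate, any fixed data center footprint is outgrown in well under two years.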

It's true that advances in storage media will let us pack more storage into the same number of rack units, but that won't be enough. On our current course and speed, we will need hectares more data center real estate.

Truth Two – Traditional Data Center Appliances are Power Hungry and Hot

So, let's say increases in storage capacity get us some of the way; we still need to supply enough power to these data centers to support fully loaded racks. While significant strides have been made in data center power efficiency over the years, that's still quite a big ask. And those racks generate heat. Lots of heat. In fact, the average data center consumes around 30% of its total power on cooling alone. Few, if any, of the best-performing data centers are found in Enterprises, however; they seem reserved for the public cloud vendors and monoliths such as Facebook. It's probably also fair to assume that, in the rush to the cloud over the last few years, investment in Enterprise data center infrastructure has stalled in many businesses.
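A cooling share of 30% translates directly into the familiar PUE (power usage effectiveness) metric. As a rough sketch – assuming, for simplicity, that cooling is the only non-IT load – a 30% cooling share implies a PUE of about 1.43:

```python
def pue_from_cooling_share(cooling_share):
    """Rough PUE estimate assuming cooling is the only overhead:
    IT power = total * (1 - cooling_share), PUE = total / IT power."""
    return 1.0 / (1.0 - cooling_share)

print(round(pue_from_cooling_share(0.30), 2))  # 1.43
```

For comparison, the hyperscale operators mentioned above publicly report PUE figures much closer to the ideal of 1.0, which is what makes the Enterprise gap so stark.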

And while we're talking about power and heat, let's not forget the bigger challenge we all face – climate change. How are the decisions you are making for your data center architecture positively impacting your own CSR strategy?

Truth Three – The Shift from the Core to the Edge

Yet today we're seeing a shift of data from the core to the edge of the network, and while data is still growing in the core, the growth at the edge makes truths one and two even more of an issue. If I move my data to the edge of my network, I'm likely also moving it nearer to where my employees operate – cities and other densely populated areas. In these areas space is at a premium, and it's less likely that I can optimize the environment for power efficiency and/or cooling.

Truth Four – A Return of Data to the Enterprise

I'm sure the rise of hybrid cloud environments is not news to you. We see an increasing trend of repatriating data out of the public cloud and back on-premises. This trend is driven by a wish to apply greater security controls to more sensitive data by keeping it "local", and/or to reduce latency for regularly accessed data and so enhance performance.

The big public cloud vendors like AWS, for example, are jumping on these issues and trying to stem the tide with solutions like Outposts – but does that really solve the issue of data security? And what about the challenge of still being locked into a proprietary architecture?


How does SoftIron help?

These "immutable truths" have been driving the SoftIron strategy for a number of years. They are among the reasons we believe that task-specific design is so important – a design built on a foundation of secure provenance.

By designing and building "task-specific" appliances from the ground up that are both high-performance and super-efficient, we can reduce the power requirement and the need for cooling. As a consequence, we can pack more petabytes of storage into every rack installed and stay well within the available power and cooling budget. We can free up square footage in data centers, making them more efficient, and make edge deployments possible in environments that may have been troublesome before.
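The rack-density trade-off is easy to see with a back-of-the-envelope calculation. The figures below are illustrative assumptions, not vendor specifications: given a fixed rack power budget, halving the watts consumed per terabyte doubles the storage you can deploy in that rack.

```python
def max_capacity_tb(rack_power_w, watts_per_tb):
    """Storage deployable in one rack under a fixed power budget
    (illustrative arithmetic; figures are hypothetical)."""
    return rack_power_w / watts_per_tb

# Hypothetical 8 kW rack budget:
print(max_capacity_tb(8000, 4.0))  # 2000.0 TB at 4 W/TB
print(max_capacity_tb(8000, 2.0))  # 4000.0 TB at 2 W/TB
```

The same budget, spent on more efficient appliances, yields more petabytes per rack – which is the mechanism behind the claim above.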

By designing and building from the ground up – rather than using sub-assemblies sourced from across the globe, as other vendors do – we are also able to provide a transparent, auditable platform on which to build a truly secure data center environment.

And lastly, by taking leading open source platforms and "wrapping" them in task-specific hardware, we can eliminate the complexities of using these platforms in the Enterprise while also eliminating vendor lock-in.

Find out how we can help in your data center – book a demo with us here.