The Bank of England recently highlighted the risks posed by the current trend of shifting UK financial services onto a small handful of global public cloud service providers.
It’s no secret that Amazon Web Services underpins a huge share of today’s internet, accounting for roughly a third of the global cloud infrastructure market. If AWS disappeared overnight, many businesses and government agencies would screech to a halt, with consequences likely even more debilitating than a successful cyber attack. Not far behind are Google and Microsoft, both of which continue to evolve their cloud platforms to serve public cloud computing needs unmet by Amazon.

Image provided by Statista, used under the CC-BY-ND 2.0 license.
So many eggs, so few baskets
The benefits of the public cloud are clear: reduced management and maintenance overhead, pay-as-you-go provisioning of popular software tools and platforms, and access to the resilience and availability a large-scale provider can offer (for the right price), to name a few. But those benefits don’t come without their fair share of risks, as the Bank of England has warned. By outsourcing such a large volume of data and services to such a small number of providers, the nation’s financial stability is put at risk.
To identify the best pathways towards sovereign resilience for the UK financial services sector in a cloud-computing age, we first need to understand the recent drivers of the mass move to the public cloud, and the common underlying issues in the infrastructure being used to provide public cloud services.
Accelerated public cloud adoption: the response to a world in lockdown
The COVID-19 pandemic has understandably been a massive driver of public cloud adoption in recent years, bringing about a dramatic shift in how and where data is created, processed and stored. It presented ICT teams in both the public and private sectors with considerable challenges, as entire workforces switched to remote work almost overnight. For organisations previously built entirely around on-premises operational models this was a massive shift, and few had the onsite infrastructure or support network to handle the sudden demand for cloud services.
The public cloud is at its best and most useful in such scenarios – providing a scalable and elastic buffer of computing power, delivering services on-demand, where and when required, while handling peaks and troughs with ease.
The speed at which these services can be spun up is seductive, and the distributed approach to computing and service provision seems a clear way forward for every ICT team. Tasked with supporting entirely new working patterns to a deadline of ‘yesterday, or earlier’, who wouldn’t want to push for a transition to the public cloud?
It’s no surprise then that across many sectors, the transition to the cloud has been swift. There can be no doubt that access to public cloud services is now a critical tool in the armoury of every CIO.
“Just someone else’s computer”
As the dust settles, and businesses adjust to their new ways of working, the opacity of the public cloud, and the risks this poses, are becoming more apparent. Cloud services are sometimes referred to as “just someone else’s computer”, and this captures the core risks of the public cloud perfectly.
Imagine for a moment moving a business-critical data application onto a computer hidden behind a wall where:
- you have no idea what hardware your application has been entrusted to, let alone its origins, components, design process, or the authors of the firmware and other software co-existing with your application;
- you have no real idea how it is being secured, other than by virtue of reliance on the claimed use of various security standards, with few ways to audit or verify this; and
- you have no idea what other applications or users may be hosted on the same infrastructure as yours, or what impact they may have on your applications and/or data.
Public cloud vendors will assert that, by not divulging such information, they are in fact doing you a service, as potential vectors of attack are obscured. But this provides little assurance for regulators and others with a vested interest in securing business and economic continuity should downtime occur or, worse, should data or services be lost entirely in the event of an attack (or, just as damaging, should confidence in the integrity of that data or those services be lost).

Security controls can only do so much for an already compromised system
The industry today takes a largely ‘information-centric’ approach to security; that is, it works on the assumption that every system is already compromised.
This leads to a focus on security controls that:
- protect access to the data itself, and
- protect it “at rest”, “in transit” or “in use”.
These are perfectly valid and useful security controls… but there’s a problem. These controls can generally only be applied to hardware systems that have already booted up and loaded their operating systems. This leaves a window of opportunity for bad actors to infiltrate hardware and execute attacks that alter normal operations during the bootup process, before the usual controls can be applied. From there, attackers can do as they please: disrupt operations, monitor sensitive information, steal or corrupt data, or even take complete control over a system.
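To make that limitation concrete, here is a minimal, illustrative sketch of one such information-centric control: encrypting data at rest with the third-party Python cryptography package. The data and key handling are deliberately simplified; the point is that the control is applied by software running on an already-booted system, so anything compromised beneath it (the firmware, the boot chain, the hypervisor) sits outside its reach.
```python
# A minimal sketch of an 'information-centric' control: encrypting data at rest
# using the third-party `cryptography` package (pip install cryptography).
# Note: the key is generated, held and used by software running *after* the
# machine has booted, so firmware- or boot-level compromise sits underneath
# this control entirely.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, fetched from a KMS or HSM
fernet = Fernet(key)

record = b"account=12345678;balance=1042.17"   # illustrative data only
ciphertext = fernet.encrypt(record)            # what ends up on disk ("at rest")

# Reading the data back requires the key, and therefore a running,
# trusted operating system and key store.
assert fernet.decrypt(ciphertext) == record
```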
We can patch vulnerable software. But what is the industry doing to address these infrastructural vulnerabilities?
It’s a matter of convenience – at least, for vendors and service providers. Over the decades, the design and manufacture of enterprise ICT products globally has largely moved to a model that makes addressing such vulnerabilities particularly challenging. A complex supply chain has evolved to minimise manufacturing costs, at the expense of oversight into the manufacturing process.
Essentially:
- product designs are based on generic hardware sub-assemblies, supplied with pre-compiled firmware – effectively ‘black boxes’ of hardware and software functionality;
- final product manufacture is generally carried out by a handful of sub-contracted factory networks, which service a large number of brands simultaneously, producing parts at a massive scale, usually in low-cost locations in Asia (mainly China); and
- any product differentiation is primarily delivered via the software running above these hardware subsystems and sub-assemblies.
This model is heavily embedded in modern industry practice, and for those already using it, a shift away would be a massive inconvenience. What that view ignores is that the inconvenience is instead passed on to the businesses at the end of the supply chain.
Ultimately, businesses are left with no option but to rely on trust alone that:
- hardware and software in subsystems have not been compromised;
- products built by the sub-contractor match the original design specifications;
- components used in manufacturing are exactly those specified in the design; and
- software loaded onto the products is as supplied and has not been altered in any way.
This chain of trust carries through from the vendor whose logo appears on the box, to the customer buying the product, to, in the case of a cloud provider, the tenants storing their critical data on that hardware – a group that includes many businesses within the UK financial services sector.
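To make that position concrete, the sketch below shows roughly the extent of the verification available to a customer at the end of such a chain today: comparing a delivered image against a vendor-published checksum. The file name and digest are hypothetical, and, more importantly, the check only relocates the trust problem, since it assumes the published checksum is itself authentic and that nothing was altered before the checksum was taken.
```python
# A minimal, hypothetical sketch of the verification a customer can do today:
# checking that a delivered firmware or software image matches the checksum
# published by the vendor. This assumes the published digest is authentic,
# which is precisely the trust the 'black box' model asks you to extend.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(image: Path, expected_hex: str) -> bool:
    """Compare a delivered image against the vendor-published digest."""
    return sha256_of(image) == expected_hex

# Example usage (file name and digest are placeholders):
# verify(Path("bmc-firmware-v2.4.bin"), "9f86d081884c7d65...")
```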
As the Bank of England has highlighted, this is a situation that cannot go unaddressed. The scale and sophistication of hacking attacks is ever-increasing, and the rewards for executing a successful attack can be enormous. Financial institutions are a particularly appealing target. But with major public cloud providers refusing to disclose full details of their infrastructure, UK financial institutions, regulators and customers are unable to make fully informed decisions about public cloud security risks.
Cyber attacks are frequently depicted as overnight disasters, as public as they are devastating. But for those looking to profit, a long-term, covert approach carried out through sustained compromise of supply chains offers far better returns.
One form of supply chain attack involves replacing legitimate electronic components with lookalike spyware components in the manufacturing process. Systems will look and function as normal, leaving businesses entirely unaware that their data is being monitored, and potentially altered. For bad actors playing the long game, this gives them consistent access to do as they please – with a far lower risk of being caught.
Even when a supply chain attack is detected, victims and law enforcement agencies are naturally reluctant to advertise failures in this area. However, there is ample and growing evidence of actual supply chain attacks being successfully mounted. Examples of such attacks can be found here.
Across-the-board exposure to supply chain compromise is the “dirty secret” of the ICT industry. Inertia in a long-established and complex supply chain is a hard thing to break: if no one operates any differently, then no one has to move away from the business model that has served them so well. Structural compromise is tolerated in the bedrock of our IT systems to an unconscionable degree – but the threat to national stability and sovereign resilience is there, even if manufacturers ignore it.
It seems that, at least in the UK, senior figures in the banking industry are realising that they no longer have the luxury of trusting this ‘black box’ approach. The risk of the unknown is simply too high.

Breaking ranks to address transparency – a ‘clear box’ approach to building critical cloud infrastructure
The ‘black box’ approach exposes customer data and service provision to significant and unquantifiable risk from hardware-based attacks. That’s why SoftIron doesn’t use it. Our manufacturing model places power back into the hands of our customers, with a manufacturing process designed specifically for supplying hardware with secure provenance.
No trust is needed: by building an end-to-end process in which a fully auditable forensic chain of custody exists at every step, we give customers genuine transparency.
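As an illustration of the principle (a simplified sketch, not a description of SoftIron’s actual tooling), one way to make a chain of custody tamper-evident is to hash-chain the records: each manufacturing step is logged as an entry whose hash incorporates the hash of the previous entry, so altering or removing any earlier record invalidates everything logged after it.
```python
# A simplified, hash-chained chain-of-custody log. Each entry commits to the
# previous entry's hash, so any later edit to an earlier record breaks every
# hash that follows it. Actors and actions here are illustrative only.
import hashlib
import json
import time

def append_step(log: list[dict], actor: str, action: str) -> None:
    """Append a custody record chained to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"actor": actor, "action": action,
             "timestamp": time.time(), "prev_hash": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; any edited or removed entry is detected."""
    prev_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log: list[dict] = []
append_step(log, "goods-in", "component batch received and serial numbers recorded")
append_step(log, "assembly", "mainboard populated and inspected")
append_step(log, "firmware", "firmware built from audited source and flashed")
assert verify_chain(log)
```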
We believe this ‘clear box’ approach is the only way forward for the ICT manufacturing industry, and for cloud service providers. And with our headquarters in London, we’re thoroughly invested in ensuring that the UK’s national infrastructure is up to the task of supporting resilient and secure cloud services.
The foundation of our ‘clear box’ approach
- Products are designed from the ground up, without reliance on sub-assemblies.
- All software is compiled directly from its source code, and every line of code is reviewed before compilation to understand its purpose and exclude malware.
- Manufacturing is carried out by SoftIron staff, in secure SoftIron facilities.
- Our “Secure Provenance” process opens products up to an ‘access-all-areas’ audit by key customers, giving them the ability to inspect every level and aspect of the product design and realisation process. This enables customers to perform their own risk assessment of the ICT infrastructure on which they will depend (one way such verification might look in practice is sketched below).
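As a hypothetical example of the kind of check this opens up (the paths and build command below are placeholders, not SoftIron’s process), a customer with access to the reviewed source could rebuild an artefact themselves and confirm that what they were shipped is bit-for-bit identical to what that source produces – a comparison that only holds where builds are reproducible, i.e. where the same source and toolchain always yield the same binary.
```python
# A hypothetical sketch: rebuild firmware from the audited source tree and
# confirm the shipped artefact matches it exactly. Paths and the build command
# are placeholders; the check relies on the build being reproducible.
import hashlib
import subprocess
from pathlib import Path

def sha256(path: Path) -> str:
    """Hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Build from the reviewed source tree (placeholder build command).
subprocess.run(["make", "-C", "audited-source", "firmware.bin"], check=True)

shipped = Path("vendor-shipped/firmware.bin")   # artefact received from the vendor
rebuilt = Path("audited-source/firmware.bin")   # artefact just built from source

if sha256(shipped) == sha256(rebuilt):
    print("Shipped binary matches the audited source build.")
else:
    print("Mismatch: the shipped binary is not what the audited source produces.")
```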
Using fully auditable products built by SoftIron, it is possible to deploy a credible, transparent alternative to today’s dominant cloud providers.