Analysis

AI’s Billion Dollar Bottleneck

Is the IT industry pouring billions into AI they can’t use?

Written by Mary-Ann Russon | August 15, 2025

Building the Agentic Cloud

In July, the UK government unveiled its splashy Compute Roadmap, a plan to transform the country's AI infrastructure with billions in new investment. The tech press called it ambitious. Some industry veterans had a different word for it: familiar.

Megan Starkey, Chief Executive of AI transformation consultancy RBD, watched companies scramble to rent graphics processing units (GPUs) during the cloud boom — only to let their computing power sit idle because their software wasn’t ready. Now, as tech giants, investors and governments pour billions into AI, Starkey sees the same pattern emerging.

“Nearly half of global GPUs are sitting unused because we’re chasing infrastructure when the advantage is integration. We’re solving the wrong problem,” she said.

It’s a stark contradiction at the heart of the AI revolution. While tech giants invest record amounts in computing power and governments race to stockpile processing capacity, the technology industry faces an uncomfortable truth: building AI infrastructure has outpaced the ability to actually use it. The result is billions of dollars spent on cutting-edge hardware that delivers a fraction of its potential value.

This challenge is one many organizations face in navigating an increasingly crowded AI marketplace. With dozens of companies claiming their components are indispensable, it’s getting harder to separate the technologies that solve real problems from those that simply promise to. 

What parts of the AI stack matter — and which are just expensive distractions? And how are seasoned investors applying lessons from previous technology booms and busts?

The Case for Applications Over Infrastructure

Nick Kingsbury has been placing bets on AI since before most people knew what a large language model (LLM) was. As a partner at Amadeus Capital, one of the UK’s top venture capital firms, he has spent more than 20 years watching AI evolve from academic curiosity to business imperative.

His investment thesis cuts against the grain of the current AI hype cycle. While others chase the latest breakthrough in LLMs, Kingsbury focuses on the layers that he believes make AI truly useful. These include the tools that help organizations harness LLMs, the vertical applications that solve specific problems in back-office banking, defense and security, and, most critically, application frameworks: software that orchestrates multiple AI components to deliver real business outcomes.

Picture a typical customer service scenario at a bank, Kingsbury suggests. A frustrated customer emails about a problem with their account. In a traditional setup, a human would read the email, categorize the issue, look up the account details and either resolve or escalate it. But with an application framework, the process becomes a choreographed dance of AI agents: one LLM categorizes the problem, a second AI agent locates the customer’s account balance and a third might email back requesting clarification. Only then does a human representative step in to close the loop. 
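That choreography can be sketched in a few lines of code. This is a minimal, hypothetical illustration of the pattern Kingsbury describes, not a real framework: the function names are invented, and simple stubs stand in for the LLM calls and the account lookup.

```python
# Hypothetical sketch of an application framework routing a customer email
# through a chain of "agents". Stubs stand in for real LLM and database calls.

def categorize(email_text):
    """First agent: classify the customer's problem (stubbed keyword check)."""
    if "fee" in email_text.lower() or "charge" in email_text.lower():
        return "billing"
    return "general"

def lookup_account(customer_id, accounts):
    """Second agent: fetch the customer's account details, if any."""
    return accounts.get(customer_id)

def draft_response(category, account):
    """Third agent: request clarification, or flag for human escalation."""
    if account is None:
        return ("human", "Escalate: no matching account found.")
    return ("auto", f"Re: your {category} issue on account {account['id']}: "
                    "could you confirm the transaction date?")

def handle_email(customer_id, email_text, accounts):
    """Orchestrator: chain the agents; a human closes the loop when flagged."""
    category = categorize(email_text)
    account = lookup_account(customer_id, accounts)
    return draft_response(category, account)

accounts = {"c42": {"id": "c42", "balance": 125.0}}
print(handle_email("c42", "I was hit with an unexpected fee", accounts))
```

The point of the sketch is the division of labor: no single model resolves the case end to end, and the orchestrating layer, not the LLM, decides when a human takes over.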

Kingsbury stresses that LLMs are not the complete solution when it comes to AI, but just part of a process. 

When GPU Power Hits the Wall

While Kingsbury sees opportunity in the current AI wave, venture capitalist Vince Berk is betting on an eventual correction. He agrees that private investors should focus on applications, but warns against getting caught up in the infrastructure frenzy.

“I would almost exclusively leave that to the hyperscaler companies,” said Berk, a partner at New Hampshire-based Apprentis Ventures, referring to large-scale computing providers. “Imagine it’s a data center with data-processing units burning tremendous amounts of energy. That infrastructure is only worth it if it can be capitalized over 6–10 years.”

Berk believes that eventually the hype will die down, some AI startups will go bust and the value of data center infrastructure will decline, because GPUs and AI systems use far too much energy. The returns are already shrinking, he said.

Plus, Berk believes, companies are applying LLMs to use cases that don’t even require AI. In the future, they will seek out cheaper ways to solve problems, such as simple automation, small language models or traditional computing methods, he said.

“The best and the brightest AI models have all passed a point of diminishing returns,” Berk said. “If you double the amount of required GPU power, the model only gets 5–10% better at answering your question.” 

The Power Problem Reshaping Computing

One of the challenges in building AI infrastructure is the amount of energy LLMs demand. The emerging solution is new, high-performance AI data centers built close to power sources, called “neoclouds.” 

But this creates a new set of problems that require government intervention, according to Karl Havard, Chief Commercial Officer of Nscale, a neocloud that claims to be the UK’s only full-stack sovereign AI infrastructure provider.

“Without [government involvement], you will have all sorts of chaos emerging,” Havard said. “People building up data centers wherever they want to, taking energy from the grid — which wouldn’t be great for any population. You then create this Wild West scenario where anybody who has money can start to build data centers and serve AI.”

This is why Havard is in favor of the UK Compute Roadmap, which aims to build an ecosystem of supercomputers around the British Isles. Everett Thompson, the Founder and Chief Executive of Las Vegas-based WiredRE, an independent AI data center development firm, also sees the roadmap as an opportunity. 

“This is a historic chance to modernize the UK’s energy and digital backbone, just as the National Grid did in the 1930s or civil nuclear did in the 1950s,” he said. 

“The difference now is that this transformation must happen through public–private collaboration, not central planning alone. The capital exists — in the hands of global tech firms racing to deploy AI — but it needs a clear, coordinated and confidence-inspiring framework to flow.”

AI is supposed to make decision-making faster and more efficient, but the infrastructure needed to support it demands the kind of long-term, coordinated planning that governments have struggled with for decades. Without that collaboration, we risk recreating the same fragmentation we’ve seen with previous technologies — a prime example being the cloud.  

The Cloud Bottleneck

Digital transformation experts Megan Starkey and Benjamin Hermann say there’s an even bigger problem than energy consumption: organizations and governments are buying shiny new technology without understanding how to integrate it into existing systems.

Hermann, Managing Director of German international IT consultancy Zoi, has seen the inner workings of many enterprise IT infrastructures firsthand. He believes some enterprises have taken a “lazy approach” to hybrid cloud deployment, moving only “the easy parts of their data” to the cloud while leaving their core data in legacy data centers. 

“That was fine for 10 years, but now with the generative AI wave, they can’t adopt the technology as quickly as their competitors,” he said.

Starkey agrees. “Close to 40% of cloud spend was wasted [over the past decade] because companies scrambled to rent GPUs before their software was even adapted for that environment,” she said. “What’s happening with AI now is almost identical.”

The Overlooked Infrastructure Problem

The infrastructure puzzle gets even more complicated when you consider where all this data should live. Organizations face two paths: keep their most sensitive information close to home or send it to external clouds.

AI and cybersecurity investor Alex Lanstein sees the future tilting toward the first option. He predicts that many organizations will eventually choose to process their most critical data on-premise, using free LLMs such as DeepSeek or Meta’s Llama, which can be downloaded to their own cloud.

But that approach requires significant technical expertise and infrastructure investment that not every organization can manage. There will still be plenty of data flowing to providers for processing.

Many organizations will face a fundamental challenge: how do you gather all your data — which may be scattered across various clouds worldwide — and feed it into an AI system that can actually use it?

David Flynn, the Chief Executive of US software firm Hammerspace, said this is the most overlooked problem facing AI infrastructure today. He put it bluntly: “AI is forcing the industry to do geographically dispersed work… You can have GPUs and frameworks, but if you can’t house the data and feed the data at pace, it’s not going to work.

“Think of the AI agents and models as a destination. Right now, we are using city streets to get the data to it. We need a superhighway.”

The Global Race That Misses the Real Challenge

Ask these experts about the UK’s Compute Roadmap announcement and you’ll get a straightforward assessment: it’s about building supercomputers and stockpiling GPUs for the future, driven by a global shortage. The more pressing question is whether this infrastructure will be used effectively.

“At least the UK is doing something,” Hermann said, pointing to the stark lack of data center capacity across Europe. Germany, for example, is currently two years behind the UK and five years behind the US in infrastructure development, he noted.

But Starkey recognizes the theme from previous technology waves. “In every case, it’s the same root pattern,” Starkey explained. “The US goes after speed and scale; the EU prioritizes governance and control; the UK goes for balance and splits the difference. But everywhere you look, the private sector keeps struggling to integrate and operationalize what it builds.”

The experts agree on one thing: success in AI won’t come solely from having the most powerful hardware. Whether it’s Berk’s warnings about diminishing returns, Flynn’s observations about data transport bottlenecks or Hermann’s critique of lazy cloud deployments, the message is consistent: for all the billions being spent on AI infrastructure, the industry may be missing the point entirely. As Starkey put it: “Different rhetoric, same outcome.”

Whether organizations learn from the mistakes made in cloud computing — or repeat them at scale — will determine who has an advantage in the AI race.

Mary-Ann Russon

Contributor

Mary-Ann Russon is a veteran journalist with more than 18 years’ experience writing about technology, science and business — in particular their intersection with policy, human rights and anthropology. Her work has been nominated for several awards, including the Technology Journalism Award at the British Journalism Awards. She specializes in breaking down complicated concepts to help enhance the general public’s and decision makers’ understanding of technology, science and business.