The AI FOMO Effect
Why most AI projects fail — and what leaders can do differently.
Written by Jennifer Guay | 4 min • August 21, 2025
In early 2024, Sebastian Siemiatkowski, the CEO of fintech startup Klarna, heralded a breakthrough that he believed illustrated the “profound impact” AI will have on society. The Swedish firm, which has more than 150 million customers, had just begun an ambitious plan to automate its call centers.
After only a month, the results were staggering. Not only was its chatbot already handling two-thirds of customer service chats — it was also doing the work of 700 full-time human agents, boasted customer satisfaction scores on par with human operators and was expected to increase annual profits by $40 million. In response, Klarna laid off workers and stopped hiring.
But Siemiatkowski had spoken too soon. In May 2025, the tech boss admitted Klarna was too focused on cost-cutting at the expense of customer service. His firm would once again start hiring human agents. They now work alongside the AI systems, handling complex cases that require empathy, nuanced problem-solving and the judgment calls that chatbots still struggle to make.
Companies across industries have embraced AI as a path to automation and greater efficiency. But enthusiasm often outpaces results: by some estimates, more than 80% of AI initiatives fail — twice the rate of failure for tech projects that don’t involve AI.
Software vendors have contributed to the confusion by overstating their products’ capabilities. Many wrap AI tools in marketing buzzwords, promising “next-generation agentic workflows” or “enterprise-grade automation” — jargon that obscures what the tools actually do, making it harder to separate substance from spin.
The C-suite faces a delicate balancing act: move too slowly on AI adoption and risk falling behind competitors; rush into hasty implementations and face costly failures. The pressure to act has intensified as companies struggle to distinguish practical applications from expensive dead ends.
The idea that workers can be easily replaced is one of the most common misconceptions about AI, said Nathalie Baker, Managing Director at FTI Consulting. Executives often find tools such as OpenAI’s ChatGPT or Microsoft’s Copilot “so convincing” at first that they underestimate their shortcomings, such as the tendency to hallucinate, she explained.
But these platforms still need a huge amount of oversight. This may mean bringing in new expertise to manage performance and compliance, or providing AI literacy training across the company, all of which comes at a cost, Baker said.
“In its current form, AI is very much more about human augmentation than replacement. However, we do see quite a lot of businesses that have moved too quickly and not had that human in the loop, and it can lead to negative outcomes,” she said.
It’s not only overblown marketing promises that wrongfoot companies, but also a lack of clarity about the problems they are trying to solve, said Dr. Mark Bloomfield, a Fellow at the University of Cambridge’s Judge Business School, who advises organizations on how to leverage AI.
The AI hype cycle “creates this FOMO effect” that forces organizations to jump in, despite the high rate of failure for AI projects.
Too many corporate leaders have a “tech-first, rather than problem-first” mindset, he said, which can lead to ill-judged investments. He also believes the AI hype cycle “creates this FOMO effect” that forces organizations to jump in, despite the high rate of failure for AI projects.
“Often organizations will buy enterprise software like ChatGPT or Perplexity and say, ‘We have deployed the technology; we are now AI-first.’ Then the crash of disappointment comes in when they don’t see the metrics [they expect],” Bloomfield said.
Departments should lead AI projects, rather than leave them to technical teams, Bloomfield emphasized. Through his consultancy Turbulence, he works with a large investment bank that is replacing about half of its HR operations with AI agents.
It would have been easy for the bank to treat automation as a cost-cutting exercise, he said. But HR leaders realized the time saved could free up staff for more strategic work, like talent management. “It’s about combining technological expertise and business insights to ensure tech is deployed in the right way,” Bloomfield said.
He cites research by Dr. Ethan Mollick, an Associate Professor at the University of Pennsylvania’s Wharton School. Mollick argues that AI isn’t just a productivity tool. It can also be an effective virtual teammate.
In a randomized controlled trial of 776 knowledge workers at consumer goods giant Procter & Gamble, his team found that individuals working with AI performed as effectively as two-member teams working without it. They also found that AI helped individuals think outside of their own specialties – R&D, for example, or finance – to come up with more “balanced solutions.”
The study concluded that, as AI could “effectively replicate” certain benefits of human collaboration, organizations needed to rethink the structure of collaborative work. That might mean developing new forms of training, reducing silos between departments and restructuring entire teams to support human–AI collaboration.
So, how can firms avoid the hype and reduce the risks of inflated expectations? Piloting solutions is part of the answer, said Baker. By taking an incremental approach, organizations can ascertain what works and better tailor their plans, she said.
“To help understand the technology, you start with a lower-risk use case, something that won’t affect your business too much, but really speeds up processes. You can then start implementing longer workflows,” said Baker.
Data management makes up “80–90%” of the work involved in deriving “real value” from AI.
Data is the lifeblood of any enterprise AI solution, and firms should work out how to manage theirs as early as possible. Professor Mohanbir Sawhney, the McCormick Foundation Chair of Technology at Northwestern University’s Kellogg School of Management, believes data management makes up “80–90%” of the work involved in deriving “real value” from AI. Still, many firms “don’t have their data houses in order,” he said.
“First of all, the data has to be in one place, so you have to have a data lake,” explained Sawhney. “Then the data has to be current, complete, unbiased and governed. There’s a whole data governance framework that needs to be put in place before you can even think about AI at the enterprise scale.”
Business leaders are developing a more nuanced understanding of AI’s capabilities and constraints, according to Gartner’s 2025 AI Hype Cycle report. Yet the research revealed a significant disconnect between investment and satisfaction: fewer than 30% of AI leaders said their chief executives were happy with the returns on AI spending.
The challenges varied by organizational maturity. Companies with limited AI experience struggled to identify practical applications and had what Gartner called “unrealistic expectations” for their initiatives. Even sophisticated organizations faced obstacles, particularly in recruiting qualified talent and building AI literacy across their workforce.
Sawhney sees this as a natural adjustment and urges C-suite leaders to be patient with their AI pilot projects.
“People ask me if AI is overhyped or underhyped, and I say both. It’s overhyped in the short term because initially, you expect things to move faster, and they don’t. But it’s underhyped in the long term because when critical mass kicks in, I believe it will accelerate beyond the point we expect,” he said.
“The answer is we have to temper our short-term expectations and get down and do the hard work.”