Why most enterprise AI pilots stall (and the three things that fix it)
The numbers are bad and they're not improving
The data on this is consistent across vendors and analysts. IBM's Institute for Business Value found that only about one in four enterprise AI initiatives delivers the expected ROI. Gartner has predicted that 30% of generative AI projects will be abandoned after proof of concept by the end of 2025. RAND's analysis of AI project failure puts the rate above 80%, roughly twice the failure rate of non-AI IT projects.
So what's actually going wrong? It isn't the models. It isn't the platforms. After delivering AI training programs to enterprise teams across banking, retail, health, and government, I can tell you the pattern. AI pilots stall for three reasons, and they're almost always the same three.
Reason one: the pilot was never tied to a business outcome
This is the big one. Most enterprise AI pilots get scoped backwards. Someone reads about a use case, finds a willing technical team, builds a working prototype, and only then asks "so what do we do with this?"
By that point it's too late. The pilot exists in a vacuum. There's no executive sponsor whose number changes when it works. There's no operations leader who's going to redesign their workflow around it. There's no finance owner who's allocated budget for the running cost. The pilot is technically real and organisationally orphaned.
Compare that to a pilot that starts with the question: "Which line in our P&L are we trying to move, by how much, by when, and who owns that line?" When you start there, the pilot has a parent. It has a destination. The technical work is in service of an outcome that's already been agreed.
I've watched this play out in dozens of engagements. The pilots that survive are the ones where, on day one, someone in the C-suite can finish the sentence: "If this works, the result is _______ and I'm accountable for it." If nobody can finish that sentence, the pilot will stall. Not might. Will.
This is the single biggest cause of enterprise AI project failure, and it's almost entirely a scoping problem. It has nothing to do with the technology.
Reason two: nobody designed the workflow
Here's the second pattern. The model works. The output is good. And then the team trying to use it can't actually integrate it into how they do their job.
A claims team gets an AI summarisation tool. The summaries are accurate. But the team's existing process expects a structured form, not a paragraph, and the downstream system can't ingest free text. So the analysts copy the summary, retype it into the form, and the time saved is zero.
A marketing team gets a content generation assistant. The drafts are decent. But the brand approval workflow requires three rounds of review, and the AI-generated content has to go through all three rounds anyway because nobody's updated the governance rules. So the assistant adds a step instead of removing one.
This is a workflow design problem, not an AI problem. And it's why we tell clients that workflow design has to come before tool selection, not after. If you don't redesign the work, you're just bolting AI onto a process that wasn't built for it. The pilot will technically succeed and operationally fail.
The fix is unglamorous. Map the current workflow. Identify where the AI actually changes a step, not just adds one. Redesign the surrounding process: inputs, handoffs, approvals, exception handling. Then build the technical solution. Most teams skip this because it's slow and political and doesn't produce a demo. That's exactly why the pilots stall.
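To make that concrete, here's a minimal sketch of what the claims example above could look like once the step is redesigned: instead of asking the model for a free-text paragraph, the integration asks for the structured fields the downstream form already expects, and validates them before anything reaches an analyst. The field names, schema, and stubbed model output below are hypothetical, purely for illustration, not taken from any real claims system.

```python
# Illustrative sketch only: field names and schema are hypothetical.
import json
from dataclasses import dataclass

@dataclass
class ClaimSummary:
    claim_id: str
    incident_date: str
    claim_type: str
    estimated_cost: float
    summary: str  # short free text kept for the human reviewer

REQUIRED_FIELDS = ["claim_id", "incident_date", "claim_type", "estimated_cost", "summary"]

def parse_model_output(raw: str) -> ClaimSummary:
    """Validate the model's JSON against the form the downstream system expects.
    If a field is missing, fail loudly so the case routes to manual handling
    instead of producing a half-filled form someone has to retype."""
    data = json.loads(raw)
    missing = [f for f in REQUIRED_FIELDS if f not in data]
    if missing:
        raise ValueError(f"Model output missing fields: {missing}")
    return ClaimSummary(
        claim_id=str(data["claim_id"]),
        incident_date=str(data["incident_date"]),
        claim_type=str(data["claim_type"]),
        estimated_cost=float(data["estimated_cost"]),
        summary=str(data["summary"]),
    )

# Stubbed response, standing in for whatever model the team actually uses.
raw_output = '{"claim_id": "C-1042", "incident_date": "2024-08-14", "claim_type": "water damage", "estimated_cost": 5200, "summary": "Burst pipe in kitchen; plumber report attached."}'

record = parse_model_output(raw_output)
print(record)  # ready to post to the claims system, no retyping step
```

The point isn't the code. The point is that the output contract matches the downstream system, and that's a workflow decision made before the build, not a model decision made after it.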
Reason three: the people who need to use it weren't trained to use it
The third pattern is the one I see most often, because it's the one we get called in to fix.
A bank rolls out Microsoft Copilot to 5,000 staff. Six months later, licence utilisation is sitting around 20%. The IT team is confused. The vendor is confused. The CFO is asking why they're paying for 5,000 licences when only 1,000 people are using them.
The answer is almost always the same. The rollout came with a thirty-minute onboarding video and a SharePoint page of "tips and tricks." Nobody actually taught the staff how to integrate the tool into their daily work. There was no role-specific training for the legal team versus the marketing team versus the operations team. There was no follow-up coaching when people hit walls. The training was theatre.
This is one of the most common AI adoption barriers in large organisations and it's entirely preventable. Adults learn capability through deliberate practice in their actual work context, not through generic eLearning. If your rollout plan is "send everyone a video," your utilisation will be 20% and your pilot will stall. We've written more about why off-the-shelf training rarely sticks for enterprise rollouts.
The three things that fix it
So what do you do about it? Three things, in this order.
1. Anchor every pilot to a named outcome with a named owner. Before any technical work starts, the pilot needs an executive sponsor whose KPI moves when it works. Not a sponsor who "supports" it. A sponsor whose number changes. If you can't find one, you don't have a pilot; you have a science experiment. That's fine, but call it what it is and budget accordingly.
2. Redesign the workflow before you pick the tool. Map the current state. Identify the step where AI actually changes the work. Redesign the surrounding process, including inputs, handoffs, approvals, exceptions, and governance. Only then choose the technology. This is boring and slow and worth every hour. We cover the discipline in more depth in our workflow design guidance, but the principle is simple: tools amplify workflows. Bad workflow plus good tool equals bad outcome at higher cost.
3. Train people in their actual job context. Not generic AI literacy. Not vendor onboarding videos. Role-specific, scenario-based training that uses the team's real data, real tasks, and real pain points. Followed by coaching, not a one-and-done event. This is what we build in our custom AI training programs and what separates the rollouts that hit 70% utilisation from the ones stuck at 20%.
None of this is exotic. None of it requires a new model or a new platform. It requires the buyer, usually a CIO or CDO or head of transformation, to resist the pull of the demo and do the unglamorous work of scoping, designing, and training.
The pattern is the point
If you've run AI pilots that stalled, the temptation is to blame the model, the vendor, or the platform. Usually it's none of those. The pilot stalled because it had no business owner, no redesigned workflow, and no real training plan for the people meant to use it. Fix those three things and the next pilot won't stall. Skip them and you'll be running pilot number twelve next quarter, wondering why eleven didn't make it.
The companies getting AI to production aren't smarter or better-funded. They're just doing the boring parts other people skip.
Ijan Kruizinga
Co-founder of Better People. 20+ years across technology and marketing leadership. Previously CEO of Crucial, CEO/COO of OMG and Jaywing.