Why most enterprise AI training fails (and the four design choices that fix it)

The training worked. The capability didn't.

Walk into any enterprise L&D function this year and you'll find AI training somewhere on the roadmap. Most of it is being delivered. Most of it is being completed. Very little of it is producing the capability change the business is paying for.

The reason isn't bad content. The content is mostly fine. Microsoft, Google, the major LMS vendors and a long tail of consultants have all produced perfectly competent AI training material. Slides are clean. Demos work. Instructors know their stuff.

The reason it fails is that "delivered training" and "changed capability" are two different products, and most enterprises are buying the first while reporting on it as if it were the second.

This matters because the cost of getting it wrong is high. AI licence spend is up across every organisation we work with. Boards are asking about productivity gains. CFOs are asking why utilisation rates are flat. And the L&D team is sitting on a stack of completion certificates that don't answer the question.

So what actually separates programs that produce capability from programs that produce completion certificates? In our experience designing custom programs for banks, telcos, retailers and government agencies, it comes down to four design choices. Get these right and the training works. Get them wrong and you've bought theatre.

Design choice 1: Train against real work, not curriculum

The default model of corporate training is curriculum-first. Someone designs a course, the course gets delivered, learners absorb the content, and theoretically they apply it later. This model has never worked particularly well, and it works especially badly for AI.

AI capability is not knowledge. It's a workflow change. Knowing what a large language model is doesn't help an analyst write a better prompt for the report they're working on right now. Knowing how Copilot works in theory doesn't help a procurement manager use it on the supplier review due Friday.

The programs that work are built around tasks the learner is already doing. A claims assessor learns to use AI on actual claims. A legal counsel learns to use AI on actual contracts. A marketing manager learns to use AI on the campaign brief sitting in their inbox. The training is the work, not a substitute for it.

This sounds obvious. It's also rare. Most off-the-shelf AI training is built around generic use cases ("summarise a document," "draft an email") because that's what scales as content. The catch is that generic use cases produce generic capability, which is to say, not much.

If you're commissioning a program, the first question is: what specific work will learners be doing during the session, and is it their actual work or a sanitised version of it? We've covered this in more depth in our piece on off-the-shelf vs. custom AI training.

Design choice 2: Tie every module to an organisational outcome

The second failure mode is training that has no destination. The business buys "AI training" without specifying what should be true after the training that wasn't true before.

This is the question we ask every prospective client before we'll quote a program: what changes for your organisation if this works? The answers that produce good programs sound like:

  • Copilot licence utilisation moves from 22% to 60% in the rolled-out divisions

  • Time from data request to first draft analysis drops by half in the analytics team

  • The number of staff who can independently build a working agent goes from 3 to 30

  • AI-related security incidents drop because staff stop pasting customer data into public models

The answers that produce bad programs sound like "we want our people to be AI-ready" or "we need to build AI literacy." Those aren't outcomes. They're vibes.

When the program is tied to a specific organisational metric, every design decision has a tiebreaker. Should we cover this topic? Only if it moves the metric. Should we add this exercise? Only if it builds the capability the metric depends on. Without that anchor, programs bloat into general-interest content and the result is what you'd expect: general interest, no result.

The Australian Government's Responsible AI Network has been clear that AI adoption in Australian organisations is being held back less by tools and more by the absence of organisational structures around them. Training that ignores the organisational layer is training that ignores most of the problem.

Design choice 3: Design for the bell curve, not the average

Most AI training is pitched at the imaginary average learner. In practice, no team looks like that. In any cohort of forty staff, you'll have five people who already use AI fluently, twenty who have tried it once or twice, and fifteen who haven't touched it.

If you train to the middle, the top five are bored, the bottom fifteen are lost, and the middle twenty learn something but not enough to change their work. The bell curve eats your average outcome.

The programs that work segment by capability before they design the content. We typically run a short diagnostic before any custom program: ten to fifteen minutes per learner, looking at current usage patterns, comfort with prompting, and the kinds of tasks they're trying to do. The output is three or four streams running in parallel, each pitched at where the learners actually are.

This is harder to deliver. It's also the difference between a program that lifts the whole team and a program that lifts no-one. If you're evaluating providers, the question to ask is how they assess incoming capability and whether their delivery model can flex to it. Most providers will quietly admit they don't and it can't. We've written more on this in how to choose an AI training provider.

Design choice 4: Plan the eight weeks after the training

Here's the design choice that almost no-one makes. The training itself (the workshop, the cohort, the in-person sessions) is maybe 30% of the capability outcome. The other 70% is what happens in the eight weeks after.

If a learner finishes a workshop on Friday and goes back to a Monday inbox with no support, no community of practice, no manager who's been briefed, no time allocated to apply what they learned, the capability decays fast. We've seen research from CSIRO confirming what every L&D leader already knows: skills that aren't applied within two weeks of training are mostly lost.

The programs that work bake in the post-training period. Coaching hours. Slack channels with active facilitation. Manager briefings so leaders know how to support new behaviours. Concrete first-week assignments that force the new capability into real work. A check-in at week four to surface where people are stuck.

This is also where most program budgets fall short. Buyers price the workshop and forget the wraparound. Then the workshop happens, the wraparound doesn't, and three months later the capability has evaporated.

If you only have budget for the workshop, our honest advice is to cut the cohort size and use the savings to fund the eight weeks that follow. A smaller, well-supported cohort beats a larger, abandoned one every time.

What this means for your next program

If you're scoping AI training for your organisation right now, the four questions to put to any provider, including us, are:

  1. How will learners practise on their actual work during the program, not a generic case study?

  2. What organisational metric will move if this program succeeds, and how will we measure it?

  3. How do you assess incoming capability and adapt delivery to the spread in the room?

  4. What happens in the eight weeks after the workshop, and is it included or extra?

If a provider can't answer all four with specifics, you're not buying capability. You're buying completion certificates.

The good news is that enterprise AI training doesn't have to fail. The four design choices above are not exotic. They just require treating training as a capability investment rather than an event. The organisations getting real results from AI right now are the ones who made that shift twelve months ago. The next twelve months will sort the rest.

Ready to talk?

30-minute discovery call.