Train-the-trainer vs. external delivery: how to scale AI fluency without losing quality

Ijan Kruizinga

Why the model is so attractive (and so misleading on paper)

Train-the-trainer makes sense in plenty of domains. Compliance refreshers. Product knowledge. Process changes. Anything where the content is stable, the answers are knowable in advance, and the trainer's job is mostly to deliver, not to think.

AI fluency is none of those things.

The content moves every quarter. Copilot's capabilities in November are not what they were in May. Learners ask questions the deck didn't anticipate ("can I use this for client emails?", "what about our customer data?", "is this hallucinating?"). The trainer's value isn't in delivering slides. It's in handling the live, unscripted, often risk-adjacent questions that come up the moment a real human starts using a real tool on a real piece of work.

That's where most internal champion programs break. The champions are fluent enough to run the deck. They are not fluent enough to handle the room.

When train-the-trainer AI actually works

It does work, in specific conditions. We've helped clients build internal champion networks that have held up well past the initial rollout. The pattern looks like this:

The champions are selected for capability, not availability. The biggest mistake is asking for volunteers and getting whoever has slack in their calendar. The right champions are people who are already using AI tools heavily in their actual work, who teach naturally, and who have the political standing in their team to be taken seriously. If you can't name 30 people across your organisation who fit that description, you don't have a champion network. You have a list.

The scope is narrow and stable. Train-the-trainer works for "how to use Copilot for the eight tasks our finance team does every week." It does not work for "AI literacy across the enterprise." The narrower and more concrete the scope, the more durable the cascade.

External experts stay in the loop. The strongest programs we've seen treat champions as the front line, not the whole army. Champions handle the everyday questions. A monthly office hour with an external specialist handles the hard ones. Champions get refreshed every quarter as the tools evolve. Without that ongoing oxygen, the champion network suffocates within six months.

There's a real measurement system. Not "did the session happen." Actual capability change. Are the champions' teams shipping more work, faster, with AI? If you can't measure that, you can't tell whether the cascade is working or just running. We've written more on this in our piece on measuring ROI on enterprise AI training.

When external delivery is non-negotiable

There are situations where train-the-trainer is the wrong tool, and forcing it costs more than doing it properly the first time.

Technical depth. If you're training data engineers on Databricks, ML engineers on production deployment, or developers on agentic systems, internal champions almost never have the depth to teach that material. The Australian Government's AI Technical Standard sets a bar for production AI systems that most internal champions cannot teach to.

High-risk content. AI scam awareness, deepfake detection, prompt injection, data leakage. The cost of getting this wrong is measured in fraud incidents and regulator letters. The ACSC's guidance on AI threats makes clear how fast this space is moving. You want a specialist who lives in this every day, not a champion running a deck from last quarter.

Senior leadership cohorts. Executives won't sit through training delivered by a colleague two levels down, no matter how capable. They'll politely cancel and the program will quietly die. External delivery here isn't about capability; it's about status dynamics. Get over it and book the specialist.

The pilot phase. When you're rolling out something new, the first three to five cohorts should be delivered by experts. That's where you discover what questions actually come up, what fails, what needs adjustment. Champions can take over from cohort six, with a curriculum that's been pressure-tested in the real environment. Skipping this step is the single most common reason rollouts fail.

The hybrid model that actually scales

For most enterprises in the 2,000 to 20,000 staff range, the answer isn't choosing between train-the-trainer and external delivery. It's designing a layered model where each layer does what it's good at.

A pattern that works:

  • External specialists design the curriculum, deliver the first cohorts, and run quarterly office hours. They also deliver any high-risk or technical content directly.

  • Internal champions handle ongoing delivery for the broad workforce on stable content, run team-level office hours, and act as the first line for everyday questions.

  • L&D owns the operating system: how champions are selected, refreshed, supported, measured, and replaced when they move on.

This is more expensive than pure train-the-trainer and cheaper than full external delivery. More importantly, it actually works. The champions stay sharp because they have access to specialists. The specialists stay efficient because they're not doing every cohort. The workforce gets training that doesn't decay the moment the model updates.

We get into how this maps to budget in our piece on off-the-shelf vs. custom AI training, and we cover the procurement questions to ask in how to choose an AI training provider.

What to do on Monday

If you're staring at a rollout right now, three questions will tell you whether your model holds up:

  1. Can your champions answer the hardest question a real user is likely to ask in the first month? If not, your scope is too broad or your selection is wrong.

  2. Is there a feedback loop from champion sessions back to whoever owns the curriculum? If champions are teaching content nobody updates, the program has a six-month half-life.

  3. Are you measuring capability change in the workforce, not session attendance? If your dashboard shows completion rates and satisfaction scores, you're measuring theatre.

Train-the-trainer AI is not cheaper than external delivery. It's a different cost shape, traded for different risks. Get the design right and it scales. Get it wrong and you'll spend the saved budget twice over fixing what the cascade broke. The organisations that make it work treat their champions as a system, not a line item, and they don't try to outsource expertise they haven't built yet.

Ijan Kruizinga

Co-founder of Better People. 20+ years across technology and marketing leadership. Previously CEO of Crucial, CEO/COO of OMG and Jaywing.
