Off-the-shelf vs. custom AI training: when each one is the right call
A head of L&D at a large Australian insurer told us last quarter she'd spent close to six figures on a generic AI training subscription — and couldn't point to a single thing her teams were doing differently as a result.
The honest case for off-the-shelf
Generic AI training gets a bad rap, mostly from people who sell custom programs. So let's be fair.
Off-the-shelf works when three conditions hold:
The capability you're building is broadly applicable (basic prompt literacy, what an LLM is, how to spot a hallucination)
The learner population is large and diverse enough that custom design wouldn't pay back
The business outcome is awareness, not behaviour change in a specific workflow
If you're rolling out Microsoft Copilot to 5,000 knowledge workers and you need everyone to understand the tool exists, what it can do, and where the obvious risks are, a well-built off-the-shelf course is the right call. You don't need bespoke video about your specific procurement process to teach someone that Copilot can summarise a meeting.
Coursera, LinkedIn Learning, and the major vendor academies (Microsoft Learn, Google Cloud Skills Boost, Databricks Academy) all do a credible job at this layer. They're cheap per seat, they're maintained, and they give you a defensible baseline.
That's the ceiling, though. Off-the-shelf is good at building awareness across a wide population. It is not good at changing how a specific team does a specific job.
Where generic AI training quietly fails
The failure mode is consistent and predictable. We see it most often in three places.
Role-specific application. A credit risk analyst doesn't need to know "AI can help with analysis." She needs to know how to use Copilot inside her actual workflow, with her actual data, against her actual policy constraints. Generic training stops at the front door.
Regulated context. APRA-regulated firms, public sector, health, anything touching personal data: the difference between "AI is powerful" and "here is what you can and cannot do under our policy" is the entire training. Off-the-shelf can't teach your policy, because it doesn't know your policy.
Behaviour change at the team level. Gartner predicts at least 30% of generative AI projects will be abandoned after proof-of-concept by the end of 2025. The reason is rarely the technology. It's that the team never absorbed it into their actual work. Generic training builds individual fluency. It doesn't build team workflow change.
If you bought generic AI training and got patchy results, you didn't buy the wrong thing. You bought the right thing for a job it can't do.
When custom is the right call
Bespoke AI training programs earn their cost when you need one or more of:
Workflow integration, where the training is built around how a specific role actually works (a data engineer's day, an underwriter's day, a customer service team's queue)
Policy and risk specificity, where what's allowed and not allowed is defined by your governance, your regulator, and your data classifications. This is also where AI risk management becomes part of the curriculum, not an afterthought
Tooling specificity, where the program teaches the actual stack the team uses, with the actual data they touch, not a sandbox that vaguely resembles it
Outcome accountability, where the buyer is on the hook for a measurable result (license utilisation up, time-to-pilot down, audit findings closed) and needs the training to drive it
The test is simple. If a senior leader walked into the room six months after training and asked "what's different now?", could you answer with something concrete? If yes, custom probably earned its keep. If the answer is "people are more aware of AI," off-the-shelf would have done the job for a fraction of the cost.
The hybrid model most enterprises actually need
In practice, the mature answer isn't either-or. It's a stack.
Layer 1: Baseline awareness. Off-the-shelf, low cost per seat, broad coverage. Everyone in the organisation gets this. The point is to lift the floor and create a shared vocabulary.
Layer 2: Role-specific fluency. Custom or heavily customised, delivered to defined cohorts (engineering, risk, marketing, ops). The point is to make AI usable inside real workflows. This is also where a focused AI workshop for business often does the heavy lifting.
Layer 3: Strategic and technical depth. Bespoke programs for senior technical and leadership populations. The point is to build the people who will design what the rest of the organisation uses. This is small-cohort, high-investment, and tied directly to delivery roadmaps.
Most enterprises we work with end up here. They don't replace their LinkedIn Learning subscription. They keep it for layer 1 and stop expecting it to do layers 2 and 3.
How to decide for your organisation
Three questions, in this order.
1. What outcome are you accountable for? If it's "we ran training," off-the-shelf is fine. If it's "the engineering team is shipping AI features by Q3," you need custom or hybrid.
2. Is the content the same for everyone, or does it change by role? Same for everyone, off-the-shelf. Different by role, custom for the layers where it matters.
3. Does your context change what's allowed? If your industry, regulator, or data classification changes the answer to "can I do this with AI," generic training will at best be useless and at worst be dangerous. Build the policy specifics in.
There's a fourth question that comes up in every procurement conversation: cost. Generic training is cheaper per seat. Custom is cheaper per outcome. If you've ever bought a five-figure subscription that produced no behaviour change, you already know the difference.
What to do this quarter
Map your population into the three layers. Be honest about which layer each cohort actually sits in. Then audit what you're currently buying against what each layer needs. Most organisations are over-spending on layer 1 (covering everyone with content they'll never use) and under-spending on layers 2 and 3 (where the actual value sits).
If you're heading into a procurement decision and want to pressure-test the pitch you're getting, the questions in how to choose an AI training provider will save you from buying custom when off-the-shelf would have done, and from buying off-the-shelf when only custom would have worked. And if you want to see how this plays out across an enterprise rollout, our broader take on enterprise AI training in Australia covers the design choices that follow.
The wrong question is "custom or off-the-shelf?" The right one is "which layer am I solving for, and what does that layer actually need?" Answer that, and the procurement decision answers itself.
Ijan Kruizinga
Co-founder of Better People. 20+ years across technology and marketing leadership. Previously CEO of Crucial, CEO/COO of OMG and Jaywing.