What "custom" should actually mean when a vendor pitches you AI training
The three things vendors call "custom" that aren't
Before we get to what custom should mean, it's worth naming the three things that get sold as custom but aren't.
Branded off-the-shelf. The vendor has a stock curriculum. They put your logo on it, change a few examples, and send an invoice. The course is identical to the one they ran for the bank down the road last quarter. This is the most common form of fake customisation in the market, and it's usually priced as if it were the real thing. We wrote about when off-the-shelf is actually the right call here, and there are real situations where it is. But you should pay off-the-shelf prices for it.
Modular Lego. The vendor has fifteen pre-built modules. You pick six. They call the result a custom program. It isn't. It's an order from a menu. The modules don't know about each other, the case studies don't connect, and the learner finishes with a survey of topics rather than a coherent capability.
Custom delivery, generic content. The vendor will run their stock course on the dates you want, in the rooms you want, with your CEO doing the welcome. The logistics are tailored. The training isn't.
If a proposal lands on your desk and what's actually being customised is the cover slide, the dates, and two case studies, you're being sold a re-skin. Push back or walk away.
What real customisation looks like
A genuinely custom AI training program is built from four things the vendor cannot know until they've done the work.
Your actual workflows. Not "marketing teams" in the abstract. The specific tools, handoffs, approval chains, and data sources your marketing team uses on a Tuesday afternoon. A custom program for your risk team should reference your risk taxonomy, your model governance process, and your real-world pain points. If the curriculum could be lifted and used at a competitor without changes, it isn't custom.
Your AI maturity baseline. A team that's been using Copilot for eighteen months needs different training than a team that just got licenses last week. Real bespoke training design starts with an honest assessment of where the learners actually are, not where the buyer hopes they are. We dig into how to do that baseline assessment in the AI literacy baseline article in this cluster.
Your organisational outcome. What is this training for? Higher Copilot license utilisation? Faster pilot-to-production timelines for the data team? Reduced shadow IT? Specific compliance or AI risk management outcomes? The curriculum should be reverse-engineered from the result you're paying for, not assembled forward from a list of topics the vendor happens to teach.
Your constraints. Regulated industry. Air-gapped environments. Limited training budget per head. A workforce split across timezones. A union agreement that limits training hours per quarter. These aren't side notes. They reshape what's possible and what's effective.
A program that's actually built around those four inputs takes longer to design, costs more to deliver, and produces results that off-the-shelf cannot. That's the trade.
The questions that expose fake customisation
If you want to test whether a vendor's "custom" claim is real, ask these in the first meeting:
Who is doing the curriculum design, and have they delivered a program in our industry before? If the answer involves a junior instructional designer and a stack of templates, you have your answer.
Can you show me a sample of a custom curriculum you built for another client, with the client's permission? Not a generic outline. The actual workshop plan, the actual exercises, the actual case studies. Vendors who do real custom work have these. Vendors who don't, don't.
What does your discovery process look like and how long does it take? Real customisation requires real discovery. If they're ready to start delivery in two weeks, the program isn't custom.
What changes between a program you'd run for us and one you'd run for a competitor? Make them get specific. "The case studies" is not an answer. "The case studies, the data examples, the tooling references, the governance overlay, the assessment rubric, and the post-training reinforcement" is closer.
What does the learner do differently on Monday morning? If they can't answer this in concrete behavioural terms, the program is theatre. We've written more about the questions to ask any AI training provider here.
A vendor who can answer all five with specifics is doing real custom work. A vendor who pivots to talking about their methodology, their platform, or their thought leadership is selling you something else.
The honest economics
Real custom AI training programs cost more than off-the-shelf because they take more time to design and require senior people to build. There's no way around this. A two-day workshop for fifty people might cost $25,000 off-the-shelf and $50,000–$80,000 custom. The custom version isn't twice as good. It's the difference between training that produces measurable capability change and training that produces engagement scores.
If the budget genuinely doesn't support custom, off-the-shelf delivered well is a better choice than custom delivered cheaply. What you should not pay for is custom pricing on off-the-shelf content. That's the worst of both worlds and it's depressingly common.
The buyer's job
You can't fully outsource the assessment. The vendors who do real custom work are doing it with you, not for you. That means you need to bring real information to the discovery: actual workflows, honest maturity baselines, the organisational outcome you're actually accountable for. If you can't articulate what success looks like, no amount of customisation will save the program. We work through this discovery process in detail in our custom programs and implementation work.
The next time a vendor pitches you a custom AI training program, count how many times they use the word "custom" and how many times they describe what's actually different about what they'd build for you. The ratio tells you everything.
Ijan Kruizinga
Co-founder of Better People. 20+ years across technology and marketing leadership. Previously CEO of Crucial, CEO/COO of OMG and Jaywing.