How to choose an AI training provider for your enterprise (the questions to ask before you sign)

Ijan Kruizinga

The AI training market has exploded, and most of it is theatre. If you're trying to work out how to choose an AI training provider for a serious enterprise rollout, the pitch deck won't help you. The questions you ask in the room will.

The market is full of providers. Most of them shouldn't be on your shortlist

Demand for enterprise AI training in Australia has outrun the supply of people who actually know how to deliver it. The Tech Council of Australia estimates generative AI could add $115 billion a year to the Australian economy by 2030, but only if the workforce can actually use it. That gap has pulled in three kinds of providers.

The first kind is the repackaged e-learning vendor. They had a compliance training catalogue last year. This year they bought a generative AI module from an LMS marketplace and stuck their logo on it. The content is generic, the trainer is a contractor reading slides, and the "custom" component is your logo on the cover page.

The second kind is the boutique consultancy that's good at one thing, usually strategy decks, and has decided training is an adjacent revenue line. They'll send a partner to the kickoff and a junior to the delivery. The framework will be elegant. Your team will not learn how to write a useful prompt.

The third kind, and there are very few of them, is the provider whose entire business is designing custom curriculum that changes how people work. That's the only kind worth shortlisting for anything serious.

The job of your evaluation process is to tell them apart.

The five questions that actually matter

Forget the RFP template. These five questions will get you further in thirty minutes than a hundred-page response will.

1. What will the learner be able to do on Monday morning that they couldn't do on Friday?

Watch what happens when you ask this. A weak provider will talk about "awareness," "confidence," "AI literacy," "cultural shift." A strong provider will name specific capabilities. The data engineer will be able to write a Spark job using GitHub Copilot in half the time. The fraud analyst will be able to identify three categories of deepfake artefact. The product manager will be able to scope an AI feature against a governance checklist.

If they can't name the capability, they haven't designed for it.

2. How do you tie this to a business outcome we already report on?

The right answer involves a metric the buyer's executive team already cares about: licence utilisation, time-to-pilot, audit findings closed, fraud loss prevented, support deflection rate. The wrong answer is "Net Promoter Score on the training" or "completion rates." We've written more about measuring training ROI; the difference matters. Engagement scores are a hygiene metric. They tell you whether people enjoyed the day. They do not tell you whether the organisation got a return.

3. Who is actually delivering this, and what have they built?

Ask for the CV of the person who will be in the room. Not the sales lead. Not the "learning architect." The trainer. If they haven't shipped production AI systems, written code that runs in real environments, or led real implementations, your senior engineers will eat them alive in the first hour. We've seen it happen. Once you lose the room, you don't get it back.

4. Show me a course you built for someone else, and tell me what you'd change.

This question separates the providers who are reflective practitioners from the ones who are running a script. A good provider has opinions about what worked and what didn't in their last engagement. They'll tell you the labs that fell flat, the assessments they redesigned, the modules they cut after cohort one. A weak provider will show you a polished case study and pretend everything went perfectly.

5. What happens if our team doesn't change how they work?

This is the one most providers fumble. The honest answer is that training alone doesn't change behaviour. Workflow, tooling, manager expectations, and incentives do. A provider who acknowledges this and tells you what wraps around the training (manager enablement, implementation support, measurement cadence) is taking the problem seriously. A provider who insists their workshop alone will transform your culture is selling you something they can't deliver.

The enterprise AI training checklist (the boring bits that decide everything)

Once you've narrowed the shortlist on substance, you still need to verify the operational fundamentals. Treat this as your AI training vendor evaluation baseline.

  • Curriculum authorship. Who wrote the material, when, and against what version of the underlying tools? AI tooling moves monthly. A course written eighteen months ago against GPT-3.5 is not fit for purpose.

  • Customisation depth. Does "custom" mean a pre-built module with your logo, or does it mean labs built against your data, your stack, your workflows? Read our piece on what custom should actually mean before you accept any provider's definition.

  • Vendor authorisation. If you're training on Databricks, Microsoft, or Google environments, is the provider authorised by the vendor to deliver enterprise training on that platform? Better People is, for what it's worth, the only Databricks training partner globally authorised to design custom curriculum for enterprise customers. That status matters because it means the curriculum is reviewed against the actual product roadmap.

  • Risk and governance coverage. Does the program cover AI risk management, data handling, prompt injection, hallucination management, and the regulatory environment your team operates in? If not, you're building fluency without guardrails.

  • Assessment design. How do they measure capability change? A quiz at the end of a workshop is not assessment. Pre- and post-program capability tests, applied projects, manager-observed behaviour change: that's assessment.

  • Scaling model. Can they deliver to 50 people? 500? 5,000? Do they have a train-the-trainer pathway, or do they need to fly the same person to every session? The answer shapes your unit economics.

  • References you can actually call. Not logos on a slide. Phone numbers of L&D leaders who ran the program. If a provider won't connect you to a reference, that's the answer.

The red flags that should end the conversation

Some signals are non-negotiable. If you see them, walk.

  • A provider who can't tell you who built the curriculum.

  • A trainer who has never written production code but is teaching engineers.

  • Pricing that doesn't change when the scope changes.

  • A "methodology" that's actually a slide template.

  • Case studies with no quantified outcome.

  • Reluctance to put real assessment in front of you.

  • A pitch that's heavier on the founder's LinkedIn following than on the work.

You're not buying content. You're buying capability change in your workforce. Anyone who doesn't understand that distinction shouldn't be on the shortlist.

Pick the provider who'd rather lose the deal than oversell

The best signal in any evaluation is the provider who pushes back. Who tells you your timeline is wrong. Who says half your cohort is in the wrong workshop. Who recommends starting with twenty people instead of two hundred. Who tells you the problem you've described isn't a training problem.

That's the provider who's optimising for your outcome, not their invoice. Everyone else is selling you a deck.

If you want to see how we approach this in practice, our custom programs and workshops pages lay out the design choices we make and why. The questions above will work whether you're evaluating us or anyone else. Use them.

Ijan Kruizinga

Co-founder of Better People. 20+ years across technology and marketing leadership. Previously CEO of Crucial, CEO/COO of OMG and Jaywing.

Ready to talk?

Book a 30-minute discovery call.