Designing AI training for non-technical teams (without dumbing it down)
The dumbing-down problem
Somewhere along the way, "non-technical" got conflated with "can't handle complexity." It's a category error. A finance controller manages multi-entity consolidations. A legal counsel reads 80-page contracts for a living. An HR business partner navigates industrial relations law. These people handle complexity all day. What they don't have is a software engineering background.
When training providers confuse those two things, you get the slop that's currently passing for business AI training: generic prompt libraries, "10 ChatGPT hacks," a tour of Copilot ribbons, and a closing slide about "the future of work." It treats the audience as if their job is to be impressed by AI rather than to use it competently inside a regulated, commercial, time-pressured environment.
The result is predictable. Engagement scores look fine because the session was painless. Six weeks later, licence utilisation is flat, the legal team is still copy-pasting clauses into ChatGPT against policy, and the CFO is asking why the AI investment isn't showing up anywhere.
What non-technical actually means (and doesn't)
Non-technical doesn't mean non-rigorous. It means the person isn't going to write Python, isn't going to deploy a model, isn't going to architect a retrieval-augmented generation (RAG) pipeline. Fine. They don't need to.
What they do need:
A working model of how the underlying technology actually behaves, including where it fails
Fluency with the specific tools their organisation has licensed
Judgement about when to trust output and when to verify
Awareness of the risk and governance boundaries that apply to their function
Confidence to redesign their own workflows, not just bolt AI onto the old ones
Notice what's not on that list. Tokenisation. Transformer architecture. The history of GPT. Vector databases. Anything with the word "embedding" in it. Most non-technical AI training fails not because it skips this material but because it includes diluted versions of it as theatre, then runs out of time for the things that actually matter.
Four design choices that hold the line
When we build custom programs for non-technical cohorts, we make four decisions early and stick to them.
1. Teach the mental model, not the mechanism.
People don't need to know how a large language model is trained. They need to know that it's a probabilistic next-token predictor, what that means for hallucinations, why it's confidently wrong sometimes, and why retrieval-grounded systems behave differently from raw chat. That's a 20-minute conversation, not a 90-minute lecture, and it changes how someone uses the tool for the rest of their career. Skip it and you're training button-pushers.
2. Anchor every concept in the participant's actual work.
A finance team's training uses real (sanitised) variance commentary, real board pack inputs, real reconciliation memos. A legal team's training uses redacted contracts from their own templates. The Australian Government's voluntary AI Safety Standard makes the same point in a different register: AI use needs to be grounded in actual context, not abstract examples. Generic prompts on generic problems produce generic capability. Nobody pays $25,000 for generic.
3. Build judgement, not just skill.
A prompt that produces a good first draft is a starting point. The harder, more valuable skill is knowing when the output is wrong, when it's plausible-but-misleading, when it's fine to send, and when it needs a human review step. That judgement comes from doing the work, getting feedback from someone who knows both the domain and the technology, and iterating. It does not come from watching a demo.
4. Design for workflow change, not tool adoption.
The point isn't that the HR team uses Copilot. The point is that the time-to-first-draft on a performance management letter goes from 40 minutes to 8 minutes, with a clear review step, and the team trusts the process. That's a workflow redesign question, not a tool training question. If your program doesn't end with each participant having redesigned at least one of their own workflows, it hasn't done its job. We've written more about this distinction in why most enterprise AI training fails.
What this looks like in practice
A typical non-technical cohort program we'd design for a large Australian enterprise runs something like this.
Pre-work: a short capability assessment, plus participants bring three current tasks they want to redesign. Real tasks. Not hypotheticals.
Day one: mental model, tool fluency on the licensed stack (usually Microsoft Copilot or Google Gemini), governance and risk boundaries specific to the function. By the end of day one, every participant has used the tools on their own work, not on a sample dataset.
Between sessions: participants apply what they've learned to one of their three tasks, with async support.
Day two: workflow redesign. Each participant walks through their reworked process, gets peer and instructor critique, and leaves with a plan they can run on Monday. We also cover the things people only ask about once they've started using the tools properly: handling sensitive data, when to escalate, how to document AI-assisted work for audit.
That's it. No quiz. No certificate ceremony. The deliverable is the workflow change.
What the buyer should actually ask
If you're an L&D or function lead procuring this kind of program, the questions to ask the provider are not about content libraries or LMS integrations. They're:
Show me the workflow each participant will leave with. What does the artefact look like?
How will you tailor the examples to our function? Whose work product are you using?
What's the post-program support to make sure these workflows actually get adopted?
How will we measure whether anything changed eight weeks later?
If the answers are vague, you're buying theatre. There's a real choice to make between off-the-shelf and custom here, and for non-technical teams in regulated functions, off-the-shelf almost never gets you to capability change. It gets you to attendance.
The bar is higher than it used to be
Two years ago, "we ran an AI workshop" was a defensible answer to a board question. Now the adoption gap is the actual bottleneck, and boards know it. Non-technical teams are where most enterprise AI value will or won't show up, because that's where most of the work happens. Training those teams properly isn't a nice-to-have, and it isn't something that survives being dumbed down.
The next program your finance, legal, HR, or operations team sits through should leave them better at their actual jobs. If it doesn't, you've bought the wrong thing.
Ijan Kruizinga
Co-founder of Better People. 20+ years across technology and marketing leadership. Previously CEO of Crucial, CEO/COO of OMG and Jaywing.