The case for in-person AI training in a remote-first world

Ijan Kruizinga

Remote-first works for most things. AI training isn't most things.

The default for corporate L&D in Australia is now remote or hybrid, and rightly so for a lot of content. Compliance modules, leadership theory, product updates: there's no good reason to fly people in for those. Asynchronous video and a knowledge check do the job.

AI training is different, and the difference matters. Here's why.

When someone learns Excel or Salesforce, the tool behaves predictably. You click, it does the thing. When someone learns to use Copilot, Gemini, or a Databricks AI agent, the tool is non-deterministic. The same prompt produces different outputs. Skill is built through hundreds of small iterations, watching what works, adjusting, trying again. That iteration loop collapses when learners can't see each other's screens, hear each other's frustrations, and steal each other's tricks.

The Australian Government's AI in Government Taskforce evaluation of the Microsoft 365 Copilot trial found that the people who got the most value were those who shared techniques with colleagues and had support to experiment. Remote training makes both of those harder, not easier.

What in-person actually does that remote can't

Three things, specifically.

It surfaces the stuck. In a room of twenty people, a facilitator walks the floor and spots the four who are quietly lost. On Zoom, those four are invisible. They have their cameras off, they're polite, they don't interrupt, and they finish the session no better off than when they started. The completion rate looks the same. The capability change is not.

It accelerates peer learning. The most valuable moments in our workshops are usually not the ones we plan. They're when someone next to you says "wait, how did you get it to do that?" and you spend three minutes showing them. Multiply that by twenty people across a six-hour day and you have a compounding effect that no breakout room reproduces. We've watched cohorts walk out with shared prompt libraries, shared shortcuts, and shared confidence that took five minutes of corridor conversation to create.

It forces presence. I'm not going to pretend this is a small thing. When someone is in a room with their colleagues and a facilitator, they are not also clearing their inbox, joining a parallel meeting, or half-watching the cricket. The opportunity cost of attending is higher, which means the focus is higher, which means the learning is deeper. The economics of in-person training look worse on paper and better in outcomes, which is why we keep recommending it for the programs that matter most.

When in-person is worth the cost

In-person AI training is more expensive. Travel, venue, catering, time off the floor: it adds up. So we don't recommend it for everything. The question is when the extra cost is justified.

It's justified when:

  • The cohort is senior or technical. Engineers, data scientists, and senior leaders need to push the tools hard, and they need to do it next to peers who'll challenge them. Remote settings flatten the seniority signal that makes these rooms productive.

  • The work involves real organisational data. When a face-to-face AI workshop is built around the team's actual workflows, screens, and edge cases, the discussion gets specific in a way that generic remote content can't match.

  • The goal is behaviour change, not awareness. A one-hour all-staff briefing on AI risk can be remote. A two-day program designed to change how a finance team builds reports needs the room.

  • You're kicking off a multi-cohort rollout. The first cohort sets the tone for the next ten. Get them in a room. Build the energy. Capture the patterns. Then scale with hybrid or remote delivery for subsequent waves.

It's not justified when the content is genuinely introductory, the audience is geographically spread across the country with no natural hub, or the budget reality means choosing between in-person for some and nothing for the rest. In those cases, we design hybrid programs: in-person for the kickoff and the deep technical sessions, remote for the reinforcement and follow-up coaching. That's covered in more detail in our guide on how to choose an AI training provider for your enterprise.

What good onsite AI training actually looks like

A few things we've learned from running enterprise AI training across Australia:

One facilitator per fifteen learners, maximum. Stretch the ratio any further and the over-the-shoulder coaching that justifies in-person delivery stops happening.

Real laptops, real accounts, real data. If the learners can't use the tools they'll use on Monday with the data they'll touch on Monday, you've spent in-person money on a remote-quality outcome.

Hands-on time at sixty percent minimum. If more than forty percent of the agenda is presentation, you're delivering a webinar in a meeting room. Charge accordingly and don't fly anyone in.

A clear "what changes Monday" artifact. Every learner leaves with a written commitment to one or two workflow changes they'll make in the next fortnight, and their manager gets a copy. Without this, the room dissolves and so does the behaviour change.

Manager presence at start and end. Not for the whole day. But the manager who opens the session and closes it signals that this matters, and that signal is worth more than any module.

The choice between in-person and remote isn't ideological. It's a design choice tied to what you're trying to achieve, who's in the room, and what changes when they leave. Get that calculation right and the AI training delivery format becomes the cheapest part of the decision. Get it wrong and you'll spend the next budget cycle wondering why the adoption numbers never moved.

If you're scoping a program now and want a second opinion on whether in-person, remote, or hybrid is the right call for your cohort, get in touch. We'll tell you straight.

Ijan Kruizinga

Co-founder of Better People. 20+ years across technology and marketing leadership. Previously CEO of Crucial, CEO/COO of OMG and Jaywing.
