The 90-day plan: from "we need AI training" to first cohort delivered

The phone call usually goes like this. The CIO has just been in front of the board. The board asked what the organisation is doing about AI. The CIO said "we're rolling out training." The board said "great, by when?" And now someone, often the head of L&D, has roughly twelve weeks to make that real.

If that's where you are, this is for you: a 90-day AI training implementation plan that gets you from a vague mandate to a first cohort actually delivered, with the right people in the room learning the right things. Not a perfect program. A real one.

Why 90 days is the right window

Shorter than 90 days and you skip the diagnostic work that makes training stick. Longer and you lose executive sponsorship. The board moves on. Budget gets reallocated. The window closes.

Ninety days is also enough time to do this properly without it feeling rushed. Four weeks to scope and design, four weeks to build and pilot, four weeks to deliver the first cohort. That cadence holds up across the enterprise AI training programs we've delivered to banks, retailers, and government agencies in Australia. It's tight but achievable.

The trap is treating this as a procurement exercise. "We need AI training, let's get three quotes, pick the cheapest, schedule the workshops." That gets you a calendar full of sessions and zero capability change. The 90-day plan below assumes you actually want the cohort to be different on day 91 than they were on day 1.

Days 1-30: Diagnose, scope, design

The first month is not about content. It's about understanding what you're trying to change and who needs to change it.

Week 1: Define the result, not the activity. Sit down with the executive sponsor and answer one question: what does success look like in six months? More Copilot licenses being used? Fewer manual reports? Engineers shipping AI features faster? An audit-ready governance posture? The answer determines everything else. If your sponsor can't articulate it past "AI fluency," you have a sponsorship problem, not a training problem. Fix that first.

Week 2: Run a baseline assessment. You cannot design training without knowing where people are starting. We typically run a short diagnostic across the target population: current tool usage, confidence levels, real workflows where AI could help, and the specific blockers people are hitting. This is not a survey for the sake of it. It's the input that tells you whether you need a literacy program, a tools program, an engineering program, or something else entirely.

Week 3: Scope the cohort. Who's in the first cohort and why? Resist the urge to start with "everyone." Start with the group where capability change will produce the most visible result: a team where AI usage is already high but unstructured, a function with a clear productivity bottleneck, or a leadership cohort whose fluency unlocks decisions further down. Fifty to eighty people is a good size for a first cohort.

Week 4: Choose your provider model and lock the design. This is where you decide between off-the-shelf and custom, and whether to deliver internally or bring in a partner. If you're going external, the questions to ask before you sign matter more than the proposal deck. By the end of week 4, you should have a defined success metric, a baseline, a target cohort, a delivery model, and a one-page program design.

Days 31-60: Build, pilot, refine

Now you build. The mistake here is to disappear into content development for four weeks and emerge with a polished curriculum that nobody has tested.

Weeks 5-6: Build the spine, not the polish. Draft the session structure, the practical exercises, the assessment approach, and the supporting materials. Get the architecture right before you worry about slide design. The single biggest predictor of whether training transfers to the job is whether the exercises use the learner's real work, not generic case studies. If your design doesn't have learners bringing their own data, prompts, and workflows into the room, redesign it.

Week 7: Run a pilot with 8-12 people. Not a focus group. A real session, with real participants, ideally including one or two sceptics. Watch where they get stuck. Watch which exercises produce the moment of "oh, I can actually use this." Watch which ones produce polite confusion. The pilot is the most valuable hour of the entire 90 days.

Week 8: Refine and finalise. Take the pilot feedback and rewrite. Cut sessions that didn't land. Expand exercises that did. Lock the materials. Confirm logistics: rooms, tech, licences, pre-work, manager briefings. By the end of week 8, the program should be ready to run at scale.

A note on governance. If your program touches sensitive data or production systems, this is also when you align with risk and security. The AI risk management conversation is much easier when you bring it forward early than when it surfaces two days before delivery.

Days 61-90: Deliver the first cohort

Delivery is the visible part, but if you've done the first 60 days well, it's the easiest part.

Weeks 9-10: Run the cohort. Whether you're delivering in person, remote, or hybrid, the orchestration matters. Pre-work two days before each session. Manager briefings so people aren't pulled out mid-workshop. Real-time support during practical exercises. A single channel where learners can ask questions between sessions.

Week 11: Capture evidence of capability change. This is where most programs fall over. The post-training survey asks "how satisfied were you?" and gets 4.6 out of 5, and everyone declares victory. Wrong measure. You want evidence that people are doing something different at work. That means a structured follow-up: a workplace task they complete using what they learned, a manager check-in two weeks after delivery, a sample of artefacts (prompts, agents, analyses) the cohort has produced. If you can't show capability change, you ran a workshop, not a training program.

Week 12: Report, decide, scale. Bring the evidence back to your executive sponsor. Show what the cohort can now do that they couldn't before. Show what changed in the metric you defined in week 1. Then decide: roll out to the next cohort, refine the design, or expand into adjacent capability areas. The 90-day plan is not the program. It's the proof of concept that gets you the budget and mandate to do the next 12 months properly.

What gets cut, and what doesn't

In a 90-day window, you will be tempted to cut corners. Some are fine. Some are fatal.

Fine to cut: glossy production values, custom branding on every deck, a learning management system integration, a perfect competency framework, training every population at once.

Not fine to cut: the baseline diagnostic, the pilot, the use of real workplace tasks in exercises, the post-training evidence of capability change, the executive sponsor's clear statement of what success looks like.

The pattern we see in failed programs is almost always the inverse. People invest in production polish and skip the diagnostic. They build slick content and skip the pilot. They run a great-looking workshop and skip the follow-up. The cohort enjoys it and forgets it within a month.

What changes after day 90

If you do this well, three things are true on day 91.

First, you have a working program design that you've already pressure-tested with real learners. You're not guessing what works. Second, you have evidence of capability change in a defined cohort, which means you have something concrete to take to the next budget conversation. Third, you have an internal coalition: managers who saw their people get better, sponsors who got the result they asked for, and learners who can advocate for the next rollout.

That's what the 90-day AI training implementation plan is actually for. Not to tick a board-level box. To prove that capability change is possible in your organisation, in your context, with your people, and to earn the right to do it at scale.

The CIO will get asked the same question at the next board meeting. The answer should be specific.

Ijan Kruizinga

Co-founder of Better People. 20+ years across technology and marketing leadership. Previously CEO of Crucial, CEO/COO of OMG and Jaywing.
