The five roles every enterprise AI initiative actually needs (most are missing two)

Ijan Kruizinga

Walk into any stalled enterprise AI program and you'll usually find the same line-up. A senior sponsor who pitched it to the board. A data scientist or two. A vendor on retainer. Maybe a project manager pulled from another team. The technology is fine. The use case is sensible. And nothing is moving.

The problem is almost never the model. It's the team. Most enterprise AI initiatives are missing two of the five roles they actually need, and the gap is predictable enough that you can name it before you walk into the room.

Why the standard line-up fails

The default enterprise AI team is built around a faulty assumption: that AI is a technology project. So the staffing follows the IT project template. Sponsor, technical lead, builders, PM. Ship it.

But AI isn't an IT project. It's a workflow change wrapped in a probabilistic system that touches data, risk, people, and trust. A traditional IT project ships software that does what it's told. An AI project ships software that does roughly what it's told, most of the time, in ways that need to be measured, governed, and continuously improved. That's a different shape of work, and it needs a different shape of team.

The MIT NANDA report on enterprise AI found that 95% of generative AI pilots are failing to deliver measurable P&L impact. The reason isn't model quality. It's the gap between the pilot and the workflow it's supposed to change. That gap is staffed by roles most teams forget to fund.

Here are the five.

1. The executive sponsor (almost always present)

The person with the budget, the political cover, and the willingness to push when things get hard. This role is rarely missing because no one runs an AI project without a sponsor. The risk isn't absence; it's drift. Sponsors who treat AI as a quarterly status update rather than a workflow they're personally invested in tend to lose the program to the first competing priority.

A good sponsor knows the use case well enough to argue for it in a room without the technical team present. They've used the tool. They've sat with the people whose work is changing. If your sponsor can't do that, you have a sponsor in title only.

2. The technical lead (almost always present)

The person who owns the architecture, the model choices, the data pipelines, and the integration. In well-staffed programs this is a senior ML or platform engineer. In smaller ones it's often a contractor or the lead from your platform vendor.

This role is also rarely missing. The trap here is the opposite of the sponsor problem: the technical lead is so present that they end up making decisions outside their lane. Which use case to prioritise. How to handle change management. What "good" looks like for the business. Those aren't technical decisions, and good technical leads know it. Great ones refuse to make them alone.

3. The workflow owner (frequently missing)

This is the first of the two roles most teams skip. The workflow owner is the person whose team's work is being changed by the AI system. Not the sponsor. Not the technical lead. The actual operational leader whose people will use the tool every day, whose KPIs will move or not move, and whose process documentation needs rewriting.

Without this role, you get pilots that work in demo and die in production. The model performs beautifully on the test set, the integration is clean, the dashboard looks great, and then the team it was built for either ignores it or works around it. Why? Because no one with skin in the operational game was in the room when decisions got made.

The workflow owner brings the unglamorous knowledge: the exception cases, the regulatory carve-outs, the reason step seven looks redundant but isn't, the team member who'll quietly torpedo any tool that makes their job harder. This is the role that turns a pilot into a production system that actually changes how work gets done. Skip it and you're building software for an imaginary user.

4. The governance and risk lead (frequently missing)

The second role most teams skip. Sometimes it's pushed onto legal or compliance after the fact, which is worse than not staffing it at all: governance bolted on late shows up as a blocker rather than a partner.

A real governance lead is embedded from day one. They own the risk register, the model documentation, the data classification, the human-in-the-loop design, the audit trail, and the policy alignment. They translate between the technical team and the people who'll eventually have to defend this system to a regulator, a board, or a customer.

In Australia, with the Voluntary AI Safety Standard now setting expectations and APRA-regulated entities under CPS 230 scrutiny for operational risk including AI, the governance role isn't optional. It's the difference between an AI program that scales and one that gets paused after the first incident. We've written more about how this plays out in practice in our piece on AI risk management.

5. The capability lead (almost always missing)

This is the role no one staffs and almost everyone needs. The capability lead owns whether the people who use the AI system, and the people whose work sits alongside it, actually know how to use it. Not "got the email about the new tool." Actually fluent. Actually changing how they work.

Most programs assume capability will happen by osmosis. It doesn't. The technical team builds, the workflow team gets a training session, and three months later usage is at 12% and no one can explain why. That's a capability gap, and it's almost always under-resourced.

The capability lead designs the learning path, runs the enablement, measures fluency (not attendance), and feeds capability data back to the sponsor and workflow owner. They're the reason the system gets used. This is the role we end up partnering with most often, because the gap between "tool deployed" and "tool used well" is where most enterprise AI investment quietly evaporates.

What to do on Monday

Pull up your current AI initiative. List the five roles. For each, name the person. Not the team, the person.

If you can't name someone for workflow owner, capability lead, or governance lead, that's your next staffing decision. It doesn't have to be a full-time hire. In most mid-sized programs it's a portion of an existing person's time, with clear accountability and a seat at the steering meeting.

Then check the load. A part-time governance lead with 5% of their week is a fiction. A workflow owner who hasn't used the tool is a placeholder. Capability work that lives entirely with HR L&D, disconnected from the build team, will not produce fluent users.

The teams that ship AI into production aren't the ones with the best models. They're the ones that staffed all five roles before they started, and held each role to a real definition of done.

If you're building or rebuilding an enterprise AI team and want a second set of eyes on the structure, our AI implementation work starts with exactly this conversation.

Ijan Kruizinga

Co-founder of Better People. 20+ years across technology and marketing leadership. Previously CEO of Crucial, CEO/COO of OMG and Jaywing.

Ready to talk?

30-minute discovery call.