AI governance for mid-sized enterprises: a practical framework that won't slow you down

Last month a head of risk at a mid-sized Australian financial services firm told me their AI governance committee had been meeting fortnightly for nine months and had approved exactly two use cases.

The two failure modes

Most mid-sized enterprises end up in one of two ditches.

The first is the paper governance ditch. Someone writes a 40-page AI policy, circulates it to legal, and declares the problem solved. No one reads it. No one operationalises it. When the auditors ask how it is enforced, the answer is a shrug and a SharePoint link. The policy exists; the controls do not.

The second is the veto governance ditch. A committee is formed with representatives from legal, risk, security, privacy, and IT. Every AI initiative needs committee approval. The committee meets fortnightly, asks for more information, and defers decisions. Six months in, the business has stopped bringing things to the committee, because going around it is faster. The committee thinks it is governing. It is not. It is being routed around.

Both failure modes share a root cause: governance designed without reference to how work actually gets done. The fix is not more policy or more committee. It is a smaller number of clearer controls, applied at the right point in the workflow, with the right people accountable.

What a practical framework looks like

A working AI governance framework for a mid-sized enterprise has six components. Not twelve. Not three. Six.

1. A risk-tiered use case inventory. Every AI use case in the organisation gets logged with three pieces of information: what it does, what data it touches, and what its tier is (sketched after this list). Tier 1 is internal productivity (drafting emails, summarising meetings) with no customer data. Tier 2 is internal operational use that touches sensitive data (HR analytics, financial modelling). Tier 3 is anything customer-facing or that influences decisions about people (credit, hiring, claims, health). Tiers determine the controls. The Australian Government's Voluntary AI Safety Standard uses a similar risk-based logic and is a reasonable reference point.

2. A pre-approved tool list. Most staff want to use AI for Tier 1 work. Give them a sanctioned list of tools and accounts (Microsoft Copilot, Google Gemini, ChatGPT Enterprise, Claude for Work) with enterprise data protection turned on. The vast majority of "AI governance" demand is just people wanting to use AI safely without filling in a form. Solve that and the committee gets its time back.

3. A lightweight intake for Tier 2 and Tier 3. A one-page intake form with eight to ten questions (an illustrative set appears after this list). What is the use case? What data does it use? Who is accountable? What is the failure mode? What happens if the model is wrong 5% of the time? Most submissions get a decision in a week, not six. The form is the control, not the committee.

4. Model and vendor due diligence. For any third-party AI capability, you need standard answers on data residency, training data usage, security certifications, and model provenance. We covered the procurement angle in what good AI procurement looks like, and the questions there map directly onto governance.

5. Human-in-the-loop requirements by tier. Tier 1 needs none. Tier 2 needs documented review by a named owner. Tier 3 needs human decision authority on every output that affects a person, plus logging, plus the ability to explain the decision (the mapping is sketched after this list). This is also where Australia's emerging regulatory expectations are heading, particularly for high-risk uses.

6. Monitoring and incident response. Someone owns the question "what happens when this AI system gets something wrong?" before it gets something wrong. Logs exist. An escalation path exists. A kill switch exists (sketched after this list). This is the part most frameworks forget, and it is the part regulators will care about most.

That is the framework. Six controls. You can fit it on a page.
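
To make component 1 concrete, here is a minimal sketch of what an inventory record and tiering rule could look like. The field names and the rule itself are illustrative, not a prescribed schema; the point is that three facts per use case are enough to drive the controls.

```python
from dataclasses import dataclass
from enum import IntEnum


class Tier(IntEnum):
    INTERNAL_PRODUCTIVITY = 1  # drafting, summarising; no customer data
    INTERNAL_SENSITIVE = 2     # internal use touching sensitive data
    CUSTOMER_IMPACTING = 3     # customer-facing, or decisions about people


@dataclass
class UseCase:
    """One row in the inventory: what it does, what data it touches, its tier."""
    name: str
    description: str
    data_touched: list[str]       # e.g. ["meeting notes"] or ["HR records"]
    affects_people: bool          # credit, hiring, claims, health
    touches_sensitive_data: bool


def assign_tier(uc: UseCase) -> Tier:
    """Illustrative rule: impact on people trumps data sensitivity."""
    if uc.affects_people:
        return Tier.CUSTOMER_IMPACTING
    if uc.touches_sensitive_data:
        return Tier.INTERNAL_SENSITIVE
    return Tier.INTERNAL_PRODUCTIVITY
```

The tier is derived, not self-declared: because the rule is written down, two reviewers will tier the same case the same way.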
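
Component 3's intake can be as plain as a fixed question list. The wording below is illustrative; the discipline is keeping it to one page.

```python
# An illustrative one-page intake for Tier 2 and Tier 3 use cases.
INTAKE_QUESTIONS = [
    "What is the use case, in one or two sentences?",
    "What data does it use, and where does that data live?",
    "Who is the accountable owner (a named person, not a team)?",
    "Is any output customer-facing, or used in decisions about people?",
    "What is the failure mode if the model is wrong?",
    "What happens if the model is wrong 5% of the time?",
    "Which vendor or model is involved, and on what terms?",
    "How are outputs reviewed before anyone acts on them?",
    "How will you know if behaviour degrades in production?",
]
```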
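
Component 5 reduces to a small mapping from tier to minimum controls. A sketch, with tier numbers as above and the specific control values illustrative:

```python
# Illustrative minimum human-in-the-loop controls per tier.
HITL_CONTROLS = {
    1: {"review": None, "logging": False, "explainable": False},
    2: {"review": "documented review by a named owner",
        "logging": True, "explainable": False},
    3: {"review": "human decision authority on every output affecting a person",
        "logging": True, "explainable": True},
}
```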
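
And component 6, sketched as an operational shape rather than any particular stack; the names here are illustrative. The properties that matter are that every call is logged and that the kill switch is checked before the model is invoked, so disabling the system actually stops it.

```python
import logging

logger = logging.getLogger("ai_governance")


class KillSwitch:
    """Deliberately boring: one flag, one owner, checked on every call."""

    def __init__(self) -> None:
        self.enabled = True

    def disable(self, reason: str) -> None:
        self.enabled = False
        logger.critical("AI system disabled: %s", reason)  # escalation trigger


def call_model(prompt: str, switch: KillSwitch, model_fn) -> str:
    """Wrap every model call so logging and the kill switch cannot be skipped."""
    if not switch.enabled:
        raise RuntimeError("AI system disabled; escalate to the system owner")
    output = model_fn(prompt)  # model_fn stands in for the real client call
    logger.info("model call: %d chars in, %d chars out", len(prompt), len(output))
    return output
```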

Where most frameworks go wrong

The biggest mistake we see is treating AI governance as a separate stream from existing risk and compliance work. It is not. Your organisation already has data classification, vendor management, change management, and operational risk frameworks. AI governance should plug into them, not replace them.

The second mistake is putting governance entirely in the second line. Risk and compliance have a role, but the day-to-day controls have to live with the people building and using AI. If a data scientist cannot tell you what tier their use case is and what controls apply, the framework is not real to them. It is paperwork someone else does.

The third mistake is assuming a policy document is the deliverable. The policy is maybe 10% of the work. The other 90% is the operating model: who runs the intake, who tiers the cases, who approves Tier 3, who monitors production systems, who trains the workforce on what is allowed. Without that, the policy is a PDF.

This connects to a broader point we made in our guide to AI risk management: governance is a behaviour, not a document. The test is what people actually do when no one is watching.

What the workforce needs to know

A governance framework only works if the people doing the work understand it. In our experience running AI workshops for enterprise teams, the gap is rarely capability. It is clarity. Staff want to do the right thing. They just do not know what the right thing is.

At minimum, every employee needs to know:

  • Which AI tools they are allowed to use, and on which data

  • What "sensitive data" means in their context (it is not always obvious)

  • When they need to flag a use case for review, and how

  • What to do if they see something go wrong

This is a 90-minute workshop, not a 12-week program. It pays for itself the first time it stops someone pasting a client list into a public chatbot. We cover this kind of practical staff enablement in our enterprise AI training programs, and it is the cheapest, highest-ROI part of any governance rollout.

For technical teams building AI systems, the bar is higher. They need to understand model evaluation, prompt injection, data leakage, and the difference between a demo and a production-grade system. That is a different conversation, and usually a custom program rather than a workshop.

A 90-day rollout that actually works

If you are starting from scratch, here is the sequence we have seen succeed at mid-sized Australian enterprises.

Days 1 to 30. Inventory what is already happening. Survey the workforce. You will find more AI use than you expected, almost all of it Tier 1, almost all of it on personal accounts. Stand up a sanctioned tool with enterprise data protection. Communicate it. This single move retires 70% of your shadow AI risk in a month.

Days 31 to 60. Publish the tiering model and the one-page intake. Train a small group (legal, risk, security, IT, and one business representative) to triage submissions. Set a service level: Tier 1 same week, Tier 2 within ten business days, Tier 3 within four weeks with a clear path. Run the workforce briefing.

Days 61 to 90. Stand up the monitoring and incident process for Tier 2 and Tier 3 systems. Connect AI governance to your existing operational risk reporting. Brief the board with a one-page dashboard: number of use cases by tier, average decision time, incidents, training coverage.
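
The dashboard itself is four numbers computed from records you already hold by day 90. A minimal sketch, assuming illustrative record shapes:

```python
from collections import Counter
from statistics import mean


def board_dashboard(use_cases, decisions, incidents, staff_trained, headcount):
    """The one-page numbers: use cases by tier, decision time, incidents, training."""
    return {
        "use_cases_by_tier": Counter(uc["tier"] for uc in use_cases),
        "avg_decision_days": (
            mean(d["days_to_decision"] for d in decisions) if decisions else None
        ),
        "incidents": len(incidents),
        "training_coverage": staff_trained / headcount if headcount else 0.0,
    }
```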

At day 90 you have a working framework. Not perfect. Working. You can refine it for the next two years; you cannot refine something that does not exist.

The thing governance is actually for

The point of enterprise AI controls is not to prove you have controls. It is to let your organisation move faster, with confidence, on the use cases that matter. A good framework gives a project team a yes or no in a week, with a clear reason. A bad framework gives them silence for six months and a quiet workaround in the meantime.

If your governance is slowing your business down more than your competitors' is slowing theirs, you do not have better governance. You have worse governance, dressed up as caution. The mid-sized enterprises that will do well over the next three years are the ones that figure out the difference.

Ijan Kruizinga

Co-founder of Better People. 20+ years across technology and marketing leadership. Previously CEO of Crucial, CEO/COO of OMG and Jaywing.
