The AI readiness assessment: a 12-question diagnostic for enterprise teams


Ijan Kruizinga

Why most readiness assessments fail the people paying for them

The assessments coming out of large consultancies and platform vendors share a structural problem: they're optimised to sell the next phase of work. A vendor-led AI maturity assessment will almost always conclude that you need more of what the vendor sells. Platform vendors find platform gaps. Strategy houses find strategy gaps. Training providers, if they're honest, will tell you when training is not the answer, which is the part most providers skip.

The other failure mode is academic. Frameworks like Gartner's AI maturity model and the various "five-stage" pyramids floating around LinkedIn are useful for describing reality at altitude, but they don't help a CIO decide what to do on Monday. They tell you that you are at level two and should aspire to level four. They don't tell you which two pilots to kill, which capability to build first, or whether your data team can actually support the use case in your backlog.

A useful AI capability diagnostic does three things: it identifies the binding constraint, it produces a decision (not a score), and it can be done in a week, not a quarter.

The 12 questions that actually predict success

We've adapted these from work across enterprise, government, and large association clients. Score each one honestly. One point for a clear yes, half a point for "partially," zero for no. The total is less interesting than where the zeros cluster.

Strategy and use case

1. Can you name the three highest-value AI use cases in your organisation, with rough dollar estimates and the executive sponsor for each?

If the answer is "we have a list of forty ideas in a spreadsheet," you do not have prioritised use cases. You have a backlog of unfunded curiosity. Real readiness means three named bets with money and ownership attached.

2. For your top use case, can you describe the current workflow, the future workflow, and what specifically changes for the people doing the work?

This is the question that separates the organisations that will get a result from the ones running pilots forever. Workflow design before tool selection is the single highest-leverage practice we see, and most teams skip it because it's harder than buying licences.

3. Do you have a written rule for which decisions stay with humans and which can be automated or assisted?

Without this, every AI project relitigates the same governance debate from scratch. With it, you ship faster and your auditors stop calling.

Data and technical foundations

4. Can a non-engineer in your business find, access, and trust the top ten datasets relevant to your priority use cases within a day?

If your data lives in fifteen systems, three of which require a ticket and a prayer, your AI projects will fail at the integration layer regardless of which model you pick. This is not a glamorous finding. It is usually the binding constraint.

5. Do you have a documented pattern for moving an AI workload from pilot to production, including evaluation, monitoring, and rollback?

Most enterprises have built one pilot. Few have built the bridge from pilot to production. Without that pattern, your second, third, and tenth pilots will each reinvent the wheel.

6. Have you made an explicit decision about your primary AI platform, and can the people closest to delivery articulate why?

"We're multi-cloud" is sometimes a strategy and sometimes a euphemism for "we never decided." Choosing your AI platform is a real decision with real trade-offs, and ducking it costs you more than picking the wrong one.

People and capability

7. What percentage of your workforce has had hands-on, role-specific AI training in the last twelve months, not a generic awareness module?

Generic e-learning is not training. A 45-minute video on "what is generative AI" does not change what someone does on Monday. Role-specific, hands-on capability building is the only thing that moves the needle, and it's worth being honest about whether off-the-shelf or custom training is right for your context.

8. Do you have at least one named person accountable for AI capability, distinct from the person accountable for AI delivery?

These are different jobs. Delivery is about shipping the use case. Capability is about whether the workforce can sustain it after the consultants leave. Most organisations conflate the two and underinvest in capability as a result.

9. Can your top fifty managers identify a deepfake voice clone, write a workable prompt, and explain when not to use AI for a decision?

These are the three managerial baseline skills for 2025. If your managers can't do these, your governance frameworks are theatre. The Australian Signals Directorate has been increasingly clear that AI-related social engineering is now a frontline risk, not a future one.

Risk, governance, and measurement

10. Do you have an AI risk register that names specific risks tied to specific use cases, not generic categories like "bias" and "hallucination"?

Generic risk registers are compliance artefacts. Specific ones are management tools. The difference shows up the first time something goes wrong. We've written more about practical AI risk management for teams that want to skip the theatre.

11. Can you measure AI adoption and value at the team level, not just licence utilisation at the org level?

Licence utilisation tells you who logged in. It does not tell you whether anyone got anything done. The dashboards that matter to a CFO show capability change and outcome change, not seat counts.

12. Have you set a date by which you will either scale, sunset, or restructure each current AI initiative?

Initiatives without expiry dates become permanent residents. The single most useful governance practice we've seen is the standing kill-or-scale review every quarter.

What to do with your score

Add it up. The score itself is a curiosity. The pattern of zeros is the assessment.

Zeros clustered in strategy and use case (questions 1–3): Your problem is not technical. Stop scoping platforms and start scoping work. A two-day workflow design exercise on your top three use cases will change more than another month of vendor demos.

Zeros clustered in data and technical (questions 4–6): Your AI program is going to be bottlenecked by your data program. This is unglamorous and the right answer is to fix it before you commit to large-scale rollouts.

Zeros clustered in people and capability (questions 7–9): You have a training and change problem dressed up as a technology problem. Generic licences and generic e-learning will not solve it. This is the most common pattern we see across enterprise AI training in Australia.

Zeros clustered in risk and measurement (questions 10–12): You're probably moving fast and will hit a governance wall in the next two quarters. Better to install the brakes now than after an incident.
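If you want the tally to be something more repeatable than a whiteboard photo, the logic fits in a few lines. The sketch below is illustrative only: the category groupings mirror the four sections above, and the example answers are made up, not a real client.

```python
# Minimal sketch: tally the 12 answers and show where the zeros cluster.
# Scoring follows the article: 1.0 = yes, 0.5 = partially, 0.0 = no.
# The `answers` dict is an illustrative example, not real data.

CATEGORIES = {
    "Strategy and use case": [1, 2, 3],
    "Data and technical": [4, 5, 6],
    "People and capability": [7, 8, 9],
    "Risk and measurement": [10, 11, 12],
}

answers = {1: 1.0, 2: 0.5, 3: 0.0, 4: 0.0, 5: 0.0, 6: 0.5,
           7: 1.0, 8: 0.5, 9: 0.0, 10: 0.5, 11: 0.0, 12: 1.0}

total = sum(answers.values())

# Count the zeros per category; the cluster matters more than the total.
zeros_by_category = {
    name: sum(1 for q in questions if answers[q] == 0.0)
    for name, questions in CATEGORIES.items()
}
binding_constraint = max(zeros_by_category, key=zeros_by_category.get)

print(f"Total score: {total} / 12")
for name, zeros in zeros_by_category.items():
    print(f"  {name}: {zeros} zero(s)")
print(f"Zeros cluster in: {binding_constraint}")
```

Run on the example answers above, it reports a total of 5 and points at "Data and technical" as the cluster, which is exactly the kind of one-line output the exercise should produce: a constraint, not a colour.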

A real diagnostic produces a decision, not a colour. If your assessment came back amber and the recommended next step was "another assessment," you got a sales motion, not a diagnostic.

How to run this in your own organisation

Take the twelve questions to a room with your CIO, your CDO, your head of L&D, your head of risk, and the executive sponsor of your largest AI initiative. Score them together. Where you disagree on the score is more diagnostic than where you agree. The disagreements show you where your leadership team has different mental models of where the organisation actually is.

Do this in 90 minutes. Not 90 days. The point is a clear-eyed view of the binding constraint, not another deck.

If the room concludes that the binding constraint is capability, that's a conversation worth having with us. If it concludes the constraint is data architecture or governance, that's a conversation for somebody else, and we'll happily say so. The question that matters is not whether you're "AI ready." It's which one thing, fixed in the next 90 days, would make the next dollar of AI investment actually land.

Pick that thing. Fix it. Then run the twelve questions again.

Ijan Kruizinga

Co-founder of Better People. 20+ years across technology and marketing leadership. Previously CEO of Crucial, CEO/COO of OMG and Jaywing.

Ready to talk?

30-minute discovery call.