Workflow design before tool selection: the rule that saves AI projects
A procurement lead at a large Australian insurer recently told us that her team had bought three AI tools in eighteen months. None of them were being used. When we asked what workflow each tool was supposed to improve, she paused. "We bought them because the vendor said they'd save us time." That was the entire design document.
This is the most common, and most expensive, mistake we see in enterprise AI implementation. The tool gets picked before anyone has mapped the work it's meant to change. Then the project stalls, the licences sit idle, and someone gets blamed.
The order most companies follow (and why it fails)
Here's the sequence we see again and again:
A vendor demo lands well in a leadership meeting
Procurement is asked to "evaluate options"
A tool is selected and licensed
Someone is told to "find use cases"
Pilots launch, struggle, and quietly die
The fatal step is number four. By the time you're hunting for use cases that fit a tool you've already bought, you've inverted the entire logic of process design. You're optimising the work to suit the software, instead of selecting software to suit the work.
The tool vendors love this order. It maximises licence revenue and minimises their accountability for outcomes. It also explains why so many enterprise AI pilots stall, and why the people who paid for the licences end up looking for somewhere to hide them in next year's budget.
What AI workflow design actually means
AI workflow design is the practice of mapping a real piece of work, end to end, before deciding what (if anything) AI should do inside it. It's not glamorous. It looks more like business analysis than data science. That's the point.
A proper workflow design answers four questions:
What is the work? Not the department, not the system. The actual sequence of decisions, handoffs, and outputs that produce a result a customer or colleague needs.
Where does it break? Where do people wait, redo, escalate, or guess?
What's the result the business is paying for? Faster cycle time, fewer errors, lower cost per transaction, higher conversion. Pick the metric before you pick the tool.
Where, specifically, could AI change the shape of the work? Not "where could we add AI." Where would a different capability (generation, classification, extraction, summarisation, retrieval) genuinely remove a step, shorten a loop, or improve a decision?
If you can't answer those four questions in plain English, you are not ready to evaluate tools. You're ready to do the workflow mapping you skipped.
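It can help to see what "answered in plain English" looks like once it's written down. Here is a minimal sketch of a workflow map as structured data. The workflow, step names, breakpoints, and metric below are all hypothetical placeholders, not a prescribed schema; the point is only that each of the four questions gets an explicit answer before any vendor enters the room.

```python
from dataclasses import dataclass, field

# A minimal, illustrative workflow map. Every name and value below is a
# hypothetical placeholder -- the structure just forces each of the four
# questions to be answered in writing before tool evaluation starts.

@dataclass
class Step:
    name: str                        # what the work is at this step
    breaks: str | None = None        # where people wait, redo, escalate, or guess
    ai_candidate: str | None = None  # capability that could change the step's
                                     # shape (generation, classification,
                                     # extraction, summarisation, retrieval)

@dataclass
class WorkflowMap:
    name: str
    target_metric: str               # the result the business is paying for
    steps: list[Step] = field(default_factory=list)

invoice_intake = WorkflowMap(
    name="supplier invoice intake",
    target_metric="cost per invoice processed",
    steps=[
        Step("invoices arrive by email in mixed formats",
             breaks="manual re-keying into the finance system",
             ai_candidate="extraction"),
        Step("finance officer matches invoice to purchase order",
             breaks="exceptions escalate by email and wait days"),
        Step("approver signs off and payment is scheduled"),
    ],
)
```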
A short example: the claims triage case
Back to the insurer. When we walked the actual claims intake process with the operations team, the bottleneck wasn't where the executives thought it was. They'd assumed claim assessment was the slow step, which is what the vendor had pitched against. The real bottleneck was the first 90 minutes after a claim arrived: documents in five formats, half of them photos, sitting in an inbox waiting for a human to sort them into the right queue.
That's a classification and extraction problem. It needs a different kind of model, a different integration pattern, and a different success metric from anything the licensed tool was built around. The team had bought a generative assistant for assessors. The actual problem was triage before assessors ever saw the work.
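To make the shape of that capability concrete, a triage step might look something like the sketch below. This is illustrative, not the insurer's actual build: extract_text and classify_queue are hypothetical stand-ins for whatever extraction and classification services the mapped workflow calls for.

```python
# Illustrative triage sketch. extract_text() and classify_queue() are
# hypothetical stand-ins, not real services -- swap in your own OCR,
# parsing, and classification components.

QUEUES = ["motor", "property", "liability", "workers_comp"]

def extract_text(document: bytes, fmt: str) -> str:
    """Hypothetical: OCR for photos, parsing for PDFs and email bodies."""
    raise NotImplementedError("replace with your extraction service")

def classify_queue(text: str) -> tuple[str, float]:
    """Hypothetical: returns (queue_name, confidence) from a classifier."""
    raise NotImplementedError("replace with your classification model")

def triage(document: bytes, fmt: str) -> str:
    """Route an incoming claim document to a queue, or to a human."""
    text = extract_text(document, fmt)
    queue, confidence = classify_queue(text)
    # Low-confidence items still go to a person. The goal is to shrink
    # the 90-minute sorting delay, not to remove the human entirely.
    if confidence < 0.85 or queue not in QUEUES:
        return "manual_review"
    return queue
```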
Once the workflow was mapped, the AI tool selection process took about three weeks. The shortlist had two vendors instead of fifteen. The pilot ran in six weeks instead of six months. None of that was possible while the tool was leading the conversation.
The four-step sequence that works
This is the order we use on every implementation engagement, and it's the order we teach inside custom programs for enterprise teams.
1. Map the workflow. Sit with the people doing the work. Watch a real instance from start to finish. Document every step, including the ones nobody talks about (the spreadsheet someone maintains on the side, the Slack thread that resolves edge cases). McKinsey's research on AI value capture keeps landing on the same point: the workflow detail is where value lives or dies.
2. Define the result. Pick one or two metrics that matter to the business owner. Cycle time. First-pass accuracy. Cost per case. Customer effort score. If you can't measure it before, you can't claim improvement after (a minimal baseline sketch follows this list).
3. Identify the AI-shaped problems. Not every step benefits from AI. Some steps need better data, simpler rules, or fewer approvals. AI is one tool among several. Mark the steps where a model genuinely changes what's possible: extracting structure from unstructured input, drafting a first version, retrieving the right precedent, classifying an exception.
4. Then, and only then, select tools. Now your shortlist is short. You know what capability you need, what data it touches, what integration points exist, and what "good" looks like. Vendor demos become useful instead of seductive. Procurement has real questions to ask, not generic ones.
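The baseline measurement flagged in step 2 can be as small as this sketch. It assumes each historical case record carries opened and closed timestamps and a rework flag; those field names are assumptions for illustration, not a standard, so use whatever your case management system actually exports.

```python
from datetime import datetime

# Minimal baseline sketch for step 2. The record fields (opened, closed,
# reworked) are assumed for illustration only.

def baseline_metrics(cases: list[dict]) -> dict:
    cycle_hours = sorted(
        (c["closed"] - c["opened"]).total_seconds() / 3600 for c in cases
    )
    first_pass = sum(1 for c in cases if not c["reworked"]) / len(cases)
    return {
        "median_cycle_hours": cycle_hours[len(cycle_hours) // 2],
        "first_pass_rate": round(first_pass, 3),
    }

cases = [
    {"opened": datetime(2025, 3, 1, 9), "closed": datetime(2025, 3, 2, 9), "reworked": False},
    {"opened": datetime(2025, 3, 1, 9), "closed": datetime(2025, 3, 4, 9), "reworked": True},
]
print(baseline_metrics(cases))
# {'median_cycle_hours': 72.0, 'first_pass_rate': 0.5}
```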
This is the same logic behind a sound AI risk management approach: you can't govern a workflow you haven't drawn.
What changes for your team
Three things shift when you put workflow design before tool selection.
The first is that your shortlist gets shorter and your pilots get faster. Teams that map workflows first typically run pilots in six to ten weeks instead of four to six months, because the success criteria are clear from day one.
The second is that the people doing the work participate in the design. That changes the change management problem entirely. When the assessor or the analyst helped map the workflow, they have a stake in the AI redesign. When the tool was bought over their heads, they have a reason to resist it. The Australian Public Service Commission's AI in government guidance makes this point bluntly: adoption follows ownership.
The third is that your training spend lands properly. Training people to use a tool they don't need on a workflow nobody mapped is the most expensive form of theatre in corporate learning. Training people to operate inside a redesigned workflow, with the AI capability slotted into the right step, produces measurable capability change. That's the difference between a workshop people sit through and an enterprise AI program that shows up in operational metrics.
The rule, simply
If a vendor is in the room before the workflow is on the wall, you are doing it backwards.
Most AI projects that fail in production were already failing at procurement, because the order was wrong. Fix the order, and most of the downstream problems (low adoption, unclear ROI, frustrated users, idle licences) either shrink or disappear.
The companies getting real returns from AI in 2025 aren't the ones with the best tools. They're the ones who took workflow design seriously before they signed a contract. If you're staring at a licence you're not using, that's where to start: not with a better tool, but with a clearer map of the work the tool was supposed to change.
Ijan Kruizinga
Co-founder of Better People. 20+ years across technology and marketing leadership. Previously CEO of Crucial, CEO/COO of OMG and Jaywing.