AI Risk Management: The Defensive Playbook for Australian Enterprises

Ijan Kruizinga

In February 2024, an Arup employee in Hong Kong joined a video call with people who looked and sounded exactly like the company's UK CFO and several colleagues. Every person on that call, except the employee, was a deepfake. They walked him through fifteen separate transactions totalling about HK$200 million (roughly AUD 39 million) before anyone noticed. The story is documented in the Financial Times and confirmed by Arup itself.

This is the part of enterprise AI nobody puts in a slide deck. Every CISO and CFO I speak to has heard about the productivity gains. Far fewer have a clear answer when I ask them what their finance team would do if a Teams call from the CEO turned out to be synthetic.

The defensive side of AI is the side most enterprises have skipped

Australian boards spent 2024 obsessing over Copilot rollouts and Databricks pilots. Most of that money went into the offensive side of AI: how do we use this to ship faster, sell more, and automate the back office? Fair enough. That's where the value is.

But the defensive side, the part that protects you from AI being used against your organisation, has lagged badly. The ASD Annual Cyber Threat Report 2023–24 recorded over 87,000 cybercrime reports, with business email compromise still one of the most damaging categories for Australian businesses. AI hasn't created a new category of crime. It has industrialised the existing ones.

The pattern I see across our enterprise clients is consistent. Security teams understand the threat. Finance, HR, and operations teams, the ones actually targeted, mostly don't. The training they've had on AI is about how to write better prompts, not how to spot a synthetic voice clone of their manager asking for an urgent payment.

That gap is the entire subject of this article. AI risk management is not a technology problem. It's a capability problem in the people closest to the money, the data, and the decisions.

What AI risk management actually covers

When I say AI risk management to a CFO, they think compliance frameworks. When I say it to a CISO, they think model security and prompt injection. When I say it to a head of HR, they think bias and policy. They're all right and all incomplete.

A working definition for an Australian enterprise in 2025 covers four domains:

  • External AI threats. Deepfake fraud, voice cloning, AI-generated phishing, social engineering at scale.

  • Internal AI misuse. Data leakage into public models, prompt injection, shadow AI tools, IP exposure.

  • Governance and compliance. Alignment with the Australian Government's Voluntary AI Safety Standard, Privacy Act obligations, sector-specific regulation.

  • Model and system risk. Hallucinations, bias, drift, and failures in AI systems your organisation builds or buys.

Most enterprises I work with have someone owning two of these. Almost none have a clear owner for all four. That's the first finding of any honest AI risk audit.

Deepfake scams are no longer exotic

Let's start with the threat that's growing fastest and is most under-addressed in Australian enterprises.

Voice cloning of a CEO used to require a research team. Now it requires three minutes of audio scraped from a podcast or earnings call and a tool you can rent for under fifty dollars a month. Microsoft's VALL-E paper showed in early 2023 that a voice could be cloned from as little as a few seconds of audio. The open-source ecosystem has caught up since.

Video deepfakes have followed the same curve. The Arup case is the high-profile example, but the Australian Competition and Consumer Commission's Targeting Scams report shows business email compromise and impersonation scams costing Australian businesses hundreds of millions of dollars annually, and that's before deepfake-enabled variants are reliably classified separately.

Here's what I tell finance and exec assistant teams when we run AI scam awareness sessions:

  1. The scam doesn't usually fail because the deepfake is bad. It fails because someone follows a verification process the attacker can't control.

  2. Out-of-band verification is the only thing that consistently works. A call back to a known number. A second-channel confirmation. Not a reply in the same channel where the request arrived.

  3. Speed is the attacker's weapon. Every deepfake fraud I've studied involves manufactured urgency. Slowing down is the single highest-leverage habit you can train.

Training finance and procurement teams to recognise this pattern, and giving them explicit permission to pause a CEO request, is one of the highest-ROI defensive interventions an Australian enterprise can make right now. We cover this in detail in our AI workshops for business, and it's the most consistently requested session we run for boards.
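
To make the out-of-band habit concrete, here is a minimal sketch of how a payment-release check could encode it. The class, channel names, and threshold are illustrative assumptions for a training discussion, not a real payment control or anyone's production system.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: channel names and the threshold are assumptions,
# not a real payment-control implementation.

@dataclass
class PaymentRequest:
    requester: str            # who appears to be asking, e.g. "CFO"
    amount_aud: float
    request_channel: str      # channel the request arrived on, e.g. "video_call"
    verifications: list = field(default_factory=list)  # (channel, verified_by) pairs

def can_release(request: PaymentRequest, threshold_aud: float = 10_000) -> bool:
    """Release only if at least one verification happened on a different
    channel from the one the request arrived on (out-of-band)."""
    if request.amount_aud < threshold_aud:
        return True
    out_of_band = [
        channel for channel, _ in request.verifications
        if channel != request.request_channel
    ]
    # A reply in the same channel (same email thread, same video call) never counts.
    return len(out_of_band) >= 1

# Example: the Arup-style pattern fails this check until someone
# calls back on a known number.
req = PaymentRequest("CFO", 2_500_000, "video_call")
print(can_release(req))   # False: no out-of-band verification yet
req.verifications.append(("phone_known_number", "AP manager"))
print(can_release(req))   # True: verified on a second channel
```

The point is not the code. It's that "verified on a second channel" is a checkable condition, not a judgement made under pressure on the call itself.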

Data leakage is the quiet risk eating your IP

Deepfakes get the headlines. Data leakage into public AI tools is the quieter, more pervasive problem.

Samsung famously banned ChatGPT internally in 2023 after engineers pasted source code into the tool to debug it. That code may well sit in someone's training corpus now, and there is no recall mechanism if it does.

Every Australian enterprise I've worked with in the past eighteen months has the same problem at smaller scale. People paste customer lists, financial models, draft contracts, and HR investigations into whatever AI tool they have access to, often a personal account on a personal device, because it makes their job easier. They are not malicious. They are productive. The organisation has just never told them where the line is.

A defensible AI compliance posture starts with three things, in this order:

An acceptable use policy that's actually readable. Not a 40-page legal document. A one-pager that says: here are the tools we approve, here are the data classifications, here's what you can and cannot paste into each. If your policy can't fit on a single screen, your staff won't read it and won't follow it.

Approved tooling with enterprise data protections. Microsoft 365 Copilot and Google Gemini for Workspace both offer commercial data protection tiers where prompts are not used for training. If your staff need AI to do their jobs, give them an approved path. Banning AI without providing an alternative just pushes usage to personal accounts where you have zero visibility.

Training that connects the policy to the work. Telling someone "don't paste confidential data into ChatGPT" is meaningless if they don't know what counts as confidential, or what the approved alternative looks like for their specific task. This is where most policy rollouts fail and where targeted AI security training earns its keep.
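
One way to keep that one-pager honest is to draft the approval matrix as data before writing the prose. A minimal sketch follows; the tool names, tiers, and data classifications are assumptions you would replace with your own approved list.

```python
# Sketch of an acceptable-use matrix: data classifications vs approved tools.
# Tool names, tiers, and classifications here are illustrative assumptions.

APPROVED_TOOLS = {
    "copilot_enterprise": {"public", "internal", "confidential"},  # enterprise data protection tier
    "gemini_workspace":   {"public", "internal", "confidential"},
    "public_chatbot":     {"public"},                              # personal accounts: public data only
}

def is_allowed(tool: str, data_classification: str) -> bool:
    """Return True if this classification may be pasted into this tool."""
    return data_classification in APPROVED_TOOLS.get(tool, set())

print(is_allowed("public_chatbot", "confidential"))      # False
print(is_allowed("copilot_enterprise", "confidential"))  # True
print(is_allowed("unapproved_tool", "public"))           # False: not on the approved list
```

If the matrix doesn't fit in a table this small, the policy won't fit on a page either.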

Governance: stop waiting for an Australian AI Act

I get asked weekly whether Australian enterprises should be waiting for federal AI legislation before building governance frameworks. The answer is no, and the reason is straightforward.

The Australian Government's Voluntary AI Safety Standard, released in September 2024, sets out ten guardrails covering accountability, risk management, data governance, testing, transparency, human oversight, contestability, supply chain, stakeholder engagement, and conformity. It's voluntary today. It's a strong indicator of what mandatory regulation will look like.

If you build to those ten guardrails now, you are not getting ahead. You are catching up to where regulators in the EU, UK, and Canada already are, and where APRA-regulated and government entities in Australia will need to be within twelve to twenty-four months.

For most enterprises I work with, the practical governance starting point is:

  • An AI inventory. What tools, models, and systems are in use, by whom, for what data.

  • A use case approval process. Lightweight for low-risk, more rigorous for customer-facing or high-stakes decisions.

  • Clear human-in-the-loop requirements for any AI system that affects a customer, employee, or significant business outcome.

  • Defined ownership. Someone whose job it is to say yes or no to new AI use cases, with criteria.

This is unglamorous work. It is also the work that keeps you out of the news for the wrong reasons. We cover the operational side of this in our piece on AI implementation and adoption.
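
For teams that want to see the shape of an inventory and approval gate before writing the documents, here is a minimal sketch. The fields, risk tiers, and rules are illustrative assumptions, not the Standard's wording or a prescribed schema.

```python
from dataclasses import dataclass

# Illustrative sketch of an AI inventory entry and a lightweight approval gate.
# Field names and risk tiers are assumptions, not a prescribed schema.

@dataclass
class AIUseCase:
    name: str
    owner: str                 # the person who can say yes or no
    tools: list[str]
    data_classification: str   # e.g. "public", "internal", "confidential"
    customer_facing: bool
    affects_individuals: bool  # decisions about a customer or employee

def risk_tier(uc: AIUseCase) -> str:
    if uc.affects_individuals or uc.data_classification == "confidential":
        return "high"      # rigorous review, human-in-the-loop required
    if uc.customer_facing:
        return "medium"    # documented review and monitoring
    return "low"           # lightweight sign-off

inventory = [
    AIUseCase("Meeting summarisation", "Head of IT", ["copilot_enterprise"], "internal", False, False),
    AIUseCase("Credit decision support", "Chief Risk Officer", ["internal_model"], "confidential", True, True),
]

for uc in inventory:
    print(f"{uc.name}: {risk_tier(uc)} risk, owner {uc.owner}")
```

A spreadsheet does the same job. What matters is that every use case has an owner, a tier, and a rule that decides how much scrutiny it gets.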

The human layer is where defence actually happens

Here's the uncomfortable truth most security vendors won't tell you. You cannot buy your way out of AI risk. The technical controls matter (data loss prevention, conditional access, AI gateway tooling), but every serious AI fraud case I've studied was ultimately stopped, or not stopped, by a person making a judgement call.

The Arup employee made fifteen transfers because nothing in his training had prepared him for what he was seeing. The deepfake didn't beat the technical controls. There were no controls between him and the wire instruction. There was only him.

The defensive capability you need is in the people. Specifically:

Finance and AP teams need to recognise synthetic voice and video patterns, internalise out-of-band verification as a non-negotiable habit, and have explicit organisational permission to delay an executive request without career risk.

HR teams need to spot AI-generated phishing aimed at credential harvesting, recognise deepfake interview candidates (yes, this is now a thing in remote hiring), and understand the data classification rules for the sensitive employee information they handle daily.

Executives and their assistants need to understand they are the highest-value targets, that their public audio and video footprint is training data for whoever wants to impersonate them, and that their verification protocols protect everyone downstream of them.

Frontline managers across operations, legal, and procurement need enough fluency to ask the right questions when their teams want to use a new AI tool, and to know when to escalate.

This is not a 30-minute compliance module. It is targeted, role-based capability building, and it has to be refreshed because the threat landscape moves quarterly. The organisations getting this right are running short, sharp, role-specific sessions every six months and treating it as part of their security operating rhythm, not a one-off rollout.

What good looks like in 2025

The Australian enterprises I see managing AI risk well share a few traits. None of them are about having the most sophisticated technology stack.

They have a named owner for AI risk who reports up to the executive, not buried three layers down in IT. They've separated the offensive AI agenda (productivity, automation, products) from the defensive one (risk, governance, scams) and resourced them as two distinct workstreams, even if they connect at the top.

They've mapped which roles in the organisation are highest-risk for AI-enabled fraud (finance, AP, executive support, HR, customer-facing roles in regulated sectors), and they've built a defensive training pathway for those roles that's separate from general AI literacy.

They've moved past one-off awareness sessions to a continuous program. Quarterly threat updates. Annual scenario-based simulations. Role-specific deep-dives when a new threat pattern emerges. The same way a mature organisation handles cyber awareness more broadly.

And they've made governance a yes-machine, not a no-machine. Their AI risk function exists to enable safe adoption at speed, not to block use cases until they go away. That distinction is the difference between governance that improves the business and governance that gets routed around.

Where to start this quarter

If you're a CISO, CFO, head of HR, or chief risk officer reading this and feeling behind, the practical sequence is straightforward.

Start with an honest AI use audit. Survey your staff anonymously about what AI tools they actually use, including personal accounts. The answer will surprise you and it will tell you where your real exposure is.

Run a targeted deepfake and AI scam workshop for your finance, AP, and executive support teams within the next sixty days. This is the highest-ROI defensive intervention you can make. It's also one of the easiest to scope and resource.

Publish a one-page acceptable use policy backed by approved tooling. Make sure people know what they can use, what they can't, and where to go when they're not sure.

Map your governance posture to the Voluntary AI Safety Standard's ten guardrails. Identify the three biggest gaps. Close them this year.
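
If it helps to see the mechanics of that gap exercise, here is a minimal sketch using the ten guardrail areas as named above. The maturity scores are placeholders, not an assessment of any real organisation.

```python
# Sketch of the gap exercise: score each guardrail area 0-5, pick the three lowest.
# The scores below are placeholders, not an assessment of any real organisation.

guardrails = {
    "accountability": 3, "risk management": 2, "data governance": 4,
    "testing": 1, "transparency": 3, "human oversight": 4,
    "contestability": 1, "supply chain": 2, "stakeholder engagement": 3,
    "conformity": 2,
}

biggest_gaps = sorted(guardrails, key=guardrails.get)[:3]
print(biggest_gaps)   # e.g. ['testing', 'contestability', 'risk management']
```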

These are not exotic moves. They are the basic hygiene of an organisation that takes AI seriously on both sides of the ledger. Most Australian enterprises have done some of this work. Almost none have done all of it.

If you'd like to see how this fits into a broader capability plan, our overview of enterprise AI training in Australia covers how the offensive and defensive sides connect, and our custom programs are how we build role-specific defensive capability for finance, HR, and risk teams across our enterprise clients.

The organisations that will come through the next two years cleanly are the ones treating AI risk management as a capability investment in their people, not a policy document in SharePoint. The threats are not slowing down. The good news is that the defences, when they're built into the right people doing the right jobs, work.

Ijan Kruizinga

Co-founder of Better People. 20+ years across technology and marketing leadership. Previously CEO of Crucial, CEO/COO of OMG and Jaywing.
