Evaluation 7 min read

AI Audits That Start With Your Business, Not the Technology

Most AI consultants start with the technology and hunt for problems to solve. I start with your operations and find where real value is being left on the table — using evaluation methods refined across government, banking, health, and commercial environments.

Before I became an AI consultant, I spent 20 years evaluating whether complex organisations were actually performing as they should.

I led the evaluation framework for the WA Public Sector Standards Commission, preparing reports for executives and governing authorities, on occasion tabled in Parliament. I designed and managed the evaluation framework for WA Police following the Kennedy Royal Commission. I managed reporting for a major banking institution, translating sales data into executive reporting across a retail network. Later, I led workforce analytics within a major metropolitan health service.

Across every role, the audience was the same: time-poor decision-makers who rejected jargon and were accountable for acting on what they read.

Today, I apply that same discipline to AI.


Evaluation vs Technology-First Consulting

Most AI consulting begins with the technology.

What can GPT-4 do? Where could automation fit? Which chatbot platform should we deploy?

The consultant arrives with tools and looks for somewhere to use them.

Evaluation works in the opposite direction. It begins with the system — your operations, customer journey, and internal workflows — and asks structured questions:

  • What is working well?
  • Where is time or money being wasted?
  • Where are customers being lost?
  • Where is value sitting idle?

Only once those answers are clear do we consider interventions.

Sometimes the highest-leverage solution is AI. Sometimes it is a process redesign. Sometimes it is fixing capacity constraints in something that already works.

The methodology has no preferred answer. It demands the most accurate one.

That distinction shapes what you receive.

A technology-first engagement typically ends with a list of tools you could use. An evaluation-led AI audit delivers a prioritised, quantified action plan — sequenced by impact, effort, and risk — including the recommendations where AI is not the right solution. Your internal technology team (or ours) then has a clear, evidence-based brief to build against.


Five Habits from Executive Reporting That Strengthen AI Audits

When your reports go to executives and governing authorities, you develop habits quickly. Those habits directly shape how I structure AI audits.

1. Lead with “so what”

Decision-makers want the answer first.

  • What did you find?
  • What does it mean?
  • What should we do?

Methodology and supporting analysis matter — but they follow the findings.

AI audit reports open with the top three opportunities, their estimated value, and what implementation would require. The detailed analysis sits behind that summary for anyone who wants to examine the evidence.

You should not have to read 30 pages to reach the point.

2. Quantify what matters

“AI could improve your marketing” is not a finding.

“AI-assisted follow-up on 340 dormant leads is projected to reactivate 8–12% within 90 days, adding $28,000–$42,000 in revenue” is a finding.

Specific. Measurable. Actionable.
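The arithmetic behind a projection like that is simple enough to check yourself. A minimal sketch, where the average value per reactivated client (roughly $1,030) is a hypothetical figure chosen to match the example, not audited data:

```python
# Sketch of the projection arithmetic behind the dormant-lead example.
# value_per_client is a hypothetical assumption, not a figure from the audit.
dormant_leads = 340
reactivation_low, reactivation_high = 0.08, 0.12  # projected reactivation rates
value_per_client = 1030                           # assumed avg revenue per reactivated client

low = dormant_leads * reactivation_low * value_per_client
high = dormant_leads * reactivation_high * value_per_client

print(f"Projected revenue: ${low:,.0f} - ${high:,.0f}")
# → Projected revenue: $28,016 - $42,024
```

A finding stated this way can be tested against reality at 90 days: either the reactivation rate landed in the projected band or it did not.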

Every recommendation includes:

  • The concrete action required
  • Expected impact (quantified wherever possible)
  • Cost and timeline
  • Clear success measures

If it cannot be measured, it cannot be managed.

3. Prioritise ruthlessly

Fifteen opportunities create paralysis. Businesses need to know which three to act on first.

Opportunities are grouped into:

  • Quick wins — high impact, low effort, implement immediately
  • Medium-term builds — meaningful value, some integration required
  • Strategic investments — transformational potential, longer commitment

You start at the top and move down deliberately. No guesswork.
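The grouping above can be expressed as a simple triage rule. A minimal sketch, where the 1-5 scales, the thresholds, and the example opportunities are all illustrative assumptions, not the audit's actual scoring model:

```python
# Illustrative triage of opportunities by impact and effort (1-5 scales).
# Thresholds are assumptions for this sketch, not a fixed methodology.
def triage(impact: int, effort: int) -> str:
    if impact >= 4 and effort <= 2:
        return "Quick win"            # high impact, low effort: implement immediately
    if impact >= 4 and effort >= 4:
        return "Strategic investment" # transformational, longer commitment
    return "Medium-term build"        # meaningful value, some integration required

# Hypothetical opportunities with (impact, effort) scores.
opportunities = {
    "Automated lead follow-up": (5, 2),
    "AI-assisted reporting":    (4, 3),
    "Full workflow redesign":   (5, 5),
}
for name, (impact, effort) in opportunities.items():
    print(f"{name}: {triage(impact, effort)}")
```

The point is not the scoring mechanics but the discipline: every opportunity lands in exactly one bucket, so the sequence of work is never ambiguous.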

4. Anticipate objections

In high-stakes environments, stakeholders actively look for weaknesses. Good reporting anticipates questions before they are raised.

In AI audits, the predictable concerns are:

  • What about data security?
  • Will the team adopt it?
  • What if the AI makes mistakes?
  • How quickly will we see a return?

Each recommendation addresses cost, risk, alternatives, and expected return upfront. That way, you are prepared to explain and defend the decision the moment you make it.

5. Make every recommendation stand alone

Each recommendation is structured so it can be assigned immediately.

It includes:

  • What to do (plain English)
  • Why it matters
  • Expected outcome
  • Cost and timeline
  • How success will be measured (30/60/90 days)
  • Key risks and mitigations

You should be able to hand a single recommendation to a team member and say, “Implement this,” without further interpretation.


Why Evaluation Experience Changes AI Auditing

Technology professionals are trained to build solutions. They naturally focus on what is technically possible.

Evaluators are trained to diagnose systems. They focus on where friction, leakage, and underperformance cause the greatest loss of value.

Across public sector, banking, and health environments, I observed the same pattern: the most impactful recommendations were rarely the most technically complex. They were the ones that addressed the most consequential bottlenecks.

The same principle applies in AI auditing.

The highest-impact AI opportunity in your business is rarely the most impressive model. It is usually the intervention that removes the most friction from the process that drives revenue.

An evaluation-led approach identifies those leverage points first. Technology then becomes a precise instrument, not an experiment.


What You Receive

Every AI opportunity audit includes two deliverables.

A written report

A clear, prioritised, quantified action plan. No jargon. No unnecessary architecture diagrams. No bloated technology landscape overview.

A document designed to be read and acted on.

A working prototype

A functional demonstration of the highest-impact opportunity identified in the audit.

Not a slide deck. Not a proposal for future work.

A working tool your team can test before committing further investment — for example, an AI-assisted follow-up workflow in your CRM or a draft automation around a repetitive internal process.

Together, they answer two questions:

  • What should we do?
  • Will it actually work?

The Bottom Line

Many AI consultants begin with what AI can do and search for places to apply it.

I begin with how your business actually operates — where time is spent, where customers drop away, where revenue leaks — and identify the intervention that produces the greatest return.

Sometimes that intervention is AI. Sometimes it is not.

The method stays neutral. The evidence decides.

If you are looking for an AI audit that Perth businesses can act on immediately — grounded in evaluation discipline rather than technology enthusiasm — that is exactly what this approach delivers.

Twenty years of evaluation experience, now applied to one practical question:

Where does AI actually fit in your business?


Perth AI Consulting delivers AI opportunity audits for small and medium businesses. Written report and working prototype, from $500. Start with a conversation.

Published 23 February 2026
