What a Good AI Audit Actually Delivers
A useful AI audit produces two things: a written report with specific, costed recommendations and a working prototype you can test. Not a slide deck. Not a proposal for more work.
Most AI consulting reports read the same way. Broad observations. Category-level recommendations. “AI could improve your customer communications.” A sentence, not a finding. It gives a business owner nothing to act on, nothing to cost, and nothing to measure. (If you have not read the companion post on why operations-first analysis finds opportunities that technology-first approaches miss, start there.)
The difference between a useful audit and a useless one comes down to specificity.
“AI-assisted follow-up on 340 dormant leads is projected to reactivate 8–12% within 90 days, adding $28,000–$42,000 in revenue.” That is a finding. Specific enough to act on, cost-justify, and measure. It tells the business owner what to do, what it will cost, what the return looks like, and when they will know if it worked.
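The arithmetic behind a finding like this is simple enough to check. A minimal sketch using the illustrative numbers above, and assuming an average revenue of roughly $1,030 per reactivated lead (a hypothetical figure implied by the quoted range, not stated in the finding):

```python
# Projected revenue from AI-assisted follow-up on dormant leads.
# dormant_leads and reactivation_rate come from the example finding;
# avg_revenue_per_lead is an assumed average deal value ($).
dormant_leads = 340
reactivation_rate = (0.08, 0.12)   # projected 8-12% within 90 days
avg_revenue_per_lead = 1_030       # hypothetical assumption

low = dormant_leads * reactivation_rate[0] * avg_revenue_per_lead
high = dormant_leads * reactivation_rate[1] * avg_revenue_per_lead
print(f"Projected revenue: ${low:,.0f}-${high:,.0f}")
# prints "Projected revenue: $28,016-$42,024"
```

Rounded to the nearest thousand, that reproduces the $28,000–$42,000 range in the finding, which is exactly the point: a specific recommendation is one whose numbers a business owner can recompute.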
The Five Questions Every Recommendation Must Survive
Every recommendation in a useful audit needs to answer the questions a business owner actually asks:
- What exactly should we do? Not a category of improvement: a specific action.
- How much will it cost? Including implementation, ongoing costs, and internal time required.
- How long will it take? With dependencies and prerequisites identified, not hidden.
- What will the return be? A projection grounded in the business’s actual numbers, not industry averages.
- What happens if it does not work? The risk, the fallback, and the cost of getting it wrong.
If a recommendation cannot survive those five questions, it does not belong in the report. Three specific recommendations beat thirty pages of vague possibilities, because vague recommendations waste the business owner’s most limited resource: decision-making energy.
Two Deliverables, Not One
A useful AI opportunity audit produces two things.
A written report with prioritised, specific recommendations. Each recommendation includes the specific action, expected impact, cost, timeline, and how success will be measured. The report opens with the top three opportunities: a business owner should not have to read thirty pages to get to the point.
Recommendations are ranked by effort-to-impact ratio, so the business knows where to start and what to defer. Quick wins that can be delivered in days sit alongside medium-term opportunities and strategic plays that require more investment. The sequencing matters as much as the recommendations themselves.
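The effort-to-impact ranking above is, mechanically, just a sort. A minimal sketch with hypothetical recommendations (all names and numbers invented for illustration):

```python
# Rank recommendations by effort-to-impact ratio: highest projected
# impact per day of effort first. All entries are hypothetical.
recommendations = [
    {"action": "Reactivate dormant leads", "impact": 35_000, "effort_days": 5},
    {"action": "Automate quote follow-up", "impact": 18_000, "effort_days": 3},
    {"action": "Rebuild inventory forecasting", "impact": 60_000, "effort_days": 40},
]

ranked = sorted(recommendations,
                key=lambda r: r["impact"] / r["effort_days"],
                reverse=True)

for r in ranked:
    print(f'{r["action"]}: ${r["impact"] / r["effort_days"]:,.0f} per day of effort')
```

Note how the ranking surfaces the five-day quick win ahead of the forty-day strategic play even though the latter has the larger headline number; that sequencing is the point of the ratio.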
A working prototype of the highest-impact opportunity. Not a slide deck. Not a proposal for more work. A functional demonstration that the business can test, evaluate, and use to make an informed decision about what comes next.
Together, they answer two questions: What should we do? And will it actually work?
Why the Prototype Matters
The prototype is the part that most AI consultants skip, because it requires building something before being paid to build something.
But it is also the part that matters most. It closes the gap between “this sounds promising” and “this works in our business.” A business owner looking at a working prototype makes a fundamentally different decision from one looking at a recommendation in a report.
The AI market is full of consultants who deliver reports. Recommendations are easy. A working demonstration of what the recommendation looks like in practice: that is where the value sits. And the quality of that prototype depends on the architecture behind it, not the model powering it.
Perth AI Consulting delivers AI opportunity analysis for small and medium businesses in Perth. Written report and working prototype, from $1,000. Start with a conversation.