Woolworths' AI Told a Customer It Had a Mother. That's a Problem.
Woolworths' AI assistant Olive was deliberately scripted to talk about its mother and uncle during customer calls. When callers realised they were talking to an AI pretending to be human, trust broke instantly.
A customer called Woolworths to reschedule a delivery. The phone-based AI assistant, Olive, asked for their date of birth. When the caller gave it, Olive started rambling about how its mother was born in the same year, and something about creating photos. The caller posted on Reddit: “I couldn’t keep up with it.”
It was not a one-off. Another caller reported on X that Olive kept claiming to be a real person and started talking about memories of its mother and her angry voice. Others described Olive telling stories about its uncle: “He was one of the first fuel cells. I think that’s where I get my energy from.” It also made fake typing sounds while pretending to look something up.
What makes it worse: this was not a hallucination. Woolworths confirmed to the BBC that these responses were deliberately scripted. “A number of responses about birthdays were written for Olive by a team member several years ago as a more personal way for Olive to connect with customers.” Someone chose to give an AI a fake family. On purpose.
Woolworths has since removed the scripting. But the damage, to customer trust and to the case for AI in customer service, was already done.
Trust Breaks in One Sentence
A customer who discovers they have been misled about whether they are talking to a human does not think “interesting technical limitation.” They think “this company just tried to trick me.” That reaction is immediate, emotional, and very difficult to reverse.
It does not matter that the AI was useful up to that point. It does not matter that someone thought the scripting would be charming. It does not matter that a human agent would have taken longer. The moment the interaction feels deceptive, the customer re-evaluates everything: the call, the company, and every future interaction with that company’s technology.
This is not unique to Woolworths. It is a predictable failure mode for any business that puts AI in front of customers without clear boundaries. And as more Australian businesses adopt AI for customer service, sales enquiries, and support, the same pattern will repeat everywhere it is not deliberately prevented.
The Fix Is Simple and Most Businesses Skip It
The businesses that will build lasting customer trust with AI are not the ones with the most sophisticated models. They are the ones that follow two rules:
Say it is AI. Early, clearly, and without hedging. “Hi, I’m an AI assistant for [business name]. I can help with most questions, and I can connect you to a person anytime.” That single sentence eliminates the deception risk entirely. No customer feels tricked by an AI that told them it was an AI.
Make the exit obvious. Every AI interaction should have a clear, easy path to a human. Not buried in a menu. Not triggered by a keyword the customer has to guess. Visible and immediate. The paradox is that customers who know they can reach a human are more willing to stay with the AI, because the choice is theirs.
These are not technical challenges. They are design decisions. And they are decisions most businesses skip because they worry that disclosing AI will reduce engagement. The evidence says the opposite. Customers who feel informed engage more. Customers who feel deceived do not come back.
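For teams actually building this, the two rules come down to a few lines of logic, not a smarter model. Here is a minimal sketch in Python. The names are illustrative only (opening_message, wants_human, HANDOFF_PHRASES), not any particular platform's API: the disclosure is the very first message, and every turn checks for a request for a human before the AI is allowed to answer.

```python
# Minimal sketch of the two rules: disclose the AI up front, and keep an
# obvious path to a human on every turn. Names are illustrative, not a
# specific vendor's API.

HANDOFF_PHRASES = ("human", "person", "agent", "real person", "someone")


def opening_message(business_name: str) -> str:
    """Rule 1: say it is AI, early and clearly, before anything else."""
    return (
        f"Hi, I'm an AI assistant for {business_name}. "
        "I can help with most questions, and I can connect you to a person anytime."
    )


def wants_human(customer_message: str) -> bool:
    """Rule 2: treat any plain request for a person as a handoff request."""
    text = customer_message.lower()
    return any(phrase in text for phrase in HANDOFF_PHRASES)


def handle_turn(customer_message: str, escalate, respond):
    """Route every turn through the handoff check before the AI replies."""
    if wants_human(customer_message):
        return escalate(customer_message)  # hand the conversation to a person
    return respond(customer_message)       # otherwise let the AI answer


if __name__ == "__main__":
    print(opening_message("Example Grocer"))
    print(handle_turn(
        "Can I talk to a real person?",
        escalate=lambda m: "No problem, connecting you to a team member now.",
        respond=lambda m: "AI reply: " + m,
    ))
```

In a real deployment the handoff would also be a visible button or spoken menu option rather than keyword matching alone; the point of the sketch is that the escalation path is checked on every single turn, not hidden behind a phrase the customer has to guess.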
What This Means for Any Business Using AI
The Woolworths incident went viral not because Australians hate AI. It went viral because it felt dishonest. Someone deliberately designed an AI to pretend it had a family, and customers felt deceived when they realised what was happening.
Every business considering customer-facing AI should ask one question before anything else: if this AI behaves unexpectedly, will the customer feel informed or deceived?
If the AI has already introduced itself honestly, an odd response is a quirk, something the customer might even laugh about. If the AI has been pretending to be human, that same odd response becomes evidence of deception. The difference is not the failure. It is the framing that surrounds it.
AI in customer service is not going away. It will get better. But the businesses that earn trust with it will be the ones that were honest about it from the start, not the ones that tried to make it pass for human and hoped nobody would notice. The psychology of how teams adopt AI matters, and so does the psychology of how customers experience it. When businesses understand what happens to their data and are transparent about how their AI works, trust follows naturally.
Perth AI Consulting helps businesses deploy AI that customers actually trust. Start with a conversation.