Technical 5 min read

What Happens to Your Data When You Press 'Send' on an AI Tool

Most businesses are sending customer data, financials, and internal documents to AI tools without understanding what happens during processing. The spectrum of AI privacy protection is wider than you think; recent research shows that even purpose-built security can have structural flaws.

You paste a client email into ChatGPT to draft a reply. You upload a financial report to get a summary. You feed customer feedback into an AI tool to spot patterns. You ask Claude to review a contract clause.

Every one of those actions sends data to a server you do not control. And for most businesses, that is where the understanding stops.

Where does that data go? Who can access it during processing? Is it stored? Is it used to train future models? What happens if the server is compromised?

These are not hypothetical questions. They are architectural ones; the answers vary enormously depending on which tool you use, how it is configured, and what sits between your data and the outside world.

Most businesses have never asked these questions. If your business handles anything sensitive (client information, financial data, health records, legal documents, commercial-in-confidence material), you need to start.


The Default: Your Data Leaves Your Control

When you use a standard AI tool (ChatGPT, Claude, Gemini, or any of the dozens of industry-specific AI platforms), your data travels to a cloud server, is processed by the model, and a response is returned. The data exists, at least temporarily, on infrastructure you do not own and cannot inspect.

Most major providers now offer usage policies stating they do not train on your data (particularly on business and API tiers). That is a policy commitment. It is not an architectural guarantee. The data still travels to their servers. It still exists in memory during processing. The protection is a promise, not a mechanism.

For many business uses, this is perfectly acceptable: drafting marketing copy, brainstorming product names, summarising public information. The data is not sensitive, and the convenience outweighs the risk.

But the moment your AI workflow involves client data, employee records, financial details, health information, or legal documents, the calculus changes. A policy that says “we do not train on your data” does not answer the question that actually matters: who can access this data while it is being processed?


The Spectrum of AI Privacy

AI privacy is not binary. It exists on a spectrum, and understanding where a tool sits on that spectrum is essential before you send it anything sensitive.

Level 1: No Protection

The AI tool processes your data on shared cloud infrastructure. Your data may be logged, stored, or used for model improvement. This is the default for free-tier consumer AI tools. It is not appropriate for any business data you would not publish on your website.

Level 2: Policy Protection

The provider commits, through terms of service or a data processing agreement, not to store, log, or train on your data. Enterprise tiers of major AI providers typically offer this. The protection is contractual. The data still travels to and is processed on their infrastructure, but you have a legal commitment regarding how it is handled.

For most business uses, this is sufficient. But “sufficient” depends entirely on what you are sending. For a marketing agency drafting social media posts, policy protection is more than adequate. For a medical practice processing patient notes, it may not be.

Level 3: Partial Architectural Protection

Some systems attempt to protect data architecturally, using encryption, secure enclaves, or trusted execution environments (TEEs) to ensure data is protected during processing, not just by policy.

However, not all architectural approaches are equal. Recent research from Rutgers University demonstrated a structural vulnerability in AI systems that split computation between a secure CPU enclave and an untrusted GPU. These “partial TEE” architectures use cryptographic noise to obfuscate data before it reaches the GPU, then remove the obfuscation when it returns.

The researchers showed that attackers could characterise the noise patterns through repeated queries, filter out the protection mathematically, and recover the protected data. In testing, the attack achieved a 100% success rate, recovering model secrets in approximately six minutes.
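The failure mode is easy to demonstrate. The toy sketch below is illustrative only, not the actual Rutgers attack: the secret value, the query count, and the zero-mean Gaussian noise are all assumptions. It shows why additive noise that averages to zero cannot hide a value from an attacker who can query repeatedly:

```python
import random
import statistics

random.seed(0)  # for a reproducible demonstration

SECRET_WEIGHT = 0.7342  # stand-in for a model parameter the enclave wants hidden

def obfuscated_query() -> float:
    """Simulates what the untrusted GPU observes in a partial-TEE design:
    the secret value masked by fresh zero-mean noise on every query."""
    return SECRET_WEIGHT + random.gauss(0.0, 1.0)

# No single observation reveals the secret, but because the noise averages
# to zero, repeating the query and taking the mean filters it out.
observations = [obfuscated_query() for _ in range(200_000)]
estimate = statistics.fmean(observations)

print(f"recovered ~= {estimate:.3f} (true value {SECRET_WEIGHT})")
```

The defence looks opaque on any single query; the statistical structure of the noise is itself the leak.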

This is not a minor implementation bug. It is a structural flaw in the architecture itself; the kind of vulnerability that cannot be patched because it is inherent to the design approach.

The lesson is important: architectural protection is only as strong as the architecture. “We use encryption” or “we use secure enclaves” is not sufficient. The question is how comprehensive the protection is and whether the design creates exploitable patterns.

Level 4: Full Architectural Protection

Full TEE architectures protect data throughout the entire processing pipeline. Both the CPU and GPU operate within hardware-secured enclaves: Intel TDX for the CPU, NVIDIA H100 Confidential Computing for the GPU. Data is encrypted in memory at the silicon level. It never leaves the trust boundary. There is no unprotected handoff between components, which means there is no attack surface of the kind the Rutgers researchers exploited.

In a full TEE system, not even the platform operator can access the data during processing. The protection is enforced by hardware, not policy. This is the gold standard for AI privacy; it is what is required when the data is genuinely sensitive.
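In practice, the client's trust in a full TEE rests on remote attestation: the hardware signs a measurement of the exact code running inside the enclave, and the client refuses to release data unless that measurement matches one it has approved. The following is a conceptual sketch only; the service name is hypothetical, and real TDX or H100 attestation involves vendor-signed quotes verified against hardware certificates, not bare hashes:

```python
import hashlib
import hmac

# Hypothetical measurement of the approved enclave build. In a real system
# this value comes from a hardware-signed attestation report, not a hash
# the client computes itself.
APPROVED_MEASUREMENT = hashlib.sha256(b"inference-service-v1.4").hexdigest()

def client_should_send(attestation_measurement: str, expected: str) -> bool:
    """Release data only if the remote enclave proves it is running exactly
    the code we approved. Constant-time compare avoids timing leaks."""
    return hmac.compare_digest(attestation_measurement, expected)

# An enclave running the approved build passes; any modified build is refused.
genuine = hashlib.sha256(b"inference-service-v1.4").hexdigest()
tampered = hashlib.sha256(b"inference-service-v1.4-debug-hooks").hexdigest()

print(client_should_send(genuine, APPROVED_MEASUREMENT))   # True
print(client_should_send(tampered, APPROVED_MEASUREMENT))  # False
```

The point of the sketch is the decision rule: the data holder, not the platform operator, decides which code is allowed to see the plaintext.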


The Questions Your Business Should Ask

Before sending sensitive data to any AI tool, ask these five questions:

1. Where does my data go during processing?

Not where is it stored. Where does it travel? Which servers process it? In which jurisdiction? If the answer is vague, the protection is vague.

2. Who can access my data while it is being processed?

This is the question most privacy policies do not answer. Data at rest can be encrypted. Data in transit can be encrypted. But data during processing, data in use, is where most exposure occurs. Does the system protect data during inference, or only before and after?
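The three states (at rest, in transit, in use) are easy to conflate, so here is a minimal sketch of the gap. The XOR keystream below is a toy stand-in for real transport encryption such as TLS, never something to use in practice; it exists only to show that whatever protects the wire, the server must hold plaintext in memory before a model can process it:

```python
import hashlib
from itertools import count

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (stand-in for TLS/disk encryption, NOT real crypto):
    XORs data against a SHA-256-derived keystream. Applying it twice with
    the same key decrypts."""
    stream = b""
    for block in count():
        if len(stream) >= len(data):
            break
        stream += hashlib.sha256(key + block.to_bytes(4, "big")).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

key = b"session-key"
record = b"patient: J. Smith, diagnosis: ..."

in_transit = keystream_xor(key, record)   # encrypted on the wire
assert record not in in_transit           # ciphertext reveals nothing directly

# On the provider's server, the record must be decrypted before the model
# can read it. This plaintext-in-memory window is "data in use".
in_memory = keystream_xor(key, in_transit)
assert in_memory == record                # plaintext exists during processing
```

Confidential-computing hardware aims to close exactly this window by keeping even the in-memory copy encrypted at the silicon level.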

3. Is the protection contractual or architectural?

A data processing agreement is a promise. A trusted execution environment is a mechanism. Both have value. But if your data would cause serious harm if exposed (client health records, legal strategy, financial modelling), you need to understand the difference.

4. Has the architecture been independently validated?

The Rutgers research is a reminder that security architectures can have structural flaws invisible to non-specialists. Has the system been reviewed by independent security researchers? Are the results published? If the vendor’s only evidence is their own assurance, that is worth exactly as much as any self-assessment.

5. What are the consequences if this data is exposed?

This is the question that determines where on the spectrum your business needs to sit. If exposed data would be embarrassing, policy protection is probably sufficient. If it would trigger regulatory action, breach notification obligations, or loss of professional registration, you need architectural guarantees.


The Bottom Line

Most businesses are sending data to AI tools without understanding what happens to it during processing. For non-sensitive work, that is fine; the convenience of AI outweighs a minimal privacy risk.

But if your business handles client data, health information, legal documents, or financial records, the default is not good enough. And “we take privacy seriously” is not an architecture.

The spectrum of AI privacy, from no protection through policy commitments to partial and full architectural guarantees, is wider than most businesses realise. Where your business needs to sit on that spectrum depends on your data, your industry, and the consequences of getting it wrong.

Privacy is an architecture decision, not a settings toggle, and architecture decisions determine output quality across every aspect of an AI implementation. It is one of the first things we evaluate in every AI opportunity analysis because the right AI solution for your business depends on what data it needs to touch and what the constraints around that data are.


Perth AI Consulting delivers AI opportunity analysis for small and medium businesses. Written report and working prototype, from $1,000. Start with a conversation.

Published 22 December 2025


Written with Claude, Perplexity, and Grok. Directed and edited by Perth AI Consulting.
