Technical 7 min read

What Happens to Your Data When You Press 'Send' on an AI Tool

Most businesses are sending customer data, financials, and internal documents to AI tools without understanding what happens during processing. The spectrum of AI privacy protection is wider than you think — and recent research shows that even purpose-built security can have structural flaws.

You paste a client email into ChatGPT to draft a reply. You upload a financial report to get a summary. You feed customer feedback into an AI tool to spot patterns. You ask Claude to review a contract clause.

Every one of those actions sends data to a server you do not control. And for most businesses, that is where the understanding stops.

Where does that data go? Who can access it during processing? Is it stored? Is it used to train future models? What happens if the server is compromised?

These are not hypothetical questions. They are architectural ones — and the answers vary enormously depending on which tool you use, how it is configured, and what sits between your data and the outside world.

Most businesses have never asked these questions. If your business handles anything sensitive — client information, financial data, health records, legal documents, commercial-in-confidence material — you need to start.


The Default: Your Data Leaves Your Control

When you use a standard AI tool — ChatGPT, Claude, Gemini, or any of the dozens of industry-specific AI platforms — your data travels to a cloud server, is processed by the model, and a response is returned. The data exists, at least temporarily, on infrastructure you do not own and cannot inspect.

Most major providers now offer usage policies stating they do not train on your data (particularly on business and API tiers). That is a policy commitment. It is not an architectural guarantee. The data still travels to their servers. It still exists in memory during processing. The protection is a promise, not a mechanism.

For many business uses, this is perfectly acceptable. Drafting marketing copy, brainstorming product names, summarising public information — the data is not sensitive, and the convenience outweighs the risk.

But the moment your AI workflow involves client data, employee records, financial details, health information, or legal documents, the calculus changes. A policy that says “we do not train on your data” does not answer the question that actually matters: who can access this data while it is being processed?


The Spectrum of AI Privacy

AI privacy is not binary. It exists on a spectrum, and understanding where a tool sits on that spectrum is essential before you send it anything sensitive.

Level 1: No Protection

The AI tool processes your data on shared cloud infrastructure. Your data may be logged, stored, or used for model improvement. This is the default for free-tier consumer AI tools. It is not appropriate for any business data you would not publish on your website.

Level 2: Policy Protection

The provider commits — through terms of service or a data processing agreement — not to store, log, or train on your data. Enterprise tiers of major AI providers typically offer this. The protection is contractual. The data still travels to and is processed on their infrastructure, but you have a legal commitment regarding how it is handled.

For most business uses, this is sufficient. But “sufficient” depends entirely on what you are sending. For a marketing agency drafting social media posts, policy protection is more than adequate. For a medical practice processing patient notes, it may not be.

Level 3: Partial Architectural Protection

Some systems attempt to protect data architecturally — using encryption, secure enclaves, or trusted execution environments (TEEs) to ensure data is protected during processing, not just by policy.

However, not all architectural approaches are equal. Recent research from Rutgers University demonstrated a structural vulnerability in AI systems that split computation between a secure CPU enclave and an untrusted GPU. These “partial TEE” architectures use cryptographic noise to obfuscate data before it reaches the GPU, then remove the obfuscation when it returns.

The researchers showed that attackers could characterise the noise patterns through repeated queries, filter out the protection mathematically, and recover the protected data. In testing, the attack achieved a 100% success rate, recovering model secrets in approximately six minutes.
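The core weakness is easy to see in miniature. The toy sketch below is not the actual Rutgers attack — the secret, noise scale, and recovery method are illustrative assumptions — but it shows why fresh zero-mean noise on every query is weak protection: averaging enough observations cancels the noise and exposes the underlying value.

```python
import random

random.seed(0)

# Stand-in for a protected value (e.g. a model weight) that the secure
# side obfuscates with fresh noise before handing it to the untrusted GPU.
SECRET = [0.7, -1.2, 3.4]

def noisy_observation():
    """What the untrusted side sees on each query: secret + fresh noise."""
    return [s + random.gauss(0, 5.0) for s in SECRET]

def recover(num_queries):
    """Average many observations; zero-mean noise cancels toward zero."""
    sums = [0.0] * len(SECRET)
    for _ in range(num_queries):
        for i, v in enumerate(noisy_observation()):
            sums[i] += v
    return [s / num_queries for s in sums]

print(recover(10))       # still dominated by noise
print(recover(100_000))  # converges toward SECRET as the noise averages away
```

Because the flaw is statistical rather than a coding mistake, adding more noise only raises the number of queries needed — it does not close the channel.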

This is not a minor implementation bug. It is a structural flaw in the architecture itself — the kind of vulnerability that cannot be patched because it is inherent to the design approach.

The lesson is important: architectural protection is only as strong as the architecture. “We use encryption” or “we use secure enclaves” is not sufficient. The question is how comprehensive the protection is and whether the design creates exploitable patterns.

Level 4: Full Architectural Protection

Full TEE architectures protect data throughout the entire processing pipeline. Both the CPU and GPU operate within hardware-secured enclaves — Intel TDX for the CPU, NVIDIA H100 Confidential Computing for the GPU. Data is encrypted in memory at the silicon level. It never leaves the trust boundary. There is no unprotected handoff between components, which means there is no attack surface of the kind the Rutgers researchers exploited.

In a full TEE system, not even the platform operator can access the data during processing. The protection is enforced by hardware, not policy. This is the gold standard for AI privacy — and it is what is required when the data is genuinely sensitive.
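In practice, a client trusts a full TEE system because the enclave can prove what it is running before any data is sent — a process called remote attestation. The sketch below is a simplified illustration: the report fields and function names are this example's assumptions, not a real SDK. Real attestation reports (Intel TDX quotes, NVIDIA confidential-computing attestations) are signed binary structures verified against vendor root certificates.

```python
import hashlib

# Hypothetical "known good" code measurement the client expects the
# enclave to report. In reality this is a hash of the approved build.
EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-enclave-build-v1").hexdigest()

def verify_attestation(report: dict) -> bool:
    """Gate sensitive data on proof of what the remote enclave is running.

    Illustrative checks: the hardware is a genuine TEE, the code measurement
    matches an approved build, and debug access is disabled.
    """
    return (
        report.get("tee_type") in {"intel-tdx", "nvidia-h100-cc"}
        and report.get("measurement") == EXPECTED_MEASUREMENT
        and report.get("debug_enabled") is False
    )

good_report = {
    "tee_type": "intel-tdx",
    "measurement": EXPECTED_MEASUREMENT,
    "debug_enabled": False,
}
assert verify_attestation(good_report)

# A report with debug mode enabled, or an unrecognised code measurement,
# is rejected before any data leaves the client.
assert not verify_attestation({**good_report, "debug_enabled": True})
```

The design point is that trust is established by verification, not by the operator's word: if the enclave cannot produce a valid report, the client never sends the data.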


What We Built — And Why It Matters

When I built a clinical documentation platform for mental health professionals, privacy was not a feature. It was the foundation.

Therapists cannot use standard AI tools. Session notes contain protected health information — diagnoses, treatment details, personal disclosures. Sending that data to a standard cloud AI service is an ethical violation, potentially a legal one, and most therapists know it. The result is that an entire profession that would benefit enormously from AI documentation assistance is locked out of it by legitimate privacy constraints.

The platform I built uses NEAR Protocol’s full TEE infrastructure — Intel TDX and NVIDIA H100 Confidential Computing. Clinical data is encrypted in CPU and GPU memory during processing. Not even the platform can access the content during inference. The architecture makes it structurally impossible, not just policy-prohibited.

That is the extreme end of the spectrum. Not every business needs silicon-level privacy guarantees. But the experience of building it taught me something that applies to every AI engagement: privacy is an architecture decision, not a settings toggle. The same principle applies to AI quality generally — the architecture around the AI matters more than the model itself. And the right architecture depends on what data you are processing, what industry you operate in, and what the consequences of exposure would be.


The Questions Your Business Should Ask

Before sending sensitive data to any AI tool, ask these five questions:

1. Where does my data go during processing?

Not where is it stored — where does it travel? Which servers process it? In which jurisdiction? If the answer is vague, the protection is vague.

2. Who can access my data while it is being processed?

This is the question most privacy policies do not answer. Data at rest can be encrypted. Data in transit can be encrypted. But data during processing — data in use — is where most exposure occurs. Does the system protect data during inference, or only before and after?
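The data-in-use gap can be shown in a few lines. This toy sketch (the XOR cipher is a stand-in for TLS/AES, and the variable names are illustrative) shows that even with encryption in transit on both legs, a standard server must hold the plaintext in memory while the model processes it — and that window is exactly where policy-only protection offers no mechanism.

```python
import base64

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Toy stand-in cipher (XOR + base64); real systems use TLS/AES,
    but the structural point is identical."""
    return base64.b64encode(
        bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))
    )

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    raw = base64.b64decode(ciphertext)
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(raw))

KEY = b"session-key"

# 1. Client encrypts before sending: protected in transit.
request = encrypt(b"Patient presents with acute anxiety...", KEY)

# 2. Server must decrypt to process. From here until re-encryption,
#    the plaintext sits in ordinary memory, readable by the operator,
#    a compromised host, or anything that can inspect this process.
plaintext_in_memory = decrypt(request, KEY)
summary = plaintext_in_memory[:16] + b"..."  # stand-in for model inference

# 3. Response is re-encrypted: protected in transit again.
response = encrypt(summary, KEY)
```

A full TEE closes that window by keeping step 2 inside hardware-encrypted memory; everything else on the spectrum leaves it open and covers it with policy.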

3. Is the protection contractual or architectural?

A data processing agreement is a promise. A trusted execution environment is a mechanism. Both have value. But if your data would cause serious harm if exposed — client health records, legal strategy, financial modelling — you need to understand the difference.

4. Has the architecture been independently validated?

The Rutgers research is a reminder that security architectures can have structural flaws invisible to non-specialists. Has the system been reviewed by independent security researchers? Are the results published? If the vendor’s only evidence is their own assurance, that is worth exactly as much as any self-assessment.

5. What are the consequences if this data is exposed?

This is the question that determines where on the spectrum your business needs to sit. If exposed data would be embarrassing, policy protection is probably sufficient. If it would trigger regulatory action, breach notification obligations, or loss of professional registration, you need architectural guarantees.
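The logic of question 5 can be written down as a simple decision rule. The categories and thresholds below are this sketch's assumptions — a starting point for discussion, not a compliance standard.

```python
def required_level(consequence: str) -> int:
    """Map the worst-case consequence of exposure to a minimum
    protection level on the spectrum (1-4). Illustrative only."""
    mapping = {
        "none": 1,               # data you would publish anyway
        "embarrassment": 2,      # policy protection: enterprise tier + DPA
        "commercial_harm": 3,    # architectural protection, independently reviewed
        "regulatory_breach": 4,  # full TEE: hardware-enforced guarantees
    }
    if consequence not in mapping:
        raise ValueError(f"unknown consequence category: {consequence}")
    return mapping[consequence]

# A medical practice processing patient notes sits at the top of the scale;
# a marketing team drafting social posts sits near the bottom.
assert required_level("regulatory_breach") == 4
assert required_level("embarrassment") == 2
```

The useful discipline is not the exact mapping but the habit: classify the data before choosing the tool, not after.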


The Bottom Line

Most businesses are sending data to AI tools without understanding what happens to it during processing. For non-sensitive work, that is fine — the convenience of AI outweighs a minimal privacy risk.

But if your business handles client data, health information, legal documents, or financial records, the default is not good enough. And “we take privacy seriously” is not an architecture.

The spectrum of AI privacy — from no protection through policy commitments to partial and full architectural guarantees — is wider than most businesses realise. Where your business needs to sit on that spectrum depends on your data, your industry, and the consequences of getting it wrong.

I have built AI systems at the highest end of that spectrum — zero-knowledge architectures where not even the platform operator can access data during processing. That experience shapes how I evaluate privacy requirements for every client, whether they need silicon-level guarantees or simply a properly configured enterprise AI tier.

If your business needs AI but cannot afford to be casual about where the data goes, that is exactly the problem we solve.


Perth AI Consulting builds AI systems for businesses where data privacy is not optional, from configuring enterprise AI tiers with proper data agreements to designing full zero-knowledge architectures for regulated industries. Start with a conversation.

Published 12 January 2026

