Stop Telling AI What NOT to Do: The Positive Framing Revolution
Most businesses get poor results from AI because they instruct it with constraints and prohibitions. Switching from negative framing to positive framing transforms output quality — and the principle comes from psychology, not computer science.
If you have used ChatGPT or any AI tool for your business, you have almost certainly written instructions like this: “Do not use jargon. Do not be too formal. Do not make things up. Do not exceed 200 words.” It feels logical. You know what you do not want, so you tell the AI what to avoid.
The problem is that this approach produces exactly the kind of cautious, generic, lifeless output that makes business owners conclude AI is not ready for real work.
It is ready. You are just instructing it wrong.
The Negative Framing Problem
When you tell a person “do not think about a white bear,” they immediately think about a white bear. This is not a quirk. It is a well-documented cognitive phenomenon: psychologist Daniel Wegner published the white-bear thought-suppression experiments in 1987 and later formalised the finding as ironic process theory. The act of monitoring for something you are trying to suppress keeps that very thing active in your mind.
AI language models do not have minds. But they do have a structural equivalent of this problem. When you fill your instructions with prohibitions, you are dedicating a significant portion of the model’s attention to the concepts you want it to avoid. The word “jargon” appears in your instruction, so jargon-adjacent language stays activated in the model’s probability space. “Do not be formal” keeps formal registers present. “Do not make things up” foregrounds the boundary between fact and fabrication in a way that often produces hedged, uncertain phrasing.
The result is output that reads like it was written by someone who is afraid of making a mistake. Which, in a sense, it was.
What Positive Framing Looks Like
Positive framing replaces prohibitions with descriptions of what you actually want. It is a simple shift, but the difference in output quality is immediate and measurable.
Instead of negative constraints:
- “Do not use jargon”
- “Do not be too formal”
- “Do not exceed 200 words”
- “Do not make things up”
- “Do not sound like a robot”
Use positive descriptions:
- “Use plain language that a business owner with no technical background would understand”
- “Write in a warm, direct, conversational tone — like explaining something to a colleague over coffee”
- “Keep the response to 150–200 words”
- “Only include claims you can directly support from the information provided”
- “Write as a knowledgeable person who genuinely wants to help”
The negative versions tell the AI what to move away from. The positive versions tell it what to move toward. A model with a clear destination produces far better work than a model surrounded by fences.
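The contrast is easiest to see with the same brief written both ways. The prompts below are illustrative examples constructed for this article, not taken from any real system:

```python
# Two ways to brief an AI tool on the same task: a product-update email.
# Both prompts are illustrative, not from any real system.

negative_prompt = """Write a product update email for our customers.
Do not use jargon. Do not be too formal. Do not exceed 200 words.
Do not make things up. Do not sound like a robot."""

positive_prompt = """Write a product update email for our customers.
Use plain language that a business owner with no technical background
would understand. Write in a warm, direct, conversational tone.
Keep it to 150-200 words. Only include claims supported by the
release notes provided below."""

# The negative version keeps "jargon", "formal", and "robot" active in
# the model's context; the positive version replaces every fence with
# a destination.
print("Prohibitions in negative prompt:",
      negative_prompt.lower().count("do not"))
print("Prohibitions in positive prompt:",
      positive_prompt.lower().count("do not"))
```

Both prompts describe the same email, but only one of them tells the model where to go.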
Why This Works: Probability, Not Obedience
AI language models do not follow instructions the way an employee follows a policy manual. They generate text by predicting what comes next, word by word, based on the full context of your prompt. Every word in your instructions shifts the probability distribution of the output.
When your instructions contain the phrase “do not use jargon,” the words “do,” “not,” “use,” and “jargon” all become part of the context. The model now has “jargon” as an active concept. It will likely avoid the most obvious jargon, but the neighbourhood of jargon — semi-technical language, industry-adjacent phrasing, unnecessarily complex vocabulary — remains probabilistically elevated.
When your instructions instead say “use plain language that a Year 10 student would understand,” the active concepts are “plain,” “language,” “student,” and “understand.” The model’s probability space shifts toward simplicity, clarity, and accessibility. Not because it is obeying a rule, but because the context points it in that direction.
This is why positive framing is not just a stylistic preference. It is a more effective way to steer the model’s output.
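The steering effect can be illustrated, very crudely, by looking at which content words each instruction leaves in the context. This is a toy analysis with a hand-picked stop-word list, not how a real tokenizer or attention mechanism works:

```python
# A toy illustration of why framing matters: strip out function words
# and see which concepts each instruction leaves "active". Real models
# use subword tokens and attention, not a stop-word filter, so treat
# this purely as an intuition pump.

STOP_WORDS = {"do", "not", "use", "be", "too", "a", "that", "would"}

def active_concepts(instruction: str) -> list[str]:
    """Return the content words an instruction leaves in context."""
    words = instruction.lower().replace(".", "").split()
    return [w for w in words if w not in STOP_WORDS]

negative = "Do not use jargon. Do not be too formal."
positive = "Use plain language that a Year 10 student would understand."

print(active_concepts(negative))  # the unwanted concepts survive
print(active_concepts(positive))  # the desired concepts survive
```

In the negative instruction, the only concepts left standing are the ones you were trying to avoid.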
The Business Owner’s Quick Guide to Reframing
You do not need to understand how language models work to apply this principle. The rule is simple: every time you catch yourself writing “do not,” stop and describe what you want instead.
Here are the most common negative instructions businesses use, with their positive replacements:
Tone and voice:
- “Do not be salesy” → “Be helpful and informative. Let the value speak for itself.”
- “Do not be boring” → “Open with the most interesting or surprising point. Use specific examples.”
- “Do not sound like AI” → “Write as a senior [role] with 10 years of experience who communicates clearly and directly.”
Content and accuracy:
- “Do not hallucinate” → “Only reference information provided in the source material. If you are unsure, say so.”
- “Do not go off-topic” → “Focus specifically on [topic]. Every paragraph should connect back to [core point].”
- “Do not repeat yourself” → “Make each paragraph introduce a new idea or example.”
Format and length:
- “Do not write too much” → “Respond in 3–4 concise paragraphs.”
- “Do not use bullet points” → “Write in flowing prose with clear paragraph breaks.”
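If your team writes prompts regularly, the replacements above can live in a small reusable lookup, with a helper that flags prohibitions in a draft. This is a sketch; the phrase list is just a sample of the reframes from this article, and real drafts will need fuzzier matching than exact phrases:

```python
# The reframing table as a lookup, plus a helper that flags negative
# framing in a draft prompt. The phrase list is deliberately small;
# extend it with the prohibitions your own team actually writes.

REFRAMES = {
    "do not be salesy": "Be helpful and informative. Let the value speak for itself.",
    "do not hallucinate": "Only reference information provided in the source material.",
    "do not write too much": "Respond in 3-4 concise paragraphs.",
    "do not use bullet points": "Write in flowing prose with clear paragraph breaks.",
}

def flag_prohibitions(prompt: str) -> list[str]:
    """Return the known prohibitions found in a draft prompt."""
    lowered = prompt.lower()
    return [phrase for phrase in REFRAMES if phrase in lowered]

draft = "Summarise the report. Do not hallucinate. Do not write too much."
for phrase in flag_prohibitions(draft):
    print(f'Replace "{phrase}" with "{REFRAMES[phrase]}"')
```

A checklist like this will not catch every negative construction, but it makes the “count the prohibitions” habit automatic.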
The Compound Effect
Positive framing does not just improve individual outputs. It compounds across your entire AI workflow — especially when combined with a structured pipeline that separates analysis from creation.
When you give an AI tool a positively framed identity — “You are a senior business analyst who communicates complex findings in plain language to non-technical business owners” — every subsequent instruction inherits that framing. The model does not need to be told what not to do because it has a clear picture of what it is.
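Most chat-style AI APIs accept a list of role-tagged messages, with the identity set once in a system message. The exact field names vary by provider, so treat this structure as illustrative rather than any specific vendor's API:

```python
# A positively framed identity as a system message, in the role-tagged
# message format most chat-style APIs use. Field names vary by
# provider; treat this shape as illustrative.

IDENTITY = (
    "You are a senior business analyst who communicates complex "
    "findings in plain language to non-technical business owners."
)

def build_messages(user_request: str) -> list[dict]:
    """Every request inherits the positive identity automatically."""
    return [
        {"role": "system", "content": IDENTITY},
        {"role": "user", "content": user_request},
    ]

messages = build_messages("Summarise last quarter's sales figures.")
print(messages[0]["role"], "->", messages[0]["content"])
```

Because the identity is set once and prepended to every request, no individual prompt needs its own list of prohibitions.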
If your team spends 20 minutes rewriting every AI draft, you are not saving time — you are adding a hidden tax to your workflow. Positive framing reduces that tax directly, in three ways:
- Output quality improves immediately. The first draft is closer to what you actually want, which means less editing time.
- Consistency increases. A positive identity produces more predictable output than a list of prohibitions, because the model has a stable reference point rather than a set of boundaries to navigate.
- The tool becomes genuinely useful. When AI produces output you can actually use — with minor edits rather than complete rewrites — the calculus changes. It stops being a novelty and starts being infrastructure.
The Bottom Line
The difference between businesses that find AI useful and businesses that find it disappointing often comes down to something this simple: how you write your instructions.
Negative framing — lists of prohibitions and constraints — produces cautious, generic output that requires heavy editing. Positive framing — clear descriptions of what you want — produces output that is closer to usable on the first attempt.
This is not a productivity hack. It is a fundamental shift in how you communicate with AI tools. And it applies to every AI interaction your business has: customer service chatbots, content generation, email drafting, proposal writing, and internal documentation.
The next time you sit down to write instructions for an AI tool, read through them before you press enter. Count the prohibitions. Then rewrite each one as a positive description. The difference will be obvious within a single generation.
Frequently Asked Questions
What is prompt engineering?
Prompt engineering is the practice of writing instructions for AI tools in a way that produces reliable, high-quality output. It is not programming — it is closer to briefing a skilled contractor. The clearer and more specific your brief, the better the result. Positive framing is one of the most effective prompt engineering techniques: describing what you want rather than listing what you do not want.
Why do my AI prompts give bad results?
The most common reason is negative framing — instructions built around prohibitions (“do not use jargon,” “avoid being too formal,” “do not make things up”). These constraints make the AI cautious and generic. Replacing each prohibition with a clear, positive description of what you want typically produces noticeably better output on the first attempt.
Does this work with ChatGPT, Claude, and other AI tools?
Yes. Positive framing improves output across all major language models because the underlying mechanism is the same: every word in your instructions shapes the probability of what the model generates next. Positive descriptions point the model toward what you want. Negative constraints keep the unwanted concepts active in the model’s context.
This principle — designing AI interactions around how people naturally think and communicate — is central to every AI audit at Perth AI Consulting. The audit identifies not just where AI fits in your business, but how to configure it so your team actually gets value from it.