AI · 2026-04-21 · 5 min read

When AI gives rubbish answers, the fix is usually context — not a better model

You asked Claude to draft a customer reply and got something that sounded like a canned template. The model is not the problem. Here's what context engineering actually means, and how to apply it in a small business.

01

You have probably seen this pattern

You ask Claude or ChatGPT to draft a reply to a customer complaint. What you get back is polite, generic, and clearly not from you — it reads like a canned template anyone could have sent.

You try again with a different prompt. Same problem. You wonder if the model is the issue, and maybe next quarter's release will fix it.

It almost certainly will not. The model is not the problem. The model does not know anything about your business.

02

What context engineering means

Anthropic calls this context engineering, and the phrase has stuck for a reason. Prompt engineering is about the words you write. Context engineering is about what the model can actually see when it answers.

The things that make a reply sound like it came from your business — a customer's history with you, the policy that applies to this specific situation, the way you phrase things, the words you do not use — live in your systems, your head, or nowhere written down at all. The model has no way to see any of it unless you make it visible.

Without that context, the model fills the gap with whatever was most common in the data it learned from — the generic, anyone-could-have-sent-it template.

03

Why piling in more does not work

The natural reaction, once you understand this, is to paste everything in. The whole customer manual, the whole style guide, the last month of emails, the FAQ. Surely more context means better answers.

It does not. Anthropic describes a phenomenon they call context rot: as you pile on tokens, the model's ability to actually use any given detail goes down. The context window is not a bucket you fill. It is an attention budget you spend. Spend it on relevant things and the model performs. Spend it on everything and the model hunts through noise and misses the point.

04

The practical discipline

The guiding principle is the smallest amount of high-quality information that makes the right decision likely. A few things we have found work in real business systems:

Pull, do not push. Instead of pre-loading the entire customer history into every prompt, give the AI a tool to fetch what it needs when it needs it: say, a function called get_customer_context that returns the last five interactions and the account tier. That is cleaner than dumping everything.
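A minimal sketch of what that tool might look like. The function name comes from the example above; the data source and field names are illustrative, standing in for whatever your CRM or database actually exposes:

```python
def get_customer_context(customer_id: str, db: dict) -> dict:
    """Fetch just enough context for the model to ground one reply.

    Hypothetical data shape: db maps customer IDs to records with a
    'tier' string and a chronological 'interactions' list.
    """
    customer = db[customer_id]
    return {
        "account_tier": customer["tier"],
        # Only the last five interactions -- enough to sound informed
        # without spending the attention budget on the whole history.
        "recent_interactions": customer["interactions"][-5:],
    }
```

The point is the shape, not the plumbing: the model calls this when drafting a reply, rather than receiving the full history up front.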

Curate examples rather than writing rules. If you want the model to match your tone, three examples of emails you would actually send beat three paragraphs of instructions. Anthropic's phrasing is good here — examples are pictures worth a thousand words.
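In practice, curating examples can be as simple as keeping a short list of real replies and putting them ahead of the task in the prompt. A sketch, with placeholder emails standing in for ones your business has actually sent:

```python
# Placeholder replies -- swap in real ones your business has sent.
TONE_EXAMPLES = [
    "Thanks for flagging this, Sam. That's on us and the refund is on its way today.",
    "Good catch! I've updated your invoice and emailed the corrected copy.",
    "Sorry for the wait. I've chased the courier and will update you by Friday.",
]


def build_prompt(task: str, examples: list[str]) -> str:
    """Lead with curated examples so the model imitates tone, not rules."""
    shots = "\n\n".join(f"Example reply:\n{e}" for e in examples)
    return (
        "Match the tone of these replies we have actually sent.\n\n"
        f"{shots}\n\nTask: {task}"
    )
```

Three real examples like these usually do more than any amount of "be warm but professional" instruction.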

Clear tool output once it is used. If the AI has already looked up the refund policy and drafted a response, it does not need to keep staring at the policy text for the rest of the conversation. Removing stale tool output keeps the attention budget on the current step.
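One way to implement this is to blank out older tool results in the conversation history before the next model call. This sketch assumes a generic role/content message shape, not any particular provider's API:

```python
def clear_stale_tool_output(messages: list[dict], keep_last: int = 1) -> list[dict]:
    """Replace all but the most recent tool results with a short stub.

    The model keeps the conversational thread, but stops re-reading
    policy text it has already acted on.
    """
    tool_indices = [i for i, m in enumerate(messages) if m["role"] == "tool"]
    stale = set(tool_indices[:-keep_last]) if keep_last else set(tool_indices)
    return [
        {**m, "content": "[tool output cleared]"} if i in stale else m
        for i, m in enumerate(messages)
    ]
```

Keeping the most recent result (rather than clearing everything) is a judgment call; the current step often still needs it.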

Take notes outside the chat. Long tasks benefit from a running scratchpad — a project log, a decisions file — that survives between prompts. The better AI systems today maintain explicit state rather than relying on one long conversation.
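The scratchpad can be as plain as an append-only text file the system writes to and reads back at the start of each task. A minimal sketch; the file name and note format are illustrative:

```python
from pathlib import Path


def log_decision(log_path: Path, note: str) -> None:
    """Append one line per decision so the log survives between prompts."""
    with log_path.open("a", encoding="utf-8") as f:
        f.write(note.rstrip() + "\n")


def load_notes(log_path: Path) -> str:
    """Read the whole log back in when the next task starts."""
    return log_path.read_text(encoding="utf-8") if log_path.exists() else ""
```

Feeding `load_notes()` into the next prompt gives the model explicit state without dragging one conversation on forever.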

05

What the fix looks like

The change is usually smaller than people expect. Instead of handing the model a prompt and hoping, you give it a small set of tools for pulling the right context at the right moment — the specific customer's history, the specific policy that applies, a few examples of how your business has handled similar situations before. Nothing about the prompt itself changes materially. Same model, same instruction. The output stops sounding generic because the model can finally see the business.

This only works when the context is tied to your specific operation. Generic prompts produce generic output because generic is all the model has to go on. Making it yours is the whole job.

06

Where this matters for you

If you have tried an AI tool in your business and it disappointed you, the next model release will disappoint you for the same reason. The fix is almost always context — the tool was not built to see your business.

That is a solvable problem. It is also what we mean when we say we integrate AI around how a business actually works — context engineering, under a friendlier name. Your ops manual, your tone, your history: the model needs it, and it needs it arranged so it can find the right bit at the right time.

If that sounds like something you are fighting with, get in touch. First call is free, and we will tell you honestly whether your problem is a context problem (usually yes) or something else.

Further reading

  • Anthropic Engineering — Effective context engineering for AI agents. The source for the term context engineering, the context rot phenomenon, and techniques like just-in-time retrieval, compaction, and structured note-taking.
