AI Systems

The Context Window Is the Product

7 min read


The most useful AI interaction I’ve ever had wasn’t a single prompt. It was a conversation that had been running for months.

I’d been building Prolific Personalities — the brand, the content strategy, the voice — inside a Claude project. Over hundreds of exchanges, the model learned what I cared about. Not because I wrote a perfect system prompt (though I tried), but because the accumulated context of our conversation carried information that no prompt could capture in advance. My preferences. My rejections. The specific way I’d rephrase something that came back too polished. The topics I kept returning to.

The output was good because the context was deep. And then I needed someone else to produce content, and I discovered that none of that depth was transferable.

The real limitation isn’t intelligence

When people talk about AI limitations, they usually mean capability. The model can’t reason well enough, can’t handle complex instructions, hallucinates facts. Those are real problems. But in my experience, the bigger practical limitation is simpler: the model doesn’t remember.

Every conversation starts from zero. Every new session, every new tool, every new team member using a different AI product — they all start with an empty context window. The intelligence is there. The memory isn’t.

This matters more than most people realize, because the value of an AI interaction compounds over time. The first conversation is generic. The fiftieth conversation, if the context carried forward, would be genuinely personalized. But we almost never get to the fiftieth conversation, because the window resets.

Context is the moat, and it’s leaking everywhere

I built a content production system for Prolific Personalities partly because of this problem. The system encodes editorial decisions — voice rules, structure requirements, archetype-specific language — into structured prompts. It exists because I couldn’t hand a new team member the context that lived in my Claude conversations.

But the system is a workaround, not a solution. It captures the decisions I was able to articulate. It doesn’t capture the ones I wasn’t — the intuitive sense of “this feels right” that developed over months of iteration.
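The articulable half of that system can be sketched as structured data that compiles into a prompt. Everything below is a hypothetical illustration of the pattern, not the actual production system: the class name, fields, and rules are all invented for the example.

```python
# Sketch: encode the editorial decisions you CAN articulate as data,
# then flatten them into a system-prompt preamble. Field names and
# rules are illustrative, not the real Prolific Personalities system.
from dataclasses import dataclass, field


@dataclass
class EditorialContext:
    voice_rules: list[str] = field(default_factory=list)
    structure_rules: list[str] = field(default_factory=list)
    archetype_language: dict[str, str] = field(default_factory=dict)

    def to_system_prompt(self) -> str:
        """Flatten the encoded decisions into prompt text."""
        lines = ["Follow these editorial rules:"]
        lines += [f"- Voice: {r}" for r in self.voice_rules]
        lines += [f"- Structure: {r}" for r in self.structure_rules]
        for archetype, phrasing in self.archetype_language.items():
            lines.append(f"- For the '{archetype}' archetype, prefer: {phrasing}")
        return "\n".join(lines)


ctx = EditorialContext(
    voice_rules=["plain verbs, no hype"],
    structure_rules=["open with a concrete scene"],
    archetype_language={"analyst": "precise, numbered claims"},
)
prompt = ctx.to_system_prompt()
```

Note what the sketch cannot hold: every entry is a rule someone managed to write down. The intuitive "this feels right" judgment has no field.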

This problem scales. Every team using AI tools right now is accumulating context in individual conversations that will evaporate. The marketing person who spent weeks refining a brand voice with ChatGPT. The engineer who built up a codebase understanding with Copilot. The product manager who explored strategy with Claude. All of that context is trapped — in one person’s account, in one tool’s memory, in one conversation thread that nobody else can access.

What a solution would look like

I don’t think the answer is better memory features inside existing tools, though those help. The deeper issue is architectural. Context needs to be:

Portable. I should be able to carry my preferences, my editorial standards, my domain knowledge between tools. Not as a static document — as something the next tool can actually use.

Shared. When my marketer uses an AI tool to create content, it should have access to the same brand context I’ve built up. Not a summary of it. The actual nuanced preferences that only emerge through repeated interaction.

Persistent without being permanent. My preferences evolve. The system should learn and update, not just store a snapshot. But it should also not forget things I haven’t explicitly changed.

Separable from any single model. Right now, my context is locked inside specific platforms. If I switch from Claude to GPT to Gemini for different tasks, each one starts cold. The context layer should sit above the model layer.

This is essentially a personal or organizational knowledge layer that AI tools read from and write to. Not a database of facts — a living representation of how you think, what you value, and what you’ve already decided.
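The four properties above can be made concrete with a toy interface: tools read context before a call and write learned preferences back after, and the whole thing exports to a portable snapshot that any other tool can import. This is a hedged sketch under invented names (storage is an in-memory dict, retrieval is a key lookup); a real context layer would need semantic retrieval and access control, neither of which is modeled here.

```python
# Sketch of a context layer that sits above the model layer.
# Portable (export/import), shared (any tool reads it), persistent
# but updatable (newer writes supersede older ones), and separable
# from any single model. All names are illustrative assumptions.
import json
import time


class ContextLayer:
    def __init__(self):
        self._entries: dict[str, dict] = {}

    def write(self, key: str, value: str, source_tool: str) -> None:
        """Record a preference or decision; newer writes supersede older ones."""
        self._entries[key] = {
            "value": value,
            "source_tool": source_tool,
            "updated_at": time.time(),
        }

    def read(self, keys: list[str]) -> dict[str, str]:
        """Return whatever context exists for the requested keys."""
        return {k: self._entries[k]["value"] for k in keys if k in self._entries}

    def export(self) -> str:
        """Portable snapshot another tool could import."""
        return json.dumps(self._entries)

    def import_snapshot(self, blob: str) -> None:
        self._entries.update(json.loads(blob))


# One tool writes, a different tool imports and reads:
# the context carries across, instead of resetting.
layer = ContextLayer()
layer.write("brand_voice", "direct, no filler", source_tool="claude_project")
other_tool = ContextLayer()
other_tool.import_snapshot(layer.export())
carried = other_tool.read(["brand_voice"])
```

The design choice worth noticing is that the model never appears in the interface: any tool that can read and write strings can participate, which is exactly what "the context layer should sit above the model layer" requires.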

Nobody’s building this well yet

There are attempts. Memory features in ChatGPT and Claude. Custom GPTs and Claude Projects with system prompts. RAG systems that retrieve relevant documents. MCP servers that connect tools to external context. These are all partial solutions that address pieces of the problem.

But none of them solve the core issue: context that compounds across time, across tools, across people, and across sessions — while remaining nuanced enough to capture the things you can’t easily put into words.

I suspect this is where the real value in AI tooling will end up. Not in the models themselves — those are commoditizing. Not in the interfaces — those are converging. But in the context layer. Whoever figures out how to make AI interactions compound rather than reset will have built something genuinely durable.

I don’t have the answer. But after watching months of accumulated context vanish the moment I needed to hand work to someone else, I’m convinced this is the right question.