The Problem With Having Good Taste and a New Hire
After months of building Prolific Personalities, I had a problem most founders would consider a good one: I needed to hire help. Content production was bottlenecking on me — not because I couldn’t produce it, but because I was the only one who knew what “right” looked like.
So I hired a marketer. And immediately ran into a different problem.
Context doesn’t transfer
Over the course of building PP, I’d spent months working with Claude. Not just generating content — shaping it. Refining voice. Rejecting things that sounded too polished. Pushing toward a tone that felt specific rather than generic. After hundreds of iterations, Claude understood my preferences deeply. The output was consistent because the context was rich.
None of that was portable.
When my marketer started, I did what most people do: I wrote documents. Brand guidelines. Voice notes. Examples of what good looked like. And honestly, it was fine — for the first few days. But documents are passive. People skim them, internalize some of them, forget the rest, and start drifting. That’s not a criticism of the marketer. That’s just how humans interact with reference material. I’ve done it too. Everyone has.
The result was predictable: inconsistent output, more rewrites than I wanted, and a growing feeling that I was spending as much time reviewing and correcting as I would have spent just doing it myself.
Documents describe. Systems decide.
The fix wasn’t better documentation. It was realizing that what I actually needed was a system that made decisions on behalf of the person using it — or at least constrained the decision space enough that the output stayed within bounds.
So I built a content production system. Not a CMS. Not a template library. A prompt assembly tool that encodes the editorial judgments I’d been making intuitively.
Here’s what that means in practice: instead of telling someone “our blog voice is warm but direct, research-backed but not academic, and should avoid productivity-industry clichés,” the system builds prompts that enforce those constraints structurally. You pick a content type. You pick an archetype. You fill in the topic-specific details. The system assembles a brief that already has the voice rules, structure requirements, and guardrails baked in.
The marketer didn’t need to remember that we don’t use adversarial framing against the productivity industry. The system just didn’t offer that option.
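To make the idea concrete, here’s a minimal sketch of what “prompt assembly with constraints baked in” can look like. This is illustrative only — the content types, archetypes, voice rules, and function names are hypothetical stand-ins, not the actual system:

```python
# Hypothetical sketch of a prompt-assembly tool. All names and rules here
# are illustrative assumptions, not the real system's data.

# Shared voice rules every brief carries, no matter who assembles it.
VOICE_RULES = [
    "Warm but direct; research-backed but not academic.",
    "Avoid productivity-industry cliches and adversarial framing.",
]

# The decision space is a fixed menu: options outside it simply
# aren't offered, so nobody has to remember the rules.
CONTENT_TYPES = {
    "blog": {"words": (900, 1400), "structure": "hook, argument, takeaway"},
    "newsletter": {"words": (300, 600), "structure": "one idea, one CTA"},
}
ARCHETYPES = {"builder", "researcher", "manager"}

def assemble_brief(content_type: str, archetype: str, topic: str) -> str:
    """Build a prompt brief with voice rules and guardrails baked in."""
    if content_type not in CONTENT_TYPES:
        raise ValueError(f"unknown content type: {content_type}")
    if archetype not in ARCHETYPES:
        raise ValueError(f"unknown archetype: {archetype}")
    spec = CONTENT_TYPES[content_type]
    lo, hi = spec["words"]
    rules = "\n".join(f"- {r}" for r in VOICE_RULES)
    return (
        f"Write a {content_type} piece for the '{archetype}' archetype.\n"
        f"Topic: {topic}\n"
        f"Structure: {spec['structure']}\n"
        f"Length: {lo}-{hi} words.\n"
        f"Voice rules:\n{rules}"
    )
```

The point isn’t the code — it’s that the constraints live in the system, not in anyone’s memory. An invalid combination raises an error instead of producing off-brand output.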
What actually changed
Training time dropped. Not because the marketer didn’t need to understand the brand — he did — but because the system carried the details that are easy to forget and hard to enforce through conversation alone. Things like: our CTA style, how we handle archetype-specific language, when to cite research and how to cite it, word count boundaries.
Review cycles shortened. Not to zero — I still review everything — but the first drafts started arriving much closer to publishable. The rewrites shifted from “this doesn’t sound like us” to “let’s tighten this section,” which is a fundamentally different kind of edit.
And something I didn’t expect: it made my own thinking clearer. Building the system forced me to articulate preferences I’d never made explicit. Things I’d been vibing on — “this feels right, that doesn’t” — had to become rules. That process of codification was valuable in itself.
The bigger question
This project started as a practical solution to a hiring problem. But it got me thinking about something larger: what happens when every team is using AI tools, but the context that makes those tools useful is trapped in individual conversations?
Right now, most AI-assisted workflows are personal. One person, one chat, one accumulated context. That works fine for individuals. It breaks the moment you need consistency across a team. And it completely falls apart when you need consistency across teams using different AI tools.
The content system I built is small — it serves one brand across a handful of formats. But the pattern feels general. Codify the decisions. Encode the constraints. Make the system carry the taste so the people can focus on the thinking.
I don’t know exactly where this leads yet. But I suspect the next evolution of AI tooling isn’t better models — it’s better systems for making the models behave consistently at the edges, where the actual work happens and where things quietly drift.
That’s what I’m building toward. I’ll write more as I figure it out.