When AI Talks to AI, Where Do We Fit?
Reading note
Essays for people who want the pattern behind the pattern.
I have a tendency to automate everything. It’s a reflex at this point. If I do something twice, I start thinking about how to systematize it. If I do it three times, I’ve probably already built a tool.
This instinct has served me well in architecture, in building tools, in running projects. But lately it’s been bumping into a question I can’t automate my way out of.
If AI is writing our blog posts, and AI is scraping and reading those posts, and AI is summarizing them for other people who never visit the original site — at what point are we just building systems where machines perform for other machines?
The loop
Here’s the concrete version. I could, right now, use an AI model to:
- Generate a blog post based on my expertise
- Optimize it for SEO (which increasingly means optimizing for what AI search engines surface)
- Create social media posts to promote it
- Schedule those posts automatically
Meanwhile, on the consumption side:
- AI search engines scrape the post and summarize it
- People ask their AI assistants about the topic and get a synthesis of my post and ten others
- AI-powered feed algorithms decide who sees my social promotion
- The “readers” who engage might be using AI to draft their comments
At every step, the human involvement gets thinner. I write less. The reader reads less. The space between us fills with machines talking to machines about what we think.
Where authenticity lives
I don’t think the answer is to reject automation wholesale. I use AI tools constantly and they genuinely make me more productive. The content production system I built for Prolific Personalities is explicitly designed to automate parts of the creative process.
But there’s a difference between automating the mechanical parts and automating the thinking.
When I use AI to help structure a blog post, I’m still deciding what to say. The judgment is mine. The experience it draws from is mine. The AI handles the scaffolding; I handle the substance.
When I let AI write the post entirely — even from my notes, even trained on my voice — something changes. The output might be technically accurate. It might even sound like me. But the act of writing is also the act of thinking. When I skip the writing, I skip the thinking. And the reader can usually tell, even if they can’t articulate why.
The test I apply
I’ve started asking myself a simple question before automating any piece of my content workflow:
Would I be comfortable if the reader knew exactly how this was made?
If the answer is yes — “I used AI to help me outline this, then I wrote and edited it myself” — that feels honest. The tool served the human.
If the answer is “I’d rather the reader not know” — that’s the signal. Not that automation is wrong, but that I’ve automated past the point where my judgment is actually in the output.
The bigger concern
The individual authenticity question is manageable. The systemic one is harder.
If AI-generated content becomes the majority of what’s published, and AI-powered tools become the primary way people consume content, we’ve built an information ecosystem where:
- The supply is machine-generated
- The demand is machine-intermediated
- The human is somewhere in the loop, theoretically, but increasingly at the margins
This isn’t a dystopian prediction. It’s a description of what’s already happening in SEO content, in social media, in news aggregation. The question is whether it spreads to the kinds of content that actually matter — the thinking, the analysis, the hard-won insight that comes from lived experience.
I suspect it will, partially. And I suspect the response will be exactly what we’re seeing in other trust-depleted environments: a flight toward signal. Toward content that is clearly, verifiably, unmistakably human. Not because human is always better, but because authenticity becomes scarce when everything else is synthetic.
What I’m doing about it
For this site, the rule is simple: I write. AI helps me think, but the writing is mine. Every post on this site is something I actually sat with, drafted, edited, and decided to publish. Not because AI couldn’t produce something comparable, but because the act of writing it is part of how I understand what I think.
That might sound idealistic. It probably is. But in a world where AI is increasingly talking to AI, the most distinctive thing a person can do is think in public — slowly, specifically, and in their own voice.
That’s what this site is for.