The Claude Code Cult

There’s a new genre of social media post I keep seeing. Someone — usually not a professional developer — shares a screenshot of something they built with Claude Code. A dashboard. A tool. A full website. The caption is always some version of: “I built this in 45 minutes. I don’t know how to code.”

The comments are a mix of euphoria and existential dread. Non-technical people are thrilled. Engineers are split between fascination and a creeping unease they don’t quite want to name.

I’ve been watching this closely because I’m in both camps. I’m a cloud and AI architect by profession. I also rebuilt my entire personal site using Claude Code — from an overengineered 84-dependency React app to a clean static Astro site with 7 dependencies. I’ve used it to ship blog posts, wire up email signups, generate OG images, and configure deployments. It’s genuinely good. Unreasonably good, sometimes.

And that’s exactly why the implications deserve more thought than “this is amazing” or “we’re all doomed.”

What’s actually happening

Claude Code — and tools like it — have eliminated a specific barrier: the translation layer between “I know what I want” and “I can make the computer do it.” That layer used to require years of learning. Syntax, frameworks, debugging, deployment, dependency management. You needed to speak the machine’s language before the machine would listen.

Now you can describe what you want in plain English and get working code back. Not pseudocode. Not suggestions. Working, deployable code.

This is a real shift. It’s not hype. I’ve watched it happen in my own workflow. Things that would have taken me hours of looking up documentation and debugging edge cases now take minutes. And I’m someone who already knew how to code. For someone who doesn’t, the gap that just closed is enormous.

The game loop

The “addiction” people describe is real, and it follows a specific pattern:

You describe something. The tool builds it. It works. You feel a rush — the gap between idea and reality just collapsed from days to minutes. So you describe something more ambitious. That works too. You push further. Each cycle is faster and more rewarding than the last.

It’s a classic game loop: action → reward → escalation → action. The dopamine hit of seeing your idea materialize instantly is genuinely compelling. I’ve felt it. I’ve also noticed myself staying up later than I should because “just one more feature” felt effortless.

This is worth naming because it means the adoption isn’t just rational. It’s emotional. People aren’t just using these tools because they’re productive. They’re using them because they feel powerful. And when a tool makes you feel powerful, you tend to overlook what it doesn’t do well.

What it actually produces

Here’s where my architect brain starts asking questions that the euphoric social media posts skip over.

It produces code that works. It doesn’t produce code that lasts. There’s a difference between software that runs and software that can be maintained, extended, debugged by someone else, and operated in production for months or years. Claude Code is excellent at the first thing. It’s inconsistent at the second.

My own site rebuild is an honest example. Claude Code did an impressive job generating the Astro structure, migrating content, and implementing features. But it also made choices I had to catch: inconsistent naming patterns, CSS that solved the immediate problem but would create specificity nightmares later, component structures that worked fine at the current scale but wouldn’t survive the next round of changes.

I caught those because I have the context to know what “good” looks like at scale. Someone without that context wouldn’t catch them — and wouldn’t notice the consequences until much later.

It produces software. It doesn’t produce systems. Software is code that runs. A system is code plus deployment plus monitoring plus security plus access control plus compliance plus maintenance plus handoff documentation plus someone who understands why it was built that way. Claude Code gives you the first layer. The other eight are still your problem.

It democratizes creation. It doesn’t democratize judgment. Anyone can now build a tool. Not everyone can tell whether that tool should exist, whether it handles edge cases, whether it’s secure, whether it scales, or whether it solves the right problem. Those are judgment calls that come from experience — the same experience that AI can’t generate.

What this means for engineers

The anxiety in engineering communities is understandable but mostly misplaced. The part of software engineering that Claude Code replaces is the mechanical translation — turning a known requirement into working code. That was never the hard part. It was the tedious part.

The hard parts remain:

Understanding the problem. Most software projects fail not because the code was wrong but because the team built the wrong thing. Requirements gathering, stakeholder alignment, scope definition — these are human judgment problems that AI tools don’t touch.

Architectural decisions. Which database? How do you handle state? What’s the deployment model? How does this integrate with existing systems? What are the security implications? These require understanding the full context of an organization’s constraints, and that context isn’t in any prompt.

Operating at scale. Making software work for one user is fundamentally different from making it work for ten thousand. Concurrency, caching, failure modes, monitoring, disaster recovery — the production concerns that separate a prototype from a product.
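The concurrency point is easy to make concrete. Here’s a minimal sketch (illustrative only — the counter and function names are invented for the example) of the kind of read-modify-write logic that looks correct when one user exercises it and silently loses updates the moment requests overlap, alongside the lock that fixes it:

```python
import threading
import time

counter = 0       # updated without synchronization
safe_counter = 0  # updated under a lock
lock = threading.Lock()

def unsafe_increment():
    # Read-modify-write with no synchronization: fine for one user,
    # broken as soon as two threads interleave between read and write.
    global counter
    current = counter
    time.sleep(0.001)  # stand-in for real work; widens the race window
    counter = current + 1

def safe_increment():
    # Same logic, but the lock makes the read-modify-write atomic.
    global safe_counter
    with lock:
        current = safe_counter
        time.sleep(0.001)
        safe_counter = current + 1

def run(worker, n=20):
    threads = [threading.Thread(target=worker) for _ in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

run(unsafe_increment)
run(safe_increment)
print(counter)       # almost always less than 20: updates were lost
print(safe_counter)  # exactly 20
```

The unsafe version passes every single-user demo. The failure only shows up under load, which is exactly why a prototype that “works” tells you little about how it behaves as a product.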

Debugging the unexpected. Claude Code is great at generating code for known patterns. It’s much less reliable when something goes wrong in a novel way. The most valuable debugging skill isn’t knowing the syntax — it’s developing a mental model of how the system behaves and using that model to narrow down where things broke.

If anything, AI coding tools make experienced engineers more valuable, not less. The mechanical work compresses, which means more of the job becomes the parts that require judgment. And judgment is exactly what can’t be automated.

What this means for everyone else

The more interesting question isn’t about engineers. It’s about the millions of people who can now build software but couldn’t before.

More software will exist. A lot more. Internal tools that never justified hiring a developer. Personal projects that stayed as sketches. Automations that seemed too complex to build. The barrier to creation just dropped dramatically, and when barriers drop, volume explodes.

Most of it will be bad. Not malicious — just fragile, insecure, unscalable, and unmaintained. A tool built in 45 minutes and never updated is a tool that breaks in 45 days. The people building these tools don’t know what they don’t know about security, data handling, error recovery, or operational hygiene. They’re not incompetent — they’re just operating outside their domain of expertise.

Some of it will be transformative. Because the people who deeply understand a problem domain but couldn’t code before can now build solutions that no professional developer would have thought to build. A nurse who builds an internal workflow tool. A teacher who creates a custom learning system. A small business owner who automates their specific, weird, unique process. These are genuinely valuable applications that only emerge when domain experts can build without a technical intermediary.

The “vibe coding” problem. There’s a term floating around — vibe coding — for the practice of building software by feel, without understanding the underlying mechanics. It’s fun. It’s fast. It produces results that look right. And it creates a growing inventory of software that nobody can maintain, debug, or secure, because nobody understands how it works — including the person who built it.

This isn’t hypothetical. I’m already seeing it. People share tools they’ve built, and when asked how a specific part works, they say “I’m not sure, Claude handled that.” That’s fine for a personal project. It’s a liability for anything that touches real users, real data, or real money.

The risk nobody’s talking about

The biggest risk isn’t job displacement. It’s a flood of unvetted software entering environments where it can do real damage.

When building is easy, the constraint isn’t creation — it’s governance. Who reviews the AI-generated code before it touches production data? Who ensures the tool that an enthusiastic non-developer built in an afternoon doesn’t have a SQL injection vulnerability? Who maintains it when the person who prompted it into existence moves to a different role?
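The SQL injection worry is concrete and worth seeing on the page. A minimal sketch using Python’s built-in sqlite3 (the table, columns, and function names are invented for the example): the string-interpolated query is the kind of code that works in every demo and fails the first time input contains a quote, while the parameterized version is a one-line fix.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_unsafe(name):
    # String interpolation: the classic injection hole. A "name" like
    # ' OR '1'='1 rewrites the WHERE clause and returns every row.
    query = f"SELECT email FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver handles escaping, so the
    # malicious input is treated as a literal (non-matching) name.
    return conn.execute(
        "SELECT email FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # [('alice@example.com',)] — data leaked
print(find_user_safe(payload))    # [] — payload matched nothing
```

Both functions behave identically on well-behaved input, which is why the vulnerable version survives casual testing. Catching the difference requires knowing to look for it — the judgment layer, not the generation layer.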

Enterprise environments will eventually develop governance frameworks for this. But right now, we’re in the gap — the tools are ahead of the policies. And in that gap, a lot of well-intentioned, poorly secured software is quietly entering production.

Where I land

I use Claude Code daily. I’ll keep using it. It’s a genuinely powerful tool that makes me faster and more productive.

But I use it as someone who can evaluate its output. I know when the generated code is good and when it’s cutting corners. I know when the architecture is appropriate and when it’s overengineered or underengineered. I know when to accept the suggestion and when to rewrite it.

That evaluation layer — the ability to judge whether the output is actually good — is the thing that separates productive use from dangerous use. And it’s the thing that can’t be acquired by using the tool itself. It comes from years of building, maintaining, and debugging software the hard way.

The Claude Code cult is real. The enthusiasm is justified. The tool is impressive. And the implications — for software quality, for security, for the engineering profession, and for the growing pile of unmaintained AI-generated code — deserve a more honest conversation than “look what I built in 45 minutes.”

The 45 minutes is the easy part. The next 45 months are where the real questions live.