AI Won't Replace Systems Thinkers. It Needs Them.
There’s a recurring headline: AI will replace X. Developers. Architects. Analysts. Designers. The specific X changes depending on the week and the publication.
I want to make a different argument. AI won’t replace systems thinkers. It will create more demand for them than ever. And the reason is structural, not sentimental.
The multiplication problem
AI tools are making it dramatically easier to build things. Claude Code, Copilot, Cursor — they compress the time between “I have an idea” and “I have working code” from weeks to hours.
This means more software will exist. More agents. More automations. More tools. More integrations. The volume of AI-powered components entering enterprise environments is about to increase by an order of magnitude.
Every one of those components needs to integrate with existing systems. Every one of them has interfaces, dependencies, failure modes, and security implications. Every one of them will interact with other components in ways that nobody fully predicted.
This is a systems engineering problem at scale. And it’s getting bigger, not smaller.
What AI is bad at
AI is excellent at component-level tasks. Generate this code. Summarize this document. Answer this question. Draft this email.
AI is poor at system-level reasoning. It doesn’t naturally think about:
- How does this component affect other components?
- What are the failure modes at the boundaries?
- What happens when this system degrades gradually instead of failing outright?
- Who operates this after deployment?
- How does this interact with the organizational incentives around it?
- What are the second-order consequences of this architectural decision?
These are integration questions, not generation questions. They require understanding the full context — the technical environment, the organizational constraints, the human behavior patterns, the compliance requirements — and that context is rarely expressible in a prompt.
The new role
What I see emerging is a role that’s essentially systems engineering for the AI age. Not building the models. Not training them. Not even building the applications that use them.
Instead: designing the systems that make AI components work together, work reliably, work safely, and improve over time. Defining the interfaces. Specifying the failure modes. Building the feedback loops. Ensuring the governance. Managing the integration complexity.
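To make that concrete, here is a minimal sketch of what "defining the interface, specifying the failure modes, building the feedback loops" can look like at the code level. Everything in it — the `GuardedComponent` class, the stand-in callables — is a hypothetical illustration, not a real library or a prescribed design; the point is that the contract around the AI component is explicit, not implied.

```python
import logging
import time
from dataclasses import dataclass
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-wrapper")


@dataclass
class Result:
    text: str
    degraded: bool  # True if the fallback path produced this answer


class GuardedComponent:
    """Wraps any text-in/text-out AI call behind an explicit contract:
    a latency budget, a specified fallback, and a log that feeds review."""

    def __init__(self, call: Callable[[str], str],
                 fallback: Callable[[str], str],
                 budget_s: float = 5.0):
        self.call = call
        self.fallback = fallback
        self.budget_s = budget_s

    def run(self, prompt: str) -> Result:
        start = time.monotonic()
        try:
            answer = self.call(prompt)
            elapsed = time.monotonic() - start
            # Soft budget check after the fact; a real system would
            # cancel the in-flight call rather than wait it out.
            if elapsed > self.budget_s:
                raise TimeoutError(f"{elapsed:.1f}s over {self.budget_s}s budget")
            log.info("ok in %.2fs", elapsed)
            return Result(answer, degraded=False)
        except Exception as exc:
            # Specified failure mode: degrade to the fallback, never crash.
            # The log entry is the feedback loop — someone reviews it.
            log.warning("degraded: %s", exc)
            return Result(self.fallback(prompt), degraded=True)


# Stand-in callables so the sketch runs without any real model:
def flaky_model(prompt: str) -> str:
    raise RuntimeError("model unavailable")

def canned_fallback(prompt: str) -> str:
    return "Service degraded; please retry."

component = GuardedComponent(flaky_model, canned_fallback)
result = component.run("summarize this document")
```

The interesting part isn't the wrapper itself; it's that the failure mode and the fallback are written down as part of the interface, so the component's behavior at the boundary is a design decision rather than an accident.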
This is the work I do now, applied more broadly. And the demand for it is growing because the complexity is growing.
Why I’m optimistic
I’m a systems engineer who loves systems. I’ve spent my career understanding how things interact, where they break, and how to design for resilience.
AI hasn’t made that less valuable. It’s made it more valuable. Because the systems are getting more complex, the components are multiplying faster, and the consequences of getting the integration wrong are getting more serious.
The people who understand how to think about systems — not just how to build components — are exactly the people this moment needs. Not because systems thinking is trendy. Because without it, the AI components everyone is building won’t actually work at scale.
The models are getting smarter. The systems around them need to get smarter too. That’s the job.