AI Systems Field Notes

The Onboarding Problem Nobody Talks About


I recently built an onboarding agent for an organization. The technical stack was straightforward: a foundry-based agent with AI search over internal documentation.

The interesting problem wasn’t the agent. It was discovering that the documentation it needed to draw from was inconsistent, outdated, and contradictory across departments. The agent surfaced this immediately — and brutally — by confidently presenting conflicting information from different sources.

The real deliverable wasn’t the agent. It was forcing the organization to reconcile its own knowledge base. The AI was a mirror, not a solution.

Three observations from the build:

Search quality determines agent quality. The model is almost never the bottleneck. The retrieval layer is. If your search returns irrelevant chunks, the smartest model in the world will produce confident garbage.
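A toy pipeline makes this concrete. Nothing below is from the actual build; it is a minimal sketch with an illustrative keyword-overlap retriever standing in for real vector search. The point it demonstrates: the model only ever sees what retrieval hands it, so the retrieval layer caps answer quality.

```python
def score(query, chunk):
    """Naive keyword-overlap relevance score -- a stand-in for vector search."""
    q = set(query.lower().split())
    c = set(chunk.lower().split())
    return len(q & c) / len(q)

def retrieve(query, chunks, k=2):
    """Return the top-k chunks by score; these are ALL the model gets to see."""
    return sorted(chunks, key=lambda ch: score(query, ch), reverse=True)[:k]

# Illustrative internal-docs chunks, including two that contradict each other.
chunks = [
    "New hires get laptop access on day one via the IT portal.",
    "Laptop access is granted after the security briefing in week two.",
    "The cafeteria serves lunch from 11:30 to 14:00.",
]

context = retrieve("when do new hires get laptop access", chunks)
# Whatever lands in `context` -- relevant, stale, or mutually contradictory --
# is what the model will present with full confidence.
print(context)
```

Here the retriever does its job perfectly and still hands the model two contradictory chunks, because the contradiction lives in the corpus, not the ranking. Swapping in a better model changes nothing.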

Onboarding is a systems problem, not a content problem. Most organizations think onboarding fails because the documentation is incomplete. It usually fails because the documentation exists in seven places and nobody agreed on which version is current.

The agent’s first failure mode is the most valuable. When the agent contradicts itself, that’s not a bug — it’s a diagnostic. It tells you exactly where your organizational knowledge has drifted.
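That diagnostic can even be automated. The sketch below assumes a structure the post doesn't specify (each chunk tagged with its source department, topic, and claim); it simply groups claims by topic and flags any topic where sources disagree.

```python
from collections import defaultdict

# Hypothetical tagged chunks -- the field names are assumptions for illustration.
chunks = [
    {"source": "IT",       "topic": "laptop-access", "claim": "day one"},
    {"source": "Security", "topic": "laptop-access", "claim": "week two"},
    {"source": "HR",       "topic": "pto-accrual",   "claim": "1.5 days/month"},
]

def drift_report(chunks):
    """Map each topic to its set of distinct claims; more than one claim
    per topic marks a spot where organizational knowledge has drifted."""
    claims = defaultdict(set)
    for ch in chunks:
        claims[ch["topic"]].add(ch["claim"])
    return {topic: c for topic, c in claims.items() if len(c) > 1}

print(drift_report(chunks))
# e.g. {'laptop-access': {'day one', 'week two'}}
```

The output is a reconciliation worklist: not "the agent is wrong," but "these two departments never agreed."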