What AI Governance Actually Looks Like
Most organizations approach AI governance like they approach security policy — they write a document, circulate it, get sign-off, and consider it handled.
Then someone deploys a model that has access to customer PII, and the governance document doesn’t say anything about that specific scenario, and everyone looks at each other.
Working governance is operational, not declarative. What I’ve seen work:
A decision tree, not a principles document. “We value responsible AI” means nothing. “If the model has access to PII, it requires a security review before deployment, the data must be in-region, and the model provider must have a BAA” — that’s governance.
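The decision-tree idea can be sketched in code. This is an illustrative assumption, not a real policy engine: the field names, the PII rule, and the specific gates (security review, in-region data, BAA) are taken from the example above, and a real tree would have many more branches.

```python
# Sketch: governance as an executable decision tree rather than a
# principles document. Fields and gate names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Deployment:
    accesses_pii: bool
    data_in_region: bool
    provider_has_baa: bool

def required_gates(d: Deployment) -> list[str]:
    """Return the concrete requirements that block this deployment."""
    gates = []
    if d.accesses_pii:
        gates.append("security review before deployment")
        if not d.data_in_region:
            gates.append("move data in-region")
        if not d.provider_has_baa:
            gates.append("sign BAA with model provider")
    return gates

# A model touching PII with out-of-region data and no BAA fails three gates:
print(required_gates(Deployment(accesses_pii=True,
                                data_in_region=False,
                                provider_has_baa=False)))
```

The point of writing it this way is that “we value responsible AI” can’t be evaluated; a function like this can, and it fails loudly for the exact scenario the principles document never mentioned.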
Tiered risk assessment tied to the data, not the model. The risk of an AI system isn’t determined by whether it uses GPT-4 or Claude. It’s determined by what data it can access, what actions it can take, and who sees the output. A customer-facing agent with access to billing data carries a different risk profile than an internal summarization tool reading public documentation.
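A minimal sketch of that tiering, assuming a simple additive score over the three axes named above (data sensitivity, action scope, audience). The tier names and weights are hypothetical; the one thing the sketch is careful about is that the model name never appears as an input.

```python
# Illustrative risk tiering keyed to data, actions, and audience --
# deliberately NOT to which model is used. Weights and tier names
# are assumptions for the sketch.
def risk_tier(data_sensitivity: str,
              can_take_actions: bool,
              customer_facing: bool) -> str:
    score = {"public": 0, "internal": 1, "pii": 2, "billing": 2}[data_sensitivity]
    score += 1 if can_take_actions else 0
    score += 1 if customer_facing else 0
    return ["low", "medium", "high", "critical"][min(score, 3)]

# Customer-facing agent with access to billing data:
print(risk_tier("billing", can_take_actions=True, customer_facing=True))   # critical
# Internal summarization tool reading public documentation:
print(risk_tier("public", can_take_actions=False, customer_facing=False))  # low
```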
A human in the review loop, not just in the usage loop. “Human in the loop” usually means a person reviews the AI’s output before it reaches the customer. That’s fine. But governance needs a human in the architecture loop — someone who reviews what the system can do, not just what it did do.
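One way to make the architecture loop concrete is a capability manifest that a named human must sign off on before deploy, separate from any output review. Everything here is a hypothetical sketch: the field names and the sign-off mechanism are assumptions, not a description of any real tooling.

```python
# Sketch: a capability manifest that declares what the system CAN do,
# gated on a named architecture reviewer. All names are hypothetical.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CapabilityManifest:
    data_sources: list[str]              # what the system can read
    actions: list[str]                   # what the system can take
    audiences: list[str]                 # who sees the output
    reviewed_by: Optional[str] = None    # architecture reviewer sign-off

def deployable(m: CapabilityManifest) -> bool:
    # Output review happens elsewhere; this gate checks that someone
    # reviewed the capabilities, not just the outputs.
    return m.reviewed_by is not None

m = CapabilityManifest(data_sources=["billing_db"],
                       actions=["issue_refund"],
                       audiences=["customers"])
print(deployable(m))   # False -- blocked until a human signs off
m.reviewed_by = "security-team"
print(deployable(m))   # True
```

The value is less in the boolean and more in forcing the capability list to exist as an artifact that someone is accountable for reviewing.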
The organizations that handle this well treat AI governance like infrastructure, not policy. It’s embedded in the deployment process, not bolted on after.