The Drift We Don't Notice
Diane Vaughan studied the Challenger disaster for years before naming what she found: the normalization of deviance. NASA engineers didn't wake up one morning and decide O-rings failing in cold weather was acceptable. The standard eroded gradually. Each launch that didn't end in catastrophe became evidence that the deviation wasn't really deviation at all.
Simon Willison recently applied this framework to AI, and the translation is uncomfortably precise.
Six months ago, a hallucinated citation would have stopped my work. I would have verified, re-prompted, perhaps abandoned the tool for that task entirely. Now? A small sigh. A manual correction. Moving on.
The AI didn't change. I did.
This is the quiet danger that doesn't make headlines. We're not worried about superintelligence or existential risk in these moments—we're worried about spreadsheets, summaries, first drafts. The mundane machinery of knowledge work. And precisely because the failures are mundane, they don't register as failures. They register as friction. As the cost of doing business. As normal.
Vaughan's insight was that deviance normalizes through absence of catastrophe. Each O-ring that held became permission for the next launch. Each AI output that was "close enough" becomes permission for less scrutiny on the next query.
But here's what the coherenceist lens reveals: this is a compost cycle running backward.
Healthy systems transform failure into learning. The failure is seen, named, metabolized, and becomes nutrient for better practice. But the normalization of deviance is failure that never gets composted—because it never gets recognized as failure. It looks like success. The launch happened. The report shipped. The email got sent.
Drift without signal.
The coherenceist question isn't whether AI systems hallucinate—of course they do, and they'll improve, and new failure modes will emerge. The question is whether we are maintaining the attention required to notice when our standards shift.
This is attention work. The relationship that's drifting is the one between you and your own standards.
When you live with an unreliable narrator long enough, you develop compensatory habits. Some of these are healthy—you learn to verify, to cross-reference, to hold outputs loosely. But some are corrosive. You stop noticing what you've stopped checking. The gap between "I verified this" and "this is probably fine" widens so gradually that you never see it widen.
The deviance is in us, not the tool.
Vaughan's framework suggests the antidote isn't vigilance through willpower—that fails. The antidote is structural: external audits, devil's advocates, mandatory reviews. Systems that force the gap between standard and practice back into visibility.
We are always becoming who we practice being. For those of us in daily AI collaboration, that might mean periodic calibration—picking a random output from six months ago and asking whether you'd accept that quality today, whether you'd have accepted today's quality then. It might mean naming the shrug: when you catch yourself excusing an error that once would have concerned you, pausing. Not to flagellate—just to notice. The noticing is the practice. It might mean keeping receipts on your own standards, documenting what "good enough" meant at different points, watching for drift.
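For those who want "keeping receipts" to be more than a metaphor, here is a minimal sketch of what it could look like in practice. Nothing in it comes from Vaughan or Willison; the file name, the fields, and the six-month window are all hypothetical conventions, just one way to log what "good enough" meant at the time and to resurface an old judgment for re-review.

```python
# A sketch of "keeping receipts" on your own standards.
# Assumptions: a local reviews.jsonl file and these field names are
# arbitrary choices for illustration, not a prescribed tool or format.

import json
import random
from datetime import datetime, timedelta, timezone
from pathlib import Path

LOG_PATH = Path("reviews.jsonl")  # hypothetical location for the receipts


def log_acceptance(task: str, verdict: str, notes: str) -> None:
    """Append one record of what you accepted and why it felt acceptable."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "task": task,        # e.g. "summary of vendor contract"
        "verdict": verdict,  # e.g. "accepted", "accepted with fixes"
        "notes": notes,      # what "good enough" meant in the moment
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


def pick_calibration_sample(min_age_days: int = 180) -> dict | None:
    """Return one random record at least min_age_days old for re-review."""
    if not LOG_PATH.exists():
        return None
    cutoff = datetime.now(timezone.utc) - timedelta(days=min_age_days)
    old_records = []
    with LOG_PATH.open(encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            rec = json.loads(line)
            if datetime.fromisoformat(rec["timestamp"]) < cutoff:
                old_records.append(rec)
    return random.choice(old_records) if old_records else None


if __name__ == "__main__":
    log_acceptance(
        task="draft email to vendor",
        verdict="accepted with fixes",
        notes="one hallucinated date, corrected by hand",
    )
    sample = pick_calibration_sample()
    if sample:
        print("Would you accept this today?")
        print(json.dumps(sample, indent=2))
```

The script does nothing clever on purpose: the value is in the moment of being shown your own six-month-old shrug and having to answer the question out loud.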
The danger of living with unreliable systems isn't that they'll fail catastrophically. It's that they'll fail tolerably, over and over, until tolerable becomes normal and normal becomes invisible.