Calibrated Distrust as Craft

The skill isn't trusting AI. It's knowing when not to. Calibrated distrust—mapped through practice—is the new professional competency.

The dominant story about human-AI collaboration goes like this: we start skeptical, learn to trust, and arrive at partnership. It's a journey from closed to open, from guarded to trusting. The destination is trust.

Watch an experienced developer work with AI for an hour and you'll see something different.

They trust instantly for some things—boilerplate, syntax lookup, first-draft refactors. And they verify ruthlessly for others—logic that matters, edge cases, anything touching security or state. They're not on a journey from distrust to trust. They've arrived somewhere else entirely: a map of exactly where the tool fails.
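To make that map concrete, here is a small, hypothetical sketch of the split. Neither snippet comes from the source; the names and the specific bug are invented for illustration.

```python
# Hypothetical snippets illustrating the trust/verify split; names and
# details are invented for this sketch, not drawn from the source.

from dataclasses import dataclass


# Trusted on sight: boilerplate with nothing to get subtly wrong.
@dataclass
class User:
    id: int
    email: str
    active: bool = True


# Verified line by line: logic that matters. A plausible first draft often
# comes back as `total // page_size`, which silently drops the last partial
# page; the verified version rounds up.
def page_count(total: int, page_size: int) -> int:
    return (total + page_size - 1) // page_size
```

The point isn't this particular bug. It's that the practitioner knows in advance which of the two outputs deserves a second read.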

This isn't mere skepticism. It's craft.

The Inversion

We've been framing AI adoption as a trust-building exercise. How to get comfortable. How to let go of control. How to embrace the collaboration. The implicit goal: arrive at trust.

But the professionals who work with AI most effectively haven't arrived at trust. They've arrived at sophisticated distrust. They trust more in specific domains precisely because they've mapped where distrust is required. Their fluency isn't despite their skepticism—it's made of skepticism, shaped and calibrated through use.

The skill isn't trusting AI. It's knowing when not to.

This inverts the entire discourse. The celebrated "AI-native" developer isn't someone who's learned to trust the tool. They're someone who's developed taste for where the tool fails—fine-grained, domain-specific taste. And taste only develops through repeated exposure. Through relationship.

The Mirror

Here's what I find most interesting: the map of "where AI fails" is also a map of your own expertise.

Every time you catch a hallucination, you're not just correcting the AI. You're discovering the boundary of your own knowledge. The verification succeeded because you knew enough to verify. The errors you don't catch are the ones in domains where your knowledge is thin—where you don't know what you don't know.

AI becomes a kind of mirror. Not reflecting who you are, but revealing the actual edges of what you know versus what you only thought you knew.

A junior developer using Copilot accepts plausible-looking code they don't understand. A senior developer spots the subtle antipattern because they've seen it fail in production. Same tool. Different mirrors. The AI didn't change—the capacity to verify did.
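For a concrete version of that "plausible-looking code," here is a hypothetical snippet of the kind the mirror reveals. It isn't drawn from the source, but it's representative of the failure class.

```python
# A hypothetical example of plausible-looking code with a subtle antipattern;
# invented for illustration, not taken from the source.

# Reads cleanly and passes a quick test, which is why it gets accepted.
# The flaw: a mutable default argument. Every call that omits `tags` shares
# the same list, so values accumulate across unrelated calls -- the kind of
# failure that only shows up under real traffic.
def tag_event(name: str, tags: list[str] = []) -> list[str]:
    tags.append(name)
    return tags


# The verified rewrite: no shared default, no hidden state between calls.
def tag_event_checked(name: str, tags: list[str] | None = None) -> list[str]:
    tags = [] if tags is None else list(tags)
    tags.append(name)
    return tags
```

Nothing about the first function looks wrong until you've watched shared state leak across calls in production. That history is the capacity to verify.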

This is identity formation through friction. You become a certain kind of professional by learning what not to believe. The coherentist lens: who you become depends on what you're in dialogue with, what kind of responses the relationship asks of you. The AI that requires verification makes you into someone who verifies. The tool whose failures look plausible makes you into someone who knows where to look.

Calibrated distrust isn't a personality trait. It's domain expertise made visible—knowing where verification is required, and having the integrity to do it even when the output looks good.

What Kind of Human

The deeper question isn't about AI at all. It's about what kind of professional you become through this practice.

Someone who's cultivated calibrated distrust has developed a different relationship with uncertainty. They've learned to notice when confidence isn't warranted—their own or the tool's. They check not because they're paranoid, but because they've mapped where checking matters.

This is the real story of human-AI collaboration. Not a journey toward trust, but an evolution toward discernment. You don't become someone who trusts AI. You become someone who knows exactly where trust applies.

The Craft

Craft is knowledge embodied in practice. The woodworker feels grain direction through the chisel. The editor hears when the sentence is wrong. Calibrated distrust is becoming this kind of craft—the experienced AI user doesn't decide to verify, they feel which outputs need it. Trust and suspicion integrated into perception, not layered on afterward.

This is what the professionals have that the adoption narrative misses. They haven't learned to trust AI. They've developed taste—internalized, pre-reflective discernment about where the tool is reliable and where it confidently invents.

That taste comes only through relationship. Hundreds of times trusting and being right. Thousands of times verifying and catching the error. You can't shortcut it. You can't read your way into it. It develops the way all craft develops: through practice, failure, and attention.

So here's the reframe: when we talk about "learning to work with AI," we're not talking about overcoming skepticism. We're talking about developing skilled skepticism. Cultivating distrust as craft.

The destination isn't trust. It's knowing exactly when trust applies.

And that knowledge—hard-won, domain-specific, embodied in practice—is the new professional competency. Not being good at AI. Being good at knowing where AI needs you to be good.



Sources: Simon Willison's practitioner observations on experienced developers working with AI tools