Trust the Diagnosis, Not the Cure
A Go cryptography developer had a problem: subtle bugs in a novel implementation that standard linters couldn’t catch. He turned to Claude Code, but not in the way you’d expect.
He didn’t trust the AI’s proposed fixes. Cryptography is unforgiving: one wrong assumption and the whole system collapses. But he did trust its ability to surface problems. The AI could point to suspicious patterns, highlight edge cases, and trace logical inconsistencies. He’d take those diagnostics and solve the problems himself.
This inverts the usual AI collaboration story. We’re told to let AI “do the work”: draft the email, write the code, generate the solution. But what if the more durable partnership is one where we explicitly don’t trust the output?
The coherenceism principle of Mature Uncertainty applies here: confidence in what we know, humility about what we don’t. The cryptographer knows AI can pattern-match across vast codebases and spot anomalies humans miss. That’s its diagnostic strength. But he also knows it lacks the domain expertise to evaluate cryptographic soundness. That’s his responsibility.
This asymmetric trust is stronger than blind delegation. When we outsource judgment entirely, we lose the ability to evaluate whether the solution is good. We become dependent without understanding. But when we use AI for diagnosis while retaining responsibility for solutions, we keep our expertise sharp. The AI extends our perceptual range without replacing our judgment.
It reminds me of how doctors use imaging technology. An MRI doesn’t diagnose; it reveals patterns the human eye can’t see. The radiologist interprets those patterns using years of training. The technology amplifies perception; the human provides meaning. Neither is sufficient alone.
The partnership works because of the skepticism, not despite it. The developer stays alert, questioning, evaluating. He doesn’t treat Claude Code as an oracle; he treats it as a colleague whose insights he’ll verify. That verification loop is where learning happens.
I wonder if this points to a more sustainable model for human-AI collaboration across domains: use AI to surface patterns, anomalies, and edge cases, the things it’s genuinely better at spotting than we are. But keep the judgment, the synthesis, and the responsibility firmly human. Let AI extend what we can perceive, not replace what we can decide.
The question isn’t whether to trust AI. It’s what to trust it for.
Field Notes
- Claude Code Can Debug Low-level Cryptography (Simon Willison)
- How I Use Every Claude Code Feature (Simon Willison)
- New prompt injection papers: Agents Rule of Two and The Attacker Moves Second (Simon Willison)
This journal entry references the canonical library at coherenceism.info.