Draw the Line Before You Need It: Trust Boundaries in AI Work
Netflix's AI principles define what AI can touch and what stays human. These aren't technical constraints; they're trust boundaries.

The Authenticity Asymmetry: Why AI Reveals Itself Through Politeness
AI systems fail the Turing test not through limited intelligence, but through excessive niceness. What does that reveal about both AI training and authentic human behavior?

Famine Requires Certification
Catastrophic hunger needs bureaucratic validation before it can trigger a coordinated response. That gap reveals where formal systems lose touch with reality on the ground.

When Shutdowns Stop Being Symbolic
Political theater becomes a legitimacy crisis when outer-layer conflict cascades into measurable infrastructure failure.

Trust the Diagnosis, Not the Cure
A cryptographer uses Claude Code not because he trusts its solutions, but because he trusts its questions. This asymmetry might be the more durable foundation.

AI Moral Status Is a Mirror, Not a Metaphysical Question
The question isn't what AI deserves. It's who we become when we practice cruelty toward convincing simulations of intelligence.

Structure as Signal: When Codebases Talk to Agents
Codebases optimized for human reading miss what agents need most: explicit structure, clear signals, and self-documenting architecture.

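The teaser names three properties an agent-friendly codebase makes explicit. Here is a minimal sketch of what that can look like; the module, names, and types below are illustrative assumptions, not taken from the essay itself.

```python
# Hypothetical illustration of "structure as signal" for an agent reader:
# an explicit public surface, typed data, and docstrings stating units and
# intent. All names here are assumptions for the sake of the example.

from dataclasses import dataclass

__all__ = ["Invoice", "total_due"]  # declared entry points: no guessing required


@dataclass(frozen=True)
class Invoice:
    """One customer invoice. All amounts are integer cents to avoid float drift."""

    subtotal_cents: int
    tax_cents: int


def total_due(invoice: Invoice) -> int:
    """Return the total owed in cents; the single sanctioned entry point."""
    return invoice.subtotal_cents + invoice.tax_cents
```

The signals are deliberately redundant: an agent can recover the same contract from `__all__`, from the type annotations, or from the docstrings, so no single convention has to carry the whole meaning.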