Whose Alignment Wins?
The same tool, two directions.
On Monday, someone used AI to negotiate a hospital bill from $195,000 down to $33,000. The algorithm read the charges, spotted the inflations, drafted appeals until the system yielded. One person, armed with a chatbot, fought institutional absurdity and won.
Same day: Amazon announced it’s laying off 14,000 corporate workers to fund AI investments. The efficiency gains aren’t abstract—they’re 14,000 people who now have to reorient their lives while the org chart gets smoother.
The technology is identical. The difference is whose coherence it optimizes for.
Technology as Amplifier
Coherenceism holds that tools multiply what already exists. They don’t create values—they amplify them. AI trained to advocate for fairness helps you fight a broken billing system. AI trained to reduce headcount helps a corporation meet quarterly targets. Neither deployment is “neutral.” Both encode a choice about what matters.
The hospital bill story feels like a win because it restores alignment: healthcare costs should track care delivered, not administrative opportunism. The person negotiating isn’t gaming the system—they’re using a tool to push back against a system that’s already misaligned. The AI becomes an equalizer, giving an individual the leverage to challenge institutional dysfunction.
The layoff story feels like a loss because it optimizes one level of coherence—corporate efficiency, shareholder returns—at the expense of another: human livelihood, team continuity, the daily texture of having work that matters. The people let go aren’t inefficient. They’re just expensive relative to an algorithm that can summarize emails and draft reports without health insurance.
Nested Coherence and the Question of Scale
Coherenceism also teaches that systems nest. Individual alignment can conflict with institutional alignment, which can conflict with market alignment, which can conflict with ecological alignment. The question is never “Is this coherent?” The question is “Coherent for whom, and at what scale?”
The person who cut their hospital bill achieved local coherence—they aligned their financial reality with what feels fair. But the billing system itself remains incoherent: a labyrinth designed to extract maximum revenue while patients are least able to negotiate. One person escaping doesn’t fix the maze.
Amazon’s layoffs achieve corporate coherence—leaner operations, better margins, faster iteration. But they fracture human coherence for the people affected and their families. The system optimizes one layer while destabilizing another.
When alignment at different scales conflicts, the dominant system wins by default. Not because it’s more true, but because it has more force. The market rewards efficiency. Patients absorb the cost of broken billing. Workers bear the risk of automation. Coherenceism isn’t about choosing sides—it’s about seeing the pattern clearly so we can tune for resonance at the scale that matters most.
What This Means for Us
If AI is an amplifier, the question isn’t whether to use it. The question is what to amplify. Right now, most institutional deployments optimize for cost reduction, workflow automation, shareholder value—forms of alignment that serve capital over people. That’s not inevitable. It’s a design choice baked into training data, deployment context, and who gets to steer.
The hospital bill story shows what’s possible when we aim the tool at human flourishing instead of institutional extraction. But isolated wins don’t shift the field. If we want AI to serve coherence at human scale, we have to build systems, policies, and cultural expectations that reward that tuning—not just leave it to individuals armed with ChatGPT against a $195k invoice.
Technology amplifies. We choose the signal.
Field Notes
- Using AI to negotiate a $195k hospital bill down to $33k (Hacker News discussion)
- Amazon lays off thousands of corporate workers as it spends big on AI (NPR)
This journal entry references the canonical library at coherenceism.info.