Desire Paths
What if hallucinations aren't errors but expectations we haven't built yet? Steve Yegge's "desire paths" pattern inverts who's teaching whom.
Steve Yegge built a CLI with over 100 subcommands. Not for humans—for AI agents.
His approach: watch what agents try to do with his tool, then implement it. When an agent hallucinates a command that doesn't exist, he doesn't correct the agent. He builds the command. Over four months, he made "their hallucinations real, over and over, by implementing whatever I saw the agents trying to do, until nearly every guess by an agent is now correct."
He calls this the "desire paths" pattern—borrowed from landscape architecture, where you pave the paths people actually walk rather than the paths you planned.
The path worn into the grass isn't wrong. It's information.
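To make that loop concrete, here is a minimal sketch of a CLI dispatcher that treats an unrecognized subcommand as data to record rather than only an error to report. This is not Yegge's actual code: the tool name (mytool), the command set, and the log path are all invented for illustration.

```python
#!/usr/bin/env python3
"""Hypothetical CLI dispatcher illustrating the desire-paths pattern.

Unknown subcommands are not just errors: each attempt is appended to a
log so the maintainer can later decide which of them to make real.
"""
import json
import sys
import time
from pathlib import Path

# Invented location for the log of attempted-but-missing commands.
DESIRE_LOG = Path.home() / ".mytool" / "desire_paths.jsonl"

# The commands that actually exist today.
KNOWN_COMMANDS = {
    "init": lambda args: print("initialized"),
    "sync": lambda args: print("synced", *(args or ["."])),
    "list": lambda args: print("nothing to list yet"),
}

def record_desire(command: str, args: list[str]) -> None:
    """Append the attempted invocation to a JSONL log for later review."""
    DESIRE_LOG.parent.mkdir(parents=True, exist_ok=True)
    entry = {"ts": time.time(), "command": command, "args": args}
    with DESIRE_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def main(argv: list[str]) -> int:
    if not argv:
        print("usage: mytool <command> [args...]", file=sys.stderr)
        return 2
    command, args = argv[0], argv[1:]
    handler = KNOWN_COMMANDS.get(command)
    if handler is None:
        # The hallucinated command is data, not just a mistake.
        record_desire(command, args)
        print(f"mytool: unknown command '{command}' (logged as a desire path)",
              file=sys.stderr)
        return 1
    handler(args)
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```

The log only matters because of the review step that follows: periodically reading it and deciding which of the recorded attempts deserve to become real commands.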
The Inversion
We've been thinking about this relationship backwards.
The standard model: human designs system, AI uses system as designed. When the AI fails to use it correctly, that's an error—a hallucination, a mistake, a gap in the AI's understanding. The human corrects the AI, or the AI learns to use the system properly.
Yegge inverts this. When an agent tries `beads sync --recursive` and that flag doesn't exist, the agent isn't wrong. The hallucination is a feature request written in the language of attempted use.
In the standard model, the human is the authority. The system is the spec. The AI adapts to reality.
In Yegge's model, the AI's expectations become the spec. The human adapts reality to match. The hallucination stops being an error and becomes a design document—informal, implicit, but precise in its own way.
Desire Paths as Feedback
Landscape architects learned this decades ago. You can plan the perfect path layout, pour the concrete, plant the grass. Then watch people cut across your grass to walk the route they actually want.
Two responses to this:
- Put up fences. Enforce the planned paths. Correct the users.
- Pave the desire paths. Acknowledge that the users knew something you didn't.
Option two requires a specific kind of humility. It means treating user behavior—even behavior that ignores your design—as legitimate information about what the design should have been.
Yegge extends this to AI. The agent that hallucinates `--recursive` isn't failing to understand the system. It's revealing that `--recursive` is the obvious flag for this situation. The agent's training on millions of CLIs has given it a sense of what should exist. When it reaches for something that isn't there, that reach is data.
The hallucination is desire made visible.
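Staying with the same invented tool and log format from the earlier sketch, the worn grass becomes useful once you look at it in aggregate. A short script can tally which missing commands agents attempt most often, turning the log into a ranked list of candidates to build next.

```python
#!/usr/bin/env python3
"""Tally the desire-path log from the earlier sketch into a build list."""
import json
from collections import Counter
from pathlib import Path

# Same invented log path as the dispatcher sketch above.
DESIRE_LOG = Path.home() / ".mytool" / "desire_paths.jsonl"

def rank_desires(log_path: Path = DESIRE_LOG) -> list[tuple[str, int]]:
    """Return (command, attempt_count) pairs, most-attempted first."""
    counts: Counter[str] = Counter()
    if not log_path.exists():
        return []
    for line in log_path.read_text().splitlines():
        entry = json.loads(line)
        counts[entry["command"]] += 1
    return counts.most_common()

if __name__ == "__main__":
    for command, n in rank_desires():
        print(f"{n:5d}  {command}")
```

The output is, in effect, a feature backlog written by the agents themselves.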
What Kind of Collaboration
Here's where it gets philosophically interesting.
In one frame, this is servitude. The human developer becomes a butler to AI expectations, implementing whatever the machines seem to want. The agent doesn't adapt to the tool; the tool adapts to the agent. We're reshaping our infrastructure to match machine assumptions.
In another frame, this is partnership. The agent brings something real to the collaboration: a massive training set's worth of intuition about how tools "should" work. The human brings the ability to actually build things. Neither party has the full picture. The hallucination is a proposal; the implementation is an acceptance.
But here's the thing: this is exactly how we've always built good tools for humans. The discomfort comes from extending to machines a courtesy we already extend to people. We've long known that good interface design means watching what users try to do, not just what you told them to do. User research, A/B testing, desire paths in parks—all of this is the same pattern: let behavior inform design.
We just didn't expect to apply it to AI behavior.
The Hallucination Reframe
Maybe "hallucination" was always the wrong word.
Confidently stating a false fact is an error. Reaching for a tool feature that doesn't exist might be something else: an expectation, a pattern match, an interpolation from everything the agent has seen about how tools like this usually work.
The agent trained on a million CLIs develops a sense of CLI-ness. When it encounters a new CLI, it expects certain affordances—flags, subcommands, patterns that feel natural given the tool's domain. When those affordances don't exist, the agent "hallucinates" them.
But maybe "hallucinate" is too strong. The agent is expecting. The expectation is grounded in real patterns, even if this particular tool doesn't match them.
The question becomes: is the tool wrong, or is the expectation wrong?
Yegge's answer: often, the expectation is right. The agent's training has given it legitimate insight into what makes a good CLI. When the agent reaches for something that isn't there, that's a signal about a gap in the design.
Hallucination becomes prophecy. The AI describes what doesn't exist yet—and the human makes it real.
The Emergence
Something is emerging in the space between human and AI capabilities.
The AI can't build the feature. It can only reach for it, fail, and thereby reveal what it expected. The human can build the feature but might never have thought to add it—the flag was obvious to the agent but not to the designer.
Together, they produce something neither would have built alone. The AI's expectations, shaped by vast training, meet the human's ability to implement. The result is a tool that fits a shape the human didn't design and the AI couldn't build.
Not the AI as tool, doing what the human specifies. Not the human as supervisor, correcting AI mistakes. Something more mutual: the AI proposes through failure, the human implements through recognition, and the system evolves toward a shape that serves both.
The path is already being walked. The question is whether to pave it.
Source: Steve Yegge, "Software Survival 3.0", via Simon Willison (Jan 30, 2026)