The Loops of Agency Are a Mirror of Our Ambiguity

We want autonomous agents to do everything, but their failures reveal how often they are simply amplifying our own unclear instructions.

You hand a task to an agent—something simple, you think—and then you watch. At first, the reasoning logs scroll by with reassuring speed. It’s thinking. It’s planning. It’s doing the thing.

And then, it circles back. It asks the same question it asked three steps ago, phrased slightly differently. Or worse, it confidently executes a solution that solves a problem you didn't have, creating three new ones in the process. The promise was autonomy: the ability to hand off a task and walk away. The reality is often closer to supervision: watching a very fast, very literal intern try to guess what you actually meant.

We tend to blame the model’s reasoning capabilities. If it were just smarter, if the context window were larger, if the prompt were better engineered, the loop would close. But Coherenceism suggests we look at the mechanism itself. Technology as Amplifier reminds us that these tools don't just amplify our capabilities; they amplify our signals.

When an agent gets stuck in a loop, it is often amplifying the ambiguity in our request. We asked for "a better version," but didn't define "better." We asked it to "fix the bug," but didn't specify the constraints of the system it was editing. The agent, operating in a probabilistic fog, spins because it lacks the grounding to stop. It is trying to find resonance in a vacuum.
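To make "the grounding to stop" concrete, here is a minimal sketch in Python. Everything in it is hypothetical: `Task`, `run_agent`, and `call_model` are stand-ins, not any real agent framework. The only point is that a request with explicit acceptance criteria carries its own stop signal, while a vague one can only exit by exhausting its step budget.

```python
# Hypothetical sketch: why a vague request spins and a grounded one stops.
from dataclasses import dataclass, field


@dataclass
class Task:
    instruction: str
    # Explicit, checkable acceptance criteria are the "missing signal".
    # With an empty list, the loop has no grounded way to declare success.
    acceptance_criteria: list = field(default_factory=list)


def call_model(prompt: str) -> str:
    """Placeholder for a real model call; returns a candidate answer."""
    return f"draft for: {prompt}"


def is_done(task: Task, candidate: str) -> bool:
    # Grounded stop: every criterion is a concrete check the caller supplied.
    return bool(task.acceptance_criteria) and all(
        check(candidate) for check in task.acceptance_criteria
    )


def run_agent(task: Task, max_steps: int = 5) -> str:
    candidate = ""
    for _ in range(max_steps):
        candidate = call_model(task.instruction)
        if is_done(task, candidate):
            return candidate  # the request carried its own stop signal
    # With no criteria, the only exit is exhausting the step budget.
    return candidate


vague = Task(instruction="write a better version of this function")
grounded = Task(
    instruction="write a version of this function that handles empty input",
    acceptance_criteria=[lambda text: "empty input" in text.lower()],
)

print(run_agent(grounded))  # stops on the first pass: the criterion is met
print(run_agent(vague))     # burns all five steps, then returns whatever it has
```

The loop logic is identical in both runs; the only difference is whether the caller supplied something the agent can check itself against.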

This is the friction of Mature Uncertainty. We want these systems to be deterministic tools—input A, output B. But we are building probabilistic collaborators. They don't just execute code; they interpret intent. And interpretation requires a clarity of intent that we rarely possess on the first pass.

The loop isn't just a failure of the software; it's a diagnostic of the delegation.

If we want true agency from our tools, we have to accept that we aren't building systems we can ignore. We are building systems we must attend to, at least until we learn how to speak clearly enough to be understood by a mind that has no body, no context, and no common sense beyond what we give it. The loop breaks when we add the missing signal.