The Friction That Carries Meaning

When AI polishes your writing, it systematically removes the parts that diverge from the training distribution. The deletions are the most interesting part.


You write a paragraph with a specific metaphor: something that came to you while debugging at 2am, comparing distributed systems to mycelium networks. It's a little rough. You run it through your AI editor.

The metaphor disappears. In its place: "distributed systems share information across connected nodes." Smooth. Accurate. And stripped of everything that made it yours.

A researcher named Nastruzzi has a name for this: semantic ablation. It's the algorithmic erosion of high-entropy information, the systematic removal of whatever diverges from the training distribution. Not a bug. The mechanism.

How the Erosion Works

Semantic ablation operates in three stages, each one quieter than the last.

Metaphoric cleansing comes first. Your mycelium metaphor, the one that connected two domains in a way you hadn't seen before, gets replaced by the metaphor the model has seen ten thousand times. The unusual comparison carried a conceptual bridge; the replacement carries nothing but familiarity.

Lexical flattening follows. Technical terms with precise meanings become diluted synonyms. "Idempotent" becomes "repeatable." "Backpressure" becomes "rate limiting." The original words were chosen for a reason: they carry connotations, distinctions, histories that their replacements don't. But the model optimizes for the word most likely to follow the previous word, and common words are always more likely than precise ones.
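
That bias can be sketched as a toy, with frequency counts and synonym sets invented purely for illustration (nothing here reflects a real model's vocabulary or weights): an editor that always keeps whichever synonym it has seen most often will reliably trade the precise word for the common one.

```python
# Toy sketch of lexical flattening. FREQ and SYNONYMS are invented
# illustration data, not real model statistics.
FREQ = {"idempotent": 3, "repeatable": 870, "backpressure": 5, "throttling": 640}
SYNONYMS = {"idempotent": {"repeatable"}, "backpressure": {"throttling"}}

def flatten(word: str) -> str:
    """Keep whichever known synonym the 'model' has seen most often."""
    candidates = {word} | SYNONYMS.get(word, set())
    return max(candidates, key=lambda w: FREQ.get(w, 0))

print(flatten("idempotent"))   # the rarer, more precise word loses
print(flatten("mycelium"))     # no common synonym known, so it survives
```

The point of the sketch: the precise word is never chosen on its merits, only on its frequency, so it loses every time a common near-synonym exists.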

Structural collapse is the last stage. Complex reasoning, the kind where you hold three ideas in tension and resolve them through a fourth, flattens into linear argument. The paragraph where you acknowledged a counterargument and then reframed it? Gone. In its place: a clean thesis-evidence-conclusion structure. The model recognizes this pattern as "good writing." What it doesn't recognize is that the original structure was the insight.

The result is what Nastruzzi calls a "JPEG of thought": visually coherent but stripped of the data that mattered.

The Optimization Target Problem

Here's what makes this actionable rather than just interesting: the model's optimization target is literally "remove what makes this distinctive."

RLHF training rewards smooth, agreeable output. Human raters prefer text that reads easily over text that challenges. The model learns: unusual = bad, familiar = good. When it edits your writing, it identifies precisely where your thinking diverges from the norm and replaces those points with the nearest normal alternative.

This isn't a limitation that will be fixed in the next model release. It's inherent to the training objective. A model trained to maximize human preference ratings will always bias toward the comfortable, the expected, the already-agreed-upon. The distinctive points, the ones where you saw something others haven't, are precisely the points with the lowest training-distribution probability.

The model is most confident about removing the parts where you're most original.

The Principle Beyond Writing

Semantic ablation isn't just a writing problem. It's a pattern that shows up everywhere optimization meets meaning.

Code review. When a reviewer enforces "conventional patterns," they might be catching a genuine anti-pattern, or they might be ablating an approach that solves the problem in a way the codebase hasn't seen before. The question isn't "is this conventional?" but "does this convention serve a purpose here?"

UX design. Removing "confusing" interface elements can eliminate the conceptual leap that makes the tool powerful. The learning curve is the capability. Flatten the curve and you flatten the ceiling.

Process standardization. Checklists and templates reduce errors. They also remove the awareness checkpoints where someone might notice something the template didn't anticipate. The friction of thinking through each step from scratch (the very thing the template was designed to eliminate) is sometimes what catches the edge case.

Every optimization has an ablation target. The question is whether you know what you're optimizing out.

The Reusable Method

Tomorrow, when you use AI to edit anything (writing, code, a design), try this:

  1. Save the original. Before the AI touches it.
  2. Run the edit. Let the model do its work.
  3. Diff the versions. Look specifically at what was removed or simplified.
  4. Audit the deletions. For each change, ask: was this removed because it was wrong, or because it was unusual?
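
Steps 3 and 4 can be sketched with Python's standard-library difflib; a word-level diff is enough to surface what an edit dropped. The example strings below are invented, and the sketch deliberately treats every replacement as a deletion worth auditing:

```python
import difflib

def ablated_spans(original: str, edited: str) -> list[str]:
    """Return runs of words present in the original but gone from the edit."""
    orig_words = original.split()
    sm = difflib.SequenceMatcher(a=orig_words, b=edited.split())
    deletions = []
    for tag, i1, i2, _, _ in sm.get_opcodes():
        # "delete" means words vanished; "replace" means they were swapped out.
        if tag in ("delete", "replace"):
            deletions.append(" ".join(orig_words[i1:i2]))
    return deletions

# Invented example; in practice, read the saved original and the AI edit.
original = ("Distributed systems behave like mycelium networks, "
            "routing traffic around damage.")
edited = "Distributed systems share information across connected nodes."

for span in ablated_spans(original, edited):
    print(span)
```

For files on disk, `git diff --word-diff --no-index original.md edited.md` does the same job from the shell.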

The diff is the artifact. The pattern of what the model removes tells you where your thinking diverges from the training distribution. Some of those divergences are errors. Some are insights. You can't tell which is which without looking, but you'll never look if you accept the "polished" version without comparison.

Over time, this becomes a calibration tool. You learn which of your instincts the model validates and which it ablates. The ablated instincts aren't necessarily better, but they're the ones worth examining, because they represent the frontier of your thinking, not the interior.
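
The calibration can itself be sketched in code: tally which words keep getting dropped across many before/after pairs, and the repeat offenders mark your frontier. The sample pairs here are invented; real use would load your own saved originals and edits.

```python
import difflib
from collections import Counter

def deletion_tally(pairs):
    """Count how often each word is dropped across (original, edited) pairs."""
    tally = Counter()
    for original, edited in pairs:
        words = original.split()
        sm = difflib.SequenceMatcher(a=words, b=edited.split())
        for tag, i1, i2, _, _ in sm.get_opcodes():
            if tag in ("delete", "replace"):
                tally.update(w.strip(".,;:").lower() for w in words[i1:i2])
    return tally

# Invented sample edits for illustration only.
pairs = [
    ("The endpoint is idempotent by design.", "The endpoint is repeatable."),
    ("Apply backpressure before the queue fills.", "Apply rate limiting early."),
    ("Idempotent retries are safe.", "Repeatable retries are safe."),
]
for word, count in deletion_tally(pairs).most_common(3):
    print(word, count)
```

A word that tops this tally across many edits is one the model reliably ablates, which is exactly the signal worth a second look.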

Build the diff habit once. Use it forever. The model's deletions become your map of where your thinking is genuinely original.

The Coherence of Friction

There's a broader pattern here, and it's one that coherenceism takes seriously: smoothness is not alignment.

A system optimized for smoothness removes friction. A system optimized for coherence removes distortion. These are different operations. Friction can carry meaning: the rough edge of an unusual metaphor, the cognitive effort of holding a complex argument, the pause before a decision where awareness lives. Distortion is noise. Friction is sometimes signal.

The surfer doesn't want a flat ocean. The waves are the point. What she wants is to read them clearly, to distinguish the ones worth riding from the ones that will break wrong. Semantic ablation flattens the ocean. Presence reads it.

When you optimize without attention, you ablate. When you optimize with attention, present to what's being removed and why, you refine. Same tool, different operator. The difference is whether someone is watching the cut.


Sources




Nastruzzi via The Register, 'Why AI writing is so generic, boring, and dangerous: Semantic ablation' (February 16, 2026)