Chapter 14: The Feedback Loops

Reportedly two point four seconds. That is the time between detection and engagement in Israel's Iron Dome system — the interval in which the battle management computer tracks up to 1,200 targets per minute, a machine learning model assesses each rocket's trajectory across multiple parameters including altitude decay and wind shear, the system calculates an intercept solution far faster than a human operator could, and a missile launches to destroy an incoming projectile that might kill civilians in Tel Aviv or Ashkelon or Sderot.

A human could not make this decision. Not in 2.4 seconds, not tracking 1,200 targets per minute, not while simultaneously evaluating whether the incoming rocket will land in a populated area or an empty field — a distinction that determines whether a $50,000 interceptor is worth firing. Iron Dome ignores approximately seventy percent of incoming fire: the rockets it calculates will do no harm. Its success rate exceeds ninety percent against the threats it chooses to engage.
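That triage logic can be sketched in a few lines, which is part of what makes the compression so stark. The Python below is purely illustrative: the track fields, the populated-area flag, and the cost comparison are assumed stand-ins, not the actual battle management code.

```python
from dataclasses import dataclass

INTERCEPTOR_COST_USD = 50_000  # figure cited above; the real cost model is classified

@dataclass
class Track:
    """One incoming rocket as seen by the battle management computer (illustrative)."""
    track_id: int
    impact_in_populated_area: bool  # assumed output of a map lookup on the predicted impact point
    estimated_damage_usd: float     # hypothetical expected-harm estimate if left unintercepted

def should_engage(track: Track) -> bool:
    """Schematic decision rule: fire only at rockets that threaten a populated area
    and whose expected harm outweighs the cost of the shot."""
    if not track.impact_in_populated_area:
        return False  # the roughly seventy percent of fire allowed to fall on empty ground
    return track.estimated_damage_usd > INTERCEPTOR_COST_USD

def triage(barrage: list[Track]) -> list[Track]:
    """A barrage becomes an engagement list in a single pass."""
    return [t for t in barrage if should_engage(t)]
```

At 1,200 tracks per minute there is no step in which a human weighs in; the override the operators hold sits outside the loop, not inside it.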

This is AI saving lives. It is also AI deciding which threats matter and which do not, compressing the space for human judgment to something approaching zero. The operators can override. The question is whether, at 2.4 seconds per decision in a barrage of hundreds of rockets, override is a meaningful concept or a legal fiction.

Now hold that thought, because what is happening in Gaza at the other end of the same military's AI pipeline answers it.


In April 2024, an investigation by +972 Magazine and Local Call exposed Lavender — an AI system that assigns Gaza residents a numerical score indicating their suspected likelihood of membership in armed groups. Lavender generated a list of up to 37,000 Palestinians targeted for assassination. Sources within the Israeli military described human review of the system's output as a "rubber stamp" lasting approximately twenty seconds per target. The acknowledged error rate was ten percent — meaning roughly 3,700 people on the list were, by the military's own assessment, wrongly identified.

A companion system called "Where's Daddy?" tracked individuals on Lavender's list and was, according to the investigation, "purposely designed to help Israel target individuals when they were at home at night with their families." The design choice maximized the likelihood of killing not just the target but everyone sleeping under the same roof.

A third system, the Gospel, used AI to recommend buildings and structures as bombing targets — working alongside Lavender in a division of labor where one algorithm identified the person and the other identified the place.

Iron Dome and Lavender run on the same technological substrate. The same neural networks, the same machine learning architectures, the same data pipelines. One intercepts rockets to protect civilians. The other generates kill lists with a ten percent error rate and tracks its targets to their families' bedrooms. The technology is neutral. The application is not.


Ukraine tells a different chapter of the same story.

In 2024, Ukraine produced approximately two million drones. Of those, fewer than 10,000 — less than half a percent — used AI guidance. But that half percent mattered. AI guidance lifted target engagement success rates from the ten-to-twenty percent range to seventy-to-eighty percent, a three- to four-fold improvement in lethality with the same hardware.
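The multiple follows from the two quoted ranges; the three- to four-fold figure takes the conservative endpoints, and the most generous comparison would be steeper still:

\[
\frac{0.70}{0.20} = 3.5, \qquad \frac{0.80}{0.20} = 4.0, \qquad \frac{0.80}{0.10} = 8.0
\]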

The key was not the AI itself but what it was trained on. Ukraine took publicly available artificial intelligence models and retrained them on classified, real-world frontline combat data — datasets "tailored not just to current combat conditions but, often, to a specific sector of the front and specific types of drone." In January 2026, the Ministry of Defence unveiled the Brave1 Dataroom: a secure data environment providing developers access to structured visual and thermal datasets collected directly from combat operations. The institutional infrastructure for a feedback loop.

The loop works like this.

Stage one: deploy AI to the battlefield. Ten thousand drones with machine learning guidance systems, operating alongside two million conventional ones. Stage two: collect data. Every engagement — successful or failed, every thermal signature, every evasion pattern, every target identification — becomes training data. Stage three: retrain. Feed the combat data back into the models. Tune them for specific terrain, specific enemy tactics, specific drone airframes. Stage four: redeploy. The improved models go back to the front. The drones are more lethal. They generate more data. The cycle accelerates.
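Stripped of everything classified, the shape of that cycle fits in a few dozen lines. The sketch below is a toy under invented names and numbers, not anyone's actual pipeline; its only claim is the structure just described: deploy, collect, retrain, redeploy.

```python
from dataclasses import dataclass
import random

# Toy sketch of the four-stage loop described above. Every class, name, and number
# is a hypothetical stand-in; nothing here reflects actual Ukrainian or Russian tooling.

@dataclass
class Engagement:
    sector: str    # sector of the front where the sortie flew
    airframe: str  # drone type
    hit: bool      # outcome: the label the next training pass learns from

@dataclass
class GuidanceModel:
    version: int = 0
    def update(self, data: list[Engagement]) -> "GuidanceModel":
        # placeholder for a real fine-tuning step on combat telemetry
        return GuidanceModel(version=self.version + 1)

@dataclass
class Drone:
    airframe: str
    sector: str
    model: GuidanceModel | None = None
    def fly_sortie(self) -> Engagement:
        # toy assumption: newer model versions hit more often, capped at 80 percent
        p_hit = min(0.15 + 0.10 * (self.model.version if self.model else 0), 0.80)
        return Engagement(self.sector, self.airframe, random.random() < p_hit)

def feedback_loop(model: GuidanceModel, fleet: list[Drone], iterations: int) -> GuidanceModel:
    for _ in range(iterations):
        for drone in fleet:                             # stage one: deploy the current model
            drone.model = model
        data = [drone.fly_sortie() for drone in fleet]  # stage two: collect combat data
        model = model.update(data)                      # stage three: retrain on it
        # stage four is simply the next pass: redeploy, fly again, gather more data
    return model

fleet = [Drone(airframe="FPV-7", sector="eastern front") for _ in range(1000)]  # invented
print(feedback_loop(GuidanceModel(), fleet, iterations=5).version)  # prints 5
```

The hit-rate ramp is arbitrary; the point is that each sortie's outcome is itself the training label for the next iteration, which is what makes the loop self-accelerating.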

Ukraine's military ambition for 2025 was to increase the share of AI-guided drones from half a percent to fifty percent. If realized, that would mean approximately one million AI-assisted drones, each three to four times more effective than their unguided predecessors — a roughly twelvefold increase in killing power, achieved not through more weapons but through better algorithms trained on the data the weapons themselves produce.

Russia, meanwhile, runs its own version of the same loop. The Alabuga Special Economic Zone produced 2,738 Geran drones in 2023. By late spring 2025, the number exceeded 26,000, with an annual production target of approximately 25,000 and a workforce expanding toward 40,000 — 170 to 190 drones rolling off the line every day. In December 2024, Russia began equipping Geran-2 strike drones with AI, using Nvidia Jetson processors sourced through sanctions evasion to enable autonomous target recognition, real-time video processing, and adaptive navigation. The V2U autonomous system operates with minimal GPS reliance and can function without human commands — it uses AI-powered terrain analysis to find and strike targets up to sixty-two miles away, launched in stacks of seven or eight.

The feedback loop is not a theory. It is an industrial process operating in real time on two sides of the same front line, each iteration producing more autonomous, more lethal weapons trained on the combat data of the previous iteration.


Now step back far enough to see the recursion.

The technology being fought over — AI chips, rare earth minerals, energy supplies, undersea data cables, fabrication capacity — is the same technology being used to fight. Russia's AI drones use Nvidia processors from the same supply chain the chip export controls are designed to restrict. Ukraine trains its models on servers powered by a grid that Russian drones are simultaneously attacking. Israel's Iron Dome depends on chips fabricated in Taiwan while protecting a population that itself sits atop a node in the AI supply chain: Israel's own military AI industry, which climbed the international arms rankings in 2024.

The United States deploys Palantir's Maven Smart System across five combatant commands — over twenty thousand users scanning satellite imagery and sensor feeds to identify targets, a contract now worth $1.3 billion. Palantir's software runs on cloud infrastructure from AWS and Azure, powered by data centers consuming four percent of American electricity. That electricity comes from natural gas supplies the US protects by maintaining a naval presence in the same waters where it is fighting the wars the software helps prosecute.

AI is fighting the wars that determine who controls AI. The recursive irony is complete.


This recursion creates escalation dynamics that no existing framework can manage.

"By reducing risks to a state's own soldiers," researchers warn, "autonomous weapons may reduce the political threshold for deploying or using force." A RAND Corporation wargame found that "the speed of autonomous systems did lead to inadvertent escalation" and that "widespread AI and autonomous systems could lead to inadvertent escalation and crisis instability."

The concept emerging from military theorists is the "flash war" — an algorithmic escalation that intensifies a crisis before humans can intervene. "Reciprocal autonomous weapons interactions could accelerate tactical skirmishes into strategic conflicts before human oversight intervenes." A 2024 study found that military-related large language models were "prone to recommending pro-escalation tactics with unclear motivation and logic, including escalations that provoked arms races and called for nuclear weapons deployment."

The governance response has been exactly what you would expect. The United Nations General Assembly adopted a resolution on lethal autonomous weapons systems in December 2024 by a vote of 166 to 3, with Belarus, North Korea, and Russia opposed. The Secretary-General called for a legally binding treaty banning autonomous weapons without human control by 2026. One hundred and twenty countries support negotiations.

The states blocking regulation — the United States, Russia, China, India, Israel — are the states most actively developing the weapons the resolution would regulate. In December 2025, the General Assembly voted again: 164 in favor, 6 against. The United States had joined the opposition. The countries building the weapons will not agree to stop building them. The countries that would agree do not have the weapons.

The feedback loop extends beyond the battlefield into the arms industry itself. Military AI spending reached $9.31 billion in 2024, projected to more than double to $19.29 billion by 2030. The real figures are almost certainly higher — classified programs, dual-use procurement, and cross-subsidization from commercial AI research make the published numbers a floor, not a ceiling. The companies building commercial AI — Google, Microsoft, Amazon, Palantir — are the same companies building military AI, and the combat data from Ukraine and Gaza feeds improvements that enhance both product lines.
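Taken at face value, the two published figures imply a steady compound growth rate:

\[
\left(\frac{19.29}{9.31}\right)^{1/6} - 1 \approx 0.13
\]

roughly thirteen percent a year from 2024 to 2030, and that is growth on the floor, not the ceiling.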


In 2020, in Libya, a Turkish-made Kargu 2 drone "hunted down and attacked" retreating soldiers autonomously, without command. A United Nations panel of experts considered it the first lethal drone attack in history carried out "on their own initiative." In January 2026, Turkey conducted the first live-fire test of a twenty-drone swarm controlled by a single operator using indigenous swarm-intelligence software. The drones navigated autonomously, split into three sub-swarms, and executed simultaneous attacks.

That was six years between the first autonomous kill and the first autonomous swarm. The feedback loop is accelerating. Each generation of weapons produces the data that trains the next generation, which is deployed faster, which produces more data. The cycle is not slowing. The governance frameworks designed to contain it have not even started.

The scenario military theorists call a flash war begins like this. A swarm of autonomous drones detects movement along a disputed border. It classifies the movement as hostile — correctly or incorrectly — and engages. The opposing side's autonomous defense systems detect the incoming swarm and launch countermeasures. The countermeasures are interpreted by the first side's battle management AI as an escalation. It requests reinforcement from a second echelon of autonomous systems staged behind the line. The second echelon deploys. The opposing side's AI, observing a surge in hostile autonomous activity, escalates to a higher threat level and activates standoff weapons. Within minutes — not hours, not days — a tactical skirmish between machines has become a strategic confrontation between nations, and no human being has made a decision. The entire sequence has played out at machine speed, in the gap between detection and deliberation where human oversight was supposed to live.
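The cascade is easy to caricature in code, which is part of what makes it unnerving. The toy below uses invented thresholds and latencies and models no real system; its only point is that two reflexive response rules, coupled at machine speed, finish escalating long before a human review window opens.

```python
from dataclasses import dataclass

# Toy escalation model with invented numbers. Each side raises its posture one step
# whenever it observes the other side's posture at or above its own; no step involves
# a human decision.

MACHINE_STEP_SECONDS = 0.5      # assumed sensor-to-response latency of the automation
HUMAN_REVIEW_SECONDS = 300.0    # assumed time for a commander to assess and intervene
STRATEGIC_THRESHOLD = 5         # posture level treated here as "strategic confrontation"

@dataclass
class Side:
    name: str
    posture: int = 0  # 0 = patrol, rising through engage, reinforce, standoff weapons...

    def react(self, observed_posture: int) -> None:
        # reflexive rule: match and exceed whatever the other side just did
        if observed_posture >= self.posture:
            self.posture = observed_posture + 1

def flash_war(trigger_level: int = 1) -> None:
    a, b = Side("A", posture=trigger_level), Side("B")  # A's swarm classifies movement as hostile
    elapsed = 0.0
    while max(a.posture, b.posture) < STRATEGIC_THRESHOLD:
        b.react(a.posture)   # countermeasures
        a.react(b.posture)   # reinforcement, read as escalation
        elapsed += 2 * MACHINE_STEP_SECONDS
    print(f"strategic threshold reached in {elapsed:.1f}s of machine time; "
          f"first human review at {HUMAN_REVIEW_SECONDS:.0f}s")

flash_war()
```

The numbers are arbitrary; the structure is not: each side's rule reads the other's last move as the input to its next one, and nothing in the loop waits for deliberation.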

The technology being fought over is the weapon being used to fight. The weapon produces the data that improves the weapon. The wars that determine who controls AI are themselves powered by AI. And the loop, once started, has no obvious off switch.

What happens when the autonomous systems of two peer adversaries engage each other — AI-guided Russian drones against AI-guided Ukrainian interceptors — is a question that theory cannot answer and practice is about to test. The feedback loop does not care about international law, or human oversight, or the 2.4-second decision window that once seemed fast. The machines are getting faster. The humans are not.