
The Data Was Stale. The Strike Was Precise.


On autonomous targeting, stale data, and the architecture that ensures no one is responsible

On February 28, 2026, the Shajareh Tayyebeh girls’ primary school in Minab, Iran, was destroyed. Girls aged 7 to 12. Classes were in session. Scores of children killed. The investigation is still open.


I want to ask you something before we get into the politics and the technology and the policy frameworks and the press releases.

Where were you at 7 years old?

School, probably. Sitting at a desk. Learning to read. Worrying about something small — a test, a lunch, a friend who wasn’t talking to you. The specific texture of being alive and young and completely unaware that you were a coordinate on someone’s map.

That’s who was in the building.

I want to trace how that building ended up on the map — not to assign blame, because the investigation is still open — but to ask a harder question: even if every human involved acted in good faith, does the architecture of how we make these decisions protect us from making the same mistake again?

The evidence suggests it doesn’t.


The Honest Argument for Speed

The case for removing humans from targeting decisions is not stupid. It’s not even obviously wrong, and it deserves to be heard clearly before it’s answered.

Humans freeze under pressure. Humans are tribal, frightened, and vindictive. Human operators throughout history have signed off on strikes that killed thousands of civilians — the algorithm didn’t invent collateral damage, it inherited it. In a threat environment that moves faster than human cognition, a system that processes information without fear, fatigue, or fury might, the argument goes, make fewer catastrophic errors than the alternatives.

That case deserves a real answer, not a dismissal.

Here is the real answer: the argument assumes the machine knows what it doesn’t know. It never has. The most dangerous property of an AI system deployed in high-stakes decisions is not malice — it’s confidence without uncertainty. When it’s wrong, it doesn’t hesitate. It outputs a precise, authoritative answer with no signal that the answer is built on data that stopped being accurate three years ago.

A human who freezes is a human who’s uncertain. Uncertainty is information. In a targeting context, it may be the most important information of all.

The system doesn’t freeze. The system outputs. That distinction is the distance between a soldier who hesitates at a doorway because something feels wrong, and a missile that doesn’t feel anything.
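To make that distinction concrete, here is a minimal sketch in Python of the two behaviors. Every name in it (point_output, hedged_output, the 0.95 floor) is invented for illustration; it describes no real targeting system, only the difference between discarding uncertainty and surfacing it.

```python
# Hypothetical sketch. All names and thresholds are invented for illustration.

def point_output(score: float, label: str) -> str:
    # The failure mode: the model's score is silently discarded.
    # A 0.51 and a 0.99 produce identical, equally authoritative output.
    return label

def hedged_output(score: float, label: str, floor: float = 0.95) -> str:
    # The alternative: uncertainty is treated as information and
    # passed to the human instead of being thrown away.
    if score < floor:
        return f"UNCERTAIN ({score:.0%} < {floor:.0%}): defer to human review"
    return f"{label} ({score:.0%})"

print(point_output(0.51, "military compound"))   # -> military compound
print(hedged_output(0.51, "military compound"))  # -> UNCERTAIN (51% < 95%): defer to human review
```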


Stale Data, Live Children

Here is what investigators found when analyzing the Minab strike area.

The school and the military compound next door showed a clustered, precise impact pattern — not a single stray munition, but a structured sequence. One of the leading explanations offered by weapons and targeting experts: the target data may not have been updated after the building’s use changed. The compound next door was the likely intended target. The school, according to someone’s database, may still have been registered as part of that complex.

The data was stale. The strike was precise. Those two facts together describe a system performing exactly as designed while being catastrophically wrong.

This is not a hypothetical failure mode. It raises a concrete question: amid a strike package already being accelerated on last-minute intelligence, did anyone pause to ask: when was this last verified?
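If that question were a hard precondition rather than a cultural norm, it might look something like the sketch below. This is a hypothetical illustration under assumed names (TargetRecord, MAX_VERIFICATION_AGE, a 30-day threshold); it describes no real system, only the shape of the check that appears to have been missing.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical sketch. Names, fields, and the 30-day threshold are invented.
MAX_VERIFICATION_AGE = timedelta(days=30)

@dataclass
class TargetRecord:
    coordinates: tuple[float, float]
    designation: str          # e.g. "military compound"
    last_verified: datetime   # when a human last confirmed the designation

def is_current(target: TargetRecord, now: datetime) -> bool:
    """Answer 'when was this last verified?' before anything else runs.

    Returns True only if the designation was confirmed recently enough.
    A False here should halt the pipeline, not merely log a warning.
    """
    return (now - target.last_verified) <= MAX_VERIFICATION_AGE

# Data verified three years ago fails the check, no matter how
# precise the coordinates are.
record = TargetRecord(
    coordinates=(0.0, 0.0),  # placeholder values
    designation="military compound",
    last_verified=datetime(2023, 2, 28),
)
assert not is_current(record, now=datetime(2026, 2, 28))
```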

The person who should have asked that question was likely looking at a long list of coordinates, a shrinking window, and a professional culture that rewards decisiveness and penalizes delay. The architecture made it nearly impossible to stop.

That observation is not an excuse. It is a description of a structural problem that will reproduce the same outcome wherever the same architecture is deployed.


The Loop Was Always the Point

Current military doctrine holds that a human must be in the loop before lethal force is authorized. This is treated as a safeguard. In practice, it functions as something closer to a legal fiction.

A human “in the loop” on a large-scale strike package — under time pressure, relying entirely on outputs from a system they didn’t build and cannot interrogate — is not exercising judgment. They are performing the ritual of judgment. They are providing a signature, not a check. They are the last step in a process that was never designed to be interrupted at that step.

Under those conditions, the human in the loop exists to absorb liability. Not to catch the mistake.

This matters because the entire moral architecture of military accountability rests on the premise that a human being decided. That somewhere in the chain, a person looked at a target, weighed the evidence, and bore responsibility for the call. Automate everything upstream until the human is simply ratifying an output, and you no longer have a human in the loop. You have a human at the end of a conveyor belt, stamping boxes.

You also have, when something goes wrong, a structure perfectly designed to ensure that no one is clearly responsible. The algorithm suggested it. The analyst approved it. The commander authorized it. The investigation remains open.

AI doesn’t just automate decisions. It automates the diffusion of blame.


Policy Is a Memo

Military doctrine currently holds that lethal autonomous decisions require human authorization. This is policy, not law.

Policy is a memo. Policy is an administration. Policy is a defense secretary who decides, on any given morning, that the threat environment has shifted and the approval window has become a bottleneck.

Policy is exactly as strong as the institutional will to enforce it — and that will has a documented tendency to erode over time in the direction that makes operations faster and oversight lighter.

The timing of the Minab strike had already been moved forward at the last minute, after intelligence indicated a high-value meeting was happening earlier than expected. The system accelerated. The window shrank. The checks that exist in theory exist within time constraints set by the threat, not by the checklist.

This is not a criticism of any individual decision. It is a description of how the architecture behaves under the exact conditions it will always face. Urgency is not the exception in warfare. Urgency is the operating environment. A safeguard that holds in calm conditions and dissolves under urgency is a decoration, not a constraint.


The Question Underneath

The question underneath all of this is not technical. It was never technical.

Every life has value. That is not a policy position — it is the premise on which military law, rules of engagement, and the entire framework of proportionality in warfare are built. The question is whether the architecture we are building is capable of honoring that premise under the conditions it will actually face.

The evidence from Minab suggests a gap between the premise and the practice. Not because anyone intended it. The architecture was built incrementally, each optimization making local sense — faster processing, broader coverage, tighter windows — without anyone designing the accountability failure that emerged from the combination.

That is, in some ways, a harder problem than malice. Malice can be identified and punished. A system that produces catastrophic outcomes through the accumulation of reasonable decisions is much harder to arrest.

What it requires instead is a deliberate structural choice: to build the conditions for genuine human judgment back into the chain, not as ceremony but as a real constraint. A human with enough time, enough current information, and enough institutional protection to interrogate an output before it becomes an action. Someone whose career does not end for asking when was this last verified.

That human doesn’t currently exist in most targeting chains. Building the role back in is slow, expensive, and operationally inconvenient.

It is also what the premise — that every life has value — actually requires.


The girls were between 7 and 12. They were in class.

The system didn’t know it was a school.

That was always a human’s job to know.


Minab, Hormozgan Province, Iran. February 28, 2026. Investigation ongoing. Casualty figures drawn from AP and Reuters reporting; final numbers remain unverified pending the investigation’s conclusion.

