Strict Quality AI ™

Bad Data Persistence in AI

Four Practical Verification Layers to Prevent More AI-Generated Tragedies

Greg Young
Mar 08, 2026

A reported U.S. strike that tragically destroyed a girls’ school in Iran illustrates a new risk from AI embedded in battlefield targeting systems.

As reported by the Wall Street Journal, U.S. and Israeli operations in the Iran conflict are using AI systems to analyze surveillance data, generate potential target lists, identify locations that fit known military patterns, and accelerate strike planning at speeds human analysts alone cannot match.

These use cases, which I’ll refer to collectively as War AI, allow commanders to plan and execute complex strike operations far faster than traditional intelligence cycles permit.

That speed makes AI system failure one plausible explanation for the reported U.S. strike.

The Tragedy of Bad Data Persistence.

My hypothesis (and it is only a hypothesis, not a conclusion either the WSJ or Reuters draws) is that the targeting system used AI to flag the school as a military facility based on outdated training data or historical intelligence that was never updated after the location was repurposed for civilian use.

In that scenario, the tragedy did not result from any human or AI decision to attack civilians, but from a data failure: a machine recommendation built on stale information, moving through a system designed for speed rather than verification, and executed without anyone realizing the underlying data was wrong.

This hypothesis is plausible because bad data persistence is a systemic flaw in how AI systems are trained, deployed, and used in environments where decisions are made at machine speed.
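
To make that concrete, here is a minimal sketch in Python of the staleness check that a speed-first pipeline skips. The names (IntelRecord, max_age_days) are invented for illustration and not drawn from any real targeting system:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical record of one piece of source intelligence.
@dataclass
class IntelRecord:
    source_id: str
    claim: str                  # e.g. "site matches military facility pattern"
    last_verified: datetime     # when an analyst last confirmed the claim

def stale_sources(records: list[IntelRecord], max_age_days: int = 90) -> list[IntelRecord]:
    """Return every source whose claim has not been re-verified recently.

    Bad data persistence is exactly the case where this list is
    non-empty and nobody looks at it before acting.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    return [r for r in records if r.last_verified < cutoff]

records = [
    IntelRecord("sat-041", "site matches military facility pattern",
                datetime(2023, 5, 1, tzinfo=timezone.utc)),  # never re-verified
]
if stale := stale_sources(records):
    print(f"HOLD: {len(stale)} source(s) stale; require human re-verification")
```

Nothing in that check is exotic. The point is that it has to exist somewhere in the loop, and in a system optimized for speed it often doesn’t.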

War AI is only one example of an imminent future in which AI tools are given growing autonomy in complex systems. In that future, mistakes produced by AI can propagate quickly and cause catastrophic harm at scale.

If societies are going to rely on systems like War AI (or on AI in other critical domains like finance, infrastructure, healthcare, or emergency response), then verification layers must exist across the entire lifecycle of the system.

The Start to a Fix.

Here’s one place to start fixing the bad data persistence problem: every AI-generated recommendation in a high-stakes system should produce a traceable audit trail showing which data sources influenced the output. When something goes wrong, that record is how you find the stale data before it causes the next failure.
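
To sketch what that might look like, again in Python with hypothetical field names (a real system would need signing, immutable storage, and access controls far beyond this), every recommendation carries the provenance of each input:

```python
import json
from datetime import datetime, timezone

def make_recommendation(target_id: str, score: float, sources: list[dict]) -> dict:
    """Attach a provenance record to an AI recommendation.

    Every data source that influenced the output is logged with its
    collection and verification dates, so a post-incident review can
    find the stale input quickly.
    """
    return {
        "target_id": target_id,
        "score": score,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "provenance": [
            {
                "source_id": s["source_id"],
                "collected_at": s["collected_at"],    # when the data was gathered
                "last_verified": s["last_verified"],  # when it was last confirmed
            }
            for s in sources
        ],
    }

rec = make_recommendation(
    "site-7731", 0.94,
    sources=[{"source_id": "sat-041",
              "collected_at": "2023-05-01T00:00:00Z",
              "last_verified": "2023-05-01T00:00:00Z"}],
)
print(json.dumps(rec, indent=2))  # the audit trail a reviewer would read
```

A reviewer looking at that output sees immediately that the recommendation rests on a single source last verified years ago, which is precisely the question no one could answer at machine speed.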

Paid subscribers get a deeper dive into each of the four verification layers I propose to address the bad data persistence problem.
