On 28 February 2026, during the opening stage of Operation Epic Fury, US forces reportedly bombed the Shajareh Tayyebeh primary school in Minab, Iran, striking the building at least twice during the morning session and killing 168 children and 14 teachers. Most of the dead children were girls between the ages of 7 and 12. The public conversation moved almost instantly in the wrong direction.

Within days, the debate was no longer centered on who authorised the strike, how the target had been validated, why a school remained in a military database, or what kind of system makes such a failure possible. Instead, attention narrowed around a more familiar and spectacular question: whether Claude, Anthropic’s chatbot, had somehow selected the school as a target.

It transformed a political and military atrocity into a consumer-facing AI panic. It made the event feel like a story about chatbot obedience, model alignment and large language model instability, rather than a story about war, infrastructure, procedure and responsibility. It turned a question of state violence into a question of product behaviour.

And in doing so, it helped the real system recede from view.

Disclaimer: This article is based entirely on publicly available information, with all sources cited and linked throughout the text. The analysis reflects the author’s interpretation of reported events, statements and documents. Some sections contain personal commentary intended to contextualize the broader implications of these developments. The article discusses sensitive topics including surveillance technologies, military applications of artificial intelligence and ongoing geopolitical conflicts, and is presented for informational and analytical purposes.

How the Wrong AI Got Blamed

Within days, the question that organized the coverage was whether Claude, a chatbot made by Anthropic, had selected the school as a target. Congress wrote to the US secretary of defense, Pete Hegseth, about the extent of AI use in the strikes. The New Yorker magazine asked whether Claude could be trusted to obey orders in combat, whether it might resort to blackmail as a self-preservation strategy, and whether the Pentagon’s chief concern should be that the chatbot had a personality.

This was just days before Anthropic refused a Pentagon request on the grounds that it involved autonomous weapons and mass surveillance, a refusal that led the Trump administration to declare the company “a supply chain risk” and cost it a $200 million contract, which OpenAI then took over, as we have covered in detail previously.

Almost none of this speculation about Anthropic had any relationship to reality. The targeting for Operation Epic Fury ran on a system called Maven. And nobody was arguing about Maven.

As The Guardian reports, Claude did not identify, package or approve the target; that happened inside Maven. But Anthropic’s past defence contracts made it just suspicious enough to dominate the story while the real system escaped scrutiny.

The targeting stack used in Operation Epic Fury did not revolve around a chatbot improvising decisions in plain English. Claude drew the scrutiny only because in late 2024, years after the core system was operational, Palantir added an LLM layer; that is where Claude sits in the picture. The systems that matter here are the older, quieter forms of militarised AI that receive far less public attention: computer vision, sensor fusion, object detection, data integration, workflow automation and recommendation systems.

This is one of the central distortions of the current AI debate: “artificial intelligence” has increasingly become shorthand for chatbots and large language models. Public discussion now treats LLMs as if they were the primary, or even the only, meaningful expression of AI. The public learns to ask whether a model hallucinated, whether it was aligned, whether it followed instructions, whether it could be trusted. Those questions may be relevant in some contexts, but they are far from the central questions here.

Palantir’s Maven Smart System is not a conversational chatbot interface with military branding. It is an operational targeting platform that integrates satellite imagery, signals intelligence, sensor feeds and other data streams into a unified workflow. It structures detections into target packages, pushes them through stages of validation and recommendation and helps compress the distance between seeing something and striking it.

That is precisely why the fixation on Claude was so effective. It gave the public a legible villain and spared the deeper infrastructure from immediate scrutiny.

A Pattern Older than Chatbots

To understand this story and the bigger picture, we need to look back in history. The technologies change, but the logic does not.

In the late 1960s, the United States ran Operation Igloo White in Vietnam, scattering 20,000 acoustic and seismic sensors along the Ho Chi Minh trail. The data fed into IBM computers that were used to predict convoy movement and guide strikes. The system could sense but not truly see. It could not reliably distinguish trucks from ox carts. North Vietnamese forces learned to manipulate it with recordings, decoys and environmental triggers.

Yet the military produced wildly inflated destruction claims: the air force reported 46,000 trucks destroyed or damaged over the course of the campaign, and the CIA noted that the claims for a single year exceeded the total number of trucks believed to exist in all of North Vietnam. The system was effectively validating itself. When visible evidence did not match output, the answer was not immediate doubt but further rationalisation. Personnel even joked about a “great Laotian truck eater” to explain the absence of wreckage.

The same logic appeared again in the 1999 NATO bombing of the Chinese embassy in Belgrade. Jon Lindsay’s book Information Technology and Military Power is the most careful study I have found of how targeting actually works, at least partly because it was written by someone who actually did it. During the Kosovo air war, Gen Wesley Clark demanded 2,000 targets, which made it easy to justify any target’s connection to the Milošević government. The CIA nominated just one target during the entire war: the federal directorate of supply and procurement. Analysts had a street address but no coordinates, so they tried to reverse-engineer a location from three outdated maps. They ended up hitting the Chinese embassy, which had recently relocated 300 metres from the building they were aiming for, killing three people. The state department knew that the embassy had moved. The military’s facilities database did not. Target reviews failed to notice, because each validation relied on the last. Lindsay calls this “circular reporting”: an accumulation of supporting documents that “created the illusion of multiple validations”.

It happened again in Iraq. During the 2003 invasion, the Pentagon’s high-value targeting process reportedly produced 50 strikes on senior Iraqi leadership. The bombs supposedly hit where they were aimed, but not one killed its intended target. The cycle was fast enough to destroy buildings and too fast to discover that it was striking the wrong ones. That operation would later become a benchmark for Scarlet Dragon, which we describe below.

The Hidden School

Now that we have seen some historical examples, let us return to the Iran school bombing. According to official reporting, the school in Minab had been classified in a Defense Intelligence Agency database as a military facility, even though satellite imagery showed that it had been separated from the adjacent Islamic Revolutionary Guard Corps compound and converted into a school years earlier, by 2016 at the latest.

That fact alone should end the lazy comfort of the phrase “AI error”. The school appeared in Iranian business listings. It was visible on Google Maps. A basic search could have raised doubts. Yet no one checked.

This is why framing the school bombing in Iran as an LLM scandal is so misleading. The deadliest tendencies in militarised AI do not begin with chatbots. They begin with systems that turn representation into authority, data into procedure and acceleration into doctrine.

At the time of writing, no one has taken responsibility. As CNN reports, Trump pushed back against the suggestion that the US had carried out the strike, claiming in a news conference that Iran also had Tomahawk missiles. Those cruise missiles, produced by US defense contractor Raytheon, are held only by a small group of US allies authorised to purchase them. Even Israel, one of Washington’s closest partners, does not possess them, and multiple munitions experts confirmed to CNN that Iran does not have them either.

Google, Palantir and Efficient Violence

Now for some background on the system actually involved in the school bombing. Eight years ago, Project Maven was one of the most controversial military AI projects in Silicon Valley. In 2018, more than 4,000 Google employees signed a letter opposing the company’s Pentagon contract, arguing that Google should not be involved in building AI for warfare. Workers organised, people were fired, and data leaks connected Google to the project years later. We keep this description short to avoid making this article even longer, having covered it previously in AI, Mass Surveillance and the New Arms Race: Anthropic, OpenAI, Google and xAI, where we look into how Big Tech companies have signed billion-dollar contracts with the military to use their AI systems for mass surveillance, and how those systems operate.

Palantir took over and spent the following years turning Maven into a tool for analysing drone and satellite imagery using machine learning. In 2024, the US Army awarded Palantir a $480 million contract to expand Maven’s capabilities and make the system accessible across military branches, according to Reuters. Maven pulls in feeds from ISR platforms, applies computer vision to detect objects, movements and patterns, then presents analysts with flagged results. The goal is to shorten the time from sensor detection to human assessment.

In one sentence, what Palantir does is help the US military avoid drowning in the raw data it has accumulated since 9/11 (drone footage, satellite imagery, human intelligence reports, intercepted communications, biometric databases and more) by using AI to shape that data into something comprehensive, coherent and usable. If you are interested in their operations and how Palantir’s technology works, including CEO Alex Karp’s world-domination and fearmongering rhetoric, I highly encourage you to read our research on The Palantir Problem and the normalisation and glorification of violence.

Palantir, as a close contractor to the US military, takes the stance that what was once debated as a moral threshold should now be a normal part of executing military operations: using whatever violence is deemed necessary to achieve faster results.

On this page, we very often speak about the convergence of military and civilian data with AI systems. Once personal data from hospitals, borders, police departments and military sensors flow into the same analytical core, a platform designed for helping the public can be repurposed for war and policing. This is also why the case should not be treated as separate from the broader pattern already visible in other domains.

Without extrapolating, we can place the ideology behind Project Nimbus, Maven, Anthropic’s refusal, OpenAI’s contract with the Department of War and Palantir’s growing role in military and state systems into one box: a steady collapse of the boundary between civilian AI infrastructure and security power. They are different faces of the same transformation. Private companies are no longer merely supplying tools to governments; they are becoming part of the operational layer through which states analyse, classify, monitor and act. The school bombing in Iran is an unfortunate example of this rising trend.

During a public forum in May 2025, a protester confronted Palantir’s CEO Alex Karp, shouting: “Your AI and technology from Palantir kills Palestinians.” In a viral exchange, Karp responded: “Mostly terrorists, that’s true.”

From that statement alone, is the general audience supposed to conclude that innocent children aged 7 to 12, attending their daily classes, are terrorists?

Scarlet Dragon and the Doctrine of Compression

The most revealing stage in Maven’s development came through Scarlet Dragon, an exercise in which the XVIII Airborne Corps tested the system. It started in 2020 as a tabletop wargame in a windowless basement at Fort Bragg.

Over the next five years, Scarlet Dragon grew into a live-fire military exercise spanning multiple states and branches of the armed forces, with “forward-deployed engineers” from Palantir and other contractors embedded alongside soldiers. Each time the exercise was run, it was meant to answer the same question: how fast could the system move from detection to decision?

The goal was to determine how much of the targeting process could be compressed, how much human labour could be removed and how quickly a smaller team using software could process the volume of work that once required thousands of people. That is the ideology behind this system: compression. The commander behind Scarlet Dragon, Lt Gen Michael Erik Kurilla, wanted what he called the first AI-enabled corps.

By 2024, the benchmark reportedly aimed at 1,000 targeting decisions in an hour. That number should destroy any lingering temptation to read this story through the soft, familiar language of chatbot safety. It works out to 3.6 seconds per decision across the system, or, from the individual “targeteer’s” perspective, one decision every 72 seconds. A thousand decisions per hour is not a sign of care. It is a throughput metric. It means the system is being judged less as a site of human evaluation than as a platform for accelerating operational tempo.
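A quick back-of-the-envelope check makes the compression visible. Only the hourly benchmark and the 72-second figure come from reporting; the team size below is derived from them for illustration, not taken from any official source.

```python
# Back-of-the-envelope arithmetic on the reported Scarlet Dragon benchmark.
# The 1,000-per-hour and 72-second figures are from reporting; the team
# size is inferred here, purely for illustration.
decisions_per_hour = 1_000
seconds_per_hour = 3_600

seconds_per_decision = seconds_per_hour / decisions_per_hour   # 3.6 s system-wide
seconds_per_targeteer = 72                                     # cited above

implied_team = seconds_per_targeteer / seconds_per_decision    # 20 people

print(f"{seconds_per_decision:.1f} seconds per decision, system-wide")
print(f"consistent only if about {implied_team:.0f} targeteers decide in parallel")
```

In other words, the 72-second figure and the hourly benchmark square only if roughly 20 people are deciding in parallel, each with barely more than a minute to weigh a potential strike.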

Once speed becomes the benchmark, everything that slows the process begins to appear defective. Deliberation becomes latency and rechecking becomes inefficiency. Hesitation, the very thing that may stop a school from being turned into a target, starts looking like something the system should overcome.

How Maven Changes Military Operations

We have described how Maven operates, but not how it changes military actions. Before platforms like Maven, analysts and operators worked across multiple systems. They pulled feeds from separate interfaces, moved information manually, cross-referenced databases and assembled approvals through visibly fragmented processes. That did not make war humane, but it did create more friction, more seams and more moments in which doubt could surface.

The Maven Smart System is the platform that came out of the Scarlet Dragon exercises, and it, not Claude, is what is being used to produce “target packages” in Iran. There are real limits to what a civilian such as myself can know about this system; what follows is assembled from publicly available information: Palantir product demos, conference presentations and instructional material produced for military users.

The Maven interface looks like a military-skinned version of corporate project management software crossed with a mapping application. What the military analyst building the target list sees is either a map layered with intelligence data or a screen organised into columns, each representing a stage of the targeting process. Individual targets move across the columns from left to right as they progress through each stage, a format borrowed from Kanban, a “lean manufacturing” workflow system developed at Toyota, and now widely used in software development.

A point on the map can be turned into a formal detection in a handful of clicks. Machine-learning systems classify objects and assign confidence scores. The software recommends courses of action, meaning it suggests what type of strike asset or weapon should be paired with a given target. The package then moves forward, depending on configuration, either to a human officer for approval or toward execution.
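To make that pattern concrete, here is a minimal, purely hypothetical sketch of a Kanban-style targeting workflow as a data structure. Every name, stage and threshold below is invented for illustration; this is the generic pattern described above, not Maven’s actual code.

```python
# Hypothetical sketch of a Kanban-style targeting workflow, as described above.
# All names, stages and thresholds are invented; none of this reflects
# Maven's real implementation.
from dataclasses import dataclass, field
from enum import Enum, auto


class Stage(Enum):
    DETECTION = auto()       # a point on the map becomes a formal detection
    VALIDATION = auto()      # the detection is checked against other sources
    RECOMMENDATION = auto()  # a strike asset is paired with the target
    APPROVAL = auto()        # a human officer signs off, or the package stalls


@dataclass
class TargetPackage:
    target_id: str
    classifier_label: str    # what a vision model says the object is
    confidence: float        # model-assigned confidence score
    stage: Stage = Stage.DETECTION
    notes: list[str] = field(default_factory=list)

    def advance(self, min_confidence: float = 0.9) -> None:
        """Move the package one column to the right on the board."""
        if self.confidence < min_confidence:
            self.notes.append("held for review: low confidence")
            return
        stages = list(Stage)
        idx = stages.index(self.stage)
        if idx < len(stages) - 1:
            self.stage = stages[idx + 1]


# A detection flows toward approval in a handful of "clicks".
pkg = TargetPackage("bldg-0147", "military facility", confidence=0.94)
for _ in range(3):
    pkg.advance()
print(pkg.stage)  # Stage.APPROVAL
```

The point of the sketch is structural: once the process is modelled this way, human oversight is just one configurable column among several, and tightening the tempo means shrinking or skipping it.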

Screenshots of this interface appear in an article published on Palantir’s blog on 5 March 2026.

On paper, all that complexity has a simple name: what Cameron Stanley, Chief Digital and AI Officer of the Department of War, described in March 2026 as an “abstraction layer”. In ordinary software environments, abstraction makes systems easier to use. In warfare, abstraction can become lethal. The more streamlined the system appears, the easier it becomes to act on representations without confronting how fragile, partial or outdated those representations may be. If you want a detailed explanation of how Maven has alliances worldwide, comment below and I will take it on as a new research project!

Friction Is (Not) a Failure

Carl von Clausewitz, the 19th-century Prussian general whose writings remain the foundation of western military thought, had a word for everything the optimisation leaves out. He called it “friction” in his book “On War”. Friction is the accumulation of uncertainty, contradiction, delay, error and imperfect knowledge that ensures operations never unfold as neatly as plans suggest. Military modernisers tend to treat friction as a problem to be solved. But friction is also where judgment survives. It is where someone notices an anomaly, rechecks a category, questions an assumption or hesitates long enough to see what the process missed.

A revealing example appears in Lt Col John Fyfe’s study of time-sensitive targeting during the Iraq war. Fyfe described how British officers in leadership roles operated with more caution and more restricted rules of engagement than their American counterparts. Their approach had what he called a “positive dampening effect” on the pace of operations. On British-led shifts, he noted, there were no friendly-fire incidents and no significant collateral damage.

Inside the logic of optimisation, that dampening effect looks like inefficiency. In reality, it may be the last remnant of accountable judgment. What some reformers increasingly call latency is often the remaining time in which somebody can still object. We should recognise that Scarlet Dragon and Maven were built on an ideology that aims to reduce precisely that space.

Encoded Bureaucracy

Palantir’s CEO Alex Karp presents software as a way to bypass slower layers of institutional mediation, embracing systems in which action flows with less human deliberation and fewer traditional bureaucratic checkpoints. In that vision, code replaces muddled human mediation with seamless, adaptive action. Sounds perfect, right? No delay, no meetings, no this-could-have-been-an-email.

But that fantasy misunderstands what bureaucracy actually is. Meetings, reports and reviews are not the essence of bureaucracy. They are often the places where human beings still interpret procedure, recognise exceptions and notice when categories no longer fit the case. They are the visible rituals through which institutions manage the fact that rules never fully explain reality.

Large organisations cannot function without human interpretation, but they also cannot openly admit how much depends on it. To do so would puncture the authority of formal procedure by exposing how much rests on discretionary judgment. So instead, judgment is displaced into numbers, scoring systems, dashboards and workflows.

The historian Theodore Porter, in his 1995 book Trust in Numbers, argued that institutions often adopt quantitative rules not because numbers are inherently truer, but because they are easier to defend. A confidence score looks objective.

In his 1984 book Forces of Production, the historian David Noble writes that when the US military and American manufacturers automated their factory floors, they consistently chose systems that were slower and more expensive but that moved decision-making away from workers and into management. Automation often serves control before it serves efficiency.

That is what militarised AI does here. It does not abolish bureaucracy; it “hardens” it into software. The kill chain remains bureaucratic, just wrapped in smoother interfaces, compressed timelines and technical language that makes interpretation look like execution.

The Charisma of AI

Part of the reason the school bombing in Iran was so quickly reframed around Claude is that large language models now exert a kind of cultural gravity. They organise public attention even when they are not the central operative force in a system.

Morgan Ames’s work on charismatic technologies is useful here. Her 2019 study, The Charisma Machine, shows that some technologies do not simply perform functions. They attract explanation, fear, attribution and discourse toward themselves. They become magnets around which public argument arranges itself. LLMs may be the strongest example of this today, serving as a medium for promoting ideologies, spreading misinformation and making fact-checking harder than ever.

Once AI becomes culturally synonymous with chatbots that ordinary people use daily for help, every event involving software, automation or machine assistance gets translated back into the language of hallucination, alignment, prompting and model behaviour. That translation is disastrous when the real issue lies elsewhere.

The Iran school bombing did not primarily raise questions about chatbot obedience. To me, it raised questions about war authorisation, targeting doctrine, database maintenance, software-mediated tempo, procurement and institutional responsibility. But those are harder questions. They are uglier, more structural and less marketable than asking whether Claude did it. In the end, the public got the easier story. And the institutions that mattered most benefited from that substitution.

Blame the Machine

This is where the article has to end, because this is where the issue stops being technical and becomes nakedly political. Systems like Maven may accelerate violence. In theory, we can hope the school bombing was a once-in-a-lifetime mistake. What should be pointed out is that these systems dissolve responsibility for human rights violations and genocide.

When civilians are killed, the machine becomes an alibi. Politicians blame the software. Contractors blame the military user. The military blames the intelligence feed, the outdated database, the speed of operations, the fog of war, or the fact that the system only surfaced options rather than making the final decision. Everyone suddenly rediscovers procedural mist. Data was incomplete. The target was misidentified. The model was not intended for that use. Human oversight remained in the loop.

And yet the strike still lands. The school is still hit. The children are still dead.

This is one of the most politically useful functions of militarised AI. It does not remove human decision-making. It wraps choices made by states, officers, engineers, contractors and executives inside layers of technical process until accountability becomes diffuse enough to survive scandal.

The more complex the system, the easier it becomes to scatter blame across the chain. A school becomes a target package. A dead child becomes a data failure. A possible war crime becomes a debate about model reliability. The language shifts just enough to spare the institutions that built, bought, authorised and operationalised the system.

This is why Big Tech’s role matters so much. These companies are not just neutral vendors offering tools that states later misuse. They are helping construct the bureaucratic and legal architecture through which lethal decisions can be made faster, operationalised at scale and then defended as technical failure rather than political choice. The interface modernises the kill chain and sanitises it. The confidence score, the dashboard, the workflow column, the ranked recommendation, all of it helps convert human judgment into procedural inevitability.

That is why blaming the machine is so convenient. A machine cannot be prosecuted. An AI model cannot serve jail time. An interface cannot stand before a tribunal the way a minister, a commander, a contractor executive or a procurement official can. The machine absorbs outrage while the chain of human responsibility remains intact enough to function, but blurred enough to evade consequence.

This time, the real question is not whether Claude hallucinated. It is not whether a chatbot should be trusted in combat. It is not whether AI needs better safeguards.

The real question is simpler and far more damning:

When private companies build the infrastructure that helps states classify, package and strike human beings, and those human beings turn out to be children in a school, who is responsible?

The software company that built the system?
The military that operated it?
The intelligence apparatus that failed to update the record?
The officials who authorised the war?
The legislators who refused to stop it?
The executives who sold acceleration as innovation?
The institutions that will now hide behind terms like error, process and incomplete data?