Anthropic, OpenAI, Google, and xAI in the New Arms Race for Mass Surveillance and Military AI

Artificial intelligence companies spend enormous effort convincing the public they are building tools for creativity, productivity and scientific progress. Their announcements speak of education, medicine, accessibility and innovation. Their demos show assistants helping students write essays or developers debug code.

War rarely appears in the keynote slides. Yet governments are rushing to integrate frontier AI systems into defense institutions, intelligence analysis pipelines and national security infrastructure. The same models designed for civilian productivity are now being evaluated for classified environments, operational planning assistance and large-scale information analysis.

And in February 2026, something unusual occurred. Anthropic, one of the leading frontier AI companies already working on U.S. national security systems, publicly refused a Pentagon request to remove safeguards from its AI models, on the grounds that they could be used for mass surveillance and autonomous weapons.

In this research, I look into which AI companies are willing to supply militaries with potential tools for mass surveillance and to merge civilian and military software.

Disclaimer: This article is based entirely on publicly available information, with all sources cited and linked throughout the text. The analysis reflects the author’s interpretation of reported events, statements and documents. Some sections contain personal commentary intended to contextualize the broader implications of these developments. The article discusses sensitive topics including surveillance technologies, military applications of artificial intelligence and ongoing geopolitical conflicts, and is presented for informational and analytical purposes.

How Anthropic Became a Defense Contractor

Let’s start this story with Anthropic, which already represents several years of cooperation between frontier AI developers and U.S. national security institutions. Anthropic, backed by Google and Amazon, has a contract with the Department of Defense worth up to $200 million.

According to CEO Dario Amodei, Anthropic proactively deployed its Claude models across the Department of Defense and the intelligence community. The company was the first frontier AI developer to operate models inside classified U.S. government networks, the first deployed at National Laboratories, and an early provider of customized models tailored for national security customers. Claude systems were already being used for intelligence analysis, modeling and simulation, operational planning, cyber operations, and other mission-critical functions supporting defense agencies.

Anthropic also positioned itself politically as aligned with U.S. strategic interests. Amodei stated the company had restricted access for entities linked to the Chinese Communist Party (CCP), shut down CCP-sponsored cyberattacks that attempted to abuse Claude, supported export controls on advanced chips and actively worked to counter many other cyberattacks attempting to exploit its models. The company explicitly framed its role as helping democratic states maintain a technological advantage while countering cyber and espionage activity.

The Timeline of Anthropic’s Conflict with the Pentagon

The Pentagon’s dispute with Anthropic stems from the AI startup’s refusal to remove safeguards that prevent its technology from being used to target weapons autonomously or to conduct surveillance inside the United States.

Reuters reported that the Pentagon threatened to terminate Anthropic’s participation, designate the company a “supply chain risk,” and potentially invoke the Defense Production Act to compel removal of safeguards. The disagreement placed a contract valued at up to $200 million at risk and introduced an unprecedented situation in which a U.S. AI company risked being treated administratively like a national security liability despite actively supporting defense operations.

Call me out on exaggerating here, but I feel this has immense value: a powerful, influential company declining, and transparently and truthfully explaining why it chooses not to go down in history as a helper to an ideology it does not agree with.

For the record, Pentagon officials publicly stated they had no intention of conducting mass surveillance of Americans or deploying fully autonomous weapons. Their position was procedural: contractors supporting national security must allow lawful government use rather than impose company-defined limitations. While this position holds up on paper, it does not seem to account for recent cases and behaviour in reality.

When Anthropic refused, the dispute escalated. In a statement on 26 February, Anthropic CEO Dario Amodei set out the company’s opposition to the Pentagon using its AI models for mass domestic surveillance or to power fully autonomous weapons, the latter because “frontier AI systems are simply not reliable enough.”

President Donald Trump subsequently ordered federal agencies to stop using Anthropic technology within six months, intensifying the confrontation over military AI governance.

While this article was in draft on 10 February, it received an update: Anthropic filed two federal lawsuits on Monday, 9 February, challenging the Trump administration’s decision to label the company a national security “supply chain risk” and cut off use of its technology across the federal government, arguing that the action was unlawful retaliation for its refusal to remove safeguards preventing its AI systems from being used for autonomous weapons or domestic surveillance.

Analysing Anthropic’s Refusal Statement

Let’s break down Anthropic CEO Amodei’s refusal and public statements. To be clear, Anthropic did not reject military AI development. Instead, it argued, on two grounds, that certain uses exceed what current AI systems can safely support.

Firstly, on mass domestic surveillance, Anthropic’s concern centred on scale rather than legality. Existing law allows government agencies to purchase commercial datasets containing location histories, browsing activity, and social association information. Individually, these datasets appear limited. Frontier AI systems can aggregate them automatically into detailed behavioural profiles of individuals or entire populations.

Anthropic argued that AI transforms surveillance from targeted investigation into automated population analysis, creating risks to civil liberties that existing legal frameworks were not designed to address.
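To make the aggregation argument concrete, here is a minimal sketch of how separately purchased, individually “limited” datasets can be joined into per-person profiles. Every dataset, field name and value below is invented for illustration; real brokered data is far larger and messier, but the mechanics are this simple.

```python
# Hypothetical sketch, not any company's real pipeline: three separately
# purchased "limited" datasets are joined into per-person profiles.
import pandas as pd

# Dataset 1: location pings from an ad-tech broker (invented values)
locations = pd.DataFrame({
    "device_id": ["d1", "d1", "d2"],
    "lat": [52.52, 52.53, 48.85],
    "lon": [13.40, 13.41, 2.35],
    "ts": pd.to_datetime(["2026-01-01 08:00", "2026-01-01 18:00",
                          "2026-01-01 09:00"]),
})

# Dataset 2: browsing-interest categories from a data-management platform
browsing = pd.DataFrame({
    "device_id": ["d1", "d2"],
    "top_categories": [["news", "activism"], ["sports"]],
})

# Dataset 3: device-to-person resolution from an identity-graph vendor
identity = pd.DataFrame({
    "device_id": ["d1", "d2"],
    "person_id": ["p42", "p7"],
})

# The "aggregation" step: two trivial joins and a group-by produce a
# behavioural profile that no single dataset contained on its own.
profile = (
    locations.merge(identity, on="device_id")
             .merge(browsing, on="device_id")
             .groupby("person_id")
             .agg(pings_observed=("ts", "count"),
                  interests=("top_categories", "first"))
)
print(profile)
```

The point is not sophistication. Even this toy join turns three “harmless” purchases into a movement-and-interest profile, and a frontier AI system automates the same step across millions of records continuously.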

Secondly, on autonomous weapons, the objection concerned technical reliability. Amodei stated that frontier AI systems behave unpredictably in unfamiliar scenarios, making them unsuitable for life-and-death targeting decisions. Failures could produce friendly-fire incidents, mission errors, or unintended escalation. Anthropic offered to collaborate with the Pentagon on research to improve reliability but refused deployment without human oversight safeguards.

The company emphasized it had never attempted to control military operations and accepted that governments ultimately make defense decisions. Its refusal was framed as a product safety judgment rather than a political protest. In the statement, Anthropic says it remains ready to continue working to support the national security of the United States, with its two requested safeguards in place.

The direct quote from Amodei’s statement that made the deepest impact on me is this one: “using these systems for mass domestic surveillance is incompatible with democratic values.” Personally, this line is the reason this article exists. When an AI company, a long-term contractor of the military and the government, draws such a boundary, it gives hope that we can use AI technology for the greater good and for science, not destruction.

OpenAI Jumping to Fill the Gap

If you don’t find what you want in one store, you will visit another one, right? As Anthropic resisted the Pentagon’s demands, OpenAI jumped on the opportunity, moving forward with its own agreement allowing deployment of its AI models on classified “Department of War” networks.

CEO Sam Altman publicly defended the deal, stating that OpenAI’s mission required balancing safety with cooperation with democratic governments. He emphasized contractual principles prohibiting domestic mass surveillance and maintaining human responsibility over use of force.

Altman argued the Pentagon accepted OpenAI because the company relied on legal frameworks rather than company-specific prohibitions. According to him, Anthropic appeared focused on embedding restrictions directly into contracts, while OpenAI was comfortable deferring to existing law and policy.

The timing could not have been worse, or more politically charged. The agreement was announced shortly after U.S. and Israeli strikes against Iran, and immediately after federal agencies were ordered to phase out Anthropic systems. Altman stated OpenAI accelerated negotiations partly to “de-escalate the situation” and ensure the “Department of War” retained an AI partner.

He acknowledged that OpenAI initially planned only non-classified cooperation but expanded into classified work as discussions intensified. Altman also admitted: “I have accepted that the US military is going to do some amount of surveillance on foreigners, and I know foreign governments try to do it to us, but I still don’t like it”. He goes on to say that, in the end, it is not up to him. Personally, I think there is a choice: if you are the CEO of a company, you choose whether or not to sign a deal.

The agreement between OpenAI and the “Department of War” surely tells us one thing: a hybrid technological ecosystem, in which civilian and military software are developed by the same actors, on the same architectures, and often updated through the same model releases, is becoming unavoidable. The dependency between AI companies and governments is mutual: the latter rely on commercial AI providers for strategic capability, while the companies become structurally tied to state priorities through contracts, infrastructure integration, profit and security partnerships.

OpenAI’s misuse history

A few months ago, we covered a scandal involving smart AI toys with surveillance issues, all while the toys gave kids BDSM advice and knife recommendations. All of the toys used adult-grade chatbots, one of them powered by a GPT model. OpenAI does not allow users under the age of 13 to interact with its models. Yet OpenAI allows toy companies to integrate those same models into toys that are marketed to children well under 13. This inconsistency exposes the weakness of self-imposed rules.

If we can’t trust a company to keep toys innocent and safe, how can we trust it with surveilling innocent civilians for military purposes?

Military-Civilian Infrastructure Convergence

Responding to questions about whether governments might eventually nationalize AI development, OpenAI’s Altman said he had considered the possibility but viewed it as unlikely, adding that “a close partnership between governments and the companies building this technology is super important.”

Considering his impact on society, some of Altman’s statements seem wildly inappropriate, such as “takes a lot of energy to train a human,” and the rhetoric of “AI will most likely lead to the end of the world, but in the meantime, there will be great companies created with serious machine learning.” A leader like that cannot credibly argue that a technology may end the world while simultaneously expanding its deployment across public, private and, especially, military institutions.

This seems to be a recurring topic in almost everything I write these days, to the point where I ask myself whether I bore my readers with constant warnings about the rising trend of normalising war with memes, AI slop, Reddit jokes, and so on. You can find an extensive list of research on this topic in this article list.

To sum up and put the problem in simple terms: governments increasingly depend on private technology firms for advanced AI capabilities, while those firms rely on government partnerships for funding, infrastructure access and geopolitical influence. It also means decisions made by a handful of AI companies about model deployment, safeguards and contracts can have direct consequences for national security systems and surveillance capabilities.

Governments already collect vast quantities of data: location signals, commercial metadata, public records, communications traffic and imagery. Frontier AI systems dramatically increase the ability to combine and analyse these datasets at scale. What previously required large teams of analysts can now be automated, allowing continuous monitoring, behavioural pattern detection and population-level profiling.

Once personal data from hospitals, borders, police departments and military sensors flows into the same analytical core, a platform designed for war can be repurposed for policing, immigration and public health, and the distinction between “external defence” and “internal governance” disappears. This could be surveillance at a scale we have not begun to process, not because it is hidden, but because it is integrated and packaged as national identity and fear.

If you think OpenAI is the first AI company holding a billion-dollar government contract in this space, keep reading.

Google did it before LLMs were popular

The idea that AI companies are only now being used for military applications is false; Google crossed this boundary years ago. In 2017, the U.S. Department of Defense launched Project Maven, an initiative designed to integrate machine learning into military intelligence workflows, with Google providing AI for analyzing drone footage.

In 2018, thousands of Google employees protested the company’s involvement in Project Maven, resulting in staff resignations and an internal petition. Employees argued against using AI for warfare, leading Google to announce it would not renew the contract and would establish ethical AI principles.

After Google dropped the project, Palantir jumped on the opportunity in 2019, when the U.S. Army awarded it a $480 million contract to expand Maven’s capabilities and make the system accessible across military branches, according to Reuters. Maven pulls in feeds from ISR (intelligence, surveillance and reconnaissance) platforms, applies computer vision to detect objects, movements and patterns, then presents analysts with flagged results.
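Based on that public description, a minimal sketch of such a detect-and-flag loop might look like the following. The detector here is a stand-in stub, and every label, score and threshold is invented; a real system would run a trained computer-vision model over actual ISR video frames.

```python
# Hypothetical sketch of a Maven-style detect-and-flag loop. The
# detector below is a stand-in stub; a real system would run a trained
# computer-vision model over actual video frames.
from dataclasses import dataclass

@dataclass
class Detection:
    frame_id: int
    label: str         # e.g. "vehicle", "person" (invented labels)
    confidence: float  # model score in [0, 1]

def detect_objects(frame_id: int) -> list[Detection]:
    """Stand-in for a computer-vision model applied to one frame."""
    # Invented outputs purely for illustration.
    return [Detection(frame_id, "vehicle", 0.91),
            Detection(frame_id, "person", 0.42)]

def flag_for_analyst(detections: list[Detection],
                     threshold: float = 0.8) -> list[Detection]:
    """Surface only high-confidence hits; a human analyst reviews them."""
    return [d for d in detections if d.confidence >= threshold]

# Simulate a short feed and collect what an analyst would actually see.
review_queue = []
for frame_id in range(3):
    review_queue.extend(flag_for_analyst(detect_objects(frame_id)))

for d in review_queue:
    print(f"frame {d.frame_id}: {d.label} ({d.confidence:.2f}) -> review")
```

Note the design choice: the model only filters and flags, and a human analyst makes the call on what the flag means. That human-in-the-loop stage is exactly the kind of safeguard the disputes in this article revolve around.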

That contract, allowing the free flow of civilian information to be converted into military-use data, went to the same Palantir we have covered in a separate article (What does Palantir actually do?), whose CEO, Alex Karp, publicly, excitedly and proudly boasts about this work in interviews, even with his stockholders.

Palantir is a story of surveillance, cult-like marketing, war profiteering and dystopian ideologies, so I highly encourage you to read the full analysis of their origins and how they reached $4.2 billion in revenue in 2025.

But Google hadn’t left the program entirely at that point. In March 2019, The Intercept obtained an email from Google executive Kent Walker in which he said that an unnamed technology company was taking over its work on the program and would use “off-the-shelf Google Cloud Platform (basic compute service, rather than Cloud AI or other Cloud Services) to support some workloads.”

A week later, The Intercept reported that Anduril Industries had won its own contract to work on AI-powered virtual reality technology for Project Maven. Anduril Industries’ CEO Brian Schimpf is the former director of engineering at Palantir. Full circle back, I guess.

Google Today, Amazon and a Touch of Genocide

In April 2021, the Israeli Finance Ministry announced a cloud computing contract between the Israeli government and the American technology companies Google and Amazon, known as Project Nimbus.

Under the $1.2 billion contract, Google Cloud Platform and Amazon Web Services provide Israeli government agencies with cloud computing services, including artificial intelligence and machine learning.

Under the contract, Google and Amazon will establish local cloud sites that will “keep information within Israel’s borders under strict security guidelines.” According to a Google spokesperson, the contract covers workloads related to “finance, healthcare, transportation, and education” and does not deal with highly sensitive or classified information.

Although Project Nimbus’ specific mission has not been revealed at the time of writing, Google Cloud Platform’s AI tools could give the Israeli military and security services capabilities for facial detection, automated image categorization, object tracking and sentiment analysis, tools that have previously been used by U.S. Customs and Border Protection for border surveillance.
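To illustrate how off-the-shelf these capabilities are, here is a minimal sketch using Google Cloud’s publicly documented Vision API client for Python. This is the generic commercial API available to any customer, not anything specific to Project Nimbus, and the image path is a placeholder.

```python
# Minimal sketch using the public google-cloud-vision client library.
# The point: face detection and image categorization are commodity
# cloud services, a few lines of code away for any paying customer.
# (pip install google-cloud-vision; requires GCP credentials.)
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Placeholder path; any image file works.
with open("example.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Face detection: bounding boxes plus inferred emotional "likelihoods".
faces = client.face_detection(image=image).face_annotations
for face in faces:
    print("face found, joy likelihood:", face.joy_likelihood)

# Label detection: automated image categorization.
labels = client.label_detection(image=image).label_annotations
for label in labels:
    print(f"{label.description}: {label.score:.2f}")
```

Sentiment analysis and object tracking are similarly packaged in Google’s Natural Language and Video Intelligence APIs. In other words, the barrier to these capabilities is contractual and political, not technical.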

Whatever the stated scope, the Israeli military and defense apparatus have been stakeholders from the beginning of the contract. The tech companies are contractually forbidden from denying service to any particular entity of the Israeli government, including its military. The terms Israel set for the project also contractually prevent Amazon and Google from halting services due to boycott pressure from their employees. It seems that what is signed must be carried out at all costs.

And the backlash and boycott did erupt. Employees were pushed out, like Ariel Koren, a marketing manager for Google’s educational products and an outspoken opponent of the project, who was given the ultimatum of moving to São Paulo within 17 days or losing her job.

In April 2025, around 30 Google employees were fired immediately after protesting under the #NoTechForApartheid banner. Citing an article in +972 Magazine, the protesters expressed concerns over Israel’s current use of AI-assisted targeting in the Gaza Strip: a program named “The Gospel” categorizes buildings as military bases, while programs called “Lavender” and “Where’s Daddy” identify and falsely classify Palestinian civilians as “terrorists” and track their movements for target selection. Google claims everyone fired had been “directly involved in disruptive activity,” and that the dismissals were not retaliation.

If you still have questions about whether Google could have known in 2021 about the war or the human rights violations now happening in the Israeli-Palestinian conflict, read this New York Times article. Four months before signing the contract in 2021, officials at the company had worried that signing the deal, called Project Nimbus, would harm its reputation.

In 2024, Google’s lawyers, policy team employees and outside consultants, who were asked to assess the risks of the agreement, wrote that since “sensitive customers” like Israel’s Ministry of Defense and the Israeli Security Agency were included in the contract, “Google Cloud services could be used for, or linked to, the facilitation of human rights violations, including Israeli activity in the West Bank.”

The company anticipated total revenue of $1.26 billion over seven years, including business from Israeli local governments and some of the country’s health care providers, according to documents obtained by The New York Times. It was a tiny amount of money for a giant company, but it gave Google credibility with military and intelligence customers, the very customers its employees had opposed and boycotted.

Musk’s Political Power with xAI

Lastly, let’s talk about xAI, Elon Musk’s artificial intelligence company. While much of the public conversation around xAI focuses on Grok as a consumer chatbot integrated into the X platform, its political and strategic positioning cannot be separated from Musk’s broader influence. His relationships with U.S. political leadership, including President Donald Trump, place xAI in a position where government partnerships could easily become part of the company’s long-term trajectory.

This context matters because X itself has become one of the most influential information distribution platforms in the world, where some presidents and government officials post their official statements, from condemnations to declarations of support. Numerous analyses have found X to be biased, and its moderation policies and algorithmic amplification have been widely criticized for enabling propaganda, misinformation and politically motivated narratives, ranging from right-wing disinformation networks to violence and racism.

The most recent scandal coming from Grok is the creation of non-consensual sexual imagery and the “undressing” of women, including minors. Musk joked about it until digital rights organizations and lawmakers reacted loudly; you can read the full story in our previous research. After worldwide restrictions and bans in several countries, Musk and X responded with public assurances, announcing new safeguards and restrictions on Grok. According to my further research and an investigation from Reuters, Grok still generates such content despite the applied restrictions.

Amid all this controversy, the U.S. Department of Defense announced an agreement with xAI to integrate its AI systems into GenAI.mil. The system is intended to serve approximately 3 million U.S. government personnel, with an initial rollout planned for early 2026.

The system targets Impact Level 5, permitting the handling of controlled unclassified information. According to the Department of Defense, users will receive insights derived from the X platform, described as providing a “significant information advantage.” xAI emphasizes that its tools will support administrative tasks and critical mission use at all levels of government.

To quote the Department of War: “users will also gain access to real‑time global insights from the X platform, providing War Department personnel with a decisive information advantage.”

My questions here are these: civilian and military personnel will get a live stream of propaganda from a platform known for its bias? And the same AI that regulators worldwide are scrambling to contain, due to its role in generating sexualized, misogynistic and racist content, is being positioned as a trusted component of U.S. government infrastructure?

As a society, we find it hard to recognise the systems that change our lives until those systems are already normal and in everyday use. If we want to avoid normalizing surveillance infrastructures as everyday technology, the responsibility begins with us.

Preventing normalised mass surveillance begins with informed citizens, people willing to understand these technologies, keep a clear head, verify information and talk openly about their consequences. The next step is finding the courage to speak out clearly, publicly and without hesitation.