“AI will most likely lead to the end of the world, but in the meantime there will be great companies created with serious machine learning.”

Spoken casually. Delivered as irony and tech-bro banter. But coming from Sam Altman, CEO of OpenAI, in an interview on “The Future of AI”, this statement cannot be treated as speculative humour or philosophical provocation. It is language issued from a position of structural power, at a moment when OpenAI is a leading technology company and an infrastructural actor in AI use across governments, public administration, defence-adjacent systems and regulatory processes in multiple jurisdictions.

At this scale, language does not remain rhetorical. It becomes normative. It shapes policy expectations, acceptable risk thresholds and public consent.

This article examines why Altman’s wording in interviews can be inappropriate, misleading and potentially dangerous.

The Myth of Inevitable Doom

In his interviews, Altman frequently invokes existential risk: extinction, civilizational collapse, world-ending scenarios. On the surface, this rhetoric appears responsible, even cautious. To be clear, this critique does not concern OpenAI’s technical documents, which are for the most part relatively transparent; the focus is solely on the rhetoric Altman uses as the leader of one of the biggest AI companies of the moment.

In practice, inevitability language functions as moral evasion. Scholars of technological governance have long noted that framing harms as inevitable shifts responsibility away from present-day decision-makers and toward abstract futures (Winner, 1986; Jasanoff, 2016). When collapse is portrayed as unavoidable, acceleration becomes rational. This framing dissolves accountability while preserving momentum.

A leader cannot credibly argue that a technology may end the world while simultaneously expanding its deployment across public and private institutions. One of these positions must be rhetorical rather than real.

Extinction From Inside the Control Room

Altman’s remarks do not originate from an academic or speculative context. OpenAI is not an external observer describing hypothetical futures. It is an operational actor shaping the trajectory of AI deployment in real time.

OpenAI’s systems automate cognitive labour at scale, influence regulatory discourse on AI safety, and are increasingly integrated into public-sector workflows (OpenAI, 2024; UK Government, 2025). The company actively shapes narratives around labour displacement while monetizing tools that accelerate it, a pattern widely discussed in labour economics and automation literature (Acemoglu & Restrepo, 2020).

When systemic risk is reduced to casual phrasing, it normalizes harm and reframes uncertainty as an acceptable cost of progress.

Structural Power Worldwide

OpenAI is no longer just a ChatGPT tool for ordinary users. OpenAI and Altman have signed government deals worldwide, integrating the company’s systems into both civilian and military branches of the state.

In the United States, OpenAI has entered into formal agreements with federal institutions that position its models within both civilian administration and defence-adjacent experimentation. This includes direct engagement with the Department of Defense through its Chief Digital and AI Office, where OpenAI develops and prototypes advanced AI capabilities under national security frameworks (U.S. Department of Defense, 2025). While publicly described as non-weaponized, such arrangements place OpenAI within military-linked institutional ecosystems.

Simultaneously, OpenAI has partnered with the U.S. General Services Administration to make its enterprise AI systems broadly accessible across federal agencies, dramatically lowering adoption barriers and accelerating institutional reliance on a single private provider for core administrative functions (GSA, 2025). The creation of dedicated government-specific deployments such as “ChatGPT Gov” further formalizes OpenAI’s role as a public-sector infrastructure provider rather than a neutral software vendor (OpenAI, 2025).

In the United Kingdom, OpenAI signed a strategic Memorandum of Understanding with the UK government to explore AI integration across public services, including justice and administrative systems (UK Government, 2025). This partnership has already translated into operational deployment, with thousands of civil servants in the Ministry of Justice using OpenAI’s systems in routine legal-administrative workflows (OpenAI, 2025).

Within the European Union, OpenAI is actively involved in shaping the regulatory environment governing advanced AI. The company has committed to the EU’s voluntary Code of Practice for general-purpose AI under the forthcoming AI Act, positioning itself not merely as a regulated entity but as a participant in defining compliance norms (European Commission, 2024). Parallel engagements with EU institutions and member states focus on sovereign AI deployments, data residency solutions, and public-sector integration aligned with European legal frameworks (OpenAI, 2025).

Once a private AI company becomes infrastructural to the state, executive language becomes policy-adjacent whether intended or not.

Regulation, Only If I Like It

Altman frequently presents himself as an advocate for AI regulation, calling for guardrails, oversight, and international coordination (Altman testimony, U.S. Senate, 2023). OpenAI consistently frames itself as a responsible actor willing to accept constraint.

This support, however, is conditional.

The regulatory approaches endorsed by OpenAI emphasize high compliance thresholds, licensing regimes for large-scale models and centralized oversight structures that favor incumbent actors with substantial capital, compute resources, and legal capacity. This dynamic has been widely discussed in scholarship on regulatory capture and “safety-as-barrier-to-entry” mechanisms in emerging technologies (Khan, 2017; Zuboff, 2019).

In this configuration, regulation does not function as democratic control; it becomes a moat.

The contradiction becomes sharper when considered alongside OpenAI’s institutional embedding. Regulation is framed as a future necessity, while deployment proceeds as a present fact. Risk is acknowledged rhetorically, deferred procedurally and externalized institutionally.

Altman’s existential-risk language thus performs a dual function: it elevates OpenAI as a uniquely responsible steward of dangerous technology while positioning its leadership as indispensable to preventing catastrophe.

Military and Civilian Systems

I seem to love talking about these connections. A foundational principle of democratic governance and international humanitarian law is the institutional separation between military and civilian systems (Geneva Conventions; ICRC, 2020). This separation exists because their objectives, risk tolerances, accountability mechanisms and ethical constraints are fundamentally incompatible.

AI systems developed, tested, or optimized within defence or national security frameworks cannot be treated as neutral when later deployed in civilian contexts. Once a technology is shaped by military-adjacent objectives such as strategic advantage, intelligence processing, or adversarial optimization, its migration into public administration, justice systems, education, or welfare infrastructures constitutes a category error (UNIDIR, 2025).

OpenAI’s simultaneous engagement with defence institutions and civilian governments collapses this boundary. This blurring of military and civilian infrastructures is not without precedent. Similar concerns have been raised in relation to Palantir Technologies, whose long-standing integration with military, intelligence, and law-enforcement agencies has repeatedly expanded into civilian domains such as healthcare, social services, border control, and public administration. In previous analyses of what Palantir actually does, this pattern has been identified as a case study in how tools developed under security and defence logics migrate into civilian governance, bringing with them assumptions of surveillance, risk scoring, and population management rather than democratic accountability.

Even when deployments are described as “administrative” or “non-weaponized,” the underlying systems, training paradigms, and institutional incentives remain shared. Civilian governance inherits tools shaped by logics of control and efficiency rather than democratic accountability or proportionality.

This convergence enables function creep, normalizes military-grade data processing in civilian life, expands surveillance, and reduces transparency under the pretext of security or proprietary protection.

A technology cannot be simultaneously framed as potentially world-ending, embedded in defence infrastructures and marketed as a benign productivity tool for civil servants. These roles are mutually exclusive.

With great power comes great responsibility

That is Spider-Man’s defining motto, spoken by his Uncle Ben to teach Peter Parker that his extraordinary abilities obligate him to use them for the good of others, not just himself. Funnily enough, it is the first thing I thought of on seeing the Altman doomsday quote with which this article began.

Altman’s influence is not merely rhetorical, nor is his power fictional. As the leader of one of the most powerful AI companies in existence, his statements shape markets, policy discourse, institutional behaviour and personal decisions.

Power of this magnitude carries responsibilities that extend beyond personal expression or speculative commentary. Political theory and governance ethics consistently emphasize that actors with infrastructural power have heightened duties of care, clarity and restraint (Rawls, 1971; Floridi, 2019).

When leaders with institutional reach speak casually about existential catastrophe, they condition societies to accept extreme risk as ambient, unavoidable, and compatible with ongoing commercialization. Altman does not have a God complex, though; he seems to have embraced the tragedy, telling Jimmy Fallon on NBC’s The Tonight Show: “I cannot imagine having gone through figuring out how to raise a newborn without ChatGPT.” That normalization is itself a form of harm.

Language as Governance

Responsibility at this scale does not mean silence. For impactful leaders, it means coherence between stated risks and operational behaviour. It means acknowledging that words, when backed by power, function as governance signals rather than personal opinion.

When AI companies become embedded within the state, executive language ceases to be casual. It becomes part of governance.

Joking about civilizational collapse while embedding systems into public institutions is not honesty; it is fearmongering, and the public does not deserve it.

If AI is genuinely world-ending, deployment is unethical.
If AI is manageable, apocalypse rhetoric is misleading.
If both are claimed simultaneously, the public deserves clarity, not irony.

At this scale, words are not commentary; they are acts that carry political weight.

And they should be judged accordingly.