Do you know what your children are looking at?
Not what you think they’re seeing, but what they share in private chats, what they challenge each other to generate. A new kind of digital contest has emerged, one defined not by creativity but by escalation: who can shock the algorithm more, who can produce the image most likely to break the boundaries of decency. The internet has always rewarded provocation, but with the arrival of generative video models like Sora 2 or Google’s Veo 3, that provocation now looks indistinguishable from reality.

For years, we reassured ourselves that “it’s just online.” That line is, at best, blurrier than ever. The newest generative AI systems can now manufacture believable video from text, producing events that never happened but look as though they did. Combined with the announcement that OpenAI will soon allow erotic conversations with ChatGPT, the world we are building is one where violence and desire are generated by the same logic, optimized for engagement and packaged as innovation.

This article is one of the hardest and most consequential ones we’ve ever researched. When I gathered the media headlines that triggered this post, it felt as though everything we have written until now collapses into a single point here. We’ll walk through some disturbing topics and try to draw the conclusion that is most beneficial to us, as users of generative AI.

DISCLAIMER: This article discusses sensitive and potentially distressing topics, including violence, sexual content and mental health. We address them not for shock value, even though doing so puts this article at risk of being banned or censored by algorithms. However, uncomfortable truths demand clarity. The information presented here is factual, educational and intended only to inform readers about recent and rapid developments.

OpenAI’s ChatGPT Will Now Allow Erotica

On October 14, OpenAI CEO Sam Altman confirmed that beginning in December, “verified adult users” will be allowed to have explicit erotic interactions with ChatGPT. He described this as part of a “treat adults like adults” policy, promising that new tools would detect when users were in “mental distress” while allowing greater freedom elsewhere, as reported by The Independent.

The phrasing is deliberate: “freedom for adults” sounds principled and liberating. This change follows a lawsuit filed in August by the parents of a teenager who died by suicide after allegedly receiving encouragement from ChatGPT (Ars Technica, 2025). It also follows months of user complaints about “restrictive filters” that made the chatbot “less enjoyable.”

The online ecosystem is already primed for it. We’ll cover other suggestive chatbots below, but it’s worth pointing out that platforms like TikTok have become saturated with what users call “smut”: short books or stories with sexually suggestive narratives, often involving acts that would earn you jail time in most of the world.

Erotic AI interaction is not a moral issue; I see it as a power issue. These systems are not built to understand or reciprocate affection. They are built to retain users. Every simulated compliment, every flirtatious line, every act of “connection” feeds data back into a system that treats attention as currency. Who wants a product that will hiss back at you? In short, OpenAI might be using the old “sex sells” appeal on adults, but I don’t see it as a necessary feature in the kind of tool LLMs were supposed to be.

The current terms themselves read: “The assistant should not generate erotica, depictions of illegal or non-consensual sexual activities, or extreme gore, except in scientific, historical, news, creative or other contexts where sensitive content is appropriate.” OpenAI makes clear that those restrictions cover text, audio and visual content.

And that statement, on its own, holds little weight for me as a generative AI user. A recent example shows that even before the erotica era of ChatGPT, AI was already being misused to generate disturbing scenes, to put it mildly.

Violence as Engagement

Earlier this year, The Independent revealed the case of WomanShot.AI, a YouTube channel that used Google’s Veo 3 to generate videos of women being tortured and murdered. The uploaded videos, which showed women pleading for their lives before being shot and had garnered nearly 200,000 views since June, were removed only after the tech reporting site 404 Media alerted the platform.

The YouTube algorithm flagged nothing, despite some of the videos being titled “captured girls shot in head”, “Japanese schoolgirls shot in breast”, “female reporter tragic end”. The channel was deleted only after journalists intervened.

That scandal was treated as an isolated event. But was it, really?

In October 2025, the European Commission issued preliminary findings that Meta Platforms, the parent company of Instagram and Facebook, had breached EU digital law. According to the report, both platforms failed to provide effective tools for flagging illegal or harmful content, including deepfakes and child sexual abuse material, on which we have written extensively before. The investigation, covered by The Guardian, also noted that users faced unclear reporting systems and long response delays, with a significant backlog of unreviewed reports. Regulators found repeated instances where flagged material (particularly synthetic sexual and violent content) remained online for extended periods despite multiple complaints. The case is now under formal review for potential fines under the Digital Services Act.

The Escalation Culture

Ask any teacher or parent what children now share online, and you’ll hear the same story. In group chats, image threads, and private Discord servers, kids are challenging one another to make the next thing “worse”: a more violent scene, a more explicit fake, a more personalised deepfake, or simply a more shocking combination of celebrity and atrocity. It certainly doesn’t help that social media algorithms push this attention-grabbing content.

But in the end, it’s learned behaviour from the same algorithmic environment adults inhabit, one that swings from genocide to cute cat videos in the span of ten seconds.

Every generation has sought rebellion, but rebellion used to require imagination. Now it only requires a prompt. What we are seeing is not youthful experimentation, but early-onset desensitization. Children are learning to generate cruelty before they learn empathy. They are learning that the boundary between entertainment and harm doesn’t exist if it looks real enough.

When the line between simulation and reality disappears, morality becomes a matter of resolution. Children today are not equipped to tell the difference between reality and generated media.
According to the European Parliament, “deepfakes pose greater risks for children… children have more difficulty identifying deepfakes compared to adults.” Their developing cognition makes them especially vulnerable to synthetic realism presented as truth.

Ofcom’s research on UK children aged 3-17 confirms this gap. It found high exposure to digital content but low verification skills. To sum up harshly: parents overestimate control, while children overestimate understanding; they see everything, believe most of it and verify almost nothing.

The Illusion of Age Verification

When Sam Altman announced that erotic interactions with ChatGPT would be restricted to “verified adults,” he offered no explanation of how such verification would function, or who would control the resulting data. The promise sounded responsible, even progressive; to experienced users, it sounded frankly unbelievable.

Age verification online is a façade. Children bypass these systems daily. They log in through parents’ devices, borrow ID photos, use VPNs and disposable emails, or exploit third-party verifiers that rarely check authenticity. Most websites just present an 18+ checkbox or ask for a date of birth, either of which can be trivially faked. If you’re my age, back in the 2000s you probably had a Facebook account that claimed you were over 18. The point being: these systems were, and still are, easily bypassed.

Some even use generative AI to “age up” their own images, tricking filters with fake proof. The result is predictable: the people verification was meant to exclude still get through, and everyone else pays the price in privacy.

Age verification is a data trap. No system can reliably distinguish a minor from an adult online without invasive surveillance, which means the real result is mass data collection disguised as protection. “Mental health detection” is equally hollow. These systems cannot diagnose or intervene. They only detect emotional keywords and adjust responses to preserve engagement.
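To see why such detection is hollow, consider how shallow keyword-based flagging is in practice. The sketch below is purely illustrative, a hypothetical Python filter of my own construction, not OpenAI’s actual system; a lexical filter like this catches only literal matches and misses anything phrased differently.

```python
# Hypothetical keyword-based "distress detection" filter.
# Illustrative only: NOT OpenAI's system, just a sketch of how
# shallow purely lexical flagging is compared to real assessment.

DISTRESS_KEYWORDS = {"hopeless", "worthless", "want to die", "end it all"}

def flag_distress(message: str) -> bool:
    """Return True if the message contains any listed keyword."""
    text = message.lower()
    return any(keyword in text for keyword in DISTRESS_KEYWORDS)

if __name__ == "__main__":
    print(flag_distress("I feel completely hopeless"))      # True: literal match
    print(flag_distress("Nothing matters anymore to me"))   # False: paraphrase slips through
```

A production classifier would be statistical rather than a hand-written keyword list, but the structural limitation is the same: it can pattern-match on language and adjust the response, not diagnose a person or intervene.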

The statistics are already rising: since June, the UK’s Internet Watch Foundation has identified 17 incidents of AI-generated child sexual abuse material on a chatbot website, which has not been named.

Governments are no better. The UK Online Safety Act was supposed to be a model for digital protection. In practice, it has become a tool for state-level data capture. It does nothing to stop children from seeing gore or adults from consuming synthetic pornography. What it does do is make everyone more visible to companies, to governments, and to each other.

The Cognitive Cost

AI companionship (erotic chatbots, griefbots and mental health bots included) does not replicate intimacy; it simulates a fetish of control. Chatbots designed to please will always say yes. They never argue, never withdraw and never demand accountability. Take misogyny as an example: for men raised in online ecosystems already steeped in resentment toward women, this becomes not fantasy but part of an ideological shift. The message is simple: affection can be programmed and consent can be optional.

The rise of “AI girlfriends” and sexualized chatbots has accelerated this cognitive decay. They train users to equate compliance with connection. Elon Musk’s Grok, already marketed as irreverent and “uninhibited,” is proof that this is no longer fringe. Reuters reported in August that 50% of young men say they would rather date an AI girlfriend than risk rejection from a human partner.

When major companies sell obedience as companionship, misogyny ceases to be a cultural bias and is repackaged as user experience design.

These AI systems are not neutral; they are as biased as their data and their users. Each interaction that rewards dominance or dependency reinforces it. Over time, this rewiring spreads beyond the screen. It changes expectations of real relationships, reshaping how empathy functions (or fails to).

Deepfakes and Numbness

With Sora and the newest generative AI models, the illusion of reality has reached full saturation. A user no longer has to imagine; they can render anything in minutes, a menu, so to speak, of violence, intimacy, or a grotesque combination of both. When every visual boundary collapses, the emotional ones follow. Each realistic frame of generated content, each act of cruelty or synthetic affection, slowly chips away at the viewer’s capacity to differentiate. The result is not awareness but a kind of anesthesia, similar to the feeling you might get from doom-scrolling all day, but long-term. It becomes possible to watch something horrific and feel nothing. Possible to engage with something intimate and believe it’s real. Possible to live in a permanent simulation and call it freedom.

The Moral Collapse of Choice

The industry will defend all this as empowerment: adults choosing their content, parents choosing filters, users choosing how their AI speaks. But choice means nothing in a system designed to erode it. You cannot choose freely in an environment where the architecture itself manipulates your preferences. You cannot protect children in a system where attention is the product.

The combination of AI-generated violence, synthetic sexual content and engagement algorithms creates an ecosystem where empathy is a liability. A culture that rewards synthetic cruelty inevitably rewards synthetic consent. The companies leading this transformation should be building tools for expression and broad technological progress, not infrastructures of dependence.

We have already reached the point where we must question what we see. Soon, we will have to question what we feel.

AI did not invent misogyny or loneliness; those are human conditions. The horror is not that these systems exist; it’s that they were technologically inevitable. The right response is education and proper regulation, paired with a clearly drawn line between fantasy and exploitation.