A Florida 6th-grade teacher, David McKeown, has been sentenced to 135 years in prison for generating child sexual abuse material using artificial intelligence, according to People. He didn’t seek out pre-existing images, join the dark web or infiltrate hidden forums. He created the material himself. He took photos of his own students, ran them through generative AI models, and produced synthetic abuse images (CSAM) to share online. He also created animal pornography using his family pet. The violations were deliberate, repeated and enabled by tools that anyone can access.
This case is horrifying. The promised “democratisation” of generative AI has revealed itself as a pathway for misuse rather than public good.
DISCLAIMER: This article discusses child sexual abuse material (CSAM) and exploitation through AI. The subject matter is disturbing and may be emotionally challenging. It is presented solely for educational, investigative and public-interest purposes. Nothing here is included for shock value. The harms described are real, ongoing and documented by law-enforcement agencies and child-protection organisations.
The goal of this analysis is to expose the technological, regulatory and societal conditions that allowed these abuses to occur, and to advocate for stronger protection of vulnerable populations.
If you are affected by the issues raised, or believe a child may be at risk, please contact your local law-enforcement agency immediately.
Synthetic Abuse Is Still Abuse
For decades, law enforcement followed a consistent principle: child abuse material requires a real child, a real act and a real camera. Synthetic CSAM and AI-generated material do not follow that logic. They manufacture the illusion of crime while producing real psychological harm.
A child does not need to be touched to be violated. The moment their face becomes the substrate for an act of abuse, the damage is done. Those children will live with the knowledge that their teacher used their faces to construct abuse that never happened; except it did happen, in a form that travels farther and faster than traditional crimes ever could. Anyone can now be violated endlessly without ever being touched.
The public still clings to the idea that “no real child” means “no real crime.” I think this case proves the opposite.
A Predictable Tragedy
Anyone calling this Florida case an “isolated incident” has not been paying attention.
On 28 February 2025, Europol released the results of Operation Cumberland, an international takedown of networks producing and distributing AI-generated CSAM. Led by Danish authorities with Europol and the Joint Cybercrime Action Taskforce (J-CAT), the operation resulted in 25 arrests across 19 countries and the identification of 273 additional suspects. We covered this in a previous article which, in hindsight, reads as a grim precursor warning of exactly this kind of case.
It confirmed everything the Internet Watch Foundation had been warning about since 2023: tens of thousands of AI-generated child abuse images circulating on dark-web platforms, thousands depicting explicit criminal acts, and by mid-2024, the emergence of deepfake CSAM videos where real children’s faces were grafted onto synthetic bodies. Even Europol openly acknowledged that the ease of generating synthetic abuse has become a massive, growing obstacle for law enforcement.
Why This Crime Was Only a Matter of Time
When generating abuse requires nothing more than uploading a photo and entering a prompt, the only safeguard left is whatever resistance the technology itself provides.
The promised democratisation of generative AI was framed, and was supposed to function, as empowerment, creativity and access. What we actually received was access without proper guardrails. Every technological wave comes with unintended consequences, and it is true that we cannot predict everything. But this particular outcome was not unpredictable; it was a major red flag that companies chose to overlook in pursuit of scale, adoption and hype.
Escalation Culture
In a previous article on the normalisation of delinquent behaviour online, we discussed how children today grow up in digital spaces where escalation is currency: group chats, Discord servers, private threads. The game is simple: make something “worse”, produce something more shocking, break another filter.
While that article focused on children and adolescents, adults inhabit the same digital environment, and this Florida teacher is proof. If outdoing harm is entertainment and realism is the challenge, where did we lose empathy and rationality?
The illusion of age verification
When companies insist that “age verification” will keep children away from sexual or explicit AI systems, they rely on a fiction wrapped in legal language. Age verification online is a data pipeline. No system can reliably distinguish a minor from an adult without intrusive surveillance that most users would never accept if it were described honestly. Children get around these systems effortlessly. They log in on parents’ devices, use borrowed IDs, or simply click the button that claims they are old enough.
Most websites offer nothing more than an “I am 18+” checkbox or a self-reported age, both of which can be changed in seconds. If you are my age, you probably had a Facebook account back in the 2000s that claimed you were over 18. The point stands: these systems were, and remain, trivially bypassed.
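To make the weakness concrete, here is a minimal sketch of the self-attestation pattern most of these gates reduce to. It is written in Python with hypothetical names (is_adult is not any real product’s code, just an illustration). The arithmetic is perfectly correct; the flaw is that it verifies a claim, not a person.

```python
from datetime import date
from typing import Optional


def is_adult(claimed_birth_date: date, today: Optional[date] = None) -> bool:
    """Return True if the *claimed* birth date implies age 18 or over.

    Nothing here ties the date to the actual user: a child can type
    any year and pass, which is the core flaw of self-attestation.
    """
    today = today or date.today()
    # Standard age calculation: subtract years, then adjust down by one
    # if the birthday has not yet occurred this year.
    age = today.year - claimed_birth_date.year - (
        (today.month, today.day)
        < (claimed_birth_date.month, claimed_birth_date.day)
    )
    return age >= 18


# Typical flow: the server trusts whatever the signup form submits.
submitted = date(1990, 1, 1)  # a 12-year-old can type this just as easily
if is_adult(submitted):
    print("Access granted")  # the gate verified the claim, not the user
```

Every self-attestation gate, however polished its interface, collapses to this logic: the only input is whatever the user chooses to type.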
Erotic AI as a misuse tool
On 14 October, OpenAI CEO Sam Altman confirmed that beginning in December, “verified adult users” will be allowed to have explicit erotic interactions with ChatGPT. He described this as part of a “treat adults like adults” policy, promising that new tools would detect when users were in “mental distress” while allowing greater freedom elsewhere, as reported by The Independent.
But erotic AI systems are not companions; they are engagement engines. They simulate affection without reciprocity, boundaries or consequence. A synthetic partner cannot refuse, cannot contradict, cannot say no. It has no autonomy to defend and no emotional limits of its own. This is precisely why it becomes attractive to those who feel alienated, insecure or powerless in real relationships. For a more detailed look at how this shift influences people, including the rise of misogyny and the use of violence as engagement, I encourage you to read our previous research.
However, that dynamic does not stay on the screen. Once someone learns that intimacy can be demanded rather than negotiated, the expectation spreads. It changes how people relate to each other, and in the worst cases, it changes who they believe is entitled to whom.
For individuals already inclined to seek control rather than connection, erotic AI becomes a training ground for boundaryless desire. The Florida teacher case is not about an AI girlfriend, but it exists on the same continuum: a digital environment that rewards fantasy over responsibility.
The Florida teacher will never leave prison. But this sentence will not stop the next offender.
AI did not create loneliness or harmful impulses; those come from us. The issue is that once these systems were released without meaningful safeguards, their misuse became inevitable. What we need now is education that matches the pace of the technology and regulation that clearly separates fantasy from exploitation. Until those foundations are in place, synthetic harm will continue to outgrow our capacity to respond.