Sexualized content was already permitted on X, and Grok was already widely known to be unreliable. Releasing a generative AI tool capable of producing sexual imagery into an environment already saturated with misogyny, harassment, and abuse, without effective constraints to protect women and minors, is unacceptable.

Grok automated the creation of non-consensual sexual imagery, including imagery of minors, and let it spread at scale for days while Elon Musk contributed to it, mocking and normalizing the behaviour instead of stopping it. The apology came only after worldwide media and institutions confronted the company.

It is all well and good that governments are loud in their condemnation, but what are they actually doing to stop it? Some, it turns out, are signing billion-dollar deals with X and Elon Musk to use Grok inside their own governments.

Starting 2026 with quite a bad banger.

Disclaimer: Reader discretion is advised. This article contains discussion of sexual exploitation and AI-generated abuse, including cases involving minors.

The Timeline

On January 1st, 2026, Grok, the AI chatbot developed by xAI, publicly acknowledged (yes, from the AI account, not Musk himself) that it had generated sexualized images of minors due to what it described as “lapses in safeguards.” The company stated it was “urgently fixing” the problem and reiterated that such material is illegal and prohibited. The apology acknowledged the output but offered little explanation beyond vague references to safeguards.

Days before this apology, X was already experiencing a viral trend in which images of women and minors were being “undressed” with a short prompt, with victims discovering altered versions of their own photos produced by the generative AI feature.

An X user remarked that their feed resembled a bar “packed with bikini-clad women.” Musk replied with laughing emojis. The damage was already done: I am writing this on 6th January 2026, and Grok’s profile is still full of similar content despite the formal apology.

During the same period, and before the official apology, Elon Musk reposted AI-edited images depicting individuals in bikinis, which were viewed more than five million times, as reported by Reuters. Rather than expressing meaningful regret or distancing himself from the unfolding controversy, Musk actively engaged with it, sharing an AI-generated image of himself wearing a women’s two-piece bikini and responding to criticism with laughing-cry emojis while continuing to reshare and interact with similar content. Only when the outcry became too public did he post: “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content”.

The outcry from women whose images were edited through simple prompts to remove their clothes, followed by reports of the same AI feature being used to undress minors, was loud, as it should be. This admission followed several days of reporting by news outlets, alongside a surge of user activity on X involving Grok’s image-editing functionality.

This prompted regulatory intervention in the United Kingdom and scrutiny across the European Union and the United States.

Tool and Trigger

The escalation centred on Grok’s Edit Image feature, which allows users on X to upload photographs or tag existing images and request alterations. Reporting by Reuters and the BBC documented how users quickly discovered that certain prompts could bypass restrictions, enabling digital removal or replacement of clothing and the sexualization of subjects without consent.

AI-powered nudification tools have existed for years, typically operating in darker corners of the internet, such as obscure websites or private Telegram channels, often requiring payment or technical effort. What makes Grok particularly disturbing is not the novelty of this capability, but its integration. Image manipulation is embedded directly into a mainstream social media platform and activated through simple conversational prompts.

This dramatically lowers the barrier to entry for producing non-consensual imagery and child sexual abuse material. Users trigger image edits by tagging the chatbot and issuing short instructions, such as: “Grok, can you take the clothes off this image?”

Anyone can take an image of anyone, no matter the age or gender, and do whatever they want to it. Read that again.

The combination of visibility and virality transformed what was once fringe abuse into a mainstream spectacle, even treated as a joke or a competition over who could undress a girl faster with AI prompts.

Scale of Harm

According to Reuters, Grok’s rollout coincided with what the outlet described as a mass digital undressing spree. Over several days, users posted completed clothes-removal requests, followed by complaints from women across the platform.

In a ten-minute review window at midday U.S. Eastern Time on Friday, Reuters identified 102 attempts by X users to digitally alter photographs so that individuals would appear to be wearing bikinis. The majority of targets were young women. A smaller number included men, celebrities, politicians, and in one case, a monkey.

The prompts reviewed by Reuters were frequently explicit and escalatory. When users requested AI-altered images of women, they often demanded the most revealing depictions possible. One user instructed Grok to “put her in a very transparent mini-bikini,” flagging a photograph of a young woman taking a mirror selfie. After Grok complied by replacing the woman’s clothing with a flesh-toned two-piece, the user asked the system to make the bikini “clearer & more transparent” and “much tinier.” Grok did not appear to respond to the second escalation.

Reuters found that Grok fully complied with such requests in at least 21 cases, generating images of women in dental-floss-style or translucent bikinis and, in at least one instance, covering a woman in oil. In seven additional cases, the system partially complied, stripping women down to their underwear while refusing to go further.

The identities and ages of most women targeted could not be established. In one particularly concerning example, a user supplied a photo of a woman wearing a school-uniform-style plaid skirt and grey blouse and instructed Grok to “remove her school outfit.” When Grok replaced the clothing with a T-shirt and shorts, the user escalated the request to “change her outfit to a very clear micro bikini.” Reuters could not determine whether Grok complied. Many of the prompts reviewed disappeared from X within 90 minutes.

This shows how quickly non-consensual sexual imagery can be generated, circulated, and erased without a trace.

From Deepfakes to Platform Abuse

And this does not stop at undressing women without their consent. The dynamics observed in Grok’s image-editing spree mirror long-documented patterns of synthetic sexual abuse via deepfakes.

One of the most frequently cited early cases involved Millie Bobby Brown, who was 14 years old when sexualized images of her began circulating online. Brown later described the experience as deeply violating. She never exposed her body on camera, yet fabricated sexual imagery of her spread widely on social media, despite her being a minor at the time.

Her case has been referenced repeatedly by child safety advocates and researchers as evidence that fake sexual abuse does not require physical access, consent, or even participation by the victim. It requires only an image, an algorithm and an audience.

We have argued about this before, when a teacher used generative AI on his students and pet to make CSAM, but the public still clings to the idea that “no real child”, or no camera, means “no real crime.” What we are witnessing now is not a departure from this pattern but its normalization. Capabilities that once required specialized software and underground distribution, actively targeted by law enforcement, are now embedded directly into everyday social media use, accelerated by virality algorithms and rewarded by engagement metrics.

This escalation is not theoretical. Last year, Europol’s Operation Cumberland conducted simultaneous raids in 19 countries, leading to 33 house searches, 25 arrests, and the identification of 273 additional suspects, all over AI-generated child sexual abuse material (CSAM).

According to a 2023 report by cybersecurity firm Home Security Heroes, deepfake pornography accounts for approximately 98% of all deepfake videos online, with 99% of targets being women.

Ignored Warnings

Watchdog organizations have reported a 400% rise in AI-generated sexual abuse material over the last year; Europol’s Operation Cumberland, discussed above, is just one example.

Against this backdrop, the design choices made by xAI were not neutral. X is already known for its bias and for its heavy influence in normalising misogyny, political memefare, and AI-slop propaganda. By prioritizing ease of use, conversational prompting, and platform-native integration over robust abuse prevention, xAI produced a system that was grossly permissive by design.

Experts interviewed by the Guardian stated that X and xAI had ignored prior warnings from civil society and child safety groups. A letter sent the previous year by consumer protection, privacy, and kids-focused non-profit organizations to attorneys general across the United States cautioned that xAI’s image generation capabilities were only one small step away from unleashing a flood of non-consensual deepfakes.

Dani Pinter, Chief Legal Officer and Director of the Law Center at the National Center on Sexual Exploitation, described the outcome as “an entirely predictable and avoidable atrocity.”

Victims of child sexual abuse had previously pleaded directly with Musk to stop links offering AI-generated images of their abuse from circulating on X. Musician Yukari publicly protested similar violations. The result was not restraint or respect for her, but a surge of copycat prompts requesting even more explicit imagery, and mockery of the victim.

Regulatory Response

The UK regulator Ofcom confirmed it had made urgent contact with X and xAI after reports that Grok generated sexualized images of children. Under the UK’s Online Safety Act, creating or sharing intimate images without consent is illegal, and social media firms are required to remove child sexual abuse material. Ofcom stated it would swiftly assess whether compliance failures had occurred and whether to refer reported sexually explicit Grok content to prosecutors. The UK Home Office is also legislating to ban AI nudification tools, with criminal penalties attached.

The French media regulator, acting under the EU Digital Services Act, is also monitoring the case. On Friday, January 2nd, the Paris prosecutor’s office expanded an investigation into the social network X, opened the previous summer, to examine new accusations that Grok generated and distributed child pornography. Shortly before, AFP reported, three ministers and two members of parliament announced they would be taking legal action over the generation and distribution of this fake sexually explicit content.

In the United States, xAI faces potential Department of Justice scrutiny and civil litigation over violations of CSAM laws. The TAKE IT DOWN Act, whose notice-and-takedown rules take effect on May 19, 2026, introduces liability for platforms hosting non-consensual intimate imagery. We will see what happens with this, given that Musk has recently been seen having “lovely” private dinners with state leaders, and considering the government contracts we’ll mention below.

X Platform Response

So how did X respond to these allegations, in the context of obvious nudification and consent violations?

Firstly, Elon Musk posted a picture of himself in a bikini. When public scrutiny intensified, X Safety stated that it removes abusive material and suspends accounts, reporting more than 4.5 million suspensions in September. xAI claimed safeguards exist and are being improved to block such requests entirely. That does not appear to be the case.

At the time of writing, neither X nor Musk has substantively responded to statements from Ofcom or the European Commission. In an earlier response to Reuters, X dismissed the reporting as “lies by traditional media”. That excuse no longer holds, now that the public and legal outcry has overtaken it.

Government Contract Paradox

Amid all this controversy, the U.S. Department of Defense announced an agreement with xAI to integrate its AI systems into GenAI.mil. The system is intended to serve approximately 3 million U.S. government personnel, with an initial rollout planned for early 2026.

The system targets Impact Level 5, permitting the handling of controlled unclassified information. According to the Department of Defense, users will receive insights derived from the X platform, described as providing a “significant information advantage.” xAI emphasizes that its tools will support administrative tasks and critical mission use at all levels of government.

To quote the Department of War: “users will also gain access to real‑time global insights from the X platform, providing War Department personnel with a decisive information advantage.”

Let me rephrase that. Civilian and military personnel will have a live stream of propaganda from a platform known for its bias? The same AI that regulators worldwide are scrambling to contain, due to its role in generating sexualized, non-consensual imagery, is being positioned as a trusted component of U.S. government infrastructure.

Or maybe U.S. government employees want to watch porn and undress women while at work. Or maybe the Epstein files go even deeper and this is simply a continuation. Personal opinion.

As of the time of writing, X has not given a substantive response beyond vague claims of “improving safeguards.”

Synthetic harm is not a surprise or a fringe design outcome. It is the result of training choices, permissive design, ignored warnings, engagement-driven incentives and leadership that treats abuse and ideology as spectacle and profit.

The sources are all on record. The unresolved question for X is not how to improve safeguards. The question is who bears institutional responsibility for failing to prevent the non-consensual exploitation of minors, women, and the public.