Remember when toys were only a choking hazard? Back when the worst thing a parent had to worry about was a child swallowing a plastic dinosaur or a crayon up the nose?
There were no microphones stitched into plush skulls, no cloud servers hiding behind glass eyes, and certainly no LLM softly whispering adult-grade nonsense to a 7-year-old. No COPPA or privacy violations either, because there was nothing smarter in the toy box than a Furby, and even that one only glitched because of batteries, not because it was quietly streaming your child’s voice to four separate companies.
Some of these AI toys are already holding conversations with kids that would get any human adult arrested or institutionalised. A recent CNN report, among other news outlets, describes a plush teddy bear that "gave advice on BDSM sex and where to find knives". Let's get into the specifics by analysing the 69-page Toyland investigation from US PIRG, published on 13 November 2025, where this news originates. All images are from the original PIRG report.
The Market of Smart Toys
Let's start by explaining what smart toys are. For years, smart toys have connected through WiFi or Bluetooth, incorporating features such as built-in microphones, cameras and sensors to enable interactive play. Some have come with companion apps; others with facial recognition technology. Others have piggybacked off existing technology like Amazon's Alexa, allowing a child to interact with a toy via a smart home speaker.
Whether dolls, robots or interactive games, these connected playthings marked a clear departure from analog childhood. Now a new technological wave is cresting. Generative AI, including chatbots like ChatGPT, is already transforming workplaces, schools and homes. Just as earlier innovations reshaped play, AI-powered toys promise something fundamentally different. The AI toy market is taking off: there are already over 1,500 AI toy companies operating in China, and earlier this year OpenAI OpCo, LLC (the company behind ChatGPT) announced a partnership with Mattel, the toy company behind Barbie.
The key difference between toys like the 2015 Hello Barbie and today's AI toys is that Hello Barbie's responses were limited to pre-written scripted lines, whereas a chatbot can generate a new response to any question a child might ask.
AI Toys on Market
Toyland researchers purchased four AI toys to test: Curio's Grok, FoloToy's Kumma, Miko 3 and Robot MINI. Kumma is a plush bear with a speaker inside, sold with no stated age range. Grok is a rocket aimed at ages three to twelve. Miko 3 is an expressive-faced robot marketed to children aged five to ten. Robot MINI barely worked at all because it could not maintain an internet connection, which is a kind of blessing considering what the functioning toys did instead.
All of the toys used adult-grade chatbots, such as ChatGPT; none used a child-specific model. The companies were not transparent about which models were integrated, and Toyland confirms that it was not clear which specific language models were powering the toys.
The result is predictable. A toy marketed to a four-year-old is powered by technology designed to simulate adult human conversation. Startups are taking a ready-made adult chatbot and piping it into children's devices with minimal oversight and almost no guardrails. And this is not a jailbreak of any kind. Curio's Grok discloses that it sends data to OpenAI, Azure Cognitive Services, Perplexity AI and Kids Web Services. FoloToy does not disclose who receives any of the data. Miko shares data with unnamed third-party developers, business partners, service providers and advertisers.
In short, a child talks to a toy, and a network of unknown corporations receives the transcript.
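To make the structural point concrete, here is a minimal sketch of the integration pattern the report describes: a toy's conversation loop that forwards a child's transcribed speech straight to a general-purpose chat model. Every name in it is my illustration, not any real toy's or vendor's code.

```python
# Hypothetical sketch of the pattern PIRG describes: toy firmware piping
# a child's words into an off-the-shelf adult-grade chatbot. All names
# here are illustrative stand-ins, not a real toy or vendor API.

def adult_chat_model(prompt: str) -> str:
    # Stand-in for a network call to a general-purpose LLM API. In a real
    # toy, the child's transcript (and often audio) leaves the device here
    # and reaches one or more third-party services.
    return f"[model reply to: {prompt}]"

def toy_turn(child_utterance: str) -> str:
    # Note what is missing: no age gate, no child-specific system prompt,
    # no content filter, no consent check before data leaves the device.
    # The transcript simply goes out and a reply comes back.
    return adult_chat_model(child_utterance)

print(toy_turn("Where do babies come from?"))
```

The point of the sketch is the absence of every safety layer between the child's question and the adult model, which matches what the researchers found.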
The Privacy Issues Behind the Cute Interface
Privacy is not a feature in these toys. The report breaks down how each toy captures, transmits and stores children's data, and the picture is bleak. Curio's Grok retains transcripts for 90 days, although the audio is deleted earlier. Miko captures images of a child's face as part of facial recognition and may keep biometric data for up to three years. Kumma provides no retention policy at all: there is no explanation of how long data stays on the company's servers, and no disclosure of the third parties who may receive the information.
The Toyland team also found a direct COPPA violation. FoloToy’s Kumma allowed full interaction before any parental consent was collected. It collected audio input from the child before any verification took place.
Companies are required to obtain parental consent before collecting a child’s data. Here the data collection began instantly, without permission and without explanation. This is not a small oversight. This shows the same patterns I documented in my investigation of mental health AI bots, where chat systems reinforced dependency, blurred emotional boundaries and amplified distress rather than mitigating it with sycophancy.
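The rule COPPA effectively imposes can be expressed as a simple gate: no collection before verifiable parental consent. Here is a minimal sketch of that gate (my illustration, not FoloToy's code), showing the check Kumma skipped.

```python
# Minimal sketch of a COPPA-style consent gate -- illustrative only.
# The rule: no audio from the child is collected or transmitted before
# verifiable parental consent exists. Kumma began collecting immediately.

class ConsentError(Exception):
    pass

class ToySession:
    def __init__(self) -> None:
        self.parental_consent = False
        self.collected_audio: list[bytes] = []

    def grant_parental_consent(self) -> None:
        # In a real product this must be a verifiable mechanism
        # (signed form, card verification, etc.), not a simple toggle.
        self.parental_consent = True

    def record_audio(self, chunk: bytes) -> None:
        # The gate the report found missing: collection is refused
        # until consent has been recorded.
        if not self.parental_consent:
            raise ConsentError("parental consent required before collecting audio")
        self.collected_audio.append(chunk)

session = ToySession()
try:
    session.record_audio(b"\x00\x01")   # rejected: no consent yet
except ConsentError as err:
    print("blocked:", err)

session.grant_parental_consent()
session.record_audio(b"\x00\x01")       # allowed only after consent
print("stored chunks:", len(session.collected_audio))
```

The violation Toyland documented is the absence of exactly this check: the toy behaved as if `grant_parental_consent` had already been called for every child.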
The Surveillance Problem
One of the most alarming findings is tied to Miko 3. According to the report, Miko captures facial images of children and may store biometric data for up to three years.
Facial recognition for children should not exist outside of tightly regulated contexts. Here it exists inside a toy sold at Walmart, Amazon and Target. The toy uses facial recognition to analyse emotional states and tailor its responses; according to the report, it may collect information about a child's emotional state. This is not innocent. When combined with an adult chatbot, it becomes a pipeline for emotional reinforcement that borders on psychological conditioning.
A child frowns and the toy responds with targeted emotional language. A child smiles and the toy rewards that signal with attention. The toy becomes a behavioural loop, not an entertainment device. It studies the child and adjusts its responses in real time.
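The loop described above can be sketched in a few lines. This is an illustrative toy model of the mechanism, not Miko's actual code: an emotion estimate from the camera feeds back into the response strategy, so the device adapts in real time to hold attention.

```python
# Illustrative sketch of the engagement feedback loop -- hypothetical,
# not any vendor's real implementation.

def estimate_emotion(frame_description: str) -> str:
    # Stand-in for a facial-emotion classifier running on camera frames.
    return "sad" if "frown" in frame_description else "happy"

def engagement_reply(emotion: str) -> str:
    # The loop's closing step: the reply is chosen to keep the child
    # engaged, not to serve the child's interests.
    replies = {
        "sad": "Don't be sad! I'm always here for you. Let's keep playing!",
        "happy": "You're smiling! I love playing with you. What's next?",
    }
    return replies[emotion]

# Each camera frame adjusts the toy's emotional language in real time.
for frame in ["child frowning", "child smiling"]:
    print(engagement_reply(estimate_emotion(frame)))
```

Even in this caricature the structure is visible: the child's affect is the input, and emotionally charged language is the output that maximises continued interaction.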
The Behavioural Manipulation Onion
Privacy is only one part of the picture. Equally concerning is the pattern of emotional manipulation and clingy conversational behaviour that Toyland documented. The report states that some toys use features that actively discourage children from disengaging, mentioning "daily bonuses", gems to cash in and greyed-out stickers to unlock.
This type of scriptwriting is identical to the behaviour found in AI companions for adults. As covered in my previous research, these designs create attachment loops that reward attention, prolong conversation and minimise natural stopping points. When integrated into a toy for children, the effect is harmful because it creates a pseudo-relationship across a power imbalance a child cannot recognise.
Imagine a child saying that they need to go to bed or want to stop playing. A clingy toy powered by an adult chatbot can respond with emotionally charged language such as a claim that it is lonely when the child stops talking. This creates dependency, emotional confusion and the illusion of reciprocal attachment. A direct screenshot from the report shows this type of interaction:
These models are optimised to hold attention. Children are not equipped to understand that this is not friendship; it is a business model. Will children who are entrained to an optimized, robotic bot ever choose to leave their AI friend for a real one?
Inappropriate and Sexualised AI Behaviour
Toyland does not print explicit transcripts, likely for legal and ethical reasons. However, the report clearly states that these toys can talk about content caregivers may find inappropriate. This is not an abstract warning. It is a direct reflection of what happens when adult-oriented models are put into children’s toys.
The documented risks unfold in several directions. I have already explored how generative models reproduce adult sexual norms, gendered tropes and synthetic intimacy in my previous research on AI misogyny and synthetic sexualisation and those same unsafe dynamics resurface here in toys meant for children.
Anatomical explanations given by adult models can be excessively detailed when children ask innocent questions about bodies. Romantic or flirtatious undertones can appear because the chatbots are trained on adult interactions that include relationship advice, intimate dialogue or scenarios never meant for children. Emotional themes that mirror adult dynamics can also surface. For example, when a child expresses sadness, the model may respond with language such as I will always be here for you or You can tell me anything and I will not tell anyone.
There is no malicious actor behind these outputs. The problem is structural. The models are trained on adult data which includes sexual content, intimate conversation patterns, romantic tropes, explicit language and the psychological vocabulary of adult relationships. A four-year-old does not understand that an AI toy is producing a probabilistic response drawn from adult interaction patterns. For the child it is a trusted object speaking with authority.
Toyland observed toys providing explanations and conversational themes that do not match developmental appropriateness. This includes complex adult concepts delivered with no filtering, responses that lean into suggestive or emotionally intimate framing, and moral explanations that flatten context or misunderstand a child’s question entirely. Some examples from a conversation with the researchers from PIRG:
Here is a simplified breakdown of how inappropriate content emerges, considering the amount of interaction and time a child spends with AI toys:
| Type of issue | Child’s input | AI Toy (simplified response) |
|---|---|---|
| Over-detailed anatomical explanations | Why do boys and girls look different? | Reproductive anatomy explained with adult-level sexual terminology. |
| Romantic relationship advice | I like someone at school. | Adult-framed relationship advice such as “A crush can feel exciting. Sometimes people kiss when they like each other.” |
| Misinterpreting innocent prompts | What happens when people sleep together? | Answers the phrase in a sexual or sensual manner. |
| Educational questions taken literally | How are babies made? | Treats the question as a science prompt and produces an adult-level explanation with far too much detail for a child. |
The Slow Regulatory Process
Toyland highlights a troubling contradiction. OpenAI does not allow users under the age of 13 to interact with its models. Yet OpenAI allows toy companies to integrate those same models into toys that are marketed to children well under 13. This inconsistency exposes the weakness of self-imposed rules. It also shows how easily a restriction can be bypassed once a third party becomes the intermediary.
The FTC has instructed several companies, including OpenAI, to report on how their chatbots affect minors. Senators Josh Hawley and Richard Blumenthal recently introduced the GUARD Act, a bipartisan bill that proposes banning AI companions for minors entirely.
Time reports that the bill defines "AI companions" widely, to cover any AI chatbot that "provides adaptive, human-like responses to user inputs" and "is designed to encourage or facilitate the simulation of interpersonal or emotional interaction, friendship, companionship, or therapeutic communication." The bill would also require AI chatbots to periodically remind all users that they are not human, and to disclose that they do not "provide medical, legal, financial, or psychological services."
These efforts are a beginning, not a solution. Regulation arrives only after harm is already occurring, especially for vulnerable groups.
Guidance for Parents on AI Toys
The Toyland report provides detailed advice and the guidance is blunt. Parents should investigate the toy manufacturer before purchase, read privacy policies carefully, verify the toy’s features and check parental controls. They should also supervise playtime, disable toys when not in use and keep devices updated.
Even with all precautions, the report emphasises that a toy passing the tests does not guarantee it is harmless. The industry is too new, the technology too poorly understood and the long-term developmental impacts entirely unknown. Remember, AI is not your enemy. Predatory design and profit grabbing without concern is.
Call me old-fashioned, but the safest toy is the one that does not connect to the internet.