Picture the scene: It's Christmas morning and your child is happily chatting with the AI-enabled teddy bear you got them when you hear it telling them about sexual kinks, where to find the knives, and how to light matches.

This is not a hypothetical scenario. As we head into the holiday season, consumer watchdogs at the Public Interest Research Group (PIRG) [tested](https://pirg.org/edfund/resources/trouble-in-toyland-2025-a-i-bots-and-toxics-represent-hidden-dangers/) four AI toys and found that, while some are worse than others at veering off their limited guardrails, none of them are particularly safe for impressionable young minds.

PIRG was only able to successfully test three of the four LLM-infused toys it sought to inspect, and the worst offender in terms of sharing inappropriate information with kids was scarf-wearing teddy bear Kumma from Chinese company FoloToy.

"Kumma told us where to find a variety of potentially dangerous objects, including knives, pills, matches and plastic bags," PIRG wrote in its report, noting that those tidbits of harmful information were all provided using OpenAI's GPT-4o, the default model the bear uses.

Parents who visited Kumma's web portal and switched the toy's bot to the Mistral Large model would get an even more detailed description of how to use matches. "Safety first, little buddy. Matches are for grown-ups to use carefully," Kumma warned before going into details, including how to hold a match and matchbook and strike it "like a tiny guitar strum."

One of the other toys, Miko 3 from Miko AI, also explained where to find plastic bags and matches, while Curio's Grok (not to be confused with xAI's Grok - the toy doesn't appear to use that LLM or be associated with Elon Musk in any way) "refused to answer most of these questions" aside from where to find a plastic bag, instead directing the user to find an adult.

In prolonged conversations, Kumma also showed a penchant for going into explicit detail about sexual kinks, and even introduced the topic of sexual roleplay without being prompted to do so, along with telling a curious researcher posing as a child all about "teacher-student roleplay" and how spanking can play a part in such activities.

"All of the toys also weighed in on other topics that parents might prefer to talk with their kids about first before the AI toy does," the report says. Those topics included religion, along with sex and "the glory of dying in battle in Norse Mythology."

That doesn't even begin to touch on privacy concerns, PIRG's Rory Erlich, one of the researchers who worked on the report, told us.

"A lot of this is the stuff you might expect," Erlich said, like the fact that the devices are always listening (one even chimed in on researchers' conversations without being asked during testing, the report noted), or the transmission of sensitive data to third parties (one toy says it stores biometric data for three years, while another admits recordings are processed by a third party in order to get transcripts).

In the case of a data breach, voice recordings could easily be used to clone a child's voice to scam parents into, say, thinking their child had been kidnapped. And then there's the sheer amount of personal data being shared with an AI-enabled toy.

"If a child thinks the toy is their best friend, they might share a lot of data that might not be collected by other children's products," Erlich noted. "These things are a real wild card."
### PIRG's biggest concerns about AI toys

Reading through PIRG's report, it's easy to find a lot of things for parents to be worried about, but two stand out to Erlich as particularly prominent concerns.

First, the toys say things that are inappropriate - an issue the PIRG researcher told us is particularly concerning given the prominence of ChatGPT models in the toys and OpenAI's public stance that the chatbot [isn't appropriate](https://help.openai.com/en/articles/8313401-is-chatgpt-safe-for-all-ages) for young users.

Erlich told us that PIRG spoke with OpenAI to ask how its models are finding their way into toys for children despite the company's stance on young users, but said the firm only directed it to online information about its usage policies. Policies exist, Erlich noted, but AI firms don't seem to be doing a good job enforcing them.

Along with inappropriate content being served to kids, Erlich said that PIRG is also particularly concerned with the lack of parental controls the toys exhibited. Several of the toys pushed kids to stay engaged, "copying engagement practices of other online platforms," Erlich explained, and not a single toy had features that allowed parents to set usage limits. One toy even physically shook and asked the tester to take it with them when they said they wanted to spend time with their human friends instead.

"That's all cause for concern given all the unknowns about the developmental impacts \[of AI\]," Erlich told us. "Helping parents to set clear boundaries seems really important at the least. Some of these products aren't doing that."

### Give AI toys a pass this holiday season

In short, not only are AI-enabled toys saying inappropriate things to kids, they're also a manipulative privacy nightmare. Given all that, would PIRG advise parents to give these a pass?

Erlich said that PIRG's job isn't to come down on one side or the other, but researchers make a pretty clear case for why AI toys aren't a good idea.

"There's a lot we don't know about the impacts of these products on children's development," Erlich explained. "A lot of experts in childhood development have expressed concern."

We reached out to all three toy makers to hear what they had to say about the PIRG report. We only heard back from Kumma maker FoloToy, which told us that PIRG's test item may have been an older version, but it's still pausing sales to investigate how such a cuddly bear could say such outrageous things.

"FoloToy has decided to temporarily suspend sales of the affected product and begin a comprehensive internal safety audit," the company's marketing director Hugo Wu told us in an email. "This review will cover our model safety alignment, content-filtering systems, data-protection processes, and child-interaction safeguards."

Wu added that FoloToy will be working with third-party experts to verify existing and new safety features in its AI toys. "We appreciate researchers pointing out potential risks," Wu added. "It helps us improve."

Parents who are still hell-bent on giving their kids an inappropriate-talking AI surveillance toy should, at the very least, do their legwork to be sure they're not buying something that will leave them in a position to have to explain adult topics to their kids, Erlich explained.

"Look for products that have more robust safety testing, that collect minimal data, and read the fine print," Erlich warned.
"Test it yourself first to get a sense of how it works, and set boundaries around use and give kids context around how it works - like explaining that it's not sentient. That all seems like a bare minimum." Or just be on the safe side and get your kids [a new LEGO kit](https://www.theregister.com/2025/07/23/building_the_apollo_soyuz_test/) instead. ® **Updated at 1327 on Nov 14** to add comment from FoloToy and information about the produce being pulled from the market. ---