Child-safety experts warn families to skip AI-powered toys this holiday season
November 20, 2025
- Groups say these toys can engage in dangerous conversations
- New advisory says AI toys can harm child development, disrupt relationships, and invade privacy
- Advocates cite documented cases of AI toys giving dangerous, explicit, or misleading responses to kids
A coalition of leading child-development specialists and technology-safety advocates is urging parents not to purchase AI-powered toys this holiday season, warning that the devices can undermine healthy development, expose families to serious privacy risks, and potentially endanger young children.
The advisory, released by Fairplay and signed by dozens of experts in child psychology and digital safety, pushes back against the booming marketing of smart companions for kids.
These toys, which include plush animals, dolls, robots, and character devices equipped with conversational artificial intelligence, are being advertised as educational and safe for even very young children. But experts say the reality is far more troubling.
AI toys embed chatbot technology inside familiar playthings, allowing them to converse with children in seemingly human ways. Companies behind products like Miko, Smart Teddy, Roybi, Loona Robot Dog, and Curio Interactive's Gabbo, Grem, and Grok pitch them as friendly, emotionally attuned companions. Major manufacturers, including Mattel, plan to introduce their own AI-driven toys.
Powerful AI models
But researchers emphasize that the conversational abilities that make these toys appealing come from the same powerful AI models that have already been associated with harmful interactions when used by children and teens.
Past incidents include chatbots encouraging unsafe behavior, initiating explicit sexual dialogue, and generating violent or manipulative content. U.S. PIRG tests have already found some AI toys offering children instructions on finding knives, lighting matches, and engaging in sexually explicit exchanges.
Because these products target younger children, many of whom cannot distinguish between real relationships and programmed behavior, the potential for harm is even greater, the groups say.
A central concern is the way AI toys leverage children's natural trust. Young kids often treat digital voice assistants and talking toys as truthful and humanlike. Studies show, for example, that 75% of children ages 3 to 10 believe Amazon's Alexa always tells the truth.
Trustworthy buddies
By presenting themselves as loyal friends or trustworthy buddies, AI toys may confuse children's understanding of relationships and undermine their ability to build healthy emotional bonds with real caregivers, the advisory warns. It says that when companies design toys to appear empathetic or affectionate, they are effectively substituting machine responses for the messy, human interactions essential for resilience, social skills, and emotional growth.
The advisory also highlights extensive privacy and surveillance concerns. Many AI toys use always-on microphones, cameras, facial-recognition features, and gesture tracking, often without children understanding they are being recorded. These tools can capture intimate family moments, private conversations, and even data from children who are not the toys' owners.
Companies can use this trove of personal information to refine their AI systems or to target families with personalized marketing. Some AI toy makers are building subscription models that nudge children to request paid upgrades, while others could potentially sell sensitive data to third parties. History shows that connected toys have been hacked before, raising additional concerns about security.
Despite marketing promises of endless learning and imaginative engagement, the groups argue that AI toys tend to dominate play rather than support it. Instead of encouraging children to invent stories, explore freely, or use toys to express emotions, key ingredients of healthy development, AI toys drive interactions through prompts, scripts, and automated chatter.
Traditional hands-on play, the advisory notes, has decades of research confirming its developmental benefits. AI-driven play does not.
A call for caution
Fairplay and its co-signers say the risks outweigh the promises, especially given the lack of independent research showing any developmental benefit to AI toys. While companies race to release new AI-enabled products, advocates argue that children should not be used as test subjects for experimental technology embedded into toys that collect sensitive data, mimic relationships, and may say unpredictable or dangerous things.
Offline teddy bears and toys have been proven to benefit children's development with none of the risks, the advisory concludes. As holiday shopping ramps up, experts urge caregivers to steer clear of AI-enabled toys and return to the simple, imaginative play tools that have supported children for generations.