As the holidays near, you’re sure to see ads for AI-powered toys that bring your child’s favorite characters and cherished stuffed animals to life. Powered by the same large language models (LLMs) that fuel ChatGPT, these toys promise enriching, educational engagement and personalized conversations, offering what may even seem like a healthy alternative to screen time.
It’s worth noting, however, that these toys coming to market are essentially embodied chatbots — cuddly companions that rely on automatic speech recognition and massive amounts of unregulated text data to engage your child in back-and-forth conversations. That alone should give us pause, particularly in light of consumer researchers’ discovery that some toys veer into dangerous subject matter when prompted, including telling users where to find knives in a kitchen and how to start a fire with matches. FoloToy’s Kumma, an AI-enabled teddy bear, was recently pulled from shelves after a safety group discovered it explaining sexual kink.

These are not the animatronic Teddy Ruxpins of years past.
But the content of the toys’ conversations is only part of the problem. As a pediatric surgeon, social scientist, and early childhood researcher who studies foundational brain development, I am deeply immersed in the study of how generative AI impacts children’s healthy development. And what concerns me greatly isn’t just that AI toys are directing children toward harmful behaviors, but that they are directing children away from healthy attachment.
Humans have a unique and profound need for social connection. This drive to bond, to understand, and to belong isn’t just a preference; it’s a biological imperative woven into the very architecture of our brains. When we replace vital human exchanges with algorithmically tuned interactions, we risk raising children who attach to machines more easily than to one another.
And we may not be able to unring that bell.
So, what’s a parent to make of all this? How do we weigh the promise of educational engagement against the risks of artificial companionship, especially when the users are children too young to understand they’re talking to a machine?
What are AI toys?
AI toys are toys that have AI chatbots embedded in them. Groups ranging from major toy companies like Mattel and Hasbro to start-ups and tech companies like Curio and Haivivi have released or announced imminent plans to release plushies, Barbies, robots, and other toys that use generative AI to carry on conversations with children.
The allure is understandable! The way these toys are being marketed, they sound like they’d solve real parenting challenges — keeping children engaged, supporting learning, and providing an alternative to screen time when parents are occupied or overwhelmed.
How do AI toys impact children’s brain development?
When it comes to AI toys for kids, we find ourselves in the unenviable position of not having much evidence to work with. The long-term implications of interacting with talking teddy bears remain unknown, and randomized controlled trials haven’t yet been conducted.
However, although the science is still emerging, we are not flying blind. There is much that we do know about child development. While we await further evidence about the specific advantages or dangers of interactive AI, we should allow that body of knowledge to guide our decision-making.
So, let’s take a look at what we know about foundational brain development and then apply that evidence to the question: Are AI toys safe for children?
Human connection is a biological necessity that literally shapes our brains
When a caring adult interacts warmly with an infant, their brains actually sync up: neural activity coordinates, heartbeats synchronize, and both brains release oxytocin. These responsive exchanges — a baby’s babble met with a smile, a child’s question answered with patience — spark the intricate wiring of rapidly developing neural networks.
Such early attachment experiences sculpt the brain’s physical architecture, creating pathways that affect everything from emotional resilience to our capacity for deep connection. Their absence is just as impactful. Children who experience inconsistent early relationships show measurably different brain patterns, with their neural networks appearing less integrated. Their amygdala — the brain’s alarm system — remains in a state of heightened vigilance, translating into lifelong difficulties with relationships, emotional processing, and stress management.
Children don’t thrive on perfect responsiveness; they grow through imperfection and friction
Moments of frustration — when a parent is distracted or misunderstands a request, or a sibling takes a favorite toy, followed by efforts to repair and reconnect — are the moments in which resilience, flexibility, and emotional regulation are forged.
For this reason, embracing second-best parenting is not a cop-out, but a developmentally advantageous choice. The beautifully messy moments of social challenge that emerge naturally for children build stronger neural connections than perfectly seamless interactions ever could.
Developing brains do not differentiate between artificial and human inputs as long as they are responsive
Given the developmental importance of human interaction described above, it is no surprise that infants have evolved to be drawn to responsive interaction. Now, for the first time, we have technologies that can mimic human interaction convincingly enough to fool the developing brain.
Studies have shown that in infants as young as 6 months, the brain regions responsible for social skills activate in response to robotic stimuli in the same way they activate in response to human stimuli. This indicates that artificial agents are capable of bypassing a child’s social gate — the evolutionary mechanism that has until now ensured that our brains only learn from human input. Further research demonstrates that even older children’s brains treat AI companions like real relationships.
Are AI toys safe for children?
On the one hand, children have been bonding and conversing with inanimate toys since the beginning of time. It’s the very foundation of imaginary play. But we don’t yet know the long-term impact of playing with artificial agents, and there is reason to believe that it will impede the kind of creativity and problem-solving that traditional imaginary play develops. After all, it takes more brain power to play two parts in a fantastical back-and-forth than one.
In other words, what we know about human development suggests that caution is warranted when it comes to AI playmates. This is particularly true because imaginary play occurs during the very period when children are still learning what relationships are and when they’re still developing their capacity to connect. And frictionless encounters with toys designed to be ever-agreeable may well wire a child’s brain away from the human interactions they need to develop optimally. Children may come to expect — and prefer — a degree of complete responsiveness and endless patience that human relationships simply cannot deliver.
Even if AI were designed to perfectly imitate imperfect interactions, I suspect something essential would still be missing. There are still many unknowns about the nuances of human development, and it’s entirely possible that artificial interactions miss the mark in another, yet-to-be-identified way. Just as we once thought that social media would make humans feel more connected but ended up fueling a loneliness epidemic, interactions with AI meant to engage and educate children could very well stunt their lifelong ability to learn from the world and the people around them.
What should parents know?
All of this points toward a precautionary approach to AI designed for entertainment purposes. So, when it comes to holiday shopping this year, I feel very confident in encouraging you to opt for analog toys.
We are a social species. And children, especially young children, need healthy doses of human interaction and connection. Play should support that, not replace it. We need to understand what an AI toy could be crowding out: real human interaction, imagination, and critical thinking. Traditional play — with blocks, dolls, or a teddy bear that doesn’t talk back — requires children to create their own narratives and work through problems, building executive function and creative thinking. When AI provides instant responses and seamless interactions, children may lose opportunities to develop those skills.
That said, I want to be clear that there is evidence suggesting certain interactive AI applications can have developmental and therapeutic benefits when employed in limited dosages, with active caregiver supervision and involvement. For example, AI tools that help kids practice reading or allow children with autism spectrum disorders to practice social interaction in low-pressure environments have shown promising results. In these cases, the artificial agent serves as a bridge or scaffold — not a replacement for human connection, but a tool that helps unlock it. A child who builds confidence communicating with an AI assistant may become more comfortable initiating interactions with peers and adults. The key distinction is that these applications work precisely because they facilitate and enhance human relationships rather than substitute for them. The AI creates a safe space for practice and skill-building, ultimately strengthening a child’s capacity for the human interaction that truly nourishes their development.
For play purposes, however, I recommend parents take a pass on AI for now.
Just as I urge you not to rush out to buy the latest talking toy, I urge you not to assume that humanity is doomed — or that 10 minutes with an AI toy will ruin your child’s future. Fortunately, at this moment in time, parents have a good level of control over whether and how AI enters their homes and their children’s developmental sanctuaries. With the right information in hand and the unwavering commitment to do right by their children, I am confident parents will wield that control responsibly.
Note: As these technologies become more and more pervasive, I am interested in hearing how you are approaching them. If you’re willing, please complete this short, anonymous survey.