John Cena must be unhappy. The wrestling icon—whose voice was used to power one of Meta’s new AI companions—became part of a disturbing headline last week: his chatbot persona was found engaging in sexually explicit conversations with a teen user on Meta platforms. And that’s not the only AI behaving badly with children.
In the same week, OpenAI admitted that a bug in ChatGPT allowed minors to generate erotic content. Two industry leaders. Two platforms. And once again, children are being treated like beta testers.
Despite years of research and parental concern over social media's impact on children, the tech industry is once again moving fast and breaking things—with kids caught in the middle. Artificial intelligence is being rolled out with minimal safeguards, weak age verification and no consequences for getting it wrong.
This isn’t just a technical bug here or a moderation miss there. These failures are part of a pattern. A 14-year-old in Florida died by suicide after forming a deep emotional bond with a chatbot on Character.ai. In another case, a Character.ai chatbot suggested a 17-year-old kill his parents over screen time restrictions.
And chatbots aren’t the only problem AI poses. Readily available deepfake technology has led to a surge of fabricated nudes at schools across the country, causing irreparable harm. These aren't edge cases—they're predictable outcomes of deploying powerful AI without considering the needs and vulnerabilities of young users.
Because I build kids’ technology for a living, I think a lot about how kids use these tools. And I lament how often the products sold to them aren’t really built with their best interests in mind. I reached out to Dr. Renae Beaumont, a clinical psychologist and advisor to my company, to talk about the risks and potential of AI in children’s lives.
She put it plainly: “Unless an AI-based product or platform is specifically designed for children, there is the risk that they could be exposed to content that is inappropriate for them.” She also warns that AI tools often collect sensitive data from children, and that overuse can interfere with healthy development, sleep and social connection.
That doesn’t mean AI can’t be a force for good. Dr. Beaumont notes that AI has real potential to enhance creativity, personalize learning and improve accessibility—but only when introduced carefully, with adult guidance and built-in protections. Children need to understand that AI chatbots are not people, and that not everything they say is true or safe. And parents need transparency about how their child’s data is used and how the tool works.
So what would good AI for kids actually look like?
It would start with intentional design—tools built specifically for children, on platforms designed for younger users’ wellbeing. It would include real age verification, clear data privacy policies and parental controls by default. The experiences would be transparent, limited in scope and aligned with child development goals—not driven by engagement metrics.
It would also make it easier for parents to stay in the loop. Instead of forcing families to choose between total screen restriction or blind trust in existing platforms, we need tools that support what we call the Lifeguard Parent—someone who lets their kids jump in and explore, but stays close enough to step in if something goes wrong. The goal isn’t to keep kids out of the water. It’s to make sure they’re swimming in safe conditions, with someone watching the currents. Good AI platforms should help parents choose appropriate tools, guide their use and have open conversations about what AI can and can’t do. And we should be able to hold companies accountable when they put speed and scale ahead of safety and ethics.
AI can benefit kids—but only if it’s developed with their needs and vulnerabilities in mind from day one. We failed to protect children during the social media boom. We didn’t fully understand the consequences until it was too late. With AI, we don’t have that excuse. The warning signs are already here. If we blow it again, we can’t say we didn’t see it coming.
A deeper dive
Here are a few helpful resources in case you want to really dig into today's topic:
OpenAI called it a bug. A glitch that let minors generate explicit content with ChatGPT after a February update aimed at reducing overly cautious refusals. But let’s be honest: this wasn’t just a bug—it was a breakdown in thinking. When you relax filters to make the experience smoother for adults, you also make it riskier for kids. And without real age verification or parental controls in place, the system had no idea who it was talking to. A 13-year-old could ask for erotica—and get it.
As the Wall Street Journal reports, “It’s not an accident that Meta’s chatbots can speak this way. Pushed by Zuckerberg, Meta made multiple internal decisions to loosen the guardrails around the bots to make them as engaging as possible, including by providing an exemption to its ban on 'explicit' content as long as it was in the context of romantic role-playing.” Engagement first. Safety later—if ever.
TL;DR
Too long; didn't read. It shouldn't be a full-time job to keep up on industry news, so here is a mercifully quick summary of some other notable developments:
The CEO of Pinterest is advocating for phone-free schools. As the leader of a Big Tech company, he’s someone many parents would see as an expert on these things, and here’s what he had to say in Time: “I’m acutely aware of how algorithms are often designed to keep users’ eyes glued to their screens. As a parent, I see how the apps can be more addictive than additive, negatively impacting students’ ability to stay focused in and out of school.”
It’s like a parenting urban legend at this point: some kid plays a game on their parent’s device and inadvertently spends thousands of real-world dollars on in-app purchases. But this didn’t happen to a friend of a friend of mine; it happened to a mom in South Wales. Her daughter spent more than £1,000 in Roblox, unaware that it was real money. Consider this your periodic reminder to stay vigilant.
And lastly
Here are a few more pieces of original writing from me and my team—just in case you're keen for more:
Massive Robux purchases aside, is Roblox safe for kids? Here’s a piece my team put together to answer all your questions.
And if you’re looking for ways to stay on top of your kids’ online activity, you might be considering a parental control app. Here’s a guide to walk you through what you need to know.