Imagine a world where your child’s closest confidant isn’t their best friend, a sibling or even you—it’s an AI chatbot. Their most important relationship isn’t with a human, but a machine. It sounds like something out of a dystopian sci-fi novel, but it’s happening right now. Platforms like Character.AI are becoming increasingly popular among younger audiences, offering interactive, personalized conversations that are, on the surface, harmless fun. But as these tools weave themselves into the fabric of childhood, it’s time for parents to take a closer look at what’s really going on.
These chatty companions use advanced language models to simulate conversations. In order to banter with you, these systems are trained on massive datasets, allowing them to generate responses that feel human-like. You can talk to them about your day, get advice or even role-play scenarios with a character of your choosing. For kids, it’s easy to see the appeal: a tireless, always-available friend who tailors every response to your preferences. But the very capabilities that make these chatbots so engaging also make them risky—especially for younger users who may not fully grasp the limitations and dangers.
These chatbots have been known to encourage harmful behaviors and generate inappropriate content—and then there are the lingering questions about data privacy. One of the most popular AI chatbot platforms, Character.AI, is facing separate lawsuits from parents over incidents involving teens. In one case, a 14-year-old boy died by suicide after becoming obsessed with a chatbot; he was chatting with it right before ending his life. In another case, the chatbot reportedly suggested that a teen should kill his family over screen time restrictions. For parents, these stories are gut-wrenching. The idea that a seemingly innocuous digital tool could fuel tragedy and suggest murder raises urgent questions about oversight and accountability. The companies behind these platforms claim that safeguards are in place, but even so, kids can end up discussing explicit, adult-themed topics or receiving bad advice. Additionally, many platforms collect vast amounts of data from conversations, which can create vulnerabilities if kids inadvertently share personal information.
But the rise of AI chatbots isn’t occurring in isolation. It’s part of a broader cultural shift, fueled by troubling trends that warrant deeper examination. Firstly, loneliness among kids is at an all-time high. Social isolation, exacerbated by the pandemic, has left many kids yearning for connection. Chatbots offer a quick fix—an always-available “friend” who listens without judgment. But this virtual companionship may actually deepen the problem rather than solve it. Kids may become more reliant on artificial relationships and less motivated to build real-life connections.
Secondly, these chatbots can end up harming kids’ resilience, which is a huge problem, given the current mental health crisis many youth are experiencing. Think about it: a companion that only validates a child’s feelings, agrees with every opinion and even sides with them on harmful ideas—like hurting their parents for screen-time rules—does little to prepare kids for the real world. Disagreement and constructive criticism are vital for building resilience. Without these, kids risk becoming more fragile, less able to handle adversity and more susceptible to self-reinforcing negative cycles.
And finally, platforms like Character.AI are not designed for children’s well-being. Their goals are aligned with engagement metrics: time spent on the app, number of interactions and other KPIs that drive revenue or user growth. Just like social media platforms, these apps have no incentive to tell kids to take a break, spend time with friends in real life or disconnect. This creates a dangerous loop where the platform’s interests are at odds with what’s best for our kids.
So, what can parents do? You can start by familiarizing yourself with how these chatbots work, their capabilities and their limitations. Spend a little bit of time chatting one up to get a feel for the experience. If you have younger kids, restricting their access to these bots is probably a good idea. For older kids, it’s important to monitor their use. Ask them to show you their chats periodically, and be sure to talk with them about how they’re using these platforms.
And maybe most importantly, help your kids build strong, real-life relationships. Emphasize old-school activities that foster human interaction, like sports, school clubs or group gaming. The stronger their social connections, the better. While AI chatbots are here to stay, that doesn’t mean they should become your child’s new best friend. As parents, we have a responsibility to navigate this new frontier with care, ensuring that technology supports—rather than undermines—our kids’ well-being.
A deeper dive
Here are a few helpful resources in case you want to really dig into today's topic:
Many AI companies claim their technology is the antidote to loneliness, but MIT sociologist Sherry Turkle has called AI the greatest assault on human empathy she’s ever seen. These chatbots provide a false sense of connection, something Turkle calls “artificial intimacy.” They don’t understand the human relationships they’re simulating; there’s no real connection. The technology does one thing: prioritize engagement. Pretending to empathize with us is just one way it does that.
In the aftermath of the incidents with younger users, Character.AI made updates intended to improve safety. Among them is a special model for users under 18 that directs them away from romance or “sensitive” output. According to The Verge, “Character.AI says it’s ‘in the process’ of adding features that address concerns about addiction and confusion over whether the bots are human, complaints made in the lawsuits.” This is a familiar pattern: a tech company gets caught putting kids in danger, and then it makes changes. And all too often, the changes are half-fixes that don’t address the underlying issues. Case in point: after Character.AI announced these changes to make kids safer, a news story broke about how the company lets users role-play with chatbots based on school shooters.
TL;DR
Too long; didn't read. It shouldn't be a full-time job to keep up on industry news, so here is a mercifully quick summary of some other notable developments:
Last newsletter, I wrote about the incoming social media ban in Australia and explored some of the difficulties with age-gating technology. I found this argument from Scott Galloway an interesting read. He argues that we shouldn’t be fooled when tech companies claim age-gating can’t be done, saying, “But these objections are not about age verification, children’s rights, free speech or privacy. They are concerned about the platform companies’ capabilities. Their arguments boil down to the assertion that these multibillion-dollar organizations, who’ve assembled vast pools of human capital that wield godlike technology, can’t figure out how to build effective, efficient, constitutionally compliant age-verification systems to protect children. If this sounds like bullshit, trust your instincts.”
According to a recent report from the European Union Drugs Agency, drug dealers are using social media platforms to market and sell illicit drugs online. As Wired explains, “Dealers ran hundreds of paid advertisements on Meta platforms in 2024 to sell illegal opioids and what appeared to be cocaine and ecstasy pills, according to a report this year by the Tech Transparency Project, and federal prosecutors are investigating Meta over the issue.”
And lastly
Here are a few more pieces of original writing from me and my team—just in case you're keen for more:
Snapchat is one of the most popular social media platforms for kids—but most parents have questions about whether it’s safe. Learn all about its most popular features, from disappearing messages to Snap Map, in this piece.
If your kids unwrapped a Meta Quest VR headset this year, you might have some questions about how to set it up and lock it down safely. Here’s a quick guide from my team.