It’s easy to feel overwhelmed by stories about AI gone wrong, especially when kids are involved. AI chatbots giving bad advice. Algorithms pushing harmful content. Data collected without consent. These headlines are enough to make any parent want to swear off technology altogether.
But here’s the thing: AI doesn’t have to be dangerous for kids. It can actually be an incredible tool if it’s designed with their best interests in mind. We don’t have to settle for broken promises and weak safeguards. We can build something better.
Here’s what good AI for kids should look like:
Built for kids from the start
Good AI has to be created specifically for young users, with their safety, privacy and development in mind. That means kid-friendly language, age-appropriate interactions and clear explanations about what the AI can and can’t do. It can’t just be access to adult AI platforms with a few more filters in place. It needs to be a set of focused, creative tools for kids—walled garden experiences where they can explore without wandering into unsafe territory.
Real age verification
If a platform claims to be safe for kids, it should actually know who’s using it. Real age verification means going beyond a checkbox. It means using technology that can verify a child’s age without collecting unnecessary personal data, and making sure kids aren’t exposed to adult content or interactions.
Parents-in-the-loop from day one
Right now, it’s pretty tricky to know what your kids are asking AI unless you scroll through their chats, and that’s not ideal. Parents need a platform that makes it easy to stay informed. Good AI for kids is designed with transparency at its core. Just imagine getting a clear report on your child’s activity—whether they’re exploring a creative tool, learning something new, or getting help with homework. And if your child encounters something risky, like a harmful prompt, you get notified. This kind of design is absolutely possible; the adult platforms kids are using just don’t offer these features.
Limited, purposeful tools
Kids don’t need an AI that can do everything, especially when they’re just learning the technological ropes. They need tools that inspire creativity, encourage learning and support healthy social interactions. A good kids’ AI platform is focused, purposeful and designed to help, not distract. And, it should offer a curated suite of creative tools that help kids explore safely without falling into rabbit holes.
Trustworthy and fair AI models
Good AI is built on quality, unbiased models that prioritize child safety. That means avoiding smaller, less reliable models in favor of trusted, well-tested technology. It also means maintaining strict content moderation to ensure that AI interactions remain safe and appropriate.
It’s not like we don’t know how to build powerful, complex technology. We’ve seen the amazing things AI can do, from generating art to translating languages, solving problems, and even making us laugh. Technology can be magic when it’s done right. But if we want kids to benefit, we have to build AI with them in mind from the very beginning. Not as an afterthought. Not as an add-on.
Parents deserve tools they can trust, platforms that aren’t black boxes but partners. Kids deserve experiences that help them learn and grow, not ones that manipulate or exploit them. We can build this. We just have to choose to do it.
A deeper dive
Here are a few helpful resources in case you want to really dig into today’s topic:
Speaking of adult AI platforms, Google has made Gemini available to kids. By default. If your children are connected to you via the Family Link app, they automatically have access to the chatbot. There are a few safeguards in place, but parents are cautioned to tell their children that Gemini can make mistakes, and that they shouldn’t share any personal information. And, although there are reportedly additional filters, it’s still pretty much full access to an adult platform.
There are AIs like ChatGPT and Gemini, and then there are chatbot companions, which are a whole other level. According to researchers from Stanford School of Medicine’s Brainstorm Lab for Mental Health Innovation and Common Sense Media, these chatbots are “much more likely to veer into socially controversial and even illegal territory.” And apparently, it takes minimal prompting to get these chatbots to engage in behavior that can harm kids’ mental health.
TL;DR
Too long; didn’t read. It shouldn’t be a full-time job to keep up with industry news, so here is a mercifully quick summary of some other notable developments:
A new study confirms something many parents probably knew already: when we’re distracted by our own phones and devices, it can affect our children. According to a meta-analysis of studies spanning 10 countries and nearly 15,000 participants, there’s a negative association between parents’ tech use and children’s cognition. The authors of the study note, “distracted or inconsistent engagement—such as when a parent is absorbed in a phone—can result in missed opportunities for learning, bonding and emotional regulation.” It’s not entirely surprising, but it’s a good reminder to put your own phone away and spend some quality time with your kids.
What the sigma? Gen Alpha has some slang on repeat, and to a lot of parents, it sounds completely nonsensical. This piece does a good job of explaining why your kids might shout random-sounding words out of the blue, and often, it’s because they’re just kind of fun to say. For the most part, this kind of thing is normal behavior and not unique to this generation of kids. Long story short, there’s nothing sus here.
And lastly
Here are a few more pieces of original writing from me and my team—just in case you’re keen for more:
If you’re looking for more information on Gemini for kids, my team put together this parents’ guide.
And, if you have questions about AI search engines, this post on Perplexity can help you decide what’s right for your kids.