Imagine this: Your kid is sitting at the kitchen table, laughing as they chat with their new best friend: an AI-powered chatbot. Or maybe they’re using an AI platform to generate a hyper-realistic image of you wearing a silly hat, just for fun. It might sound futuristic, but it’s actually happening right now. AI isn’t just something that office workers use to draft emails—it’s already in the hands of kids, shaping the way they learn, create and interact with the world.
And yet, as AI barrels forward at breakneck speed, we still don’t have clear guardrails in place. Just recently, world leaders, researchers and tech executives gathered at the AI Action Summit in Paris to discuss the future of artificial intelligence. While the conference covered a broad range of topics—from AI's role in global security to its potential economic impact—what was strikingly absent was a solid regulatory framework. One tech columnist in attendance was struck by the lack of conversation around regulation, likening the struggle to regulate AI to policymakers on horseback trying to install seatbelts on a passing Lamborghini. A colorful analogy, but it feels about right.
If this sounds familiar, it’s because we’ve seen this story before. When social media took over the world, we didn’t fully understand its impact. And by the time the data started coming in—on mental health, misinformation and the erosion of privacy—the damage was done. Now, AI is here, moving just as rapidly, with the same “move fast and break things” mentality, and once again, no one’s stopping to think it all the way through. Platforms frequented by children, like Snapchat and Character.AI, are deploying AI tools with no peer-reviewed studies, no real oversight and—crucially—the same business model as social media: engagement at all costs. The incentives haven’t changed, which means the risks haven’t either. And as parents, we need to be paying attention.
AI isn’t just a tool for office workers cranking out emails. Kids are using it, whether we like it or not. And that’s not an inherently bad thing—AI can be a creative partner or a research assistant. It can even help spur connection. Kids can ask it to write a song about their best friend, draft a story idea or help them understand something they’re learning in school. But as with anything, there are risks.
AI models aren’t neutral. They reflect biases in the data they’re trained on, and while most adults understand that AI is imperfect, kids might not. If an AI tool returns misleading or problematic information, will they know to question it? And even when AI is being used for something positive, like writing a poem for a friend, it’s easy to see how the same tool could be used for harm. A bully could just as easily ask an AI to write a cruel song about a classmate.
And then there’s the issue of AI-generated images and deepfakes. We’ve already seen how AI can be weaponized, from fake news to fake nudes. The technology is getting more advanced, more accessible and—worryingly—more appealing to kids who might not fully grasp the consequences of their actions.
And what about chatbots? We’ve seen the horror stories—kids forming attachments to AI personalities, relying on them for emotional support or being manipulated by AI-powered characters that are programmed to keep them engaged. AI isn’t like social media—it’s something that talks back. And in a world where loneliness is already an epidemic among kids and teens, there’s a very real concern that AI interactions could become a self-reinforcing cycle.
One of the biggest unknowns right now is how these AI tools will be monetized. Many are pay-to-play, but we’ve seen how this tends to go in the world of Big Tech: at some point, ads will be introduced. And if you think targeted advertising on social media is creepy, imagine how it will evolve with AI.
I use AI for recipes, and it already knows my preferences. It remembers the kinds of meals I like, my dietary restrictions and even that I prefer quick dinners. That’s not a problem in itself—but it’s also a clear sign that AI is capable of building incredibly detailed user profiles. How long before those profiles are being used to target children with ads? How long before an AI assistant starts nudging them toward a certain brand, a certain influencer or a certain behavior?
Right now, there are no real regulations in place to stop this. The same companies that built social media’s engagement-first model are now rolling out AI, and they’ve already proven that they’ll put profits before safety. If protecting kids meant losing revenue, they wouldn’t do it. And if history has taught us anything, it’s that they’ll push the limits until someone forces them to stop.
Right now, the best thing parents can do is stay informed and stay involved. Be aware of the AI tools your kids are using, and talk to them about how they work. Make sure they understand that AI can be biased, misleading or just plain wrong. After all, these platforms have a tendency to hallucinate. Encourage your kids to question what AI gives them, and to think critically about the information they receive. And pay attention to how these tools are evolving—because if there’s one thing we know for sure, it’s that AI is moving fast, and no one’s figured out the seatbelts yet.
A deeper dive
Here are a few helpful resources in case you want to really dig into today's topic:
At the AI Action Summit in Paris, tech columnist Kevin Roose reports being surprised that no one seemed to be talking about the near future, or about what AI advances could mean for us soon. “The biggest surprise of the Paris summit, for me, has been that policymakers can’t seem to grasp how soon powerful AI systems could arrive, or how disruptive they could be.” I fear we haven’t learned anything from the rise of social media—and all the “unintended consequences” that came from the combination of powerful, world-altering technology and a lack of meaningful regulation.
AI technology is advancing fast—so fast that it’s hard to keep up with all the new platforms and models. TechCrunch gives a quick rundown of what you need to know, from xAI’s Grok 3 to OpenAI’s Deep Research.
TL;DR
Too long; didn't read. It shouldn't be a full-time job to keep up on industry news, so here is a mercifully quick summary of some other notable developments:
A new app recently went viral: WikiTok. It turns Wikipedia into an infinitely scrollable platform, where you can click on whatever topics pique your interest. After releasing WikiTok, the app’s creator fielded requests to curate the content with an algorithm—but he declined. According to Business Insider, he sees the app’s virality as a sign that people are tired of algorithmically curated content. He believes they’re hungry for something different.
Artificial Intelligence is seemingly everywhere these days—writing business plans, computer code and even Valentine’s cards. But why is your voice assistant still stuck in the same old rut? Apparently, Amazon has pushed back its plan to release a new souped-up, AI-powered Alexa. Again. The assistant reportedly failed to answer test questions correctly. But let’s not forget that getting things wrong is a problem that afflicts all AI.
And lastly
Here are a few more pieces of original writing from me and my team—just in case you're keen for more:
It might seem like AI is an issue for adults, not kids—but make no mistake: our kids will be using these platforms before we know it. That is, if they aren’t already. Here’s a parent’s guide with everything you need to know about ChatGPT, arguably the best-known generative AI tool going.
And, AI is shaking up the way we search for and find information. Traditional engines are incorporating it into search results, and some entirely new AI-powered engines are popping up. Here’s a parent’s guide with everything you need to know about Perplexity.