We didn’t take social media seriously until it was too late. We let platforms grow faster than our understanding of how they were shaping kids’ brains, relationships and beliefs, and we’re still scrambling in the aftermath. Now we’ve got another chance with a new technology: AI.
The American Psychological Association (APA) just released a major advisory on AI and adolescent development. The message? AI is already changing how kids think, feel and grow, and if we don’t act now, we’re going to repeat the same mistakes all over again.
Here are some main takeaways from the APA, translated into plain English for busy parents:
1. Sometimes AI acts like a friend—and we need to put boundaries around that
Chatbots and digital companions can blur the line between real and fake relationships. Kids may not realize they’re being influenced by something that doesn’t truly understand them. The APA says these kinds of platforms need guardrails because kids are more likely to trust them without question, and those relationships can get in the way of building real-world connections.
2. AI for kids should be built differently
Kids aren’t just small adults. Their brains are still developing, and they’re more impulsive, more sensitive to social approval and more susceptible to manipulation. AI tools for adolescents should be designed with this in mind. That means fewer engagement traps, more protective defaults and clearer explanations.
3. We need to encourage healthy AI use, not just fear
AI isn’t all bad. It can help kids learn, create and explore. But they need help understanding how it works and where it goes wrong. That means using it together, asking good questions and staying curious, without handing over the steering wheel. In other words, being a Lifeguard Parent.
4. We need to minimize exposure to inappropriate content
It’s important that kids interact with age-appropriate content, but AI can produce some questionable responses. Currently, there are very few guardrails around AI outputs, and virtually zero moderation of the inputs. In practice, that means most AI tools only try to catch harmful content after a prompt has already been sent. I believe that, when kids are involved, input filters matter just as much as output controls.
5. Privacy should be non-negotiable
AI systems collect enormous amounts of data, and kids often don’t understand what they’re giving up. And most platforms built for adults collect everything you type to train their models, meaning nothing is even remotely private. Parents need tools to manage that, and platforms need to stop treating children’s personal info like it’s just another data stream to monetize.
6. Deepfakes and bullying aren’t science fiction anymore
AI can fake voices. Fake faces. Fake videos. And while that technology might seem impressive or harmless at first, it opens the door to a whole new category of harm for kids, especially in the hands of bullies. The APA is clear: we need protections on both ends—from how this kind of content gets created, to how it gets shared. Platforms need serious guardrails. And kids need to understand how to spot this stuff, report it and protect themselves and each other.
7. Parents need support, too
You shouldn’t need a computer science degree to keep your kid safe. The APA calls for clear, user-friendly tools that explain what AI is doing, why it’s doing it and how you can set limits that actually stick. I believe that these tools should be proactive, not reactive. They should help parents stay in the loop without spending their precious time digging into a backlog of chats.
8. AI literacy isn’t optional anymore
Kids and adults need to understand what AI is, what it isn’t and how to spot bias, manipulation and misinformation. The more your kid interacts with AI, the more important it is that they can think critically about it.
The bottom line
We’ve seen what happens when we hand powerful technology to kids without asking the hard questions first. We underestimated the impact of social media, and now we’re playing catch-up. With AI, we have a rare opportunity to do it differently. To get ahead of the harm. To build better systems, set better boundaries and raise kids who don’t just use these tools, but actually understand them.
We don’t need to panic. And we don’t need to ban AI. But we do need to talk about it. We need to stay curious, stay involved and make space for conversations that help our kids navigate this new terrain with critical thinking and self-awareness. And we need to demand better tools that are designed from the ground up for kids.
Let’s help them grow up in a world where technology works for them, not the other way around. A world where they understand the tools they’re using, and trust the people guiding them through it.
A deeper dive
Here are a few helpful resources in case you want to really dig into today's topic:
If you want to read the full statement from the American Psychological Association, you can find it here. One thing I find especially interesting is that they also draw a parallel between the rollout of AI and the reckless rollout of social media, saying “We urge all stakeholders to ensure youth safety is considered relatively early in the evolution of AI. It is critical that we do not repeat the same harmful mistakes made with social media.”
As with other online child safety issues, I wouldn’t hold my breath for regulation to save the day, especially since there might be a provision in Trump’s new bill to ban AI regulation for 10 years. Another thing to keep an eye on.
TL;DR
Too long; didn't read. It shouldn't be a full-time job to keep up on industry news, so here is a mercifully quick summary of some other notable developments:
Grieving parents helped write a bill in Colorado to protect kids from predators, drug dealers and ghost guns on social media. It passed with bipartisan support. Then the lobbyists showed up. Meta, TikTok and Google all weighed in—but the real blow came from a far-right gun group that flooded lawmakers with emails, calls and pressure. And then, the lawmakers backed down. And this isn’t the first time that Big Tech has aggressively lobbied against measures that would make tech safer for kids.
TikTok has rolled out some new safety features. The latest updates make it possible to block access to TikTok during certain hours, and they give parents more visibility into teens’ follower lists. As usual, these updates are more reactive than proactive. Tech companies tend to roll out these new features when they get pressure from the public, not because it’s the right thing to do.
And lastly
Here are a few more pieces of original writing from me and my team—just in case you're keen for more:
My team is putting together a series on AI to help parents navigate it with confidence. First up: is AI safe for kids? Here’s everything you need to know.
And if you’re curious about what AI can actually do—no hype, just facts—check out this piece here.