When news broke about Australia’s new social media ban for kids under 16, parents everywhere had questions. Is this the solution we’ve been waiting for? Will it make kids safer online? And perhaps most pressing: What does it mean for my family?
As a parent and someone who works in kids’ tech, I have mixed feelings. On the surface, it seems like a bold step toward addressing real concerns. But as with any sweeping change, the reality is more complicated. Let’s break it down into the pros, cons and what could be done differently.
The positives
First, let’s give credit where it’s due. The ban acknowledges something we’ve all been grappling with: social media poses significant risks for kids. Whether it’s exposure to harmful people and content, the pressure to chase likes or the mental health impacts of constant comparison, these platforms aren’t built with children’s wellbeing in mind.
This ban shines a spotlight on the problematic design patterns driving these platforms—endless scrolling, streaks and algorithmically amplified content—all designed to maximize engagement and, by extension, ad revenue. It’s not that social media is inherently good or bad; it’s the incentives behind it that create problems. Platforms are built to capture attention and collect data, not to prioritize safety or meaningful connections.
By introducing a ban, Australia is elevating the conversation around these risks and forcing us to confront the misalignment between Big Tech’s priorities and our kids’ needs. That’s a win in itself.
The drawbacks
But here’s the thing: I don’t think a full-on ban is the answer. For starters, it’s unlikely to work as intended. If history has taught us anything, it’s that kids are incredibly resourceful. They’ll find ways around restrictions—whether it’s using VPNs, creating fake accounts or migrating to platforms that fall outside the ban’s scope. A ban might even drive them to riskier corners of the internet.
Then there’s the social impact. For many kids, social media is a lifeline. My own daughter uses Snapchat to stay connected with her friends. If that option were taken away, it wouldn’t erase her need to communicate—it would just force her to find possibly riskier alternatives. Blanket bans risk alienating kids who use these tools responsibly and for positive purposes.
Another concern is the potential emphasis on age verification. If platforms are required to verify ages more rigorously, it introduces privacy risks and creates friction for everyone. Do we really want Big Tech collecting even more personal data to enforce these rules? I’m not so sure.
A better way forward
So, what’s the alternative? Instead of banning social media outright, we should focus on making it safer and less exploitative. The YouTube Kids example is instructive here. After facing a hefty fine from the FTC, YouTube made incremental changes to create a safer environment for younger users. It wasn’t perfect, but it was progress.
What if we applied the same logic to social media? Imagine banning behavioral ads targeting kids or eliminating addictive features like streaks and infinite scroll. These design tweaks wouldn’t just make platforms safer; they’d still give kids access to the digital social spaces they crave and help reduce the incentives to lie about their age to gain access.
Creating closed ecosystems designed with kids’ wellbeing in mind is another promising approach. At Kinzoo Messenger, for example, we’ve built an environment where kids can connect with friends and family without being bombarded by strangers and ads or manipulated by algorithms. Nothing is forced onto their screens, and our design isn’t motivated by maximizing screen time or data collection. It’s a healthier equilibrium between what kids want and what’s safe for them.
Finally, we need to educate parents about the trade-offs. It’s not just about setting up guardrails or enabling parental controls—it’s about understanding the behaviors that platforms incentivize and making informed decisions as a family. In some cases, that might mean opting out of certain platforms altogether. In others, it might mean choosing safer alternatives.
Bans make headlines, but they rarely solve the underlying problem. If we want to protect kids online, we need more nuanced solutions that address the root causes. That means holding Big Tech accountable, rethinking harmful design patterns and fostering environments that prioritize wellbeing over profit.
Australia’s social media ban has sparked an important conversation, and that’s a good thing. But let’s make sure the solutions we pursue are as thoughtful and smart as the kids we’re trying to protect. Our children deserve nothing less.
A deeper dive
Here are a few helpful resources in case you want to really dig into today's topic:
One of the thorniest issues tied to Australia’s social media ban is age verification. Proposals like facial recognition services to estimate a user’s age have raised significant concerns. Researcher Vincent Cohney tested these technologies and found troubling disparities. As he explains to NPR: “The very algorithms that were cited as highly effective by these age assurance companies had an average error of about five years when faced with a Western African face. The technology is not ready yet.” This highlights the broader challenge of implementing effective and equitable age verification methods without compromising privacy.
The broader question of what constitutes social media further complicates the ban. Platforms like TikTok are clear targets, but what about YouTube, which is exempted for its "significant" educational content? Similarly, video games such as Roblox, which now include robust social features like chat, fall outside the ban’s scope. The lines are increasingly blurred, and determining where to draw them is a thorny issue policymakers must tackle.
TL;DR
Too long; didn't read. It shouldn't be a full-time job to keep up with industry news, so here is a mercifully quick summary of some other notable developments:
In California, a proposed bill could require social media platforms to display "black box" warning labels about the risks they pose to young users. The bill aims to address the youth mental health crisis by mandating these warnings, which would appear for 90 seconds during a user’s first login and weekly after that. While such labels echo tobacco-style warnings, critics question their effectiveness in addressing platforms’ addictive features and harmful content.
A federal court upheld a possible nationwide TikTok ban unless its Chinese owner, ByteDance, sells its US operations. The government argues the app could expose Americans’ data or manipulate content under pressure from China. TikTok unsuccessfully challenged the ban, arguing that it infringes on free speech. The proposed ban, while fast approaching, is far from certain. It’s difficult to know what the incoming administration will do, and many people still think that a scenario where TikTok is completely blocked in the States is unlikely.
And lastly
Here are a few more pieces of original writing from me and my team—just in case you're keen for more:
Setting up new devices for kids can be tricky and confusing. But don’t worry, my team has put together some helpful tools to make the process smooth. Here’s our parent’s guide for setting up your new Amazon Fire tablet.
And if you’ve gone the iPad route, here’s a guide that will have you all set up in no time.