What will become of the lawsuit against Meta? Here are my predictions.
In my last newsletter, I spent some time unpacking the latest lawsuit against serial defendant Meta. In case you missed it, 41 states have filed a joint lawsuit against the social media giant, alleging that it designed a suite of addictive platforms that lure children to Instagram and Facebook and, in turn, have diminished the mental health of young users.
The new lawsuit accuses Meta of pursuing profits at the expense of kids’ wellbeing, an argument I’ve been making for years. The plaintiffs called out the way the company designed its platforms to induce FOMO (fear of missing out), encourage endless scrolling and flood kids with alerts to keep them coming back for more. And they allege that Meta did this despite knowing it was detrimental to younger users.
I wasn’t surprised by anything I read in the lawsuit. Because I build technology for kids and have children of my own, I spend a lot of time thinking and writing about the topic, and I happened to accurately call a few things in advance. So, I wanted to make a few more predictions, this time about where the lawsuit will lead. Here’s what I think will happen:
Meta will face a historic fine under the Children’s Online Privacy Protection Act (COPPA)
There is, in fact, a law that’s meant to protect children online. You just wouldn’t know it from the way Meta and other social media platforms have behaved. Under COPPA, platforms and websites that cater to children under 13 have to follow a few rules. I won’t bore you with the minutiae, but here it is in a nutshell: if your platform or website plans on collecting information from children under 13, you have to disclose it and get consent from parents. In addition, certain design patterns and algorithmically driven targeted advertisements directed at children are placed under additional scrutiny. Meta and many other platforms used to skirt this legislation by claiming that their platforms weren’t for kids.
But, according to internal estimates, Meta was aware of millions of users under the age of 13 on Instagram alone. It even received over 1.1 million reports of users under the age of 13 on the platform—and only disabled a fraction of those accounts.
In the past, tech companies that have run afoul of COPPA have faced fines for violating the law. TikTok was fined $5.7M USD back in 2019, and YouTube was fined $170M USD that same year. Of course, those amounts are relatively small when you look at the revenue these companies pull in. The fines, historically, have been too minuscule to be a real deterrent, so it’ll be interesting to see how much Meta ends up paying. I predict it’ll be historic and will significantly surpass the record $170M paid by Google (YouTube), but I have doubts that it’ll be enough to seriously impact the company’s operations.
Meta will be forced to make some changes
While the $170M fine that YouTube paid was record-setting, it wasn’t the most significant part of the settlement they reached with the FTC. The biggest impact came from the changes that they made to make the platform safer for kids. Among other things, they disabled commenting on videos directed at kids and they shifted the onus onto creators to identify whether their content was made for children. More significantly, any creator indicating that their content was directed towards children was unable to monetize that content, which happens via targeted ads displayed by Google throughout the video.
I suspect that Meta will be forced to make some changes in the way its platforms operate. It’ll be interesting to see what those changes look like, but at the very least, I think Meta’s days of pretending kids aren’t using its platforms are numbered. I think there’s even a chance that Meta uses this as an opportunity to resurrect Instagram for Kids. Although it’s the platform that no one asked for, I could see the company arguing that it needs a separate ecosystem for users under 13, a place where it will try to follow the rules of COPPA. It would be extremely hard to sell the public on this idea, especially with Meta’s reputation on kids’ safety taking such a beating. But again, they might take this as an opportunity to revive the plan.
I could also see Meta going in the opposite direction—and thinking long and hard about whether it’s worth it to support platforms for kids. The company is facing increasing scrutiny, and they’ll need to play by the rules going forward. And, complying with all that legislation is costly and time consuming. They might look at all the work involved in building kids’ platforms the right way and decide it’s not worth it. This is why they don’t offer Facebook Messenger Kids in the UK. Under the more stringent rules of GDPR-K, it just didn’t make sense for Meta.
COPPA will get a facelift in 2024
There is a lot of appetite among lawmakers to make the internet safer for kids. It’s one of the few causes that has bipartisan support. And, since COPPA was originally written back in 1998, it’s a prime candidate for some updates. The Federal Trade Commission has already proposed changes to COPPA. Their proposal is intended to “shift the burden” of children’s online safety from parents to platforms, and restrict the way those platforms can use and monetize children’s data. The proposed changes include turning off targeted advertising for children under 13 by default, prohibiting the use of personal info to bombard kids with push notifications, and limiting the collection of student data by learning apps, among other things.
Given all the news about social media and youth mental health, I wonder if we’ll also see some bluster about raising the age in COPPA legislation. There seems to be a growing consensus that 13 isn’t old enough for these platforms, so it’ll be interesting to see where that conversation goes. I predict we’ll see the age for COPPA increased to 16 or 18 years old when the legislation is reformed, but enforceability will continue to be a challenge (which is a whole other discussion).
We’ll see a push for better age verification systems
Raising the minimum age for kids to use social platforms might not matter all that much if they can simply lie about their age anyhow. That’s why we’ve seen some states trying to put the onus on platforms to verify users’ ages when they sign up. This might seem like a good plan in theory, but in practice, it creates a lot of privacy issues.
I think we’ll see a renewed push for a universal age verification system. There’s a huge opportunity for some innovative company to crack this nut. If they can create a system that satisfies regulators and privacy advocates, that’s an elusive golden ticket.
Public sentiment will continue to sour when Big Tech behaves badly
While I wasn’t surprised by anything I read in the lawsuit, some parts of it were downright damning. Meta allegedly knew that its products were harming kids, and it proceeded anyhow. That’s not a good look, and the bad PR is doing real damage to the brand. As more of the company’s internal memos and research are splashed across the headlines, it’ll be harder for them to convince parents to trust them with their children’s data, time and attention.
This last one is less of a prediction and more of a wish: I hope that we see more parents making the switch away from harmful Big Tech products and embracing different platforms that are designed to give kids the best of the digital world. I started Kinzoo because I wanted to give my own kids a better option. I know that there are other companies out there that are trying to make the internet safer for kids and I hope that momentum continues to build for them.
A deeper dive
Here are a few helpful resources in case you want to really dig into today's topic:
In an effort to protect kids, many lawmakers (and indeed parents) have tried to prevent kids from using social media altogether. While I sympathize with this urge, it’s difficult to enforce, especially when so many young people use these platforms so heavily. According to a report from Pew Research Center, a majority of teens ages 13-17 use TikTok, Snapchat and Instagram. I can’t really imagine what it would look like to remove all these users from these platforms and keep them off.
But according to Bloomberg, Prime Minister Rishi Sunak’s UK government is considering a crackdown on social media access for children under 16. This plan could potentially include bans. Discussions are still at an early stage, and one source says that full-on bans are unlikely, but this will be an interesting thing to keep an eye on.
TL;DR
Too long; didn't read. It shouldn't be a full-time job to keep up on industry news, so here is a mercifully quick summary of some other notable developments:
Have you tried using a generative AI platform? Well, there’s a good chance your children have. In fact, younger users are driving adoption of this new tech. According to new research, 79% of teens 13-17 and 40% of kids 7-12 have used generative AI platforms like ChatGPT and Snapchat My AI.
Did you know Ray-Ban and Meta have a collab in the works? That’s right, there’s a new wearable camera on the market, and it’s a pair of smart glasses from Meta. According to the reviewer at the New York Times, the glasses let him “secretly snap photos and record videos of strangers in parks, on trains, inside stores and at restaurants.” This new tech offers a “glimpse into a future with even less privacy and more distraction.” Now that’s a little chilling.