Back at the beginning of November, my newsletter focused on the latest round of lawsuits against Meta, the parent company behind social platforms Facebook and Instagram. I wrote that this latest round of legal action is significant for a couple of reasons. First, the scale: 41 states plus the District of Columbia are currently suing Meta. Second, the suit captures many of the issues that kid-tech critics have been raising for some time, namely that Meta has repeatedly prioritized profit over children’s safety.
Meta responded by saying they were “disappointed that instead of working productively with companies across the industry to create clear, age-appropriate standards for the many apps teens use, the attorneys general have chosen this path.” The first step for a company facing a lawsuit is usually to try to get it dismissed. Big Tech companies, including Google, Snap and Meta, have tried to argue that they are immune from being sued because of the First Amendment and a provision of the Communications Decency Act known as Section 230.
Under Section 230, these platforms are not legally considered publishers, so they can’t be held responsible for anything their users post.
Section 230 is sometimes referred to as the law that created the internet. Many argue that social media wouldn’t exist without it. Because of Section 230, Facebook, Instagram and other social platforms don’t have to worry about whether every little post might constitute hate speech or break the law in any way. And because they are shielded from legal liability, they can set their own standards and police content on their platforms as they see fit. If some hate speech slips through, oh well. It’s not legally their fault. So naturally, social media companies turn to this trusty old provision first whenever they need to try to get off the hook.
Unfortunately for them, a US District Judge in Oakland, California ruled that the current lawsuits accusing them of illegally enticing and then addicting millions of children to their platforms could proceed. Section 230 isn’t a get-out-of-litigation-free card this time.
While this might seem like just another part of the court proceedings, the judge's ruling here is significant. When the companies tried to invoke Section 230, the judge said that the claims in the lawsuit were about more than just the content posted on these sites by third parties. The defendants, according to the judge, didn’t make a compelling argument as to why they should not be held accountable for building platforms with defective parental controls, ineffective tools to limit screen time and barriers to deactivating accounts.
The judge ruled that these lawsuits aren’t just about the content users post on these platforms—they’re about the platforms themselves. And why wouldn’t these companies be responsible for what they’ve built?
There’s an old adage: it’s not what you said, it’s how you said it. It captures the idea that the way information is conveyed has meaning unto itself. That idea is relevant to all this litigation as well. In the case of these social media companies, it’s not the content, but how it’s amplified. It’s the algorithms they created and the features they built. It’s the inadequate policies and the real harm that all of that created.
Meta might not legally be responsible for the extreme and toxic content on their sites, but they certainly should be held accountable for the way they funnel it directly to users—and young users at that. In their relentless pursuit of engagement, these companies have not just allowed toxic content to flourish—they’ve encouraged it.
When we set out to build Kinzoo Messenger, I knew I wanted to create the kind of platform I’d be proud to share with my own kids. As a father, I’m obviously not thrilled with the idea of my kids accidentally encountering toxic content—let alone having it served up to them on a platter by an algorithm. I don’t want them staring at screens endlessly, forgoing other activities and losing sleep because they can’t stop scrolling. And I don’t want them to come away from technology feeling bad about themselves. To me, these are basic things. They’re the bare minimum for any company designing technology for younger users.
A deeper dive
Here are a few helpful resources in case you want to really dig into today's topic:
According to Reuters, the judge’s ruling indicated that “the companies owed no legal obligation to protect users from harm from third-party users of their platforms.” So, she did dismiss some of the claims the plaintiffs were pursuing. To me, this shows the gap that exists between legal responsibility and moral responsibility. Personally, I think it would be nice if tech companies did more than just the bare legal minimum to protect our children.
The internet we know today was shaped in large part by Section 230 of the Communications Decency Act. If you want to understand the state of the World Wide Web today, it’s a good idea to start with this law, originally drafted in 1995. Here’s a rundown from Vox, explaining how it came to be, and how it continues to affect us today.
TL;DR
Too long; didn't read. It shouldn't be a full-time job to keep up on industry news, so here is a mercifully quick summary of some other notable developments:
Common Sense Media is the go-to source for many parents who have questions about movies, games, online platforms and more. They recently released their first ratings for AI tools. Snapchat’s My AI, which I’ve written about previously, got the lowest score out of the 10 tools they reviewed. The team of experts concluded that there are “more downsides to My AI than benefits.” These tools can veer into conversations about sex, drugs and alcohol, provide instructions on how to lie to parents, and offer to do kids’ homework assignments. I’ve said it before and I’ll say it again: unpredictable AI is not a toy for kids.
In a recent newsletter, I talked about how seeing is no longer believing on social media. Fake images are circulating widely, and the situation is likely to get worse now that images are so easy to generate with AI. This article from the Washington Post details how fake images depicting real-world events (think: wars in Gaza and Ukraine, the wildfire in Maui) are proliferating on Shutterstock. While some of the images are labeled as “AI-generated” on the stock photo website, many are not. And, when these images spread on social media, they’re usually passed off as real.
And lastly
Here are a few more pieces of original writing from me and my team—just in case you're keen for more:
As we head into the holiday season in earnest, lots of families will be relaxing screen time rules. Here’s a handy guide from my team with ideas on how tech can bring you together during this season—and how to make it a smooth transition back to business as usual in the new year.
And, if you’re planning on connecting digitally with far-away grandparents this season, here are some tips that can help!