Mark Zuckerberg recently announced sweeping changes to content moderation at Meta, the parent company of Facebook and Instagram. While the motivations for these changes may be rooted in political pressures, this newsletter isn’t about politics. Instead, I want to focus on what these changes mean for the platforms we use every day and, more importantly, for families trying to navigate the challenges of online life.
Until now, Meta employed a structured content moderation system that included fact-checking protocols. Third-party fact-checkers evaluated content flagged by users or algorithms to determine its accuracy, labeling or removing misinformation as needed. This approach aimed to curb the spread of false information while maintaining impartiality. However, the fact-checking system wasn’t perfect. Both sides of the political spectrum often criticized it, with some arguing it censored legitimate posts and others claiming it failed to catch enough harmful content. And, anyone who’s spent time scrolling through Facebook or Instagram knows that a lot of questionable stuff still makes its way into our feeds. The reality is that many social platforms are flooded with content that’s false, upsetting and sometimes hateful or illegal. The platforms try to tamp it down, but they actually aren’t legally obligated to remove any of it.
That’s because social media platforms operate under the protections of Section 230 of the Communications Decency Act. This law shields them from liability for user-generated content while allowing them to moderate as they see fit. Essentially, Section 230 gives companies the freedom to decide what stays up and what comes down without fear of being sued for every post.
Zuckerberg’s announcement pivots Meta toward a “free speech first” philosophy in two key ways: it halts the company’s fact-checking efforts and introduces a new “community notes” feature, similar to the approach taken by X (formerly Twitter). In theory, community notes allow users to provide context or correct misinformation collaboratively. In practice, this model shifts the burden of fact-checking from the platform to its users.
And, Meta has decided to eliminate restrictions on certain forms of speech previously deemed harmful, including some criticisms of immigrants, women and transgender individuals. While Meta frames this as a commitment to free expression, it also opens the door for harmful content to proliferate.
It’s not just Meta moving things in this direction. These changes align with a broader industry trend of scaling back moderation efforts. Platforms argue that amplifying diverse voices is a net positive, but critics worry that reducing guardrails allows misinformation and harmful speech to spread unchecked. And here’s the kicker: Meta’s algorithms are designed to prioritize engagement—even if that means prioritizing outrage. The more controversial or inflammatory a post, the more likely it is to go viral. This “enrage to engage” approach doesn’t just tolerate low-quality content. It thrives on it.
I’ve written before about how readily available AI tools and deepfake technologies will help flood social media with falsehoods. These tools make it easier than ever to produce convincing fakes, and without robust moderation, platforms could be overwhelmed by content that’s not only untrustworthy but potentially dangerous. Imagine trying to explain to your child why a video of their favorite public figure saying something outrageous isn’t real, even though it looks convincing. Community notes might add context and clarity in some cases, but that system doesn’t always operate swiftly, and many experts expect falsehoods to proliferate without a structured fact-checking program.
Meta’s shift is likely to set a precedent for other platforms to follow. If one of the world’s largest social media companies can scale back moderation without significant backlash, competitors may see an opportunity to do the same. The result? An online ecosystem increasingly filled with noise, misinformation and harmful speech.
It’s especially important for parents to understand these changes. Social media has already been a challenging and sometimes dangerous environment for kids, and these changes could make it worse. The flood of low-quality, untrustworthy content will make it even harder for younger users to discern fact from fiction. And as harmful speech becomes more prevalent, the risk to kids’ mental health grows.
In light of these changes, it’s worth rethinking how we use social media. Rather than engaging with broad, public platforms, families might find more value in narrowing their networks and prioritizing private, secure spaces. Messaging apps designed with safety in mind, family-oriented platforms and other closed ecosystems provide an alternative where the focus is on meaningful connections rather than maximizing engagement. At Kinzoo, we’ve been focusing on building these kinds of spaces for families, because we believe it’s what families and kids deserve.
Meta’s decision to change its moderation policies reflects the complex interplay of free speech, algorithmic amplification and platform responsibility. It’s a tough needle to thread. While these changes may be consistent with Section 230 and First Amendment principles, they also highlight the limitations of self-regulation in protecting users—especially the youngest ones. For families, the takeaway is clear: the less we rely on open social media platforms for meaningful connection, the better off we’ll be. As the online ecosystem becomes more chaotic, prioritizing trusted, private spaces will mean a safer digital experience for everyone.
A deeper dive
Here are a few helpful resources in case you want to really dig into today's topic:
X’s Community Notes program is a promising idea in theory: users are empowered to add context to posts at will. However, according to a Washington Post analysis, “Community Notes does a poor job of responding to falsehoods relating to politics, even when contributors correctly identify posts lacking context. Separate data analysis by The Post found that even when a Community Note is publicly added to an election-related post, the process typically takes more than 11 hours—by which time the content may have reached millions of users.” When it comes to falsehoods online, time is of the essence, so hopefully Meta’s system works more swiftly.
Meta’s independent Oversight Board has expressed concern about the changes. And journalist Maria Ressa, who won the Nobel Peace Prize in 2021, doesn’t buy the suggestion that the change will promote free speech. According to the BBC, she told the AFP news agency that there are "extremely dangerous times ahead" for social media users and democracy.
TL;DR
Too long; didn't read. It shouldn't be a full-time job to keep up on industry news, so here is a mercifully quick summary of some other notable developments:
Since Snapchat rolled out its My AI chatbot in 2023, it’s been sitting at the top of users’ chats whether they like it or not. This article from CNET has some handy instructions on how to manage the AI integrations in the app, including how paying subscribers can remove the feature altogether.
The Supreme Court looks poised to uphold the law that would ban TikTok in the US starting January 19th. TikTok has more than 170 million monthly active users in the US, and many of them are vocal in their opposition to the ban, claiming that they do not share Washington’s concerns over national security. In fact, many of them are flocking to another Chinese app to prove the point. Xiaohongshu, or Red Note, is a social media app with over 300 million users, mostly in China. Similar to TikTok and Instagram, Red Note features a mix of videos and static content. And, the app was the most downloaded free app in the US on Tuesday.
And lastly
Here are a few more pieces of original writing from me and my team—just in case you're keen for more:
If your kids got new devices over the holidays, we have a handy playlist of videos on our YouTube channel that’ll help you get things set up safely and securely.
And if you’re more keen on step-by-step instructions in article format, check out the digital parenting page on our blog.