A special edition on last Wednesday's attempted insurrection—and Big Tech's response
I had a newsletter all queued up to send out last week, but somehow it just didn't feel right to spend time talking about Instagram's private Like counts after everything that unfolded on Wednesday. The breach at the Capitol was frightening and surreal, and I'm sure we'll continue to see fallout for months and years to come as we grapple with the circumstances that made such an event possible. What I'm particularly interested in is Big Tech's response, and the government's response to Big Tech.
Twitter, Facebook and Snap are among those that have made bold moves to ban Trump's accounts—a step that many are welcoming. But a number of critics are questioning why it took an attempted insurrection to spur Big Tech into action. To quote one viral tweet, "that escalated steadily for four years." And while it's a positive move that Trump no longer has a megaphone for spreading disinformation and hateful rhetoric, banning him doesn't address the deep-seated, systemic ills that make social media so dangerous. It's clear that Trump incited his supporters to violence, but he's still just a symptom of the ugliness of social media, not the sole cause of the unrest. Banning him won't fix the problems and polarization inherent in these platforms.
Too little too late?
Twitter removed Trump's inflammatory video and locked his account for 12 hours. Trump resumed tweeting shortly after his account was reinstated—but then Twitter ultimately opted to remove him again and impose a permanent ban. Facebook, for its part, banned Trump indefinitely after the events at the Capitol and Snap followed suit shortly after. So, why now?
“We believe the risks of allowing the President to continue to use our service during this period are simply too great,” Facebook founder and CEO Mark Zuckerberg wrote. “Therefore, we are extending the block we have placed on his Facebook and Instagram accounts indefinitely and for at least the next two weeks until the peaceful transition of power is complete.” Sure, this is true, and a necessary step, but critics have been ringing alarm bells for years about the ways that social media platforms sow division. And, according to a Wall Street Journal report, Facebook has long been aware of the ways its platform cultivates extremism—and has done very little to combat it. An internal Facebook presentation from 2016 states that “64% of all extremist group joins are due to our recommendation tools” and that most of the activity came from the platform’s “Groups You Should Join” and “Discover” algorithms: “Our recommendation systems grow the problem.” The article argues, "In essence, Facebook is under fire for making the world more divided. Many of its own experts appeared to agree—and to believe Facebook could mitigate many of the problems. The company chose not to."
And on Monday of this week, Facebook COO Sheryl Sandberg claimed that Facebook wasn't responsible for providing a platform to those involved in planning the insurrection. Instead, she claimed that it happened on other platforms "that don’t have our abilities to stop hate, don’t have our standards and don’t have our transparency.” Critics including Media Matters were quick to point out that a preliminary search in Facebook groups "identified at least 70 active Facebook groups that were either named for or affiliated with 'Stop the Steal' that Facebook could have taken action against long before today.” Of the 70 groups the watchdog found, 46 were private.
Pretty damning stuff. To me, this all demonstrates that Facebook is more focused on reacting to public relations crises than taking proactive steps that would help to avoid similar situations in the future. After all, proactive measures could get in the way of growth and engagement, which the company has long prioritized above everything else.
An important step—but not a solution
Journalist Steven Levy captures the situation rather eloquently when he writes, "For months—years, really—people have asked what it would take for Facebook and Twitter to ban the policy-violator-in-chief from their platforms. Hate speech, doxing, and dangerous disinformation on Covid evidently weren’t enough." According to Levy, these platforms argued that the newsworthiness of Trump's vitriol outweighed its toxicity. All that changed as we watched his years-long assault on truth and democracy come to a terrible head last Wednesday. He no longer has a massive platform to spread his messages—but what of the extremist groups where his followers connect and organize? Well, Stop the Steal groups are still flourishing on Facebook. Wednesday's attempted insurrection—and social media's role in facilitating it—shows us just how far these platforms have strayed from the original mission of connecting us. And like many other critics, I believe that these companies need to be held accountable.
After all, they build the algorithms that dictate what content we see and measure how we engage with it. It's well-known that more extreme content drives higher engagement. So in a way, platforms and algorithms are actually incentivized to spread—not stop—controversial content and misinformation. As much as they'd like us to believe that they're neutral, social media platforms are not. They need to be accountable for their algorithms and any ripple effects caused by the viral spread of misinformation. They need to take bigger steps than just banning Trump because the consequences of their divisive content are grave.
Where we go from here
There's bound to be plenty of soul searching in the wake of the breach of the Capitol. If there's any silver lining, I hope that we take a good long look at social media and demand accountability. There's been momentum building for years, and we've seen a trend towards individuals tightening up their personal networks. People are beginning to think more about who they're sharing with—and what happens to their data when they use social media. It's one of the main reasons I founded Kinzoo. I believe that the violence we saw in Washington will help accelerate this trend towards greater privacy, but there's only so much that we can do as individuals. After the dust begins to settle, we need to turn our collective attention to the platforms that propped up misinformation and extreme content for so long. Lawmakers and governments need to ask tough questions (and demand answers). Because even though Trump has been banned, there is a lot of work to be done if we want to root out the division and hate that have taken hold of these platforms.
Resources for parents
It's tricky to find the words to explain what happened to our kids, especially when we're still processing it ourselves. Here are a few resources to help parents talk to children about last Wednesday's events:
Common Sense Media has this handy guide that's conveniently broken down by age. There are plenty of tactics to open up a conversation and help kids understand their feelings.
This article from The Washington Post advocates for honesty and has some guidance around tailoring your language for your kid.
This resource from the New York Times is designed specifically for teachers, but it has great ideas for helping kids process Wednesday's events—from comprehension questions to writing prompts.
Okay, that's it from me until next week. If you enjoyed this newsletter and know another parent who would as well, please feel free to forward it along.