These days, a lot of us are looking at social media with a bit of a side-eye. Even though we continue to use these platforms, there’s a rising sentiment that they’re doing users dirty—especially when it comes to younger users.
We’ve seen an avalanche of headlines about youth mental health and social media, and everywhere you turn, there’s an expert warning that these platforms might not be healthy spaces for kids. We’ve heard it from whistle-blowers. We’ve heard it from the Surgeon General. And we’ve heard it from leading experts like Jonathan Haidt, whose latest argument I covered in a previous newsletter.
Since Haidt published his book, there’s been a lively debate about whether technology and social media are to blame for the state of kids’ mental health. But regardless of the science, the category of “social media” officially has baggage now.
I would argue that these platforms have inherent problems because of how they make money. These platforms don’t feel like they’re designed for our wellbeing because they’re not. They make money by harvesting our attention, so they’re optimized to keep us online and engaged. This is arguably why social media has toxic connotations, and parents, lawmakers and school boards are pushing back on these companies like never before.
In a perfect world, social media companies would undertake an honest reckoning. All the negative headlines and congressional grillings would make them pause and reevaluate. They’d look at some of the drawbacks of their platforms and change some things to make them healthier and safer. They’d approach the problem like parents—not corporations.
But we don’t live in a perfect world. Instead of making systemic changes, they’re treating the crisis like a PR problem—something to be solved with slick marketing. Today, I want to talk about what Snapchat has been up to, because their recent moves illustrate this issue well.
Back in early February, Snapchat updated its website. “Less social media,” it proclaimed. A strange thing for a social media company to say—until you read the next line: “More Snapchat.” Ah, there it is. The company wants to tell us that they’re not social media. They belong to their own category altogether, and we all need more of whatever that is.
I’m not surprised to see Snapchat distancing itself from social media. I suspect this is an attempt to reassure nervous parents that Snapchat is a safe space for kids, who are the main user group for the app. When Snapchat rolled out this campaign, they released a statement saying “Somewhere along the way the promise of social media changed. Everything it aimed to be, the gaps it was meant to bridge, the connection it was meant to foster, the joy it was supposed to bring started to feel different.” They go on to remind us that they’ve always been a social media alternative. And that they’ve always prioritized doing what’s right for the community and staying true to their values.
“Our goal,” they say, “is to provide a healthy and safe environment for all Snapchatters—and especially the youngest members of our community.” These are nice words, and they sound like they come from a company that cares. But we see something different when we look at the app itself—and the features it includes.
Take, for example, Snapchat’s feature “My AI.” I wrote about this when it first launched, but here’s a refresher: it did some sketchy things. Researchers posed as kids and asked for advice on various topics. Along with telling a kid how to cover the smell of weed smoke and alcohol (air fresheners, candles and essential oils), My AI also wrote a term paper for a fictional 15-year-old. But the most concerning interaction happened when the Center for Humane Technology tested My AI while pretending to be a 12-year-old who was planning on having sex with a 31-year-old on their 13th birthday. The AI suggested lighting candles to make the event special. Definitely failing at creating a healthy and safe environment with that one.
Snapchat recently wrote about its AI features in a blog post and had this to say: “While all of our AI tools, both text-based and visual, are designed to avoid producing incorrect, harmful or misleading material, mistakes may still occur.” To me, that doesn’t sound all that great for the youngest members of their community. Use at your own risk, I guess.
And another of their recent features was a head-scratcher. The “Solar System” shows users how close they are to their contacts and ranks them against others in their friends’ orbits—and it predictably created a lot of anxiety for a lot of users. It’s not at all surprising that a feature like this could lead to hard feelings, and the company acknowledged that they received pushback. They quickly released a statement explaining that, “[W]e understand that even though it can feel good to know you are close to someone, it can also feel bad to know that you aren’t as close to a friend as you’d like to be. We’ve heard and understand that the Solar System can make that feeling worse, and we want to avoid that.”
These features had trouble written all over them. It doesn’t take a crystal ball to guess that an AI chatbot could offer up problematic advice. And it doesn’t take a rocket scientist to anticipate that the Solar System feature could amplify social anxiety. So why did a company that says it wants to provide a healthy and safe environment to all Snapchatters release features that have the potential for so much damage?
The truth is that even if the platform wants to uphold its values, the pull of an attention-hungry business model is too much to resist. Companies with business models that rely on harvesting your data tend to put people’s needs and wellbeing second to advertisers’. There is immense pressure to boost engagement, and that’s not always conducive to supporting users’ wellbeing.
Money is ultimately what dictates how a company designs its products and interacts with its users. If parents want to assess how safe a platform is for their kids, they can learn more from a company’s business model than from anything it writes on its website. In rejecting the social media moniker, Snapchat is trying to create a new category—but this is just marketing-speak. If you want to truly understand what an app stands for, you need to look at the user experience. How does it make you feel? What does it do for you? What does it take from you?
Snapchat has leaned into ephemeral messaging, but make no mistake—they want you to stick around for as long as possible. No friends online to chat to? No problem. There’s My AI. Stressed you’re not as close to your best friend as you thought? Maybe spend some more time online snapping them.
I believe that the people working for Snapchat probably do want their users to have a positive experience. But I think they’re measured by a different metric. I don’t think they get bonuses when kids feel good about themselves; I think they get bonuses when they maximize engagement.
And while no one can say for sure if social media is a contributing factor behind the youth mental health crisis, I think it just makes good sense to design technology with kids’ wellbeing in mind. Social media companies need to do more than pay lip service to this notion.
A deeper dive
Here are a few helpful resources in case you want to really dig into today's topic:
Snapchat has created a standalone website with a manifesto of sorts. It’s an interesting bit of marketing, but I’d invite you to read it and see if you think it holds up next to some of their more controversial features.
Lots of states are trying to rein in the way Big Tech platforms interact with younger users—and the Big Tech platforms are pushing back hard. According to The Guardian, “Lawmakers in multiple states have accused lobbyists for tech firms of deception during public hearings. Tech companies have also spent a quarter of a million dollars lobbying against the Maryland bill to no avail.” These child safety bills are by no means perfect. They have some serious implications for privacy, which isn’t ideal. But it would be nice to see Big Tech pour as much effort into fortifying their platforms as they do into fighting child safety bills.
TL;DR
Too long; didn't read. It shouldn't be a full-time job to keep up on industry news, so here is a mercifully quick summary of some other notable developments:
Meta has lowered the minimum age in its terms of service for WhatsApp. The platform used to be for ages 16+ in the UK and EU, but the company has lowered that age to 13. The group Smartphone Free Childhood had this to say about the change: “Officially allowing anyone over the age of 12 to use their platform (the minimum age was 16 before today) sends a message that it’s safe for children. But teachers, parents and experts tell a very different story. As a community we’re fed up with the tech giants putting their shareholder profits before protecting our children.”
If it seems like I’m writing a lot about deepfakes, it’s because I am. This is one of the most important issues facing our kids, and I’ll keep on the topic for as long as I need to. It’s a high-stakes issue for teen girls especially, and they’re the ones pushing for legislation in many cases. There are lawmakers in 12 states that have either passed bills or are working on ones to stop the rapid spread of AI nudes, and this is thanks in large part to the advocacy of high school girls.
And lastly
Here are a few more pieces of original writing from me and my team—just in case you're keen for more:
Back when Snapchat first released their My AI feature, my team wrote a parents guide with everything you need to know. You can check it out here.
And if you’re curious about the app in general, here’s an overview of the pros and cons for Snapchat.