Suppose you ask the state of New Mexico what it thinks of the mega-popular social media platform Snapchat. They’d tell you that Snapchat is a “breeding ground” for predators seeking to collect sexually explicit images of children and extort them. At least that’s what they said in a new lawsuit filed last week against the app’s owner, Snap.
The lawsuit alleges that Snapchat’s “disappearing” messages create the perfect hiding place for the worst actors on the internet. To gather evidence, the state conducted an undercover sting: investigators created a fake account under the name “Sexy14Heather” with AI-generated photos of a 14-year-old girl. Apparently, the fake account was flooded with recommendations to connect with accounts that trade sexually explicit messages and photos—including users with names like “teentradevirgin.” Yikes.
The suit argues that the social media company misrepresented itself as a safe platform so it could continue to profit off of younger users. Sound familiar? It’s the argument often leveled at social media companies.
Researchers have raised the alarm about how Instagram’s algorithm connects a vast network of child predators. X has come under fire for failing to remove child sexual abuse material. And, Discord has been a magnet for predators who prey on vulnerable teens.
And, it’s not just predators that pose a threat. Viral challenges on TikTok can turn deadly. When the algorithm serves up videos of kids choking themselves to the point of losing consciousness, children tragically die. So yeah, Big Tech platforms face a lot of flak in the media for not protecting kids online.
Not only do they struggle with the PR fallout from sensational stories about predators, but they also regularly pay hundreds of millions, and sometimes billions, of dollars in fines for violating child safety laws. Their CEOs are hauled in front of Congress and grilled by lawmakers. And, they face an onslaught of terrible press after every new blunder comes to light.
They’re even in the crosshairs of the Surgeon General. So why do they still fail to protect children? Why do they continue to knowingly break child online safety laws? And why do they still run their businesses in a manner that harms young users? To me, it seems like that would be enough pressure to make any company change its ways, but not Big Tech. So what gives? What could possibly entice them to carry on this way?
Well, it’s actually a simple explanation. Money. The reason they fail to protect kids is that protecting kids is fundamentally at odds with their bottom line. I don’t think that Big Tech is courting child predators on purpose. I just think that they’re making unsafe choices in their never-ending quest for engagement. They care about their revenue more than anything else, and that leads them to design apps that attract as many people as possible. They want to increase their user count, so they fail to stop kids from signing up, even if that’s against their own terms of service. They want to keep their users online for as long as possible, so they show them a steady stream of shocking content. They want to sell as many ads as possible, so they harvest vast amounts of data indiscriminately. And, they want to connect as many people as possible—and sometimes that means connecting children to dangerous adults.
When their business models harm children, they usually get away with it. When they do get caught, the repercussions aren’t serious enough to deter them next time. And no amount of bad press has been enough to make their user base turn on them. Yet.
Personally, I think that our kids deserve better. They deserve companies that put children’s wellbeing over the balance on the spreadsheet. I also know it’s possible to design products that are fun, safe and positive for children—it just takes a bit more effort to do things right. Tech companies just have to be motivated enough to do so, and for whatever reason, protecting children doesn’t seem to motivate Big Tech the way it should.
A deeper dive
Here are a few helpful resources in case you want to really dig into today's topic:
Since Big Tech companies aren’t making any sweeping changes to their platforms any time soon, a growing number of school districts are taking matters into their own hands and implementing severe restrictions or outright bans on cellphone use during the school day. According to a Washington Post review, “Of the nation’s 20 largest school districts, at least seven forbid use of cellphones during the school day or plan to do so, while at least another seven impose significant restrictions.” I understand where the impulse comes from, but I wonder if banning tech full stop is the way to address this nuanced issue.
Child safety experts say new technology is making it harder to protect children online—not easier. AI-generated child sexual abuse material is flooding the internet, and that’s making it more challenging to spot real children in danger. According to the Guardian, “Possessing depictions of child sexual abuse is criminalized under US federal law, and several arrests have been made in the US this year of alleged perpetrators possessing CSAM that has been identified as AI-generated. In most states, however, there aren’t laws that prohibit the possession of AI-generated sexually explicit material depicting minors. The act of creating the images in the first place is not covered by existing laws.”
TL;DR
Too long; didn't read. It shouldn't be a full-time job to keep up on industry news, so here is a mercifully quick summary of some other notable developments:
In case you missed it, the Surgeon General published another opinion piece in the New York Times. This one is about how parents are at their wits’ end. He wants to “call attention to the stress and mental health concerns facing parents and caregivers and to lay out what we can do to address them.” He lists money, safety, struggling to get enough sleep, “omnipresent screens,” a youth mental health crisis and widespread fear about the future as the things that we’re contending with. I know first-hand how difficult it can be to navigate some of these issues, and it’s a big reason why I started this newsletter. If I can help make technology a little less stressful for families, I think that’s a big win.
I’ve written before about how AI-generated images make it hard to trust what we see online. I recently saw this explainer video that shows just how quick and easy it is to create fake images—and just how convincing they can be.
And lastly
Here are a few more pieces of original writing from me and my team—just in case you're keen for more:
If your kids like gaming with friends online, they’re likely using technology to chat with each other. While Discord is a popular choice for adult gamers, it’s not ideal for kids. My team wrote a piece about why you might want to consider an alternative.
It can feel like a losing battle to try and keep kids off social media. If yours are already on Instagram, here’s advice from my team on how to keep them safer.