Not too long ago, I wrote a newsletter about the scariest AI scenario I could imagine. It wasn’t some Skynet-inspired doomsday where machines rule over a post-apocalyptic world, but rather, a not-too-distant future where deepfake technology is available to every high school bully.
Well, about six months after I sent that newsletter, this article from Julie Jargon at the Wall Street Journal shows that I was right to be concerned. As she reported, that exact scenario played out at a high school in New Jersey. Back in mid-October, a number of female students learned that there were AI-generated nudes of them circulating in a group chat.
The mother of one of the victims told Jargon, “I am terrified by how this is going to surface and when. My daughter has a bright future and no one can guarantee this won’t impact her professionally, academically or socially.” As a parent to a high-school-aged daughter, I can confirm that this is a true nightmare scenario. And I’m concerned that this is just the beginning.
These AI tools are still in their infancy. So far, it’s just early adopters who are playing around with them—and it’s already leading to a huge bump in faked images and videos. On the 40 most popular websites for this kind of content, more than 143,000 videos have been added this year. That’s more than all the videos added between 2016 and 2022 combined. Just imagine what will happen when the majority of people start using this kind of technology.
If your picture has ever been posted online, in theory, you could be turned into a deepfake. We already know that companies like Clearview AI have scraped every photo they can get their hands on—including images on Facebook and Instagram. It makes you think about all the photos you’ve been posting online all these years. Most of us never considered how bad actors and powerful technology could use or abuse them.
According to Jargon, some of the girls at the New Jersey high school who were victims of fake nudes have deleted their social media profiles. I certainly can’t blame them. I don’t think I’ve seen a more compelling argument for deleting your digital footprint.
As deepfake technology gets better and easier to use, the risks of sharing on social media become steeper. As parents, we’ll be forced to think long and hard before we share anything with our kids’ faces online. Not only are you feeding dubious surveillance companies like Clearview AI, but you’re also inadvertently giving your kids’ bullies ammunition.
While the Biden administration has issued an executive order about AI in an attempt to rein it in, I wouldn’t hold my breath for regulation to save the day. Often, lawmakers don’t even have a grasp on how these companies operate or understand the technical side of the equation well enough to inform meaningful legislation. Remember back in 2018 when Mark Zuckerberg appeared in front of the Senate and had to explain the basics of Facebook's business model? Even he seemed sort of stunned when a senator asked how they sustain a business where users don’t pay and he had to explain, “Senator, we run ads.”
Even with all the congressional hearings and whistleblower reports and promises from platforms to do better, we haven’t seen meaningful reform in the way that social platforms operate. So yeah, I might not hold my breath for regulation to save the day. But what can parents do? We can double down on our efforts to protect our children’s privacy—and think very seriously about what we post about them online. We can do our best to help them understand what’s at stake when they participate in social media and help them weigh the risks in a manner that feels right for them.
We can also continue to push tech companies to do better by our kids—and in the meantime, seek out platforms that don’t compromise our privacy and wellbeing. Finally, we need to take everything we see on social media with a generous grain of salt, and we must train our kids to do the same. It’s crucial that we prepare our children for a world where AI tools are easily accessible—and capable of creating very compelling fakes.
A deeper dive
Here are a few helpful resources in case you want to really dig into today's topic:
As Julie Jargon points out in her article, this kind of deepfake technology used to require a network of computers. Now, all you need is a smartphone. And while larger AI generation platforms have rules in place to prevent this kind of bad behavior, all it takes is a quick online search to find “clothes remover” tools.
The technology that generates deepfake images disproportionately affects women and girls. Apparently, 96 percent of deepfake images are pornography, and 99 percent of those photos target women. Sophie Maddocks, a researcher and digital rights advocate at the University of Pennsylvania, told the Washington Post, “It’s now very much targeting girls. Young girls and women who aren’t in the public eye.”
TL;DR
Too long; didn't read. It shouldn't be a full-time job to keep up on industry news, so here is a mercifully quick summary of some other notable developments:
Arturo Bejar was consulting for Meta’s Wellbeing Team when he tried to convince Mark Zuckerberg and other top executives to make changes to the platform for the sake of user safety. He was inspired by his 14-year-old daughter, who had shared her own experiences on Instagram. She told him about the unwanted contact she got from strangers on the platform—and how reporting harassment usually led nowhere. When Bejar appealed to Meta to make changes, he hit a wall before his consulting gig eventually ended. He testified in front of a Senate subcommittee on Tuesday.
According to unsealed internal communications from Meta, it was Mark Zuckerberg who personally thwarted attempts to make the platform safer and better for teens. Apparently, many executives at the company recommended removing beauty filters from the platform because they were worried about how they could affect teen mental health. Zuckerberg? Not as concerned. He was more interested in the fact that the features were in demand.
And lastly
Here are a few more pieces of original writing from me and my team—just in case you're keen for more:
Instagram is a fraught platform, and there’s mounting evidence that it’s not a great place for younger users. But that won’t stop kids from creating profiles anyhow, so here are some tips from my team to help keep children as safe as possible.
Cyberbullying is an unfortunate reality that kids and parents often have to deal with. My team interviewed Tad Milmine, the expert behind Bullying Ends Here, and he shared some helpful insights.