Last month I attended the TED conference in Vancouver, and you can probably guess which topic dominated this year's talks: artificial intelligence. Of the 80 presentations at the conference, one in five was devoted to AI.
There were lots of different people with lots of different hot takes—and a few demonstrations showing the potential applications of AI technology. I learned how Khan Academy is training an AI tutor and building a useful tool with lots of guardrails for children. They've added a layer on top of ChatGPT's technology, so the tutor won't just give kids answers or do their work for them. Instead, it encourages them to use their problem-solving skills and discover answers on their own.
I also listened to a panel with two AI advocates and two skeptics, and one of those skeptics talked about some of the idiosyncrasies that AI has a tendency to produce. This woman is a scientist and a big believer in AI—but she's worried about outsourcing everything to AI, especially when it is quite bad at basic common-sense reasoning. She shared one amusing example where she asked AI to solve a simple logic problem: if it takes 5 hours to dry 5 shirts in the sun, how long should it take to dry 30 shirts? According to AI, that would take 30 hours. (Since the shirts dry in parallel, the answer is still 5 hours.) So yeah, not quite right in the common sense department.
I heard talks about how the military is using AI, and of course, there was one presenter who was concerned that AI was going to kill us all. These perspectives represented the extreme end of the spectrum (and reminded me a bit of Skynet).
But for all the inspiring and unnerving presentations at the conference, I found one event downright chilling—and it wasn’t even focused on the negative potential of AI. It was a demonstration by Tom Graham, the CEO of Metaphysic.ai. This is the company behind the infamous Tom Cruise deepfakes, and they’re leaders in this uncanny technology.
During the demonstration, Tom Graham was chatting with TED host Chris Anderson, and they were both projected onto a large screen at the rear of the stage. Tom Graham used his company’s deepfake technology to gradually transform his projected image into Chris Anderson’s. And then, as the final cherry on top, his voice morphed into Chris’ as well. On the stage, Tom Graham was chatting with Chris Anderson. On the screen behind them? Chris Anderson was chatting with Chris Anderson.
The demonstration left the audience rapt. It was stunning to see such powerful technology deployed in real time. And while the technology is surely impressive, it also left many of us uneasy. It’s hard to watch something like that and not immediately think of Pandora’s Box.
This demonstration wasn’t meant to cause alarm. And in fact, Tom Graham says his company is tightly controlling who can use this technology. For now, use cases are pretty much limited to Hollywood. But tech like this can’t be tightly controlled forever. And when regular people can get their hands on it, I’m afraid of what will happen.
During the demonstration, I couldn’t help but think about the ways this tech could go sideways for children. My mind kept flashing to my own kids—one of whom is entering high school next year. Just imagine how deepfakes could supercharge bullying. One morning, you accidentally bump into someone in the hallway—and by the afternoon, they’ve uploaded a very convincing video of you doing or saying something totally embarrassing. Or offensive. Or illegal.
Think of the possibility of generating deepfake voicemails and voice memos. Maybe your bully generates a recording of you saying something truly terrible, or even claims that you're the one bullying them. These scenarios are the truly chilling ones. Deepfakes have the potential to supercharge cyberbullying, and that's something that can be literally life-threatening for kids.
If (or when) this technology is readily available to the public, make no mistake: bad things will happen. Bullies are always going to bully, and they’ll always use all the tools at their disposal. Lots of people are already talking about the dystopian possibilities of deepfakes for misinformation, disinformation and nonconsensual pornography. Those are terrible scenarios to be sure, but I’m deeply troubled by the idea of everyday people using this technology against each other.
Once deepfakes become more accessible, they could be a destabilizing force. Seeing won’t be believing when you can’t trust your own eyes. People already distrust media outlets and have trouble believing news reports. Deepfakes promise to accelerate that trend immensely. When there’s a digital record of something, it might not matter if it actually happened, because the digital record can do damage. The “evidence” is out there anyhow.
That's what makes me so concerned for my kids. I'm afraid of the ways this technology might be used against them. I know I'll have to talk to them to ensure they never use it against anyone, even if it's meant to be funny or lighthearted. I'm worried about them living in a world where you can't even post a picture of yourself lest it be scraped to make a deepfake. I'm concerned about all the kids who might grow up in a world where their digital footprint could include pictures, videos and recordings of them that never actually happened. These are the thoughts that give me considerable pause. But at the speed AI is moving, there's no pause button in sight.
If we’ve learned anything from the rise of social media, it’s that moving fast and breaking things leads to a lot of negative consequences. It feels like we’re repeating past mistakes with AI. We have to think really carefully about deepfake technology because the possibility of harm to our children is real.
A deeper dive
Here are a few helpful resources in case you want to really dig into today's topic:
For more details about the demonstration by Metaphysic.ai, check out this article here. It includes some telling quotes from Tom Graham about the potential future of this technology. He states that "[o]ur customers have to sign a very long warrant that they have the rights to the images we use to train it. But in the long run, when the cost of this drops to zero and you have a billion people out there doing this, you definitely need a distributed system where people own and control their data." Personally, I'm not sure owning and controlling your data will be enough to mitigate the damage that a deepfake could create.
Tesla lawyers are already trying out the "it was a deepfake" defense for Elon Musk. Musk has previously made claims about the safety of Tesla's self-driving technology, but lawyers are trying to say those statements shouldn't be admitted as evidence in a lawsuit. The reason? Maybe they're deepfakes. For what it's worth, the judge is none too impressed with the argument.
TL;DR
Too long; didn't read. It shouldn't be a full-time job to keep up on industry news, so here is a mercifully quick summary of some other notable developments:
Another day, another massive fine for Meta. This time, it's a massive privacy settlement, and you might be entitled to some money. If you used Facebook between May 2007 and December 2022, you could collect some of the $725 million.
There's another bipartisan bill in front of the Senate with the aim of protecting kids from harm online. If the bill passes into law, social media companies couldn't use personal data to recommend content to users unless they know the user is over 18. Sounds good in theory, but a bill like this could have unintended consequences. I like the way Evan Greer, director of the digital rights nonprofit Fight for the Future, sums it up: "Broadly speaking I'd say this: yes, Big Tech companies are harming kids. We stop that by forcing those companies to change their business practices, not by kicking kids off the internet or taking away kids' rights."
And lastly
Here are a few more pieces of original writing from me and my team—just in case you're keen for more:
Gaming is a popular pastime among kids, but e-sports is on a whole other level. To learn all about the pros and cons of this organized and competitive world, check out our parent’s guide here.
One of the most popular social media apps for kids is TikTok—and it recently brought in some time limits for younger users. According to the experts, the limits might be a little toothless. My team put together a parent's guide, so you can learn more.