In the past, creating realistic images with artificial intelligence was the domain of experts. But in just a few short years, this technology has become incredibly accessible. Today, nearly anyone with a smartphone can create convincing AI-generated images with just a few taps. When used responsibly, AI has the power to fuel creativity. Unfortunately, it's also rapidly adding new tools to the arsenal of bullies, predators and even criminals, raising concerns for parents of children and teenagers alike.
Today’s bullies have more sophisticated tools at their disposal than ever before, and AI has become a potent weapon in their hands. I’ve written about this scenario before and predicted how it would harm children. Imagine a high school bully using AI to create a convincing fake nude image of a classmate, then sharing it around the school. The image may be fabricated, but the real-world consequences are severe and lasting. Victims suffer humiliation, anxiety and depression, and their reputations and safety are put at risk—all because of something that technically never happened. And the ripple effects can last well beyond high school, affecting everything from college admissions to future career prospects.
It’s a nightmare scenario that’s already playing out in high schools across the country. But it doesn’t stop with bullies—AI technology is also making it easier for criminals to create Child Sexual Abuse Material (CSAM), including horrifyingly realistic images that appear to depict real children but were never captured by a camera.
According to researchers at Stanford, organizations like the National Center for Missing and Exploited Children (NCMEC) are already overwhelmed. This nonprofit, which receives federal funding, plays a critical role in managing reports of CSAM. But with the explosion of AI-generated material, it’s fighting a tidal wave. The technology has advanced so quickly that enforcement agencies lack the resources, and in some cases even the laws, to manage the flood of AI-generated content now being reported. Determining whether an image is real takes time and expertise, and those resources are already spread thin.
What’s even more disturbing is that some criminals are using real images of children as the basis for AI-generated CSAM. This means that photos of innocent children could be manipulated into harmful, exploitative content and then circulated online without their parents ever knowing. It’s a chilling thought, and one that makes many of us question the safety of posting images of our kids on social media.
Making matters more complicated, there are legal grey areas when it comes to AI-generated CSAM. If these images contain real children or if real child images were used to train the AI models, they’re illegal. But purely synthetic images that don’t involve real children are harder to prosecute. This puts parents in a tough spot: where do you draw the line, and what does the law actually cover?
If all of this feels overwhelming, that’s because it is. The truth is, AI has moved faster than our ability to legislate and regulate it. For parents, the best defense might be proactive protection and open conversations. Here are a few steps to consider:
Limit Sharing: Think twice about sharing images of your kids online, especially in a public or open setting. The less material available, the less risk there is of it being manipulated by AI.
Educate Your Kids: Talk to your children about digital safety and the potential dangers of AI-manipulated content. (If you need tips on having “the AI talk,” I have you covered.)
Stay Informed: AI technology and the legal landscape around it are constantly evolving. Staying informed is crucial for protecting your family’s privacy and security.
Push for Change: Supporting or voting for legislation aimed at regulating AI and protecting minors online can help ensure the law catches up with the technology.
AI has incredible potential, but like any powerful tool, it can be misused. By staying vigilant, informed and proactive, we can protect our kids from the unintended consequences of a world where it’s getting harder to tell what’s real and what isn’t. Our kids deserve to grow up in an online world where their safety and dignity are prioritized, and as parents, the onus is unfortunately on us to make that future a reality.
A deeper dive
Here are a few helpful resources in case you want to really dig into today's topic:
A man in the UK has been sentenced to 18 years in prison after using AI tools to generate CSAM. According to the Special Prosecutor, “It is extremely disturbing that Hugh Nelson was able to take normal photographs of children and, using AI tools and a computer program, transform them and create images of the most depraved nature to sell and share online.” The case marks the first time a crime like this has been prosecuted, and the lengthy sentence is meant to send a message.
Muah.AI is a platform that lets users create romantic partners using artificial intelligence. The AI partners will communicate via text and voice, and send pictures upon request. But apparently, people are using the platform to create child sexual abuse material. According to a reporter at 404 Media, data lifted from the company by an anonymous hacker reveals that many user prompts appear to request CSAM.
TL;DR
Too long; didn't read. It shouldn't be a full-time job to keep up with industry news, so here is a mercifully quick summary of some other notable developments:
As parents, we know that childhood looks different than it did when we were kids. There’s new technology and new expectations around how we should raise our children. One Georgia mom ended up getting arrested after her 10-year-old son walked a little less than a mile by himself. The boy apparently became bored and decided to walk to town, but a passerby called the police when she spotted him alone, and the police then opted to charge the mother with reckless conduct. This certainly isn’t the way I grew up, but it’s a reality for children today.
Starting in November, Roblox is rolling out new safety features for children under 13. Developers will be required to label whether their games are suitable for younger players, and children will not be able to access unrated games. The platform is also limiting the ways kids can communicate with other users. These changes are meant to give parents and children more clarity on what types of content are available on the platform.
And lastly
Here are a few more pieces of original writing from me and my team—just in case you're keen for more:
If your kids are on Instagram and you’re worried about keeping them safe, here’s a parent’s guide from my team with a few tips and tricks.
Thinking of getting your kids a smartwatch for the holidays? You can check out all the pros and cons here before you dive in.