Back in August, I read a story about a father whose toddler had an infection. Like many of us these days, the dad got medical advice remotely via telehealth. The doctor asked him to snap a few photos of the infection. The dad shared some images, the doctor prescribed antibiotics, and the infection healed. Then, a few days later, the father learned that his entire Google account had been disabled due to harmful content. Because his son's infection happened to be affecting his private parts, the photos the dad took and shared were flagged as child sexual abuse material (CSAM).
The situation quickly spiraled. The dad lost access to his emails, contacts and photos, and even his Google Fi account was disabled, meaning he had to get a new phone number with a different carrier. Without that number for two-factor authentication, he was locked out of many other digital platforms as well. Google had also passed the photos on to the police, who opened an investigation. Eventually, the authorities concluded that nothing untoward had happened, but Google refused to reinstate the dad's account. There was no explanation and no recourse.
Obviously, the whole situation sounds like a nightmare. Imagine losing access to pretty much your entire digital life because you were trying to get your child some medical attention. This dad’s story is a reminder that child safety, moderation and privacy are extremely complex issues for tech companies—and they need to be treated with thoughtfulness and care.
Keeping kids safe online is crucial. In fact, it’s one of the most important values we have at Kinzoo. We believe that tech companies can play a major role in protecting children. We can make a big difference by building platforms that prioritize safety from the beginning. Flagging and reducing the spread of CSAM is an important aspect of the fight to make the world a safer place for children, but like all things, it’s a fine balance. We’d have the best shot at detecting CSAM if every company scanned every single image ever taken. But then what happens to our privacy when big tech companies have access to every single image we take? After all, privacy is also an important and fundamental aspect of online safety.
Finding the right balance is hard. Proper, responsible monitoring has to take context into consideration. If a doctor asks a parent to share a picture of a toddler’s infection, Google obviously shouldn’t treat that the same way they treat a predator trading CSAM.
The problem is that Google doesn't seem to have the systems in place to treat this kind of thing with nuance. Here's how its system works: the company scans all images that are backed up to its servers using an AI tool. If the AI detects something questionable, a human reviews the content and escalates the situation if they deem it necessary. But Google scans billions of photos, and according to experts, false positives are inevitable. The dad in our story learned the hard way that there isn't a good system in place to deal with false positives when they happen.
Tech companies are in a tough spot. Should they prioritize privacy and stop scanning for CSAM? Or forget privacy and scan everything? It's no easy choice. Just ask Apple: the company recently announced plans to scan images that users back up to the cloud, but the plan was delayed after major pushback from privacy advocates.
Given how big a role big tech plays in our lives, it seems unconscionable that they don’t have a robust avenue for appeal in these situations. Our photos, emails, contacts, communications and precious memories are often stored on a big tech company’s server somewhere. As per the terms of use, you effectively hand that stuff over to them when you use their platforms. When I see stories like the one about this dad, it always makes me think about just how much of our digital lives aren’t really ours.
At the end of the day, privacy and safety will always be tricky to get right, but we have to prioritize this balance. Tech companies have to put in the time to understand the context. And they have to be able to make it right if they get it wrong.
A deeper dive
Here are a few helpful resources in case you want to really dig into today's topic:
In the course of investigating this story, the reporter at the New York Times also came across another dad who was going through the same issue at the same time. She speculated that there are probably many more people experiencing the same thing—but they often don’t come forward because of the damaging nature of the accusations.
Apple’s plan to start scanning images for CSAM was met with a lot of pushback from privacy experts. So much, in fact, that they decided to delay their plan to implement the new feature.
TL;DR
Too long; didn't read. It shouldn't be a full-time job to keep up on industry news, so here is a mercifully quick summary of some other notable developments:
If you caught my last newsletter, you might remember my thoughts on end-to-end encryption and how it can negatively affect children's safety online. Meta is now exploring the idea of encrypting its Facebook Messenger platform, and politicians in the UK are voicing their opposition for exactly that reason.
When we talk about the most popular tech platforms among children, tweens and teens, we don’t usually talk about Twitter—but according to the Washington Post, there is an alarming uptick in tweets glorifying self-harm. Some also speculate that these posts are coming from younger users, which is something regulators and parents might want to keep an eye on.
And lastly
Here are a few more pieces of original writing from me and my team—just in case you're keen for more:
Content moderation is obviously a key aspect of keeping kids safe online, but it’s important for parents to take an active role in their kids’ digital lives. We sat down with the Senior Director of Moderation at Roblox and learned about his best practices for protecting children when they use technology.
When Apple first announced its now-delayed plan to scan for CSAM, I dedicated a newsletter to the topic and looked at the fine line between safety and privacy. For an even deeper dive into the topic, you can check out my thoughts here.
Okay, that's it from me until next time. If you enjoyed this newsletter, and know of another parent who would as well, please feel free to forward it along.