Hi to all of my readers. The summer has proven a bit hectic, and the cadence of my newsletter has suffered because of that. I will have another issue for you next week, and then I plan to get back to a more regular schedule. I hope you all had a great summer. Now, onto this week's newsletter…
For a while now, Apple has been hanging its proverbial hat on one thing: privacy. There have been privacy-focused ad campaigns, privacy-first software updates and even a contentious public relations battle with Facebook over (you guessed it) privacy. They've stood up to governments who demanded back doors into iPhones, and they've won. Privacy is a major part of their brand ethos, so their recent announcement of expanded protections for children made waves in the industry. The company revealed that it would be implementing updates to help protect children from sexual predators. Notable changes include a system that detects child sexual abuse material (CSAM) and a system that warns children and parents when a child receives or tries to send explicit pictures in Messages.
Clearly, protecting children is a good thing. I think we can all agree on that. But many in the tech industry had questions about what these updates mean for user privacy. Here's what parents need to know about the changes.
Scanning the cloud: safety vs. privacy
One of Apple's updates is meant to prevent the spread of CSAM by scanning images uploaded to the cloud and comparing them against a database of known images compiled by the National Center for Missing and Exploited Children. In essence, Apple will use machine learning to generate a digital fingerprint (a hash) for each photo and check it for matches against that database. If enough matches accumulate, the flagged photos are reviewed by a human moderator, and information may be sent to law enforcement if necessary. Apple will only be scanning photos uploaded to the cloud, so technically users can "opt out" if they decide not to back their photos up.
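For the technically curious, here's a rough sketch of what threshold-based hash matching looks like. To be clear, this is my own illustrative approximation, not Apple's actual system: Apple uses a perceptual hashing technology it calls NeuralHash, and the function names and threshold below are assumptions made for the sake of the example.

```python
import hashlib

# Hypothetical sketch of threshold-based hash matching. Apple's real system
# uses NeuralHash, a perceptual hash where visually similar images produce
# the same fingerprint even after resizing or recompression. A cryptographic
# hash (SHA-256) is used here only as a stand-in so the example runs.

REVIEW_THRESHOLD = 30  # illustrative; not Apple's actual number

def fingerprint(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hash of an image."""
    return hashlib.sha256(image_bytes).hexdigest()

def count_matches(photos: list[bytes], known_csam_hashes: set[str]) -> int:
    """Count uploaded photos whose fingerprint appears in the known database."""
    return sum(fingerprint(p) in known_csam_hashes for p in photos)

def needs_human_review(photos: list[bytes], known_csam_hashes: set[str]) -> bool:
    """Photos only reach a human reviewer once matches cross the threshold."""
    return count_matches(photos, known_csam_hashes) >= REVIEW_THRESHOLD
```

The key design point: no single match triggers anything, and no human looks at photos until the match count crosses the threshold.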
When I first heard about the updates, I have to admit that it felt a bit invasive. I mean, just imagine if Facebook proposed an update that scanned your entire photo library. Maybe I'm jaded, but it's tough to trust a Big Tech platform with personal information, no matter its track record on privacy. But on the other hand, it's clear that more needs to be done to protect children in the digital world.
Tech companies aren't required to actively look for CSAM on their platforms—only to report it when they find it. In 2020, Apple reported just 265 instances. By contrast, Discord reported 15,324, Dropbox reported 20,928 and Facebook reported 20,307,216. Clearly, the platform can and should be doing more to protect children, but experts are still on the fence about how this approach will affect user privacy, since it's effectively a backdoor into an otherwise-encrypted system.
That said, those of us in the kid-tech world know that privacy can be a double-edged sword. It's an unfortunate fact that encrypted platforms can be exploited by bad actors, giving people dark digital places to do illegal things. In fact, end-to-end encryption on a platform is extremely dangerous for kids. It's best practice for children's platforms not to be encrypted, and we decided from the beginning not to fully encrypt Kinzoo for that very reason. We believe in protecting user privacy, but never at the expense of children's safety.
Messenger updates: detection vs. reporting
Sexting. It's not something parents like to think about, but the truth is that tweens and teens are sending and receiving explicit photos. Sometimes it's part of normal sexual development, but other times it's cause for serious concern: it can be unsolicited or even part of a dangerous grooming scenario. One of Apple's updates applies to devices that are part of a family plan. The company will automatically blur images sent to users under 18 that contain explicit content. Children can choose to view the image anyway, but an alert will be sent to their parents.
And, there will be interventions if a child attempts to send an explicit photo. Apple will ask if they indeed want to send the photo—and warn them that parents will receive an alert if they do.
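To make that flow concrete, here's a simplified sketch of the logic as Apple has described it. Every function name below is a hypothetical stand-in; the real classifier is an on-device machine-learning model, and none of this reflects Apple's actual code.

```python
# Hypothetical sketch of the Messages intervention flow described above.
# All names are illustrative assumptions, not Apple's actual API.

from dataclasses import dataclass

@dataclass
class Account:
    age: int
    on_family_plan: bool

def looks_explicit(image: bytes) -> bool:
    """Stand-in for the on-device model that flags nudity."""
    return False  # placeholder so the sketch runs

def show_image(image: bytes) -> None: ...
def show_blurred(image: bytes) -> None: ...
def alert_parents(message: str) -> None: ...
def child_taps_view_anyway() -> bool: return False  # child's choice in the UI
def child_confirms_send() -> bool: return False     # child's choice in the UI

def handle_incoming(recipient: Account, image: bytes) -> None:
    """Blur flagged images for minors; alert parents only if the child views."""
    if recipient.on_family_plan and recipient.age < 18 and looks_explicit(image):
        show_blurred(image)
        if child_taps_view_anyway():
            alert_parents("Your child viewed a flagged image.")
    else:
        show_image(image)

def handle_outgoing(sender: Account, image: bytes) -> bool:
    """Warn minors before sending a flagged image; alert parents if they proceed."""
    if sender.on_family_plan and sender.age < 18 and looks_explicit(image):
        if not child_confirms_send():
            return False  # child backed out; nothing is sent
        alert_parents("Your child sent a flagged image.")
    return True
```

One design note: in this flow, as described, the alerts go to parents rather than to Apple, and the detection happens on the device itself.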
All of this relies on detection, and the platform is in charge of defining what counts as nudity. It raises the question: why isn't it possible for iMessage users to report content or contacts that send them something inappropriate? A reporting mechanism empowers users: it lets them decide what's appropriate and what isn't, and gives them the option to act when something doesn't feel right. That's why we built one into Kinzoo. By most estimates, CSAM is proliferating in the digital world, and the more tools users have to combat it, the better.
Unintended consequences and ulterior motives?
Any time a Big Tech company announces a major update like this, I always wonder what their true motivation is. Many privacy advocates and tech experts are not on board with these updates. They're worried about the prospect of a back door, and the potential for governments and regimes to take advantage of a system designed with the best of intentions. A computer scientist and a cybersecurity researcher cautioned that, "[w]hile Apple has vowed to use this technology to search only for child sexual abuse material, and only if your photos are uploaded to iCloud Photos, nothing in principle prevents this sort of technology from being used for other purposes and without your consent."
These updates also give Apple's AI and machine-learning technology access to a massive trove of information. I wonder if this update has the unintended consequence of giving Apple access to huge swaths of data to train their AI systems in a more "ethical" way.
An update for the better?
At first, I thought these updates were bad news for privacy. But the more I read on the issue, the more I began to think that they might be steps in the right direction. We need to acknowledge that kids are using the internet. They are exploring the digital world, and more needs to be done to keep them safe. Detecting CSAM, if done ethically, is the right thing to do. I hope Apple can pull it off.
A deeper dive
Here are a few helpful resources in case you want to really dig into today's topic:
If you'd like to learn more about how this technology works, here is an easy-to-understand breakdown. I'll admit that it sounds scary at first, but when you really dig into it, Apple isn't exactly peeping on all your photos: it's looking for matches to known CSAM, and only once a certain number of matches accumulates are photos reviewed by a human moderator.
Apple's announcement generated so much controversy that they followed up shortly after with an expanded FAQ to address some of the pushback. You can check it out here.
TL;DR
Too long; didn't read. It shouldn't be a full-time job to keep up on industry news, so here is a mercifully quick summary of some other notable developments:
Emojis... a language unto themselves. And as it turns out, that language can be interpreted differently depending on your generation. Here's an enlightening rundown of what common emojis mean to different age groups. Hint: you might want to stop sending a basic smiley to your kids.
After a year of upheaval, a lot of parents are trying to reinstate some sort of screen time boundaries for their kids. Obviously that's much easier said than done, but the parenting experts at The Washington Post have some great suggestions to ease the process.
Okay, that's it from me until next time. If you enjoyed this newsletter, and know of another parent who would as well, please feel free to forward it along.