Would you want your kid to use an app that gave them advice on how to hide the smell of booze and cannabis? Would you want them to use an app that encouraged a 12-year-old to sleep with a 31-year-old? You don’t need to answer. Of course you don’t want your kids using that app. Any parent would say: absolutely not. Unequivocally not. No, no, nope.
Unfortunately, this isn’t a hypothetical question. Snapchat has released a new feature called My AI. Before you panic, know that the feature is currently limited to a small set of users: only premium subscribers can access it for now. But the company has billed the ChatGPT-powered feature as a “friend” that users can talk to and chat with when their other friends are busy. Various reporters and researchers have tested My AI to get a sense of how it works, and oh boy is it troubling.
Along with giving advice on how to cover the smell of weed smoke and alcohol (air fresheners, candles and essential oils), My AI also wrote a term paper for a fictional 15-year-old. But the most concerning interaction happened when the Center for Humane Technology tested My AI while pretending to be a 12-year-old who was planning on having sex with a 31-year-old on their 13th birthday. The AI suggested lighting candles to make the event special. What the expletive?
According to Snapchat, “My AI is an experimental product for Snapchat+ subscribers. Please do not share any secrets with My AI and do not rely on it for advice.” That’s a pretty alarming statement given the fact that 59% of American teens between 13 and 17 use Snapchat. The company must know that this product is likely to be used by children. Introducing an experimental, potentially harmful feature to a platform populated by children seems misguided at best.
I think I can speak for all parents when I say: children do not need technology that encourages them to do risky things. We know their brains are already wired for risk-taking behavior. They don’t need any help in that department. As parents, we also already have to spend time countering the negative effects of existing platforms. We don’t need a new, untested, unpredictable technology in the mix. So what gives?
I think it has a lot to do with the current business climate in the tech sector. AI is having a pretty big moment right now, because powerful platforms like ChatGPT are now widely available to the public. We haven’t seen capabilities like this before and it’s kicked off something of an arms race. Companies are sweating because they feel like they need to embrace this new tech now or be left in the dust. Give your customers and users artificial intelligence or resign yourself to irrelevance.
And there’s a huge market advantage in being first. That’s undeniable. But these AI platforms are still extremely unpredictable. Remember the AI bot that tried to break up a tech reporter’s marriage? Or Tay, the Microsoft bot that Twitter turned into a racist in less than 24 hours? And we have to consider the fact that AI platforms demonstrate behaviors that stump their creators. These bots also have a tendency to “hallucinate.” They just make stuff up, but present it like it’s fact.
The reality is that even the people who build these platforms can’t anticipate the ways they can change or break. At the end of the day, we just don’t know the way these platforms will evolve. We don’t have a clear picture of the potential pitfalls, and for that reason, we shouldn’t be placing them in children’s hands. Kids need guidance. They need to learn how these platforms work, and they need to understand their limitations before chatting away with them.
As with any groundbreaking technology, we need to take a measured approach when our kids are involved. I’m not suggesting that we ban these platforms or keep them away from children entirely. That’s a losing strategy as well. But we also can’t dump unpredictable new platforms on kids, especially when we don’t yet know the exact ways they might be dangerous.
AI is set to transform a lot of things in the near future. We’re keenly aware of this at Kinzoo, and we’re interested in the ways that we can use AI to make our products better, safer and more exciting for families. But we won’t roll anything out until we’re satisfied that it’s safe for children. We’re parents too. We would never give your kids something we wouldn’t give our own.
The stakes are high when it comes to youth, mental health and technology. We’re starting to get a clearer picture of the way social media can impact our kids’ wellbeing, and we can’t afford to introduce yet another technology with “unintended consequences.”
A deeper dive
Here are a few helpful resources in case you want to really dig into today's topic:
I enjoyed this piece from New York Times tech columnist Kevin Roose (a.k.a. the reporter that the Bing chatbot fell in love with). He tested GPT-4, the newest AI model from OpenAI, the makers of ChatGPT, and his concluding thought is something we should all keep in mind when rolling out new technology: “The worst AI risks are the ones we can’t anticipate. And the more time I spend with AI systems like GPT-4, the less I’m convinced that we know half of what’s coming.”
Tristan Harris of the Center for Humane Technology tweeted this thread about the new Snapchat AI, which includes the conversation where My AI gives a 12-year-old advice on losing her virginity to a 31-year-old man. I think he makes a great point in the thread: “This isn’t about one bad tech company. This is the cost of the ‘Race to Recklessness.’ Every tech platform is rapidly being forced to integrate AI agents—Bing, Office, Snap, Slack—because if they don’t, they lose to competitors. But our children cannot be collateral damage.”
TL;DR
Too long; didn't read. It shouldn't be a full-time job to keep up on industry news, so here is a mercifully quick summary of some other notable developments:
Utah is on the verge of passing new laws that will affect the way users under 18 can interact with social media platforms. As Casey Newton points out, the intentions behind the laws are understandable, but they might actually do more harm than good, because “bills like these create privacy risks for all of us. The way these platforms are going to verify that minors aren’t using them without permission is to ask all of us to submit to facial scanning or worse. The Utah bill, for example, would require platforms to keep records of users’ identities and physical mailing addresses.”
Facebook is rolling out a new subscription service for $12 USD a month. When you sign up for the subscription, you get your account verified by supplying government ID, and then you get access to customer support that helps you recover your account if it gets hacked. That’s right: Facebook is asking you to pay a subscription so you can get access to basic, human customer support. Many critics have drawn a comparison to “protection rackets,” where businesses were forced to pay a fee to the mob for “protection.” Call me crazy, but I think basic security should be a baseline standard—especially when you’ve been offering people a “free” app for nearly two decades.
And lastly
Here are a few more pieces of original writing from me and my team—just in case you're keen for more:
Parental control apps can be helpful tools to enforce basic screen time limits and filter inappropriate content—but they have their limitations. My team put together this handy guide for anyone who has questions.
You’ve no doubt been bombarded by news about AI recently, and you might have further questions about how safe it is for kids. Here’s a parent’s guide on ChatGPT to help you cut through the noise.