Like most parents, I think it’s important to teach my kids the difference between right and wrong. Every day, in large and small ways, I attempt to model this, even when it’s hard or comes at personal expense. These are important lessons because one day they’ll be in a situation where doing the wrong thing is appealing. They’ll face pressure from a friend, a boss or maybe even strangers on the internet to do something they know isn’t right, and I want them to have the strength and confidence to stand up for what they believe in. Of course, that’s easier said than done. Just ask the executives at any Big Tech company.
Key decision-makers at these businesses seem to operate by a different set of rules. Time and again, we see these companies identifying significant issues within their platforms—from the spread of misinformation to addictive algorithms to sextortion—but rarely taking real action to resolve them. Instead, they double down on public relations, spinning the narrative to downplay risks, even as internal research clearly warns of harm. Whether it's TikTok, Snapchat or Instagram, this pattern of knowing but choosing to ignore persists, putting the responsibility on users instead of prioritizing safe design choices.
If you’ve ever opened TikTok and watched video after video, you’ve seen how it can keep you scrolling, hooked in an endless loop of entertainment, education—and sometimes, pure nonsense. We’ve known for a long time that TikTok’s algorithm is scary-good, but executives at the company had hard data on just how compelling it is. They knew how quickly the app can become addictive, especially for younger users. And thanks to some accidental document dumps, we’re learning just how much TikTok knew—and chose to ignore—about the dangers of its product. Two lawsuits, one in Kentucky and another in South Carolina, included improperly redacted documents TikTok likely never intended to go public. The revelations are, frankly, troubling.
From the Kentucky case, we learned that, according to TikTok’s internal research, a user can become addicted to the platform after watching just 260 videos—achievable in about 35 minutes. TikTok’s leadership knew about this pattern of addiction, as well as the consequences: a decrease in analytical thinking, memory formation, empathy and conversational depth. It’s particularly concerning for younger users, who are more susceptible to digital influence. And, we’re not just talking about kids zoning out here—we’re talking about rewiring their ability to think deeply.
According to internal documents, one exec voiced concerns, saying, “The reason kids watch TikTok is because the algo[rithm] is really good. But I think we need to be cognizant of what it might mean for other opportunities. And when I say other opportunities, I literally mean sleep, and eating, and moving around the room, and looking at somebody in the eyes.” What this tells me is that the company was aware of the stakes. They just chose to put their product in kids’ hands anyway.
Their attempts to combat the dangers were also shockingly hollow. They ran an internal experiment on their screen-time reminders feature. The result? A laughable reduction from 108.5 minutes to 107 minutes a day. Yet TikTok still highlighted the feature in its PR, downplaying the risks of the platform while hyping up the effectiveness of its mitigation tools.
In the South Carolina lawsuit disclosures, we learned how TikTok’s age recommendations are dangerously out of sync with the content on its platform. Officially, TikTok rates the app as appropriate for children 12 and older. But in private communications, Apple warned that the content is mature enough that it would better fit users 17 and up. Apple’s concerns echo those of parents everywhere: how can TikTok claim to be a safe platform for kids when its content regularly includes profanity and promotes unhealthy behavior?
Similarly, Instagram has grappled with a serious sextortion problem for years, and it’s only grown more pervasive as the platform has gained users, especially younger ones. Teens and pre-teens are especially vulnerable to scams that lure them into sharing explicit images, which bad actors then use to extort them. These incidents are not isolated; they’re widespread and have been growing in frequency. Instagram has been well aware of this trend, with internal data showing how easily young users fall prey to such schemes, yet the platform has only recently rolled out tools aimed at addressing it.
This delay in action mirrors the revelations brought to light by Frances Haugen’s whistleblower testimony. Haugen exposed how the company’s algorithms amplified harmful content for teens, contributing to a rise in mental health issues. Instagram, it turns out, knew about the adverse effects its platform was having on young people, yet waited to act until the public outcry became unavoidable. The new tools and safeguards only emerged after the damage was done, painting a familiar picture: a company choosing to manage public perception with reactive measures, rather than proactively addressing the safety concerns it knew were integral to its design.
Here’s the reality: if these platforms wanted to protect young users, they would invest heavily in content moderation, add real limits to screen time and change their algorithms to focus on genuinely age-appropriate content. But this would cut down on screen time and user engagement, reducing ad revenue. It’s a choice between profits and protection, and Big Tech’s priorities are apparent.
I know the costs of such choices because I weigh them every day at my own company. I knew that when I set out to create apps for children, I’d need to make decisions that put safety first. And doing so has a very real impact on our growth and revenue. But for me, it’s not a difficult choice to make. I have two children. Even though I’m a CEO with responsibilities to my board, my employees and my investors, I never take off my parent’s hat. I never stop imagining my app in the hands of my son and daughter. And I make decisions based on what I think is right, not what’s going to help us hack our way to viral growth.
There will always be compelling reasons for tech companies to cut corners. There are targets to hit and bonuses to earn. But as someone who faces those tradeoffs every day, I can’t fathom knowing what these executives know and doing what they’ve done. There’s too much at stake. And what kind of message would that send my kids?
A deeper dive
Here are a few helpful resources in case you want to really dig into today's topic:
We don’t have conclusive evidence that social media is to blame for the decline in youth mental health, but time and again, we see signs that these platforms just hit differently for younger users. According to an internal company report from 2019, “the younger the user, the better the performance” for most engagement metrics. Executives knew their product was compelling in a way that most users couldn’t control—and that pattern was even stronger for children. But, instead of seeing that as a red flag, they saw it as “better performance.”
Being a teen in the digital age is hard enough. But it’s even harder when you’re getting blackmailed after someone tricks you into sharing a private photo. It’s a nightmare, and unfortunately, it’s becoming more common. The FBI reports that financially motivated sextortion incidents involving minors are rising sharply, with boys ages 14 to 17 being targeted the most. Many of these scams are run by bad actors outside the US—and apps like Instagram are making it easier for them.
TL;DR
Too long; didn't read. It shouldn't be a full-time job to keep up on industry news, so here is a mercifully quick summary of some other notable developments:
A 14-year-old boy allegedly became obsessed with an AI chatbot shortly before his suicide. The companion he created through the company Character.AI had apparently become his closest friend in the days leading up to his death. He knew it wasn’t a real person, but developed an intense emotional attachment anyway. That’s not surprising, given that many of these AI chatbot companies sell their product as a cure for loneliness. But, the technology isn’t human—and we’re just beginning to see how it can impact younger users. Clearly, more thought and better safeguards are needed when children are involved.
For the children and teens who use iMessage to stay in touch with friends and family, there are only a handful of safety measures in place to protect them from predators. If they receive a message that’s sexually explicit, it can be automatically blurred, and a pop-up appears telling them how to get help from a trusted adult. But, Apple is about to take these measures a step further by letting children report these images. Once a report is received, the company will review the message and alert law enforcement if necessary. The feature is only available in Australia for now, and it works only in iMessage, not in text messages from Android users. And, the reporting feature is limited to sexually explicit content, not bullying or violence. Experts say it’s an important step forward, but something that should have been done years ago.
And lastly
Here are a few more pieces of original writing from me and my team—just in case you're keen for more:
If your kids are enthusiastic about gaming, they’re probably trying out new games from time to time. If the classic StarCraft is on the list, check out this piece from my team for everything you need to know.
And, if they’re asking to play Animal Crossing, this parent’s guide has all the pros and cons you need to consider.