I think that technology is amazing. I am blown away every day by apps and platforms that find innovative ways to connect us, spur our creativity and help us cultivate new interests. And, I’m eager to share tech with my kids, because I think it holds worlds of potential for them. That said, not all tech is created equal. Some platforms are safe, well-designed and beneficial, while others are dangerous and detrimental to our wellbeing. The tricky part is that it’s not always easy to tell which is which—especially for the youngest users, our kids.
In my family, we’ve found it helpful to distinguish between apps that offer positive screen time and those that are designed to manipulate us into scrolling endlessly, sharing too much information or spending money unnecessarily. These apps are trying to keep us “screen captured,” meaning there are mechanisms at play that are trying to hold our attention hostage. I know that sounds grim, but there is good news: there are a few fairly common ways that dubious technology companies do this, and once you learn to spot them, it’s easy to tell the good from the not-so-good. So, here are the six most common ways that technology tries to keep us screen captured. Once we become aware of how these mechanisms work in the background to keep us online, we’ll be better able to pull ourselves out of those cycles when they don’t serve our intentions.
1) Rabbit Holes
The most prevalent form of screen capture in my household comes from YouTube. I (and I’m sure others) call this the “YouTube rabbit hole.” That’s because YouTube continuously serves up recommended videos on auto-play, with the goal of keeping users on the platform long enough to show an ad at the beginning or middle of the next video. Subscription on-demand video services like Netflix also have auto-play features, and they’ll play episode after episode if you let them. Netflix may not feature ads, but the design choice is just as intentional: it keeps the platform “sticky.”
In our house, we’ve established channels and playlists that are safe and appropriate for my daughter, and we try to keep her on them as much as possible. We talk to her about what happens when she clicks a recommended video in the “Up Next” menu on YouTube, and how far that can take her from what she intended to watch. We also have her turn off the auto-play features so that she has to consciously choose to click on the next video. Recognizing that these features on YouTube and Netflix exist to keep us (and our kids) on the platform longer, and then turning them off, is a good first step in shifting the balance away from manipulative screen time.
2) Algorithms
Algorithms themselves aren’t the enemy, but many are definitely designed to attract and retain our attention—and we have to be aware of this fact when our kids are exposed to them. YouTube deserves special attention here because it’s the most popular platform among kids aged 6 to 12. While it doesn’t have an algorithmically curated “feed” per se, it still operates with algorithms designed to pull us in, and the recommended videos are purposely selected to draw kids into the proverbial rabbit hole.
The algorithm works by analyzing your viewing history and suggesting similar content. The goal is to keep you screen captured and show you as many ads as possible, but the danger of the manipulation is not just in the advertisements: one study found that toddlers have a 45% chance of clicking through to inappropriate content within just ten recommended videos. This is especially dangerous when multiple family members are using the same account. When the algorithm can’t distinguish between users, recommendations get jumbled together, and your toddler might see suggestions based on your teenager’s recent views.
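To make the shared-account problem concrete, here is a toy sketch of a content-based recommender. This is not YouTube's actual system—just a minimal illustration, with made-up video titles and tags, of how an algorithm that scores suggestions against one blended watch history ends up mixing a toddler's and a teenager's interests:

```python
# Toy sketch (not any real platform's system): a content-based recommender
# that scores catalog videos by tag overlap with everything watched on ONE account.

def recommend(watch_history, catalog, top_n=3):
    """Rank unwatched catalog videos by shared tags with the account's history."""
    seen_tags = set()
    for video in watch_history:
        seen_tags.update(video["tags"])
    watched_titles = {v["title"] for v in watch_history}
    candidates = [v for v in catalog if v["title"] not in watched_titles]
    # Score = number of overlapping tags. Because the account is shared,
    # every family member's tags feed into the same score.
    return [v["title"] for v in sorted(
        candidates,
        key=lambda v: len(seen_tags & set(v["tags"])),
        reverse=True,
    )[:top_n]]

# One account, two viewers: a toddler and a teenager.
history = [
    {"title": "Nursery Rhymes", "tags": ["kids", "songs"]},
    {"title": "Horror Trailer", "tags": ["horror", "teen"]},
]
catalog = [
    {"title": "ABC Songs", "tags": ["kids", "songs"]},
    {"title": "Slasher Reaction", "tags": ["horror", "teen"]},
    {"title": "Cooking Basics", "tags": ["food"]},
]
print(recommend(history, catalog))
```

Because the algorithm only sees one blended history, "Slasher Reaction" ranks just as highly as "ABC Songs"—and can land in the toddler's "Up Next" queue.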
3) Social validation and comparison
“Likes” are ubiquitous across platforms, and they lead to screen capture by giving us a sense of social validation. That’s because “likes” and follower counts (among other metrics) are known to trigger dopamine in our brain the same way gambling does. We anticipate a reward when we post on a social channel, and it’s no accident that tech companies exploit this human drive to feel accepted. It’s an intentional strategy to hook you in, and the platforms reap the benefit of increased daily and monthly active users who become obsessed with those metrics.
Companies have leveraged the dopamine reward cycle to keep us highly engaged with their platforms. The same will hold true for our children if they are exposed to the same features. The jury may be out on exactly how social validation affects the psyche, and on how much it is tied to the recent spikes in anxiety and depressive symptoms, but it is a risk I want to avoid for my children—and it’s why I’ve steered them away from platforms that feature these mechanics.
4) Kidfluencers
One of YouTube’s top-grossing channels stars a young boy named Ryan. He started off doing toy reviews and now is at the center of a multi-million dollar media empire known as Ryan’s World. Ryan’s success at marketing to kids on YouTube is a startling example of manipulation—this time specifically directed at children. “Unboxing videos” are common on YouTube and are highly appealing to children who, ironically, are not supposed to be on the platform in the first place.
The problem with this type of content is that it isn’t always clear that it is, for all intents and purposes, an advertisement. I think most adults are aware that when an influential figure promotes a product or brand, they are very likely being compensated in some way. I don’t think kids can make that same distinction. Fortunately, my daughter hasn’t shown much interest in these types of videos. If she were more interested, I’d try my best to have her watch other things, but I would definitely make a point of explaining to her how influencer marketing works.
5) Arbitrary goals and metrics
Many platforms use arbitrary goals and metrics to screen capture. Snapstreaks are a good example of how an app uses goals and metrics to drive behavior. In the case of Snapchat, they want you (and at least one of your friends) to engage with the platform every single day. For what? To keep an arbitrary streak alive.
This is where goals and metrics can hook us in. Sometimes this can be positive; for instance, we may want to keep up a streak of going to the gym. In the case of the platforms, they want you invested in the streak so you will continue to count as a daily or monthly active user. The Snapchat score—the counter that accumulates with every post shared or received—is another example. Others include arbitrary follower counts, numbers of “likes,” features that unlock when a certain number of followers is reached, and the number of “friends” that we have on a platform. The issue with these metrics is that we (and our kids) will sometimes go to unhealthy lengths to achieve these goals or metrics.
6) Misleading in-app purchases
Young users are particularly vulnerable to manipulation, and privacy and children’s organizations are taking notice. In 2019, a coalition of these organizations filed a Federal Trade Commission (FTC) complaint against Facebook for its structure for in-app purchases in games. Children could make these purchases without parental permission, the group argued, and young users may not be aware that some items cost real money.
Alarmingly, young users who make a lot of in-app purchases on their parents’ accounts are sometimes referred to as “whales”—a term inspired by gambling culture, where it refers to the high rollers who essentially fund casinos with their losses. Social media and app companies use the term to describe children and teenagers who make lots of app-related purchases. According to the New York Times, “many mobile games have features that lure children into making in-app purchases using their parents’ credit cards while playing; it has become so prevalent that a new term was coined, ‘bait apps,’ which have been featured in class-action lawsuits.”
Bait apps make it extremely easy to make additional purchases by linking add-ons to existing accounts and associated payment engines. They make it a breeze for children to get on a platform and experience it, but then tease users with additional goodies that they can’t have for free. They turn screen-captured kids into customers. This, of course, is a big deal. Back in 2014, the Federal Trade Commission forced Apple to issue more than $32.5 million in refunds to customers whose children made purchases through the App Store without parental consent. Also in 2014, Google paid out at least $19 million for the same reason through the Play Store.
How to avoid screen capture
The last thing I want to do is encourage technophobia. When used responsibly, technology can open up minds and introduce adventure. There is a big difference, though, between our kids enjoying screen time, and our kids becoming screen captured. When in doubt, check for these common mechanisms—and when necessary and appropriate, help kids understand the tech they’re using.
A deeper dive
Here are a few helpful resources in case you want to really dig into today's topic:
Jaron Lanier is a pioneer in virtual reality and an outspoken critic of all things social media. He describes how, in our time spent online, we’re experiencing a mild version of behavioral modification all the time. In an interview with Channel 4 News, he described how for kids in particular, the time they spend “being observed by algorithms and tweaked by them is vastly worse than screen time itself.”
The New York Times described kidfluencer Ryan Kaji as “the boy king of YouTube.” If you’re curious to see how his family has turned his unboxing videos into a content empire, check out this profile here.
TL;DR
Too long; didn't read. It shouldn't be a full-time job to keep up on industry news, so here is a mercifully quick summary of some other notable developments:
Are you familiar with “nip nops?” How about “le dollar bean?” If you’re scratching your head, you’re not alone. These are “algo-speak,” or alternative terms for words that often get flagged by algorithms as inappropriate. Content creators use words like “nip nops” because even mentioning the word “nipple” can get their content downgraded. Similarly, “le dollar bean” is an alternate word for “lesbian.” Check out this fascinating article here to learn how algorithms are changing language on the internet in real-time.
TikTok is testing a new “dislike” button on comments, but don’t worry, it’s not as mean as it sounds. This button is only visible to the person disliking the comment and is intended to help TikTok users flag and remove offensive comments. Basically, it’s another reporting tool—not a mechanism for cyber-bullying.
And lastly
Here are a few more pieces of original writing from me and my team—just in case you're keen for more:
At Kinzoo, we spend a lot of time thinking about every new feature we build into our apps. We all work hard to keep manipulative design out of our platforms, and we’ve even written about it on our blog. Check out the insights here.
I know I spilled a lot of ink in this newsletter talking about the negative side of technology, but rest assured, there is plenty of positive tech out there. Here is an article that outlines the activities that parents can prioritize in order for kids to get the most out of their screen time.
Okay, that's it from me until next time. If you enjoyed this newsletter, and know of another parent who would as well, please feel free to forward it along.