A few years back, I had my eyes on a pair of new Bose headphones. It turns out that the headphones had their eyes on me too. I started shopping around on various websites, looking at prices but not making a purchase. Pretty soon, I noticed ads for the headphones everywhere else I went online.
Most people have experienced a product that seems to follow them around the internet. It’s not magic; it’s an intentional marketing move. I had already shown interest in those headphones, and Bose was simply pushing me to complete the purchase—to close the sale. This is a subtle form of manipulation aimed at securing what digital marketers call a “conversion,” and it relies on a complex set of instructions called an algorithm.
To make the most of the ads they put in front of you, companies need to know a lot about you. What are your interests? How old are you? Where do you live? Do you have kids? Do you play sports? What kinds of music do you enjoy? And companies need to get that information somewhere. Virtually every brand tracks what you view on their company website, of course, but many of them also advertise with Google and Facebook—platforms that give them the ability to target you because of the reams of data about you that they have available. As you search for things online, visit websites in your Chrome browser, or engage with anything on Facebook or Instagram, Google and Meta are collecting information about who you’re connected to, how you spend your time and money, and what brands you might be aligned with. And it’s an algorithm that helps them do so.
Algorithms are not inherently evil; they are just a set of rules that tell a computer how to perform a particular task. They have many different applications, but a lot of social media and e-commerce platforms use algorithms to analyze your data and “personalize” what you see.
Algorithms on social media, for example, track everything you do on the platform and then serve up advertisements from third parties that are most likely to be of interest to you. Here’s a simple way to think about them: running an algorithm is like the process of picking a movie to watch with someone. When my wife and I went on our first date, I didn’t know her that well, so I picked a movie that was “safe.” I’m sure I settled on a nice romantic comedy. In the years since, as we continued to date and eventually married, I’ve been able to collect many more data points on my wife. Now, if we decide to watch a movie, I’m in a much better position to choose something that I know she’ll like.
For our movie nights, perhaps I’ll first look at all the movies playing in the theatre. Then I might consider having a night in and expanding my selection to include movies on demand. Then I can bring other data points into my decision-making process: Would she like a drama? Action? Horror? What happened this week? Is she more likely to want an intense movie, or will she want to unwind and have some laughs? What are the last movies we watched? Has she told me there’s a certain movie she wants to see? Have any friends recommended a movie to me or to her? This is how social media populates your feed. Over time, they gather more and more information about you to better understand you. And of course, the latest information is important: what have you been up to recently? That’s why these tech platforms will go to extreme lengths to keep you engaged. They want to learn as much as possible so they can really personalize your feed.
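If you’re curious what that kind of data-point juggling looks like under the hood, here’s a toy sketch in Python. Every signal, weight, and movie title in it is made up purely for illustration; real recommendation systems are vastly more complicated, but the core idea is the same: combine everything the platform knows about you into a score, then show you the highest-scoring items first.

```python
# Toy illustration of a recommendation algorithm.
# All signals, weights, and titles are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Movie:
    title: str
    genre: str

# "Data points" the platform has collected: past viewing, a recent mood
# signal, and recommendations from friends.
watch_history = {"romantic comedy": 5, "drama": 2, "horror": 0}
recent_mood = "unwind"  # inferred, say, from a stressful week
friend_recommendations = {"The Grand Budapest Hotel"}

def score(movie: Movie) -> float:
    """Combine the collected data points into a single relevance score."""
    s = float(watch_history.get(movie.genre, 0))   # past interest in this genre
    if recent_mood == "unwind" and movie.genre == "romantic comedy":
        s += 2.0                                   # recency: what matters *this week*
    if movie.title in friend_recommendations:
        s += 3.0                                   # social signal
    return s

catalogue = [
    Movie("The Grand Budapest Hotel", "comedy"),
    Movie("Heat", "drama"),
    Movie("When Harry Met Sally", "romantic comedy"),
]

# The "feed": the catalogue re-ordered by predicted interest, highest first.
for movie in sorted(catalogue, key=score, reverse=True):
    print(f"{score(movie):4.1f}  {movie.title}")
```

That’s the whole trick, scaled up to billions of data points: the more a platform learns about you, the better its scores get, and the harder its picks are to resist.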
Now, personalization isn’t automatically a bad thing either. But we have to be careful with the attention-grabbing nature of the platforms we use, be mindful of the dopamine-driven reward cycles that hook us—and be wary of the advertisements that we see across our news feeds and websites we visit. These are all subtle forms of manipulation: they are steering us to the desired outcome of the platform or the third party.
There is a lot of confusion about algorithms, what they are, and how they shape our online experiences. They work behind the scenes, and tech companies are notoriously secretive about them. While I don’t think algorithms are inherently evil, I do know that on most platforms they are used to grab my attention and keep me there for as long as possible. After all, the longer you stick around, the more ads they can put in front of you.
I am, however, careful when it comes to my kids. Some algorithms can lead you down deep rabbit holes. As an adult, I have some agency and critical thinking skills—hopefully enough to keep my head about me when an algorithm tries to steer me in a particularly unhealthy direction. Children, on the other hand, may not have the same agency, so it is important that we parents monitor their use of algorithmically driven platforms, including YouTube and most social sites. We need to be aware that the platforms are collecting data about our children and their usage patterns and using it to keep them engaged. Good or bad, algorithms are highly effective—something most of us parents can relate to when we tell our children it is time to turn off YouTube.
A deeper dive
You can’t talk about “The Algorithm” without mentioning one of the most famous of them all: much ink has been spilled on Facebook’s News Feed algorithm, and a lot of critics have argued that it prioritizes divisive content. In fact, leaked internal memos from Facebook mention that “misinformation, toxicity, and violent content are inordinately prevalent among reshares.”
And Facebook’s algorithm isn’t the only one to come under scrutiny. YouTube’s recommendation algorithm is responsible for nearly 70% of all time spent on the site, and critics have alleged that it serves up increasingly extreme content to keep users watching.
TL;DR
Too long; didn't read. It shouldn't be a full-time job to keep up on industry news, so here is a mercifully quick summary of some other notable developments:
If you’ve ever read an alarming article alleging that screen time is worse for kids than heroin, there’s a good chance it was written by Jean M. Twenge, a professor of psychology at San Diego State University. I’ve written before about how I have issues with the way she interprets data, and I believe she sensationalizes the screen time debate in a way that is profoundly unhelpful for parents. In her most recent opinion piece, she’s adding some nuance to her argument and claiming that it’s social media in particular, not screen time in general, that is worse for teen girls’ mental health than “binge drinking, early sexual activity, hard drug use, being suspended from school, marijuana use, lack of exercise, being stopped by police and carrying a weapon.” She also laments how the screen time debate has been oversimplified because not all screen time is created equal. Ironic, since she’s the one who oversimplified it in the first place.
Video sharing platform TikTok is hugely popular among tweens and teens—and it has recently banned misgendering, deadnaming, and the promotion of ‘conversion therapy.’ The company has been vocal about trying to create a safer space for LGBTQ users, and they have also recently added a feature allowing users to add their preferred pronouns to their profiles.
And lastly
Here are a few more pieces of original writing from me and my team—just in case you're keen for more:
Speaking of TikTok, my team has put together a new, updated parent’s guide on the subject. If your kids are using the platform, there are some ways to make it safer, and we share seven easy tips to do so.
We’re pretty passionate about creating safe technology for kids at Kinzoo, and my team recently created some resources to celebrate Safer Internet Day. Check out this handy list of five easy things you can do to help protect your kids online.
Okay, that's it from me until next time. If you enjoyed this newsletter, and know of another parent who would as well, please feel free to forward it along.