Just after Valentine’s Day last year, New Yorkers were treated to a very unusual digital billboard ad draping one side of the Port Authority in Manhattan. It read: “Dear person who played ‘Sorry’ 42 times on Valentine’s Day, what did you do?”
The ad, which referenced the Justin Bieber hit “Sorry,” was run by Spotify, the popular streaming music app that boasts more than 70 million subscribers worldwide. It was a hit. Spotify went on to post versions of the billboard in leading markets around the world. In a further sign of the campaign’s influence, Netflix picked up the theme last December with its own tongue-in-cheek tweet about the now infamous movie “A Christmas Prince.” That tweet was liked nearly 454,000 times.
These ads went viral because they picked up on a humorous, if uncomfortable, truth about the way the intimate details of our daily lives are being recorded by our favorite online services. Unfortunately, we are all so accustomed to this kind of “surveillance capitalism” that this digital schadenfreude is more likely to amuse than alarm us.
These ads show us how much our favorite online services know about us. But for most people, the broader implications of how this information can be exploited for commercial and political gain are only now coming to the surface. Music apps like Spotify and Pandora, over-the-top video services like Netflix and Hulu, and of course the biggest internet platforms, including Facebook, Twitter and Google, regularly collect tremendous amounts of individual behavioral data and use it to maintain granular profiles of each and every one of their users. This practice is core to the business. These profiles are analyzed over time and sold to advertisers, who use them to target commercial messages. This is the basis of a multibillion-dollar marketplace for digital advertising.
As the industry collectively gathers more and more data about us, it becomes better able to infer our real-world interests, preferences, likes, dislikes, behaviors and beliefs. These companies know mundane things about our lives: what work we do, what kind of cars we drive and where we live. But they also know the tiniest details of our personal lives, details reflected in the media we consume and the messages we write to our loved ones. And in the midst of this digital ecosystem are purveyors of disinformation who systematically attempt to take advantage of the targeting protocols and algorithms inherent to the industry.
When Spotify and Netflix joke about how they know so much about us, it’s entertaining. But there is a much darker side to the intersection of digital data-mining and micro-targeted advertising. We saw it clearly in 2016 when Russian agents used ad-driven propaganda campaigns on social media and other internet platforms to interfere with the U.S. presidential election. That moment has pulled back the curtain on what we have long known but never fully acknowledged: In a digital marketplace where all the services are free, user data sold to advertisers is the product.
In other words, the internet revolution that has transformed our society in so many positive ways has also introduced a set of public harms that undermine personal privacy and threaten the integrity of democracy. The giants of the industry — unwitting and unwilling as they may be — are the economic handmaidens of these negative trends.
None of this should come as a surprise. As we discuss in a recent report published by the Washington, D.C., think tank New America and Harvard’s Shorenstein Center on Media, Politics and Public Policy, the industry has, over the past two decades, built incredibly sophisticated technologies that help advertisers target messages at very specific groups of users. When advertisers successfully promote content, they hold users’ attention and keep them using the service. That means the platform can collect more data and serve more ads, which makes more money for everyone.
To the platforms, there is no real difference between advertisers, whether they are selling shoes, news or political views. And of course, sensationalism and incendiary, divisive content are popular and lead to large waves of user engagement. So the economics of the business push users deeper and deeper into echo chambers that separate us from a common narrative of facts and reporting about the world and make us vulnerable to fake news.
Disinformation agents operate by using micro-targeted ads to build and expand like-minded audiences, feeding them a steady stream of content that exploits prejudices, stokes fears and incites social division. The Russian effort in the 2016 election was not an isolated incident; similar campaigns by Russian agents remain in full force two years later. Among their alleged exploits in just the past month are the promotion of the egregious falsehood that NATO troops sprayed poison on Poland and the claim that a member of Ukraine’s Dnipropetrovsk police department proudly shared a photo of himself in a Nazi uniform.
But misleading information can be as powerful as outright falsehoods. After much digital forensic analysis, experts have alleged that Russian disinformation agents were partly behind the bewildering #ReleaseTheMemo controversy. Similarly, agents of the Kremlin were allegedly responsible for promoting reports that the same Russian forces that violently occupied Crimea were now generously training local children to defuse and disarm landmines.
These incidents are all enabled and abetted by the collection of behavioral data and the targeting of internet-based content and ads. And perhaps most critically, a great deal of this disinformation is not perpetrated by foreign agents. It is the routine fare of viral online rumors and the polarizing, intentional distortion of the daily news.
From full-blown conspiracy theories like “pizzagate” to doctored statistics about race and crime in America, digital advertising and filter bubbles play a key role. Under mounting pressure from the public, the United States Congress and regulators around the globe, the industry has conceded that it will need to increase transparency in advertising so that users can see which person or firm is responsible for placing an ad on platforms like Facebook.
But while transparency is a remedy, it is not an antidote. We anticipate that the agents of disinformation, an increasingly diverse group, will inevitably adjust their tactics. Until and unless internet platforms act to separate their economic interests from those of the purveyors of disinformation, we will most likely see little progress.
With this context in mind, that Spotify ad becomes less amusing and more poignant. Now that we are aware of the dangers lurking in the digital systems that inform and entertain us, we must find the political will to put the public interest ahead of profits.
Ben Scott is senior advisor at New America and served as Policy Advisor for Innovation in the U.S. State Department during the Obama administration.