The Targeted Marketing of Ideas: How Constant Tracking Impedes Free Thought and Creativity

It’s not just the government that is watching us.

by Hannah Bloch-Wehba on April 9th, 2015

Since Edward Snowden revealed the scope of national security surveillance in June 2013, critics of widespread government surveillance have focused on the most concrete problems with those programs. The often clinical, detached manner of distilling the many problems with surveillance into something digestible and actionable — a question of government accountability, of transparency, of checks and balances — doesn’t quite capture some of the more relatable, but ephemeral, concerns about surveillance: the creepiness of being watched, the eerie resemblance to Big Brother. These concerns aren’t concrete and particularized enough to support a legal challenge to surveillance, anyway. So the threat ubiquitous surveillance poses to freedom of thought, infringing on “intellectual privacy,” as Neil Richards puts it, has gone relatively unexplored while litigants fight out the constitutionality of mass surveillance in court.

But what about the constant surveillance by the very companies that purport to bring the whole universe to users’ doorsteps? As some have pointed out, the government is not alone in being capable of large-scale, long-term monitoring; internet companies have long depended on consumer tracking to support targeted advertising. Understandably, the troves of data stored by companies like Yahoo! and Google are appealing targets for intelligence services seeking to expand their capabilities. But when we worry about being watched, it’s not just the government that is watching us: tracking both online and offline behavior is par for the course for businesses, online and brick-and-mortar alike.

The not-quite-everything store

Surveillance cameras on the side of a marble wall.

Photo CC-BY Jonathan McIntosh, filtered.

Even for the unfortunate person who has “nothing to hide,” the stultifying effect that massive surveillance is having on culture and public discourse is concerning. Writers report that they are self-censoring their work on issues as disparate as the war on drugs, mass incarceration, and pornography because of the Snowden revelations. Journalists are adopting time-intensive methods of protecting their communications and work product that mean they have less time to pursue stories, and their sources are clamming up.

But it’s not just government surveillance that worries journalists and writers: many report that they have changed the ways they communicate and store documents in order to safeguard their own privacy and that of their sources. An overwhelming majority of investigative journalists have little to no confidence that their ISPs can safeguard their information. Government surveillance is a threat, but the privacy-infringing practices of third-party online services make that threat infinitely worse. And being watched can be chilling even if the government is not the watcher. If the knowledge that someone is monitoring your behavior might deter you from researching Islamic finance, PETA membership, or types of ammo, it’s easy to imagine that you might also be dissuaded from shopping for a specific type of strap-on harness or bong if you have to worry about those items following you around the internet forever after.

Mass purveyors of goods and ideas already employ policies that ensure that a range of “inappropriate” content doesn’t reach a mass audience. Amazon may summarily remove reviews or self-published books that it decides are “offensive,” offering the less-than-helpful clarification: “What we deem offensive is probably about what you would expect.”

When these policies are rendered into algorithmic form, the biases become even clearer. In 2009, a change to Amazon’s algorithm removed the sales rankings for over 50,000 books because they were flagged as containing adult content. Classic works by Jeanette Winterson and Gore Vidal had their ranks removed, preventing them from appearing in Amazon’s recommendations system, before Amazon corrected the “cataloging” error. The effort to create a more “family-friendly” Amazon didn’t accommodate the preferences and recommendations of people who had already bought these “adult” books, or who clearly preferred the types of products Amazon wanted to banish to the deepest recesses of its warehouses.
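The mechanics of such an error are mundane. As a minimal sketch (in no way Amazon’s actual code; the titles and field names below are invented for illustration), a single mis-applied flag is enough to exclude a book from every ranked list built downstream:

```python
# A schematic of how one metadata flag can make titles vanish from
# discovery: anything marked "adult" is dropped before ranking, so it
# never appears in sales ranks or the recommendations built on them.
catalog = [
    {"title": "Acclaimed Literary Classic", "sales": 950, "adult": True},   # mis-flagged
    {"title": "Generic Thriller #7",        "sales": 400, "adult": False},
]

def sales_ranked(books):
    visible = [b for b in books if not b["adult"]]  # flagged titles silently drop out
    return sorted(visible, key=lambda b: b["sales"], reverse=True)

for rank, book in enumerate(sales_ranked(catalog), start=1):
    print(rank, book["title"])  # the mis-flagged bestseller never prints
```

No human reviews the basement; the filter runs before anyone can see what it hides.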

The government has a constitutional obligation not to meddle with the marketplace of ideas; Amazon has a commercial obligation to provide that marketplace. But even when personalized digital marketing doesn’t overtly discriminate, it undermines the diversity of ideas and products available, even as it serves commercial goals well. Take one extraordinarily visible product of online behavioral tracking: Amazon’s recommendation system. Amazon’s success has been attributed, at least in part, to its recommendation algorithm: the system that suggests that you buy nails when you’re searching for a hammer. Item-to-item collaborative filtering recommends products that are similar to items a customer has already browsed or bought, where similarity is inferred from what other customers tend to purchase together. The effects of Amazon’s consumer tracking system are obvious: the store “radically changes based on customer interests, showing programming titles to a software engineer and baby toys to a new mother.”
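Amazon has not published its production system, but the basic shape of item-to-item collaborative filtering is simple enough to sketch. In this toy Python version, with every customer and item invented, two items are “similar” when the same customers buy both, and a user is shown whatever is most similar to their own history:

```python
from collections import defaultdict
from math import sqrt

# Toy purchase histories; every name and item is invented.
purchases = {
    "alice": {"hammer", "nails", "sandpaper"},
    "bob":   {"hammer", "nails"},
    "carol": {"baby_toy", "pacifier"},
}

# Invert to item -> set of buyers.
buyers = defaultdict(set)
for user, items in purchases.items():
    for item in items:
        buyers[item].add(user)

def similarity(a, b):
    """Cosine similarity between two items' buyer sets."""
    shared = len(buyers[a] & buyers[b])
    return shared / sqrt(len(buyers[a]) * len(buyers[b]))

def recommend(user, k=3):
    """Rank unowned items by total similarity to the user's purchases."""
    owned = purchases[user]
    scores = defaultdict(float)
    for item in owned:
        for other in buyers:
            if other not in owned:
                scores[other] += similarity(item, other)
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [item for item in ranked if scores[item] > 0][:k]

# Bob's history looks like Alice's, so he is shown Alice's other
# purchase -- and nothing from Carol's corner of the store.
print(recommend("bob"))  # ['sandpaper']
```

The echo chamber falls straight out of the arithmetic: an item that none of your look-alike customers have bought scores zero and is never surfaced.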

Cupcakes adorned with baby-themed decorations, like pacifiers and bows.

Photo CC-BY Clever Cupcakes, filtered.

The fact that a woman is pregnant is considered one of the most valuable pieces of information for marketers to get their hands on; “new parents are a retailer’s holy grail.” A few years ago, stories about Target’s highly effective ways of identifying pregnant women in order to send them special advertising prompted a flurry of reporting on the ways that retailers identify and advertise products to pregnant women, including monitoring their acquisition of products like unscented lotion. The Target statistician who came up with the pregnancy-prediction model, a model based on analysis of the past purchasing behavior of pregnant women, acknowledged that even though the analysis complied with consumer privacy laws, it may still make consumers “queasy” to know that their information is being sifted and vetted so thoroughly. In one case, Target sent a mailer to a teenager’s home, prompting an angry visit from the teen’s dad, who was furious that Target was encouraging his daughter to have a kid, and then an apologetic call after he realized she was already pregnant.
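Reporting on the Target model described a score assembled from a couple dozen products whose purchases, taken together, signaled pregnancy. A toy version of that idea (the products, weights, and threshold here are all invented) fits in a few lines:

```python
# A toy illustration of purchase-based scoring in the spirit of the
# reported Target model; the real one reportedly weighed ~25 products.
SIGNAL_WEIGHTS = {
    "unscented_lotion":   0.30,
    "calcium_supplement": 0.25,
    "cotton_balls":       0.20,
    "large_tote_bag":     0.15,
}
MAILER_THRESHOLD = 0.5

def pregnancy_score(basket):
    """Sum the weights of any signal products in a shopping basket."""
    return sum(SIGNAL_WEIGHTS.get(item, 0.0) for item in basket)

basket = ["unscented_lotion", "calcium_supplement", "bread"]
if pregnancy_score(basket) >= MAILER_THRESHOLD:
    print("flag this shopper for baby-product coupons")  # 0.55 >= 0.5
```

Nothing in the basket is sensitive on its own; the inference comes from the combination, which is exactly what makes consumers “queasy.”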

Any qualms about the commodification of pregnancy and motherhood aside, the pregnant-woman case study also exemplifies the kind of echo chamber that targeting creates. A software engineer gets tech books; a mom gets baby toys. Targeting reduces the costs of finding new products that will be of interest. But many people chafe at the idea that their entire personhood can be summed up by a predictive algorithm meant to enhance their buying experience — and that happens to reflect an idea that women are walking wombs with credit cards. The obscurity around online tracking compounds the problem: users who object to this constant filtering can’t even tell the services they use about their preferences; their only remedy is to hope that the service eventually learns to reflect their desires. But when their kinks and quirks are not a desirable outcome of the targeted filtering that purports to give us exactly what we want, the hope that companies may one day decide to offer content they have already repudiated seems far-fetched indeed.

The “curated” marketplace of ideas

Hundreds of books piled on the floor and in bookshelves.

Photo CC-BY linmtheu, filtered.

When companies are peddling ideas as well as products, the problem with pigeonholing consumers’ interests is perhaps even more significant. Targeting, after all, is not just a retail phenomenon. Results for search queries are tailored to a user’s profile, meaning that the things a user has already searched for appear front and center. Algorithms that suggest articles you may like to read, or people you may like to follow, are based on past behavior as well. The more habitual a consumer’s behavior, the more strongly it determines the kinds of material that consumer is likely to receive in the future.
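The feedback loop is easy to see in miniature. In this hypothetical sketch (the topics, base scores, and boost factor are invented, not any platform’s real formula), results on topics a user has engaged with before are boosted, so past behavior keeps reproducing itself at the top of the page:

```python
# A minimal sketch of history-based re-ranking: results on topics the
# user has clicked before get a multiplicative boost and float upward.
def personalize(results, history_topics, boost=2.0):
    """results: list of (title, topic, base_score) tuples."""
    def score(result):
        _title, topic, base = result
        return base * (boost if topic in history_topics else 1.0)
    return sorted(results, key=score, reverse=True)

results = [
    ("Intro to Knitting",  "crafts",      0.9),
    ("Rust in Production", "programming", 0.8),
    ("Beginner Sourdough", "cooking",     0.7),
]

# A user who has only ever clicked programming links sees programming
# first, even though the knitting piece scores higher for everyone else.
for title, _topic, _score in personalize(results, {"programming"}):
    print(title)
```

Every boosted click becomes tomorrow’s history, which earns tomorrow’s boost: the ranking converges on what the profile already contains.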

Online platforms have difficulty drawing a line between what’s permissible and what’s banned, and time and again their content policies reflect bias and prejudice. Tumblr blogs classified as NSFW, for example, are no longer indexed by Tumblr or third-party search engines. Facebook, while allowing pictures of shirtless men, “restrict[s] some images of female breasts if they include the nipple, but we always allow photos of women actively engaged in breastfeeding or showing breasts with post-mastectomy scarring.” Presumably, Facebook has decided that other forms of female nudity are prurient; the platform’s policies are cis-centric, body-shaming, and focused on the gender binary.

Selection of this kind doesn’t erode the right to receive information, but it does make the possibility of receiving certain kinds of products and ideas more remote, turning online life into a popularity contest in which the same products and ideas, those acceptable to “mainstream” society, win again and again. From a consumer perspective, that can be a great thing: it lowers costs and enhances competition. In theory, the products that win will be the best ones.

But this is a different outcome from the unfettered marketplace of ideas we were promised. This is more like being in a library where half the books are in the basement, but you don’t know which half. The librarian tells you that the books in the basement are the weird and unpopular ones, and that you wouldn’t be interested in them. You can get them if you know what you’re looking for. But when you’re browsing the stacks, those books will never appear.

An idealized Internet as marketplace of ideas has all the books on the shelf, all the time. But the targeted internet simply can’t accommodate that kind of diversity, because success is premised on the ability to sort, rank, and relegate the losers to the dusty closed stacks downstairs.

Who decides how we can express ourselves online?

TV that says 'censored'.

Photo CC-BY Turinboy, filtered.

In the end, that kind of selection negatively impacts creativity. If the goal of personalized digital marketing is to predict consumers’ tastes and users’ behavior, the result may well be serious limitations on the products and ideas we encounter online. That the invisible hand of surveillance may shape our capitalist marketplace of ideas may come as no surprise. But it is ironic that internet companies lauded for making information more accessible, companies that profit from the massive collection of information about consumers, are now complicit in narrowing the range and scope of ideas available.

Clearly, corporate tracking and government surveillance are different. Consumers can choose to patronize companies that respect privacy over those that don’t. We don’t elect our corporate overlords, and they don’t take an oath to defend our constitutional rights. But as online platforms become ground zero for fights over the appropriate limits on online speech (be it sexually harassing, supportive of terrorism, indecent, racist, or simply offensive), internet companies are making important policy decisions with obvious, serious repercussions for free expression. To date, none of those decisions has involved reexamining the constant tracking that supports behavioral advertising — and the important effects it may have on free culture and open discourse.