How Silicon Valley Learned to Love Surveillance
Since the dot-com crash, data collection and analysis have become the core DNA of most internet companies.
There must be something in all that free trail mix, the kind that flows like rivers in the cafeterias of Silicon Valley. How else can one explain the ubiquity of what anthropologists call the “Californian Ideology,” a unique mixture of extreme individualism, embrace of counterculture, distrust of government, empowerment through technology, and a belief that all social problems have a technological fix?
The odd thing about the Californian Ideology is not its universal presence in the halls of start-ups and tech giants, but how an anti-state ethos rooted in individual freedom has come to so thoroughly embrace surveillance as its primary business model.
Free Is Just Another Word For Nothing Left To Sell
Photo CC-BY Patrick Nouhailler, filtered.
To understand how Silicon Valley came to love surveillance, we must start in the mid-1970s. This period marked the end of post-war America’s golden age, when decades of growth in industrial productivity and profitability came rapidly grinding to a halt. What we now call “neo-liberalism” first emerged as a political response to this crisis: a path toward renewed profits in the face of high labor costs and increased international competition.
As the neo-liberal project took off in the 1980s, first in the US and UK under Reagan-Thatcher, it brought with it two important changes. The first change was a smashing success for business. An assault on regulation and the power of labor successfully drove down wages and increased profits, often spectacularly so. The second change has proved to be a mixed bag. The official state policy in the US and UK, especially by the 1990s, was to aggressively move away from industrial production in favor of knowledge workers in the information economy. This process of de-industrialization was already underway, but government policy worked to rapidly accelerate it, particularly in the UK.
On the one hand, investments in information technology, which grew from 7% of capital expenditures in 1970 to 45% by 1996, allowed much of American business to remain competitive by modernizing agricultural and industrial production through better management of supply chains and work processes (often overseas). And some workers in the information economy have made out like bandits, but this success has not been evenly distributed. Consider craigslist.org, a company of roughly two dozen employees that, according to one study from NYU, has been responsible for a $5 billion reduction in newspaper revenue in the US alone. In the first decade of the 21st century, newspaper revenue fell by nearly 70%, and music sales, including online, by over 50%.
Many people saw these industries as dinosaurs that were simply feeling the effects of disruptive technological changes. OK, fine, but what was the precise nature of these changes? In essence, if your job had been to produce information that got sold in a mass market, chances were good you had a rough decade.
Before everything was digital, information capitalism depended on the ability to turn data into something that could be bought and sold. However, when applied to digital information, this commodification turned out to be trickier than the boosters of the knowledge economy had predicted. In an analog world, it was easy to turn information into a commodity, because the only way to get information was through a traditional model of industrial production and distribution: the printing press, broadcast TV and radio, etc.
Digital information, however, has an odd property: it can be endlessly copied, with every copy as perfect as the original. In fact, copying is all it can do; to transmit or even read digital data is to make another copy of it. This perfect replication makes the industrial model of centralized production and controlled distribution difficult for digital goods.
Like most former editors of Wired Magazine, Chris Anderson has probably never met a technology he didn’t love. Before he hung up his keyboard to go build robot drones, Anderson wrote a curiously insightful little book simply titled Free. His argument, in a nutshell, was this: with industrial production, the cost of each unit produced declines with greater production. At some point, greater economies of scale no longer reduce per-unit cost, and the market generally settles on a price. With digital distribution, by contrast, the cost per unit keeps falling: there is never a point at which producing more units fails to lower the average cost of each one. As marginal cost approaches zero, it is almost certain that a competitor will offer a similar digital product for free. Because you cannot stop “free,” the only choice remaining is to embrace “free” as your core business model. But Anderson is quick to point out that some information will become more expensive: anything that requires customization and non-automated labor will generally become much more costly over time (which is why some knowledge workers, like programmers, are so lavishly remunerated).
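Anderson’s argument fits on a napkin (the notation below is mine, not his). If producing n copies of a good costs a fixed amount F plus a marginal cost c per copy, the average cost per unit is:

    \mathrm{AC}(n) \;=\; \frac{F + c\,n}{n} \;=\; \frac{F}{n} + c,
    \qquad
    \lim_{n \to \infty} \mathrm{AC}(n) \;=\; c.

For industrial goods, c stays well above zero, so the market price settles near c. For digital copies, c is effectively zero, so the average cost per unit, and with it the competitive floor price, falls toward free as distribution grows.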
This analysis helps explain why the neo-liberal project of de-industrialization has been a double-edged sword. Although the turn toward information technology was a boon for some sectors of the US economy, information capitalism carried a contradiction at its core: how do you make money from information that people expect to get for free?
The Surveillance Profit Fix
Photo CC-BY Shervinafshar, filtered.
This tension came into spectacular view in March of 2000, when internet companies lost half their stock value in a single cataclysmic week. Within five weeks, the tech-heavy NASDAQ had lost a third of its value, and within two years, 76% of its valuation had vanished, or roughly $5 trillion (the annual GDP of the US economy at the time was $10 trillion). Despite the hype of the 1990s, the profit potential of internet companies turned out to be low, and global capital got very nervous, very quickly.
Before the crash, the captains of Silicon Valley were fond of declaring the tech miracle of the Bay Area as the greatest creation of wealth in the history of the world. The rapid devaluation spoke of a different reality, in which a giant pool of liquid capital darted nervously about in search of something to invest in.
Unfortunately for the world economy, this pool of capital, largely accumulated from rising profits made possible by neo-liberal policies, has not been absorbed by potential investments in productive capacity. Instead this surplus capital has flitted across the globe, creating one speculative bubble after another, from the Asian Financial Crisis in 1997, to the dot-com bust of 2000, to the real estate collapse of 2008.
By 2004, the dust had settled in Silicon Valley and it was clear that some companies had managed to emerge unscathed from the dot-com bubble. In fact, many were thriving. In San Francisco, “thought-leaders” (as described in conference publicity materials) gathered to make sense of the new terrain, a collection of practices grouped together under a new term: “web 2.0.” If the bubble internet economy was version one, the new, more sound model was version two.
In “What is Web 2.0,” Tim O’Reilly crystallized and popularized the consensus that emerged from this conference in one of the most widely read and influential tech manifestos of all time. One of the key ideas from web 2.0 was “collective intelligence,” a term that refers to the aggregate capacity of internet users to provide both better answers and more data than all the employees one might possibly hire. The key insight is that internet users do not simply consume value; they are, in fact, the source of value. Users comment, rate, review, share, link, and upload: in short, they create all the ephemeral qualities that make a service appealing, and often all the concrete content as well. If you want users to volunteer their intelligence, you must create a system that is easy to use, seductively fun, and interactive (often framed in the context of “empowering” the user). Part of the web 2.0 model relies on the blanket observation that information companies with larger databases tend to make more money, and by far the fastest and most accurate way to fill a database is to crowdsource it for free.
The web 2.0 model is an inversion of the traditional model of mass industrial production: the product flows in reverse, from the end user to the internet company, and back out again. Although it first saw wide acceptance after the dot-com collapse, this idea was not new. In 1980, Alvin Toffler predicted the rise of the “prosumer” (producer-consumer), and by the late 1990s both Don Tapscott and Kevin Kelly had written books that, in part, argued for collective intelligence as the way to survive in the digital economy. A decade later, Kelly, founding executive editor of Wired Magazine and the living embodiment of the Californian Ideology in its most distilled form, wrote “The New Socialism,” in which he argued that collective intelligence, participatory volunteer labor, and free information were creating a new tech utopia of user empowerment.
The model of collective intelligence helped to fix the crisis of profitability that plagued the old internet companies of the 1990s in two ways: first, when users added value, they worked for free to create content that was often highly sought after by other users; second, it was not just content that users were creating, but also valuable business intelligence from the patterns of their behavior. Tim O’Reilly expressed the Valley’s thinking succinctly when he wrote, in the introduction to a 2007 book titled Programming Collective Intelligence:
It is no longer enough to know how to build a database-backed web site. If you want to succeed, you need to know how to mine the data the users are adding, both explicitly and as a side effect of their activity on your site.
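To make the kind of mining O’Reilly describes concrete, here is a minimal Python sketch, loosely in the spirit of that book’s examples (the names and data are invented for illustration). It infers which items are “similar” using nothing but the ratings users have volunteered:

    from math import sqrt

    # Ratings volunteered by users (invented data): user -> {item: score}.
    ratings = {
        "alice": {"camera": 5, "phone": 3, "laptop": 4},
        "bob":   {"camera": 4, "phone": 2, "tablet": 5},
        "carol": {"phone": 5, "laptop": 2, "tablet": 4},
    }

    def item_similarity(item_a, item_b):
        """Cosine similarity of two items over users who rated both."""
        common = [u for u in ratings
                  if item_a in ratings[u] and item_b in ratings[u]]
        if not common:
            return 0.0
        dot = sum(ratings[u][item_a] * ratings[u][item_b] for u in common)
        norm_a = sqrt(sum(ratings[u][item_a] ** 2 for u in common))
        norm_b = sqrt(sum(ratings[u][item_b] ** 2 for u in common))
        return dot / (norm_a * norm_b)

    print(item_similarity("camera", "phone"))  # about 0.996 on this data

Every rating contributed for free sharpens the similarity estimates, and with them the recommendations, and eventually the ad targeting, built on top. The users do the work; the database does the earning.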
Instead of evoking “collective intelligence,” O’Reilly’s words today read a lot more like “big data” and the never-ending drive to capture more personal data and behavior to grow the databases and analytic models that make market intelligence, targeted advertising, and investor storytime possible. At the time, however, when Silicon Valley first embraced behavioral tracking, it was seen simply as part of the same process of users adding value, and benefiting from it. The narrative of user benefit from user tracking extended to advertising when consolidation in online advertising networks made it possible, by the mid-2000s, to track a user’s activity across the entire web. Google introduced its targeted advertising by announcing, “we believe there is real value to seeing ads about the things that interest you.” In a statement typical of behavioral advertising industry rhetoric, Rocket Fuel declared that it wants “to turn online ads from an annoyance into a useful complement to your web surfing experience.”
Data, Data, Everywhere and Not a Stop to Think
It is a mistake to view Silicon Valley’s surveillance problem as simply a question of advertising, or even a problem unique to Silicon Valley. Although aggressive analysis of user behavior first found success in building the fortunes of Google, Yahoo, and Amazon, the data analytics cat is out of the bag. Since the dot-com crash, data collection and analysis have become the core DNA of most internet companies. The level of sophistication in data mining, machine learning, and social network analysis pioneered by internet companies has also made its way to the titans of the old economy, as credit card companies, telecoms, and retailers came to realize they had been sitting on a wealth of behavioral data that was insufficiently mined.
If digital information proved difficult to commodify in the traditional sense, it has proven to be a highly valuable commodity when it takes the form of surveillance. Advocates of the knowledge economy imagined information being sold as industrial goods are sold, from the producer to the consumer, but the real money is in information that flows in the opposite direction, in the capture and processing of user behavior.
The same property that makes an industrial model for information difficult to profit from also makes a surveillance model possible: digital information can only be copied, never just moved. When something is copied, it can be stored, and when it is stored, it can be analyzed. This makes every point of contact we have with the digital world a point of potential capture.
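A toy sketch makes the architecture of capture concrete. Any intermediary that handles a request can keep a copy as a side effect of serving it; in the hypothetical Python WSGI middleware below (invented names, standard library only), logging and serving are the same act:

    import json
    import time

    class CapturingMiddleware:
        """Wraps any WSGI application: every request is served unchanged,
        and a copy of its metadata is appended to a log. Capture is not an
        extra step; it falls out of handling the request at all."""

        def __init__(self, app, log_path="capture.log"):
            self.app = app
            self.log_path = log_path

        def __call__(self, environ, start_response):
            record = {
                "time": time.time(),
                "ip": environ.get("REMOTE_ADDR"),
                "path": environ.get("PATH_INFO"),
                "agent": environ.get("HTTP_USER_AGENT"),
            }
            with open(self.log_path, "a") as log:
                log.write(json.dumps(record) + "\n")
            return self.app(environ, start_response)  # response untouched

The user sees nothing different; the point of contact has quietly become a point of capture.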
It is easy to see these deep changes in our technology and shrug. Some take comfort in the degree to which companies might have too much data, and frequently collect incorrect information or produce analysis full of mistakes (although a lot of money is being spent to fix these mistakes). Others point to a future of consumer choice: in response to the general creepiness of behavioral advertising, the new frontier for the industry will likely be personal data markets, where users are able to make choices about their own tracking and be directly compensated for their personal data. If users are choosing to be tracked, what is the problem?
The truth is we don’t know what the effects of pervasive surveillance will be. We do know, from social science research, that surveillance in general stifles dissent and produces conformity. We also know that the economic model that underpins any communication system (be it the postal system, the telegraph, the telephone, television, or the internet) deeply structures and constrains what is possible and limits how we are able to communicate. Our embrace of surveillance is essentially a grand experiment to see what happens when the balance of information (and power) is radically shifted between consumers and business, citizens and government, workers and employers.
The sad thing is, under all the layers of bullshit that attempt to rationalize Silicon Valley’s embrace of surveillance, the turn toward more interactive communication holds genuine potential. Our newfound capacity for “disintermediation,” the ability to connect directly from one human to another all over the planet, is likely to mark an important milestone in the history of human communication, however fraught and imperfect this ability currently is. Now we just need to figure out how to disentangle this new power from an infrastructure that has been built with surveillance at its core.