Shoshana Zuboff | An excerpt adapted from The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power | PublicAffairs | 2019 | 23 minutes (6,281 words)

In 2000 a group of computer scientists and engineers at Georgia Tech collaborated on a project called the “Aware Home.” It was meant to be a “living laboratory” for the study of “ubiquitous computing.” They imagined a “human-home symbiosis” in which many animate and inanimate processes would be captured by an elaborate network of “context aware sensors” embedded in the house and by wearable computers worn by the home’s occupants. The design called for an “automated wireless collaboration” between the platform that hosted personal information from the occupants’ wearables and a second one that hosted the environmental information from the sensors.

There were three working assumptions: first, the scientists and engineers understood that the new data systems would produce an entirely new knowledge domain. Second, it was assumed that the rights to that new knowledge and the power to use it to improve one’s life would belong exclusively to the people who live in the house. Third, the team assumed that for all of its digital wizardry, the Aware Home would take its place as a modern incarnation of the ancient conventions that understand “home” as the private sanctuary of those who dwell within its walls.

All of this was expressed in the engineering plan. It emphasized trust, simplicity, the sovereignty of the individual, and the inviolability of the home as a private domain. The Aware Home information system was imagined as a simple “closed loop” with only two nodes and controlled entirely by the home’s occupants. Because the house would be “constantly monitoring the occupants’ whereabouts and activities…even tracing its inhabitants’ medical conditions,” the team concluded, “there is a clear need to give the occupants knowledge and control of the distribution of this information.” All the information was to be stored on the occupants’ wearable computers “to insure the privacy of an individual’s information.”

By 2018, the global “smart-home” market was valued at $36 billion and expected to reach $151 billion by 2023. The numbers betray an earthquake beneath their surface. Consider just one smart-home device: the Nest thermostat, which was made by a company that was owned by Alphabet, the Google holding company, and then merged with Google in 2018. The Nest thermostat does many things imagined in the Aware Home. It collects data about its uses and environment. It uses motion sensors and computation to “learn” the behaviors of a home’s inhabitants. Nest’s apps can gather data from other connected products such as cars, ovens, fitness trackers, and beds. Such systems can, for example, trigger lights if an anomalous motion is detected, signal video and audio recording, and even send notifications to homeowners or others. As a result of the merger with Google, the thermostat, like other Nest products, will be built with Google’s artificial intelligence capabilities, including its personal digital “assistant.” Like the Aware Home, the thermostat and its brethren devices create immense new stores of knowledge and therefore new power — but for whom?

Wi-Fi–enabled and networked, the thermostat’s intricate, personalized data stores are uploaded to Google’s servers. Each thermostat comes with a “privacy policy,” a “terms-of-service agreement,” and an “end-user licensing agreement.” These reveal oppressive privacy and security consequences in which sensitive household and personal information is shared with other smart devices, unnamed personnel, and third parties for the purposes of predictive analyses and sales to other unspecified parties. Nest takes little responsibility for the security of the information it collects and none for how the other companies in its ecosystem will put those data to use. A detailed analysis of Nest’s policies by two University of London scholars concluded that were one to enter into the Nest ecosystem of connected devices and apps, each with their own equally burdensome and audacious terms, the purchase of a single home thermostat would entail the need to review nearly a thousand so-called contracts.

Should the customer refuse to agree to Nest’s stipulations, the terms of service indicate that the functionality and security of the thermostat will be deeply compromised, no longer supported by the necessary updates meant to ensure its reliability and safety. The consequences can range from frozen pipes to failed smoke alarms to an easily hacked internal home system.

By 2018, the assumptions of the Aware Home were gone with the wind. Where did they go? What was that wind? The Aware Home, like many other visionary projects, imagined a digital future that empowers individuals to lead more-effective lives. What is most critical is that in the year 2000 this vision naturally assumed an unwavering commitment to the privacy of individual experience. Should an individual choose to render her experience digitally, then she would exercise exclusive rights to the knowledge garnered from such data, as well as exclusive rights to decide how such knowledge might be put to use. Today these rights to privacy, knowledge, and application have been usurped by a bold market venture powered by unilateral claims to others’ experience and the knowledge that flows from it. What does this sea change mean for us, for our children, for our democracies, and for the very possibility of a human future in a digital world? It is the darkening of the digital dream into a voracious and utterly novel commercial project that I call surveillance capitalism.


Surveillance capitalism runs contrary to the early digital dream, consigning the Aware Home to ancient history. Instead, it strips away the illusion that the networked form has some kind of indigenous moral content, that being “connected” is somehow intrinsically pro-social, innately inclusive, or naturally tending toward the democratization of knowledge. Digital connection is now a means to others’ commercial ends. At its core, surveillance capitalism is parasitic and self-referential. It revives Karl Marx’s old image of capitalism as a vampire that feeds on labor, but with an unexpected turn. Instead of labor, surveillance capitalism feeds on every aspect of every human’s experience. Google invented and perfected surveillance capitalism in much the same way that a century ago General Motors invented and perfected managerial capitalism. Google was the pioneer of surveillance capitalism in thought and practice, the deep pocket for research and development, and the trailblazer in experimentation and implementation, but it is no longer the only actor on this path. Surveillance capitalism quickly spread to Facebook and later to Microsoft. Evidence suggests that Amazon has veered in this direction, and it is a constant challenge to Apple, both as an external threat and as a source of internal debate and conflict.

As the pioneer of surveillance capitalism, Google launched an unprecedented market operation into the unmapped spaces of the internet, where it faced few impediments from law or competitors, like an invasive species in a landscape free of natural predators. Its leaders drove the systemic coherence of their businesses at a breakneck pace that neither public institutions nor individuals could follow. Google also benefited from historical events when a national security apparatus galvanized by the attacks of 9/11 was inclined to nurture, mimic, shelter, and appropriate surveillance capitalism’s emergent capabilities for the sake of total knowledge and its promise of certainty.

Surveillance capitalists quickly realized that they could do anything they wanted, and they did. They dressed in the fashions of advocacy and emancipation, appealing to and exploiting contemporary anxieties, while the real action was hidden offstage. Theirs was an invisibility cloak woven in equal measure to the rhetoric of the empowering web, the ability to move swiftly, the confidence of vast revenue streams, and the wild, undefended nature of the territory they would conquer and claim. They were protected by the inherent illegibility of the automated processes that they rule, the ignorance that these processes breed, and the sense of inevitability that they foster.

Surveillance capitalism is no longer confined to the competitive dramas of the large internet companies, where behavioral futures markets were first aimed at online advertising. Its mechanisms and economic imperatives have become the default model for most internet-based businesses. Eventually, competitive pressure drove expansion into the offline world, where the same foundational mechanisms that expropriate your online browsing, likes, and clicks are trained on your run in the park, breakfast conversation, or hunt for a parking space. Today’s prediction products are traded in behavioral futures markets that extend beyond targeted online ads to many other sectors, including insurance, retail, finance, and an ever-widening range of goods and services companies determined to participate in these new and profitable markets. Whether it’s a “smart” home device, what the insurance companies call “behavioral underwriting,” or any one of thousands of other transactions, we now pay for our own domination.

Surveillance capitalism’s products and services are not the objects of a value exchange. They do not establish constructive producer-consumer reciprocities. Instead, they are the “hooks” that lure users into their extractive operations in which our personal experiences are scraped and packaged as the means to others’ ends. We are not surveillance capitalism’s “customers.” Although the saying tells us “If it’s free, then you are the product,” that is also incorrect. We are the sources of surveillance capitalism’s crucial surplus: the objects of a technologically advanced and increasingly inescapable raw-material-extraction operation. Surveillance capitalism’s actual customers are the enterprises that trade in its markets for future behavior.


Google is to surveillance capitalism what the Ford Motor Company and General Motors were to mass-production–based managerial capitalism. New economic logics and their commercial models are discovered by people in a time and place and then perfected through trial and error. In our time Google became the pioneer, discoverer, elaborator, experimenter, lead practitioner, role model, and diffusion hub of surveillance capitalism. GM and Ford’s iconic status as pioneers of twentieth-century capitalism made them enduring objects of scholarly research and public fascination because the lessons they had to teach resonated far beyond the individual companies. Google’s practices deserve the same kind of examination, not merely as a critique of a single company but rather as the starting point for the codification of a powerful new form of capitalism.

With the triumph of mass production at Ford and for decades thereafter, hundreds of researchers, businesspeople, engineers, journalists, and scholars would excavate the circumstances of its invention, origins, and consequences. Decades later, scholars continued to write extensively about Ford, the man and the company. GM has also been an object of intense scrutiny. It was the site of Peter Drucker’s field studies for his seminal Concept of the Corporation, the 1946 book that codified the practices of the twentieth-century business organization and established Drucker’s reputation as a management sage. In addition to the many works of scholarship and analysis on these two firms, their own leaders enthusiastically articulated their discoveries and practices. Henry Ford and his general manager, James Couzens, and Alfred Sloan and his marketing man, Henry “Buck” Weaver, reflected on, conceptualized, and proselytized their achievements, specifically locating them in the evolutionary drama of American capitalism.

Google is a notoriously secretive company, and one is hard-pressed to imagine a Drucker equivalent freely roaming the scene and scribbling in the hallways. Its executives carefully craft their messages of digital evangelism in books and blog posts, but its operations are not easily accessible to outside researchers or journalists. In 2016 a lawsuit brought against the company by a product manager alleged an internal spying program in which employees are expected to identify coworkers who violate the firm’s confidentiality agreement: a broad prohibition against divulging anything about the company to anyone. The closest thing we have to a Buck Weaver or James Couzens codifying Google’s practices and objectives is the company’s longtime chief economist, Hal Varian, who aids the cause of understanding with scholarly articles that explore important themes. Varian has been described as “the Adam Smith of the discipline of Googlenomics” and the “godfather” of its advertising model. It is in Varian’s work that we find important clues, hidden in plain sight, to the logic of surveillance capitalism and its claims to power.

In two extraordinary articles in scholarly journals, Varian explored the theme of “computer-mediated transactions” and their transformational effects on the modern economy. Both pieces are written in amiable, down-to-earth prose, but Varian’s casual understatement stands in counterpoint to his often-startling declarations: “Nowadays there is a computer in the middle of virtually every transaction…now that they are available these computers have several other uses.” He then identifies four such new uses: “data extraction and analysis,” “new contractual forms due to better monitoring,” “personalization and customization,” and “continuous experiments.”

Varian’s discussions of these new “uses” are an unexpected guide to the strange logic of surveillance capitalism, the division of learning that it shapes, and the character of the information civilization toward which it leads. “Data extraction and analysis,” Varian writes, “is what everyone is talking about when they talk about big data.”


Google was incorporated in 1998, founded by Stanford graduate students Larry Page and Sergey Brin just two years after the Mosaic browser threw open the doors of the World Wide Web to the computer-using public. From the start, the company embodied the promise of information capitalism as a liberating and democratic social force that galvanized and delighted second-modernity populations around the world.

Thanks to this wide embrace, Google successfully imposed computer mediation on broad new domains of human behavior as people searched online and engaged with the web through a growing roster of Google services. As these new activities were informated for the first time, they produced wholly new data resources. For example, in addition to key words, each Google search query produces a wake of collateral data such as the number and pattern of search terms, how a query is phrased, spelling, punctuation, dwell times, click patterns, and location.

Early on, these behavioral by-products were haphazardly stored and operationally ignored. Amit Patel, a young Stanford graduate student with a special interest in “data mining,” is frequently credited with the groundbreaking insight into the significance of Google’s accidental data caches. His work with these data logs persuaded him that detailed stories about each user — thoughts, feelings, interests — could be constructed from the wake of unstructured signals that trailed every online action. These data, he concluded, actually provided a “broad sensor of human behavior” and could be put to immediate use in realizing cofounder Larry Page’s dream of Search as a comprehensive artificial intelligence.

Google’s engineers soon grasped that the continuous flows of collateral behavioral data could turn the search engine into a recursive learning system that constantly improved search results and spurred product innovations such as spell check, translation, and voice recognition. As Kenneth Cukier observed at that time,

Other search engines in the 1990s had the chance to do the same, but did not pursue it. Around 2000 Yahoo! saw the potential, but nothing came of the idea. It was Google that recognized the gold dust in the detritus of its interactions with its users and took the trouble to collect it up…Google exploits information that is a by-product of user interactions, or data exhaust, which is automatically recycled to improve the service or create an entirely new product.

What had been regarded as waste material — “data exhaust” spewed into Google’s servers during the combustive action of Search — was quickly reimagined as a critical element in the transformation of Google’s search engine into a reflexive process of continuous learning and improvement.

At that early stage of Google’s development, the feedback loops involved in improving its Search functions produced a balance of power: Search needed people to learn from, and people needed Search to learn from. This symbiosis enabled Google’s algorithms to learn and produce ever-more relevant and comprehensive search results. More queries meant more learning; more learning produced more relevance. More relevance meant more searches and more users. By the time the young company held its first press conference in 1999, to announce a $25 million equity investment from two of the most revered Silicon Valley venture capital firms, Sequoia Capital and Kleiner Perkins, Google Search was already fielding seven million requests each day. A few years later, Hal Varian, who joined Google as its chief economist in 2002, would note, “Every action a user performs is considered a signal to be analyzed and fed back into the system.” The PageRank algorithm, named after cofounder Larry Page, had already given Google a significant advantage in identifying the most popular results for queries. Over the course of the next few years it would be the capture, storage, analysis, and learning from the by-products of those search queries that would turn Google into the gold standard of web search.

The key point for us rests on a critical distinction. During this early period, behavioral data were put to work entirely on the user’s behalf. User data provided value at no cost, and that value was reinvested in the user experience in the form of improved services: enhancements that were also offered at no cost to users. Users provided the raw material in the form of behavioral data, and those data were harvested to improve speed, accuracy, and relevance and to help build ancillary products such as translation. I call this the behavioral value reinvestment cycle, in which all behavioral data are reinvested in the improvement of the product or service.

The cycle emulates the logic of the iPod; it worked beautifully at Google but with one critical difference: the absence of a sustainable market transaction. In the case of the iPod, the cycle was triggered by the purchase of a high-margin physical product. Subsequent reciprocities improved the iPod product and led to increased sales. Customers were the subjects of the commercial process, which promised alignment with their “what I want, when I want, where I want” demands. At Google, the cycle was similarly oriented toward the individual as its subject, but without a physical product to sell, it floated outside the marketplace, an interaction with “users” rather than a market transaction with customers.

This helps to explain why it is inaccurate to think of Google’s users as its customers: there is no economic exchange, no price, and no profit. Nor do users function in the role of workers. When a capitalist hires workers and provides them with wages and means of production, the products that they produce belong to the capitalist to sell at a profit. Not so here. Users are not paid for their labor, nor do they operate the means of production. Finally, people often say that the user is the “product.” This is also misleading. Users are not products, but rather we are the sources of raw-material supply. Surveillance capitalism’s unusual products manage to be derived from our behavior while remaining indifferent to our behavior. Its products are about predicting us, without actually caring what we do or what is done to us.

At this early stage of Google’s development, whatever Search users inadvertently gave up that was of value to the company they also used up in the form of improved services. In this reinvestment cycle, serving users with amazing Search results “consumed” all the value that users created when they provided extra behavioral data. The fact that users needed Search about as much as Search needed users created a balance of power between Google and its populations. People were treated as ends in themselves, the subjects of a nonmarket, self-contained cycle that was perfectly aligned with Google’s stated mission “to organize the world’s information, making it universally accessible and useful.”


By 1999, despite the splendor of Google’s new world of searchable web pages, its growing computer science capabilities, and its glamorous venture backers, there was no reliable way to turn investors’ money into revenue. The behavioral value reinvestment cycle produced a very cool search function, but it was not yet capitalism. The balance of power made it financially risky and possibly counterproductive to charge users a fee for search services. Selling search results would also have set a dangerous precedent for the firm, assigning a price to indexed information that Google’s web crawler had already taken from others without payment. Without a device like Apple’s iPod or its digital songs, there were no margins, no surplus, nothing left over to sell and turn into revenue.

Google had relegated advertising to steerage class: its AdWords team consisted of seven people, most of whom shared the founders’ general antipathy toward ads. The tone had been set in Sergey Brin and Larry Page’s milestone paper that unveiled their search engine conception, “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” presented at the 1998 World Wide Web Conference: “We expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers. This type of bias is very difficult to detect but could still have a significant effect on the market…we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.”

Google’s first revenues depended instead on exclusive licensing deals to provide web services to portals such as Yahoo! and Japan’s BIGLOBE. It also generated modest revenue from sponsored ads linked to search query keywords. There were other models for consideration. Rival search engines such as Overture, used exclusively by the then-giant portal AOL, or Inktomi, the search engine adopted by Microsoft, collected revenues from the sites whose pages they indexed. Overture was also successful in attracting online ads with its policy of allowing advertisers to pay for high-ranking search listings, the very format that Brin and Page scorned.

Prominent analysts publicly doubted whether Google could compete with its more-established rivals. As the New York Times asked, “Can Google create a business model even remotely as good as its technology?” A well-known Forrester Research analyst proclaimed that there were only a few ways for Google to make money with Search: “build a portal [like Yahoo!]…partner with a portal…license the technology…wait for a big company to purchase them.”

Despite these general misgivings about Google’s viability, the firm’s prestigious venture backing gave the founders confidence in their ability to raise money. This changed abruptly in April 2000, when the legendary dot-com economy began its steep plunge into recession, and Silicon Valley’s Garden of Eden unexpectedly became the epicenter of a financial earthquake.

By mid-April, Silicon Valley’s fast-money culture of privilege was under siege with the implosion of what came to be known as the “dot-com bubble.” It is easy to forget exactly how terrifying things were for the valley’s ambitious young people and their slightly older investors. Startups with outsized valuations just months earlier were suddenly forced to shutter. Prominent articles such as “Doom Stalks the Dotcoms” noted that the stock prices of Wall Street’s most-revered internet “high flyers” were “down for the count,” with many of them trading below their initial offering price: “With many dotcoms declining, neither venture capitalists nor Wall Street is eager to give them a dime…” The news brimmed with descriptions of shell-shocked investors. The week of April 10 saw the worst decline in the history of the NASDAQ, where many internet companies had gone public, and there was a growing consensus that the “game” had irreversibly changed.

As the business environment in Silicon Valley unraveled, investors’ prospects for cashing out by selling Google to a big company seemed far less likely, and they were not immune to the rising tide of panic. Many Google investors began to express doubts about the company’s prospects, and some threatened to withdraw support. Pressure for profit mounted sharply, despite the fact that Google Search was widely considered the best of all the search engines, traffic to its website was surging, and a thousand résumés flooded the firm’s Mountain View office each day. Page and Brin were seen to be moving too slowly, and their top venture capitalists, John Doerr from Kleiner Perkins and Michael Moritz from Sequoia, were frustrated. According to Google chronicler Steven Levy, “The VCs were screaming bloody murder. Tech’s salad days were over, and it wasn’t certain that Google would avoid becoming another crushed radish.”

The specific character of Silicon Valley’s venture funding, especially during the years leading up to dangerous levels of startup inflation, also contributed to a growing sense of emergency at Google. As Stanford sociologist Mark Granovetter and his colleague Michel Ferrary found in their study of valley venture firms, “A connection with a high-status VC firm signals the high status of the startup and encourages other agents to link to it.” These themes may seem obvious now, but it is useful to mark the anxiety of those months of sudden crisis. Prestigious risk investment functioned as a form of vetting — much like acceptance to a top university sorts and legitimates students, elevating a few against the backdrop of the many — especially in the “uncertain” environment characteristic of high-tech investing. Loss of that high-status signaling power assigned a young company to a long list of also-rans in Silicon Valley’s fast-moving saga.

Other research findings point to the consequences of the impatient money that flooded the valley as inflationary hype drew speculators and ratcheted up the volatility of venture funding. Studies of pre-bubble investment patterns showed a “big-score” mentality in which bad results tended to stimulate increased investing as funders chased the belief that some young company would suddenly discover the elusive business model destined to turn all their bets into rivers of gold. Startup mortality rates in Silicon Valley outstripped those for other venture capital centers such as Boston and Washington, DC, with impatient money producing a few big wins and many losses. Impatient money is also reflected in the size of Silicon Valley startups, which during this period were significantly smaller than in other regions, employing an average of 68 employees as compared to an average of 112 in the rest of the country. This reflects an interest in quick returns without spending much time on growing a business or deepening its talent base, let alone developing the institutional capabilities. These propensities were exacerbated by the larger Silicon Valley culture, where net worth was celebrated as the sole measure of success for valley parents and their children.

For all their genius and principled insights, Brin and Page could not ignore the mounting sense of emergency. By December 2000, the Wall Street Journal reported on the new “mantra” emerging from Silicon Valley’s investment community: “Simply displaying the ability to make money will not be enough to remain a major player in the years ahead. What will be required will be an ability to show sustained and exponential profits.”


The declaration of a state of exception functions in politics as cover for the suspension of the rule of law and the introduction of new executive powers justified by crisis. At Google in late 2000, it became a rationale for annulling the reciprocal relationship that existed between Google and its users, steeling the founders to abandon their passionate and public opposition to advertising. As a specific response to investors’ anxiety, the founders tasked the tiny AdWords team with the objective of looking for ways to make more money. Page demanded that the whole process be simplified for advertisers. In this new approach, he insisted that advertisers “shouldn’t even get involved with choosing keywords — Google would choose them.”

Operationally, this meant that Google would turn its own growing cache of behavioral data and its computational power and expertise toward the single task of matching ads with queries. New rhetoric took hold to legitimate this unusual move. If there was to be advertising, then it had to be “relevant” to users. Ads would no longer be linked to keywords in a search query, but rather a particular ad would be “targeted” to a particular individual. Securing this holy grail of advertising would ensure relevance to users and value to advertisers.

Absent from the new rhetoric was the fact that in pursuit of this new aim, Google would cross into virgin territory by exploiting sensitivities that only its exclusive and detailed collateral behavioral data about millions and later billions of users could reveal. To meet the new objective, the behavioral value reinvestment cycle was rapidly and secretly subordinated to a larger and more complex undertaking. The raw materials that had been solely used to improve the quality of search results would now also be put to use in the service of targeting advertising to individual users. Some data would continue to be applied to service improvement, but the growing stores of collateral signals would be repurposed to improve the profitability of ads for both Google and its advertisers. These behavioral data available for uses beyond service improvement constituted a surplus, and it was on the strength of this behavioral surplus that the young company would find its way to the “sustained and exponential profits” that would be necessary for survival. Thanks to a perceived emergency, a new mutation began to gather form and quietly slip its moorings in the implicit advocacy-oriented social contract of the firm’s original relationship with users.

Google’s declared state of exception was the backdrop for 2002, the watershed year during which surveillance capitalism took root. The firm’s appreciation of behavioral surplus crossed another threshold that April, when the data logs team arrived at their offices one morning to find that a peculiar phrase had surged to the top of the search queries: “Carol Brady’s maiden name.” Why the sudden interest in a 1970s television character? It was data scientist and logs team member Amit Patel who recounted the event to the New York Times, noting, “You can’t interpret it unless you know what else is going on in the world.”

The team went to work to solve the puzzle. First, they discerned that the pattern of queries had produced five separate spikes, each beginning at forty-eight minutes after the hour. Then they learned that the query pattern occurred during the airing of the popular TV show Who Wants to Be a Millionaire? The spikes reflected the successive time zones during which the show aired, ending in Hawaii. In each time zone, the show’s host posed the question of Carol Brady’s maiden name, and in each zone the queries immediately flooded into Google’s servers.

As the New York Times reported, “The precision of the Carol Brady data was eye-opening for some.” Even Brin was stunned by the clarity of Search’s predictive power, revealing events and trends before they “hit the radar” of traditional media. As he told the Times, “It was like trying an electron microscope for the first time. It was like a moment-by-moment barometer.” Google executives were described by the Times as reluctant to share their thoughts about how their massive stores of query data might be commercialized. “There is tremendous opportunity with this data,” one executive confided.

Just a month before the Carol Brady moment, while the AdWords team was already working on new approaches, Brin and Page hired Eric Schmidt, an experienced executive, engineer, and computer science Ph.D., as chairman. By August, they appointed him to the CEO’s role. Doerr and Moritz had been pushing the founders to hire a professional manager who would know how to pivot the firm toward profit. Schmidt immediately implemented a “belt-tightening” program, grabbing the budgetary reins and heightening the general sense of financial alarm as fund-raising prospects came under threat. A squeeze on workspace found him unexpectedly sharing his office with none other than Amit Patel.

Schmidt later boasted that as a result of their close quarters over the course of several months, he had instant access to better revenue figures than did his own financial planners. We do not know (and may never know) what other insights Schmidt might have gleaned from Patel about the predictive power of Google’s behavioral data stores, but there is no doubt that a deeper grasp of the predictive power of data quickly shaped Google’s specific response to financial emergency, triggering the crucial mutation that ultimately turned AdWords, Google, the internet, and the very nature of information capitalism toward an astonishingly lucrative surveillance project.

Google’s earliest ads had been considered more effective than most online advertising at the time because they were linked to search queries and Google could track when users actually clicked on an ad, known as the “click-through” rate. Despite this, advertisers were billed in the conventional manner according to how many people viewed an ad. As Search expanded, Google created the self-service system called AdWords, in which a search that used the advertiser’s keyword would include that advertiser’s text box and a link to its landing page. Ad pricing depended upon the ad’s position on the search results page.

Rival search startup Overture had developed an online auction system for web page placement that allowed it to scale online advertising targeted to keywords. Google would produce a transformational enhancement to that model, one that was destined to alter the course of information capitalism. As a Bloomberg journalist explained in 2006, “Google maximizes the revenue it gets from that precious real estate by giving its best position to the advertiser who is likely to pay Google the most in total, based on the price per click multiplied by Google’s estimate of the likelihood that someone will actually click on the ad.” That pivotal multiplier was the result of Google’s advanced computational capabilities trained on its most significant and secret discovery: behavioral surplus. From this point forward, the combination of ever-increasing machine intelligence and ever-more-vast supplies of behavioral surplus would become the foundation of an unprecedented logic of accumulation. Google’s reinvestment priorities would shift from merely improving its user offerings to inventing and institutionalizing the most far-reaching and technologically advanced raw-material supply operations that the world had ever seen. Henceforth, revenues and growth would depend upon more behavioral surplus.
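The ranking logic that the Bloomberg journalist describes, giving the best ad position to the highest expected total payment rather than the highest per-click bid, can be reduced to simple arithmetic. The sketch below is purely illustrative (the advertisers, bids, and click-rate estimates are invented, and Google's actual system involved far more signals and an auction mechanism); it shows only the pivotal multiplication of price per click by estimated click probability.

```python
# Illustrative sketch only -- not Google's actual code. It demonstrates the
# ranking principle described above: expected revenue = price-per-click bid
# multiplied by an estimated probability that a user will click the ad.
# All advertisers, bids, and click-rate estimates here are hypothetical.
ads = [
    {"advertiser": "A", "bid_per_click": 2.00, "est_ctr": 0.010},
    {"advertiser": "B", "bid_per_click": 0.50, "est_ctr": 0.080},
    {"advertiser": "C", "bid_per_click": 1.00, "est_ctr": 0.030},
]

def expected_revenue(ad):
    """Expected payment to the platform per ad impression."""
    return ad["bid_per_click"] * ad["est_ctr"]

# Best position goes to the highest expected revenue, not the highest bid:
# advertiser A bids the most per click (2.00), but B's predicted click
# rate makes B more valuable (0.50 * 0.08 = 0.04 vs. 2.00 * 0.01 = 0.02).
ranked = sorted(ads, key=expected_revenue, reverse=True)
print([ad["advertiser"] for ad in ranked])  # prints ['B', 'C', 'A']
```

The point of the multiplier is that the click-probability estimate, derived from behavioral data, is what lets a lower bid win the "precious real estate."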

Google’s many patents filed during those early years illustrate the explosion of discovery, inventiveness, and complexity detonated by the state of exception that led to these crucial innovations and the firm’s determination to advance the capture of behavioral surplus. One patent submitted in 2003 by three of the firm’s top computer scientists is titled “Generating User Information for Use in Targeted Advertising.” The patent is emblematic of the new mutation and the emerging logic of accumulation that would define Google’s success. Of even greater interest, it also provides an unusual glimpse into the “economic orientation” baked deep into the technology cake by reflecting the mindset of Google’s distinguished scientists as they harnessed their knowledge to the firm’s new aims. In this way, the patent stands as a treatise on a new political economics of clicks and its moral universe, before the company learned to disguise this project in a fog of euphemism.

The patent reveals a pivoting of the backstage operation toward Google’s new audience of genuine customers. “The present invention concerns advertising,” the inventors announce. Despite the enormous quantity of demographic data available to advertisers, the scientists note that much of an ad budget “is simply wasted…it is very difficult to identify and eliminate such waste.”

Advertising had always been a guessing game: art, relationships, conventional wisdom, standard practice, but never “science.” The idea of being able to deliver a particular message to a particular person at just the moment when it might have a high probability of actually influencing his or her behavior was, and had always been, the holy grail of advertising. The inventors point out that online ad systems had also failed to achieve this elusive goal. The then-predominant approaches used by Google’s competitors, in which ads were targeted to keywords or content, were unable to identify relevant ads “for a particular user.” Now the inventors offered a scientific solution that exceeded the most ambitious dreams of any advertising executive:

There is a need to increase the relevancy of ads served for some user request, such as a search query or a document request…to the user that submitted the request…The present invention may involve novel methods, apparatus, message formats and/or data structures for determining user profile information and using such determined user profile information for ad serving.

In other words, Google would no longer mine behavioral data strictly to improve service for users but rather to read users’ minds for the purposes of matching ads to their interests, as those interests are deduced from the collateral traces of online behavior. With Google’s unique access to behavioral data, it would now be possible to know what a particular individual in a particular time and place was thinking, feeling, and doing. That this no longer seems astonishing to us, or perhaps even worthy of note, is evidence of the profound psychic numbing that has inured us to a bold and unprecedented shift in capitalist methods.

The techniques described in the patent meant that each time a user queries Google’s search engine, the system simultaneously presents a specific configuration of a particular ad, all in the fraction of a moment that it takes to fulfill the search query. The data used to perform this instant translation from query to ad, a predictive analysis that was dubbed “matching,” went far beyond the mere denotation of search terms. New data sets were compiled that would dramatically enhance the accuracy of these predictions. These data sets were referred to as “user profile information” or “UPI.” These new data meant that there would be no more guesswork and far less waste in the advertising budget. Mathematical certainty would replace all of that.

* * *

From THE AGE OF SURVEILLANCE CAPITALISM: The Fight for a Human Future at the New Frontier of Power, by Shoshana Zuboff. Reprinted with permission from PublicAffairs, a division of the Hachette Book Group.

Shoshana Zuboff is the Charles Edward Wilson Professor emerita, Harvard Business School. She is the author of In the Age of the Smart Machine: The Future of Work and Power and The Support Economy: Why Corporations Are Failing Individuals and the Next Episode of Capitalism.

Longreads Editor: Dana Snitzky