Jacob Silverman | Longreads | May 2018 | 9 minutes (2,206 words)
For the better part of two decades, an important set of assumptions has underwritten our use of the internet. In exchange for being monitored — to what degree, many people still have no idea — we would receive free digital services. We would give up our privacy, but our data and our rights, unarticulated though they might be, would be respected. This is the simple bargain that drove the development of the social web and rewarded its pioneers — Facebook, Google, and the many apps and services they’ve swallowed up — with global user bases and multi-billion-dollar fortunes.
Now that bargain has been called into question by the scandal surrounding Facebook and the data-hungry political consultancy Cambridge Analytica. Or at least, it should have been. But rather than turning attention to the profound structural issues surrounding surveillance capitalism, mainstream media — along with the U.S. Congress — largely centered this affair on Facebook’s stewardship of user data. The presumption is that Facebook has a right to our information; it simply mishandled it in this case, handing it over to a nefarious actor. Facebook executives did a penitent tour through the halls of media and the Capitol, offering apologies and begging for the public’s forgiveness. And then, this week at Facebook’s F8 developer conference, they’ll close off some of their data, make some small concessions, and launch a new commercial analytics app.
The number of victims in this supposed Cambridge Analytica “breach” was first pegged at 50 million, but Facebook has since revised the number upward to 87 million. In another announcement, Facebook said that nearly all of its 2.2 billion users had their public profiles scraped, meaning that some of their basic personal information was gathered by — well, we have no way to know who. Both of these events are significant, but the latter actually speaks more acutely to the crisis surrounding Facebook, for the truth is that our personal information has long been for sale — through data brokers and other shadowy entities — to any commercial or governmental actor that might be interested. The shocking part of the Cambridge Analytica scandal is that it has torn the veil away from this arrangement. For the first time, many people not only have a sense of what data is being collected about them but also how it’s sold and what it can do — in this case, contribute to the election of a singularly disturbing character as president.
At least, that’s the presiding narrative if you believe that Cambridge Analytica’s psychographic targeting techniques are effective persuasive tools. Despite a raft of excellent reporting, it remains hard to know how a company like Cambridge Analytica works and what influence it has in the real world. The undercover videos filmed by a British news outlet of CA executives bragging about swaying elections all over the globe might be chalked up to salesmanship. And some respectable scholars and industry figures have questioned whether psychographic targeting does much at all. But to put it simply, advertising would not be a hugely profitable industry if it were total hokum, and Facebook made just shy of $40 billion last year in digital advertising largely on the strength of connecting advertisers directly with their audiences, all thanks to an increasingly granular set of microtargeting tools. While Facebook has since restricted some of the tools it offers to advertisers, the company still allows for extensive targeting options. Advertisers can upload what are known as “custom audiences,” so if a company like Cambridge Analytica has a large dataset of voter records that it wants to connect to Facebook profiles, it can upload it to Facebook and do just that. CA, or anyone else for that matter, could also use Facebook’s lookalike tool to then target people who resemble their original dataset, thus expanding their potential audience.
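The custom-audience mechanism described above can be sketched in miniature. Ad platforms typically ingest uploaded lists as hashed identifiers (SHA-256 of normalized emails or phone numbers) and intersect them with their own user index; the matching itself happens on the platform’s servers. The sketch below is purely illustrative — the emails, profile IDs, and in-memory index are hypothetical, not Facebook’s actual API:

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """Hash a normalized email the way ad platforms typically ingest
    custom-audience lists: trimmed, lowercased, then SHA-256."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Hypothetical voter-file records an advertiser wants to match.
voter_file = ["Alice@example.com ", "bob@example.com", "carol@example.com"]

# Hypothetical platform-side index of hashed user emails -> profile IDs.
platform_index = {
    normalize_and_hash("alice@example.com"): "profile_001",
    normalize_and_hash("dave@example.com"): "profile_002",
}

# The "custom audience" is the intersection: uploaded hashes that match
# records the platform already holds.
matched = [
    platform_index[h]
    for h in (normalize_and_hash(e) for e in voter_file)
    if h in platform_index
]
print(matched)  # only Alice's record matches -> ['profile_001']
```

A lookalike audience then extends this matched seed set: the platform finds other users whose behavioral profiles statistically resemble the matches, which is how a relatively small voter file can expand into a much larger target pool.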
Pessimism over the effectiveness of the Cambridge Analytica campaign is tied in part to an understandable reluctance to believe that we can be persuaded. Steeped in advertising all our lives, we are supposed to be cynically immune to its charms. But a similar blitheness once attended how we treated lies and fake news, which many thought had gone the way of the chain letter. Instead, Facebook proved all too fertile a platform for cultivating the most extreme, and frequently the most unbelievable, views. Cambridge Analytica was targeting people who readily consumed this sort of dubious far-right media, people whom its model showed were also sympathetic to Trump or who might be persuaded not to vote at all. CA was doing this at a huge scale, bombarding millions of people with a similarly vast array of ads. As even some Trump partisans have noted, you need only persuade a small percentage to move the needle in some of the closely contested states that Trump unexpectedly won.
Even if Cambridge Analytica didn’t have much impact in practice, it has certainly altered the discourse, becoming a kind of post-election emblem of all that can go wrong in the personal data economy. Whereas we once dismissed the ads that follow us around the internet as bothersome stalkers (who were sometimes hawking a pair of shoes that we had already expressed interest in), they now seem capable of something far more pernicious. Rather than a prodding offer for a Caribbean vacation, internet ads might now be carefully engineered political messages paid for by some foreign oligarch. We simply don’t know.
In Europe, both public awareness and the regulatory system are far more developed — perhaps because the big tech giants are seen as powerful foreign players who don’t need to be coddled by European governments. This month, the continent will implement the General Data Protection Regulation, or GDPR, an effort that is considered at the forefront of establishing personal data rights and legal provisions for users to, for instance, demand that companies delete data they hold about them. The GDPR is undoubtedly a positive step forward, and it’s prompted Mark Zuckerberg and other Facebook executives to indicate that the company will adopt at least the spirit of the law for the rest of its user base.
But we should also be wary of short-term incrementalism. True, adopting the GDPR would provide a modicum of data privacy rights currently unavailable to American users. And there are additional steps that could be taken to grant users more agency in the data marketplace. Giving people control over their social graphs — the records of who they know, essentially — would allow them to easily transition to another service. That would be a nightmare for Facebook but a key competitive tool for the next company hoping to challenge it. Tamping down the industry’s most extreme excesses through prudent regulation, including the occasional sweeping fine, might encourage more ethical behavior. But these measures could also entrench the duopoly of Facebook and Google, both of which boast bottomless fortunes and deep rosters of Beltway lobbyists.
Ultimately, to challenge Facebook, Google, and the many unknown players of the data economy, we must devise new business models and structural incentives that aren’t rooted in manipulation and coercion; that don’t depend on the constant surveillance of users, on gathering information on everything they read and purchase, and on building that information into complex dossiers designed to elicit some action — a click, a purchase, a vote. We must move beyond surveillance capitalism and its built-in inequities. In the short term, that might be achieved by turning to basic subscription services, by paying for the things we use. But on a longer time horizon, we must consider whether we want to live in a world that converts all of our experiences into machine-readable data — data that doesn’t belong to us, that doesn’t serve us.
Often compared to oil, data was supposed to be the next great resource, enriching everyone and powering a new generation of companies and services. Instead, we have suffered from a form of the resource curse that is said to afflict developing nations. Rather than contributing to the public good, data has become a tool of capital, which is to say a tool of power. In the United States, there’s little effort to regulate data, to tax it, to steer it toward public interests. The Census — perhaps the most important personal data collection scheme undertaken by the US government — is currently imperiled by a Republican administration that wants to use it to survey undocumented immigrants, which would have a chilling effect on the whole project. Even our tech leaders, who have built their success on monetizing user data, seem to have no sense of how to leverage this resource for anything but private benefit. People like Mark Zuckerberg and Sheryl Sandberg still speak airily of connecting the world without offering any clearer sense of what that means — or of all that could go wrong. This is part and parcel of Silicon Valley ideology, which, among other features, assumes that everyone should want to use their products. “We’re trying to connect the whole world,” Sandberg recently told NPR. She then added, “Two billion people use our service; a lot of them would not be able to if they had to pay for the content itself.”
That’s the rub. If you don’t make it free, if you don’t turn people into commodities, if you don’t monitor and monetize them, you can’t count them among your ever-expanding user base. Put another way, you can’t “connect” them. That’s why Facebook eliminated WhatsApp’s 99-cent annual fee after acquiring the messaging service. Scale and growth have long been at the heart of the Facebook project, as early Facebook employee Kate Losse wrote in her book The Boy Kings: “Scaling and growth are everything, individuals and their experiences are secondary to what is necessary to maximize the system.”
The other imperative driving Facebook comes from its being a publicly traded company: it must increase revenue every year. For a company that largely relies on monetizing the personal lives of its users, that means that it must collect more information and find more sophisticated ways to parse it and link its users to prospective advertisers. Each user has a value — an average user yields $9.27 in annual revenue — and this value must increase. Facebook can do this by encouraging you to share more, to spend more time on the site, and to use the Facebook login for other services. It can also do this by monitoring you more closely, by making more deals with data brokers to acquire information on your personal life and shopping habits. As I’ve argued elsewhere, this model is essentially unsustainable; at the same time, the informational appetite that powers it is insatiable. As long as personal data is the company’s lifeblood, there will always be a fiduciary requirement to collect more, to compute more. Privacy is a barrier to the company’s bottom line.
To truly respect people’s privacy rights, we must not collect data on them in the first place. Giving users data rights, including control over their personal data, in many ways represents a positive step, but it is also a way of enshrining personal data in the market economy. Data has become a form of property, and only those with the proper education, money, and resources will be able to collect, protect, and profit from their data. This arrangement will undoubtedly affect the poor and vulnerable most, pushing us further toward a world where privacy is a boutique commodity affordable only to the rich. The only solution is, when data must be collected, to socialize the profits.
Facebook has accomplished a neat trick in the last fourteen years, draping itself in humanitarian intent while establishing a globe-straddling monopoly. In the name of connecting people, it has built the world’s largest surveillance apparatus, rivaled only by Google. In the name of community, it has turned some 30 percent of the world’s population into its unacknowledged, unpaid workforce. In the name of relevance, it offers advertisements based on our most personal information. And now, in the name of privacy, it offers rote apologies, along with a complex set of controls that mask its deep foothold in your daily life.
For many, Facebook has come to seem like too essential a tool to quit. (And it’s not as if Facebook makes the process easy.) As with digital detoxes, a certain amount of performativity attends any call to quit Facebook. And quitting may not do much, as Facebook collects information even on non-users. But if we haven’t yet stormed the barricades of Menlo Park, we have rattled the company’s leaders. The public is waking up to what Zuckerberg and Sandberg could probably only admit in their darkest imaginings: when it comes to Facebook, the rot is deep. It can’t be excised. It can only be replaced by something else, by a form of connectivity that treats people not as laborers or commodities, but as human beings.
Fact-checker: Ethan Chiel