Jacob Silverman | Longreads | August 2018 | 7 minutes (1,849 words)

On July 31, Facebook executives announced that they had uncovered “coordinated inauthentic behavior” conducted by fraudulent accounts, possibly with Russian backing. After consulting with law enforcement and independent research organizations, Facebook decided to remove eight pages, seventeen profiles, and seven Instagram accounts. Many of them had been made within the past year. The culprits had endeavored to obscure their activities using virtual private networks, known as VPNs, to mask their identities and, Facebook claimed, by paying “third parties to run ads on their behalf.” The message from Facebook, in a lengthy blog post on the discovery, was stark: “We face determined, well-funded adversaries who will never give up and are constantly changing tactics. It’s an arms race and we need to constantly improve too.”

Facebook spun its announcement as a peek into a foreign propaganda operation. But an examination of the accounts in question reveals something far different, and far less menacing, than one is led to believe. Many of the accounts had between zero and ten followers; an Instagram profile had posted only once. With names like “Aztlan Warriors,” “Black Elevation,” “Mindful Being,” and “Resisters,” the pages were in many ways indistinguishable from typical left-wing, anti-Trump pages that dabble in identity politics and resistance sloganeering. A sample of their contents, provided by Facebook, includes mindfulness memes, posts encouraging Trump to resign, and bits of indigenous American history (“Millions of indigenous people died during the conquest of America. History is history,” a post from Resisters reads).

What seemed to alarm Facebook’s leadership (who didn’t respond to questions for this piece) was the coordinated effort in setting up these pages—done in apparent connection to the Internet Research Agency, a nationalist Russian troll farm run out of St. Petersburg by a close Putin ally. Foreign disinformation operations are getting more sophisticated, Facebook’s investigation found, leaving only the smallest of traces; what was discovered in this case could easily have gone unnoticed. In the blog post, Facebook executives explained that an IRA account that had been disabled in 2017 once shared an event page hosted by the Resisters; for seven minutes, the page also listed an IRA user as one of its administrators. The Resisters page, which posted popular anti-Trump and anti-fascist messages, was one of the few flagged in this investigation that had a sizable following, and the event, to protest a white supremacist rally in Washington, D.C., roped in several thousand real people who were interested in attending.

This is where things got complicated in a way that Facebook doesn’t seem to grasp: Setting aside the obvious merit of an event protesting a group of white supremacists, Facebook’s team deemed the page illegitimate because it had been created by an “inauthentic” account. Apparently, it did not matter that hundreds of people had RSVP’d in earnest or that the event page had as administrators several legitimate users who were invested in its outcome. Facebook deleted the page, giving the event’s planners minimal warning. “For Facebook to do it without any consideration is just really disheartening,” one of the organizers told The Washington Post. “It’s almost like Facebook has a disinformation campaign against us.”

***

This isn’t the only time that Facebook has offered the public a dispiriting view inside its battle against disinformation. On August 21, Facebook announced that it had found “more coordinated inauthentic behavior” in the form of an Iranian influence operation that spread political news and memes (and that apparently had the temerity to buy $6,000 in ads). In September 2017, Facebook reported that it had disabled 470 inauthentic accounts and pages. These accounts had spent $100,000 on 3,000 ads over a period of 23 months and were likely Russian in origin. According to a post by Alex Stamos, who was Facebook’s chief security officer, “The ads and accounts appeared to focus on amplifying divisive social and political messages across the ideological spectrum—touching on topics from LGBT matters to race issues to immigration to gun rights.”

In both Russia-related incidents, Facebook’s analytical techniques appear to have been broad. The company searched for ads that might have come from Russia, including those bought from inside the United States with an account that had its language set to Russian. For the September 2017 report, that search yielded about $50,000 “in potentially politically related ad spending,” according to Facebook, amounting to some 2,200 ads. While these posts may have been part of a Russian operation to take down American democracy, by the wide-ranging criteria Facebook used, they could just as easily have been the work of a Russian immigrant or of anyone seeking to get around censorship restrictions at home.

In an April 2017 white paper, “Information Operations and Facebook,” the company’s executives expressed concern about “false amplification, which we define as coordinated activity by inauthentic accounts with the intent of manipulating political discussion (e.g., by discouraging specific parties from participating in discussion or amplifying sensationalistic voices over others).” The bounds of acceptable behavior are ill-defined here. How can false amplification be usefully distinguished from organic amplification? Political discussion is manipulated all the time by hashtag campaigns, ads, the mercurial swell of public opinion, and well-organized online communities on platforms like Reddit. Where is Facebook drawing a line? Would a disinformation- and misogyny-fueled hashtag campaign led by a right-wing provocateur like Mike Cernovich deserve company attention?

Facebook has dodged setting policy that would resolve these questions, keeping the focus on sources and ignoring the substance of posts. For years, the measure of legitimacy has been “authenticity.” In the April 2017 paper, Facebook researchers wrote of improper behavior on its platform, “We detect this activity by analyzing the inauthenticity of the account and its behaviors, and not the content the accounts are publishing.” In 2011, Sheryl Sandberg, Facebook’s chief operating officer, told Charlie Rose, “The social web can’t exist until you are your real self online.” Mark Zuckerberg has said, “Having two identities for yourself is an example of a lack of integrity.” Recently, the company has been more accepting of pseudonymity—gone are the days when Facebook blocked Salman Rushdie because he wasn’t using his legal first name, Ahmed—but verified identities are the encouraged norm.

Banning “inauthentic” accounts provides Facebook with a convenient, if still not very clear, line of demarcation. Looking only at who is sharing information and not at what is being shared, Facebook gleans an incomplete picture of its own environment. When Facebook does turn some attention to the substance of what is being shared, as executives did this summer, the content gets misconstrued. The subjects that Facebook has called “divisive”—LGBTQ rights, racism, immigration, guns—are also the stuff of essential everyday political discourse. Painting “division” itself as an enemy has become a liberal and centrist shibboleth, a lament over the incivility of our era.

***

It’s arguable whether civility is worth pining for, much less whether it can be willed into existence by ignoring genuine social divisions. This has not stopped the Department of Justice from weighing in. According to a July 2018 report by the DOJ’s Cyber-Digital Task Force, Americans must be wary of “recent efforts at creating and operating false U.S. personas on Internet sites designed to . . . spread divisive messages.” The report puts barely populated Instagram accounts—with possible ties to Russian troll farms—in the same dangerous category as sophisticated state-sponsored info-warfare. Today’s “foreign influence operations include covert actions by foreign governments intended to sow divisions in our society, undermine confidence in our democratic institutions, and otherwise affect political sentiment and public discourse to achieve strategic geopolitical objectives,” the report states.

The DOJ also worries about “operations aimed at removing otherwise eligible voters from the rolls” and discouraging some Americans from voting. In airing these legitimate concerns about electoral integrity, the report unintentionally shines a spotlight on Republicans’ voter suppression tactics, which often include measures, like voter ID laws or prohibitions on enfranchisement for convicted felons, that are designed to make voting more difficult. The DOJ report goes on to suggest that foreign operators are attempting to “convince the public of widespread voter fraud”—a favorite boogeyman of Donald Trump and his allies. However much we should worry about conflict ginned up on Russian troll farms, Republican policy, it seems, is at least as great an impediment to people’s ability to vote.

The use of misinformation and the destabilization of truth itself have, of course, been an essential part of the Trump playbook. Last spring, a study published in the Journal of Economic Perspectives found that, during the run-up to the 2016 election, “fake news was both widely shared and heavily tilted in favor of Donald Trump.” That deceitful sensibility—a willingness to declare reality whatever one wants it to be—is at the heart of the Trump administration. Twitter is only one mechanism by which Trump spreads lies; his enablers also chime in on Facebook, Instagram, Pinterest, and other social media platforms. Many, if not most, of these voices would be considered by Facebook leadership to be “authentic.”

As a document of today’s social-media information operations, the DOJ report is more illuminating and considered than anything produced by Facebook. Here and there, the report even betrays some knowledge of history. “Fabricated news stories and sensational headlines like those sometimes found on social media platforms are just the latest iteration of a practice foreign adversaries have long employed in an effort to discredit and undermine individuals and organizations in the United States,” it reads. “Although the tactics have evolved, the goals of these activities generally remain the same: to spread disinformation and to sow discord on a mass scale in order to weaken the U.S. democratic process, and ultimately to undermine the appeal of democracy itself.”

In other words, Russia, and before it the Soviet Union, has long used inflammatory material to try to discredit the United States and lower its standing in the eyes of the world. During the Cold War, the Soviet Union would point to America’s history of slavery, Jim Crow, and segregation to illustrate that the U.S. wasn’t free of its own moral rot. In a similar way, by spreading Facebook posts about violence against indigenous Americans or encouraging others to fight fascism, these apparent Russian info-operators are playing on genuine conflicts of the day. Depending on one’s point of view, their message may even be the right one.

Still, the larger info-landscape is more complex than a supposed Russian operation playing on leftist fears. As has been documented, Russians spread different messages to different audiences, and sometimes take multiple sides. When it comes to social media platforms, coercion is built into the system, especially in the form of targeted advertising. What matters is who is at the controls of this, the world’s most developed surveillance apparatus, or who is paying to use it to spread their message. Russia is poking at long-festering, self-inflicted wounds deep in the American body politic. Instead of a few scattered Instagram accounts, Facebook—and the rest of us—should be more worried about the vast influence machine it has created, largely undisclosed in nature and still nearly impossible to understand.

***

Jacob Silverman is the author of Terms of Service: Social Media and the Price of Constant Connection.

***

Editor: Betsy Morais
Fact-checker: Ethan Chiel