Adrienne LaFrance has come to realize that Facebook is not a media company — it’s a doomsday machine, one operating above ground, in plain sight, just east of Highway 101 in Menlo Park, California. At the Atlantic, LaFrance traces the history and growth of the web giant, whose early mission focused on making the world more open and connected. In that quest, it built “community” at an unprecedented global scale, but along the way it stripped out much of the good that scale promised. As we’ve seen, Facebook is a government propaganda machine; a place for hate and terrorist groups to organize; a space for harassment, manipulation, and social experiments; and so much more. Today, its highly personalized, algorithmically powered informational environment is increasingly difficult to moderate — and thus incredibly dangerous — and “no one, not even Mark Zuckerberg, can control the product he made,” LaFrance writes.

I recalled Clinton’s warning a few weeks ago, when Zuckerberg defended the decision not to suspend Steve Bannon from Facebook after he argued, in essence, for the beheading of two senior U.S. officials, the infectious-disease doctor Anthony Fauci and FBI Director Christopher Wray. The episode got me thinking about a question that’s unanswerable but that I keep asking people anyway: How much real-world violence would never have happened if Facebook didn’t exist? One of the people I’ve asked is Joshua Geltzer, a former White House counterterrorism official who is now teaching at Georgetown Law. In counterterrorism circles, he told me, people are fond of pointing out how good the United States has been at keeping terrorists out since 9/11. That’s wrong, he said. In fact, “terrorists are entering every single day, every single hour, every single minute” through Facebook.

In previous eras, U.S. officials could at least study, say, Nazi propaganda during World War II, and fully grasp what the Nazis wanted people to believe. Today, “it’s not a filter bubble; it’s a filter shroud,” Geltzer said. “I don’t even know what others with personalized experiences are seeing.” Another expert in this realm, Mary McCord, the legal director at the Institute for Constitutional Advocacy and Protection at Georgetown Law, told me that she thinks 8kun may be more blatant in terms of promoting violence but that Facebook is “in some ways way worse” because of its reach. “There’s no barrier to entry with Facebook,” she said. “In every situation of extremist violence we’ve looked into, we’ve found Facebook postings. And that reaches tons of people. The broad reach is what brings people into the fold and normalizes extremism and makes it mainstream.” In other words, it’s the megascale that makes Facebook so dangerous.

In the days after the 2020 presidential election, Zuckerberg authorized a tweak to the Facebook algorithm so that high-accuracy news sources such as NPR would receive preferential visibility in people’s feeds, and hyper-partisan pages such as Breitbart News’s and Occupy Democrats’ would be buried, according to The New York Times, offering proof that Facebook could, if it wanted to, turn a dial to reduce disinformation—and offering a reminder that Facebook has the power to flip a switch and change what billions of people see online.

The decision to touch the dial was highly unusual for Facebook. Think about it this way: The Doomsday Machine’s sensors detected something harmful in the environment and chose not to let its algorithms automatically blow it up across the web as usual. This time a human intervened to mitigate harm. The only problem is that reducing the prevalence of content that Facebook calls “bad for the world” also reduces people’s engagement with the site. In its experiments with human intervention, the Times reported, Facebook calibrated the dial so that just enough harmful content stayed in users’ news feeds to keep them coming back for more.

Read the story

Cheri Lucas Rowlands

Cheri has been an editor at Longreads since 2014. She's currently based in the San Francisco Bay Area.