“Trashing is insidious. It can damage its subject for life, personally and professionally. Whether or not people sympathize, the damage has been done.”
— Laurie Penny, Who Does She Think She Is?
In the right context, moral outrage can be justified and effective. When marginalized or less-empowered voices leverage a moral megaphone to remedy systemic injustice — when hate suffers consequences — those social repercussions help bend the arc of the moral universe in the right direction. In communities where we can all see each other in person, correcting bad behavior in judicious, measured proportion serves everyone in the long run.
Yet even justified social consequences can get out of hand quickly when they’re exacted by waves of anonymous online strangers. Constructive criticism tips over into merciless abuse, leaving transgressors little chance of learning any appropriate lesson.
In Mosaic, Gaia Vince examines how our first impulse offline is usually to be generous and kind to each other, but how those instincts can get crossed online, depending on how a social network is set up. Even one bad experience with a jerk can set a formative precedent that leads to less cooperative behavior.
“You might think that there is a minority of sociopaths online, which we call trolls, who are doing all this harm,” says Cristian Danescu-Niculescu-Mizil, at Cornell University’s Department of Information Science. “What we actually find in our work is that ordinary people, just like you and me, can engage in such antisocial behaviour. For a specific period of time, you can actually become a troll.”
The good news is that this also works the other way around: well-timed, well-meaning interventions can encourage us to bring more of our evolved prosocial habits from offline communities into our online discourse.
Here, Vince visits Yale University’s Human Nature Lab to explore how we can redesign social networks in ways that help “further our extraordinary impulse to be nice to others even at our own expense.”
“If you take carbon atoms and you assemble them one way, they become graphite, which is soft and dark. Take the same carbon atoms and assemble them a different way, and it becomes diamond, which is hard and clear. These properties of hardness and clearness aren’t properties of the carbon atoms – they’re properties of the collection of carbon atoms and depend on how you connect the carbon atoms to each other,” [Nicholas Christakis, director of Yale’s Human Nature Lab] says. “And it’s the same with human groups.”
“By engineering their interactions one way, I can make them really sweet to each other, work well together, and they are healthy and happy and they cooperate. Or you take the same people and connect them a different way and they’re mean jerks to each other and they don’t cooperate and they don’t share information and they are not kind to each other.”
In one experiment, he randomly assigned strangers to play the public goods game with each other. In the beginning, he says, about two-thirds of people were cooperative. “But some of the people they interact with will take advantage of them and, because their only option is either to be kind and cooperative or to be a defector, they choose to defect because they’re stuck with these people taking advantage of them. And by the end of the experiment everyone is a jerk to everyone else.”
Christakis turned this around simply by giving each person a little bit of control over who they were connected to after each round. “They had to make two decisions: am I kind to my neighbours or am I not; and do I stick with this neighbour or do I not.” The only thing each player knew about their neighbours was whether each had cooperated or defected in the round before. “What we were able to show is that people cut ties to defectors and form ties to cooperators, and the network rewired itself and converted itself into a diamond-like structure instead of a graphite-like structure.” In other words, a cooperative prosocial structure instead of an uncooperative structure.
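Christakis’s experiment used human subjects on a software platform, but the dynamic he describes can be illustrated with a toy simulation. The sketch below is only an illustration under assumed parameters: the network size, the rewiring rule (cooperators cut ties to defectors and form ties to cooperators), and the deliberately unforgiving “cooperate only if no neighbour defected on you” strategy are all simplifications for demonstration, not the study’s actual protocol.

```python
import random

random.seed(0)

N = 30  # players (arbitrary toy size, not the study's)

def make_network(n, avg_degree=4):
    """Wire up a random network with roughly avg_degree ties per player."""
    ties = {i: set() for i in range(n)}
    while sum(len(t) for t in ties.values()) < n * avg_degree:
        a, b = random.sample(range(n), 2)
        ties[a].add(b)
        ties[b].add(a)
    return ties

def play(rewiring, rounds=10):
    ties = make_network(N)
    # Roughly two-thirds start out cooperative, as in the experiment.
    coop = {i: random.random() < 2 / 3 for i in range(N)}
    for _ in range(rounds):
        prev = dict(coop)
        if rewiring:
            for i in range(N):
                if not prev[i]:
                    continue
                # Cooperators cut ties to last round's defectors...
                for j in [j for j in ties[i] if not prev[j]]:
                    ties[i].discard(j)
                    ties[j].discard(i)
                # ...and form one new tie to a fellow cooperator.
                others = [j for j in range(N)
                          if j != i and prev[j] and j not in ties[i]]
                if others:
                    k = random.choice(others)
                    ties[i].add(k)
                    ties[k].add(i)
        # Unforgiving conditional cooperation: keep cooperating next
        # round only if none of your current neighbours defected.
        for i in range(N):
            if ties[i]:
                coop[i] = all(prev[j] for j in ties[i])
    return sum(coop.values()) / N

print(f"fixed network:    {play(rewiring=False):.0%} cooperating")
print(f"rewiring allowed: {play(rewiring=True):.0%} cooperating")
```

With the same starting mix of players, the fixed network typically decays toward universal defection, while the rewiring condition lets cooperators cluster among themselves and preserve cooperation, mirroring the graphite-versus-diamond contrast Christakis describes.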
But as I’m watching the game I just played unfold, Christakis reveals that three of these players are actually planted bots. “We call them ‘dumb AI’,” he says.
His team is not interested in inventing super-smart AI to replace human cognition. Instead, the plan is to infiltrate a population of smart humans with dumb-bots to help the humans help themselves.
“We wanted to see if we could use the dumb-bots to get the people unstuck so they can cooperate and coordinate a little bit more – so that their native capacity to perform well can be revealed by a little assistance,” Christakis says.
Much antisocial behaviour online stems from the anonymity of internet interactions – the reputational costs of being mean are much lower than offline. Here, bots may also offer a solution. One experiment found that the level of racist abuse tweeted at black users could be dramatically slashed by using bot accounts with white profile images to respond to racist tweeters. A typical bot response to a racist tweet would be: “Hey man, just remember that there are real people who are hurt when you harass them with that kind of language.” Simply cultivating a little empathy in such tweeters reduced their racist tweets almost to zero for weeks afterwards.