Back in the 1970s, as part of his own research, Shneidman asked a group of men at a union hall, “If you were going to commit suicide, what would you write?”
The union hall experiment was, by contemporary research standards, ethically ambiguous at best. “You couldn’t do that today,” Pestian says. But the notes turned out to be important. “We took the real notes and the pseudo-notes and we said, ‘We’ll see if we can tell the difference.’ ”
That meant creating software for sentiment analysis—a computer program that scrutinized the words and phrases of half the real suicide notes and learned how to recognize the emotion-laden language. They tested it by asking the computer to pick out the remaining real notes from the simulated ones. Then they had 40 mental health professionals—psychiatrists, social workers, psychologists—do the same. According to Pestian, the professionals were right about half the time; the computer was correct in 80 percent of the cases.
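For readers curious about the mechanics, here is a minimal sketch of the kind of experiment the passage describes: train a text classifier on labeled notes, then test whether it can separate real notes from simulated ones. The article does not say what algorithm Pestian's team actually used; TF-IDF word features with logistic regression (via scikit-learn) are a common stand-in for this sort of task, and the note texts and labels below are hypothetical placeholders.

```python
# A hedged sketch, not Pestian's actual system: distinguish "real" from
# "simulated" notes with a simple sentiment/word-frequency classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical corpus: each entry is a note's text; 1 = real, 0 = simulated.
notes = [
    "I can't carry this weight any longer. Tell my brother I am sorry.",
    "I am so tired. Please forgive me. Take care of the kids.",
    "Goodbye everyone, I have decided it is time for me to go away.",
    "To whom it may concern, I am leaving and will not come back.",
]
labels = [1, 1, 0, 0]

# Hold out half the notes for testing, mirroring the article's setup of
# training on half the real notes and testing on the remainder.
X_train, X_test, y_train, y_test = train_test_split(
    notes, labels, test_size=0.5, random_state=0, stratify=labels
)

# Word and bigram frequencies stand in for "emotion-laden language" features.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)

# The article reports the computer was correct in 80 percent of cases;
# with real data this score is where that figure would come from.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```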
“So we said, ‘OK, we can figure this out,’ ” he recalls. “If the computer is taught how to listen, it will be able to listen to this database and say, ‘This sounds like it’s suicidal.’ Because there are patterns in the language that are the language of suicide.” Even if those patterns are not always apparent to a trained professional, the real note/fake note test held out the promise that a computer could learn to spot them.
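The "patterns in the language" Pestian describes can be surfaced from such a model by reading off its learned weights, even when they would not jump out at a human reader. Continuing the hypothetical sketch above (again, an illustration, not Pestian's method):

```python
# Inspect the fitted pipeline from the previous sketch. make_pipeline names
# its steps after the lowercased class names.
vectorizer = model.named_steps["tfidfvectorizer"]
classifier = model.named_steps["logisticregression"]

# Pair each n-gram with its weight; large positive weights push the
# prediction toward the "real note" class.
weights = sorted(
    zip(vectorizer.get_feature_names_out(), classifier.coef_[0]),
    key=lambda pair: pair[1],
    reverse=True,
)
for ngram, weight in weights[:10]:
    print(f"{weight:+.3f}  {ngram}")
```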
In Cincinnati Magazine, Linda Vaccariello looks at the researchers who are using information from suicide notes to identify potentially suicidal people. Read more from Cincinnati Magazine in the Longreads archive.