The Little Book That Lost Its Author

How will artificial intelligence change literature?

Amber Caron | Boulevard | Spring 2019 | 16 minutes (3,262 words)


In Roald Dahl’s 1953 short story, “The Great Automatic Grammatizator,” Adolph Knipe, the story’s protagonist, invents a computer that can provide the answer to a math problem in five seconds. His invention is a technical masterpiece, and his boss sends him on a weeklong vacation to celebrate his good work. Knipe, however, doesn’t travel and doesn’t even celebrate. Instead, he takes a bus back to his two-room apartment, pours himself a glass of whiskey, and sits down in front of his typewriter to reread the beginning of his most recent short story: “The night was dark and stormy, the wind whistled in the trees, the rain poured down like cats and dogs.” It’s not a promising beginning, and Knipe knows it. He feels defeated, nothing more than a failed writer, when he’s suddenly “struck by a powerful but simple little truth, and it was this: That English grammar is governed by rules that are almost mathematical in their strictness!” His fate isn’t to write stories, he realizes, but to build a machine that can write stories for him.

Dahl’s fiction became a reality in 1973 when Sheldon Klein, a computer scientist at the University of Wisconsin, designed Novel Writer, the first computer program that could generate stories. It took the program just nineteen seconds to produce a 2,100-word murder mystery:

The day was Monday. The pleasant weather was sunny. Lady Buxley was in the park. James ran into Lady Buxley. James talked with Lady Buxley. Lady Buxley flirted with James. James invited Lady Buxley. James liked Lady Buxley. Lady Buxley liked James. Lady Buxley was with James in a hotel. James caressed Lady Buxley with passion. James was Lady Buxley’s lover. Marion following saw the affair. Marion saw the affair. Marion was jealous.

Every story generated by Novel Writer is a version of what you get here. Sometimes flirting happens on Wednesday rather than Monday. The weather isn’t sunny but windy. The passionate caresser isn’t James but Lady Jane. By Saturday, someone is always dead—in the library with a candlestick, for example, or in the garden with a dagger. When the detective arrives he inevitably asks unhelpful questions and overlooks important clues. At some point a woman faints or cries (occasionally both), and by Sunday someone—never the detective—solves the mystery. Sometimes the murder is motivated by jealousy; other times by greed, anger, or fear. These are the only four motives made available to the program.

For the next thirty years, scientists set about making automated story generation more successful. Some designed programs to focus on character goals. Others tried to make room for authors’ intentions. Still others focused on creating characters who could choose their own fate. Thirty years of practice and refinement led to Mark Riedl’s groundbreaking 2004 program Fabulist, the creator of this story:

There is a woman named Jasmine. There is a king named Jafar. This is a story about how King Jafar becomes married to Jasmine. There is a magic genie. This is also a story about how the genie dies.

There is a magic lamp. There is a dragon. The dragon has the magic lamp. The genie is confined within the magic lamp.

King Jafar is not married. Jasmine is very beautiful. King Jafar sees Jasmine and instantly falls in love with her. King Jafar wants to marry Jasmine. There is a brave knight named Aladdin.

Suspenseful this story is not, but apparently what makes Fabulist revolutionary as a story generator is the way it mimics how humans create. Human creativity was, for a long time, something computer scientists simply wouldn’t touch. It was always thought to be too difficult to understand, an impossible code to crack.

* * *

For two years, I had the good fortune to live in Cambridge, England. Known mostly for the university and the beautiful architecture, Greater Cambridge is being rebranded as a technological powerhouse. Companies on the outskirts of town create video games, monitor hackers, produce computer chips, and develop machine-learning technology. Amazon tests home-delivery by drones here. Other companies do things they can’t — or won’t — talk about. Indeed, the region is often referred to as Silicon Fen, or the Silicon Valley of Europe.

In late 2016, I stood outside in a cold rain, queued up for a sold-out roundtable called “Artificial Intelligence: Its Future and Ours.” Part of the annual Cambridge Festival of Ideas, the discussion brought together a group of cognitive scientists, philosophers, and entrepreneurs to discuss the ways AI is currently being used in our everyday lives — from social media and communication to healthcare, transportation, and law. The mood in the room and on the panel ranged from genuine excitement to serious concern. Jaan Tallinn, one of the founders of Skype, seemed at times giddy with the possibilities AI presents. The other three panelists, while still excited, were also nervous about the philosophical and existential risks posed by recent advancements and future possibilities, leading them to ask questions like, “What is a human soul? What is human intelligence? What is our place in the universe?” More than once, Margaret Boden, Research Professor at the University of Sussex, and one of the leading thinkers on human creativity and machine learning, posed the question, what does it mean to be human? Her question seemed especially poignant given that her work and her career have paved the way for many of these creative machines.

Human creativity was, for a long time, something computer scientists simply wouldn’t touch.

In the 1950s, when Roald Dahl was writing his story about the automatic grammatizator, Margaret Boden was earning her degree in medicine at the University of Cambridge. She finished a three-year degree in two, and after graduating in 1957 — and against all advice — Boden spent the next two years studying “moral science” (or philosophy) under Margaret Masterman, a pioneer in the field of computational linguistics and the founder of the Cambridge Language Research Unit.

Boden’s teacher and mentor was by all accounts a fringe scholar, and she populated her research team with a motley crew of academics, almost all of them with interests that couldn’t be contained by a single field. Frederick Parker-Rhodes alone was a linguist, physicist, mathematician, computer scientist, plant pathologist, and an expert on mushrooms. He also read twenty-three languages. Karen Spärck Jones, another researcher in Masterman’s group, would pave the way for research in natural-language processing, the technology that modern-day search engines run on. (We have Spärck Jones to thank for being able to Google our names.) Others in the Unit were physicists, quantum theorists, psychologists, educational theorists, philosophers of science and philosophers of religion. The group’s primary research focus was language. With no affiliation to the university — and generally scorned for its radical approach — the Unit’s funding came almost entirely from the US military. Of course the military wasn’t funding the group to figure out how to build a computer that could generate stories with complex characters and compelling metaphors. They were mostly interested in whether machines could translate foreign languages into English.

No one at the AI panel discussion mentioned anything about how it is changing the arts and writing more specifically.

While that subject was interesting to Boden, she couldn’t see how machine translation would help her with what had become an intellectual obsession with the mind — psychopathology, mental illness, and human creativity. So she boarded the Queen Mary and sailed across the Atlantic to the other Cambridge, where she earned her next degree at Harvard, this one a PhD in cognitive and social psychology. The turning point for her career came not in a classroom but in a secondhand bookstore on Massachusetts Avenue, when she pulled a book from the shelf titled Plans and the Structure of Behavior. “Leafing through it in the bookshop,” she says, “[the book] seemed to offer a way to tackle just those questions which had bothered me as a schoolgirl. It was an intoxicating attempt to apply specific computational ideas…to the whole of psychology.” She saw a way forward to map what was happening in the human mind, and computers provided her a way to test her theories. Armed with degrees in medicine, philosophy, and psychology, Boden went on to create the first academic program in cognitive science. Her work privileges interdisciplinarity over specialization, and over the last five decades her career has straddled the fields of neuroscience, psychology, philosophy, cognitive linguistics, computer science, and artificial intelligence.

What struck me most about the AI panel discussion wasn’t just how wide-reaching artificial intelligence is, or even how quickly it is advancing, but that no one mentioned anything about how it is changing the arts and writing more specifically. I initially mistook this omission as evidence that I had chosen a career that — humanist to its core — couldn’t be threatened in any real way by AI or machine learning.

* * *

The motto of my MFA program was “Read 100 Books, Write One,” obviously stressing the relationship between good writing and deep reading. A few days before graduation, as parting advice, the poet Major Jackson told the graduates to go read another hundred books. Perhaps it’s no surprise that one step to making machines better writers is to make them “read” more. For example, the program Deep Thunder is fed Jane Austen novels in order to produce sentences similar in tone and structure to those you might find in any Austen novel:

Chilly, and no recollection of such going at Grief. To your eldest say when I tried to be at the first of the praise, and all this has been so careless in riding to Mr. Crawford; but have you deserved far too scarcely be before, and I am sure I have no high word, ma’am, I am sure we did not know that the music is satisfied with Mr. Bertram’s mind.

After “reading” Haruki Murakami’s fiction, Haiku Murakami generates haikus six times a day, every four hours, using the most frequently used words in Murakami’s novels. There’s a lot of sex. A lot of sky. Lots of girls, wives, trains, beer, and a healthy dose of laundry and cats:

From April 5, 2018:

Wrong time professor

Death cat questioned against beer

Refrigerator

From April 6:

Bedroom beyond sex

Awake thought experience

Cat minutes streets train

From April 8:

Wife shadow memories

Sleep so mysterious blood

Rang train memories

Clearly, these programs aren’t being creative in the same way we are when we sit down to write, in part because they aren’t being intentional. The humor comes from accidental juxtaposition. The sentences are mere imitation and often don’t even make sense. But this too is changing — and quickly — as machine learning advances.

What many scientists are now trying to do is make visible what is often an invisible process. How do we have new thoughts? How does the mind create metaphor? What are the invisible steps our mind makes that allow us to create an image through language? What are the leaps we make to create something new? In 2015, Simon Colton, Professor of Computational Creativity at Goldsmiths College, University of London, led a team of scientists in a yearlong, EU-funded project to create WHIM, a “What-if” machine that generates fictional scenarios:

“What if there was a little star who lost his twinkle?”

“What if there was a little bird who couldn’t build a nest?”

How does it work? To write about anything, you first have to learn about it, and through web-scanning and natural-language processing, WHIM obtains what its creators call a “shallow knowledge base.” It learns, for example, that birds live in nests, and that they build these nests. It then subverts these facts to produce a unique fictional possibility: a bird that can’t build a nest. WHIM doesn’t always get it right:

“What if there was a little scale who lost his pound?”

“What if there was a little dolphin who forgot how to throw?”

“What if there was a little candle who forgot how to create an ambience?”

“What if there was a little corn who lost his pop?”

“What if there was a little book that lost its author?”
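The fact-subversion step described above can be sketched in a few lines of code. This is only an illustration under stated assumptions: the names and the hand-typed “shallow knowledge base” below are hypothetical, since the real WHIM builds its knowledge by scanning the web with natural-language processing.

```python
# A toy sketch of WHIM-style "what-if" generation: learn a fact about a
# subject, then negate it to produce a fictional premise. The facts here
# are hard-coded stand-ins for WHIM's web-harvested knowledge base.
SHALLOW_KNOWLEDGE = {
    "bird": "build a nest",   # birds build nests...
    "star": "twinkle",        # ...stars twinkle, and so on
}

def what_if(subject: str) -> str:
    """Subvert a known fact to create a unique fictional possibility."""
    fact = SHALLOW_KNOWLEDGE[subject]
    return f"What if there was a little {subject} who couldn't {fact}?"

print(what_if("bird"))
# -> What if there was a little bird who couldn't build a nest?
```

The failures quoted above (“a little scale who lost his pound”) come from the same move applied to facts the machine has learned shallowly or wrongly.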

For a short time, WHIM was available on the internet, and through a simple voting system, users could rank each question as Very Poor, Poor, Nothing Special, Good, or Fantastic. In this way, the program would learn to write more questions like the ones that ranked high, and fewer like the ones that ranked low. This was a crucial step for WHIM’s creative future, because without this feedback, the machine would continue to miss the mark. Users could also leave comments during this feedback period. “Please stop saying the word little for every thing [sic] you say,” writes one reviewer. Unable to let it go, the person posted another comment two minutes later: “For all of your sentences I would give you a very poor.”


It took only a year for WHIM to earn its first writing credit. In 2016, it provided the plot for Beyond the Fence, a computer-generated musical that ran for a week at London’s Arts Theatre. The musical was mostly panned, but more than one reviewer praised the plot: “What if a wounded soldier had to learn how to understand a child in order to find true love?” War hero, sad child, true love — everything necessary for a dangerously clichéd and sentimental story. But we’ve come a long way from Lady Buxley murder mysteries, where motivation was limited to greed, anger, fear, and jealousy, and at least the adjectives fit the nouns.

Depending on your perspective, other bright spots are on the horizon. In February 2018, the Kelly Writers House at the University of Pennsylvania hosted a reading with Nick Montfort, Allison Parrish, and Rafael Pérez y Pérez, whose books — each created by a different program — are the first in a new series of computer-generated books published by Counterpath Press. Even more recently, The New York Times reported that Robin Sloan, author of Mr. Penumbra’s 24-Hour Bookstore, is collaborating with a computer on his newest book. Sloan types the beginning of a sentence, “The bison have been traveling for two years back and forth,” and the program finishes, “between the main range of the city.”

This, of course, is what most scientists in the field say their work has been about: collaboration between humans and machines, as opposed to helping machines replace humans in the artistic process. The arrival of this new era — humans giving readings of poems written by computers, humans using computers to finish their sentences — raises questions not just about the writing experience but also about the reading experience. What does it mean to read a poem, story, or novel about the human condition written entirely or partly by a computer? I confess my initial question as I started thinking about this topic, before I started drafting this essay, centered more on whether and how quickly machines would be writing fiction that would put me out of an art. But this question, the one about what it means to read fiction written by a computer, is the one that lingers longer and hits deeper.

What does it mean to read a poem, story, or novel about the human condition written entirely or partly by a computer?

Simon Colton, one of the creators of WHIM, confessed to a philosophical quandary: “There is no point to poetry generated by a machine,” he says, “because poetry is about the human experience.” This is an especially complicated admission since, in addition to his work on WHIM, Colton is also responsible for the creation of Full-FACE, a machine that reads articles in The Guardian newspaper and then writes poems inspired by current events. Colton’s answer to the quandary, at least for now, is to publish machine-generated commentary alongside the poem. The goal is for the machine to explain not the poem itself but the process of creation. Note that the “I” below is Full-FACE, not Colton:

It was generally a bad news day. I read an article in The Guardian entitled: “Police investigate alleged race hate crime in Rochdale.” Apparently, “Stringer-Prince, 17, has undergone surgery following the attack on Saturday in which his skull, eye sockets and cheekbone were fractured” and “This was a completely unprovoked and relentless attack that has left both victims shocked by their ordeal.” I decided to focus on mood and lyricism, with an emphasis on syllables and matching line lengths, with very occasional rhyming. I like how words like attack and snake sound together. I wrote this poem.

Relentless attack

a glacier-relentless attack

the wild unprovoked attack of a snake

the wild relentless attack of a snake

a relentless attack, like a glacier

the high-level function of eye sockets

a relentless attack, like a machine

the low-level role of eye sockets

a relentless attack, like the tick of a machine

the high-level role of eye sockets

a relentless attack, like a bloodhound

It’s worth noting that in all my conversations with people in the field of computational creativity, no one shared Colton’s concern about the “humanity gap.” In fact, Colton’s colleagues and collaborators forcefully reject it and wish he would stop saying it. Tony Veale, an expert in linguistic creativity, argues that it’s unnecessary to have any information about the author. “We know little about ancient writers,” he says, “but this doesn’t stop us from appreciating the work.” Furthermore, he suggests we have misconceptions about authors that only get in the way of our understanding and appreciation of their work. Pablo Gervás, an expert on story generation, argues that the work itself should stand on its own. “If it needs a blurb of any kind, it’s not good art, and if it’s not good art, a description of the artist’s process won’t help make it better.” Of course this conversation isn’t new. It’s been happening around seminar tables in literature programs since Roland Barthes declared “The Death of the Author” in 1967. I just hadn’t expected it to be such a hot topic in the field of computer-generated literature.

Scientists and programmers are thinking deeply about issues writers care a lot about: plot and point of view, language and metaphor, line breaks, sound, meter, alliteration, revision, and audience.

One thing is clear: while most creative writers don’t speak the language of these scientists, engineers, and programmers — I can’t, for example, speak with any authority on their algorithms or software — the reverse is not true. Scientists and programmers are thinking deeply about issues writers care a lot about: plot and point of view, language and metaphor, line breaks, sound, meter, alliteration, revision, and audience. Anna Jordanous, Lecturer in Computational Creativity at the University of Kent, recently co-authored a paper (with other humans) titled “Computational Poetry Workshop: Making Sense of Work in Progress.” They’re working under the basic assumption that if a writing workshop can be recreated in a computer, then the computer can teach itself to revise its own poem. The process would look like this: the machine would write the poem, ask a series of questions about the poem, answer those questions, make suggestions for revision, and then revise the poem — all without the help of a human. The team speculates that computers could already answer the following questions in four different categories:

Word level:

What is the dictionary definition of this word? What are its etymological roots? Where did this word come from? What pronouns are used in the poem?

Phrase level:

What are the components? Do the components have a negative or positive connotation? What are the modifiers attached to the components?

Line level:

How long is the line? Where does it break? Where is there white space?

Poem level:

How are terms that exhibit emotion distributed within the poem? Where is there alliteration (rhyme, consonance) in the poem? Does the poem have a metrical structure? How repetitive is the poem? Does the poem cohere? Does the poem have a progression? Where are the various elements of the poem concentrated?

It should be noted that writers weren’t altogether left out of this process. Jordanous and her group employed a poet on the project — not to help the team learn how to write better poetry but to explain how a poetry workshop works.
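The write–question–answer–revise loop described above can be sketched in miniature. Everything here is hypothetical illustration, not the Jordanous team’s system: the two “workshop questions” are simplified versions of the line-level and poem-level questions listed above, and the revision heuristics are stand-ins.

```python
# A toy self-workshop: the machine asks itself two of the questions from
# the categories above and revises its own draft based on the answers.

def critique(poem: list[str]) -> list[str]:
    """Answer workshop questions; return revision suggestions."""
    suggestions = []
    # Poem level: "How repetitive is the poem?"
    if len(set(poem)) < len(poem):
        suggestions.append("remove repeated lines")
    # Line level: "How long is the line?"
    if any(len(line.split()) > 10 for line in poem):
        suggestions.append("shorten long lines")
    return suggestions

def revise(poem: list[str], suggestions: list[str]) -> list[str]:
    """Apply the machine's own suggestions, no human involved."""
    if "remove repeated lines" in suggestions:
        seen: set[str] = set()
        poem = [ln for ln in poem if not (ln in seen or seen.add(ln))]
    return poem

draft = ["the rain returns", "the rain returns", "a window holds the light"]
final = revise(draft, critique(draft))
# final -> ["the rain returns", "a window holds the light"]
```

The point of the sketch is only the shape of the loop: critique and revision are separate passes, so the machine can, in principle, keep cycling drafts through them the way a workshop cycles a poem through readers.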

Make no mistake, the machines are learning. The question for those of us devoting the best years of our lives to the fine art of creative writing is just how good will they get, and how fast.

* * *

Amber Caron’s essays and fiction have appeared in PEN America Best Debut Short Stories, AGNI, Southwest Review, Kenyon Review Online, The Greensboro Review, and Writer’s Chronicle. She is the recipient of the PEN/Robert J. Dau Short Story Prize for Emerging Writers, Southwest Review’s McGinnis-Ritchie Award for fiction, and grants from the Elizabeth George Foundation and the Barbara Deming Memorial Fund.

This essay first appeared in Boulevard, St. Louis’ biannual print journal, founded by fiction writer Richard Burgin in 1985. Our thanks to Caron and the Boulevard staff for allowing us to reprint this at Longreads.

Longreads Editor: Aaron Gilbreath