But Who Tells Them What To Sing?

Getty Images

Adrian Daub | Longreads | September 2021 | 21 minutes (5,894 words)

When a new trailer for the Marvel film Black Widow dropped in April of this year — after the movie had been repeatedly moved back due to the pandemic — the producers seemed intent on reminding people about why they’d been excited about the movie before the lockdowns started. They did so by closing the promo with a new version of the theme from The Avengers, probably to call back viewers to a different, less socially distanced time. How could you know this was a new version of the motif? It was choral, but that was a well Marvel had gone to before. This time it had lyrics. As best I can tell, for the first time.

As fans welcomed the callback in online comments, I was brought back to a question that I’d had when Game of Thrones did something similar at the end of its fourth season and again at the very end of the show. It’s something of a trend these days to take a highly recognizable instrumental theme and make it choral. And I get why: The gesture is big and bold and epic. But my question concerned something comparatively pedestrian: Who decides what the lyrics are? What language are they even in? And who writes them? I decided to find out.

Those of us who listen to soundtracks obsessively do so knowing that that’s not how soundtracks are intended to work on us. Whoever mixed in a chorus for a few seconds of the Black Widow trailer was going for an emotional reaction, not some new layer of meaning to be disentangled. “When I do a film score,” the late James Horner said in a TED talk in 2005, “I am nothing more than a fancy pencil” executing the vision of a filmmaker. You’re not meant to listen to a soundtrack in isolation from the image. It is music in service of the moment.

But one place where this fancy pencil has more autonomy is when it comes to the text that a chorus sings. Perhaps it’s better to say that the pencil is condemned to freedom. When the composer John Ottman was hired to score the 2008 Tom Cruise film Valkyrie, he realized that he needed a break in the texture of the soundtrack at the very end of the film. That’s because in the final scenes of the movie basically all of the even remotely redeemable characters get executed. After they had all died and the credits rolled, Ottman decided he wanted a “sense of release, because there had to be a different feeling as the audience walks out of the theater.” So he hit upon the idea of a self-contained choral piece. “The problem was though, what on earth would they be saying?”


What on earth indeed? It’s a moment where blockbuster filmmaking — always so anxiously in control of its meanings — seems to be at a bit of a loss. And it’s a moment where we as an audience suddenly get a sense for how films make meaning, and how it isn’t always the meaning they intend to make.

So who decided what the lyrics to the theme from The Avengers were? The short answer is that I still don’t know. But the long answer to my pedestrian question leads into the high-pressure, highly collaborative world of film scoring. A world in which composers often have just a few weeks to write music that pleases the studio and the director, and potentially even test audiences. And in which they toil with assistants, orchestrators, sound editors, and many, many session musicians to find a sound for a film that is still in the process of evolving. I wanted to find out who among this massive group would be the one to say “hey, let’s add a chorus and have it sung in Sanskrit” or something along those lines.

The answer turns out to be: Pretty much any of them can and sometimes do. What film choruses offer us is a perfect synecdoche for the collective, frenzied, and deeply mercenary magic that creates movies in the first place. It’s as likely that a director had the screenwriter invent specific lyrics early in post-production as that a subcontractor, assistant composer, or orchestrator jotted down some words or went on a Wikipedia deep-dive eight weeks out from release in a desperate late-night quest for a non-copyrighted text to use with a cue that might please a bunch of suits half a world away.

***

Choruses have been part of film scoring for over a century. People have been singing on screen since the earliest silent reels, and with increasing technical wizardry we could even hear them doing it. But something like the Black Widow trailer is what we call a non-diegetic chorus: These are voices that viewers aren’t supposed to somehow locate within the screen action. In early cinema you had to have musicians physically present, first in the cinema with a viewer, eventually in the scene with the actors. Both of which pretty much ruled out the use of a choir. And, as film music historian Mervyn Cooke points out, once technologies existed that allowed films to have at least a partial soundtrack, filmmakers initially avoided non-diegetic music — precisely because they needed to sell the illusion that the sound was coming “from” the scene.

Non-diegetic music started to become the norm only in the early ’30s. And even then the limitations of recording technology meant that non-diegetic voices were not usually worth the trouble. By the late ’30s this had changed. Snow White and the Seven Dwarfs (1937) had its choir chime in even when it wasn’t for the explicit musical numbers. (Snow White was also the first soundtrack issued as an album, so choruses were part of how film soundtracks traveled semi-independently from their films from the very beginning.)

Alfred Newman had begun relying on wordless “heavenly choirs” going ooo and aaa in the background, in films like Wuthering Heights (1939), How Green Was My Valley (1941), and The Song of Bernadette (1943). As the music historian Donald Greig, who is also an active session singer on many modern scores, has pointed out, in the beginning choruses had to be at least somewhat motivated by theme or screen action — they were there to speak for ghosts, to intimate religious dimensions to the screen action.

And then there was Dimitri Tiomkin’s score for Frank Capra’s Lost Horizon (1937). The film concerns the discovery of Shangri-La in the Himalayas, and when we finally get to the fabled land the soundtrack accompanies the matte-painted wonderland with a chorus singing in … well, in a language that isn’t English and doesn’t seem to be Tibetan either. And thus another Hollywood tradition was born: film choruses belting out perfectly nonsensical prose with utter conviction.

Neither type of choral performance has ever left the Hollywood lexicon. In thinking through how film choruses make meaning, I became obsessed with what the process of recording a soundtrack looks like today and at what point in that process someone actually writes lyrics in fake Tibetan. In the Golden Age, studios kept their own choirs — professional singers would show up at the lot and ooo and aaa for a Miklós Rózsa score one day and belt out a ferocious battle hymn for Erich Wolfgang Korngold the next. Studios also had their house orchestrators (usually several), and while laypeople remember the composers of Hollywood’s Golden Age, there are other figures that probably shaped the way films sound just as much if not more, all the while just quietly collecting their paychecks.

Speaking with modern singers about their experiences, I was struck by how little their day-to-day job description had changed since Tiomkin’s day. But the world in which they are performing is altogether different. As part of my research for this article I made a massive choir belt out the most menacing rendition of “Mary Had a Little Lamb” ever, and all it cost me was $199 plus tax. The EastWest Symphonic Choirs software allows you to make a virtual choir sing in just about any style imaginable. Want your ooos and aaas to sound like a whisper? More Broadway or more classical? All of that’s in the package.

But there’s more: Thanks to a system called WordBuilder, you can have this choir sing pretty much anything — you can type in text in English, in phonetics, or in a proprietary alphabet called Votox, and the software will assemble it out of a massive databank of vowels and consonants. This is a commercially available product, but there are even bigger sample libraries kept by individual composers: If you’re wondering who’s dropping by to supply a quick “agnus dei” for a Hans Zimmer score, well, that’s almost certainly a proprietary sample owned by Zimmer’s film score workshop, Remote Control.

All the professional singers I spoke to were keenly aware of products like EastWest Symphonic Choirs and the sample libraries — because more likely than not they’re in them. If you’re in the business of singing on film, these days you won’t always be asked to sing for an actual score, but instead you might get booked to record samples. There’s a scary possibility that these artists are slowly eroding the industry’s need for their labor — that the fruits of their one day of paid work will perform for the studios in perpetuity and with no extra residuals. Their disembodied vowels are putting their vocal cords out of business. But that possibility hasn’t been fully realized: Often enough when they arrive in the recording studio, singers will find that there is a vocal track already, but it’s done by computer. And yet, the composer wants a live version. Almost all the singers I spoke to expressed some surprise that Hollywood still bothered.

One possibility why they do: Composers simply like working with live humans and consider it part of their job to do so. As Jonathan Beard, who has been composing and orchestrating in Hollywood for over a decade, put it to me, choirs are an easy, effective way to give dimension to a scene — “because you have a human body as one of the instruments, and there’s a power the human voice [has] over us in general.”

Composers are highly trained musicians, and a lot of their training has involved singing. The composer brothers Harry and Rupert Gregson-Williams (Harry composed for films like Kingdom of Heaven, the Narnia films, and most of Denzel Washington’s films of the last 15 years, while Rupert is best known for DC Universe films like Wonder Woman and Aquaman) were both choirboys at St. John’s College in Cambridge — it makes biographical sense that choral textures and their creation would be important to them. And that they might like to think through music with a live chorus rather than a computer. Another surprising preference that speaks to a kind of sweet traditionalism: While sometimes vocal tracks get doubled in recording (meaning what sounds like 16 singers is just eight overlaid onto each other), this seems to be the exception rather than the rule. Clearly someone in the process enjoys working with large groups of people and thinks they give you an aesthetic payoff that engineering wizardry would not.

But there’s a more cynical reason as well, and it’s the reason why automation hasn’t displaced human labor in other fields: The process of booking some freelancers through a fixer, having them record for a day, and then paying them no residuals isn’t actually much of an expense. That’s how London became a preferred place for Hollywood to record: a large population of well-trained musicians, whose union doesn’t insist on residuals. Several London-based singers I spoke with suggested that the reason Hollywood doesn’t record in, say, Germany as often is that singers in continental Europe have steadier income and are less dependent on session work. And once a producer decides that even London-based musicians are too demanding — well, then there’s always Prague or Budapest. The gorgeous voices you heard in a John Ford Western were the sound of unions and full-time employment; in a Hollywood score today they are monuments to the globalizing power of the gig economy.

***

So that is the world from which these vocals emerge. Imagine you are a classically trained singer in, say, London who has done some previous work on soundtracks. You get a call from a fixer, who is assembling a chorus, or soloists, for a production company. You book the gig, and you show up for the recording session knowing which film you’re singing for, probably knowing the composer you’re recording for, but nothing else. Most recording sessions take place in the famous Abbey Road Studios, which are expensive, so you’re usually booked for no more than a certain number of union-approved hours.

Importantly, by the time you show up for the recording session, the film is pretty much “in post post production,” as one session singer put it to me. The film is basically finished, the wrangling over what the score is supposed to sound like is over. By the time you record, whatever orchestral parts you are supposed to accompany are fully assembled — you usually have them in your headphones as you sing. When you get there, you are handed a large stack of notes to sing and, according to all the singers I spoke with, you get through some portion of them in the next few hours — never through all of them. Some cues you sing will never be in the finished film, some cues you might do 10 versions of. And then the studio time the composer booked is over, you hand over your stack of notes, sign statements agreeing not to divulge anything about what you just sang, and you are on your way.

As the soprano Catherine Bott said: “You enter a studio and you open the score and off you go. You sing what you’re told, and it’s all about versatility, just being able to adapt to the right approach, whatever that may be for that conductor or that composer.” And part of that, singers told me, was singing the words — whatever they may be. As Donald Greig pointed out to me, a lot of these singers have training in classics; they certainly know their way around a Requiem or a Stabat Mater. And yet often enough when they step into Abbey Road they’re being asked to sing perfectly nonsensical phrases in pseudo-Latin — but the studio is booked, the clock is ticking, and as Bott put it, “that’s not the time to put up your hand and, you know, correct the Latin.”

Or the English: Bott sang on the soundtrack for the 1986 animated feature An American Tail. For a cue where the little immigrant mouse Fievel first lays eyes on New York harbor, composer James Horner had the choir intone the famous Emma Lazarus poem inscribed at the base of the Statue of Liberty. As she was singing through the cue — “Give me your tired, your poor” — Bott realized that whoever had put together the score had written down “your huddled masses yearning to be free” rather than “breathe free.” She was pretty sure she knew better, as did some colleagues, but out of English reserve, deference to the Americans, or professionalism, no one felt it was their place to say anything. The misquote stayed in the picture and you can buy it on CD today.

Perhaps part of what made me look for the meaning behind the lyrics on some of my favorite soundtracks was exactly this professionalism. A good singer sells the emotion and the conviction, to the point that a listener sort of has to believe that it all means something. Interestingly enough, early in this long tradition of made-up languages, Hollywood felt the need to pretend that it did mean something. When Lost Horizon was released in 1937, Columbia Pictures claimed in its publicity material that Dimitri Tiomkin’s score “includes authentic folk songs of Tibet.” The same press sheet noted that the Hall Johnson Choir, a popular gospel choir, “will sing the folk song arrangements in the native Tibetan language.”

Film music historians agree that this is hogwash. There is no evidence Tiomkin researched Tibetan folk songs for his score — what the ad men were selling as “authentic folk songs” were almost certainly newly written pieces in a made-up language. Tiomkin had started out as a concert pianist and relied on a small army of orchestrators to turn his melodies into actual playable scores. Someone in that group put a pen to paper and wrote these pieces, and either that same person or someone else seems to have made up some fake Tibetan text to distribute to the singers.

But for whatever reason Columbia Pictures’ publicity department didn’t want to frame the vocals in this manner. Perhaps extradiegetic voices were still sufficiently new that they wanted to tell an audience what these voices were doing on the soundtrack. Or it had nothing to do with the soundtrack itself, and was just another way of selling the broader spectacle of filmmaking: Look at the lengths we went to.

At the same time, lyrics have a pesky way of clarifying the intended audience. After all, it is not altogether difficult to imagine why Tiomkin and company wouldn’t have bothered with actual folk songs and actual language. Lost Horizon is one of those movies that stars noted non-Asian persons H.B. Warner as “Chang” and Sam Jaffe as “the High Lama of Shangri-La.” The broad and bogus claims to authenticity are also making a point about who the movie is for. The fact that the Hall Johnson Choir was an African American group best known for singing spirituals amplifies the sense that Lost Horizon turns non-white people’s authenticity into charming window-dressing for white audiences. Like Shangri-La for its white visitors, even when its lyrics were incomprehensible, film music was still “for” white English speakers.

At other times when Hollywood filmmaking relied on choruses, the point was the opposite of exoticism: hyper-comprehensibility. Decades later Tiomkin wrote a rousing score for John Wayne’s jingoistic epic The Alamo (1960). At the end of the movie, with the siege over and one lone survivor and her little daughter leaving the ruined fort, a chorus drifts faintly onto the soundtrack, almost as though the singers were standing somewhere far away in the field of battle. Over the movie’s final shots, the choir takes over the soundtrack, singing a version of what would eventually spend some weeks on the pop charts as “The Ballad of the Alamo.” The first lines a viewer is able to clearly hear are: “Let the old men tell the story / let the legend grow and grow. / Of the thirteen days of glory / at the siege of Alamo.”

This music explicitly tells us why it needs to turn to human voices singing in a language the viewer is supposed to understand. The “Ballad” tells us what to do with the story we have just heard: Pass it on, let the legend “grow and grow.” Also — since this was made by John Wayne in the ’60s — the message is probably also don’t be a communist. But note how the movie has to treat three things as essentially the same: the singing has to be audible for the casual moviegoer, over people getting out of their seats early or finishing off their popcorn; the words have to be comprehensible on a purely linguistic level to an audience that has been taught to tune out the music on some level for the last two hours; and the reason why these words were included in the movie has to be clear.

The fact that these three factors are separate can be easy to forget for an English-speaking audience reared on American pop culture. I grew up on Hollywood films in dubbed versions — though those didn’t typically dub the music. Meaning, as a kid who didn’t speak English, I became pretty used to following a plot in German, then the music would swell and I’d sort of tune out for a few minutes as the soundtrack, and the English language, washed over me. I’d get the basic idea of course — the characters were happy, or sad, or patriotic — but I had no idea what they were saying, and I was okay with that.

That’s sort of how most of us feel when we listen to the theme to the 21st-century version of Battlestar Galactica — unless we happen to be familiar with the mantras of the Rig Veda. Still, it’s a culturally specific experience. These days we can’t watch fantasy or science fiction without being sung at in Sanskrit, Old Norse, Dwarvish, Elvish, Uruk-hai, Klingon, and so on. When composer John Williams returned to the Star Wars universe for 1999’s The Phantom Menace, he composed an amped-up piece for the final duel — and over its churning ostinatos he overlaid a chorus belting out a … Sanskrit translation of a Welsh poem. And apparently the syllables of the Sanskrit text were rearranged to the point of incomprehensibility. Clearly, these shows and movies are not addressing us as potential speakers of Klingon or Sanskrit or even Welsh — they’re interested in the feel and sound of a language rather than its meaning. At one recording session, Donald Greig told me, “they spent ages telling us how to pronounce the Russian and then we realized, ‘well this doesn’t actually mean anything.’” This turns out to be both a pretty new and pretty old way of listening to music.

***

Hollywood scores come in waves. The film industry isn’t known for being particularly fond of risk taking, and film scores in particular often build on previous scores. The director will often cut the film to a temp track consisting of existing pieces, and it’s easy to imagine that the filmmakers would eventually want something that sounds like their temp track to accompany the finished film. Choirs have never really left Hollywood, but there are certainly moments when producers and directors seem to have almost reflexively sought them out and others when they have avoided them. The Omen (1976) with its massive latinate choral opener, “Ave Satani,” kicked off one such wave. Peter Jackson’s The Lord of the Rings trilogy kicked off another.

This new chapter in the way films sounded started in the Town Hall, a storied concert venue in Wellington, New Zealand. That’s where composer Howard Shore recorded the earliest parts of his soundtrack for The Fellowship of the Ring (the rest would be recorded in London). The recording involved a full orchestra on ground level and rotating choirs in the balcony. It wasn’t lost on the composer that the scene was weirdly traditional: “The orchestra,” Shore explained, “was set up very much the way a pit orchestra was set up in an opera.” The collaborative process around the composition, too, felt like something Mozart and his librettist Lorenzo da Ponte might have recognized. The screenwriters wrote the text the choir would be expected to sing, an on-site translator would translate it into Tolkien’s languages, and Shore would then set the Dwarven or Elvish text.

Somewhat counterintuitively, it’s not actually choral music with incomprehensible lyrics that is novel and needs explaining; it is choral music with comprehensible ones. For a long time, and for far longer than instrumental music, choral music in the West belonged to the church, to the mass, and that meant to Latin. A language as native to Christian religious life as it was foreign to most Christians. The Lutheran Reformation did a lot to hand church services over to language the congregants could actually understand, but throughout Europe the experience of being talked, and in particular sung, at in Latin persisted. That’s of course not to say that people didn’t sing in their vernacular languages — just that the experience of singing words you don’t, or don’t fully, understand would have been very normal to these people.

For the German philosopher Arthur Schopenhauer, choral music was meaningful only insofar as the words were not the point. In his The World as Will and Representation, which appeared first in 1819, was republished in 1844, and strongly influenced composers like Richard Wagner, Schopenhauer claimed that music was the purest expression of reality because it didn’t linger with “representations” — words and the things they represent — but tapped automatically into something deeper. Choral music would seem to fall short of that standard — being pretty centrally concerned with words and the things they denote — but Schopenhauer didn’t think so. After all, you shouldn’t listen to sung music primarily for the words, and often you may not even know the words. And Schopenhauer thought this was for the better.

Latin still works that way for most modern audiences: You might argue that there isn’t much of an expectation on the part of an American film composer circa 1989 (or on the part of the filmmakers who hired him) that the audience should be able to follow along with the Latin lyrics — in fact, it might well be distracting if they did. What text is included, both singers and composers confirmed to me, has far more to do with the flow of phonemes and how it interacts with the raw sound of the vocals. The words are simply yet another instrument in the repertoire the composer has at their disposal. But it’s an instrument that comes freighted with all the complications that inevitably arise when our loquacious species uses language.

After all, unlike a humming chorus, a Latin chorus does create extra levels of meaning for those who want to listen more carefully. Composer Jerry Goldsmith wrote “Ave Satani” for The Omen as a deliberate transposition of various Catholic masses. While the individual Latin words may have been hard to pick up on (and weren’t entirely correct to boot), listeners who were Catholic likely would have recognized what was being inverted here, given that they’d spent most Sundays around the actual Latin texts. It’s not clear how seriously Goldsmith (or the choirmaster who jotted down the Latin lyrics for the composer) grappled with that dimension of the score — for one thing, the very title of the piece messes up the declension of Satan. But that dimension was there nonetheless — The Omen was part of a kind of religious revival in Hollywood, and though it plays as camp today it was taken far more seriously then.

James Horner’s score for the 1989 film Glory relies heavily on a Latin chorus, and in the film’s climactic moment that chorus sings recognizably in Latin. Glory tells the story of the 54th Massachusetts Infantry regiment, an all-Black unit during the American Civil War, and the film ends with most of the unit being mowed down by Confederate soldiers while assaulting Fort Wagner in South Carolina. The piece in question relies on a text drawn from a Latin mass, frequently incorporated into the classical canon in various requiems from Mozart to Verdi. But, as so often, Horner (or his orchestrator) doesn’t stick to the actual text, but rather seems to create a mashup of snippets from the traditional requiem mass.

So is Horner just using the text of the requiem mass the way layout professionals use the phrase “Lorem ipsum?” Hard to imagine. After all, it makes a lot of sense to have a requiem text being sung as your characters are dying one by one. But more importantly, precisely because the text is so garbled, certain words stick out all the more: “Recordare,” Latin for “recall,” “stricte” (severely), and “judex” (judge). These pieces are largely taken from the Dies Irae, the part of the requiem mass that tells of the end of the world and God’s judgment, albeit with admixtures from just about every other part. The text, though hard to parse, is remarkably consonant-heavy for a Hollywood soundtrack, and a lot of it seems to be due (and I hope I’m hearing that right, as no actual text exists for this piece that I was able to track down) to the text’s overreliance on the future active participle, which ends in “-urus”: just in terms of pure grammar, the threatening hissing in the text is literally about what is to come.

So maybe the text, and the fact that it’s in Latin, isn’t about pretentiousness on the part of the filmmakers at all. It’s a mass for the dead and a tale of divine wrath, and it seems to make — over the heads of most of the film’s audience, admittedly — a point about retribution. It is remarkable how sophistic (white) Americans, who are frequently so proud to deal in moral absolutes, get when it comes to their Civil War. Horner’s grammatically challenged remix of the “Dies Irae,” I think, makes a point that is stark and simple and remarkably rare in American depictions of the country’s most bloody conflict: The Confederacy is evil, those who kill on its behalf are committing a sin, and they are bringing God’s wrath (and future judgment) upon themselves. There is, then, in this particular instance something to be gleaned from a text that otherwise we’re not meant to pick up on.

Which gets at an interesting disconnect — namely, that different constituencies will experience the same song differently. The choir members know what they’re saying, even if they have no clue as to what any of it means. And the composer, director, sound designer, etc., although they live with a soundtrack far longer than either the performers or even the most devoted audience, don’t tend to get to the words that go with the music until fairly late in the game. They often have to rely on orchestrators and assistants, or a helpful choirmaster who claims he really knows Latin. Their budget, and thus their time, is not tailored to their needs, but to the dictates of the director and the studio. The prose simply appears, like a ghost in this immense machine. And — in spite of the fact that most parties involved seem to be content to have it not mean very much — it winds up signifying something.

One example: An “exotic” text can only be understood by very specific listeners. But, very much to the point, they are not therefore the intended listeners. Lost Horizon wasn’t banking on a particular reception in the Tibetan community — rather the opposite: Dimitri Tiomkin and his collaborators seem to have counted on not having any actual speakers of Tibetan in the audience.

This gets a lot more troubling in the case of the phrase “Nants ingonyama bagithi baba,” likely one of the most repeated, parodied, and bowdlerized lines of text in any soundtrack. It’s clear that it isn’t addressing the average viewer with the intention of being understood. The very fact that it is in Zulu, but the story of The Lion King appears to take place in the Serengeti, thousands of miles to the north, suggests that the language is here to signal one thing and one thing only: African-ness.

For contrast, look at the way composer Michael Abels’ score for Jordan Peele’s Get Out features Swahili voices: Outside of the considerable number of Swahili speakers in the world, most people watching Get Out won’t know what the singers are saying. But what they’re saying does matter, in a way: Literally “listen to your ancestors,” but as a saying meaning something kind of like “you’re about to be in danger.” The viewer who doesn’t understand this line is missing an important warning about what is to come in the film. As is, of course, the film’s African American protagonist, who cannot listen to (or at least understand) his ancestors. Peele and Abels manage to wring from this small decision a whole range of subtle points.

***

But as with all exoticism, there’s a strange tug of war between condescension and appreciation in these kinds of borrowings. When Ottman decided to use a choral piece at the end of the 2008 film Valkyrie, he clearly needed a German text, and I suspect any German text would have sufficed. But he didn’t pick just any German text. The film stars Tom Cruise as Claus Graf Schenk von Stauffenberg, a historic figure who led the only attempt by members of the Nazi state to get rid of Adolf Hitler. The text is “Wandrers Nachtlied,” one of Johann Wolfgang von Goethe’s most memorable, well-known texts, and if it’s a little bit treacly by the great poet’s standards, it’s hard to deny it’s a deeply appropriate choice for this moment. Not overtly about politics, it is nevertheless about history, about reflection, about resignation. And about a different use of the German language than one is used to in Hollywood films.

For any German person it’s weird to hear bad guys so consistently speak (and butcher) your language. I’m not complaining, mind you; it makes perfect sense. But what’s remarkable about Valkyrie is that it seems unusually careful for a Hollywood film in how it deals with the German language. Earlier in the film, Cruise’s character says that “people need to know we were not all like him,” and this final poem seems to do something similar for the German language — the filmmakers close their movie by pointing out that this language is capable of beauty and deep humanity. The poet Paul Celan — himself a Holocaust survivor — pointed to the strangeness of writing in a language that was both “my mother’s tongue” (Muttersprache) and “the murderer’s tongue” (Mördersprache). Ottman seems to want to recover the former after showing plenty of the murderers.

The strange thing is: I am pretty sure Goethe’s “Nachtlied” is the first utterance in actual German in this film about Germany. Cruise sort of tries a German accent every other scene; the largely British supporting cast doesn’t even bother. And no one speaks any German, the way Sean Connery does with Russian at certain moments in The Hunt for Red October, or Alan Rickman in Die Hard. The film’s supporting cast is stacked with Germans who belt out accented English throughout. It almost feels like the film wants to bend over backwards a little too much: remind us what beauty and thoughtfulness this language is capable of — even though it never shows us the barbarity, which the film renders in English.

I suppose it’s moments like that one that made me obsess over what choirs sing in movies, and who decides what they sing. Because it’s a moment when blockbuster film or TV, which increasingly is created for the greatest possible global audience, which has been focus-grouped and test-audienced within an inch of its life, manages to speak far more directly, more improvisationally to a much smaller audience. All of us are sometimes in that smaller audience, sometimes not. But we’re aware it’s there. When cinema is literally speaking in tongues, how could we not? And to be the person who hears a call the object of fascination never knew it was putting out there — what better definition could there be of what a fan really is?

* * *

Adrian Daub is professor of Comparative Literature and German Studies at Stanford University. He is the author of four books on German thought and culture in the nineteenth century, as well as (with Charles Kronengold) “The James Bond Songs: Pop Anthems of Late Capitalism” (related story here). He tweets @adriandaub.

* * *

Editor: Krista Stevens
Fact checker: Julie Schwietert Collazo

The Martha Stewarting of Powerful Women

Illustration by Jason Raish

Ann Foster | Longreads | July 2019 | 14 minutes (3,613 words)

On March 5th, 2004, Martha Stewart was found guilty of obstructing justice and lying to investigators. At the time, she was one of comparatively few female CEOs, and she was irrevocably tied to her company’s success: her smiling, serene, WASPy perfection thoroughly entwined with her company’s numerous ventures. When she first faced charges of insider trading, news media and the general population reacted with schadenfreude, or as one New York Times article coined it, blondenfreude: “the glee felt when a rich, powerful, and fair-haired business woman stumbles.” And stumble she did: In the wake of the scandal, Stewart voluntarily removed herself from most of her roles at the company, and as part of her sentencing she was barred from involvement with the empire for five years. Stewart re-joined the Board of Directors in 2011, but the company never truly bounced back from the effects of the scandal.

And What of My Wrath?

Illustration by Zoë van Dijk

Sara Fredman | Longreads | May 2019 | 9 minutes (2,555 words)

 

What makes an antihero show work? In this Longreads series, It’s Not Easy Being Mean, Sara Fredman explores the fine-tuning that goes into writing a bad guy we can root for, and asks whether the same rules apply to women.

I didn’t want to write about Game of Thrones. Truly, I didn’t. In the first place, it is an ensemble show and therefore not technically an antihero vehicle. It is also generally the realm of the hot take and this series is usually a place for tepid, if not downright frigid, takes. It is Winterfell, not Dorne. But here we are in Dorne, talking about Game of Thrones, though probably a week or so after it would have been maximally festive. So maybe it’s more accurate to say that we’re in King’s Landing, which is perfect because we’re here to talk about how, on any other show, Cersei Lannister could have been the female antihero we’ve all been waiting for.

Cersei is the closest female analogue to the Golden Age antiheroes who turned the genre into a phenomenon. Those men — Tony Soprano, Don Draper, Walter White — all do terrible things for a host of reasons: because they want to, because power feels good, because they’re doing what they need to do to survive in the world. Despite the fact that these men do terrible things, we root for them because of a careful calibration of their characters and the environment in which they operate. They are marked as special, or especially skilled; they are humanized by their difficult pasts and their dedication to their children; and, finally, they are surrounded by other, more terrible people. Cersei has, at one point or another in the show’s eight-season run, fallen into all of these categories. She is smart and cunning. I recently rewatched a scene I had forgotten, early in the first season in which she pokes holes in the plan her dumb and petulant son Joffrey comes up with to gain control of the North. The scene shows us that she understands the stakes of the titular game and how to play it successfully: “A good king knows when to save his strength and when to destroy his enemies.” The audience knows that Joffrey can never be that king and, despite Cersei’s keen grasp of her political landscape, neither can she. She may be depicted as a villain throughout most of the series but she is also clearly a talent born into the wrong body, and she knows it. As she says to King Robert Baratheon: “I should wear the armor and you the gown.”

This brings us to our next antihero criterion, which is the humanizing influence of interiority and family. It is axiomatic among the show’s characters and creators that Cersei’s most humanizing characteristic is the love and dedication she shows her children. In their final scene together, her brother Tyrion begs her to surrender with the only card he believes will matter: “You’ve always loved your children more than yourself. More than Jaime. More than anything. I beg you if not for yourself then for your child. Your reign is over, but that doesn’t mean your life has to end. It doesn’t mean your baby has to die.” In showrunner David Benioff’s view, Cersei’s children were the only thing that could humanize her: “I think the idea of Cersei without her children is a pretty terrifying prospect because it was the one thing that really humanized her, you know — her love for her kids. As much of a monster as she could sometimes be, she was a mother who truly did love her children.”

It is of course true that Cersei loves her children, but it is hard to square Tyrion’s description of his sister with the Cersei of season two’s “Blackwater” who was prepared to kill herself and Tommen, her youngest son, rather than be taken alive by Stannis Baratheon and his army. Tyrion thinks that Cersei loves her children like a June Cleaver when she actually loves them like a Walter White. For the antihero, love of family is about self-advancement, not self-sacrifice. Invoking his children will not dissuade him from doing bad things because their existence is the very thing that motivates him to do them. This is why Walter White can yell “WE’RE A FAMILY” right before he takes his infant daughter away from her mother.

David Benioff’s assertion that Cersei’s love of her children is the only thing that humanizes her is possibly the best example of the way in which the Game of Thrones writers misunderstood their characters and their audience. It overlooks the other reasons the show gave us to root for Cersei and betrays an ignorance of the extent to which enduring patriarchy might itself be, for at least a portion of its audience, humanizing. It reveals an inability to grasp the possibility that the mother and the monster can be the same person. For a show dedicated to demonstrating just how thin the line is between good and evil, Game of Thrones was surprisingly blind to Cersei’s potential to become a compelling antihero, to be humanized by something other than her children. Or maybe the show realized it all too well.


Seasons five and six in particular could have been a — forgive me — game changer for the audience’s relationship with Cersei. Their storyline has Cersei first trying to manipulate and then fighting off a band of homophobic and misogynist religious ascetics called the Sparrows. Initially, the audience appreciates the way the High Sparrow thwarts Cersei’s attempts to use religion to strengthen her own political position. She’s been a villain for four seasons and we relish seeing her hit a roadblock. But the High Sparrow and his sidekick Septa Unella take it too far and our allegiances begin to shift. Septa Unella tortures Cersei in prison and the High Sparrow declares that Cersei must take a walk of penance through the streets of King’s Landing. Her hair is shorn and she walks naked from the Sept of Baelor to the Red Keep as Septa Unella chants “shame” and rings a bell to draw onlookers. In that sequence, we don’t forget that Cersei’s done terrible things, but we feel sympathy for her because she is, in that moment, at the mercy of other, more sinister forces. We also feel sympathy for her because this showdown with the High Sparrow reminds us that her story is that of a woman living under patriarchy, that her autonomy has always been contingent and therefore largely an illusion. We remember that this is not the first time Cersei has been powerless, that in the first season we saw her husband hit her and then tell her to wear her bruise in silence or he would hit her again. We remember the way her father, Tywin Lannister, spoke to her (“Do you think you’ll be the first person dragged into the Sept to be married against her will?”), and we also remember that she was raped by the one man she loved next to the body of her murdered son.

In most of the ways that matter, Cersei’s relationship with Sansa Stark, betrothed to marry Cersei’s abusive son Joffrey, is evidence of her villainy but it is also a frank education in what becoming a wife and mother means under patriarchy. Looking back on some of their scenes together, one gets the sense that Cersei feels compelled to explain to Sansa what she’s in for, to disabuse her of any notions of happily ever after and replace them with the reality of life as a political pawn, a prisoner in expensive dresses. We see this as coldhearted and evil because we hold out hope that Sansa will be able to remain an innocent princess looking for true love, but that’s not an option for girls like her, and Cersei knows it. In a heart-to-heart after Sansa gets her period for the first time, Cersei assures her that while she will never love the king, she will love her children. Sansa has just become a woman, which makes her eligible to be a wife and mother. Cersei knows that this is an occasion for a political lesson rather than a domestic one: “Permit me to share some womanly wisdom with you on this very special day. The more people you love, the weaker you are. You do things for them that you know you shouldn’t do, you’ll act the fool to make them happy, to keep them safe. Love no one but your children. On that front, a mother has no choice.” When we hear it from her own mouth, Cersei’s love for her children sounds less like deliberate self-sacrifice than yet another matter in which she has no choice.

It’s probably worthwhile to remember that the “game” we have spent eight years watching is only being played in the first place because Robert Baratheon assumed that a woman who left him had to have been taken (“I only know she was the one thing I ever wanted and someone took her away from me”). Women are things to be taken and traded; they are the tools men use to cement alliances and consolidate power. Freedom of movement and freedom of self-determination are precious commodities to which only some people in Westeros have access, either by birth or cunning. None of those people are women. Cersei is hardly the only victim of patriarchy on the show, but she could have been its most symbolic. More than anything, Cersei wants to control her own body and her own destiny. She wants to be a player, rather than a pawn. When Ned Stark confronts her about her relationship with Jaime and the illegitimacy of their children, he warns, “Wherever you go, Robert’s wrath will follow you.” Cersei replies, “And what of my wrath, Lord Stark?” This question is, of course, rhetorical — everyone knows that a woman’s anger only earns 78 cents on the dollar. We side with Ned, but on another show, Cersei’s question could have been a rallying cry. We might have written it on signs taken to #resistance rallies and anti-abortion-ban protests. Neither Cersei nor Robert has been faithful, but Robert’s anger matters more because he is the king and Cersei’s infidelity matters more because her body is for making him a bloodline.

The Sept of Baelor pyrotechnics in the season six finale could have easily been Cersei’s “Face Off” moment: a shocking triumph over her enemies showcasing her intelligence and tactical skill. The move was not only brilliantly efficient, killing off everyone who opposed her at once without leaving home, but also bursting with symbolism. She destroys the religious cult that stripped her of what little bodily and political autonomy she had and blows up the place where she married Robert and was raped by Jaime. Cersei watches from her window as the architectural incarnation of patriarchy goes up in green flames and then takes a sip of wine.

That masterfully shot suspenseful sequence is immediately followed by Cersei’s vengeful speech to her torturer, Septa Unella, before leaving her in the hands of Gregor Clegane:

“Confess, it felt good, beating me, starving me, frightening me, humiliating me. You didn’t do it because you cared about my atonement, you did it because it felt good. I understand. I do things because they feel good. I drink because it feels good. I killed my husband because it felt good to be rid of him. I fucked my brother, because it feels good to feel him inside of me. I lie about fucking my brother, because it feels good to keep our son safe from hateful hypocrites. I killed your High Sparrow, and all his little sparrows, all his septons and all his septas, all his filthy soldiers because it felt good to watch them burn. It felt good to imagine their shock and their pain. No thought has ever given me greater joy. Even confessing feels good under the right circumstances.”

This is Cersei’s “I am the one who knocks” speech, the moment where the antihero lays bare her unsavory machinations, and we applaud because a formerly weak person now has some hard-won power. Walter White takes some time to understand that if he is to have any power, he must take it. Cersei has always understood that power is her only available means toward self-determination, a ballast against the whims and wishes of those who would try to use her to further their own storylines and try to capture a bigger piece of the Westeros pie. Power is, for her, a necessity rather than a perk. Thinking about Cersei as an antihero, however brief the time we spend cheering her on, makes clear the extent to which writing a successful antihero always involves portraying that character as but a small player in a much bigger game. This is Walter White up against Big Pharma, which cut him out of profits to which he feels entitled and is now forcing him to forfeit his family’s financial security to stay alive. It is Tony Soprano chafing against RICO and the possibility that anyone in his orbit could help the FBI lock him up. It is Don Draper trying to hold on to a life he was never supposed to have. And it is Philip and Elizabeth Jennings doing the job they were trained to do, while people we never see change the rules and determine its stakes. An antihero isn’t on top of the world but right there in the melee, jockeying for some small measure of self-determination. We realize, as they do, that no matter how much power or control they seem to have, they are only one step away from being literally or metaphorically paraded through the streets naked while someone rings a bell.

Cersei is the closest we’ve come to a female version of this kind of character. David Benioff is right: Cersei is a monster. But the thing about an antihero show is that it can turn any monster into a hero. It compels us to root for a monster by making us see the monstrosity lurking all around him and, in so doing, turns him into our monster. Monstrosity in Westeros is like wildfire under King’s Landing: There is more than enough of it to make Cersei a queen we root for while she sips her celebratory wine. Allowing Cersei to become a full-on antihero could have been incredible, giving the show an opportunity to explore the particular powerlessness of women under patriarchy. What difference does motherhood make? What particular vulnerabilities does it bestow, what kinds of unexpected powers or motivations? But this is the fantasy world we have, not the one we need, and Game of Thrones could never allow Cersei to fully become the antihero character they had temporarily conjured. Three weeks ago — on Mother’s Day no less — we saw her crushed by a building, dying in the arms of her rapist after begging him not to let her die. As bad as Game of Thrones was at writing women, it gave us one possible roadmap for creating a female antihero on par with the bad men we’ve seen win Emmys over the past two decades. But it also makes clear just how tough that road is to travel because it requires that we expand our idea of what kinds of people are allowed to do bad things in pursuit of their own self-determination, to become the one who knocks.

Next, we’ll dive into half-hour television for our first solo female antihero — single mom Sam Fox of Better Things — because there’s no audience more adept at pointing out a woman’s flaws than her children.

* * *

Previous installments in this series:
The Blaming of the Shrew
The Good Bad Wives of Ozark and House of Cards
Mother/Russia

* * *

Sara Fredman is a writer and editor living in St. Louis. Her work has been featured in Longreads, The Rumpus, Tablet, and Lilith.

Editor: Cheri Lucas Rowlands
Illustrator: Zoë van Dijk

Critics: Endgame

Illustration by Homestead

Soraya Roberts | Longreads | May 2019 | 9 minutes (2,309 words)

It’s a strange feeling being a cultural critic at this point in history. It’s like standing on the deck of the Titanic, feeling it sink into the sea, hearing the orchestra play as they go down — then reviewing the show. Yes, it feels that stupid. And useless. And beside the point. But what if, I don’t know, embedded in that review, is a dissection of class hierarchy, of the fact that the players are playing because what else are you supposed to do when you come from the bottom deck? And what if the people left behind with them are galvanized by this knowledge? And what if, I don’t know, one of them does something about it, like stowing away their kids on a rich person’s boat? And what if someone is saved who might otherwise not have been? If art can save your soul, can’t writing about it do something similar?

The climate report, that metaphorical iceberg, hit in October. You know, the one that said we will all be royally screwed by 2040 unless we reduce carbon emissions to nothing. And then came news story after news story, like a stream of crime scene photos — submerged villages, starving animals, bleached reefs — again and again, wave after wave. It all coalesced into the moment David Attenborough — the man famous for narrating documentaries on the wonders of nature — started narrating the earth’s destruction. I heard about that scene in Our Planet, the one where the walruses start falling off the cliffs because there is no ice left to support them, and I couldn’t bring myself to watch it. Just like I couldn’t bring myself to read about the whales failing to reproduce and the millions of people being displaced. As a human being I didn’t know what to do, and as a cultural critic I was just as lost. So when Columbia Journalism Review and The Nation launched “Covering Climate Change: A New Playbook for a 1.5-Degree World,” along with a piece on how to get newsrooms to prioritize the environment, I got excited. Here is the answer, I thought. Finally.

But there was no answer for critics. I had to come up with one myself.

* * *

Four years ago, William S. Smith, soon to be the editor of Art in America, attended the Minneapolis-based conference “Superscript: Arts Journalism and Criticism in a Digital Age” and noticed the same strange feeling I mentioned. “The rousing moments when it appeared that artists could be tasked with emergency management and that critics could take on vested interests were, however, offset by a weird — and I would say mistaken — indulgence of powerlessness,” he wrote, recalling one speaker describing “criticism as the ‘appendix’ of the art world; it could easily be removed without damaging the overall system.” According to CJR, arts criticism has been expiring at a faster rate than newspapers themselves (is that even possible?). And when your job is devalued so steadily by the industry, it’s hard not to internalize. In these precarious circumstances, exercising any power, let alone taking it on, starts to feel Herculean.

Last week’s bloody battle — not that one — was only the latest reminder of critics’ growing insignificance. In response to several celebrities questioning their profession, beleaguered critics who might have proven they still matter by addressing larger, more urgent issues, instead made their critics’ point by making it all about themselves. First there was Saturday Night Live writer Michael Che denigrating Uproxx writer Steven Hyden on Instagram for critiquing Che’s Weekend Update partner Colin Jost. Then there was Lizzo tweeting that music reviewers should be “unemployed” after a mixed Pitchfork review. And finally, Ariana Grande calling out “all them blogs” after an E! host criticized Justin Bieber’s performance during her show. Various wounded critics responded in kind, complaining that people with so much more clout were using it to devalue them even more than they already have been. “It’s doubtful, for instance, that Lizzo or Grande would have received such blowback if they hadn’t invoked the specter of joblessness in a rapidly deteriorating industry,” wrote Alison Herman at The Ringer, adding, “They’re channeling a deeply troubling trend in how the public exaggerates media members’ power, just as that power — such as it is — has never been less secure.” 

That was the refrain of the weeklong collective wound-lick: “We’re just doing our jobs.” But it all came to a head when Olivia Munn attacked Go Fug Yourself, the fashion criti-comic blog she misconstrued as objectifying snark. “Red carpet fashion is a big business and an art form like any other, and as such there is room to critique it,” site owners Heather Cocks and Jessica Morgan responded, while a number of other critics seized the moment to redefine their own jobs, invoking the anti-media stance of the current administration to convey the gravity of misinterpreting their real function, which they idealized beyond reproach. At Vanity Fair, chief critic Richard Lawson wrote of his ilk offering “a vital counterbalance in whatever kind of cultural discourse we’re still able to have.” The Ringer’s Herman added that criticism includes “advocacy and the provision of context in addition to straightforward pans,” while Caroline Framke at Variety simply said, “Real critics want to move a conversation forward.” Wow, it almost makes you want to be one.

I understand the impulse to lean into idolatry in order to underscore the importance of criticism. Though it dates back as far as art itself, the modern conception of the critic finds its roots in 18th-century Europe, in underground socially aware critiques of newly arrived public art. U.K. artist James Bridle summed up this modern approach at “Superscript,” when he argued that the job of art is “to disrupt and complicate” society, adding, “I don’t see how criticism can function without making the same level of demands and responding to the same challenges as art itself — in a form of solidarity, but also for its own survival.” Despite this unifying objective, it’s important to be honest about what in actual practice passes for criticism these days (and not only in light of the time wasted by critics defending themselves). A lot of it — a lot — kowtows to fandom. And not just within individual reviews, but in terms of what is covered; “criticism” has largely become a publicity-fueled shill of the most high-profile popular culture. The positivity is so pervasive that the odd evisceration of a Bret Easton Ellis novel, for instance, becomes cause for communal rejoicing. An element of much of this polarized approach is an auteur-style analysis that treats each subject like a hermetically sealed objet d’art that has little interaction with the world.

The rare disruption these days tends to come from — you guessed it — writers of color, from K. Austin Collins turning a Green Book review into a meditation on the erasure of black history to Doreen St. Félix’s deconstruction of a National Geographic cover story as an erasure of a black future. This is criticism which does not just wrestle with the work, but also wrestles with the work within the world, parsing the way it reflects, feeds, fights — or none of the above — the various intersections of our circumstances. “For bold and original reviews that strove to put stage dramas within a real-world cultural context, particularly the shifting landscape of gender, sexuality and race,” the Pulitzer committee announced in awarding the criticism prize to New Yorker theatre critic Hilton Als in 2017. A year later the prize for feature writing went to Rachel Kaadzi Ghansah, the one freelancer among the nominated staffers, for a GQ feature on Dylann Roof. Profiling everyone from Dave Chappelle to Missy Elliott, Ghansah situates popular culture within the present, the past, the personal, the political — everywhere, really. And this is what the best cultural criticism does. It takes the art and everything around it, and it reckons with all of that together.

But the discourse around art has not often included climate change, barring work which specifically addresses it. Following recent movements that have awoken the general populace to various systemic inequities, we have been slowly shifting toward an awareness of how those inequities inform contemporary popular culture. This has manifested in criticism with varying levels of success, from clunky references to Trump to more considered analyses of how historic disparity is reflected in the stories that are currently told. And while there has been an expansion in representation in the arts as a result, the underlying reality of these systemic shifts is that they don’t fundamentally affect the bottom line of those in power. There is a social acceptability to these adaptations, one which does not ask the 1 Percent to confront its very existence and which instead ends up subsumed under it. A more threatening prospect would be confronting climate change, which would also involve reconsidering the economy — and the people who benefit from it the most.

We are increasingly viewing extreme wealth not as success but as inequity — Disney’s billion-dollar opening weekend with Avengers: Endgame was undercut not only by critics who questioned lauding a company that is cannibalizing the entertainment industry, but by Bernie Sanders: “What would be truly heroic is if Disney used its profits from Avengers to pay all of its workers a middle class wage, instead of paying its CEO Bob Iger $65.6 million — over 1,400 times as much as the average worker at Disney makes.” More pertinent, however, is how environmentally sustainable these increasingly elaborate productions are. I am referring not only to literal productions, involving sets and shoots, but to everything that goes into making and distributing any kind of art. (That includes publicity — what do you think the carbon footprint of BTS is?) In 2006, a report conducted by UCLA found that the film and television industries contributed more to air pollution in the region than almost all of the five other sectors studied. “From the environmental impact estimates, greenhouse gas emissions are clearly an area where the motion picture industry can be considered a significant contributor,” it stated, concluding, “it is clear that very few people in the industry are actively engaged with greenhouse gas emission reduction, or even with discussions of the issue.”

Just as identity politics has taken root in the critic’s psyche, informing the writing we do, so too must climate change. Establishing a sort of cultural carbon footprint will perhaps encourage outlets not to waste time hiring fans to write outdated consumer reviews that drive no traffic in Rotten Tomatoes times. Instead of distracting readers with generic takes, they might shift their focus to the specifics of, for instance, an environmental narrative, such as the one in the lame 2004 disaster movie The Day After Tomorrow, which has since proven itself to be (if nothing else) a useful illustration of how climate change can blow cold as well as hot. While Game of Thrones also claimed a climate-driven plot, one wonders whether, like the aforementioned Jake Gyllenhaal blockbuster, the production planted $200,000 worth of trees to offset the several thousand tons of carbon dioxide it emitted. If the planet is on our minds, perhaps we will also feature Greta Thunberg in glossy magazines instead of Bari Weiss or Kellyanne Conway. Last year, The New York Times’ chief film critic, A.O. Scott, who devoted an entire book to criticism, wrote, “No reader will agree with a critic all the time, and no critic requires obedience or assent from readers. What we do hope for is trust. We try to earn it through the quality of our writing and the clarity of our thought, and by telling the truth.” And the most salient truth of all right now is that there is no art if the world doesn’t exist.

* * *

I am aware that I’m on one of the upper decks of this sinking ship. I have a contract with Longreads, which puts me somewhere in the lower middle class (that may sound unimpressive, but writers have a low bar). Perhaps even better than that, I work for a publication for which page views are not the driving force, so I can write to importance rather than trends. I am aware, also, that a number of writers do not have this luxury, but misrepresenting themselves as the vanguards of criticism not only does them a disservice but also discredits the remaining thoughtful discourse around art. A number of critics, however, are better positioned than I am. Yet they personalize the existential question into one that is merely about criticism when the real question is wider: It’s about criticism in the world.

I am not saying that climate change must be shoehorned into every article (though even a non sequitur would be better than nothing), but I am saying that just as identity politics is now a consideration when we write, our planet should be too. What I am asking for is simply a widening of perspective, beyond economics, beyond race, beyond all things human, toward a cultural carbon footprint, one which becomes part of the DNA of our critiques and determines what we choose to talk about and what we say when we do. After more than 60 years of doing virtually the same thing, even nonagenarian David Attenborough knew he had to change tack; it wasn’t enough just to show the loss of natural beauty, he had to point out how it affects us directly. As he told the International Monetary Fund last month: “We are in terrible, terrible trouble and the longer we wait to do something about it the worse it is going to get.” In Our Planet, Attenborough reminds us over and over that our survival depends on the earth’s. For criticism to survive, it must remind us just as readily.

* * *

Soraya Roberts is a culture columnist at Longreads.

The Age of Forever Crises

The Chernobyl nuclear power plant in Ukraine. Efrem Lukatsky / AP, Illustration by Homestead

Linda Kinstler | Longreads | May 2019 | 10 minutes (2,527 words)

How does one recognize catastrophe, when it comes? What does it look like, how does it sound and smell? If it is an invisible catastrophe, how can you know when you are near it, and when you are far away? And what if it is an everlasting catastrophe, a disaster with a long half-life, so no matter how much time passes, it never quite goes away, and in some places, it only grows stronger? And when a decision from on high announces that it is time to try to move past it, to lay a wreath and get on with life, how does one mark the anniversary of a disaster still in motion, a crisis without end?

Last week marked yet another anniversary of the explosion of the Vladimir I. Lenin Nuclear Power Station’s Reactor No. 4. Thirty-three years ago, in the early hours of the morning on April 26, 1986, an ill-fated safety test unleashed an explosion equivalent to sixty tons of TNT, obliterating the reactor and sending the contents of its core — uranium fuel, graphite, zirconium, and a noxious mixture of radioactive gases — into the surrounding air, water, and earth. Read more…

When Did Pop Culture Become Homework?

Kevin Winter / Getty, Collage by Homestead

Soraya Roberts | Longreads | April 2019 | 6 minutes (1,674 words)

I didn’t do my homework last weekend. Here was the assignment: Beyoncé’s Homecoming — a concert movie with a live album tie-in — the biggest thing in culture that week, which I knew I was supposed to watch, not just as a critic, but as a human being. But I didn’t. Just like I didn’t watch the premiere of Game of Thrones the week before, or immediately listen to Lizzo’s Cuz I Love You. Instead, I watched something I wanted to: RuPaul’s Drag Race. What worse place is there to hide from the demands of pop culture than a show about drag queens, a set of performance artists whose vocabulary is almost entirely populated by celebrity references? In the third episode of the latest season, Vietnamese contestant Plastique Tiara is dragged for her uneven performance in a skit about Mariah Carey, and her response shocks the judges. “I only found out about pop culture about, like, three years ago,” she says. To a comically sober audience, she then drops the biggest bomb of all: “I found out about Beyoncé legit four years ago.” I think Michelle Visage’s jaw might still be on the floor.

“This is where you all could have worked together as a group to educate each other,” RuPaul explains. It is the perfect framing of popular culture right now — as a rolling curriculum for the general populace which determines whether you make the grade as an informed citizen or not. It is reminiscent of an actual educational philosophy from the 1930s, essentialism, which was later adopted by E.D. Hirsch, the man who coined the term “cultural literacy,” defining it as “the network of information that all competent readers possess.” Essentialist education emphasizes standardized common knowledge for the entire population, which privileges the larger culture over individual creativity. Essentialist pop culture does the same thing, flattening our imaginations until we are all tied together by little more than the same vocabulary.

***

The year 1987 was when Aretha Franklin became the first woman inducted into the Rock and Roll Hall of Fame, the Simpson family arrived on television (via The Tracey Ullman Show), and Mega Man was released on Nintendo. It was also the year Hirsch published Cultural Literacy: What Every American Needs to Know. None of those three pieces of history were in it (though People published a list for the pop-culturally literate in response). At the back of Hirsch’s book, hundreds of words and quotes delineated the things Americans need to know — “Mary Had a Little Lamb (text),” for instance — which would be expanded 15 years later into a sort of CliffsNotes version of an encyclopedia for literacy signaling. “Only by piling up specific, communally shared information can children learn to participate in complex cooperative activities with other members of their community,” Hirsch wrote. He believed that allowing kids to bathe in their “ephemeral” and “confined” knowledge about The Simpsons, for instance, would result in some sort of modern Tower of Babel situation in which no one could talk to anyone about anything (other than, I guess, Krusty the Klown). This is where Hirsch becomes a bit of a cultural fascist. “Although nationalism may be regrettable in some of its worldwide political effects, a mastery of national culture is essential to mastery of the standard language in every modern nation,” he explained, later adding, “Although everyone is literate in some local, regional, or ethnic culture, the connection between mainstream culture and the national written language justifies calling mainstream culture the basic culture of the nation.”

Because I am not very well-read, the first thing I thought of when I found Hirsch’s book was that scene in Peter Weir’s 1989 coming-of-age drama Dead Poets Society. You know the one I mean, where the prep school teacher played by Robin Williams instructs his class to tear the entire introduction to Understanding Poetry (by the fictional author J. Evans Pritchard) out of their textbooks. “Excrement,” he calls it. “We’re not laying pipe, we’re talking about poetry.” As an alternative, he expects this class of teenagers to think for themselves. “Medicine, law, business, engineering, these are all noble pursuits, and necessary to sustain life,” he tells them. “But poetry, beauty, romance, love, these are what we stay alive for.” Neither Pritchard nor Hirsch appears to have subscribed to this sort of sentiment. And their approach to high culture has of late seeped into low culture. What was once a privileging of certain aspects of high taste has expanded into a privileging of certain “low” taste. Pop culture, traditionally maligned, now overcompensates, essentializing certain pieces of popular art as additional indicators of the new cultural literacy.

I’m not saying there are a bunch of professors at lecterns telling us to watch Game of Thrones, but there are a bunch of networks and streaming services that are doing that, and viewers and critics following suit, constantly telling us what we “have to” watch or “must” listen to or “should” read. Some people who are more optimistic than me have framed this prescriptive approach as a last-ditch effort to preserve shared cultural experiences. “Divided by class, politics and identity, we can at least come together to watch Game of Thrones — which averaged 32.8 million legal viewers in season seven,” wrote Judy Berman in Time. “If fantasy buffs, academics, TV critics, proponents of Strong Female Characters, the Gay of Thrones crew, Black Twitter, Barack Obama, J. Lo, Tom Brady and Beyoncé are all losing their minds over the same thing at the same time, the demise of that collective obsession is worth lamenting — or so the argument goes.” That may sound a little extreme, but then presidential hopeful Elizabeth Warren blogs about Game of Thrones and you wonder.

Essentializing any form of art limits it, setting parameters on not only what we are supposed to receive, but how. As Wesley Morris wrote of our increasingly moralistic approach to culture, this “robs us of what is messy and tense and chaotic and extrajudicial about art.” Now, instead of approaching everything with a sense of curiosity, we approach with a set of guidelines. It’s like when you walk around a gallery with one of those audio tours held up to your ear, which is supposed to make you appreciate the art more fully, but instead tends to supplant any sort of discovery with one-size-fits-all analysis. With pop culture, the goal isn’t even that lofty. You get a bunch of white guys on Reddit dismantling the structure of a Star Wars trailer, for instance, reducing the conversation around it to mere mechanics. Or you get an exhaustive number of takes on Arya Stark’s alpha female sex scene in Game of Thrones. One of the most prestige-branded shows in recent memory, the latter in particular often occupies more web space than its storytelling deserves precisely because that is what it’s designed to do. As Berman wrote, “Game of Thrones has flourished largely because it was set up to flourish — because the people who bankroll prestige television decided before the first season even went into production that this story of battles, bastards and butts was worth an episodic budget three times as large as that of the typical cable series.” In this way, HBO — and the critics and viewers who stan HBO — have turned this show into one of the essentials even if it’s not often clear why.

Creating art to dominate this discursive landscape turns that art into a chore — in other words, cultural homework. This is where people start saying things like, “Do I HAVE to watch Captain Marvel?” and “feeling a lot of pressure to read sally rooney!” and “do i have to listen to the yeehaw album?” This kind of coercion has been known to cause an extreme side effect — reactance, a psychological phenomenon in which a person who feels their freedom being constricted adopts a combative stance, turning a piece of art we might otherwise be neutral about into an object of derision. The Guardian’s Oliver Burkeman called it “cultural cantankerousness” and used another psychological concept, optimal distinctiveness theory, to further explain it. That term describes how people try to balance feeling included and feeling distinct within a social group. Burkeman, however, framed his reactance as a form of self-protective FOMO avoidance. “My irritation at the plaudits heaped on any given book, film or play is a way of reasserting control,” he wrote. “Instead of worrying about whether I should be reading Ferrante, I’m defiantly resolving that I won’t.” (This was written in 2016; if it were written now, I’m sure he would’ve used Rooney).

***

Shortly after Beyoncé dropped Homecoming, her previous album, Lemonade, became available on streaming services. That one I have heard — a year after it came out. I didn’t write about it. I barely talked about it. No one wants to read why Beyoncé doesn’t mean much to me when there are a number of better critics who are writing about what she does mean to them and so many others (the same way there are smart, interested parties analyzing Lizzo and Game of Thrones and Avengers: Endgame and Rooney). I am not telling those people not to watch or listen to or read or find meaning there; I understand people have different tastes, that certain things are popular because they speak to us in a way other things haven’t. At the same time, I expect not to be told what to watch or listen to or read, because from what I see and hear around me, from what I read and who I talk to, I can define for myself what I need. After Lemonade came out, in a post titled “Actually,” Gawker’s Rich Juzwiak wrote, “It’s easier to explicate what something means than to illustrate what it does. If you want to know what it does, watch it or listen to it. It’s at your fingertips. … Right is right and wrong is wrong, but art at its purest defies those binaries.” In the same way, there is no art you have to experience, just as there is no art you have to not experience. There is only art — increasingly ubiquitous — and there is only you, and what happens between both of you is not for me to assign.

* * *

Soraya Roberts is a culture columnist at Longreads.

 

Let’s Talk About Sex Scenes

Anna Sastre / Unsplash / Pexels / Photo illustration by Katie Kosma

The first sex scene ever filmed was not a sex scene at all. It was a kiss. And there was way less kissing than talking. May Irwin’s make-out session with John Rice, a recreation of the smooch from the Broadway musical The Widow Jones, took all of one second. Filmed in 1896 at Thomas Edison’s Black Maria Studio, the soundless footage — titled, simply, The Kiss — opens with Irwin deep in conversation with Rice. While it is impossible to tell what they are saying, the two actors appear to be discussing logistics. Thirteen seconds in, they seem in agreement. Both pull back, Rice dramatically smooths out his moustache and, while Irwin is still talking, he cups her face and the two of them peck. Or, on his end, nibble. All in all, the actual moment their lips touch is almost nothing — 94 percent of the first sex scene was actually the discourse around it.

Were this to happen today, the actors would have clearer direction. Last week Rolling Stone reported that HBO would be hiring intimacy coordinators for every show that called for it after “The Deuce” star Emily Meade, who plays a prostitute in the series, asked for help with her sex scenes. The network consulted Intimacy Directors International (IDI), a non-profit established in 2016 that represents theatre, TV and film directors and choreographers specializing in the carnal. “The Intimacy Director takes responsibility for the emotional safety of the actors and anyone else in the rehearsal hall while they are present,” their site explains, alongside a standard set of guidelines called The Pillars: context (understanding the story), communication, consent, choreography and closure (signaling the end of the scene). Read more…

The 25 Most Popular Longreads Exclusives of 2017

Our most popular exclusive stories of 2017. If you like these, you can sign up to receive our weekly email every Friday.

1. The Unforgiving Minute

Laurie Penny | Longreads | November 2017 | 12 minutes (3,175 words)

Men, get ready to be uncomfortable for a while. While forgiveness may come one day, it won’t be soon. (At nearly half a million views, this is the most popular piece ever published on Longreads.)

2. A Sociology of the Smartphone

Adam Greenfield | Radical Technologies: The Design of Everyday Life | Verso | June 2017 | 27 minutes (7,433 words)

Smartphones have altered the texture of everyday life, digesting many longstanding spaces and rituals, and transforming others beyond recognition. Read more…

On Barbs and Demogorgons: A Stranger Things Reading List

In a summer marked by record levels of political angst, Netflix show Stranger Things accomplished an impressive feat. It tells a story of such murky ideological leanings that everyone — from the tinfoil hatters to the vegan socialists — just had to surrender to its expertly executed ’80s pastiche and satisfying emotional pull. (And, sure, all those adorable kid actors.)

Whether you’re still high on the show’s well-calculated nostalgia or already experiencing symptoms of Upside Down withdrawal, here’s a two-part selection of stories to keep you going: from deep dives into the design of the show’s title sequence to a sprawling interview with its creators. See you on the other side!

Read more…

'You Hollywood Idiots!' George R.R. Martin on Collaboration and the Creative Process

I think the look of the show is great. There was a bit of an adjustment for me. I had been living with these characters and this world since 1991, so I had close to twenty years of pictures in my head of what these characters looked like, and the banners and the castles, and of course it doesn’t look like that. But that’s fine. It does take a bit of adjustment on the writer’s part but I’m not one of these writers who go crazy and says, “I described six buttons on the jacket and you put eight buttons on the jacket, you Hollywood idiots!” I’ve seen too many writers like that when I was on the other side, in Hollywood. When you work in television or film, it is a collaborative medium, and you have to allow the other collaborators to bring their own creative impulse to it, too.

Game of Thrones author George R.R. Martin, in an in-depth interview with Vanity Fair’s Jim Windolf, about the HBO show, his progress on completing the seven-book series, and working inside and outside Hollywood. Read more on Martin and Game of Thrones.