The American Worth Ethic

Getty / Photo Illustration by Longreads

Bryce Covert | Longreads | April 2019 | 13 minutes (3,374 words)

“The American work ethic, the motivation that drives Americans to work longer hours each week and more weeks each year than any of our economic peers, is a long-standing contributor to America’s success.” Thus reads the first sentence of a massive report the Trump administration released in July 2018. Americans’ drive to work ever harder, longer, and faster is at the heart of the American Dream: the idea, which has become more mythology than reality in a country with yawning income inequality and stagnating upward economic mobility, that if an American works hard enough she can attain her every desire. And we really try: We put in between 30 and 90 minutes more each day than the typical European. We work 400 hours more annually than the high-output Germans and clock more office time than even the work-obsessed Japanese.

The story of individual hard work is embedded into the very founding of our country, from the supposedly self-made, entrepreneurial Founding Fathers to the pioneers who plotted the United States’ western expansion; little do we acknowledge that the riches of this country were built on the backs of African slaves, many owned by the Founding Fathers themselves, whose descendants live under oppressive policies that continue to leave them with lower incomes and overall wealth and in greater poverty. We — the “we” who write the history books — would rather tell ourselves that the people who shaped our country did it through their own hard work and not by standing on the shoulders, or stepping on the necks, of others. It’s an easier story to live with. It’s one where the people with power and money have it because they deserve it, not because they took it, and where we each have an equal shot at doing the same.

Because for all our national pride in our puritanical work ethic, the ethic doesn’t apply evenly. At the highest income levels, wealthy Americans are making money passively, through investments and inheritances, and doing little of what most would consider “work.” Basic subsistence may soon be predicated on whether and how much a poor person works, while the rich count on tax credits and carve-outs designed to protect stockpiles of wealth created by money begetting itself. It’s the poor who are expected to work the hardest to prove that they are worthy of Americanness, or a helping hand, or humanity. At the same time, we idolize and imitate the rich. If you’re rich, you must have worked hard. You must be someone to emulate. Maybe you should even be president.

* * *

Trump has a long history of antipathy to the poor, a word which he uses as a synonym for “welfare,” which he understands only as a pejorative. When he and his father were sued by the Department of Justice in 1973 for discriminating against black tenants in their real estate business, he shot back that he was being forced to rent to “welfare recipients.” Nearly 40 years later, he called President Obama “our Welfare & Food Stamp President,” saying he “doesn’t believe in work.” He wrote in his 2011 book Time To Get Tough, “There’s nothing ‘compassionate’ about allowing welfare dependency to be passed from generation to generation.”

Perhaps. But Trump certainly knows about relying on things passed from generation to generation. His self-styled origin story is that he got his start with a “small” $1 million loan from his real estate tycoon father, Fred C. Trump, which he used to grow his own empire. “I built what I built myself,” he has claimed. “I did it by working long hours, and working hard and working smart.”

It’s an interesting interpretation of “myself”: A New York Times investigation in October reported that, instead, Trump has received at least $413 million from his father’s businesses over the course of his life. “By age 3, Mr. Trump was earning $200,000 a year in today’s dollars from his father’s empire. He was a millionaire by age 8. By the time he was 17, his father had given him part ownership of a 52-unit apartment building,” reporters David Barstow, Susanne Craig, and Russ Buettner wrote. “Soon after Mr. Trump graduated from college, he was receiving the equivalent of $1 million a year from his father. The money increased with the years, to more than $5 million annually in his 40s and 50s.” The Times found 295 different streams of revenue Fred created to enrich his son — loans that weren’t repaid, three trust funds, shares in partnerships, lump-sum gifts — much of it further inflated by reducing how much went to the government. Donald and his siblings helped their parents dodge taxes with sham corporations, improper deductions, and undervalued assets, helping evade levies on gifts and inheritances.

Even the money that was made squarely owed a debt to the government. Fred Trump nimbly rode the rising wave of federal spending on housing that began with the New Deal and continued with the G.I. Bill. “Fred Trump would become a millionaire many times over by making himself one of the nation’s largest recipients of cheap government-backed building loans,” the Times reported. Donald carried on this tradition of milking government subsidies to accumulate fortunes. He obtained at least $885 million in perfectly legal grants, subsidies, and tax breaks from New York to build his real estate business.

Someone could have taken this largesse and worked hard to grow it into something more, but Donald Trump was not that someone. Much of his fortune comes not from the down and dirty work of running businesses, but from slapping his name on everything from golf courses to steaks. Many of these deals entail merely licensing his name while a developer actually runs things. And as president, he still doesn’t seem inclined to clock much time doing actual work.

That hasn’t stopped him from putting work at the center of his administration’s poverty-related policies. In its lengthy tome, the White House Council of Economic Advisers argued for adding work requirements to a new universe of public benefits. These requirements, which up until the Trump administration only existed for direct cash assistance and food stamps, require a recipient not just to put in a certain number of hours at a job or some other qualifying activity, but to amass paperwork to prove those hours each month. The CEA report is focused, supposedly, on “the importance and dignity of work.” But the benefits of engaging in labor are only deemed important for a particular population: “welfare recipients who society expects to work.” Over and over, it takes for granted that our country only expects the poorest to work in order to prove themselves worthy of government funds, specifically targeting those who get food stamps to feed their families, housing assistance to keep roofs over their heads, and Medicaid to stay healthy.

* * *

The report doesn’t just represent an ethos in the administration; it was also a justification for concrete actions it had already taken and more it would soon roll out. Last April, Trump signed an executive order directing federal agencies to review public assistance programs to see whether they could impose work requirements unilaterally, in order to “ensure that they are consistent with principles that are central to the American spirit — work, free enterprise, and safeguarding human and economic resources,” as the document states, while also “reserving public assistance programs for those who are truly in need.”

The administration has also pushed forward on its own. In 2017, it announced that states could apply for waivers that would allow them to implement work requirements in Medicaid for the first time, and so far more than a dozen states have taken it up on the offer, with Arkansas’s rule in effect since June 2018. (It has now been halted by a federal judge.) In that state, Medicaid recipients had to spend 80 hours a month at work, school, or volunteering, and report those activities to the government in order to keep getting health insurance. And in April 2018, Housing and Urban Development Secretary Ben Carson unveiled a proposal to let housing authorities implement work requirements for public housing residents and rental assistance recipients. Trump pushed Congress to include more stringent work requirements in the food stamp program as it debated the most recent farm bill, arguing it would “get America back to work.” When that effort failed, the Agriculture Department turned around and proposed a rule to impose the requirements by itself.

These aren’t fiscal necessities — they’re crackdowns on the poor, justified by the idea that people should prove themselves worthy of the benefits that help them survive. That justification is not just cruel but out of step with real life. Most people who turn to public programs already work, and those who don’t often have good reason. More than 60 percent of people on Medicaid are working. They remain on Medicaid because their pay isn’t enough to keep them out of poverty, and many of the low-wage jobs they work don’t offer health insurance they can afford. Of those not working, most have either a physical impairment or conflicting responsibilities like school or caregiving.

Enrollment in food stamps tells the same story. Among the “work-capable” adults on food stamps, about two thirds work at some point during the year, while 84 percent live in a household where someone works. But low-wage work is often chaotic and unpredictable. Recipients are more likely to turn to food stamps during a spell of unemployment or too few hours, then stop when they resume steadier employment. Many of those who are supposedly capable of work but don’t have a job have a health barrier or live with someone who has one; they’re in school, they’re caring for family, or they just can’t find work in their community.

Work requirements, then, fail to account for the reality of poor people’s lives. It’s not that there’s a widespread lack of work ethic among people who earn the least, but that there’s a lack of steady pay and consistent opportunities that allow someone to sustain herself and her family without assistance. We also know work requirements just don’t work. They’ve existed in the Temporary Assistance for Needy Families cash-assistance program for decades, yet they don’t help people find meaningful, lasting work; instead they serve as a way to shove them out of programs they desperately need. The result is more poverty, not more jobs.

If this country were so concerned about helping people who might face barriers to working get jobs, we might not be the second-lowest among OECD member countries by percentage of GDP spent on labor-market programs like job-search assistance or retraining. The poor in particular face barriers like a lack of affordable childcare and reliable transportation, and could use education or training to reach for better-paid, more meaningful work. But we do little to extend these supports. Instead, we chastise them for not pulling on their frayed bootstraps hard enough.

We also seem content with the notion that a person who doesn’t work — either out of inability or refusal — doesn’t deserve the building blocks of staying alive. The programs Trump is targeting, after all, are about basic needs: housing to stay safe from the elements, food to keep from going hungry, healthcare to receive treatment and avoid dying of neglect. Even if it were true that there was a horde of poor people refusing to work, do we want to condemn them to starvation and likely death? In one of the world’s richest countries, do we really balk at spending money on keeping our people — even lazy ones — alive?

Plenty of other countries don’t operate this way. Single mothers experience higher rates of destitution than coupled parents or people without children all over the world. But the higher poverty rate in the U.S. as compared to other developed countries isn’t because we have more single mothers; instead, it’s because we do so little to help them. Compare us to Denmark, which gives parents unconditional cash benefits for each of their children regardless of whether or how much they work, on top of generously subsidizing childcare, offering universal health coverage, and guaranteeing paid leave. It’s no coincidence that Denmark also has a lower poverty rate, both generally and for single mothers specifically. A recent examination of poverty across countries found that children are at higher risk in the U.S. because we have a sparse social safety net that’s so closely tied to demanding that people work. It makes us an international outlier, the world’s miser that only opens a clenched fist to the poor if they’re willing to demonstrate their worthiness first.

Here, too, America’s history of slavery and ongoing racism rears its head. According to a trio of renowned economists, we don’t have a European-style social safety net because “racial animosity in the U.S. makes redistribution to the poor, who are disproportionately black, unappealing to many voters.” White people turn against funding public benefit programs when they feel their racial status threatened, particularly benefits they (falsely) believe mainly accrue to black people. The black poor are seen as the most undeserving of help and most in need of proving their worthiness to get it. States with larger percentages of black residents, for example, focus less on TANF’s goal of providing cash to the needy and have stingier benefits with higher hurdles to enrollment.

* * *

The CEA’s report on work requirements claimed that being an adult who doesn’t work is particularly prevalent among “those living in low-income households.” But that’s debatable. The more income someone has, the less likely he is to be getting it from wages. In 2012, those earning less than $25,000 a year made nearly three quarters of that money from a job. Those making more than $10 million, on the other hand, made about half of their money from capital gains — in other words, returns on investments. The bottom half of the country has, on average, just $826 each in income from capital investments; the average for those in the top 1 percent is more than $16 million.

The richest are the least likely to have their money come from hard labor — yet there’s no moral panic over whether they’re coddled or lacking in self-reliance. Instead, government benefits help the rich protect and grow idle wealth. Capital gains and dividends are taxed at a lower rate than regular salaried income. Inheritances were taxed at an average rate of 4 percent in 2009, compared to the average rate of 18 percent for money earned by working and saving. When investments are bequeathed, the recipient owes no taxes on any asset appreciation.


In fact, government tax benefits that increase people’s take-home money at the expense of what the government collects for its own coffers overwhelmingly benefit the rich over the poor (or even the middle class). More than 60 percent of the roughly $900 billion in annual tax expenditures goes to the richest 20 percent of American families. That figure dwarfs what the government expends on many public benefit programs. The government spends more than three times as much on tax subsidies for homeowners, mostly captured by the well-to-do, as it does on rental assistance for the poor. The three benefit programs the Trump administration is concerned with — Medicaid, food stamps, and housing assistance — come to about $705 billion in combined spending.

While the administration has been concerned with what it can do to compel the poor to work, it’s handed out more largesse to the idle rich. Its signature tax-cut package, the Tax Cuts and Jobs Act, offered an extra cut for so-called “pass-through” businesses, like law or real estate firms. But the fine print included a wrinkle: If someone is considered actively involved in his pass-through business, only 30 percent of his earnings qualify for the new discount. If someone is passively involved, however — a shareholder who doesn’t do much of the day-to-day work of the company — then he gets 100 percent of the new benefit.

Then there’s the law’s significant lowering of the estate tax. The tax is levied on only the biggest, most valuable inheritances passed down from wealthy parent to newly wealthy child. Before the Republicans’ tax bill, only the richest 0.2 percent of estates had to pay the tax when fortunes changed hands. Now it’s just the richest 0.1 percent, or a mere 1,800 very wealthy families worth more than $22 million. The rest get to pass money to their heirs tax-free. Those who do pay it will be paying less when tax time comes due — $4.4 million less, to be exact.

Despite the Republican rhetoric that lowering the estate tax is about saving family farms, it’s really about allowing an aristocracy to calcify — one in which rich parents ensure their children are rich before they lift a single finger in work. As those heirs receive their fortunes, they also receive the blessing that comes with riches: the halo of success and, therefore, deservedness without having to work to prove it. Yet there’s evidence that increasing taxes on inheritances has the potentially salutary effect of getting heirs to work more. The more their inheritances are taxed, the more they end up paying in labor taxes — evidence that they’re working harder for their livings, not just coasting on generational wealth. Perhaps our tax code could encourage rich heirs to experience the dignity of work.

* * *

Trump’s CEA report is accurate about at least one thing: Our country has a history of only offering public benefits to the poor either deemed worthy through their work or exempt through old age or disability. An outlier was the Aid to Families with Dependent Children program, which became Temporary Assistance for Needy Families after Bill Clinton signed welfare reform into law in the ’90s. But the 1996 transformation of the program took what was a promise of cash for poor mothers and changed it into an obstacle course of proving a mother’s worth before she can get anywhere close to a check. It paved the way for the current administration’s obsession with work requirements.

Largesse for the rich, on the other hand, has rarely included such tests. No one has been made to pee in a cup for tax breaks on their mortgages, which cost as much as the food stamp program but overwhelmingly benefit families that earn more than $100,000. No one has had to prove a certain number of work hours to get a lower tax rate on investment income or an inheritance. They get that discount on their money without having to do any work at all.

We haven’t always been so extreme in our dichotomous treatment of the rich and poor; throughout the 1940s, ’50s, and ’60s, we coupled high marginal taxes on the wealthy with a minimum wage that ensured that people who put in full-time work could rise out of poverty. The estate tax has been as high as 77 percent. As Dutch historian Rutger Bregman recently told an audience of the ultrawealthy at Davos, we’re living proof that high taxes can spread shared prosperity. “The United States, that’s where it has actually worked, in the 1950s, during Republican President Eisenhower,” he pointed out. “This is not rocket science.” It was during the same era that we also created significant anti-poverty programs such as Social Security, Medicare, and Medicaid. In fact, this country pioneered the idea of progressive taxation and has always had some form of tax on inheritance to avoid creating an aristocracy. But we’ve papered over that history as tax rates have cratered and poverty has climbed.

Instead, as Reaganomics and neoliberal ideas took hold of our politics, we turned back to the Horatio Alger myth that success is attained on an individual basis by hard work alone, and that riches are the proof of a dogged drive. Lower tax rates naturally follow under the theory that the rich should keep more of their deserved bounty. And if you’re poor, coming to the government seeking a helping hand up, you failed.

The country is due for a reckoning with our obsession with work. There are certainly financial and emotional benefits that come from having a job. But why are we only concerned with whether the poor reap those benefits? Is working ourselves to the bone the best signifier of our worth — and are there basic elements of life that we should guarantee regardless of work? It doesn’t mean dropping all emphasis on work ethic. But it does require a deeper examination of who we expect to work — and why.

* * *

Bryce Covert is an independent journalist writing about the economy and a contributing op-ed writer at The New York Times.

Editor: Michelle Weber
Fact checker: Ethan Chiel
Copy editor: Jacob Z. Gross   

Orwell’s Last Neighborhood

Barnhill on the Isle of Jura, Scotland. (David Brown)

David Brown | The American Scholar | April 2019 | 23 minutes (5,796 words)

It’s hard to know what would be a good place from which to imagine a future of bad smells and no privacy, deceit and propaganda, poverty and torture. Does a writer need to live in misery and ugliness to conjure up a dystopia?

Apparently not.

We’d been walking more than an hour. The road was two tracks of pebbled dirt separated by a strip of grass. The land was treeless as prairie, with wildflowers and the seedless tops of last year’s grass smudging the new growth.

We rounded a curve and looked down a hillside to the sea. A half mile in the distance, far back from the water, was a white house with three dormer windows. Behind it, a stone wall cut a diagonal to the water like a seam stitching mismatched pieces of green velvet. Far to the right, a boat moved along the shore, its sail as bright as the house.

This was where George Orwell wrote Nineteen Eighty-Four. The house, called Barnhill, sits near the northern end of Jura, an island off Scotland’s west coast in the Inner Hebrides. It was June 2, sunny, short-sleeve warm, with the midges barely out, and couldn’t have been more beautiful.

Orwell lived here for parts of the last three years of his life. He left periodically (mostly in the winter) to do journalism in London and, for seven months in 1947 and 1948, to undergo treatment for pulmonary tuberculosis. Although he rented Barnhill and didn’t own it, he put in fruit trees and a garden, built a chicken house, bought a truck and a boat, and invested numberless hours of labor in what he believed would be his permanent home. When he left it for the last time, in January 1949, he never again lived outside a sanatorium or hospital.

I came to Jura after a two-week backpacking trip across Scotland. My purpose was to drink single-malt on Islay, the island to the south, and enjoy two nights of indulgence at Ardlussa House, where Orwell’s landlord had lived. I was not on a literary pilgrimage. Barnhill is not open to the public, and no one among the island’s 235 residents remembers Orwell.

But You Look Fine: A Reading List About Disabilities, Accommodations, and School

Getty Images

During my freshman year of college, a series of unexpected neurological episodes ruptured my conception of how I moved through the world. I fainted one evening after track practice and began experiencing episodes of dizziness, blurred vision, and what the doctors would label as “aphasia” and “transient alteration of awareness,” medical terms that tried to characterize the way I would say the same word over and over unintentionally (“I, I, I, I, uh, I, I, I”) and lose memory of what had happened while I was incoherent.

I was a Division I athlete at the time, a runner. My identity in athletics and in school centered around perfectionism; I enjoyed running to hit a precise list of splits and I brought the same ceaseless work ethic to the classroom. I measured success in straight-A’s and faster times. But once my episodes began, my illusion of control eroded. I was no longer able to run without falling, and my schoolwork, which had been a joy all my life, was interrupted by my own body with periods of disorientation that lasted for hours. Though I saw a neurologist frequently, he was unable to give me a diagnosis.

On Flooding: Drowning the Culture in Sameness

A 37-meter-long floating sculpture by U.S. artist Kaws in Victoria Harbor, Hong Kong, March 2019. (Imaginechina via AP Images)

Soraya Roberts | Longreads | March 2019 | 7 minutes (2,006 words)

In 1995, the Emmy nominees for Best Drama were Chicago Hope, ER, Law & Order, NYPD Blue, and The X-Files. In 1996, the Emmy nominees for Best Drama were Chicago Hope, ER, Law & Order, NYPD Blue, and The X-Files. In 1997, the Emmy nominees for Best Drama were Chicago Hope, ER, Law & Order, NYPD Blue, and The X-Files. That is: Two cop shows set in New York, two medical shows set in Chicago, and some aliens, spread across four networks, represented the height and breadth of the art form for three years running.

I literally just copied that entire first paragraph from a Deadspin article written by Sean T. Collins. It appeared last week, when every site seemed to be writing about Netflix. His was the best piece. Somehow, within that flood of Netflix content, everyone found that article — it has almost 300,000 page views. I may as well have copied it for all the traffic my actual column — which was not about Netflix — got.

There was definitely a twang of why bother? while I was writing last week, just as there is every week. Why bother, and Jesus Christ, why am I not faster? The web once made something of a biblical promise to give all of us a voice, but in the ensuing flood — and the ensuing floods after that — only a few bobbed to the top. With increased diversity, this hasn’t changed — there are more diverse voices, but the same ones float up each time. There remains a tension that critics, and the larger media, must balance, reflecting what’s in the culture in all its repetitive glory while also nudging it toward the future. But we are repeatedly failing at this by repeatedly drowning ourselves in the first part. This is flooding (a term I just coined, so I would know): the practice of unleashing a mass torrent of the same stories by the same storytellers at the same time, making it almost impossible for anyone but the same select few to rise to the surface.

Queens of Infamy: Josephine Bonaparte, from Martinique to Merveilleuse

Illustration by Louise Pomeroy

Anne Thériault | Longreads | March 2019 | 22 minutes (5,569 words)

From the notorious to the half-forgotten, Queens of Infamy, a Longreads series by Anne Thériault, focuses on badass world-historical women of centuries past.

* * *

In 1768, a 15-year-old girl traveled to the hills near her family home in Martinique to visit a local wise woman. Desperately curious to know what her future held, the girl handed a few coins to the Afro-Caribbean obeah, Euphémie David, in exchange for a palm reading. Euphémie obligingly delivered an impressive-sounding prediction: the girl would marry twice — first, unhappily, to a family connection in France, and later to a “dark man of little fortune.” This second husband would achieve undreamed of glory and triumph, rendering her “greater than a queen.” But before the girl had time to gloat over her thrilling fate, Euphémie delivered a parting blow: in spite of her incredible success, the girl would die miserable, filled with regret, pining for the “easy, pleasant life” of her childhood. This prophecy would stay with the girl for the rest of her life, and she would think of it often — sometimes with fervent hope, sometimes with despair, always with unwavering belief that it would come true.

That girl was the future Empress Josephine Bonaparte. Everything Euphémie predicted would come to pass, but young Josephine could not have imagined the events that would propel her to her zenith: the rise through Paris society, the cataclysm of the French Revolution, the brutal imprisonment during the Reign of Terror, the transformation into an infamous Merveilleuse, the pivotal dinner at her lover’s house where she would meet her second husband.

She wouldn’t even have recognized the name Josephine — that sobriquet would be bestowed by Napoleon some 18 years hence. The wide-eyed teenager who asked Euphémie to tell her fortune still went by her childhood nickname, Yeyette.


How the Guardian Went Digital

Newscast Limited via AP Images

Alan Rusbridger | Breaking News | Farrar, Straus and Giroux | November 2018 | 31 minutes (6,239 words)

 

In 1993 some journalists began to be dimly aware of something clunkily referred to as “the information superhighway,” but few had ever had reason to see it in action. At the start of 1995 only 491 newspapers were online worldwide; by June 1997 that had grown to some 3,600.

In the basement of the Guardian was a small team created by editor in chief Peter Preston — the Product Development Unit, or PDU. The inhabitants were young and enthusiastic. None of them were conventional journalists: I think the label might be “creatives.” Their job was to think of new things that would never occur to the largely middle-aged reporters and editors three floors up.

The team — eventually rebranding itself as the New Media Lab — started casting around for the next big thing. They decided it was the internet. The creatives had a PC actually capable of accessing the world wide web. They moved in hipper circles. And they started importing copies of a new magazine, Wired — the so-called Rolling Stone of technology — which had started publishing in San Francisco in 1993, along with the HotWired website. “Wired described the revolution,” it boasted. “HotWired was the revolution.” It was launched in the same month the Netscape team was beginning to assemble. Only 18 months later Netscape was worth billions of dollars. Things were moving that fast.

In time, the team in PDU made friends with three of the people associated with Wired. They were the founders, Louis Rossetto and Jane Metcalfe, and the columnist Nicholas Negroponte, who was based at the Massachusetts Institute of Technology and who wrote mindblowing columns predicting such preposterous things as wristwatches which would “migrate from a mere timepiece today to a mobile command-and-control center tomorrow . . . an all-in-one, wrist-mounted TV, computer, and telephone.”

As if.

Both Rossetto and Negroponte were, in their different ways, prophets. Rossetto was a hot booking for TV talk shows, where he would explain to baffled hosts what the information superhighway meant. He’d tell them how smart the internet was, and how ethical. Sure, it was a “dissonance amplifier.” But it was also a “driver of the discussion” towards the real. You couldn’t mask the truth in this new world, because someone out there would weigh in with equal force. Mass media was one-way communication. The guy with the antenna could broadcast to billions, with no feedback loop. He could dominate. But on the internet every voice was going to be equal to every other voice.

“Everything you know is wrong,” he liked to say. “If you have a preconceived idea of how the world works, you’d better reconsider it.”

Negroponte, 50-something, East Coast gravitas to Rossetto’s Californian drawl, was working on a book, Being Digital, and was equally passionate in his evangelism. His mantra was to explain the difference between atoms — which make up the physical artifacts of the past — and bits, which travel at the speed of light and would be the future. “We are so unprepared for the world of bits . . . We’re going to be forced to think differently about everything.”

I bought the drinks and listened.

Over dinner in a North London restaurant, Negroponte started with convergence — the melting of all boundaries between TV, newspapers, magazines, and the internet into a single media experience — and moved on to the death of copyright, possibly the nation state itself. There would be virtual reality, speech recognition, personal computers with inbuilt cameras, personalized news. The entire economic model of information was about to fall apart. The audience would pull rather than wait for old media to push things as at present. Information and entertainment would be on demand. Overly hierarchical and status-conscious societies would rapidly erode. Time as we knew it would become meaningless — five hours of music would be delivered to you in less than five seconds. Distance would become irrelevant. A UK paper would be as accessible in New York as it was in London.

Writing 15 years later in the Observer, the critic John Naughton compared the disruption wrought by the begetter of the world wide web, Sir Tim Berners-Lee, with the seismic disruption caused five centuries earlier by the invention of movable type. Just as Gutenberg had no conception of his invention’s eventual influence on religion, science, systems of ideas, and democracy, so — in 2008 — “it will be decades before we have any real understanding of what Berners-Lee hath wrought.”

And so I decided to go to America with the leader of the PDU team, Tony Ageh, and see the internet for myself. A 33-year-old “creative,” Ageh had had exactly one year’s experience in media — as an advertising copy chaser for The Home Organist magazine — before joining the Guardian. I took with me a copy of The Internet for Dummies. Thus armed, we set off to America for a four-day, four-city tour.

In Atlanta, we found the Atlanta Journal-Constitution (AJC), which was considered a thought leader in internet matters, having joined the Prodigy Internet Service, an online service offering subscribers information over dial-up 1,200 bit/second modems. After four months the internet service had 14,000 members, paying 10 cents a minute to access online banking, messaging, full webpage hosting and live share prices.

The AJC business plan envisaged building to 35,000 or 40,000 subscribers by year three. By that time, they calculated, they would be earning $3.3 million in subscription fees and $250,000 a year in advertising. “If it all goes to plan,” David Scott, the publisher of the Electronic Information Service, told us, “it’ll be making good money. If it goes any faster, this is a real business.”

We also met Michael Gordon, the managing editor. “The appeal to the management is, crudely, that it is so much cheaper than publishing a newspaper,” he said.

We wrote it down.

“We know there are around 100,000 people in Atlanta with PCs. There are, we think, about one million people wealthy enough to own them. Guys see them as a toy; women see them as a tool. The goldmine is going to be the content, which is why newspapers are so strongly placed to take advantage of this revolution. We’re out to maximize our revenue by selling our content any way we can. If we can sell it on CD-ROM or TV as well, so much the better.”

“Papers? People will go on wanting to read them, though it’s obviously much better for us if we can persuade them to print them in their own homes. They might come in customized editions. Edition 14B might be for females living with a certain income.”

It was heady stuff.

From Atlanta we hopped up to New York to see the Times’s online service, @Times. We found an operation consisting of an editor plus three staffers and four freelancers. The team had two PCs, costing around $4,000 each. The operation was confident, but small.

The @Times content was weighted heavily towards arts and leisure. The opening menus offered a panel with about 15 reviews of the latest films, theatre, music, and books – plus book reviews going back two years. The site offered the top 15 stories of the day, plus some sports news and business.

There was a discussion forum about movies, with 47 different subjects being debated by 235 individual subscribers. There was no archive due to the fact that — in one of the most notorious newspaper licensing cock-ups in history — the NYT in 1983 had given away all rights to its electronic archive (for all material more than 24 hours old) in perpetuity to Mead/Lexis.

That deal alone told you how nobody had any clue what was to come.

We sat down with Henry E. Scott, the group director of @Times. “Sound and moving pictures will be next. You can get them now. I thought about it the other day, when I wondered about seeing 30 seconds of The Age of Innocence. But then I realized it would take 90 minutes to download that and I could have seen more or less the whole movie in that time. That’s going to change.”

But Scott was doubtful about the lasting value of what they were doing — at least, in terms of news. “I can’t see this replacing the newspaper,” he said confidently. “People don’t read computers unless it pays them to, or there is some other pressing reason. I don’t think anyone reads a computer for pleasure. The San Jose Mercury [News] has put the whole newspaper online. We don’t think that’s very sensible. It doesn’t make sense to offer the entire newspaper electronically.”

We wrote it all down.

“I can’t see the point of news on-screen. If I want to know about a breaking story I turn on the TV or the radio. I think we should only do what we can do better than in print. If it’s inferior to the print version there’s no point in doing it.”

Was there a business plan? Not in Scott’s mind. “There’s no way you can make money out of it if you are using someone else’s server. I think the LA Times expects to start making money in about three years’ time. We’re treating it more as an R & D project.”


From New York we flitted over to Chicago to see what the Tribune was up to. In its 36-storey Art Deco building — a spectacular monument to institutional self-esteem — we found a team of four editorial and four marketing people working on a digital service, with the digital unit situated in the middle of the newsroom. The marketeers were beyond excited about the prospect of being able to show houses or cars for sale and arranged a demonstration. We were excited, too, even if the pictures were slow and cumbersome to download.

We met Joe Leonard, associate editor. “We’re not looking at Chicago Online as a money maker. We’ve no plans even to break even at this stage. My view is simply that I’m not yet sure where I’m going, but I’m on the boat, in the water — and I’m ahead of the guy who is still standing on the pier.”

Reach before revenue.

Finally we headed off to Boulder, Colorado, in the foothills of the Rockies, where Knight Ridder had a team working on their vision of the newspaper of tomorrow. The big idea was, essentially, what would become the iPad — only the team in Boulder hadn’t got much further than making an A4 block of wood with a “front page” stuck on it. The 50-something director of the research centre, Roger Fidler, thought the technology capable of realizing his dream of a “personal information appliance” was a couple of years off.

Tony and I had filled several notebooks. We were by now beyond tired and talked little over a final meal in an Italian restaurant beneath the Rocky Mountains.

We had come. We had seen the internet. We were conquered.

* * *

Looking back from the safe distance of nearly 25 years, it’s easy to mock the fumbling, wildly wrong predictions about where this new beast was going to take the news industry. We had met navigators and pioneers. They could dimly glimpse where the future lay. Not one of them had any idea how to make a dime out of it, but at the same time they intuitively sensed that it would be more reckless not to experiment. It seemed reasonable to assume that — if they could be persuaded to take the internet seriously — their companies would dominate in this new world, as they had in the old world.

We were no different. After just four days it seemed blindingly obvious that the future of information would be mainly digital. Plain old words on paper — delivered expensively by essentially Victorian production and distribution methods — couldn’t, in the end, compete. The future would be more interactive, more image-driven, more immediate. That was clear. But how on earth could you graft a digital mindset and processes onto the stately ocean liner of print? How could you convince anyone that this should be a priority when no one had yet worked out how to make any money out of it? The change, and therefore the threat, was likely to happen rapidly and maybe violently. How quickly could we make a start? Or was this something that would be done to us?

In a note for Peter Preston on our return I wrote, “The internet is fascinating, intoxicating . . . it is also crowded out with bores, nutters, fanatics and middle managers from Minnesota who want the world to see their home page and CV. It’s a cacophony, a jungle. There’s too much information out there. We’re all overloaded. You want someone you trust to fillet it, edit it and make sense of it for you. That’s what we do. It’s an opportunity.”

I spent the next year trying to learn more, and then the calendar clicked on to 1995 — The Year the Future Began, at least according to the cultural historian W. Joseph Campbell, who used the phrase as the title of a book twenty years later. It was the year Amazon.com, eBay, Craigslist, and Match.com established their presence online. Microsoft spent $300m launching Windows 95 with weeks of marketing hype, paying millions for the rights to the Rolling Stones hit “Start Me Up,” which became the launch’s anthem.

Cyberspace — as the cyber dystopian Evgeny Morozov recalled, looking back on that period — felt like space itself. “The idea of exploring cyberspace as virgin territory, not yet colonized by governments and corporations, was romantic; that romanticism was even reflected in the names of early browsers (‘Internet Explorer,’ ‘Netscape Navigator’).”

But, as Campbell was to reflect, “no industry in 1995 was as ill-prepared for the digital age, or more inclined to pooh-pooh the disruptive potential of the Internet and World Wide Web, than the news business.” It suffered from what he called “innovation blindness” — “an inability, or a disinclination to anticipate and understand the consequences of new media technology.”

1995 was, then, the year the future began. It happened also to be the year in which I became editor of the Guardian.

* * *

I was 41 and had not, until very recently, really imagined this turn of events. My journalism career took a traditional enough path. A few years reporting; four years writing a daily diary column; a stint as a feature writer — home and abroad. In 1986 I left the Guardian to be the Observer’s television critic. When I rejoined the Guardian I was diverted towards a route of editing — launching the paper’s Saturday magazine followed by a daily tabloid features section and moving to be deputy editor in 1993. Peter Preston — unshowy, grittily obstinate, brilliantly strategic — looked as if he would carry on editing for years to come. It was a complete surprise when he took me to the basement of the resolutely unfashionable Italian restaurant in Clerkenwell he favored, to tell me he had decided to call it a day.

On most papers the proprietor or chief executive would find an editor and take him or her out to lunch to do the deal. On the Guardian — at least according to tradition dating back to the mid-70s — the Scott Trust made the decision after balloting the staff, a process that involved manifestos, pub hustings, and even, by some candidates, a little frowned-on campaigning.

I supposed I should run for the job. My mission statement said I wanted to boost investigative reporting and get serious about digital. It was, I fear, a bit Utopian. I doubt much of it impressed the would-be electorate. British journalists are programmed to skepticism about idealistic statements concerning their trade. Nevertheless, I won the popular vote and was confirmed by the Scott Trust after an interview in which I failed to impress at least one Trustee with my sketchy knowledge of European politics. We all went off for a drink in the pub round the back of the office. A month later I was editing.

“Fleet Street,” as the UK press was collectively called, was having a torrid time, not least because the biggest beast in the jungle, Rupert Murdoch, had launched a prolonged price war that was playing havoc with the economics of publishing. His pockets were so deep he could afford to slash the price of The Times almost indefinitely — especially if it forced others out of business.

Reach before revenue — as it wasn’t known then.

The newest kid on the block, the Independent, was suffering the most. To their eyes, Murdoch was behaving in a predatory way. We calculated the Independent titles were losing around £42 million (nearly £80 million in today’s money). Murdoch’s Times, by contrast, had seen its sales rocket 80 per cent by cutting its cover prices to below what it cost to print and distribute. The circulation gains had come at a cost — about £38 million in lost sales revenue. But Murdoch’s TV business, BSkyB, was making booming profits and the Sun continued to throw off huge amounts of cash. He could be patient.

The Telegraph had been hit hard — losing £45 million in circulation revenues through cutting the cover price by 18 pence. The end of the price war left it slowly clawing back lost momentum, but it was still £23 million adrift of where it had been the previous year. Murdoch — as so often — had done something bold and aggressive. Good for him, not so good for the rest of us. Everyone was tightening their belts in different ways. The Independent effectively gave up on Scotland. The Guardian saved a million a year in newsprint costs by shaving half an inch off the width of the paper.

The Guardian, by not getting into the price war, had “saved” around £37 million it would otherwise have lost. But its circulation had been dented by about 10,000 readers a day. Moreover, the average age of the Guardian reader was 43 — something that preoccupied us rather a lot. We were in danger of having a readership too old for the job advertisements we carried.

Though the Guardian itself was profitable, the newspaper division was losing nearly £12 million (north of £21 million today). The losses were mainly due to the sister Sunday title, the Observer, which the Scott Trust had purchased as a defensive move against the Independent in 1993. The Sunday title had a distinguished history, but was hemorrhaging cash: £11 million in losses.

Everything we had seen in America had to be put on hold for a while. The commercial side of the business never stopped reminding us that only three percent of households owned a PC and a modem.

* * *

But the digital germ was there. My love of gadgets had not extended to understanding how computers actually worked, so I commissioned a colleague to write a report telling me, in language I could understand, how our computers measured up against what the future would demand. The Atex system we had installed in 1987 gave everyone a dumb terminal on their desk — little more than a basic word processor. It couldn’t connect to the internet, though there was a rudimentary internal messaging system. There was no word count or spellchecker and storage space was limited. It could not be used with floppy disks or CD-ROMs. Within eight years of purchase it was already a dinosaur.

There was one internet connection in the newsroom, though most reporters were unaware of it. It was rumored that downstairs a bloke called Paul in IT had a Mac connected to the internet through a dial-up modem. Otherwise we were sealed off from the outside world.

Some of these journalist geeks began to invent Heath Robinson solutions to make the inadequate kit in Farringdon Road do the things we wanted, in order to produce a technology website. Tom Standage — he later became deputy editor of the Economist, but then was a freelance tech writer — wrote some scripts to take articles out of Atex and format them into HTML so they could be moved onto the modest Mac web server — our first content management system, if you like. If too many people wanted to read the tech site at once, the system crashed. So Standage and the site’s editor, Azeem Azhar, would take it in turns sitting in the server room in the basement of the building rebooting the machines by hand — unplugging them and physically moving the internet cables from one machine to another.
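
Rusbridger doesn’t describe how those conversion scripts actually worked, and the Atex export format isn’t documented here. Purely as an illustration of the kind of task involved, a minimal modern sketch in Python (assuming, hypothetically, a plain-text article file with a headline on the first line, a byline on the second, a blank line, and then body paragraphs) might look something like this:

    # Illustrative sketch only: the real scripts, export format, and server
    # setup are not described in the text. Assumes a plain-text article file
    # (headline, byline, blank line, body paragraphs) and writes a minimal
    # static HTML page next to it.
    import html
    import pathlib
    import sys

    def article_to_html(text: str) -> str:
        headline, byline, _blank, *body = text.splitlines()
        paragraphs = "\n".join(
            f"<p>{html.escape(p)}</p>" for p in body if p.strip()
        )
        return (
            f"<html><head><title>{html.escape(headline)}</title></head><body>\n"
            f"<h1>{html.escape(headline)}</h1>\n"
            f"<p><em>{html.escape(byline)}</em></p>\n"
            f"{paragraphs}\n"
            "</body></html>\n"
        )

    if __name__ == "__main__":
        # Usage: python convert.py article1.txt article2.txt ...
        for name in sys.argv[1:]:
            src = pathlib.Path(name)
            dst = src.with_suffix(".html")
            dst.write_text(article_to_html(src.read_text()), encoding="utf-8")
            print(f"wrote {dst}")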

What would the future look like? We imagined personalized editions, even if we had not the faintest clue how to produce them. We guessed that readers might print off copies of the Guardian in their homes — and even toyed with the idea of buying every reader a printer. There were glimmers of financial hope. Our readers were spending £56 million a year buying the Guardian but we retained none of it: the money went on paper and distribution. In the back of our minds we ran calculations about how the economics of newspapers would change if we could save ourselves the £56 million a year “old world” cost.

On top of editing, the legal entanglements sometimes felt like a full-time job on their own. Trying to engineer a digital future for the Guardian felt like a third job. There were somehow always more urgent issues. By March 1996, ideas we’d hatched in the summer of 1995 to graft the paper onto an entirely different medium were already out of date. That was a harbinger of the future. No plans in the new world lasted very long.

It was now apparent that we couldn’t get away with publishing selective parts of the Guardian online. Other newspapers had shot that fox by pushing out everything. We were learning about the connectedness of the web — and the IT team tentatively suggested that we might use some “offsite links” to other versions of the same story to save ourselves the need to write our own version of everything. This later became the mantra of the City University of New York (CUNY) digital guru Jeff Jarvis — “Do what you do best, and link to the rest.”

We began to grapple with numerous basic questions about the new waters into which we were gingerly dipping our toes.

Important question: Should we charge?

The Times and the Telegraph were both free online. A March 1996 memo from Bill Thompson, a developer who had joined the Guardian from Pipex, ruled it out:

I do not believe the UK internet community would pay to read an online edition of a UK newspaper. They may pay to look at an archive, but I would not support any attempt to make the Guardian a subscription service online . . . It would take us down a dangerous path.

In fact, I believe that the real value from an online edition will come from the increased contact it brings with our readers: online newspapers can track their readership in a way that print products never can, and the online reader can be a valuable commodity in their own right, even if they pay nothing for the privilege.

Thompson was prescient about how the overall digital economy would work — at least for players with infinitely larger scale and vastly more sophisticated technology.

What time of day should we publish?

The electronic Telegraph was published at 8 a.m. each day — mainly because of its print production methods. The Times, more automated, was available as soon as the presses started rolling. The Guardian started making some copy available from first edition through to the early hours. It would, we were advised, be fraught with difficulties to publish stories at the same time they were ready for the press.

Why were we doing it anyway?

Thompson saw the dangers of cannibalization: that readers would stop buying the paper if they could read it for free online. Then again, going online could be seen as a form of marketing. His memo seemed ambivalent as to whether we should venture into this new world at all:

The Guardian excels in presenting information in an attractive easy to use and easy to navigate form. It is called a “broadsheet newspaper.” If we try to put the newspaper on-line (as the Times has done) then we will just end up using a new medium to do badly what an old medium does well. The key question is whether to make the Guardian a website, with all that entails in terms of production, links, structure, navigational aids etc. In summer 1995 we decided that we would not do this.

But was that still right a year later? By now we had the innovation team — PDU — still in the basement of one building in Farringdon Road, and another team in a Victorian loft building across the way in Ray Street. We were, at the margins, beginning to pick up some interesting fringe figures who knew something about computers, if not journalism. But none of this was yet pulling together into a coherent picture of what a digital Guardian might look like.

An 89-page business plan drawn up in October 1996 made it plain where the priorities lay: print.

We wanted to keep growing the Guardian circulation — aiming for a modest increase to 415,000 by March 2000, which would make us the ninth-biggest paper in the UK — with the Observer aiming for 560,000 with the aid of additional sections. A modest investment of £200,000 a year in digital was dwarfed by an additional £6 million cash injection into the Observer, spread over three years.

As for “on-line services” (we were still hyphenating it) we did want “a leading-edge presence” (whatever that meant), but essentially we thought we had to be there because we had to be there. By being there we would learn and innovate and — surely? — there were bound to be commercial opportunities along the road. It wasn’t clear what.

We decided we might usefully take broadcasting, rather than print, as a model — emulating its “immediacy, movement, searchability and layering.”

If this sounded as if we were a bit at sea, we were. We hadn’t published much digitally to this point. We had taken half a dozen meaty issues — including parliamentary sleaze, and a feature on how we had continued to publish on the night our printing presses had been blown up by the IRA — and turned them into special reports.

It is a tribute to our commercial colleagues that they managed to pull in the thick end of half a million pounds to build these websites. Other companies’ marketing directors were presumably like ours — anxious about the youth market and keen for their brands to feel “cool.” In corporate Britain in 1996, there was nothing much cooler than the internet, even if not many people had it, knew where to find it or understood what to do with it.

* * *

The absence of a controlling owner meant we could run the Guardian in a slightly different way from some papers. Each day began with a morning conference open to anyone on the staff. In the old Farringdon Road office, it was held around two long narrow tables in the editor’s office — perhaps 30 or 40 people sitting or standing. When we moved to our new offices at Kings Place, near Kings Cross in North London, we created a room that was, at least theoretically, less hierarchical: a horseshoe of low yellow sofas with a further row of stools at the back. In this room would assemble a group of journalists, tech developers and some visitors from the commercial departments every morning at about 10 a.m. If it was a quiet news day we might expect 30 or so. On big news days, or with an invited guest, we could host anything up to 100.

A former Daily Mail journalist, attending his first morning conference, muttered to a colleague in the newsroom that it was like Start the Week — a Monday morning BBC radio discussion program. All talk and no instructions. In a way, he was right: It was difficult, in conventional financial or efficiency terms, to justify 50 to 60 employees stopping work to gather together each morning for anything between 25 and 50 minutes. No stories were written during this period, no content generated.

But something else happened at these daily gatherings. Ideas emerged and were kicked around. Commissioning editors would pounce on contributors and ask them to write the thing they’d just voiced. The editorial line of the paper was heavily influenced, and sometimes changed, by the arguments we had. The youngest member of staff would be in the same room as the oldest: They would be part of a common discussion around news. By a form of accretion and osmosis an idea of the Guardian was jointly nourished, shared, handed down, and crafted day by day.

It led to a very strong culture. You might love the Guardian or despise it, but it had a definite sense of what it believed in and what its journalism was. It could sometimes feel like an intimidating meeting — even for, or especially for, the editor. The culture was intended to be one of challenge: If we’d made a wrong decision, or slipped up factually or tonally, someone would speak up and demand an answer. But challenge was different from blame: It was not a meeting for dressing downs or bollockings. If someone had made an error the previous day, we’d have a post-mortem or unpleasant conversation outside the room. We’d encourage people to want to contribute to this forum, not make them fear disapproval or denunciation.

There was a downside to this. It could, and sometimes did, lead to a form of groupthink. However herbivorous the culture we tried to nurture, I was conscious of some staff members who felt awkward about expressing views outside what we hoped was a fairly broad consensus. But, more often, there would be a good discussion on two or three of the main issues of the day. We encouraged specialists or outside visitors to come in and discuss breaking stories. Leader writers could gauge the temperature of the paper before penning an editorial. And, from time to time, there would be the opposite of consensus: Individuals, factions, or groups would come and demand we change our line on Russia, bombing in Bosnia, intervention in Syria, Israel, blood sports, or the Labour leadership.

The point was this: that the Guardian was not one editor’s plaything or megaphone. It emerged from a common conversation — and was open to internal challenge when editorial staff felt uneasy about aspects of our journalism or culture.

* * *

Within two years — slightly uncomfortable at the power I had acquired as editor — I gave some away. I wanted to make correction a natural part of the journalistic process, not a bitterly contested post-publication battleground designed to be as difficult as possible.

We created a new role on the Guardian: a readers’ editor. He or she would be the first port of call for anyone wanting to complain about anything we did or wrote. The readers’ editor would have daily space in the paper — off-limits to the editor — to correct or clarify anything and would also have a weekly column to raise broader issues of concern. It was written into the job description that the editor could not interfere. And the readers’ editor was given the security that he/she could not be removed by the editor, only by the Scott Trust.

On most papers editors had sat in judgment on themselves. They commissioned pieces, edited and published them — and then were supposed neutrally to assess whether their coverage had, in fact, been truthful, fair, and accurate. An editor might ask a colleague — usually a managing editor — to handle a complaint, but he/she was in charge from beginning to end. It was an autocracy. That mattered even more in an age when some journalism was moving away from mere reportage and observation to something closer to advocacy or, in some cases, outright pursuit.

Allowing even a few inches of your own newspaper to be beyond your direct command meant that your own judgments, actions, ethical standards and editorial decisions could be held up to scrutiny beyond your control. That, over time, was bound to change your journalism. Sunlight is the best disinfectant: that was the journalist-as-hero story we told about what we do. So why wouldn’t a bit of sunlight be good for us, too?

The first readers’ editor was Ian Mayes, a former arts and obituaries editor then in his late 50s. We felt the first person in the role needed to have been a journalist — and one who would command instant respect from a newsroom that might otherwise be somewhat resistant to having its work publicly critiqued or rebutted. There were tensions and some resentment, but Ian’s experience, fairness and flashes of humor eventually won most people round.

One or two of his early corrections convinced staff and readers alike that he had a light touch about the fallibility of journalists:

In our interview with Sir Jack Hayward, the chairman of Wolverhampton Wanderers, page 20, Sport, yesterday, we mistakenly attributed to him the following comment: “Our team was the worst in the First Division and I’m sure it’ll be the worst in the Premier League.” Sir Jack had just declined the offer of a hot drink. What he actually said was: “Our tea was the worst in the First Division and I’m sure it’ll be the worst in the Premier League.” Profuse apologies.

In an article about the adverse health effects of certain kinds of clothing, pages 8 and 9, G2, August 5, we omitted a decimal point when quoting a doctor on the optimum temperature of testicles. They should be 2.2 degrees Celsius below core body temperature, not 22 degrees lower.

But in his columns he was capable of asking tough questions about our editorial decisions — often prompted by readers who had been unsettled by something we had done. Why had we used a shocking picture which included a corpse? Were we careful enough in our language around mental health or disability? Why so much bad language in the Guardian? Were we balanced in our views of the Kosovo conflict? Why were Guardian journalists so innumerate? Were we right to link to controversial websites?

In most cases Mayes didn’t come down on one side or another. He would often take readers’ concerns to the journalist involved and question them — sometimes doggedly — about their reasoning. We learned more about our readers through these interactions; and we hoped that Mayes’s writings, candidly explaining the workings of a newsroom, helped readers better understand our thinking and processes.

It was, I felt, good for us to be challenged in this way. Mayes was invaluable in helping devise systems for the “proper” way to correct the record. A world in which — to coin a phrase — you were “never wrong for long” posed the question of whether you went in for what Mayes termed “invisible mending.” Some news organizations would quietly amend whatever it was that they had published in error, no questions asked. Mayes felt differently: The act of publication was something on the record. If you wished to correct the record, the correction should be visible.

We were some years off the advent of social media, in which any error was likely to be pounced on in a thousand hostile tweets. But we had some inkling that the iron grip of centralized control that a newspaper represented was not going to last.

I found liberation in having created this new role. There are few things editors enjoy less than the furious early morning phone call or email from the irate subject of their journalism. Either the complainant is wrong, in which case time is wasted in heated self-justification, or they’re right, wholly or partially. Immediately you’re into remorseful calculations about saving face. If readers knew we honestly and rapidly — even immediately — owned up to our mistakes, they should, in theory, trust us more. That was the David Broder theory, and I bought it. Readers certainly made full use of the readers’ editor’s existence. Within five years Mayes was dealing with around 10,000 calls, emails, and letters a year — leading to around 1,200 corrections, big and small. It’s not, I think, that we were any more error-prone than other papers. But if you win a reputation for openness, you’d better be ready to take it as seriously as your readers will.

Our journalism became better. If, as a journalist, you know there are a million sleuth-eyed editors out there waiting to leap on your tiniest mistake, it makes you more careful. It changes the tone of your writing. Our readers often know more than we do. That became a mantra of the new world, coined by the blogger and academic Dan Gillmor in his 2004 book We the Media, but it was already becoming evident in the late 1990s.

The act of creating a readers’ editor felt like a profound recognition of the changing nature of what we were engaged in. Journalism was not an infallible method guaranteed to result in something we would proclaim as The Truth — but a more flawed, tentative, iterative and interactive way of getting towards something truthful.

Admitting that felt both revolutionary and releasing.

***

Excerpted from Breaking News: The Remaking of Journalism and Why It Matters Now by Alan Rusbridger. Published by Farrar, Straus and Giroux, November 27, 2018. Copyright © 2018 by Alan Rusbridger. All rights reserved.

Longreads Editor: Aaron Gilbreath

After the Tsunami

Annykos / Getty

Matthew Komatsu | Longreads | March 2019 | 24 minutes (6,092 words)

This piece was supported by the Pulitzer Center. 

Ichi (One)

Obā-san tasted ash. Yes: ash and dust. Her youngest son’s kanji and hiragana on paper could not assuage the bitter news the letter delivered: that her youngest son would not return from America to his hometown of Kesennuma, Japan. He would stay to marry the American woman who carried his child. Dishonor. Shame. Betrayal. And I was the ash she tasted: the end of the pure line of the Komatsu name. Nothing more than an accidental flutter in the brine of my mother’s womb.

My grandmother would not have considered this metaphor of the sea, despite the proximity of her home to it, the wind-borne scent of the waterfront fish market and processing plants mere blocks away, burbling down the streets, seeping through the window and door cracks of her home. And beyond, the vast blue-gray of the Pacific Ocean, heaving and rolling the life it contained. She would not have thought of the sea’s power to both create and destroy.

***

A soccer ball washes ashore on Middleton Island in the Gulf of Alaska. On it, handwritten script in permanent marker that identifies its origin as a grade school in Rikuzentakata, Japan, 30 minutes north of Kesennuma. Its owner, Misaki Murakami, survived the tsunami but his family lost their home. It is a personal effect recovered from his home. On one of the panels are kanji characters inscribed by a classmate that read Ganbatte. Good luck.

***

I can only imagine what changed Obā’s heart. Perhaps it was my grandfather. According to my father, Ojī was more sympathetic. It was Ojī who responded to my father’s letter to say that he understood. Or maybe the simple need of a grandparent to hold her grandchild eroded her pride. But these are all, in a way, little fictions: my American need to emote in conflict with a Japanese inclination to accept.

Regardless, Obā and Ojī came to the United States. I wonder what they thought when they held this chubby black-haired infant boy, whether they struggled to pronounce my English first name. What it felt like to stare into the deep, brown eyes of a grandchild whose blood ran mixed. Or if any of this mattered at all.

What I do know: When Ojī and Obā journeyed halfway across the globe to the unlikely destination of Duluth, Minnesota, they didn’t know my parents had arranged to leave me with a family friend at the beginning of a cross-country road trip across America that doubled as both honeymoon and getting-to-know-the-in-laws. When Ojī said goodbye to me, he wept. It was the last time we were together and the only time my dad saw his own father cry. My grandfather died in Japan, in 1987.

The only Japanese uttered in my home was spoken into the telephone on holidays. On those days, I rushed to answer the phone in the hope of hearing the voices of my Japanese relatives. Moshi moshi, came the greeting. When I answered in English, the caller usually responded, Ahhhhh… Toshifumi-san?

Dad, for you.

If my mother answered, the single phrase she knew: Chotto matte, kudasai. One moment, please. I would sit on the brown shag carpet speckled with gold and red and yellow, my back to the heat vent, shirt lifted so the hot air blew up my skin and ruffled the black hairs on my neck. The book on my lap stayed open to the same page as I listened to one half of a conversation, mouthed words whose accented syllables I will never utter with any meaning. A pause for the delay, then the muffled return. A smile, a laugh, an imperceptible head bow from my father.

***

A Canadian finds the rusted hulk of a Harley-Davidson motorcycle on the shores of British Columbia and traces its license plate to its owner, Ikuo Yokoyama. Photos of the bike reveal a year at sea: spokes rusting away and missing, corrosion widespread across a frame whose gleam has been replaced with a forlorn absorption of the light that reflects upon it. Yokoyama resists an outpouring of internet-fueled financial support to restore the bike and repatriate it. Instead he asks that it be preserved in a museum as is, a memorial to what was lost.

***

During a precious summer break from the Air Force Academy, I joined a family trip to Japan. Eager to show the Japanese I’d picked up over two years of college classes, I greeted Obā. My father told her that I knew Japanese now, that she should speak to me. We sat down in the living room of the small family home in Kesennuma. The air was heavy with the smell of the nearby ocean, mothballs, dust, and paper. But when she spoke, I could not understand.

***

Here is a list of Japanese words. Tsunami. Pronounced “tsoo-nah-mee.” Translation: “harbor wave.” E. Pronounced “a-ay.” Interrogative. Translation: “What?” Hayaku. Pronounced “hi-yah-koo.” Translation: “hurry.” Hashitte. Pronounced “hah-shht-ay.” Imperative. Translated to English: “Run.”

 

Ni (Two)

At 2:46 p.m. on Friday, 11 March 2011, a 100-mile-long section of the Pacific tectonic plate, 19 miles deep, thrust beneath Japan. Richter scale needles twitched. Japan shifted eight feet east. The Earth shuddered off-axis. The seabed rose, lifting the ocean above it by 25 feet. All that water had to go somewhere. And it did — away, in a series of waves that raced west at 86 miles per hour. The tsunami made landfall roughly 45 minutes later on the shores of my father’s hometown of Kesennuma in northeast Japan’s Miyagi Prefecture.

My 11 March dawned no different than any other. I woke up and checked Facebook over coffee. My sister posted something about a big earthquake in Japan, but the family was fine. Big earthquake, Japan: happens all the time. I didn’t think much of it during the 45-minute drive from Columbia, South Carolina, to Shaw Air Force Base, NPR now revising the magnitude, the Richter climbing. I paid it no mind during my 12-mile run before work. It was spring in South Carolina, flowers opening under a rising sun, the air heavy with their dewy scent.

It wasn’t until after I showered and changed into my uniform that the narrative unraveled. I turned on the car and the radio cascaded breaking news of a large tsunami in Japan. But even then, I did not think of the risk to my father’s hometown, a fishing city in northeastern Miyagi Prefecture directly in the tsunami’s path.

At work, I punched a code into a keypad and walked through a door into the cubicled space I shared with close to 50 other officers. The room was quiet, all eyes glued to the televisions on the wall. I looked over my shoulder and from the second floor of the Air Forces Central Command Headquarters, I watched 22,000 Japanese die.

***

In the years that follow 3/11, I will often open my laptop to type “Japan Tsunami” into a search engine. In a half second, tens of millions of results cascade down the screen, many of them videos.

***

No phones were allowed in my office. I left to use the bathroom, checked my phone: a missed call and a voicemail from my mother: Matt, call home. My gut twisted.

My mother answered. They were driving from their home, nestled in the green pines and gray popple outside Duluth, to an aunt who had cable. My parents had never paid for cable television — considering it either unaffordable or unnecessary. Now, for the first time in their lives, a luxury became a necessity. The internet was too slow; they needed to see.

Yes, I’ve seen the news, I said. But Lauren posted something on Facebook. Everyone is fine.

No. Uncle Kazafumi called from his office in Kesennuma — it lasted eight seconds — to say he was okay. Then the call ended.

And he tried to call him back?

Yes.

And?

Nothing. Dad can’t get a hold of him, or anyone else.

***

11 March passed. Friday. 12 and 13, Saturday and Sunday. Monday, 14 March. Still nothing. I watched the same scenes looping on the office televisions.

A coworker blurted, “I’m just waiting for some Japanese person to show up on the TV and yell, ‘Godzilla! Godzilla!’” Someone nearby laughed mirthlessly.

On the morning of 15 March, my youngest sister, Lydia, received the news from our cousin in Tokyo. She spoke no Japanese and his English was broken, but somehow he conveyed the news.

My uncle and aunt had survived. Tokuno Komatsu, our grandmother, was dead.

***

Sendai, a city two hours south of Kesennuma: Empty cars wash across the airport tarmac. The reporter flying above an ocean-covered Minami-sanriku: Where have all the people gone? Rikuzentakata. Ōshima. Ishinomaki. Miyako. Natori. And finally, Kesennuma, now burning an orange horizon of flame into the black pall of night.

***

Ten days after the tsunami, I boarded a flight to Japan. The U.S. military mobilized a relief effort called Operation Tomodachi. Friend. I called in every favor I had to deploy as a Tomodachi rescue planning officer.

Before the flight, my father told me that he was proud that a member of the family would be in Japan to help. He asked what I’d be doing there, but I didn’t know. I told him I sold my language abilities hard, maybe oversold them. That I was worried. Don’t worry, he said. It will all come back.


The flight from Dulles to Narita International Airport was all but empty. Once aboard, I reviewed old Japanese textbooks and watched Harry Potter once in English, then twice in Japanese. I tried to sleep, but nightmares woke me with linguistic versions of the naked dream: Me, beside the American general to whom I’ve been assigned as a translator. His Japanese counterpart speaks a torrent of Japanese, then pauses to look at me and await the translation. The American nods intently, casting ever-increasing looks my way. I recall one word in 10, try to divine meaning from inflection and posture. My mouth works, but the words do not come.

The bus ride from Narita to Yokota Air Base on the outskirts of Tokyo bore no witness to the quake and tsunami. No billboards hung precariously, no cracks split the roadways, and the lights were on. It was as if nothing happened at all. At Yokota, I disembarked to a cold, snowy night and entered a hangar to process into the Tomodachi task force. Airmen, clad in multiple layers, walked between different stations in the hangar, pausing at powered space heaters to warm themselves in the frigid night. I thought of the thousands of Japanese shoved into tiny makeshift evacuation centers. I imagined how they huddled, warmed only by blankets and each other.

***

Yokota fell away from my window of an Air Force HH-60G helicopter as it lifted off and flew east. I needed to see affected Japan for myself. It wasn’t until we were out over the ocean, flying outside an imaginary bubble around Fukushima that I did.

Rivers of debris from the tsunami appeared on the surface of the Pacific and streamed to the horizon, a flotsam road of shattered wood and plastic. We flew low, eyes out and scanning for life. The last survivor had been pulled from the water a week prior, but we hoped despite the odds, knowing we were far more likely to spot the dead.

A crew member saw something, and the helo banked hard. Over the intercom, he admitted it was probably nothing but worth investigating. Lower, slower, we orbited until the rotor wash beat the sea into mist over what turned out to be a white sheet rippling into the depths.

The farther from Japan, the larger the debris. Refrigerators and freezers. Orange tiled roofs bobbed in the blue and gray, impossibly buoyant. The wall of a home, the glass of a window somehow intact, offered a view into the saltwater beneath. All of it surrounded by a mass of splintered wood.

***

The shivering woke me again. I blinked into the darkness of the Sendai Airport first class lounge and pressed a button on my watch. 0300. I retreated further into the insulation of my puffy coat. Snores came from airmen off-shift from their post on the airport roof. Periodically throughout the night one would return and hand off a radio the size of two stacked laptops, then pop a sleeping pill while the other ran air traffic.

It was supposed to be a short visit, an hour or less. Just enough to make contact with the senior officer on the ground and determine what, if any, help I could provide as a planner. But the sound of the helicopter was only audible long enough to make radio contact with the airman on the roof: Tell Major Komatsu that we have to return to Yokota. We’ll be back when we can.

The cold shook me awake every 15 minutes until I stood up at 0600 and crept out of the dark room and into the daybreak of the terminal. Behind glass windows stories high, I wandered the vacant space, pausing at the vendor stands. The airmen were initially ordered not to take any food, but soon after they arrived, vendors themselves showed up and told them to take what they wished. The stacks of dried cuttlefish and shrimp-flavored crackers vanished, leaving only inscrutable books of manga and the assorted comforts required to heel the modern traveler. I lifted one of the books and perused a few of the oddly colored pages, taking in black and white lines of manga from back to front. I set it back in its place and looked out the glass.

In between the east end of the runway and the coast, a road once connected Kesennuma with Sendai; I’d made the drive twice during family trips. Now, I thought about packing my ruck, stuffing it with MREs and walking north, picking my way through the detritus until I reached my father’s hometown. My grandmother lay in the freezer of a morgue. The old family home, gone. Dozens of extended family — great uncles and third cousins and aunties once-removed — missing.

***

The morning of 27 March, I sat in my room back at Yokota alone after a run inside the confines of the base perimeter, under the pink-white beginnings of the cherry tree bloom washing the country from south to north. A rebirth of spring, of hope, of all things green and full of life.    

Three hundred miles away, my relatives cremated Obā’s remains.

***

Our rescue helicopters and crews went home, the work of finding and extracting the living long over. Only the dead remained missing, and the Japanese government politely declined U.S. military support to the search. My job as a rescue planner turned to playing games of what if. What if an American aircraft transporting radiation measurement crews crashes inside the Fukushima no-fly zone? Who will rescue them and how will we coordinate between Japanese and American operations centers?

These questions could only be answered in conversation with my Japanese counterpart at the Japanese Rescue Coordination Center, located 53 minutes down the Ome train line, on Fuchu Air Base. When we met in the lobby of the Japanese Air Self Defense headquarters building, a fellow American officer acting as my linguist introduced Okahashi-san. We smiled and bowed, then he presented me with his meishi (business card) in the manner I learned in my sophomore Japanese class at the Academy: Both hands present, both receive. Study the card, then place it only in a chest pocket; never, ever in a disrespectful pants pocket.

Fatigue lined his face and eyes — Okahashi-san has worked twenty hours every day since the tsunami. Lt Col Okahashi said something, smiled and gestured toward an imaginary flat surface a few feet off the ground. He sleeps on a cot in the back of the Rescue Coordination Center.

As we ate pork katsu at the Japanese dining facility, I attempted Japanese the best I could. I explained my last name, and when I said Kesennuma, he said, haltingly, “Your daddy. From Kesennuma?” Yes, I said. He simply frowned, lowered his eyes, shook his head and said no more.

***

Cell phones document the tsunami’s arrival in Minami-sanriku from ground level. A woman’s voice reverberates across the town, alternating with sirens to warn the residents over a citywide loudspeaker system. Impossibly, it continues even as the tsunami piles into the streets and people scream to those who’ve not yet made it to high ground, continues even as the ocean continues its inexorable rise. Until it falls silent. And all that remains are the cries of the Japanese who have survived.

***

When I met my Japanese cousins for dinner, I’d been asking my father for weeks to arrange for me to visit Kesennuma at the end of my deployment. I missed my stop on the train from Yokota, had to double back at the next, then wait at the eki for the only cousin who spoke any English to walk from the restaurant. All around me, life streamed through automated ticketing gates amid the wall of sound that is a Tokyo train station during evening rush hour. And yet, not so far away, their countrymen were digging through rubble with their bare hands. Posting desperate signs for missing persons.

We did our best to converse around our sukiyaki. They showed me pictures from Kesennuma. The old family home, gone. My uncle’s two-story office, first floor hollowed by the tsunami. My uncle, passed out on his floor with an empty bottle of whiskey nearby. Uncle drink lot now.

When I asked my cousins about my request to visit Kesennuma, their eyes dropped and they picked at their food. Mizuki — the English speaker — pulled out his phone. We call your daddy. He dialed, spoke Japanese when my father answered. I could not interpret Mizuki’s body language. He handed me the phone. My father talked around the question — his mother’s death, the family shock, the loss of the business and deaths of two employees, the destruction, how his brother wouldn’t say no to my visit but wouldn’t say yes either — until I interrupted him.

“Dad, what’s the bottom line?”

“Culturally, they would lose face if they said no. But the timing is bad.”

“I’d be a burden.”

“Yes.”

“But I have to make the decision.”

“Yes. You will have to tell them you do not want to go.”

“OK, then. I’m not going.” I handed the phone back to my cousin, and the relief on his face told me everything I needed to know.

***

Of the 12 million tsunami videos, I will not watch them all. And yet it will be too much, as well as somehow not enough.

***

On my last day in Japan, I sat with the Air Force colonel who led my shift. He was a pilot without a cockpit anymore, his jet long mothballed. He’d flown a desk for years now, he said as he smiled and removed his glasses; this was his last hurrah. Then he asked about what drew me to volunteer for this. When I told him, he fell silent.

“I’m sorry,” he said. “We should have found a way to get you to Kesennuma.” Then he handed me his card, thanked me for what I’d done, and I walked out of the operations center for the last time.

Before boarding the bus to Narita, I walked to a nearby cherry tree whose branches drooped under a blooming mantle. It stood above a patchwork of dirt and a browning white carpet of fallen blossoms. I found a living flower within reach and pinched its green stem, careful not to disrupt the delicate petals above it. Once free, I carried it two-handed; one pinching its base, the other cradling the bloom in my palm until I was back in my room. A book of devotions lay open on my desk, a gift from my parents. I placed the flower in the book, closed it.

 

San (Three)

 

2018. The shinkansen pitches us north from Tōkyō, picking up speed until the bullet train hits 200 mph and the endless series of the Tōhoku region’s ubiquitous rice paddies visible through my window blur green, flickering as dike-top roads come and go. I have returned to hear, yes, but also to touch. Taste, smell, and once again: see.   

We strategize. Three of us: my father, the linguist I’ve hired, and me. A cousin produced the name of the rest home where my grandmother perished: Shunpo. A classmate worked at Shunpo on 3/11, but my cousin is unwilling to connect us. So the linguist puts on her fixer hat and determines that the former manager not only survived, but rebuilt Shunpo in a new location and now speaks internationally on tsunami readiness. It’s as good a lead on how my grandmother died as we’re going to get. Anticipation builds as we get off the bullet train at Ichinoseki for the drive to Kesennuma, until I’m straining against my seatbelt and we finally arrive where I could not go seven years ago.

Kesennuma. No longer confined by glass or screen, I step from a cousin’s car in front of the vacant lot that was once 2-13-16 Nakamachi-cho. My father and he speak quietly in Japanese. The home I remember. His home. From where I stand, I could have reached over the street’s gutter and touched the house’s wall, perhaps taken in that odd mothball scent that seems to accompany my few memories of the texture of the place. But there is nothing but the tang of salt air in between me and the violet dusk of a sun long since set behind the hills of tall pine that mark Kesennuma’s western edge.

***

The tsunami is everywhere.

Blue placards on buildings show its maximum height with typical Japanese simplicity: a horizontal line and measurement in meters, in white lettering. Buildings still slated for demolition next to the orange-brown of cleared earth. Construction signs and workers and new roads unimpeded by human artifice. Signs along the sides of the road that undulates up and down through the endless series of ria (“bay”) that pocket the Sanriku coastline mark the tsunami’s maximum inundation points. Dystopian reconstructed landscapes behind massive seawalls that stretch across the horizon. The “Dragon Tree” of Kesennuma — a gnarled pine that survived the tsunami only to later die and be preserved where it stands on the cape of the Iwaisaki area of the city. The “Miracle Pine” of Rikuzentakata: the sole remaining tree of an estimated 70,000 that made up a coastal forest, eventually felled by the saltwater left in the ground by the tsunami, then preserved in detail at an estimated cost of 150 million yen (close to 2 million dollars based on the exchange rate at the time). O-tsunami, the survivors say, applying the honorific “o-” prefix because they cannot adequately capture in words a full integration of all senses. It roared. Smelled of salt. It burned, pulled, swept.

It was incomprehensible in a way that can only be assembled by a comprehension of what it left behind.

***

We climb a path beneath old-growth pine and cedar until a panorama of the city reveals the tsunami’s reach, still clear, even now. Gray and green mark the untouched. Yellow earth, the scar of the destroyed, the still-being-rebuilt. My cousin guides my father and me to the family gravesite. A light breeze, cool with the ocean across my skin, the sound of traffic. The smell of needle and ocean. I grasp at the sensory through the mantle of jet lag and culture shock, hoping to hold on to this moment. My father stands in front of a polished granite marker, brings his palms together and lowers his head to offer a silent prayer.

It’s been a decade and a half since I last saw my Aunt Fumiko, but her face remains cherubic, her skin pale and smooth. She apologizes for not having the snack she recalls as a favorite: a mix of salted peanuts and chili-flavored rice cracker crescents. She looks thin but well. I show her pictures of my family. When I produce an app on my phone that lets her see my infant daughter at that very moment sleeping halfway around the globe, she smiles.

Kawaii, ne. So cute.

She tells me that the earthquake found her in the midst of shopping. When the world ceased shaking, she felt an overwhelming urge to immediately head home. Something horrible was going to happen. She followed her instinct and drove straight to the new house, three miles inland from the old one that no longer exists. Her son called at about 3:15 p.m. after seeing tsunami warnings on the news. Obā was at Shunpo, but my aunt thought it would be safe. It had two floors, a good flat roof, was a fair distance from the ocean. She worried about my uncle, whose office was on the downtown waterfront at the tip of Kesennuma Bay.

Read more…

Los Angeles Plays Itself

AP Photo/Reed Saxon

David L. Ulin | Sidewalking | University of California Press | October 2015 | 41 minutes (8,144 words)

 

“I want to live in Los Angeles, but not the one in Los Angeles.”

— Frank Black

 

One night not so many weeks ago, I went to visit a friend who lives in West Hollywood. This used to be an easy drive: a geometry of short, straight lines from my home in the mid-Wilshire flats — west on Olympic to Crescent Heights, north past Santa Monica Boulevard. Yet like everywhere else these days, it seems, Los Angeles is no longer the place it used to be. Over the past decade-and-a-half, the city has densified: building up and not out, erecting more malls, more apartment buildings, more high-rises. At the same time, gridlock has become increasingly terminal, and so, even well after rush hour on a weekday evening, I found myself boxed-in and looking for a short-cut, which, in an automotive culture such as this one, means a whole new way of conceptualizing urban space.

There are those (myself among them) who would argue that the very act of living in L.A. requires an ongoing process of reconceptualization, of rethinking not just the place but also our relationship to it, our sense of what it means. As much as any city, Los Angeles is a work-in-progress, a landscape of fragments where the boundaries we take for granted in other environments are not always clear. You can see this in the most unexpected locations, from Rick Caruso’s Grove to the Los Angeles County Museum of Art, where Chris Burden’s sculpture “Urban Light” — a cluster of 202 working vintage lampposts — fundamentally changed the nature of Wilshire Boulevard when it was installed in 2008. Until then, the museum (like so much of L.A.) had resisted the street, the pedestrian, in the most literal way imaginable, presenting a series of walls to the sidewalk, with a cavernous entry recessed into the middle of a long block. Burden intended to create a catalyst, a provocation; “I’ve been driving by these buildings for 40 years, and it’s always bugged me how this institution turned its back on the city,” he told the Los Angeles Times a week before his project was lit. When I first came to Los Angeles a quarter of a century ago, the area around the Museum was seedy; it’s no coincidence that in the film Grand Canyon, Mary Louise Parker gets held up at gunpoint there. Take a walk down Wilshire now, however, and you’ll find a different sort of interaction: food trucks, pedestrians, tourists, people from the neighborhood.

Read more…

The Blaming of the Shrew

Illustration by Zoë van Dijk

Sara Fredman | Longreads | February 2019 | 10 minutes (2,982 words)

 

What makes an antihero show work? In this Longreads series, It’s Not Easy Being Mean, Sara Fredman explores the fine-tuning that goes into writing a bad guy we can root for, and asks whether the same rules apply to women.

 
As night follows day, so must the announcement of a woman’s candidacy for high political office compel a verdict on her likability, a quality so ineffable that we can really only say we know it when we see it. And so rarely do we see it in people who aren’t men. Still, likability endures as our gold standard, our north star. Almost 20 years after Sam Adams polled voters on which candidate they would rather get a beer with, we are still obsessed with a candidate’s perceived likability and relatability, despite the fact that we now have the least conventionally likable or relatable president in history. This debating of female candidates’ likability while a man like Donald Trump occupies the Oval Office is confusing but it makes much more sense if you see the current political moment for what it is: our least compelling antihero show.

Whether the antihero show is in its twilight or we’re not quite ready to let it go, there is no doubt that it has been a huge cultural presence for the better part of two decades. As the proliferation of think-pieces around the 20th anniversary of The Sopranos premiere revealed that we’re still in the thrall of the show and the genre it spawned, it’s worth noting that the election of Donald Trump to the highest office in the land followed nearly two decades of tuning in to men who were supposed to be unlikable but whom we somehow liked enough to keep watching. Thinking about political likability and a world in which we say things like “President Trump” is kind of like looking at the wall of Homeland’s Carrie Mathison: it seems crazy but the connections are all there. And in this case, many of the threads lead back to television.

TV is a medium with a particular reliance on likability. Seeing a movie involves just one decision, but when we watch a TV show we must repeatedly make the choice to encounter its characters, tuning in week after week or, in the age of streaming, contributing to a show’s completion rate. When a show features a protagonist who is not conventionally “likable” — someone who does things we recognize as illegal, immoral, or just plain offensive — we must engage in some mental gymnastics. We either flip a switch and start seeing that character as a villain or we decide we’re going to excuse his behavior and continue to root for his success. With a television protagonist, if we choose the latter, it is something that we have to do over and over again, escalating our commitment to the character as his misdeeds pile up.

TV is also what brought us the concept of likability in politics in the first place because most of the time when we talk about likability, we’re really talking about the appearance of likability, and TV brought us unprecedented access to candidates’ appearances. Each emerging communication technology has changed the formula for successful candidacy and television’s contribution has been to reward a certain type of image. Most radio listeners called the first debate between Kennedy and Nixon a draw, but television viewers overwhelmingly perceived a Kennedy victory because of how Kennedy looked. When we consider TV’s role in the 2016 election, we should be thinking about the way in which television itself took Trump from a local D-lister to an icon of American success with a national profile, but also about the image that we now look for, how the medium has changed our expectations for main characters and, in doing so, changed our expectations for the main character of the country: the president.

And after an election in which we faced two very different potential main characters, we should acknowledge the role that gender plays, in politics and in television. Trump’s path to the presidency was made smoother by a complex relationship to women and gender that finds its expression in pop culture, like television shows about bad dudes. Understanding the mechanics of the antihero genre that came to redefine TV drama, particularly the ways in which the phenomenon of the likable unlikable man relies on the way that man interacts with women, might help us reckon with the politics of gender, and gendered politics, as we look toward another election cycle.

***

The mythology of the antihero has him spring from David Chase’s head like a late ’90s Athena. In his book on the transformative shows of the late ’90s and early 2000s, The Revolution Was Televised, Alan Sepinwall writes that Chase was fighting against “the notion that a TV series had to have a likable character at its center.” It was important to Chase that this new kind of protagonist not be rehabilitated, like Detective Sipowicz of NYPD Blue. There would be no redemption arc but instead further descent into whatever nefarious activities had characterized him as unlikable in the first place.

But there was a disconnect between this vision and the way viewers reacted to Tony Soprano and the other unreformed Sipowiczes who would follow in his wake. Chase has been known to complain about his audience’s relationship to Tony, cheering him on one minute and wanting to see him punished the next; Vince Gilligan, creator of Breaking Bad’s Walter White, similarly expressed his surprise that fans were still “rooting for” Walt as his misdeeds became ever more serious and destructive. These kinds of fans have been criticized as “bad readers” missing the point of a groundbreaking new form. But I have always found showrunners’ professions of bafflement at audience reception to be disingenuous at best because the whole enterprise of the antihero show was to create a bad guy people would like anyway. Gilligan seems more in touch with his intentions when he recalls that he cast Bryan Cranston as Walter White because he remembered Cranston’s ability to convey “a basic humanity” in another otherwise unappealing character. When thinking about casting Jon Hamm as Don Draper, Mad Men creator Matthew Weiner made a similar observation: “I asked myself a question: ‘When this man goes home to his wife at the end of the pilot, are you going to hate him?’ And I said, ‘No, I will not hate him.’”

Feigned surprise at audience reactions aside, it seems likely that the men who created these “unlikable” men understood that they would still need an audience to invest in them, and that such an investment would not be a slam dunk but would instead require delicate rigging. I like to break down the mechanics of the antihero in the following way:

The antihero is marked as special.

David Chase has said that he used to quote Rockford Files creator Stephen Cannell in the Sopranos writers’ room: “Rockford can be a jerk-off and a fool, but he’s got to be the smartest guy in the room.” The other Golden Age antihero shows followed this formula. Don is a creative genius (“It’s Toasted!”) and Walt is a talented chemist who regularly outsmarts very dangerous people. This distinction of being set apart is something the antihero has in common with regular heroes.

The antihero has interiority.

If, as Chase declared, his character was not going to evolve toward a more sympathetic future, the case for sympathy would have to be rooted in the past or justified by the present. These shows gave their protagonists an interiority that made sympathizing with them feel less icky. This is where the antiheroes of the early aughts differed from a character like J.R. Ewing, who was also a popular bad guy protagonist. Therapy sessions and flashbacks, revealing monologues, and contemplative moments set to music all softened the blow of the bad things they did. Whatever interiority Chase, Gilligan, and Weiner allowed other characters, it always paled in comparison to that given to their protagonists. Like their smarts and talent, this was another way of distinguishing characters who would have ordinarily coded as villains and instead marking them as the hero of their story.

The antihero is stacked up against antagonists slightly to exceedingly more unlikable than he is.

To me, this is the real key to the antihero’s appeal. Being special and having a sympathetic backstory will only take a traditionally “unlikable” character so far, and there are plenty of movie and TV villains who have been given similar treatment. What separates a true antihero from a villain is that we’re in his corner; we want him to succeed. If we are to root for Don Draper, an identity thief and rampant philanderer, we need to see him opposite, say, a Pete Campbell type: lothario sans charm and talent. Walter White is the small business owner to Gus Fring’s Amazon. Villainy is not a fixed point; it’s a sliding scale. Real people aren’t neatly divided into Supermans and Lex Luthors. Most of us are equal parts potential for good and propensity for shittiness, a heady brew of good instincts and bad inclinations. Our virtue is contextual. While the nature of these men’s misdeeds is (hopefully!) of a different magnitude than our own, part of their appeal is certainly, as Gilligan suspected, the way they mirror our own humanity, the good and the ugly both. And we are able to focus on the former and excuse the latter when showrunners give us other characters who are less multidimensional and therefore easier to hate.

But alongside the Phil Leotardos and Gus Frings, those easier-to-hate people often ended up being women. Skyler White is the most obvious example. Walt was stacked up against all kinds of villains but none inspired the kind of vitriolic responses Anna Gunn famously described in a 2013 New York Times op-ed: the thousands of people who liked the Facebook page “I Hate Skyler White,” the posts complaining that Skyler was “a shrieking, hypocritical harpy … a ball-and-chain, a drag, a shrew, an annoying bitch wife.” Some fans of the show even conflated Gunn and the character she played. One message board post read: “Could somebody tell me where I can find Anna Gunn so I can kill her?” Reddit boards still use her as the bar against which all bad wife characters should be measured. Even the neo-Nazis who killed Hank and made Jesse their slave never raised viewers’ hackles the way Skyler did and still does years later. Fan reaction to Betty Draper was similarly harsh (apparently, the only way to make her “likable” was to kill her) despite the fact that the show was premised on the fact that her life was a lie Don had to tell her over and over.

Sopranos viewers rarely saw Carmela this way because for the most part she declines to take on the role of antagonist. She is instead, as the psychiatrist in season three points out, an enabler. She doesn’t stand in the way of our guy, but the show is still built on the foundation of a woman who could wear a man down. In his very first conversation with Dr. Melfi, Tony talks about his parents’ relationship: “My dad was tough. He ran his own crew. Guy like that and my mother wore him down to a little nub. He was a squeaking little gerbil when he died.” Viewers dutifully saw Livia Soprano as an antagonist and a burden Tony had to overcome. In their just-released book The Sopranos Sessions, Alan Sepinwall and Matt Zoller Seitz write: “Tony adored the ducks in the pool because they were guarded by a mother who protected and nurtured them in a manner free of ulterior motive, of deceit and manipulation, of the urge to annihilate. Livia, for all her evident helplessness, is the most actively destructive force in the pilot, a black hole vacuuming up hope.” They’re talking about the episode where Tony uses his car to run over a guy who owes him money, but somehow it’s his elderly mother who is the most actively destructive force.

In interviewing Chase for The Sopranos Sessions, Sepinwall reminds him that he once said that The Sopranos, as an idea, began with his friends encouraging him to do a show about his mother. The Sopranos’ origin story is rooted in the trope of the “nagging harpy” and Chase himself suggests that the show was successful in large part because he imported domesticity into the mobster genre: “family shows were a women’s medium, and this was a family show. I thought this might be successful, or at least keep its head above water, because it would attract, unlike most Mob pictures, a female audience because of the family show aspect.” But the kind of domesticity of which he availed himself, one that would become a familiar element of shows about “difficult” men, was one in which women are set up to be either enablers or antagonists. Livia might have been the black hole, but all of the women in Tony’s life are implicated. In that same therapy session in episode one, Dr. Melfi asks Tony, “What’s the one thing your mother, your wife, your daughter all have in common?” His response? “They all break my balls.”


Wives get the raw end of the deal in an antihero show. They are there to humanize the protagonist, but we often see them as villains instead of the victims they truly are because, in opposing our guy, they stand in the way of the show’s plotline. Wives pose a problem in that they fail to deliver on what we perhaps subconsciously assume to be their role. These men provide for their families. They work hard — never mind how or what they do with their leisure time — so that their families can have what they need, and all their wives have to do is not call them on it. Philosopher Kate Manne argues that a central dynamic of misogyny is the obligation by, or expectation of, women to give men “feminine-coded goods and services” like attention, care, sympathy, respect, admiration, security, and safe haven. There is, according to Manne, “the threat of withdrawal of social approval if those social duties are not performed, and the incentive of love and gratitude if they are done willingly and gladly.” Viewer response to characters like Skyler and Betty is the natural result of the expectation that wives are supposed to help, not hinder, their husbands. Carmela, on the other hand, explains to Dr. Krakower that her role is to “make sure he’s got clean clothes in his closet and dinner on his table.”

Once you see the degree to which the antihero show is dependent on marriage and heteronormativity, you can’t unsee it. The role of a wife in an antihero story is not incidental but integral: domestic antagonists are a large part of the reason we feel OK about rooting for bad guys like Tony Soprano, Walter White, and Don Draper. These shows taught us to look for the humanity in our male protagonists and ignore it in the women who stood in their way. Television audiences’ identification with and adoration of male antiheroes were the canaries in the coal mine, warning us of the ease with which we might see villains as victims and vice versa.

Looking back, it’s painful to admit that for many in the electorate, Hillary Clinton was the Skyler to Trump’s Walt, the Betty to his Don. We had already spent years seeing her as the Carmela to Bill’s Tony, implicated in her husband’s misdeeds by dint of staying with him, forever tainted by her own moral compromises that, while they paled in comparison to his, were for some reason less forgivable and rendered her eternally “unlikable.” It made sense, then, that when Clinton took a jab at Trump’s penchant for avoiding paying taxes while explaining her plan to raise taxes on the wealthy during the third debate, Trump interrupted to call her “such a nasty woman.” This one, he seemed to be telling viewers at home, is a Skyler.

So where does this leave us, in art and in politics? Are we ready for a female candidate who is – like all of the male candidates over the last 230 years, like all of us – human? As I write this, about half of the announced Democratic candidates for president are women so it is likely that gender will play a starring role this election cycle. Similarly, as television diffuses like so many essential oils over ever-increasing platforms, there are more opportunities than ever before for female-centered shows. How have we done with female characters? Have depictions of women sharing a screen with unlikable men changed at all? Are we able to see the “humanity” that Gilligan identified at the heart of Walter White’s appeal in people who aren’t men? Women were the accidental antagonists of shows about “difficult men,” but what does it look like when a woman steps into the antihero mold, when it is a difficult woman at the heart of a series? What is it, actually, that makes a woman difficult?

When we talk about antiheroes, we’re really talking about the kinds of bad behavior we can countenance and the kinds we can’t, the conditions that need to be met for us to overlook bad behavior; the way we take the sum of some people and not others. Thinking about when and how we extend our understanding and forgiveness is key to understanding the genre and our world. Deconstructing the antihero genre may help us better examine our own attitudes toward women.

This is the first installment of an unscientific and hardly exhaustive journey through shows about difficult people, many of whom are women. Next up? The Good Bad Wives of Ozark and House of Cards.

* * *

Sara Fredman is a writer and editor living in St. Louis. Her work has been featured in Longreads, The Rumpus, Tablet, and Lilith.

 

Editor: Cheri Lucas Rowlands
Illustrator: Zoë van Dijk

The Reappearing Act

Illustration by Greta Kotz

Audrey Olivero | Longreads | February 2019 | 14 minutes (3,621 words)

 

The magic of a knife-throwing range is that it looks as if the prop attic of a theater department vomited onto an abandoned hunters’ lodge. Bright green fake grass shoots up from carpeted ground. Deer hang around the corners, pock-marked with arrow wounds, their plasticky stares watching me fail day after day. It is nothing like the dark stages where I’ve seen knife-throwing performed, spot-lit in anticipation, glittering with the stardust of sequins lost in the name of spectacle. The stakes don’t feel quite so high in this space. Here, my heart doesn’t race the way it does at the clack of a magician’s assistant’s shiny red heels, the spin of a wooden board, the familiar plunge of heart to gut at the sound of the near-fatal miss transformed into success by applause. That is, until a blade careens into a wooden target, tilts upward, and falls with the grace of a pigeon that’s just flown into a window. This is what happens when a knife doesn’t stick.

Today, none of my knives are sticking.

The mystery here, as I pick up my losses like lead dandelions off the range floor, isn’t how this is happening. It’s how I’m still at this.

Read more…