
How the Guardian Went Digital

Newscast Limited via AP Images

Alan Rusbridger | Breaking News | Farrar, Straus and Giroux | November 2018 | 31 minutes (6,239 words)

 

In 1993 some journalists began to be dimly aware of something clunkily referred to as “the information superhighway,” but few had ever had reason to see it in action. At the start of 1995 only 491 newspapers were online worldwide; by June 1997 that number had grown to some 3,600.

In the basement of the Guardian was a small team created by editor in chief Peter Preston — the Product Development Unit, or PDU. The inhabitants were young and enthusiastic. None of them were conventional journalists: I think the label might be “creatives.” Their job was to think of new things that would never occur to the largely middle-aged reporters and editors three floors up.

The team — eventually rebranding itself as the New Media Lab — started casting around for the next big thing. They decided it was the internet. The creatives had a PC actually capable of accessing the world wide web. They moved in hipper circles. And they started importing copies of a new magazine, Wired — the so-called Rolling Stone of technology — which had started publishing in San Francisco in 1993, along with the HotWired website. “Wired described the revolution,” it boasted. “HotWired was the revolution.” It was launched in the same month the Netscape team was beginning to assemble. Only 18 months later Netscape was worth billions of dollars. Things were moving that fast.

In time, the team in PDU made friends with three of the people associated with Wired: its founders, Louis Rossetto and Jane Metcalfe, and the columnist Nicholas Negroponte, who was based at the Massachusetts Institute of Technology and who wrote mindblowing columns predicting such preposterous things as wristwatches which would “migrate from a mere timepiece today to a mobile command-and-control center tomorrow . . . an all-in-one, wrist-mounted TV, computer, and telephone.”

As if.

Both Rossetto and Negroponte were, in their different ways, prophets. Rossetto was a hot booking for TV talk shows, where he would explain to baffled hosts what the information superhighway meant. He’d tell them how smart the internet was, and how ethical. Sure, it was a “dissonance amplifier.” But it was also a “driver of the discussion” towards the real. You couldn’t mask the truth in this new world, because someone out there would weigh in with equal force. Mass media was one-way communication. The guy with the antenna could broadcast to billions, with no feedback loop. He could dominate. But on the internet every voice was going to be equal to every other voice.

“Everything you know is wrong,” he liked to say. “If you have a preconceived idea of how the world works, you’d better reconsider it.”

Negroponte, 50-something, East Coast gravitas to Rossetto’s Californian drawl, was working on a book, Being Digital, and was equally passionate in his evangelism. His mantra was to explain the difference between atoms — which make up the physical artifacts of the past — and bits, which travel at the speed of light and would be the future. “We are so unprepared for the world of bits . . . We’re going to be forced to think differently about everything.”

I bought the drinks and listened.

Over dinner in a North London restaurant, Negroponte started with convergence — the melting of all boundaries between TV, newspapers, magazines, and the internet into a single media experience — and moved on to the death of copyright, possibly the nation state itself. There would be virtual reality, speech recognition, personal computers with inbuilt cameras, personalized news. The entire economic model of information was about to fall apart. The audience would pull rather than wait for old media to push things as at present. Information and entertainment would be on demand. Overly hierarchical and status-conscious societies would rapidly erode. Time as we knew it would become meaningless — five hours of music would be delivered to you in less than five seconds. Distance would become irrelevant. A UK paper would be as accessible in New York as it was in London.

Writing 15 years later in the Observer, the critic John Naughton compared the begetter of the world wide web, Sir Tim Berners-Lee, with the seismic disruption five centuries earlier caused by the invention of movable type. Just as Gutenberg had no conception of his invention’s eventual influence on religion, science, systems of ideas, and democracy, so — in 2008 — “it will be decades before we have any real understanding of what Berners-Lee hath wrought.”

The entire economic model of information was about to fall apart.

And so I decided to go to America with the leader of the PDU team, Tony Ageh, and see the internet for myself. A 33-year-old “creative,” Ageh had had exactly one year’s experience in media — as an advertising copy chaser for The Home Organist magazine — before joining the Guardian. I took with me a copy of The Internet for Dummies. Thus armed, we set off to America for a four-day, four-city tour.

In Atlanta, we found the Atlanta Journal-Constitution (AJC), which was considered a thought leader in internet matters, having joined the Prodigy internet service, which offered subscribers information over 1,200 bit-per-second dial-up modems. After four months the internet service had 14,000 members, paying 10 cents a minute to access online banking, messaging, full webpage hosting and live share prices.

The AJC business plan envisaged building to 35,000 or 40,000 members by year three. By that time, they calculated, they would be earning $3.3 million in subscription fees and $250,000 a year in advertising. “If it all goes to plan,” David Scott, the publisher of the Electronic Information Service, told us, “it’ll be making good money. If it goes any faster, this is a real business.”

We also met Michael Gordon, the managing editor. “The appeal to the management is, crudely, that it is so much cheaper than publishing a newspaper,” he said.

We wrote it down.

“We know there are around 100,000 people in Atlanta with PCs. There are, we think, about one million people wealthy enough to own them. Guys see them as a toy; women see them as a tool. The goldmine is going to be the content, which is why newspapers are so strongly placed to take advantage of this revolution. We’re out to maximize our revenue by selling our content any way we can. If we can sell it on CD-ROM or TV as well, so much the better.”

“Papers? People will go on wanting to read them, though it’s obviously much better for us if we can persuade them to print them in their own homes. They might come in customized editions. Edition 14B might be for females living with a certain income.”

It was heady stuff.

From Atlanta we hopped up to New York to see the Times’s online service, @Times. We found an operation consisting of an editor plus three staffers and four freelancers. The team had two PCs, costing around $4,000 each. The operation was confident, but small.

The @Times content was weighted heavily towards arts and leisure. The opening menus offered a panel with about 15 reviews of the latest films, theatre, music, and books – plus book reviews going back two years. The site offered the top 15 stories of the day, plus some sports news and business.

There was a discussion forum about movies, with 47 different subjects being debated by 235 individual subscribers. There was no archive because — in one of the most notorious newspaper licensing cock-ups in history — the NYT in 1983 had given away all rights to its electronic archive (for all material more than 24 hours old) in perpetuity to Mead/Lexis.

That deal alone told you how nobody had any clue what was to come.

We sat down with Henry E. Scott, the group director of @Times. “Sound and moving pictures will be next. You can get them now. I thought about it the other day, when I wondered about seeing 30 seconds of The Age of Innocence. But then I realized it would take 90 minutes to download that and I could have seen more or less the whole movie in that time. That’s going to change.”

But Scott was doubtful about the lasting value of what they were doing — at least, in terms of news. “I can’t see this replacing the newspaper,” he said confidently. “People don’t read computers unless it pays them to, or there is some other pressing reason. I don’t think anyone reads a computer for pleasure. The San Jose Mercury [News] has put the whole newspaper online. We don’t think that’s very sensible. It doesn’t make sense to offer the entire newspaper electronically.”

We wrote it all down.

“I can’t see the point of news on-screen. If I want to know about a breaking story I turn on the TV or the radio. I think we should only do what we can do better than in print. If it’s inferior to the print version there’s no point in doing it.”

Was there a business plan? Not in Scott’s mind. “There’s no way you can make money out of it if you are using someone else’s server. I think the LA Times expects to start making money in about three years’ time. We’re treating it more as an R & D project.”



From New York we flitted over to Chicago to see what the Tribune was up to. In its 36-storey Art Deco building — a spectacular monument to institutional self-esteem — we found a team of four editorial and four marketing people working on a digital service, with the digital unit situated in the middle of the newsroom. The marketeers were beyond excited about the prospect of being able to show houses or cars for sale and arranged a demonstration. We were excited, too, even if the pictures were slow and cumbersome to download.

We met Joe Leonard, associate editor. “We’re not looking at Chicago Online as a money maker. We’ve no plans even to break even at this stage. My view is simply that I’m not yet sure where I’m going, but I’m on the boat, in the water — and I’m ahead of the guy who is still standing on the pier.”

Reach before revenue.

Finally we headed off to Boulder, Colorado, in the foothills of the Rockies, where Knight Ridder had a team working on their vision of the newspaper of tomorrow. The big idea was, essentially, what would become the iPad — only the team in Boulder hadn’t got much further than making an A4 block of wood with a “front page” stuck on it. The 50-something director of the research centre, Roger Fidler, thought the technology capable of realizing his dream of a ‘personal information appliance’ was a couple of years off.

Tony and I had filled several notebooks. We were by now beyond tired and talked little over a final meal in an Italian restaurant beneath the Rocky Mountains.

We had come. We had seen the internet. We were conquered.

* * *

Looking back from the safe distance of nearly 25 years, it’s easy to mock the fumbling, wildly wrong predictions about where this new beast was going to take the news industry. We had met navigators and pioneers. They could dimly glimpse where the future lay. Not one of them had any idea how to make a dime out of it, but at the same time they intuitively sensed that it would be more reckless not to experiment. It seemed reasonable to assume that — if they could be persuaded to take the internet seriously — their companies would dominate in this new world, as they had in the old world.

We were no different. After just four days it seemed blindingly obvious that the future of information would be mainly digital. Plain old words on paper — delivered expensively by essentially Victorian production and distribution methods — couldn’t, in the end, compete. The future would be more interactive, more image-driven, more immediate. That was clear. But how on earth could you graft a digital mindset and processes onto the stately ocean liner of print? How could you convince anyone that this should be a priority when no one had yet worked out how to make any money out of it? The change, and therefore the threat, was likely to happen rapidly and maybe violently. How quickly could we make a start? Or was this something that would be done to us?

In a note for Peter Preston on our return I wrote, “The internet is fascinating, intoxicating . . . it is also crowded out with bores, nutters, fanatics and middle managers from Minnesota who want the world to see their home page and CV. It’s a cacophony, a jungle. There’s too much information out there. We’re all overloaded. You want someone you trust to fillet it, edit it and make sense of it for you. That’s what we do. It’s an opportunity.”

Looking back from the safe distance of nearly 25 years, it’s easy to mock the fumbling, wildly wrong predictions about where this new beast was going to take the news industry.

I spent the next year trying to learn more and then the calendar clicked on to 1995 — The Year the Future Began, at least according to the cultural historian W. Joseph Campbell, who used the phrase as the title of a book twenty years later. It was the year Amazon.com, eBay, Craigslist, and Match.com established their presence online. Microsoft spent $300m launching Windows 95 amid weeks of marketing hype, including millions for the rights to the Rolling Stones hit “Start Me Up,” which became the anthem for the launch.

Cyberspace — as the cyber dystopian Evgeny Morozov recalled, looking back on that period — felt like space itself. “The idea of exploring cyberspace as virgin territory, not yet colonized by governments and corporations, was romantic; that romanticism was even reflected in the names of early browsers (‘Internet Explorer,’ ‘Netscape Navigator’).”

But, as Campbell was to reflect, “no industry in 1995 was as ill-prepared for the digital age, or more inclined to pooh-pooh the disruptive potential of the Internet and World Wide Web, than the news business.” It suffered from what he called “innovation blindness” — “an inability, or a disinclination to anticipate and understand the consequences of new media technology.”

1995 was, then, the year the future began. It happened also to be the year in which I became editor of the Guardian.

* * *

I was 41 and had not, until very recently, really imagined this turn of events. My journalism career had taken a traditional enough path: a few years reporting; four years writing a daily diary column; a stint as a feature writer — home and abroad. In 1986 I left the Guardian to be the Observer’s television critic. When I rejoined the Guardian I was diverted towards editing — launching the paper’s Saturday magazine, followed by a daily tabloid features section, before becoming deputy editor in 1993. Peter Preston — unshowy, grittily obstinate, brilliantly strategic — looked as if he would carry on editing for years to come. It was a complete surprise when he took me to the basement of the resolutely unfashionable Italian restaurant in Clerkenwell he favored, to tell me he had decided to call it a day.

On most papers the proprietor or chief executive would find an editor and take him or her out to lunch to do the deal. On the Guardian — at least according to tradition dating back to the mid-70s — the Scott Trust made the decision after balloting the staff, a process that involved manifestos, pub hustings, and even, by some candidates, a little frowned-on campaigning.

I supposed I should run for the job. My mission statement said I wanted to boost investigative reporting and get serious about digital. It was, I fear, a bit Utopian. I doubt much of it impressed the would-be electorate. British journalists are programmed to skepticism about idealistic statements concerning their trade. Nevertheless, I won the popular vote and was confirmed by the Scott Trust after an interview in which I failed to impress at least one Trustee with my sketchy knowledge of European politics. We all went off for a drink in the pub round the back of the office. A month later I was editing.

“Fleet Street,” as the UK press was collectively called, was having a torrid time, not least because the biggest beast in the jungle, Rupert Murdoch, had launched a prolonged price war that was playing havoc with the economics of publishing. His pockets were so deep he could afford to slash the price of The Times almost indefinitely — especially if it forced others out of business.

Reach before revenue — as it wasn’t known then.

The newest kid on the block, the Independent, was suffering the most. To their eyes, Murdoch was behaving in a predatory way. We calculated the Independent titles were losing around £42 million (nearly £80 million in today’s money). Murdoch’s Times, by contrast, had seen its sales rocket 80 per cent by cutting its cover prices to below what it cost to print and distribute. The circulation gains had come at a cost — about £38 million in lost sales revenue. But Murdoch’s TV business, BSkyB, was making booming profits and the Sun continued to throw off huge amounts of cash. He could be patient.

But how on earth could you graft a digital mindset and processes onto the stately ocean liner of print?

The Telegraph had been hit hard — losing £45 million in circulation revenues through cutting the cover price by 18 pence. The end of the price war left it slowly clawing back lost momentum, but it was still £23 million adrift of where it had been the previous year. Murdoch — as so often — had done something bold and aggressive. Good for him, not so good for the rest of us. Everyone was tightening their belts in different ways. The Independent effectively gave up on Scotland. The Guardian saved a million a year in newsprint costs by shaving half an inch off the width of the paper.

The Guardian, by not getting into the price war, had “saved” around £37 million it would otherwise have lost. But its circulation had been dented by about 10,000 readers a day. Moreover, the average age of the Guardian reader was 43 — something that preoccupied us rather a lot. We were in danger of having a readership too old for the job advertisements we carried.

Though the Guardian itself was profitable, the newspaper division was losing nearly £12 million (north of £21 million today). The losses were mainly due to the sister Sunday title, the Observer, which the Scott Trust had purchased as a defensive move against the Independent in 1993. The Sunday title had a distinguished history but was hemorrhaging cash: £11 million in losses.

Everything we had seen in America had to be put on hold for a while. The commercial side of the business never stopped reminding us that only three percent of households owned a PC and a modem.

* * *

But the digital germ was there. My love of gadgets had not extended to understanding how computers actually worked, so I commissioned a colleague to write a report telling me, in language I could understand, how our computers measured up against what the future would demand. The Atex system we had installed in 1987 gave everyone a dumb terminal on their desk — little more than a basic word processor. It couldn’t connect to the internet, though there was a rudimentary internal messaging system. There was no word count or spellchecker and storage space was limited. It could not be used with floppy disks or CD-ROMs. Within eight years of purchase it was already a dinosaur.

There was one internet connection in the newsroom, though most reporters were unaware of it. It was rumored that downstairs a bloke called Paul in IT had a Mac connected to the internet through a dial-up modem. Otherwise we were sealed off from the outside world.

Some journalist geeks began to invent Heath Robinson solutions to make the inadequate kit in Farringdon Road do the things we wanted in order to produce a technology website. Tom Standage — he later became deputy editor of the Economist, but then was a freelance tech writer — wrote some scripts to take articles out of Atex and format them into HTML so they could be moved onto the modest Mac web server — our first content management system, if you like. If too many people wanted to read the tech site at once the system crashed. So Standage and the site’s editor, Azeem Azhar, would take it in turns sitting in the server room in the basement of the building rebooting the machines by hand — unplugging them and physically moving the internet cables from one machine to another.
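To give a flavor of what those scripts did — the actual code is not reproduced here, so this is a hypothetical sketch in Python with invented names — converting a plain-text article exported from Atex into a web page amounted to little more than wrapping the headline and paragraphs in bare-bones HTML tags:

```python
# Hypothetical sketch of an Atex-to-HTML conversion script.
# Function and variable names are invented for illustration;
# the real scripts were not written in Python.

def article_to_html(title, paragraphs):
    """Wrap a headline and a list of paragraphs in a minimal HTML page."""
    body = "\n".join(f"<p>{p}</p>" for p in paragraphs)
    return (
        f"<html><head><title>{title}</title></head>\n"
        f"<body>\n<h1>{title}</h1>\n{body}\n</body></html>"
    )

page = article_to_html(
    "Net profits?",
    ["The first paragraph of the story.", "The second paragraph."],
)
```

The output of such a script could then simply be copied into the web server’s document folder — a one-way pipeline from the editorial system to the site, which is all an early content management system really was.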

What would the future look like? We imagined personalized editions, even if we had not the faintest clue how to produce them. We guessed that readers might print off copies of the Guardian in their homes — and even toyed with the idea of buying every reader a printer. There were glimmers of financial hope. Our readers were spending £56 million a year buying the Guardian but we retained none of it: the money went on paper and distribution. In the back of our minds we ran calculations about how the economics of newspapers would change if we could save ourselves the £56 million a year “old world” cost.

By March 1996, ideas we’d hatched in the summer of 1995 to graft the paper onto an entirely different medium were already out of date. That was a harbinger of the future.

On top of editing, the legal entanglements sometimes felt like a full-time job on their own. Trying to engineer a digital future for the Guardian felt like a third job. There were somehow always more urgent issues. By March 1996, ideas we’d hatched in the summer of 1995 to graft the paper onto an entirely different medium were already out of date. That was a harbinger of the future. No plans in the new world lasted very long.

It was now apparent that we couldn’t get away with publishing selective parts of the Guardian online. Other newspapers had shot that fox by pushing out everything. We were learning about the connectedness of the web — and the IT team tentatively suggested that we might use some “offsite links” to other versions of the same story to save ourselves the need to write our own version of everything. This later became the mantra of the City University of New York (CUNY) digital guru Jeff Jarvis — “Do what you do best, and link to the rest.”

We began to grapple with numerous basic questions about the new waters into which we were gingerly dipping our toes.

Important question: Should we charge?

The Times and the Telegraph were both free online. A March 1996 memo from Bill Thompson, a developer who had joined the Guardian from Pipex, ruled it out:

I do not believe the UK internet community would pay to read an online edition of a UK newspaper. They may pay to look at an archive, but I would not support any attempt to make the Guardian a subscription service online . . . It would take us down a dangerous path.

In fact, I believe that the real value from an online edition will come from the increased contact it brings with our readers: online newspapers can track their readership in a way that print products never can, and the online reader can be a valuable commodity in their own right, even if they pay nothing for the privilege.

Thompson was prescient about how the overall digital economy would work — at least for players with infinitely larger scale and vastly more sophisticated technology.

What time of day should we publish?

The electronic Telegraph was published at 8 a.m. each day — mainly because of its print production methods. The Times, more automated, was available as soon as the presses started rolling. The Guardian started making some copy available from first edition through to the early hours. It would, we were advised, be fraught with difficulties to publish stories at the same time they were ready for the press.

Why were we doing it anyway?

Thompson saw the dangers of cannibalization, that readers would stop buying the paper if they could read it for free online. It could be seen as a form of marketing. His memo seemed ambivalent as to whether we should venture into this new world at all:

The Guardian excels in presenting information in an attractive, easy-to-use and easy-to-navigate form. It is called a “broadsheet newspaper.” If we try to put the newspaper on-line (as the Times has done) then we will just end up using a new medium to do badly what an old medium does well. The key question is whether to make the Guardian a website, with all that entails in terms of production, links, structure, navigational aids etc. In summer 1995 we decided that we would not do this.

But was that still right a year later? By now we had the innovation team — PDU — still in the basement of one building in Farringdon Road, and another team in a Victorian loft building across the way in Ray Street. We were, at the margins, beginning to pick up some interesting fringe figures who knew something about computers, if not journalism. But none of this was yet pulling together into a coherent picture of what a digital Guardian might look like.

An 89-page business plan drawn up in October 1996 made it plain where the priorities lay: print.

We wanted to keep growing the Guardian circulation — aiming for a modest increase to 415,000 by March 2000, which would make us the ninth-biggest paper in the UK — with the Observer targeting 560,000 with the aid of additional sections. An investment of £200,000 a year in digital was dwarfed by an additional £6 million cash injection into the Observer, spread over three years.

As for “on-line services” (we were still hyphenating it) we did want “a leading-edge presence” (whatever that meant), but essentially we thought we had to be there because we had to be there. By being there we would learn and innovate and — surely? — there were bound to be commercial opportunities along the road. It wasn’t clear what.

We decided we might usefully take broadcasting, rather than print, as a model — emulating its “immediacy, movement, searchability and layering.”

If this sounded as if we were a bit at sea, we were. We hadn’t published much digitally to this point. We had taken half a dozen meaty issues — including parliamentary sleaze, and a feature on how we had continued to publish on the night our printing presses had been blown up by the IRA — and turned them into special reports.

It is a tribute to our commercial colleagues that they managed to pull in the thick end of half a million pounds to build these websites. Other companies’ marketing directors were presumably like ours — anxious about the youth market and keen for their brands to feel “cool.” In corporate Britain in 1996, there was nothing much cooler than the internet, even if not many people had it, knew where to find it or understood what to do with it.

* * *

The absence of a controlling owner meant we could run the Guardian in a slightly different way from some papers. Each day began with a morning conference open to anyone on the staff. In the old Farringdon Road office, it was held around two long narrow tables in the editor’s office — perhaps 30 or 40 people sitting or standing. When we moved to our new offices at Kings Place, near Kings Cross in North London, we created a room that was, at least theoretically, less hierarchical: a horseshoe of low yellow sofas with a further row of stools at the back. In this room would assemble a group of journalists, tech developers and some visitors from the commercial departments every morning at about 10 a.m. If it was a quiet news day we might expect 30 or so. On big news days, or with an invited guest, we could host anything up to 100.

A former Daily Mail journalist, attending his first morning conference, muttered to a colleague in the newsroom that it was like Start the Week — a Monday morning BBC radio discussion program. All talk and no instructions. In a way, he was right: It was difficult, in conventional financial or efficiency terms, to justify 50 to 60 employees stopping work to gather together each morning for anything between 25 and 50 minutes. No stories were written during this period, no content generated.

But something else happened at these daily gatherings. Ideas emerged and were kicked around. Commissioning editors would pounce on contributors and ask them to write the thing they’d just voiced. The editorial line of the paper was heavily influenced, and sometimes changed, by the arguments we had. The youngest member of staff would be in the same room as the oldest: They would be part of a common discussion around news. By a form of accretion and osmosis an idea of the Guardian was jointly nourished, shared, handed down, and crafted day by day.

You might love the Guardian or despise it, but it had a definite sense of what it believed in and what its journalism was.

It led to a very strong culture. You might love the Guardian or despise it, but it had a definite sense of what it believed in and what its journalism was. It could sometimes feel an intimidating meeting — even for, or especially for, the editor. The culture was intended to be one of challenge: If we’d made a wrong decision, or slipped up factually or tonally, someone would speak up and demand an answer. But challenge was different from blame: It was not a meeting for dressing downs or bollockings. If someone had made an error the previous day we’d have a post-mortem or unpleasant conversation outside the room. We’d encourage people to want to contribute to this forum, not make them fear disapproval or denunciation.

There was a downside to this. It could, and sometimes did, lead to a form of group-think. However herbivorous the culture we tried to nurture, I was conscious of some staff members who felt awkward about expressing views outside what we hoped was a fairly broad consensus. But, more often, there would be a good discussion on two or three of the main issues of the day. We encouraged specialists or outside visitors to come in and discuss breaking stories. Leader writers could gauge the temperature of the paper before penning an editorial. And, from time to time, there would be the opposite of consensus: Individuals, factions, or groups would come and demand we change our line on Russia; bombing in Bosnia; intervention in Syria; Israel; blood sports; or the Labor leadership.

The point was this: The Guardian was not one editor’s plaything or megaphone. It emerged from a common conversation — and was open to internal challenge when editorial staff felt uneasy about aspects of our journalism or culture.

* * *

Within two years — slightly uncomfortable at the power I had acquired as editor — I gave some away. I wanted to make correction a natural part of the journalistic process, not a bitterly contested post-publication battleground designed to be as difficult as possible.

We created a new role on the Guardian: a readers’ editor. He or she would be the first port of call for anyone wanting to complain about anything we did or wrote. The readers’ editor would have daily space in the paper — off-limits to the editor — to correct or clarify anything and would also have a weekly column to raise broader issues of concern. It was written into the job description that the editor could not interfere. And the readers’ editor was given the security that he/she could not be removed by the editor, only by the Scott Trust.

On most papers editors had sat in judgment on themselves. They commissioned pieces, edited and published them — and then were supposed neutrally to assess whether their coverage had, in fact, been truthful, fair, and accurate. An editor might ask a colleague — usually a managing editor — to handle a complaint, but he/she was in charge from beginning to end. It was an autocracy. That mattered even more in an age when some journalism was moving away from mere reportage and observation to something closer to advocacy or, in some cases, outright pursuit.

Allowing even a few inches of your own newspaper to be beyond your direct command meant that your own judgments, actions, ethical standards and editorial decisions could be held up to scrutiny beyond your control. That, over time, was bound to change your journalism. Sunlight is the best disinfectant: that was the journalist-as-hero story we told about what we do. So why wouldn’t a bit of sunlight be good for us, too?

The first readers’ editor was Ian Mayes, a former arts and obituaries editor then in his late 50s. We felt the first person in the role needed to have been a journalist — and one who would command instant respect from a newsroom which otherwise might be somewhat resistant to having their work publicly critiqued or rebutted. There were tensions and some resentment, but Ian’s experience, fairness and flashes of humor eventually won most people round.

One or two of his early corrections convinced staff and readers alike that he had a light touch about the fallibility of journalists:

In our interview with Sir Jack Hayward, the chairman of Wolverhampton Wanderers, page 20, Sport, yesterday, we mistakenly attributed to him the following comment: “Our team was the worst in the First Division and I’m sure it’ll be the worst in the Premier League.” Sir Jack had just declined the offer of a hot drink. What he actually said was: “Our tea was the worst in the First Division and I’m sure it’ll be the worst in the Premier League.” Profuse apologies.

In an article about the adverse health effects of certain kinds of clothing, pages 8 and 9, G2, August 5, we omitted a decimal point when quoting a doctor on the optimum temperature of testicles. They should be 2.2 degrees Celsius below core body temperature, not 22 degrees lower.

But in his columns he was capable of asking tough questions about our editorial decisions —  often prompted by readers who had been unsettled by something we had done. Why had we used a shocking picture which included a corpse? Were we careful enough in our language around mental health or disability? Why so much bad language in the Guardian? Were we balanced in our views of the Kosovo conflict? Why were Guardian journalists so innumerate? Were we right to link to controversial websites?

In most cases Mayes didn’t come down on one side or another. He would often take readers’ concerns to the journalist involved and question them — sometimes doggedly — about their reasoning. We learned more about our readers through these interactions; and we hoped that Mayes’s writings, candidly explaining the workings of a newsroom, helped readers better understand our thinking and processes.

It was, I felt, good for us to be challenged in this way. Mayes was invaluable in helping devise systems for the “proper” way to correct the record. A world in which — to coin a phrase —  you were “never wrong for long” posed the question of whether you went in for what Mayes termed “invisible mending.” Some news organizations would quietly amend whatever it was that they had published in error, no questions asked. Mayes felt differently: The act of publication was something on the record. If you wished to correct the record, the correction should be visible.

We were some years off the advent of social media, in which any error was likely to be pounced on in a thousand hostile tweets. But we had some inkling that the iron grip of centralized control that a newspaper represented was not going to last.

I found liberation in having created this new role. There are few things editors enjoy less than the furious early morning phone call or email from the irate subject of their journalism. Either the complainant is wrong — in which case there is time wasted in heated self-justification; or they’re right, wholly or partially. Immediately you’re into remorseful calculations about saving face. If readers knew we honestly and rapidly — even immediately — owned up to our mistakes they should, in theory, trust us more. That was the David Broder theory, and I bought it. Readers certainly made full use of the readers’ editor’s existence. Within five years Mayes was dealing with around 10,000 calls, emails, and letters a year — leading to around 1,200 corrections, big and small. It’s not, I think, that we were any more error-prone than other papers. But if you win a reputation for openness, you’d better be ready to take it as seriously as your readers will.

Our journalism became better. If, as a journalist, you know there are a million sleuth-eyed editors out there waiting to leap on your tiniest mistake, it makes you more careful. It changes the tone of your writing. Our readers often know more than we do. That became a mantra of the new world, coined by the blogger and academic Dan Gillmor in his 2004 book We the Media, but it was already becoming evident in the late 1990s.

The act of creating a readers’ editor felt like a profound recognition of the changing nature of what we were engaged in. Journalism was not an infallible method guaranteed to result in something we would proclaim as The Truth — but a more flawed, tentative, iterative and interactive way of getting towards something truthful.

Admitting that felt both revolutionary and releasing.

* * *

Excerpted from Breaking News: The Remaking of Journalism and Why It Matters Now by Alan Rusbridger. Published by Farrar, Straus and Giroux on November 27, 2018. Copyright © 2018 by Alan Rusbridger. All rights reserved.

Longreads Editor: Aaron Gilbreath

The Darwinian View of Our Storytelling Species

Terray Sylvester/VWPics via AP Images

Taxonomy classifies organisms in a way that maps life’s diversification and ancestral connections across time. When folklorists started charting popular stories like “Little Red Riding Hood” the same way, they built evolutionary trees that revealed surprising connections between childhood tales across cultures. For Harper’s, science writer Ferris Jabr explores this lesser-known scientific approach to children’s narratives, which treats a story’s structural elements as genes, called mythemes. But this approach peers much deeper than individual stories’ genealogies. It exposes the ancient, durable roots of storytelling itself and our nature as a species. “Beauty and the Beast” and “Rumpelstiltskin,” Jabr writes, were no longer “just a few hundred years old, as some scholars had proposed — they were more than 2,500 years old.”

“Most stories probably don’t survive that long,” says Tehrani. “But when you find a story shared by populations that speak closely related languages, and the variants follow a treelike model of descent, I think coincidence or convergence is an incredibly unlikely explanation. I have young children myself, and I read them bedtime stories, just as parents have done for hundreds of generations. To think that some of these stories are so old that they are older than the language I’m using to tell them—I find something deeply compelling about that.”

The story of storytelling began so long ago that its opening lines have dissolved into the mists of deep time. The best we can do is loosely piece together a first chapter. We know that by 1.5 million years ago early humans were crafting remarkably symmetrical hand axes, hunting cooperatively, and possibly controlling fire. Such skills would have required careful observation and mimicry, step-by-step instruction, and an ability to hold a long series of events in one’s mind—an incipient form of plot. At least one hundred thousand years ago, and possibly much earlier, humans were drawing, painting, making jewelry, and ceremonially burying the dead. And by forty thousand years ago, humans were creating the type of complex, imaginative, and densely populated murals found on the chalky canvases of ancient caves: art that reveals creatures no longer content to simply experience the world but who felt compelled to record and re-imagine it. Over the past few hundred thousand years, the human character gradually changed. We became consummate storytellers.


Revisiting the #MeToo Movement: A Reading List

Getty Images

In her TED Talk “Me Too is a movement, not a moment” at TED Women 2018, Tarana Burke paces across the stage, saying, “I’ve read article after article bemoaning wealthy white men who have landed softly with their golden parachutes following the disclosure of their terrible behavior. And we’re asked to consider their futures. But what of survivors?”

Burke’s TED Talk, which took place in late November 2018, came just after the one-year mark of the #MeToo hashtag going viral, giving Burke — and others — a chance to reflect on the history of the movement, and whether or not it’s headed in a direction that supports Burke’s original intent.

“This movement is constantly being called a watershed moment, or even a reckoning, but I wake up some days feeling like all evidence points to the contrary,” Burke says. She pauses, shaking her head. “We have moved so far away from the origins of this movement that started a decade ago, or even the intentions of the hashtag that started just a year ago, that sometimes, the Me Too movement that I hear some people talk about is unrecognizable to me.”

Roxane Gay, in a piece for Refinery 29 at the one-year mark of the #MeToo hashtag, expresses how the movement has diverged from the heart of Burke’s work, asking, “What will change for women? What, especially, will change for the most vulnerable women among us — the undocumented, women of color, working class women, single mothers? What will change for women who cannot afford to come forward when they are harassed or assaulted? As I consider this past year, what strikes me is how #MeToo has mostly benefited culturally prominent, mostly white women.”

Burke’s movement, which began in 2006, was originally intended, as Abby Ohlheiser reports in The Chicago Tribune, “to help women and girls — particularly women and girls of color — who had also survived sexual violence.” Beyond the one-year mark of the hashtag going viral and the decade of work Burke has done to support survivors of sexual assault, there exists a history of black women activists fighting against sexual violence. As Danielle McGuire writes in her essay “Recy Taylor, Oprah Winfrey, and the long history of black women saying #MeToo” for The Washington Post, “stories of subversion date from the 1830s, when Harriet Jacobs, an enslaved woman in North Carolina, lived in a crawl space for years to escape her owner’s sexual abuse.”

And Burke, in her TED Talk, emphasizes the true purpose of the #MeToo movement, which is “a movement about the far-reaching power of empathy. And so it’s about the millions and millions of people who, one year ago, raised their hands to say, ‘Me Too,’ and their hands are still raised while the media that they consume erases them and politicians who they elected to represent them pivot away from solutions.”

This erasure from media is noted by Salamishah Tillet and Scheherazade Tillet, in a recent opinion piece for the New York Times, “After the ‘Surviving R. Kelly’ Documentary, #MeToo Has Finally Returned to Black Girls.” Tillet and Tillet note, “even today, as #MeToo continues to dominate headlines, black girls have been invisible in the movement.” While the release of Surviving R. Kelly has pivoted attention toward black women, Tillet and Tillet write, “our optimism is tempered by history, which shows that social justice movements rarely center, for any meaningful period, on black girls, or anyone who has survived sexual violence. That’s because black girls experience racial, gender and economic oppressions at the same time, a phenomenon the law professor Kimberlé Crenshaw calls intersectionality. As a result, their voices and experiences do not neatly fit into a single-issue narrative of gender or race.”

The collection of essays below seeks to heed Burke’s call for inclusivity and her vision of #MeToo as “a movement about the one-in-four girls and the one-in-six boys who are sexually assaulted every year and carry those wounds into adulthood. It’s about the 84 percent of trans women who will be sexually assaulted this year. And the indigenous women, who are three-and-a-half times more likely to be sexually assaulted than any other group. Or people with disabilities, who are seven times more likely to be sexually abused. It’s about the 60 percent of black girls like me who will be experiencing sexual violence before they turn 18. And the thousands and thousands of low-wage workers who are being sexually harassed right now on jobs that they can’t afford to quit.”

1. The Sexual Assault Epidemic That No One Is Talking About (Aviva Stahl, July 25, 2018, The Village Voice)

Iffat and Mariam (second name changed for anonymity) are two New York City residents who have experienced Islamophobia firsthand; both women have been assaulted while using public transportation. In this piece, Aviva Stahl reports that more than one in four “Muslim Arab hijab-wearing women…had been intentionally pushed or shoved on a subway platform.”

The #MeToo movement has brought new attention to street harassment of women, but Ahmad says she doesn’t think it’s done enough to address the experiences of Muslim women. “I don’t think they’re doing anything” to address gendered Islamophobia, she says. “As a survivor of that specific kind of [Islamophobic] violence, I don’t see myself in that movement. It doesn’t seem connected to the realities of Muslim women.”

2. Hotels See Panic Buttons as a #MeToo Solution for Workers. Guest Bans? Not So Fast. (Julia Jacobs, November 11, 2018, The New York Times)

After Ms. Melara, a housekeeper in Southern California, was accosted by a guest who exposed himself to her, she locked herself in a nearby room to escape, but wasn’t given assistance until nearly twenty minutes later. Her story is not an anomaly; many workers in the hotel industry are sexually assaulted and harassed by guests. Julia Jacobs reports on panic buttons, a solution proposed by the hotel industry to protect workers.

3. We Need to Include Black Women’s Experience in the Movement Against Campus Sexual Assault (Candace King, June 15, 2018, The Nation)

Only a few weeks after Venkayla Haynes received a rape whistle at her Spelman College freshman orientation, she was raped by a football player. Though Haynes reported the rape to a dean at Spelman at the time, her situation was complicated by “institutional realities. Both Haynes and her assailant are black.”

Haynes believes the way college administrators responded to her assault reflects longstanding tendencies in the black community to shield black men from interactions with authorities.

“We always come to these situations where we can’t come forward because we want to protect black men or protect our black brothers because they’re already fighting against a system that further criminalizes them,” Haynes said.

4. #NotInvisible: Why are Native American women vanishing? (Sharon Cohen, September 6, 2018, The Associated Press)

Ashley HeavyRunner Loring has been missing since June 2017, and her family has embarked on around 40 searches in attempts to locate her. Ashley is one of many missing or murdered Native American women and girls, as Sharon Cohen reports in this piece, though the precise number is difficult to establish because “some cases go unreported, others aren’t documented thoroughly and there isn’t a specific government database tracking these cases.”

On some reservations, Native American women are murdered at a rate more than 10 times the national average and more than half of Alaska Native and Native women have experienced sexual violence at some point, according to the U.S. Justice Department. A 2016 study found more than 80 percent of Native women experience violence in their lifetimes.

5. In The #MeToo Conversation, Transgender People Face a Barrier to Belief (KC Clements, April 18, 2018, them.)

Much of the narrative about #MeToo has revolved around sexual assault between cisgender heterosexual people, and too many still believe that it is only experienced by conventionally attractive cisgender women, or that it is only perpetrated by “bad” cisgender men.

I’ve wondered where exactly I fit into this dialogue, because I’m a nonbinary person who was assigned female at birth, and, well, #MeToo.

KC Clements recalls their own experiences with sexual harassment and assault, presents testimonies from other trans people, and urges inclusivity, emphasizing the need for more resources, support, and materials for trans survivors of assault and harassment.

Related read: Trans Women and Femmes Are Shouting #MeToo – But Are You Listening? (Meredith Talusan, March 2, 2018, them.)

6. When will MeToo become WeToo? Some say voices of black women, working class left out (Charisse Jones, October 5, 2018, USA Today)

After being sexually harassed by coworkers at McDonald’s, Kim Lawson, along with nine other employees, filed a harassment complaint with the Equal Employment Opportunity Commission.

An analysis by the law center of complaints filed from 2012 to 2016 with the EEOC found that black women working in the private sector lodged sexual harassment charges at nearly three times the rate of white women.

While the media has focused extensively on the #MeToo movement in Hollywood, Lawson, as well as other activists, emphasize that the #MeToo movement needs to include women of color, particularly those working lower-wage jobs.

7. The Sexual Assault Epidemic No One Talks About (Joseph Shapiro, January 8, 2018, NPR)

In February 2016, Pauline, a 46-year-old woman who lived with a longtime caretaker, was raped by two boys who were part of the family. In this piece, the product of a yearlong investigation by NPR, Joseph Shapiro details the staggering statistics related to sexual assault for people with intellectual disabilities, including the fact that women and men with intellectual disabilities are seven times more likely to be sexually assaulted than people without disabilities.

The federal numbers, and the results of our own database, show that people with intellectual disabilities are vulnerable everywhere, including in places where they should feel safest: where they live, work, go to school; on van rides to medical appointments and in public places.

Related read: The #MeToo Movement Hasn’t Been Inclusive of the Disability Community (Emily Flores, April 24, 2018, Teen Vogue)

8. R. Kelly and the Complexities of Race in the #MeToo Era (Jelani Cobb, January 11, 2019, The New Yorker)

Jelani Cobb opens this piece with a memory from childhood of a woman with a black eye who visits his mother. Cobb’s mother later tells him that the woman had been abused by her husband, and Cobb recalls the moment being a “lesson in the consequences of male brutality. It was an implicit instruction in how I was not to behave as a man.” By putting his personal experience in conversation with the recent public response to Surviving R. Kelly, Cobb delves into complexities of race and reporting violence, and what it means to bear witness to brutality in the era of #MeToo.

There’s a gulf between the accusations directed at Harvey Weinstein, Matt Lauer, and Les Moonves—wealthy white men whose alleged excesses were understood as a perquisite of their status—and those directed at Bill Cosby and R. Kelly, black men for whom success represented some broader communal hope that long odds in life could be surmounted. Cosby and Kelly know this, which is part of the reason that they were so effective at manipulating public sentiment around their various accusations.

* * *

Jacqueline Alnes is working on a memoir about running and neurological illness.

In My Own Voice, Redefining Success and Failure

Alamy / Photo illustration by Katie Kosma

Lauren DePino | Longreads | January 2019 | 21 minutes (5,245 words)

Upon eighth-grade graduation from my small elementary school in suburban Pennsylvania, my classmates and I each walked away with a personalized memory book, hand-bound and laminated by some of our mothers. The theme, Planet Hollywood, in bubbly red type, sweeps across the cover like a comet, over the image of a metallic blue earth. Out of the iridescent globe jets a star-shaped photo of the respective member of the class of 1996.

To imagine that the best parts of our lives were yet to come felt like waiting for immortality to begin. There was an actualized version of us out there somewhere, living the life we hoped for. We just had to find the threshold. Our moment was there, laid out for us in plain sight — like a new outfit, just waiting, waiting for us to wake up and put it on.

My defining moment, your defining moment, it could be anything. It could be meeting a partner, becoming a mother, becoming a writer. You choose your blanks and you fill yourself in. You choose your questions and your answers. You pick your image.

In my eighth-grade photo, I’m encapsulated by a cerulean star. My smile is tentative behind braces and my chin protrudes ungracefully. I had blown out my bangs that morning, but by the time the photo was taken, they had given in to their natural curl. I was hesitant but hopeful.

The inside pages of our memory books display answers to questionnaires we’d filled out about what we wished to remember and who we wanted to become. On page 12, a thought bubble reads: “In the year 2006, I will be…”

When it came to envisioning the future, nothing felt out of reach. I now realize possessing this kind of incipient possibility is characteristic of privilege — of growing up in an upper-middle-class suburb where our biggest worry was not whether we could land a happy future, but which of many futures we would choose. It was also the height of the self-esteem movement, when parents and teachers told children that if they worked hard enough, they could be anything they wanted.

In my class, there were future everythings.

There was a major-league baseball player, a lawyer, a NASA scientist. A geneticist, a famous actress, a teacher. There was an obstetrician, a lottery winner, at least four mothers — but no dads, not yet. Someone foresaw “living at home and driving my parents nuts.” Another waxed: “I don’t think about the future, I just let it arrive.” There were a couple of question marks.

There was a paleontologist, an entrepreneur, an eye doctor. A big-time fashion designer. I wonder how many of us became who we said we would. I wonder how many of us still covet the adult life we had imagined for ourselves at 13 years old. I wonder how many of us can peacefully reconcile who we thought we’d be with who we are.

Mine was this:

Hopefully,
I will be a singer.

It looked just like that: a pyramid of letters, whose hope literally rested on the statement below it. It struck me that the mothers who edited the book chose to have “hopefully” hold its own line. Surrounded by gaping space, the word looked lonely and expectant. Hope is not certain. It engenders hesitation. It suggests anticipation without outcome. Why did I need to choose that word? When my middle sister Shayna saw it, she told me I jinxed my future. I don’t believe she’s right. But then again, all of my future hasn’t happened.

Jack, Jacqueline — Dad

Illustration by Zoë van Dijk

Yvonne Conza | Longreads | December 2018 | 28 minutes (6,875 words)

Dad is dying. A cell phone ping alerts me to a terse, fracturing email from my father’s younger brother.

Your Father is in a Florida Hospice. My eyes freeze on the bold subject line as I’m having dinner with a friend at an East Village restaurant. The muffled music and clatter of cutlery become an inescapable tunnel of sound. Childhood memories torpedo my thoughts and conflict with the reality that Dad is close to passing away on the cusp of turning 79. Thirty years of not knowing where or how he lived vanish.

***

To most everyone, John Joseph Downes was Jack, but to a few he was Jacqueline, and to Mom, my three older siblings and me he was “Jackass” behind his back. Dad’s multiplex of enduring identities also included: door-to-door Encyclopedia Britannica salesman; entrepreneur selling jigs, molds, gauges and fixture parts to automotive plants through a business he built from scratch; and the owner of a successful home health care agency. A Buffalo Bills fan, he gave his season tickets to clients while he watched games at home eating cheese curds and pretzels. He was a seeker of public office, wearer of white button-down shirts with wife-beater tanks underneath, actual wife beater, sporadic psoriasis sufferer, excellent provider, entertainer, showoff, lover of culture and a Chivas Regal drinker who, as these wailing memories emerge, will not live two months more to celebrate his New Year’s Eve birthday.

For a few years, Dad donned a hearse-black, trapezoid-contoured toupee that our Russian Blue cat murderously stalked like a sly predator. When it sat askew on Dad’s head, the cat didn’t tamper with the hairpiece. But once it was placed atop Mom’s dresser she pounced on it, battled with double-sided tape and amused all, even Dad, with her mischief. Stored in a cherry wood armoire and draped over a creepy female Styrofoam white mannequin wig stand was Dad’s more notable wig, a dolled up shoulder-length Jackie O. bouffant postiche with satiny strands looped into starched beach waves. Had he added oval, dark, smoke-tinted oversized sunglasses, the look would have been complete.

He had a proclivity towards cross-dressing, a marital joint venture since Mom slipped him into finery that hung inside a shared closet. Though their bedroom door was kept closed, the curtains weren’t pulled down, perhaps intentionally, to spark a pivotal conversation. As a child of 8, I was blindsided by intimate details that felt jarring and amiss. Whenever I put away his freshly laundered socks and t-shirts, I had to open the shuttered double doors of his dresser and be exposed to the cavernous storage area where timepieces and ties kept Jackie O’s foam head company.

When I was not much older, flickering flashes, not belonging to a swarm of fireflies, distracted me from Charlie’s Angels. Looking up to the wide-open windows of my parents’ second-floor bedroom I saw Dad accessorized, demure and toying with puckered painted lips. Backlit and indefinably beautiful, he seemed more himself in a size 16 dress than in one of his polyester baby blue or pickle green leisure suits.

Once while snooping for Christmas presents, I discovered Polaroid portraits of Dad as Jackie stashed in a shabby shoebox on the top shelf of my parents’ bedroom closet. Clad in kitten heels, stockings and a conservative, zip-from-behind dress, he had been transformed into a chunky, rarified suggestion of Jacqueline Kennedy. When not embodying Jacqueline, he wore a suit, white shirt and tie, shaved, splashed on decadent amounts of Old Spice. It was hard for him to keep a clean shave, 5 o’clock shadow always intruding. He bore a resemblance to Don Knotts, the billboard-sized forehead over his eyebrows, which I inherited, displaying struggle, though in a more generous light it beamed with determination. After stuffing pens in his pocket protector, heigh-ho, heigh-ho, it’s off to work he’d go — a tender, paunch-bellied dwarf with pick and shovel who knew not to return home until a million diamonds shined, and his worth to his wife could be proven.


My Brother, My Self

Illustration by Eric Peterson

Katie Prout | Longreads | December 2018 | 25 minutes (6,270 words)

Every addict is a lawyer and my brother is no exception. On the first winter day that feels like spring, the boys next door get too rowdy. Beer cans fall to the ground under a faint February sun. Frat boys slur-shout along to Drake and make my thin walls quake. I huff and puff, and I consider putting on my boots and crunching over through the melting snow to tell my neighbors I have a sick kid (“Will you please turn it down?”), but instead I pull my bathrobe tighter and text Hank. I feel like you know about noise complaints, I write.

Huh? he texts back.

I know it’s only 5:30 and a Saturday, but I’m trying to work on my thesis, I have a deadline, the undergrads next door are having a party. I’m about to cut their wires.

It’s not too early to call in a noise complaint, he writes. It just depends on how loud.

I thank Hank and call in my noise complaint, and as the sun goes down I screenshot our text exchange and go back to writing, as I always do, about him.

Every addict is a pharmacist and my brother is no exception. In June, our mother asks for Hank’s take on a new pain medication before allowing our youngest brother, struck by spina bifida in the womb, to be put on it. I am less inclined to take his advice when it comes to my own medication: “Xanax is as bad as a drink,” he says, and perhaps for him, that’s true. Like my mother, I go to Hank for his take on medicine in general, on how various pills may or may not interact with one another, even if I don’t always follow what he says. As an addict, he’s come to know the law, from its loopholes to its nooses, as intimately as he knows how ADHD meds mix with benzos, or how much vodka can steady withdrawal shakes until he can figure out his insurance for the hospital.

Every alcoholic is an addict, but not every alcoholic is taken seriously as such. I think about this every time I refer to Hank as an addict in conversation with others or on the page by myself: I think about this a lot. “Addict,” I say, and the faces of the people I’m speaking to grow still in sympathy; “alcoholic,” I say, and their faces are blank. The word alcoholic doesn’t mean much to them, or maybe it’s that the word alcoholic could mean anything. “I’m basically an alcoholic,” a man said to me once over drinks, laughing, and then frowning when I didn’t laugh too, when I stood up from my barstool and asked him if he was OK. It’s a joke, he said, you should joke more. But words matter to me, and that one matters in particular.

The Neanderthal

Illustration by Lily Padula

Jen Gilman Porat | Longreads | December 2018 | 14 minutes (3,447 words)

A couple of years ago, I purchased a pair of 23andMe kits for myself and my husband, Tomer. I intended to scientifically prove that Tomer’s most irritating behaviors were genetic destiny and therefore unchangeable. I’d grown tired of nagging him — oftentimes, I’d hear my own voice rattling inside my brain in the same way a popular song might get stuck in my head. I needed an out, something to push me toward unconditional acceptance of my husband. My constant complaining yielded zero behavior modification on his part; on the other hand, it was changing me into a nasty micromanager. I briefly considered marital therapy, but that’s an expensive undertaking, costing much more than the $398.00 one-time fee for both DNA kits. Plus, couples’ therapy could take a long time, requiring detours through our shared history. In appealing contrast, 23andMe promised to launch us straight back to our prehistoric roots, to an earlier point in causality, one that might provide Tomer with something akin to a formal pardon note, thereby permitting me to stop fighting against him, once and for all. I imagined we could help others by way of example too, for what long-married woman has not suffered her husband’s most banal tendencies — the socks and underwear on the floor, the snoring? Not me, actually, because my husband puts his used clothes in the hamper, and I’m the snorer. Really, I’m probably blessed as far as masculine disgustingness goes. But my husband is flawed in one repulsive way: his barbaric table manners.

I have no doubt this is a genetic situation, for even back when we were first dating, I’d shuddered upon seeing my father-in-law poke through the serving bowls of a family-style meal with his bare hairy hands. My husband’s father has also been caught eating ice cream directly from the carton (the thought of which I now appreciate for its built-in binge deterrent). Moreover, my father-in-law eats like a caveman-conqueror, reaching across dinner plates to pluck a taste of this or that from his mortified tablemates. A family dinner looks like a scene straight out of Game of Thrones, minus any crowns. And so, when my husband first began to exhibit similar behaviors, I had to wonder: Had I suffered some rare form of blindness previously? Did some barrier of unconscious denial gently shield my eyes each day, year after year, but only at mealtimes? It was as if a blindfold suddenly fell from my face, or as if Tomer had finally removed a mask from his own. My gentleman turned into a beast, seemingly overnight.

I watched with horror, one Sunday evening, as my husband served himself a plate of meat and vegetables with his hands. His fingers ripped skirt steak in lieu of cutting it with a knife. He abandoned his fork altogether, and I lost my appetite.

Had Tomer suffered some obscure symptom of the mid-life crisis? Or was this a regressed state? During a phone conversation with a close friend, I described my father-in-law’s vile eating manners and wondered if his pre-existing condition had grown contagious. She suggested Tomer’s change of behavior might indicate an epigenetic effect; she’d read somewhere that some aspects of our genetic code lie in wait and get activated along the way. Apparently, some inherited traits remained invisible for years, hiding patiently in our cells until: Surprise! Just when you hit middle age and are totally comfortable in your own skin (despite the new fine lines around your eyes and those brown circles that are hopefully age spots and not melanoma), some new biological fact of your genetic code makes itself manifest, waking you up from your mid-age slumber.

Another interesting detail I could not ignore: Around the same time Tomer stopped liking forks, he’d adopted the Paleo diet (versions of which are known as the caveman diet). He’d cut all processed foods from his intake, eating nothing but meat, nuts, vegetables, and fruit. Prior to going Paleo, he’d suffered from a severe case of irritable bowel syndrome and relied on bread products, thinking that challah and croissants were the softer, gentler foods. I suspected a gluten allergy and told him to lay off all the Pepperidge Farm cookies. I probably even told him to “eat like a caveman,” but I only meant for him to eat a more natural and gluten-free diet, in order to heal him, which, in fact, it did.

“My stomach is no longer a quivering idiot,” Tomer said, and he said it more than once, to countless friends and family members, until he’d worked up a complete narrative on how he’d triumphed over his very own stomach. And each time he told this story, he lifted his shirt, pounding his fists upon his midsection. His proud smile began to appear, well, wild and hungry, as if he’d tamed his digestive system but in doing so, had activated a primitive gene and sacrificed his own civility.

Shortly thereafter, I came across an article pertaining to Neanderthal DNA. According to modern science, the Neanderthals and our prehistoric ancestors mated, leaving many of us with a small percentage of Neanderthal DNA. I did more Googling and learned that 23andMe can tell you how much Neanderthal DNA you carry. Although they do mean different things, in my mind’s eye, the words “Neanderthal” and “Caveman” summoned identical images: savage, meat-eating maniacs ripping raw meat from bone with fat fingers and jagged teeth.

And this was it — the thing that sold me on 23andMe: the chance to determine one’s degree of Neanderthal-ness. Without any consideration of the possible consequences of submitting one’s DNA to a global database, I ordered two kits, grinning and convinced that my husband’s result would show a statistically significant, above-average number of Neanderthal variants in his genome. Since Father’s Day was only a month away, I decided I’d giftwrap the kits upon arrival too. I’d kill two birds with one stone.

A Stimulus Plan for the Mutual Aid Economy

iStock / Getty / Photo illustration by Katie Kosma

Livia Gershon | Longreads | November 2018 | 9 minutes (2,142 words)

If you’re a highly educated white man without serious disabilities—a description that, not incidentally, fits a large majority of people who make and write about policy in the United States—the economy probably looks like this to you: a web of financial transactions between individuals and companies, with support and guidance from the government. To Leah Lakshmi Piepzna-Samarasinha—a disabled, chronically ill writer and performer—it looks completely different. “Your life is maintained by a complex, non-monetary economy of shared, reciprocal care,” she writes in her new book, Care Work. “You drop off some extra food; I listen to you when you’re freaking out. You share your car with me; I pick you up from the airport. We pass the same twenty dollars back and forth between each other.”


Consider Who Can Afford the Oyster

You may know Ruby Tandoh as the runner-up from season 4 of The Great British Bake Off; you may not know that she’s a thoughtful writer working hard to stretch the boundaries of what “food writing” means. In Vice UK, she uses the life of ur-food writer M.F.K. Fisher — whose Consider the Oyster is about to be republished — as fertile ground for an exploration of the limits and potential of food writing.

The boundaries of food writing are hard to trace, but what is clear is that in spite of the soaring popularity of the food memoir and its ilk, little editorial time and space is being given to topics that sit in more overtly political territories. The Guardian’s Feast magazine, and many other national food supplements, are rich with imagery, whimsy, and culinary flights of fancy, but largely apolitical. Famine, urban food deserts, food legislation, and the workplace rights of restaurant employees lie outside of the remit of much contemporary food writing, shoved sideways instead into environmental or political journalism and often taken off the plate entirely…

“Pearls,” Fisher explains, “grow slowly, secretly, gleaming ‘worm-coffins’ built in what may be pain around the bodies that have crept inside the shells.” Just as the parasite, the wound and the body converge in the milky stillness of a pearl, food writing must allow itself to crystallise around points of tenderness. Moving away from the assertive “you are what you eat,” we can venture into a more uncertain, questioning space: Why do you eat what you eat? Who has the freedom to eat for pleasure, and who does not? Why does food matter at all? We start, but do not finish, with the Fisher-esque culinary selfie. The gastronomical “me” is no longer a monolith but an anchor point: a place in time, space, family, and culture from which we might turn our lens outwards to explore issues of hunger as well as comfort, suffering as well as joy.


A Mysterious Crack Appears: Past Trauma and Future Doom Meet in “Friday Black”

A sinkhole opened up in Philadelphia on Monday, January 9, 2017. Matt Rourke / AP

Alana Mohamed | Longreads | November 2018 | 11 minutes (2,988 words)

There is a certain genre of viral news story that we recycle every so often: odd activity on the earth’s seemingly stable surface that, while probably having a reasonable explanation, is reported on with breathless excitement when its cause is still unknown. “Mysterious Crack Appears In Mexico,” one headline shouts. “Mysterious crack appears in Wyoming landscape”; “A giant crack in Kenya opens up, but what’s causing it?”; “Splitsville: 2-Mile-Long Crack Opens in Arizona Desert”; “The White House lawn has developed a mysterious sinkhole that’s ‘growing larger by the day.’”

The follow-up stories (“Giant Wyoming Crack Explained”; “Let it sink in: The White House sinkhole is no more”) rarely gain the same traction. The mystery offers a chance to surrender control, an increasingly tantalizing option in a world algorithmically engineered to offer us the appearance of optimized choice. We choose, momentarily, to believe in something bottomless and chaotic.