The Anarchists Who Took the Commuter Train

A matchbook ad for Pennsylvania Railroad, 1940. Jim Heimann Collection / Getty.

Amanda Kolson Hurley | An excerpt from Radical Suburbs: Experimental Living on the Fringes of the American City | Belt Publishing | April 2019 | 19 minutes (4,987 words)

The Stelton colony in central New Jersey was founded in 1915. Humble cottages (some little more than shacks) and a smattering of public buildings ranged over a 140-acre tract of scrubland a few miles north of New Brunswick. Unlike America’s better-known experimental settlements of the nineteenth century, Stelton was not a refuge for a devout religious sect but a hive of political radicals, where federal agents came snooping during the Red Scare of 1919-1920. But it was also a suburb, a community of people who moved out of the city for the sake of their children’s education and to enjoy a little land and peace. They were not even the first people to come to the area with the same idea: There was already a German socialist enclave nearby, called Fellowship Farm.

The founders of Stelton were anarchists. In the twenty-first century, the word “anarchism” evokes images of masked antifa facing off against neo-Nazis. What it meant in the early twentieth century was different, and not easily defined. The anarchist movement emerged in the mid-nineteenth century alongside Marxism, and the two were allied for a time before a decisive split in 1872. Anarchist leader Mikhail Bakunin rejected the authority of any state — even a worker-led state, as Marx envisioned — and therefore urged abstention from political engagement. Engels railed against this as a “swindle.”

But anarchism was less a coherent, unified ideology than a spectrum of overlapping beliefs, especially in the United States. Although some anarchists used violence to achieve their ends, like Leon Czolgosz, who assassinated President William McKinley in 1901, others opposed it. Many of the colonists at Stelton were influenced by the anarcho-pacifism of Leo Tolstoy and by the land-tax theory of Henry George. The most venerated hero was probably the Russian scientist-philosopher Peter Kropotkin, who argued that voluntary cooperation (“mutual aid”) was a fundamental drive of animals and humans, and opposed centralized government and state laws in favor of small, self-governing, voluntary associations such as communes and co-ops.

It’s Tennis, Charlie Brown

Comic strips by Charles M. Schulz

Patrick Sauer | Racquet and Longreads | April 2019 | 11 minutes (2,896 words)

This story is produced in partnership with Racquet magazine and appears in issue no. 9.

In May 1951, seven months after a new comic strip called Peanuts debuted, an extremely roundheaded Charlie Brown is shown trying to return a tennis ball. He whiffs, then walks to the net to discuss a rule change with his pal Shermy, a once prominent but since forgotten character. The last panel shows both boys to be a half foot below the net as ol’ Chuck proposes, “One point if you hit the ball, two if you get it over the net!”

Throughout its 50-year run, tennis was a leitmotif in Peanuts. It wasn’t quite as prevalent as baseball or ice hockey, but forehands in the funny pages weren’t uncommon; the sport was shown or mentioned in a total of 236 Peanuts installments. The heyday of tennis in the beloved strip coincided with the tennis boom of the 1970s, which is when Peanuts creator Charles M. Schulz was hitting the courts most frequently, thanks to his tennis-loving wife, Jean, as well as a close pal with 39 Grand Slam titles to her name.

Against Hustle: Jenny Odell Is Taking Her Time at the End of the World

"Orb of Ambivalence," Jenny Odell, digital print, 2017. "This print collects people from 1980s-era computer ads and catalog images. In the original image from which each person was taken, he or she was touching a computer, keyboard, or mouse."

Rebecca McCarthy | Longreads | April 2019 | 14 minutes (3,693 words)

“I almost got locked in here once,” Jenny Odell tells me as we step into a mausoleum. We’re at the Chapel of the Chimes, which sits at the base of Oakland’s sprawling Mountain View Cemetery. The chapel first opened in 1909, and was redesigned in 1928 by Julia Morgan (the architect of Hearst Castle) with Gothic flourishes that mirror the Alhambra in Spain — rooms are filled with glass bookshelves, marbled hallways spill out into courtyards, skylights abound, and once you’re inside it’s difficult to find your way out even if you, like Odell, come here on an almost weekly basis. The books that line the walls are not actually books; they are urns. It’s essentially a library of the dead — the acoustics are perfect and there’s no sound inside save for our footsteps. The Chapel used to keep cages of canaries scattered around, but people wouldn’t stop setting them free.

‘Women Can Be Required To Wear Something That’s Painful.’

Virginia Gonzalez / Getty

Victoria Namkung | Longreads | March 2019 | 16 minutes (4,283 words)

 

From Cinderella’s glass slippers to Carrie Bradshaw’s Manolo Blahniks, Summer Brennan deftly analyzes one of the world’s most provocative and sexualized fashion accessories in High Heel, part of the Object Lessons series from Bloomsbury. Told in 150 vignettes that alternately entertain and educate, disturb and depress, the book ruminates on the ways in which society fetishizes, celebrates, and demonizes the high heel as well as the people, primarily women, who wear them.

She writes: “We’re still sorting out the relationship between glass ceilings and glass heels. For now, the idea of doing something ‘in high heels’ is a near-universally understood shorthand meaning both that the person doing it is female, and that in doing it, she faces additional, gendered challenges.” Whether you see high heels as empowering or a submission to patriarchal gender roles (or land somewhere in between), you’ll likely never look at a pair the same way again after reading High Heel.

Brennan, an award-winning investigative journalist and author of The Oyster War: The True Story of a Small Farm, Big Politics, and the Future of Wilderness in America, has written for New York Magazine, The Paris Review, Scientific American, Pacific Standard, Buzzfeed, and The San Francisco Chronicle, among other publications. A longtime communications consultant at the United Nations, she’s worked on issues and projects ranging from the environment and nuclear weapons to gender equality and human rights.

How the Guardian Went Digital

Newscast Limited via AP Images

Alan Rusbridger | Breaking News | Farrar, Straus and Giroux | November 2018 | 31 minutes (6,239 words)

 

In 1993 some journalists began to be dimly aware of something clunkily referred to as “the information superhighway,” but few had ever had reason to see it in action. At the start of 1995 only 491 newspapers were online worldwide; by June 1997 that had grown to some 3,600.

In the basement of the Guardian was a small team created by editor in chief Peter Preston — the Product Development Unit, or PDU. The inhabitants were young and enthusiastic. None of them were conventional journalists: I think the label might be “creatives.” Their job was to think of new things that would never occur to the largely middle-aged reporters and editors three floors up.

The team — which eventually rebranded itself as the New Media Lab — started casting around for the next big thing. They decided it was the internet. The creatives had a PC actually capable of accessing the world wide web. They moved in hipper circles. And they started importing copies of a new magazine, Wired — the so-called Rolling Stone of technology — which had started publishing in San Francisco in 1993, along with the HotWired website. “Wired described the revolution,” it boasted. “HotWired was the revolution.” HotWired launched in the same month the Netscape team was beginning to assemble. Only 18 months later Netscape was worth billions of dollars. Things were moving that fast.

In time, the team in PDU made friends with three of the people associated with Wired. They were the founders, Louis Rossetto and Jane Metcalfe, and the columnist Nicholas Negroponte, who was based at the Massachusetts Institute of Technology and who wrote mindblowing columns predicting such preposterous things as wristwatches which would “migrate from a mere timepiece today to a mobile command-and-control center tomorrow . . . an all-in-one, wrist-mounted TV, computer, and telephone.”

As if.

Both Rossetto and Negroponte were, in their different ways, prophets. Rossetto was a hot booking for TV talk shows, where he would explain to baffled hosts what the information superhighway meant. He’d tell them how smart the internet was, and how ethical. Sure, it was a “dissonance amplifier.” But it was also a “driver of the discussion” towards the real. You couldn’t mask the truth in this new world, because someone out there would weigh in with equal force. Mass media was one-way communication. The guy with the antenna could broadcast to billions, with no feedback loop. He could dominate. But on the internet every voice was going to be equal to every other voice.

“Everything you know is wrong,” he liked to say. “If you have a preconceived idea of how the world works, you’d better reconsider it.”

Negroponte, 50-something, East Coast gravitas to Rossetto’s Californian drawl, was working on a book, Being Digital, and was equally passionate in his evangelism. His mantra was to explain the difference between atoms — which make up the physical artifacts of the past — and bits, which travel at the speed of light and would be the future. “We are so unprepared for the world of bits . . . We’re going to be forced to think differently about everything.”

I bought the drinks and listened.

Over dinner in a North London restaurant, Negroponte started with convergence — the melting of all boundaries between TV, newspapers, magazines, and the internet into a single media experience — and moved on to the death of copyright, possibly the nation state itself. There would be virtual reality, speech recognition, personal computers with inbuilt cameras, personalized news. The entire economic model of information was about to fall apart. The audience would pull rather than wait for old media to push things as at present. Information and entertainment would be on demand. Overly hierarchical and status-conscious societies would rapidly erode. Time as we knew it would become meaningless — five hours of music would be delivered to you in less than five seconds. Distance would become irrelevant. A UK paper would be as accessible in New York as it was in London.

Writing 15 years later in the Observer, the critic John Naughton compared the impact of the begetter of the world wide web, Sir Tim Berners-Lee, with the seismic disruption caused five centuries earlier by the invention of movable type. Just as Gutenberg had no conception of his invention’s eventual influence on religion, science, systems of ideas, and democracy, so — in 2008 — “it will be decades before we have any real understanding of what Berners-Lee hath wrought.”

And so I decided to go to America with the leader of the PDU team, Tony Ageh, and see the internet for myself. A 33-year-old “creative,” Ageh had had exactly one year’s experience in media — as an advertising copy chaser for The Home Organist magazine — before joining the Guardian. I took with me a copy of The Internet for Dummies. Thus armed, we set off to America for a four-day, four-city tour.

In Atlanta, we found the Atlanta Journal-Constitution (AJC), which was considered a thought leader in internet matters, having joined the Prodigy Internet Service, an online service offering subscribers information over dial-up 1,200 bit/second modems. After four months the internet service had 14,000 members, paying 10 cents a minute to access online banking, messaging, full webpage hosting and live share prices.

The AJC business plan envisaged building to 35,000 or 40,000 members by year three. By that time, they calculated, they would be earning $3.3 million in subscription fees and $250,000 a year in advertising. “If it all goes to plan,” David Scott, publisher of the Electronic Information Service, told us, “it’ll be making good money. If it goes any faster, this is a real business.”

We also met Michael Gordon, the managing editor. “The appeal to the management is, crudely, that it is so much cheaper than publishing a newspaper,” he said.

We wrote it down.

“We know there are around 100,000 people in Atlanta with PCs. There are, we think, about one million people wealthy enough to own them. Guys see them as a toy; women see them as a tool. The goldmine is going to be the content, which is why newspapers are so strongly placed to take advantage of this revolution. We’re out to maximize our revenue by selling our content any way we can. If we can sell it on CD-ROM or TV as well, so much the better.”

“Papers? People will go on wanting to read them, though it’s obviously much better for us if we can persuade them to print them in their own homes. They might come in customized editions. Edition 14B might be for females living with a certain income.”

It was heady stuff.

From Atlanta we hopped up to New York to see the Times’s online service, @Times. We found an operation consisting of an editor plus three staffers and four freelancers. The team had two PCs, costing around $4,000 each. The operation was confident, but small.

The @Times content was weighted heavily towards arts and leisure. The opening menus offered a panel with about 15 reviews of the latest films, theatre, music, and books — plus book reviews going back two years. The site offered the top 15 stories of the day, plus some sports news and business.

There was a discussion forum about movies, with 47 different subjects being debated by 235 individual subscribers. There was no archive because — in one of the most notorious newspaper licensing cock-ups in history — the NYT in 1983 had given away all rights to its electronic archive (for all material more than 24 hours old) in perpetuity to Mead/Lexis.

That deal alone told you how nobody had any clue what was to come.

We sat down with Henry E. Scott, the group director of @Times. “Sound and moving pictures will be next. You can get them now. I thought about it the other day, when I wondered about seeing 30 seconds of The Age of Innocence. But then I realized it would take 90 minutes to download that and I could have seen more or less the whole movie in that time. That’s going to change.”

But Scott was doubtful about the lasting value of what they were doing — at least, in terms of news. “I can’t see this replacing the newspaper,” he said confidently. “People don’t read computers unless it pays them to, or there is some other pressing reason. I don’t think anyone reads a computer for pleasure. The San Jose Mercury [News] has put the whole newspaper online. We don’t think that’s very sensible. It doesn’t make sense to offer the entire newspaper electronically.”

We wrote it all down.

“I can’t see the point of news on-screen. If I want to know about a breaking story I turn on the TV or the radio. I think we should only do what we can do better than in print. If it’s inferior to the print version there’s no point in doing it.”

Was there a business plan? Not in Scott’s mind. “There’s no way you can make money out of it if you are using someone else’s server. I think the LA Times expects to start making money in about three years’ time. We’re treating it more as an R & D project.”




From New York we flitted over to Chicago to see what the Tribune was up to. In its 36-storey Art Deco building — a spectacular monument to institutional self-esteem — we found a team of four editorial and four marketing people working on a digital service, with the digital unit situated in the middle of the newsroom. The marketeers were beyond excited about the prospect of being able to show houses or cars for sale and arranged a demonstration. We were excited, too, even if the pictures were slow and cumbersome to download.

We met Joe Leonard, associate editor. “We’re not looking at Chicago Online as a money maker. We’ve no plans even to break even at this stage. My view is simply that I’m not yet sure where I’m going, but I’m on the boat, in the water — and I’m ahead of the guy who is still standing on the pier.”

Reach before revenue.

Finally we headed off to Boulder, Colorado, in the foothills of the Rockies, where Knight Ridder had a team working on their vision of the newspaper of tomorrow. The big idea was, essentially, what would become the iPad — only the team in Boulder hadn’t got much further than making an A4 block of wood with a “front page” stuck on it. The 50-something director of the research centre, Roger Fidler, thought the technology capable of realizing his dream of a ‘personal information appliance’ was a couple of years off.

Tony and I had filled several notebooks. We were by now beyond tired and talked little over a final meal in an Italian restaurant beneath the Rocky Mountains.

We had come. We had seen the internet. We were conquered.

* * *

Looking back from the safe distance of nearly 25 years, it’s easy to mock the fumbling, wildly wrong predictions about where this new beast was going to take the news industry. We had met navigators and pioneers. They could dimly glimpse where the future lay. Not one of them had any idea how to make a dime out of it, but at the same time they intuitively sensed that it would be more reckless not to experiment. It seemed reasonable to assume that — if they could be persuaded to take the internet seriously — their companies would dominate in this new world, as they had in the old world.

We were no different. After just four days it seemed blindingly obvious that the future of information would be mainly digital. Plain old words on paper — delivered expensively by essentially Victorian production and distribution methods — couldn’t, in the end, compete. The future would be more interactive, more image-driven, more immediate. That was clear. But how on earth could you graft a digital mindset and processes onto the stately ocean liner of print? How could you convince anyone that this should be a priority when no one had yet worked out how to make any money out of it? The change, and therefore the threat, was likely to happen rapidly and maybe violently. How quickly could we make a start? Or was this something that would be done to us?

In a note for Peter Preston on our return I wrote, “The internet is fascinating, intoxicating . . . it is also crowded out with bores, nutters, fanatics and middle managers from Minnesota who want the world to see their home page and CV. It’s a cacophony, a jungle. There’s too much information out there. We’re all overloaded. You want someone you trust to fillet it, edit it and make sense of it for you. That’s what we do. It’s an opportunity.”

I spent the next year trying to learn more, and then the calendar clicked on to 1995 — The Year the Future Began, at least according to the cultural historian W. Joseph Campbell, who used the phrase as the title of his book about that year twenty years later. It was the year Amazon.com, eBay, Craigslist, and Match.com established their presence online. Microsoft spent $300m launching Windows 95 amid weeks of marketing hype, paying millions for the rights to the Rolling Stones hit “Start Me Up,” which became the launch anthem.

Cyberspace — as the cyber dystopian Evgeny Morozov recalled, looking back on that period — felt like space itself. “The idea of exploring cyberspace as virgin territory, not yet colonized by governments and corporations, was romantic; that romanticism was even reflected in the names of early browsers (‘Internet Explorer,’ ‘Netscape Navigator’).”

But, as Campbell was to reflect, “no industry in 1995 was as ill-prepared for the digital age, or more inclined to pooh-pooh the disruptive potential of the Internet and World Wide Web, than the news business.” It suffered from what he called “innovation blindness” — “an inability, or a disinclination to anticipate and understand the consequences of new media technology.”

1995 was, then, the year the future began. It happened also to be the year in which I became editor of the Guardian.

* * *

I was 41 and had not, until very recently, really imagined this turn of events. My journalism career took a traditional enough path. A few years reporting; four years writing a daily diary column; a stint as a feature writer — home and abroad. In 1986 I left the Guardian to be the Observer’s television critic. When I rejoined the Guardian I was diverted towards a route of editing — launching the paper’s Saturday magazine followed by a daily tabloid features section and moving to be deputy editor in 1993. Peter Preston — unshowy, grittily obstinate, brilliantly strategic — looked as if he would carry on editing for years to come. It was a complete surprise when he took me to the basement of the resolutely unfashionable Italian restaurant in Clerkenwell he favored, to tell me he had decided to call it a day.

On most papers the proprietor or chief executive would find an editor and take him or her out to lunch to do the deal. On the Guardian — at least according to tradition dating back to the mid-70s — the Scott Trust made the decision after balloting the staff, a process that involved manifestos, pub hustings, and even, by some candidates, a little frowned-on campaigning.

I supposed I should run for the job. My mission statement said I wanted to boost investigative reporting and get serious about digital. It was, I fear, a bit Utopian. I doubt much of it impressed the would-be electorate. British journalists are programmed to skepticism about idealistic statements concerning their trade. Nevertheless, I won the popular vote and was confirmed by the Scott Trust after an interview in which I failed to impress at least one Trustee with my sketchy knowledge of European politics. We all went off for a drink in the pub round the back of the office. A month later I was editing.

“Fleet Street,” as the UK press was collectively called, was having a torrid time, not least because the biggest beast in the jungle, Rupert Murdoch, had launched a prolonged price war that was playing havoc with the economics of publishing. His pockets were so deep he could afford to slash the price of The Times almost indefinitely — especially if it forced others out of business.

Reach before revenue — as it wasn’t known then.

The newest kid on the block, the Independent, was suffering the most. In their eyes, Murdoch was behaving in a predatory way. We calculated the Independent titles were losing around £42 million (nearly £80 million in today’s money). Murdoch’s Times, by contrast, had seen its sales rocket 80 per cent by cutting its cover prices to below what it cost to print and distribute. The circulation gains had come at a cost — about £38 million in lost sales revenue. But Murdoch’s TV business, BSkyB, was making booming profits and the Sun continued to throw off huge amounts of cash. He could be patient.

The Telegraph had been hit hard — losing £45 million in circulation revenues through cutting the cover price by 18 pence. The end of the price war left it slowly clawing back lost momentum, but it was still £23 million adrift of where it had been the previous year. Murdoch — as so often — had done something bold and aggressive. Good for him, not so good for the rest of us. Everyone was tightening their belts in different ways. The Independent effectively gave up on Scotland. The Guardian saved a million a year in newsprint costs by shaving half an inch off the width of the paper.

The Guardian, by not getting into the price war, had “saved” around £37 million it would otherwise have lost. But its circulation had been dented by about 10,000 readers a day. Moreover, the average age of the Guardian reader was 43 — something that preoccupied us rather a lot. We were in danger of having a readership too old for the job advertisements we carried.

Though the Guardian itself was profitable, the newspaper division was losing nearly £12 million (north of £21 million today). The losses were mainly due to the sister Sunday title, the Observer, which the Scott Trust had purchased as a defensive move against the Independent in 1993. The Sunday title had a distinguished history, but was hemorrhaging cash: £11 million in losses.

Everything we had seen in America had to be put on hold for a while. The commercial side of the business never stopped reminding us that only three percent of households owned a PC and a modem.

* * *

But the digital germ was there. My love of gadgets had not extended to understanding how computers actually worked, so I commissioned a colleague to write a report telling me, in language I could understand, how our computers measured up against what the future would demand. The Atex system we had installed in 1987 gave everyone a dumb terminal on their desk — little more than a basic word processor. It couldn’t connect to the internet, though there was a rudimentary internal messaging system. There was no word count or spellchecker and storage space was limited. It could not be used with floppy disks or CD-ROMs. Within eight years of purchase it was already a dinosaur.

There was one internet connection in the newsroom, though most reporters were unaware of it. It was rumored that downstairs a bloke called Paul in IT had a Mac connected to the internet through a dial-up modem. Otherwise we were sealed off from the outside world.

Some of our journalist geeks began to invent Heath Robinson solutions to make the inadequate kit in Farringdon Road do the things we wanted in order to produce a technology website. Tom Standage — he later became deputy editor of the Economist, but then was a freelance tech writer — wrote some scripts to take articles out of Atex and format them into HTML so they could be moved onto the modest Mac web server — our first content management system, if you like. If too many people wanted to read the tech site at once, the system crashed. So Standage and the site’s editor, Azeem Azhar, would take it in turns sitting in the server room in the basement of the building rebooting the machines by hand — unplugging them and physically moving the internet cables from one machine to another.

What would the future look like? We imagined personalized editions, even if we had not the faintest clue how to produce them. We guessed that readers might print off copies of the Guardian in their homes — and even toyed with the idea of buying every reader a printer. There were glimmers of financial hope. Our readers were spending £56 million a year buying the Guardian but we retained none of it: the money went on paper and distribution. In the back of our minds we ran calculations about how the economics of newspapers would change if we could save ourselves the £56 million a year “old world” cost.

On top of editing, the legal entanglements sometimes felt like a full-time job on their own. Trying to engineer a digital future for the Guardian felt like a third job. There were somehow always more urgent issues. By March 1996, ideas we’d hatched in the summer of 1995 to graft the paper onto an entirely different medium were already out of date. That was a harbinger of the future. No plans in the new world lasted very long.

It was now apparent that we couldn’t get away with publishing selective parts of the Guardian online. Other newspapers had shot that fox by pushing out everything. We were learning about the connectedness of the web — and the IT team tentatively suggested that we might use some “offsite links” to other versions of the same story to save ourselves the need to write our own version of everything. This later became the mantra of the City University of New York (CUNY) digital guru Jeff Jarvis — “Do what you do best, and link to the rest.”

We began to grapple with numerous basic questions about the new waters into which we were gingerly dipping our toes.

Important question: Should we charge?

The Times and the Telegraph were both free online. A March 1996 memo from Bill Thompson, a developer who had joined the Guardian from Pipex, ruled it out:

I do not believe the UK internet community would pay to read an online edition of a UK newspaper. They may pay to look at an archive, but I would not support any attempt to make the Guardian a subscription service online . . . It would take us down a dangerous path.

In fact, I believe that the real value from an online edition will come from the increased contact it brings with our readers: online newspapers can track their readership in a way that print products never can, and the online reader can be a valuable commodity in their own right, even if they pay nothing for the privilege.

Thompson was prescient about how the overall digital economy would work — at least for players with infinitely larger scale and vastly more sophisticated technology.

What time of day should we publish?

The electronic Telegraph was published at 8 a.m. each day — mainly because of its print production methods. The Times, more automated, was available as soon as the presses started rolling. The Guardian started making some copy available from first edition through to the early hours. It would, we were advised, be fraught with difficulties to publish stories at the same time they were ready for the press.

Why were we doing it anyway?

Thompson saw the dangers of cannibalization — that readers would stop buying the paper if they could read it for free online — though going online could also be seen as a form of marketing. His memo seemed ambivalent as to whether we should venture into this new world at all:

The Guardian excels in presenting information in an attractive, easy to use and easy to navigate form. It is called a “broadsheet newspaper.” If we try to put the newspaper on-line (as the Times has done) then we will just end up using a new medium to do badly what an old medium does well. The key question is whether to make the Guardian a website, with all that entails in terms of production, links, structure, navigational aids etc. In summer 1995 we decided that we would not do this.

But was that still right a year later? By now we had the innovation team — PDU — still in the basement of one building in Farringdon Road, and another team in a Victorian loft building across the way in Ray Street. We were, at the margins, beginning to pick up some interesting fringe figures who knew something about computers, if not journalism. But none of this was yet pulling together into a coherent picture of what a digital Guardian might look like.

An 89-page business plan drawn up in October 1996 made it plain where the priorities lay: print.

We wanted to keep growing the Guardian circulation — aiming for a modest increase to 415,000 by March 2000, which would make us the ninth-biggest paper in the UK — with the Observer aiming for 560,000 with the aid of additional sections. A modest investment of £200,000 a year in digital was dwarfed by an additional £6 million cash injection into the Observer, spread over three years.

As for “on-line services” (we were still hyphenating it) we did want “a leading-edge presence” (whatever that meant), but essentially we thought we had to be there because we had to be there. By being there we would learn and innovate and — surely? — there were bound to be commercial opportunities along the road. It wasn’t clear what.

We decided we might usefully take broadcasting, rather than print, as a model — emulating its “immediacy, movement, searchability and layering.”

If this sounded as if we were a bit at sea, we were. We hadn’t published much digitally to this point. We had taken half a dozen meaty issues — including parliamentary sleaze, and a feature on how we had continued to publish on the night our printing presses had been blown up by the IRA — and turned them into special reports.

It is a tribute to our commercial colleagues that they managed to pull in the thick end of half a million pounds to build these websites. Other companies’ marketing directors were presumably like ours — anxious about the youth market and keen for their brands to feel “cool.” In corporate Britain in 1996, there was nothing much cooler than the internet, even if not many people had it, knew where to find it or understood what to do with it.

* * *

The absence of a controlling owner meant we could run the Guardian in a slightly different way from some papers. Each day began with a morning conference open to anyone on the staff. In the old Farringdon Road office, it was held around two long narrow tables in the editor’s office — perhaps 30 or 40 people sitting or standing. When we moved to our new offices at Kings Place, near Kings Cross in North London, we created a room that was, at least theoretically, less hierarchical: a horseshoe of low yellow sofas with a further row of stools at the back. In this room would assemble a group of journalists, tech developers and some visitors from the commercial departments every morning at about 10 a.m. If it was a quiet news day we might expect 30 or so. On big news days, or with an invited guest, we could host anything up to 100.

A former Daily Mail journalist, attending his first morning conference, muttered to a colleague in the newsroom that it was like Start the Week — a Monday morning BBC radio discussion program. All talk and no instructions. In a way, he was right: It was difficult, in conventional financial or efficiency terms, to justify 50 to 60 employees stopping work to gather together each morning for anything between 25 and 50 minutes. No stories were written during this period, no content generated.

But something else happened at these daily gatherings. Ideas emerged and were kicked around. Commissioning editors would pounce on contributors and ask them to write the thing they’d just voiced. The editorial line of the paper was heavily influenced, and sometimes changed, by the arguments we had. The youngest member of staff would be in the same room as the oldest: They would be part of a common discussion around news. By a form of accretion and osmosis an idea of the Guardian was jointly nourished, shared, handed down, and crafted day by day.

It led to a very strong culture. You might love the Guardian or despise it, but it had a definite sense of what it believed in and what its journalism was. It could sometimes feel an intimidating meeting — even for, or especially for, the editor. The culture was intended to be one of challenge: If we’d made a wrong decision, or slipped up factually or tonally, someone would speak up and demand an answer. But challenge was different from blame: It was not a meeting for dressing downs or bollockings. If someone had made an error the previous day we’d have a post-mortem or unpleasant conversation outside the room. We’d encourage people to want to contribute to this forum, not make them fear disapproval or denunciation.

There was a downside to this. It could, and sometimes did, lead to a form of group-think. However herbivorous the culture we tried to nurture, I was conscious of some staff members who felt awkward about expressing views outside what we hoped was a fairly broad consensus. But, more often, there would be a good discussion on two or three of the main issues of the day. We encouraged specialists or outside visitors to come in and discuss breaking stories. Leader writers could gauge the temperature of the paper before penning an editorial. And, from time to time, there would be the opposite of consensus: Individuals, factions, or groups would come and demand we change our line on Russia, bombing in Bosnia, intervention in Syria, Israel, blood sports, or the Labour leadership.

The point was this: the Guardian was not one editor’s plaything or megaphone. It emerged from a common conversation — and was open to internal challenge when editorial staff felt uneasy about aspects of our journalism or culture.

* * *

Within two years — slightly uncomfortable at the power I had acquired as editor — I gave some away. I wanted to make correction a natural part of the journalistic process, not a bitterly contested post-publication battleground designed to be as difficult as possible.

We created a new role on the Guardian: a readers’ editor. He or she would be the first port of call for anyone wanting to complain about anything we did or wrote. The readers’ editor would have daily space in the paper — off-limits to the editor — to correct or clarify anything and would also have a weekly column to raise broader issues of concern. It was written into the job description that the editor could not interfere. And the readers’ editor was given the security that he/she could not be removed by the editor, only by the Scott Trust.

On most papers editors had sat in judgment on themselves. They commissioned pieces, edited and published them — and then were supposed neutrally to assess whether their coverage had, in fact, been truthful, fair, and accurate. An editor might ask a colleague — usually a managing editor — to handle a complaint, but he/she was in charge from beginning to end. It was an autocracy. That mattered even more in an age when some journalism was moving away from mere reportage and observation to something closer to advocacy or, in some cases, outright pursuit.

Allowing even a few inches of your own newspaper to be beyond your direct command meant that your own judgments, actions, ethical standards and editorial decisions could be held up to scrutiny beyond your control. That, over time, was bound to change your journalism. Sunlight is the best disinfectant: that was the journalist-as-hero story we told about what we do. So why wouldn’t a bit of sunlight be good for us, too?

The first readers’ editor was Ian Mayes, a former arts and obituaries editor then in his late 50s. We felt the first person in the role needed to have been a journalist — and one who would command instant respect from a newsroom which otherwise might be somewhat resistant to having their work publicly critiqued or rebutted. There were tensions and some resentment, but Ian’s experience, fairness and flashes of humor eventually won most people round.

One or two of his early corrections convinced staff and readers alike that he had a light touch about the fallibility of journalists:

In our interview with Sir Jack Hayward, the chairman of Wolverhampton Wanderers, page 20, Sport, yesterday, we mistakenly attributed to him the following comment: “Our team was the worst in the First Division and I’m sure it’ll be the worst in the Premier League.” Sir Jack had just declined the offer of a hot drink. What he actually said was: “Our tea was the worst in the First Division and I’m sure it’ll be the worst in the Premier League.” Profuse apologies.

In an article about the adverse health effects of certain kinds of clothing, pages 8 and 9, G2, August 5, we omitted a decimal point when quoting a doctor on the optimum temperature of testicles. They should be 2.2 degrees Celsius below core body temperature, not 22 degrees lower.

But in his columns he was capable of asking tough questions about our editorial decisions — often prompted by readers who had been unsettled by something we had done. Why had we used a shocking picture which included a corpse? Were we careful enough in our language around mental health or disability? Why so much bad language in the Guardian? Were we balanced in our views of the Kosovo conflict? Why were Guardian journalists so innumerate? Were we right to link to controversial websites?

In most cases Mayes didn’t come down on one side or another. He would often take readers’ concerns to the journalist involved and question them — sometimes doggedly — about their reasoning. We learned more about our readers through these interactions; and we hoped that Mayes’s writings, candidly explaining the workings of a newsroom, helped readers better understand our thinking and processes.

It was, I felt, good for us to be challenged in this way. Mayes was invaluable in helping devise systems for the “proper” way to correct the record. A world in which — to coin a phrase — you were “never wrong for long” posed the question of whether you went in for what Mayes termed “invisible mending.” Some news organizations would quietly amend whatever it was that they had published in error, no questions asked. Mayes felt differently: The act of publication was something on the record. If you wished to correct the record, the correction should be visible.

We were some years off the advent of social media, in which any error was likely to be pounced on in a thousand hostile tweets. But we had some inkling that the iron grip of centralized control that a newspaper represented was not going to last.

I found liberation in having created this new role. There are few things editors enjoy less than the furious early morning phone call or email from the irate subject of their journalism. Either the complainant is wrong — in which case there is time wasted in heated self-justification — or they’re right, wholly or partially. Immediately you’re into remorseful calculations about saving face. If readers knew we honestly and rapidly — even immediately — owned up to our mistakes they should, in theory, trust us more. That was the David Broder theory, and I bought it. Readers certainly made full use of the readers’ editor’s existence. Within five years Mayes was dealing with around 10,000 calls, emails, and letters a year — leading to around 1,200 corrections, big and small. It’s not, I think, that we were any more error-prone than other papers. But if you win a reputation for openness, you’d better be ready to take it as seriously as your readers will.

Our journalism became better. If, as a journalist, you know there are a million sleuth-eyed editors out there waiting to leap on your tiniest mistake, it makes you more careful. It changes the tone of your writing. Our readers often know more than we do. That became a mantra of the new world, coined by the blogger and academic Dan Gillmor in his 2004 book We the Media, but it was already becoming evident in the late 1990s.

The act of creating a readers’ editor felt like a profound recognition of the changing nature of what we were engaged in. Journalism was not an infallible method guaranteed to result in something we would proclaim as The Truth — but a more flawed, tentative, iterative and interactive way of getting towards something truthful.

Admitting that felt both revolutionary and releasing.

* * *

Excerpted from Breaking News: The Remaking of Journalism and Why It Matters Now by Alan Rusbridger. Published by Farrar, Straus and Giroux, November 27, 2018. Copyright © 2018 by Alan Rusbridger. All rights reserved.

Longreads Editor: Aaron Gilbreath

Los Angeles Plays Itself

AP Photo/Reed Saxon

David L. Ulin | Sidewalking | University of California Press | October 2015 | 41 minutes (8,144 words)

 

“I want to live in Los Angeles, but not the one in Los Angeles.”

— Frank Black

 

One night not so many weeks ago, I went to visit a friend who lives in West Hollywood. This used to be an easy drive: a geometry of short, straight lines from my home in the mid-Wilshire flats — west on Olympic to Crescent Heights, north past Santa Monica Boulevard. Yet like everywhere else these days, it seems, Los Angeles is no longer the place it used to be. Over the past decade and a half, the city has densified: building up and not out, erecting more malls, more apartment buildings, more high-rises. At the same time, gridlock has become increasingly terminal, and so, even well after rush hour on a weekday evening, I found myself boxed in and looking for a shortcut, which, in an automotive culture such as this one, means a whole new way of conceptualizing urban space.

There are those (myself among them) who would argue that the very act of living in L.A. requires an ongoing process of reconceptualization, of rethinking not just the place but also our relationship to it, our sense of what it means. As much as any city, Los Angeles is a work in progress, a landscape of fragments where the boundaries we take for granted in other environments are not always clear. You can see this in the most unexpected locations, from Rick Caruso’s Grove to the Los Angeles County Museum of Art, where Chris Burden’s sculpture “Urban Light” — a cluster of 202 working vintage lampposts — fundamentally changed the nature of Wilshire Boulevard when it was installed in 2008. Until then, the museum (like so much of L.A.) had resisted the street, the pedestrian, in the most literal way imaginable, presenting a series of walls to the sidewalk, with a cavernous entry recessed into the middle of a long block. Burden intended to create a catalyst, a provocation: “I’ve been driving by these buildings for 40 years, and it’s always bugged me how this institution turned its back on the city,” he told the Los Angeles Times a week before his project was lit. When I first came to Los Angeles a quarter of a century ago, the area around the museum was seedy; it’s no coincidence that in the film Grand Canyon, Mary-Louise Parker gets held up at gunpoint there. Take a walk down Wilshire now, however, and you’ll find a different sort of interaction: food trucks, pedestrians, tourists, people from the neighborhood.

Versage

Bénédicte Kurzen and Noor

Allyn Gaestel, Photos by Bénédicte Kurzen / Noor | Nataal | February 2019 | 16 minutes (4,113 words)

If you look closely you’ll notice
That the pattern on this soft broadcloth shirt
Is made of working man’s blood
And praying folks’ tears.

If you look closer you’ll notice
That this pattern resembles
Tenement row houses, project high rises,
Cell block tiers,
Discontinued stretches of elevated train tracks,
Slave ship gullies, acres of tombstones.

If you look closer, you’ll notice
That this fabric has been carefully blended
With an advanced new age polymer
To make the fabric lightweight
Weatherproof, and durable.

All this to give some sort of posture and dignity
To a broken body that is a host for scars.

— From ‘Soldier’s Dream’ by YASIIN BEY

Lagos

I took a photograph on election day in 2015. It was golden hour. I was new in town. Though I had a writing fellowship that had nothing to do with electoral politics, I was a recovering news journalist. So I registered with the electoral commission and got my press pass and badge and drove around the ghostly streets of Lagos with some local reporters. It was largely an exercise in futility. I felt adrift. I wasn’t sure what I was looking for. The story I wrote rambles about the stories people tell. My fellowship editor thought it was useless.

But, driving home, I shot this photograph. In it, a teenager is crossing the road. We are in the neighbourhood of Ebute Metta, and he is wearing the most beautiful hoodie, covered in a twirling, swirling motif. He stares at me through glinting shades. Between the patterned sweatshirt and his shorts — also printed black and white but in a different design — he has layered a striped shirt. He stands in front of the Wasimi Community Mosque, a burnt-red building in the 1970s tropical modernist concrete that blankets much of mainland Lagos. Round concrete circles are embedded like a screen for privacy and ventilation at the top corner of the building. The pattern looks classically Lagosian now, but an architect once told me those cutout blocks were imported from Israel.

Photographs flatten reality. They squash three dimensions into two, and turn bodies and buildings into patterns and shapes. They still the world; they solidify a moment. You can breathe with a photograph, though the instant captured was briefer than your exhale. I was driving when I shot this, and my subject was walking; its stillness is stolen. And yet this split second is layered with everything inside the photograph and also everything ephemeral emanating from the image: emotion, history, foreshadowing. The photograph illustrates an obsession I had not yet noted; a string to a web I had yet to pull and untangle.

I liked it when I shot it. I thought: this looks like Lagos. (And I find Lagos beautiful.)

I later became transfixed by both this swirling pattern and by the thought, “This looks like Lagos.”

I saw the pattern everywhere. I took buses around town, little orbs bouncing through the city filled with uncountable lives, personalities, roles, all squished hip to hip on wooden benches. The clothes people wear express just a fragment of their personas. Sometimes it’s obligatory — white garments for Aladura churchgoers, pleated burgundy skirts for school — and sometimes it’s more loosely prescribed: suits and heels for office workers, individual designs in matching aso-ebi for weddings. But there is also a wide range of freedom both within and beyond these criteria, and cosmopolitan Lagosians are unrelentingly expressive and well-dressed. The sweatshirt in the photograph is of a style worn mostly by the young, fly dreamers of Lagos’ lower social strata — street hawkers, bus conductors, entrepreneurs with many hyphens: real estate agent-used car salesman-blogger of a fictional Yoruba playboy in Dubai. I came to call this style, and the concepts it encompasses, “Versage”.

The Precarity of Everything: On Millennial (Blacks and) Blues

Nina Subin / Bold Type Books

Danielle A. Jackson | Longreads | February 2019 | 14 minutes (3,747 words)

Kimya works in a cardiologist’s office in New Jersey, but at 34, with three kids and dreams of changing careers, she’s planning a move to Atlanta. Joelle, a 23-year-old UCLA graduate who runs a think tank’s youth program, helped her parents financially when she was in college. Jeremy, 25, supported his wife and kids in West Virginia’s coal mines until he got laid off. Simon, CTO of a startup in San Francisco and an alumnus of M.I.T., still worries interviewers may not “think he’s as good as them” because he’s Black.

Millennials, born somewhere between 1980 and 2000, make up more than a quarter of the U.S. population and are more than a third of its workforce. They’re the most diverse generation of adults in American history, according to the Brookings Institution — 44% of them are non-white. Yet, as journalist Reniqua Allen writes in her new book It Was All a Dream: A New Generation Confronts the Broken Promise to Black America, “discussion about millennials and their ideas of ‘success’ are often deeply rooted in the experiences of privileged White men and women — think more Lena Dunham than Issa Rae.” It explains why I’ve always had difficulty identifying myself as a millennial, and why I hadn’t realized that the stories of some Black celebrities, like melancholic trap artist Future, who turns 36 this year, or glowy 34-year-old showrunner Lena Waithe, are more emblematic of the generation than anything I’ve read about avocado toast. Allen conducted interviews with over 75 Black millennials for the book, including Kimya, Joelle, Jeremy, and Simon. She paints a complicated, often bleak picture of what it’s really like to achieve in America amid rising college costs, deunionization, two major recessions, and the election of President Trump.

Allen also includes snippets of her own story, writing poignantly about growing up a precocious middle class striver in suburban New Jersey with her devoted mother and aunts. In several sections, her interviewees speak about their dreams at length, in their own voices. She named the book after a lyric from the Notorious B.I.G.’s “Juicy,” a joyous hip hop gospel about overcoming great odds, and uses language that refuses to shame or moralize. Taken together, It Was All a Dream is an expansive, engaging tapestry of a generation’s hope and resilience and reads like a hip, sharp heir of The Warmth of Other Suns.

Allen and I went to undergrad together at American University in D.C. and graduated the same year. In our late 30s, we’re part of the oldest sub-group of millennials. I chatted with her about the core themes of her new book, what it means that a generation of “youth” are now heading toward middle age, the millennial burnout pieces in BuzzFeed by Anne Helen Petersen and Tiana Clark, and whether she feels optimistic, given the precarity of everything.

* * *

Danielle Jackson: Are you on your book tour right now?

Reniqua Allen: Yeah, and I’m exhausted, but the audiences have been really good. I’ve been to Atlanta and D.C. and I did some stuff in New York. We’re figuring out the West Coast and Midwest. People have been really engaged, in D.C. and Atlanta in particular. They’re really trying to figure out what it means to be a millennial, how being a millennial of color, a Black millennial, is different from prior generations.

What topics have people wanted to engage with you about?

Mental health has come up a lot during the Q&As. People are really struggling, which I think is very pervasive in the stories I collected. I feel like mental health treatment has been taboo in the Black community, so it’s interesting that people are so willing to talk about it now.

Some of your interviewees offer solutions when they talk about ways they’ve managed their mental health. In the chapter “Breathe,” Jasmine talks about how breathwork and meditation had been helpful.

Yeah, for her. One very, very unexpected way I heard about managing mental health was with the dominatrix that I talked to, who is mentioned early in the book. She said that cracking the whip on her White clientele and talking to them about race and race relations was healing for her. That was really fascinating.

I’m sure you read the BuzzFeed stories about millennial burnout? I spoke to the author of the Black millennial burnout piece, Tiana Clark. She’s very lovely and nice, and I really enjoyed the piece.

When I read the original piece by Anne Helen Petersen, I thought it was interesting yet very rooted in a White experience. My book hadn’t come out yet, and I wanted to respond. I was actually too tired and burnt out to respond to the burnout piece.

I read your book over the Christmas holiday, then the next month, the initial piece came out at BuzzFeed. I definitely thought it aligned with your critiques of how millennials are talked about, but I didn’t have time to address it. I do feel I miss opportunities to engage with people by being tired all the time.

Yeah, but it’s exhausting to have to write these kind of pieces over and over again. I keep trying to figure out what’s the best way to reach people. And I realized there was a period in time when I was writing think pieces in reaction to every police shooting. I’m sure tons of other writers would say the same thing. I was writing the same thing over and over and over again. It felt exhausting. I’ve been trying to figure out what to do with all of these emotions and energy and how best to tell important stories without feeling depleted.

Do you agree with Petersen that burnout is the defining millennial condition? Do you agree with that specifically when considering Black millennials?

Burnout is the definition of the Black experience in America in general. Is it unique to the millennial generation? I don’t think it’s unique to us. I think we feel the burnout even more because of systematic and historic oppression. Some of what she describes are “upper middle-class problems.” In her piece, she talks about how a lot of her friends were nannies or got babysitting jobs after college. I feel like my friends, particularly the friends who you would consider successful if you look at traditional monikers, didn’t have the ability to do that. They were getting internships and jobs basically since day one of college. The young Black people that we went to school with were so on it all the time.

That’s the thing that people don’t understand. Our experiences aren’t always equal, and even though we may end up in the same place, we’ve probably been tired since college or high school. I am so tired of saying it, because everyone says it, but we have to work twice as hard. So that burnout that everyone complains about? Double it up. And we’re not just talking about economic anxiety. We’re also talking about how we have to prove our humanity. That’s exhausting in a different type of way. We should be tired of telling people that Black people matter.

What was the genesis of It Was All a Dream?

I work in media as a producer and writer. I have a pretty middle-class existence, despite all my complaints. I have privileges. I don’t want to act like I don’t, because I do. Sometimes at work, I’d hear young White people saying they didn’t apply themselves in college, or they’d talk about how they “got drunk like every night.” One person said to me, “Well you know, we’re at the same place, Reniqua, so I don’t really see how you were that impacted by things [like racism].” At the same time, I noticed my Black peers working two or three jobs, with side hustles, trying to do online certificates or whatever it takes to get ahead. Yet there’s a report from the Washington Post that says 31% of White millennials think that Black people are lazier than White people. It’s very frustrating.

At one point, I was working on a documentary with an older Black man who grew up in circumstances similar to mine. He was of my parents’ generation, probably on the older side of the Baby Boomers, born in the ’40s. He had Caribbean parents and had grown up in suburban New Jersey. We had a lot of the same views on race but also a very different experience. I realized that it’s not just about race, but about generations too. While a lot of the same things come into play, growing up and being told that you can do whatever you want to do puts you in a different place. I think growing up with Barack Obama, who is an anomaly himself, puts you in a different place. Experiencing that pushed me to think beyond race, and a little bit about class too, and to be more intersectional in my approach to race issues.

Also, I’m in my 10th year of a PhD program. When I entered graduate school I was interested in telling a success story of the Black middle class. And then the recession happened. The discourse got very ugly and racist: Barack and Michelle Obama were being called “monkey” and “baby mama.” By the time I actually sat down to write the dissertation, it wasn’t a hopeful story anymore — Donald Trump was being elected president. It felt like it would need to be more about the broken promise of America, about shattered dreams.

I was writing the same thing over and over and over again. It felt exhausting. I’ve been trying to figure out what to do with all of these emotions and energy and how best to tell important stories without feeling depleted.

What would you say are important markers and milestones for Black millennials that have shaped how we think about opportunity? You mentioned the recession of 2008, Barack Obama’s election to the presidency, various police shootings. What else has been important in defining the mood of our collective lifetimes?

Hurricane Katrina, which I didn’t initially realize was one. Kanye saying that George Bush doesn’t care about Black people. Rodney King’s videotaped beating and Anita Hill’s testimony before the Senate. I remember when Jesse Jackson was running for president. Some of these are older millennial experiences. For some of the people I spoke to, it was the Jena Six case that inspired them to activism and an awareness of racial injustice. For me, it was Amadou Diallo’s shooting and the acquittal of the officers involved. There are also positives, like Beyoncé and Oprah coming to dominate everything.

Did you notice major differences between older and younger millennials?

Younger millennials have the attitude that things may not be great but they can change them. For example, a young artist, Shamir, was annoyed about the way he was being treated by his record label. He’d had one successful electronic pop album, and he didn’t want to be boxed into that sound for his next album. It seemed the label was trying to force him into a category of “queer pop artist.” He wanted to make lo-fi music that was way less produced. So he recorded his album on his own in four days in his room and released it.

It was an acknowledgement of how shitty the systems were, but also a real desire to make change despite that. A lot of younger millennials understood that most American systems weren’t made for them to succeed, so they chose to redefine what their idea of success looked like. They weren’t defining success as getting a job at IBM and working there for 20 or 30 years like our parents’ generation would. Or even having a stable marriage. They wanted happiness and freedom, which the older generations probably also wanted. But the younger millennials in particular were often very okay with taking different paths, and with acknowledging that getting to their happiness might look different than it did for past generations.

What about you? Did you feel pressure to go a more traditional route professionally?

I think there was initially a lack of understanding of how hard it is. Older folks may think if you want to be a writer, you should simply get a job at a magazine or a newspaper or whatever it is and work. Or if you want to work in television, in documentary, you know, just do that. In some ways the industry I chose has always been more defined by a gig economy than others. There’s less stability, less money. So for me, I know my mom has always wanted me to be happy, but she didn’t really understand what I needed to be able to do what I wanted. I think she understands the insecurity this generation faces better than she did earlier in my career. She’s seen us working hard and seen how it’s paying off less. You go to school but you have so much debt that you can never get out of it. It’s starting to show.


When you’re with your family, do you work a lot?

Yes, they see me working all the time. Sometimes they don’t get that you don’t take breaks in the same way. I think they’re very much used to working from 9 to 5. They see me at noon on a Wednesday and it’s like, “You’re just…at home?” It’s mystifying to them. But they don’t see how I stayed up all night the night before or what it is to have to fill out form after form for health care.  

Reports say that 44 percent of millennials are not White. A little while ago, Alexandria Ocasio-Cortez tweeted about how CBS hired no Black campaign reporters for the 2020 election. Why did you decide to focus specifically on Black millennials? Do you agree with the congresswoman that there is something salient about the Black experience in America that is applicable to everybody?

Yes, of course, I think it is THE American experience. It’s really hard to separate the Black experience from the story of America. And the idea of the American dream is so pervasive in Black culture. Black people, believe it or not, actually believe in it more than any other group. An important aspect or recurring theme all across Black culture is the idea of hope and opportunity. Black people are deeply spiritual and forgiving, I guess, but there have been a lot of broken promises. Many different periods in history have promised great hope and progress for Black people, whether it’s Reconstruction, the Great Migration, or the passage of certain Civil Rights legislation. It keeps crashing down. The presidency of Barack Obama was another moment where there was great hope that completely crashed down.  

Older millennials like us are going into middle age. And that’s an interesting time and place to be when so much of what has been written about us has been about our youth and our youthful frivolousness and entitlement. It’s new territory, thinking about this generation going into their 40s. What do you hope for our cohort as we age? What does middle age look like for millennials?

I think middle age for many millennials is very uncertain. We’re not kids, and everybody talks about our youth, but we’re in our 20s and our 30s. We’ve had jobs for a substantial amount of time now. We have to look beyond these kinds of stereotypes, like calling us entitled or lazy. I’m sure that there is some entitlement; we grew up with our parents saying you can do whatever you want. But I would like for people to really think about systemic flaws. For example, you should not have to go to college to be “successful” in America. We should think about student debt. What’s happening culturally is related to these real systematic changes in our world. We can’t skip college, get a job on a factory line, and still have a solid middle-class life anymore. You could call us noncommittal — I think a lot of my friends are just starting to have kids, getting married now. I’m horrified by the fact that it would be a “geriatric pregnancy” now if I ever want to have kids.

Yeah, starting at 35.

At 35, I’m way past it, right? But by the same token it’s not just that people are noncommittal, it’s that they don’t feel stable. I still work as a freelancer, I still go job to job, and health care is still precarious. I can’t think of anyone besides my fiancé who has had their job for more than 10 years. He’s a public school teacher with a union and a pension. But that’s not the norm.

In addition to mapping the terrain of all we’re up against, you talk about some bright, joyful, and hopeful things. For example, ’90s culture, like Living Single.

Yeah, I love Living Single. There are moments of joy in our experiences, and there are things that help. Like Black Twitter. I’m glad I grew up with these wonderful, beautiful moments of Blackness and Black identity. Sometimes, when people see someone like Serena or Venus excelling in a particular sport, or somewhere else where Black people have not historically been very visible, they think everything is all good for Black people everywhere. They don’t quite understand that those moments are not as frequent as they should be. They are way too few and far between. I think about Colin Kaepernick…

He didn’t vote in the [2016] presidential election. This is maybe an “old millennial” hang-up, but I feel that while that doesn’t discredit him or his protest, it does make me feel like I have questions.

Oh yeah, I know. Because generally Black folks voted. We’re so highly engaged. But we still get asked for voter ID the most out of everybody.  

It’s really hard to separate the Black experience from the story of America.

In your book, you talk about mobility and consider leaving the Northeast for the South — the opposite route of the Great Migration. The urban North hasn’t been all that great for Black people, and maybe the New South — the urban, progressive South — is a better option. But for many of the people you speak to, the New South isn’t idyllic either. Where in America do you think is safe, hospitable, and abundant for Black people?

Oh, who knows! I wish I had an answer for that. Maybe I could move there. This is one of the sections that I cut but really wanted to engage with — maybe the answer isn’t even America. I try to understand the South and I get the appeal of it in some ways, but it’s a painful place for me. My mother’s side of the family is from Manning, South Carolina. I like how warm the South is, in terms of the weather. I also love the people there. Even though I didn’t go to an HBCU (Historically Black College or University), I really enjoy that part of the culture. I don’t know whether you consider D.C. the South, but I really liked it. There is a [prominent, vocal, large] educated Black middle class there that I don’t find in New York in the same way. I miss that, but I also just don’t like how you can turn down a road and there’s an old plantation. Maybe that’s actually better, because I do think that they deal with their pain more than we do up here. And I know that I can be followed in a store on the Upper East Side. So I don’t know where it is.

I do think about how my family came up from the South during the Great Migration for their dreams. I keep trying to figure out if that was a mistake or not. Because my relatives in the South are all doing quite well. And they have what seems like a connection to the land and a sense of hope that the part of my family that has moved away doesn’t seem to have.

In your chapter about Black Lives Matter activism, you reveal some of the costs of sustained political engagement and movement work. Do you feel like the movements created by this generation are generative spaces or spaces of hope? And do you think it’s worth the cost emotionally and otherwise to really pursue that kind of work?

I think it’s still being debated. I feel like I haven’t made sacrifices in the way that someone like Jasmine [from the chapter “Breathe”] has. Her whole life has profoundly changed due to her visibility in Black Lives Matter. Like she says, when she was in a gang, no one paid her any attention, but she got a felony when she became an activist. I haven’t made sacrifices in that way. But is it worth it? I hope so. I think people are feeling in some ways that they need to speak on the inequalities in our society, and that’s great. But I worry about who has actual power. Beyond faces on camera or other kinds of representation, who is actually wielding power? That has not changed that much.

Are you hopeful about the future?

That’s a good question, and obviously it’s one that I’ve wrestled with a lot.

You end the book on a hopeful note.

I think I was really depressed after collecting the stories. I was in the thick of it for two years, and it was just sad to see people living on the margins and to hear about how much we still have to fight for our humanity. Seeing really young Black men and women working hard and not getting as much as they need in return was really hard. However, people were also resilient and determined to find a way. They seemed to recognize that America has always been screwed up for us, but they wanted to find a way regardless. That is the story of Black America. That is who we are as a people. So it is a hopeful story. It’s frustrating, but I’m not really worried about us, because we are doing what we need to do. We’re doing the hard work, and it reminds me just how amazing the story of Black America is. Because we actually survived this.

This interview has been edited for length and clarity.

‘I Was Restricting Myself to This One Country All This Time’: An Immigrant’s Search for Work in the U.S.

As a result of Trump’s April 2017 “Buy American and Hire American” executive order, immigration policies have become stricter for companies applying for H-1B visas, making it much harder for them to hire highly skilled legal immigrants. And while the U.S. still attracts top talent from around the world, these more rigid policies make education and employment in other countries more feasible and attractive.

For Philadelphia magazine, Gina Tomaine describes the challenges her future brother-in-law, Akirt Sridharan, faced while looking for work in the U.S. Sridharan, a 26-year-old man from India, graduated from the University of Delaware with an MBA and a master’s in electrical engineering. He had spent $125,000 on tuition in the U.S., and after graduating in May 2017, had applied to 2,000 jobs — with no success.

After graduating, Akirt began an odyssey into the byzantine American job market. He had high hopes at first, with an early lead at a financial company in Delaware. But after a second interview, the company learned he needed visa sponsorship and stopped the conversation.

“I’ve been sleeping on so many couches, they’ve just become my bed,” says Akirt. “I obviously never wanted to burden anybody, and that feeling is always in the back of my head. When you’re at someone else’s place all the time, you don’t know where home is anymore.”

He applied to more jobs. Then more jobs. He moved to San Francisco, since that’s supposed to be where the tech jobs are centered. Many companies wanted to hire him. What they didn’t want? To sponsor a visa at a time when applications are often rejected and the lottery system is a gamble.

All of this has been happening, of course, as tech companies in particular are desperate for skilled workers.

With no prospects, Akirt began to look for work outside of the U.S., and after four years of living in the country, he left. And suddenly, he was getting job interviews.

Akirt landed on November 7th in Chennai, a burgeoning start-up hub — the city his parents are originally from and have retired to. Their white marble high-rise apartment, whose decor features Hindu gods and goddesses, African tribal artwork, and every Apple product imaginable, sits next to a huge technological park — one that’s currently hiring Americans. Now that he was looking beyond the United States, Akirt seemed to have opportunities everywhere.

“I was restricting myself to this one country all this time,” he said. “Now, I have hundreds of countries left to explore.”

Read the story

Chimayó

Robert Alexander / Getty

Esmé Weijun Wang | An excerpt from The Collected Schizophrenias | Graywolf | January 2019 | 17 minutes (4,971 words)

When I walked into the neurologist’s office in 2013 with C., it should have been apparent that something was very wrong with me. I struggled to keep open my eyes, not because of exhaustion but because of the weakness of my muscles. If you lifted my arm, it would immediately flop back down again as though boneless. My body frequently broke out into inexplicable sweats and chills. On top of all that, I had been experiencing delusions for approximately ten months that year. My psychiatrist suspected anti-NMDA receptor encephalitis, made famous by Susannah Cahalan’s memoir, Brain on Fire: My Months of Madness, but that did not explain everything that was wrong with me, including the peripheral neuropathy that attacked my hands and feet, my “idiopathic fainting,” or the extreme weight loss that caused suspicions of cancer—and so I was referred to this neurologist, who was described by my psychiatrist as “smart” and “good in her field.”

“I don’t think you have anti-NMDA receptor encephalitis, based on your chart,” she said brusquely while C. and I sat in matching chairs that faced her examination table. “I’m doing this as a favor to your psychiatrist.” And then she added, “Someday, we’ll be able to trace all mental illnesses to autoimmune disorders. But we’re not there yet.”

In Santa Fe, New Mexico, where I had never been prior to 2017, my friend and fellow writer Porochista insisted that we visit the pilgrimage site of Chimayó. “You’ll be able to write something amazing about it,” she said. We were in the IV room of an integrative healthcare clinic when she said this, facing each other in enormous leather chairs with oxygen tubes in our noses and IV needles taped to our veins.

Read more…