The Day New York Rose Up Against the Nazis On the Hudson

A demonstration near the German ocean liner SS Bremen in New York, after Hugh Wilson, the American ambassador to Germany, was recalled in the wake of Kristallnacht, 1938. (FPG/Hulton Archive/Getty Images)

Peter Duffy | An excerpt adapted from The Agitator: William Bailey and the First American Uprising Against Nazism | PublicAffairs | March 2019 | 20 minutes (5,458 words)


Hear it, boys, hear it? Hell, listen to me! Coast to coast! HELLO AMERICA!
—Clifford Odets, Waiting For Lefty

Seven million New Yorkers, few of them in possession of the luxury item known as an electric fan, woke up to the best news in three weeks on Friday, July 26, 1935. During the overnight hours, the humidity plunged by 33 points. By sunrise, the temperate air from Canada had completed its work. The heat wave was over.

“Humidity Goes Into Tailspin,” the New York Post exulted. “Rain Ushers in Cool Spell,” declared the Brooklyn Eagle.

The New York Times and Herald Tribune didn’t make much of a fuss that morning over Varian Fry’s revelations about his conversation with Ernst Hanfstaengl. “Reich Divided on Way to Treat Jews, Says Fry,” was the cautious headline on page eleven of the Tribune. One faction of the Nazi Party, the paper went on in summary of Hanfstaengl’s comments to Fry, “were the radicals, who wanted to settle the matter by blood.” The other, “the self-styled moderate group,” wanted to “segregate the Jews and settle the question by legal methods.” The Times ran its version on page eight and devoted most of the article to Fry’s retelling of the Berlin Riots. “There were literally hundreds of policemen standing around but I did not see them do anything but protect certain cafés which I was told were owned by Nazis,” Fry was quoted as saying. The paper saved its preview of the Holocaust for the ninth of eleven paragraphs. The nation’s newspaper of record didn’t see the value in highlighting the disclosure that “the radical section” of Hitler’s regime “desired to solve the Jewish question with bloodshed.”

Reached for comment in Berlin, Hanfstaengl called Fry’s account “fictions and lies from start to finish.” Read more…

How the Guardian Went Digital

Newscast Limited via AP Images

Alan Rusbridger | Breaking News | Farrar, Straus and Giroux | November 2018 | 31 minutes (6,239 words)

 

In 1993 some journalists began to be dimly aware of something clunkily referred to as “the information superhighway” but few had ever had reason to see it in action. At the start of 1995 only 491 newspapers were online worldwide: by June 1997 that had grown to some 3,600.

In the basement of the Guardian was a small team created by editor in chief Peter Preston — the Product Development Unit, or PDU. The inhabitants were young and enthusiastic. None of them were conventional journalists: I think the label might be “creatives.” Their job was to think of new things that would never occur to the largely middle-aged reporters and editors three floors up.

The team — eventually rebranding itself as the New Media Lab — started casting around for the next big thing. They decided it was the internet. The creatives had a PC actually capable of accessing the world wide web. They moved in hipper circles. And they started importing copies of a new magazine, Wired — the so-called Rolling Stone of technology — which had started publishing in San Francisco in 1993, along with the HotWired website. “Wired described the revolution,” it boasted. “HotWired was the revolution.” It was launched in the same month the Netscape team was beginning to assemble. Only 18 months later Netscape was worth billions of dollars. Things were moving that fast.

In time, the team in PDU made friends with three of the people associated with Wired. They were the founders, Louis Rossetto, and Jane Metcalfe; and the columnist Nicholas Negroponte, who was based at the Massachusetts Institute of Technology and who wrote mindblowing columns predicting such preposterous things as wristwatches which would “migrate from a mere timepiece today to a mobile command-and-control center tomorrow . . . an all-in-one, wrist-mounted TV, computer, and telephone.”

As if.

Both Rossetto and Negroponte were, in their different ways, prophets. Rossetto was a hot booking for TV talk shows, where he would explain to baffled hosts what the information superhighway meant. He’d tell them how smart the internet was, and how ethical. Sure, it was a “dissonance amplifier.” But it was also a “driver of the discussion” towards the real. You couldn’t mask the truth in this new world, because someone out there would weigh in with equal force. Mass media was one-way communication. The guy with the antenna could broadcast to billions, with no feedback loop. He could dominate. But on the internet every voice was going to be equal to every other voice.

“Everything you know is wrong,” he liked to say. “If you have a preconceived idea of how the world works, you’d better reconsider it.”

Negroponte, 50-something, East Coast gravitas to Rossetto’s Californian drawl, was working on a book, Being Digital, and was equally passionate in his evangelism. His mantra was to explain the difference between atoms — which make up the physical artifacts of the past — and bits, which travel at the speed of light and would be the future. “We are so unprepared for the world of bits . . . We’re going to be forced to think differently about everything.”

I bought the drinks and listened.

Over dinner in a North London restaurant, Negroponte started with convergence — the melting of all boundaries between TV, newspapers, magazines, and the internet into a single media experience — and moved on to the death of copyright, possibly the nation state itself. There would be virtual reality, speech recognition, personal computers with inbuilt cameras, personalized news. The entire economic model of information was about to fall apart. The audience would pull rather than wait for old media to push things as at present. Information and entertainment would be on demand. Overly hierarchical and status-conscious societies would rapidly erode. Time as we knew it would become meaningless — five hours of music would be delivered to you in less than five seconds. Distance would become irrelevant. A UK paper would be as accessible in New York as it was in London.

Writing 15 years later in the Observer, the critic John Naughton compared the impact of the begetter of the world wide web, Sir Tim Berners-Lee, with the seismic disruption five centuries earlier caused by the invention of movable type. Just as Gutenberg had no conception of his invention’s eventual influence on religion, science, systems of ideas, and democracy, so — in 2008 — “it will be decades before we have any real understanding of what Berners-Lee hath wrought.”

The entire economic model of information was about to fall apart.

And so I decided to go to America with the leader of the PDU team, Tony Ageh, and see the internet for myself. A 33-year-old “creative,” Ageh had had exactly one year’s experience in media — as an advertising copy chaser for The Home Organist magazine — before joining the Guardian. I took with me a copy of The Internet for Dummies. Thus armed, we set off to America for a four-day, four-city tour.

In Atlanta, we found the Atlanta Journal-Constitution (AJC), which was considered a thought leader in internet matters, having joined the Prodigy Internet Service, an online service offering subscribers information over dial-up 1,200 bit/second modems. After four months the internet service had 14,000 members, paying 10 cents a minute to access online banking, messaging, full webpage hosting and live share prices.

The AJC business plan envisaged building to 35,000 or 40,000 members by year three. By that time, they calculated, they would be earning $3.3 million in subscription fees and $250,000 a year in advertising. “If it all goes to plan,” David Scott, the publisher of the Electronic Information Service, told us, “it’ll be making good money. If it goes any faster, this is a real business.”

We also met Michael Gordon, the managing editor. “The appeal to the management is, crudely, that it is so much cheaper than publishing a newspaper,” he said.

We wrote it down.

“We know there are around 100,000 people in Atlanta with PCs. There are, we think, about one million people wealthy enough to own them. Guys see them as a toy; women see them as a tool. The goldmine is going to be the content, which is why newspapers are so strongly placed to take advantage of this revolution. We’re out to maximize our revenue by selling our content any way we can. If we can sell it on CD-ROM or TV as well, so much the better.”

“Papers? People will go on wanting to read them, though it’s obviously much better for us if we can persuade them to print them in their own homes. They might come in customized editions. Edition 14B might be for females living with a certain income.”

It was heady stuff.

From Atlanta we hopped up to New York to see the Times’s online service, @Times. We found an operation consisting of an editor plus three staffers and four freelancers. The team had two PCs, costing around $4,000 each. The operation was confident, but small.

The @Times content was weighted heavily towards arts and leisure. The opening menus offered a panel with about 15 reviews of the latest films, theatre, music, and books – plus book reviews going back two years. The site offered the top 15 stories of the day, plus some sports news and business.

There was a discussion forum about movies, with 47 different subjects being debated by 235 individual subscribers. There was no archive because — in one of the most notorious newspaper licensing cock-ups in history — the NYT in 1983 had given away all rights to its electronic archive (for all material more than 24 hours old) in perpetuity to Mead/Lexis.

That deal alone told you how nobody had any clue what was to come.

We sat down with Henry E. Scott, the group director of @Times. “Sound and moving pictures will be next. You can get them now. I thought about it the other day, when I wondered about seeing 30 seconds of The Age of Innocence. But then I realized it would take 90 minutes to download that and I could have seen more or less the whole movie in that time. That’s going to change.”

But Scott was doubtful about the lasting value of what they were doing — at least, in terms of news. “I can’t see this replacing the newspaper,” he said confidently. “People don’t read computers unless it pays them to, or there is some other pressing reason. I don’t think anyone reads a computer for pleasure. The San Jose Mercury [News] has put the whole newspaper online. We don’t think that’s very sensible. It doesn’t make sense to offer the entire newspaper electronically.”

We wrote it all down.

“I can’t see the point of news on-screen. If I want to know about a breaking story I turn on the TV or the radio. I think we should only do what we can do better than in print. If it’s inferior to the print version there’s no point in doing it.”

Was there a business plan? Not in Scott’s mind. “There’s no way you can make money out of it if you are using someone else’s server. I think the LA Times expects to start making money in about three years’ time. We’re treating it more as an R & D project.”




From New York we flitted over to Chicago to see what the Tribune was up to. In its 36-storey Art Deco building — a spectacular monument to institutional self-esteem — we found a team of four editorial and four marketing people working on a digital service, with the digital unit situated in the middle of the newsroom. The marketeers were beyond excited about the prospect of being able to show houses or cars for sale and arranged a demonstration. We were excited, too, even if the pictures were slow and cumbersome to download.

We met Joe Leonard, associate editor. “We’re not looking at Chicago Online as a money maker. We’ve no plans even to break even at this stage. My view is simply that I’m not yet sure where I’m going, but I’m on the boat, in the water — and I’m ahead of the guy who is still standing on the pier.”

Reach before revenue.

Finally we headed off to Boulder, Colorado, in the foothills of the Rockies, where Knight Ridder had a team working on their vision of the newspaper of tomorrow. The big idea was, essentially, what would become the iPad — only the team in Boulder hadn’t got much further than making an A4 block of wood with a “front page” stuck on it. The 50-something director of the research centre, Roger Fidler, thought the technology capable of realizing his dream of a ‘personal information appliance’ was a couple of years off.

Tony and I had filled several notebooks. We were by now beyond tired and talked little over a final meal in an Italian restaurant beneath the Rocky Mountains.

We had come. We had seen the internet. We were conquered.

* * *

Looking back from the safe distance of nearly 25 years, it’s easy to mock the fumbling, wildly wrong predictions about where this new beast was going to take the news industry. We had met navigators and pioneers. They could dimly glimpse where the future lay. Not one of them had any idea how to make a dime out of it, but at the same time they intuitively sensed that it would be more reckless not to experiment. It seemed reasonable to assume that — if they could be persuaded to take the internet seriously — their companies would dominate in this new world, as they had in the old world.

We were no different. After just four days it seemed blindingly obvious that the future of information would be mainly digital. Plain old words on paper — delivered expensively by essentially Victorian production and distribution methods — couldn’t, in the end, compete. The future would be more interactive, more image-driven, more immediate. That was clear. But how on earth could you graft a digital mindset and processes onto the stately ocean liner of print? How could you convince anyone that this should be a priority when no one had yet worked out how to make any money out of it? The change, and therefore the threat, was likely to happen rapidly and maybe violently. How quickly could we make a start? Or was this something that would be done to us?

In a note for Peter Preston on our return I wrote, “The internet is fascinating, intoxicating . . . it is also crowded out with bores, nutters, fanatics and middle managers from Minnesota who want the world to see their home page and CV. It’s a cacophony, a jungle. There’s too much information out there. We’re all overloaded. You want someone you trust to fillet it, edit it and make sense of it for you. That’s what we do. It’s an opportunity.”

Looking back from the safe distance of nearly 25 years, it’s easy to mock the fumbling, wildly wrong predictions about where this new beast was going to take the news industry.

I spent the next year trying to learn more, and then the calendar clicked on to 1995 — The Year the Future Began, at least according to the cultural historian W. Joseph Campbell, who used the phrase as the title of a book twenty years later. It was the year Amazon.com, eBay, Craigslist, and Match.com established their presence online. Microsoft spent $300 million launching Windows 95 amid weeks of marketing hype, paying millions for the rights to the Rolling Stones hit “Start Me Up,” which became the launch anthem.

Cyberspace — as the cyber dystopian Evgeny Morozov recalled, looking back on that period — felt like space itself. “The idea of exploring cyberspace as virgin territory, not yet colonized by governments and corporations, was romantic; that romanticism was even reflected in the names of early browsers (‘Internet Explorer,’ ‘Netscape Navigator’).”

But, as Campbell was to reflect, “no industry in 1995 was as ill-prepared for the digital age, or more inclined to pooh-pooh the disruptive potential of the Internet and World Wide Web, than the news business.” It suffered from what he called “innovation blindness” — “an inability, or a disinclination to anticipate and understand the consequences of new media technology.”

1995 was, then, the year the future began. It happened also to be the year in which I became editor of the Guardian.

* * *

I was 41 and had not, until very recently, really imagined this turn of events. My journalism career took a traditional enough path. A few years reporting; four years writing a daily diary column; a stint as a feature writer — home and abroad. In 1986 I left the Guardian to be the Observer’s television critic. When I rejoined the Guardian I was diverted towards a route of editing — launching the paper’s Saturday magazine followed by a daily tabloid features section and moving to be deputy editor in 1993. Peter Preston — unshowy, grittily obstinate, brilliantly strategic — looked as if he would carry on editing for years to come. It was a complete surprise when he took me to the basement of the resolutely unfashionable Italian restaurant in Clerkenwell he favored, to tell me he had decided to call it a day.

On most papers the proprietor or chief executive would find an editor and take him or her out to lunch to do the deal. On the Guardian — at least according to tradition dating back to the mid-70s — the Scott Trust made the decision after balloting the staff, a process that involved manifestos, pub hustings, and even, by some candidates, a little frowned-on campaigning.

I supposed I should run for the job. My mission statement said I wanted to boost investigative reporting and get serious about digital. It was, I fear, a bit Utopian. I doubt much of it impressed the would-be electorate. British journalists are programmed to skepticism about idealistic statements concerning their trade. Nevertheless, I won the popular vote and was confirmed by the Scott Trust after an interview in which I failed to impress at least one Trustee with my sketchy knowledge of European politics. We all went off for a drink in the pub round the back of the office. A month later I was editing.

“Fleet Street,” as the UK press was collectively called, was having a torrid time, not least because the biggest beast in the jungle, Rupert Murdoch, had launched a prolonged price war that was playing havoc with the economics of publishing. His pockets were so deep he could afford to slash the price of The Times almost indefinitely — especially if it forced others out of business.

Reach before revenue — as it wasn’t known then.

The newest kid on the block, the Independent, was suffering the most. To their eyes, Murdoch was behaving in a predatory way. We calculated the Independent titles were losing around £42 million (nearly £80 million in today’s money). Murdoch’s Times, by contrast, had seen its sales rocket 80 per cent by cutting its cover prices to below what it cost to print and distribute. The circulation gains had come at a cost — about £38 million in lost sales revenue. But Murdoch’s TV business, BSkyB, was making booming profits and the Sun continued to throw off huge amounts of cash. He could be patient.

But how on earth could you graft a digital mindset and processes onto the stately ocean liner of print?

The Telegraph had been hit hard — losing £45 million in circulation revenues through cutting the cover price by 18 pence. The end of the price war left it slowly clawing back lost momentum, but it was still £23 million adrift of where it had been the previous year. Murdoch — as so often — had done something bold and aggressive. Good for him, not so good for the rest of us. Everyone was tightening their belts in different ways. The Independent effectively gave up on Scotland. The Guardian saved a million a year in newsprint costs by shaving half an inch off the width of the paper.

The Guardian, by not getting into the price war, had “saved” around £37 million it would otherwise have lost. But its circulation had been dented by about 10,000 readers a day. Moreover, the average age of the Guardian reader was 43 — something that pre-occupied us rather a lot. We were in danger of having a readership too old for the job advertisements we carried.

Though the Guardian itself was profitable, the newspaper division was losing nearly £12 million (north of £21 million today). The losses were mainly due to the sister Sunday title, the Observer, which the Scott Trust had purchased as a defensive move against the Independent in 1993. The Sunday title had a distinguished history, but was hemorrhaging cash: £11 million losses.

Everything we had seen in America had to be put on hold for a while. The commercial side of the business never stopped reminding us that only three percent of households owned a PC and a modem.

* * *

But the digital germ was there. My love of gadgets had not extended to understanding how computers actually worked, so I commissioned a colleague to write a report telling me, in language I could understand, how our computers measured up against what the future would demand. The Atex system we had installed in 1987 gave everyone a dumb terminal on their desk — little more than a basic word processor. It couldn’t connect to the internet, though there was a rudimentary internal messaging system. There was no word count or spellchecker and storage space was limited. It could not be used with floppy disks or CD-ROMs. Within eight years of purchase it was already a dinosaur.

There was one internet connection in the newsroom, though most reporters were unaware of it. It was rumored that downstairs a bloke called Paul in IT had a Mac connected to the internet through a dial-up modem. Otherwise we were sealed off from the outside world.

A few journalist geeks began to invent Heath Robinson solutions to make the inadequate kit in Farringdon Road do the things we wanted so that we could publish a technology website. Tom Standage — he later became deputy editor of the Economist, but was then a freelance tech writer — wrote some scripts to take articles out of Atex and format them into HTML so they could be moved onto the modest Mac web server — our first content management system, if you like. If too many people tried to read the tech site at once, the system crashed. So Standage and the site’s editor, Azeem Azhar, would take it in turns sitting in the server room in the basement of the building, rebooting the machines by hand — unplugging them and physically moving the internet cables from one machine to another.

What would the future look like? We imagined personalized editions, even if we had not the faintest clue how to produce them. We guessed that readers might print off copies of the Guardian in their homes — and even toyed with the idea of buying every reader a printer. There were glimmers of financial hope. Our readers were spending £56 million a year buying the Guardian but we retained none of it: the money went on paper and distribution. In the back of our minds we ran calculations about how the economics of newspapers would change if we could save ourselves the £56 million a year “old world” cost.

By March 1996, ideas we’d hatched in the summer of 1995 to graft the paper onto an entirely different medium were already out of date. That was a harbinger of the future.

On top of editing, the legal entanglements sometimes felt like a full-time job on their own. Trying to engineer a digital future for the Guardian felt like a third job. There were somehow always more urgent issues. By March 1996, ideas we’d hatched in the summer of 1995 to graft the paper onto an entirely different medium were already out of date. That was a harbinger of the future. No plans in the new world lasted very long.

It was now apparent that we couldn’t get away with publishing selective parts of the Guardian online. Other newspapers had shot that fox by pushing out everything. We were learning about the connectedness of the web — and the IT team tentatively suggested that we might use some “offsite links” to other versions of the same story to save ourselves the need to write our own version of everything. This later became the mantra of the City University of New York (CUNY) digital guru Jeff Jarvis — “Do what you do best, and link to the rest.”

We began to grapple with numerous basic questions about the new waters into which we were gingerly dipping our toes.

Important question: Should we charge?

The Times and the Telegraph were both free online. A March 1996 memo from Bill Thompson, a developer who had joined the Guardian from Pipex, ruled it out:

I do not believe the UK internet community would pay to read an online edition of a UK newspaper. They may pay to look at an archive, but I would not support any attempt to make the Guardian a subscription service online . . . It would take us down a dangerous path.

In fact, I believe that the real value from an online edition will come from the increased contact it brings with our readers: online newspapers can track their readership in a way that print products never can, and the online reader can be a valuable commodity in their own right, even if they pay nothing for the privilege.

Thompson was prescient about how the overall digital economy would work — at least for players with infinitely larger scale and vastly more sophisticated technology.

What time of day should we publish?

The electronic Telegraph was published at 8 a.m. each day — mainly because of its print production methods. The Times, more automated, was available as soon as the presses started rolling. The Guardian started making some copy available from first edition through to the early hours. It would, we were advised, be fraught with difficulties to publish stories at the same time they were ready for the press.

Why were we doing it anyway?

Thompson saw the danger of cannibalization: readers might stop buying the paper if they could read it for free online. Then again, the online edition could be seen as a form of marketing. His memo seemed ambivalent as to whether we should venture into this new world at all:

The Guardian excels in presenting information in an attractive, easy to use and easy to navigate form. It is called a “broadsheet newspaper.” If we try to put the newspaper on-line (as the Times has done) then we will just end up using a new medium to do badly what an old medium does well. The key question is whether to make the Guardian a website, with all that entails in terms of production, links, structure, navigational aids etc. In summer 1995 we decided that we would not do this.

But was that still right a year later? By now we had the innovation team — PDU — still in the basement of one building in Farringdon Road, and another team in a Victorian loft building across the way in Ray Street. We were, at the margins, beginning to pick up some interesting fringe figures who knew something about computers, if not journalism. But none of this was yet pulling together into a coherent picture of what a digital Guardian might look like.

An 89-page business plan drawn up in October 1996 made it plain where the priorities lay: print.

We wanted to keep growing the Guardian circulation — aiming for a modest increase to 415,000 by March 2000, which would make us the ninth-biggest paper in the UK — with the Observer aiming for 560,000 with the aid of additional sections. A modest investment of £200,000 a year in digital was dwarfed by an additional £6 million cash injection into the Observer, spread over three years.

As for “on-line services” (we were still hyphenating it) we did want “a leading-edge presence” (whatever that meant), but essentially we thought we had to be there because we had to be there. By being there we would learn and innovate and — surely? — there were bound to be commercial opportunities along the road. It wasn’t clear what.

We decided we might usefully take broadcasting, rather than print, as a model — emulating its “immediacy, movement, searchability and layering.”

If this sounded as if we were a bit at sea, we were. We hadn’t published much digitally to this point. We had taken half a dozen meaty issues — including parliamentary sleaze, and a feature on how we had continued to publish on the night our printing presses had been blown up by the IRA — and turned them into special reports.

It is a tribute to our commercial colleagues that they managed to pull in the thick end of half a million pounds to build these websites. Other companies’ marketing directors were presumably like ours — anxious about the youth market and keen for their brands to feel “cool.” In corporate Britain in 1996, there was nothing much cooler than the internet, even if not many people had it, knew where to find it or understood what to do with it.

* * *

The absence of a controlling owner meant we could run the Guardian in a slightly different way from some papers. Each day began with a morning conference open to anyone on the staff. In the old Farringdon Road office, it was held around two long narrow tables in the editor’s office — perhaps 30 or 40 people sitting or standing. When we moved to our new offices at Kings Place, near Kings Cross in North London, we created a room that was, at least theoretically, less hierarchical: a horseshoe of low yellow sofas with a further row of stools at the back. In this room would assemble a group of journalists, tech developers and some visitors from the commercial departments every morning at about 10 a.m. If it was a quiet news day we might expect 30 or so. On big news days, or with an invited guest, we could host anything up to 100.

A former Daily Mail journalist, attending his first morning conference, muttered to a colleague in the newsroom that it was like Start the Week — a Monday morning BBC radio discussion program. All talk and no instructions. In a way, he was right: It was difficult, in conventional financial or efficiency terms, to justify 50 to 60 employees stopping work to gather together each morning for anything between 25 and 50 minutes. No stories were written during this period, no content generated.

But something else happened at these daily gatherings. Ideas emerged and were kicked around. Commissioning editors would pounce on contributors and ask them to write the thing they’d just voiced. The editorial line of the paper was heavily influenced, and sometimes changed, by the arguments we had. The youngest member of staff would be in the same room as the oldest: They would be part of a common discussion around news. By a form of accretion and osmosis an idea of the Guardian was jointly nourished, shared, handed down, and crafted day by day.

You might love the Guardian or despise it, but it had a definite sense of what it believed in and what its journalism was.

It led to a very strong culture. You might love the Guardian or despise it, but it had a definite sense of what it believed in and what its journalism was. It could sometimes feel an intimidating meeting — even for, or especially for, the editor. The culture was intended to be one of challenge: If we’d made a wrong decision, or slipped up factually or tonally, someone would speak up and demand an answer. But challenge was different from blame: It was not a meeting for dressing downs or bollockings. If someone had made an error the previous day we’d have a post-mortem or unpleasant conversation outside the room. We’d encourage people to want to contribute to this forum, not make them fear disapproval or denunciation.

There was a downside to this. It could, and sometimes did, lead to a form of group-think. However herbivorous the culture we tried to nurture, I was conscious of some staff members who felt awkward about expressing views outside what we hoped was a fairly broad consensus. But, more often, there would be a good discussion on two or three of the main issues of the day. We encouraged specialists or outside visitors to come in and discuss breaking stories. Leader writers could gauge the temperature of the paper before penning an editorial. And, from time to time, there would be the opposite of consensus: Individuals, factions, or groups would come and demand we change our line on Russia, bombing in Bosnia, intervention in Syria, Israel, blood sports, or the Labour leadership.

The point was this: the Guardian was not one editor’s plaything or megaphone. It emerged from a common conversation — and was open to internal challenge when editorial staff felt uneasy about aspects of our journalism or culture.

* * *

Within two years — slightly uncomfortable at the power I had acquired as editor — I gave some away. I wanted to make correction a natural part of the journalistic process, not a bitterly contested post-publication battleground designed to be as difficult as possible.

We created a new role on the Guardian: a readers’ editor. He or she would be the first port of call for anyone wanting to complain about anything we did or wrote. The readers’ editor would have daily space in the paper — off-limits to the editor — to correct or clarify anything and would also have a weekly column to raise broader issues of concern. It was written into the job description that the editor could not interfere. And the readers’ editor was given the security that he/she could not be removed by the editor, only by the Scott Trust.

On most papers editors had sat in judgment on themselves. They commissioned pieces, edited and published them — and then were supposed neutrally to assess whether their coverage had, in fact, been truthful, fair, and accurate. An editor might ask a colleague — usually a managing editor — to handle a complaint, but he/she was in charge from beginning to end. It was an autocracy. That mattered even more in an age when some journalism was moving away from mere reportage and observation to something closer to advocacy or, in some cases, outright pursuit.

Allowing even a few inches of your own newspaper to be beyond your direct command meant that your own judgments, actions, ethical standards and editorial decisions could be held up to scrutiny beyond your control. That, over time, was bound to change your journalism. Sunlight is the best disinfectant: that was the journalist-as-hero story we told about what we do. So why wouldn’t a bit of sunlight be good for us, too?

The first readers’ editor was Ian Mayes, a former arts and obituaries editor then in his late 50s. We felt the first person in the role needed to have been a journalist — and one who would command instant respect from a newsroom which otherwise might be somewhat resistant to having their work publicly critiqued or rebutted. There were tensions and some resentment, but Ian’s experience, fairness and flashes of humor eventually won most people round.

One or two of his early corrections convinced staff and readers alike that he had a light touch about the fallibility of journalists:

In our interview with Sir Jack Hayward, the chairman of Wolverhampton Wanderers, page 20, Sport, yesterday, we mistakenly attributed to him the following comment: “Our team was the worst in the First Division and I’m sure it’ll be the worst in the Premier League.” Sir Jack had just declined the offer of a hot drink. What he actually said was: “Our tea was the worst in the First Division and I’m sure it’ll be the worst in the Premier League.” Profuse apologies.

In an article about the adverse health effects of certain kinds of clothing, pages 8 and 9, G2, August 5, we omitted a decimal point when quoting a doctor on the optimum temperature of testicles. They should be 2.2 degrees Celsius below core body temperature, not 22 degrees lower.

But in his columns he was capable of asking tough questions about our editorial decisions —  often prompted by readers who had been unsettled by something we had done. Why had we used a shocking picture which included a corpse? Were we careful enough in our language around mental health or disability? Why so much bad language in the Guardian? Were we balanced in our views of the Kosovo conflict? Why were Guardian journalists so innumerate? Were we right to link to controversial websites?

In most cases Mayes didn’t come down on one side or another. He would often take readers’ concerns to the journalist involved and question them — sometimes doggedly — about their reasoning. We learned more about our readers through these interactions; and we hoped that Mayes’s writings, candidly explaining the workings of a newsroom, helped readers better understand our thinking and processes.

It was, I felt, good for us to be challenged in this way. Mayes was invaluable in helping devise systems for the “proper” way to correct the record. A world in which — to coin a phrase —  you were “never wrong for long” posed the question of whether you went in for what Mayes termed “invisible mending.” Some news organizations would quietly amend whatever it was that they had published in error, no questions asked. Mayes felt differently: The act of publication was something on the record. If you wished to correct the record, the correction should be visible.

But we had some inkling that the iron grip of centralized control that a newspaper represented was not going to last.

We were some years off the advent of social media, in which any error was likely to be pounced on in a thousand hostile tweets. But we had some inkling that the iron grip of centralized control that a newspaper represented was not going to last.

I found liberation in having created this new role. There are few things editors enjoy less than the furious early morning phone call or email from the irate subject of their journalism. Either the complainant is wrong, in which case time is wasted in heated self-justification, or they’re right, wholly or partially. Immediately you’re into remorseful calculations about saving face. If readers knew we honestly and rapidly — even immediately — owned up to our mistakes they should, in theory, trust us more. That was the David Broder theory, and I bought it. Readers certainly made full use of the readers’ editor’s existence. Within five years Mayes was dealing with around 10,000 calls, emails, and letters a year — leading to around 1,200 corrections, big and small. It’s not, I think, that we were any more error-prone than other papers. But if you win a reputation for openness, you’d better be ready to take it as seriously as your readers will.

Our journalism became better. If, as a journalist, you know there are a million sleuth-eyed editors out there waiting to leap on your tiniest mistake, it makes you more careful. It changes the tone of your writing. Our readers often know more than we do. That became a mantra of the new world, coined by the blogger and academic Dan Gillmor in his 2004 book We the Media, but it was already becoming evident in the late 1990s.

The act of creating a readers’ editor felt like a profound recognition of the changing nature of what we were engaged in. Journalism was not an infallible method guaranteed to result in something we would proclaim as The Truth — but a more flawed, tentative, iterative and interactive way of getting towards something truthful.

Admitting that felt both revolutionary and releasing.

* * *

Excerpted from Breaking News: The Remaking of Journalism and Why It Matters Now by Alan Rusbridger. Published by Farrar, Straus and Giroux, November 27, 2018. Copyright © 2018 by Alan Rusbridger. All rights reserved.

Longreads Editor: Aaron Gilbreath

How the Shock Jock Became the Outrage Jock

Ben Hider / Invision / AP, Jeff Chiu / AP, Charles Dharapak / AP

Soraya Roberts | Longreads | March 2019 | 8 minutes (2,111 words)

In the past, the bow tie seemed to hold him together, kind of. Tucker Carlson had always been as red-faced and obstreperous as so many other conservative pundits, but he had never been known to be “cunty” or “faggot”-level offensive. Still, it wasn’t much of a shock earlier this week when progressive watchdog Media Matters unearthed him spouting slurs like that — a couple of racist remarks rounded out the misogyny and homophobia — during a series of appearances on Bubba the Love Sponge Clem’s radio show between 2006 and 2011. From Monday to Tuesday, after the first recordings surfaced, Tucker Carlson Tonight hemorrhaged almost half its advertisers.

That bow tie had been a flourish of propriety: a strip of cloth separating him from a loudmouth like Howard Stern, the “shock jock” who looks and acts like a dollar store rock star, grabbing his crotch for whoever will listen. But he dropped it the year he appeared on that radio show. It was Stern who hired Bubba the Love Sponge Clem (yes, that’s his legal name) in the mid-2000s to host a show on his second satellite radio channel, and it was on that show that Carlson crossed the line. That was where the shock jock and the political commentator proved that they were one and the same — the former played off conservatism, the latter played it up, but both relied on its foundation. “Well, you’re talking about God and illegals,” Carlson told Clem. “I thought we were just going to be talking about blow jobs.”

But what’s the difference, really? Blow jobs were once used for shock value. Now it’s “illegals.” The punch line being that neither one of them is transgressive in the end.

* * *

No one used the words shock jock for Joe Pyne, the host of It’s Your Nickel (that’s a reference to pay phones, kids, and I’m including myself here) who pioneered in-your-face talk radio in the ’50s and went on to create TV’s The Joe Pyne Show, which sometimes devolved into actual physical altercations between him and his guests. No one really knew what to make of him. His unconventional style — dressed-up to dress down “pinkos” and “women’s libbers” and riff on, rather than read, reports — was neither news nor entertainment. It seemed to be best described (well, The New York Times and Time both did anyway) as an “electronic peepshow.” The personality-free press of the time considered Walter Cronkite the most trusted man in America and Johnny Carson the funniest, but Pyne, with his syndicated show on more than 200 radio outlets, was the most Machiavellian. “When it comes to manipulating media,” Icons of Talk author Donna Halper told Smithsonian Magazine, “he was the father of them all.”

Pyne briefly descended from his soapbox in the mid-’60s — for a week’s “vacation” — after bringing a gun to his show during the Watts riots, suggesting the world wasn’t quite yet ready for his kind of conservative appeal. It took until the mid-’80s, when the FCC was no longer so hard-assed and political correctness was all the rage, for Howard Stern to turn the shock jock into a thing. The idea was that PC America was muting real America, and personalities like his were there to liberate our ids … usually on the way to work. “They were pushing the limits of what you could hear on the public airwaves,” TALKERS Magazine publisher Michael Harrison told Thrillist of mavericks like Pyne and Don Imus, who set the stage for Stern. “That was the key to the whole thing: that it was on the ‘sacred public airwaves.’”

Full disclosure: I have always hated Howard Stern. His banality offends me: “The closest I came to making love to a black woman was I masturbated to a picture of Aunt Jemima on a pancake box” — that’s the kind of joke he makes. It’s the sort of quip that leaves a dumb bro stuck in 1992 in stitches. To be offensive your words have to have power, and his … don’t. He swears a lot and cajoles his guests into talking about fucking and snorting and it’s all very Free Speech, Motherfuckers! He can be sexist and racist and classist, because, hey! He’s sexist about men too! He’s racist to everyone! He drags every class!

Sorry, I just fell asleep.

The rebellion is a pose, because at the heart of Stern and all the other shock jocks is conservatism — 2.1 kids, strong moral fiber. They can joke about fucking and inhaling, because they ostensibly aren’t doing either. So what positions itself against PC America, in fact, at its core, feeds into it — the conservatism is the rebellion. Knowing that, you can see how Don Imus calling the members of Rutgers’ women’s basketball team “nappy-headed hos” can happen as late as 2007 on his radio show Imus in the Morning (he was fired by CBS and NBC, then hired by ABC). As David Remnick wrote in The New Yorker 10 years before Imus’s offense, personalities like Stern and Mancow Muller and Opie and Anthony appeal to the “audience that feels put upon by a new set of rules — sexual harassment guidelines, the taboo against certain kinds of speech — and wants release, if only in the privacy of the drive to work.”

The audience meaning white heterosexual men. The shock jock industry itself is predominantly white men (Stern’s foil, Robin Quivers, is a black woman, but she has never been the star attraction). Which is not to say that women can’t be as “offensive,” it’s just that the people in charge of hiring them would prefer them to be barefoot and pregnant. There are shockingly few exceptions. Wendy Williams, who rode the wave of ’90s hip-hop and shamelessly confronted celebrities like Whitney Houston with tabloid gossip (she also had a bad habit of trying to out rappers) was christened by New York magazine in 2005 as the “shock jockette.” She was “the black Howard Stern” right down to the middle-class moralism. Other than Williams, the female media personalities who cause offense — Ann Coulter, Laura Ingraham — tend toward conservative commentary, presumably because the men on the top floor think they will be less likely to break a nail in those environs. “The complaints of Western feminists look like petty self-absorption when you line them up against human rights abuses in Third World military dictatorships,” is a thing Ingraham came up with — a misogynistic comment cloaked in doublespeak.

This genre of radio personality was dubbed by my colleague Ethan Chiel as the “outrage jock,” the political version of a culture and entertainment-aligned predecessor, who arose in the late 1980s after the FCC regulations on political talk became less clear. This is where a bow tie comes in handy. The outrage jocks market themselves as transgressive, but instead of fighting conservative America, they uphold it, a stance they brand subversive in a sea of progressive liberal media. Rush Limbaugh, who has the most popular talk radio show in America — 15.5 million listeners, according to Talk Magazine — was dubbed by National Review as the “Leader of the Opposition” back in the ’90s. “Rush took radio at a time when the norm was basically NPR. He comes into that church and blows it up,” radio host John Ziegler told The Washington Post in 2015. “Our presidential politics have become a kind of church. The media says, ‘You’re not allowed to say this, or this, or that, because we’re in church.’ People are sick of that.”

So: Stern 2.0, except instead of shouting about pussy, Limbaugh — not to mention Glenn Beck and Michael Savage — shouts about policy. You may remember him calling women’s rights activist Sandra Fluke a “slut” in 2012 for advocating for contraceptive insurance coverage. “She’s having so much sex she can’t afford the contraception,” said the man who has been married four times. “She wants you and me and the taxpayers to pay her to have sex. What does that make us? We’re the pimps.”

Limbaugh needs a brushup on his sex work nomenclature, among other things. But if you want to talk about pimp: Janet Jackson’s nipple ultimately killed the shock jock. In case you aren’t old, it happened during a performance of “Rock Your Body” at the Super Bowl XXXVIII halftime show in 2004, when Justin Timberlake tore off the right cup of Jackson’s bustier, exposing her breast. (Per Jackson, the red bra underneath the rubber was supposed to stay behind, but came away accidentally.) In response, more than 500,000 complaints, all of them from people presumably with nipples of their own, were reportedly lodged with the FCC. President Bush responded two years later by signing the Broadcast Decency Enforcement Act, which raised the penalty for broadcasting “indecency” tenfold. With that, Howard Stern fucked off to satellite radio and the rest of the shock jocks kind of followed suit. Tucker Carlson was what was left behind.

* * *

“Does she have a good body? No. Does she have a fat ass? Absolutely.” Tucker Carlson did not say that. That was Donald Trump in 2013 talking to Howard Stern about a pregnant Kim Kardashian in a radio show appearance that reemerged during his election campaign. On the same show, across almost two decades, the future president also agreed that his daughter was “a piece of ass” and dismissed flat-chested women and women over 35 (thank God). For all his work to divide the nation, Trump had a big hand in bringing shock and outrage jocks together, dissolving any sort of wall (!) between them. “If the political class is appalled by the notion that anything from the morass of ’90s shock-jock radio could become part of a presidential race,” wrote Virginia Heffernan in Politico in 2016, “it may be just as surprising to Stern’s fans, who proudly embraced the outsider-ness of a guy who couldn’t seem further from inside-the-Beltway political chatter.” TALKERS’s Harrison has called Trump “the first shock-politician.”

By the time Trump entered politics, shock jocks were no longer defining the culture and conservative commentators were filling the vacuum. They entered the mainstream on networks like Fox and the intellectual dark web via Ben Shapiro and Jordan Peterson and Dave Rubin. “The shock jocks weren’t defeated,” wrote Dan Jackson at Thrillist. “They went viral.” This is where Tucker Carlson fits in. He called his resurfaced xenophobic, misogynistic, and homophobic comments from Bubba the Love Sponge’s show (he described women as “extremely primitive,” supported child rapist Warren Jeffs, and compared the behavior of Muslims to animals) “naughty,” then equated contrition with betrayal. “We’ve always apologized when we’re wrong and will continue to do that,” he said on Tucker Carlson Tonight Monday. “That’s what decent people do; they apologize. But we will never bow to the mob.”

Almost 70 years after the first shock jock hit the air, Carlson was toeing the same party line as his predecessors. “They claim that they’re just entertainers and yet they deliver this toxic mix of pseudo journalism, misinformation, hate-filled speech, jokes,” Rory O’Connor, author of Shock Jocks: Hate Speech & Talk Radio, told The Guardian in 2009. “It’s all bound together so when it’s convenient for them to be entertainers they say, hey, it’s all just a joke. But when it’s not, they say they’re giving you information that you need.” Carlson’s comments were only shocking because they veered so sharply away from Beltway politics; with his regressive approach no longer couched in policy, they revealed him for the person he is. And even though advertisers have pulled out of his program, the notion that he could disappear like Stern is one from another time — conservatism is the status quo and there’s always room for it now, particularly when it masquerades as information rather than entertainment.

After Megyn Kelly left Fox, Tucker Carlson took her spot, and if Carlson were removed, a new version of him would sprout in his place. This whack-a-mole quality to outrage jocks extends, more troublingly, to their politics — if they are not outraged about one thing, they will immediately find another. They are as adaptive as comedians like Stern, use facts as props to play journalists like Cronkite, and influence voting and policy just as seriously. As Jon Stewart scolded Carlson and his cohost in 2004 on the CNN show Crossfire: “You’re doing theater, when you should be doing debate.” And without the FCC to shut them down for good, or at least out them as entertainers, the only hope is that their audience will realize that the most transgressive thing to do is to stop listening.

* * *

Soraya Roberts is a culture columnist at Longreads.

America’s Post-Frontier Hangover

'American Progress' (1872), by John Gast, depicts settlers moving west, guided and protected by a goddess-like figure and aided by technology (railways, telegraphs), driving Native Americans and bison into obscurity. (Fotosearch / Stringer/Getty)

Will Meyer | Longreads | March 2019 | 17 minutes (4,498 words)

In the small New England town where I live, Hadley, Massachusetts, the common lies a few miles from the mishmash of corporate chains that make up the town’s economic center. A quiet residential neighborhood surrounds the common. It is a grassy patch, left vacant most of the year, save for occasional festivals and craft fairs; open space to be utilized as needed, hardly disturbed otherwise. Adjacent to the college towns of Northampton and Amherst, not much happens in Hadley. I go for walks around my neighborhood most days and seldom run into many people. The common feels like an oasis, a fleeting yet contained sliver of vastness.

In 1995, the Hadley Historical Commission installed a plaque on the side of a rock, near the end of the common, between where it meets the main road and a paved rail trail. The plaque commemorates the “17th Century Palisade,” a wall that was “3 fingers thick and 8 feet high” in 1676, 100 years before the American Revolution. The “fortification,” the plaque states, “was one mile long by 40 rods wide.” Most saliently, however, “Hadley was then a frontier outpost which felt threatened by Native American attack.” In other words, the settlers built a wall (around the corner from where I live now) both to assert their settlement and ward off perceived threats — namely the brown-skinned Other the United States was founded, at least partially, to pacify and remove. Read more…

Deciphering the Language of the Body in China

AP Photo/Ng Han Guan

While living in China, English journalist Poppy Sebag-Montefiore experienced the way strangers touched each other in various situations — on the train, in the market, standing in line. “Touch,” Sebag-Montefiore writes in her essay at Granta, “had its own language, and the rules were the opposite of the ones I knew at home.” She recounts her fascination with all of this touch, and how she set about understanding the way it works and where it came from, before the country’s rapid modernization irreparably changed it. All this physical intimacy offered, in her words, “a direct hit of the love, energy and camaraderie that you get from friendship,” but she also wondered if it had a dark side.

Touch in public, among strangers, had a whole range of tones that were neither sexual nor violent. But it wasn’t neutral either. At times, yes, you’d be leaned on indiscriminately because of lack of space, or to help take some weight off someone’s feet. Yet other times you’d choose people you wanted to cling on to, or you’d be chosen. You’d get a sense of someone while haggling over the price of their garlic bulbs and you’d just grab on to each other’s forearms as you spoke or before you went on your way. Touch was a precise tool for communication, to express your appreciation for someone’s way of being, the brightness in their eyes as they smiled, their straightforwardness in a negotiation, a kindness they’d shown.

I felt buoyed and buffeted by this touch. I sometimes felt like I was bouncing or bounding from one person to the next like a pinball, pushed and levered around the city from arm to arm. If the state was like an overly strict patriarch, then the nation, society or the people on the streets were the becalming matriarch. This way of handling each other felt like a gentle, restorative cradle at times. At other times all the hands on you could be another kind of oppressive smothering. But usually touch was like a lubricant that eased the day-to-day goings-on and interactions in the city, and made people feel at home.

I wanted to document this unselfconscious touch. To keep hold of it. I could tell that this ease between the bodies of strangers might not survive rapid urbanisation. This touch was so visual, so visible. I freed my camera from the head-and-shoulders interview shot and took it out to the streets.

Read the story

What to Read After ‘Leaving Neverland’

Washington, DC, 5-14-1984: Michael Jackson with President Ronald Reagan and First Lady Nancy Reagan at a ceremony on the South Lawn of the White House, where the President presented “The King of Pop” with the Presidential Public Safety Communication Award for allowing the song “Beat It” to be used in a public service campaign against teen drinking and driving. Credit: Mark Reinstein (Photo by Mark Reinstein/Corbis via Getty Images)

What struck me about Leaving Neverland, the harrowing, two-part, four-hour HBO documentary about Wade Robson and Jimmy Safechuck’s sexual abuse allegations against superstar performer Michael Jackson, is the mechanical similarity of the men’s stories. Almost play-by-play, their accounts of what happened, how they, along with their families, became dazzled and then ensnared in Jackson’s web, hauntingly mirror each other. I noticed the same thing while watching both Surviving R. Kelly and Kidnapped in Plain Sight — predatory techniques to woo most often follow a similarly uncreative, toxic formula. During Oprah’s follow-up interview special, Leaving Neverland director Dan Reed called the film a deep look into “what grooming child sexual abuse looks like.”

Unique to Robson’s and Safechuck’s dilemma is the sheer magnitude of their accused perpetrator’s fame. As Robson said to Oprah, “the grooming started long before we ever met him.” Michael Jackson entered the national spotlight as lead singer of the Jackson 5 in 1969. Thriller, from 1982, remains the second best selling album of all time in the US. After 50 years in entertainment, the reach and influence of Jackson’s music cannot be overstated: it is difficult to listen to any pop radio and not hear him in its melodies or harmonies, to watch any pop star dance and not see his movement in the shadows.

After a police investigation into allegations brought forth by then 13-year-old Jordan Chandler in 1993, Jackson wasn’t formally charged, and he was acquitted on multiple counts related to child sexual abuse in 2005. In both cases, he settled out of court with his accusers. Before his 2009 death, Jackson denied all allegations of misconduct. His estate and family have issued vehement denials in Leaving Neverland’s wake. Still, no one defending Jackson would go as far as to say he did not behave inappropriately with children: he admitted to some unconscionable behavior himself. Robson’s and Safechuck’s accounts are detailed, credible, and difficult to bear in one sitting. To make sense of the story, and to begin to make sense of how we, the public, fell short, a selection of readings follows, about Jackson, Leaving Neverland, genius, and the toxic cult of celebrity.

1. A Complete Timeline of the Michael Jackson Abuse Allegations. (Kyle McGovern, February 28, 2019, Vulture)

McGovern details every public allegation against Jackson dating back to 1993 — Robson and Safechuck appear and reappear multiple times among many other young men in Jackson’s orbit.

2. ‘Leaving Neverland’ Reveals the Monster We Didn’t Want to See in Michael Jackson. (Niela Orr, March 1, 2019, BuzzFeed)

Orr, a Jackson fan while growing up, says watching Leaving Neverland produced “the shock and pang of betrayal,” and was “a visceral reveal of insidious behavior.” She reckons with Jackson’s duality: the harmless childlike mythos versus his ability to shapeshift into monstrosity.

3. It’s Too Late to Cancel Michael Jackson. (Carl Wilson, February 27, 2019, Slate)

Wilson says Jackson “was to modern popular music and dance what Dickens was to the Victorian novel” and ponders whether he is “too big to cancel.”

4. Michael Jackson Cast a Spell, ‘Leaving Neverland’ Breaks It. (Wesley Morris, February 27, 2019, New York Times)

I’ve stared at a lingering shot of a photograph of Jackson, who would have been around 30 and Safechuck who was about 9 or 10, and Jackson is beaming in sunglasses and a military jacket, flashing a peace sign, and James, in a too-big baseball cap, is turning to the camera, looking alarmingly ruminative for someone whose life should be rumination-free.

5. He’s Out of My Life: Letting Go of Michael Jackson. (Kierna Mayo, March 6, 2019, Afropunk)

Eye-spying racism should never be the reason we don’t call a predator by his name.

Mayo reckons with the denial and protectionism offered to Jackson and his memory by some in the black community.

6. ‘Leaving Neverland’ Asks an Uncomfortable Question: How Culpable Are the Parents? (EJ Dickson, March 4, 2019, Rolling Stone)

Some have interpreted Leaving Neverland and Abducted thusly, arguing that the parents of Jackson’s victims are just as culpable as Jackson in perpetuating the abuse. And to a degree, Robson and Safechuck seem to share that view: as Safechuck says, he has never fully forgiven his mother for allowing the abuse to continue. “Forgiveness is not a line you cross, it’s a road you take,” he said at the Sundance Festival earlier this year.

Yet Leaving Neverland and Abducted can be seen less as indictments of bad parenting than as a condemnation of the cultural mechanisms that allow the individual power of personality to go unchecked. Even though Jackson was a pop superstar hailed as a musical genius, and Berchtold a small-town salesman and Mormon dad of five, both were, by all accounts, men who knew exactly how to wield their charisma as a weapon; both were highly skilled at disarming and seducing adults (in Berchtold’s case, literally) in order to gain access to their children.

Dickson teases out some of the similarities between Leaving Neverland and Netflix’s Abducted in Plain Sight.

7. She Wrote the Book on Michael Jackson. Now She Wishes it Said More. (Anna Silman, March 7, 2019, The Cut)

So if he is guilty — what do we do with the music? What do we do with Michael Jackson?
There are two aspects. One is what kind of restitution is needed. If it’s financial, that’s fine by me, but is that sufficient? I just don’t believe the art should be quote “banned” forever. But if banning, let’s say, R. Kelly’s work for a certain amount of time from the radio, is a way of getting money from his estate, to help give those girls and young women some kind of settlement, that’s absolutely fine with me. I feel the same way about the Jackson estate.

As for what we do with the music — that “we” splits into just millions of people, doesn’t it? There’s no one way to answer that. I got an email from an editor who just said in passing “My God, I’ve loved him all my life. I still do. Would I feel comfortable buying his videos or even his music around my 8 or 9-year-old child? Right now, no.” We’re all sifting through that.

The larger question with every one of these artists is how do we simultaneously keep in our heads and hearts this information and this material and at the same time continue to respond as we feel their art justifies. Those two processes aren’t mutually exclusive at all. And it’s going to keep happening so we need to start finding language and feelings as well as practical, legal ways of coping with it.

The Cut speaks to Margo Jefferson, author of On Michael Jackson, two days after she watched Leaving Neverland. 

8. No One Deserves as Much Power as Michael Jackson Had. (Craig Jenkins, March 1, 2019, Vulture)

It’s hard to explain the relationship between the superstars of the ’80s and their fans to people who weren’t alive or old enough to remember the decade. They were like demigods. They sang about love, peace, politics, and matters of planetary significance. Their art paused time and advanced culture. Their shows incited hysterics. It all seems religious in retrospect. Belief was the core of the bond, belief that these figures acted in the interest of bettering the world no matter the cost, belief that people who do good are good. Their methods and their presentation were questioned, but the idea that pop stars were out to save the world was quite often taken at face value. This was not wise. We didn’t know any better.

More on the art and crimes of dangerous men:

Shelved: Brian Wilson’s Adult/Child

Getty Images

Tom Maxwell | Longreads | March 2019 | 18 minutes (3,519 words)

 

One day in 1976, Brian Wilson sat down at the piano in his Los Angeles home, turned on a tape recorder, and began to play. There’s a density to the introductory chords, like the air of an approaching storm. “Time for supper now,” he sings on the demo recording, the first verse so banal as to be almost exotic.

Day’s been hard and I’m so tired

I feel like eating now

Smell the kitchen now

Hear the maid whistle a tune

My thoughts are fleeting now

“Still I dream of it,” Wilson continues, his gutted voice not quite hitting the high note, “of that happy day when I can say I’ve fallen in love. And it haunts me so, like a dream that’s somehow linked to all the stars above.”

The extraordinary chord progression, intricate melody, and anguished bridge all demonstrate “Still I Dream of It” to be a song written by a master songsmith, although one in decline. The confident tenor and soaring falsetto of Wilson’s youth are gone, and yet the song is somehow better for the ragged vulnerability. If you know about the life of the man leading up to this moment, the poignancy of this performance is almost unbearable.

“Still I Dream of It” was intended for inclusion on Adult/Child, a Beach Boys album that was immediately shelved upon recording. A bewildering mix of sublime and terrible songs, and a hodgepodge of arrangement approaches from big band to minimoog, Adult/Child is a bookend to the Beach Boys’ famously postponed 1967 opus, Smile. The first project documented a visionary at the height of his musical powers, unmoored by drugs and set adrift by overambition and a general lack of support; the second project is one of the final blows of that artist’s losing battle with his former self. What is most conspicuous about the period in between is Wilson’s absence.

Wilson showed an idiosyncratic musical genius from the start. “Brian took accordion lessons, on one of those little baby accordions, for six weeks,” his mother Audree told Rolling Stone in 1976. “And the teacher said, ‘I don’t think he’s reading. He just hears it once and plays the whole thing through perfectly.’” As a teenager, Wilson learned the complicated harmony parts of the Four Freshmen, teaching them to his younger brothers Carl and Dennis. The three formed a band called the Pendletones with cousin Mike Love and classmate Al Jardine. At Dennis’s suggestion, Wilson wrote songs about surfing and surf culture. Their first single, 1961’s “Surfin’,” and their ensuing demo were popular enough to eventually get the band, now called the Beach Boys, a seven-year contract with Capitol Records.

Their first album, Surfin’ Safari, owed more to Chuck Berry than Dick Dale, whose reverb-soaked aggressive guitar instrumentals defined the surf music form. (“I wrote ‘Surfin’ U.S.A.,” Wilson recently said, “because of [Berry’s] ‘Sweet Little Sixteen.’”) But the Beach Boys would not only go on to redefine surf music, they would fix the idea of Southern California in the national consciousness. Their music mapped this mythic place, fusing elements of early rock ‘n’ roll, rhythm and blues, doo-wop, and Phil Spector’s Wall of Sound. Much of this music originated in New York; Wilson’s early genius was to synthesize these musical elements and make a home for them on the other side of the country.

Beginning in 1963, two things happened in succession to solidify Wilson’s career path. The first was the release of the Ronettes’ “Be My Baby.” Perhaps more than the song itself, what blew Wilson away was producer Phil Spector’s orchestral approach. “That was when I started to design the experience to be a record rather than just a song,” Wilson remembered.

The second momentous event in young Wilson’s life was the British Invasion, which pretty much killed off all other forms of popular music, including surf. To make things worse, the Beach Boys and the Beatles shared an American record label, which turned its attention from the former to the latter. Wilson wrote his last surf song in 1964, although Capitol Records continued to bill the band as “America’s Top Surfin’ Group.” By 1965, Wilson had produced and mostly composed 16 singles and nine albums for the Beach Boys.

Wilson stopped touring in 1965, concentrating on songwriting and producing. After hearing the Beatles’ Rubber Soul, he was inspired to make his own “complete statement.” While the band toured, he worked for months on a project, using the session musicians collectively known as “the Wrecking Crew,” whose all-star players had previously worked with Phil Spector. The resulting album, Pet Sounds, was released in 1966. Paul McCartney described one of its songs, “God Only Knows,” as the best ever written. “If you could just write maybe the bridge to ‘Wouldn’t It Be Nice’ — that would be an accomplishment for most writers for a lifetime,” Al Jardine once reflected about another Pet Sounds track. “Just the bridge.”

Now considered a masterwork, Pet Sounds was not entirely well received by the band or their label. Mike Love, who once called it “Brian’s ego music,” found some of the lyrics “nauseating.” Capitol Records, alarmed at the $70,000 price tag — about $550,000 today — and realizing there weren’t any obvious singles on the record to help them recoup, stopped the recording and considered shelving the album. Wilson showed up at a tense record label meeting with a tape player. Instead of answering the label’s questions, he played recordings of his own voice saying, “That’s a great idea,” “No, let’s not do that,” or “I think we should think about that.” Rather than embracing the band’s new approach, the label put the record out in May 1966, then quickly compiled Best of the Beach Boys, releasing it less than two months later. The best-of easily outsold the new album. Brian Wilson was already in competition with nostalgia for an earlier version of his own band. He was 24.

Meanwhile, John Lennon and Paul McCartney liked Pet Sounds so much they made Beach Boy Bruce Johnston play it for them twice on a trip to London to promote the album. “I played it to John so much that it would be difficult for him to escape the influence,” McCartney said years later. “If records had a director within a band, I sort of directed [Sergeant Pepper]. And my influence was basically the Pet Sounds album. John was influenced by it, perhaps not as much as me.” (Wilson remembers Lennon calling him after hearing Pet Sounds and telling him it was “the greatest album ever made.”)

Already on a steady diet of amphetamines, marijuana, and hashish, Wilson began dropping LSD. “At first, my creativity increased more than I could believe,” he told The Guardian in 2011. “On the downside, it fucked my brain.”

Although hurt by the way Pet Sounds was treated, Wilson continued to evolve his production and recording process. Central to this approach was topping his previous effort. The result was one song recorded between February and September 1966 — a song that used more than 90 hours of tape and cost, in Wilson’s estimation, as much as the entire Pet Sounds project: “Good Vibrations.” In addition to arranging for cello, a theremin, and a bass harmonica, Wilson consciously used the recording studio as an instrument.

“‘Good Vibrations’ took six months to make,” Wilson told Rolling Stone. “We recorded the very first part of it at Gold Star Recording Studio, then we took it to a place called Western, then we went to Sunset Sound, then we went to Columbia. … Because we wanted to experiment with combining studio sounds. Every studio has its own marked sound. Using four different studios had a lot to do with the way the final record sounded.

“My mother used to tell me about vibrations,” Wilson continued. “I didn’t really understand too much of what that meant when I was just a boy. It scared me, the word ‘vibrations.’ To think that invisible feelings, invisible vibrations existed, scared me to death. But she told about dogs that would bark at people and then not bark at others, that a dog would pick up vibrations from these people that you can’t see, but you can feel. And the same existed with people. … Because we wanted to explain that concept, plus we wanted to do something that was R&B but had a taste of modern, avant-garde R&B to it. ‘Good Vibrations’ was advanced rhythm and blues music.”

The song, and the ensuing record Smile, was written in pieces. “I had a lot of unfinished ideas, fragments of music I called ‘feels,’” Wilson said of this time. “Each feel represented a mood or an emotion I’d felt, and I planned to fit them together like a mosaic.”

Although “Good Vibrations” topped the charts, Smile was never finished. Even in its incomplete state (a compilation of the dozens of sessions was issued in 2011), the project is monumental. At the time, Wilson said the result was going to be “a teenage symphony to God.” Already suffering from panic attacks, and now hearing voices in his head, Wilson had a nervous breakdown in the middle of the sessions. He began self-medicating with cocaine and heroin, ultimately being diagnosed as schizoaffective with mild manic depression. An almost complete lack of support from the band completed the bleak picture; Smile was abandoned in May 1967. “I had to destroy it before it destroyed me,” Wilson later said.

What followed for Wilson was a period of increasing indulgence and withdrawal. In the coming decade, he turned production duties over to his brother Carl, contributed fewer original songs to the band, and became known as a difficult recluse. He gained weight and increased his abuse of cigarettes and alcohol. The band toured and made records without him.

Wilson became completely withdrawn after the death of his father, Murry, in 1973. Theirs was a complicated, abusive relationship: Murry beat his children (purportedly causing Brian to go deaf in one ear), initially managed the band, and sold off much of his son’s publishing rights in 1969. “The story of my dad is the big can of worms,” Wilson wrote, “because it’s connected to everything else.” Wilson sequestered himself in the chauffeur’s quarters of his mansion and commenced a two-year period of orgiastic self-destruction.

Capitol Records released Endless Summer, another Beach Boys greatest hits compilation, in 1974. It went to Number 1. The Beach Boys, or at least the earlier, sunnier version of them, remained in demand, especially in the dark days of the Watergate era.

By now, Wilson’s reputation as the band’s guiding light had caught up with him. A 1969 contract with Reprise Records stipulated his involvement in every album. Now, without access to much of their former publishing revenue, the band needed a hit. The problem was that, by this time, Wilson was almost incapable of even getting out of bed. His wife and family hired radical therapist and former record PR man Eugene Landy in 1975.

Landy’s regimen was absolute: Wilson was surrounded by bodyguards in his own home who prevented him from doing drugs or overeating. Landy would dole out hamburgers or joints if Wilson was productive.

“Brian wanted to be left alone, but there was too much at stake,” the band’s manager, and Mike Love’s brother, Stephen Love once said. “If you’ve got an oil well, you don’t want it to wander off and become someone else’s oil well.” The label conceived of a new PR campaign, called “Brian’s Back” — Love even wrote a song with this title — which brought Wilson back on the road with the band for the first time since 1964.

15 Big Ones, the first Beach Boys album to be produced solely by Brian Wilson since Wild Honey in 1967, consisted mostly of covers. (Wilson blamed writer’s block, but he was working on a solo project of new material, tentatively called Brian Loves You.) The band’s version of Chuck Berry’s “Rock and Roll Music” gave them their first Top 10 hit since “Good Vibrations.” Critics rejected it. “The Beach Boys,” wrote one, “only succeed in jumping several steps sideways and 10 years back.”

Rolling Stone featured Wilson on the cover in 1976. The first interview, which took place in June, didn’t produce any useful material. “Brian was ready to talk, all right,” wrote correspondent David Felton, “just as he was ready to walk or ready to start dressing himself; but there could be no definitive Brian Wilson interview because Brian Wilson was not yet definitively himself.”

On the Rolling Stone cover, Wilson stood in the sand on a beach, surfboard in hand. Barefoot and wearing only a blue bathrobe, he appeared for all the world like an Old Testament prophet. The feature was called “The Healing of Brother Brian.”

Photographer Annie Leibovitz took the picture on Wilson’s 34th birthday. The shoot took place during the filming of a clip for an upcoming TV special, called The Beach Boys: It’s OK, produced by Saturday Night Live creator Lorne Michaels. In the skit, John Belushi and Dan Aykroyd appear as “Surf Police” who force Wilson out of bed and onto the beach. Pounded by waves and, in one shot, using his board backwards, Wilson (who had never surfed before) was frightened by the ocean. In his bathrobe pocket was a folded piece of paper on which was written, “You will not drown. You will live. Signed, Dr. Landy.” (When Wilson made public appearances during this time, Landy would stand offstage, holding up cardboard signs reading “POSITIVE” and “SMILE” — the latter apparently written without irony.)

“He was not happy about it,” Michaels later remembered about the surfing scene. “It was almost a baptism.”

Though Wilson wrote and recorded the album mostly by himself, Brian Loves You was retitled The Beach Boys Love You and released in April 1977. Despite his desire to leave the group and go solo, Wilson realized he couldn’t. “Sometimes,” he said, “I feel like a commodity in a stock market.”

“Once you’ve established yourself as an artist, a producer — somebody who has a style to say, something to say with a definite profound effect, you feel obligated to fulfill commitments,” he awkwardly told a BBC interviewer in 1976. “In other words, it’s an artist’s obligation to continue his, uh, constructive work — you know, his work. Any artist that you find has that feeling — he feels the need to please, you know. And it’s a very personal thing and it’s something that, uh, that you work on it. It’s something that comes … it’s natural. It’s a natural thing.”

Shortly after finishing the mixes for The Beach Boys Love You, Wilson began work on what would become Adult/Child. “[That] was Dr. Landy’s title,” Wilson wrote in I Am Brian Wilson: A Memoir. “He meant that there were always two parts of a personality, always an adult who wants to be in charge and a child who wants to be cared for, always an adult who thinks he knows the rules and a child who is learning and testing the rules. I also thought about it in terms of family. I thought about my dad and me, and all the things he did that were good and bad, all the things that I can talk about easily and all the things I can’t talk about at all.”

“Still I Dream of It” was written for Frank Sinatra. “He didn’t say yes to the song,” Wilson wrote, “and that bothered me. It was a beautiful song about loneliness and hope.”

It’s strange to hear the 34-year-old Wilson sing from a teenager’s perspective. “When I was younger, mother told me Jesus loves the world,” Wilson sings in the bridge.

And if that’s true, then

Why hasn’t he helped me to find a girl?

Or find my world?

Till then I’m just a dreamer

Though jarring, this is the viewpoint Wilson returned to, as if the previous 15 years never happened. “We’ll make sweet lovin’ when the sun goes down,” Wilson sings in “Roller Skating Child” from The Beach Boys Love You. “Hey Little Tomboy,” another track slated for inclusion on Adult/Child, extends this idea further, creating something that band biographer Peter Ames Carlin described as what “may be the most unsettling moment in the entire recorded history of the Beach Boys.”

Wilson called in arranger Dick Reynolds to help with Adult/Child. Reynolds originally worked with the Four Freshmen and collaborated with Sinatra in 1964, the same year he arranged The Beach Boys’ Christmas Album. Though Wilson claimed to want “a similar feel” to those classic Sinatra albums, the big band arrangements on Adult/Child are peculiarly lifeless. “Life is for the living,” Wilson sings with strangled enthusiasm over a high-kick horn arrangement on the opening track.

I thought you wanted to see

How it could be

When you’re in shape and your head plugs into

Life

His last vocalization of “life!” is a harrowing shriek. Reportedly when Mike Love heard the album in the studio, he turned to Wilson and hissed, “What the fuck are you doing?” Love and Jardine’s vocals on the album were culled exclusively from earlier sessions; Wilson did most of the work alone, or with his brothers.

Adult/Child was shelved by nearly unanimous consent. The band was nearing the end of their record contract with Warner/Reprise — which didn’t think the album had commercial potential anyway — and might have wanted to save some of the material for a major upcoming deal with CBS. Oddly, the only track from Adult/Child to be formally issued was “Hey Little Tomboy,” on the largely despised M.I.U., released in late 1978. “That album is an embarrassment to my life,” Dennis Wilson said tartly. “It should self-destruct.”

But it was his brother Brian who self-destructed more successfully. The voices in his head would multiply in the coming years, sounding by turns like his domineering father Murry, Chuck Berry, Phil Spector, and others he doesn’t recognize. What they tell him is almost universally negative. Landy was fired in December 1976, but returned in the early 1980s after Wilson, 340 pounds and hooked on cocaine, overdosed. Landy ultimately began writing lyrics and, under their shared company Brains and Genius, claimed a 50 percent take of Wilson’s earnings. He “produced” Wilson’s 1988 solo record and is widely thought to have directed his first ghost-written autobiography — one which loudly sang Landy’s praises. Landy voluntarily surrendered his license in 1989, after being accused by the family of gross negligence.

The Beach Boys broke up for two weeks in late 1977. During a September meeting at Brian’s house, a settlement was negotiated which gave Mike Love control of Brian’s vote, allowing him and Al Jardine to outvote the other two Wilson brothers. The commercial, nostalgia-driven faction of the band advanced, while the experimental, vulnerable side receded.

Dennis Wilson, deeply addicted to alcohol, drowned in 1983. His 1977 solo album, Pacific Ocean Blue, outsold the contemporary Beach Boys albums. “Brian Wilson is the Beach Boys,” he once said. “He is the band. We’re his fucking messengers. He is all of it. Period. We’re nothing. He’s everything.”

And this was true, at least for the few years until Brian Wilson became incapable and unwilling to fill the role. For a little while, at least, he was able to be John Lennon and Paul McCartney and Beatles’ producer George Martin at once: a gifted melodicist with a knack for hooks; an arranger of enormous sensitivities; and a producer able to employ even the studio as an instrument. It didn’t last because it couldn’t last: Every fire goes out after consuming all that sustains it. Especially those that burn brightest.

***

Tom Maxwell is a writer and musician. He likes how one informs the other.

Editor: Aaron Gilbreath; Fact-checker:  Samantha Schuyler

Los Angeles Plays Itself

AP Photo/Reed Saxon

David L. Ulin | Sidewalking | University of California Press | October 2015 | 41 minutes (8,144 words)

 

“I want to live in Los Angeles, but not the one in Los Angeles.”

— Frank Black

 

One night not so many weeks ago, I went to visit a friend who lives in West Hollywood. This used to be an easy drive: a geometry of short, straight lines from my home in the mid-Wilshire flats — west on Olympic to Crescent Heights, north past Santa Monica Boulevard. Yet like everywhere else these days, it seems, Los Angeles is no longer the place it used to be. Over the past decade and a half, the city has densified: building up and not out, erecting more malls, more apartment buildings, more high-rises. At the same time, gridlock has become increasingly terminal, and so, even well after rush hour on a weekday evening, I found myself boxed in and looking for a shortcut, which, in an automotive culture such as this one, means a whole new way of conceptualizing urban space.

There are those (myself among them) who would argue that the very act of living in L.A. requires an ongoing process of reconceptualization, of rethinking not just the place but also our relationship to it, our sense of what it means. As much as any city, Los Angeles is a work-in-progress, a landscape of fragments where the boundaries we take for granted in other environments are not always clear. You can see this in the most unexpected locations, from Rick Caruso’s Grove to the Los Angeles County Museum of Art, where Chris Burden’s sculpture “Urban Light” — a cluster of 202 working vintage lampposts — fundamentally changed the nature of Wilshire Boulevard when it was installed in 2008. Until then, the museum (like so much of L.A.) had resisted the street, the pedestrian, in the most literal way imaginable, presenting a series of walls to the sidewalk, with a cavernous entry recessed into the middle of a long block. Burden intended to create a catalyst, a provocation; “I’ve been driving by these buildings for 40 years, and it’s always bugged me how this institution turned its back on the city,” he told the Los Angeles Times a week before his project was lit. When I first came to Los Angeles a quarter of a century ago, the area around the Museum was seedy; it’s no coincidence that in the film Grand Canyon, Mary Louise Parker gets held up at gunpoint there. Take a walk down Wilshire now, however, and you’ll find a different sort of interaction: food trucks, pedestrians, tourists, people from the neighborhood.

Read more…

Baring the Bones of the Lost Country: The Last Paleontologist in Venezuela

Photo courtesy of Ascanio Rincon / Tachiraptor admirabilis illustration by Maurílio Oliveira / Photo illustration by Katie Kosma

Zoe Valery | Longreads | February 2019 | 18 minutes (5,011 words)

 

— Orocual tar pit, northeastern Venezuela, 2007 C.E.

Ascanio Rincón was standing on a veritable fossil paradise when one of his students brought to his attention a tooth that was sticking out through the dirt. The site presented innumerable shards of prehistoric bones that had been fortuitously unearthed by a steamroller digging a trench for a pipeline. After assessing the value of the site, the young paleontologist stood his ground to protect the tar pit where millions of fossils have been preserved by the asphalt, eventually forcing the workers to redraw the course of the oil duct. When he cleaned around the tooth that was embedded in the trench wall, he found that it was attached to the skull of a creature that the steamroller had missed only by inches. He looked at the eye socket in disbelief: “A saber-toothed tiger was looking at me in the eye,” he recalls. This specimen would constitute a groundbreaking discovery for Rincón and a landmark for the field of paleontology in Venezuela and at large.

To this day, Richard Parker — named after the tiger in Life of Pi — remains one of the most remarkable finds in the country and one of Rincón’s dearest fossils. The saber-toothed tiger has shed light on a migratory wave during the Ice Age that the scientific community previously had not been aware of. Due to the current mass migration of people from Venezuela, Rincón is one of the only scientists left in the country tapping into the overwhelming wealth of fossils yet to be uncovered at the Orocual tar pit. Like most of his colleagues, the eight students he had trained have all left the country, joining 3 million other Venezuelans fleeing the rampant economic crisis, creating what has been described by the U.N. High Commissioner for Refugees as the most dire refugee crisis on the continent. Rincón is an endling — the only extant individual of a species — in his field: the last vertebrate paleontologist in Venezuela.* Read more…

Maybe What We Need Is … More Politics?

Alfred Gescheidt / Getty Images

Aaron Timms | Longreads | February 2019 | 20 minutes (5,514 words)

Alpacas are native to South America, but to find the global center of alpaca spinning you’ll need to travel to Bradford, England. The man most responsible for this quirk of history is Titus Salt. Until the 1830s alpaca yarn was considered an unworkable material throughout Europe. Salt, a jobbing young entrepreneur from the north of England, commercialized a form of alpaca warp that made the animal’s fleece suitable for mass production. Within a decade alpaca, finer and softer than wool, had become the rage of England’s fashionable classes.

Already by the mid-19th century industrialization had begun to disfigure the English countryside with “machinery and tall chimneys, out of which interminable serpents of smoke trailed themselves for ever and ever, and never got uncoiled,” as Dickens put it in Hard Times. The immiseration of the working classes was under way. Troubled by the emerging horrors of the new industrial age, Salt built a model village to house the workers he employed in his textile mill. Saltaire, with its neat, spacious houses, running water, efficient sewerage, parks, schools and recreational facilities, became a symbol of what enlightened capitalism could look like. It was also a model in the truest sense, serving as the inspiration for workers’ villages built later in the 19th century by companies such as Cadbury’s and Lever Brothers, the soap manufacturer that eventually became Unilever.

According to economist Paul Collier, these Victorian capitalists instituted a tradition that survives, however precariously, today: the tradition of “business with purpose, business with a sense of obligation to a workforce and a community.” Among the modern successors of this model of compassionate capitalism, Collier has argued, are U.S. pharmaceutical giant Johnson & Johnson and John Lewis & Partners, the British department store. In the 1940s Johnson & Johnson set out a credo stating that the company’s first responsibility was to its customers. Thanks to this credo, Johnson & Johnson’s management led a mass recall of Tylenol off supermarket and pharmacy shelves following a contamination scare in the early 1980s. Now standard practice, this type of product recall was uncommon for its time — and allowed the company to maintain goodwill with its customers. John Lewis, for its part, has prospered through difficult decades for brick-and-mortar retail largely thanks to its unusual power structure: the company is owned by a trust run in the interests of its workforce.

The thread uniting this strain of capitalism, Collier contends in his new book The Future of Capitalism: Facing The New Anxieties, is ethics. An ethics of reciprocal responsibility and care — between owners, workers, and customers — has allowed different businesses to prosper in different eras without destroying the communities and environments around them. But very few businesses are run according to these principles today. According to Collier, it is to this model of reciprocal ethics that capitalism, having lost its way over the past four decades, now must return — and reciprocity must become the principle that guides human interaction at all levels of society, not just in the firm. “Our sense of mutual regard has to be rebuilt,” he says. “Public policy needs to be complemented by a sense of purpose among firms.” “We need to meet each other.” “A new generation needs to reset social narratives.” “Norms need to change.” Prescriptivism today, the future of capitalism tomorrow. Read more…